WorldWideScience

Sample records for stochastic gravity approach

  1. Stochastic Gravity: Theory and Applications

    Directory of Open Access Journals (Sweden)

    Hu Bei Lok

    2008-05-01

    Full Text Available Whereas semiclassical gravity is based on the semiclassical Einstein equation with sources given by the expectation value of the stress-energy tensor of quantum fields, stochastic semiclassical gravity is based on the Einstein–Langevin equation, which has, in addition, sources due to the noise kernel. The noise kernel is the vacuum expectation value of the (operator-valued) stress-energy bitensor, which describes the fluctuations of quantum-matter fields in curved spacetimes. A new improved criterion for the validity of semiclassical gravity may also be formulated from the viewpoint of this theory. In the first part of this review we describe the fundamentals of this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the stress-energy tensor to the correlation functions. The functional approach uses the Feynman–Vernon influence functional and the Schwinger–Keldysh closed-time-path effective action methods. In the second part, we describe three applications of stochastic gravity. First, we consider metric perturbations in a Minkowski spacetime, compute the two-point correlation functions of these perturbations and prove that Minkowski spacetime is a stable solution of semiclassical gravity. Second, we discuss structure formation from the stochastic-gravity viewpoint, which can go beyond the standard treatment by incorporating the full quantum effect of the inflaton fluctuations. Third, using the Einstein–Langevin equation, we discuss the backreaction of Hawking radiation and the behavior of metric fluctuations for both the quasi-equilibrium condition of a black hole in a box and the fully nonequilibrium condition of an evaporating black hole spacetime. Finally, we briefly discuss the theoretical structure of stochastic gravity in relation to quantum gravity and point out
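
    For orientation, the Einstein–Langevin equation and the noise kernel discussed in this and the following records can be written schematically as follows (a LaTeX sketch; sign and normalization conventions vary between references):

      G_{ab}[g+h] = 8\pi G \big( \langle \hat{T}_{ab} \rangle[g+h] + \xi_{ab} \big), \qquad \langle \xi_{ab}(x) \rangle_s = 0,
      \langle \xi_{ab}(x)\,\xi_{cd}(y) \rangle_s = N_{abcd}(x,y) = \tfrac{1}{2}\,\big\langle \{ \hat{t}_{ab}(x),\, \hat{t}_{cd}(y) \} \big\rangle, \qquad \hat{t}_{ab} \equiv \hat{T}_{ab} - \langle \hat{T}_{ab} \rangle,

    so the stochastic source ξ_ab is a classical Gaussian noise whose correlator is fixed by the quantum stress-energy fluctuations.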

  2. Stochastic Gravity: Theory and Applications

    Directory of Open Access Journals (Sweden)

    Hu Bei Lok

    2004-01-01

    Full Text Available Whereas semiclassical gravity is based on the semiclassical Einstein equation with sources given by the expectation value of the stress-energy tensor of quantum fields, stochastic semiclassical gravity is based on the Einstein-Langevin equation, which has, in addition, sources due to the noise kernel. The noise kernel is the vacuum expectation value of the (operator-valued) stress-energy bi-tensor, which describes the fluctuations of quantum matter fields in curved spacetimes. In the first part, we describe the fundamentals of this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the stress-energy tensor to the correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods which are convenient for computations. It also brings out the open systems concepts and the statistical and stochastic contents of the theory such as dissipation, fluctuations, noise, and decoherence. We then focus on the properties of the stress-energy bi-tensor. We obtain a general expression for the noise kernel of a quantum field defined at two distinct points in an arbitrary curved spacetime as products of covariant derivatives of the quantum field's Green function. In the second part, we describe three applications of stochastic gravity theory. First, we consider metric perturbations in a Minkowski spacetime. We offer an analytical solution of the Einstein-Langevin equation and compute the two-point correlation functions for the linearized Einstein tensor and for the metric perturbations. Second, we discuss structure formation from the stochastic gravity viewpoint, which can go beyond the standard treatment by incorporating the full quantum effect of the inflaton fluctuations. Third, we discuss the backreaction

  3. Stochastic quantum gravity

    International Nuclear Information System (INIS)

    Rumpf, H.

    1987-01-01

    We begin with a naive application of the Parisi-Wu scheme to linearized gravity. This will lead to trouble, as one peculiarity of the full theory, the indefiniteness of the Euclidean action, already shows up at this level. After discussing some proposals to overcome this problem, Minkowski space stochastic quantization will be introduced. This will still not result in an acceptable quantum theory of linearized gravity, as the Feynman propagator turns out to be non-causal. This defect will be remedied only after a careful analysis of general covariance in stochastic quantization has been performed. The analysis requires the notion of a metric on the manifold of metrics, and a natural candidate for this is singled out. With this, a consistent stochastic quantization of Einstein gravity becomes possible. It is even possible, at least perturbatively, to return to the Euclidean regime. 25 refs. (Author)
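
    As a reminder of the Parisi-Wu scheme referred to above, the basic Langevin equation in fictitious time τ for a Euclidean action S_E[φ] reads (a schematic LaTeX sketch):

      \frac{\partial \phi(x,\tau)}{\partial \tau} = -\,\frac{\delta S_E[\phi]}{\delta \phi(x,\tau)} + \eta(x,\tau), \qquad \langle \eta(x,\tau)\,\eta(x',\tau') \rangle = 2\,\delta^{(4)}(x-x')\,\delta(\tau-\tau'),

    with equal-time correlation functions relaxing to Euclidean path-integral averages as τ → ∞; the indefiniteness of the Euclidean gravitational action is what obstructs this naive relaxation for gravity.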

  4. Stochastic gravity: a primer with applications

    International Nuclear Information System (INIS)

    Hu, B L; Verdaguer, E

    2003-01-01

    Stochastic semiclassical gravity of the 1990s is a theory naturally evolved from semiclassical gravity of the 1970s and 1980s. It improves on the semiclassical Einstein equation with source given by the expectation value of the stress-energy tensor of quantum matter fields in curved spacetime by incorporating an additional source due to their fluctuations. In stochastic semiclassical gravity the main object of interest is the noise kernel, the vacuum expectation value of the (operator-valued) stress-energy bi-tensor, and the centrepiece is the (semiclassical) Einstein-Langevin equation. We describe this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the energy-momentum tensor to its correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods which are convenient for computations. It also brings out the open system concepts and the statistical and stochastic contents of the theory such as dissipation, fluctuations, noise and decoherence. We then describe the applications of stochastic gravity to the backreaction problems in cosmology and black-hole physics. In the first problem, we study the backreaction of conformally coupled quantum fields in a weakly inhomogeneous cosmology. In the second problem, we study the backreaction of a thermal field in the gravitational background of a quasi-static black hole (enclosed in a box) and its fluctuations. These examples serve to illustrate closely the ideas and techniques presented in the first part. This topical review is intended as a first introduction providing readers with some basic ideas and working knowledge. Thus, we place more emphasis here on pedagogy than completeness. (Further discussions of ideas, issues and ongoing research topics can be found

  5. Stochastic gravity: a primer with applications

    Energy Technology Data Exchange (ETDEWEB)

    Hu, B L [Department of Physics, University of Maryland, College Park, MD 20742-4111 (United States); Verdaguer, E [Departament de Fisica Fonamental and CER en Astrofisica Fisica de Particules i Cosmologia, Universitat de Barcelona, Av. Diagonal 647, 08028 Barcelona (Spain)

    2003-03-21

    Stochastic semiclassical gravity of the 1990s is a theory naturally evolved from semiclassical gravity of the 1970s and 1980s. It improves on the semiclassical Einstein equation with source given by the expectation value of the stress-energy tensor of quantum matter fields in curved spacetime by incorporating an additional source due to their fluctuations. In stochastic semiclassical gravity the main object of interest is the noise kernel, the vacuum expectation value of the (operator-valued) stress-energy bi-tensor, and the centrepiece is the (semiclassical) Einstein-Langevin equation. We describe this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the energy-momentum tensor to its correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods which are convenient for computations. It also brings out the open system concepts and the statistical and stochastic contents of the theory such as dissipation, fluctuations, noise and decoherence. We then describe the applications of stochastic gravity to the backreaction problems in cosmology and black-hole physics. In the first problem, we study the backreaction of conformally coupled quantum fields in a weakly inhomogeneous cosmology. In the second problem, we study the backreaction of a thermal field in the gravitational background of a quasi-static black hole (enclosed in a box) and its fluctuations. These examples serve to illustrate closely the ideas and techniques presented in the first part. This topical review is intended as a first introduction providing readers with some basic ideas and working knowledge. Thus, we place more emphasis here on pedagogy than completeness. (Further discussions of ideas, issues and ongoing research topics can be found

  6. BRS invariant stochastic quantization of Einstein gravity

    International Nuclear Information System (INIS)

    Nakazawa, Naohito.

    1989-11-01

    We study stochastic quantization of gravity in terms of a BRS invariant canonical operator formalism. By artificially introducing canonical momentum variables for the original field variables, a canonical formulation of stochastic quantization is proposed in the sense that the Fokker-Planck hamiltonian is the generator of the fictitious time translation. Then we show that there exists a nilpotent BRS symmetry in an enlarged phase space of the first-class constrained systems. The phase space is spanned by the dynamical variables, their canonical conjugate momentum variables, Faddeev-Popov ghost and anti-ghost. We apply the general BRS invariant formulation to stochastic quantization of gravity which is described as a second-class constrained system in terms of a pair of Langevin equations coupled with white noises. It is shown that the stochastic action of gravity explicitly includes the DeWitt-type superspace metric, which leads to a geometrical interpretation of quantum gravity analogous to nonlinear σ-models. (author)

  7. On the Langevin equation for stochastic quantization of gravity

    International Nuclear Information System (INIS)

    Nakazawa, Naohito.

    1989-10-01

    We study the Langevin equation for stochastic quantization of gravity. By introducing two independent variables with a second-class constraint for the gravitational field, we formulate a pair of Langevin equations for gravity coupled to white noises. After eliminating the multiplier field for the second-class constraint, we show that the equations lead to stochastic quantization of gravity including a unique superspace metric. (author)

  8. Stochastic quantization and gravity

    International Nuclear Information System (INIS)

    Rumpf, H.

    1984-01-01

    We give a preliminary account of the application of stochastic quantization to the gravitational field. We start in Section I from Nelson's formulation of quantum mechanics as Newtonian stochastic mechanics and only then introduce the Parisi-Wu stochastic quantization scheme on which all the later discussion will be based. In Section II we present a generalization of the scheme that is applicable to fields in physical (i.e. Lorentzian) space-time and treat the free linearized gravitational field in this manner. The most remarkable result of this is the noncausal propagation of conformal gravitons. Moreover the concept of stochastic gauge-fixing is introduced and a complete discussion of all the covariant gauges is given. A special symmetry relating two classes of covariant gauges is exhibited. Finally Section III contains some preliminary remarks on full nonlinear gravity. In particular we argue that in contrast to gauge fields the stochastic gravitational field cannot be transformed to a Gaussian process. (Author)

  9. Stochastic quantum gravity-(2+1)-dimensional case

    International Nuclear Information System (INIS)

    Hosoya, Akio

    1991-01-01

    We first point out the amazing coincidences between quantum field theory in curved space-time and quantum gravity when they exhibit stochasticity. To explore their origin, (2+1)-dimensional quantum gravity is considered as a toy model. It is shown that the torus universe in (2+1)-dimensional quantum gravity exhibits quantum chaos in a rigorous sense. (author). 15 refs

  10. Stochastic Geometry and Quantum Gravity: Some Rigorous Results

    Science.gov (United States)

    Zessin, H.

    The aim of these lectures is to give a short introduction to some recent developments in stochastic geometry which have one of their origins in simplicial gravity theory (see Regge Nuovo Cimento 19: 558-571, 1961). The goal is to rigorously define and construct point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest then is concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent representation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008. German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry Cambridge University Press, Cambridge, 1997) you find a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807. Springer, Heidelberg, 2010). After an informal axiomatic introduction into the conceptual foundations of Regge's approach, the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We conclude with the formulation of basic open problems. Proofs are given in detail only in a few cases. In general the main ideas are developed. Sufficiently complete references are given.

  11. Stochastic Background of Relic Scalar Gravitational Waves tuned by Extended Gravity

    International Nuclear Information System (INIS)

    De Laurentis, Mariafelicia; Capozziello, Salvatore

    2009-01-01

    A stochastic background of relic gravitational waves is achieved by the so-called adiabatically-amplified zero-point fluctuations process derived from early inflation. It provides a distinctive spectrum of relic gravitational waves. In the framework of scalar-tensor gravity, we discuss the scalar modes of gravitational waves and the primordial production of this scalar component, which is generated alongside the tensorial one. We then analyze seven different viable f(R) gravities against the Solar System tests and the stochastic gravitational-wave background. It is demonstrated that the seven viable f(R) gravities under consideration not only satisfy the local tests but also pass the above PPN and stochastic gravitational-wave bounds for large classes of parameters.

  12. THE IMPACT OF COMPETITIVENESS ON TRADE EFFICIENCY: THE ASIAN EXPERIENCE BY USING THE STOCHASTIC FRONTIER GRAVITY MODEL

    Directory of Open Access Journals (Sweden)

    Memduh Alper Demir

    2017-12-01

    Full Text Available The purpose of this study is to examine the bilateral machinery and transport equipment trade efficiency of fourteen selected Asian countries by applying the stochastic frontier gravity model. These selected countries have the top machinery and transport equipment trade (both export and import volumes) in Asia. The model we use includes variables such as income, market size of trading partners, distance, common culture, common border, common language and global economic crisis, similar to earlier studies using the stochastic frontier gravity models. Our work, however, additionally includes an extra variable, the normalized revealed comparative advantage (NRCA) index. The NRCA index is comparable across commodity, country and time. Thus, the NRCA index is calculated and then included in our stochastic frontier gravity model to see the impact of competitiveness (here measured by the NRCA index) on the efficiency of trade.
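
    A generic stochastic frontier gravity specification of the type used in such trade studies has the form (an illustrative LaTeX sketch; the exact variable set of the paper may differ):

      \ln X_{ijt} = \beta_0 + \beta_1 \ln Y_{it} + \beta_2 \ln Y_{jt} + \beta_3 \ln D_{ij} + \gamma' Z_{ijt} + v_{ijt} - u_{ijt},

    where v is the usual two-sided statistical noise, u ≥ 0 captures trade frictions relative to the frontier, and trade efficiency is measured as TE_{ijt} = exp(−u_{ijt}), i.e. the ratio of observed to maximum potential trade; the NRCA index is then added as an explanatory variable to gauge its impact on that efficiency.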

  13. Approaches to quantum gravity. Loop quantum gravity, spinfoams and topos approach

    International Nuclear Information System (INIS)

    Flori, Cecilia

    2010-01-01

    One of the main challenges in theoretical physics over the last five decades has been to reconcile quantum mechanics with general relativity into a theory of quantum gravity. However, such a theory has been proved to be hard to attain due to i) conceptual difficulties present in both the component theories (General Relativity (GR) and Quantum Theory); ii) lack of experimental evidence, since the regimes at which quantum gravity is expected to be applicable are far beyond the range of conceivable experiments. Despite these difficulties, various approaches for a theory of Quantum Gravity have been developed. In this thesis we focus on two such approaches: Loop Quantum Gravity and the Topos theoretic approach. The choice fell on these approaches because, although they both reject the Copenhagen interpretation of quantum theory, their underpinning philosophical approaches to formulating a quantum theory of gravity are radically different. In particular LQG is a rather conservative scheme, inheriting all the formalism of both GR and Quantum Theory, as it tries to bring to its logical extreme consequences the possibility of combining the two. On the other hand, the Topos approach involves the idea that a radical change of perspective is needed in order to solve the problem of quantum gravity, especially in regard to the fundamental concepts of 'space' and 'time'. Given the partial successes of both approaches, the hope is that it might be possible to find a common ground in which each approach can enrich the other. This thesis is divided into two parts: in the first part we analyse LQG, paying particular attention to the semiclassical properties of the volume operator. Such an operator plays a pivotal role in defining the dynamics of the theory, thus testing its semiclassical limit is of utmost importance. We then proceed to analyse spin foam models (SFM), which are an attempt at a covariant or path integral formulation of canonical Loop Quantum Gravity (LQG). In

  14. Approaches to quantum gravity. Loop quantum gravity, spinfoams and topos approach

    Energy Technology Data Exchange (ETDEWEB)

    Flori, Cecilia

    2010-07-23

    One of the main challenges in theoretical physics over the last five decades has been to reconcile quantum mechanics with general relativity into a theory of quantum gravity. However, such a theory has been proved to be hard to attain due to i) conceptual difficulties present in both the component theories (General Relativity (GR) and Quantum Theory); ii) lack of experimental evidence, since the regimes at which quantum gravity is expected to be applicable are far beyond the range of conceivable experiments. Despite these difficulties, various approaches for a theory of Quantum Gravity have been developed. In this thesis we focus on two such approaches: Loop Quantum Gravity and the Topos theoretic approach. The choice fell on these approaches because, although they both reject the Copenhagen interpretation of quantum theory, their underpinning philosophical approaches to formulating a quantum theory of gravity are radically different. In particular LQG is a rather conservative scheme, inheriting all the formalism of both GR and Quantum Theory, as it tries to bring to its logical extreme consequences the possibility of combining the two. On the other hand, the Topos approach involves the idea that a radical change of perspective is needed in order to solve the problem of quantum gravity, especially in regard to the fundamental concepts of 'space' and 'time'. Given the partial successes of both approaches, the hope is that it might be possible to find a common ground in which each approach can enrich the other. This thesis is divided into two parts: in the first part we analyse LQG, paying particular attention to the semiclassical properties of the volume operator. Such an operator plays a pivotal role in defining the dynamics of the theory, thus testing its semiclassical limit is of utmost importance. We then proceed to analyse spin foam models (SFM), which are an attempt at a covariant or path integral formulation of canonical Loop Quantum

  15. Dynamics of non-holonomic systems with stochastic transport

    Science.gov (United States)

    Holm, D. D.; Putkaradze, V.

    2018-01-01

    This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.

  16. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    Directory of Open Access Journals (Sweden)

    Gianluca Calcagni

    2017-10-01

    Full Text Available We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  17. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    International Nuclear Information System (INIS)

    Calcagni, Gianluca; Ronco, Michele

    2017-01-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  18. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    Science.gov (United States)

    Calcagni, Gianluca; Ronco, Michele

    2017-10-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  19. Stochastic quantization of Einstein gravity

    International Nuclear Information System (INIS)

    Rumpf, H.

    1986-01-01

    We determine a one-parameter family of covariant Langevin equations for the metric tensor of general relativity corresponding to DeWitt's one-parameter family of supermetrics. The stochastic source term in these equations can be expressed in terms of a Gaussian white noise upon the introduction of a stochastic tetrad field. The only physically acceptable resolution of a mathematical ambiguity in the ansatz for the source term is the adoption of Ito's calculus. By taking the formal equilibrium limit of the stochastic metric a one-parameter family of covariant path-integral measures for general relativity is obtained. There is a unique parameter value, distinguished by any one of the following three properties: (i) the metric is harmonic with respect to the supermetric, (ii) the path-integral measure is that of DeWitt, (iii) the supermetric governs the linearized Einstein dynamics. Moreover the Feynman propagator corresponding to this parameter is causal. Finally we show that a consistent stochastic perturbation theory gives rise to a new type of diagram containing "stochastic vertices".
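
    Schematically, a covariant Langevin equation of the kind described here has the form (a LaTeX sketch with index placement and density weights suppressed; λ labels the DeWitt supermetric family):

      \partial_\tau\, g_{\mu\nu}(x,\tau) = -\,G_{\mu\nu\,\alpha\beta}[g]\,\frac{\delta S_E[g]}{\delta g_{\alpha\beta}(x,\tau)} + \eta_{\mu\nu}(x,\tau), \qquad \langle \eta_{\mu\nu}(x,\tau)\,\eta_{\alpha\beta}(x',\tau') \rangle \propto G_{\mu\nu\,\alpha\beta}\,\delta(x-x')\,\delta(\tau-\tau'),
      G_{\mu\nu\,\alpha\beta} = \tfrac{1}{2}\big(g_{\mu\alpha}g_{\nu\beta} + g_{\mu\beta}g_{\nu\alpha}\big) + \lambda\, g_{\mu\nu}g_{\alpha\beta},

    with the noise realized through a stochastic tetrad and the Itô prescription resolving the ordering ambiguity, as stated in the abstract.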

  20. Space-Wise approach for airborne gravity data modelling

    Science.gov (United States)

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful to understand and map geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, software to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical Least Squares Collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the Space-Wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach, there is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a-priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. The presented solution is suited for airborne data analysis in order to be able to quickly filter and grid gravity observations in an easy way. Some innovative theoretical aspects focusing in particular on the theoretical covariance modelling are presented too.
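
    A minimal Python sketch of the along-track Wiener-filter stage described here (the subsequent Least Squares Collocation gridding is not shown); the function name and PSD models are illustrative assumptions, not part of the software itself:

      import numpy as np

      def wiener_filter_track(obs, dx, signal_psd, noise_psd):
          """Frequency-domain Wiener filter of one along-track gravity profile.

          obs: 1-D raw observations; dx: sampling step;
          signal_psd/noise_psd: callables returning power spectral densities."""
          n = obs.size
          freqs = np.abs(np.fft.rfftfreq(n, d=dx))
          spectrum = np.fft.rfft(obs - obs.mean())
          gain = signal_psd(freqs) / (signal_psd(freqs) + noise_psd(freqs))
          # attenuate noise-dominated frequencies, keep signal-dominated ones
          return np.fft.irfft(gain * spectrum, n) + obs.mean()

      # toy usage with an assumed low-pass signal PSD and white observation noise
      rng = np.random.default_rng(0)
      raw = np.sin(np.linspace(0.0, 20.0, 512)) + 0.5 * rng.standard_normal(512)
      filtered = wiener_filter_track(raw, dx=100.0,
                                     signal_psd=lambda f: 1.0 / (1.0 + (f / 1e-3) ** 4),
                                     noise_psd=lambda f: 0.25 * np.ones_like(f))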

  1. Modelling airborne gravity data by means of adapted Space-Wise approach

    Science.gov (United States)

    Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea

    2017-04-01

    Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are generally adopted. However, due to the relatively high acquisition velocity, presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid or a set of filtered along-track gravity anomalies, by merging GGM and airborne datasets, is presented. The proposed algorithm, like the Space-Wise approach developed by Politecnico di Milano in the framework of GOCE data analysis, is based on a combination of an along-track Wiener filter and a Least Squares Collocation adjustment, and properly considers the different altitudes of the gravity observations. Among the main differences with respect to the satellite application of the Space-Wise approach there is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a-priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. Some innovative theoretical aspects focusing in particular on the theoretical covariance modelling are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.

  2. Generalized Lagrangian Path Approach to Manifestly-Covariant Quantum Gravity Theory

    Directory of Open Access Journals (Sweden)

    Massimo Tessarotto

    2018-03-01

    Full Text Available A trajectory-based representation for the quantum theory of the gravitational field is formulated. This is achieved in terms of a covariant Generalized Lagrangian-Path (GLP) approach, which relies on a suitable statistical representation of Bohmian Lagrangian trajectories, referred to here as the GLP-representation. The result is established in the framework of the manifestly-covariant quantum gravity theory (CQG-theory) proposed recently and the related CQG-wave equation advancing in proper-time the quantum state associated with massive gravitons. Generally non-stationary analytical solutions for the CQG-wave equation with non-vanishing cosmological constant are determined in such a framework, which exhibit Gaussian-like probability densities that are non-dispersive in proper-time. As a remarkable outcome of the theory achieved by implementing these analytical solutions, the existence of an emergent gravity phenomenon is proven to hold. Accordingly, it is shown that a mean-field background space-time metric tensor can be expressed in terms of a suitable statistical average of stochastic fluctuations of the quantum gravitational field whose quantum-wave dynamics is described by GLP trajectories.

  3. Stochastic approach to microphysics

    Energy Technology Data Exchange (ETDEWEB)

    Aron, J.C.

    1987-01-01

    The presently widespread idea of "vacuum population", together with the quantum concept of vacuum fluctuations, leads one to assume a random level below that of matter. This stochastic approach starts with a reminder of the author's previous work, first on the relation of diffusion laws to the foundations of microphysics, and then on the hadron spectrum. Following the latter, a random quark model is advanced; it gives quark pairs properties similar to those of a harmonic oscillator or an elastic string, imagined as an explanation of their asymptotic freedom and their confinement. The stochastic study of such interactions as electron-nucleon scattering, jets in e⁺e⁻ collisions, or pp → π⁰ + X gives form factors closely consistent with experiment. The conclusion is an epistemological comment (complementarity between the stochastic and quantum domains, the E.P.R. paradox, etc.).

  4. Stochastic dynamics of new inflation

    International Nuclear Information System (INIS)

    Nakao, Ken-ichi; Nambu, Yasusada; Sasaki, Misao.

    1988-07-01

    We investigate thoroughly the dynamics of an inflation-driving scalar field in terms of an extended version of the stochastic approach proposed by Starobinsky and discuss the spacetime structure of the inflationary universe. To avoid any complications which might arise due to quantum gravity, we concentrate our discussions on the new inflationary universe scenario in which all the energy scales involved are well below the Planck mass. The investigation is done both analytically and numerically. In particular, we present a full numerical analysis of the stochastic scalar field dynamics on the phase space. Then implications of the results are discussed. (author)

  5. MEASURING INFLATION THROUGH STOCHASTIC APPROACH TO INDEX NUMBERS FOR PAKISTAN

    Directory of Open Access Journals (Sweden)

    Zahid Asghar

    2010-09-01

    Full Text Available This study attempts to estimate the rate of inflation in Pakistan through the stochastic approach to index numbers, which provides not only a point estimate but also a confidence interval for the rate of inflation. There are two types of approaches to index number theory, namely the functional economic approaches and the stochastic approach. The attraction of the stochastic approach is that it estimates the rate of inflation in a framework in which uncertainty and statistical ideas play a major role in screening index numbers. We have used the extended stochastic approach to index numbers for measuring inflation by allowing for systematic changes in the relative prices. We use CPI data covering the period July 2001--March 2008 for Pakistan.
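
    A minimal Python sketch of the basic (unweighted) stochastic approach, in which the common inflation rate is the mean of log price relatives and its standard error yields a confidence interval; the extended approach of the paper additionally models systematic changes in relative prices, and the basket and function names below are purely illustrative:

      import numpy as np
      from scipy import stats

      def stochastic_inflation(prices_t, prices_prev, alpha=0.05):
          """Point estimate and (1 - alpha) confidence interval for the common
          inflation rate, treating log price relatives as noisy measurements of it."""
          r = np.log(np.asarray(prices_t) / np.asarray(prices_prev))  # log price relatives
          n = r.size
          mean, se = r.mean(), r.std(ddof=1) / np.sqrt(n)
          t = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
          lo, hi = mean - t * se, mean + t * se
          return np.expm1(mean), (np.expm1(lo), np.expm1(hi))

      # toy usage: a four-commodity basket observed in two consecutive months
      rate, ci = stochastic_inflation([102.0, 55.3, 210.0, 9.8],
                                      [100.0, 54.0, 200.0, 9.5])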

  6. Analyses of the stratospheric dynamics simulated by a GCM with a stochastic nonorographic gravity wave parameterization

    Science.gov (United States)

    Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo

    2016-04-01

    version of the model, the default and a new stochastic version, in which the value of the perturbation field at launching level is not constant and uniform, but extracted at each time-step and grid-point from a given PDF. With this approach we are trying to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven also by gravity waves) and on the variability of the mid-to-high latitudes atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.

  7. Stochastic inflation and nonlinear gravity

    International Nuclear Information System (INIS)

    Salopek, D.S.; Bond, J.R.

    1991-01-01

    We show how nonlinear effects of the metric and scalar fields may be included in stochastic inflation. Our formalism can be applied to non-Gaussian fluctuation models for galaxy formation. Fluctuations with wavelengths larger than the horizon length are governed by a network of Langevin equations for the physical fields. Stochastic noise terms arise from quantum fluctuations that are assumed to become classical at horizon crossing and that then contribute to the background. Using Hamilton-Jacobi methods, we solve the Arnowitt-Deser-Misner constraint equations which allows us to separate the growing modes from the decaying ones in the drift phase following each stochastic impulse. We argue that the most reasonable choice of time hypersurfaces for the Langevin system during inflation is T=ln(Ha), where H and a are the local values of the Hubble parameter and the scale factor, since T is the natural time for evolving the short-wavelength scalar field fluctuations in an inhomogeneous background
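
    In slow roll, the long-wavelength Langevin equation underlying this kind of stochastic inflation analysis takes the schematic form (a LaTeX sketch; here N is the number of e-folds, closely related to the time variable T = ln(Ha) advocated in the abstract):

      \frac{d\phi}{dN} = -\,\frac{V'(\phi)}{3H^2(\phi)} + \frac{H(\phi)}{2\pi}\,\xi(N), \qquad \langle \xi(N)\,\xi(N') \rangle = \delta(N - N'), \qquad H^2(\phi) \simeq \frac{8\pi}{3\,m_{\rm Pl}^2}\,V(\phi),

    where the noise term models short-wavelength quantum fluctuations that become classical at horizon crossing and kick the long-wavelength background.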

  8. Stochastic quantization of gravity and string fields

    International Nuclear Information System (INIS)

    Rumpf, H.

    1986-01-01

    The stochastic quantization method of Parisi and Wu is generalized so as to make it applicable to Einstein's theory of gravitation. The generalization is based on the existence of a preferred metric in field configuration space, involves Ito's calculus, and introduces a complex stochastic process adapted to Lorentzian spacetime. It implies formally the path integral measure of DeWitt, a causual Feynman propagator, and a consistent stochastic perturbation theory. The lineraized version of the theory is also obtained from the stochastic quantization of the free string field theory of Siegel and Zwiebach. (Author)

  9. BRS symmetry in stochastic quantization of the gravitational field

    International Nuclear Information System (INIS)

    Nakazawa, Naohito.

    1989-12-01

    We study stochastic quantization of gravity in terms of a BRS invariant canonical operator formalism. By introducing artificially canonical momentum variables for the original field variables, a canonical formulation of stochastic quantization is proposed in a sense that the Fokker-Planck hamiltonian is the generator of the fictitious time translation. Then we show that there exists a nilpotent BRS symmetry in an enlarged phase space for gravity (in general, for the first-class constrained systems). The stochastic action of gravity includes explicitly an unique De Witt's type superspace metric which leads to a geometrical interpretation of quantum gravity analogous to nonlinear σ-models. (author)

  10. Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble

    Science.gov (United States)

    Jankov, I.

    2017-12-01

    It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized and ensemble members were produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and Mellor-Yamada-Nakanishi-Niino (MYNN) Planetary Boundary Layer (PBL) schemes. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity, and perturbing soil moisture at the initial time was also tested. First, iterative testing was conducted to assess the initial performance of several configuration settings (e.g. a variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day time period was run and more robust statistics were gathered. SKEB and SPPT were included in additional retrospective tests to assess the impact of using
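
    A minimal Python sketch of the kind of spatially and temporally correlated random pattern on which SPP/SPPT-style perturbations are built; the grid size, decorrelation scales, and the multiplicative use of the pattern are illustrative assumptions, not the HRRR configuration:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def evolve_pattern(prev, tau_steps, length_scale_pts, amplitude, rng):
          """Advance a smooth random perturbation field by one AR(1) step."""
          alpha = np.exp(-1.0 / tau_steps)                      # temporal decorrelation
          noise = gaussian_filter(rng.standard_normal(prev.shape),
                                  sigma=length_scale_pts, mode="wrap")  # spatial correlation
          noise *= amplitude / noise.std()                      # set target amplitude
          return alpha * prev + np.sqrt(1.0 - alpha ** 2) * noise

      rng = np.random.default_rng(0)
      pattern = np.zeros((200, 300))                            # toy lat-lon grid
      for _ in range(24):                                       # march 24 model steps
          pattern = evolve_pattern(pattern, tau_steps=12,
                                   length_scale_pts=15, amplitude=0.3, rng=rng)
      mixing_length = 100.0 * (1.0 + pattern)                   # e.g. multiplicatively perturb a PBL parameter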

  11. A Proposed Stochastic Finite Difference Approach Based on Homogenous Chaos Expansion

    Directory of Open Access Journals (Sweden)

    O. H. Galal

    2013-01-01

    Full Text Available This paper proposes a stochastic finite difference approach based on homogeneous chaos expansion (SFDHC). The said approach can handle time-dependent nonlinear as well as linear systems with deterministic or stochastic initial and boundary conditions. In this approach, the included stochastic parameters are modeled as second-order stochastic processes and are expanded using the Karhunen-Loève expansion, while the response function is approximated using a homogeneous chaos expansion. Galerkin projection is used to convert the original stochastic partial differential equation (PDE) into a set of coupled deterministic partial differential equations, which are then solved using the finite difference method. Two well-known equations were used to validate the efficiency of the proposed method: the first is the linear diffusion equation with a stochastic parameter and the second is the nonlinear Burgers' equation with a stochastic parameter and stochastic initial and boundary conditions. In both of these examples, the probability distribution function of the response showed close conformity to the results obtained from Monte Carlo simulation, with optimized computational cost.
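
    Schematically, the two expansions combined in the SFDHC approach are (a LaTeX sketch with generic notation):

      a(x,\theta) \approx \bar a(x) + \sum_{i=1}^{M} \sqrt{\lambda_i}\,\varphi_i(x)\,\xi_i(\theta) \quad \text{(Karhunen--Lo\`eve expansion of a stochastic input)},
      u(x,t,\theta) \approx \sum_{j=0}^{P} u_j(x,t)\,\Psi_j\big(\boldsymbol{\xi}(\theta)\big) \quad \text{(homogeneous chaos expansion of the response)},

    and Galerkin projection onto each Ψ_k turns the stochastic PDE into a coupled set of deterministic PDEs for the coefficients u_j(x,t), which are then advanced by finite differences.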

  12. A gauge-theoretic approach to gravity.

    Science.gov (United States)

    Krasnov, Kirill

    2012-08-08

    Einstein's general relativity (GR) is a dynamical theory of the space-time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin two particles (gravitons) takes a much simpler and more compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both gravity and Yang-Mills theory arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach.

  13. Symmetries of stochastic differential equations: A geometric approach

    Energy Technology Data Exchange (ETDEWEB)

    De Vecchi, Francesco C., E-mail: francesco.devecchi@unimi.it; Ugolini, Stefania, E-mail: stefania.ugolini@unimi.it [Dipartimento di Matematica, Università degli Studi di Milano, via Saldini 50, Milano (Italy); Morando, Paola, E-mail: paola.morando@unimi.it [DISAA, Università degli Studi di Milano, via Celoria 2, Milano (Italy)

    2016-06-15

    A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two dimensional symmetric ordinary differential equation and to the case of two dimensional Brownian motion.

  14. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Science.gov (United States)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk consideration presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing coefficients for a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
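
    A minimal Python sketch of the Monte Carlo evaluation of the expected cost and CVaR objective for one candidate parametric strategy; the cost distribution below is a stand-in, not the execution model of the paper:

      import numpy as np

      def expected_cost_and_cvar(costs, beta=0.95):
          """Expected execution cost and CVaR_beta, i.e. the mean of the
          worst (1 - beta) fraction of simulated costs."""
          costs = np.asarray(costs)
          var = np.quantile(costs, beta)            # Value-at-Risk at level beta
          return costs.mean(), costs[costs >= var].mean()

      # toy usage: 10,000 simulated execution costs for one strategy parameterization
      rng = np.random.default_rng(1)
      sim_costs = rng.normal(loc=1.0e5, scale=2.0e4, size=10_000)
      mean_cost, cvar = expected_cost_and_cvar(sim_costs, beta=0.95)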

  15. The Spin-Foam Approach to Quantum Gravity.

    Science.gov (United States)

    Perez, Alejandro

    2013-01-01

    This article reviews the present status of the spin-foam approach to the quantization of gravity. Special attention is paid to the pedagogical presentation of the recently-introduced new models for four-dimensional quantum gravity. The models are motivated by a suitable implementation of the path integral quantization of the Plebanski formulation of gravity on a simplicial regularization. The article also includes a self-contained treatment of 2+1 gravity. The simple nature of the latter provides the basis and a perspective for the analysis of both conceptual and technical issues that remain open in four dimensions.

  16. The Spin-Foam Approach to Quantum Gravity

    Directory of Open Access Journals (Sweden)

    Alejandro Perez

    2013-02-01

    Full Text Available This article reviews the present status of the spin-foam approach to the quantization of gravity. Special attention is paid to the pedagogical presentation of the recently-introduced new models for four-dimensional quantum gravity. The models are motivated by a suitable implementation of the path integral quantization of the Plebanski formulation of gravity on a simplicial regularization. The article also includes a self-contained treatment of 2+1 gravity. The simple nature of the latter provides the basis and a perspective for the analysis of both conceptual and technical issues that remain open in four dimensions.

  17. Markovian approach: From Ising model to stochastic radiative transfer

    International Nuclear Information System (INIS)

    Kassianov, E.; Veron, D.

    2009-01-01

    The origin of the Markovian approach can be traced back to 1906; however, it gained explicit recognition in the last few decades. This overview outlines some important applications of the Markovian approach, which illustrate its immense prestige, respect, and success. These applications include examples in the statistical physics, astronomy, mathematics, computational science and the stochastic transport problem. In particular, the overview highlights important contributions made by Pomraning and Titov to the neutron and radiation transport theory in a stochastic medium with homogeneous statistics. Using simple probabilistic assumptions (Markovian approximation), they have introduced a simplified, but quite realistic, representation of the neutron/radiation transfer through a two-component discrete stochastic mixture. New concepts and methodologies introduced by these two distinguished scientists allow us to generalize the Markovian treatment to the stochastic medium with inhomogeneous statistics and demonstrate its improved predictive performance for the down-welling shortwave fluxes. (authors)

  18. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  19. Stochastic approach to equilibrium and nonequilibrium thermodynamics.

    Science.gov (United States)

    Tomé, Tânia; de Oliveira, Mário J

    2015-04-01

    We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) and continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself and the second the definition of entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and how the macroscopic laws are derived from the stochastic dynamics. These studies include the quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
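
    In the discrete (master-equation) case, the quantities referred to in the two assumptions can be written schematically as (a LaTeX sketch):

      \frac{dP_i}{dt} = \sum_j \big( W_{ij} P_j - W_{ji} P_i \big), \qquad S(t) = -\sum_i P_i \ln P_i,
      \Pi(t) = \frac{1}{2} \sum_{i,j} \big( W_{ij} P_j - W_{ji} P_i \big) \ln \frac{W_{ij} P_j}{W_{ji} P_i} \;\ge\; 0,

    with the entropy production rate Π vanishing exactly when detailed balance holds, i.e. in thermodynamic equilibrium.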

  20. Noether symmetry approach in f(G,T) gravity

    Energy Technology Data Exchange (ETDEWEB)

    Shamir, M.F.; Ahmad, Mushtaq [National University of Computer and Emerging Sciences, Lahore Campus (Pakistan)

    2017-01-15

    We explore the recently introduced modified Gauss-Bonnet gravity (Sharif and Ikram in Eur Phys J C 76:640, 2016), f(G,T), with G the Gauss-Bonnet term and T the trace of the energy-momentum tensor. The Noether symmetry approach has been used to develop some cosmologically viable f(G,T) gravity models. The Noether equations of modified gravity are reported for the flat FRW universe. Two specific models have been studied to determine the conserved quantities and exact solutions. In particular, the well-known de Sitter solution is reconstructed for some specific choice of f(G,T) gravity model. (orig.)

  1. A stochastic programming approach to manufacturing flow control

    OpenAIRE

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  2. A Perturbation Approach to Translational Gravity

    Science.gov (United States)

    Julve, J.; Tiemblo, A.

    2013-05-01

    Within a gauge formulation of 3+1 gravity relying on a nonlinear realization of the group of isometries of space-time, a natural expansion of the metric tensor arises and a simple choice of the gravity dynamical variables is possible. We show that the expansion parameter can be identified with the gravitational constant and that the first-order term depends only on a diagonal matrix in the ensuing perturbation approach. The explicit first-order solution is calculated in the static isotropic case, and its general structure is worked out in the harmonic gauge.

  3. Gas contract portfolio management: a stochastic programming approach

    International Nuclear Information System (INIS)

    Haurie, A.; Smeers, Y.; Zaccour, G.

    1991-01-01

    This paper deals with a stochastic programming model which complements long-range market simulation models generating scenarios concerning the evolution of demand and prices for gas in different market segments. A gas company has to negotiate contracts with lengths ranging from one to twenty years. This stochastic model is designed to assess the risk associated with committing the gas production capacity of the company to these market segments. Different approaches are presented to overcome the difficulties associated with the very large size of the resulting optimization problem.

  4. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    OpenAIRE

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision-making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow et al. and the Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...
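
    For reference, the model and the two efficiency estimators being compared can be sketched as (illustrative LaTeX; distributional assumptions on u vary across studies):

      y_i = x_i'\beta + v_i - u_i, \qquad v_i \sim N(0,\sigma_v^2), \qquad u_i \ge 0, \qquad \varepsilon_i = v_i - u_i,
      \mathrm{TE}_i^{\,\text{Jondrow}} = \exp\!\big(-\mathbb{E}[\,u_i \mid \varepsilon_i\,]\big), \qquad \mathrm{TE}_i^{\,\text{Battese--Coelli}} = \mathbb{E}\big[\exp(-u_i) \mid \varepsilon_i\big].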

  5. A penalty guided stochastic fractal search approach for system reliability optimization

    International Nuclear Information System (INIS)

    Mellal, Mohamed Arezki; Zio, Enrico

    2016-01-01

    Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results of ten case studies are presented as benchmark problems for highlighting the superiority of the proposed approach compared to others from literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.

  6. Equivalence between the semiclassical and effective approaches to gravity

    International Nuclear Information System (INIS)

    Paszko, Ricardo; Accioly, Antonio

    2010-01-01

    Semiclassical and effective theories of gravitation are quite distinct from each other as far as the approximation scheme employed is concerned. In fact, while in the semiclassical approach gravity is a classical field and the particles and/or remaining fields are quantized, in the effective approach everything is quantized, including gravity, but the Feynman amplitude is expanded in terms of the momentum exchanged between the particles and/or fields. In this paper, we show that these approaches, despite being radically different, lead to equivalent results if one of the masses under consideration is much greater than all the other energies involved.

  7. On the stochastic approach to inflation and the initial conditions in the universe

    International Nuclear Information System (INIS)

    Pollock, M.D.

    1986-05-01

    By applying stochastic methods to a theory in which a potential V(Φ) causes a period of quasi-expansion of the universe, Starobinsky has derived an expression for the probability distribution P(V) appropriate to chaotic inflation in the classical approximation. We obtain the corresponding expression for a broken-symmetry theory of gravity. For the Coleman-Weinberg potential, it appears most probable that the initial value of Φ is Φ_i ≈ 0, in which case inflation occurs naturally, because V(Φ_i) > 0.

  8. The impact of trade costs on rare earth exports : a stochastic frontier estimation approach.

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Prabuddha; Brady, Patrick Vane; Vugrin, Eric D.

    2013-09-01

    The study develops a novel stochastic frontier modeling approach to the gravity equation for rare earth element (REE) trade between China and its trading partners between 2001 and 2009. The novelty lies in differentiating between 'behind the border' trade costs by China and the 'implicit beyond the border costs' of China's trading partners. Results indicate that the significance levels of the independent variables change dramatically over the time period. While geographical distance matters for trade flows in both periods, the effect of income on trade flows is significantly attenuated, possibly capturing the negative effects of financial crises in the developed world. Second, the total export losses due to 'behind the border' trade costs almost tripled over the time period. Finally, looking at 'implicit beyond the border' trade costs, results show China gaining in some markets, although it is likely that some countries are substituting away from Chinese REE exports.

  9. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    Science.gov (United States)

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  10. Regularization of quantum gravity in the matrix model approach

    International Nuclear Information System (INIS)

    Ueda, Haruhiko

    1991-02-01

    We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = (1/2)Tr φ^2 + (g_4/N)Tr φ^4 + (g'/N^4)(Tr φ^4)^2 and show that in the sphere case it has no divergence problem and the critical exponent is that of pure gravity. (author)

  11. Simulation of the stochastic wave loads using a physical modeling approach

    DEFF Research Database (Denmark)

    Liu, W.F.; Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2013-01-01

    In analyzing stochastic dynamic systems, analysis of the system uncertainty due to randomness in the loads plays a crucial role. Typically, time series of the stochastic loads are simulated using the traditional random phase method. This approach combined with the fast Fourier transform algorithm makes...... reliability or its uncertainty. Moreover, the applicability of the probability density evolution method to engineering problems faces critical difficulties when the system embeds too many random variables. Hence it is useful to devise a method which can generate realizations of the stochastic load processes with low...

  12. Measuring Gravity in International Trade Flows

    Directory of Open Access Journals (Sweden)

    E. Young Song

    2004-12-01

    Full Text Available The purpose of this paper is two-fold. One is to clarify the concept of gravity in international trade flows. The other is to measure the strength of gravity in international trade flows in a way that is consistent with a well-defined concept of gravity. This paper shows that the widely accepted belief that specialization is the source of gravity is not well grounded on theory. We propose to define gravity in international trade as the force that makes the market shares of an exporting country constant in all importing countries, regardless of their sizes. In a stochastic context, we should interpret it as implying that the strength of gravity increases (i) as the correlation between market shares and market sizes gets weaker and (ii) as the variance of market shares gets smaller. We estimate an empirical gravity equation thoroughly based on this definition of gravity. We find that a strong degree of gravity exists in most bilateral trade, regardless of income levels of countries, and in trade of most man...

  13. A stochastic approach to multi-gene expression dynamics

    International Nuclear Information System (INIS)

    Ochiai, T.; Nacher, J.C.; Akutsu, T.

    2005-01-01

    In recent years, tens of thousands of gene expression profiles for cells of several organisms have been monitored. Gene expression is a complex transcriptional process where mRNA molecules are translated into proteins, which control most of the cell functions. In this process, the correlation among genes is crucial to determine the specific functions of genes. Here, we propose a novel multi-dimensional stochastic approach to deal with the gene correlation phenomena. Interestingly, our stochastic framework suggests that the study of the gene correlation requires only one theoretical assumption, the Markov property, and the experimental transition probability, which characterizes the gene correlation system. Finally, a gene expression experiment is proposed for future applications of the model.
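
    The single theoretical ingredient the abstract emphasizes, the Markov property combined with an experimentally estimated transition probability, can be illustrated with a toy discretized-expression chain; the states and transition matrix below are invented for illustration only.

```python
# Toy Markov model of discretized expression levels (illustrative states and probabilities).
import numpy as np

rng = np.random.default_rng(0)
states = ["low", "medium", "high"]
P = np.array([[0.70, 0.25, 0.05],     # empirical transition probabilities (assumed)
              [0.20, 0.60, 0.20],
              [0.05, 0.35, 0.60]])

def simulate(start=0, steps=50):
    """One expression trajectory under the Markov (memoryless) assumption."""
    s, path = start, [start]
    for _ in range(steps):
        s = rng.choice(len(states), p=P[s])
        path.append(s)
    return path

# Stationary distribution: left eigenvector of P associated with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
print(dict(zip(states, pi / pi.sum())))
print([states[i] for i in simulate()][:10])
```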

  14. Stochastic approach and fluctuation theorem for charge transport in diodes

    Science.gov (United States)

    Gu, Jiayin; Gaspard, Pierre

    2018-05-01

    A stochastic approach for charge transport in diodes is developed in consistency with the laws of electricity, thermodynamics, and microreversibility. In this approach, the electron and hole densities are ruled by diffusion-reaction stochastic partial differential equations and the electric field generated by the charges is determined with the Poisson equation. These equations are discretized in space for the numerical simulations of the mean density profiles, the mean electric potential, and the current-voltage characteristics. Moreover, the full counting statistics of the carrier current and the measured total current including the contribution of the displacement current are investigated. On the basis of local detailed balance, the fluctuation theorem is shown to hold for both currents.

  15. Stochastic resonance a mathematical approach in the small noise limit

    CERN Document Server

    Herrmann, Samuel; Pavlyukevich, Ilya; Peithmann, Dierk

    2013-01-01

    Stochastic resonance is a phenomenon arising in a wide spectrum of areas in the sciences ranging from physics through neuroscience to chemistry and biology. This book presents a mathematical approach to stochastic resonance which is based on a large deviations principle (LDP) for randomly perturbed dynamical systems with a weak inhomogeneity given by an exogenous periodicity of small frequency. Resonance, the optimal tuning between period length and noise amplitude, is explained by optimizing the LDP's rate function. The authors show that not all physical measures of tuning quality are robust with respect to dimension reduction. They propose measures of tuning quality based on exponential transition rates explained by large deviations techniques and show that these measures are robust. The book sheds some light on the shortcomings and strengths of different concepts used in the theory and applications of stochastic resonance without attempting to give a comprehensive overview of the many facets of stochastic ...

  16. A stochastic approach to anelastic creep

    International Nuclear Information System (INIS)

    Venkataraman, G.

    1976-01-01

    Anelastic creep, or the time-dependent yielding of a material subjected to external stresses, has been found to be of great importance in technology in recent years, particularly in engineering structures, including nuclear reactors, wherein structural members may be under stress. The physics underlying this phenomenon is dealt with in detail. The basics of time-dependent elasticity, the constitutive relation, network models, the constitutive equation in the frequency domain and its measurement, and the stochastic approach to creep are discussed. (K.B.)

  17. A new approach to stochastic transport via the functional Volterra expansion

    International Nuclear Information System (INIS)

    Ziya Akcasu, A.; Corngold, N.

    2005-01-01

    In this paper we present a new algorithm (FDA) for the calculation of the mean and the variance of the flux in stochastic transport when the transport equation contains a spatially random parameter θ(r), such as the density of the medium. The approach is based on the renormalized functional Volterra expansion of the flux around its mean. The attractive feature of the approach is that it explicitly displays the functional dependence of the flux on the products of θ(r_i), and hence enables one to take ensemble averages directly to calculate the moments of the flux in terms of the correlation functions of the underlying random process. The renormalized deterministic transport equation for the mean flux has been obtained to second order in θ(r), and a functional relationship between the variance and the mean flux has been derived to calculate the variance to this order. The feasibility and accuracy of FDA have been demonstrated in the case of stochastic diffusion, using the diffusion equation with a spatially random diffusion coefficient. We also discuss the connection of FDA with the well-established approximation schemes in the field of stochastic linear differential equations, such as the Bourret approximation, developed by Van Kampen using cumulant expansion and by Terwiel using projection operator formalism, which has recently been extended to stochastic transport by Corngold. We hope that FDA's potential will be explored numerically in more realistic applications of stochastic transport. (authors)

  18. The stochastic system approach for estimating dynamic treatments effect.

    Science.gov (United States)

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

    The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of HAART on CD4 counts. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows incorporating biological knowledge naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system and the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  19. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    -in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important

  20. Exploration of possible quantum gravity effects with neutrinos I: Decoherence in neutrino oscillations experiments

    International Nuclear Information System (INIS)

    Sakharov, Alexander; Mavromatos, Nick; Sarkar, Sarben; Meregaglia, Anselmo; Rubbia, Andre

    2009-01-01

    Quantum gravity may involve models with stochastic fluctuations of the associated metric field, around some fixed background value. Such stochastic models of gravity may induce decoherence for matter propagating in such fluctuating space time. In most cases, this leads to fewer neutrinos of all active flavours being detected in a long baseline experiment as compared to three-flavour standard neutrino oscillations. We discuss the potential of the CNGS and J-PARC beams in constraining models of quantum-gravity induced decoherence using neutrino oscillations as a probe. We use as much as possible model-independent parameterizations, even though they are motivated by specific microscopic models, for fits to the expected experimental data which yield bounds on quantum-gravity decoherence parameters.

  1. A perturbative approach to neutron stars in f(T, T)-gravity

    Energy Technology Data Exchange (ETDEWEB)

    Pace, Mark; Said, Jackson Levi [University of Malta, Department of Physics, Msida (Malta); University of Malta, Institute of Space Sciences and Astronomy, Msida (Malta)

    2017-05-15

    We derive a Tolman-Oppenheimer-Volkoff equation in neutron star systems within the modified f(T, T)-gravity class of models using a perturbative approach. In our approach the f(T, T)-gravity space-time is taken to be static and spherically symmetric. In this instance the metric is built from a more fundamental vierbein which can be used to relate inertial and global coordinates. A linear function f = T(r) + T(r) + χh(T, T) + O(χ^2) is taken as the Lagrangian density for the gravitational action. Finally we impose the polytropic equation of state of the neutron star upon the derived equations in order to derive the mass profile and mass-central density relations of the neutron star in f(T, T)-gravity. (orig.)

  2. Fat versus Thin Threading Approach on GPUs: Application to Stochastic Simulation of Chemical Reactions

    KAUST Repository

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K.

    2012-01-01

    We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.

  3. Fat versus Thin Threading Approach on GPUs: Application to Stochastic Simulation of Chemical Reactions

    KAUST Repository

    Klingbeil, Guido

    2012-02-01

    We explore two different threading approaches on a graphics processing unit (GPU) exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation of the reaction system's size. © 2006 IEEE.
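
    Both records build on Gillespie's stochastic simulation algorithm (SSA). As a point of reference, here is a minimal serial (single-threaded) direct-method SSA for an illustrative birth-death system; it does not reproduce the fat- or thin-thread GPU strategies discussed above.

```python
# Gillespie direct method for the birth-death system 0 -> A, A -> 0 (illustrative rates).
import numpy as np

def ssa_direct(k_prod=10.0, k_deg=0.1, a0=0, t_end=100.0, seed=1):
    rng = np.random.default_rng(seed)
    t, a = 0.0, a0
    times, counts = [t], [a]
    while t < t_end:
        props = np.array([k_prod, k_deg * a])   # reaction propensities
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)       # time to the next reaction
        a += 1 if rng.uniform(0.0, total) < props[0] else -1
        times.append(t)
        counts.append(a)
    return np.array(times), np.array(counts)

t, a = ssa_direct()
print("final copy number:", a[-1], "| stationary mean ~", 10.0 / 0.1)
```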

  4. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL

    2015-01-01

    The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.

  5. A stochastic approach to chemical evolution

    International Nuclear Information System (INIS)

    Copi, C.J.

    1997-01-01

    Observations of elemental abundances in the Galaxy have repeatedly shown an intrinsic scatter as a function of time and metallicity. The standard approach to chemical evolution does not attempt to address this scatter in abundances since only the mean evolution is followed. In this work, the scatter is addressed via a stochastic approach to solving chemical evolution models. Three simple chemical evolution scenarios are studied using this stochastic approach: a closed box model, an infall model, and an outflow model. These models are solved for the solar neighborhood in a Monte Carlo fashion. The evolutionary history of one particular region is determined randomly based on the star formation rate and the initial mass function. Following the evolution in an ensemble of such regions leads to the predicted spread in abundances expected, based solely on different evolutionary histories of otherwise identical regions. In this work, 13 isotopes are followed, including the light elements, the CNO elements, a few α-elements, and iron. It is found that the predicted spread in abundances for a 10^5 M_⊙ region is in good agreement with observations for the α-elements. For CN, the agreement is not as good, perhaps indicating the need for more physics input for low-mass stellar evolution. Similarly for the light elements, the predicted scatter is quite small, which is in contradiction to the observations of 3He in HII regions. The models are tuned for the solar neighborhood so that good agreement with HII regions is not expected. This has important implications for low-mass stellar evolution and on using chemical evolution to determine the primordial light-element abundances in order to test big bang nucleosynthesis. copyright 1997 The American Astronomical Society

  6. An Approach to Stochastic Peridynamic Theory.

    Energy Technology Data Exchange (ETDEWEB)

    Demmie, Paul N.

    2018-04-01

    In many material systems, man-made or natural, we have an incomplete knowledge of geometric or material properties, which leads to uncertainty in predicting their performance under dynamic loading. Given the uncertainty and a high degree of spatial variability in properties of materials subjected to impact, a stochastic theory of continuum mechanics would be useful for modeling dynamic response of such systems. Peridynamic theory is such a theory. It is formulated as an integro-differential equation that does not employ spatial derivatives, and provides for a consistent formulation of both deformation and failure of materials. We discuss an approach to stochastic peridynamic theory and illustrate the formulation with examples of impact loading of geological materials with uncorrelated or correlated material properties. We examine wave propagation and damage to the material. The most salient feature is the absence of spallation, referred to as disorder toughness, which generalizes similar results from earlier quasi-static damage mechanics. Acknowledgements This research was made possible by the support from DTRA grant HDTRA1-08-10-BRCWM. I thank Dr. Martin Ostoja-Starzewski for introducing me to the mechanics of random materials and collaborating with me throughout and after this DTRA project.

  7. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Liang Jinghang

    2012-08-01

    network inferred from a T cell immune response dataset. An SBN can also implement the function of an asynchronous PBN and is potentially useful in a hybrid approach in combination with a continuous or single-molecule level stochastic model. Conclusions Stochastic Boolean networks (SBNs) are proposed as an efficient approach to modelling gene regulatory networks (GRNs). The SBN approach is able to recover biologically-proven regulatory behaviours, such as the oscillatory dynamics of the p53-Mdm2 network and the dynamic attractors in a T cell immune response network. The proposed approach can further predict the network dynamics when the genes are under perturbation, thus providing biologically meaningful insights for a better understanding of the dynamics of GRNs. The algorithms and methods described in this paper have been implemented in Matlab packages, which are attached as Additional files.

  8. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for joint operating multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...

  9. Simple Planar Truss (Linear, Nonlinear and Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Frydrýšek Karel

    2016-11-01

    Full Text Available This article deals with a simple planar and statically determinate pin-connected truss. It demonstrates the processes and methods of derivations and solutions according to 1st and 2nd order theories. The article applies linear and nonlinear approaches and their simplifications via a Maclaurin series. Programming connected with the stochastic Simulation-Based Reliability Method (i.e. the direct Monte Carlo approach) is used to conduct a probabilistic reliability assessment (i.e. a calculation of the probability that plastic deformation will occur in members of the truss).
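
    The direct Monte Carlo step mentioned above can be sketched for a single illustrative limit state, axial stress exceeding a random yield strength; the distributions and numbers are assumptions for illustration and do not describe the truss of the article.

```python
# Direct Monte Carlo estimate of the probability of plastic deformation (illustrative data).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

F  = rng.normal(75e3, 8e3, n)      # axial force [N]
A  = rng.normal(4.0e-4, 0.2e-4, n) # cross-section area [m^2]
fy = rng.normal(235e6, 15e6, n)    # yield strength [Pa]

plastic = F / A > fy               # limit state: stress exceeds yield
p_f = plastic.mean()
print(f"probability of plastic deformation ~ {p_f:.3e} "
      f"(std. error {np.sqrt(p_f * (1 - p_f) / n):.1e})")
```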

  10. Gravity waves from tachyonic preheating after hybrid inflation

    Energy Technology Data Exchange (ETDEWEB)

    Dufaux, Jean-Francois [Instituto de Fisica Teorica UAM/CSIC, Universidad Autonoma de Madrid, Cantoblanco, 28049 Madrid (Spain); Felder, Gary [Department of Physics, Clark Science Center, Smith College, Northampton, MA 01063 (United States); Kofman, Lev [CITA, University of Toronto, 60 St. George Street, Toronto, ON M5S 3H8 (Canada); Navros, Olga, E-mail: jeff.dufaux@uam.es, E-mail: gfelder@email.smith.edu, E-mail: kofman@cita.utoronto.ca, E-mail: navros@email.unc.edu [Department of Mathematics, University of North Carolina Chapel Hill, CB3250 Philips Hall, Chapel Hill, NC 27599 (United States)

    2009-03-15

    We study the stochastic background of gravitational waves produced from preheating in hybrid inflation models. We investigate different dynamical regimes of preheating in these models and we compute the resulting gravity wave spectra using analytical estimates and numerical simulations. We discuss the dependence of the gravity wave frequencies and amplitudes on the various potential parameters. We find that large regions of the parameter space lead to gravity waves that may be observable in upcoming interferometric experiments, including Advanced LIGO, but this generally requires very small coupling constants.

  11. A Direct Approach to Determine the External Disturbing Gravity Field by Applying Green Integral with the Ground Boundary Value

    Directory of Open Access Journals (Sweden)

    TIAN Jialei

    2015-11-01

    Full Text Available By using the ground as the boundary, the Molodensky problem is usually solved in the form of a series. Higher order terms reflect the correction between a smooth surface and the ground boundary. Application difficulties arise not only from computational complexity and stability maintenance, but also from data-intensiveness. Therefore, in this paper, starting from the application of the external gravity disturbance, Green's formula is used on the digital terrain surface. If the influence of the horizontal component of the integral is ignored, expressions for the external disturbing potential determined by boundary values consisting of ground gravity anomalies and height anomaly differences are obtained, whose kernel functions are the reciprocal of the distance and the Poisson kernel, respectively. With this method, there is no need for continuation of ground data, and the kernel functions are concise and suitable for the stochastic computation of the external disturbing gravity field.

  12. Stochastic Turing Patterns: Analysis of Compartment-Based Approaches

    KAUST Repository

    Cao, Yang; Erban, Radek

    2014-01-01

    © 2014, Society for Mathematical Biology. Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes, for which stochastic Turing patterns are observed, can be larger than the parameter regimes predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for relatively simpler systems) that a modeler should not choose compartment sizes which are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns where each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form are different from the regions obtained by classical deterministic PDE-based models, but they are also different from the results obtained for the stochastic reaction-diffusion models which use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.

  13. Stochastic Turing Patterns: Analysis of Compartment-Based Approaches

    KAUST Repository

    Cao, Yang

    2014-11-25

    © 2014, Society for Mathematical Biology. Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes, for which stochastic Turing patterns are observed, can be larger than the parameter regimes predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for relatively simpler systems) that a modeler should not choose compartment sizes which are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns where each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form are different from the regions obtained by classical deterministic PDE-based models, but they are also different from the results obtained for the stochastic reaction-diffusion models which use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.

  14. Forward modeling of gravity data using geostatistically generated subsurface density variations

    Science.gov (United States)

    Phelps, Geoffrey

    2016-01-01

    Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
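
    The forward step described above, summing the gravity effect of every discretized density cell, can be sketched as follows with each cell crudely approximated as a point mass; the grid, station layout and density-contrast statistics are illustrative assumptions, not the Vaca Fault model.

```python
# Forward gravity anomaly from one realization of cell density contrasts (point-mass cells).
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def forward_gravity(stations, cells, drho, cell_volume):
    """Vertical gravity anomaly [mGal] at surface stations; depth z is positive downward."""
    gz = np.zeros(len(stations))
    for (xc, yc, zc), rho in zip(cells, drho):
        dx = stations[:, 0] - xc
        dy = stations[:, 1] - yc
        dz = zc - stations[:, 2]                  # cells lie below the stations
        r3 = (dx**2 + dy**2 + dz**2) ** 1.5
        gz += G * rho * cell_volume * dz / r3
    return gz * 1e5                               # m/s^2 -> mGal

rng = np.random.default_rng(0)
xs, ys, zs = np.meshgrid(np.arange(0, 500, 100), np.arange(0, 500, 100),
                         np.arange(100, 400, 100), indexing="ij")
cells = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
drho = rng.normal(0.0, 50.0, len(cells))          # density contrast [kg/m^3], one realization
stations = np.column_stack([np.linspace(0, 400, 9), np.full(9, 200.0), np.zeros(9)])

print(forward_gravity(stations, cells, drho, cell_volume=100.0**3))
```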

  15. A Constructive Sharp Approach to Functional Quantization of Stochastic Processes

    OpenAIRE

    Junglen, Stefan; Luschgy, Harald

    2010-01-01

    We present a constructive approach to the functional quantization problem of stochastic processes, with an emphasis on Gaussian processes. The approach is constructive, since we reduce the infinite-dimensional functional quantization problem to a finite-dimensional quantization problem that can be solved numerically. Our approach achieves the sharp rate of the minimal quantization error and can be used to quantize the path space for Gaussian processes and also, for example, Lévy processes.

  16. Continuous strong Markov processes in dimension one a stochastic calculus approach

    CERN Document Server

    Assing, Sigurd

    1998-01-01

    The book presents an in-depth study of arbitrary one-dimensional continuous strong Markov processes using methods of stochastic calculus. Departing from the classical approaches, a unified investigation of regular as well as arbitrary non-regular diffusions is provided. A general construction method for such processes, based on a generalization of the concept of a perfect additive functional, is developed. The intrinsic decomposition of a continuous strong Markov semimartingale is discovered. The book also investigates relations to stochastic differential equations and fundamental examples of irregular diffusions.

  17. An Asymptotic and Stochastic Theory for the Effects of Surface Gravity Waves on Currents and Infragravity Waves

    Science.gov (United States)

    McWilliams, J. C.; Lane, E.; Melville, K.; Restrepo, J.; Sullivan, P.

    2004-12-01

    Oceanic surface gravity waves are approximately irrotational, weakly nonlinear, and conservative, and they have a much shorter time scale than oceanic currents and longer waves (e.g., infragravity waves) --- except where the primary surface waves break. This provides a framework for an asymptotic theory, based on separation of time (and space) scales, of wave-averaged effects associated with the conservative primary wave dynamics combined with a stochastic representation of the momentum transfer and induced mixing associated with non-conservative wave breaking. Such a theory requires only modest information about the primary wave field from measurements or operational model forecasts and thus avoids the enormous burden of calculating the waves on their intrinsically small space and time scales. For the conservative effects, the result is a vortex force associated with the primary wave's Stokes drift; a wave-averaged Bernoulli head and sea-level set-up; and an incremental material advection by the Stokes drift. This can be compared to the "radiation stress" formalism of Longuet-Higgins, Stewart, and Hasselmann; it is shown to be a preferable representation since the radiation stress is trivial at its apparent leading order. For the non-conservative breaking effects, a population of stochastic impulses is added to the current and infragravity momentum equations with distribution functions taken from measurements. In offshore wind-wave equilibria, these impulses replace the conventional surface wind stress and cause significant differences in the surface boundary layer currents and entrainment rate, particularly when acting in combination with the conservative vortex force. In the surf zone, where breaking associated with shoaling removes nearly all of the primary wave momentum and energy, the stochastic forcing plays an analogous role as the widely used nearshore radiation stress parameterizations. This talk describes the theoretical framework and presents some

  18. Backward-stochastic-differential-equation approach to modeling of gene expression.

    Science.gov (United States)

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).

  19. Temporal gravity field modeling based on least square collocation with short-arc approach

    Science.gov (United States)

    ran, jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, and two main objectives are discussed. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least square collocation method. The method has received less attention than the least square adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance covariance matrices of the pseudo-observations and the unknown parameters are valuable information to improve the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced based on the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed from the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of the equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.
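
    The comparison between a bias-only and a bias-plus-drift accelerometer parameterization can be illustrated with a generic linear least-squares fit on synthetic residuals; this sketch is not the IGG-CAS processing chain and all values are assumptions.

```python
# Bias-only vs bias+drift fit to synthetic accelerometer residuals (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 86400.0, 1440)              # one day of 1-minute samples [s]
true_bias, true_drift = 2.0e-7, 3.0e-12          # [m/s^2] and [m/s^2 per s], assumed
acc = true_bias + true_drift * t + rng.normal(0.0, 5e-8, t.size)

designs = {"bias only": np.ones((t.size, 1)),
           "bias + drift": np.column_stack([np.ones_like(t), t])}

for name, A in designs.items():
    x, *_ = np.linalg.lstsq(A, acc, rcond=None)
    rms = np.sqrt(np.mean((acc - A @ x) ** 2))
    print(f"{name}: parameters {x}, post-fit RMS {rms:.2e} m/s^2")
```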

  20. Stability analysis of stochastic delayed cellular neural networks by LMI approach

    International Nuclear Information System (INIS)

    Zhu Wenli; Hu Jin

    2006-01-01

    Some sufficient mean square exponential stability conditions for a class of stochastic DCNN models are obtained via the LMI approach. These conditions improve and generalize some existing global asymptotic stability conditions for the DCNN model.

  1. A GOCE-only global gravity field model by the space-wise approach

    DEFF Research Database (Denmark)

    Migliaccio, Frederica; Reguzzoni, Mirko; Gatti, Andrea

    2011-01-01

    The global gravity field model computed by the spacewise approach is one of three official solutions delivered by ESA from the analysis of the GOCE data. The model consists of a set of spherical harmonic coefficients and the corresponding error covariance matrix. The main idea behind this approach...... the orbit to reduce the noise variance and correlation before gridding the data. In the first release of the space-wise approach, based on a period of about two months, some prior information coming from existing gravity field models entered into the solution especially at low degrees and low orders...... degrees; the second is an internally computed GOCE-only prior model to be used in place of the official quick-look model, thus removing the dependency on EIGEN5C especially in the polar gaps. Once the procedure to obtain a GOCE-only solution has been outlined, a new global gravity field model has been...

  2. Stochastic Thermodynamics: A Dynamical Systems Approach

    Directory of Open Access Journals (Sweden)

    Tanmay Rajpurohit

    2017-12-01

    Full Text Available In this paper, we develop an energy-based, large-scale dynamical system model driven by Markov diffusion processes to present a unified framework for statistical thermodynamics predicated on a stochastic dynamical systems formalism. Specifically, using a stochastic state space formulation, we develop a nonlinear stochastic compartmental dynamical system model characterized by energy conservation laws that is consistent with statistical thermodynamic principles. In particular, we show that the difference between the average supplied system energy and the average stored system energy for our stochastic thermodynamic model is a martingale with respect to the system filtration. In addition, we show that the average stored system energy is equal to the mean energy that can be extracted from the system and the mean energy that can be delivered to the system in order to transfer it from a zero energy level to an arbitrary nonempty subset in the state space over a finite stopping time.

  3. Field-theoretic approach to gravity in the flat space-time

    Energy Technology Data Exchange (ETDEWEB)

    Cavalleri, G [Centro Informazioni Studi Esperienze, Milan (Italy); Milan Univ. (Italy). Ist. di Fisica); Spinelli, G [Istituto di Matematica del Politecnico di Milano, Milano (Italy)

    1980-01-01

    This paper discusses how the field-theoretical approach to gravity, starting from flat space-time, is wider than the Einstein approach. The flat approach is able to predict the structure of the observable space as a consequence of the behaviour of the particle proper masses. The field equations are formally equal to Einstein's equations without the cosmological term.

  4. Robust approach to f(R) gravity

    International Nuclear Information System (INIS)

    Jaime, Luisa G.; Patino, Leonardo; Salgado, Marcelo

    2011-01-01

    We consider metric f(R) theories of gravity without mapping them to their scalar-tensor counterpart, but using the Ricci scalar itself as an "extra" degree of freedom. This approach then avoids the introduction of a scalar-field potential that might be ill defined (not single valued). In order to explicitly show the usefulness of this method, we focus on static and spherically symmetric spacetimes and deal with the recent controversy about the existence of extended relativistic objects in a certain class of f(R) models.

  5. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its mean first-passage time. The approach has some predictive power on the amplitude of future returns given only the current volatility. The assumed models do not consider long-range volatility autocorrelation or the asymmetric return-volatility cross-correlation, but the method still yields these two important stylized facts very naturally. We apply the method to different market indices, with good performance in all cases. (paper)
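
    As an illustration of the simplest model class mentioned, the expOU model, the following Euler scheme simulates a zero-drift log-price whose volatility is driven by a hidden Ornstein-Uhlenbeck process; the parameters are illustrative and the maximum likelihood estimation step itself is not reproduced here.

```python
# Euler simulation of the expOU stochastic volatility model (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n, dt = 10_000, 1.0 / 252                 # ~40 years of daily steps
alpha, k, m = 1.0, 0.8, 0.15              # OU reversion rate, vol-of-vol, volatility scale

Y = np.zeros(n)                           # hidden OU process
logret = np.zeros(n)                      # zero-drift log-price increments
for i in range(1, n):
    dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)     # independent Brownian increments
    Y[i] = Y[i-1] - alpha * Y[i-1] * dt + k * dW1
    sigma = m * np.exp(Y[i-1])                     # instantaneous volatility
    logret[i] = sigma * dW2

print("annualized return std:", logret.std() / np.sqrt(dt))
```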

  6. An Improved Asymptotic Sampling Approach For Stochastic Finite Element Stiffness of a Laterally Loaded Monopile

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2012-01-01

    In this study a stochastic approach is used to obtain the horizontal and rotational stiffness of an offshore monopile foundation. A nonlinear stochastic p-y curve is integrated into a finite element scheme for calculation of the monopile response in over-consolidated clay having spatial...

  7. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    KAUST Repository

    Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gä ertner, Klaus

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
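
    The per-scenario Schur-complement assembly described above, sparse factorizations applied to multiple right-hand sides, can be sketched at toy scale with SciPy; PIPS, PARDISO and the augmented incomplete factorization are not reproduced, and the block sizes and data are illustrative.

```python
# Toy Schur complement S = C - sum_s B_s^T A_s^{-1} B_s via sparse LU with multiple RHS.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
n_scen, n_local, n_first = 4, 50, 6       # scenarios, local vars per scenario, first-stage vars

C = np.eye(n_first) * n_scen              # first-stage block (illustrative SPD matrix)
S = C.copy()
for s in range(n_scen):
    M = sp.random(n_local, n_local, density=0.05, random_state=s)
    A_s = (M @ M.T + n_local * sp.eye(n_local)).tocsc()    # sparse SPD scenario block
    B_s = rng.normal(size=(n_local, n_first))               # coupling block

    X = splu(A_s).solve(B_s)              # multi-right-hand-side solve A_s X = B_s
    S -= B_s.T @ X                        # scenario contribution to the Schur complement

print("Schur complement symmetric:", np.allclose(S, S.T))
```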

  8. Stochastic approaches to inflation model building

    International Nuclear Information System (INIS)

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there are a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow equations approach. The flow equations approach is known to predict a definite structure to the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered

  9. 3D stochastic inversion and joint inversion of potential fields for multi scale parameters

    Science.gov (United States)

    Shamsipour, Pejman

    In this thesis we present the development of new techniques for the interpretation of potential field data (gravity and magnetic data), which are the most widespread economic geophysical methods used for oil and mineral exploration. These new techniques help to address the long-standing issue with the interpretation of potential fields, namely the intrinsic non-uniqueness of the inversion of these types of data. The thesis takes the form of three papers (four including the Appendix), which have been published, or are soon to be published, in respected international journals. The purpose of the thesis is to introduce new methods based on 3D stochastic approaches for: 1) inversion of potential field data (magnetic), 2) multiscale inversion using surface and borehole data, and 3) joint inversion of geophysical potential field data. We first present a stochastic inversion method based on a geostatistical approach to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. We evaluate the uncertainty on the parameter model by using geostatistical unconditional simulations. The realizations are post-conditioned by cokriging to the observation data. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. Then, we introduce an algorithm for multiscale inversion, which has the capability of inverting data on multiple supports. The method involves four main steps: i. upscaling of borehole parameters (density or susceptibility) to block parameters, ii. selection of blocks to use as constraints based on a threshold on the kriging variance, iii. inversion of observation data with the selected block densities as constraints, and iv. downscaling of inverted parameters to small prisms. Two modes of application are presented: estimation and simulation. Finally, a novel

  10. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    Science.gov (United States)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.

  11. Revisiting the cape cod bacteria injection experiment using a stochastic modeling approach

    Science.gov (United States)

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of hydraulic conductivity (ln K) field to achieve best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured

  12. Superconformal gravity in Hamiltonian form: another approach to the renormalization of gravitation

    International Nuclear Information System (INIS)

    Kaku, M.

    1983-01-01

    We reexpress superconformal gravity in Hamiltonian form, explicitly displaying all 24 generators of the group as Dirac constraints on the Hilbert space. From this, we can establish a firm foundation for the canonical quantization of superconformal gravity. The purpose of writing down the Hamiltonian form of the theory is to reexamine the question of renormalization and unitarity. Usually, we start with unitary theories of gravity, such as the Einstein-Hilbert action or supergravity, both of which are probably not renormalizable. In this series of papers, we take the opposite approach and start with a theory which is renormalizable but has problems with unitarity. Conformal and superconformal gravity are both plagued with dipole ghosts when we use perturbation theory to quantize the theories. It is difficult to interpret the results of perturbation theory because the asymptotic states have zero norm and the potential between particles grows linearly with the separation distance. The purpose of writing the Hamiltonian form of these theories is to approach the question of unitarity from a different point of view. For example, a strong-coupling approach to these theories may yield a totally different perturbation expansion. We speculate that canonically quantizing the theory by power expanding in the strong-coupling regime may yield a different set of asymptotic states, somewhat similar to the situation in gauge theories. In this series of papers, we wish to reopen the question of the unitarity of conformal theories. We conjecture that ghosts are ''confined.''

  13. Approaches to emergent spacetime in gauge/gravity duality

    Science.gov (United States)

    Sully, James Kenneth

    2013-08-01

    In this thesis we explore approaches to emergent local spacetime in gauge/gravity duality. We first conjecture that every CFT with a large-N type limit and a parametrically large gap in the spectrum of single-trace operators has a local bulk dual. We defend this conjecture by counting consistent solutions to the four-point function in simple scalar models and matching to the number of local interaction terms in the bulk. Next, we proceed to explicitly construct local bulk operators using smearing functions. We argue that this construction allows one to probe inside black hole horizons for only short times. We then suggest that the failure to construct bulk operators inside a black hole at late times is indicative of a break-down of local effective field theory at the black hole horizon. We argue that the postulates of black hole complementarity are inconsistent and cannot be realized within gauge/gravity duality. We argue that the most conservative solution is a firewall at the black hole horizon and we critically explore alternative resolutions. We then examine the CGHS model of two-dimensional gravity to look for dynamical formation of firewalls. We find that the CGHS model does not exhibit firewalls, but rather contains long-lived remnants. We argue that, while this is consistent for the CGHS model, it cannot be so in higher-dimensional theories of gravity. Lastly, we turn to F-theory, and detail local and global obstructions to writing elliptic fibrations in Tate form. We determine more general possible forms.

  14. Ostrogradski Hamiltonian approach for geodetic brane gravity

    International Nuclear Information System (INIS)

    Cordero, Ruben; Molgado, Alberto; Rojas, Efrain

    2010-01-01

    We present an alternative Hamiltonian description of a branelike universe immersed in a flat background spacetime. This model is named geodetic brane gravity. We set up the Regge-Teitelboim model to describe our Universe, where such a field theory is originally formulated as a second-order derivative theory. We employ an Ostrogradski Hamiltonian formalism to prepare the system for its quantization. This approach comprises the handling of both first- and second-class constraints, and the counting of degrees of freedom follows accordingly.

  15. A polynomial-chaos-expansion-based building block approach for stochastic analysis of photonic circuits

    Science.gov (United States)

    Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea

    2018-02-01

    The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on their functionality and ultimately affect the yield. In order to predict the statistical behavior of the circuit, a proper analysis of the effects of uncertainty is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once and stored in a BB (foundry-dependent) library and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.
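
    The core idea of a polynomial-chaos macro-model can be illustrated with a one-variable toy example: project a (hypothetical) building-block response onto probabilists' Hermite polynomials by quadrature, then read the stochastic moments directly off the coefficients, so that no repeated Monte Carlo runs of the response are needed. The response function and every number below are invented for illustration and are not the photonic macro-models of the paper.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Toy "building block": power transmission of a component whose insertion loss
# depends on a standard-normal fabrication variable xi (hypothetical model).
def transmission(xi):
    loss_db = 0.5 + 0.2 * xi            # hypothetical insertion loss [dB]
    return 10.0 ** (-loss_db / 10.0)    # linear power transmission

order = 6                               # PCE truncation order
pts, wts = hermegauss(32)               # Gauss-Hermite rule (weight exp(-x^2/2))

# Galerkin projection onto probabilists' Hermite polynomials: c_k = E[f(xi) He_k(xi)] / k!
coeffs = []
for k in range(order + 1):
    He_k = hermeval(pts, [0.0]*k + [1.0])
    ck = np.sum(wts * He_k * transmission(pts)) / (sqrt(2.0*pi) * factorial(k))
    coeffs.append(ck)

mean = coeffs[0]                                        # first stochastic moment
var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
print(f"PCE: mean = {mean:.6f}, std = {np.sqrt(var):.6f}")

# Brute-force Monte Carlo check of the same moments
xi = np.random.default_rng(1).normal(size=200_000)
t = transmission(xi)
print(f"MC : mean = {t.mean():.6f}, std = {t.std():.6f}")
```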

  16. Approaching complexity by stochastic methods: From biological systems to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
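
    The reconstruction step (i) can be illustrated with a minimal Kramers-Moyal estimate on synthetic data: simulate an Ornstein-Uhlenbeck process and recover its drift and diffusion functions from binned conditional moments of the increments. This is only a schematic sketch of the general procedure discussed in the review; the process parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic Langevin data: dx = -gamma*x dt + sqrt(2*Q) dW (Ornstein-Uhlenbeck process)
gamma, Q, dt, n = 1.0, 0.5, 1e-3, 500_000
x = np.empty(n)
x[0] = 0.0
kicks = rng.normal(0.0, np.sqrt(2.0*Q*dt), n-1)
for i in range(n-1):
    x[i+1] = x[i] - gamma*x[i]*dt + kicks[i]

# Kramers-Moyal coefficients from conditional moments of the increments
bins = np.linspace(-2.0, 2.0, 41)
centers = 0.5*(bins[:-1] + bins[1:])
dx = np.diff(x)
which = np.digitize(x[:-1], bins) - 1
D1 = np.full(centers.size, np.nan)   # drift estimate
D2 = np.full(centers.size, np.nan)   # diffusion estimate
for k in range(centers.size):
    sel = which == k
    if sel.sum() > 200:
        D1[k] = dx[sel].mean() / dt
        D2[k] = (dx[sel]**2).mean() / (2.0*dt)

ok = ~np.isnan(D1)
drift_slope = np.polyfit(centers[ok], D1[ok], 1)[0]
print(f"estimated drift slope {drift_slope:.2f} (true -gamma = {-gamma})")
print(f"estimated diffusion   {np.nanmean(D2):.2f} (true Q = {Q})")
```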

  17. Approaching complexity by stochastic methods: From biological systems to turbulence

    International Nuclear Information System (INIS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-01-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.

  18. Gravity inversion predicts the nature of the amundsen basin and its continental borderlands near greenland

    DEFF Research Database (Denmark)

    Døssing, Arne; Hansen, Thomas Mejer; Olesen, Arne Vestergaard

    2014-01-01

    the results of 3-D gravity inversion for predicting the sediment thickness and basement geometry within the Amundsen Basin and along its borderlands. We use the recently published LOMGRAV-09 gravity compilation and adopt a process-oriented iterative cycle approach that minimizes misfit between an Earth model...... and observations. The sensitivity of our results to lateral variations in depth and density contrast of the Moho is further tested by a stochastic inversion. Within their limitations, the approach and setup used herein provides the first detailed model of the sediment thickness and basement geometry in the Arctic...... above high-relief basement in the central Amundsen Basin. Significantly, an up to 7 km deep elongated sedimentary basin is predicted along the northern edge of the Morris Jesup Rise. This basin continues into the Klenova Valley south of the Lomonosov Ridge and correlates with an offshore continuation...

  19. Elitism and Stochastic Dominance

    OpenAIRE

    Bazen, Stephen; Moyes, Patrick

    2011-01-01

    Stochastic dominance has typically been used with a special emphasis on risk and inequality reduction, something captured by the concavity of the utility function in the expected utility model. We claim that the applicability of the stochastic dominance approach goes far beyond risk and inequality measurement, provided suitable adaptations are made. In this paper we apply the stochastic dominance approach to the measurement of elitism, which may be considered the opposite of egalitarianism. While the

  20. Benchmarking the stochastic time-dependent variational approach for excitation dynamics in molecular aggregates

    Energy Technology Data Exchange (ETDEWEB)

    Chorošajev, Vladimir [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Gelzinis, Andrius; Valkunas, Leonas [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Department of Molecular Compound Physics, Center for Physical Sciences and Technology, Sauletekio 3, 10222 Vilnius (Lithuania); Abramavicius, Darius, E-mail: darius.abramavicius@ff.vu.lt [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania)

    2016-12-20

    Highlights: • The Davydov ansätze can be used for finite-temperature simulations with an extension. • The accuracy is high if the system is strongly coupled to the environmental phonons. • The approach can simulate time-resolved fluorescence spectra. - Abstract: The time-dependent variational approach is a convenient method to characterize the excitation dynamics in molecular aggregates for different strengths of the system-bath interaction, and it does not require any additional perturbative schemes. Until recently, however, this method was only applicable in the zero-temperature case. It has become possible to extend this method to finite temperatures with the introduction of the stochastic time-dependent variational approach. Here we present a comparison between this approach and the exact hierarchical equations of motion approach for describing excitation dynamics in a broad range of temperatures. We calculate electronic population evolution, absorption and auxiliary time-resolved fluorescence spectra in different regimes and find that the stochastic approach shows excellent agreement with the exact approach when the system-bath coupling is sufficiently large and temperatures are high. The differences between the two methods are larger when temperatures are lower or the system-bath coupling is small.

  1. Market Efficiency of Oil Spot and Futures: A Stochastic Dominance Approach

    NARCIS (Netherlands)

    H.H. Lean (Hooi Hooi); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    textabstractThis paper examines the market efficiency of oil spot and futures prices by using a stochastic dominance (SD) approach. As there is no evidence of an SD relationship between oil spot and futures, we conclude that there is no arbitrage opportunity between these two markets, and that both
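
    A naive empirical check of first- and second-order stochastic dominance between two return samples can be written in a few lines; the formal inference used in studies such as this one relies on dedicated SD test statistics, so the sketch below is only illustrative, and the two return series are synthetic placeholders for spot and futures data.

```python
import numpy as np

def dominates(a, b, order=1, grid_size=400):
    """True if sample `a` stochastically dominates sample `b` at the given order
    (1 = first-order, 2 = second-order), judged on the empirical CDFs."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    diff = Fb - Fa                                   # FSD: Fa <= Fb everywhere
    if order == 2:                                   # SSD: integrated-CDF comparison
        diff = np.cumsum(diff) * (grid[1] - grid[0])
    return bool(np.all(diff >= -1e-12) and np.any(diff > 1e-12))

# Hypothetical daily returns standing in for oil spot and futures series
rng = np.random.default_rng(3)
spot = rng.normal(0.0002, 0.020, 2500)
futures = rng.normal(0.0002, 0.021, 2500)

for x, y, label in [(spot, futures, "spot FSD futures"),
                    (futures, spot, "futures FSD spot"),
                    (spot, futures, "spot SSD futures")]:
    order = 2 if "SSD" in label else 1
    print(f"{label}: {dominates(x, y, order)}")
```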

  2. Multi-Period Natural Gas Market Modeling. Applications, Stochastic Extensions and Solution Approaches

    International Nuclear Information System (INIS)

    Egging, R.G.

    2010-11-01

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights in how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables of which 763 first-stage variables, however using BD did not result in

  3. Perturbative approach to non-Markovian stochastic Schroedinger equations

    International Nuclear Information System (INIS)

    Gambetta, Jay; Wiseman, H.M.

    2002-01-01

    In this paper we present a perturbative procedure that allows one to numerically solve diffusive non-Markovian stochastic Schroedinger equations, for a wide range of memory functions. To illustrate this procedure numerical results are presented for a classically driven two-level atom immersed in an environment with a simple memory function. It is observed that as the order of the perturbation is increased the numerical results for the ensemble average state ρ_red(t) approach the exact reduced state found via Imamoglu's enlarged-system method [Phys. Rev. A 50, 3650 (1994)].

  4. Group manifold approach to gravity and supergravity theories

    International Nuclear Information System (INIS)

    d'Auria, R.; Fre, P.; Regge, T.

    1981-05-01

    Gravity theories are presented from the point of view of group manifold formulation. The differential geometry of groups and supergroups is discussed first; the notion of connection and related Yang-Mills potentials is introduced. Then ordinary Einstein gravity is discussed in the Cartan formulation. This discussion provides a first example which will then be generalized to more complicated theories, in particular supergravity. The distinction between ''pure'' and ''impure'' theories is also set forth. Next, the authors develop an axiomatic approach to rheonomic theories related to the concept of Chevalley cohomology on group manifolds, and apply these principles to N = 1 supergravity. Then the panorama of so far constructed pure and impure group manifold supergravities is presented. The pure d = 5 N = 2 case is discussed in some detail, and N = 2 and N = 3 in d = 4 are considered as examples of the impure theories. The way a pure theory becomes impure after dimensional reduction is illustrated. Next, the role of kinematical superspace constraints as a subset of the group-manifold equations of motion is discussed, and the use of this approach to obtain the auxiliary fields is demonstrated. Finally, the application of the group manifold method to supersymmetric Super Yang-Mills theories is addressed

  5. Issues concerning gravity waves from first-order phase transitions

    International Nuclear Information System (INIS)

    Kosowsky, A.

    1993-01-01

    The stochastic background of gravitational radiation is a unique and potentially valuable source of information about the early universe. Photons thermally decoupled when the universe was around 100,000 years old; electromagnetic radiation cannot directly provide information about epochs earlier than this. In contrast, gravitons presumably decoupled around the Planck time, when the universe was only 10^-44 seconds old. Since gravity waves propagate virtually unimpeded, any energetic event in the evolution of the universe will leave an imprint on the gravity-wave background. Turner and Wilczek first suggested that first-order phase transitions, and particularly transitions which occur via the nucleation, expansion, and percolation of vacuum bubbles, would be a particularly efficient source of gravitational radiation. Detailed calculations with scalar-field vacuum bubbles confirm this conjecture and show that strongly first-order phase transitions are probably the strongest stochastic gravity-wave source yet conjectured. In this work the author first reviews the vacuum bubble calculations, stressing their physical assumptions. The author then discusses realistic scenarios for first-order phase transitions and describes how the calculations must be modified and extended to produce reliable results. 11 refs

  6. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    International Nuclear Information System (INIS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries

  7. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Energy Technology Data Exchange (ETDEWEB)

    Spill, Fabian, E-mail: fspill@bu.edu [Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Guerrero, Pilar [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Alarcon, Tomas [Centre de Recerca Matematica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona) (Spain); Departament de Matemàtiques, Universitat Atonòma de Barcelona, 08193 Bellaterra (Barcelona) (Spain); Maini, Philip K. [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Byrne, Helen [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Computational Biology Group, Department of Computer Science, University of Oxford, Oxford OX1 3QD (United Kingdom)

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  8. The Approach to Defining Gravity Factors of Influence on the Foreign Trade Relations of Countries

    Directory of Open Access Journals (Sweden)

    Kalyuzhna Nataliya G.

    2017-03-01

    Full Text Available The aim of the article is to determine the gravity factors of influence on the foreign trade relations of countries on the basis of the results of the comparative analysis of the classical specifications of the gravity model of foreign trade and the domestic experience in gravity modeling. It is substantiated that a gravity model is one of the tools of economic and mathematical modeling, the use of which is characterized by a high level of adequacy and ensures prediction of foreign trade conditions. The main approaches to the definition of explanatory variables in the gravity equation of foreign trade are analyzed, and the author’s approach to the selection of the factors of the gravity model is proposed. As the first explanatory variable in the specification of the gravity model of foreign trade, characterizing the importance of the economies of the foreign trade partners, it is proposed to use GDP calculated at purchasing power parity, with an expected positive and statistically significant coefficient. As the second explanatory variable of the gravity equation of foreign trade, it is proposed to use a complex characteristic of the “trade distance” between countries, which reflects the current conditions of bilateral trade and depends on factors influencing the foreign trade turnover between countries, both directly (static proportionality of transport costs to geographical remoteness) and indirectly (dynamic institutional conditions of bilateral relations). The expediency of using the world average annual price for oil as the quantitative equivalent of the “trade distance” index is substantiated. Prospects for further research in this direction are identifying the form and force of influence of certain basic gravity variables on the foreign trade relations of certain partner countries and determining the appropriateness of including additional factors in the composition of the gravity equation of foreign trade.

  9. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, with responses above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
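
    A stripped-down version of such a Monte Carlo exercise can be written without the prism machinery of Fatiando a Terra by approximating the buried body as a sphere (point mass) and propagating uncertain density, size and depth through the analytic gz and Gzz responses along an offset borehole. All distributions and geometry below are hypothetical, chosen only to illustrate the workflow described in the abstract.

```python
import numpy as np

G = 6.674e-11                      # gravitational constant [m^3 kg^-1 s^-2]
rng = np.random.default_rng(4)

borehole_x = 2500.0                               # horizontal offset of the borehole [m]
obs_depths = np.linspace(0.0, 2000.0, 81)         # measurement stations down the hole [m]

n_sim = 20_000                                    # (the paper runs 100 000 simulations)
gz  = np.zeros((n_sim, obs_depths.size))
gzz = np.zeros((n_sim, obs_depths.size))
for i in range(n_sim):
    # hypothetical prior distributions for the buried body (sphere approximation)
    rho    = rng.normal(300.0, 50.0)              # density contrast [kg/m^3]
    radius = rng.normal(400.0, 40.0)              # body radius [m]
    depth  = rng.normal(1200.0, 100.0)            # depth of the body centre [m]
    mass   = rho * 4.0/3.0 * np.pi * radius**3
    dz = depth - obs_depths                       # vertical separation to each station
    r  = np.hypot(borehole_x, dz)
    gz[i]  = G * mass * dz / r**3                 # vertical attraction [m/s^2]
    gzz[i] = G * mass * (3.0*dz**2 - r**2) / r**5 # vertical gravity gradient [1/s^2]

# mean response and spread at the deepest station, in Eotvos (1 E = 1e-9 s^-2)
print(f"Gzz at deepest station: {gzz[:, -1].mean()/1e-9:.3f} +/- {gzz[:, -1].std()/1e-9:.3f} E")
```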

  10. A stochastic approach to the derivation of exemption and clearance levels

    International Nuclear Information System (INIS)

    Deckert, A.

    1997-01-01

    Deciding what clearance levels are appropriate for a particular waste stream inherently involves a number of uncertainties. Some of these uncertainties can be quantified using stochastic modeling techniques, which can aid the process of decision making. In this presentation the German approach to dealing with the uncertainties involved in setting clearance levels is addressed. (author)

  11. Decoherence in quantum gravity: issues and critiques

    Energy Technology Data Exchange (ETDEWEB)

    Anastopoulos, C [Department of Physics, University of Patras, 26500 Patras (Greece); Hu, B L [Department of Physics, University of Maryland, College Park, Maryland 20742-4111 (United States)

    2007-05-15

    An increasing number of papers have appeared in recent years on decoherence in quantum gravity at the Planck energy. We discuss the meaning of decoherence in quantum gravity starting from the common notion that quantum gravity is a theory for the microscopic structures of spacetime, and invoking some generic features of quantum decoherence from the open systems viewpoint. We dwell on a range of issues bearing on this process including the relation between the statistical and the quantum, noise from effective field theory, the meaning of stochasticity, the origin of non-unitarity and the nature of nonlocality in this and related contexts. To expound these issues we critique two representative theories: one claims that decoherence at the quantum-gravity scale leads to violation of CPT symmetry at sub-Planckian energies, which is used to explain today's particle phenomenology. The other uses this process together with a Brownian motion model to prove that spacetime foam behaves like a thermal bath. A companion paper will deal with intrinsic and fundamental decoherence which also bears on issues in classical and quantum gravity.

  12. Decoherence in quantum gravity: issues and critiques

    International Nuclear Information System (INIS)

    Anastopoulos, C; Hu, B L

    2007-01-01

    An increasing number of papers have appeared in recent years on decoherence in quantum gravity at the Planck energy. We discuss the meaning of decoherence in quantum gravity starting from the common notion that quantum gravity is a theory for the microscopic structures of spacetime, and invoking some generic features of quantum decoherence from the open systems viewpoint. We dwell on a range of issues bearing on this process including the relation between the statistical and the quantum, noise from effective field theory, the meaning of stochasticity, the origin of non-unitarity and the nature of nonlocality in this and related contexts. To expound these issues we critique two representative theories: one claims that decoherence at the quantum-gravity scale leads to violation of CPT symmetry at sub-Planckian energies, which is used to explain today's particle phenomenology. The other uses this process together with a Brownian motion model to prove that spacetime foam behaves like a thermal bath. A companion paper will deal with intrinsic and fundamental decoherence which also bears on issues in classical and quantum gravity

  13. Multi-period natural gas market modeling Applications, stochastic extensions and solution approaches

    Science.gov (United States)

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights in how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables of which 763 first-stage variables, however using BD did not result in

  14. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    Science.gov (United States)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied with the advantage of discovering relevant features in a nonlinear relation among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, which consists of decision trees and an ensemble method using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 of Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performances of the models, one-step-ahead and multi-step-ahead forecasting was applied. The root mean squared error and mean absolute error of the two models were compared.
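
    A compact way to reproduce this kind of comparison is sketched below with statsmodels' SARIMAX and scikit-learn's RandomForestRegressor on a synthetic monthly inflow series standing in for the Chungju dam record (the real data are not reproduced here). Model orders, lag depth and holdout length are illustrative choices, not those of the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Hypothetical monthly inflow series (stand-in for the 1986-2015 record)
rng = np.random.default_rng(5)
months = pd.date_range("1986-01", periods=360, freq="MS")
seasonal = 80 + 60*np.sin(2*np.pi*(np.arange(360) % 12)/12.0)
inflow = pd.Series(np.maximum(seasonal + rng.gamma(2.0, 15.0, 360), 1.0), index=months)

train, test = inflow[:-24], inflow[-24:]          # hold out the last two years

# Traditional stochastic model: seasonal ARIMA, forecasting the holdout period
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
sarima_fc = sarima.forecast(steps=24)

# Machine-learning model: random forest on 12 lagged values as predictors
def make_lags(series, n_lags=12):
    df = pd.concat({f"lag{k}": series.shift(k) for k in range(1, n_lags + 1)}, axis=1)
    df["y"] = series
    return df.dropna()

lagged = make_lags(inflow)
X, y = lagged.drop(columns="y"), lagged["y"]
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X.loc[:train.index[-1]], y.loc[:train.index[-1]])
rf_fc = pd.Series(rf.predict(X.loc[test.index]), index=test.index)  # one-step-ahead style

for name, fc in [("SARIMA", sarima_fc), ("RandomForest", rf_fc)]:
    rmse = mean_squared_error(test, fc) ** 0.5
    mae = mean_absolute_error(test, fc)
    print(f"{name:>12}: RMSE = {rmse:.1f}, MAE = {mae:.1f}")
```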

  15. Squeezing more information out of time variable gravity data with a temporal decomposition approach

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Bordoni, A.; Aoudia, A.

    2012-01-01

    an explorative approach based on a suitable time series decomposition, which does not rely on predefined time signatures. The comparison and validation against the fitting approach commonly used in GRACE literature shows a very good agreement for what concerns trends and periodic signals on one side......A measure of the Earth's gravity contains contributions from solid Earth as well as climate-related phenomena, that cannot be easily distinguished both in time and space. After more than 7 years, the GRACE gravity data available now support more elaborate analysis on the time series. We propose...... used to assess the possibility of finding evidence of meaningful geophysical signals different from hydrology over Africa in GRACE data. In this case we conclude that hydrological phenomena are dominant and so time variable gravity data in Africa can be directly used to calibrate hydrological models.

  16. Approach of regional gravity field modeling from GRACE data for improvement of geoid modeling for Japan

    Science.gov (United States)

    Kuroishi, Y.; Lemoine, F. G.; Rowlands, D. D.

    2006-12-01

    The latest gravimetric geoid model for Japan, JGEOID2004, suffers from errors at long wavelengths (around 1000 km) in a range of +/- 30 cm. The model was developed by combining surface gravity data with a global marine altimetric gravity model, using EGM96 as a foundation, and the errors at long wavelength are presumably attributed to EGM96 errors. The Japanese islands and their vicinity are located in a region of plate convergence boundaries, producing substantial gravity and geoid undulations in a wide range of wavelengths. Because of the geometry of the islands and trenches, precise information on gravity in the surrounding oceans should be incorporated in detail, even if the geoid model is required to be accurate only over land. The Kuroshio Current, which runs south of Japan, causes high sea surface variability, making altimetric gravity field determination complicated. To reduce the long-wavelength errors in the geoid model, we are investigating GRACE data for regional gravity field modeling at long wavelengths in the vicinity of Japan. Our approach is based on exclusive use of inter-satellite range-rate data with calibrated accelerometer data and attitude data, for regional or global gravity field recovery. In the first step, we calibrate accelerometer data in terms of scales and biases by fitting dynamically calculated orbits to GPS-determined precise orbits. The calibration parameters of accelerometer data thus obtained are used in the second step to recover a global/regional gravity anomaly field. This approach is applied to GRACE data obtained for the year 2005 and the resulting global/regional gravity models are presented and discussed.

  17. Structural factoring approach for analyzing stochastic networks

    Science.gov (United States)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
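
    For a sense of what the algorithm computes, the snippet below obtains the exact shortest-path-length distribution of a small acyclic network with discrete random edge lengths by complete enumeration of all joint outcomes; this is the brute-force baseline whose cost the conditional-factoring decomposition is designed to avoid. The network and its edge-length distributions are invented for illustration.

```python
import itertools
from collections import defaultdict

# Toy acyclic network: each edge length takes discrete values with given probabilities
edges = {
    ("s", "a"): [(1, 0.5), (3, 0.5)],
    ("s", "b"): [(2, 0.7), (5, 0.3)],
    ("a", "b"): [(1, 0.8), (2, 0.2)],
    ("a", "t"): [(2, 0.6), (4, 0.4)],
    ("b", "t"): [(1, 0.5), (6, 0.5)],
}
paths = [["s", "a", "t"], ["s", "b", "t"], ["s", "a", "b", "t"]]   # all s-t paths

def path_length(path, lengths):
    return sum(lengths[(u, v)] for u, v in zip(path, path[1:]))

# Complete enumeration over all joint edge-length outcomes (what factoring avoids)
dist = defaultdict(float)
edge_list = list(edges)
for outcome in itertools.product(*[edges[e] for e in edge_list]):
    lengths = {e: val for e, (val, _) in zip(edge_list, outcome)}
    prob = 1.0
    for _, p in outcome:
        prob *= p
    shortest = min(path_length(p, lengths) for p in paths)
    dist[shortest] += prob

for length in sorted(dist):
    print(f"P(shortest path length = {length}) = {dist[length]:.4f}")
print("total probability:", round(sum(dist.values()), 10))
```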

  18. Optimal Integration of Intermittent Renewables: A System LCOE Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Carlo Lucheroni

    2018-03-01

    Full Text Available We propose a system level approach to value the impact on costs of the integration of intermittent renewable generation in a power system, based on expected breakeven cost and breakeven cost risk. To do this, we carefully reconsider the definition of Levelized Cost of Electricity (LCOE) when extended to non-dispatchable generation, by examining extra costs and gains originated by the costly management of random power injections. We are thus led to define a ‘system LCOE’ as a system dependent LCOE that properly takes into account intermittent generation. In order to include breakeven cost risk we further extend this deterministic approach to a stochastic setting, by introducing a ‘stochastic system LCOE’. This extension allows us to discuss the optimal integration of intermittent renewables from a broad, system level point of view. This paper thus aims to provide power producers and policy makers with a new methodological scheme, still based on the LCOE but which updates this valuation technique to current energy system configurations characterized by a large share of non-dispatchable production. Quantifying and optimizing the impact of intermittent renewables integration on power system costs, risk and CO2 emissions, the proposed methodology can be used as a powerful tool of analysis for assessing environmental and energy policies.
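
    In spirit, a stochastic system LCOE can be sketched as an ordinary discounted-cost LCOE in which the annual energy yield and the extra system costs of intermittency are random, so that the metric acquires both an expected value and a risk measure. The plant data, cost figures and distributions below are hypothetical, and the treatment of "system" costs is highly simplified relative to the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical wind plant and system-integration figures (illustrative only)
capex_per_mw, years, r = 1.4e6, 25, 0.06      # EUR/MW, lifetime, discount rate
fixed_om_per_mw = 35e3                        # EUR/MW/yr
capacity_mw = 100.0
disc = 1.0 / (1.0 + r) ** np.arange(1, years + 1)

n_sim = 50_000
lcoe = np.empty(n_sim)
for i in range(n_sim):
    cf = rng.normal(0.30, 0.04, years).clip(0.10, 0.55)      # annual capacity factors
    energy = capacity_mw * cf * 8760.0                       # MWh produced per year
    # extra "system" costs of intermittency (balancing/profile), EUR per MWh, uncertain
    integration = rng.normal(6.0, 2.0, years).clip(0.0, None) * energy
    costs = fixed_om_per_mw * capacity_mw + integration
    lcoe[i] = (capex_per_mw * capacity_mw + np.sum(costs * disc)) / np.sum(energy * disc)

print(f"expected system LCOE: {lcoe.mean():6.2f} EUR/MWh")
print(f"95th percentile     : {np.percentile(lcoe, 95):6.2f} EUR/MWh (breakeven-cost risk)")
```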

  19. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    Science.gov (United States)

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P(tot) concentrations in both the study creek and the WWTP effluent follow log-normal probability distributions. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after the assumed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the calculated WWTP emission limits would be lower than the values achievable with the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
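
    The mixing-equation Monte Carlo scheme can be sketched directly: sample log-normal upstream discharge and concentration and effluent discharge, mix, and search for the largest constant effluent concentration whose simulated 90th percentile downstream still meets the EQS. The upstream concentration statistics reuse the values quoted in the abstract (mean 0.13 mg/l, c_v 0.42), while the discharge figures are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)

def lognormal(mean, cv, size):
    """Sample a log-normal variable from its arithmetic mean and coefficient of variation."""
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(mean) - sigma2 / 2.0
    return rng.lognormal(mu, np.sqrt(sigma2), size)

n = 200_000
Q_creek = lognormal(0.20, 0.60, n)     # creek discharge upstream of the WWTP [m3/s] (hypothetical)
C_creek = lognormal(0.13, 0.42, n)     # upstream P_tot concentration [mg/l] (values from abstract)
Q_wwtp  = lognormal(0.05, 0.25, n)     # WWTP effluent discharge [m3/s] (hypothetical)

def percentile_downstream(C_effluent):
    """90th-percentile downstream P_tot for a given constant effluent concentration."""
    C_down = (Q_creek * C_creek + Q_wwtp * C_effluent) / (Q_creek + Q_wwtp)
    return np.percentile(C_down, 90)

# Find the largest effluent limit whose 90th percentile still meets the EQS (0.2 mg/l)
eqs, limits = 0.2, np.linspace(0.05, 3.0, 60)
ok = [c for c in limits if percentile_downstream(c) <= eqs]
print("max effluent P_tot limit meeting the EQS:", max(ok) if ok else "none")
```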

  20. Modelling and application of stochastic processes

    CERN Document Server

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  1. Stochastic Discount Factor Approach to International Risk-Sharing:A Robustness Check of the Bilateral Setting

    NARCIS (Netherlands)

    Hadzi-Vaskov, M.; Kool, C.J.M.

    2007-01-01

    This paper presents a robustness check of the stochastic discount factor approach to international (bilateral) risk-sharing given in Brandt, Cochrane, and Santa-Clara (2006). We demonstrate two main inherent limitations of the bilateral SDF approach to international risk-sharing. First, the discount

  2. Moment problems and the causal set approach to quantum gravity

    International Nuclear Information System (INIS)

    Ash, Avner; McDonald, Patrick

    2003-01-01

    We study a collection of discrete Markov chains related to the causal set approach to modeling discrete theories of quantum gravity. The transition probabilities of these chains satisfy a general covariance principle, a causality principle, and a renormalizability condition. The corresponding dynamics are completely determined by a sequence of non-negative real coupling constants. Using techniques related to the classical moment problem, we give a complete description of any such sequence of coupling constants. We prove a representation theorem: every discrete theory of quantum gravity arising from causal set dynamics satisfying covariance, causality, and renormalizability corresponds to a unique probability distribution function on the non-negative real numbers, with the coupling constants defining the theory given by the moments of the distribution

  3. Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem

    Directory of Open Access Journals (Sweden)

    V. Charles

    2011-01-01

    Full Text Available In this paper, we propose a stochastic programming model which considers a ratio of two nonlinear functions and probabilistic constraints. In earlier work, only the expected-value model had been proposed, without accounting for variability; in the variance model, on the other hand, variability played the central role without regard for its counterpart, the expected-value model. Further, the expected-value model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two nonlinear functions; the stochastic nature of the numerator and denominator, together with consideration of both expectation and variability, leads to a nonlinear fractional program. In this paper, a transportation model with a stochastic fractional programming (SFP) approach is proposed, which strikes a balance between the previous models available in the literature.

  4. Stochastic optimal control, forward-backward stochastic differential equations and the Schroedinger equation

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Wolfgang; Koeppe, Jeanette [Institut fuer Physik, Martin Luther Universitaet, 06099 Halle (Germany); Grecksch, Wilfried [Institut fuer Mathematik, Martin Luther Universitaet, 06099 Halle (Germany)

    2016-07-01

    The standard approach to solving a non-relativistic quantum problem is through analytical or numerical solution of the Schroedinger equation. We show a way around it. This way is based on the derivation of the Schroedinger equation from conservative diffusion processes and the establishment of (several) stochastic variational principles leading to the Schroedinger equation under the assumption of a kinematics described by Nelson's diffusion processes. Mathematically, the variational principle can be considered as a stochastic optimal control problem linked to the forward-backward stochastic differential equations of Nelson's stochastic mechanics. The Hamilton-Jacobi-Bellman equation of this control problem is the Schroedinger equation. We present the mathematical background and how to turn it into a numerical scheme for analyzing a quantum system without using the Schroedinger equation and exemplify the approach for a simple 1d problem.

  5. Stochastic inflation: Quantum phase-space approach

    International Nuclear Information System (INIS)

    Habib, S.

    1992-01-01

    In this paper a quantum-mechanical phase-space picture is constructed for coarse-grained free quantum fields in an inflationary universe. The appropriate stochastic quantum Liouville equation is derived. Explicit solutions for the phase-space quantum distribution function are found for the cases of power-law and exponential expansions. The expectation values of dynamical variables with respect to these solutions are compared to the corresponding cutoff regularized field-theoretic results (we do not restrict ourselves only to ⟨Φ²⟩). Fair agreement is found provided the coarse-graining scale is kept within certain limits. By focusing on the full phase-space distribution function rather than a reduced distribution it is shown that the thermodynamic interpretation of the stochastic formalism faces several difficulties (e.g., there is no fluctuation-dissipation theorem). The coarse graining does not guarantee an automatic classical limit as quantum correlations turn out to be crucial in order to get results consistent with standard quantum field theory. Therefore, the method does not by itself constitute an explanation of the quantum to classical transition in the early Universe. In particular, we argue that the stochastic equations do not lead to decoherence

  6. Conservative diffusions: a constructive approach to Nelson's stochastic mechanics

    International Nuclear Information System (INIS)

    Carlen, E.A.

    1984-01-01

    In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. Concern here is with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: ''Do the diffusions of stochastic mechanics - which are formally given by stochastic differential equations with extremely singular coefficients - really exist?'' Given that they exist, one can ask, ''Do these diffusions have physically reasonable paths with which to study the behavior of physical systems?'' These are the questions treated in this thesis. In Chapter I, stochastic mechanics and diffusion theory are reviewed, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. Chapter II settles the first of the questions raised above. Using PDE methods, the diffusions of stochastic mechanics are constructed. The result is sufficiently general to be of independent mathematical interest. In Chapter III, potential scattering in stochastic mechanics is treated and direct probabilistic methods of studying quantum scattering problems are discussed. The results provide a solid YES in answer to the second question raised above

  7. Stochastic approach for radionuclides quantification

    Science.gov (United States)

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard, in which the activity of nuclear materials is quantified by determining a calibration coefficient, are useless for non-reproducible, complex and one-of-a-kind nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and result in a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data are density, material, screens, geometric shape, matrix composition, and matrix and source distribution. Some of them depend strongly on knowledge of the package data and on the operator's background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment and without knowledge of the internal package configuration. This method combines a global stochastic approach which uses, among others, surrogate models to simulate the gamma attenuation behaviour, a Bayesian approach which considers conditional probability densities of the problem inputs, and Markov Chain Monte Carlo (MCMC) algorithms which solve the inverse problem, together with the gamma-ray emission spectrum of the radionuclides and the outer dimensions of the objects of interest. The methodology is being tested on the quantification of actinide activity for different kinds of matrix, composition and source configuration, against standards with known actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
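
    The flavour of the method can be conveyed with a toy Bayesian inversion: a one-line surrogate for gamma attenuation, a Poisson likelihood for the measured counts, and a Metropolis-Hastings sampler over the unknown activity and matrix density. The surrogate, priors and numbers below are invented for illustration and bear no relation to the CEA surrogate models.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical forward (surrogate) model: expected net counts for a source of activity A [Bq]
# behind a matrix of density rho, seen by a detector of efficiency eps during time t.
def expected_counts(A, rho, mu_m=0.01, thickness=20.0, eps=1e-4, branching=0.7, t=600.0):
    return A * branching * eps * np.exp(-mu_m * rho * thickness) * t

measured = 1500.0                      # synthetic observed counts in a 600 s acquisition

# Metropolis-Hastings over (log-activity, matrix density) with a Poisson likelihood
def log_post(logA, rho):
    if not (0.2 < rho < 2.5):          # flat prior on density within plausible bounds
        return -np.inf
    lam = expected_counts(np.exp(logA), rho)
    return measured * np.log(lam) - lam        # Poisson log-likelihood (up to a constant)

samples, state = [], np.array([np.log(1e6), 1.0])
lp = log_post(*state)
for _ in range(50_000):
    prop = state + rng.normal(0.0, [0.05, 0.05])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:    # accept/reject step
        state, lp = prop, lp_prop
    samples.append(state.copy())

A_post = np.exp(np.array(samples[10_000:])[:, 0])   # discard burn-in
print(f"posterior activity: {A_post.mean():.3e} Bq  (+/- {A_post.std():.1e} Bq)")
```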

  8. Lifetime distribution in thermal fatigue - a stochastic geometry approach

    International Nuclear Information System (INIS)

    Kullig, E.; Michel, B.

    1996-02-01

    The present report describes the interpretation approach for crack patterns which are generated on the smooth surface of austenitic specimens under thermal fatigue loading. A framework for the fracture mechanics characterization of equibiaxially loaded branched surface cracks is developed which accounts also for crack interaction effects. Advanced methods for the statistical evaluation of crack patterns using suitable characteristic quantities are developed. An efficient simulation procedure allows to identify the impact of different variables of the stochastic crack growth model with respect to the generated crack patterns. (orig.) [de

  9. Stochastic geometry of critical curves, Schramm-Loewner evolutions and conformal field theory

    International Nuclear Information System (INIS)

    Gruzberg, Ilya A

    2006-01-01

    Conformally invariant curves that appear at critical points in two-dimensional statistical mechanics systems and their fractal geometry have received a lot of attention in recent years. On the one hand, Schramm (2000 Israel J. Math. 118 221 (Preprint math.PR/9904022)) has invented a new rigorous as well as practical calculational approach to critical curves, based on a beautiful unification of conformal maps and stochastic processes, and by now known as Schramm-Loewner evolution (SLE). On the other hand, Duplantier (2000 Phys. Rev. Lett. 84 1363; Fractal Geometry and Applications: A Jubilee of Benoît Mandelbrot: Part 2 (Proc. Symp. Pure Math. vol 72) (Providence, RI: American Mathematical Society) p 365 (Preprint math-ph/0303034)) has applied boundary quantum gravity methods to calculate exact multifractal exponents associated with critical curves. In the first part of this paper, I provide a pedagogical introduction to SLE. I present mathematical facts from the theory of conformal maps and stochastic processes related to SLE. Then I review basic properties of SLE and provide a practical derivation of various interesting quantities related to critical curves, including fractal dimensions and crossing probabilities. The second part of the paper is devoted to a way of describing critical curves using boundary conformal field theory (CFT) in the so-called Coulomb gas formalism. This description provides an alternative (to quantum gravity) way of obtaining the multifractal spectrum of critical curves using only traditional methods of CFT based on free bosonic fields

  10. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Science.gov (United States)

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  11. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    Science.gov (United States)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  12. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    KAUST Repository

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen

    2015-01-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  13. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    KAUST Repository

    Spill, Fabian

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  14. Discrete Approaches to Quantum Gravity in Four Dimensions

    Directory of Open Access Journals (Sweden)

    Loll Renate

    1998-01-01

    Full Text Available The construction of a consistent theory of quantum gravity is a problem in theoretical physics that has so far defied all attempts at resolution. One ansatz to try to obtain a non-trivial quantum theory proceeds via a discretization of space-time and the Einstein action. I review here three major areas of research: gauge-theoretic approaches, both in a path-integral and a Hamiltonian formulation; quantum Regge calculus; and the method of dynamical triangulations, confining attention to work that is strictly four-dimensional, strictly discrete, and strictly quantum in nature.

  15. Effectiveness Testing of a Piezoelectric Energy Harvester for an Automobile Wheel Using Stochastic Resonance

    Directory of Open Access Journals (Sweden)

    Yunshun Zhang

    2016-10-01

    Full Text Available The collection of clean power from ambient vibrations is considered a promising method for energy harvesting. For the case of wheel rotation, the present study investigates the effectiveness of a piezoelectric energy harvester, with the application of stochastic resonance to optimize the efficiency of energy harvesting. It is hypothesized that when the wheel rotates at variable speeds, the energy harvester is subjected to on-road noise as ambient excitations and a tangentially acting gravity force as a periodic modulation force, which can stimulate stochastic resonance. The energy harvester was miniaturized with a bistable cantilever structure, and the on-road noise was measured for the implementation of a vibrator in an experimental setting. A validation experiment revealed that the harvesting system was optimized to capture power that was approximately 12 times that captured under only on-road noise excitation and 50 times that captured under only the periodic gravity force. Moreover, the investigation of up-sweep excitations with increasing rotational frequency confirmed that stochastic resonance is effective in optimizing the performance of the energy harvester, with a certain bandwidth of vehicle speeds. An actual-vehicle experiment validates that the prototype harvester using stochastic resonance is capable of improving power generation performance for practical tire application.
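
    The stochastic-resonance mechanism described here can be sketched with a minimal Langevin model of a bistable oscillator driven by broadband noise plus a weak periodic modulation; all parameter values below are hypothetical and stand in for, rather than reproduce, the measured on-road excitation and wheel kinematics.

```python
import numpy as np

# Minimal Langevin sketch of stochastic resonance in a bistable harvester model:
#   m x'' = k1 x - k3 x^3 - c x' + A cos(w t) + white noise,
# where the weak periodic term stands in for the tangential gravity modulation on a
# rotating wheel and the noise for on-road excitation.  Values are hypothetical.
rng = np.random.default_rng(1)

m, c, k1, k3 = 1.0, 0.4, 1.0, 1.0       # mass, damping, bistable stiffness coefficients
A, w = 0.25, 0.07                       # weak periodic modulation (amplitude, angular frequency)
dt, T = 1e-3, 300.0

def count_hops(noise_intensity):
    """Euler-Maruyama integration; count transitions between the two potential wells."""
    x, v, side, hops = 1.0, 0.0, 1, 0
    for i in range(int(T / dt)):
        t = i * dt
        force = k1 * x - k3 * x**3 - c * v + A * np.cos(w * t)
        v += force / m * dt + np.sqrt(2 * noise_intensity * dt) / m * rng.standard_normal()
        x += v * dt
        if x * side < 0:                # crossed the barrier at x = 0
            hops += 1
            side = -side
    return hops

for D in (0.05, 0.3, 1.5):              # sweep the noise intensity
    print(f"noise intensity {D}: {count_hops(D)} inter-well transitions")
```

    Sweeping the noise intensity typically reveals an intermediate level at which inter-well hopping best synchronises with the weak drive, which is the resonance condition the harvester exploits.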

  16. Anamorphic quasiperiodic universes in modified and Einstein gravity with loop quantum gravity corrections

    Science.gov (United States)

    Amaral, Marcelo M.; Aschheim, Raymond; Bubuianu, Laurenţiu; Irwin, Klee; Vacaru, Sergiu I.; Woolridge, Daniel

    2017-09-01

    The goal of this work is to elaborate on new geometric methods of constructing exact and parametric quasiperiodic solutions for anamorphic cosmology models in modified gravity theories, MGTs, and general relativity, GR. There exist previously studied generic off-diagonal and diagonalizable cosmological metrics encoding gravitational and matter fields with quasicrystal like structures, QC, and holonomy corrections from loop quantum gravity, LQG. We apply the anholonomic frame deformation method, AFDM, in order to decouple the (modified) gravitational and matter field equations in general form. This allows us to find integral varieties of cosmological solutions determined by generating functions, effective sources, integration functions and constants. The coefficients of metrics and connections for such cosmological configurations depend, in general, on all spacetime coordinates and can be chosen to generate observable (quasi)-periodic/aperiodic/fractal/stochastic/(super) cluster/filament/polymer like (continuous, stochastic, fractal and/or discrete structures) in MGTs and/or GR. In this work, we study new classes of solutions for anamorphic cosmology with LQG holonomy corrections. Such solutions are characterized by nonlinear symmetries of generating functions for generic off-diagonal cosmological metrics and generalized connections, with possible nonholonomic constraints to Levi-Civita configurations and diagonalizable metrics depending only on a time like coordinate. We argue that anamorphic quasiperiodic cosmological models integrate the concept of quantum discrete spacetime, with certain gravitational QC-like vacuum and nonvacuum structures. And, that of a contracting universe that homogenizes, isotropizes and flattens without introducing initial conditions or multiverse problems.

  17. Exploring Ackermann and LQR stability control of stochastic state-space model of hexacopter equipped with robotic arm

    Science.gov (United States)

    Ibrahim, I. N.; Akkad, M. A. Al; Abramov, I. V.

    2018-05-01

    This paper discusses the control of Unmanned Aerial Vehicles (UAVs) for active interaction and manipulation of objects. The manipulator motion with an unknown payload was analysed concerning force and moment disturbances, which influence the mass distribution and the centre of gravity (CG). Therefore, a general dynamics mathematical model of a hexacopter was formulated, from which a stochastic state-space model was extracted in order to build anti-disturbance controllers. Based on the compound pendulum method, the disturbance model that simulates the robotic arm with a payload was inserted into the stochastic model. This study investigates two types of controllers in order to study the stability of a hexacopter: one based on Ackermann’s method and the other on the linear quadratic regulator (LQR) approach. The latter constitutes a challenge for UAV control performance, especially in the presence of uncertainties and disturbances.
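
    The two syntheses compared in this record can be sketched on a generic linear state-space model (a toy double integrator, not the hexacopter-with-arm dynamics of the paper): pole placement in the spirit of Ackermann's design (SciPy's place_poles uses a different numerical algorithm internally) versus LQR gains from the continuous algebraic Riccati equation. Weights and target poles below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.signal import place_poles

# Toy double-integrator model standing in for one translational axis; the paper's
# stochastic model with arm/payload disturbances is not reproduced here.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Pole placement (Ackermann-style design): choose the closed-loop poles directly.
K_acker = place_poles(A, B, [-2.0, -3.0]).gain_matrix

# LQR: weight matrices trade state error against control effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K_lqr = np.linalg.solve(R, B.T @ P)

for name, K in (("Ackermann-style", K_acker), ("LQR", K_lqr)):
    poles = np.linalg.eigvals(A - B @ K)
    print(f"{name}: gain {K.ravel()}, closed-loop poles {poles}")
```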

  18. Stochastic parameterizing manifolds and non-Markovian reduced equations stochastic manifolds for nonlinear SPDEs II

    CERN Document Server

    Chekroun, Mickaël D; Wang, Shouhong

    2015-01-01

    In this second volume, a general approach is developed to provide approximate parameterizations of the "small" scales by the "large" ones for a broad class of stochastic partial differential equations (SPDEs). This is accomplished via the concept of parameterizing manifolds (PMs), which are stochastic manifolds that improve, for a given realization of the noise, in mean square error the partial knowledge of the full SPDE solution when compared to its projection onto some resolved modes. Backward-forward systems are designed to give access to such PMs in practice. The key idea consists of representing the modes with high wave numbers as a pullback limit depending on the time-history of the modes with low wave numbers. Non-Markovian stochastic reduced systems are then derived based on such a PM approach. The reduced systems take the form of stochastic differential equations involving random coefficients that convey memory effects. The theory is illustrated on a stochastic Burgers-type equation.

  19. Relating covariant and canonical approaches to triangulated models of quantum gravity

    International Nuclear Information System (INIS)

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  20. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    Energy Technology Data Exchange (ETDEWEB)

    Hosking, John Joseph Absalom, E-mail: j.j.a.hosking@cma.uio.no [University of Oslo, Centre of Mathematics for Applications (CMA) (Norway)

    2012-12-15

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966-979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197-216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  1. A Stochastic Maximum Principle for a Stochastic Differential Game of a Mean-Field Type

    International Nuclear Information System (INIS)

    Hosking, John Joseph Absalom

    2012-01-01

    We construct a stochastic maximum principle (SMP) which provides necessary conditions for the existence of Nash equilibria in a certain form of N-agent stochastic differential game (SDG) of a mean-field type. The information structure considered for the SDG is of a possible asymmetric and partial type. To prove our SMP we take an approach based on spike-variations and adjoint representation techniques, analogous to that of S. Peng (SIAM J. Control Optim. 28(4):966–979, 1990) in the optimal stochastic control context. In our proof we apply adjoint representation procedures at three points. The first-order adjoint processes are defined as solutions to certain mean-field backward stochastic differential equations, and second-order adjoint processes of a first type are defined as solutions to certain backward stochastic differential equations. Second-order adjoint processes of a second type are defined as solutions of certain backward stochastic equations of a type that we introduce in this paper, and which we term conditional mean-field backward stochastic differential equations. From the resulting representations, we show that the terms relating to these second-order adjoint processes of the second type are of an order such that they do not appear in our final SMP equations. A comparable situation exists in an article by R. Buckdahn, B. Djehiche, and J. Li (Appl. Math. Optim. 64(2):197–216, 2011) that constructs a SMP for a mean-field type optimal stochastic control problem; however, the approach we take of using these second-order adjoint processes of a second type to deal with the type of terms that we refer to as the second form of quadratic-type terms represents an alternative to a development, to our setting, of the approach used in their article for their analogous type of term.

  2. Intercepting virtual balls approaching under different gravity conditions: evidence for spatial prediction.

    Science.gov (United States)

    Russo, Marta; Cesqui, Benedetta; La Scaleia, Barbara; Ceccarelli, Francesca; Maselli, Antonella; Moscatelli, Alessandro; Zago, Myrka; Lacquaniti, Francesco; d'Avella, Andrea

    2017-10-01

    To accurately time motor responses when intercepting falling balls we rely on an internal model of gravity. However, whether and how such a model is also used to estimate the spatial location of interception is still an open question. Here we addressed this issue by asking 25 participants to intercept balls projected from a fixed location 6 m in front of them and approaching along trajectories with different arrival locations, flight durations, and gravity accelerations (0 g and 1 g). The trajectories were displayed in an immersive virtual reality system with a wide field of view. Participants intercepted approaching balls with a racket, and they were free to choose the time and place of interception. We found that participants often achieved a better performance with 1 g than 0 g balls. Moreover, the interception points were distributed along the direction of a 1 g path for both 1 g and 0 g balls. In the latter case, interceptions tended to cluster on the upper half of the racket, indicating that participants aimed at a lower position than the actual 0 g path. These results suggest that an internal model of gravity was probably used in predicting the interception locations. However, we found that the difference in performance between 1 g and 0 g balls was modulated by flight duration, the difference being larger for faster balls. In addition, the number of peaks in the hand speed profiles increased with flight duration, suggesting that visual information was used to adjust the motor response, correcting the prediction to some extent. NEW & NOTEWORTHY Here we show that an internal model of gravity plays a key role in predicting where to intercept a fast-moving target. Participants also assumed an accelerated motion when intercepting balls approaching in a virtual environment at constant velocity. We also show that the role of visual information in guiding interceptive movement increases when more time is available.

  3. Oriented stochastic data envelopment models: ranking comparison to stochastic frontier approach

    Czech Academy of Sciences Publication Activity Database

    Brázdik, František

    -, č. 271 (2005), s. 1-46 ISSN 1211-3298 Institutional research plan: CEZ:AV0Z70850503 Keywords: stochastic data envelopment analysis * linear programming * rice farm Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp271.pdf

  4. Variance decomposition in stochastic simulators.

    Science.gov (United States)

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
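
    The reformulation the abstract describes, with each reaction channel driven by its own independent standardized (unit-rate) Poisson process, can be sketched for a birth-death process as below (essentially a random-time-change / next-reaction construction). The Sobol-Hoeffding sensitivity estimation itself is not reproduced; it would follow by conditioning on one channel's stream while resampling the other. Rates are hypothetical.

```python
import numpy as np

# Birth-death process: 0 --(rate kb)--> X,  X --(rate kd*X)--> 0, written in the
# random-time-change form X(t) = X(0) + N1(kb*t) - N2(kd * int_0^t X ds),
# with N1, N2 independent unit-rate Poisson processes (one stream per channel).
rng_birth = np.random.default_rng(10)   # independent stream for the birth channel
rng_death = np.random.default_rng(20)   # independent stream for the death channel

kb, kd, x, t, T = 5.0, 0.1, 0, 0.0, 50.0
tau_b = rng_birth.exponential(1.0)      # next firing points of the unit Poisson processes
tau_d = rng_death.exponential(1.0)
int_b = int_d = 0.0                     # integrated propensities ("internal times")

while t < T:
    a_b, a_d = kb, kd * x               # channel propensities
    dt_b = (tau_b - int_b) / a_b if a_b > 0 else np.inf   # real time to each channel's next firing
    dt_d = (tau_d - int_d) / a_d if a_d > 0 else np.inf
    dt = min(dt_b, dt_d)
    if t + dt > T:
        break
    t += dt
    int_b += a_b * dt
    int_d += a_d * dt
    if dt_b < dt_d:                     # birth channel fires
        x += 1
        tau_b += rng_birth.exponential(1.0)
    else:                               # death channel fires
        x -= 1
        tau_d += rng_death.exponential(1.0)

print("X(T) =", x, "(stationary mean kb/kd =", kb / kd, ")")
```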

  5. Variance decomposition in stochastic simulators

    Science.gov (United States)

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  6. Variance decomposition in stochastic simulators

    Energy Technology Data Exchange (ETDEWEB)

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  7. Variance decomposition in stochastic simulators

    KAUST Repository

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  8. Optimising stochastic trajectories in exact quantum jump approaches of interacting systems

    International Nuclear Information System (INIS)

    Lacroix, D.

    2004-11-01

    The standard methods used to substitute the quantum dynamics of two interacting systems by a quantum jump approach based on the Stochastic Schroedinger Equation (SSE) are described. It turns out that for a given situation, there exists an infinite number of SSE reformulation. This fact is used to propose general strategies to optimise the stochastic paths in order to reduce the statistical fluctuations. In this procedure, called the 'adaptative noise method', a specific SSE is obtained for which the noise depends explicitly on both the initial state and on the properties of the interaction Hamiltonian. It is also shown that this method can be further improved by the introduction of a mean-field dynamics. The different optimisation procedures are illustrated quantitatively in the case of interacting spins. A significant reduction of the statistical fluctuations is obtained. Consequently, a much smaller number of trajectories is needed to accurately reproduce the exact dynamics as compared to the standard SSE method. (author)

  9. Probability, Statistics, and Stochastic Processes

    CERN Document Server

    Olofsson, Peter

    2011-01-01

    A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d

  10. Stochastic Discount Factor Approach to International Risk-Sharing: Evidence from Fixed Exchange Rate Episodes

    NARCIS (Netherlands)

    Hadzi-Vaskov, M.; Kool, C.J.M.

    2007-01-01

    This paper presents evidence of the stochastic discount factor approach to international risk-sharing applied to fixed exchange rate regimes. We calculate risk-sharing indices for two episodes of fixed or very rigid exchange rates: the Eurozone before and after the introduction of the Euro, and

  11. A chance-constrained stochastic approach to intermodal container routing problems.

    Science.gov (United States)

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.

  12. Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults

    Science.gov (United States)

    Abdelrahman, E. M.; Essa, K. S.

    2015-02-01

    We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
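
    As a much simplified illustration of this kind of inversion (not the authors' sequential depth-dip-amplitude scheme), the sketch below fits the amplitude coefficient and depth of the vertical-fault limit, a thin semi-infinite slab with anomaly g(x) = A(pi/2 + arctan(x/z)), to synthetic noisy data with a standard nonlinear least-squares routine.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified forward model: gravity anomaly of a thin semi-infinite slab
# (vertical-fault limit), g(x) = A * (pi/2 + arctan(x / z)), with amplitude
# coefficient A and depth z.  The paper treats the general dipping case with a
# sequential estimation from moving-average residuals, not reproduced here.
def fault_anomaly(x, A, z):
    return A * (np.pi / 2 + np.arctan(x / z))

rng = np.random.default_rng(3)
x = np.linspace(-50.0, 50.0, 101)                     # profile coordinate (hypothetical units)
g_obs = fault_anomaly(x, A=1.5, z=8.0) + rng.normal(0.0, 0.05, x.size)   # synthetic noisy data

(A_hat, z_hat), _ = curve_fit(fault_anomaly, x, g_obs, p0=(1.0, 5.0))
print(f"recovered amplitude A = {A_hat:.2f}, depth z = {z_hat:.2f}")
```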

  13. A stochastic approach for quantifying immigrant integration: the Spanish test case

    Science.gov (United States)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
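
    The diffusive-versus-ballistic distinction drawn in this record amounts to fitting the growth exponent of a quantifier's spread against immigrant density (the role of "time"). A minimal sketch on synthetic walkers, not the Spanish data set, is given below: an exponent near 0.5 indicates diffusive behaviour, near 1 ballistic behaviour.

```python
import numpy as np

# Scaling-exponent sketch: spread of a quantifier across municipalities vs density.
rng = np.random.default_rng(4)

density = np.linspace(0.01, 1.0, 200)            # plays the role of time
n_walkers, dt = 2000, density[1] - density[0]

diffusive = np.cumsum(rng.normal(0, np.sqrt(dt), (n_walkers, density.size)), axis=1)
ballistic = rng.normal(1.0, 0.2, (n_walkers, 1)) * density   # constant random drift per walker

for name, paths in (("diffusive", diffusive), ("ballistic", ballistic)):
    spread = paths.std(axis=0)
    slope = np.polyfit(np.log(density), np.log(spread), 1)[0]
    print(f"{name}: fitted growth exponent ~ {slope:.2f}")
```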

  14. A stochastic approach for quantifying immigrant integration: the Spanish test case

    International Nuclear Information System (INIS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-01-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999–2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build. (paper)

  15. A stochastic security approach to energy and spinning reserve scheduling considering demand response program

    International Nuclear Information System (INIS)

    Partovi, Farzad; Nikzad, Mehdi; Mozafari, Babak; Ranjbar, Ali Mohamad

    2011-01-01

    In this paper a new algorithm for allocating energy and determining the optimum amount of network active power reserve capacity and the share of generating units and demand side contribution in providing reserve capacity requirements for day-ahead market is presented. In the proposed method, the optimum amount of reserve requirement is determined based on network security set by operator. In this regard, Expected Load Not Supplied (ELNS) is used to evaluate system security in each hour. The proposed method has been implemented over the IEEE 24-bus test system and the results are compared with a deterministic security approach, which considers certain and fixed amount of reserve capacity in each hour. This comparison is done from economic and technical points of view. The promising results show the effectiveness of the proposed model which is formulated as mixed integer linear programming (MILP) and solved by GAMS software. -- Highlights: → Determination of optimal spinning reserve capacity requirement in order to satisfy desired security level set by system operator based on stochastic approach. → Scheduling energy and spinning reserve markets simultaneously. → Comparing the stochastic approach with deterministic approach to determine the advantages and disadvantages of each. → Examine the effect of demand response participation in reserve market to provide spinning reserve.

  16. Path probability of stochastic motion: A functional approach

    Science.gov (United States)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
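
    For orientation, the kind of object this record works with can be illustrated by the standard Onsager-Machlup form of the path-probability functional for an overdamped process with additive noise; the paper derives its own functional expression, which may differ in conventions and scope.

```latex
% Illustrative only: Onsager--Machlup path-probability functional for
% dx = f(x)\,dt + \sqrt{2D}\,dW (mid-point discretisation).
\mathcal{P}[x(\cdot)] \;\propto\;
\exp\!\left\{-\int_{0}^{T}\left[\frac{\bigl(\dot{x}(t)-f(x(t))\bigr)^{2}}{4D}
  + \frac{1}{2}\,f'(x(t))\right]\mathrm{d}t\right\}
```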

  17. Estimation of an Optimal Stimulus Amplitude for Using Vestibular Stochastic Stimulation to Improve Balance Function

    Science.gov (United States)

    Goel, R.; Kofman, I.; DeDios, Y. E.; Jeevarajan, J.; Stepanyan, V.; Nair, M.; Congdon, S.; Fregia, M.; Peters, B.; Cohen, H.

    2015-01-01

    Sensorimotor changes such as postural and gait instabilities can affect the functional performance of astronauts when they transition across different gravity environments. We are developing a method, based on stochastic resonance (SR), to enhance information transfer by applying non-zero levels of external noise on the vestibular system (vestibular stochastic resonance, VSR). The goal of this project was to determine optimal levels of stimulation for SR applications by using a defined vestibular threshold of motion detection.

  18. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...

  19. Restructuring of workflows to minimise errors via stochastic model checking: An automated evolutionary approach

    International Nuclear Information System (INIS)

    Herbert, L.T.; Hansen, Z.N.L.

    2016-01-01

    This paper presents a framework for the automated restructuring of stochastic workflows to reduce the impact of faults. The framework allows for the modelling of workflows by means of a formalised subset of the BPMN workflow language. We extend this modelling formalism to describe faults and incorporate an intention preserving stochastic semantics able to model both probabilistic- and non-deterministic behaviour. Stochastic model checking techniques are employed to generate the state-space of a given workflow. Possible improvements obtained by restructuring are measured by employing the framework's capacity for tracking real-valued quantities associated with states and transitions of the workflow. The space of possible restructurings of a workflow is explored by means of an evolutionary algorithm, where the goals for improvement are defined in terms of optimising quantities, typically employed to model resources, associated with a workflow. The approach is fully automated and only the modelling of the production workflows, potential faults and the expression of the goals require manual input. We present the design of a software tool implementing this framework and explore the practical utility of this approach through an industrial case study in which the risk of production failures and their impact are reduced by restructuring the workflow. - Highlights: • We present a framework which allows for the automated restructuring of workflows. • This framework seeks to minimise the impact of errors on the workflow. • We illustrate a scalable software implementation of this framework. • We explore the practical utility of this approach through an industry case. • The impact of errors can be substantially reduced by restructuring the workflow.

  20. Stochastic congestion management in power markets using efficient scenario approaches

    International Nuclear Information System (INIS)

    Esmaili, Masoud; Amjady, Nima; Shayanfar, Heidar Ali

    2010-01-01

    Congestion management in electricity markets is traditionally performed using deterministic values of system parameters assuming a fixed network configuration. In this paper, a stochastic programming framework is proposed for congestion management considering the power system uncertainties comprising outage of generating units and transmission branches. The Forced Outage Rate of equipment is employed in the stochastic programming. Using the Monte Carlo simulation, possible scenarios of power system operating states are generated and a probability is assigned to each scenario. The performance of the ordinary as well as Lattice rank-1 and rank-2 Monte Carlo simulations is evaluated in the proposed congestion management framework. As a tradeoff between computation time and accuracy, scenario reduction based on the standard deviation of accepted scenarios is adopted. The stochastic congestion management solution is obtained by aggregating individual solutions of accepted scenarios. Congestion management using the proposed stochastic framework provides a more realistic solution compared with traditional deterministic solutions. Results of testing the proposed stochastic congestion management on the 24-bus reliability test system indicate the efficiency of the proposed framework.
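
    The scenario machinery described here can be sketched as follows: sample equipment availabilities from forced outage rates by Monte Carlo, pool identical outage patterns into weighted scenarios, and reduce the set before solving the per-scenario market problem. The sketch below uses a simple probability-threshold reduction as a stand-in for the paper's standard-deviation-based criterion, and the market-clearing optimisation itself is not reproduced; outage rates are hypothetical.

```python
import numpy as np

# Monte Carlo scenario generation for stochastic congestion management.
rng = np.random.default_rng(5)

FOR = np.array([0.02, 0.05, 0.08, 0.03, 0.10])   # forced outage rates of units/branches (hypothetical)
n_samples = 5000

states = rng.random((n_samples, FOR.size)) > FOR              # True = equipment in service
patterns, counts = np.unique(states, axis=0, return_counts=True)
probs = counts / n_samples                                    # empirical scenario probabilities

keep = probs > 0.01                                           # drop rare scenarios...
scenarios, probs = patterns[keep], probs[keep] / probs[keep].sum()   # ...and renormalise

print(f"{len(scenarios)} scenarios retained out of {len(patterns)} sampled patterns")
for s, p in zip(scenarios, probs):
    print("in service:", s.astype(int), f"probability {p:.3f}")
```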

  1. The group manifold approach to unified gravity

    International Nuclear Information System (INIS)

    Regge, T.

    1984-01-01

    These lectures start with a synopsis of historical results in the construction of unified theories of gravity. The author keeps some mathematical rigour throughout the lectures. He gives a provisional description of supermanifolds and a set of formal rules intended to manipulate superforms or supermanifolds. Super Lie groups are discussed as well as the dimensional reduction of gravity theories, the Kaluza-Klein theory. A formal introduction of supersymmetry is given. (Auth.)

  2. A random walk approach to stochastic neutron transport

    International Nuclear Information System (INIS)

    Mulatier, Clelia de

    2015-01-01

    One of the key goals of nuclear reactor physics is to determine the distribution of the neutron population within a reactor core. This population indeed fluctuates due to the stochastic nature of the interactions of the neutrons with the nuclei of the surrounding medium: scattering, emission of neutrons from fission events and capture by nuclear absorption. Due to these physical mechanisms, the stochastic process performed by neutrons is a branching random walk. For most applications, the neutron population considered is very large, and all physical observables related to its behaviour, such as the heat production due to fissions, are well characterised by their average values. Generally, these mean quantities are governed by the classical neutron transport equation, called linear Boltzmann equation. During my PhD, using tools from branching random walks and anomalous diffusion, I have tackled two aspects of neutron transport that cannot be approached by the linear Boltzmann equation. First, thanks to the Feynman-Kac backward formalism, I have characterised the phenomenon of 'neutron clustering' that has been highlighted for low-density configuration of neutrons and results from strong fluctuations in space and time of the neutron population. Then, I focused on several properties of anomalous (non-exponential) transport, that can model neutron transport in strongly heterogeneous and disordered media, such as pebble-bed reactors. One of the novel aspects of this work is that problems are treated in the presence of boundaries. Indeed, even though real systems are finite (confined geometries), most previously existing results were obtained for infinite systems. (author)
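
    The branching-random-walk picture underlying this record can be sketched with a toy critical population: each particle diffuses and, at collision events, is either captured or replaced by several offspring. Rates, geometry (a 1-D periodic box) and the absence of anomalous transport are all simplifications of the thesis setting.

```python
import numpy as np

# Branching Brownian motion sketch of a neutron population: each particle diffuses;
# at rate `collision_rate` it collides and is either captured or replaced by `nu`
# fission neutrons.  With p_capture = 0.5 and nu = 2 the process is critical, which
# is where strong "clustering" fluctuations show up.  Parameters are hypothetical.
rng = np.random.default_rng(6)

D, collision_rate, p_capture, nu = 1.0, 1.0, 0.5, 2
box, dt, T = 10.0, 0.01, 10.0

positions = rng.uniform(0, box, 200)          # initial neutron positions

t = 0.0
while t < T and positions.size > 0:
    t += dt
    positions = (positions + rng.normal(0, np.sqrt(2 * D * dt), positions.size)) % box
    collide = rng.random(positions.size) < collision_rate * dt
    survivors = positions[~collide]
    parents = positions[collide]
    captured = rng.random(parents.size) < p_capture
    offspring = np.repeat(parents[~captured], nu)   # fission: nu neutrons at the parent position
    positions = np.concatenate([survivors, offspring])

print(f"population at T = {T}: {positions.size}")
if positions.size:
    print("spatial spread (std):", round(float(positions.std()), 2))
```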

  3. Anomalies and gravity

    International Nuclear Information System (INIS)

    Mielke, Eckehard W.

    2006-01-01

    Anomalies in Yang-Mills type gauge theories of gravity are reviewed. Particular attention is paid to the relation between the Dirac spin, the axial current j5 and the non-covariant gauge spin C. Using diagrammatic techniques, we show that only generalizations of the U(1) Pontrjagin four-form F and F = dC arise in the chiral anomaly, even when coupled to gravity. Implications for Ashtekar's canonical approach to quantum gravity are discussed.

  4. A concise course on stochastic partial differential equations

    CERN Document Server

    Prévôt, Claudia

    2007-01-01

    These lectures concentrate on (nonlinear) stochastic partial differential equations (SPDE) of evolutionary type. All kinds of dynamics with stochastic influence in nature or man-made complex systems can be modelled by such equations. To keep the technicalities minimal we confine ourselves to the case where the noise term is given by a stochastic integral w.r.t. a cylindrical Wiener process. But all results can be easily generalized to SPDE with more general noises such as, for instance, stochastic integral w.r.t. a continuous local martingale. There are basically three approaches to analyze SPDE: the "martingale measure approach", the "mild solution approach" and the "variational approach". The purpose of these notes is to give a concise and as self-contained as possible an introduction to the "variational approach". A large part of necessary background material, such as definitions and results from the theory of Hilbert spaces, are included in appendices.

  5. A DG approach to the numerical solution of the Stein-Stein stochastic volatility option pricing model

    Science.gov (United States)

    Hozman, J.; Tichý, T.

    2017-12-01

    Stochastic volatility models make it possible to capture the real world features of the options better than the classical Black-Scholes treatment. Here we focus on pricing of European-style options under the Stein-Stein stochastic volatility model when the option value depends on the time, on the price of the underlying asset and on the volatility as a function of a mean reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to the non-stationary second-order degenerate partial differential equation of two spatial variables completed by the system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.

  6. A Q-Learning Approach to Flocking With UAVs in a Stochastic Environment.

    Science.gov (United States)

    Hung, Shao-Ming; Givigi, Sidney N

    2017-01-01

    In the past two decades, unmanned aerial vehicles (UAVs) have demonstrated their efficacy in supporting both military and civilian applications, where tasks can be dull, dirty, dangerous, or simply too costly with conventional methods. Many of the applications contain tasks that can be executed in parallel, hence the natural progression is to deploy multiple UAVs working together as a force multiplier. However, to do so requires autonomous coordination among the UAVs, similar to swarming behaviors seen in animals and insects. This paper looks at flocking with small fixed-wing UAVs in the context of a model-free reinforcement learning problem. In particular, Peng's Q(λ) with a variable learning rate is employed by the followers to learn a control policy that facilitates flocking in a leader-follower topology. The problem is structured as a Markov decision process, where the agents are modeled as small fixed-wing UAVs that experience stochasticity due to disturbances such as winds and control noises, as well as weight and balance issues. Learned policies are compared to ones solved using stochastic optimal control (i.e., dynamic programming) by evaluating the average cost incurred during flight according to a cost function. Simulation results demonstrate the feasibility of the proposed learning approach at enabling agents to learn how to flock in a leader-follower topology, while operating in a nonstationary stochastic environment.
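
    A minimal tabular sketch of learning with eligibility traces is given below, using a Watkins-style Q(lambda) rather than Peng's exact variant from the paper, on a toy one-dimensional "follow the leader" task; the fixed-wing UAV dynamics, wind disturbances and full leader-follower topology are not modelled, and all parameters are hypothetical.

```python
import numpy as np

# Tabular Q(lambda) (Watkins-style traces) on a toy leader-follower task:
# states are relative offsets in [-10, 10], actions are speed adjustments {-1, 0, +1},
# and the leader (or wind) moves stochastically each step.
rng = np.random.default_rng(7)

n_states, actions = 21, (-1, 0, 1)
alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1
Q = np.zeros((n_states, len(actions)))

def step(state, action_idx):
    leader_move = rng.choice((-1, 0, 1))                      # stochastic leader / disturbance
    offset = state - 10 + actions[action_idx] - leader_move
    new_state = int(np.clip(offset, -10, 10)) + 10
    reward = -abs(new_state - 10)                             # penalise deviation from target offset
    return new_state, reward

for episode in range(500):
    s = 10 + int(rng.integers(-5, 6))
    traces = np.zeros_like(Q)
    for _ in range(100):
        a_greedy = int(Q[s].argmax())
        a = int(rng.integers(len(actions))) if rng.random() < eps else a_greedy
        s2, r = step(s, a)
        delta = r + gamma * Q[s2].max() - Q[s, a]
        traces[s, a] += 1.0                                   # accumulating eligibility trace
        Q += alpha * delta * traces
        traces *= gamma * lam if a == a_greedy else 0.0       # Watkins: cut traces after exploration
        s = s2

print("greedy speed adjustment per relative offset:", Q.argmax(axis=1) - 1)
```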

  7. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
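
    The "usual predictive stochastic simulation approach" this record contrasts itself with is the Gillespie algorithm; a minimal predictive sketch for a two-state Markov model (e.g. a site switching between two nucleotide states) is shown below. The retrodictive counterpart, which infers the initial state given the final one, is not reproduced; the rates are hypothetical.

```python
import numpy as np

# Minimal predictive Gillespie simulation of a two-state Markov model
# A <--> B with switching rates k_ab and k_ba (hypothetical values).
rng = np.random.default_rng(8)

k_ab, k_ba, T = 0.3, 0.1, 100.0

def simulate(initial_state):
    state, t = initial_state, 0.0
    while True:
        rate = k_ab if state == "A" else k_ba
        t += rng.exponential(1.0 / rate)       # waiting time to the next switching event
        if t > T:
            return state                        # state occupied at the final time T
        state = "B" if state == "A" else "A"

# Forward (predictive) use: estimate P(final state | initial state) by sampling.
finals = [simulate("A") for _ in range(10000)]
print("P(B at T | A at 0) ~", finals.count("B") / len(finals))
```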

  8. International Diversification Versus Domestic Diversification: Mean-Variance Portfolio Optimization and Stochastic Dominance Approaches

    Directory of Open Access Journals (Sweden)

    Fathi Abid

    2014-05-01

    Full Text Available This paper applies the mean-variance portfolio optimization (PO) approach and the stochastic dominance (SD) test to examine preferences for international diversification versus domestic diversification from American investors’ viewpoints. Our PO results imply that the domestic diversification strategy dominates the international diversification strategy at a lower risk level and the reverse is true at a higher risk level. Our SD analysis shows that there is no arbitrage opportunity between international and domestic stock markets; domestically diversified portfolios with smaller risk dominate internationally diversified portfolios with larger risk and vice versa; and at the same risk level, there is no difference between the domestically and internationally diversified portfolios. Nonetheless, we cannot find any domestically diversified portfolios that stochastically dominate all internationally diversified portfolios, but we find some internationally diversified portfolios with small risk that dominate all the domestically diversified portfolios.
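
    A naive pointwise check of first- and second-order stochastic dominance between two return samples via their empirical CDFs is sketched below; this is a much simpler device than the formal SD test statistics used in the paper, and the return series are synthetic.

```python
import numpy as np

# Empirical FSD/SSD check between two hypothetical portfolio return samples.
rng = np.random.default_rng(9)

domestic = rng.normal(0.006, 0.03, 2000)         # hypothetical monthly returns
international = rng.normal(0.005, 0.04, 2000)

grid = np.linspace(min(domestic.min(), international.min()),
                   max(domestic.max(), international.max()), 400)

def ecdf(sample, x):
    return np.searchsorted(np.sort(sample), x, side="right") / sample.size

F_d, F_i = ecdf(domestic, grid), ecdf(international, grid)

fsd = np.all(F_d <= F_i)                          # first order: F_d never above F_i
ssd = np.all(np.cumsum(F_d) <= np.cumsum(F_i))    # second order: integrated CDFs ordered
print("domestic FSD international:", fsd)
print("domestic SSD international:", ssd)
```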

  9. Topics in string theory and quantum gravity

    CERN Document Server

    Alvarez-Gaume, Luis

    1992-01-01

    These are the lecture notes for the Les Houches Summer School on Quantum Gravity held in July 1992. The notes present some general critical assessment of other (non-string) approaches to quantum gravity, and a selected set of topics concerning what we have learned so far about the subject from string theory. Since these lectures are long (133 A4 pages), we include in this abstract the table of contents, which should help the user of the bulletin board in deciding whether to latex and print the full file. 1-FIELD THEORETICAL APPROACH TO QUANTUM GRAVITY: Linearized gravity; Supergravity; Kaluza-Klein theories; Quantum field theory and classical gravity; Euclidean approach to Quantum Gravity; Canonical quantization of gravity; Gravitational Instantons. 2-CONSISTENCY CONDITIONS: ANOMALIES: Generalities about anomalies; Spinors in 2n dimensions; When can we expect to find anomalies?; The Atiyah-Singer Index Theorem and the computation of anomalies; Examples: Green-Schwarz cancellation mechanism and Witten's SU(2) ...

  10. Study of stochastic approaches of the n-bodies problem: application to the nuclear fragmentation

    International Nuclear Information System (INIS)

    Guarnera, A.

    1996-01-01

    In the last decade nuclear physics research has found, with the observation of phenomena such as multifragmentation or vaporization, the possibility to get a deeper insight into the nuclear matter phase diagram. For example, a spinodal decomposition scenario has been proposed to explain the multifragmentation: because of the initial compression, the system may enter a region, the spinodal zone, in which the nuclear matter is no longer stable, and so any fluctuation leads to the formation of fragments. This thesis deals with spinodal decomposition within the theoretical framework of stochastic mean-field approaches, in which the one-body density function may experience a stochastic evolution. We have shown that these approaches are able to describe phenomena, such as first order phase transitions, in which fluctuations and many-body correlations play an important role. In the framework of stochastic mean-field approaches we have shown that the fragment production by spinodal decomposition is characterized by typical time scales of the order of 100 fm/c and by typical size scales around the Neon mass. We have also shown that these features are robust and that they are not affected significantly by a possible expansion of the system or by the finite size of nuclei. We have proposed as a signature of the spinodal decomposition some typical partitions of the largest fragments. The study and the comparison with experimental data, performed for the reactions Xe + Cu at 45 MeV/A and Xe + Sn at 50 MeV/A, have shown a remarkable agreement. Moreover we would like to stress that the theory does not contain any adjustable parameter. These results seem to give a strong indication of the possibility to observe a spinodal decomposition of nuclei. (author)

  11. Renormalization in the stochastic quantization of field theories

    International Nuclear Information System (INIS)

    Brunelli, J.C.

    1991-01-01

    In the stochastic quantization scheme of Parisi and Wu, the renormalization of the stochastic theory of some models in field theory is studied. Following the path-integral approach to stochastic processes, the 1/N expansion of the nonlinear sigma model is performed and, using a Ward identity obtained from a BRS symmetry of the effective action of this formulation, the renormalizability of the model is shown. Using the Langevin approach to stochastic processes, the renormalizability of the massive Thirring model is studied, showing perturbatively the vanishing of the renormalization group's beta functions at finite fictitious time. (author)
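
    The Parisi-Wu Langevin idea referred to here can be sketched on a zero-dimensional toy "field theory" (a single anharmonic mode): the field evolves in fictitious time with drift -dS/dphi plus white noise, and long-fictitious-time averages reproduce expectation values under exp(-S). This toy of course says nothing about the renormalization analysis of the record; parameters are hypothetical.

```python
import numpy as np

# Parisi-Wu Langevin sketch for S(phi) = m2*phi^2/2 + g*phi^4/4 (0-dimensional toy):
#   dphi = -S'(phi) dtau + sqrt(2) dW,  estimate <phi^2> from the stationary regime.
rng = np.random.default_rng(11)

m2, g = 1.0, 0.5
dtau, n_steps, burn_in = 1e-3, 500_000, 50_000

phi, acc, count = 0.0, 0.0, 0
for step in range(n_steps):
    drift = -(m2 * phi + g * phi**3)
    phi += drift * dtau + np.sqrt(2 * dtau) * rng.standard_normal()
    if step >= burn_in:
        acc += phi * phi
        count += 1

# Cross-check against direct quadrature of <phi^2> under exp(-S)/Z on a fine grid.
x = np.linspace(-6, 6, 4001)
w = np.exp(-(m2 * x**2 / 2 + g * x**4 / 4))
exact = (x**2 * w).sum() / w.sum()

print(f"Langevin estimate <phi^2> = {acc / count:.3f}, quadrature = {exact:.3f}")
```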

  12. A decoupled approach to filter design for stochastic systems

    Science.gov (United States)

    Barbata, A.; Zasadzinski, M.; Ali, H. Souley; Messaoud, H.

    2016-08-01

    This paper presents a new theorem to guarantee the almost sure exponential stability for a class of stochastic triangular systems by studying only the stability of each diagonal subsystem. This result makes it possible to solve the filtering problem of stochastic systems with multiplicative noises by using the almost sure exponential stability concept. Two kinds of observers are treated: the full-order and reduced-order cases.

  13. On the foundations of the random lattice approach to quantum gravity

    International Nuclear Information System (INIS)

    Levin, A.; Morozov, A.

    1990-01-01

    We discuss the problem which can arise in the identification of conventional 2D quantum gravity, involving the sum over Riemann surfaces, with the results of the lattice approach, based on the enumeration of the Feynman graphs of matrix models. A potential difficulty is related to the (hypothetical) fact that the arithmetic curves are badly distributed in the module spaces for high enough genera (at least for g≥17). (orig.)

  14. Modeling flow in fractured medium. Uncertainty analysis with stochastic continuum approach

    International Nuclear Information System (INIS)

    Niemi, A.

    1994-01-01

    For modeling groundwater flow in formation-scale fractured media, no general method exists for scaling the highly heterogeneous hydraulic conductivity data to model parameters. The deterministic approach is limited in representing the heterogeneity of a medium and the application of fracture network models has both conceptual and practical limitations as far as site-scale studies are concerned. The study investigates the applicability of stochastic continuum modeling at the scale of data support. No scaling of the field data is involved, and the original variability is preserved throughout the modeling. Contributions of various aspects to the total uncertainty in the modeling prediction can also be determined with this approach. Data from five crystalline rock sites in Finland are analyzed. (107 refs., 63 figs., 7 tabs.)

  15. Kinetics of subdiffusion-assisted reactions: non-Markovian stochastic Liouville equation approach

    International Nuclear Information System (INIS)

    Shushin, A I

    2005-01-01

    Anomalous specific features of the kinetics of subdiffusion-assisted bimolecular reactions (time-dependence, dependence on parameters of systems, etc.) are analysed in detail with the use of the non-Markovian stochastic Liouville equation (SLE), which has been recently derived within the continuous-time random-walk (CTRW) approach. In the CTRW approach, subdiffusive motion of particles is modelled by jumps whose onset probability distribution function is of a long-tailed form. The non-Markovian SLE allows for a rigorous description of some peculiarities of these reactions; for example, very slow long-time behaviour of the kinetics, non-analytical dependence of the reaction rate on the reactivity of particles, and strong manifestation of fluctuation kinetics showing itself in very slowly decreasing behaviour of the kinetics at very long times.
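
    The subdiffusive ingredient this record builds on can be sketched with a bare continuous-time random walk whose waiting times are heavy-tailed, giving a mean-square displacement growing like t^alpha with alpha < 1; the reaction part handled by the non-Markovian SLE is omitted, and all parameters are hypothetical.

```python
import numpy as np

# CTRW with Pareto waiting times, psi(t) ~ t^-(1+alpha) for 0 < alpha < 1,
# which produces subdiffusion <x^2(t)> ~ t^alpha.
rng = np.random.default_rng(12)

alpha, n_walkers = 0.6, 2000
times = np.logspace(0, 3, 20)                  # observation times up to t = 1000
positions = np.zeros((n_walkers, times.size))

for w in range(n_walkers):
    t, x, obs = 0.0, 0.0, 0
    while obs < times.size:
        wait = rng.pareto(alpha) + 1.0         # heavy-tailed waiting time (infinite mean)
        t += wait
        while obs < times.size and times[obs] < t:
            positions[w, obs] = x              # walker has not yet jumped at this observation time
            obs += 1
        x += rng.choice((-1.0, 1.0))           # unit jump once the waiting period ends

msd = (positions**2).mean(axis=0)
slope = np.polyfit(np.log(times[5:]), np.log(msd[5:]), 1)[0]
print(f"fitted MSD exponent ~ {slope:.2f} (subdiffusive: expected ~ alpha = {alpha})")
```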

  16. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Science.gov (United States)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are less strongly formulated, as they normally incorporate only homogeneous information at a time and suggest aggregating objectives of different decision-makers avoiding water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  17. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan

    2016-11-02

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks are currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered as the main source of stochasticity of its operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches of stochastic neural networks are investigated. Aside from the size and area perspective, the impact on the system performance, in terms of accuracy, recognition rates, and learning, among these two approaches and where the memristor would fall into place are the main comparison points to be considered.

  18. Technical Efficiency in the Chilean Agribusiness Sector - a Stochastic Meta-Frontier Approach

    OpenAIRE

    Larkner, Sebastian; Brenes Muñoz, Thelma; Aedo, Edinson Rivera; Brümmer, Bernhard

    2013-01-01

    The Chilean economy is strongly export-oriented, which is also true for the Chilean agribusiness industry. This paper investigates the technical efficiency of the Chilean food processing industry between 2001 and 2007. We use a dataset of 2,471 firms in the food processing industry, with observations drawn from the ‘Annual National Industrial Survey’. A stochastic meta-frontier approach is used to analyse the drivers of technical efficiency. We include variables capturing the effec...

  19. Monthly gravity field recovery from GRACE orbits and K-band measurements using variational equations approach

    Directory of Open Access Journals (Sweden)

    Changqing Wang

    2015-07-01

    The Gravity Recovery and Climate Experiment (GRACE) mission can significantly improve our knowledge of the temporal variability of the Earth's gravity field. We obtained monthly gravity field solutions based on a variational equations approach from GPS-derived positions of the GRACE satellites and K-band range-rate measurements. The impact of different fixed data weighting ratios on temporal gravity field recovery when combining the two types of data was investigated in order to derive the best combined solution. The monthly gravity field solutions obtained through the above procedure are named the Institute of Geodesy and Geophysics (IGG) temporal gravity field models. The IGG models were compared with the GRACE Release 05 (RL05) products in the following respects: (i) the trend of the mass anomaly in China and its nearby regions within 2005–2010; (ii) the root mean squares of the global mass anomaly during 2005–2010; (iii) time series of the mean water storage in the Amazon Basin and the Sahara Desert between 2005 and 2010. The results showed that the IGG solutions are almost consistent with the GRACE RL05 products in aspects (i)–(iii). The annual amplitudes of the mean water storage change in the Amazon Basin were 14.7 ± 1.2 cm for IGG, 17.1 ± 1.3 cm for the Centre for Space Research (CSR), 16.4 ± 0.9 cm for the GeoForschungsZentrum (GFZ) and 16.9 ± 1.2 cm for the Jet Propulsion Laboratory (JPL), in terms of equivalent water height (EWH). The root mean squares of the mean mass anomaly in the Sahara were 1.2 cm, 0.9 cm, 0.9 cm and 1.2 cm for the IGG, CSR, GFZ and JPL temporal gravity field models, respectively. The comparison suggests that the IGG temporal gravity field solutions are at the same accuracy level as the latest temporal gravity field solutions published by CSR, GFZ and JPL.
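
    A quantity repeatedly compared in this record is the annual amplitude of equivalent water height. The sketch below (a synthetic monthly series, not the IGG/CSR/GFZ/JPL products) shows the usual least-squares estimate of such an amplitude from a bias-trend-annual-harmonic model.

```python
# Hedged sketch: estimating the annual amplitude of an equivalent-water-height (EWH)
# time series by least squares. The data are synthetic and the numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2005.0, 2010.0, 1.0 / 12.0)                 # monthly epochs in years
ewh = 15.0 * np.cos(2 * np.pi * (t - 2005.2)) + rng.normal(0, 2.0, t.size)  # cm

# Design matrix: bias, trend, annual cosine and sine
A = np.column_stack([np.ones_like(t), t - t.mean(),
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
x, *_ = np.linalg.lstsq(A, ewh, rcond=None)
amplitude = np.hypot(x[2], x[3])
print(f"estimated annual amplitude: {amplitude:.1f} cm (true value 15 cm)")
```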

  20. Extended Theories of Gravity

    International Nuclear Information System (INIS)

    Capozziello, Salvatore; De Laurentis, Mariafelicia

    2011-01-01

    Extended Theories of Gravity can be considered a new paradigm for curing the shortcomings of General Relativity at infrared and ultraviolet scales. They are an approach that, by preserving the undoubtedly positive results of Einstein’s theory, aims to address conceptual and experimental problems that have recently emerged in astrophysics, cosmology and High Energy Physics. In particular, the goal is to encompass, in a self-consistent scheme, problems like inflation, dark energy, dark matter and large-scale structure and, first of all, to give at least an effective description of Quantum Gravity. We review the basic principles that any gravitational theory has to follow. The geometrical interpretation is discussed in a broad perspective in order to highlight the basic assumptions of General Relativity and its possible extensions in the general framework of gauge theories. Principles of such modifications are presented, focusing on specific classes of theories like f(R)-gravity and scalar–tensor gravity in the metric and Palatini approaches. The special role of torsion is also discussed. The conceptual features of these theories are fully explored and attention is paid to the issues of dynamical and conformal equivalence between them, considering also the initial value problem. A number of viability criteria are presented, considering the post-Newtonian and the post-Minkowskian limits. In particular, we discuss the problems of neutrino oscillations and gravitational waves in extended gravity. Finally, future perspectives of extended gravity are considered, with the possibility of going beyond a trial-and-error approach.

  1. Lattice gravity and strings

    International Nuclear Information System (INIS)

    Jevicki, A.; Ninomiya, M.

    1985-01-01

    We are concerned with applications of the simplicial discretization method (Regge calculus) to two-dimensional quantum gravity, with emphasis on the physically relevant string model. Beginning with the discretization of gravity and matter, we exhibit a discrete version of the conformal trace anomaly. Proceeding to the string problem, we show how the direct (finite-difference) discretization based on the Nambu action corresponds to an unsatisfactory treatment of the gravitational degrees of freedom. Based on the Regge approach, we then propose a discretization corresponding to the Polyakov string. In this context we are led to a natural geometric version of the associated Liouville model and two-dimensional gravity. (orig.)

  2. Quantum Gravity Effects in Cosmology

    Directory of Open Access Journals (Sweden)

    Gu Je-An

    2018-01-01

    Within the geometrodynamic approach to quantum cosmology, we studied quantum gravity effects in cosmology. The Gibbons-Hawking temperature is corrected by quantum gravity due to spacetime fluctuations, and the power spectrum, as well as any probe field, experiences this effective temperature, a quantum gravity effect.

  3. Quantum Gravity Mathematical Models and Experimental Bounds

    CERN Document Server

    Fauser, Bertfried; Zeidler, Eberhard

    2007-01-01

    The construction of a quantum theory of gravity is the most fundamental challenge confronting contemporary theoretical physics. The different physical ideas which evolved while developing a theory of quantum gravity require highly advanced mathematical methods. This book presents different mathematical approaches to formulate a theory of quantum gravity. It represents a carefully selected cross-section of lively discussions about the issue of quantum gravity which took place at the second workshop "Mathematical and Physical Aspects of Quantum Gravity" in Blaubeuren, Germany. This collection covers in a unique way aspects of various competing approaches. A unique feature of the book is the presentation of different approaches to quantum gravity making comparison feasible. This feature is supported by an extensive index. The book is mainly addressed to mathematicians and physicists who are interested in questions related to mathematical physics. It allows the reader to obtain a broad and up-to-date overview on ...

  4. Stochastic quantization

    International Nuclear Information System (INIS)

    Klauder, J.R.

    1983-01-01

    The author provides an introductory survey to stochastic quantization in which he outlines this new approach for scalar fields, gauge fields, fermion fields, and condensed matter problems such as electrons in solids and the statistical mechanics of quantum spins. (Auth.)

  5. Stochastic methods in quantum mechanics

    CERN Document Server

    Gudder, Stanley P

    2005-01-01

    Practical developments in fields such as optical coherence, communication engineering, and laser technology have grown out of applications of stochastic methods. This introductory survey offers a broad view of some of the most useful stochastic methods and techniques in quantum physics, functional analysis, probability theory, communications, and electrical engineering. Starting with a history of quantum mechanics, it examines both the quantum logic approach and the operational approach, with explorations of random fields and quantum field theory. The text assumes a basic knowledge of fun...

  6. 100 years after Smoluchowski: stochastic processes in cell biology

    International Nuclear Information System (INIS)

    Holcman, D; Schuss, Z

    2017-01-01

    100 years after Smoluchowski introduced his approach to stochastic processes, they are now at the basis of mathematical and physical modeling in cellular biology: they are used, for example, to analyse and extract features from a large number (tens of thousands) of single-molecule trajectories, or to study the diffusive motion of molecules, proteins or receptors. Stochastic modeling is a new step in large-data analysis that serves to extract cell-biology concepts. We review here Smoluchowski’s approach to stochastic processes and provide several applications: coarse-graining diffusion, polymer models for understanding nuclear organization and, finally, the stochastic jump dynamics of telomeres across cell division and stochastic gene regulation. (topical review)

  7. New 3D Gravity Model of the Lithosphere and new Approach of the Gravity Field Transformation in the Western Carpathian-Pannonian Region

    Science.gov (United States)

    Bielik, M.; Tasarova, Z. A.; Goetze, H.; Mikuska, J.; Pasteka, R.

    2007-12-01

    3-D forward modeling was performed for the Western Carpathians and the Pannonian Basin system. The density model includes 31 cross-sections and extends to a depth of 220 km. By means of the combined 3-D modeling, new estimates of the density distribution of the crust and upper mantle, as well as of the Moho depth, were derived. These data allowed us to perform gravity stripping, which in the area of the Pannonian Basin is crucial for the signal analysis of the gravity field: there, two pronounced features with opposite gravity effects (the deep sedimentary basins and the shallow Moho) make it impossible to analyze the Bouguer anomaly by field separation or filtering alone. The results revealed a significantly different nature of the Western Carpathian-Pannonian region (the ALACAPA and Tisza-Dacia microplates) from the European Platform lithosphere, the microplates being much less dense than the surrounding European Platform lithosphere. The calculation of transformed gravity maps by means of a new method provided additional information on the lithospheric structure. The use of existing elevation information represents an independent approach to the transformation of gravity maps. Instead of standard separation and transformation methods in the wave-number or spatial domains, this method is based on estimating the linear trends actually present within the values of the complete Bouguer anomaly (CBA), which is understood as a function defined in 3-D space. The important assumption that the points with known input CBA values lie on a horizontal plane is therefore not required; instead, the points with known CBA and elevation values are treated in their original positions, i.e. on the Earth surface.

  8. Cosmology of f(R) gravity in the metric variational approach

    Science.gov (United States)

    Li, Baojiu; Barrow, John D.

    2007-04-01

    We consider the cosmologies that arise in a subclass of f(R) gravity with f(R) = R + μ^(2n+2)/(−R)^n and n ∈ (−1, 0) in the metric (as opposed to the Palatini) variational approach to deriving the gravitational field equations. The calculations of the isotropic and homogeneous cosmological models are undertaken in the Jordan frame and at both the background and the perturbation levels. For the former, we also discuss the connection to the Einstein frame in which the extra degree of freedom in the theory is associated with a scalar field sharing some of the properties of a “chameleon” field. For the latter, we derive the cosmological perturbation equations in general theories of f(R) gravity in covariant form and implement them numerically to calculate the cosmic microwave background (CMB) temperature and matter power spectra of the cosmological model. The CMB power is shown to reduce at low l’s, and the matter power spectrum is almost scale independent at small scales, thus having a similar shape to that in standard general relativity. These are in stark contrast with what was found in the Palatini f(R) gravity, where the CMB power is largely amplified at low l’s and the matter spectrum is strongly scale dependent at small scales. These features make the present model more adaptable than that arising from the Palatini f(R) field equations, and none of the data on background evolution, CMB power spectrum, or matter power spectrum currently rule it out.
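
    For orientation (this is standard textbook material, not part of the original abstract), the metric-variation field equations that this record specialises to the quoted f(R) take the form, with F(R) ≡ df/dR:

```latex
F(R)\,R_{\mu\nu} - \tfrac{1}{2} f(R)\, g_{\mu\nu}
  - \nabla_\mu \nabla_\nu F(R) + g_{\mu\nu}\, \Box F(R) = 8\pi G\, T_{\mu\nu},
\qquad
f(R) = R + \frac{\mu^{2n+2}}{(-R)^{\,n}}, \quad n \in (-1, 0).
```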

  9. A dynamically adaptive wavelet approach to stochastic computations based on polynomial chaos - capturing all scales of random modes on independent grids

    International Nuclear Information System (INIS)

    Ren Xiaoan; Wu Wenquan; Xanthis, Leonidas S.

    2011-01-01

    Highlights: (i) a new approach for stochastic computations based on polynomial chaos; (ii) development of a dynamically adaptive wavelet multiscale solver using space refinement; (iii) accurate capture of steep gradients and multiscale features in stochastic problems; (iv) all scales of each random mode are captured on independent grids; (v) numerical examples demonstrate the need for different space resolutions per mode. Abstract: In stochastic computations, or uncertainty quantification methods, the spectral approach based on the polynomial chaos expansion in random space leads to a coupled system of deterministic equations for the coefficients of the expansion. The size of this system increases drastically when the number of independent random variables and/or the order of the polynomial chaos expansion increases. This is invariably the case for large-scale simulations and/or problems involving steep gradients and other multiscale features; such features are variously reflected in each solution component or random/uncertainty mode, requiring the development of adaptive methods for their accurate resolution. In this paper we propose a new approach for treating such problems based on a dynamically adaptive wavelet methodology involving space refinement in physical space that allows all scales of each solution component to be refined independently of the rest. We exemplify this using the convection-diffusion model with random input data and present three numerical examples demonstrating the salient features of the proposed method. Thus we establish a new, elegant and flexible approach for stochastic problems with steep gradients and multiscale features based on polynomial chaos expansions.
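
    The polynomial chaos expansion underlying this record can be demonstrated in a few lines; the sketch below (a standalone 1-D illustration with an assumed test function, not the authors' adaptive wavelet solver) projects u(ξ) = exp(ξ), ξ ~ N(0,1), onto probabilists' Hermite polynomials and recovers the mean and variance from the coefficients.

```python
# Hedged sketch of a 1-D polynomial chaos expansion in Hermite polynomials.
# The expansion order and the test function u(xi) = exp(xi) are assumptions.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

order = 6
nodes, weights = hermegauss(40)            # quadrature for the weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)   # normalise to the standard Gaussian measure

# Galerkin projection: c_k = E[u(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
coeffs = np.empty(order + 1)
for k in range(order + 1):
    basis = np.zeros(order + 1)
    basis[k] = 1.0
    Hk = hermeval(nodes, basis)            # He_k evaluated at the quadrature nodes
    coeffs[k] = np.sum(weights * np.exp(nodes) * Hk) / factorial(k)

mean = coeffs[0]
var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
print(f"PCE mean {mean:.4f} (exact {np.exp(0.5):.4f}); "
      f"PCE variance {var:.4f} (exact {(np.e - 1.0) * np.e:.4f})")
```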

  10. Stochastic modeling and analysis of telecoms networks

    CERN Document Server

    Decreusefond, Laurent

    2012-01-01

    This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide range of results on the stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an...

  11. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    This paper proposes a cloud computing framework for the smart grid environment, creating a small integrated energy hub that supports real-time computing for handling large volumes of data. A stochastic programming model is developed within the cloud computing scheme for effective demand-side management (DSM) in the smart grid. Simulation results, obtained using a GUI interface and the Gurobi optimizer in Matlab, show that the electricity demand is reduced by creating energy networks in a smart-hub approach.
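
    Stochastic programming of the kind invoked here can be illustrated with a toy schedule; the sketch below (invented prices, storage levels and horizon, and plain backward induction rather than the paper's Gurobi model) computes the expected cost of meeting a storage target when hourly prices are random.

```python
# Hedged sketch: backward induction for a tiny demand-side scheduling problem.
# A storage unit must hold `target` units by the end of T hours; each hour the
# price is low or high with equal probability and at most one unit can be bought.
import numpy as np

T, target = 6, 4
levels = np.arange(0, target + 1)            # storage states 0..target
prices = [(0.5, 10.0), (0.5, 30.0)]          # (probability, price) per hour

V = np.where(levels >= target, 0.0, np.inf)  # terminal cost: the target must be met
for t in reversed(range(T)):
    V_new = np.empty_like(V)
    for i, s in enumerate(levels):
        exp_cost = 0.0
        for prob, price in prices:           # price observed before acting
            candidates = [price * a + V[min(s + a, target)] for a in (0, 1)]
            exp_cost += prob * min(candidates)
        V_new[i] = exp_cost
    V = V_new

print(f"expected cost of filling the storage from empty: {V[0]:.1f}")
```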

  12. Trade Performance and Potential of the Philippines: An Application of Stochastic Frontier Gravity Model

    OpenAIRE

    Deluna, Roperto Jr

    2013-01-01

    This study investigates what Philippine merchandise trade flows would be if countries operated at the frontier of the gravity model. The coefficients of the gravity model were estimated and then used to compute merchandise export potentials and the technical efficiency of each country in the sample; these were also aggregated to measure the impact of country groups, RTAs and inter-regional trading agreements. Result of the ...
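
    The core of a stochastic frontier gravity model is a log-linear gravity equation with a composed error (noise plus one-sided inefficiency). The sketch below (synthetic data; ordinary least squares instead of the maximum-likelihood frontier estimation the method actually requires) illustrates the gravity-equation step only.

```python
# Hedged sketch: a log-linear gravity equation fitted by OLS on synthetic trade data.
# A true stochastic frontier adds a one-sided inefficiency term estimated by maximum
# likelihood; that step is omitted here and all data and coefficients are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 200
log_gdp_i = rng.normal(10, 1, n)       # exporter GDP (log)
log_gdp_j = rng.normal(10, 1, n)       # importer GDP (log)
log_dist = rng.normal(8, 0.5, n)       # bilateral distance (log)
noise = rng.normal(0, 0.3, n)
inefficiency = np.abs(rng.normal(0, 0.4, n))   # one-sided term (ignored by OLS)

log_trade = 1.0 + 0.8 * log_gdp_i + 0.7 * log_gdp_j - 1.1 * log_dist + noise - inefficiency

X = np.column_stack([np.ones(n), log_gdp_i, log_gdp_j, log_dist])
beta, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
print("OLS estimates (const, gdp_i, gdp_j, dist):", np.round(beta, 2))
```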

  13. Stochastic Community Assembly: Does It Matter in Microbial Ecology?

    Science.gov (United States)

    Zhou, Jizhong; Ning, Daliang

    2017-12-01

    Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.

  14. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    Energy Technology Data Exchange (ETDEWEB)

    Samin, Adib J. [The Department of Mechanical and Aerospace Engineering, The Ohio State University, 201 W 19th Avenue, Columbus, Ohio 43210 (United States)

    2016-05-15

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.
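
    As a rough illustration of the random-walk idea in this record (and its duplicate below), the sketch that follows diffuses walkers on a 1-D lattice and lets them react at the electrode with a potential-dependent probability during a triangular sweep. Adsorption, electrode morphology and the reverse reaction, all part of the actual model, are omitted, and every parameter is invented.

```python
# Hedged sketch of a random-walk voltammetry simulation (not the paper's model).
# Walkers diffuse on a 1-D lattice; at the electrode (site 0) they are oxidised
# with a potential-dependent probability, and the reaction count per step is
# taken as the current.
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_walkers, n_steps = 200, 5000, 4000
pos = rng.integers(1, n_sites, n_walkers)        # initial walker positions
oxidised = np.zeros(n_walkers, dtype=bool)

# triangular potential sweep (arbitrary units)
E = np.concatenate([np.linspace(-0.3, 0.3, n_steps // 2),
                    np.linspace(0.3, -0.3, n_steps // 2)])
k0, steepness = 0.05, 20.0                       # assumed reaction-probability law

current = np.zeros(n_steps)
for step in range(n_steps):
    pos = np.clip(pos + rng.choice((-1, 1), n_walkers), 0, n_sites - 1)
    at_electrode = (pos == 0) & ~oxidised
    p_react = k0 / (1.0 + np.exp(-steepness * E[step]))   # sigmoid in the potential
    react = at_electrode & (rng.random(n_walkers) < p_react)
    oxidised |= react
    current[step] = react.sum()

print("peak forward current (reactions/step):", current[: n_steps // 2].max())
```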

  15. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    International Nuclear Information System (INIS)

    Samin, Adib J.

    2016-01-01

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.

  16. Newton-Cartan gravity revisited

    NARCIS (Netherlands)

    Andringa, Roel

    2016-01-01

    In this research Newton's old theory of gravity is rederived using an algebraic approach known as the gauging procedure. The resulting theory is Newton's theory in the mathematical language of Einstein's General Relativity theory, in which gravity is spacetime curvature. The gauging procedure sheds

  17. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Directory of Open Access Journals (Sweden)

    Y. H. Subagadis

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated, as they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions, incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, while fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess the stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  18. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.

  19. Modeling stochasticity in biochemical reaction networks

    International Nuclear Information System (INIS)

    Constantino, P H; Vlysidis, M; Smadbeck, P; Kaznessis, Y N

    2016-01-01

    Small biomolecular systems are inherently stochastic. Indeed, fluctuations of molecular species are substantial in living organisms and may result in significant variation in cellular phenotypes. The chemical master equation (CME) is the most detailed mathematical model that can describe stochastic behaviors. However, because of its complexity, the CME has been solved for only a few, very small reaction networks. As a result, the contribution of CME-based approaches to biology has been very limited. In this review we discuss the approach of solving the CME by a set of differential equations for the probability moments, called moment equations. We present different approaches to produce and to solve these equations, emphasizing the use of factorial moments and the zero-information-entropy closure scheme. We also provide information on the stability analysis of stochastic systems. Finally, we speculate on the utility of CME-based modeling formalisms, especially in the context of synthetic biology efforts. (topical review)
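
    The moment-equation idea can be shown on the simplest possible example; the sketch below uses a linear birth-death process with assumed rates, for which the mean and variance equations close exactly, so none of the closure schemes discussed in the review are needed.

```python
# Hedged sketch: moment equations derived from the CME of a linear birth-death
# process (birth at constant rate k_birth, death at rate k_death * n). The rates
# are assumptions; for this model the first two moments close exactly.
import numpy as np
from scipy.integrate import solve_ivp

k_birth, k_death = 5.0, 0.1

def moments(t, y):
    mean, var = y
    dmean = k_birth - k_death * mean
    dvar = k_birth + k_death * mean - 2.0 * k_death * var
    return [dmean, dvar]

sol = solve_ivp(moments, (0.0, 60.0), [0.0, 0.0])
mean_inf, var_inf = sol.y[:, -1]
print(f"stationary mean ~ {mean_inf:.1f}, variance ~ {var_inf:.1f} "
      f"(Poisson limit: both equal {k_birth / k_death:.1f})")
```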

  20. Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach

    International Nuclear Information System (INIS)

    Ferrari, Giorgio; Riedel, Frank; Steg, Jan-Henrik

    2017-01-01

    In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner’s optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.

  1. Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ferrari, Giorgio, E-mail: giorgio.ferrari@uni-bielefeld.de; Riedel, Frank, E-mail: frank.riedel@uni-bielefeld.de; Steg, Jan-Henrik, E-mail: jsteg@uni-bielefeld.de [Bielefeld University, Center for Mathematical Economics (Germany)

    2017-06-15

    In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner’s optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.

  2. Robust synthetic biology design: stochastic game theory approach.

    Science.gov (United States)

    Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching

    2009-07-15

    Synthetic biology aims to engineer artificial biological systems in order to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult, and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances from the extra-cellular environment on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic in synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, a Takagi-Sugeno (T-S) fuzzy model is proposed to approximate the non-linear synthetic gene network, and the design is carried out via the linear matrix inequality (LMI) technique using the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf.

  3. All-loop calculations of total, elastic and single diffractive cross sections in RFT via the stochastic approach

    International Nuclear Information System (INIS)

    Kolevatov, R. S.; Boreskov, K. G.

    2013-01-01

    We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single diffractive cross sections with account of all Pomeron loops are obtained.

  4. All-loop calculations of total, elastic and single diffractive cross sections in RFT via the stochastic approach

    Energy Technology Data Exchange (ETDEWEB)

    Kolevatov, R. S. [SUBATECH, Ecole des Mines de Nantes, 4 rue Alfred Kastler, 44307 Nantes Cedex 3 (France); Boreskov, K. G. [Institute of Theoretical and Experimental Physics, 117259, Moscow (Russian Federation)

    2013-04-15

    We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single diffractive cross sections with account of all Pomeron loops are obtained.

  5. Variational approach to gravity field theories from Newton to Einstein and beyond

    CERN Document Server

    Vecchiato, Alberto

    2017-01-01

    This book offers a detailed and stimulating account of the Lagrangian, or variational, approach to general relativity and beyond. The approach more usually adopted when describing general relativity is to introduce the required concepts of differential geometry and derive the field and geodesic equations from purely geometrical properties. Demonstration of the physical meaning then requires the weak field approximation of these equations to recover their Newtonian counterparts. The potential downside of this approach is that it tends to suit the mathematical mind and requires the physicist to study and work in a completely unfamiliar environment. In contrast, the approach to general relativity described in this book will be especially suited to physics students. After an introduction to field theories and the variational approach, individual sections focus on the variational approach in relation to special relativity, general relativity, and alternative theories of gravity. Throughout the text, solved exercis...

  6. Public Transportation Hub Location with Stochastic Demand: An Improved Approach Based on Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Sen Liu

    2015-01-01

    Urban public transportation hubs are the key nodes of the public transportation system. The location of such hubs is a combinatorial problem. Many factors can affect the location decision, including both quantitative and qualitative factors; however, most current research focuses solely on either the quantitative or the qualitative factors. Little has been done to combine these two approaches. To fill this gap in the research, this paper proposes a novel approach to the public transportation hub location problem which takes both quantitative and qualitative factors into account. An improved multiple attribute group decision-making (MAGDM) method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and deviation is proposed to convert the qualitative factors of each hub into quantitative evaluation values. A location model with stochastic passenger flows is then established based on these evaluation values. Finally, stochastic programming theory is applied to solve the model and to determine the location result. A numerical study shows that this approach is applicable and effective.
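
    The TOPSIS step referred to above reduces to a few array operations; the sketch below (three invented candidate hubs, three invented criteria and weights, and none of the paper's fuzzy group judgements or stochastic flows) ranks alternatives by closeness to the ideal solution.

```python
# Hedged sketch of TOPSIS ranking. Scores, weights and criteria are illustrative.
import numpy as np

scores = np.array([[7.0, 200.0, 3.0],      # hub A: [coverage, cost, accessibility]
                   [9.0, 260.0, 4.0],      # hub B
                   [6.0, 150.0, 2.0]])     # hub C
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, True])    # cost is a "smaller is better" criterion

norm = scores / np.linalg.norm(scores, axis=0)          # vector normalisation
weighted = norm * weights
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
d_plus = np.linalg.norm(weighted - ideal, axis=1)
d_minus = np.linalg.norm(weighted - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("closeness to ideal:", np.round(closeness, 3),
      "-> best hub:", "ABC"[closeness.argmax()])
```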

  7. (Non-) homomorphic approaches to denoise intensity SAR images with non-local means and stochastic distances

    Science.gov (United States)

    Penna, Pedro A. A.; Mascarenhas, Nelson D. A.

    2018-02-01

    The development of new methods to denoise images still attracts researchers, who seek to combat noise with minimal loss of resolution and detail, such as edges and fine structures. Many algorithms aim to remove additive white Gaussian noise (AWGN). However, it is not the only type of noise that interferes with the analysis and interpretation of images. It is therefore important to extend the capacity of filters to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper develops two approaches based on the non-local means (NLM) filter, originally designed for AWGN, and extends its capacity to speckle in intensity SAR images. The first approach is grounded in stochastic distances based on the G0 distribution, without transforming the data to the logarithmic domain as in the homomorphic transformation; it takes the speckle and backscatter into account to estimate the parameters needed to compute the stochastic distances in NLM. The second method applies NLM denoising after a homomorphic transformation and uses the inverse Gamma distribution to estimate the parameters that enter the NLM with stochastic distances. The latter method also presents a new alternative for computing the parameters of the G0 distribution. Finally, this work compares and analyzes the results of the proposed methods on synthetic and real data against some recent filters from the literature.

  8. Spreading dynamics on complex networks: a general stochastic approach.

    Science.gov (United States)

    Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J

    2014-12-01

    Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models that are too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
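
    One of the special cases named above, susceptible-infectious-susceptible dynamics, is easy to simulate directly; the sketch below (a synchronous-update simulation on an Erdos-Renyi graph with invented parameters, not the motif-based Markov formalism of the paper) estimates the endemic prevalence.

```python
# Hedged sketch: discrete-time SIS dynamics on a random graph. All parameters are
# illustrative; a mean-field or motif-based treatment would replace the simulation.
import numpy as np

rng = np.random.default_rng(4)
n, p_edge = 500, 0.02
upper = np.triu(rng.random((n, n)) < p_edge, 1)
A = (upper | upper.T).astype(float)          # symmetric Erdos-Renyi adjacency matrix

beta, mu, steps = 0.08, 0.2, 200             # per-contact infection / recovery probabilities
infected = rng.random(n) < 0.05              # initial seed of infectious nodes

prevalence = []
for _ in range(steps):
    pressure = A @ infected.astype(float)    # number of infectious neighbours
    p_inf = 1.0 - (1.0 - beta) ** pressure
    new_inf = (~infected) & (rng.random(n) < p_inf)
    recover = infected & (rng.random(n) < mu)
    infected = (infected | new_inf) & ~recover
    prevalence.append(infected.mean())

print(f"endemic prevalence ~ {np.mean(prevalence[-50:]):.2f}")
```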

  9. Stochastic efficiency: five case studies

    International Nuclear Information System (INIS)

    Proesmans, Karel; Broeck, Christian Van den

    2015-01-01

    Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)

  10. A Monte Carlo Study on Multiple Output Stochastic Frontiers: Comparison of Two Approaches

    DEFF Research Database (Denmark)

    Henningsen, Geraldine; Henningsen, Arne; Jensen, Uwe

    In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates ... of both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecifications...

  11. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  12. Loop Quantum Gravity

    Directory of Open Access Journals (Sweden)

    Rovelli Carlo

    1998-01-01

    The problem of finding the quantum theory of the gravitational field, and thus understanding what quantum spacetime is, is still open. One of the most active of the current approaches is loop quantum gravity. Loop quantum gravity is a mathematically well-defined, non-perturbative and background-independent quantization of general relativity, with its conventional matter couplings. Research in loop quantum gravity today forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained are: (i) the computation of the physical spectra of geometrical quantities such as area and volume, which yields quantitative predictions on Planck-scale physics; (ii) a derivation of the Bekenstein-Hawking black hole entropy formula; (iii) an intriguing physical picture of the microstructure of quantum physical space, characterized by a polymer-like Planck-scale discreteness. This discreteness emerges naturally from the quantum theory and provides a mathematically well-defined realization of Wheeler's intuition of a spacetime ``foam''. Long-standing open problems within the approach (lack of a scalar product, over-completeness of the loop basis, implementation of reality conditions) have been fully solved. The weak part of the approach is the treatment of the dynamics: at present there exist several proposals, which are intensely debated. Here, I provide a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.

  13. Effects of artificial gravity on the cardiovascular system: Computational approach

    Science.gov (United States)

    Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.

    2016-09-01

    Artificial gravity has been suggested as a multisystem countermeasure against the negative effects of weightlessness. However, many questions regarding the appropriate configuration are still unanswered, including optimal g-level, angular velocity, gravity gradient, and exercise protocol. Mathematical models can provide unique insight into these questions, particularly when experimental data is very expensive or difficult to obtain. In this research effort, a cardiovascular lumped-parameter model is developed to simulate the short-term transient hemodynamic response to artificial gravity exposure combined with ergometer exercise, using a bicycle mounted on a short-radius centrifuge. The model is thoroughly described and preliminary simulations are conducted to show the model capabilities and potential applications. The model consists of 21 compartments (including systemic circulation, pulmonary circulation, and a cardiac model), and it also includes the rapid cardiovascular control systems (arterial baroreflex and cardiopulmonary reflex). In addition, the pressure gradient resulting from short-radius centrifugation is captured in the model using hydrostatic pressure sources located at each compartment. The model also includes the cardiovascular effects resulting from exercise such as the muscle pump effect. An initial set of artificial gravity simulations were implemented using the Massachusetts Institute of Technology (MIT) Compact-Radius Centrifuge (CRC) configuration. Three centripetal acceleration (artificial gravity) levels were chosen: 1 g, 1.2 g, and 1.4 g, referenced to the subject's feet. Each simulation lasted 15.5 minutes and included a baseline period, the spin-up process, the ergometer exercise period (5 minutes of ergometer exercise at 30 W with a simulated pedal cadence of 60 RPM), and the spin-down process. Results showed that the cardiovascular model is able to predict the cardiovascular dynamics during gravity changes, as well as the expected

  14. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    Science.gov (United States)

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.

  15. Multivariate moment closure techniques for stochastic kinetic models

    International Nuclear Information System (INIS)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2015-01-01

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay between the nonlinearities and the stochastic dynamics arises, which is much harder to capture correctly by such approximations to the true stochastic process. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs

  16. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay between the nonlinearities and the stochastic dynamics arises, which is much harder to capture correctly by such approximations to the true stochastic process. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  17. Towards the map of quantum gravity

    Science.gov (United States)

    Mielczarek, Jakub; Trześniewski, Tomasz

    2018-06-01

    In this paper we point out some possible links between different approaches to quantum gravity and theories of the Planck scale physics. In particular, connections between loop quantum gravity, causal dynamical triangulations, Hořava-Lifshitz gravity, asymptotic safety scenario, Quantum Graphity, deformations of relativistic symmetries and nonlinear phase space models are discussed. The main focus is on quantum deformations of the Hypersurface Deformations Algebra and Poincaré algebra, nonlinear structure of phase space, the running dimension of spacetime and nontrivial phase diagram of quantum gravity. We present an attempt to arrange the observed relations in the form of a graph, highlighting different aspects of quantum gravity. The analysis is performed in the spirit of a mind map, which represents the architectural approach to the studied theory, being a natural way to describe the properties of a complex system. We hope that the constructed graphs (maps) will turn out to be helpful in uncovering the global picture of quantum gravity as a particular complex system and serve as a useful guide for the researchers.

  18. Studying the intervention of an unusual term in f(T) gravity via the Noether symmetry approach. On a new term for gravity actions

    Energy Technology Data Exchange (ETDEWEB)

    Tajahmad, Behzad [University of Tabriz, Faculty of Physics, Tabriz (Iran, Islamic Republic of)

    2017-08-15

    As has been done before, we study an unknown coupling function, F(φ), together with a function of torsion or curvature, f(T) or f(R), generally depending upon a scalar field. In the f(R) case, such a term arises from quantum corrections and other sources. Now, what if, besides this term in the f(T) gravity context, we enhance the action with another term that depends upon both the scalar field and its derivatives? In this paper, we have added such an unprecedented term to the generic action of f(T) gravity, in which an unknown function of torsion is coupled with an unknown function of both the scalar field and its derivatives. We explain in detail why such a term can be appended. Using the Noether symmetry approach, we consider its behavior and effect. We show that it does not produce an anomaly; rather, it works successfully, and numerical analysis of the exact solutions of the field equations coincides with the most important observational data, particularly the late-time accelerated expansion. So this new term may be added to the gravitational actions of f(T) gravity. (orig.)

  19. A Stochastic Programming Approach for a Multi-Site Supply Chain Planning in Textile and Apparel Industry under Demand Uncertainty

    Directory of Open Access Journals (Sweden)

    Houssem Felfel

    2015-11-01

    In this study, a new stochastic model is proposed to deal with a multi-product, multi-period, multi-stage, multi-site production and transportation supply chain planning problem under demand uncertainty. A two-stage stochastic linear programming approach is used to maximize the expected profit. Decisions such as the production amount, the inventory level of finished and semi-finished products, the amount of backorder and the quantity of products to be transported between upstream and downstream plants in each period are considered. The robustness of the production supply chain plan is then evaluated using statistical and risk measures. A case study from a real textile and apparel industry is presented in order to compare the performance of the proposed stochastic programming model with the deterministic model.

  20. Stationary solutions of linear stochastic delay differential equations: applications to biological systems.

    Science.gov (United States)

    Frank, T D; Beek, P J

    2001-08-01

    Recently, Küchler and Mensch [Stochastics Stochastics Rep. 40, 23 (1992)] derived exact stationary probability densities for linear stochastic delay differential equations. This paper presents an alternative derivation of these solutions by means of the Fokker-Planck approach introduced by Guillouzic [Phys. Rev. E 59, 3970 (1999); 61, 4906 (2000)]. Applications of this approach, which is argued to have greater generality, are discussed in the context of stochastic models for population growth and tracking movements.
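
    The linear stochastic delay differential equations discussed here are also easy to explore numerically; the sketch below (Euler-Maruyama with invented coefficients, not the exact Fokker-Planck derivation of the record) simulates dX = (a X(t) + b X(t − τ)) dt + σ dW and reports the empirical stationary mean and spread, which should be consistent with a Gaussian stationary density.

```python
# Hedged sketch: Euler-Maruyama integration of a linear stochastic delay
# differential equation. Coefficients, delay and step size are assumptions.
import numpy as np

rng = np.random.default_rng(5)
a, b, tau, sigma = -1.0, -0.5, 1.0, 0.5
dt, n_steps = 0.01, 200_000
lag = int(tau / dt)

x = np.zeros(n_steps)                 # zero history on [-tau, 0]
for i in range(lag, n_steps - 1):
    drift = a * x[i] + b * x[i - lag]
    x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.normal()

samples = x[n_steps // 2:]            # discard the transient
print(f"stationary mean {samples.mean():+.3f}, std {samples.std():.3f}")
```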

  1. A Stochastic Bi-Level Scheduling Approach for the Participation of EV Aggregators in Competitive Electricity Markets

    DEFF Research Database (Denmark)

    Rashidizaheh-Kermani, Homa; Vahedipour-Dahraie, Mostafa; Najafi, Hamid Reza

    2017-01-01

    This paper proposes a stochastic bi-level decision-making model for an electric vehicle (EV) aggregator in a competitive environment. In this approach, the EV aggregator decides to participate in day-ahead (DA) and balancing markets and provides energy price offers to the EV owners in order ... are modeled via stochastic programming. Therefore, a two-level problem is formulated here, in which the aggregator makes decisions in the upper level and the EV clients purchase energy to charge their EVs in the lower level. Then the obtained nonlinear bi-level framework is transformed into a single ... is assessed in a realistic case study and the results show that the proposed model would be effective for an EV aggregator decision-making problem in a competitive environment.

  2. A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation

    Science.gov (United States)

    Rieser, Daniel; Mayer-Guerr, Torsten

    2014-05-01

    The depth of the Moho discontinuity is commonly derived from seismic observations, gravity measurements, or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on a regional scale. Owing to its mission configuration and measurement setup, GOCE contributes valuable information in particular in the medium wavelengths of the gravity field spectrum, which are also of special interest for the crust-mantle boundary. In contrast to other studies, we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are derived directly by LSC on a regional scale, which are in turn converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem already with a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced and the most recent developments and results using the currently available GOCE data will be presented.
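
    The LSC prediction step itself is compact; the sketch below (a 1-D toy with a made-up Gaussian covariance model and synthetic observations, far from the 3-D GOCE gradient problem) shows the usual form estimate = C_st (C_tt + D)^-1 l.

```python
# Hedged sketch of least-squares collocation in one dimension with an assumed
# covariance model; the observations and the "true" signal are synthetic.
import numpy as np

rng = np.random.default_rng(6)

def cov(d, c0=1.0, L=300.0):
    """Isotropic Gaussian covariance used as a stand-in for the signal covariance."""
    return c0 * np.exp(-(d / L) ** 2)

x_obs = np.linspace(0.0, 2000.0, 40)            # observation locations (km)
x_pred = np.linspace(0.0, 2000.0, 200)          # prediction locations (km)
truth = lambda x: np.sin(x / 300.0)             # synthetic signal
obs = truth(x_obs) + rng.normal(0, 0.1, x_obs.size)

C_tt = cov(np.abs(x_obs[:, None] - x_obs[None, :])) + 0.1 ** 2 * np.eye(x_obs.size)
C_st = cov(np.abs(x_pred[:, None] - x_obs[None, :]))
estimate = C_st @ np.linalg.solve(C_tt, obs)    # LSC prediction

rms = np.sqrt(np.mean((truth(x_pred) - estimate) ** 2))
print("rms error at the prediction points:", np.round(rms, 3))
```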

  3. Accounting for time- and space-varying changes in the gravity field to improve the network adjustment of relative-gravity data

    Science.gov (United States)

    Kennedy, Jeffrey R.; Ferre, Ty P.A.

    2015-01-01

    The relative gravimeter is the primary terrestrial instrument for measuring spatially and temporally varying gravitational fields. The background noise of the instrument—that is, non-linear drift and random tares—typically requires some form of least-squares network adjustment to integrate data collected during a campaign that may take several days to weeks. Here, we present an approach to remove the change in the observed relative-gravity differences caused by hydrologic or other transient processes during a single campaign, so that the adjusted gravity values can be referenced to a single epoch. The conceptual approach is an example of coupled hydrogeophysical inversion, by which a hydrologic model is used to inform and constrain the geophysical forward model. The hydrologic model simulates the spatial variation of the rate of change of gravity as either a linear function of distance from an infiltration source, or using a 3-D numerical groundwater model. The linear function can be included in and solved for as part of the network adjustment. Alternatively, the groundwater model is used to predict the change of gravity at each station through time, from which the accumulated gravity change is calculated and removed from the data prior to the network adjustment. Data from a field experiment conducted at an artificial-recharge facility are used to verify our approach. Maximum gravity change due to hydrology (observed using a superconducting gravimeter) during the relative-gravity field campaigns was up to 2.6 μGal d−1, each campaign was between 4 and 6 d and one month elapsed between campaigns. The maximum absolute difference in the estimated gravity change between two campaigns, two months apart, using the standard network adjustment method and the new approach, was 5.5 μGal. The maximum gravity change between the same two campaigns was 148 μGal, and spatial variation in gravity change revealed zones of preferential infiltration and areas of relatively
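
    A minimal version of the network adjustment mentioned above can be written as a small least-squares problem; the sketch below (synthetic ties, a single linear drift rate, one station held fixed, and none of the hydrologic correction that is the subject of the paper) recovers station values and the drift.

```python
# Hedged sketch of a relative-gravity network adjustment with linear drift.
# Station values, drift rate, tie list and noise level are all invented; in the
# paper's approach a hydrologic correction would be removed from `obs` first.
import numpy as np

rng = np.random.default_rng(7)
true_g = np.array([0.0, 35.0, -20.0, 55.0])   # station gravity (uGal), base fixed at 0
drift = 4.0                                   # instrument drift (uGal/day)

# (from station i, to station j, elapsed time between the two readings in days)
ties = [(0, 1, 0.1), (1, 2, 0.3), (2, 3, 0.5), (3, 0, 0.7), (0, 2, 0.9), (1, 3, 1.1)]
obs = np.array([true_g[j] - true_g[i] + drift * dt + rng.normal(0, 2.0)
                for i, j, dt in ties])

# Unknowns: g_1, g_2, g_3 (g_0 fixed to zero) and the drift rate
A = np.zeros((len(ties), 4))
for r, (i, j, dt) in enumerate(ties):
    if i > 0:
        A[r, i - 1] -= 1.0
    if j > 0:
        A[r, j - 1] += 1.0
    A[r, 3] = dt
x, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("estimated stations (uGal):", np.round(x[:3], 1),
      " drift (uGal/day):", round(x[3], 2))
```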

  4. Global height datum unification: a new approach in gravity potential space

    Science.gov (United States)

    Ardalan, A. A.; Safari, A.

    2005-12-01

    The problem of “global height datum unification” is solved in the gravity potential space based on: (1) high-resolution local gravity field modeling, (2) geocentric coordinates of the reference benchmark, and (3) a known value of the geoid’s potential. The high-resolution local gravity field model is derived based on a solution of the fixed-free two-boundary-value problem of the Earth’s gravity field using (a) potential difference values (from precise leveling), (b) modulus of the gravity vector (from gravimetry), (c) astronomical longitude and latitude (from geodetic astronomy and/or combination of Global Navigation Satellite System (GNSS) observations with total station measurements), and (d) satellite altimetry. Knowing the height of the reference benchmark in the national height system and its geocentric GNSS coordinates, and using the derived high-resolution local gravity field model, the gravity potential value of the zero point of the height system is computed. The difference between the derived gravity potential value of the zero point of the height system and the geoid’s potential value is computed. This potential difference gives the offset of the zero point of the height system from the geoid in the “potential space”, which is transferred into “geometry space” using the transformation formula derived in this paper. The method was applied to the computation of the offset of the zero point of the Iranian height datum from the geoid, using the geoid’s potential value W0 = 62636855.8 m²/s². According to the geometry space computations, the height datum of Iran is 0.09 m below the geoid.
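
    A schematic form of the potential-space to geometry-space conversion, under the assumption that the offset is transferred with a mean value of normal gravity along the plumb line (the paper's exact transformation formula may differ):

        \delta W \;=\; W_{P_0} - W_0, \qquad \delta H \;\approx\; \frac{\delta W}{\bar{\gamma}},

    where W_{P_0} is the gravity potential of the datum zero point, W_0 the geoid's potential, and \bar{\gamma} a mean normal gravity value; with this sign convention a positive \delta H places the datum zero point below the geoid.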

  5. A stochastic parameterization for deep convection using cellular automata

    Science.gov (United States)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept, which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements into the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid-boxes, and for temporal memory. Thus the CA scheme used in this study contains three interesting components for the representation of cumulus convection which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high-resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in
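
    A toy probabilistic cellular automaton illustrating the two ingredients emphasized above, lateral communication between neighbouring grid boxes and memory, is sketched below; the update probabilities are invented and this is not the ALARO CA scheme.

        import numpy as np

        # Toy stochastic cellular automaton: cells switch on with a probability that
        # grows with the number of active neighbours (lateral communication) and
        # decay slowly once active (memory). Purely illustrative.

        rng = np.random.default_rng(1)
        nx = ny = 64
        state = (rng.random((ny, nx)) < 0.02).astype(int)   # sparse initial "convection"

        P_BIRTH_PER_NEIGHBOUR = 0.15   # assumed
        P_DEATH = 0.20                 # assumed

        def step(state):
            # count active 8-neighbours with periodic boundaries
            nbrs = sum(np.roll(np.roll(state, dy, 0), dx, 1)
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
            birth = rng.random(state.shape) < P_BIRTH_PER_NEIGHBOUR * nbrs
            death = rng.random(state.shape) < P_DEATH
            return np.where(state == 1, (~death).astype(int), birth.astype(int))

        for _ in range(50):
            state = step(state)
        print("active fraction after 50 steps:", state.mean())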

  6. A stochastic approach for automatic generation of urban drainage systems.

    Science.gov (United States)

    Möderl, M; Butler, D; Rauch, W

    2009-01-01

    Typically, performance evaluation of newly developed methodologies is based on one or more case studies. The investigation of multiple real-world case studies is tedious and time consuming. Moreover, extrapolating conclusions from individual investigations to a general basis is arguable and sometimes even wrong. In this article a stochastic approach is presented to evaluate newly developed methodologies on a broader basis. For this approach the Matlab tool "Case Study Generator" is developed, which automatically generates a variety of different virtual urban drainage systems using boundary conditions, e.g. length of the urban drainage system, slope of the catchment surface, etc., as input. The layout of the sewer system is based on an adapted Galton-Watson branching process. The sub-catchments are allocated considering a digital terrain model. Sewer system components are designed according to standard values. In total, 10,000 different virtual case studies of urban drainage systems are generated and simulated. Consequently, simulation results are evaluated using a performance indicator for surface flooding. Comparison between results of the virtual and two real-world case studies indicates the promise of the method. The novelty of the approach is that it allows more general conclusions to be drawn, in contrast to traditional evaluations with few case studies.
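
    A minimal sketch of the Galton-Watson branching idea behind the layout generation is given below; the offspring probabilities and the tree-growing loop are illustrative assumptions, not the Case Study Generator's actual rules.

        import random

        # Toy Galton-Watson branching process as a generator of tree-shaped sewer
        # layouts: each node spawns 0..2 upstream branches with assumed probabilities.

        random.seed(7)
        OFFSPRING_P = {0: 0.3, 1: 0.4, 2: 0.3}   # assumed offspring distribution

        def sample_offspring():
            r, acc = random.random(), 0.0
            for k, p in OFFSPRING_P.items():
                acc += p
                if r < acc:
                    return k
            return 0

        def grow_network(max_depth=6):
            """Return list of edges (parent_id, child_id) of a random branching tree."""
            edges, frontier, next_id = [], [(0, 0)], 1   # frontier holds (node_id, depth)
            while frontier:
                node, depth = frontier.pop()
                if depth >= max_depth:
                    continue
                for _ in range(sample_offspring()):
                    edges.append((node, next_id))
                    frontier.append((next_id, depth + 1))
                    next_id += 1
            return edges

        edges = grow_network()
        print(f"generated {len(edges) + 1} manholes and {len(edges)} sewer reaches")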

  7. Dynamic stochastic optimization

    CERN Document Server

    Ermoliev, Yuri; Pflug, Georg

    2004-01-01

    Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the emphasis is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...

  8. Memory effects on stochastic resonance

    Science.gov (United States)

    Neiman, Alexander; Sung, Wokyung

    1996-02-01

    We study the phenomenon of stochastic resonance (SR) in a bistable system with internal colored noise. In this situation the system possesses time-dependent memory friction connected with noise via the fluctuation-dissipation theorem, so that in the absence of periodic driving the system approaches the thermodynamic equilibrium state. For this non-Markovian case we find that memory usually suppresses stochastic resonance. However, for a large memory time SR can be enhanced by the memory.

  9. A Stochastic Flows Approach for Asset Allocation with Hidden Economic Environment

    Directory of Open Access Journals (Sweden)

    Tak Kuen Siu

    2015-01-01

    Full Text Available An optimal asset allocation problem for a quite general class of utility functions is discussed in a simple two-state Markovian regime-switching model, where the appreciation rate of a risky share changes over time according to the state of a hidden economy. As usual, standard filtering theory is used to transform a financial model with hidden information into one with complete information, where a martingale approach is applied to discuss the optimal asset allocation problem. Using a martingale representation coupled with stochastic flows of diffeomorphisms for the filtering equation, the integrand in the martingale representation is identified which gives rise to an optimal portfolio strategy under some differentiability conditions.

  10. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    Energy Technology Data Exchange (ETDEWEB)

    Seif, Dariush, E-mail: dariush.seif@iwm-extern.fraunhofer.de [Fraunhofer Institut für Werkstoffmechanik, Freiburg 79108 (Germany); Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA 90095-1597 (United States); Ghoniem, Nasr M. [Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA 90095-1597 (United States)

    2014-12-15

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô’s calculus, rate equations for the first five moments of the size distribution in helium–vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations on the evolution of helium–vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. Discussion is given into how the model can be extended to include full spatial resolution and how the
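
    The sketch below is not the paper's five-moment model; it is a minimal Euler-Maruyama integration of a scalar Ito SDE for a mean cluster size, with invented drift and noise amplitudes standing in for the net absorption of point defects and its stochastic fluctuations.

        import numpy as np

        # Minimal Euler-Maruyama sketch of a scalar Ito SDE for a mean cluster size,
        # dn = a(n) dt + b(n) dW, with made-up drift and noise amplitudes.

        rng = np.random.default_rng(0)

        def a(n):            # assumed net growth rate (absorption minus emission)
            return 0.5 * np.sqrt(np.maximum(n, 0.0))

        def b(n):            # assumed fluctuation amplitude
            return 0.8 * np.maximum(n, 0.0) ** 0.25

        dt, n_steps, n_paths = 1e-2, 5000, 200
        n = np.full(n_paths, 5.0)                 # initial cluster size (helium atoms)
        for _ in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            n = np.maximum(n + a(n) * dt + b(n) * dW, 0.0)   # sizes stay non-negative

        print(f"mean size {n.mean():.1f}, relative spread {n.std() / n.mean():.2f}")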

  11. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    Science.gov (United States)

    Seif, Dariush; Ghoniem, Nasr M.

    2014-12-01

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô's calculus, rate equations for the first five moments of the size distribution in helium-vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations on the evolution of helium-vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. Discussion is given into how the model can be extended to include full spatial resolution and how the

  12. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    International Nuclear Information System (INIS)

    Seif, Dariush; Ghoniem, Nasr M.

    2014-01-01

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô’s calculus, rate equations for the first five moments of the size distribution in helium–vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations on the evolution of helium–vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. Discussion is given into how the model can be extended to include full spatial resolution and how the

  13. A robust decision-making approach for p-hub median location problems based on two-stage stochastic programming and mean-variance theory : a real case study

    NARCIS (Netherlands)

    Ahmadi, T.; Karimi, H.; Davoudpour, H.

    2015-01-01

    The stochastic location-allocation p-hub median problems are related to long-term decisions made in risky situations. Due to the importance of this type of problem in real-world applications, the authors were motivated to propose an approach to obtain more reliable policies in stochastic

  14. Earthquake occurrence as stochastic event: (1) theoretical models

    Energy Technology Data Exchange (ETDEWEB)

    Basili, A.; Basili, M.; Cagnetti, V.; Colombino, A.; Jorio, V.M.; Mosiello, R.; Norelli, F.; Pacilio, N.; Polinari, D.

    1977-01-01

    The present article aims to link the stochastic approach to the description of earthquake processes suggested by Lomnitz with the experimental evidence obtained by Schenkova that the time distribution of some earthquake occurrences is better described by a Negative Binomial distribution than by a Poisson distribution. The final purpose of the stochastic approach might be a kind of new way of labeling a given area in terms of seismic risk.
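
    The sketch below illustrates, with synthetic counts, the overdispersion property (variance exceeding the mean) that makes a Negative Binomial preferable to a Poisson for such data; the rates and parameters are invented.

        import numpy as np

        # Synthetic illustration of overdispersion: for Poisson counts the variance
        # equals the mean; Negative Binomial counts allow variance > mean.

        rng = np.random.default_rng(42)
        poisson_counts = rng.poisson(lam=3.0, size=10000)
        negbin_counts = rng.negative_binomial(n=2, p=2.0 / (2.0 + 3.0), size=10000)  # mean 3

        for name, c in [("Poisson", poisson_counts), ("NegBinomial", negbin_counts)]:
            print(f"{name:12s} mean={c.mean():.2f}  var={c.var():.2f}  "
                  f"dispersion index={c.var() / c.mean():.2f}")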

  15. Stochastic Levy Divergence and Maxwell's Equations

    Directory of Open Access Journals (Sweden)

    B. O. Volkov

    2015-01-01

    Full Text Available One of the main reasons for interest in the Levy Laplacian and its analogues, such as the Levy d'Alembertian, is the connection of these operators with gauge fields. The theorem proved by Accardi, Gibillisco and Volovich states that a connection in a bundle over a Euclidean space or over a Minkowski space is a solution of the Yang-Mills equations if and only if the parallel transport corresponding to the connection is a solution of the Laplace equation for the Levy Laplacian or of the d'Alembert equation for the Levy d'Alembertian, respectively (see [5, 6]). There are two approaches to defining Levy type operators, both of which date back to the original works of Levy [7]. The first is that the Levy Laplacian (or Levy d'Alembertian) is defined as an integral functional generated by a special form of the second derivative. This approach is used in the works [5, 6], as well as in the paper [8] of Leandre and Volovich, where a stochastic Levy Laplacian is discussed. Another approach is to define the Levy Laplacian as the Cesaro mean of second order derivatives along a family of vectors which is an orthonormal basis of the Hilbert space; this definition is written out below. This definition of the Levy Laplacian is used for the description of solutions of the Yang-Mills equations in the paper [10]. The present work shows that the definitions of the Levy Laplacian and the Levy d'Alembertian based on Cesaro averaging of the second order directional derivatives can be transferred to the stochastic case. In the article the values of these operators on a stochastic parallel transport associated with a connection (vector potential) are found. In this case, unlike the deterministic case and the stochastic case of the Levy Laplacian from [8], these values are not equal to zero if the vector potential corresponding to the stochastic parallel transport is a solution of Maxwell's equations. As a result, two approaches to the definition of the Levy Laplacian in the stochastic case give different operators. This
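
    For reference, the Cesaro-mean definition mentioned above can be written schematically as follows (notation assumed; see [7, 10] for the precise setting):

        \Delta_L f(x) \;=\; \lim_{N\to\infty} \frac{1}{N} \sum_{n=1}^{N} \bigl\langle f''(x)\, e_n, e_n \bigr\rangle,

    where {e_n} is an orthonormal basis of the underlying Hilbert space and f''(x) denotes the second derivative of f at x.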

  16. A Column Generation Approach to the Capacitated Vehicle Routing Problem with Stochastic Demands

    DEFF Research Database (Denmark)

    Christiansen, Christian Holk; Lysgaard, Jens

    In this article we introduce a new exact solution approach to the Capacitated Vehicle Routing Problem with Stochastic Demands (CVRPSD). In particular, we consider the case where all customer demands are distributed independently and where each customer's demand follows a Poisson distribution. The CVRPSD can be formulated as a Set Partitioning Problem. We show that, under the above assumptions on demands, the associated column generation subproblem can be solved using a dynamic programming scheme which is similar to that used in the case of deterministic demands. To evaluate the potential of our ...

  17. Adaptive topographic mass correction for satellite gravity and gravity gradient data

    Science.gov (United States)

    Holzrichter, Nils; Szwillus, Wolfgang; Götze, Hans-Jürgen

    2014-05-01

    Subsurface modelling with gravity data requires a reliable topographic mass correction. For decades, this mandatory step has been a standard procedure. However, the original methods were developed for local terrestrial surveys. Therefore, these methods often include defaults like a limited correction area of 167 km around an observation point, resampling of the topography depending on the distance to the station, or disregard of the curvature of the Earth. New satellite gravity data (e.g. GOCE) can be used for large-scale lithospheric modelling with gravity data. The investigation areas can span thousands of kilometres. In addition, the measurements are located at the flight height of the satellite (e.g. ~250 km for GOCE). The standard definition of the correction area and of the specific grid spacing around an observation point was not developed for stations located at these heights or for areas of these dimensions. This calls for a re-evaluation of the defaults used for the topographic correction. We developed an algorithm which resamples the topography based on an adaptive approach. Instead of resampling the topography depending on the distance to the station, the grids are resampled depending on their influence at the station. Therefore, the only value the user has to define is the desired accuracy of the topographic correction. It is not necessary to define the grid spacing or a limited correction area. Furthermore, the algorithm calculates the topographic mass response with spherically shaped polyhedral bodies. We show examples for local and global gravity datasets and compare the results of the topographic mass correction to existing approaches. We provide suggestions for how satellite gravity and gradient data should be corrected.

  18. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Science.gov (United States)

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for the technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying setting, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant setting.
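
    A schematic form of the Cobb-Douglas stochastic frontier with a composed error term, as is standard in this literature (symbols and subscripts assumed; the paper's exact specification may differ):

        \ln y_{it} \;=\; \beta_0 + \sum_{k} \beta_k \ln x_{k,it} + v_{it} - u_{it},
        \qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \ge 0,

    where v_{it} is statistical noise, u_{it} is the one-sided inefficiency term (half-normal or truncated-normal), and technical efficiency is TE_{it} = exp(-u_{it}).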

  19. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Directory of Open Access Journals (Sweden)

    Md Zobaer Hasan

    Full Text Available The stock market is considered essential for economic growth and is expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE) market, using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated-normal and half-normal distributions were used in the model, and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that the truncated-normal distribution is preferable to the half-normal distribution for the technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market, for both distributions in the time-varying setting, whereas it was high for the investment group but low for the ceramic group, as compared with other groups in the DSE market, for both distributions in the time-invariant setting.

  20. gravity

    Indian Academy of Sciences (India)

    We study the cosmological dynamics for R^p exp(λR) gravity theory in the metric formalism, using the dynamical systems approach. Considering higher-dimensional FRW geometries in the case of an imperfect fluid which has two different scale factors in the normal and extra dimensions, we find the exact solutions, and study its ...

  1. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless, might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.

  2. Electricity price modeling with stochastic time change

    International Nuclear Information System (INIS)

    Borovkova, Svetlana; Schmeck, Maren Diane

    2017-01-01

    In this paper, we develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. This technique allows us to incorporate the characteristic features of electricity prices (such as seasonal volatility, time varying mean reversion and seasonally occurring price spikes) into the model in an elegant and economically justifiable way. The stochastic time change introduces stochastic as well as deterministic (e.g., seasonal) features in the price process' volatility and in the jump component. We specify the base process as a mean reverting jump diffusion and the time change as an absolutely continuous stochastic process with seasonal component. The activity rate of the stochastic time change can be related to the factors that influence supply and demand. Here we use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change, and show that this choice leads to realistic price paths. We derive properties of the resulting price process and develop the model calibration procedure. We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths by Monte Carlo simulations. We show that the simulated price process matches the distributional characteristics of the observed electricity prices in periods of both high and low demand. - Highlights: • We develop a novel approach to electricity price modeling, based on the powerful technique of stochastic time change. • We incorporate the characteristic features of electricity prices, such as seasonal volatility and spikes into the model. • We use the temperature as a proxy for the demand and hence, as the driving factor of the stochastic time change • We derive properties of the resulting price process and develop the model calibration procedure. • We calibrate the model to the historical EEX power prices and apply it to generating realistic price paths.
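
    A toy simulation in the spirit of the model is sketched below: a mean-reverting jump diffusion run on a seasonal clock. For simplicity the time change here is deterministic rather than stochastic and temperature-driven, and all parameter values are invented.

        import numpy as np

        # Toy simulation: a mean-reverting jump diffusion X run on a deterministic
        # seasonal clock tau(t) as a stand-in for the stochastic, temperature-driven
        # time change of the paper. All parameter values are illustrative.

        rng = np.random.default_rng(3)
        days = 365
        kappa, mu, sigma = 5.0, 40.0, 8.0        # mean reversion, level, volatility
        jump_rate, jump_mean = 4.0, 25.0         # jump intensity and mean jump size

        # seasonal activity rate: faster clock in winter (higher demand)
        t = np.arange(days) / 365.0
        activity = 1.0 + 0.6 * np.cos(2.0 * np.pi * t)
        d_tau = activity / 365.0                 # increments of the time change

        x = np.empty(days)
        x[0] = mu
        for i in range(1, days):
            dt = d_tau[i]
            dW = rng.normal(0.0, np.sqrt(dt))
            jump = jump_mean * rng.poisson(jump_rate * dt)
            x[i] = x[i - 1] + kappa * (mu - x[i - 1]) * dt + sigma * dW + jump

        print(f"simulated price: mean {x.mean():.1f}, max {x.max():.1f} EUR/MWh")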

  3. BOOK REVIEW: Quantum Gravity: third edition Quantum Gravity: third edition

    Science.gov (United States)

    Rovelli, Carlo

    2012-09-01

    The request by Classical and Quantum Gravity to review the third edition of Claus Kiefer's 'Quantum Gravity' puts me in a slightly awkward position. This is a remarkably good book, which every person working in quantum gravity should have on the shelf. But in my opinion quantum gravity has undergone some dramatic advances in the last few years, of which the book makes no mention. Perhaps the omission only attests to the current vitality of the field, where progress is happening fast, but it is strange for me to review a thoughtful, knowledgeable and comprehensive book on my own field of research, which ignores what I myself consider the most interesting results to date. Kiefer's book is unique as a broad introduction and a reliable overview of quantum gravity. There are numerous books in the field which (often notwithstanding titles) focus on a single approach. There are also countless conference proceedings and article collections aiming to be encyclopaedic, but offering disorganized patchworks. Kiefer's book is a careful and thoughtful presentation of all aspects of the immense problem of quantum gravity. Kiefer is very learned, and brings together three rare qualities: he is pedagogical, he is capable of simplifying matter to the bones and capturing the essential, and he offers a serious and balanced evaluation of views and ideas. In a fractured field based on a major problem that does not yet have a solution, these qualities are precious. I recommend Kiefer's book to my students entering the field: to work in quantum gravity one needs a vast amount of technical knowledge as well as a grasp of different ideas, and Kiefer's book offers this with remarkable clarity. This novel third edition simplifies and improves the presentation of several topics, but also adds very valuable new material on quantum gravity phenomenology, loop quantum cosmology, asymptotic safety, Horava-Lifshitz gravity, analogue gravity, the holographic principle, and more. This is a testament

  4. Stochastic modeling of consumer preferences for health care institutions.

    Science.gov (United States)

    Malhotra, N K

    1983-01-01

    This paper proposes a stochastic procedure for modeling consumer preferences via LOGIT analysis. First, a simple, non-technical exposition of the use of a stochastic approach in health care marketing is presented. Second, a study illustrating the application of the LOGIT model in assessing consumer preferences for hospitals is given. The paper concludes with several implications of the proposed approach.
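
    A minimal multinomial logit sketch of the kind of choice-probability calculation involved is given below; the hospital attributes and coefficients are invented for illustration.

        import numpy as np

        # Minimal multinomial logit sketch: choice probabilities for hospitals from a
        # linear utility. Attribute names and coefficients are invented.

        beta = np.array([1.2, -0.8, 0.5])            # weights: quality, distance, amenities
        hospitals = {
            "Hospital A": np.array([0.9, 0.3, 0.6]),
            "Hospital B": np.array([0.6, 0.1, 0.8]),
            "Hospital C": np.array([0.8, 0.7, 0.4]),
        }

        v = {name: float(beta @ x) for name, x in hospitals.items()}
        denom = sum(np.exp(u) for u in v.values())
        for name, u in v.items():
            print(f"{name}: choice probability {np.exp(u) / denom:.2f}")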

  5. Scale-invariant gravity: geometrodynamics

    International Nuclear Information System (INIS)

    Anderson, Edward; Barbour, Julian; Foster, Brendan; Murchadha, Niall O

    2003-01-01

    We present a scale-invariant theory, conformal gravity, which closely resembles the geometrodynamical formulation of general relativity (GR). While previous attempts to create scale-invariant theories of gravity have been based on Weyl's idea of a compensating field, our direct approach dispenses with this and is built by extension of the method of best matching w.r.t. scaling developed in the parallel particle dynamics paper by one of the authors. In spatially compact GR, there is an infinity of degrees of freedom that describe the shape of 3-space which interact with a single volume degree of freedom. In conformal gravity, the shape degrees of freedom remain, but the volume is no longer a dynamical variable. Further theories and formulations related to GR and conformal gravity are presented. Conformal gravity is successfully coupled to scalars and the gauge fields of nature. It should describe the solar system observations as well as GR does, but its cosmology and quantization will be completely different

  6. Adaptive Asymptotical Synchronization for Stochastic Complex Networks with Time-Delay and Markovian Switching

    Directory of Open Access Journals (Sweden)

    Xueling Jiang

    2014-01-01

    Full Text Available The problem of adaptive asymptotical synchronization is discussed for stochastic complex dynamical networks with time-delay and Markovian switching. By applying the stochastic analysis approach and the M-matrix method for stochastic complex networks, several sufficient conditions ensuring adaptive asymptotical synchronization for stochastic complex networks are derived. Through adaptive feedback control techniques, suitable parameter update laws are obtained. A simulation result is provided to substantiate the effectiveness and characteristics of the proposed approach.

  7. Productive efficiency of tea industry: A stochastic frontier approach

    African Journals Online (AJOL)

    USER

    2010-06-21

    Jun 21, 2010 ... Key words: Technical efficiency, stochastic frontier, translog ... present low performance of the tea industry in Bangladesh. ... The Technical inefficiency effect .... administrative, technical, clerical, sales and purchase staff.

  8. Stochastic quantization and gauge theories

    International Nuclear Information System (INIS)

    Kolck, U. van.

    1987-01-01

    Stochastic quantization is presented taking the Fluctuation-Dissipation Theorem as a guide. It is shown that the original approach of Parisi and Wu to gauge theories fails to give the right results for gauge invariant quantities when dimensional regularization is used. Although there is a simple solution in an abelian theory, in the non-abelian case it is probably necessary to start from a BRST invariant action instead of a gauge invariant one. Stochastic regularizations are also discussed. (author) [pt

  9. Stochastic partial differential equations a modeling, white noise functional approach

    CERN Document Server

    Holden, Helge; Ubøe, Jan; Zhang, Tusheng

    1996-01-01

    This book is based on research that, to a large extent, started around 1990, when a research project on fluid flow in stochastic reservoirs was initiated by a group including some of us with the support of VISTA, a research cooperation between the Norwegian Academy of Science and Letters and Den norske stats oljeselskap A.S. (Statoil). The purpose of the project was to use stochastic partial differential equations (SPDEs) to describe the flow of fluid in a medium where some of the parameters, e.g., the permeability, were stochastic or "noisy". We soon realized that the theory of SPDEs at the time was insufficient to handle such equations. Therefore it became our aim to develop a new mathematically rigorous theory that satisfied the following conditions. 1) The theory should be physically meaningful and realistic, and the corresponding solutions should make sense physically and should be useful in applications. 2) The theory should be general enough to handle many of the interesting SPDEs that occur in r...

  10. On a new approach for constructing wormholes in Einstein-Born-Infeld gravity

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Young [Kunsan National University, Department of Physics, Kunsan (Korea, Republic of); Park, Mu-In [Sogang University, Research Institute for Basic Science, Seoul (Korea, Republic of)

    2016-11-15

    We study a new approach for the wormhole construction in Einstein-Born-Infeld gravity, which does not require exotic matter in the Einstein equation. The Born-Infeld field equation is not modified by the coordinate-independent conditions of a continuous metric tensor and its derivatives, even though the Born-Infeld fields generally have discontinuities in their derivatives at the throat. We study the relation of the newly introduced conditions with the usual continuity equation for the energy-momentum tensor and the gravitational Bianchi identity. We find that there is no violation of the energy conditions for the Born-Infeld fields, contrary to the usual approaches. The exoticity of the energy-momentum tensor is not essential for sustaining wormholes. Some open problems are discussed. (orig.)

  11. Tail-constraining stochastic linear–quadratic control: a large deviation and statistical physics approach

    International Nuclear Information System (INIS)

    Chertkov, Michael; Kolokolov, Igor; Lebedev, Vladimir

    2012-01-01

    The standard definition of the stochastic risk-sensitive linear–quadratic (RS-LQ) control depends on the risk parameter, which is normally left to be set exogenously. We reconsider the classical approach and suggest two alternatives that resolve this spurious freedom naturally. One approach consists in seeking the minimum of the tail of the probability distribution function (PDF) of the cost functional at some large fixed value. Another option suggests minimizing the expectation value of the cost functional under a constraint on the value of the PDF tail. Under the assumption of resulting control stability, both problems are reduced to static optimizations over a stationary control matrix. The solutions are illustrated using the examples of scalar and 1D chain (string) systems. The large-deviation self-similar asymptotic of the cost functional PDF is analyzed. (paper)

  12. Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling

    Science.gov (United States)

    Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.

    2009-01-01

    Deterministic pathway models that describe the biochemical interactions of a group of related proteins, their complexes, activation through kinases, etc. are often the basis for many systems biology models. Low dose radiation effects present a unique set of challenges to these models, including the importance of stochastic effects due to the nature of radiation tracks and the small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGFβ-Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low dose radiation cancer risks. Modeling challenges include the introduction of stochastic models of radiation tracks, their relationships to more than one substrate species that perturb pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation, the development of a modular pathway approach is of interest.

  13. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance

  14. Quantum gravity from noncommutative spacetime

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jungjai [Daejin University, Pocheon (Korea, Republic of); Yang, Hyunseok [Korea Institute for Advanced Study, Seoul (Korea, Republic of)

    2014-12-15

    We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative *-algebra) of quantum gravity.

  15. Quantum gravity from noncommutative spacetime

    International Nuclear Information System (INIS)

    Lee, Jungjai; Yang, Hyunseok

    2014-01-01

    We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative *-algebra) of quantum gravity.

  16. Methods and models in mathematical biology deterministic and stochastic approaches

    CERN Document Server

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and  branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  17. Stochastic dynamics and irreversibility

    CERN Document Server

    Tomé, Tânia

    2015-01-01

    This textbook presents an exposition of stochastic dynamics and irreversibility. It comprises the principles of probability theory and the stochastic dynamics in continuous spaces, described by Langevin and Fokker-Planck equations, and in discrete spaces, described by Markov chains and master equations. Special concern is given to the study of irreversibility, both in systems that evolve to equilibrium and in nonequilibrium stationary states. Attention is also given to the study of models displaying phase transitions and critical phenomena, both in thermodynamic equilibrium and out of equilibrium. These models include the linear Glauber model, the Glauber-Ising model, lattice models with absorbing states such as the contact process and those used in population dynamics and the spreading of epidemics, probabilistic cellular automata, reaction-diffusion processes, random sequential adsorption and dynamic percolation. A stochastic approach to chemical reactions is also presented. The textbook is intended for students of ...

  18. Stochastic processes

    CERN Document Server

    Borodin, Andrei N

    2017-01-01

    This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.

  19. Introduction to probability and stochastic processes with applications

    CERN Document Server

    Castañ, Blanco; Arunachalam, Viswanathan; Dharmaraja, Selvamuthu

    2012-01-01

    An easily accessible, real-world approach to probability and stochastic processes Introduction to Probability and Stochastic Processes with Applications presents a clear, easy-to-understand treatment of probability and stochastic processes, providing readers with a solid foundation they can build upon throughout their careers. With an emphasis on applications in engineering, applied sciences, business and finance, statistics, mathematics, and operations research, the book features numerous real-world examples that illustrate how random phenomena occur in nature and how to use probabilistic t

  20. Stochastic models of the Social Security trust funds.

    Science.gov (United States)

    Burdick, Clark; Manchester, Joyce

    Each year in March, the Board of Trustees of the Social Security trust funds reports on the current and projected financial condition of the Social Security programs. Those programs, which pay monthly benefits to retired workers and their families, to the survivors of deceased workers, and to disabled workers and their families, are financed through the Old-Age, Survivors, and Disability Insurance (OASDI) Trust Funds. In their 2003 report, the Trustees present, for the first time, results from a stochastic model of the combined OASDI trust funds. Stochastic modeling is an important new tool for Social Security policy analysis and offers the promise of valuable new insights into the financial status of the OASDI trust funds and the effects of policy changes. The results presented in this article demonstrate that several stochastic models deliver broadly consistent results even though they use very different approaches and assumptions. However, they also show that the variation in trust fund outcomes differs as the approach and assumptions are varied. Which approach and assumptions are best suited for Social Security policy analysis remains an open question. Further research is needed before the promise of stochastic modeling is fully realized. For example, neither parameter uncertainty nor variability in ultimate assumption values is recognized explicitly in the analyses. Despite this caveat, stochastic modeling results are already shedding new light on the range and distribution of trust fund outcomes that might occur in the future.

  1. Massive gravity from bimetric gravity

    International Nuclear Information System (INIS)

    Baccetti, Valentina; Martín-Moruno, Prado; Visser, Matt

    2013-01-01

    We discuss the subtle relationship between massive gravity and bimetric gravity, focusing particularly on the manner in which massive gravity may be viewed as a suitable limit of bimetric gravity. The limiting procedure is more delicate than currently appreciated. Specifically, this limiting procedure should not unnecessarily constrain the background metric, which must be externally specified by the theory of massive gravity itself. The fact that in bimetric theories one always has two sets of metric equations of motion continues to have an effect even in the massive gravity limit, leading to additional constraints besides the one set of equations of motion naively expected. Thus, since solutions of bimetric gravity in the limit of vanishing kinetic term are also solutions of massive gravity, but the contrary statement is not necessarily true, there is no complete continuity in the parameter space of the theory. In particular, we study the massive cosmological solutions which are continuous in the parameter space, showing that many interesting cosmologies belong to this class. (paper)

  2. Visualisation for Stochastic Process Algebras: The Graphic Truth

    DEFF Research Database (Denmark)

    Smith, Michael James Andrew; Gilmore, Stephen

    2011-01-01

    and stochastic activity networks provide an automaton-based view of the model, which may be easier to visualise, at the expense of portability. In this paper, we argue that we can achieve the benefits of both approaches by generating a graphical view of a stochastic process algebra model, which is synchronised...

  3. Stochastic thermodynamics

    Science.gov (United States)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
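
    A schematic trajectory-level first law in the spirit of Sekimoto's stochastic energetics, for an overdamped Langevin dynamics \gamma\dot{x} = -\partial_x U(x,\lambda) + \xi(t) (the signs follow one common convention and are an assumption here):

        dU \;=\; \delta W - \delta Q, \qquad
        \delta W \;=\; \frac{\partial U}{\partial \lambda}\, d\lambda, \qquad
        \delta Q \;=\; \bigl(\gamma \dot{x} - \xi(t)\bigr) \circ dx,

    where \circ denotes the Stratonovich product and \delta Q is the heat delivered to the bath; summing these increments along a single fluctuating trajectory gives the trajectory-wise work, heat and energy balance referred to above.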

  4. Nonlinear Damping Identification in Nonlinear Dynamic System Based on Stochastic Inverse Approach

    Directory of Open Access Journals (Sweden)

    S. L. Han

    2012-01-01

    Full Text Available A nonlinear model is crucial to prepare, supervise, and analyze mechanical systems. In this paper, a new nonparametric and output-only identification procedure for nonlinear damping is studied. By introducing the concept of the stochastic state space, we formulate a stochastic inverse problem for a nonlinear damping. The solution of the stochastic inverse problem is designed as a probabilistic expression via the hierarchical Bayesian formulation, by considering various uncertainties such as the insufficiency of information on the parameters of interest or errors in measurement. The probability space is estimated using Markov chain Monte Carlo (MCMC). The applicability of the proposed method is demonstrated through a numerical experiment and a particular application to a realistic problem related to ship roll motion.

  5. Stochastic theory of relaxation and collisional broadening of spectral line shapes

    International Nuclear Information System (INIS)

    Faid, K.

    1986-01-01

    A complete stochastic theory of relaxation is developed in terms of a homogeneous equation for the averaged density matrix of a system immersed in a thermal bath. This theory is then used as the basis of a new stochastic approach to the phenomenon of collisional broadening of spectral line shapes. Single-photon and multiphoton processes are studied. The features of a line shape are linked by simple expressions to the statistical properties of a stochastic hermitian Hamiltonian. The ordinary line shape predicted by Kubo's approach is generalized. The present approach predicts broadening as well as asymmetry and shift. A representation of line shapes in multiphoton processes by diagrams is also developed

  6. Parameter-free resolution of the superposition of stochastic signals

    Energy Technology Data Exchange (ETDEWEB)

    Scholz, Teresa, E-mail: tascholz@fc.ul.pt [Center for Theoretical and Computational Physics, University of Lisbon (Portugal); Raischel, Frank [Center for Geophysics, IDL, University of Lisbon (Portugal); Closer Consulting, Av. Eng. Duarte Pacheco Torre 1 15º, 1070-101 Lisboa (Portugal); Lopes, Vitor V. [DEIO-CIO, University of Lisbon (Portugal); UTEC–Universidad de Ingeniería y Tecnología, Lima (Peru); Lehle, Bernd; Wächter, Matthias; Peinke, Joachim [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Lind, Pedro G. [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Institute of Physics, University of Osnabrück, Osnabrück (Germany)

    2017-01-30

    This paper presents a direct method to obtain the deterministic and stochastic contribution of the sum of two independent stochastic processes, one of which is an Ornstein–Uhlenbeck process and the other a general (non-linear) Langevin process. The method is able to distinguish between the stochastic processes, retrieving their corresponding stochastic evolution equations. This framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, which is here extended to impose neither constraints nor parameters and extract all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.
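
    A sketch of how synthetic test data of this kind could be generated (hypothetical coefficients, Euler-Maruyama discretisation): an Ornstein–Uhlenbeck process is superposed on an independent nonlinear (bistable-drift) Langevin process, and only the sum is treated as observed.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt, n = 1e-3, 100_000

    # Ornstein-Uhlenbeck process: dY = -gamma*Y dt + sigma_y dW_y
    gamma, sigma_y = 5.0, 1.0
    # nonlinear Langevin process with bistable drift: dX = (X - X^3) dt + sigma_x dW_x
    sigma_x = 0.5

    x = np.empty(n)
    y = np.empty(n)
    x[0] = y[0] = 0.0
    for i in range(n - 1):
        x[i + 1] = x[i] + (x[i] - x[i] ** 3) * dt + sigma_x * np.sqrt(dt) * rng.normal()
        y[i + 1] = y[i] - gamma * y[i] * dt + sigma_y * np.sqrt(dt) * rng.normal()

    observed = x + y   # only this superposition would be available to the reconstruction method
    ```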

  7. Stochastic forward and inverse groundwater flow and solute transport modeling

    NARCIS (Netherlands)

    Janssen, G.M.C.M.

    2008-01-01

    Keywords: calibration, inverse modeling, stochastic modeling, nonlinear biodegradation, stochastic-convective, advective-dispersive, travel time, network design, non-Gaussian distribution, multimodal distribution, representers

    This thesis offers three new approaches that contribute

  8. US residential energy demand and energy efficiency: A stochastic demand frontier approach

    International Nuclear Information System (INIS)

    Filippini, Massimo; Hunt, Lester C.

    2012-01-01

    This paper estimates a US frontier residential aggregate energy demand function using panel data for 48 ‘states’ over the period 1995 to 2007, applying stochastic frontier analysis (SFA). Utilizing an econometric energy demand model, the (in)efficiency of each state is modeled, and it is argued that this represents a measure of the inefficient use of residential energy in each state (i.e. ‘waste energy’). The underlying efficiency is thus estimated for each state, as is the relative efficiency across states. Moreover, the analysis suggests that energy intensity is not necessarily a good indicator of energy efficiency, whereas the measure obtained via this approach, which controls for a range of economic and other factors, is. This is a novel approach to modeling residential energy demand and efficiency, and it is arguably particularly relevant given current US energy policy discussions related to energy efficiency.

  9. Fast image reconstruction for Compton camera using stochastic origin ensemble approach.

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2011-01-01

    The Compton camera has been proposed as a potential imaging tool in astronomy, industry, homeland security, and medical diagnostics. Due to the inherent geometrical complexity of Compton camera data, image reconstruction of distributed sources can be ineffective and/or time-consuming when using standard techniques such as filtered backprojection or maximum-likelihood expectation maximization (ML-EM). In this article, the authors demonstrate a fast reconstruction of Compton camera data using a novel stochastic origin ensembles (SOE) approach based on Markov chains. During image reconstruction, the origins of the measured events are randomly assigned to locations on conical surfaces, which are the Compton camera analogs of lines-of-response in PET. The image is therefore defined as an ensemble of origin locations of all possible event origins. During the course of reconstruction, the origins of events are stochastically moved and the acceptance of the new event origin is determined by a predefined acceptance probability, which is proportional to the change in event density. For example, if the event density at the new location is higher than at the previous location, the new position is always accepted. After several iterations, the reconstructed distribution of origins converges to a quasi-stationary state which can be voxelized and displayed. Comparison with list-mode ML-EM reveals that the postfiltered SOE algorithm has similar performance in terms of image quality while clearly outperforming ML-EM in terms of reconstruction time. In this study, the authors have implemented and tested a new image reconstruction algorithm for the Compton camera based on stochastic origin ensembles with Markov chains. The algorithm uses list-mode data, is parallelizable, and can be used for any Compton camera geometry. The SOE algorithm clearly outperforms list-mode ML-EM for a simple Compton camera geometry in terms of reconstruction time. The difference in computational time
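
    The acceptance rule at the heart of the SOE approach can be illustrated with a deliberately simplified one-dimensional toy (it ignores the conical back-projection geometry of a real Compton camera): each event origin is moved at random within its allowed locus, and the move is accepted with a probability given by the ratio of event densities, so origins accumulate where the density is already high.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # toy "detector": each event only constrains its origin to lie within an interval
    # (a 1D stand-in for a Compton cone); the true source sits around x = 0.5
    n_events, n_bins, half_width = 2000, 50, 0.25
    true_origin = rng.normal(0.5, 0.05, n_events).clip(0, 1)
    lo = np.clip(true_origin - half_width, 0, 1)
    hi = np.clip(true_origin + half_width, 0, 1)

    # start with each origin placed at a random allowed position
    origin = rng.uniform(lo, hi)
    counts = np.bincount((origin * n_bins).astype(int).clip(0, n_bins - 1), minlength=n_bins)

    for sweep in range(100):
        for e in range(n_events):
            old_bin = min(int(origin[e] * n_bins), n_bins - 1)
            prop = rng.uniform(lo[e], hi[e])           # random move along the allowed locus
            new_bin = min(int(prop * n_bins), n_bins - 1)
            if new_bin == old_bin:
                continue
            # accept with probability given by the ratio of event densities
            if rng.random() < min(1.0, (counts[new_bin] + 1) / counts[old_bin]):
                counts[old_bin] -= 1
                counts[new_bin] += 1
                origin[e] = prop

    image = counts / counts.sum()   # quasi-stationary ensemble of origins ~ reconstructed source
    print(image.round(3))
    ```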

  10. GRACE gravity field modeling with an investigation on correlation between nuisance parameters and gravity field coefficients

    Science.gov (United States)

    Zhao, Qile; Guo, Jing; Hu, Zhigang; Shi, Chuang; Liu, Jingnan; Cai, Hua; Liu, Xianglin

    2011-05-01

    The GRACE (Gravity Recovery And Climate Experiment) monthly gravity models have been independently produced and published by several research institutions, such as the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), Jet Propulsion Laboratory (JPL), Centre National d’Etudes Spatiales (CNES) and Delft Institute of Earth Observation and Space Systems (DEOS). According to their processing standards, these institutions use the traditional variational approach, except DEOS, which exploits the acceleration approach. The background force models employed are rather similar. The produced gravity field models generally agree with one another in the spatial pattern. However, there are some discrepancies in the gravity signal amplitude between solutions produced by different institutions. In particular, 10%-30% signal amplitude differences in some river basins can be observed. In this paper, we implemented a variant of the traditional variational approach and computed two sets of monthly gravity field solutions using the data from January 2005 to December 2006. The input data are K-band range-rates (KBRR) and kinematic orbits of the GRACE satellites. The main difference in the production of our two types of models is how the nuisance parameters are dealt with. These parameters are necessary to absorb low-frequency errors in the data, which are mainly aliasing and instrument errors. One way is to remove the nuisance parameters before estimating the geopotential coefficients, called the NPARB approach in this paper. The other way is to estimate the nuisance parameters and geopotential coefficients simultaneously, called the NPESS approach. These two types of solutions mainly differ in the geopotential coefficients from degree 2 to 5. This can be explained by the fact that the nuisance parameters and the gravity field coefficients are highly correlated, particularly at low degrees. We compare these solutions with the official and published ones by means of spectral analysis. It is

  11. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
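
    One way to read "a sum of time-correlated random variables" is as an AR(1) surrogate for the local truncation error. The sketch below (all parameters hypothetical, with placeholder dual weights) draws such correlated error sequences and accumulates them into an ensemble of dual-weighted goal errors, from which a stochastic error bound follows.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_steps, dt = 1000, 0.1

    # temporal fluctuations of the local truncation error modelled as AR(1);
    # amplitude and correlation time would be estimated from high-resolution, near-initial data
    sigma_eps, tau = 1e-3, 5.0 * dt
    phi = np.exp(-dt / tau)                  # lag-one autocorrelation

    weights = np.ones(n_steps)               # placeholder dual (adjoint) weights

    def goal_error_sample():
        eps, err = 0.0, 0.0
        for k in range(n_steps):
            eps = phi * eps + np.sqrt(1.0 - phi ** 2) * rng.normal(0.0, sigma_eps)
            err += weights[k] * eps          # dual-weighted accumulation of the truncation error
        return err

    samples = np.array([goal_error_sample() for _ in range(500)])
    print("stochastic goal-error bound (2 sigma): %.3e" % (2.0 * samples.std()))
    ```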

  12. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.

  13. Monthly gravity field solutions based on GRACE observations generated with the Celestial Mechanics Approach

    Science.gov (United States)

    Meyer, Ulrich; Jäggi, Adrian; Beutler, Gerhard

    2012-09-01

    The main objective of the Gravity Recovery And Climate Experiment (GRACE) satellite mission consists of determining the temporal variations of the Earth's gravity field. These variations are captured by time series of gravity field models of limited resolution at, e.g., monthly intervals. We present a new time series of monthly models, which was computed with the so-called Celestial Mechanics Approach (CMA), developed at the Astronomical Institute of the University of Bern (AIUB). The secular and seasonal variations in the monthly models are tested for statistical significance. Calibrated errors are derived from inter-annual variations. The time-variable signal can be extracted at least up to degree 60, but the gravity field coefficients of orders above 45 are heavily contaminated by noise. This is why a series of monthly models is computed up to a maximum degree of 60, but only a maximum order of 45. Spectral analysis of the residual time-variable signal shows a distinctive peak at a period of 160 days, which shows up in particular in the C20 spherical harmonic coefficient. Basic filter- and scaling-techniques are introduced to evaluate the monthly models. For this purpose, the variability over the oceans is investigated, which serves as a measure for the noisiness of the models. The models in selected regions show the expected seasonal and secular variations, which are in good agreement with the monthly models of the Helmholtz Centre Potsdam, German Research Centre for Geosciences (GFZ). The results also reveal a few small outliers, illustrating the necessity for improved data screening. Our monthly models are available at the web page of the International Centre for Global Earth Models (ICGEM).

  14. Stochasticity Modeling in Memristors

    KAUST Repository

    Naous, Rawan

    2015-10-26

    Diverse models have been proposed over the past years to explain the exhibited behavior of memristors, the fourth fundamental circuit element. The models vary in complexity, ranging from descriptions of physical mechanisms to more generalized mathematical modeling. Nonetheless, stochasticity, a widely observed phenomenon, has been largely overlooked from the modeling perspective. This inherent variability in the operation of the memristor is a vital feature for the integration of this nonlinear device into the realm of stochastic electronics. In this paper, experimentally observed innate stochasticity is modeled in a circuit-compatible format. The proposed model is generic and could be incorporated into variants of threshold-based memristor models in which apparent variations in the output hysteresis convey the switching-threshold shift. Further application as a noise-injection alternative paves the way for novel approaches in neuromorphic circuit design. On the other hand, extra caution needs to be paid to variability-intolerant digital designs based on non-deterministic memristor logic.

  15. Stochasticity Modeling in Memristors

    KAUST Repository

    Naous, Rawan; Al-Shedivat, Maruan; Salama, Khaled N.

    2015-01-01

    Diverse models have been proposed over the past years to explain the exhibited behavior of memristors, the fourth fundamental circuit element. The models vary in complexity, ranging from descriptions of physical mechanisms to more generalized mathematical modeling. Nonetheless, stochasticity, a widely observed phenomenon, has been largely overlooked from the modeling perspective. This inherent variability in the operation of the memristor is a vital feature for the integration of this nonlinear device into the realm of stochastic electronics. In this paper, experimentally observed innate stochasticity is modeled in a circuit-compatible format. The proposed model is generic and could be incorporated into variants of threshold-based memristor models in which apparent variations in the output hysteresis convey the switching-threshold shift. Further application as a noise-injection alternative paves the way for novel approaches in neuromorphic circuit design. On the other hand, extra caution needs to be paid to variability-intolerant digital designs based on non-deterministic memristor logic.

  16. Energy-Efficient FPGA-Based Parallel Quasi-Stochastic Computing

    Directory of Open Access Journals (Sweden)

    Ramu Seva

    2017-11-01

    Full Text Available The high performance of FPGA (Field Programmable Gate Array) in image processing applications is justified by its flexible reconfigurability, its inherent parallel nature and the availability of a large amount of internal memories. Lately, the Stochastic Computing (SC) paradigm has been found to be significantly advantageous in certain application domains including image processing because of its lower hardware complexity and power consumption. However, its viability is deemed to be limited due to its serial bitstream processing and excessive run-time requirement for convergence. To address these issues, a novel approach is proposed in this work where an energy-efficient implementation of SC is accomplished by introducing fast-converging Quasi-Stochastic Number Generators (QSNGs) and parallel stochastic bitstream processing, which are well suited to leverage FPGA’s reconfigurability and abundant internal memory resources. The proposed approach has been tested on the Virtex-4 FPGA, and results have been compared with the serial and parallel implementations of conventional stochastic computation using the well-known SC edge detection and multiplication circuits. Results prove that by using this approach, execution time, as well as the power consumption are decreased by a factor of 3.5 and 4.5 for the edge detection circuit and multiplication circuit, respectively.
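
    For background, conventional stochastic-computing multiplication, whose slow serial convergence is the limitation the quasi-stochastic/parallel scheme targets, can be sketched in a few lines (values illustrative): two probabilities are encoded as Bernoulli bitstreams and multiplied by a bitwise AND, and the accuracy improves only slowly with bitstream length.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def sc_multiply(a, b, n_bits):
        """Stochastic-computing product: encode a and b as Bernoulli bitstreams and AND them."""
        sa = rng.random(n_bits) < a
        sb = rng.random(n_bits) < b
        return np.mean(sa & sb)            # fraction of 1s estimates a*b

    a, b = 0.6, 0.35
    for n_bits in (64, 1024, 16384):
        est = sc_multiply(a, b, n_bits)
        print(f"{n_bits:6d} bits: estimate {est:.4f}, error {abs(est - a * b):.4f}")
    ```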

  17. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    KAUST Repository

    Cotter, Simon L.

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems. © 2011 American Institute of Physics.
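
    A toy illustration of the constrained idea (a made-up two-species network, not the paper's examples): the slow species S is clamped at a value s, only the fast sub-network is advanced with Gillespie's stochastic simulation algorithm, and the time-averaged slow-reaction propensities yield the effective drift and diffusion that would enter the Langevin/Fokker–Planck approximation for S.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # toy network -- fast: F <-> G (rates kf, kb); slow: G -> G + S (ks), S -> 0 (kd*S)
    kf, kb, ks, kd = 100.0, 80.0, 1.0, 0.1
    n_total = 50                            # conserved number of fast molecules, F + G

    def effective_drift_diffusion(s, T=5.0):
        """Clamp S = s, run the fast sub-network with the SSA, and time-average
        the slow-reaction propensities into an effective drift and diffusion."""
        F = n_total // 2
        t = drift = diff = 0.0
        while t < T:
            G = n_total - F
            a_forward, a_backward = kf * F, kb * G      # fast propensities
            a_prod, a_deg = ks * G, kd * s              # slow propensities (S held fixed)
            a_fast = a_forward + a_backward
            dt = rng.exponential(1.0 / a_fast)
            drift += (a_prod - a_deg) * dt              # sum_j nu_j <alpha_j>
            diff += (a_prod + a_deg) * dt               # sum_j nu_j^2 <alpha_j>
            F += -1 if rng.random() < a_forward / a_fast else 1
            t += dt
        return drift / T, diff / T

    for s in (0, 50, 100, 200):
        a, b = effective_drift_diffusion(s)
        print(f"S = {s:3d}: effective drift {a:6.2f}, effective diffusion {b:6.2f}")
    ```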

  18. BOOK REVIEW: Canonical Gravity and Applications: Cosmology, Black Holes, and Quantum Gravity Canonical Gravity and Applications: Cosmology, Black Holes, and Quantum Gravity

    Science.gov (United States)

    Husain, Viqar

    2012-03-01

    Research on quantum gravity from a non-perturbative 'quantization of geometry' perspective has been the focus of much research in the past two decades, due to the Ashtekar-Barbero Hamiltonian formulation of general relativity. This approach provides an SU(2) gauge field as the canonical configuration variable; the analogy with Yang-Mills theory at the kinematical level opened up some research space to reformulate the old Wheeler-DeWitt program into what is now known as loop quantum gravity (LQG). The author is known for his work in the LQG approach to cosmology, which was the first application of this formalism that provided the possibility of exploring physical questions. Therefore the flavour of the book is naturally informed by this history. The book is based on a set of graduate-level lectures designed to impart a working knowledge of the canonical approach to gravitation. It is more of a textbook than a treatise, unlike three other recent books in this area by Kiefer [1], Rovelli [2] and Thiemann [3]. The style and choice of topics of these authors are quite different; Kiefer's book provides a broad overview of the path integral and canonical quantization methods from a historical perspective, whereas Rovelli's book focuses on philosophical and formalistic aspects of the problems of time and observables, and gives a development of spin-foam ideas. Thiemann's is much more a mathematical physics book, focusing entirely on the theory of representing constraint operators on a Hilbert space and charting a mathematical trajectory toward a physical Hilbert space for quantum gravity. The significant difference from these books is that Bojowald covers mainly classical topics until the very last chapter, which contains the only discussion of quantization. In its coverage of classical gravity, the book has some content overlap with Poisson's book [4], and with Ryan and Shepley's older work on relativistic cosmology [5]; for instance the contents of chapter five of the

  19. Approaches to Validation of Models for Low Gravity Fluid Behavior

    Science.gov (United States)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that: most of the data are described by empirical correlations rather than fundamental relations; detailed measurements of the flow field have not been made; free surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, whereas the zero-gravity time available has been only seconds.

  20. Horizon thermodynamics in fourth-order gravity

    Directory of Open Access Journals (Sweden)

    Meng-Sen Ma

    2017-03-01

    Full Text Available In the framework of horizon thermodynamics, the field equations of Einstein gravity and some other second-order gravities can be rewritten as the thermodynamic identity: dE=TdS−PdV. However, in order to construct the horizon thermodynamics in higher-order gravity, we first have to simplify the field equations. In this paper, we study fourth-order gravity and convert it to second-order gravity via a so-called “Legendre transformation”, at the cost of introducing two other fields besides the metric field. With this simplified theory, we implement the conventional procedure for constructing the horizon thermodynamics in 3- and 4-dimensional spacetimes. We find that the field equations in the fourth-order gravity can also be written as the thermodynamic identity. Moreover, we can use this approach to derive the same black hole mass as that obtained by other methods.

  1. Group field theory and simplicial quantum gravity

    International Nuclear Information System (INIS)

    Oriti, D

    2010-01-01

    We present a new group field theory for 4D quantum gravity. It incorporates the constraints that give gravity from BF theory and has quantum amplitudes with the explicit form of simplicial path integrals for first-order gravity. The geometric interpretation of the variables and of the contributions to the quantum amplitudes is manifest. This allows a direct link with other simplicial gravity approaches, like quantum Regge calculus, in the form of the amplitudes of the model, and dynamical triangulations, which we show to correspond to a simple restriction of the same.

  2. Spatial stochasticity and non-continuum effects in gas flows

    Energy Technology Data Exchange (ETDEWEB)

    Dadzie, S. Kokou, E-mail: k.dadzie@glyndwr.ac.uk [Mechanical and Aeronautical Engineering, Glyndwr University, Mold Road, Wrexham LL11 2AW (United Kingdom); Reese, Jason M., E-mail: jason.reese@strath.ac.uk [Department of Mechanical and Aerospace Engineering, University of Strathclyde, Glasgow G1 1XJ (United Kingdom)

    2012-02-06

    We investigate the relationship between spatial stochasticity and non-continuum effects in gas flows. A kinetic model for a dilute gas is developed using strictly stochastic molecular-model reasoning, without primarily referring to either the Liouville or the Boltzmann equations for dilute gases. The kinetic equation, a stochastic version of the well-known deterministic Boltzmann equation for a dilute gas, is then associated with a set of macroscopic equations for the case of a monatomic gas. Tests based on a heat conduction configuration and sound wave dispersion show that spatial stochasticity can explain some non-continuum effects seen in gases. -- Highlights: ► We investigate the effects of molecular spatial stochasticity in the non-continuum regime. ► We present a simplified spatial stochastic kinetic equation. ► We present spatial stochastic macroscopic flow equations. ► We show the effects of the new model on sound wave dispersion prediction. ► We show the effects of the new approach on density profiles in heat conduction.

  3. Stochastic integration and differential equations

    CERN Document Server

    Protter, Philip E

    2003-01-01

    It has been 15 years since the first edition of Stochastic Integration and Differential Equations, A New Approach appeared, and in those years many other texts on the same subject have been published, often with connections to applications, especially mathematical finance. Yet in spite of the apparent simplicity of approach, none of these books has used the functional analytic method of presenting semimartingales and stochastic integration. Thus a 2nd edition seems worthwhile and timely, though it is no longer appropriate to call it "a new approach". The new edition has several significant changes, most prominently the addition of exercises for solution. These are intended to supplement the text, but lemmas needed in a proof are never relegated to the exercises. Many of the exercises have been tested by graduate students at Purdue and Cornell Universities. Chapter 3 has been completely redone, with a new, more intuitive and simultaneously elementary proof of the fundamental Doob-Meyer decomposition theorem, t...

  4. A stochastic aerodynamic model for stationary blades in unsteady 3D wind fields

    International Nuclear Information System (INIS)

    Fluck, Manuel; Crawford, Curran

    2016-01-01

    Dynamic loads play an important role in the design of wind turbines, but establishing the life-time aerodynamic loads (e.g. extreme and fatigue loads) is a computationally expensive task. Conventional (deterministic) methods to analyze long term loads, which rely on the repeated analysis of multiple different wind samples, are usually too expensive to be included in optimization routines. We present a new stochastic approach, which solves the aerodynamic system equations (Lagrangian vortex model) in the stochastic space, and thus arrives directly at a stochastic description of the coupled loads along a turbine blade. This new approach removes the requirement of analyzing multiple different realizations. Instead, long term loads can be extracted from a single stochastic solution, a procedure that is obviously significantly faster. Despite the reduced analysis time, results obtained from the stochastic approach match deterministic results well for a simple test case (a stationary blade). In future work, the stochastic method will be extended to rotating blades, thus opening up new avenues to include long term loads in turbine optimization. (paper)

  5. PREFACE: Conceptual and Technical Challenges for Quantum Gravity 2014 - Parallel session: Noncommutative Geometry and Quantum Gravity

    Science.gov (United States)

    Martinetti, P.; Wallet, J.-C.; Amelino-Camelia, G.

    2015-08-01

    The conference Conceptual and Technical Challenges for Quantum Gravity at Sapienza University of Rome, from 8 to 12 September 2014, provided a beautiful opportunity for an encounter between different approaches and different perspectives on the quantum-gravity problem. It contributed to a higher level of shared knowledge among the quantum-gravity communities pursuing each specific research program. There were plenary talks on many different approaches, including in particular string theory, loop quantum gravity, spacetime noncommutativity, causal dynamical triangulations, asymptotic safety and causal sets. Contributions from the perspective of philosophy of science were also welcomed. In addition, several parallel sessions were organized. The present volume collects contributions from the Noncommutative Geometry and Quantum Gravity parallel session (partially funded by CNRS PEPS/PTI ''Metric aspect of noncommutative geometry: from Monge to Higgs''), with additional invited contributions from specialists in the field. Noncommutative geometry in its many incarnations appears at the crossroads of many research areas in theoretical and mathematical physics: • from models of quantum space-time (with or without breaking of Lorentz symmetry) to loop gravity and string theory, • from early considerations on UV-divergencies in quantum field theory to recent models of gauge theories on noncommutative spacetime, • from Connes' description of the standard model of elementary particles to recent Pati-Salam-like extensions. This volume provides an overview of these various topics, interesting for the specialist as well as accessible to the newcomer.

  6. Stochastic equations theory and applications in acoustics, hydrodynamics, magnetohydrodynamics, and radiophysics

    CERN Document Server

    Klyatskin, Valery I

    2015-01-01

    This monograph set presents a consistent and self-contained framework of stochastic dynamic systems with maximal possible completeness. Volume 1 presents the basic concepts, exact results, and asymptotic approximations of the theory of stochastic equations on the basis of the developed functional approach. This approach offers the possibility of both obtaining exact solutions to stochastic problems for a number of models of fluctuating parameters and constructing various asymptotic approximations. Ideas of statistical topography are used to discuss general issues of generating coherent structures from chaos with probability one, i.e., almost in every individual realization of the random parameters. The general theory is illustrated with certain problems and applications of stochastic mathematical physics in various fields such as mechanics, hydrodynamics, magnetohydrodynamics, acoustics, optics, and radiophysics.

  7. Stochastic failure modelling of unidirectional composite ply failure

    International Nuclear Information System (INIS)

    Whiteside, M.B.; Pinho, S.T.; Robinson, P.

    2012-01-01

    Stochastic failure envelopes are generated through parallelised Monte Carlo simulation of physically based failure criteria for unidirectional carbon fibre/epoxy matrix composite plies. Two examples are presented to demonstrate the consequences for failure prediction of both statistical interaction of failure modes and uncertainty in global misalignment. Global variance-based Sobol sensitivity indices are computed to decompose the observed variance within the stochastic failure envelopes into contributions from physical input parameters. The paper highlights a selection of the potential advantages stochastic methodologies offer over the traditional deterministic approach.

  8. Stochastic Analysis 2010

    CERN Document Server

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  9. Stochastic Effects; Application in Nuclear Physics

    International Nuclear Information System (INIS)

    Mazonka, O.

    2000-04-01

    Stochastic effects in nuclear physics refer to the study of the dynamics of nuclear systems evolving under stochastic equations of motion. In this dissertation we restrict our attention to classical scattering models. We begin with an introduction of the model of nuclear dynamics and the deterministic equations of evolution. We apply a Langevin approach, an additional feature of the model that reflects the statistical nature of low-energy nuclear behaviour. We then concentrate our attention on the problem of calculating the tails of distribution functions, which is in fact the problem of calculating the probabilities of rare outcomes. Two general strategies are proposed. Results and discussion follow. Finally, in the appendix, we consider stochastic effects in nonequilibrium systems. A few exactly solvable models are presented. For one model we show explicitly that stochastic behaviour in a microscopic description can lead to ordered collective effects on the macroscopic scale. Two others are solved to confirm the predictions of the fluctuation theorem. (author)

  10. Stochastic light-cone CTMRG: a new DMRG approach to stochastic models (PACS 02.50.Ey Stochastic processes; 64.60.Ht Dynamic critical phenomena; 02.70.-c Computational techniques; 05.10.Cc Renormalization group methods)

    CERN Document Server

    Kemper, A; Nishino, T; Schadschneider, A; Zittartz, J

    2003-01-01

    We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps of more than 10^5 shows a considerable improvement on the old stochastic TMRG algorithm.

  11. Measuring energy efficiency under heterogeneous technologies using a latent class stochastic frontier approach: An application to Chinese energy economy

    International Nuclear Information System (INIS)

    Lin, Boqiang; Du, Kerui

    2014-01-01

    The importance of technology heterogeneity in estimating economy-wide energy efficiency has been emphasized by recent literature. Some studies use the metafrontier analysis approach to estimate energy efficiency. However, for such studies, some reliable priori information is needed to divide the sample observations properly, which causes a difficulty in unbiased estimation of energy efficiency. Moreover, separately estimating group-specific frontiers might lose some common information across different groups. In order to overcome these weaknesses, this paper introduces a latent class stochastic frontier approach to measure energy efficiency under heterogeneous technologies. An application of the proposed model to Chinese energy economy is presented. Results show that the overall energy efficiency of China's provinces is not high, with an average score of 0.632 during the period from 1997 to 2010. - Highlights: • We introduce a latent class stochastic frontier approach to measure energy efficiency. • Ignoring technological heterogeneity would cause biased estimates of energy efficiency. • An application of the proposed model to Chinese energy economy is presented. • There is still a long way for China to develop an energy efficient regime

  12. Modeling the future evolution of the virtual water trade network: A combination of network and gravity models

    Science.gov (United States)

    Sartori, Martina; Schiavo, Stefano; Fracasso, Andrea; Riccaboni, Massimo

    2017-12-01

    The paper investigates how the topological features of the virtual water (VW) network and the size of the associated VW flows are likely to change over time, under different socio-economic and climate scenarios. We combine two alternative models of network formation - a stochastic and a fitness model, used to describe the structure of VW flows - with a gravity model of trade to predict the intensity of each bilateral flow. This combined approach is superior to existing methodologies in its ability to replicate the observed features of VW trade. The insights from the models are used to forecast future VW flows in 2020 and 2050, under different climatic scenarios, and compare them with future water availability. Results suggest that the current trend of VW exports is not sustainable for all countries. Moreover, our approach highlights that some VW importers might be exposed to "imported water stress" as they rely heavily on imports from countries whose water use is unsustainable.
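
    The gravity-model step can be summarised schematically (coefficients below are purely illustrative): once the network model has decided which trade links exist, the intensity of each bilateral virtual-water flow is predicted from the partners' economic sizes and their distance via a log-linear gravity equation.

    ```python
    import numpy as np

    # hypothetical gravity-model coefficients:
    # log VW_ij = k + a*log(GDP_i) + b*log(GDP_j) - c*log(dist_ij)
    k, a, b, c = -2.0, 0.8, 0.7, 1.1

    def predicted_flow(gdp_i, gdp_j, dist_ij, link_exists=True):
        """Bilateral virtual-water flow implied by the log-linear gravity equation."""
        if not link_exists:                 # link existence comes from the network-formation model
            return 0.0
        return np.exp(k + a * np.log(gdp_i) + b * np.log(gdp_j) - c * np.log(dist_ij))

    print(predicted_flow(1.5e12, 4.0e11, 2300.0))
    ```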

  13. Prima facie questions in quantum gravity

    Science.gov (United States)

    Isham, C. J.

    The long history of the study of quantum gravity has thrown up a complex web of ideas and approaches. The aim of this article is to unravel this web a little by analysing some of the prima facie questions that can be asked of almost any approach to quantum gravity and whose answers assist in classifying the different schemes. Particular emphasis is placed on (i) the role of background conceptual and technical structure; (ii) the role of spacetime diffeomorphisms; and (iii) the problem of time.

  14. Fixation of Cs to marine sediments estimated by a stochastic modelling approach.

    Science.gov (United States)

    Børretzen, Peer; Salbu, Brit

    2002-01-01

    irreversible sediment phase, while about 12.5 years are needed before 99.7% of the Cs ions are fixed. Thus, according to the model estimates, the contact time between 137Cs ions leached from dumped waste and the Stepovogo Fjord sediment should be about 3 years before the sediment will act as an efficient permanent sink. Until then, a significant fraction of 137Cs should be considered mobile. The stochastic modelling approach provides useful tools when assessing sediment-seawater interactions over time, and should be easily applicable to all sediment-seawater systems including a sink term.

  15. Stochastic modelling of two-phase flows including phase change

    International Nuclear Information System (INIS)

    Hurisse, O.; Minier, J.P.

    2011-01-01

    Stochastic modelling has already been developed and applied for single-phase flows and incompressible two-phase flows. In this article, we propose an extension of this modelling approach to two-phase flows including phase change (e.g. for steam-water flows). Two aspects are emphasised: a stochastic model accounting for phase transition and a modelling constraint which arises from volume conservation. To illustrate the whole approach, some remarks are eventually proposed for two-fluid models. (authors)

  16. Discrete quantum gravity

    International Nuclear Information System (INIS)

    Williams, Ruth M

    2006-01-01

    A review is given of a number of approaches to discrete quantum gravity, with a restriction to those likely to be relevant in four dimensions. This paper is dedicated to Rafael Sorkin on the occasion of his sixtieth birthday

  17. On the stochastic approach to inflation and the initial conditions in the universe

    Science.gov (United States)

    Pollock, M. D.

    1988-03-01

    By the application of stochastic methods to a theory in which a potential V(φ) causes a period of quasi-exponential expansion of the universe, an expression for the probability distribution P(V) appropriate for chaotic inflation has recently been derived. The method was developed by Starobinsky and by Linde. Beyond some critical point φ_c, long-wavelength quantum fluctuations δφ ~ H/2π cannot be ignored. The effect of these fluctuations in general relativity for values of φ such that V(φ) > V(φ_c) has been considered by Linde, who concluded that most of the present universe arises as a result of expansion of domains with the maximum possible value of φ, such that V(φ_max) ~ m_P^4. We obtain the corresponding expression for P in a broken-symmetry theory of gravity, in which the Newtonian gravitational constant is replaced by G = (8πεφ²)^(-1), and also for a theory which includes higher-derivative terms γR² + βR² ln(R/μ²), so that the trace anomaly is T_anom ~ βR², and in which an effective inflaton field φ_e can be defined by φ_e² = 24γR. Conclusions analogous to those of Linde can be drawn in both these theories.

  18. Stochastic inflation in phase space: is slow roll a stochastic attractor?

    Energy Technology Data Exchange (ETDEWEB)

    Grain, Julien [Institut d' Astrophysique Spatiale, UMR8617, CNRS, Univ. Paris Sud, Université Paris-Saclay, Bt. 121, Orsay, F-91405 (France); Vennin, Vincent, E-mail: julien.grain@ias.u-psud.fr, E-mail: vincent.vennin@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO13FX (United Kingdom)

    2017-05-01

    An appealing feature of inflationary cosmology is the presence of a phase-space attractor, ''slow roll'', which washes out the dependence on initial field velocities. We investigate the robustness of this property under backreaction from quantum fluctuations using the stochastic inflation formalism in the phase-space approach. A Hamiltonian formulation of stochastic inflation is presented, where it is shown that the coarse-graining procedure—where wavelengths smaller than the Hubble radius are integrated out—preserves the canonical structure of free fields. This means that different sets of canonical variables give rise to the same probability distribution which clarifies the literature with respect to this issue. The role played by the quantum-to-classical transition is also analysed and is shown to constrain the coarse-graining scale. In the case of free fields, we find that quantum diffusion is aligned in phase space with the slow-roll direction. This implies that the classical slow-roll attractor is immune to stochastic effects and thus generalises to a stochastic attractor regardless of initial conditions, with a relaxation time at least as short as in the classical system. For non-test fields or for test fields with non-linear self interactions however, quantum diffusion and the classical slow-roll flow are misaligned. We derive a condition on the coarse-graining scale so that observational corrections from this misalignment are negligible at leading order in slow roll.

  19. Stochasticity and determinism in models of hematopoiesis.

    Science.gov (United States)

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  20. Robust nonlinear autoregressive moving average model parameter estimation using stochastic recurrent artificial neural networks

    DEFF Research Database (Denmark)

    Chon, K H; Hoyer, D; Armoundas, A A

    1999-01-01

    In this study, we introduce a new approach for estimating linear and nonlinear stochastic autoregressive moving average (ARMA) model parameters, given a corrupt signal, using artificial recurrent neural networks. This new approach is a two-step approach in which the parameters of the deterministic part of the stochastic ARMA model are first estimated via a three-layer artificial neural network (deterministic estimation step) and then reestimated using the prediction error as one of the inputs to the artificial neural networks in an iterative algorithm (stochastic estimation step). The prediction error is obtained by subtracting the corrupt signal of the estimated ARMA model obtained via the deterministic estimation step from the system output response. We present computer simulation examples to show the efficacy of the proposed stochastic recurrent neural network approach in obtaining accurate...

  1. Aspects of stochastic models for short-term hydropower scheduling and bidding

    Energy Technology Data Exchange (ETDEWEB)

    Belsnes, Michael Martin [Sintef Energy, Trondheim (Norway); Follestad, Turid [Sintef Energy, Trondheim (Norway); Wolfgang, Ove [Sintef Energy, Trondheim (Norway); Fosso, Olav B. [Dep. of electric power engineering NTNU, Trondheim (Norway)

    2012-07-01

    This report discusses challenges met when turning from deterministic to stochastic decision support models for short-term hydropower scheduling and bidding. The report describes characteristics of the short-term scheduling and bidding problem, different market and bidding strategies, and how a stochastic optimization model can be formulated. A review of approaches for stochastic short-term modelling and stochastic modelling for the input variables inflow and market prices is given. The report discusses methods for approximating the predictive distribution of uncertain variables by scenario trees. Benefits of using a stochastic over a deterministic model are illustrated by a case study, where increased profit is obtained to a varying degree depending on the reservoir filling and price structure. Finally, an approach for assessing the effect of using a size restricted scenario tree to approximate the predictive distribution for stochastic input variables is described. The report is a summary of the findings of Work package 1 of the research project "Optimal short-term scheduling of wind and hydro resources". The project aims at developing a prototype for an operational stochastic short-term scheduling model. Based on the investigations summarized in the report, it is concluded that using a deterministic equivalent formulation of the stochastic optimization problem is convenient and sufficient for obtaining a working prototype. (author)

  2. Gravity from entanglement and RG flow in a top-down approach

    Science.gov (United States)

    Kwon, O.-Kab; Jang, Dongmin; Kim, Yoonbai; Tolla, D. D.

    2018-05-01

    The duality between a d-dimensional conformal field theory with relevant deformation and a gravity theory on an asymptotically AdS d+1 geometry, has become a suitable tool in the investigation of the emergence of gravity from quantum entanglement in field theory. Recently, we have tested the duality between the mass-deformed ABJM theory and asymptotically AdS4 gravity theory, which is obtained from the KK reduction of the 11-dimensional supergravity on the LLM geometry. In this paper, we extend the KK reduction procedure beyond the linear order and establish non-trivial KK maps between 4-dimensional fields and 11-dimensional fluctuations. We rely on this gauge/gravity duality to calculate the entanglement entropy by using the Ryu-Takayanagi holographic formula and the path integral method developed by Faulkner. We show that the entanglement entropies obtained using these two methods agree when the asymptotically AdS4 metric satisfies the linearized Einstein equation with nonvanishing energy-momentum tensor for two scalar fields. These scalar fields encode the information of the relevant deformation of the ABJM theory. This confirms that the asymptotic limit of LLM geometry is the emergent gravity of the quantum entanglement in the mass-deformed ABJM theory with a small mass parameter. We also comment on the issue of the relative entropy and the Fisher information in our setup.

  3. Hybrid Semantics of Stochastic Programs with Dynamic Reconfiguration

    Directory of Open Access Journals (Sweden)

    Alberto Policriti

    2009-10-01

    Full Text Available We begin by reviewing a technique to approximate the dynamics of stochastic programs --written in a stochastic process algebra-- by a hybrid system, suitable to capture a mixed discrete/continuous evolution. In a nutshell, the discrete dynamics is kept stochastic while the continuous evolution is given in terms of ODEs, and the overall technique, therefore, naturally associates a Piecewise Deterministic Markov Process with a stochastic program. The specific contribution in this work consists in an increase of the flexibility of the translation scheme, obtained by allowing a dynamic reconfiguration of the degree of discreteness/continuity of the semantics. We also discuss the relationships of this approach with other hybrid simulation strategies for biochemical systems.

  4. The quest for quantum gravity

    International Nuclear Information System (INIS)

    Au, G.

    1995-03-01

    One of the greatest challenges facing theoretical physics lies in reconciling Einstein's classical theory of gravity - general relativity - with quantum field theory. Although both theories have been experimentally supported in their respective regimes, they are as compatible as a square peg and a round hole. This article summarises the current status of the superstring approach to the problem, the status of the Ashtekar program, and the problem of time in quantum gravity.

  5. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
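
    For reference, the SPPT recipe itself is compact: the net parametrised tendency is multiplied by (1 + e), where e is a smooth, temporally correlated random pattern. The sketch below applies a zero-dimensional (single grid point) version of such a multiplicative perturbation with an AR(1) evolution; the numbers are illustrative, not the ECMWF settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def parametrised_tendency(step):
        """Stand-in for the net physics tendency of some model variable (hypothetical)."""
        return 1e-5 * np.sin(2 * np.pi * step / 96.0)

    n_steps, dt = 240, 900.0                  # e.g. 900 s model time steps
    tau, sigma = 6 * 3600.0, 0.5              # illustrative decorrelation time and standard deviation
    phi = np.exp(-dt / tau)

    e, perturbed = 0.0, []
    for k in range(n_steps):
        # AR(1) evolution of the SPPT perturbation (one grid point stands in for the 2D pattern)
        e = phi * e + np.sqrt(1.0 - phi ** 2) * rng.normal(0.0, sigma)
        multiplier = 1.0 + np.clip(e, -1.0, 1.0)      # clipping keeps the multiplier in [0, 2]
        perturbed.append(multiplier * parametrised_tendency(k))

    unperturbed = [abs(parametrised_tendency(k)) for k in range(n_steps)]
    print("mean |perturbed| / mean |unperturbed|:", np.mean(np.abs(perturbed)) / np.mean(unperturbed))
    ```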

  6. A stochastic-programming approach to integrated asset and liability ...

    African Journals Online (AJOL)

    This increase in complexity has provided an impetus for the investigation into integrated asset- and liability-management frameworks that could realistically address dynamic portfolio allocation in a risk-controlled way. In this paper the authors propose a multi-stage dynamic stochastic-programming model for the integrated ...

  7. Loop quantum gravity

    International Nuclear Information System (INIS)

    Pullin, J.

    2015-01-01

    Loop quantum gravity is one of the approaches that are being studied to apply the rules of quantum mechanics to the gravitational field described by the theory of General Relativity. We present an introductory summary of the main ideas and recent results. (Author)

  8. Quantum state correction of relic gravitons from quantum gravity

    OpenAIRE

    Rosales, Jose-Luis

    1996-01-01

    The semiclassical approach to quantum gravity would yield the Schroedinger formalism for the wave function of metric perturbations or gravitons plus quantum gravity correcting terms in pure gravity; thus, in the inflationary scenario, we should expect correcting effects to the relic graviton (Zel'dovich) spectrum of the order (H/mPl)^2.

  9. Adaptive filtering of GOCE-derived gravity gradients of the disturbing potential in the context of the space-wise approach

    Science.gov (United States)

    Piretzidis, Dimitrios; Sideris, Michael G.

    2017-09-01

    connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.

  10. Stochastic Approach to Determine CO2 Hydrate Induction Time in Clay Mineral Suspensions

    Science.gov (United States)

    Lee, K.; Lee, S.; Lee, W.

    2008-12-01

    A large number of induction time data for carbon dioxide hydrate formation were obtained from a batch reactor consisting of four independent reaction cells. Using resistance temperature detectors (RTDs) and a digital microscope, we successfully monitored the whole process of hydrate formation (i.e., nucleation and crystal growth) and detected the induction time. The experiments were carried out in kaolinite and montmorillonite suspensions at temperatures between 274 and 277 K and pressures ranging from 3.0 to 4.0 MPa. Each data set was first analyzed to determine whether it should be treated stochastically. Geochemical factors potentially influencing the hydrate induction time under different experimental conditions were investigated by stochastic analyses. We observed that clay mineral type, pressure, and temperature significantly affect the stochastic behavior of the induction times for CO2 hydrate formation in this study. The hydrate formation kinetics, along with the stochastic analyses, provide a basic understanding of CO2 hydrate storage in deep-sea sediments and geologic formations and of its stability under these environments.
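
    Induction-time data of this kind are often analysed as a Poisson-like survival process. The sketch below (with synthetic induction times standing in for measurements and an assumed exponential model) estimates a nucleation rate from the empirical survival curve.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # synthetic induction times (minutes) standing in for the measured data
    t_ind = np.sort(rng.exponential(scale=42.0, size=60))

    # empirical survival probability P(t) = fraction of cells not yet nucleated at time t
    survival = 1.0 - np.arange(1, t_ind.size + 1) / t_ind.size

    # for a stochastic (Poisson) nucleation process, P(t) = exp(-J*t):
    # fit J by least squares on ln P(t), dropping the last point where P = 0
    mask = survival > 0
    J = -np.polyfit(t_ind[mask], np.log(survival[mask]), 1)[0]
    print(f"estimated nucleation rate J = {J:.4f} 1/min, mean induction time = {1 / J:.1f} min")
    ```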

  11. Stochastic Eulerian Lagrangian methods for fluid-structure interactions with thermal fluctuations

    International Nuclear Information System (INIS)

    Atzberger, Paul J.

    2011-01-01

    We present approaches for the study of fluid-structure interactions subject to thermal fluctuations. A mixed mechanical description is utilized combining Eulerian and Lagrangian reference frames. We establish general conditions for operators coupling these descriptions. Stochastic driving fields for the formalism are derived using principles from statistical mechanics. The stochastic differential equations of the formalism are found to exhibit significant stiffness in some physical regimes. To cope with this issue, we derive reduced stochastic differential equations for several physical regimes. We also present stochastic numerical methods for each regime to approximate the fluid-structure dynamics and to generate efficiently the required stochastic driving fields. To validate the methodology in each regime, we perform analysis of the invariant probability distribution of the stochastic dynamics of the fluid-structure formalism. We compare this analysis with results from statistical mechanics. To further demonstrate the applicability of the methodology, we perform computational studies for spherical particles having translational and rotational degrees of freedom. We compare these studies with results from fluid mechanics. The presented approach provides for fluid-structure systems a set of rather general computational methods for treating consistently structure mechanics, hydrodynamic coupling, and thermal fluctuations.

  12. On square-wave-driven stochastic resonance for energy harvesting in a bistable system

    Energy Technology Data Exchange (ETDEWEB)

    Su, Dongxu, E-mail: sudx@iis.u-tokyo.ac.jp [Graduate School of Engineering, The University of Tokyo, Tokyo 1538505 (Japan); Zheng, Rencheng; Nakano, Kimihiko [Institute of Industrial Science, The University of Tokyo, Tokyo 1538505 (Japan); Cartmell, Matthew P [Department of Mechanical Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom)

    2014-11-15

    Stochastic resonance is a physical phenomenon through which the throughput of energy within an oscillator excited by a stochastic source can be boosted by adding a small modulating excitation. This study investigates the feasibility of implementing square-wave-driven stochastic resonance to enhance energy harvesting. The motivating hypothesis was that such stochastic resonance can be efficiently realized in a bistable mechanism. However, the condition for the occurrence of stochastic resonance is conventionally defined by the Kramers rate. This definition is inadequate because of the necessity and difficulty of estimating the white noise density. A bistable mechanism has been designed using an explicit analytical model, which suggests a new approach for achieving stochastic resonance in this paper. Experimental tests confirm that the addition of a small-scale force to the bistable system excited by a random signal apparently leads to a corresponding amplification of the response, which we now term square-wave-driven stochastic resonance. The study therefore indicates that this approach may be a promising way to improve the performance of an energy harvester under certain forms of random excitation.
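
    A minimal numerical sketch of the effect (all parameters arbitrary, chosen near the stochastic-resonance condition): an overdamped bistable oscillator driven by white noise is integrated with and without a small square-wave modulation; with the modulation, the inter-well hopping synchronises to the drive and the response at the driving frequency grows.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    dt, n = 1e-3, 200_000
    t = np.arange(n) * dt
    a, b = 1.0, 1.0                  # bistable potential U(x) = -a x^2/2 + b x^4/4
    D = 0.15                         # noise intensity
    A, f = 0.10, 0.05                # small (sub-threshold) square-wave amplitude and frequency (Hz)
    square = A * np.sign(np.sin(2 * np.pi * f * t))

    def simulate(forcing):
        """Euler-Maruyama integration of the overdamped bistable Langevin equation."""
        x = np.empty(n)
        x[0] = 1.0
        noise = np.sqrt(2 * D * dt) * rng.normal(size=n)
        for i in range(n - 1):
            drift = a * x[i] - b * x[i] ** 3 + forcing[i]
            x[i + 1] = x[i] + drift * dt + noise[i]
        return x

    x_noise_only = simulate(np.zeros(n))
    x_with_square = simulate(square)

    def response(x):
        """Amplitude of the Fourier component of the trajectory at the driving frequency."""
        return 2 * abs(np.exp(-2j * np.pi * f * t) @ x) / n

    print("response without modulation:", response(x_noise_only))
    print("response with modulation:   ", response(x_with_square))
    ```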

  13. On square-wave-driven stochastic resonance for energy harvesting in a bistable system

    International Nuclear Information System (INIS)

    Su, Dongxu; Zheng, Rencheng; Nakano, Kimihiko; Cartmell, Matthew P

    2014-01-01

    Stochastic resonance is a physical phenomenon through which the throughput of energy within an oscillator excited by a stochastic source can be boosted by adding a small modulating excitation. This study investigates the feasibility of implementing square-wave-driven stochastic resonance to enhance energy harvesting. The motivating hypothesis was that such stochastic resonance can be efficiently realized in a bistable mechanism. However, the condition for the occurrence of stochastic resonance is conventionally defined by the Kramers rate. This definition is inadequate because it requires estimating the white-noise density, which is difficult in practice. In this paper, a bistable mechanism is designed using an explicit analytical model, which suggests a new approach for achieving stochastic resonance. Experimental tests confirm that the addition of a small-scale force to the bistable system excited by a random signal apparently leads to a corresponding amplification of the response, which we term square-wave-driven stochastic resonance. The study therefore indicates that this approach may be a promising way to improve the performance of an energy harvester under certain forms of random excitation.

  14. Stochastic rainfall modeling in West Africa: Parsimonious approaches for domestic rainwater harvesting assessment

    Science.gov (United States)

    Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.

    2008-10-01

    Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. Domestic rainwater harvesting is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, and its assessment is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected for their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among the unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear that DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small-roof-area systems for many locations in the region.
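
    A minimal sketch of the selected model class, a first-order Markov chain for wet/dry occurrence combined with a mixed-exponential distribution for wet-day amounts, is given below. All parameter values are illustrative and would in practice be fitted to station records.

      # Minimal sketch: parsimonious daily rainfall generator with a first-order
      # Markov occurrence model and a two-component mixed-exponential amount model.
      import numpy as np

      rng = np.random.default_rng(7)
      p_wd, p_ww = 0.25, 0.60           # P(wet | dry), P(wet | wet) -- assumed values
      alpha, mu1, mu2 = 0.7, 4.0, 18.0  # mixing weight and mean wet-day amounts (mm)

      def simulate_year(ndays=365):
          wet = False
          rain = np.zeros(ndays)
          for d in range(ndays):
              p_wet = p_ww if wet else p_wd
              wet = rng.random() < p_wet
              if wet:
                  mean = mu1 if rng.random() < alpha else mu2
                  rain[d] = rng.exponential(mean)
          return rain

      year = simulate_year()
      print(f"wet days: {np.count_nonzero(year)},  total rainfall: {year.sum():.0f} mm")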

  15. Late-time cosmological approach in mimetic f(R, T) gravity

    Energy Technology Data Exchange (ETDEWEB)

    Baffou, E.H. [Institut de Mathematiques et de Sciences Physiques (IMSP), Porto-Novo (Benin); Houndjo, M.J.S. [Institut de Mathematiques et de Sciences Physiques (IMSP), Porto-Novo (Benin); Faculte des Sciences et Techniques de Natitingou, Natitingou (Benin); Hamani-Daouda, M. [Universite de Niamey, Departement de Physique, Niamey (Niger); Alvarenga, F.G. [Universidade Federal do Espirito Santo, Departamento de Engenharia e Ciencias Naturais, CEUNES, Sao Mateus, ES (Brazil)

    2017-10-15

    In this paper, we investigate the late-time cosmic acceleration in mimetic f(R, T) gravity with the Lagrange multiplier and potential in a Universe containing, besides radiation and dark energy, a self-interacting (collisional) matter. We obtain through the modified Friedmann equations the main equation that can describe the cosmological evolution. Then, for several models of Q(z) and the well-known particular f(R, T) model, we perform an analysis of the late-time evolution. We examine the behavior of the Hubble parameter, the dark energy equation of state and the total effective equation of state, and in each case we compare the resulting picture with the non-collisional matter (assumed as dust) and also with the collisional matter in mimetic f(R, T) gravity. The results obtained are in good agreement with the observational data and show that in the presence of the collisional matter the dark energy oscillations in mimetic f(R, T) gravity can be damped. (orig.)

  16. Lectures on Topics in Spatial Stochastic Processes

    CERN Document Server

    Capasso, Vincenzo; Ivanoff, B Gail; Dozzi, Marco; Dalang, Robert C; Mountford, Thomas S

    2003-01-01

    The theory of stochastic processes indexed by a partially ordered set has been the subject of much research over the past twenty years. The objective of this CIME International Summer School was to bring to a large audience of young probabilists the general theory of spatial processes, including the theory of set-indexed martingales, and to present the different branches of applications of this theory, including stochastic geometry, spatial statistics, empirical processes, spatial estimators and survival analysis. This theory has a broad variety of applications in environmental sciences, social sciences, the structure of materials and image analysis. In this volume, the reader will find different approaches which foster the development of tools for modelling the spatial aspects of stochastic problems.

  17. The multivariate supOU stochastic volatility model

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Stelzer, Robert

    Using positive semidefinite supOU (superposition of Ornstein-Uhlenbeck type) processes to describe the volatility, we introduce a multivariate stochastic volatility model for financial data which is capable of modelling long range dependence effects. The finiteness of moments and the second order structure of the volatility, the log returns, as well as their "squares" are discussed in detail. Moreover, we give several examples in which long memory effects occur and study how the model as well as the simple Ornstein-Uhlenbeck type stochastic volatility model behave under linear transformations. In particular, the models are shown to be preserved under invertible linear transformations. Finally, we discuss how (sup)OU stochastic volatility models can be combined with a factor modelling approach.

  18. Stochastic and non-stochastic effects - a conceptual analysis

    International Nuclear Information System (INIS)

    Karhausen, L.R.

    1980-01-01

    The attempt to divide radiation effects into stochastic and non-stochastic effects is discussed. It is argued that radiation or toxicological effects are contingently related to radiation or chemical exposure. Biological effects in general can be described by general laws but these laws never represent a necessary connection. Actually stochastic effects express contingent, or empirical, connections while non-stochastic effects represent semantic and non-factual connections. These two expressions stem from two different levels of discourse. The consequence of this analysis for radiation biology and radiation protection is discussed. (author)

  19. Minimal Length, Measurability and Gravity

    Directory of Open Access Journals (Sweden)

    Alexander Shalyt-Margolin

    2016-03-01

    Full Text Available The present work is a continuation of the previous papers written by the author on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.

  20. The quest for quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Au, G

    1995-03-01

    One of the greatest challenges facing theoretical physics lies in reconciling Einstein's classical theory of gravity - general relativity - with quantum field theory. Although both theories have been experimentally supported in their respective regimes, they are as compatible as a square peg and a round hole. This article summarises the current status of the superstring approach to the problem, the status of the Ashtekar program, and the problem of time in quantum gravity.

  1. Testing quantum gravity through dumb holes

    Energy Technology Data Exchange (ETDEWEB)

    Pourhassan, Behnam, E-mail: b.pourhassan@du.ac.ir [School of Physics, Damghan University, Damghan (Iran, Islamic Republic of); Faizal, Mir, E-mail: f2mir@uwaterloo.ca [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, AB T1K 3M4 (Canada); Irving K. Barber School of Arts and Sciences, University of British Columbia - Okanagan, Kelowna, BC V1V 1V7 (Canada); Capozziello, Salvatore, E-mail: capozzie@na.infn.it [Dipartimento di Fisica, Università di Napoli ”Frederico II” Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, I-80126 Napoli (Italy); Gran Sasso Science Institute (INFN), Via F. Crispi 7, I-67100 L’ Aquila (Italy)

    2017-02-15

    We propose a method to test the effects of quantum fluctuations on black holes by analyzing the effects of thermal fluctuations on dumb holes, the analogs of black holes. The proposal is based on the Jacobson formalism, where the Einstein field equations are viewed as thermodynamical relations, and so the quantum fluctuations are generated from the thermal fluctuations. It is well known that all approaches to quantum gravity generate logarithmic corrections to the entropy of a black hole, and the coefficient of this term varies according to the different approaches to quantum gravity. It is possible to demonstrate that such logarithmic terms are also generated from thermal fluctuations in dumb holes. In this paper, we claim that it is possible to experimentally test such corrections for dumb holes, and also obtain the correct coefficient for them. This fact can then be used to predict the effects of quantum fluctuations on realistic black holes, and so it can also be used, in principle, to experimentally test the different approaches to quantum gravity.

  2. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    Science.gov (United States)

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
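
    The core trick referred to above is the Zassenhaus expansion, which factors the exponential of a sum of non-commuting matrices into a product of exponentials; truncating after the commutator term already gives a useful approximation. The sketch below checks this on small random matrices rather than on an actual gene-network generator.

      # Minimal sketch of the Zassenhaus factorization exp(A + B) ~ exp(A) exp(B) exp(-[A, B]/2),
      # compared against the exact matrix exponential. Random matrices stand in for a model generator.
      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(3)
      n = 6
      A = 0.1 * rng.standard_normal((n, n))
      B = 0.1 * rng.standard_normal((n, n))

      comm = A @ B - B @ A
      exact = expm(A + B)
      order1 = expm(A) @ expm(B)              # first-order truncation (commutator ignored)
      order2 = order1 @ expm(-0.5 * comm)     # include the commutator correction factor

      print("error, exp(A)exp(B)           :", np.linalg.norm(exact - order1))
      print("error, with commutator factor :", np.linalg.norm(exact - order2))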

  3. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  4. Simple stochastic simulation.

    Science.gov (United States)

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific for the particular application and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field, applying such techniques in their own work might seem at first sight a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend not to be particularly well covered in the specialist literature, and shows that interesting information may even be obtained using very simple operations in a conventional spreadsheet.
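
    The simplest concrete instance of the approach is Gillespie's direct method applied to a birth-death process (constant synthesis, first-order degradation); the sketch below shows the handful of operations involved, which, as the article notes, can even be reproduced in a spreadsheet. Rate constants are assumed values.

      # Minimal sketch of Gillespie's direct method for a birth-death process.
      import numpy as np

      rng = np.random.default_rng(5)
      k_syn, k_deg = 10.0, 0.5          # synthesis and degradation rate constants (assumed)
      t, t_end, n = 0.0, 100.0, 0
      times, counts = [0.0], [0]

      while t < t_end:
          a1, a2 = k_syn, k_deg * n            # reaction propensities
          a0 = a1 + a2
          t += rng.exponential(1.0 / a0)       # time to next reaction
          n += 1 if rng.random() < a1 / a0 else -1
          times.append(t); counts.append(n)

      print("final copy number:", n, " (deterministic steady state:", k_syn / k_deg, ")")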

  5. Response spectrum analysis of a stochastic seismic model

    International Nuclear Information System (INIS)

    Kimura, Koji; Sakata, Masaru; Takemoto, Shinichiro.

    1990-01-01

    The stochastic response spectrum approach is presented for predicting the dynamic response of structures to earthquake excitation expressed by a random process, one of whose sample functions can be regarded as a recorded strong-motion earthquake accelerogram. The approach consists of modeling recorded ground motion by a random process and of root-mean-square (rms) response analysis of a single-degree-of-freedom system using the moment-equation method. The stochastic response spectrum is obtained as a plot of the maximum rms response versus the natural period of the system and is compared with the conventional response spectrum. (author)
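
    For a single-degree-of-freedom oscillator driven by stationary white noise, the moment equations reduce to a Lyapunov equation for the stationary covariance, and the rms response follows directly; sweeping the natural period then traces out a stochastic response spectrum. The sketch below uses an assumed constant excitation spectral density rather than a fitted earthquake model.

      # Minimal sketch: stationary rms displacement of an SDOF oscillator under
      # white-noise acceleration, via the covariance (Lyapunov) equation A P + P A^T + Q = 0.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      S0, zeta = 0.01, 0.05            # two-sided excitation PSD (m^2/s^3) and damping ratio (assumed)

      def rms_displacement(period):
          wn = 2.0 * np.pi / period
          A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
          Q = np.array([[0.0, 0.0], [0.0, 2.0 * np.pi * S0]])   # white-noise intensity
          P = solve_continuous_lyapunov(A, -Q)                  # stationary covariance
          return np.sqrt(P[0, 0])

      for T in (0.2, 0.5, 1.0, 2.0):
          closed_form = np.sqrt(np.pi * S0 / (2.0 * zeta * (2.0 * np.pi / T) ** 3))
          print(f"T = {T:4.1f} s   rms (Lyapunov) = {rms_displacement(T):.4f} m"
                f"   rms (closed form) = {closed_form:.4f} m")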

  6. Light fermions in quantum gravity

    International Nuclear Information System (INIS)

    Eichhorn, Astrid; Gies, Holger

    2011-01-01

    We study the impact of quantum gravity, formulated as a quantum field theory of the metric, on chiral symmetry in a fermionic matter sector. Specifically, we address the question of whether metric fluctuations can induce chiral symmetry breaking and bound state formation. Our results, based on the functional renormalization group, indicate that chiral symmetry is left intact even at strong gravitational coupling. In particular, we find that asymptotically safe quantum gravity, in which the gravitational couplings approach a non-Gaußian fixed point, generically admits universes with light fermions. Our results thus further support quantum gravity theories built on fluctuations of the metric field, such as the asymptotic-safety scenario. A study of chiral symmetry breaking through gravitational quantum effects may also serve as a significant benchmark test for other quantum gravity scenarios, since a completely broken chiral symmetry at the Planck scale would not be in accordance with the observation of light fermions in our universe. We demonstrate that this elementary observation already imposes constraints on a generic UV completion of gravity. (paper)

  7. Scalar field dark matter in hybrid approach

    NARCIS (Netherlands)

    Friedrich, Pavel; Prokopec, Tomislav

    2017-01-01

    We develop a hybrid formalism suitable for modeling scalar field dark matter, in which the phase-space distribution associated to the real scalar field is modeled by statistical equal-time two-point functions and gravity is treated by two stochastic gravitational fields in the longitudinal gauge (in

  8. SLFP: a stochastic linear fractional programming approach for sustainable waste management.

    Science.gov (United States)

    Zhu, H; Huang, G H

    2011-12-01

    A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
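
    The two ingredients of SLFP can be illustrated on a toy problem: a chance constraint with a normally distributed right-hand side is replaced by its quantile-based deterministic equivalent, and the resulting linear fractional program is solved via the Charnes-Cooper transformation. The numbers below are illustrative and are not taken from the waste-management case study.

      # Minimal sketch: chance-constrained linear fractional program solved by the
      # Charnes-Cooper transformation, with illustrative data.
      import numpy as np
      from scipy.optimize import linprog
      from scipy.stats import norm

      num = np.array([3.0, 2.0])                    # numerator (benefit) coefficients
      den = np.array([2.0, 5.0])                    # denominator (cost) coefficients
      A = np.array([[1.0, 1.0], [2.0, 1.0]])
      b_mean, b_std = np.array([10.0, 15.0]), np.array([1.0, 2.0])
      p = 0.90                                      # required constraint reliability

      b_det = b_mean + b_std * norm.ppf(1.0 - p)    # deterministic (quantile) equivalents

      # Charnes-Cooper: variables z = [y1, y2, t]; maximize num^T y with A y <= b_det t, den^T y = 1
      A_ub = np.hstack([A, -b_det.reshape(-1, 1)])
      res = linprog(c=[-num[0], -num[1], 0.0], A_ub=A_ub, b_ub=[0.0, 0.0],
                    A_eq=[[den[0], den[1], 0.0]], b_eq=[1.0], bounds=[(0, None)] * 3)

      y, t = res.x[:2], res.x[2]
      print("optimal benefit/cost ratio:", -res.fun)
      print("one optimal allocation x  :", y / t)   # any scaling along this ray is also optimal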

  9. A real-space stochastic density matrix approach for density functional electronic structure.

    Science.gov (United States)

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  10. Effective spacetime: understanding emergence in effective field theory and quantum gravity

    CERN Document Server

    Crowther, Karen

    2016-01-01

    This book discusses the notion that quantum gravity may represent the "breakdown" of spacetime at extremely high energy scales. If spacetime does not exist at the fundamental level, then it has to be considered "emergent", in other words an effective structure, valid at low energy scales. The author develops a conception of emergence appropriate to effective theories in physics, and shows how it applies (or could apply) in various approaches to quantum gravity, including condensed matter approaches, discrete approaches, and loop quantum gravity.

  11. Stochastic learning in oxide binary synaptic device for neuromorphic computing.

    Science.gov (United States)

    Yu, Shimeng; Gao, Bin; Fang, Zheng; Yu, Hongyu; Kang, Jinfeng; Wong, H-S Philip

    2013-01-01

    Hardware implementation of neuromorphic computing is attractive as a computing paradigm beyond conventional digital computing. In this work, we show that the SET (off-to-on) transition of metal oxide resistive switching memory becomes probabilistic under a weak programming condition. The switching variability of the binary synaptic device implements a stochastic learning rule. Such stochastic SET transition was statistically measured and modeled for a simulation of a winner-take-all network for competitive learning. The simulation illustrates that with such stochastic learning, the orientation classification function of input patterns can be effectively realized. The system performance metrics were compared between the conventional approach using the analog synapse and the approach in this work that employs the binary synapse utilizing the stochastic learning. The feasibility of using binary synapses in neuromorphic computing may relax the constraint of engineering continuous multilevel intermediate states and widens the material choices for synaptic device design.

  12. Quantum Gravity

    International Nuclear Information System (INIS)

    Giribet, G E

    2005-01-01

    Claus Kiefer presents his book, Quantum Gravity, with his hope that '[the] book will convince readers of [the] outstanding problem [of unification and quantum gravity] and encourage them to work on its solution'. With this aim, the author presents a clear exposition of the fundamental concepts of gravity and the steps towards the understanding of its quantum aspects. The main part of the text is dedicated to the analysis of standard topics in the formulation of general relativity. An analysis of the Hamiltonian formulation of general relativity and the canonical quantization of gravity is performed in detail. Chapters four, five and eight provide a pedagogical introduction to the basic concepts of gravitational physics. In particular, aspects such as the quantization of constrained systems, the role played by the quadratic constraint, the ADM decomposition, the Wheeler-de Witt equation and the problem of time are treated in an expert and concise way. Moreover, other specific topics, such as the minisuperspace approach and the feasibility of defining extrinsic times for certain models, are discussed as well. The ninth chapter of the book is dedicated to the quantum gravitational aspects of string theory. Here, a minimalistic but clear introduction to string theory is presented, and this is actually done with emphasis on gravity. It is worth mentioning that no hard (nor explicit) computations are presented, even though the exposition covers the main features of the topic. For instance, black hole statistical physics (within the framework of string theory) is developed in a pedagogical and concise way by means of heuristical arguments. As the author asserts in the epilogue, the hope of the book is to give 'some impressions from progress' made in the study of quantum gravity since its beginning, i.e., since the end of 1920s. In my opinion, Kiefer's book does actually achieve this goal and gives an extensive review of the subject. (book review)

  13. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    Science.gov (United States)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
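
    The central role of the Lagrange multipliers can be seen in the basic stochastic dual (sub)gradient recursion sketched below for a toy online allocation problem with a long-term average resource constraint; the learn-and-adapt procedure of the paper adds an offline learning phase on top of this adaptation loop, which is not reproduced here.

      # Minimal sketch: stochastic dual (sub)gradient updates of a Lagrange multiplier
      # for an online allocation problem with the average constraint E[x_t] <= E[c_t].
      import numpy as np

      rng = np.random.default_rng(11)
      T, mu, x_max = 20_000, 0.01, 5.0        # horizon, step size, allocation cap (assumed)
      lam, xs, cs = 1.0, [], []

      for t in range(T):
          c_t = rng.uniform(0.5, 1.5)                     # random available resource
          # primal step: x_t = argmax_x  log(1 + x) - lam * x   on [0, x_max]
          x_t = np.clip(1.0 / lam - 1.0, 0.0, x_max) if lam > 0 else x_max
          # dual step: move the multiplier along the constraint violation
          lam = max(lam + mu * (x_t - c_t), 0.0)
          xs.append(x_t); cs.append(c_t)

      print("average allocation :", np.mean(xs))
      print("average resource   :", np.mean(cs))   # long-run allocation should not exceed this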

  14. A new approach for gravity localization in six-dimensional geometries

    International Nuclear Information System (INIS)

    Santos, Victor Pereira do Nascimento; Almeida, Carlos Alberto Santos de

    2011-01-01

    Full text: The idea that spacetime may have more than four dimensions is old, originally presented as an attempt to unify Maxwell's theory of electromagnetism with Einstein's then-new theory of gravitation. Such extra dimensions are in principle unobservable at the energy scales currently available. However, their effects could be seen in short-distance gravity experiments and in cosmological observations. Extra dimensions are also used as a mechanism to explain the difference between the energy scales of the weak force and gravity, known as the hierarchy problem. The current framework for the extra-dimension scenario is to consider the known four-dimensional universe as embedded in a higher-dimensional space called the bulk. The form of this bulk determines how we perceive gravity in our universe; that is, the behaviour of the gravitational field depends on the geometry of the bulk. Metric solutions have already been presented for a string-like defect, with and without matter sources, where it was shown that the correction to the Newtonian potential varies with the inverse cube of the distance. Such a correction arises from a very particular mass spectrum for the gravitational field, which already contains the orbital angular momentum contributions. In this work we study the behaviour of the gravitational field in an extra-dimensional braneworld scenario, using non-factorizable geometries (which preserve Poincaré symmetry) and setting suitable matter distributions in order to verify its localization for several geometries. For such geometries it is possible to find explicit solutions for the tensor fluctuations of the metric. (author)

  15. Noether symmetries of a modified model in teleparallel gravity and a new approach for exact solutions

    Energy Technology Data Exchange (ETDEWEB)

    Tajahmad, Behzad [University of Tabriz, Faculty of Physics, Tabriz (Iran, Islamic Republic of)

    2017-04-15

    In this paper, we present the Noether symmetries of flat FRW spacetime in the context of a new action in teleparallel gravity which we construct based on the f(R) version. This modified action contains a coupling between the scalar field potential and magnetism. Also, we introduce an innovative approach, the beyond Noether symmetry (B.N.S.) approach, for exact solutions which carry more conserved currents than the Noether approach. By data analysis of the exact solutions, obtained from the Noether approach, late-time acceleration and phase crossing are realized, and some deep connections with observational data such as the age of the universe, the present value of the scale factor as well as the state and deceleration parameters are observed. In the B.N.S. approach, we consider the dark energy dominated era. (orig.)

  16. Noether symmetries of a modified model in teleparallel gravity and a new approach for exact solutions

    International Nuclear Information System (INIS)

    Tajahmad, Behzad

    2017-01-01

    In this paper, we present the Noether symmetries of flat FRW spacetime in the context of a new action in teleparallel gravity which we construct based on the f(R) version. This modified action contains a coupling between the scalar field potential and magnetism. Also, we introduce an innovative approach, the beyond Noether symmetry (B.N.S.) approach, for exact solutions which carry more conserved currents than the Noether approach. By data analysis of the exact solutions, obtained from the Noether approach, late-time acceleration and phase crossing are realized, and some deep connections with observational data such as the age of the universe, the present value of the scale factor as well as the state and deceleration parameters are observed. In the B.N.S. approach, we consider the dark energy dominated era. (orig.)

  17. Thermodynamics and phases in quantum gravity

    International Nuclear Information System (INIS)

    Husain, Viqar; Mann, R B

    2009-01-01

    We give an approach for studying quantum gravity effects on black hole thermodynamics. This combines a quantum framework for gravitational collapse with quasi-local definitions of energy and surface gravity. Our arguments suggest that (i) the specific heat of a black hole becomes positive after a phase transition near the Planck scale, (ii) its entropy acquires a logarithmic correction and (iii) the mass loss rate is modified such that Hawking radiation stops near the Planck scale. These results are due essentially to a realization of fundamental discreteness in quantum gravity, and are in this sense potentially theory independent.

  18. Development of the negative gravity anomaly of the 85 degrees E Ridge, northeastern Indian Ocean – A process oriented modelling approach

    Digital Repository Service at National Institute of Oceanography (India)

    Sreejith, K.M.; Radhakrishna, M.; Krishna, K.S.; Majumdar, T.J.

    The entire process is repeated for different effective elastic thickness (Te) values ranging from 0 to 25 km until a good fit is obtained between the observed and calculated gravity anomalies, with the RMS error as well as the amplitude and wavelength of the anomalies taken as the measure of goodness of fit. Following this approach, the crustal structure and elastic plate thickness (Te) beneath the ridge are constrained from the individually computed gravity anomaly contributions.

  19. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    Science.gov (United States)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases, and that for the cohesion of the foundation soil (c2) decreases, with increasing variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compare well with some of the existing deterministic and probabilistic methods and are found to be cost-effective. It is seen that if the variation of φ1 remains within 5 %, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8 %, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
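
    The Monte Carlo ingredient of the approach can be sketched for a single failure mode, sliding along the base, with the backfill friction angle, foundation cohesion and foundation friction angle treated as random variables. The geometry, unit weights and distribution parameters below are illustrative, not the design values of the paper.

      # Minimal sketch: Monte Carlo estimate of the sliding failure probability of a
      # gravity retaining wall with random soil properties (illustrative values only).
      import numpy as np

      rng = np.random.default_rng(42)
      N = 200_000
      H, B = 6.0, 2.5                        # wall height and base width (m)
      gamma_fill, gamma_wall = 18.0, 24.0    # unit weights (kN/m^3)
      W = gamma_wall * 0.5 * (0.5 + B) * H   # trapezoidal wall section, 0.5 m top width (kN/m)

      phi1 = np.radians(rng.normal(32.0, 2.0, N))          # backfill friction angle
      c2 = np.maximum(rng.normal(15.0, 5.0, N), 0.0)       # foundation cohesion (kPa), truncated at 0
      phi2 = np.radians(rng.normal(25.0, 1.5, N))          # foundation friction angle

      Ka = (1.0 - np.sin(phi1)) / (1.0 + np.sin(phi1))     # Rankine active coefficient
      Pa = 0.5 * Ka * gamma_fill * H**2                    # active thrust (kN/m)
      resistance = W * np.tan(phi2) + c2 * B               # base sliding resistance (kN/m)

      Pf = np.mean(resistance < Pa)
      print("estimated sliding failure probability:", Pf)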

  20. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
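
    In one dimension the idea can be sketched with an Euler-Maruyama scheme in which any proposed step that crosses the barrier is reflected back, so the simulated track respects the constraint; the parameters below are illustrative rather than fitted telemetry values.

      # Minimal sketch: reflected SDE (Ornstein-Uhlenbeck-like attraction to a home
      # range centre) with a reflecting barrier, here a shoreline at x = 0.
      import numpy as np

      rng = np.random.default_rng(2)
      beta, mu, sigma = 0.5, 2.0, 1.0   # attraction rate, centre of attraction, diffusion
      dt, nsteps = 0.01, 50_000

      x = 1.0
      path = np.empty(nsteps)
      for i in range(nsteps):
          x += beta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
          if x < 0.0:
              x = -x                    # reflect the proposed position back across the barrier
          path[i] = x

      print("minimum position (should be >= 0):", path.min())
      print("mean position:", path.mean())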

  1. One-loop quantum gravity repulsion in the early Universe.

    Science.gov (United States)

    Broda, Bogusław

    2011-03-11

    The perturbative quantum gravity formalism is applied to compute the lowest order corrections to the classical spatially flat cosmological Friedmann-Lemaître-Robertson-Walker solution (for radiation). The presented approach is analogous to the approach applied to compute quantum corrections to the Coulomb potential in electrodynamics, or rather to the approach applied to compute quantum corrections to the Schwarzschild solution in gravity. In the framework of standard perturbative quantum gravity, it is shown that the corrections to the classical deceleration, coming from the one-loop graviton vacuum polarization (self-energy), are free of the UV cutoff and have repulsive properties, opposite to the classical behavior, which are not negligible in the very early Universe. The repulsive "quantum forces" resemble those known from loop quantum cosmology.

  2. Progress towards a space-borne quantum gravity gradiometer

    Science.gov (United States)

    Yu, Nan; Kohel, James M.; Ramerez-Serrano, Jaime; Kellogg, James R.; Lim, Lawrence; Maleki, Lute

    2004-01-01

    Quantum interferometer gravity gradiometer for 3D mapping is a project for developing the technology of atom interferometer-based gravity sensor in space. The atom interferometer utilizes atomic particles as free fall test masses to measure inertial forces with unprecedented sensitivity and precision. It also allows measurements of the gravity gradient tensor components for 3D mapping of subsurface mass distribution. The overall approach is based on recent advances of laser cooling and manipulation of atoms in atomic and optical physics. Atom interferometers have been demonstrated in research laboratories for gravity and gravity gradient measurements. In this approach, atoms are first laser cooled to micro-kelvin temperatures. Then they are allowed to freefall in vacuum as true drag-free test masses. During the free fall, a sequence of laser pulses is used to split and recombine the atom waves to realize the interferometric measurements. We have demonstrated atom interferometer operation in the Phase I period, and we are implementing the second generation for a complete gradiometer demonstration unit in the laboratory. Along with this development, we are developing technologies at component levels that will be more suited for realization of a space instrument. We will present an update of these developments and discuss the future directions of the quantum gravity gradiometer project.

  3. Distance measurement and wave dispersion in a Liouville-string approach to quantum gravity

    CERN Document Server

    Amelino-Camelia, G; Mavromatos, Nikolaos E; Nanopoulos, Dimitri V

    1997-01-01

    Within a Liouville approach to non-critical string theory, we discuss space-time foam effects on the propagation of low-energy particles. We find an induced frequency-dependent dispersion in the propagation of a wave packet, and observe that this would affect the outcome of measurements involving low-energy particles as probes. In particular, the maximum possible order of magnitude of the space-time foam effects would give rise to an error in the measurement of distance comparable to that independently obtained in some recent heuristic quantum-gravity analyses. We also briefly compare these error estimates with the precision of astrophysical measurements.

  4. Integrating stochastic time-dependent travel speed in solution methods for the dynamic dial-a-ride problem.

    Science.gov (United States)

    Schilde, M; Doerner, K F; Hartl, R F

    2014-10-01

    In urban areas, logistic transportation operations often run into problems because travel speeds change, depending on the current traffic situation. If not accounted for, time-dependent and stochastic travel speeds frequently lead to missed time windows and thus poorer service. Especially in the case of passenger transportation, it often leads to excessive passenger ride times as well. Therefore, time-dependent and stochastic influences on travel speeds are relevant for finding feasible and reliable solutions. This study considers the effect of exploiting statistical information available about historical accidents, using stochastic solution approaches for the dynamic dial-a-ride problem (dynamic DARP). The authors propose two pairs of metaheuristic solution approaches, each consisting of a deterministic method (average time-dependent travel speeds for planning) and its corresponding stochastic version (exploiting stochastic information while planning). The results, using test instances with up to 762 requests based on a real-world road network, show that in certain conditions, exploiting stochastic information about travel speeds leads to significant improvements over deterministic approaches.

  5. Stochastic approach for round-off error analysis in computing: application to signal processing algorithms

    International Nuclear Information System (INIS)

    Vignes, J.

    1986-01-01

    Any result of algorithms provided by a computer always contains an error resulting from floating-point arithmetic round-off error propagation. Furthermore signal processing algorithms are also generally performed with data containing errors. The permutation-perturbation method, also known under the name CESTAC (controle et estimation stochastique d'arrondi de calcul) is a very efficient practical method for evaluating these errors and consequently for estimating the exact significant decimal figures of any result of algorithms performed on a computer. The stochastic approach of this method, its probabilistic proof, and the perfect agreement between the theoretical and practical aspects are described in this paper [fr
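
    A drastically simplified imitation of the permutation-perturbation idea is sketched below: the same computation is re-run with each intermediate result randomly shifted at the level of its rounding error, and the number of trustworthy significant decimal digits is estimated from the scatter of the results. Genuine CESTAC perturbs the rounding of every arithmetic operation; this sketch only mimics that behaviour at a few chosen points.

      # Minimal sketch in the spirit of CESTAC: random last-place perturbations of
      # intermediate results reveal how many decimal digits of the output are reliable.
      import numpy as np

      rng = np.random.default_rng(9)
      eps = np.finfo(float).eps

      def perturb(v):
          # randomly shift a value by roughly one unit in the last place
          return v * (1.0 + eps * rng.uniform(-1.0, 1.0))

      def cancellation_prone(x, p=lambda v: v):
          # (1 - cos x) / x**2, which loses digits to cancellation for small x
          c = p(np.cos(x))
          return p(p(1.0 - c) / p(x * x))

      def significant_digits(x, runs=30):
          results = np.array([cancellation_prone(x, perturb) for _ in range(runs)])
          mean, std = results.mean(), results.std()
          return (16.0 if std == 0.0 else np.log10(abs(mean) / std)), mean

      for x in (1e-1, 1e-4, 1e-7):
          digits, value = significant_digits(x)
          print(f"x = {x:8.1e}   value = {value:.10f}   ~significant digits = {digits:4.1f}")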

  6. A classical approach to higher-derivative gravity

    International Nuclear Information System (INIS)

    Accioly, A.J.

    1988-01-01

    Two classical routes towards higher-derivative gravity theory are described. The first one is a geometrical route, starting from first principles. The second route is a formal one, and is based on a recent theorem by Castagnino et al. [J. Math. Phys. 28 (1987) 1854]. A cosmological solution of the higher-derivative field equations is exhibited which, in a classical framework, singles out this gravitation theory. (author) [pt]

  7. A Combined Gravity Compensation Method for INS Using the Simplified Gravity Model and Gravity Database.

    Science.gov (United States)

    Zhou, Xiao; Yang, Gongliu; Wang, Jing; Wen, Zeyang

    2018-05-14

    In recent decades, gravity compensation has become an important way to reduce the position error of an inertial navigation system (INS), especially for a high-precision INS, because of the extensive application of high-precision inertial sensors (accelerometers and gyros). This paper first derives the INS solution error in the presence of gravity disturbance and simulates the results. It then proposes a combined gravity compensation method using a simplified gravity model and a gravity database. This new combined method consists of two steps. Step 1 subtracts the normal gravity using a simplified gravity model. Step 2 first obtains the gravity disturbance along the trajectory of the carrier with the help of ELM training based on measured gravity data (provided by the Institute of Geodesy and Geophysics, Chinese Academy of Sciences), and then feeds it into the error equations of the INS, which account for the gravity disturbance, to further improve the navigation accuracy. The effectiveness and feasibility of this new gravity compensation method for the INS are verified through vehicle tests in two different regions: one in flat terrain with mild gravity variation and the other in complex terrain with strong gravity variation. In the 2 h vehicle tests, the positioning accuracy improved by 20% and 38%, respectively, after gravity compensation by the proposed method.
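
    Step 1 of the combined method can be sketched with the Somigliana closed-form normal gravity formula (using commonly quoted WGS-84 constants) plus a free-air height correction; the ELM-trained gravity-disturbance database of step 2 is not reproduced here, and the measured value below is purely illustrative.

      # Minimal sketch: latitude- and height-dependent normal gravity to be subtracted
      # from a sensed gravity magnitude, leaving the residual (disturbance) for step 2.
      import numpy as np

      GAMMA_E = 9.7803253359      # equatorial normal gravity, m/s^2 (WGS-84)
      K = 1.93185265241e-3        # Somigliana constant (WGS-84)
      E2 = 6.69437999014e-3       # first eccentricity squared (WGS-84)
      FREE_AIR = 3.086e-6         # free-air gradient, (m/s^2) per metre of height

      def normal_gravity(lat_deg, height_m=0.0):
          s2 = np.sin(np.radians(lat_deg)) ** 2
          gamma0 = GAMMA_E * (1.0 + K * s2) / np.sqrt(1.0 - E2 * s2)
          return gamma0 - FREE_AIR * height_m

      measured = 9.8013           # sensed gravity magnitude (m/s^2), illustrative only
      lat, h = 39.9, 50.0
      disturbance = measured - normal_gravity(lat, h)
      print(f"normal gravity  : {normal_gravity(lat, h):.5f} m/s^2")
      print(f"gravity residual: {disturbance * 1e5:.1f} mGal")   # 1 mGal = 1e-5 m/s^2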

  8. Optimal Control Inventory Stochastic With Production Deteriorating

    Science.gov (United States)

    Affandi, Pardi

    2018-01-01

    In this paper, we use an optimal control approach to determine the optimal production rate. Most inventory production models deal with a single item. We first build a stochastic mathematical model of the inventory, in which we also assume that the items are held in the same store. The mathematical model of the inventory problem can be deterministic or stochastic. This research discusses how to formulate the stochastic model and how to solve the inventory model using optimal control techniques. The main tool for obtaining the necessary optimality conditions is the Pontryagin maximum principle, which involves the Hamiltonian function. In this way we obtain the optimal production rate in a production-inventory system whose items are subject to deterioration.

  9. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    Science.gov (United States)

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models, known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models, to predict the boron content in the Büyük Menderes catchment. Initially, Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models, and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of the 3-year (2002-2004) observed data versus the predicted data from the selected best models shows that the boron model from the ARIMA modeling approaches could be used in a safe manner, since the predicted values from these models preserve the basic
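
    The Box-Jenkins workflow described above can be sketched with the statsmodels library (assumed to be available) on a synthetic stand-in series, since the boron data are not reproduced here: fit a few candidate ARIMA orders, keep the one with the lowest AIC, and inspect the residuals.

      # Minimal sketch of ARIMA order selection by AIC on a synthetic monthly series.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(4)
      n = 120
      t = np.arange(n)
      noise = np.convolve(rng.normal(0.0, 0.05, n + 3), [1.0, 0.6, 0.3, 0.1], mode="valid")
      series = pd.Series(0.5 + 0.1 * np.sin(2.0 * np.pi * t / 12.0) + noise)

      best_fit, best_order = None, None
      for order in [(1, 0, 0), (1, 0, 1), (2, 0, 1), (0, 1, 1)]:
          fit = ARIMA(series, order=order).fit()
          if best_fit is None or fit.aic < best_fit.aic:
              best_fit, best_order = fit, order

      print("selected order:", best_order, "  AIC:", round(best_fit.aic, 1))
      print("residual mean :", round(best_fit.resid.mean(), 4))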

  10. Dynamics and entanglement in spherically symmetric quantum gravity

    International Nuclear Information System (INIS)

    Husain, Viqar; Terno, Daniel R.

    2010-01-01

    The gravity-scalar field system in spherical symmetry provides a natural setting for exploring gravitational collapse and its aftermath in quantum gravity. In a canonical approach, we give constructions of the Hamiltonian operator, and of semiclassical states peaked on constraint-free data. Such states provide explicit examples of physical states. We also show that matter-gravity entanglement is an inherent feature of physical states, whether or not there is a black hole.

  11. Stochastic dark energy from inflationary quantum fluctuations

    Science.gov (United States)

    Glavan, Dražen; Prokopec, Tomislav; Starobinsky, Alexei A.

    2018-05-01

    We study the quantum backreaction from inflationary fluctuations of a very light, non-minimally coupled spectator scalar and show that it is a viable candidate for dark energy. The problem is solved by suitably adapting the formalism of stochastic inflation. This allows us to self-consistently account for the backreaction on the background expansion rate of the Universe where its effects are large. This framework is equivalent to that of semiclassical gravity in which matter vacuum fluctuations are included at the one loop level, but purely quantum gravitational fluctuations are neglected. Our results show that dark energy in our model can be characterized by a distinct effective equation of state parameter (as a function of redshift) which allows for testing of the model at the level of the background.
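
    The stochastic-inflation formalism referred to above is built around a Langevin equation for the coarse-grained (super-Hubble) part of the spectator field; in its simplest, minimally coupled, slow-roll form it reads

      \frac{\mathrm{d}\phi}{\mathrm{d}N} = -\frac{V'(\phi)}{3H^{2}} + \frac{H}{2\pi}\,\xi(N),
      \qquad \langle \xi(N)\,\xi(N') \rangle = \delta(N - N'),

    where N is the number of e-folds and the white noise ξ encodes modes crossing the Hubble radius. The non-minimal coupling and the backreaction treatment of the paper modify the drift term and couple this equation to the background expansion rate, so the display above is only the schematic starting point.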

  12. A stochastic-deterministic approach for evaluation of uncertainty in the predicted maximum fuel bundle enthalpy in a CANDU postulated LBLOCA event

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2014-07-01

    A stochastic-deterministic approach based on representation of uncertainties by subjective probabilities is proposed for evaluation of bounding values of functional failure probability and assessment of probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA and evaluation of the possibility distribution function of maximum bundle enthalpy considering the reactor physics part of LBLOCA power pulse simulation only. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure driven by the computer code DAKOTA to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions with deterministic constraints and fixed transient bundle-wise thermal hydraulic conditions. With this information, a bounding estimate of functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in evaluation of core damage frequency. (author)

  13. Quantum gravity and quantum cosmology

    CERN Document Server

    Papantonopoulos, Lefteris; Siopsis, George; Tsamis, Nikos

    2013-01-01

    Quantum gravity has developed into a fast-growing subject in physics and it is expected that probing the high-energy and high-curvature regimes of gravitating systems will shed some light on how to eventually achieve an ultraviolet complete quantum theory of gravity. Such a theory would provide the much needed information about fundamental problems of classical gravity, such as the initial big-bang singularity, the cosmological constant problem, Planck scale physics and the early-time inflationary evolution of our Universe.   While in the first part of this book concepts of quantum gravity are introduced and approached from different angles, the second part discusses these theories in connection with cosmological models and observations, thereby exploring which types of signatures of modern and mathematically rigorous frameworks can be detected by experiments. The third and final part briefly reviews the observational status of dark matter and dark energy, and introduces alternative cosmological models.   ...

  14. Improved operating strategies for uranium extraction: a stochastic simulation

    International Nuclear Information System (INIS)

    Broekman, B.R.

    1986-01-01

    Deterministic and stochastic simulations of a Western Transvaal uranium process are used in this research report to determine more profitable uranium plant operating strategies and to gauge the potential financial benefits of automatic process control. The deterministic simulation model was formulated using empirical and phenomenological process models. The model indicated that profitability increases significantly as the uranium leaching strategy becomes harsher. The stochastic simulation models use process variable distributions corresponding to manually and automatically controlled conditions to investigate the economic gains that may be obtained if a change is made from manual to automatic control of two important process variables. These lognormally distributed variables are the pachuca 1 sulphuric acid concentration and the ferric to ferrous ratio. The stochastic simulations show that automatic process control is justifiable in certain cases. Where the leaching strategy is relatively harsh, such as that in operation during January 1986, it is not possible to justify an automatic control system. Automatic control is, however, justifiable if a relatively mild leaching strategy is adopted. The stochastic and deterministic simulations represent two different approaches to uranium process modelling. This study has indicated the necessity for each approach to be applied in the correct context. It is contended that incorrect conclusions may have been drawn by other investigators in South Africa who failed to consider the two approaches separately

  15. 3D correlation imaging of the vertical gradient of gravity data

    International Nuclear Information System (INIS)

    Guo, Lianghui; Meng, Xiaohong; Shi, Lei

    2011-01-01

    We present a new 3D correlation imaging approach for vertical gradient of gravity data for deriving a 3D equivalent mass distribution in the subsurface. In this approach, we divide the subsurface space into a 3D regular grid, and then at each grid node calculate a cross correlation between the vertical gradient of the observed gravity data and the theoretical gravity vertical gradient due to a point mass source. The resultant correlation coefficients are used to describe the equivalent mass distribution in a probability sense. We simulate a geological syncline model intruded by a dike and later broken by two vertical faults. The vertical gradient of gravity anomaly of the model is calculated and used to test the approach. The results demonstrate that the equivalent mass distribution derived by the approach reflects the basic geological structures of the model. We also test the approach on the transformed vertical gradient of real Bouguer gravity data from a geothermal survey area in Northern China. The thermal reservoirs are located in the lower portion of the sedimentary basin. From the resultant equivalent mass distribution, we produce the depth distribution of the bottom interface of the basin and predict possible hidden faults present in the basin
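
    The correlation-imaging step can be sketched in profile (2D) form: at every trial subsurface node the observed vertical gravity gradient is correlated with the theoretical gradient of a point mass placed at that node, and the normalized coefficients form the image. The synthetic "observed" profile below is generated from two buried point masses; it is not data from the geothermal survey.

      # Minimal sketch: correlation imaging of a vertical gravity gradient profile
      # against point-mass kernels on a grid of trial source positions.
      import numpy as np

      x_obs = np.linspace(-50.0, 50.0, 201)        # observation points along a profile (m)

      def vzz_kernel(x, x0, z0):
          # shape of the vertical gravity gradient of a point mass buried at (x0, z0)
          dx = x - x0
          return (2.0 * z0**2 - dx**2) / (dx**2 + z0**2) ** 2.5

      # synthetic data: two point masses with different depths and strengths
      observed = 3.0 * vzz_kernel(x_obs, -15.0, 8.0) + 1.5 * vzz_kernel(x_obs, 20.0, 12.0)

      xs = np.linspace(-40.0, 40.0, 81)
      zs = np.linspace(2.0, 25.0, 47)
      image = np.zeros((zs.size, xs.size))
      for i, z0 in enumerate(zs):
          for j, x0 in enumerate(xs):
              k = vzz_kernel(x_obs, x0, z0)
              image[i, j] = np.dot(observed, k) / (np.linalg.norm(observed) * np.linalg.norm(k))

      i, j = np.unravel_index(np.argmax(image), image.shape)
      print(f"strongest correlation {image[i, j]:.2f} at x = {xs[j]:.0f} m, depth = {zs[i]:.1f} m")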

  16. Bond-based linear indices of the non-stochastic and stochastic edge-adjacency matrix. 1. Theory and modeling of ChemPhys properties of organic molecules.

    Science.gov (United States)

    Marrero-Ponce, Yovani; Martínez-Albelo, Eugenio R; Casañola-Martín, Gerardo M; Castillo-Garit, Juan A; Echevería-Díaz, Yunaimy; Zaldivar, Vicente Romero; Tygat, Jan; Borges, José E Rodriguez; García-Domenech, Ramón; Torrens, Francisco; Pérez-Giménez, Facundo

    2010-11-01

    Novel bond-level molecular descriptors are proposed, based on linear maps similar to the ones defined in algebra theory. The kth edge-adjacency matrix (E(k)) denotes the matrix of bond linear indices (non-stochastic) with regard to the canonical basis set. The kth stochastic edge-adjacency matrix, ES(k), is here proposed as a new molecular representation easily calculated from E(k). Then, the kth stochastic bond linear indices are calculated using ES(k) as operators of linear transformations. In both cases, the bond-type formalism is developed. The kth non-stochastic and stochastic total linear indices are calculated by adding the kth non-stochastic and stochastic bond linear indices, respectively, of all bonds in the molecule. First, the new bond-based molecular descriptors (MDs) are tested for suitability for QSPRs by analyzing regressions of the novel indices for selected physicochemical properties of octane isomers (first round). The general performance of the new descriptors in these QSPR studies is evaluated with regard to the well-known sets of 2D/3D MDs. From the analysis, we can conclude that the non-stochastic and stochastic bond-based linear indices have an overall good modeling capability, proving their usefulness in QSPR studies. Later, the novel bond-level MDs are also used for the description and prediction of the boiling point of 28 alkyl-alcohols (second round), and for the modeling of the specific rate constant (log k), partition coefficient (log P), as well as the antibacterial activity of 34 derivatives of 2-furylethylenes (third round). The comparison with other approaches (edge- and vertex-based connectivity indices, total and local spectral moments, and quantum chemical descriptors as well as E-state/biomolecular encounter parameters) shows the good behavior of our method in these QSPR studies. Finally, the approach described in this study appears to be a very promising structural invariant, useful not only for QSPR studies but also for similarity

  17. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    Science.gov (United States)

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model in simplest form with exponential dwell times has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function is required in general to characterize gene switching.
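
    A hedged Monte Carlo sketch of the kind of two-state switching model described above, with user-specified (here gamma-distributed, hence nonexponential) dwell times; the transcription, degradation, and dwell-time parameters are illustrative assumptions, and the paper's own population-density solver is not reproduced here:

      import numpy as np

      rng = np.random.default_rng(1)
      lam, delta = 10.0, 1.0                   # transcription rate in the active state, mRNA degradation rate (assumed)
      draw_T0 = lambda: rng.gamma(2.0, 1.0)    # inactive-state dwell time T_0 (nonexponential example)
      draw_T1 = lambda: rng.gamma(2.0, 0.5)    # active-state dwell time T_1

      def simulate(t_end=5000.0, max_m=500):
          t, active, m = 0.0, False, 0
          occ = np.zeros(max_m + 1)            # time spent at each mRNA copy number
          while t < t_end:
              t_switch = t + (draw_T1() if active else draw_T0())
              while t < t_switch:
                  birth = lam if active else 0.0
                  total = birth + delta * m
                  dt = rng.exponential(1.0 / total) if total > 0.0 else np.inf
                  if t + dt >= t_switch:       # no reaction before the gene switches state
                      occ[m] += t_switch - t
                      t = t_switch
                  else:
                      occ[m] += dt
                      t += dt
                      m += 1 if rng.random() < birth / total else -1
              active = not active
          return occ / occ.sum()               # empirical steady-state mRNA mass function

      print(simulate()[:20])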

  18. A philosophical approach to quantum field theory

    CERN Document Server

    Öttinger, Hans Christian

    2015-01-01

    This text presents an intuitive and robust mathematical image of fundamental particle physics based on a novel approach to quantum field theory, which is guided by four carefully motivated metaphysical postulates. In particular, the book explores a dissipative approach to quantum field theory, which is illustrated for scalar field theory and quantum electrodynamics, and proposes an attractive explanation of the Planck scale in quantum gravity. Offering a radically new perspective on this topic, the book focuses on the conceptual foundations of quantum field theory and ontological questions. It also suggests a new stochastic simulation technique in quantum field theory which is complementary to existing ones. Encouraging rigor in a field containing many mathematical subtleties and pitfalls, this text is a helpful companion for students of physics and philosophers interested in quantum field theory, and it allows readers to gain an intuitive rather than a formal understanding.

  19. Stochastic Modelling of Shiroro River Stream flow Process

    OpenAIRE

    Musa, J. J

    2013-01-01

    Economists, social scientists and engineers provide insights into the drivers of anthropogenic climate change and the options for adaptation and mitigation, and yet other scientists, including geographers and biologists, study the impacts of climate change. This project concentrates mainly on the discharge from the Shiroro River. A stochastic approach is presented for modeling a time series by an Autoregressive Moving Average model (ARMA). The development and use of a stochastic stream flow m...
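
    A short hedged sketch of the ARMA fitting step mentioned above, using a synthetic monthly discharge series as a stand-in (the real Shiroro data, model order, and units are not reproduced here):

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      # Synthetic stand-in for a monthly discharge record with a seasonal cycle (assumed values).
      rng = np.random.default_rng(2)
      n = 240
      seasonal = 100 + 40 * np.sin(2 * np.pi * np.arange(n) / 12)
      flow = seasonal + rng.normal(scale=15, size=n)

      # Fit an ARMA(1,1) model (an ARIMA with d = 0) and forecast the next 12 months.
      model = ARIMA(flow, order=(1, 0, 1)).fit()
      print(model.summary())
      print(model.forecast(steps=12))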

  20. Quantum gravity and the renormalisation group

    International Nuclear Information System (INIS)

    Litim, D.

    2011-01-01

    The Standard Model of particle physics is remarkably successful in describing three out of the four known fundamental forces of Nature. But what is up with gravity? Attempts to understand quantum gravity on the same footing as the other forces still face problems. Some time ago, it was pointed out that gravity may very well exist as a fundamental quantum field theory provided its high-energy behaviour is governed by a fixed point under the renormalisation group. In recent years, this 'asymptotic safety' scenario has found significant support thanks to numerous renormalisation group studies, lattice simulations, and new ideas within perturbation theory. The lectures give an introduction to the renormalisation group approach for quantum gravity, aimed at those who haven't met the topic before. After an introduction and overview, the key ideas and concepts of asymptotic safety for gravity are fleshed out. Results for gravitational high-energy fixed points and scaling exponents are discussed as well as key features of the gravitational phase diagram. The survey concludes with some phenomenological implications of fixed point gravity including the physics of black holes and particle physics beyond the Standard Model. (author)

  1. An Adynamical, Graphical Approach to Quantum Gravity and Unification

    Science.gov (United States)

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    We use graphical field gradients in an adynamical, background independent fashion to propose a new approach to quantum gravity (QG) and unification. Our proposed reconciliation of general relativity (GR) and quantum field theory (QFT) is based on a modification of their graphical instantiations, i.e. Regge calculus and lattice gauge theory (LGT), respectively, which we assume are fundamental to their continuum counterparts. Accordingly, the fundamental structure is a graphical amalgam of space, time, and sources (in parlance of QFT) called a "space-time source element". These are fundamental elements of space, time, and sources, not source elements in space and time. The transition amplitude for a space-time source element is computed using a path integral with discrete graphical action. The action for a space-time source element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint (AGC) between sources, the space-time metric, and the energy-momentum content of the element, rather than a dynamical law for time-evolved entities. In this view, one manifestation of quantum gravity becomes evident when, for example, a single space-time source element spans adjoining simplices of the Regge calculus graph. Thus, energy conservation for the space-time source element includes contributions to the deficit angles between simplices. This idea is used to correct proper distance in the Einstein-de Sitter (EdS) cosmology model yielding a fit of the Union2 Compilation supernova data that matches ΛCDM without having to invoke accelerating expansion or dark energy. A similar modification to LGT results in an adynamical account of quantum

  2. Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming

    Energy Technology Data Exchange (ETDEWEB)

    Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo

    2013-05-23

    This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on the optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6 percent.
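
    A toy two-stage stochastic linear program in the same spirit (not the DER-CAM formulation): the battery schedule is the first-stage decision, grid purchases per fuel-cell-availability scenario are second-stage recourse, and all loads, prices, probabilities, and battery parameters are invented for illustration:

      import numpy as np
      import cvxpy as cp

      T, S = 24, 3
      load = 500 + 100 * np.sin(2 * np.pi * (np.arange(T) - 6) / 24)            # kW (assumed)
      price = np.where((np.arange(T) >= 8) & (np.arange(T) < 20), 0.20, 0.10)   # $/kWh (assumed)
      fc = np.vstack([np.full(T, 800.0),                      # scenario 1: fuel cell fully available
                      np.where(np.arange(T) < 12, 800.0, 0.0),# scenario 2: afternoon outage
                      np.zeros(T)])                           # scenario 3: full-day outage
      prob = np.array([0.6, 0.3, 0.1])

      cap, p_max, eta = 1000.0, 250.0, 0.9                    # battery kWh, kW, charging efficiency (assumed)
      ch = cp.Variable(T, nonneg=True)                        # first stage: charging schedule
      dis = cp.Variable(T, nonneg=True)                       # first stage: discharging schedule
      soc = cp.Variable(T + 1, nonneg=True)
      grid = cp.Variable((S, T), nonneg=True)                 # second stage: grid purchase per scenario

      cons = [soc[0] == 0.5 * cap, soc <= cap, ch <= p_max, dis <= p_max,
              soc[1:] == soc[:-1] + eta * ch - dis]
      for s in range(S):
          cons += [grid[s] + fc[s] + dis >= load + ch]        # hourly power balance in every scenario
      objective = cp.Minimize(cp.sum(cp.multiply(prob[:, None] * price[None, :], grid)))
      cp.Problem(objective, cons).solve()
      print("expected daily energy cost:", objective.value)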

  3. Noether's stars in f (R) gravity

    Science.gov (United States)

    De Laurentis, Mariafelicia

    2018-05-01

    The Noether Symmetry Approach can be used to construct spherically symmetric solutions in f (R) gravity. Specifically, the Noether conserved quantity is related to the gravitational mass and a gravitational radius that reduces to the Schwarzschild radius in the limit f (R) → R. We show that it is possible to construct the M-R relation for neutron stars depending on the Noether conserved quantity and the associated gravitational radius. This approach enables the recovery of extremely massive stars that could not be stable in the standard Tolman-Oppenheimer-Volkoff treatment based on General Relativity. Examples are given for some power-law f (R) gravity models.

  4. Dynamical Regge calculus as lattice gravity

    International Nuclear Information System (INIS)

    Hagura, Hiroyuki

    2001-01-01

    We propose a hybrid approach to lattice quantum gravity by simultaneously combining the dynamical triangulation with the Regge calculus, called the dynamical Regge calculus (DRC). In this approach lattice diffeomorphism is realized as an exact symmetry by some hybrid (k, l) moves on the simplicial lattice. Numerical study of 3D pure gravity shows that the entropy of the DRC is not exponentially bounded if we adopt the uniform measure Π_i dl_i. On the other hand, using the scale-invariant measure Π_i dl_i/l_i, we can calculate observables and observe a large hysteresis between two phases that indicates the first-order nature of the phase transition.

  5. Stochastic calculus in physics

    International Nuclear Information System (INIS)

    Fox, R.F.

    1987-01-01

    The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations

  6. Stochastic development regression on non-linear manifolds

    DEFF Research Database (Denmark)

    Kühnel, Line; Sommer, Stefan Horst

    2017-01-01

    We introduce a regression model for data on non-linear manifolds. The model describes the relation between a set of manifold valued observations, such as shapes of anatomical objects, and Euclidean explanatory variables. The approach is based on stochastic development of Euclidean diffusion...... processes to the manifold. Defining the data distribution as the transition distribution of the mapped stochastic process, parameters of the model, the non-linear analogue of design matrix and intercept, are found via maximum likelihood. The model is intrinsically related to the geometry encoded...

  7. Newtonian gravity in loop quantum gravity

    OpenAIRE

    Smolin, Lee

    2010-01-01

    We apply a recent argument of Verlinde to loop quantum gravity, to conclude that Newton's law of gravity emerges in an appropriate limit and setting. This is possible because the relationship between area and entropy is realized in loop quantum gravity when boundaries are imposed on a quantum spacetime.

  8. Stochastic dynamic analysis of marine risers considering Gaussian system uncertainties

    Science.gov (United States)

    Ni, Pinghe; Li, Jun; Hao, Hong; Xia, Yong

    2018-03-01

    This paper performs the stochastic dynamic response analysis of marine risers with material uncertainties, i.e. in the mass density and elastic modulus, by using the Stochastic Finite Element Method (SFEM) and a model reduction technique. These uncertainties are assumed to have Gaussian distributions. The random mass density and elastic modulus are represented by using the Karhunen-Loève (KL) expansion. The Polynomial Chaos (PC) expansion is adopted to represent the vibration response because the covariance of the output is unknown. Model reduction based on the Iterated Improved Reduced System (IIRS) technique is applied to eliminate the PC coefficients of the slave degrees of freedom and reduce the dimension of the stochastic system. Monte Carlo Simulation (MCS) is conducted to obtain the reference response statistics. Two numerical examples are studied in this paper. The response statistics from the proposed approach are compared with those from MCS. It is noted that the computational time is significantly reduced while the accuracy is maintained. The results demonstrate the efficiency of the proposed approach for stochastic dynamic response analysis of marine risers.
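
    A minimal sketch of the Karhunen-Loève representation of a 1D Gaussian material field, of the kind used above for the random modulus; the riser length, correlation length, variance, and number of retained modes are assumed values:

      import numpy as np

      # Discretise a riser of length L into n nodes and build an exponential covariance model
      # for the (normalised) elastic modulus field.
      L, n, corr_len, sigma = 100.0, 200, 20.0, 0.1
      x = np.linspace(0.0, L, n)
      C = sigma ** 2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

      # Karhunen-Loève expansion: eigen-decompose the covariance and keep the leading modes.
      eigval, eigvec = np.linalg.eigh(C)
      order = np.argsort(eigval)[::-1]
      eigval, eigvec = eigval[order], eigvec[:, order]
      m = 10                                               # number of retained KL terms

      def sample_field(rng):
          xi = rng.standard_normal(m)                      # independent standard Gaussian coefficients
          return 1.0 + eigvec[:, :m] @ (np.sqrt(eigval[:m]) * xi)   # mean 1 plus KL fluctuation

      rng = np.random.default_rng(4)
      realisation = sample_field(rng)                      # one realisation of the normalised modulus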

  9. Facilitated diffusion in a crowded environment: from kinetics to stochastics

    International Nuclear Information System (INIS)

    Meroz, Yasmine; Klafter, Joseph; Eliazar, Iddo

    2009-01-01

    Facilitated diffusion is a fundamental search process used to describe the problem of a searcher protein finding a specific target site over a very large DNA strand. In recent years macromolecular crowding has been recognized to affect this search process. In this paper, we bridge between two different modelling methodologies of facilitated diffusion: the physics-oriented kinetic approach, which yields the reaction rate of the search process, and the probability-oriented stochastic approach, which yields the probability distribution of the search duration. We translate the former approach to the latter, ascertaining that the two approaches yield coinciding results, both with and without macromolecular crowding. We further show that the stochastic approach markedly generalizes the kinetic approach by accommodating a vast array of search mechanisms, including mechanisms having no reaction rates, and thus being beyond the realm of the kinetic approach.

  10. Modeling stochasticity and robustness in gene regulatory networks.

    Science.gov (United States)

    Garg, Abhishek; Mohanram, Kartik; Di Cara, Alessandro; De Micheli, Giovanni; Xenarios, Ioannis

    2009-06-15

    Understanding gene regulation in biological processes and modeling the robustness of underlying regulatory networks is an important problem that is currently being addressed by computational systems biologists. Lately, there has been a renewed interest in Boolean modeling techniques for gene regulatory networks (GRNs). However, due to their deterministic nature, it is often difficult to identify whether these modeling approaches are robust to the addition of stochastic noise that is widespread in gene regulatory processes. Stochasticity in Boolean models of GRNs has been addressed relatively sparingly in the past, mainly by flipping the expression of genes between different expression levels with a predefined probability. This stochasticity in nodes (SIN) model leads to an over-representation of noise in GRNs and hence non-correspondence with biological observations. In this article, we introduce the stochasticity in functions (SIF) model for simulating stochasticity in Boolean models of GRNs. By providing biological motivation behind the use of the SIF model and applying it to the T-helper and T-cell activation networks, we show that the SIF model provides more biologically robust results than the existing SIN model of stochasticity in GRNs. Algorithms are made available under our Boolean modeling toolbox, GenYsis. The software binaries can be downloaded from http://si2.epfl.ch/~garg/genysis.html.

  11. Chiral gravity, log gravity, and extremal CFT

    International Nuclear Information System (INIS)

    Maloney, Alexander; Song Wei; Strominger, Andrew

    2010-01-01

    We show that the linearizations of all exact solutions of classical chiral gravity around the AdS_3 vacuum have positive energy. Nonchiral and negative-energy solutions of the linearized equations are infrared divergent at second order, and so are removed from the spectrum. In other words, chirality is confined and the equations of motion have linearization instabilities. We prove that the only stationary, axially symmetric solutions of chiral gravity are BTZ black holes, which have positive energy. It is further shown that classical log gravity--the theory with logarithmically relaxed boundary conditions--has finite asymptotic symmetry generators but is not chiral and hence may be dual at the quantum level to a logarithmic conformal field theory (CFT). Moreover, we show that log gravity contains chiral gravity within it as a decoupled charge superselection sector. We formally evaluate the Euclidean sum over geometries of chiral gravity and show that it gives precisely the holomorphic extremal CFT partition function. The modular invariance and integrality of the expansion coefficients of this partition function are consistent with the existence of an exact quantum theory of chiral gravity. We argue that the problem of quantizing chiral gravity is the holographic dual of the problem of constructing an extremal CFT, while quantizing log gravity is dual to the problem of constructing a logarithmic extremal CFT.

  12. Noncausal stochastic calculus

    CERN Document Server

    Ogawa, Shigeyoshi

    2017-01-01

    This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô Calculus is essentially based on the "hypothesis of causality", asking random functions to be adapted to a natural filtration generated by Brownian motion or more generally by square integrable martingale. The intention in this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such theory of noncausal stochastic calculus but ...

  13. Fast radio bursts and the stochastic lifetime of black holes in quantum gravity

    Science.gov (United States)

    Barrau, Aurélien; Moulin, Flora; Martineau, Killian

    2018-03-01

    Nonperturbative quantum gravity effects might allow a black-to-white hole transition. We revisit this increasingly popular hypothesis by taking into account the fundamentally random nature of the bouncing time. We show that if the primordial mass spectrum of black holes is highly peaked, the expected signal can in fact match the wavelength of the observed fast radio bursts. On the other hand, if the primordial mass spectrum is wide and smooth, clear predictions are suggested and the sensitivity to the shape of the spectrum is studied.

  14. Stochastic energy balancing in substation energy management

    Directory of Open Access Journals (Sweden)

    Hassan Shirzeh

    2015-12-01

    Full Text Available In the current research, a smart grid is considered as a network of distributed interacting nodes represented by renewable energy sources, storage and loads. The source nodes become active or inactive in a stochastic manner due to the intermittent nature of natural resources such as wind and solar irradiance. Prediction and stochastic modelling of electrical energy flow is a critical task in such a network in order to achieve load levelling and/or peak shaving and to minimise the fluctuation between off-peak and peak energy demand. An effective approach is proposed to model and administer the behaviour of source nodes in this grid through a scheduling strategy control algorithm using the historical data collected from the system. The stochastic model predicts future power consumption/injection to determine the power required for storage components. The stochastic models developed based on the Box-Jenkins method predict the most efficient state of the electrical energy flow between a distribution network and nodes and minimise the peak demand and off-peak consumption of acquiring electrical energy from the main grid. The performance of the models is validated against the autoregressive integrated moving average (ARIMA) and Markov chain models used in previous work. The results demonstrate that the proposed method outperforms both the ARIMA and the Markov chain model in terms of forecast accuracy. Results are presented, the strengths and limitations of the approach are discussed, and possible future work is described.

  15. General relativity and gauge gravity theories of higher order

    International Nuclear Information System (INIS)

    Konopleva, N.P.

    1998-01-01

    It is a short review of today's gauge gravity theories and their relations with Einstein General Relativity. The conceptions of construction of the gauge gravity theories with higher derivatives are analyzed. GR is regarded as the gauge gravity theory corresponding to the choice of G_∞4 as the local gauge symmetry group and the symmetric rank-two tensor g_μν as the field variable. Using the mathematical technique common to all fundamental interactions (namely the variational formalism for infinite Lie groups), we can obtain Einstein's theory as the gauge theory without any changes. All other gauge approaches lead to non-Einstein theories of gravity. But the above-mentioned mathematical technique permits us to construct gauge gravity theories of higher order (for instance SO(3,1)-gravity) so that all vacuum solutions of the Einstein equations are solutions of the SO(3,1)-gravity theory. The structure of the equations of SO(3,1)-gravity becomes analogous to that of Wheeler-Misner geometrodynamics

  16. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and, Tinbergen Institute (Netherlands); Wong, Wing-Keung, E-mail: awong@hkbu.edu.h [Department of Economics, Hong Kong Baptist University (Hong Kong)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, that spot and futures do not dominate one another, that investors are indifferent between investing in spot or futures, and that the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises, for different crises, and also to portfolio diversification.
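
    A small hedged sketch of the empirical dominance tests underlying the SD approach described above; the return series below are random placeholders, not the WTI spot and futures data, and the test is a simple pointwise comparison of empirical (integrated) CDFs rather than the authors' statistical procedure:

      import numpy as np

      def dominates_fsd(a, b):
          # True if returns `a` first-order stochastically dominate returns `b` (empirical CDFs).
          grid = np.union1d(a, b)
          Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
          Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
          return np.all(Fa <= Fb) and np.any(Fa < Fb)

      def dominates_ssd(a, b):
          # Second-order dominance: the integrated CDF of `a` lies below that of `b` everywhere.
          grid = np.union1d(a, b)
          Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
          Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
          dg = np.diff(grid, prepend=grid[0])
          Ia, Ib = np.cumsum(Fa * dg), np.cumsum(Fb * dg)
          return np.all(Ia <= Ib) and np.any(Ia < Ib)

      rng = np.random.default_rng(5)
      spot = rng.normal(0.0004, 0.02, 5000)      # placeholder daily returns
      futures = rng.normal(0.0004, 0.02, 5000)
      print(dominates_fsd(spot, futures), dominates_ssd(spot, futures))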

  17. Market efficiency of oil spot and futures. A mean-variance and stochastic dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam (Netherlands); Wong, Wing-Keung [Department of Economics, Hong Kong Baptist University (China); Tinbergen Institute (Netherlands)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, that spot and futures do not dominate one another, that investors are indifferent between investing in spot or futures, and that the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises, for different crises, and also to portfolio diversification. (author)

  18. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    International Nuclear Information System (INIS)

    Lean, Hooi Hooi; McAleer, Michael; Wong, Wing-Keung

    2010-01-01

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, that spot and futures do not dominate one another, that investors are indifferent between investing in spot or futures, and that the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises, for different crises, and also to portfolio diversification.

  19. Lectures on 2D gravity and 2D string theory

    International Nuclear Information System (INIS)

    Ginsparg, P.; Moore, G.

    1992-01-01

    This report covers the following topics: loops and states in conformal field theory; a brief review of the Liouville theory; 2D Euclidean quantum gravity I: path integral approach; 2D Euclidean quantum gravity II: canonical approach; states in 2D string theory; matrix model technology I: method of orthogonal polynomials; matrix model technology II: loops on the lattice; matrix model technology III: free fermions from the lattice; loops and states in matrix model quantum gravity; loops and states in the C=1 matrix model; 6V model Fermi sea dynamics and collective field theory; and string scattering in two spacetime dimensions

  20. Pareto joint inversion of 2D magnetotelluric and gravity data

    Science.gov (United States)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of the development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description based on a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a method of stochastic global optimization, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed solution of joint inversion problems. First of all, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where
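
    A hedged sketch of the Pareto-front extraction step implied above (filtering non-dominated candidate models against two misfit functions); the candidate misfit values are random placeholders, and the PSO engine itself is not reproduced:

      import numpy as np

      def pareto_front(costs):
          # Boolean mask of non-dominated rows; costs[i, j] is the misfit of model i under objective j.
          n = costs.shape[0]
          keep = np.ones(n, dtype=bool)
          for i in range(n):
              if keep[i]:
                  dominated = np.all(costs >= costs[i], axis=1) & np.any(costs > costs[i], axis=1)
                  keep &= ~dominated
          return keep

      rng = np.random.default_rng(6)
      misfits = rng.random((500, 2))             # (MT misfit, gravity misfit) for 500 candidate models
      front = misfits[pareto_front(misfits)]
      print(front[np.argsort(front[:, 0])])      # the trade-off curve between the two data sets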

  1. Fundamental Structure of Loop Quantum Gravity

    Science.gov (United States)

    Han, Muxin; Ma, Yongge; Huang, Weiming

    In the past twenty years, loop quantum gravity, a background independent approach to unify general relativity and quantum mechanics, has been widely investigated. The aim of loop quantum gravity is to construct a mathematically rigorous, background independent, non-perturbative quantum theory for a Lorentzian gravitational field on a four-dimensional manifold. In the approach, the principles of quantum mechanics are combined with those of general relativity naturally. Such a combination provides us with a picture of so-called quantum Riemannian geometry, which is discrete on the fundamental scale. Imposing the quantum constraints in analogy with the classical ones, the quantum dynamics of gravity is being studied as one of the most important issues in loop quantum gravity. On the other hand, the semi-classical analysis is being carried out to test the classical limit of the quantum theory. In this review, the fundamental structure of loop quantum gravity is presented pedagogically. Our main aim is to help non-experts to understand the motivations, basic structures, as well as general results. It may also be beneficial to practitioners to gain insights from different perspectives on the theory. We will focus on the theoretical framework itself, rather than its applications, and do our best to write it in modern and precise language while keeping the presentation accessible for beginners. After reviewing the classical connection dynamical formalism of general relativity, as a foundation, the construction of the kinematical Ashtekar-Isham-Lewandowski representation is introduced in the context of quantum kinematics. The algebraic structure of quantum kinematics is also discussed. In the context of quantum dynamics, we mainly introduce the construction of a Hamiltonian constraint operator and the master constraint project. Finally, some applications and recent advances are outlined. It should be noted that this strategy of quantizing gravity can also be extended to

  2. Predicting Footbridge Response using Stochastic Load Models

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2013-01-01

    Walking parameters such as step frequency, pedestrian mass, dynamic load factor, etc. are basically stochastic, although it is quite common to adapt deterministic models for these parameters. The present paper considers a stochastic approach to modeling the action of pedestrians, but when doing so...... decisions need to be made in terms of statistical distributions of walking parameters and in terms of the parameters describing the statistical distributions. The paper explores how sensitive computations of bridge response are to some of the decisions to be made in this respect. This is useful...

  3. Pricing Equity-Indexed Annuities under Stochastic Interest Rates Using Copulas

    Directory of Open Access Journals (Sweden)

    Patrice Gaillardetz

    2010-01-01

    Full Text Available We develop a consistent evaluation approach for equity-linked insurance products under stochastic interest rates. This pricing approach requires that the premium information of standard insurance products is given exogenously. In order to evaluate equity-linked products, we derive three martingale probability measures that reproduce the information from standard insurance products, interest rates, and equity index. These risk adjusted martingale probability measures are determined using copula theory and evolve with the stochastic interest rate process. A detailed numerical analysis is performed for existing equity-indexed annuities in the North American market.
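
    A very rough, hedged illustration of the basic ingredient above, namely generating jointly dependent equity and interest-rate scenarios through a Gaussian copula; the marginals, correlation, participation rate, and guarantee level are invented, and this is plain scenario discounting rather than the paper's martingale-measure construction:

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(7)
      rho, n = 0.3, 100_000
      z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
      u = norm.cdf(z)                                            # Gaussian copula: dependent uniforms

      equity_ret = norm.ppf(u[:, 0], loc=0.06, scale=0.18)       # assumed normal marginal for annual index return
      short_rate = 0.01 + 0.04 * u[:, 1]                         # crude uniform marginal for the short rate

      # One-year point-to-point EIA payoff: 90% capital guarantee plus 70% participation in index growth.
      payoff = np.maximum(0.90, 1.0 + 0.70 * equity_ret)
      price = np.mean(payoff * np.exp(-short_rate))              # discount each scenario at its own rate
      print(price)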

  4. Considerations when ranking stochastically modeled oil sands resource models for mining applications

    Energy Technology Data Exchange (ETDEWEB)

    Etris, E.L. [Society of Petroleum Engineers, Canadian Section, Calgary, AB (Canada)]|[Petro-Canada, Calgary, AB (Canada); Idris, Y.; Hunter, A.C. [Petro-Canada, Calgary, AB (Canada)

    2008-10-15

    Alberta's Athabasca oil sands deposit has been targeted as a major resource for development. Bitumen recovery operations fall into 2 categories, namely mining and in situ operations. Mining recovery is done above ground level and consists of open pit digging, disaggregation of the bitumen-saturated sediment through crushing followed by pipeline transport in a water-based slurry and then separation of oil, water and sediment. In situ recovery consists of drilling wells and stimulating the oil sands in the subsurface with a thermal treatment to reduce the viscosity of the bitumen and allow it to come to the surface. Steam assisted gravity drainage (SAGD) is the most popular thermal treatment currently in use. Resource models that simulate the recovery process are needed for both mining and in situ recovery operations. Both types can benefit from the advantages of a stochastic modeling process for resource model building and uncertainty evaluation. Stochastic modeling provides a realistic geology and allows for multiple realizations, which mining operations can use to evaluate the variability of recoverable bitumen volumes and develop mine plans accordingly. This paper described the processes of stochastic modelling and of determining the appropriate single realization for mine planning as applied to the Fort Hills oil sands mine which is currently in the early planning stage. The modeling exercise was used to estimate the in-place resource and quantify the uncertainty in resource volumes. The stochastic models were checked against those generated from conventional methods to identify any differences and to make the appropriate adaptations. 13 refs., 3 tabs., 16 figs.

  5. Existence and density theorems for stochastic maps on commutative C*-algebras

    International Nuclear Information System (INIS)

    Alberti, P.M.; Uhlmann, A.

    1979-06-01

    Theorems are presented on the structure of stochastic and normalized positive linear maps over commutative C*-algebras. It is shown how strongly the solution of the n-tuple problem for stochastic maps relates to the fact that stochastic maps of finite rank are weakly dense within stochastic maps in the case of a commutative C*-algebra. A new proof of the density theorem is given and (besides the solution of the n-tuple problem) results are derived concerning the extremal maps of certain convex subsets which are weakly dense. All stated facts suggest applications in statistical physics (algebraic approach), especially concerning questions around the evolution of classical systems. (author)

  6. Stochastic Load Models and Footbridge Response

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2015-01-01

    Pedestrians may cause vibrations in footbridges and these vibrations may potentially be annoying. This calls for predictions of footbridge vibration levels and the paper considers a stochastic approach to modeling the action of pedestrians assuming walking parameters such as step frequency, pedes...

  7. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    KAUST Repository

    Cotter, Simon L.; Zygalakis, Konstantinos C.; Kevrekidis, Ioannis G.; Erban, Radek

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address
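
    For context, a minimal Gillespie stochastic simulation algorithm for a single birth-death species, the kind of baseline simulation that multiscale and constrained methods aim to accelerate; the rate constants are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(8)
      k_prod, k_deg = 10.0, 0.1          # production and degradation rate constants (assumed)

      def ssa(t_end=200.0, x0=0):
          t, x, times, states = 0.0, x0, [0.0], [x0]
          while t < t_end:
              a = np.array([k_prod, k_deg * x])        # reaction propensities
              a0 = a.sum()
              t += rng.exponential(1.0 / a0)           # exponential waiting time to the next event
              x += 1 if rng.random() < a[0] / a0 else -1
              times.append(t)
              states.append(x)
          return np.array(times), np.array(states)

      times, states = ssa()
      print(states[-10:])                # copy numbers fluctuate around k_prod / k_deg = 100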

  8. Random manifolds and quantum gravity

    International Nuclear Information System (INIS)

    Krzywicki, A.

    2000-01-01

    The non-perturbative, lattice field theory approach towards the quantization of Euclidean gravity is reviewed. Included is a tentative summary of the most significant results and a presentation of the current state of the art

  9. Optimization of stochastic discrete systems and control on complex networks: computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  10. Spectral dimension in causal set quantum gravity

    International Nuclear Information System (INIS)

    Eichhorn, Astrid; Mizera, Sebastian

    2014-01-01

    We evaluate the spectral dimension in causal set quantum gravity by simulating random walks on causal sets. In contrast to other approaches to quantum gravity, we find an increasing spectral dimension at small scales. This observation can be connected to the nonlocality of causal set theory that is deeply rooted in its fundamentally Lorentzian nature. Based on its large-scale behaviour, we conjecture that the spectral dimension can serve as a tool to distinguish causal sets that approximate manifolds from those that do not. As a new tool to probe quantum spacetime in different quantum gravity approaches, we introduce a novel dimensional estimator, the causal spectral dimension, based on the meeting probability of two random walkers, which respect the causal structure of the quantum spacetime. We discuss a causal-set example, where the spectral dimension and the causal spectral dimension differ, due to the existence of a preferred foliation. (paper)

  11. Stochastic Collocation Applications in Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    Dragan Poljak

    2018-01-01

    Full Text Available The paper reviews the application of deterministic-stochastic models in some areas of computational electromagnetics. Namely, in certain problems there is an uncertainty in the input data set as some properties of a system are partly or entirely unknown. Thus, a simple stochastic collocation (SC) method is used to determine relevant statistics about given responses. The SC approach also provides an assessment of the related confidence intervals in the set of calculated numerical results. The expansion of the statistical output in terms of mean and variance over a polynomial basis, via the SC method, is shown to be a robust and efficient approach providing a satisfactory convergence rate. This review paper provides computational examples from previous work by the authors illustrating the successful application of the SC technique in the areas of ground penetrating radar (GPR), human exposure to electromagnetic fields, and buried lines and grounding systems.
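
    A hedged one-dimensional sketch of the collocation idea: propagate a single Gaussian input through a black-box response using Gauss-Hermite nodes and recover the output mean and variance; the response function below is a stand-in for a full electromagnetic solver run at each node:

      import numpy as np

      def response(x):
          # Placeholder for one deterministic solver run with input x.
          return np.exp(0.1 * x) + x ** 2

      mu, sigma, n_pts = 0.0, 1.0, 7
      nodes, weights = np.polynomial.hermite.hermgauss(n_pts)    # physicists' Gauss-Hermite rule
      x_colloc = mu + np.sqrt(2.0) * sigma * nodes                # map nodes to the input distribution
      w = weights / np.sqrt(np.pi)

      y = np.array([response(x) for x in x_colloc])               # one deterministic solve per node
      mean = np.sum(w * y)
      var = np.sum(w * y ** 2) - mean ** 2
      print(mean, var)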

  12. Adaptive stochastic Galerkin FEM with hierarchical tensor representations

    KAUST Repository

    Eigel, Martin

    2016-01-08

    PDE with stochastic data usually lead to very high-dimensional algebraic problems which easily become unfeasible for numerical computations because of the dense coupling structure of the discretised stochastic operator. Recently, an adaptive stochastic Galerkin FEM based on a residual a posteriori error estimator was presented and the convergence of the adaptive algorithm was shown. While this approach leads to a drastic reduction of the complexity of the problem due to the iterative discovery of the sparsity of the solution, the problem size and structure is still rather limited. To allow for larger and more general problems, we exploit the tensor structure of the parametric problem by representing operator and solution iterates in the tensor train (TT) format. The (successive) compression carried out with these representations can be seen as a generalisation of some other model reduction techniques, e.g. the reduced basis method. We show that this approach facilitates the efficient computation of different error indicators related to the computational mesh, the active polynomial chaos index set, and the TT rank. In particular, the curse of dimension is avoided.

  13. Stochastic microstructure characterization and reconstruction via supervised learning

    International Nuclear Information System (INIS)

    Bostanabad, Ramin; Bui, Anh Tuan; Xie, Wei; Apley, Daniel W.; Chen, Wei

    2016-01-01

    Microstructure characterization and reconstruction have become indispensable parts of computational materials science. The main contribution of this paper is to introduce a general methodology for practical and efficient characterization and reconstruction of stochastic microstructures based on supervised learning. The methodology is general in that it can be applied to a broad range of microstructures (clustered, porous, and anisotropic). By treating the digitized microstructure image as a set of training data, we generically learn the stochastic nature of the microstructure via fitting a supervised learning model to it (we focus on classification trees). The fitted supervised learning model provides an implicit characterization of the joint distribution of the collection of pixel phases in the image. Based on this characterization, we propose two different approaches to efficiently reconstruct any number of statistically equivalent microstructure samples. We test the approach on five examples and show that the spatial dependencies within the microstructures are well preserved, as evaluated via correlation and lineal-path functions. The main advantages of our approach stem from having a compact empirically-learned model that characterizes the stochastic nature of the microstructure, which not only makes reconstruction more computationally efficient than existing methods, but also provides insight into morphological complexity.

  14. Gravity on a little warped space

    International Nuclear Information System (INIS)

    George, Damien P.; McDonald, Kristian L.

    2011-01-01

    We investigate the consistent inclusion of 4D Einstein gravity on a truncated slice of AdS_5 whose bulk-gravity and UV scales are much less than the 4D Planck scale, M_Pl. Such 'Little Warped Spaces' have found phenomenological utility and can be motivated by string realizations of the Randall-Sundrum framework. Using the interval approach to brane-world gravity, we show that the inclusion of a large UV-localized Einstein-Hilbert term allows one to consistently incorporate 4D Einstein gravity into the low-energy theory. We detail the spectrum of Kaluza-Klein metric fluctuations and, in particular, examine the coupling of the little radion to matter. Furthermore, we show that Goldberger-Wise stabilization can be successfully implemented on such spaces. Our results demonstrate that realistic low-energy effective theories can be constructed on these spaces, and have relevance for existing models in the literature.

  15. Stochastic Watershed Models for Risk Based Decision Making

    Science.gov (United States)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) which could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters and model errors, to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
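
    A toy sketch of the SWM recipe described above: a deterministic watershed model (here a single linear reservoir, chosen only for brevity) driven by stochastic rainfall, perturbed parameters, and multiplicative model error to produce an ensemble of streamflow traces; all distributions and parameter values are invented:

      import numpy as np

      rng = np.random.default_rng(9)
      n_days, n_traces = 365, 100

      def watershed(rain, k):
          # Deterministic single linear reservoir: storage s, daily outflow k*s.
          s, q = 0.0, np.empty(rain.size)
          for t, r in enumerate(rain):
              s += r
              q[t] = k * s
              s -= q[t]
          return q

      traces = np.empty((n_traces, n_days))
      for i in range(n_traces):
          rain = rng.gamma(shape=0.3, scale=10.0, size=n_days)      # stochastic daily rainfall (mm)
          k = np.clip(rng.normal(0.2, 0.03), 0.05, 0.5)             # stochastic recession parameter
          err = np.exp(rng.normal(0.0, 0.1, size=n_days))           # lognormal model error
          traces[i] = watershed(rain, k) * err

      print(np.percentile(traces, [5, 50, 95], axis=0)[:, :5])      # ensemble flow quantiles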

  16. Backward stochastic differential equations from linear to fully nonlinear theory

    CERN Document Server

    Zhang, Jianfeng

    2017-01-01

    This book provides a systematic and accessible approach to stochastic differential equations, backward stochastic differential equations, and their connection with partial differential equations, as well as the recent development of the fully nonlinear theory, including nonlinear expectation, second order backward stochastic differential equations, and path dependent partial differential equations. Their main applications and numerical algorithms, as well as many exercises, are included. The book focuses on ideas and clarity, with most results having been solved from scratch and most theories being motivated from applications. It can be considered a starting point for junior researchers in the field, and can serve as a textbook for a two-semester graduate course in probability theory and stochastic analysis. It is also accessible for graduate students majoring in financial engineering.

  17. Stochastic models for predicting pitting corrosion damage of HLRW containers

    International Nuclear Information System (INIS)

    Henshall, G.A.

    1991-10-01

    Stochastic models for predicting aqueous pitting corrosion damage of high-level radioactive-waste containers are described. These models could be used to predict the time required for the first pit to penetrate a container and the increase in the number of breaches at later times, both of which would be useful in the repository system performance analysis. Monte Carlo implementations of the stochastic models are described, and predictions of induction time, survival probability and pit depth distributions are presented. These results suggest that the pit nucleation probability decreases with exposure time and that pit growth may be a stochastic process. The advantages and disadvantages of the stochastic approach, methods for modeling the effects of environment, and plans for future work are discussed
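
    A hedged Monte Carlo sketch in the spirit of the pit nucleation and growth models described above; the decaying nucleation rate, lognormal growth increments, wall thickness, and horizon are placeholders, not values from the report:

      import numpy as np

      rng = np.random.default_rng(10)
      years, wall, n_containers = 500, 10.0, 500            # horizon (yr), wall thickness (mm), containers
      lam0, decay = 0.5, 0.01                                # pit nucleation rate that decays with time

      def first_penetration_time():
          t_pen = np.inf
          for t in range(years):
              n_new = rng.poisson(lam0 * np.exp(-decay * t))               # fewer nucleations at later times
              for _ in range(n_new):
                  growth = rng.lognormal(mean=-2.0, sigma=0.5, size=years - t)   # mm/yr increments
                  depth = np.cumsum(growth)
                  if np.any(depth >= wall):
                      t_pen = min(t_pen, t + int(np.argmax(depth >= wall)) + 1)
          return t_pen

      times = np.array([first_penetration_time() for _ in range(n_containers)])
      survival = np.array([(times > t).mean() for t in range(0, years, 100)])
      print(survival)                                        # fraction of containers still unbreached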

  18. The gravity anomaly of Mount Amiata; different approaches for understanding anomaly source distribution

    Science.gov (United States)

    Girolami, C.; Barchi, M. R.; Heyde, I.; Pauselli, C.; Vetere, F.; Cannata, A.

    2017-11-01

    In this work, the gravity anomaly signal beneath Mount Amiata and its surroundings has been analysed to reconstruct the subsurface setting. In particular, the work focuses on the investigation of the geological bodies responsible for the Bouguer gravity minimum observed in this area.

  19. Synthetic Sediments and Stochastic Groundwater Hydrology

    Science.gov (United States)

    Wilson, J. L.

    2002-12-01

    For over twenty years the groundwater community has pursued the somewhat elusive goal of describing the effects of aquifer heterogeneity on subsurface flow and chemical transport. While small perturbation stochastic moment methods have significantly advanced theoretical understanding, why is it that stochastic applications use instead simulations of flow and transport through multiple realizations of synthetic geology? Allan Gutjahr was a principal proponent of the Fast Fourier Transform method for the synthetic generation of aquifer properties and recently explored new, more geologically sound, synthetic methods based on multi-scale Markov random fields. Focusing on sedimentary aquifers, how has the state-of-the-art of synthetic generation changed and what new developments can be expected, for example, to deal with issues like conceptual model uncertainty, the differences between measurement and modeling scales, and subgrid scale variability? What will it take to get stochastic methods, whether based on moments, multiple realizations, or some other approach, into widespread application?
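
    A minimal sketch of the FFT (spectral) generation of a synthetic heterogeneous property field mentioned above; the grid size, correlation length, covariance model, and log-conductivity statistics are illustrative assumptions:

      import numpy as np

      def gaussian_random_field(n=256, corr_len=10.0, rng=None):
          # Spectral synthesis: colour white noise in Fourier space with the square root of a
          # Gaussian-shaped power spectrum, then transform back and normalise to unit variance.
          rng = np.random.default_rng() if rng is None else rng
          kx = np.fft.fftfreq(n)[:, None]
          ky = np.fft.fftfreq(n)[None, :]
          spectrum = np.exp(-0.5 * (2 * np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
          noise = np.fft.fft2(rng.standard_normal((n, n)))
          field = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
          return (field - field.mean()) / field.std()

      lnK = -5.0 + 1.5 * gaussian_random_field(rng=np.random.default_rng(11))   # log-conductivity
      K = np.exp(lnK)                                                           # hydraulic conductivity field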

  20. Stochastic partial differential equations an introduction

    CERN Document Server

    Liu, Wei

    2015-01-01

    This book provides an introduction to the theory of stochastic partial differential equations (SPDEs) of evolutionary type. SPDEs are one of the main research directions in probability theory with several wide ranging applications. Many types of dynamics with stochastic influence in nature or man-made complex systems can be modelled by such equations. The theory of SPDEs is based both on the theory of deterministic partial differential equations, as well as on modern stochastic analysis. Whilst this volume mainly follows the ‘variational approach’, it also contains a short account on the ‘semigroup (or mild solution) approach’. In particular, the volume contains a complete presentation of the main existence and uniqueness results in the case of locally monotone coefficients. Various types of generalized coercivity conditions are shown to guarantee non-explosion, but also a systematic approach to treat SPDEs with explosion in finite time is developed. It is, so far, the only book where the latter and t...

  1. Thirty years of precise gravity measurements at Mt. Vesuvius: an approach to detect underground mass movements

    Directory of Open Access Journals (Sweden)

    Giovanna Berrino

    2013-11-01

    Full Text Available Since 1982, high precision gravity measurements have been routinely carried out on Mt. Vesuvius. The gravity network consists of selected sites, most of them coinciding with, or very close to, leveling benchmarks so that the effect of elevation changes can be removed from the gravity variations. The reference station is located in Napoli, outside the volcanic area. Since 1986, absolute gravity measurements have been made periodically at a station on Mt. Vesuvius, close to a permanent gravity station established in 1987, and at the reference station in Napoli. The results of the gravity measurements since 1982 are presented and discussed. Moderate short-term gravity changes were generally observed. Over the long term, significant gravity changes occurred and the overall fields displayed well-defined patterns. Several periods of evolution may be recognized. Gravity changes revealed by the relative surveys have been confirmed by repeated absolute measurements, which also confirmed the long-term stability of the reference site. The gravity changes over the recognized periods appear correlated with the seismic crises and with changes in the tidal parameters obtained by continuous measurements. The absence of significant ground deformation implies mass redistribution, essentially density changes without significant volume changes, such as fluid migration at the depth of the seismic foci, i.e. at a few kilometers. The fluid migration may occur through pre-existing geological structures, as also suggested by hydrological studies, and/or through new fractures generated by seismic activity. This interpretation is supported by the analyses of the spatial gravity changes accompanying the most significant and recent seismic crises.

  2. Palatini actions and quantum gravity phenomenology

    International Nuclear Information System (INIS)

    Olmo, Gonzalo J.

    2011-01-01

    We show that an invariant and universal length scale can be consistently introduced in a generally covariant theory through the gravitational sector using the Palatini approach. The resulting theory is able to capture different aspects of quantum gravity phenomenology in a single framework. In particular, it is found that in this theory field excitations propagating with different energy-densities perceive different background metrics, which is a fundamental characteristic of the DSR and Rainbow Gravity approaches. We illustrate these properties with a particular gravitational model and explicitly show how the soccer ball problem is avoided in this framework. The isotropic and anisotropic cosmologies of this model also avoid the big bang singularity by means of a big bounce.

  3. Palatini actions and quantum gravity phenomenology

    Energy Technology Data Exchange (ETDEWEB)

    Olmo, Gonzalo J., E-mail: gonzalo.olmo@csic.es [Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia - CSIC, Facultad de Física, Universidad de Valencia, Burjassot-46100, Valencia (Spain)

    2011-10-01

    We show that an invariant and universal length scale can be consistently introduced in a generally covariant theory through the gravitational sector using the Palatini approach. The resulting theory is able to capture different aspects of quantum gravity phenomenology in a single framework. In particular, it is found that in this theory field excitations propagating with different energy-densities perceive different background metrics, which is a fundamental characteristic of the DSR and Rainbow Gravity approaches. We illustrate these properties with a particular gravitational model and explicitly show how the soccer ball problem is avoided in this framework. The isotropic and anisotropic cosmologies of this model also avoid the big bang singularity by means of a big bounce.

  4. Chance Constrained Input Relaxation to Congestion in Stochastic DEA. An Application to Iranian Hospitals.

    Science.gov (United States)

    Kheirollahi, Hooshang; Matin, Behzad Karami; Mahboubi, Mohammad; Alavijeh, Mehdi Mirzaei

    2015-01-01

    This article developed a model of congestion, based on a relaxed combination of inputs, in stochastic data envelopment analysis (SDEA) using chance-constrained programming approaches. Classic data envelopment analysis models with deterministic data have been used by many authors to identify congestion and estimate its levels; however, data envelopment analysis with stochastic data has rarely been used to identify congestion. This article used chance-constrained programming approaches to replace the stochastic models with "deterministic equivalents". This substitution leads to non-linear problems that must be solved. Finally, the proposed method based on a relaxed combination of inputs was used to identify input congestion in six Iranian hospitals with one input and two outputs over the period 2009 to 2012.

  5. A Stochastic Multiobjective Optimization Framework for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shibo He

    2010-01-01

    Full Text Available In wireless sensor networks (WSNs), there generally exist many different objective functions to be optimized. In this paper, we propose a stochastic multiobjective optimization approach to solve such kinds of problems. We first formulate a general multiobjective optimization problem. We then decompose the optimization formulation through Lagrange dual decomposition and adopt the stochastic quasigradient algorithm to solve the primal-dual problem in a distributed way. We show theoretically that our algorithm converges to the optimal solution of the primal problem by using the knowledge of stochastic programming. Furthermore, the formulation provides a general stochastic multiobjective optimization framework for WSNs. We illustrate how the general framework works by considering an example of the optimal rate allocation problem in multipath WSNs with time-varying channels. Extensive simulation results are given to demonstrate the effectiveness of our algorithm.

  6. Stochastic Change Detection based on an Active Fault Diagnosis Approach

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2007-01-01

    The focus in this paper is on stochastic change detection applied in connection with active fault diagnosis (AFD). An auxiliary input signal is applied in AFD. This signal injection into the system will in general make it possible to obtain a fast change detection/isolation by considering the output or an err...

  7. Gravity induced wave function collapse

    Science.gov (United States)

    Gasbarri, G.; Toroš, M.; Donadi, S.; Bassi, A.

    2017-11-01

    Starting from an idea of S. L. Adler [in Quantum Nonlocality and Reality: 50 Years of Bell's Theorem, edited by M. Bell and S. Gao (Cambridge University Press, Cambridge, England 2016)], we develop a novel model of gravity induced spontaneous wave function collapse. The collapse is driven by complex stochastic fluctuations of the spacetime metric. After deriving the fundamental equations, we prove the collapse and amplification mechanism, the two most important features of a consistent collapse model. Under reasonable simplifying assumptions, we constrain the strength ξ of the complex metric fluctuations with available experimental data. We show that ξ ≥ 10^-26 in order for the model to guarantee classicality of macro-objects, and at the same time ξ ≤ 10^-20 in order not to contradict experimental evidence. As a comparison, in the recent discovery of gravitational waves in the frequency range 35 to 250 Hz, the (real) metric fluctuations reach a peak of ξ ~ 10^-21.

  8. Food Environment and Weight Outcomes: A Stochastic Frontier Approach

    OpenAIRE

    Li, Xun; Lopez, Rigoberto A.

    2013-01-01

    Food environment includes the presence of supermarkets, restaurants, warehouse clubs and supercenters, and other food outlets. This paper evaluates weight outcomes from a food environment using a stochastic production frontier and an equation for the determinants of efficiency, where the explanatory variables of the efficiency term include food environment indicators. Using individual consumer data and food environment data from New England counties, empirical results indicate that fruit and ...

  9. Distributed parallel computing in stochastic modeling of groundwater systems.

    Science.gov (United States)

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
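
    The batch-processing pattern described above (generate a stochastic realization, run the flow model, collect a statistic, in parallel) can be sketched in a few lines. The example below is a hedged illustration using Python's multiprocessing module rather than the Java Parallel Processing Framework, and a cheap log-normal-field surrogate stands in for an actual MODFLOW run; the function name, field parameters and process count are assumptions.

```python
import numpy as np
from multiprocessing import Pool

# Hedged sketch of batch-processing Monte Carlo realizations in parallel. A cheap
# log-normal conductivity field and a summary statistic stand in for a real MODFLOW
# run; the pattern (generate realization -> run model -> collect result) is the point.

def run_realization(seed: int) -> float:
    rng = np.random.default_rng(seed)
    log_k = rng.normal(loc=-4.0, scale=1.0, size=(50, 50))   # stochastic field (assumed)
    return float(np.exp(log_k).mean())                       # surrogate "model output"

if __name__ == "__main__":
    n_realizations = 500
    with Pool(processes=8) as pool:                          # parallel batch execution
        results = pool.map(run_realization, range(n_realizations))
    print("ensemble mean:", np.mean(results), "ensemble std:", np.std(results))
```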

  10. A delay fractioning approach to global synchronization of delayed complex networks with stochastic disturbances

    International Nuclear Information System (INIS)

    Wang Yao; Wang Zidong; Liang Jinling

    2008-01-01

    In this Letter, the synchronization problem is investigated for a class of stochastic complex networks with time delays. By utilizing a new Lyapunov functional form based on the idea of 'delay fractioning', we employ the stochastic analysis techniques and the properties of the Kronecker product to establish delay-dependent synchronization criteria that guarantee global asymptotic mean-square synchronization of the addressed delayed networks with stochastic disturbances. These sufficient conditions, which are formulated in terms of linear matrix inequalities (LMIs), can be solved efficiently by the LMI toolbox in Matlab. The main results are proved to be much less conservative, and the conservatism can be reduced further as the number of delay fractions increases. A simulation example is exploited to demonstrate the advantage and applicability of the proposed result.

  11. Verification of f(R)-gravity in binary pulsars

    Directory of Open Access Journals (Sweden)

    Dyadina Polina

    2016-01-01

    Full Text Available We develop the parameterized post-Keplerian approach for a class of analytic f(R)-gravity models. Using data from the double binary pulsar system PSR J0737-3039, we obtain restrictions on the parameters of this class of f(R) models and show that f(R)-gravity is not ruled out by the observations in the strong-field regime.

  12. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    Science.gov (United States)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
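
    The initialization/restart structure can be illustrated with a generic sketch of greedy stochastic local search with noise and random restarts on a toy pseudo-Boolean objective. It is not the paper's Stochastic Greedy Search for Bayesian-network MPE computation, and the Viterbi-based initialization is replaced here by uniform random restarts; the objective, noise level and all parameters are illustrative assumptions.

```python
import random

# Generic sketch of stochastic greedy local search with noise and random restarts on a
# toy pseudo-Boolean objective (an MPE application would instead score log P(x, e)).
# Initialization here is uniform at random; the paper's Viterbi-based variant differs.

def score(assignment):
    return sum((i % 3 + 1) * x for i, x in enumerate(assignment))   # toy objective

def greedy_sls(n_vars=20, n_restarts=10, max_flips=200, noise=0.1, seed=1):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_restarts):                           # restart loop
        x = [rng.randint(0, 1) for _ in range(n_vars)]    # (re)initialization
        for _ in range(max_flips):
            if rng.random() < noise:                      # noise step escapes local optima
                i = rng.randrange(n_vars)
            else:                                         # greedy step: best single flip
                i = max(range(n_vars),
                        key=lambda j: score(x[:j] + [1 - x[j]] + x[j + 1:]))
            x[i] = 1 - x[i]
            if score(x) > best_score:
                best, best_score = x[:], score(x)
    return best, best_score

print(greedy_sls())
```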

  13. STOCHASTIC FLOWS OF MAPPINGS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow generalizes both superprocesses of stochastic flows and the stochastic diffeomorphisms induced by the strong solutions of stochastic differential equations.

  14. Event-Triggered Fault-Tolerant Control for Stochastic Systems with Time Delays

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2016-01-01

    Full Text Available This paper is concerned with the state-feedback controller design for stochastic networked control systems (NCSs) with random actuator failures and transmission delays. Firstly, an event-triggered scheme is introduced to optimize the performance of the stochastic NCSs. Secondly, stochastic NCSs under the event-triggered scheme are modeled as stochastic time-delay systems. Thirdly, some less conservative delay-dependent stability criteria in terms of linear matrix inequalities for the codesign of both the controller gain and the trigger parameters are obtained by using a delay-decomposition technique and a convex combination approach. Finally, a numerical example is provided to show the reduced sampled-data transmission and the reduced conservatism of the proposed theory.

  15. Diffusion with intrinsic trapping in 2-d incompressible stochastic velocity fields

    International Nuclear Information System (INIS)

    Vlad, M.; Spineanu, F.; Misguich, J.H.; Vlad, M.; Spineanu, F.; Balescu, R.

    1998-10-01

    A new statistical approach that applies to the high Kubo number regimes for particle diffusion in stochastic velocity fields is presented. This 2-dimensional model describes the partial trapping of the particles in the stochastic field. The results are close to the numerical simulations and also to the estimations based on percolation theory. (authors)
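
    A rough impression of the trapping regime can be obtained by advecting tracers in a frozen, incompressible, random stream-function field. The sketch below is only an illustration of that setting, not the statistical approach of the paper; the number of Fourier modes, amplitudes, time step and particle count are assumptions.

```python
import numpy as np

# Hedged sketch of tracers advected by a frozen (high-Kubo-number) 2-d incompressible
# random field. The stream function is a sum of random Fourier modes and the velocity
# is u = d(psi)/dy, v = -d(psi)/dx; in such a static field particles tend to circulate
# on closed streamlines, so the mean-square displacement grows far more slowly than
# ballistically. Mode count, amplitudes, time step and particle number are assumptions.

rng = np.random.default_rng(2)
n_modes, n_particles, n_steps, dt = 16, 500, 2000, 0.01

k = rng.normal(size=(n_modes, 2))                 # random wave vectors
amp = rng.normal(size=n_modes) / n_modes          # random mode amplitudes
phase = rng.uniform(0, 2 * np.pi, n_modes)

def velocity(pos):
    arg = pos @ k.T + phase                       # shape (particles, modes)
    c = amp * np.cos(arg)
    return np.stack([c @ k[:, 1], -(c @ k[:, 0])], axis=1)

pos = rng.uniform(-1, 1, size=(n_particles, 2))
start = pos.copy()
msd = []
for _ in range(n_steps):
    pos = pos + dt * velocity(pos)                # explicit Euler advection
    msd.append(np.mean(np.sum((pos - start) ** 2, axis=1)))

print("MSD at t=%.0f: %.4f   MSD at t=%.0f: %.4f"
      % (n_steps * dt / 2, msd[n_steps // 2 - 1], n_steps * dt, msd[-1]))
```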

  16. A stochastic logical system approach to model and optimal control of cyclic variation of residual gas fraction in combustion engines

    International Nuclear Information System (INIS)

    Wu, Yuhu; Kumar, Madan; Shen, Tielong

    2016-01-01

    Highlights: • An in-cylinder pressure-based measuring method for the RGF is derived. • A stochastic logical dynamical model is proposed to represent the transient behavior of the RGF. • The receding horizon controller is designed to reduce the variance of the RGF. • The effectiveness of the proposed model and control approach is validated by the experimental evidence. - Abstract: In four-stroke internal combustion engines, residual gas from the previous cycle is an important factor influencing the combustion quality of the current cycle, and the residual gas fraction (RGF) is a popular index to monitor the influence of residual gas. This paper investigates the cycle-to-cycle transient behavior of the RGF from the viewpoint of systems theory and proposes a multi-valued logic-based control strategy for attenuation of the RGF fluctuation. First, an in-cylinder pressure sensor-based method for measuring the RGF is provided by following the physics of the in-cylinder transient state of four-stroke internal combustion engines. Then, the stochastic property of the RGF is examined based on statistical data obtained by conducting experiments on a full-scale gasoline engine test bench. Based on the observations of this examination, a stochastic logical transient model is proposed to represent the cycle-to-cycle transient behavior of the RGF, and with this model an optimal feedback control law, which targets the rejection of the RGF fluctuation, is derived in the framework of stochastic logical system theory. Finally, experimental results are demonstrated to show the effectiveness of the proposed model and the control strategy.

  17. Black holes in loop quantum gravity.

    Science.gov (United States)

    Perez, Alejandro

    2017-12-01

    This is a review of results on black hole physics in the context of loop quantum gravity. The key feature underlying these results is the discreteness of geometric quantities at the Planck scale predicted by this approach to quantum gravity. Quantum discreteness follows directly from the canonical quantization prescription when applied to the action of general relativity that is suitable for the coupling of gravity with gauge fields, and especially with fermions. Planckian discreteness and causal considerations provide the basic structure for the understanding of the thermal properties of black holes close to equilibrium. Discreteness also provides a fresh new look at more (at the moment) speculative issues, such as those concerning the fate of information in black hole evaporation. The hypothesis of discreteness leads, also, to interesting phenomenology with possible observational consequences. The theory of loop quantum gravity is a developing program; this review reports its achievements and open questions in a pedagogical manner, with an emphasis on quantum aspects of black hole physics.

  18. The Role of Stochastic Models in Interpreting the Origins of Biological Chirality

    Directory of Open Access Journals (Sweden)

    Gábor Lente

    2010-04-01

    Full Text Available This review summarizes recent stochastic modeling efforts in the theoretical research aimed at interpreting the origins of biological chirality. Stochastic kinetic models, especially those based on the continuous time discrete state approach, have great potential in modeling absolute asymmetric reactions, experimental examples of which have been reported in the past decade. An overview of the relevant mathematical background is given and several examples are presented to show how the significant numerical problems characteristic of the use of stochastic models can be overcome by non-trivial but elementary algebra. In these stochastic models, a particulate view of matter is used rather than the concentration-based view of traditional chemical kinetics, which uses continuous functions to describe the properties of the system. This has the advantage of giving an adequate description of single-molecule events, which were probably important in the origin of biological chirality. The presented models can interpret and predict the random distribution of enantiomeric excess among repetitive experiments, which is the most striking feature of absolute asymmetric reactions. It is argued that the use of the stochastic kinetic approach should be much more widespread in the relevant literature.
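
    The continuous-time, discrete-state viewpoint is easy to demonstrate on a minimal Frank-type autocatalytic scheme. The sketch below repeatedly simulates the embedded jump chain of such a scheme and reports the run-to-run scatter of the enantiomeric excess; the reactions, rate constants and molecule numbers are illustrative assumptions, not the models analysed in the review.

```python
import numpy as np

# Hedged sketch of a continuous-time, discrete-state (Gillespie-type) treatment of a
# minimal Frank-type scheme: A -> R and A -> S (slow, uncatalysed) plus A + R -> 2R and
# A + S -> 2S (autocatalytic). Only the embedded jump chain is simulated because the
# final composition, not the time course, sets the enantiomeric excess (ee). Rates and
# molecule numbers are illustrative assumptions, not the models analysed in the review.

def simulate_ee(n_A=1000, k0=1e-4, k_auto=1e-3, rng=None):
    rng = rng or np.random.default_rng()
    A, R, S = n_A, 0, 0
    while A > 0:
        rates = np.array([k0 * A, k0 * A, k_auto * A * R, k_auto * A * S])
        reaction = rng.choice(4, p=rates / rates.sum())
        A -= 1
        if reaction in (0, 2):
            R += 1
        else:
            S += 1
    return (R - S) / (R + S)          # ee of a single run

rng = np.random.default_rng(42)
ees = [simulate_ee(rng=rng) for _ in range(200)]
print("mean ee over 200 runs: %.3f   standard deviation of ee: %.3f"
      % (np.mean(ees), np.std(ees)))
```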

  19. Distributed evaluation of stochastic Petri nets

    NARCIS (Netherlands)

    Bell, A.; Buchholz, Peter; Lehnert, Ralf; Pioro, Micha

    2004-01-01

    In this paper we report on the distributed performance evaluation and model checking of systems specified by stochastic Petri nets. The approaches discussed rely on explicit state-space generation and target the use of clusters of workstations. We present results for systems with several

  20. Simplicial quantum gravity

    International Nuclear Information System (INIS)

    Hartle, J.B.

    1985-01-01

    Simplicial approximation and the ideas associated with the Regge calculus provide a concrete way of implementing a sum over histories formulation of quantum gravity. A simplicial geometry is made up of flat simplices joined together in a prescribed way together with an assignment of lengths to their edges. A sum over simplicial geometries is a sum over the different ways the simplices can be joined together with an integral over their edge lengths. The construction of the simplicial Euclidean action for this approach to quantum general relativity is illustrated. The recovery of the diffeomorphism group in the continuum limit is discussed. Some possible classes of simplicial complexes with which to define a sum over topologies are described. In two-dimensional quantum gravity it is argued that a reasonable class is the class of pseudomanifolds.

  1. Stochastic description of heterogeneities of permeability within groundwater flow models

    International Nuclear Information System (INIS)

    Cacas, M.C.; Lachassagne, P.; Ledoux, E.; Marsily, G. de

    1991-01-01

    In order to model radionuclide migration in the geosphere realistically at the field scale, the hydrogeologist needs to be able to simulate groundwater flow in heterogeneous media. Heterogeneity of the medium can be described using a stochastic approach, that affects the way in which a flow model is formulated. In this paper, we discuss the problems that we have encountered in modelling both continuous and fractured media. The stochastic approach leads to a methodology that enables local measurements of permeability to be integrated into a model which gives a good prediction of groundwater flow on a regional scale. 5 Figs.; 8 Refs

  2. Low Reynolds number suspension gravity currents.

    Science.gov (United States)

    Saha, Sandeep; Salin, Dominique; Talon, Laurent

    2013-08-01

    The extension of a gravity current in a lock-exchange problem proceeds as the square root of time in the viscous-buoyancy phase, where there is a balance between gravitational and viscous forces. In the presence of particles, however, this scenario is drastically altered, because sedimentation reduces the motive gravitational force and introduces a finite distance and time at which the gravity current halts. We investigate the spreading of low-Reynolds-number suspension gravity currents using a novel approach based on the lattice-Boltzmann (LB) method. The suspension is modeled as a continuous medium with a concentration-dependent viscosity. The settling of particles is simulated using a drift-flux function approach that enables us to capture sudden discontinuities in particle concentration that travel as kinematic shock waves. A numerical investigation of lock-exchange flows between pure fluids of unequal viscosity then reveals the existence of wall layers which reduce the spreading rate substantially compared to the lubrication-theory prediction. In suspension gravity currents, we observe that the settling of particles leads to the formation of two additional fronts: a horizontal front near the top that descends vertically and a sediment layer at the bottom which grows due to the deposition of particles. Three phases are identified in the spreading process: the first two exhibit a constant and a decreasing spreading rate, respectively, while in the final phase the two horizontal fronts approach each other as the laterally advancing front halts, indicating that the suspension current stops even before all the particles have settled. Finally, we conduct experiments to substantiate the conclusions of our numerical and theoretical investigation.

  3. Multi-technique approach for deriving a VLBI signal extra-path variation model induced by gravity: the example of Medicina

    Science.gov (United States)

    Sarti, P.; Abbondanza, C.; Negusini, M.; Vittuari, L.

    2009-09-01

    During measurement sessions, gravity might induce significant deformations in large VLBI telescopes. If neglected or mismodelled, these deformations might bias the phase of the incoming signal, thus corrupting the estimate of some crucial geodetic parameters (e.g. the height component of the VLBI reference point). This paper describes a multi-technique approach implemented for measuring and quantifying the gravity-dependent deformations experienced by the 32-m diameter VLBI antenna of Medicina (Northern Italy). The approach integrates three different methods: Terrestrial Triangulations and Trilaterations (TTT), Laser Scanning (LS) and a Finite Element Model (FEM) of the antenna. The combination of the observations performed with these methods allows an elevation-dependent model of the signal path variation to be accurately defined; for the Medicina telescope this variation appears to be non-negligible, with the signal path increasing monotonically by almost 2 cm over the elevation range [0, 90] deg. The effect of such a variation has not yet been introduced in actual VLBI analysis; this is the task we are going to pursue in the near future.

  4. Even-dimensional topological gravity from Chern-Simons gravity

    International Nuclear Information System (INIS)

    Merino, N.; Perez, A.; Salgado, P.

    2009-01-01

    It is shown that the topological action for gravity in 2n-dimensions can be obtained from the (2n+1)-dimensional Chern-Simons gravity genuinely invariant under the Poincare group. The 2n-dimensional topological gravity is described by the dynamics of the boundary of a (2n+1)-dimensional Chern-Simons gravity theory with suitable boundary conditions. The field φ^a, which is necessary to construct this type of topological gravity in even dimensions, is identified with the coset field associated with the non-linear realizations of the Poincare group ISO(d-1,1).

  5. Introduction to Stochastic Simulations for Chemical and Physical Processes: Principles and Applications

    Science.gov (United States)

    Weiss, Charles J.

    2017-01-01

    An introduction to digital stochastic simulations for modeling a variety of physical and chemical processes is presented. Despite the importance of stochastic simulations in chemistry, the prevalence of turn-key software solutions can impose a layer of abstraction between the user and the underlying approach obscuring the methodology being…

  6. Stochastic processes

    CERN Document Server

    Parzen, Emanuel

    1962-01-01

    Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building.Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine

  7. Deterministic and stochastic CTMC models from Zika disease transmission

    Science.gov (United States)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
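
    One quantity the stochastic formulation provides and the deterministic one cannot is the probability of extinction. The sketch below estimates it for a simple SIR-type continuous-time Markov chain and compares it with the branching-process approximation (1/R0)^I0; it is a hedged illustration with made-up parameters, not the vector-host Zika model of the paper.

```python
import numpy as np

# Hedged sketch: extinction probability of a simple SIR-type continuous-time Markov
# chain versus the branching-process approximation (1/R0)^I0. Parameters are made up;
# this is not the vector-host Zika model of the paper.

beta, gamma, N, I0 = 0.3, 0.1, 1000, 2
R0 = beta / gamma

def outbreak_goes_extinct(rng, threshold=50):
    S, I = N - I0, I0
    while 0 < I < threshold:          # reaching the threshold counts as a major outbreak
        infection = beta * S * I / N
        recovery = gamma * I
        if rng.random() < infection / (infection + recovery):
            S, I = S - 1, I + 1       # next event of the embedded jump chain: infection
        else:
            I -= 1                    # next event: recovery
    return I == 0

rng = np.random.default_rng(7)
runs = 5000
p_ext = sum(outbreak_goes_extinct(rng) for _ in range(runs)) / runs
print("R0 = %.1f   simulated P(extinction) = %.3f   branching approximation = %.3f"
      % (R0, p_ext, (1 / R0) ** I0))
```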

  8. Sensitivity of Base-Isolated Systems to Ground Motion Characteristics: A Stochastic Approach

    International Nuclear Information System (INIS)

    Kaya, Yavuz; Safak, Erdal

    2008-01-01

    Base isolators dissipate energy through their nonlinear behavior when subjected to earthquake-induced loads. A widely used base isolation system for structures involves installing lead-rubber bearings (LRB) at the foundation level. The force-deformation behavior of LRB isolators can be modeled by a bilinear hysteretic model. This paper investigates the effects of ground motion characteristics on the response of bilinear hysteretic oscillators by using a stochastic approach. Ground shaking is characterized by its power spectral density function (PSDF), which includes corner frequency, seismic moment, moment magnitude, and site effects as its parameters. The PSDF of the oscillator response is calculated by using the equivalent-linearization techniques of random vibration theory for hysteretic nonlinear systems. Knowing the PSDF of the response, we can calculate the mean square and the expected maximum response spectra for a range of natural periods and ductility values. The results show that moment magnitude is a critical factor determining the response. Site effects do not seem to have a significant influence

  9. Symbolic Computing in Probabilistic and Stochastic Analysis

    Directory of Open Access Journals (Sweden)

    Kamiński Marcin

    2015-12-01

    Full Text Available The main aim is to present recent developments in applications of symbolic computing in probabilistic and stochastic analysis, and this is done using the example of the well-known MAPLE system. The key theoretical methods discussed are (i) analytical derivations, (ii) the classical Monte Carlo simulation approach, (iii) the stochastic perturbation technique, as well as (iv) some semi-analytical approaches. It is demonstrated in particular how to engage the basic symbolic tools implemented in any system to derive the basic equations for the stochastic perturbation technique and how to implement the semi-analytical methods efficiently using the automatic differentiation and integration provided by the computer algebra program itself. The second important illustration is a probabilistic extension of the finite element and finite difference methods coded in MAPLE, showing how to solve boundary value problems with random parameters in the environment of symbolic computing. The response function method belongs to the third group, where the interplay of classical deterministic software with the non-linear fitting numerical techniques available in various symbolic environments is displayed. We recover in this context the probabilistic structural response in engineering systems and show how to solve partial differential equations including Gaussian randomness in their coefficients.

  10. From complex to simple: interdisciplinary stochastic models

    International Nuclear Information System (INIS)

    Mazilu, D A; Zamora, G; Mazilu, I

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
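
    The first model's random-walk picture can be reproduced in a few lines. The sketch below simulates an ensemble of biased one-dimensional walks for the filament tip and checks the drift and diffusive-spread predictions; the growth probability, step count and ensemble size are illustrative assumptions rather than the paper's values.

```python
import numpy as np

# Hedged sketch of the random-walk picture behind the microtubule-length model: the
# filament tip gains or loses one subunit per step with probabilities p and 1 - p, so
# the mean length change grows as (2p-1)*t and the spread as sqrt(4p(1-p)*t).
# The growth probability, step number and ensemble size are illustrative assumptions.

rng = np.random.default_rng(3)
p_grow, n_steps, n_walkers = 0.6, 1000, 2000

steps = rng.choice([1, -1], size=(n_walkers, n_steps), p=[p_grow, 1 - p_grow])
length_change = steps.sum(axis=1)             # net tip displacement of each walker

print("mean: %.1f   drift prediction: %.1f"
      % (length_change.mean(), (2 * p_grow - 1) * n_steps))
print("std : %.1f   diffusive prediction: %.1f"
      % (length_change.std(), np.sqrt(4 * p_grow * (1 - p_grow) * n_steps)))
```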

  11. Sensitivity of Footbridge Vibrations to Stochastic Walking Parameters

    DEFF Research Database (Denmark)

    Pedersen, Lars; Frier, Christian

    2010-01-01

    of the pedestrian. A stochastic modelling approach is adopted for this paper and it facilitates quantifying the probability of exceeding various vibration levels, which is useful in a discussion of serviceability of a footbridge design. However, estimates of statistical distributions of footbridge vibration levels...... to walking loads might be influenced by the models assumed for the parameters of the load model (the walking parameters). The paper explores how sensitive estimates of the statistical distribution of vertical footbridge response are to various stochastic assumptions for the walking parameters. The basis...... for the study is a literature review identifying different suggestions as to how the stochastic nature of these parameters may be modelled, and a parameter study examines how the different models influence estimates of the statistical distribution of footbridge vibrations. By neglecting scatter in some...

  12. Stochastic and cyclic deposition of multiple subannual laminae in an urban lake (Twin Lake, Golden Valley, Minnesota, USA)

    Science.gov (United States)

    Myrbo, A.; Ustipak, K.; Demet, B.

    2013-12-01

    Twin Lake, a small, deep, meromictic urban lake in Minneapolis, Minnesota, annually deposits two to 10 laminae that are distinguished from one another by composition and resulting color. Sediment sources are both autochthonous and allochthonous, including pure and mixed laminae of authigenic calcite, algal organic matter, and diatoms, as well as at least three distinct types of sediment gravity flow deposits. Diagenetic iron sulfide and iron phosphate phases are minor components, but can affect color out of proportion to their abundance. We used L*a*b* color from digital images of a freeze core slab, and petrographic smear slides of individual laminae, to categorize 1080 laminae deposited between 1963 and 2010 CE (based on lead-210 dating). Some causal relationships exist between the ten categories identified: diatom blooms often occur directly above the debris of gravity flows that probably disrupt the phosphate-rich monimolimnion and fertilize the surface waters; calcite whitings only occur after diatom blooms that increase calcite saturation. Stochastic events, as represented by laminae rich in siliciclastics and other terrigenous material, or shallow-water microfossils and carbonate morphologies, are the dominant sediment source. The patterns of cyclic deposition (e.g., summer and winter sedimentation) that produce 'normal' varve couplets in some lakes are continually interrupted by these stochastic events, to such an extent that spectral analysis finds only a weak one-year cycle. Sediments deposited before about 1900, and extending through the entire Holocene sequence (~10 m), are varve couplets interrupted by thick (20-90 cm) debris layers, indicating that gravity flows were lower in frequency but greater in magnitude before the historical period, probably due to an increased frequency of disturbance under urban land-use.

  13. Polar gravity fields from GOCE and airborne gravity

    DEFF Research Database (Denmark)

    Forsberg, René; Olesen, Arne Vestergaard; Yidiz, Hasan

    2011-01-01

    Airborne gravity, together with high-quality surface data and ocean satellite altimetric gravity, may supplement GOCE to make consistent, accurate high-resolution global gravity field models. In the polar regions, the special challenge of the GOCE polar gap makes the error characteristics...... of combination models especially sensitive to the correct merging of satellite and surface data. We outline comparisons of GOCE to recent airborne gravity surveys in both the Arctic and the Antarctic. The comparison is done to new 8-month GOCE solutions, as well as to a collocation prediction from GOCE gradients...... in Antarctica. It is shown how the enhanced gravity field solutions improve the determination of ocean dynamic topography in both the Arctic and across the Drake Passage. For the interior of Antarctica, major airborne gravity programs are currently being carried out, and there is an urgent need...

  14. Models of gas-grain chemistry in interstellar cloud cores with a stochastic approach to surface chemistry

    Science.gov (United States)

    Stantcheva, T.; Herbst, E.

    2004-08-01

    We present a gas-grain model of homogeneous cold cloud cores with time-independent physical conditions. In the model, the gas-phase chemistry is treated via rate equations while the diffusive granular chemistry is treated stochastically. The two phases are coupled through accretion and evaporation. A small network of surface reactions accounts for the surface production of the stable molecules water, formaldehyde, methanol, carbon dioxide, ammonia, and methane. The calculations are run for a time of 10^7 years at three different temperatures: 10 K, 15 K, and 20 K. The results are compared with those produced in a totally deterministic gas-grain model that utilizes the rate equation method for both the gas-phase and surface chemistry. The results of the different models are in agreement for the abundances of the gaseous species except for later times when the surface chemistry begins to affect the gas. The agreement for the surface species, however, is somewhat mixed. The average abundances of highly reactive surface species can be orders of magnitude larger in the stochastic-deterministic model than in the purely deterministic one. For non-reactive species, the results of the models can disagree strongly at early times, but agree to well within an order of magnitude at later times for most molecules. Strong exceptions occur for CO and H2CO at 10 K, and for CO2 at 20 K. The agreement seems to be best at a temperature of 15 K. As opposed to the use of the normal rate equation method of surface chemistry, the modified rate method is in significantly better agreement with the stochastic-deterministic approach. Comparison with observations of molecular ices in dense clouds shows mixed agreement.

  15. Focus on quantum Einstein gravity

    Science.gov (United States)

    Ambjorn, Jan; Reuter, Martin; Saueressig, Frank

    2012-09-01

    time cosmology and the big bang, as well as TeV-scale gravity models testable at the Large Hadron Collider. On different grounds, Monte-Carlo studies of the gravitational partition function based on the discrete causal dynamical triangulations approach provide an a priori independent avenue towards unveiling the non-perturbative features of gravity. As a highlight, detailed simulations established that the phase diagram underlying causal dynamical triangulations contains a phase where the triangulations naturally give rise to four-dimensional, macroscopic universes. Moreover, there are indications for a second-order phase transition that naturally forms the discrete analog of the non-Gaussian fixed point seen in the continuum computations. Thus there is a good chance that the discrete and continuum computations will converge to the same fundamental physics. This focus issue collects a series of papers that outline the current frontiers of the gravitational asymptotic safety program. We hope that readers get an impression of the depth and variety of this research area as well as our excitement about the new and ongoing developments. References [1] Weinberg S 1979 General Relativity, an Einstein Centenary Survey ed S W Hawking and W Israel (Cambridge: Cambridge University Press)

  16. AESS: Accelerated Exact Stochastic Simulation

    Science.gov (United States)

    Jenkins, David D.; Peterson, Gregory D.

    2011-12-01

    The Stochastic Simulation Algorithm (SSA) developed by Gillespie provides a powerful mechanism for exploring the behavior of chemical systems with small species populations or with important noise contributions. Gene circuit simulations for systems biology commonly employ the SSA method, as do ecological applications. This algorithm tends to be computationally expensive, so researchers seek an efficient implementation of SSA. In this program package, the Accelerated Exact Stochastic Simulation Algorithm (AESS) contains optimized implementations of Gillespie's SSA that improve the performance of individual simulation runs or ensembles of simulations used for sweeping parameters or to provide statistically significant results. Program summary. Program title: AESS Catalogue identifier: AEJW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: University of Tennessee copyright agreement No. of lines in distributed program, including test data, etc.: 10 861 No. of bytes in distributed program, including test data, etc.: 394 631 Distribution format: tar.gz Programming language: C for processors, CUDA for NVIDIA GPUs Computer: Developed and tested on various x86 computers and NVIDIA C1060 Tesla and GTX 480 Fermi GPUs. The system targets x86 workstations, optionally with multicore processors or NVIDIA GPUs as accelerators. Operating system: Tested under Ubuntu Linux OS and CentOS 5.5 Linux OS Classification: 3, 16.12 Nature of problem: Simulation of chemical systems, particularly with low species populations, can be accurately performed using Gillespie's method of stochastic simulation. Numerous variations on the original stochastic simulation algorithm have been developed, including approaches that produce results with statistics that exactly match the chemical master equation (CME) as well as other approaches that approximate the CME. Solution

  17. An approach to the drone fleet survivability assessment based on a stochastic continuous-time model

    Science.gov (United States)

    Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos

    2017-09-01

    An approach and an algorithm for drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of the drone fleet recovery, the drone fleet operational availability coefficient, the probability of drone failure-free operation, and the time needed for the drone fleet to perform the required tasks. Ways of improving the survivability of a recoverable drone fleet, taking into account the factors of system accidents, are suggested. Dependencies of the drone fleet survivability rate on both the drone stability and the number of drones are analysed.
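
    The flavour of such a Markov-type availability calculation can be shown with a deliberately simplified sketch: each drone alternates between an up state (failing at rate lam) and a down state (restored at rate mu), drones are assumed independent, and the fleet is operational while at least k of n drones are up. The rates, fleet size and redundancy requirement are illustrative assumptions, not the paper's model.

```python
from math import comb

# Deliberately simplified sketch: independent drones, each alternating between an up
# state (failure rate lam) and a down state (restoration rate mu), so a single drone's
# steady-state availability is mu/(lam + mu); the fleet survives while at least k of n
# drones are up. Rates, fleet size and redundancy level are illustrative assumptions.

lam, mu = 0.05, 0.5          # failure and restoration rates (per hour), assumed
n, k = 10, 7                 # fleet size and minimum number of drones required

a = mu / (lam + mu)          # single-drone steady-state availability
fleet = sum(comb(n, j) * a**j * (1 - a)**(n - j) for j in range(k, n + 1))

print("single-drone availability: %.4f" % a)
print("fleet availability (at least %d of %d up): %.6f" % (k, n, fleet))
```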

  18. On Stochastic Finite-Time Control of Discrete-Time Fuzzy Systems with Packet Dropout

    Directory of Open Access Journals (Sweden)

    Yingqi Zhang

    2012-01-01

    Full Text Available This paper is concerned with the stochastic finite-time stability and stochastic finite-time boundedness problems for one family of fuzzy discrete-time systems over networks with packet dropout, parametric uncertainties, and time-varying norm-bounded disturbance. Firstly, we present the dynamic model description studied, in which the discrete-time fuzzy T-S systems with packet loss can be described by one class of fuzzy Markovian jump systems. Then, the concepts of stochastic finite-time stability and stochastic finite-time boundedness and problem formulation are given. Based on Lyapunov function approach, sufficient conditions on stochastic finite-time stability and stochastic finite-time boundedness are established for the resulting closed-loop fuzzy discrete-time system with Markovian jumps, and state-feedback controllers are designed to ensure stochastic finite-time stability and stochastic finite-time boundedness of the class of fuzzy systems. The stochastic finite-time stability and stochastic finite-time boundedness criteria can be tackled in the form of linear matrix inequalities with a fixed parameter. As an auxiliary result, we also give sufficient conditions on the stochastic stability of the class of fuzzy T-S systems with packet loss. Finally, two illustrative examples are presented to show the validity of the developed methodology.

  19. A stochastic optimization approach to reduce greenhouse gas emissions from buildings and transportation

    International Nuclear Information System (INIS)

    Karan, Ebrahim; Asadi, Somayeh; Ntaimo, Lewis

    2016-01-01

    The magnitude of building- and transportation-related GHG (greenhouse gas) emissions makes the adoption of all-EVs (electric vehicles) powered with renewable power one of the most effective strategies to reduce emission of GHGs. This paper formulates the problem of GHG mitigation strategy under uncertain conditions and optimizes the strategies in which EVs are powered by solar energy. Under a pre-specified budget, the objective is to determine the type of EV and the power generation capacity of the solar system in such a way as to maximize GHG emissions reductions. The model supports the three primary solar systems: off-grid, grid-tied, and hybrid. First, a stochastic optimization model using probability distributions of stochastic variables and EV and solar system specifications is developed. The model is then validated by comparing the estimated values of the optimal strategies with actual values. It is found that the mitigation strategies in which EVs are powered by a hybrid solar system lead to the best ratio of cost to expected reduction of CO_2 emissions. The results show an accuracy of about 4% for mitigation strategies in which EVs are powered by a grid-tied or hybrid solar system and 11% when applied to estimate the CO_2 emissions reductions of an off-grid system. - Highlights: • The problem of GHG mitigation is formulated as a stochastic optimization problem. • The objective is to maximize CO_2 emissions reductions within a specified budget. • The stochastic model is validated using actual data. • The results show an estimation accuracy of 4–11%.
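
    The decision structure, choosing an EV type and a solar capacity within a budget to maximize the expected emission reduction under uncertainty, can be illustrated with a small sample-average sketch. Every number in it (costs, energy intensities, the grid emission factor, the gasoline baseline and the probability distributions) is an illustrative assumption and not data from the study.

```python
import numpy as np

# Hedged sample-average sketch of the decision structure only. Costs, energy
# intensities, the 0.2 kg/km gasoline baseline, the grid emission factor and all
# probability distributions are illustrative assumptions, not data from the study.

rng = np.random.default_rng(11)
budget = 60000.0                          # USD
grid_ef = 0.4                             # kg CO2 per kWh drawn from the grid
ev_options = {"compact EV": (30000.0, 0.15), "SUV EV": (45000.0, 0.22)}  # cost, kWh/km
solar_cost_per_kw = 2000.0
n_samples = 20000

def expected_reduction(kwh_per_km, solar_kw):
    km = rng.normal(15000, 3000, n_samples).clip(min=0)                  # annual driving
    solar_kwh = solar_kw * rng.normal(1400, 200, n_samples).clip(min=0)  # annual yield
    demand = kwh_per_km * km
    ev_emissions = grid_ef * np.maximum(demand - solar_kwh, 0.0)   # grid share of charging
    export_credit = grid_ef * np.maximum(solar_kwh - demand, 0.0)  # exported solar power
    return float(np.mean(0.2 * km - ev_emissions + export_credit))

best = None
for name, (ev_cost, kwh_per_km) in ev_options.items():
    for solar_kw in range(0, 16):
        cost = ev_cost + solar_cost_per_kw * solar_kw
        if cost > budget:
            continue
        value = expected_reduction(kwh_per_km, solar_kw)
        if best is None or value > best[0]:
            best = (value, name, solar_kw, cost)

print("best expected annual CO2 reduction: %.0f kg with %s + %d kW solar (cost %.0f USD)"
      % best)
```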

  20. Quantum stochastic walks on networks for decision-making.

    Science.gov (United States)

    Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo

    2016-03-31

    Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce's response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process turns out to also be a measure of the process' degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.

  1. Quantum stochastic walks on networks for decision-making

    Science.gov (United States)

    Martínez-Martínez, Ismael; Sánchez-Burillo, Eduardo

    2016-03-01

    Recent experiments report violations of the classical law of total probability and incompatibility of certain mental representations when humans process and react to information. Evidence shows promise of a more general quantum theory providing a better explanation of the dynamics and structure of real decision-making processes than classical probability theory. Inspired by this, we show how the behavioral choice-probabilities can arise as the unique stationary distribution of quantum stochastic walkers on the classical network defined from Luce’s response probabilities. This work is relevant because (i) we provide a very general framework integrating the positive characteristics of both quantum and classical approaches previously in confrontation, and (ii) we define a cognitive network which can be used to bring other connectivist approaches to decision-making into the quantum stochastic realm. We model the decision-maker as an open system in contact with her surrounding environment, and the time-length of the decision-making process turns out to also be a measure of the process’ degree of interplay between the unitary and irreversible dynamics. Implementing quantum coherence on classical networks may be a door to better integrate human-like reasoning biases in stochastic models for decision-making.

  2. Quantum group structure and local fields in the algebraic approach to 2D gravity

    CERN Document Server

    Schnittger, Jens

    1994-01-01

    This review contains a summary of work by J.-L. Gervais and the author on the operator approach to 2d gravity. Special emphasis is placed on the construction of local observables -the Liouville exponentials and the Liouville field itself - and the underlying algebra of chiral vertex operators. The double quantum group structure arising from the presence of two screening charges is discussed and the generalized algebra and field operators are derived. In the last part, we show that our construction gives rise to a natural definition of a quantum tau function, which is a noncommutative version of the classical group-theoretic representation of the Liouville fields by Leznov and Saveliev.

  3. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    Science.gov (United States)

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  4. Characterization and reconstruction of 3D stochastic microstructures via supervised learning.

    Science.gov (United States)

    Bostanabad, R; Chen, W; Apley, D W

    2016-12-01

    The need for computational characterization and reconstruction of volumetric maps of stochastic microstructures for understanding the role of material structure in the processing-structure-property chain has been highlighted in the literature. Recently, a promising characterization and reconstruction approach has been developed where the essential idea is to convert the digitized microstructure image into an appropriate training dataset to learn the stochastic nature of the morphology by fitting a supervised learning model to the dataset. This compact model can subsequently be used to efficiently reconstruct as many statistically equivalent microstructure samples as desired. The goal of this paper is to build upon the developed approach in three major directions by: (1) extending the approach to characterize 3D stochastic microstructures and efficiently reconstruct 3D samples, (2) improving the performance of the approach by incorporating user-defined predictors into the supervised learning model, and (3) addressing potential computational issues by introducing a reduced model which can perform as effectively as the full model. We test the extended approach on three examples and show that the spatial dependencies, as evaluated via various measures, are well preserved in the reconstructed samples. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
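
    The characterize-then-reconstruct idea lends itself to a compact illustration. The sketch below works in 2D rather than 3D for brevity: it builds a training set that maps a small causal neighborhood of previously visited pixels to the current pixel, fits a decision-tree classifier, and reconstructs a new sample by raster-scan sampling from the fitted conditional probabilities. The synthetic microstructure, the neighborhood shape and the choice of classifier (scikit-learn is assumed to be available) are assumptions made for the example; the paper's own predictors and 3D treatment differ.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hedged 2D illustration of characterize-then-reconstruct. Each pixel of a binary
# microstructure is predicted from a causal neighborhood of previously visited pixels;
# a classifier fitted to that dataset is then sampled in raster order to generate a
# statistically similar image. Field, neighborhood and classifier are assumptions.

rng = np.random.default_rng(0)

# Synthetic training microstructure: smoothed Gaussian noise thresholded into two phases
field = rng.normal(size=(96, 96))
for _ in range(6):
    field = (field + np.roll(field, 1, 0) + np.roll(field, -1, 0)
             + np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 5.0
img = (field > field.mean()).astype(int)

offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # causal neighborhood in raster order

X, y = [], []
for i in range(1, img.shape[0]):
    for j in range(1, img.shape[1] - 1):
        X.append([img[i + di, j + dj] for di, dj in offsets])
        y.append(img[i, j])
model = DecisionTreeClassifier(max_depth=4).fit(X, y)

# Reconstruction: random borders, then sample pixel by pixel from the fitted model
recon = rng.integers(0, 2, size=img.shape)
for i in range(1, img.shape[0]):
    for j in range(1, img.shape[1] - 1):
        p_one = model.predict_proba([[recon[i + di, j + dj] for di, dj in offsets]])[0][1]
        recon[i, j] = int(rng.random() < p_one)

print("volume fraction  original: %.3f  reconstruction: %.3f" % (img.mean(), recon.mean()))
```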

  5. Loop quantum gravity in asymptotically flat spaces

    International Nuclear Information System (INIS)

    Arnsdorf, M.

    2000-01-01

    This thesis describes applications and extensions of the loop variable approach to non-perturbative quantum gravity. The common theme of the work presented, is the need to generalise loop quantum gravity to be applicable in cases where space is asymptotically flat, and no longer compact as is usually assumed. This is important for the study of isolated gravitational systems. It also presents a natural context in which to search for the semi-classical limit, one of the main outstanding problems in loop quantum gravity. In the first part of the thesis we study how isolated gravitational systems can be attributed particle-like properties. In particular, we show how spinorial states can arise in pure loop quantum gravity if spatial topology is non-trivial, thus confirming an old conjecture of Friedman and Sorkin. Heuristically, this corresponds to the idea that we can rotate isolated regions of spatial topology relative to the environment at infinity, and that only a 4π-rotation will take us back to the original configuration. To do this we extend the standard loop quantum gravity formalism by introducing a compactification of our non-compact spatial manifold, and study the knotting of embedded graphs. The second part of the thesis takes a more systematic approach to the study of loop quantum gravity on non-compact spaces. We look for new representations of the loop algebra, which give rise to quantum theories that are inequivalent to the standard one. These theories naturally describe excitations of a fiducial background state, which is specified via the choice of its vacuum expectation values. In particular, we can choose background states that describe the geometries of non-compact manifolds. We also discuss how suitable background states can be constructed that can approximate classical phase space data, in our case holonomies along embedded paths and geometrical quantities related to areas and volumes. These states extend the notion of the weave and provide a

  6. Learn-and-Adapt Stochastic Dual Gradients for Network Resource Allocation

    OpenAIRE

    Chen, Tianyi; Ling, Qing; Giannakis, Georgios B.

    2017-01-01

    Network resource allocation shows revived popularity in the era of data deluge and information explosion. Existing stochastic optimization approaches fall short in attaining a desirable cost-delay tradeoff. Recognizing the central role of Lagrange multipliers in network resource allocation, a novel learn-and-adapt stochastic dual gradient (LA-SDG) method is developed in this paper to learn the sample-optimal Lagrange multiplier from historical data, and accordingly adapt the upcoming resource...

  7. Stochastic goal programming based groundwater remediation management under human-health-risk uncertainty

    International Nuclear Information System (INIS)

    Li, Jing; He, Li; Lu, Hongwei; Fan, Xing

    2014-01-01

    Highlights: • We propose an integrated optimal groundwater remediation design approach. • The approach can address stochasticity in carcinogenic risks. • Goal programming is used to bring the system close to ideal operation and remediation effects. • The uncertainty in the slope factor is evaluated under different confidence levels. • Optimal strategies are obtained to support remediation design under uncertainty. - Abstract: An optimal design approach for groundwater remediation is developed by incorporating numerical simulation, health risk assessment, uncertainty analysis and nonlinear optimization within a general framework. Stochastic analysis and goal programming are introduced into the framework to handle uncertainties in real-world groundwater remediation systems. Carcinogenic risks associated with remediation actions are further evaluated at four confidence levels. The differences between ideal and predicted constraints are minimized by goal programming. The approach is then applied to a contaminated site in western Canada to create a set of optimal remediation strategies. Results from the case study indicate that factors including environmental standards, health risks and technical requirements mutually affected and restricted one another. Stochastic uncertainty existed throughout the remediation optimization process and should be taken into consideration in groundwater remediation design.

  8. The geometric role of symmetry breaking in gravity

    International Nuclear Information System (INIS)

    Wise, Derek K

    2012-01-01

    In gravity, breaking symmetry from a group G to a group H plays the role of describing geometry in relation to the geometry of the homogeneous space G/H. The deep reason for this is Cartan's 'method of equivalence,' giving, in particular, an exact correspondence between metrics and Cartan connections. I argue that broken symmetry is thus implicit in any gravity theory, for purely geometric reasons. As an application, I explain how this kind of thinking gives a new approach to Hamiltonian gravity in which an observer field spontaneously breaks Lorentz symmetry and gives a Cartan connection on space.

  9. Group theory approach to unification of gravity with internal symmetry gauge interactions. Part 1

    International Nuclear Information System (INIS)

    Samokhvalov, S.E.; Vanyashin, V.S.

    1990-12-01

    The infinite group of deformed diffeomorphisms of the space-time continuum is taken as the basis of the Gauge Theory of Gravity. This gives rise to some new ways of unifying gravity with other gauge interactions. (author). 7 refs

  10. A Volterra series approach to the approximation of stochastic nonlinear dynamics

    NARCIS (Netherlands)

    Wouw, van de N.; Nijmeijer, H.; Campen, van D.H.

    2002-01-01

    A response approximation method for stochastically excited, nonlinear, dynamic systems is presented. Herein, the output of the nonlinear system is approximated by a finite-order Volterra series. The original nonlinear system is replaced by a bilinear system in order to determine the kernels of this

  11. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
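
    To make the estimation target concrete, the sketch below computes a parameter sensitivity for the simplest reaction network, a birth-death process, by finite differences with common random numbers. This is only a baseline illustration under assumed rates; it is not the estimator construction described in the talk.

```python
import numpy as np

# Hedged baseline sketch: sensitivity d E[X(T)] / dk for a birth-death chain with birth
# rate k and death rate g*X, estimated by finite differences with common random numbers.
# The exact mean is (k/g)*(1 - exp(-g*T)), so the sensitivity is close to 1/g here.
# Rates, horizon and sample size are assumptions; the talk's estimators are different.

def simulate(k, g, x0, T, rng):
    t, x = 0.0, x0
    while True:
        rate = k + g * x
        t += rng.exponential(1.0 / rate)
        if t > T:
            return x
        x += 1 if rng.random() < k / rate else -1

k, g, x0, T, h, n = 10.0, 1.0, 0, 5.0, 0.1, 5000
samples = []
for seed in range(n):
    rng_a = np.random.default_rng(seed)       # common random numbers:
    rng_b = np.random.default_rng(seed)       # both runs reuse the same stream
    samples.append((simulate(k + h, g, x0, T, rng_a) - simulate(k, g, x0, T, rng_b)) / h)

print("estimated sensitivity: %.3f +/- %.3f (exact limit 1/g = %.3f)"
      % (np.mean(samples), np.std(samples) / np.sqrt(n), 1.0 / g))
```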

  12. Some variance reduction methods for numerical stochastic homogenization.

    Science.gov (United States)

    Blanc, X; Le Bris, C; Legoll, F

    2016-04-28

    We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. © 2016 The Author(s).
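
    A one-dimensional toy problem shows the flavour of such variance reduction. In 1D the effective coefficient of a random conductivity field is the harmonic mean over the cell, so each corrector problem collapses to an average, and plain Monte Carlo can be compared directly with an antithetic-variates estimator that pairs each Gaussian draw with its negation. The log-normal field model, cell size and sample counts are illustrative assumptions, not taken from the studies reviewed.

```python
import numpy as np

# Hedged 1D illustration: the homogenized coefficient of a 1D random conductivity field
# is the harmonic mean over the cell, so each "corrector problem" reduces to an average.
# We compare plain Monte Carlo with antithetic variates at equal cost. The log-normal
# field, cell size and sample counts are illustrative assumptions.

rng = np.random.default_rng(1)
cell_size, n_samples = 64, 2000

def a_star(gauss):
    a = np.exp(gauss)                      # log-normal conductivity on one cell
    return 1.0 / np.mean(1.0 / a)          # 1D effective (homogenized) coefficient

plain, antithetic = [], []
for _ in range(n_samples // 2):
    g1, g2 = rng.normal(size=cell_size), rng.normal(size=cell_size)
    plain += [a_star(g1), a_star(g2)]                      # two independent samples
    antithetic.append(0.5 * (a_star(g1) + a_star(-g1)))    # one antithetic pair

print("plain MC      : mean %.4f   estimator variance %.3e"
      % (np.mean(plain), np.var(plain) / len(plain)))
print("antithetic MC : mean %.4f   estimator variance %.3e"
      % (np.mean(antithetic), np.var(antithetic) / len(antithetic)))
```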

  13. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  14. EP for efficient stochastic control with obstacles

    NARCIS (Netherlands)

    Mensink, T.; Verbeek, J.; Kappen, H.J.

    2010-01-01

    Abstract. We address the problem of continuous stochastic optimal control in the presence of hard obstacles. Due to the non-smooth character of the obstacles, the traditional approach using dynamic programming in combination with function approximation tends to fail. We consider a recently

  15. QCD ghost f(T)-gravity model

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K.; Abdolmaleki, A.; Asadzadeh, S. [University of Kurdistan, Department of Physics, Sanandaj (Iran, Islamic Republic of); Safari, Z. [Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), Maragha (Iran, Islamic Republic of)

    2013-09-15

    Within the framework of modified teleparallel gravity, we reconstruct an f(T) model corresponding to the QCD ghost dark energy scenario. For a spatially flat FRW universe containing only pressureless matter, we obtain the time evolution of the torsion scalar T (or the Hubble parameter). Then, we calculate the effective torsion equation-of-state parameter of the QCD ghost f(T)-gravity model as well as the deceleration parameter of the universe. Furthermore, we fit the model parameters using the latest observational data, including SNeIa, CMB and BAO data. We also check the viability of our model using a cosmographic analysis approach. Moreover, we investigate the validity of the generalized second law (GSL) of gravitational thermodynamics for our model. Finally, we point out the growth rate of matter density perturbations. We conclude that in the QCD ghost f(T)-gravity model, the universe begins in a matter-dominated phase and approaches a de Sitter regime at late times, as expected. This model is also consistent with current data, passes the cosmographic test, satisfies the GSL and fits the growth-factor data as well as the ΛCDM model. (orig.)

  16. Loop Quantum Gravity.

    Science.gov (United States)

    Rovelli, Carlo

    2008-01-01

    The problem of describing the quantum behavior of gravity, and thus understanding quantum spacetime, is still open. Loop quantum gravity is a well-developed approach to this problem. It is a mathematically well-defined background-independent quantization of general relativity, with its conventional matter couplings. Today research in loop quantum gravity forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained so far are: (i) The computation of the spectra of geometrical quantities such as area and volume, which yield tentative quantitative predictions for Planck-scale physics. (ii) A physical picture of the microstructure of quantum spacetime, characterized by Planck-scale discreteness. Discreteness emerges as a standard quantum effect from the discrete spectra, and provides a mathematical realization of Wheeler's "spacetime foam" intuition. (iii) Control of spacetime singularities, such as those in the interior of black holes and the cosmological one. This, in particular, has opened up the possibility of a theoretical investigation into the very early universe and the spacetime regions beyond the Big Bang. (iv) A derivation of the Bekenstein-Hawking black-hole entropy. (v) Low-energy calculations, yielding n-point functions well defined in a background-independent context. The theory is at the roots of, or strictly related to, a number of formalisms that have been developed for describing background-independent quantum field theory, such as spin foams, group field theory, causal spin networks, and others. I give here a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.

  17. Stochastic solution of population balance equations for reactor networks

    International Nuclear Information System (INIS)

    Menz, William J.; Akroyd, Jethro; Kraft, Markus

    2014-01-01

    This work presents a sequential modular approach to solve a generic network of reactors with a population balance model using a stochastic numerical method. Full-coupling to the gas-phase is achieved through operator-splitting. The convergence of the stochastic particle algorithm in test networks is evaluated as a function of network size, recycle fraction and numerical parameters. These test cases are used to identify methods through which systematic and statistical error may be reduced, including by use of stochastic weighted algorithms. The optimal algorithm was subsequently used to solve a one-dimensional example of silicon nanoparticle synthesis using a multivariate particle model. This example demonstrated the power of stochastic methods in resolving particle structure by investigating the transient and spatial evolution of primary polydispersity, degree of sintering and TEM-style images. Highlights: • An algorithm is presented to solve reactor networks with a population balance model. • A stochastic method is used to solve the population balance equations. • The convergence and efficiency of the reported algorithms are evaluated. • The algorithm is applied to simulate silicon nanoparticle synthesis in a 1D reactor. • Particle structure is reported as a function of reactor length and time.

  18. A study on multi-point gravity compensation of mirror bending system

    International Nuclear Information System (INIS)

    Sun Fuquan; Fu Yuan; Zhu Wanqian; Xue Song

    2011-01-01

    The sag of a mirror due to gravity induces unacceptable slope errors in the mirror-bending systems of synchrotron radiation beamlines, so approaches must be found to eliminate this unwanted gravity effect. According to beam bending theory, the multi-point gravity compensation method is applicable. Taking as an example the bent collimating mirror of the XAFS beamline (BL14W) at the Shanghai Synchrotron Radiation Facility (SSRF), the best positions and values of the equilibrant forces were calculated by minimizing the gravity effect. With two-, three- and four-point gravity compensation, the slope errors were 0.179, 0.067 and 0.032 μrad, respectively, i.e. multi-point gravity compensation outperforms the two-point compensation used for the Phase I beamlines of SSRF. The four-point gravity compensation method further reduces slope error and stress owing to its four support points. (authors)

  19. Pricing long-dated insurance contracts with stochastic interest rates and stochastic volatility

    NARCIS (Netherlands)

    van Haastrecht, A.; Lord, R.; Pelsser, A.; Schrager, D.

    2009-01-01

    We consider the pricing of long-dated insurance contracts under stochastic interest rates and stochastic volatility. In particular, we focus on the valuation of insurance options with long-term equity or foreign exchange exposures. Our modeling framework extends the stochastic volatility model of

  20. Recursive stochastic effects in valley hybrid inflation

    Science.gov (United States)

    Levasseur, Laurence Perreault; Vennin, Vincent; Brandenberger, Robert

    2013-10-01

    Hybrid inflation is a two-field model where inflation ends because of a tachyonic instability, the duration of which is determined by stochastic effects and has important observational implications. Making use of the recursive approach to the stochastic formalism presented in [L. P. Levasseur, preceding article, Phys. Rev. D 88, 083537 (2013)], these effects are consistently computed. Through an analysis of backreaction, this method is shown to converge in the valley but points toward an (expected) instability in the waterfall. It is further shown that the quasistationarity of the auxiliary field distribution breaks down in the case of a short-lived waterfall. We find that the typical dispersion of the waterfall field at the critical point is then diminished, thus increasing the duration of the waterfall phase and jeopardizing the possibility of a short transition. Finally, we find that stochastic effects worsen the blue tilt of the curvature perturbations by an O(1) factor when compared with the usual slow-roll contribution.

  1. Stochastic Spectral Descent for Discrete Graphical Models

    International Nuclear Information System (INIS)

    Carlson, David; Hsieh, Ya-Ping; Collins, Edo; Carin, Lawrence; Cevher, Volkan

    2015-01-01

    Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
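
    The bounds referred to above lead to gradient steps measured in a Schatten norm rather than the Euclidean norm. A minimal sketch of one such non-Euclidean update for a matrix parameter is given below: the (stochastic) gradient is factored by an SVD and the step direction becomes the outer product of its singular vectors, scaled by the nuclear norm of the gradient — one common form of steepest descent under the spectral norm. The quadratic toy objective, noise level and step size are illustrative assumptions, not the graphical models or majorization bounds of the paper.

      import numpy as np

      def spectral_step(W, G, lr):
          """One steepest-descent step measured in the Schatten-infinity (spectral) norm.

          For that geometry the update direction is  -||G||_* * U @ V.T,
          where G = U diag(s) V.T is the SVD of the (stochastic) gradient.
          """
          U, s, Vt = np.linalg.svd(G, full_matrices=False)
          return W - lr * s.sum() * (U @ Vt)

      # Toy problem: minimize ||W - A||_F^2 for a fixed target matrix A (illustrative choice).
      rng = np.random.default_rng(1)
      A = rng.standard_normal((20, 10))
      W = np.zeros_like(A)
      for it in range(500):
          G = 2.0 * (W - A) + 0.1 * rng.standard_normal(A.shape)  # noisy gradient
          W = spectral_step(W, G, lr=5e-4)
      print("final error:", np.linalg.norm(W - A))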

  2. Stochastic Wake Modelling Based on POD Analysis

    Directory of Open Access Journals (Sweden)

    David Bastine

    2018-03-01

    Full Text Available In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD, three different stochastic models for the weighting coefficients of the POD modes are deduced resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding to our model a homogeneous turbulent field. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD calculations or elaborated experimental investigations. These numerically efficient models provide the added value of possible long-term studies. Depending on the aspects of interest, different minimalized models may be obtained.
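
    The pipeline described above — decompose the snapshots with a POD and then model the mode weighting coefficients as independent stochastic processes — can be condensed into a short sketch. The synthetic snapshot matrix and the AR(1) surrogate for the coefficients below are assumptions made for illustration; the paper works with LES wake fields and more elaborate coefficient models.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic "snapshots": n_t time samples of an n_x-point field (stand-in for LES wake slices).
      n_x, n_t = 200, 1000
      x = np.linspace(0, 1, n_x)
      snapshots = np.array([np.sin(2*np.pi*(x - 0.01*t)) + 0.3*rng.standard_normal(n_x)
                            for t in range(n_t)]).T          # shape (n_x, n_t)

      mean_field = snapshots.mean(axis=1, keepdims=True)
      U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

      n_modes = 6                                            # ~6 modes capture the large-scale dynamics
      modes = U[:, :n_modes]                                 # POD modes
      coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]           # time-dependent weighting coefficients

      # Surrogate: fit an independent AR(1) process to each coefficient series and resample it.
      def ar1_surrogate(a, rng):
          phi = np.corrcoef(a[:-1], a[1:])[0, 1]             # lag-1 autocorrelation
          sig = a.std() * np.sqrt(1 - phi**2)
          out = np.empty_like(a)
          out[0] = a[0]
          for k in range(1, len(a)):
              out[k] = phi * out[k-1] + sig * rng.standard_normal()
          return out

      synthetic_coeffs = np.array([ar1_surrogate(c, rng) for c in coeffs])
      wake_model = mean_field + modes @ synthetic_coeffs     # one stochastic wake realization
      print(wake_model.shape)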

  3. How (not) to use the Palatini formulation of scalar-tensor gravity

    International Nuclear Information System (INIS)

    Iglesias, Alberto; Kaloper, Nemanja; Park, Minjoon; Padilla, Antonio

    2007-01-01

    We revisit the problem of defining nonminimal gravity in the first order formalism. Specializing to scalar-tensor theories, which may be disguised as "higher-derivative" models with gravitational Lagrangians that depend only on the Ricci scalar, we show how to recast these theories as Palatini-like gravities. The correct formulation utilizes the Lagrange multiplier method, which preserves the canonical structure of the theory, and yields the conventional metric scalar-tensor gravity. We explain the discrepancies between the naive Palatini and the Lagrange multiplier approach, showing that the naive Palatini approach really swaps the theory for another. The differences disappear only in the limit of ordinary general relativity, where an accidental redundancy ensures that the naive Palatini approach works there. We outline the correct decoupling limits and the strong coupling regimes. As a corollary we find that the so-called "modified source gravity" models suffer from strong coupling problems at very low scales, and hence cannot be a realistic approximation of our universe. We also comment on a method to decouple the extra scalar using the chameleon mechanism

  4. The interpolation method of stochastic functions and the stochastic variational principle

    International Nuclear Information System (INIS)

    Liu Xianbin; Chen Qiu

    1993-01-01

    Uncertainties are receiving increasing attention in modern engineering structural design. Viewed on an appropriate scale, the inherent physical attributes (material properties) of many structural systems exhibit patterns of random variation in space and time; generally, this random variation amounts to a small parameter fluctuation. For a linear mechanical system, the random variation is modeled as a random variation of a linear partial differential operator and, in the stochastic finite element method, as a random variation of a stiffness matrix. Besides the stochasticity of the structural physical properties, the influence of random loads, which present themselves as random boundary conditions, brings about much more complexity in structural analysis. The stochastic finite element method, or probabilistic finite element method, is now used to study structural systems with random physical parameters, whether or not the loads are random. Differing from general finite element theory, the main difficulty the stochastic finite element method faces is the inverse operation of stochastic operators and stochastic matrices, since the inverse operators and inverse matrices are statistically correlated with the random parameters and random loads. So far, many efforts have been made to obtain reasonably approximate expressions for the inverse operators and inverse matrices, such as the Perturbation Method, the Neumann Expansion Method, the Galerkin Method (in appropriate Hilbert spaces defined for random functions) and the Orthogonal Expansion Method. Among these methods, the Perturbation Method appears to be the most accessible. The advantage of these methods is that fairly accurate response statistics can be obtained given only finite information about the input. However, the second-order statistics obtained by use of the Perturbation Method and the Neumann Expansion Method are not always the appropriate ones, because the relevant second

  5. The critical domain size of stochastic population models.

    Science.gov (United States)

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
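
    As a concrete illustration of the branching-process route mentioned in the abstract, the sketch below runs a Galton-Watson process in which each individual leaves a Poisson number of recruits whose mean is reduced by the fraction of dispersers lost outside a domain of size L. The Gaussian dispersal kernel and parameter values are toy assumptions, not the models analysed in the paper; the output shows the extinction probability dropping as the domain grows past a critical size.

      import numpy as np

      def extinction_prob(L, r0=2.0, sigma=1.0, n_gen=50, n_runs=2000, seed=0):
          """Monte Carlo extinction probability of a toy Galton-Watson population model.

          Mean offspring number = r0 * (fraction of dispersers retained in a domain of size L),
          with a Gaussian dispersal kernel of width sigma centred in the domain (toy assumption).
          """
          from math import erf, sqrt
          retained = erf(L / (2.0 * sqrt(2.0) * sigma))      # kernel mass kept inside the domain
          mean_offspring = r0 * retained
          rng = np.random.default_rng(seed)
          extinct = 0
          for _ in range(n_runs):
              n = 1
              for _ in range(n_gen):
                  n = rng.poisson(mean_offspring * n)        # next generation size
                  if n == 0:
                      extinct += 1
                      break
                  if n > 1_000_000:                          # effectively established; stop early
                      break
          return extinct / n_runs

      for L in (0.5, 1.0, 2.0, 4.0):
          print(L, extinction_prob(L))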

  6. Polarimetric and angular light-scattering from dense media: Comparison of a vectorial radiative transfer model with analytical, stochastic and experimental approaches

    International Nuclear Information System (INIS)

    Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent

    2013-01-01

    Our work presents computations via a vectorial radiative transfer model of the polarimetric and angular light scattered by a stratified dense medium with small and intermediate optical thickness. We report the validation of this model using analytical results and different computational methods like stochastic algorithms. Moreover, we check the model with experimental data from a specific scatterometer developed at the Onera. The advantages and disadvantages of a radiative approach are discussed. This paper represents a step toward the characterization of particles in dense media involving multiple scattering. -- Highlights: • A vectorial radiative transfer model to simulate the light scattered by stratified layers is developed. • The vectorial radiative transfer equation is solved using an adding–doubling technique. • The results are compared to analytical and stochastic data. • Validation with experimental data from a scatterometer developed at Onera is presented

  7. Portfolio Optimization with Stochastic Dividends and Stochastic Volatility

    Science.gov (United States)

    Varga, Katherine Yvonne

    2015-01-01

    We consider an optimal investment-consumption portfolio optimization model in which an investor receives stochastic dividends. As a first problem, we allow the drift of stock price to be a bounded function. Next, we consider a stochastic volatility model. In each problem, we use the dynamic programming method to derive the Hamilton-Jacobi-Bellman…

  8. High Weak Order Methods for Stochastic Differential Equations Based on Modified Equations

    KAUST Repository

    Abdulle, Assyr

    2012-01-01

    © 2012 Society for Industrial and Applied Mathematics. Inspired by recent advances in the theory of modified differential equations, we propose a new methodology for constructing numerical integrators with high weak order for the time integration of stochastic differential equations. This approach is illustrated with the constructions of new methods of weak order two, in particular, semi-implicit integrators well suited for stiff (mean-square stable) stochastic problems, and implicit integrators that exactly conserve all quadratic first integrals of a stochastic dynamical system. Numerical examples confirm the theoretical results and show the versatility of our methodology.
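
    The integrators constructed in the paper reach weak order two; for orientation, the sketch below sets up the standard weak-convergence check on the plain Euler-Maruyama scheme (weak order one) for geometric Brownian motion, where E[X_T] = x0*exp(mu*T) is known exactly. The drift, diffusion and test functional are illustrative choices, not the schemes or test problems of the paper.

      import numpy as np

      def euler_maruyama_mean(mu, sigma, x0, T, n_steps, n_paths, rng):
          """Estimate E[X_T] for dX = mu*X dt + sigma*X dW with the Euler-Maruyama scheme."""
          dt = T / n_steps
          x = np.full(n_paths, x0, dtype=float)
          for _ in range(n_steps):
              dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
              x += mu * x * dt + sigma * x * dW
          return x.mean()

      rng = np.random.default_rng(42)
      mu, sigma, x0, T = 0.5, 0.2, 1.0, 1.0
      exact = x0 * np.exp(mu * T)                     # E[X_T] for geometric Brownian motion
      for n_steps in (4, 8, 16, 32):
          est = euler_maruyama_mean(mu, sigma, x0, T, n_steps, n_paths=200_000, rng=rng)
          print(n_steps, abs(est - exact))            # weak error shrinks roughly like 1/n_steps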

  9. The mass transfer approach to multivariate discrete first order stochastic dominance

    DEFF Research Database (Denmark)

    Østerdal, Lars Peter Raahave

    2010-01-01

    A fundamental result in the theory of stochastic dominance tells that first order dominance between two finite multivariate distributions is equivalent to the property that the one can be obtained from the other by shifting probability mass from one outcome to another that is worse a finite numbe...

  10. Parametric inference for stochastic differential equations: a smooth and match approach

    NARCIS (Netherlands)

    Gugushvili, S.; Spreij, P.

    2012-01-01

    We study the problem of parameter estimation for a univariate discretely observed ergodic diffusion process given as a solution to a stochastic differential equation. The estimation procedure we propose consists of two steps. In the first step, which is referred to as a smoothing step, we smooth the

  11. Exact and Approximate Stochastic Simulation of Intracellular Calcium Dynamics

    Directory of Open Access Journals (Sweden)

    Nicolas Wieder

    2011-01-01

    pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships in the spectrum ranging from stochastic to deterministic algorithms.

  12. Theory of stochastic differential equations with jumps and applications mathematical and analytical techniques with applications to engineering

    CERN Document Server

    SITU, Rong

    2005-01-01

    Derivation of Ito's formulas, Girsanov's theorems and the martingale representation theorem for stochastic DEs with jumps; applications to population control; the reflecting stochastic DE technique; applications to the stock market (backward stochastic DE approach); derivation of the Black-Scholes formula for markets with and without jumps; non-linear filtering problems with jumps.

  13. Stochastic kinetics

    International Nuclear Information System (INIS)

    Colombino, A.; Mosiello, R.; Norelli, F.; Jorio, V.M.; Pacilio, N.

    1975-01-01

    Nuclear system kinetics is formulated according to a stochastic approach. The detailed probability balance equations are written for the probability of finding the mixed population of neutrons and detected neutrons, i.e. detectrons, at a given level at a given instant of time. The equations are integrated in search of a probability profile: a series of cases is analyzed through a progressive criterion, which tends to take into account an increasing number of physical processes within the chosen model. The most important contribution is that the solutions interpret analytically the experimental conditions of equilibrium (noise analysis) and non-equilibrium (pulsed neutron measurements, the source drop technique, start-up procedures)

  14. Massive Gravity

    OpenAIRE

    de Rham, Claudia

    2014-01-01

    We review recent progress in massive gravity. We start by showing how different theories of massive gravity emerge from a higher-dimensional theory of general relativity, leading to the Dvali–Gabadadze–Porrati model (DGP), cascading gravity, and ghost-free massive gravity. We then explore their theoretical and phenomenological consistency, proving the absence of Boulware–Deser ghosts and reviewing the Vainshtein mechanism and the cosmological solutions in these models. Finally, we present alt...

  15. Circulation-based Modeling of Gravity Currents

    Science.gov (United States)

    Meiburg, E. H.; Borden, Z.

    2013-05-01

    Atmospheric and oceanic flows driven by predominantly horizontal density differences, such as sea breezes, thunderstorm outflows, powder snow avalanches, and turbidity currents, are frequently modeled as gravity currents. Efforts to develop simplified models of such currents date back to von Karman (1940), who considered a two-dimensional gravity current in an inviscid, irrotational and infinitely deep ambient. Benjamin (1968) presented an alternative model, focusing on the inviscid, irrotational flow past a gravity current in a finite-depth channel. More recently, Shin et al. (2004) proposed a model for gravity currents generated by partial-depth lock releases, considering a control volume that encompasses both fronts. All of the above models, in addition to the conservation of mass and horizontal momentum, invoke Bernoulli's law along some specific streamline in the flow field, in order to obtain a closed system of equations that can be solved for the front velocity as function of the current height. More recent computational investigations based on the Navier-Stokes equations, on the other hand, reproduce the dynamics of gravity currents based on the conservation of mass and momentum alone. We propose that it should therefore be possible to formulate a fundamental gravity current model without invoking Bernoulli's law. The talk will show that the front velocity of gravity currents can indeed be predicted as a function of their height from mass and momentum considerations alone, by considering the evolution of interfacial vorticity. This approach does not require information on the pressure field and therefore avoids the need for an energy closure argument such as those invoked by the earlier models. Predictions by the new theory are shown to be in close agreement with direct numerical simulation results. References Von Karman, T. 1940 The engineer grapples with nonlinear problems, Bull. Am. Math Soc. 46, 615-683. Benjamin, T.B. 1968 Gravity currents and related

  16. Stochastic global optimization as a filtering problem

    International Nuclear Information System (INIS)

    Stinis, Panos

    2012-01-01

    We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
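
    A minimal sketch of the particle-filter view of optimization described above: a population of candidate solutions is weighted by a noisy evaluation of the objective, resampled in proportion to those weights, and jittered before the next round. The test function, weighting temperature and jitter schedule are assumptions made for illustration, not the algorithm analysed in the paper.

      import numpy as np

      def noisy_objective(x, rng):
          """Noisy evaluation of a multimodal test function (illustrative choice)."""
          return np.sin(3 * x) + 0.1 * x**2 + 0.05 * rng.standard_normal(x.shape)

      def particle_filter_optimize(n_particles=500, n_iter=60, temp=5.0, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, n_particles)                        # initial candidate solutions
          jitter = 0.5
          for _ in range(n_iter):
              f = noisy_objective(x, rng)
              w = np.exp(-temp * (f - f.min()))                      # low objective -> high weight
              w /= w.sum()
              idx = rng.choice(n_particles, size=n_particles, p=w)   # resample promising candidates
              x = x[idx] + jitter * rng.standard_normal(n_particles) # perturb (exploration)
              jitter *= 0.95                                         # anneal the perturbation
          return x[np.argmin(noisy_objective(x, rng))]

      print(particle_filter_optimize())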

  17. Scales of gravity

    International Nuclear Information System (INIS)

    Dvali, Gia; Kolanovic, Marko; Nitti, Francesco; Gabadadze, Gregory

    2002-01-01

    We propose a framework in which the quantum gravity scale can be as low as 10^-3 eV. The key assumption is that the standard model ultraviolet cutoff is much higher than the quantum gravity scale. This ensures that we observe conventional weak gravity. We construct an explicit brane-world model in which the brane-localized standard model is coupled to strong 5D gravity of infinite-volume flat extra space. Because of the high ultraviolet scale, the standard model fields generate a large graviton kinetic term on the brane. This kinetic term 'shields' the standard model from the strong bulk gravity. As a result, an observer on the brane sees weak 4D gravity up to astronomically large distances beyond which gravity becomes five dimensional. Modeling quantum gravity above its scale by the closed string spectrum we show that the shielding phenomenon protects the standard model from an apparent phenomenological catastrophe due to the exponentially large number of light string states. The collider experiments, astrophysics, cosmology and gravity measurements independently point to the same lower bound on the quantum gravity scale, 10^-3 eV. For this value the model has experimental signatures both for colliders and for submillimeter gravity measurements. Black holes reveal certain interesting properties in this framework

  18. Digital hardware implementation of a stochastic two-dimensional neuron model.

    Science.gov (United States)

    Grassia, F; Kohno, T; Levi, T

    2016-11-01

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed point arithmetic operation. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL language and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
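
    The noise source named in the abstract, an Ornstein-Uhlenbeck current, has a one-step update that is exact in distribution and maps naturally onto a pipelined arithmetic implementation. The floating-point sketch below shows that update driving a toy leaky integrate-and-fire membrane; the time constants, noise amplitude and the simplified neuron are assumptions for illustration, not the two-dimensional model implemented on the FPGA.

      import numpy as np

      def ou_current(n_steps, dt=1e-4, tau=5e-3, sigma=0.1, mu=0.0, seed=0):
          """Ornstein-Uhlenbeck noise current: dI = -(I - mu)/tau dt + sigma*sqrt(2/tau) dW."""
          rng = np.random.default_rng(seed)
          a = np.exp(-dt / tau)                          # exact one-step decay factor
          b = sigma * np.sqrt(1.0 - a**2)                # keeps the stationary std equal to sigma
          I = np.empty(n_steps)
          I[0] = mu
          for k in range(1, n_steps):
              I[k] = mu + a * (I[k-1] - mu) + b * rng.standard_normal()
          return I

      # Drive a toy leaky integrate-and-fire neuron with the OU current (illustrative parameters).
      I = ou_current(50_000) + 1.05                      # mean drive just above threshold
      v, dt, tau_m, v_th = 0.0, 1e-4, 20e-3, 1.0
      spikes = 0
      for i in I:
          v += dt / tau_m * (-v + i)
          if v >= v_th:
              spikes += 1
              v = 0.0
      print("spike count:", spikes)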

  19. Stochastic modelling of conjugate heat transfer in near-wall turbulence

    International Nuclear Information System (INIS)

    Pozorski, Jacek; Minier, Jean-Pierre

    2006-01-01

    The paper addresses the conjugate heat transfer in turbulent flows with temperature assumed to be a passive scalar. The Lagrangian approach is applied and the heat transfer is modelled with the use of stochastic particles. The intensity of thermal fluctuations in near-wall turbulence is determined from the scalar probability density function (PDF) with externally provided dynamical statistics. A stochastic model for the temperature field in the wall material is proposed and boundary conditions for stochastic particles at the solid-fluid interface are formulated. The heated channel flow with finite-thickness walls is considered as a validation case. Computation results for the mean temperature profiles and the variance of thermal fluctuations are presented and compared with available DNS data

  20. Stochastic modelling of conjugate heat transfer in near-wall turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Pozorski, Jacek [Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Fiszera 14, 80952 Gdansk (Poland)]. E-mail: jp@imp.gda.pl; Minier, Jean-Pierre [Research and Development Division, Electricite de France, 6 quai Watier, 78400 Chatou (France)

    2006-10-15

    The paper addresses the conjugate heat transfer in turbulent flows with temperature assumed to be a passive scalar. The Lagrangian approach is applied and the heat transfer is modelled with the use of stochastic particles. The intensity of thermal fluctuations in near-wall turbulence is determined from the scalar probability density function (PDF) with externally provided dynamical statistics. A stochastic model for the temperature field in the wall material is proposed and boundary conditions for stochastic particles at the solid-fluid interface are formulated. The heated channel flow with finite-thickness walls is considered as a validation case. Computation results for the mean temperature profiles and the variance of thermal fluctuations are presented and compared with available DNS data.

  1. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas

    2017-12-27

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
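
    For reference, the heavy-ball recursion studied here takes the simple form x_{k+1} = x_k - gamma*g_k + beta*(x_k - x_{k-1}), with g_k a stochastic gradient; the sketch below applies it to a least-squares problem. The data, mini-batch size, step size and momentum parameter are illustrative assumptions, not the experimental setup of the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((500, 20))
      b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(500)

      def sgd_heavy_ball(gamma=1e-3, beta=0.9, n_iter=5000, batch=8):
          """Stochastic gradient descent with heavy-ball momentum on 0.5*||Ax - b||^2."""
          x = np.zeros(A.shape[1])
          x_prev = x.copy()
          for _ in range(n_iter):
              idx = rng.integers(0, A.shape[0], size=batch)          # sample a mini-batch
              grad = A[idx].T @ (A[idx] @ x - b[idx]) / batch        # stochastic gradient
              x_next = x - gamma * grad + beta * (x - x_prev)        # heavy-ball update
              x_prev, x = x, x_next
          return x

      x_hat = sgd_heavy_ball()
      print("residual:", np.linalg.norm(A @ x_hat - b))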

  2. Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

    KAUST Repository

    Loizou, Nicolas; Richtarik, Peter

    2017-01-01

    In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.

  3. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    Science.gov (United States)

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take up to days, weeks, and even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or data modifications, compound computation times even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, and non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation points, without having to filter data into a synthesized grid.
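
    The vertical-line-element idea has a closed form for the vertical attraction: a line of linear mass density lambda at horizontal offset r, running from depth z1 to z2 below the observation level, contributes g_z = G*lambda*(1/sqrt(r^2 + z1^2) - 1/sqrt(r^2 + z2^2)). The sketch below sums such elements over a small synthetic terrain grid; the cell-to-line conversion and the toy terrain are assumptions for illustration, not the SIGMA implementation.

      import numpy as np

      G = 6.674e-11                                        # gravitational constant, m^3 kg^-1 s^-2

      def gz_line(obs, x, y, z_top, z_bot, lam):
          """Vertical attraction at `obs` of a vertical line element of linear density lam [kg/m]."""
          r2 = (x - obs[0])**2 + (y - obs[1])**2
          z1 = z_top - obs[2]                              # depths relative to the observation level
          z2 = z_bot - obs[2]
          return G * lam * (1.0/np.sqrt(r2 + z1**2) - 1.0/np.sqrt(r2 + z2**2))

      # Synthetic terrain: each 100 m x 100 m cell becomes one line element carrying the cell's mass.
      rho, dx = 2670.0, 100.0                              # crustal density [kg/m^3], cell size [m]
      xs, ys = np.meshgrid(np.arange(0, 2000.0, dx), np.arange(0, 2000.0, dx))
      thickness = 50.0 + 10.0 * np.sin(xs / 300.0)         # toy layer thickness [m], surface at z = 0
      lam = rho * dx * dx                                  # linear density preserving each cell's mass

      obs = (1000.0, 1000.0, -10.0)                        # observation point 10 m above the surface
      gz = sum(gz_line(obs, x, y, 0.0, t, lam)
               for x, y, t in zip(xs.ravel(), ys.ravel(), thickness.ravel()))
      print("g_z [mGal]:", gz * 1e5)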

  4. Probabilistic Forecasts of Solar Irradiance by Stochastic Differential Equations

    DEFF Research Database (Denmark)

    Iversen, Jan Emil Banning; Morales González, Juan Miguel; Møller, Jan Kloppenborg

    2014-01-01

    The information included in probabilistic forecasts may be paramount for decision makers to efficiently make use of this uncertain and variable generation. In this paper, a stochastic differential equation framework for modeling the uncertainty associated with the solar irradiance point forecast is proposed. This modeling approach allows for characterizing both the interdependence structure of prediction errors of short-term solar irradiance and their predictive distribution. Three different stochastic differential equation models are first fitted to a training data set and subsequently evaluated on a one-year test set.

  5. Stochastic neuron models

    CERN Document Server

    Greenwood, Priscilla E

    2016-01-01

    This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...

  6. Model selection for integrated pest management with stochasticity.

    Science.gov (United States)

    Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel

    2018-04-07

    In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Optimal condition-based maintenance decisions for systems with dependent stochastic degradation of components

    International Nuclear Information System (INIS)

    Hong, H.P.; Zhou, W.; Zhang, S.; Ye, W.

    2014-01-01

    Components in engineered systems are subjected to stochastic deterioration due to the operating environmental conditions, and the uncertainty in material properties. The components need to be inspected and possibly replaced based on preventive or failure replacement criteria to provide the intended and safe operation of the system. In the present study, we investigate the influence of dependent stochastic degradation of multiple components on the optimal maintenance decisions. We use copula to model the dependent stochastic degradation of components, and formulate the optimal decision problem based on the minimum expected cost rule and the stochastic dominance rules. The latter is used to cope with decision maker's risk attitude. We illustrate the developed probabilistic analysis approach and the influence of the dependency of the stochastic degradation on the preferred decisions through numerical examples
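
    One way to realise the copula-coupled degradation ingredient is sketched below: a Gaussian copula ties together the per-step increments of two gamma degradation processes so that the components deteriorate in a correlated way. The gamma marginals, correlation value and failure threshold are illustrative assumptions, not the calibrated model or decision rules of the study.

      import numpy as np
      from scipy import stats

      def correlated_degradation(n_steps=100, rho=0.7, shape=0.2, scale=1.0, seed=0):
          """Two degradation paths whose per-step increments are coupled by a Gaussian copula."""
          rng = np.random.default_rng(seed)
          cov = [[1.0, rho], [rho, 1.0]]
          z = rng.multivariate_normal([0.0, 0.0], cov, size=n_steps)   # latent Gaussian pairs
          u = stats.norm.cdf(z)                                        # uniform marginals (the copula)
          inc = stats.gamma.ppf(u, a=shape, scale=scale)               # gamma-distributed increments
          return inc.cumsum(axis=0)                                    # cumulative degradation paths

      paths = correlated_degradation()
      threshold = 15.0
      # index of first threshold crossing per component (0 means no failure within the horizon)
      print("component failure times:", (paths > threshold).argmax(axis=0))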

  8. Quantum group structure and local fields in the algebraic approach to 2D gravity

    Science.gov (United States)

    Schnittger, J.

    1995-07-01

    This review contains a summary of the work by J.-L. Gervais and the author on the operator approach to 2d gravity. Special emphasis is placed on the construction of local observables — the Liouville exponentials and the Liouville field itself — and the underlying algebra of chiral vertex operators. The double quantum group structure arising from the presence of two screening charges is discussed and the generalized algebra and field operators are derived. In the last part, we show that our construction gives rise to a natural definition of a quantum tau function, which is a noncommutative version of the classical group-theoretic representation of the Liouville fields by Leznov and Saveliev.

  9. A new approach to the analysis of the phase space of f(R)-gravity

    Energy Technology Data Exchange (ETDEWEB)

    Carloni, S., E-mail: sante.carloni@tecnico.ulisboa.pt [Centro Multidisciplinar de Astrofisica—CENTRA, Instituto Superior Tecnico – IST, Universidade de Lisboa – UL, Avenida Rovisco Pais 1, 1049-001 (Portugal)

    2015-09-01

    We propose a new dynamical system formalism for the analysis of f(R) cosmologies. The new approach eliminates the need for cumbersome inversions to close the dynamical system and allows the analysis of the phase space of f(R)-gravity models which cannot be investigated using the standard technique. Differently from previously proposed similar techniques, the new method is constructed in such a way as to associate scale factors with the fixed points; these scale factors contain four integration constants (i.e., they are solutions of fourth-order differential equations). In this way new light is shed on the physical meaning of the fixed points. We apply this technique to some f(R) Lagrangians relevant for inflationary and dark energy models.

  10. PPN-limit of Fourth Order Gravity inspired by Scalar-Tensor Gravity

    OpenAIRE

    Capozziello, S.; Troisi, A.

    2005-01-01

    Based on the dynamical equivalence between higher order gravity and scalar-tensor gravity, the PPN limit of fourth order gravity is discussed. We exploit this analogy by developing a fourth order gravity version of the Eddington PPN parameters. As a result, Solar System experiments can be reconciled with higher order gravity, provided the physical constraints descending from experiments are fulfilled.

  11. Statistical mechanics of stochastic neural networks: Relationship between the self-consistent signal-to-noise analysis, Thouless-Anderson-Palmer equation, and replica symmetric calculation approaches

    International Nuclear Information System (INIS)

    Shiino, Masatoshi; Yamana, Michiko

    2004-01-01

    We study the statistical mechanical aspects of stochastic analog neural network models for associative memory with correlation type learning. We take three approaches to derive the set of the order parameter equations for investigating statistical properties of retrieval states: the self-consistent signal-to-noise analysis (SCSNA), the Thouless-Anderson-Palmer (TAP) equation, and the replica symmetric calculation. On the basis of the cavity method the SCSNA can be generalized to deal with stochastic networks. We establish the close connection between the TAP equation and the SCSNA to elucidate the relationship between the Onsager reaction term of the TAP equation and the output proportional term of the SCSNA that appear in the expressions for the local fields

  12. Terrestrial gravity data analysis for interim gravity model improvement

    Science.gov (United States)

    1987-01-01

    This is the first status report for the Interim Gravity Model research effort that was started on June 30, 1986. The basic theme of this study is to develop appropriate models and adjustment procedures for estimating potential coefficients from terrestrial gravity data. The plan is to use the latest gravity data sets to produce coefficient estimates as well as to provide normal equations to NASA for use in the TOPEX/POSEIDON gravity field modeling program.

  13. Asymptotic problems for stochastic partial differential equations

    Science.gov (United States)

    Salins, Michael

    Stochastic partial differential equations (SPDEs) can be used to model systems in a wide variety of fields including physics, chemistry, and engineering. The main SPDEs of interest in this dissertation are the semilinear stochastic wave equations which model the movement of a material with constant mass density that is exposed to both deterministic and random forcing. Cerrai and Freidlin have shown that on fixed time intervals, as the mass density of the material approaches zero, the solutions of the stochastic wave equation converge uniformly to the solutions of a stochastic heat equation, in probability. This is called the Smoluchowski-Kramers approximation. In Chapter 2, we investigate some of the multi-scale behaviors that these wave equations exhibit. In particular, we show that the Freidlin-Wentzell exit place and exit time asymptotics for the stochastic wave equation in the small noise regime can be approximated by the exit place and exit time asymptotics for the stochastic heat equation. We prove that the exit time and exit place asymptotics are characterized by quantities called quasipotentials and we prove that the quasipotentials converge. We then investigate the special case where the equation has a gradient structure and show that we can explicitly solve for the quasipotentials, and that the quasipotentials for the heat equation and wave equation are equal. In Chapter 3, we study the Smoluchowski-Kramers approximation in the case where the material is electrically charged and exposed to a magnetic field. Interestingly, if the system is frictionless, then the Smoluchowski-Kramers approximation does not hold. We prove that the Smoluchowski-Kramers approximation is valid for systems exposed to both a magnetic field and friction. Notably, we prove that the solutions to the second-order equations converge to the solutions of the first-order equation in an Lp sense. This strengthens previous results where convergence was proved in probability.

  14. Self-scheduling and bidding strategies of thermal units with stochastic emission constraints

    International Nuclear Information System (INIS)

    Laia, R.; Pousinho, H.M.I.; Melíco, R.; Mendes, V.M.F.

    2015-01-01

    Highlights: • The management of thermal power plants is considered for different emission allowance levels. • The uncertainty on electricity price is considered by a set of scenarios. • A stochastic MILP approach allows devising the bidding strategies and hedging against price uncertainty and emission allowances. - Abstract: This paper is on the self-scheduling problem for a thermal power producer taking part in a pool-based electricity market as a price-taker, having bilateral contracts and being emission-constrained. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding electricity price is considered through a set of scenarios computed by simulation and scenario-reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as: forbidden operating zones, ramp up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are accessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and the proficiency of the proposed approach in supporting bidding strategies

  15. Stability issues of black hole in non-local gravity

    Science.gov (United States)

    Myung, Yun Soo; Park, Young-Jai

    2018-04-01

    We discuss stability issues of Schwarzschild black hole in non-local gravity. It is shown that the stability analysis of black hole for the unitary and renormalizable non-local gravity with γ2 = - 2γ0 cannot be performed in the Lichnerowicz operator approach. On the other hand, for the unitary and non-renormalizable case with γ2 = 0, the black hole is stable against the metric perturbations. For non-unitary and renormalizable local gravity with γ2 = - 2γ0 = const (fourth-order gravity), the small black holes are unstable against the metric perturbations. This implies that what makes the problem difficult in stability analysis of black hole is the simultaneous requirement of unitarity and renormalizability around the Minkowski spacetime.

  16. A unified approach to stochastic integration on the real line

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Graversen, Svend-Erik; Pedersen, Jan

    Stochastic integration on the predictable σ-field with respect to σ-finite L0-valued measures, also known as formal semimartingales, is studied. In particular, the triplet of such measures is introduced and used to characterize the set of integrable processes. Special attention is given to Lévy processes indexed by the real line. Surprisingly, many of the basic properties break down in this situation compared to the usual R+ case.

  17. Stochastic tools in turbulence

    CERN Document Server

    Lumley, John L

    2012-01-01

    Stochastic Tools in Turbulence discusses the mathematical tools available for describing stochastic vector fields and for solving problems involving such fields. The book deals with the needs of turbulence in relation to stochastic vector fields, particularly three-dimensional aspects, linear problems, and stochastic model building. The text describes probability distributions and densities, including Lebesgue integration, conditional probabilities, conditional expectations, statistical independence, and lack of correlation. The book also explains the significance of the moments, the properties of the

  18. Eigenvalues of the volume operator in loop quantum gravity

    International Nuclear Information System (INIS)

    Meissner, Krzysztof A

    2006-01-01

    We present a simple method to calculate certain sums of the eigenvalues of the volume operator in loop quantum gravity. We derive the asymptotic distribution of the eigenvalues in the classical limit of very large spins, which turns out to be of a very simple form. The results can be useful for example in the statistical approach to quantum gravity

  19. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth

  20. A hybrid stochastic approach for self-location of wireless sensors in indoor environments.

    Science.gov (United States)

    Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro

    2009-01-01

    Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed for WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the positions of the radio stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time-consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors can determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without a loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.
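
    The hybrid idea — combine a propagation (deductive) prediction with a small set of learned (inductive) fingerprints — can be sketched in a few lines. The log-distance path-loss model, its parameters, and the weighted nearest-neighbour rule below are common textbook choices used here for illustration, not the exact algorithm of the paper.

      import numpy as np

      def rss_model(pos, ap, tx_dbm=-30.0, n=3.0):
          """Deductive part: log-distance path-loss prediction of RSS [dBm] at `pos` from AP `ap`."""
          d = max(np.linalg.norm(np.asarray(pos) - np.asarray(ap)), 0.5)
          return tx_dbm - 10.0 * n * np.log10(d)

      aps = [(0.0, 0.0), (20.0, 0.0), (0.0, 15.0), (20.0, 15.0)]   # access point positions [m]

      # Inductive part: a deliberately sparse fingerprint grid built from the model plus noise,
      # standing in for a short measurement campaign.
      rng = np.random.default_rng(0)
      grid = [(x, y) for x in range(0, 21, 5) for y in range(0, 16, 5)]
      fingerprints = {p: [rss_model(p, ap) + rng.normal(0, 2) for ap in aps] for p in grid}

      def locate(observed_rss, k=3):
          """Weighted k-nearest-neighbour position estimate in RSS space."""
          dists = sorted((np.linalg.norm(np.array(fp) - observed_rss), p)
                         for p, fp in fingerprints.items())[:k]
          w = np.array([1.0 / (d + 1e-6) for d, _ in dists])
          pts = np.array([p for _, p in dists])
          return tuple(w @ pts / w.sum())

      true_pos = (7.0, 9.0)
      observed = np.array([rss_model(true_pos, ap) + rng.normal(0, 2) for ap in aps])
      print("estimate:", locate(observed), "true:", true_pos)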