WorldWideScience

Sample records for stochastic gravity approach

  1. Stochastic quantum gravity

    Rumpf, H.

    1987-01-01

    We begin with a naive application of the Parisi-Wu scheme to linearized gravity. This will lead to trouble, as one peculiarity of the full theory, the indefiniteness of the Euclidean action, shows up already at this level. After discussing some proposals to overcome this problem, Minkowski-space stochastic quantization will be introduced. This will still not result in an acceptable quantum theory of linearized gravity, as the Feynman propagator turns out to be non-causal. This defect is remedied only after a careful analysis of general covariance in stochastic quantization has been performed. The analysis requires the notion of a metric on the manifold of metrics, and a natural candidate for this is singled out. With this, a consistent stochastic quantization of Einstein gravity becomes possible. It is even possible, at least perturbatively, to return to the Euclidean regime. 25 refs. (Author)
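
    For orientation, the Parisi-Wu scheme referred to throughout these records evolves a Euclidean field in a fictitious time by a Langevin equation; the following is a standard textbook transcription (our notation, not the paper's):

    ```latex
    % Parisi-Wu stochastic quantization: Langevin evolution in fictitious time tau
    \frac{\partial \phi(x,\tau)}{\partial \tau}
      = -\left.\frac{\delta S_E[\phi]}{\delta \phi(x)}\right|_{\phi=\phi(\cdot,\tau)}
      + \eta(x,\tau),
    \qquad
    \langle \eta(x,\tau)\,\eta(x',\tau')\rangle = 2\,\delta^{(4)}(x-x')\,\delta(\tau-\tau').
    ```

    As tau tends to infinity, equal-time correlators of phi relax to Euclidean quantum expectation values provided S_E is bounded below; the indefiniteness of the Euclidean gravitational action noted in the abstract is precisely what spoils this convergence for gravity.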

  2. Stochastic Gravity: Theory and Applications

    Hu Bei Lok

    2008-05-01

    Whereas semiclassical gravity is based on the semiclassical Einstein equation with sources given by the expectation value of the stress-energy tensor of quantum fields, stochastic semiclassical gravity is based on the Einstein–Langevin equation, which has, in addition, sources due to the noise kernel. The noise kernel is the vacuum expectation value of the (operator-valued) stress-energy bitensor, which describes the fluctuations of quantum-matter fields in curved spacetimes. A new, improved criterion for the validity of semiclassical gravity may also be formulated from the viewpoint of this theory. In the first part of this review we describe the fundamentals of this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the stress-energy tensor to the correlation functions. The functional approach uses the Feynman–Vernon influence functional and the Schwinger–Keldysh closed-time-path effective action methods. In the second part, we describe three applications of stochastic gravity. First, we consider metric perturbations in a Minkowski spacetime, compute the two-point correlation functions of these perturbations and prove that Minkowski spacetime is a stable solution of semiclassical gravity. Second, we discuss structure formation from the stochastic-gravity viewpoint, which can go beyond the standard treatment by incorporating the full quantum effect of the inflaton fluctuations. Third, using the Einstein–Langevin equation, we discuss the backreaction of Hawking radiation and the behavior of metric fluctuations for both the quasi-equilibrium condition of a black hole in a box and the fully nonequilibrium condition of an evaporating black-hole spacetime. Finally, we briefly discuss the theoretical structure of stochastic gravity in relation to quantum gravity and point out ...
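
    The centrepiece of the theory can be stated compactly; a hedged schematic transcription (sign and normalization conventions vary between references):

    ```latex
    % Einstein-Langevin equation: semiclassical Einstein equation plus a Gaussian
    % stochastic source xi whose two-point function is the noise kernel N
    G_{\mu\nu}[g+h] = 8\pi G\,\bigl(\langle \hat{T}_{\mu\nu}[g+h]\rangle + \xi_{\mu\nu}\bigr),
    \qquad \langle \xi_{\mu\nu}\rangle = 0,
    \\[4pt]
    \langle \xi_{\mu\nu}(x)\,\xi_{\alpha\beta}(y)\rangle
      = N_{\mu\nu\alpha\beta}(x,y)
      = \tfrac{1}{2}\,\bigl\langle \bigl\{\hat{t}_{\mu\nu}(x),\,\hat{t}_{\alpha\beta}(y)\bigr\}\bigr\rangle,
    \qquad
    \hat{t}_{\mu\nu} \equiv \hat{T}_{\mu\nu} - \langle \hat{T}_{\mu\nu}\rangle.
    ```

    Taking the statistical average of the first equation recovers the semiclassical Einstein equation; the metric fluctuation h is sourced by the matter-field fluctuations encoded in N.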

  3. Stochastic quantization and gravity

    Rumpf, H.

    1984-01-01

    We give a preliminary account of the application of stochastic quantization to the gravitational field. We start in Section I from Nelson's formulation of quantum mechanics as Newtonian stochastic mechanics and only then introduce the Parisi-Wu stochastic quantization scheme on which all the later discussion will be based. In Section II we present a generalization of the scheme that is applicable to fields in physical (i.e. Lorentzian) space-time and treat the free linearized gravitational field in this manner. The most remarkable result of this is the noncausal propagation of conformal gravitons. Moreover the concept of stochastic gauge-fixing is introduced and a complete discussion of all the covariant gauges is given. A special symmetry relating two classes of covariant gauges is exhibited. Finally Section III contains some preliminary remarks on full nonlinear gravity. In particular we argue that in contrast to gauge fields the stochastic gravitational field cannot be transformed to a Gaussian process. (Author)
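
    To make the scheme concrete, here is a toy numerical sketch (ours, not the paper's): a zero-dimensional "field" with Euclidean action S(phi) = m^2 phi^2 / 2, relaxed by the discretized Langevin equation; the equilibrium distribution exp(-S) gives <phi^2> = 1/m^2.

    ```python
    import numpy as np

    # Toy Parisi-Wu stochastic quantization for S(phi) = 0.5 * m2 * phi^2.
    # Many independent Langevin chains are relaxed in fictitious time;
    # equilibrium sampling of exp(-S) should give <phi^2> = 1/m2.
    rng = np.random.default_rng(0)
    m2, dtau, n_steps, n_chains = 2.0, 1e-3, 20_000, 10_000

    phi = np.zeros(n_chains)
    for _ in range(n_steps):
        # Euler-Maruyama step: drift -dS/dphi = -m2*phi, noise variance 2*dtau
        phi += -m2 * phi * dtau + np.sqrt(2.0 * dtau) * rng.standard_normal(n_chains)

    print(f"<phi^2> = {np.mean(phi**2):.4f}  (exact: {1.0 / m2:.4f})")
    ```

    For gravity the analogue of S is unbounded below, which is the obstruction that the Lorentzian generalization and the stochastic gauge-fixing of this record are designed to evade.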

  4. Stochastic Gravity: Theory and Applications

    Hu Bei Lok

    2004-01-01

    Whereas semiclassical gravity is based on the semiclassical Einstein equation with sources given by the expectation value of the stress-energy tensor of quantum fields, stochastic semiclassical gravity is based on the Einstein-Langevin equation, which has, in addition, sources due to the noise kernel. The noise kernel is the vacuum expectation value of the (operator-valued) stress-energy bi-tensor, which describes the fluctuations of quantum matter fields in curved spacetimes. In the first part, we describe the fundamentals of this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the stress-energy tensor to its correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods, which are convenient for computations. It also brings out the open-systems concepts and the statistical and stochastic contents of the theory, such as dissipation, fluctuations, noise, and decoherence. We then focus on the properties of the stress-energy bi-tensor. We obtain a general expression for the noise kernel of a quantum field defined at two distinct points in an arbitrary curved spacetime as products of covariant derivatives of the quantum field's Green function. In the second part, we describe three applications of stochastic gravity theory. First, we consider metric perturbations in a Minkowski spacetime. We offer an analytical solution of the Einstein-Langevin equation and compute the two-point correlation functions for the linearized Einstein tensor and for the metric perturbations. Second, we discuss structure formation from the stochastic gravity viewpoint, which can go beyond the standard treatment by incorporating the full quantum effect of the inflaton fluctuations. Third, we discuss the backreaction ...

  5. Stochastic gravity: a primer with applications

    Hu, B L [Department of Physics, University of Maryland, College Park, MD 20742-4111 (United States); Verdaguer, E [Departament de Fisica Fonamental and CER en Astrofisica Fisica de Particules i Cosmologia, Universitat de Barcelona, Av. Diagonal 647, 08028 Barcelona (Spain)

    2003-03-21

    Stochastic semiclassical gravity of the 1990s is a theory naturally evolved from semiclassical gravity of the 1970s and 1980s. It improves on the semiclassical Einstein equation, whose source is given by the expectation value of the stress-energy tensor of quantum matter fields in curved spacetime, by incorporating an additional source due to their fluctuations. In stochastic semiclassical gravity the main object of interest is the noise kernel, the vacuum expectation value of the (operator-valued) stress-energy bi-tensor, and the centrepiece is the (semiclassical) Einstein-Langevin equation. We describe this new theory via two approaches: the axiomatic and the functional. The axiomatic approach is useful to see the structure of the theory from the framework of semiclassical gravity, showing the link from the mean value of the energy-momentum tensor to its correlation functions. The functional approach uses the Feynman-Vernon influence functional and the Schwinger-Keldysh closed-time-path effective action methods, which are convenient for computations. It also brings out the open-system concepts and the statistical and stochastic contents of the theory, such as dissipation, fluctuations, noise and decoherence. We then describe the applications of stochastic gravity to the backreaction problems in cosmology and black-hole physics. In the first problem, we study the backreaction of conformally coupled quantum fields in a weakly inhomogeneous cosmology. In the second problem, we study the backreaction of a thermal field in the gravitational background of a quasi-static black hole (enclosed in a box) and its fluctuations. These examples serve to illustrate closely the ideas and techniques presented in the first part. This topical review is intended as a first introduction providing readers with some basic ideas and working knowledge. Thus, we place more emphasis here on pedagogy than completeness. (Further discussions of ideas, issues and ongoing research topics can be found ...

  6. BRS invariant stochastic quantization of Einstein gravity

    Nakazawa, Naohito.

    1989-11-01

    We study stochastic quantization of gravity in terms of a BRS-invariant canonical operator formalism. By introducing artificial canonical momentum variables for the original field variables, a canonical formulation of stochastic quantization is proposed, in the sense that the Fokker-Planck Hamiltonian is the generator of the fictitious-time translation. Then we show that there exists a nilpotent BRS symmetry in an enlarged phase space of the first-class constrained systems. The phase space is spanned by the dynamical variables, their canonical conjugate momentum variables, and the Faddeev-Popov ghost and anti-ghost. We apply the general BRS-invariant formulation to stochastic quantization of gravity, which is described as a second-class constrained system in terms of a pair of Langevin equations coupled with white noises. It is shown that the stochastic action of gravity explicitly includes the DeWitt-type superspace metric, which leads to a geometrical interpretation of quantum gravity analogous to nonlinear σ-models. (author)

  7. Stochastic quantization of Einstein gravity

    Rumpf, H.

    1986-01-01

    We determine a one-parameter family of covariant Langevin equations for the metric tensor of general relativity corresponding to DeWitt's one-parameter family of supermetrics. The stochastic source term in these equations can be expressed in terms of a Gaussian white noise upon the introduction of a stochastic tetrad field. The only physically acceptable resolution of a mathematical ambiguity in the ansatz for the source term is the adoption of Ito's calculus. By taking the formal equilibrium limit of the stochastic metric, a one-parameter family of covariant path-integral measures for general relativity is obtained. There is a unique parameter value, distinguished by any one of the following three properties: (i) the metric is harmonic with respect to the supermetric, (ii) the path-integral measure is that of DeWitt, (iii) the supermetric governs the linearized Einstein dynamics. Moreover, the Feynman propagator corresponding to this parameter is causal. Finally, we show that a consistent stochastic perturbation theory gives rise to a new type of diagram containing "stochastic vertices".
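
    The one-parameter family of supermetrics in question is usually written in the following ultralocal form (a hedged transcription; index conventions and parameter normalization vary by author):

    ```latex
    % DeWitt one-parameter family of supermetrics on the space of metrics,
    % and the covariant Langevin equation it defines (schematically)
    G^{\mu\nu\,\alpha\beta}_{(\lambda)}
      = \tfrac{1}{2}\sqrt{g}\,\bigl(g^{\mu\alpha}g^{\nu\beta} + g^{\mu\beta}g^{\nu\alpha}
        + \lambda\, g^{\mu\nu}g^{\alpha\beta}\bigr),
    \qquad
    \frac{\partial g_{\mu\nu}}{\partial\tau}
      = -\,G_{(\lambda)\,\mu\nu\,\alpha\beta}\,\frac{\delta S}{\delta g_{\alpha\beta}}
      + \eta_{\mu\nu}.
    ```

    The noise eta is realized through a stochastic tetrad with Ito calculus, as the abstract describes, and the distinguished value of lambda is the one singled out by properties (i)-(iii).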

  8. Stochastic inflation and nonlinear gravity

    Salopek, D.S.; Bond, J.R.

    1991-01-01

    We show how nonlinear effects of the metric and scalar fields may be included in stochastic inflation. Our formalism can be applied to non-Gaussian fluctuation models for galaxy formation. Fluctuations with wavelengths larger than the horizon length are governed by a network of Langevin equations for the physical fields. Stochastic noise terms arise from quantum fluctuations that are assumed to become classical at horizon crossing and that then contribute to the background. Using Hamilton-Jacobi methods, we solve the Arnowitt-Deser-Misner constraint equations, which allows us to separate the growing modes from the decaying ones in the drift phase following each stochastic impulse. We argue that the most reasonable choice of time hypersurfaces for the Langevin system during inflation is T = ln(Ha), where H and a are the local values of the Hubble parameter and the scale factor, since T is the natural time for evolving the short-wavelength scalar field fluctuations in an inhomogeneous background.
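
    In the simplest single-field slow-roll limit the Langevin system reduces to a standard form (textbook stochastic inflation, not the full nonlinear network of the paper), written in the time variable T = ln(Ha) advocated here:

    ```latex
    % Coarse-grained inflaton in e-fold time T = ln(H a); the noise amplitude
    % H/2pi comes from fluctuations that classicalize at horizon crossing
    \frac{d\phi}{dT} = -\frac{V'(\phi)}{3H^{2}(\phi)} + \frac{H(\phi)}{2\pi}\,\xi(T),
    \qquad
    \langle \xi(T)\,\xi(T')\rangle = \delta(T-T').
    ```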

  9. Stochastic approach to microphysics

    Aron, J.C.

    1987-01-01

    The presently widespread idea of "vacuum population", together with the quantum concept of vacuum fluctuations, leads one to assume a random level below that of matter. This stochastic approach starts with a reminder of the author's previous work, first on the relation of diffusion laws to the foundations of microphysics, and then on the hadron spectrum. Following the latter, a random quark model is advanced; it gives quark pairs properties similar to those of a harmonic oscillator or an elastic string, imagined as an explanation of their asymptotic freedom and their confinement. The stochastic study of such interactions as electron-nucleon scattering, jets in e⁺e⁻ collisions, or pp → π⁰ + X gives form factors closely consistent with experiment. The conclusion is an epistemological comment (complementarity between the stochastic and quantum domains, the E.P.R. paradox, etc.).

  10. On the Langevin equation for stochastic quantization of gravity

    Nakazawa, Naohito.

    1989-10-01

    We study the Langevin equation for stochastic quantization of gravity. By introducing two independent variables with a second-class constraint for the gravitational field, we formulate a pair of Langevin equations for gravity coupled to white noises. After eliminating the multiplier field for the second-class constraint, we show that the equations lead to stochastic quantization of gravity with a unique superspace metric. (author)

  11. Stochastic Geometry and Quantum Gravity: Some Rigorous Results

    Zessin, H.

    The aim of these lectures is a short introduction to some recent developments in stochastic geometry which have one of their origins in simplicial gravity theory (see Regge, Nuovo Cimento 19:558-571, 1961). The goal is to define and rigorously construct point processes on spaces of Euclidean simplices in such a way that the configurations of these simplices are simplicial complexes. The main interest is then concentrated on their curvature properties. We illustrate certain basic ideas from a mathematical point of view. An excellent presentation of this area can be found in Schneider and Weil (Stochastic and Integral Geometry, Springer, Berlin, 2008; German edition: Stochastische Geometrie, Teubner, 2000). In Ambjørn et al. (Quantum Geometry, Cambridge University Press, Cambridge, 1997) you find a beautiful account from the physical point of view. More recent developments in this direction can be found in Ambjørn et al. ("Quantum gravity as sum over spacetimes", Lect. Notes Phys. 807, Springer, Heidelberg, 2010). After an informal axiomatic introduction into the conceptual foundations of Regge's approach, the first lecture recalls the concepts and notations used. It presents the fundamental zero-infinity law of stochastic geometry and the construction of cluster processes based on it. The second lecture presents the main mathematical object, i.e. Poisson-Delaunay surfaces possessing an intrinsic random metric structure. The third and fourth lectures discuss their ergodic behaviour and present the two-dimensional Regge model of pure simplicial quantum gravity. We end with the formulation of basic open problems. Proofs are given in detail only in a few cases; in general, the main ideas are developed. Sufficiently complete references are given.

  12. Stochastic quantum gravity: the (2+1)-dimensional case

    Hosoya, Akio

    1991-01-01

    First, the amazing coincidences between quantum field theory in curved space-time and quantum gravity, when they exhibit stochasticity, are pointed out. To explore their origin, (2+1)-dimensional quantum gravity is considered as a toy model. It is shown that the torus universe in (2+1)-dimensional quantum gravity is quantum-chaotic in a rigorous sense. (author). 15 refs

  13. Stochastic quantization of gravity and string fields

    Rumpf, H.

    1986-01-01

    The stochastic quantization method of Parisi and Wu is generalized so as to make it applicable to Einstein's theory of gravitation. The generalization is based on the existence of a preferred metric in field configuration space, involves Ito's calculus, and introduces a complex stochastic process adapted to Lorentzian spacetime. It formally implies the path-integral measure of DeWitt, a causal Feynman propagator, and a consistent stochastic perturbation theory. The linearized version of the theory is also obtained from the stochastic quantization of the free string field theory of Siegel and Zwiebach. (Author)

  14. Stochastic approach for radionuclide quantification

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard, which quantify the activity of nuclear materials by determining a calibration coefficient, are useless on non-reproducible, complex and singular nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and involve a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition: density, material, screen, geometric shape, matrix composition, and matrix and source distribution. Some of these depend strongly on knowledge of the package data and on operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment or knowledge of the internal package configuration. The method combines a global stochastic approach, which uses, among others, surrogate models to simulate the gamma-attenuation behaviour; a Bayesian approach, which considers conditional probability densities of the problem inputs; and Markov chain Monte Carlo (MCMC) algorithms, which solve the inverse problem, given the gamma-ray emission spectrum of the radionuclides and the outside dimensions of the objects of interest. The methodology is being tested to quantify actinide activity in different kinds of matrices, compositions, and source configurations, standard in terms of actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
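
    As a hedged illustration of the kind of Bayesian/MCMC inversion described (not the CEA code; the surrogate forward model, priors and numbers below are hypothetical), a minimal random-walk Metropolis sampler inferring a source activity A and an effective attenuation coefficient mu from one measured count rate:

    ```python
    import numpy as np

    # Hypothetical surrogate forward model: count rate for activity A behind
    # an effective thickness x with attenuation coefficient mu.
    def forward(A, mu, x=2.0, geom=1e-3):
        return A * geom * np.exp(-mu * x)

    rng = np.random.default_rng(1)
    y_obs, sigma = 12.4, 0.5          # measured rate and uncertainty (made up)

    def log_post(A, mu):
        if A <= 0.0 or not 0.05 < mu < 1.0:   # flat priors on physical ranges
            return -np.inf
        return -0.5 * ((y_obs - forward(A, mu)) / sigma) ** 2

    theta = np.array([1e4, 0.3])      # initial guess for (A, mu)
    lp = log_post(*theta)
    chain = []
    for _ in range(50_000):
        prop = theta + rng.normal(0.0, [300.0, 0.02])
        lp_prop = log_post(*prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    A_post = np.array(chain)[10_000:, 0]
    print(f"A = {A_post.mean():.0f} +/- {A_post.std():.0f}")
    ```

    With a single measurement, A and mu are strongly degenerate; the actual methodology constrains such degeneracies with informative priors, several gamma lines, and the object's outside dimensions.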

  15. Stochastic Background of Relic Scalar Gravitational Waves tuned by Extended Gravity

    De Laurentis, Mariafelicia; Capozziello, Salvatore

    2009-01-01

    A stochastic background of relic gravitational waves is achieved by the so-called adiabatically amplified zero-point fluctuations process derived from early inflation. It provides a distinctive spectrum of relic gravitational waves. In the framework of scalar-tensor gravity, we discuss the scalar modes of gravitational waves and the primordial production of this scalar component, which is generated alongside the tensorial one. We then analyze seven different viable f(R) gravities with respect to Solar System tests and the stochastic gravitational-wave background. It is demonstrated that the seven viable f(R) gravities under consideration not only satisfy the local tests but also pass the PPN and stochastic gravitational-wave bounds for large classes of parameters.

  16. Stochastic approach to equilibrium and nonequilibrium thermodynamics

    Tomé, Tânia; de Oliveira, Mário J

    2015-04-01

    We develop the stochastic approach to thermodynamics based on stochastic dynamics, which can be discrete (master equation) or continuous (Fokker-Planck equation), and on two assumptions concerning entropy. The first is the definition of entropy itself, and the second is the definition of the entropy production rate, which is non-negative and vanishes in thermodynamic equilibrium. Based on these assumptions, we study interacting systems with many degrees of freedom in equilibrium or out of thermodynamic equilibrium and show how the macroscopic laws are derived from the stochastic dynamics. These studies include quasiequilibrium processes; the convexity of the equilibrium surface; the monotonic time behavior of thermodynamic potentials, including entropy; the bilinear form of the entropy production rate; the Onsager coefficients and reciprocal relations; and the nonequilibrium steady states of chemical reactions.
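
    At the master-equation level the second assumption has a standard explicit form (Schnakenberg's expression), which may help fix ideas:

    ```latex
    % Entropy production rate of a Markov jump process with transition rates
    % W_{ij} (from state j to i) and occupation probabilities P_i
    \Pi = \frac{1}{2}\sum_{i,j}\bigl(W_{ij}P_j - W_{ji}P_i\bigr)
          \ln\frac{W_{ij}P_j}{W_{ji}P_i} \;\ge\; 0,
    ```

    which vanishes exactly when detailed balance, W_{ij}P_j = W_{ji}P_i, holds, i.e., in thermodynamic equilibrium.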

  17. Analyses of the stratospheric dynamics simulated by a GCM with a stochastic nonorographic gravity wave parameterization

    Serva, Federico; Cagnazzo, Chiara; Riccio, Angelo

    2016-04-01

    We compare two versions of the model: the default, and a new stochastic version in which the value of the perturbation field at launching level is not constant and uniform, but extracted at each time step and grid point from a given PDF. With this approach we try to add further variability to the effects given by the deterministic NOGW parameterization: the impact on the simulated climate will be assessed focusing on the Quasi-Biennial Oscillation of the equatorial stratosphere (known to be driven also by gravity waves) and on the variability of the mid-to-high-latitude atmosphere. The different characteristics of the circulation will be compared with recent reanalysis products in order to determine the advantages of the stochastic approach over the traditional deterministic scheme.

  18. A Perturbation Approach to Translational Gravity

    Julve, J.; Tiemblo, A.

    2013-05-01

    Within a gauge formulation of 3+1 gravity relying on a nonlinear realization of the group of isometries of space-time, a natural expansion of the metric tensor arises and a simple choice of the gravity dynamical variables is possible. We show that the expansion parameter can be identified with the gravitational constant and that the first-order term depends only on a diagonal matrix in the ensuing perturbation approach. The explicit first-order solution is calculated in the static isotropic case, and its general structure is worked out in the harmonic gauge.

  19. A stochastic approach to anelastic creep

    Venkataraman, G.

    1976-01-01

    Anelastic creep, or the time-dependent yielding of a material subjected to external stresses, has been found to be of great importance in technology in recent years, particularly in engineering structures, including nuclear reactors, whose structural members may be under stress. The physics underlying this phenomenon is dealt with in detail. The basics of time-dependent elasticity, the constitutive relation, network models, the constitutive equation in the frequency domain and its measurement, and the stochastic approach to creep are discussed. (K.B.)

  20. A gauge-theoretic approach to gravity

    Krasnov, Kirill

    2012-08-08

    Einstein's general relativity (GR) is a dynamical theory of the space-time metric. We describe an approach in which GR becomes an SU(2) gauge theory. We start at the linearized level and show how a gauge-theoretic Lagrangian for non-interacting massless spin-two particles (gravitons) takes a much simpler and more compact form than in the standard metric description. Moreover, in contrast to the GR situation, the gauge-theory Lagrangian is convex. We then proceed with a formulation of the full nonlinear theory. The equivalence to the metric-based GR holds only at the level of solutions of the field equations, that is, on-shell. The gauge-theoretic approach also makes it clear that GR is not the only interacting theory of massless spin-two particles, in spite of the GR uniqueness theorems available in the metric description. Thus, there is an infinite-parameter class of gravity theories all describing just two propagating polarizations of the graviton. We describe how matter can be coupled to gravity in this formulation and, in particular, how both gravity and Yang-Mills theory arise as sectors of a general diffeomorphism-invariant gauge theory. We finish by outlining a possible scenario of the ultraviolet completion of quantum gravity within this approach.

  21. Stochastic approaches to inflation model building

    Ramirez, Erandy; Liddle, Andrew R.

    2005-01-01

    While inflation gives an appealing explanation of observed cosmological data, there is a wide range of different inflation models, providing differing predictions for the initial perturbations. Typically, models are motivated either by fundamental physics considerations or by simplicity. An alternative is to generate large numbers of models via a random generation process, such as the flow-equations approach. The flow-equations approach is known to predict a definite structure in the observational predictions. In this paper, we first demonstrate a more efficient implementation of the flow equations exploiting an analytic solution found by Liddle (2003). We then consider alternative stochastic methods of generating large numbers of inflation models, with the aim of testing whether the structures generated by the flow equations are robust. We find that while typically there remains some concentration of points in the observable plane under the different methods, there is significant variation in the predictions amongst the methods considered.
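
    A cartoon of the random-generation idea (ours, not the flow-equation implementation of the paper): sample slow-roll parameters from an assumed measure and map them to observables with the standard first-order slow-roll formulas n_s = 1 - 6*epsilon + 2*eta and r = 16*epsilon.

    ```python
    import numpy as np

    # Hypothetical stochastic generator of slow-roll inflation models:
    # draw (epsilon, eta) from an assumed measure, map to (n_s, r).
    rng = np.random.default_rng(2)
    n_models = 100_000

    eps = 10.0 ** rng.uniform(-6.0, np.log10(0.5), n_models)  # log-flat epsilon
    eta = rng.uniform(-0.5, 0.5, n_models)

    n_s = 1.0 - 6.0 * eps + 2.0 * eta     # first-order slow-roll spectral index
    r = 16.0 * eps                        # first-order tensor-to-scalar ratio

    box = (n_s > 0.95) & (n_s < 0.98) & (r < 0.06)   # rough observational box
    print(f"fraction of models inside the box: {box.mean():.3f}")
    ```

    Comparing where such clouds of points concentrate under different sampling measures is exactly the robustness question the paper poses for the flow-equation structures.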

  22. An Approach to Stochastic Peridynamic Theory

    Demmie, Paul N.

    2018-04-01

    In many material systems, man-made or natural, we have an incomplete knowledge of geometric or material properties, which leads to uncertainty in predicting their performance under dynamic loading. Given the uncertainty and a high degree of spatial variability in properties of materials subjected to impact, a stochastic theory of continuum mechanics would be useful for modeling the dynamic response of such systems. Peridynamic theory is such a theory. It is formulated as an integro-differential equation that does not employ spatial derivatives, and provides for a consistent formulation of both deformation and failure of materials. We discuss an approach to stochastic peridynamic theory and illustrate the formulation with examples of impact loading of geological materials with uncorrelated or correlated material properties. We examine wave propagation and damage to the material. The most salient feature is the absence of spallation, referred to as disorder toughness, which generalizes similar results from earlier quasi-static damage mechanics. Acknowledgements: This research was made possible by the support from DTRA grant HDTRA1-08-10-BRCWM. I thank Dr. Martin Ostoja-Starzewski for introducing me to the mechanics of random materials and collaborating with me throughout and after this DTRA project.
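
    For readers unfamiliar with peridynamics, the deterministic equation of motion that the stochastic theory builds on is the standard bond-based form (our transcription):

    ```latex
    % Bond-based peridynamic equation of motion: the internal force at x is an
    % integral over its horizon H_x; no spatial derivatives appear
    \rho(x)\,\ddot{u}(x,t)
      = \int_{H_x} f\bigl(u(x',t)-u(x,t),\;x'-x\bigr)\,dV_{x'} + b(x,t),
    ```

    where f is the pairwise bond-force density and b a body force; in the stochastic setting the material parameters entering f become uncorrelated or correlated random fields, as in the impact examples above.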

  23. A stochastic approach to chemical evolution

    Copi, C.J.

    1997-01-01

    Observations of elemental abundances in the Galaxy have repeatedly shown an intrinsic scatter as a function of time and metallicity. The standard approach to chemical evolution does not attempt to address this scatter in abundances, since only the mean evolution is followed. In this work, the scatter is addressed via a stochastic approach to solving chemical evolution models. Three simple chemical evolution scenarios are studied using this stochastic approach: a closed box model, an infall model, and an outflow model. These models are solved for the solar neighborhood in a Monte Carlo fashion. The evolutionary history of one particular region is determined randomly, based on the star formation rate and the initial mass function. Following the evolution in an ensemble of such regions leads to the predicted spread in abundances expected based solely on the different evolutionary histories of otherwise identical regions. In this work, 13 isotopes are followed, including the light elements, the CNO elements, a few α-elements, and iron. It is found that the predicted spread in abundances for a 10⁵ M☉ region is in good agreement with observations for the α-elements. For CN, the agreement is not as good, perhaps indicating the need for more physics input for low-mass stellar evolution. Similarly for the light elements, the predicted scatter is quite small, which is in contradiction with observations of ³He in HII regions. The models are tuned for the solar neighborhood, so good agreement with HII regions is not expected. This has important implications for low-mass stellar evolution and for using chemical evolution to determine the primordial light-element abundances in order to test big bang nucleosynthesis.
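
    A minimal Monte Carlo caricature of the procedure (all rates and yields below are made-up placeholders, not the paper's inputs): otherwise identical regions differ only in their random number of enrichment events, which already produces a relative abundance spread of order 1/sqrt(N).

    ```python
    import numpy as np

    # Toy stochastic chemical evolution: identical gas regions, Poisson
    # supernova counts, hence scattered final metal abundances.
    rng = np.random.default_rng(3)
    n_regions = 5_000
    gas_mass = 1.0e5        # region gas mass in solar masses (placeholder)
    mean_sn = 200.0         # expected supernovae per region (placeholder)
    yield_per_sn = 2.0      # solar masses of metals per supernova (placeholder)

    n_sn = rng.poisson(mean_sn, n_regions)
    Z = n_sn * yield_per_sn / gas_mass      # crude metal mass fraction

    print(f"mean Z = {Z.mean():.2e}, relative scatter = {Z.std() / Z.mean():.1%}")
    ```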

  24. Symmetries of stochastic differential equations: A geometric approach

    De Vecchi, Francesco C., E-mail: francesco.devecchi@unimi.it; Ugolini, Stefania, E-mail: stefania.ugolini@unimi.it [Dipartimento di Matematica, Università degli Studi di Milano, via Saldini 50, Milano (Italy); Morando, Paola, E-mail: paola.morando@unimi.it [DISAA, Università degli Studi di Milano, via Celoria 2, Milano (Italy)

    2016-06-15

    A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two-dimensional symmetric ordinary differential equation and to the case of two-dimensional Brownian motion.

  25. The Impact of Competitiveness on Trade Efficiency: The Asian Experience by Using the Stochastic Frontier Gravity Model

    Memduh Alper Demir

    2017-12-01

    The purpose of this study is to examine the bilateral machinery and transport equipment trade efficiency of fourteen selected Asian countries by applying a stochastic frontier gravity model. These selected countries have the top machinery and transport equipment trade (both export and import) volumes in Asia. The model we use includes variables such as income, market size of trading partners, distance, common culture, common border, common language and global economic crisis, similar to earlier studies using stochastic frontier gravity models. Our work, however, additionally includes an extra variable called the normalized revealed comparative advantage (NRCA) index. The NRCA index is comparable across commodity, country and time. Thus, the NRCA index is calculated and then included in our stochastic frontier gravity model to see the impact of competitiveness (here measured by the NRCA index) on the efficiency of trade.
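
    The stochastic frontier gravity specification referred to has the standard composed-error form (variable list abbreviated; the NRCA regressor is the study's addition):

    ```latex
    % Stochastic frontier gravity model: two-sided noise v_{ij} and
    % one-sided inefficiency u_{ij} >= 0
    \ln T_{ij} = \beta_0 + \beta_1\ln Y_i + \beta_2\ln Y_j + \beta_3\ln D_{ij}
      + \cdots + \beta_k\,\mathrm{NRCA}_{ij} + v_{ij} - u_{ij},
    ```

    with trade efficiency TE_ij = exp(-u_ij) in (0, 1]: observed trade relative to the frictionless frontier implied by the deterministic part of the model.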

  26. Robust approach to f(R) gravity

    Jaime, Luisa G.; Patino, Leonardo; Salgado, Marcelo

    2011-01-01

    We consider metric f(R) theories of gravity without mapping them to their scalar-tensor counterparts, using the Ricci scalar itself as an "extra" degree of freedom. This approach avoids the introduction of a scalar-field potential that might be ill defined (not single valued). In order to show explicitly the usefulness of this method, we focus on static and spherically symmetric spacetimes and deal with the recent controversy about the existence of extended relativistic objects in a certain class of f(R) models.

  27. Ostrogradski Hamiltonian approach for geodetic brane gravity

    Cordero, Ruben; Molgado, Alberto; Rojas, Efrain

    2010-01-01

    We present an alternative Hamiltonian description of a branelike universe immersed in a flat background spacetime. This model is named geodetic brane gravity. We set up the Regge-Teitelboim model to describe our Universe, where such a field theory is originally formulated as a second-order derivative theory. We use an Ostrogradski Hamiltonian formalism to prepare the system for quantization. This approach comprises the handling of both first- and second-class constraints, and the counting of degrees of freedom follows accordingly.

  28. Measuring Inflation through Stochastic Approach to Index Numbers for Pakistan

    Zahid Asghar

    2010-09-01

    This study attempts to estimate the rate of inflation in Pakistan through the stochastic approach to index numbers, which provides not only a point estimate but also a confidence interval for the rate of inflation. There are two types of approaches to index number theory, namely the functional economic approaches and the stochastic approach. The attraction of the stochastic approach is that it estimates the rate of inflation in a setting in which uncertainty and statistical ideas play a major role in screening index numbers. We have used the extended stochastic approach to index numbers for measuring inflation, allowing for systematic changes in relative prices. We use CPI data covering the period July 2001 to March 2008 for Pakistan.
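
    In its simplest unweighted variant, the stochastic approach treats each log price relative as a noisy measurement of the common inflation rate, which is what yields a standard error alongside the point estimate:

    ```latex
    % Stochastic approach to index numbers (simplest variant)
    \ln\frac{p_{it}}{p_{i,t-1}} = \pi_t + \varepsilon_{it},
    \qquad
    \hat{\pi}_t = \frac{1}{n}\sum_{i=1}^{n}\ln\frac{p_{it}}{p_{i,t-1}},
    \qquad
    \mathrm{se}(\hat{\pi}_t) = \frac{s_t}{\sqrt{n}}.
    ```

    The extended version used in the study adds a commodity-specific drift term, allowing for the systematic changes in relative prices mentioned in the abstract.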

  29. Stochastic Thermodynamics: A Dynamical Systems Approach

    Tanmay Rajpurohit

    2017-12-01

    In this paper, we develop an energy-based, large-scale dynamical system model driven by Markov diffusion processes to present a unified framework for statistical thermodynamics predicated on a stochastic dynamical systems formalism. Specifically, using a stochastic state space formulation, we develop a nonlinear stochastic compartmental dynamical system model characterized by energy conservation laws that is consistent with statistical thermodynamic principles. In particular, we show that the difference between the average supplied system energy and the average stored system energy for our stochastic thermodynamic model is a martingale with respect to the system filtration. In addition, we show that the average stored system energy is equal to the mean energy that can be extracted from the system and the mean energy that can be delivered to the system in order to transfer it from a zero energy level to an arbitrary nonempty subset in the state space over a finite stopping time.

  30. Space-Wise approach for airborne gravity data modelling

    Sampietro, D.; Capponi, M.; Mansi, A. H.; Gatti, A.; Marchetti, P.; Sansò, F.

    2017-05-01

    Regional gravity field modelling by means of the remove-compute-restore procedure is nowadays widely applied in different contexts: it is the most used technique for regional gravimetric geoid determination, and it is also used in exploration geophysics to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.), which are useful to understand and map geological structures in a specific region. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are usually adopted. However, due to the relatively high acquisition velocity, the presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are usually contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, software to filter and grid raw airborne observations is presented: the proposed solution consists of a combination of an along-track Wiener filter and a classical least-squares collocation technique. Basically, the proposed procedure is an adaptation to airborne gravimetry of the space-wise approach, developed by Politecnico di Milano to process data coming from the ESA satellite mission GOCE. Among the main differences with respect to the satellite application of this approach is the fact that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. The presented solution is suited for airborne data analysis, allowing gravity observations to be quickly and easily filtered and gridded. Some innovative theoretical aspects, focusing in particular on covariance modelling, are also presented.
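
    A minimal frequency-domain sketch of the along-track filtering step (a generic textbook Wiener filter under assumed signal and noise PSDs, not the Politecnico di Milano implementation):

    ```python
    import numpy as np

    # Generic along-track Wiener filter: attenuate frequencies where an
    # assumed noise PSD dominates an assumed signal PSD.
    def wiener_filter(track, dx, signal_psd, noise_psd):
        f = np.fft.rfftfreq(track.size, d=dx)
        S, N = signal_psd(f), noise_psd(f)
        gain = S / (S + N)                    # Wiener gain in [0, 1]
        return np.fft.irfft(gain * np.fft.rfft(track), n=track.size)

    rng = np.random.default_rng(4)
    dx = 100.0                                # along-track sampling step [m]
    x = np.arange(4096) * dx
    signal = 10.0 * np.sin(2.0 * np.pi * x / 8.0e4)       # 80 km wave [mGal]
    noisy = signal + 5.0 * rng.standard_normal(x.size)    # white noise [mGal]

    filtered = wiener_filter(
        noisy, dx,
        signal_psd=lambda f: 1.0 / (1.0 + (f * 2.0e4) ** 4),  # red signal model
        noise_psd=lambda f: np.full_like(f, 1.0e-2),          # white noise model
    )
    print(f"rms error: {np.std(noisy - signal):.2f} -> {np.std(filtered - signal):.2f}")
    ```

    In the space-wise chain the filtered tracks would then feed a least-squares collocation gridding step, with the PSDs themselves estimated from the data rather than assumed, as the abstract emphasizes.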

  31. Approaches to quantum gravity: loop quantum gravity, spinfoams and topos approach

    Flori, Cecilia

    2010-01-01

    One of the main challenges in theoretical physics over the last five decades has been to reconcile quantum mechanics with general relativity into a theory of quantum gravity. However, such a theory has proved hard to attain due to (i) conceptual difficulties present in both of the component theories (General Relativity (GR) and Quantum Theory); (ii) lack of experimental evidence, since the regimes at which quantum gravity is expected to be applicable are far beyond the range of conceivable experiments. Despite these difficulties, various approaches to a theory of quantum gravity have been developed. In this thesis we focus on two such approaches: Loop Quantum Gravity and the topos-theoretic approach. The choice fell on these approaches because, although they both reject the Copenhagen interpretation of quantum theory, their underpinning philosophical approaches to formulating a quantum theory of gravity are radically different. In particular, LQG is a rather conservative scheme, inheriting all the formalism of both GR and Quantum Theory, as it tries to push to its logical extreme consequences the possibility of combining the two. On the other hand, the topos approach involves the idea that a radical change of perspective is needed in order to solve the problem of quantum gravity, especially in regard to the fundamental concepts of 'space' and 'time'. Given the partial successes of both approaches, the hope is that it might be possible to find a common ground in which each approach can enrich the other. This thesis is divided in two parts: in the first part we analyse LQG, paying particular attention to the semiclassical properties of the volume operator. Such an operator plays a pivotal role in defining the dynamics of the theory, thus testing its semiclassical limit is of utmost importance. We then proceed to analyse spin foam models (SFM), which are an attempt at a covariant or path-integral formulation of canonical Loop Quantum Gravity (LQG). In ...

  32. Structural factoring approach for analyzing stochastic networks

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed, based on the concept of conditional factoring, in which a directed stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that had previously been analyzed only by approximation techniques.
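
    A brute-force baseline clarifies what the factoring algorithm computes: for a small acyclic network with discrete arc-length distributions, the exact shortest-path-length distribution follows from enumerating arc realizations (the paper's contribution is precisely avoiding this exponential enumeration). The network below is made up.

    ```python
    import itertools
    from collections import defaultdict

    # Arcs of a tiny DAG: (tail, head) -> list of (length, probability)
    arcs = {
        ("s", "a"): [(1, 0.5), (3, 0.5)],
        ("s", "b"): [(2, 1.0)],
        ("a", "t"): [(1, 0.7), (4, 0.3)],
        ("b", "t"): [(2, 0.6), (5, 0.4)],
    }

    def shortest_path(lengths):
        # Fixed two-route topology: s->a->t and s->b->t
        return min(lengths[("s", "a")] + lengths[("a", "t")],
                   lengths[("s", "b")] + lengths[("b", "t")])

    dist = defaultdict(float)
    keys = list(arcs)
    for combo in itertools.product(*(arcs[k] for k in keys)):
        lengths = {k: lp[0] for k, lp in zip(keys, combo)}
        prob = 1.0
        for lp in combo:
            prob *= lp[1]
        dist[shortest_path(lengths)] += prob

    for length in sorted(dist):
        print(f"P(shortest path length = {length}) = {dist[length]:.3f}")
    ```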

  33. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    Calcagni, Gianluca; Ronco, Michele

    2017-01-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  34. Markovian approach: From Ising model to stochastic radiative transfer

    Kassianov, E.; Veron, D.

    2009-01-01

    The origin of the Markovian approach can be traced back to 1906; however, it gained explicit recognition only in the last few decades. This overview outlines some important applications of the Markovian approach, which illustrate its immense prestige, respect, and success. These applications include examples in statistical physics, astronomy, mathematics, computational science, and the stochastic transport problem. In particular, the overview highlights important contributions made by Pomraning and Titov to the neutron and radiation transport theory in a stochastic medium with homogeneous statistics. Using simple probabilistic assumptions (the Markovian approximation), they introduced a simplified, but quite realistic, representation of neutron/radiation transfer through a two-component discrete stochastic mixture. New concepts and methodologies introduced by these two distinguished scientists allow us to generalize the Markovian treatment to stochastic media with inhomogeneous statistics and demonstrate its improved predictive performance for down-welling shortwave fluxes. (authors)
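
    The binary Markovian mixture underlying this line of work is easy to state as a Monte Carlo sketch (homogeneous statistics; cross sections and chord lengths below are placeholders): material segments along a ray alternate between two components with exponentially distributed chords, and transmission is averaged over realizations.

    ```python
    import numpy as np

    # Monte Carlo transmission through a binary Markovian mixture slab.
    rng = np.random.default_rng(5)
    lam = (1.0, 3.0)       # mean chord lengths of the two components
    sigma = (2.0, 0.1)     # total cross sections of the two components
    L, n_rays = 10.0, 100_000

    p0 = lam[0] / (lam[0] + lam[1])   # volume fraction of component 0
    total = 0.0
    for _ in range(n_rays):
        x, tau = 0.0, 0.0
        i = 0 if rng.uniform() < p0 else 1
        while x < L:
            seg = min(rng.exponential(lam[i]), L - x)
            tau += sigma[i] * seg
            x += seg
            i = 1 - i                 # Markovian switch to the other component
        total += np.exp(-tau)

    print(f"mean transmission = {total / n_rays:.4f}")
    ```

    (Entry-chord length bias is ignored for brevity.) Closed-form models such as Levermore-Pomraning aim to reproduce exactly this ensemble average without sampling.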

  35. A stochastic programming approach to manufacturing flow control

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  36. Stochastic inflation: Quantum phase-space approach

    Habib, S.

    1992-01-01

    In this paper a quantum-mechanical phase-space picture is constructed for coarse-grained free quantum fields in an inflationary universe. The appropriate stochastic quantum Liouville equation is derived. Explicit solutions for the phase-space quantum distribution function are found for the cases of power-law and exponential expansions. The expectation values of dynamical variables with respect to these solutions are compared to the corresponding cutoff-regularized field-theoretic results (we do not restrict ourselves only to ⟨Φ²⟩). Fair agreement is found provided the coarse-graining scale is kept within certain limits. By focusing on the full phase-space distribution function rather than a reduced distribution, it is shown that the thermodynamic interpretation of the stochastic formalism faces several difficulties (e.g., there is no fluctuation-dissipation theorem). The coarse graining does not guarantee an automatic classical limit, as quantum correlations turn out to be crucial in order to get results consistent with standard quantum field theory. Therefore, the method does not by itself constitute an explanation of the quantum-to-classical transition in the early Universe. In particular, we argue that the stochastic equations do not lead to decoherence.

  37. Stochastic resonance: a mathematical approach in the small noise limit

    Herrmann, Samuel; Pavlyukevich, Ilya; Peithmann, Dierk

    2013-01-01

    Stochastic resonance is a phenomenon arising in a wide spectrum of areas in the sciences ranging from physics through neuroscience to chemistry and biology. This book presents a mathematical approach to stochastic resonance which is based on a large deviations principle (LDP) for randomly perturbed dynamical systems with a weak inhomogeneity given by an exogenous periodicity of small frequency. Resonance, the optimal tuning between period length and noise amplitude, is explained by optimizing the LDP's rate function. The authors show that not all physical measures of tuning quality are robust with respect to dimension reduction. They propose measures of tuning quality based on exponential transition rates explained by large deviations techniques and show that these measures are robust. The book sheds some light on the shortcomings and strengths of different concepts used in the theory and applications of stochastic resonance without attempting to give a comprehensive overview of the many facets of stochastic ...
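
    The optimal tuning can be phrased through the exponential transition rates that the book's large-deviations analysis makes rigorous; schematically:

    ```latex
    % Small-noise escape time from a well of depth Delta V under noise
    % intensity epsilon, and the time-scale matching condition for
    % stochastic resonance with forcing period T
    \tau(\varepsilon) \asymp e^{2\Delta V/\varepsilon},
    \qquad
    \tau(\varepsilon_{\mathrm{opt}}) \approx \tfrac{T}{2}.
    ```

    Noise tuned so that roughly one escape is expected per half-period synchronizes the inter-well hopping with the weak periodic forcing, which is the resonance that optimizing the rate function captures.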

  38. R^p exp(λR) gravity

    We study the cosmological dynamics of R^p exp(λR) gravity in the metric formalism, using the dynamical systems approach. Considering higher-dimensional FRW geometries in the case of an imperfect fluid which has two different scale factors in the normal and extra dimensions, we find the exact solutions, and study its ...

  39. The Spin-Foam Approach to Quantum Gravity

    Perez, Alejandro

    2013-01-01

    This article reviews the present status of the spin-foam approach to the quantization of gravity. Special attention is paid to the pedagogical presentation of the recently introduced new models for four-dimensional quantum gravity. The models are motivated by a suitable implementation of the path integral quantization of the Plebanski formulation of gravity on a simplicial regularization. The article also includes a self-contained treatment of 2+1 gravity. The simple nature of the latter provides the basis and a perspective for the analysis of both conceptual and technical issues that remain open in four dimensions.

  40. Noether symmetry approach in f(G,T) gravity

    Shamir, M.F.; Ahmad, Mushtaq [National University of Computer and Emerging Sciences, Lahore Campus (Pakistan)

    2017-01-15

    We explore the recently introduced modified Gauss-Bonnet gravity (Sharif and Ikram in Eur Phys J C 76:640, 2016), f(G,T) gravity, with G the Gauss-Bonnet term and T the trace of the energy-momentum tensor. The Noether symmetry approach has been used to develop some cosmologically viable f(G,T) gravity models. The Noether equations of modified gravity are reported for a flat FRW universe. Two specific models have been studied to determine the conserved quantities and exact solutions. In particular, the well-known de Sitter solution is reconstructed for a specific choice of f(G,T) gravity model. (orig.)

  41. The group manifold approach to unified gravity

    Regge, T.

    1984-01-01

    These lectures start with a synopsis of historical results in the construction of unified theories of gravity. The author maintains some mathematical rigour throughout the lectures. He gives a provisional description of supermanifolds and a set of formal rules intended to manipulate superforms or supermanifolds. Super Lie groups are discussed, as well as the dimensional reduction of gravity theories (the Kaluza-Klein theory). A formal introduction to supersymmetry is given. (Auth.)

  42. Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble

    Jankov, I.

    2017-12-01

    It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized, and ensemble members were produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer (PBL) schemes. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity, and perturbing the soil moisture at the initial time was also tested. Initial iterative testing was conducted to assess the performance of several configuration settings (e.g., a variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day time period was run and more robust statistics were gathered. SKEB and SPPT were included in additional retrospective tests to assess the impact of using ...
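
    A minimal sketch of how an SPP-style perturbation pattern can be built (a generic construction: spatially smoothed Gaussian noise evolved as an AR(1) process in time; not the HRRR implementation, whose settings differ):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Generic SPP-like 2-D pattern: spatially correlated Gaussian field
    # evolving in time as an AR(1) process with decorrelation time tau.
    rng = np.random.default_rng(6)
    nx, ny = 200, 200
    dt, tau = 60.0, 6.0 * 3600.0      # model time step, decorrelation time [s]
    alpha = np.exp(-dt / tau)         # AR(1) memory coefficient
    smooth = 10.0                     # spatial correlation scale [grid points]

    def smoothed_noise():
        field = gaussian_filter(rng.standard_normal((nx, ny)), smooth)
        return field / field.std()    # re-normalize to unit variance

    pattern = smoothed_noise()
    for _ in range(100):              # march the pattern forward in time
        pattern = alpha * pattern + np.sqrt(1.0 - alpha**2) * smoothed_noise()

    mixing_length = 50.0 * np.exp(0.3 * pattern)   # multiplicative perturbation
    print(f"perturbed range: {mixing_length.min():.1f} .. {mixing_length.max():.1f}")
    ```

    The lognormal multiplicative form keeps the perturbed parameter positive; the spatial and temporal de-correlation scales are exactly the knobs varied in the iterative testing described above.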

  8. A Constructive Sharp Approach to Functional Quantization of Stochastic Processes

    Junglen, Stefan; Luschgy, Harald

    2010-01-01

    We present a constructive approach to the functional quantization problem of stochastic processes, with an emphasis on Gaussian processes. The approach is constructive, since we reduce the infinite-dimensional functional quantization problem to a finite-dimensional quantization problem that can be solved numerically. Our approach achieves the sharp rate of the minimal quantization error and can be used to quantize the path space for Gaussian processes and also, for example, Lévy processes.
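
    The core reduction described, from an infinite-dimensional to a finite-dimensional quantization problem, can be illustrated for Brownian motion: truncate its Karhunen-Loève expansion and quantize each Gaussian coefficient with a small scalar codebook. The truncation level and the crude equal-probability codebook below are illustrative choices, not the sharp-rate construction of the paper:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1, 501)
        K = 8                                  # truncation level (illustrative)
        k = np.arange(1, K + 1)
        lam = 1.0 / ((k - 0.5) * np.pi)        # square roots of the KL eigenvalues
        phi = np.sqrt(2) * np.sin(np.outer(t, (k - 0.5) * np.pi))  # eigenfunctions

        # crude 5-point scalar codebook for N(0,1): midpoints of equal-probability cells
        levels = norm.ppf((np.arange(5) + 0.5) / 5)

        xi = rng.standard_normal(K)            # KL coefficients of one sample path
        xi_q = levels[np.argmin(np.abs(xi[:, None] - levels[None, :]), axis=1)]

        path = phi @ (lam * xi)                # truncated Brownian path
        path_q = phi @ (lam * xi_q)            # its functional quantization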

  9. Selective adsorption resonances: Quantum and stochastic approaches

    Sanz, A.S.; Miret-Artes, S.

    2007-01-01

    In this review we cover recent advances in the theory of the selective adsorption phenomenon that appears in light atom/molecule scattering off solid surfaces. Owing to the universal attractive van der Waals interaction, incoming gas particles can become trapped by the surface, giving rise to the formation of quasi-bound states or resonances. The knowledge of the position and width of these resonances provides relevant direct information about the nature of the gas-surface interaction as well as about the evaporation and desorption mechanisms. This information can be obtained by means of a plethora of theoretical methods developed in both the energy and time domains, which we analyze and discuss here in detail. In particular, special emphasis is given to close-coupling, wave-packet, and trajectory-based formalisms. Furthermore, a novel description of selective adsorption resonances from a stochastic quantum perspective within the density matrix and Langevin formalisms, when correlations and fluctuations of the surface (considered as a thermal bath) are taken into account, is also proposed and discussed.

  10. Gas contract portfolio management: a stochastic programming approach

    Haurie, A.; Smeers, Y.; Zaccour, G.

    1991-01-01

    This paper deals with a stochastic programming model which complements long-range market simulation models generating scenarios concerning the evolution of demand and prices for gas in different market segments. A gas company has to negotiate contracts with durations ranging from one to twenty years. This stochastic model is designed to assess the risk associated with committing the gas production capacity of the company to these market segments. Different approaches are presented to overcome the difficulties associated with the very large size of the resulting optimization problem.
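
    The structure described, committing capacity before demand and prices are known, is the classic two-stage stochastic linear program. A toy deterministic-equivalent sketch with invented numbers (three spot-price scenarios, one contract decision) shows the shape of the problem; the real model is of course vastly larger:

        import numpy as np
        from scipy.optimize import linprog

        p = np.array([0.3, 0.5, 0.2])        # scenario probabilities (assumed)
        spot = np.array([2.0, 3.0, 5.0])     # spot price per unit in each scenario (assumed)
        contract_price, capacity, demand = 2.5, 100.0, 80.0

        # Variables: x (contracted volume) and y_s (spot purchase in scenario s).
        # Minimize contract cost plus expected spot cost.
        c = np.concatenate(([contract_price], p * spot))
        # Demand met in every scenario: x + y_s >= demand  ->  -x - y_s <= -demand
        A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])
        b_ub = -demand * np.ones(3)
        bounds = [(0, capacity)] + [(0, None)] * 3

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        print("contracted volume:", res.x[0], "expected cost:", res.fun)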

  11. Regularization of quantum gravity in the matrix model approach

    Ueda, Haruhiko

    1991-02-01

    We study the divergence problem of the partition function in the matrix model approach to two-dimensional quantum gravity. We propose a new model V(φ) = (1/2)Tr φ^2 + (g_4/N)Tr φ^4 + (g'/N^4)(Tr φ^4)^2 and show that in the sphere case it has no divergence problem and that the critical exponent is that of pure gravity. (author)

  12. Equivalence between the semiclassical and effective approaches to gravity

    Paszko, Ricardo; Accioly, Antonio

    2010-01-01

    Semiclassical and effective theories of gravitation are quite distinct from each other as far as the approximation scheme employed is concerned. In fact, while in the semiclassical approach gravity is a classical field and the particles and/or remaining fields are quantized, in the effective approach everything is quantized, including gravity, but the Feynman amplitude is expanded in terms of the momentum exchanged between the particles and/or fields. In this paper, we show that these approaches, despite being radically different, lead to equivalent results if one of the masses under consideration is much greater than all the other energies involved.

  13. Stochastic approach and fluctuation theorem for charge transport in diodes

    Gu, Jiayin; Gaspard, Pierre

    2018-05-01

    A stochastic approach for charge transport in diodes is developed in consistency with the laws of electricity, thermodynamics, and microreversibility. In this approach, the electron and hole densities are ruled by diffusion-reaction stochastic partial differential equations and the electric field generated by the charges is determined with the Poisson equation. These equations are discretized in space for the numerical simulations of the mean density profiles, the mean electric potential, and the current-voltage characteristics. Moreover, the full counting statistics of the carrier current and the measured total current including the contribution of the displacement current are investigated. On the basis of local detailed balance, the fluctuation theorem is shown to hold for both currents.

  14. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    Ma, Xiao [ORNL; Dong, Jin [ORNL; Djouadi, Seddik M [ORNL; Nutaro, James J [ORNL; Kuruganti, Teja [ORNL

    2015-01-01

    The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
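
    The probabilistic-constraint ingredient can be shown in miniature. For a single step with a Gaussian disturbance w ~ N(0, sigma^2), the chance constraint P(T + u + w <= Tmax) >= 1 - eps tightens to the deterministic constraint T + u <= Tmax - sigma * z_{1-eps}. The numbers below are invented, and this sketch omits the paper's full cSLQC/SDP machinery:

        from scipy.stats import norm

        T, Tmax = 21.0, 24.0       # current and maximum comfortable temperature (assumed)
        sigma, eps = 0.5, 0.05     # disturbance std. dev. and violation probability (assumed)

        # P(T + u + w <= Tmax) >= 1 - eps  <=>  T + u <= Tmax - sigma * z_{1-eps}
        u_max = Tmax - T - sigma * norm.ppf(1 - eps)
        print(f"admissible control input: u <= {u_max:.3f}")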

  15. A combined stochastic programming and optimal control approach to personal finance and pensions

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  16. Modelling airborne gravity data by means of adapted Space-Wise approach

    Sampietro, Daniele; Capponi, Martina; Hamdi Mansi, Ahmed; Gatti, Andrea

    2017-04-01

    Regional gravity field modelling by means of the remove-restore procedure is nowadays widely applied to predict grids of gravity anomalies (Bouguer, free-air, isostatic, etc.) in gravimetric geoid determination as well as in exploration geophysics. Considering this last application, due to the required accuracy and resolution, airborne gravity observations are generally adopted. However, due to the relatively high acquisition velocity, the presence of atmospheric turbulence, aircraft vibration, instrumental drift, etc., airborne data are contaminated by a very high observation error. For this reason, a proper procedure to filter the raw observations in both the low and high frequencies should be applied to recover valuable information. In this work, a procedure to predict a grid or a set of filtered along-track gravity anomalies, by merging a GGM and an airborne dataset, is presented. The proposed algorithm, like the Space-Wise approach developed by Politecnico di Milano in the framework of GOCE data analysis, is based on a combination of an along-track Wiener filter and a Least Squares Collocation adjustment, and properly considers the different altitudes of the gravity observations. One of the main differences with respect to the satellite application of the Space-Wise approach is that, while in processing GOCE data the stochastic characteristics of the observation error can be considered a priori well known, in airborne gravimetry, due to the complex environment in which the observations are acquired, these characteristics are unknown and should be retrieved from the dataset itself. Some innovative theoretical aspects, focusing in particular on the covariance modelling, are presented too. In the end, the goodness of the procedure is evaluated by means of a test on real data, recovering the gravitational signal with a predicted accuracy of about 0.25 mGal.
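
    The along-track Wiener filter component can be illustrated in one dimension: given power spectral densities for signal and noise, the filter's frequency response is S/(S+N). In the sketch below the "true" signal is synthetic and its spectrum is used directly, whereas, as the abstract stresses, in practice these characteristics must be estimated from the airborne data themselves; the spacing and noise level are invented:

        import numpy as np

        rng = np.random.default_rng(2)
        n, dx = 1024, 100.0                  # samples and along-track spacing in m (assumed)

        # synthetic long-wavelength "gravity" signal plus white observation noise
        signal = np.cumsum(rng.standard_normal(n))
        signal = np.convolve(signal - signal.mean(), np.ones(50) / 50, mode="same")
        noise = 2.0 * rng.standard_normal(n)
        obs = signal + noise

        S = np.abs(np.fft.rfft(signal)) ** 2                           # signal PSD (here known)
        N = np.full_like(S, (np.abs(np.fft.rfft(noise)) ** 2).mean())  # flat noise PSD

        H = S / (S + N)                      # Wiener filter frequency response
        filtered = np.fft.irfft(H * np.fft.rfft(obs), n=n)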

  17. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    -in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important

  18. A stochastic approach to multi-gene expression dynamics

    Ochiai, T.; Nacher, J.C.; Akutsu, T.

    2005-01-01

    In recent years, tens of thousands of gene expression profiles for cells of several organisms have been monitored. Gene expression is a complex transcriptional process where mRNA molecules are translated into proteins, which control most of the cell functions. In this process, the correlation among genes is crucial to determine the specific functions of genes. Here, we propose a novel multi-dimensional stochastic approach to deal with the gene correlation phenomena. Interestingly, our stochastic framework suggests that the study of the gene correlation requires only one theoretical assumption, the Markov property, and the experimental transition probability, which characterizes the gene correlation system. Finally, a gene expression experiment is proposed for future applications of the model.
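
    The two ingredients named in the abstract, the Markov property and an experimentally determined transition probability, amount in the simplest discrete setting to counting transitions between quantized expression levels. A minimal sketch with synthetic data (the discretization into four levels is an illustrative choice):

        import numpy as np

        rng = np.random.default_rng(3)
        expr = np.cumsum(rng.standard_normal(1000))   # synthetic expression time series

        k = 4                                         # number of discrete levels (assumed)
        edges = np.quantile(expr, np.linspace(0, 1, k + 1)[1:-1])
        states = np.digitize(expr, edges)             # state index 0..k-1 at each time

        P = np.zeros((k, k))                          # P[i, j] = P(next = j | current = i)
        for a, b in zip(states[:-1], states[1:]):
            P[a, b] += 1
        P /= P.sum(axis=1, keepdims=True)
        print(np.round(P, 2))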

  19. Simple Planar Truss (Linear, Nonlinear and Stochastic Approach)

    Frydrýšek Karel

    2016-11-01

    Full Text Available This article deals with a simple planar and statically determinate pin-connected truss. It demonstrates the processes and methods of derivations and solutions according to 1st and 2nd order theories. The article applies linear and nonlinear approaches and their simplifications via a Maclaurin series. Programming connected with the stochastic Simulation-Based Reliability Method (i.e. the direct Monte Carlo approach) is used to conduct a probabilistic reliability assessment (i.e. a calculation of the probability that plastic deformation will occur in members of the truss).
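
    The direct Monte Carlo step can be sketched for a single truss member: sample a random axial force and yield stress, and count the realizations in which the member stress exceeds yield. All distributions and dimensions below are illustrative, not those of the article:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200_000                      # Monte Carlo samples

        F = rng.normal(50.0, 10.0, n)    # member axial force in kN (assumed)
        fy = rng.normal(235.0, 15.0, n)  # yield stress in MPa (assumed)
        A = 4.0e-4                       # cross-sectional area in m^2, deterministic

        stress = F * 1e3 / A / 1e6       # member stress in MPa
        p_f = np.mean(stress > fy)       # probability of plastic deformation
        print(f"estimated probability of plastic deformation: {p_f:.2e}")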

  20. The stochastic system approach for estimating dynamic treatments effect.

    Commenges, Daniel; Gégout-Petit, Anne

    2015-10-01

    The problem of assessing the effect of a treatment on a marker in observational studies raises the difficulty that attribution of the treatment may depend on the observed marker values. As an example, we focus on the analysis of the effect of HAART on CD4 counts, where attribution of the treatment may depend on the observed marker values. This problem has been treated using marginal structural models relying on the counterfactual/potential response formalism. Another approach to causality is based on dynamical models, and causal influence has been formalized in the framework of the Doob-Meyer decomposition of stochastic processes. Causal inference, however, needs assumptions that we detail in this paper, and we call this approach to causality the "stochastic system" approach. First we treat this problem in discrete time, then in continuous time. This approach allows incorporating biological knowledge naturally. When working in continuous time, the mechanistic approach involves distinguishing the model for the system from the model for the observations. Indeed, biological systems live in continuous time, and mechanisms can be expressed in the form of a system of differential equations, while observations are taken at discrete times. Inference in mechanistic models is challenging, particularly from a numerical point of view, but these models can yield much richer and more reliable results.

  1. Stochastic Turing Patterns: Analysis of Compartment-Based Approaches

    Cao, Yang; Erban, Radek

    2014-01-01

    © 2014, Society for Mathematical Biology. Turing patterns can be observed in reaction-diffusion systems where chemical species have different diffusion constants. In recent years, several studies investigated the effects of noise on Turing patterns and showed that the parameter regimes, for which stochastic Turing patterns are observed, can be larger than the parameter regimes predicted by deterministic models, which are written in terms of partial differential equations (PDEs) for species concentrations. A common stochastic reaction-diffusion approach is written in terms of compartment-based (lattice-based) models, where the domain of interest is divided into artificial compartments and the number of molecules in each compartment is simulated. In this paper, the dependence of stochastic Turing patterns on the compartment size is investigated. It has previously been shown (for relatively simpler systems) that a modeler should not choose compartment sizes which are too small or too large, and that the optimal compartment size depends on the diffusion constant. Taking these results into account, we propose and study a compartment-based model of Turing patterns where each chemical species is described using a different set of compartments. It is shown that the parameter regions where spatial patterns form are different from the regions obtained by classical deterministic PDE-based models, but they are also different from the results obtained for the stochastic reaction-diffusion models which use a single set of compartments for all chemical species. In particular, it is argued that some previously reported results on the effect of noise on Turing patterns in biological systems need to be reinterpreted.

  3. Lifetime distribution in thermal fatigue - a stochastic geometry approach

    Kullig, E.; Michel, B.

    1996-02-01

    The present report describes an interpretation approach for crack patterns which are generated on the smooth surface of austenitic specimens under thermal fatigue loading. A framework for the fracture mechanics characterization of equibiaxially loaded branched surface cracks is developed which also accounts for crack interaction effects. Advanced methods for the statistical evaluation of crack patterns using suitable characteristic quantities are developed. An efficient simulation procedure allows identifying the impact of different variables of the stochastic crack growth model with respect to the generated crack patterns. (orig.) [de]

  4. Perturbative approach to non-Markovian stochastic Schroedinger equations

    Gambetta, Jay; Wiseman, H.M.

    2002-01-01

    In this paper we present a perturbative procedure that allows one to numerically solve diffusive non-Markovian stochastic Schroedinger equations, for a wide range of memory functions. To illustrate this procedure, numerical results are presented for a classically driven two-level atom immersed in an environment with a simple memory function. It is observed that as the order of the perturbation is increased, the numerical results for the ensemble average state ρ_red(t) approach the exact reduced state found via Imamoğlu's enlarged system method [Phys. Rev. A 50, 3650 (1994)].

  5. Approaching complexity by stochastic methods: From biological systems to turbulence

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
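
    Issue (i), reconstructing a Langevin equation from data, can be sketched in one dimension by estimating the first two Kramers-Moyal coefficients as conditional moments of the increments, D1(x) = <dx|x>/dt and D2(x) = <dx^2|x>/(2 dt). The Ornstein-Uhlenbeck test process and the binning below are illustrative choices:

        import numpy as np

        rng = np.random.default_rng(5)
        dt, n = 0.01, 200_000
        x = np.zeros(n)
        for i in range(n - 1):               # simulate dX = -X dt + 0.5 dW
            x[i + 1] = x[i] - x[i] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

        bins = np.linspace(-1.0, 1.0, 21)
        idx = np.digitize(x[:-1], bins)
        dx = np.diff(x)
        for b in range(1, len(bins)):
            sel = idx == b
            if sel.sum() > 100:
                xc = 0.5 * (bins[b - 1] + bins[b])
                D1 = dx[sel].mean() / dt               # drift, should be close to -xc
                D2 = (dx[sel] ** 2).mean() / (2 * dt)  # diffusion, close to 0.125
                print(f"x={xc:+.2f}  D1={D1:+.3f}  D2={D2:.3f}")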

  7. Trade Performance and Potential of the Philippines: An Application of Stochastic Frontier Gravity Model

    Deluna, Roperto Jr

    2013-01-01

    This study was conducted to investigate the issue of what Philippine merchandise trade flows would be if countries operated at the frontier of the gravity model. The study sought to estimate the coefficients of the gravity model. The estimated coefficients were used to estimate merchandise export potentials and the technical efficiency of each country in the sample, and these were also aggregated to measure the impact of country groups, RTAs and inter-regional trading agreements. Result of the ...
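
    The estimation machinery behind such studies can be sketched as maximum likelihood for a normal/half-normal stochastic frontier on a gravity-type equation, ln(trade) = beta . z + v - u, with v ~ N(0, sigma_v^2) and u >= 0 half-normal. The data, regressors, and starting values below are synthetic stand-ins, not the study's dataset:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        n = 500
        gdp = rng.uniform(1, 10, n)           # synthetic "GDP" regressor
        dist = rng.uniform(1, 10, n)          # synthetic "distance"
        u = np.abs(rng.normal(0, 0.4, n))     # inefficiency (half-normal)
        v = rng.normal(0, 0.2, n)             # noise
        y = 1.0 + 0.8 * gdp - 1.2 * np.log(dist) + v - u
        X = np.column_stack([np.ones(n), gdp, np.log(dist)])

        def negloglik(theta):
            beta, su, sv = theta[:3], np.exp(theta[3]), np.exp(theta[4])
            sig, lam = np.hypot(su, sv), su / sv
            eps = y - X @ beta
            ll = np.log(2 / sig) + norm.logpdf(eps / sig) + norm.logcdf(-eps * lam / sig)
            return -ll.sum()

        res = minimize(negloglik, x0=[0, 0, 0, -1, -1], method="Nelder-Mead",
                       options={"maxiter": 20000})
        print("beta:", res.x[:3], " sigma_u, sigma_v:", np.exp(res.x[3:]))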

  8. Optimal Integration of Intermittent Renewables: A System LCOE Stochastic Approach

    Carlo Lucheroni

    2018-03-01

    Full Text Available We propose a system-level approach to value the impact on costs of the integration of intermittent renewable generation in a power system, based on expected breakeven cost and breakeven cost risk. To do this, we carefully reconsider the definition of the Levelized Cost of Electricity (LCOE) when extended to non-dispatchable generation, by examining extra costs and gains originating from the costly management of random power injections. We are thus led to define a 'system LCOE' as a system-dependent LCOE that properly takes intermittent generation into account. In order to include breakeven cost risk we further extend this deterministic approach to a stochastic setting, by introducing a 'stochastic system LCOE'. This extension allows us to discuss the optimal integration of intermittent renewables from a broad, system-level point of view. This paper thus aims to provide power producers and policy makers with a new methodological scheme, still based on the LCOE but updating this valuation technique to current energy system configurations characterized by a large share of non-dispatchable production. Quantifying and optimizing the impact of intermittent renewables integration on power system costs, risk and CO2 emissions, the proposed methodology can be used as a powerful tool of analysis for assessing environmental and energy policies.

  9. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach allows a reduction in computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can be similarly handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and the Conditional Value-at-Risk (CVaR).

  10. Maximum likelihood approach for several stochastic volatility models

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, OU and Heston stochastic volatility models, and we study their performance in terms of the log-price probability, the volatility probability, and its mean first-passage time. The approach has some predictive power for the future return amplitude from knowledge of the current volatility alone. The assumed models do not consider long-range volatility autocorrelation or the asymmetric return-volatility cross-correlation, but the method still yields these two important stylized facts very naturally. We apply the method to different market indices, with good performance in all cases. (paper)
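
    For the OU member of that family, the likelihood step is exact because the transition density is Gaussian: Y_{t+dt} | Y_t ~ N(m + (Y_t - m) e^{-a dt}, k^2 (1 - e^{-2 a dt}) / (2a)). A sketch on simulated log-volatility, with illustrative parameter values:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(7)
        a0, m0, k0, dt, n = 2.0, -1.0, 0.5, 1.0 / 252, 5000

        y = np.empty(n); y[0] = m0
        for i in range(n - 1):                # exact OU simulation
            mu = m0 + (y[i] - m0) * np.exp(-a0 * dt)
            sd = k0 * np.sqrt((1 - np.exp(-2 * a0 * dt)) / (2 * a0))
            y[i + 1] = rng.normal(mu, sd)

        def negloglik(theta):
            a, m, k = np.exp(theta[0]), theta[1], np.exp(theta[2])
            mu = m + (y[:-1] - m) * np.exp(-a * dt)
            sd = k * np.sqrt((1 - np.exp(-2 * a * dt)) / (2 * a))
            return -norm.logpdf(y[1:], mu, sd).sum()

        res = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
        print("a, m, k =", np.exp(res.x[0]), res.x[1], np.exp(res.x[2]))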

  11. Gravity

    Gamow, George

    2003-01-01

    A distinguished physicist and teacher, George Gamow also possessed a special gift for making the intricacies of science accessible to a wide audience. In Gravity, he takes an enlightening look at three of the towering figures of science who unlocked many of the mysteries behind the laws of physics: Galileo, the first to take a close look at the process of free and restricted fall; Newton, originator of the concept of gravity as a universal force; and Einstein, who proposed that gravity is no more than the curvature of the four-dimensional space-time continuum. Graced with the author's own drawings

  12. Moment problems and the causal set approach to quantum gravity

    Ash, Avner; McDonald, Patrick

    2003-01-01

    We study a collection of discrete Markov chains related to the causal set approach to modeling discrete theories of quantum gravity. The transition probabilities of these chains satisfy a general covariance principle, a causality principle, and a renormalizability condition. The corresponding dynamics are completely determined by a sequence of non-negative real coupling constants. Using techniques related to the classical moment problem, we give a complete description of any such sequence of coupling constants. We prove a representation theorem: every discrete theory of quantum gravity arising from causal set dynamics satisfying covariance, causality, and renormalizability corresponds to a unique probability distribution function on the non-negative real numbers, with the coupling constants defining the theory given by the moments of the distribution

  13. Spreading dynamics on complex networks: a general stochastic approach.

    Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J

    2014-12-01

    Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggests that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation), and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
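
    A brute-force stochastic SIS simulation, the kind of agent-based benchmark such a framework is meant to be compared against, fits in a few lines. The random graph, rates, and time step below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(8)
        n, p_edge = 200, 0.05
        A = np.triu(rng.random((n, n)) < p_edge, 1)   # random undirected graph
        A = (A | A.T).astype(float)

        beta, gamma, dt = 0.05, 0.2, 0.1              # infection/recovery rates (assumed)
        infected = rng.random(n) < 0.05               # initial seeds

        for step in range(500):                       # discrete-time approximation
            pressure = A @ infected.astype(float)     # number of infected neighbours
            p_inf = 1 - np.exp(-beta * pressure * dt)
            new_inf = ~infected & (rng.random(n) < p_inf)
            recovered = infected & (rng.random(n) < 1 - np.exp(-gamma * dt))
            infected = (infected | new_inf) & ~recovered
        print("endemic prevalence ~", infected.mean())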

  14. An Asymptotic and Stochastic Theory for the Effects of Surface Gravity Waves on Currents and Infragravity Waves

    McWilliams, J. C.; Lane, E.; Melville, K.; Restrepo, J.; Sullivan, P.

    2004-12-01

    Oceanic surface gravity waves are approximately irrotational, weakly nonlinear, and conservative, and they have a much shorter time scale than oceanic currents and longer waves (e.g., infragravity waves), except where the primary surface waves break. This provides a framework for an asymptotic theory, based on a separation of time (and space) scales, of wave-averaged effects associated with the conservative primary wave dynamics, combined with a stochastic representation of the momentum transfer and induced mixing associated with non-conservative wave breaking. Such a theory requires only modest information about the primary wave field from measurements or operational model forecasts and thus avoids the enormous burden of calculating the waves on their intrinsically small space and time scales. For the conservative effects, the result is a vortex force associated with the primary wave's Stokes drift; a wave-averaged Bernoulli head and sea-level set-up; and an incremental material advection by the Stokes drift. This can be compared to the "radiation stress" formalism of Longuet-Higgins, Stewart, and Hasselmann; it is shown to be a preferable representation since the radiation stress is trivial at its apparent leading order. For the non-conservative breaking effects, a population of stochastic impulses is added to the current and infragravity momentum equations with distribution functions taken from measurements. In offshore wind-wave equilibria, these impulses replace the conventional surface wind stress and cause significant differences in the surface boundary layer currents and entrainment rate, particularly when acting in combination with the conservative vortex force. In the surf zone, where breaking associated with shoaling removes nearly all of the primary wave momentum and energy, the stochastic forcing plays a role analogous to that of the widely used nearshore radiation stress parameterizations. This talk describes the theoretical framework and presents some

  15. Generalized Lagrangian Path Approach to Manifestly-Covariant Quantum Gravity Theory

    Massimo Tessarotto

    2018-03-01

    Full Text Available A trajectory-based representation for the quantum theory of the gravitational field is formulated. This is achieved in terms of a covariant Generalized Lagrangian-Path (GLP) approach which relies on a suitable statistical representation of Bohmian Lagrangian trajectories, referred to here as the GLP-representation. The result is established in the framework of the manifestly-covariant quantum gravity theory (CQG-theory) proposed recently and the related CQG-wave equation advancing in proper-time the quantum state associated with massive gravitons. Generally non-stationary analytical solutions of the CQG-wave equation with non-vanishing cosmological constant are determined in such a framework, which exhibit Gaussian-like probability densities that are non-dispersive in proper-time. As a remarkable outcome of implementing these analytical solutions, the existence of an emergent gravity phenomenon is proven to hold. Accordingly, it is shown that a mean-field background space-time metric tensor can be expressed in terms of a suitable statistical average of stochastic fluctuations of the quantum gravitational field whose quantum-wave dynamics is described by GLP trajectories.

  16. Approaches to emergent spacetime in gauge/gravity duality

    Sully, James Kenneth

    2013-08-01

    In this thesis we explore approaches to emergent local spacetime in gauge/gravity duality. We first conjecture that every CFT with a large-N type limit and a parametrically large gap in the spectrum of single-trace operators has a local bulk dual. We defend this conjecture by counting consistent solutions to the four-point function in simple scalar models and matching to the number of local interaction terms in the bulk. Next, we proceed to explicitly construct local bulk operators using smearing functions. We argue that this construction allows one to probe inside black hole horizons for only short times. We then suggest that the failure to construct bulk operators inside a black hole at late times is indicative of a breakdown of local effective field theory at the black hole horizon. We argue that the postulates of black hole complementarity are inconsistent and cannot be realized within gauge/gravity duality. We argue that the most conservative solution is a firewall at the black hole horizon, and we critically explore alternative resolutions. We then examine the CGHS model of two-dimensional gravity to look for dynamical formation of firewalls. We find that the CGHS model does not exhibit firewalls, but rather contains long-lived remnants. We argue that, while this is consistent for the CGHS model, it cannot be so in higher-dimensional theories of gravity. Lastly, we turn to F-theory and detail local and global obstructions to writing elliptic fibrations in Tate form. We determine more general possible forms.

  17. A stochastic approach for automatic generation of urban drainage systems.

    Möderl, M; Butler, D; Rauch, W

    2009-01-01

    Typically, performance evaluation of newly developed methodologies is based on one or more case studies. The investigation of multiple real-world case studies is tedious and time consuming. Moreover, extrapolating conclusions from individual investigations to a general basis is arguable and sometimes even wrong. In this article a stochastic approach is presented to evaluate newly developed methodologies on a broader basis. For this approach the Matlab tool "Case Study Generator" is developed, which automatically generates a variety of different virtual urban drainage systems using boundary conditions (e.g., length of the urban drainage system, slope of the catchment surface, etc.) as input. The layout of the sewer system is based on an adapted Galton-Watson branching process. The sub-catchments are allocated considering a digital terrain model. Sewer system components are designed according to standard values. In total, 10,000 different virtual case studies of urban drainage systems are generated and simulated. Subsequently, simulation results are evaluated using a performance indicator for surface flooding. Comparison between the results of the virtual and two real-world case studies indicates the promise of the method. The novelty of the approach is that it is possible to draw more general conclusions, in contrast to traditional evaluations with few case studies.
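
    The layout step can be illustrated directly: grow a Galton-Watson tree in which every sewer node spawns a random number of upstream pipes until a target size is reached. The offspring distribution and size limit below are illustrative, not the calibrated values of the tool:

        import numpy as np

        rng = np.random.default_rng(9)

        def generate_layout(max_nodes=50, p_children=(0.3, 0.5, 0.2)):
            """Galton-Watson tree: each node gets 0, 1 or 2 upstream pipes."""
            edges, frontier, next_id = [], [0], 1     # node 0 is the outlet
            while frontier and next_id < max_nodes:
                node = frontier.pop(0)
                for _ in range(rng.choice(3, p=p_children)):
                    edges.append((next_id, node))     # pipe drains child -> parent
                    frontier.append(next_id)
                    next_id += 1
            return edges

        layout = generate_layout()
        print(len(layout), "pipes, first few:", layout[:5])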

  18. A random walk approach to stochastic neutron transport

    Mulatier, Clelia de

    2015-01-01

    One of the key goals of nuclear reactor physics is to determine the distribution of the neutron population within a reactor core. This population indeed fluctuates due to the stochastic nature of the interactions of the neutrons with the nuclei of the surrounding medium: scattering, emission of neutrons from fission events, and capture by nuclear absorption. Due to these physical mechanisms, the stochastic process performed by neutrons is a branching random walk. For most applications, the neutron population considered is very large, and all physical observables related to its behaviour, such as the heat production due to fissions, are well characterised by their average values. Generally, these mean quantities are governed by the classical neutron transport equation, called the linear Boltzmann equation. During my PhD, using tools from branching random walks and anomalous diffusion, I have tackled two aspects of neutron transport that cannot be approached by the linear Boltzmann equation. First, thanks to the Feynman-Kac backward formalism, I have characterised the phenomenon of 'neutron clustering' that has been highlighted for low-density configurations of neutrons and results from strong fluctuations in space and time of the neutron population. Then, I focused on several properties of anomalous (non-exponential) transport, which can model neutron transport in strongly heterogeneous and disordered media, such as pebble-bed reactors. One of the novel aspects of this work is that problems are treated in the presence of boundaries. Indeed, even though real systems are finite (confined geometries), most previously existing results were obtained for infinite systems. (author) [fr]
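
    A toy version of the branching random walk described above: each neutron diffuses, and at exponentially distributed collision times it is either absorbed or replaced by nu fission neutrons. All rates are illustrative; the mean offspring number 0.5 * 2 = 1 makes this toy process critical:

        import numpy as np

        rng = np.random.default_rng(10)

        def branching_walk(t_max=5.0, rate=1.0, p_absorb=0.5, nu=2, sigma=1.0):
            """Positions of the surviving population at census time t_max (1D)."""
            particles = [(0.0, 0.0)]                    # (current time, position)
            alive = []
            while particles:
                t, x = particles.pop()
                t_col = t + rng.exponential(1.0 / rate) # next collision time
                step = min(t_col, t_max) - t
                x += sigma * np.sqrt(step) * rng.standard_normal()  # Brownian flight
                if t_col >= t_max:
                    alive.append(x)                     # survives to census time
                elif rng.random() > p_absorb:           # fission: nu new neutrons
                    particles.extend([(t_col, x)] * nu)
            return np.array(alive)

        print("population at census:", branching_walk().size)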

  19. Discrete Approaches to Quantum Gravity in Four Dimensions

    Loll Renate

    1998-01-01

    Full Text Available The construction of a consistent theory of quantum gravity is a problem in theoretical physics that has so far defied all attempts at resolution. One ansatz to try to obtain a non-trivial quantum theory proceeds via a discretization of space-time and the Einstein action. I review here three major areas of research: gauge-theoretic approaches, both in a path-integral and a Hamiltonian formulation; quantum Regge calculus; and the method of dynamical triangulations, confining attention to work that is strictly four-dimensional, strictly discrete, and strictly quantum in nature.

  20. Alternative Approaches to Technical Efficiency Estimation in the Stochastic Frontier Model

    Acquah, H. de-Graft; Onumah, E. E.

    2014-01-01

    Estimating the stochastic frontier model and calculating the technical efficiency of decision making units are of great importance in applied production economics. This paper estimates technical efficiency from the stochastic frontier model using the Jondrow and the Battese and Coelli approaches. In order to compare alternative methods, simulated data with sample sizes of 60 and 200 are generated from a stochastic frontier model commonly applied to agricultural firms. Simulated data is employed to co...

  1. Fast radio bursts and the stochastic lifetime of black holes in quantum gravity

    Barrau, Aurélien; Moulin, Flora; Martineau, Killian

    2018-03-01

    Nonperturbative quantum gravity effects might allow a black-to-white hole transition. We revisit this increasingly popular hypothesis by taking into account the fundamentally random nature of the bouncing time. We show that if the primordial mass spectrum of black holes is highly peaked, the expected signal can in fact match the wavelength of the observed fast radio bursts. On the other hand, if the primordial mass spectrum is wide and smooth, clear predictions are suggested and the sensitivity to the shape of the spectrum is studied.

  2. Group manifold approach to gravity and supergravity theories

    d'Auria, R.; Fre, P.; Regge, T.

    1981-05-01

    Gravity theories are presented from the point of view of group manifold formulation. The differential geometry of groups and supergroups is discussed first; the notion of connection and related Yang-Mills potentials is introduced. Then ordinary Einstein gravity is discussed in the Cartan formulation. This discussion provides a first example which will then be generalized to more complicated theories, in particular supergravity. The distinction between "pure" and "impure" theories is also set forth. Next, the authors develop an axiomatic approach to rheonomic theories related to the concept of Chevalley cohomology on group manifolds, and apply these principles to N = 1 supergravity. Then the panorama of so far constructed pure and impure group manifold supergravities is presented. The pure d = 5 N = 2 case is discussed in some detail, and N = 2 and N = 3 in d = 4 are considered as examples of the impure theories. The way a pure theory becomes impure after dimensional reduction is illustrated. Next, the role of kinematical superspace constraints as a subset of the group-manifold equations of motion is discussed, and the use of this approach to obtain the auxiliary fields is demonstrated. Finally, the application of the group manifold method to supersymmetric Super Yang-Mills theories is addressed.

  3. An Improved Asymptotic Sampling Approach For Stochastic Finite Element Stiffness of a Laterally Loaded Monopile

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2012-01-01

    In this study a stochastic approach is employed to obtain the horizontal and rotational stiffness of an offshore monopile foundation. A nonlinear stochastic p-y curve is integrated into a finite element scheme for calculation of the monopile response in over-consolidated clay having spatial...

  4. Oriented stochastic data envelopment models: ranking comparison to stochastic frontier approach

    Brázdik, František

    -, č. 271 (2005), s. 1-46 ISSN 1211-3298 Institutional research plan: CEZ:AV0Z70850503 Keywords : stochastic data envelopment analysis * linear programming * rice farm Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp271.pdf

  5. Productive efficiency of tea industry: A stochastic frontier approach

    USER

    2010-06-21

    Jun 21, 2010 ... Key words: Technical efficiency, stochastic frontier, translog ... present low performance of the tea industry in Bangladesh. ... The Technical inefficiency effect .... administrative, technical, clerical, sales and purchase staff.

  6. A decoupled approach to filter design for stochastic systems

    Barbata, A.; Zasadzinski, M.; Ali, H. Souley; Messaoud, H.

    2016-08-01

    This paper presents a new theorem to guarantee the almost sure exponential stability of a class of stochastic triangular systems by studying only the stability of each diagonal subsystem. This result makes it possible to solve the filtering problem for stochastic systems with multiplicative noises by using the almost sure exponential stability concept. Two kinds of observers are treated: the full-order and reduced-order cases.

  7. Path probability of stochastic motion: A functional approach

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock price in finance.

  8. A Proposed Stochastic Finite Difference Approach Based on Homogenous Chaos Expansion

    O. H. Galal

    2013-01-01

    Full Text Available This paper proposes a stochastic finite difference approach based on homogenous chaos expansion (SFDHC). This approach can handle time-dependent nonlinear as well as linear systems with deterministic or stochastic initial and boundary conditions. In this approach, the included stochastic parameters are modeled as second-order stochastic processes and are expanded using the Karhunen-Loève expansion, while the response function is approximated using homogenous chaos expansion. Galerkin projection is used to convert the original stochastic partial differential equation (PDE) into a set of coupled deterministic partial differential equations, which are then solved using the finite difference method. Two well-known equations were used to validate the efficiency of the proposed method: the linear diffusion equation with a stochastic parameter, and the nonlinear Burgers' equation with a stochastic parameter and stochastic initial and boundary conditions. In both of these examples, the probability distribution function of the response showed close conformity to the results obtained from Monte Carlo simulation, with optimized computational cost.

  9. Conservative diffusions: a constructive approach to Nelson's stochastic mechanics

    Carlen, E.A.

    1984-01-01

    In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. Concern here is with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics, which are formally given by stochastic differential equations with extremely singular coefficients, really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable paths with which to study the behavior of physical systems?" These are the questions treated in this thesis. In Chapter I, stochastic mechanics and diffusion theory are reviewed, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. Chapter II settles the first of the questions raised above. Using PDE methods, the diffusions of stochastic mechanics are constructed. The result is sufficiently general to be of independent mathematical interest. In Chapter III, potential scattering in stochastic mechanics is treated and direct probabilistic methods of studying quantum scattering problems are discussed. The results provide a solid YES in answer to the second question raised above.

  10. The impact of trade costs on rare earth exports : a stochastic frontier estimation approach.

    Sanyal, Prabuddha; Brady, Patrick Vane; Vugrin, Eric D.

    2013-09-01

    The study develops a novel stochastic frontier modeling approach to the gravity equation for rare earth element (REE) trade between China and its trading partners between 2001 and 2009. The novelty lies in differentiating between 'behind the border' trade costs of China and the 'implicit beyond the border costs' of China's trading partners. Results indicate that the significance levels of the independent variables change dramatically over the time period. While geographical distance matters for trade flows in both periods, the effect of income on trade flows is significantly attenuated, possibly capturing the negative effects of financial crises in the developed world. Second, the total export losses due to 'behind the border' trade costs almost tripled over the time period. Finally, looking at 'implicit beyond the border' trade costs, results show China gaining in some markets, although it is likely that some countries are substituting away from Chinese REE exports.

  11. Stability analysis of stochastic delayed cellular neural networks by LMI approach

    Zhu Wenli; Hu Jin

    2006-01-01

    Some sufficient mean-square exponential stability conditions for a class of stochastic DCNN models are obtained via the LMI approach. These conditions improve and generalize some existing global asymptotic stability conditions for DCNN models.

  12. Stochastic congestion management in power markets using efficient scenario approaches

    Esmaili, Masoud; Amjady, Nima; Shayanfar, Heidar Ali

    2010-01-01

    Congestion management in electricity markets is traditionally performed using deterministic values of system parameters assuming a fixed network configuration. In this paper, a stochastic programming framework is proposed for congestion management considering the power system uncertainties comprising outage of generating units and transmission branches. The Forced Outage Rate of equipment is employed in the stochastic programming. Using the Monte Carlo simulation, possible scenarios of power system operating states are generated and a probability is assigned to each scenario. The performance of the ordinary as well as Lattice rank-1 and rank-2 Monte Carlo simulations is evaluated in the proposed congestion management framework. As a tradeoff between computation time and accuracy, scenario reduction based on the standard deviation of accepted scenarios is adopted. The stochastic congestion management solution is obtained by aggregating individual solutions of accepted scenarios. Congestion management using the proposed stochastic framework provides a more realistic solution compared with traditional deterministic solutions. Results of testing the proposed stochastic congestion management on the 24-bus reliability test system indicate the efficiency of the proposed framework.
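
    Scenario generation with forced outage rates reduces to Bernoulli sampling of each unit's availability, after which identical system states are merged and weighted by their empirical frequency; this is a crude stand-in for the Lattice Monte Carlo and scenario-reduction steps of the paper, with invented outage rates:

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(11)
        FOR = np.array([0.02, 0.04, 0.08, 0.05])   # forced outage rates (assumed)
        n_scen = 10_000

        # 1 = unit available, 0 = unit on forced outage
        states = (rng.random((n_scen, FOR.size)) >= FOR).astype(int)

        counts = Counter(map(tuple, states))       # merge identical system states
        for state, c in sorted(counts.items(), key=lambda kv: -kv[1])[:5]:
            print(state, "probability ~", c / n_scen)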

  13. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  14. Methods and models in mathematical biology deterministic and stochastic approaches

    Müller, Johannes

    2015-01-01

    This book developed from classes in mathematical biology taught by the authors over several years at the Technische Universität München. The main themes are modeling principles, mathematical principles for the analysis of these models, and model-based analysis of data. The key topics of modern biomathematics are covered: ecology, epidemiology, biochemistry, regulatory networks, neuronal networks, and population genetics. A variety of mathematical methods are introduced, ranging from ordinary and partial differential equations to stochastic graph theory and branching processes. A special emphasis is placed on the interplay between stochastic and deterministic models.

  15. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall focuses on the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases, and that for the cohesion of the foundation soil (c_2) decreases, with an increase in the variation of φ_1, while R_f for the unit weights (γ_1 and γ_2) of both soils and for the friction angle of the foundation soil (φ_2) remains almost constant under variation of the soil properties. The results compare well with some of the existing deterministic and probabilistic methods and are found to be cost-effective. It is seen that if the variation of φ_1 remains within 5 %, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8 %, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
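
    The P_f ingredient can be illustrated for the sliding mode alone: sample the random soil properties, evaluate the factor of safety from Rankine active pressure, FS = (W tan φ_2 + c_2 B) / P_a with P_a = 0.5 K_a γ_1 H^2 and K_a = tan^2(45° - φ_1/2), and count failures. The geometry and distributions below are illustrative, not the paper's:

        import numpy as np

        rng = np.random.default_rng(12)
        n = 100_000

        H, B, W = 5.0, 3.0, 150.0                  # height [m], base [m], weight [kN/m] (assumed)
        phi1 = np.radians(rng.normal(32, 2, n))    # backfill friction angle
        gam1 = rng.normal(18, 0.5, n)              # backfill unit weight [kN/m^3]
        phi2 = np.radians(rng.normal(28, 2.5, n))  # foundation friction angle
        c2 = rng.normal(15, 5, n)                  # foundation cohesion [kPa]

        Ka = np.tan(np.pi / 4 - phi1 / 2) ** 2     # Rankine active coefficient
        Pa = 0.5 * Ka * gam1 * H**2                # active thrust [kN/m]
        FS = (W * np.tan(phi2) + c2 * B) / Pa      # sliding factor of safety

        print("P_f (sliding) ~", np.mean(FS < 1.0))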

  16. Stochastic Change Detection based on an Active Fault Diagnosis Approach

    Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2007-01-01

    The focus in this paper is on stochastic change detection applied in connection with active fault diagnosis (AFD). An auxiliary input signal is applied in AFD. This signal injection in the system will in general allow one to obtain fast change detection/isolation by considering the output or an err...

  17. A stochastic-programming approach to integrated asset and liability ...

    This increase in complexity has provided an impetus for the investigation into integrated asset- and liability-management frameworks that could realistically address dynamic portfolio allocation in a risk-controlled way. In this paper the authors propose a multi-stage dynamic stochastic-programming model for the integrated ...

  18. Fat versus Thin Threading Approach on GPUs: Application to Stochastic Simulation of Chemical Reactions

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K.

    2012-01-01

    We explore two different threading approaches on a graphics processing unit (GPU), exploiting two different characteristics of the current GPU architecture. The fat thread approach tries to minimize data access time by relying on shared memory and registers, potentially sacrificing parallelism. The thin thread approach maximizes parallelism and tries to hide access latencies. We apply these two approaches to the parallel stochastic simulation of chemical reaction systems using the stochastic simulation algorithm (SSA) by Gillespie [14]. In these cases, the proposed thin thread approach shows comparable performance while eliminating the limitation on the reaction system's size. © 2006 IEEE.
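
    For reference, the serial SSA that each GPU thread executes is short; below is a plain Python version of Gillespie's direct method for a reversible isomerization A <-> B, an illustrative system rather than one from the paper:

        import numpy as np

        rng = np.random.default_rng(13)

        def ssa(a=100, b=0, k1=1.0, k2=0.5, t_max=10.0):
            """Gillespie's direct method for A <-> B."""
            t, traj = 0.0, [(0.0, a)]
            while t < t_max:
                r1, r2 = k1 * a, k2 * b             # reaction propensities
                total = r1 + r2
                if total == 0.0:
                    break
                t += rng.exponential(1.0 / total)   # time to next reaction
                if rng.random() < r1 / total:       # which reaction fires?
                    a, b = a - 1, b + 1
                else:
                    a, b = a + 1, b - 1
                traj.append((t, a))
            return traj

        print("final state (t, A):", ssa()[-1])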

  20. Simulation of the stochastic wave loads using a physical modeling approach

    Liu, W.F.; Sichani, Mahdi Teimouri; Nielsen, Søren R.K.

    2013-01-01

    In analyzing stochastic dynamic systems, analysis of the system uncertainty due to randomness in the loads plays a crucial role. Typically, time series of the stochastic loads are simulated using the traditional random phase method. This approach, combined with the fast Fourier transform algorithm, makes...... reliability or its uncertainty. Moreover, the applicability of the probability density evolution method to engineering problems faces critical difficulties when the system embeds too many random variables. Hence it is useful to devise a method which can make realizations of the stochastic load processes with low...
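
    The "traditional random phase method" mentioned above is compact enough to show: superpose harmonics whose amplitudes come from a target spectrum and whose phases are independent and uniform. The Pierson-Moskowitz-type parametrization and sea-state numbers below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(14)

        def spectrum(f, hs=3.0, tp=8.0):
            """Pierson-Moskowitz-type one-sided wave spectrum (illustrative)."""
            fp = 1.0 / tp
            return 0.3125 * hs**2 * fp**4 / f**5 * np.exp(-1.25 * (fp / f) ** 4)

        n, dt = 2048, 0.25
        t = np.arange(n) * dt
        f = np.arange(1, n // 2) / (n * dt)        # positive discrete frequencies
        df = f[1] - f[0]

        amp = np.sqrt(2 * spectrum(f) * df)        # harmonic amplitudes
        phase = rng.uniform(0, 2 * np.pi, f.size)  # independent random phases
        eta = (amp * np.cos(2 * np.pi * np.outer(t, f) + phase)).sum(axis=1)

        print("significant wave height ~", 4 * eta.std())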

  1. A penalty guided stochastic fractal search approach for system reliability optimization

    Mellal, Mohamed Arezki; Zio, Enrico

    2016-01-01

    Modern industry requires components and systems with high reliability levels. In this paper, we address the system reliability optimization problem. A penalty guided stochastic fractal search approach is developed for solving reliability allocation, redundancy allocation, and reliability–redundancy allocation problems. Numerical results for ten case studies are presented as benchmark problems, highlighting the superiority of the proposed approach compared to others from the literature. - Highlights: • System reliability optimization is investigated. • A penalty guided stochastic fractal search approach is developed. • Results of ten case studies are compared with previously published methods. • Performance of the approach is demonstrated.

  2. Food Environment and Weight Outcomes: A Stochastic Frontier Approach

    Li, Xun; Lopez, Rigoberto A.

    2013-01-01

    Food environment includes the presence of supermarkets, restaurants, warehouse clubs and supercenters, and other food outlets. This paper evaluates weight outcomes from a food environment using a stochastic production frontier and an equation for the determinants of efficiency, where the explanatory variables of the efficiency term include food environment indicators. Using individual consumer data and food environment data from New England counties, empirical results indicate that fruit and ...
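
    For readers unfamiliar with the method, a standard stochastic production frontier with an inefficiency-determinants equation has the form below (generic symbols, not the authors' exact specification):

      y_i = x_i'\beta + v_i - u_i, \qquad v_i \sim N(0, \sigma_v^2), \qquad u_i \sim N^{+}(z_i'\delta, \sigma_u^2),

    where y_i is the weight outcome, x_i are inputs, z_i collects the food environment indicators entering the inefficiency term u_i, and technical efficiency is TE_i = \exp(-u_i).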

  3. A unified approach to stochastic integration on the real line

    Basse-O'Connor, Andreas; Graversen, Svend-Erik; Pedersen, Jan

    Stochastic integration on the predictable σ-field with respect to σ-finite L0-valued measures, also known as formal semimartingales, is studied. In particular, the triplet of such measures is introduced and used to characterize the set of integrable processes. Special attention is given to Lévy...... processes indexed by the real line. Surprisingly, many of the basic properties break down in this situation compared to the usual R+ case....

  4. Robust synthetic biology design: stochastic game theory approach.

    Chen, Bor-Sen; Chang, Chia-Hung; Lee, Hsiao-Ching

    2009-07-15

    Synthetic biology aims to engineer artificial biological systems in order to investigate natural biological phenomena and for a variety of applications. However, the development of synthetic gene networks is still difficult, and most newly created gene networks are non-functioning due to uncertain initial conditions and disturbances of extra-cellular environments on the host cell. At present, how to design a robust synthetic gene network that works properly under these uncertain factors is the most important topic of synthetic biology. A robust regulation design is proposed for a stochastic synthetic gene network to achieve the prescribed steady states under these uncertain factors from the minimax regulation perspective. This minimax regulation design problem can be transformed into an equivalent stochastic game problem. Since it is not easy to solve the robust regulation design problem of synthetic gene networks by the non-linear stochastic game method directly, the Takagi-Sugeno (T-S) fuzzy model is proposed to approximate the non-linear synthetic gene network via the linear matrix inequality (LMI) technique through the Robust Control Toolbox in Matlab. Finally, an in silico example is given to illustrate the design procedure and to confirm the efficiency and efficacy of the proposed robust gene design method. http://www.ee.nthu.edu.tw/bschen/SyntheticBioDesign_supplement.pdf.

  5. Stochastic partial differential equations a modeling, white noise functional approach

    Holden, Helge; Ubøe, Jan; Zhang, Tusheng

    1996-01-01

    This book is based on research that, to a large extent, started around 1990, when a research project on fluid flow in stochastic reservoirs was initiated by a group including some of us with the support of VISTA, a research cooperation between the Norwegian Academy of Science and Letters and Den norske stats oljeselskap A.S. (Statoil). The purpose of the project was to use stochastic partial differential equations (SPDEs) to describe the flow of fluid in a medium where some of the parameters, e.g., the permeability, were stochastic or "noisy". We soon realized that the theory of SPDEs at the time was insufficient to handle such equations. Therefore it became our aim to develop a new mathematically rigorous theory that satisfied the following conditions. 1) The theory should be physically meaningful and realistic, and the corresponding solutions should make sense physically and should be useful in applications. 2) The theory should be general enough to handle many of the interesting SPDEs that occur in r...

  6. An Adynamical, Graphical Approach to Quantum Gravity and Unification

    Stuckey, W. M.; Silberstein, Michael; McDevitt, Timothy

    We use graphical field gradients in an adynamical, background-independent fashion to propose a new approach to quantum gravity (QG) and unification. Our proposed reconciliation of general relativity (GR) and quantum field theory (QFT) is based on a modification of their graphical instantiations, i.e. Regge calculus and lattice gauge theory (LGT), respectively, which we assume are fundamental to their continuum counterparts. Accordingly, the fundamental structure is a graphical amalgam of space, time, and sources (in the parlance of QFT) called a "space-time source element". These are fundamental elements of space, time, and sources, not source elements in space and time. The transition amplitude for a space-time source element is computed using a path integral with discrete graphical action. The action for a space-time source element is constructed from a difference matrix K and source vector J on the graph, as in lattice gauge theory. K is constructed from graphical field gradients so that it contains a non-trivial null space and J is then restricted to the row space of K, so that it is divergence-free and represents a conserved exchange of energy-momentum. This construct of K and J represents an adynamical global constraint (AGC) between sources, the space-time metric, and the energy-momentum content of the element, rather than a dynamical law for time-evolved entities. In this view, one manifestation of quantum gravity becomes evident when, for example, a single space-time source element spans adjoining simplices of the Regge calculus graph. Thus, energy conservation for the space-time source element includes contributions to the deficit angles between simplices. This idea is used to correct proper distance in the Einstein-de Sitter (EdS) cosmology model yielding a fit of the Union2 Compilation supernova data that matches ΛCDM without having to invoke accelerating expansion or dark energy. A similar modification to LGT results in an adynamical account of quantum

  7. On the stochastic approach to inflation and the initial conditions in the universe

    Pollock, M.D.

    1986-05-01

    By applying stochastic methods to a theory in which a potential V(Φ) causes a period of quasi-expansion of the universe, Starobinsky has derived an expression for the probability distribution P(V) appropriate to chaotic inflation in the classical approximation. We obtain the corresponding expression for a broken-symmetry theory of gravity. For the Coleman-Weinberg potential, it appears most probable that the initial value of Φ is Φ_i ≈ 0, in which case inflation occurs naturally, because V(Φ_i) > 0
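
    For orientation, the stationary solution of the Fokker-Planck equation of stochastic inflation is often quoted in the form (a standard result, with the sign depending on the boundary condition chosen; this is not the paper's broken-symmetry expression):

      P_{\mathrm{eq}}(\Phi) \;\propto\; \exp\!\left(\pm\,\frac{3\, m_{\mathrm{Pl}}^{4}}{8\, V(\Phi)}\right),

    where the "+" (Hartle-Hawking-like) and "-" (tunneling-like) branches favour small and large V(\Phi) respectively; the tunneling-like branch is the one that peaks at the symmetric point of a Coleman-Weinberg potential.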

  8. Numerical Methods for Stochastic Computations A Spectral Method Approach

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
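
    A self-contained toy computation in the gPC spirit: expand u(ξ) = exp(ξ) with ξ ~ N(0,1) in probabilists' Hermite polynomials and recover the mean and variance from the expansion coefficients. This is a generic illustration of the book's subject, not an excerpt from it.

      import numpy as np
      from numpy.polynomial.hermite_e import hermegauss, hermeval
      from math import factorial

      order = 8
      nodes, weights = hermegauss(32)            # Gauss-Hermite (probabilists')
      weights = weights / np.sqrt(2 * np.pi)     # normalize to the N(0,1) measure

      f = np.exp(nodes)                          # model output at quadrature nodes
      coeffs = []
      for k in range(order + 1):
          ck = np.zeros(k + 1); ck[k] = 1.0
          Hek = hermeval(nodes, ck)              # He_k evaluated at the nodes
          # u_k = E[f He_k] / E[He_k^2], with E[He_k^2] = k!
          coeffs.append((weights * f * Hek).sum() / factorial(k))

      mean = coeffs[0]
      var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
      print(mean, np.e**0.5)                     # gPC mean vs exact e^{1/2}
      print(var, np.e**2 - np.e)                 # gPC variance vs exact e^2 - e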

  9. Dynamics of non-holonomic systems with stochastic transport

    Holm, D. D.; Putkaradze, V.

    2018-01-01

    This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.

  10. Stochastic Discount Factor Approach to International Risk-Sharing:A Robustness Check of the Bilateral Setting

    Hadzi-Vaskov, M.; Kool, C.J.M.

    2007-01-01

    This paper presents a robustness check of the stochastic discount factor approach to international (bilateral) risk-sharing given in Brandt, Cochrane, and Santa-Clara (2006). We demonstrate two main inherent limitations of the bilateral SDF approach to international risk-sharing. First, the discount

  11. A two-stage stochastic programming approach for operating multi-energy systems

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for the joint operation of multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...
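
    The two-stage structure can be illustrated with a deliberately small scenario-expansion LP (newsvendor-style recourse): a here-and-now decision x is made before the uncertainty resolves, and a recourse variable y_s adapts in each scenario. All numbers below are made up; the actual multi-energy operating model is far richer.

      import numpy as np
      from scipy.optimize import linprog

      c, q = 1.0, 2.5                            # purchase cost, selling price
      d = np.array([50.0, 80.0, 120.0])          # demand scenarios (assumed)
      p = np.array([0.3, 0.5, 0.2])              # scenario probabilities

      S = d.size
      # decision vector: [x, y_1, ..., y_S]; minimize c*x - sum_s p_s*q*y_s
      obj = np.concatenate(([c], -p * q))
      A_ub = np.hstack([-np.ones((S, 1)), np.eye(S)])   # recourse bound y_s <= x
      b_ub = np.zeros(S)
      bounds = [(0, None)] + [(0, ds) for ds in d]      # 0 <= y_s <= d_s

      res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      print(res.x[0], -res.fun)   # here-and-now decision x*, expected profit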

  12. Effects of artificial gravity on the cardiovascular system: Computational approach

    Diaz Artiles, Ana; Heldt, Thomas; Young, Laurence R.

    2016-09-01

    Artificial gravity has been suggested as a multisystem countermeasure against the negative effects of weightlessness. However, many questions regarding the appropriate configuration are still unanswered, including optimal g-level, angular velocity, gravity gradient, and exercise protocol. Mathematical models can provide unique insight into these questions, particularly when experimental data is very expensive or difficult to obtain. In this research effort, a cardiovascular lumped-parameter model is developed to simulate the short-term transient hemodynamic response to artificial gravity exposure combined with ergometer exercise, using a bicycle mounted on a short-radius centrifuge. The model is thoroughly described and preliminary simulations are conducted to show the model capabilities and potential applications. The model consists of 21 compartments (including systemic circulation, pulmonary circulation, and a cardiac model), and it also includes the rapid cardiovascular control systems (arterial baroreflex and cardiopulmonary reflex). In addition, the pressure gradient resulting from short-radius centrifugation is captured in the model using hydrostatic pressure sources located at each compartment. The model also includes the cardiovascular effects resulting from exercise such as the muscle pump effect. An initial set of artificial gravity simulations were implemented using the Massachusetts Institute of Technology (MIT) Compact-Radius Centrifuge (CRC) configuration. Three centripetal acceleration (artificial gravity) levels were chosen: 1 g, 1.2 g, and 1.4 g, referenced to the subject's feet. Each simulation lasted 15.5 minutes and included a baseline period, the spin-up process, the ergometer exercise period (5 minutes of ergometer exercise at 30 W with a simulated pedal cadence of 60 RPM), and the spin-down process. Results showed that the cardiovascular model is able to predict the cardiovascular dynamics during gravity changes, as well as the expected
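
    The flavor of a lumped-parameter compartment with a hydrostatic source can be conveyed in a few lines. The sketch below integrates a single two-element Windkessel compartment whose outflow is driven by compartment pressure plus a centrifugation-induced hydrostatic head; all parameter values are illustrative assumptions, not the 21-compartment model of the paper.

      import numpy as np

      R, C = 1.0, 1.5          # resistance (mmHg.s/ml), compliance (ml/mmHg), assumed
      rho_g = 0.78             # blood rho*g, roughly mmHg per cm of fluid column
      h = 20.0                 # compartment height below heart level (cm), assumed
      dt, T = 1e-3, 10.0

      def q_in(t):             # pulsatile inflow (toy waveform, assumed)
          return 90.0 * max(0.0, np.sin(2 * np.pi * t))

      p = 80.0                 # initial compartment pressure (mmHg)
      for k in range(int(T / dt)):
          t = k * dt
          p_h = rho_g * h                     # hydrostatic source under +Gz load
          q_out = (p + p_h) / R               # outflow driven by total pressure
          p += dt * (q_in(t) - q_out) / C     # dP/dt = (Qin - Qout) / C
      print(p)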

  13. Approaches to Validation of Models for Low Gravity Fluid Behavior

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that: most of the data is described by empirical correlation rather than fundamental relation; detailed measurements of the flow field have not been made; free surface shapes are observed but through thick plastic cylinders, and therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, but the zero-gravity time available has been only seconds.

  14. A measure theoretical approach to quantum stochastic processes

    Waldenfels, Wilhelm von

    2014-04-01

    Authored by a leading researcher in the field. Self-contained presentation of the subject matter. Examines a number of worked examples in detail. This monograph takes as starting point that abstract quantum stochastic processes can be understood as a quantum field theory in one space and in one time coordinate. As a result it is appropriate to represent operators as power series of creation and annihilation operators in normal-ordered form, which can be achieved using classical measure theory. Considering in detail four basic examples (e.g. a two-level atom coupled to a heat bath of oscillators), in each case the Hamiltonian of the associated one-parameter strongly continuous group is determined and the spectral decomposition is explicitly calculated in the form of generalized eigen-vectors. Advanced topics include the theory of the Hudson-Parthasarathy equation and the amplified oscillator problem. To that end, a chapter on white noise calculus has also been included.

  15. Modeling collective emotions: a stochastic approach based on Brownian agents

    Schweitzer, F.

    2010-01-01

    We develop an agent-based framework to model the emergence of collective emotions, which is applied to online communities. Agents' individual emotions are described by their valence and arousal. Using the concept of Brownian agents, these variables change according to a stochastic dynamics, which also considers the feedback from online communication. Agents generate emotional information, which is stored and distributed in a field modeling the online medium. This field affects the emotional states of agents in a non-linear manner. We derive conditions for the emergence of collective emotions, observable in a bimodal valence distribution. Depending on a saturated or a superlinear feedback between the information field and the agents' arousal, we further identify scenarios where collective emotions only appear once or in a repeated manner. The analytical results are illustrated by agent-based computer simulations. Our framework provides testable hypotheses about the emergence of collective emotions, which can be verified by data from online communities. (author)

  16. Hospital efficiency and transaction costs: a stochastic frontier approach.

    Ludwig, Martijn; Groot, Wim; Van Merode, Frits

    2009-07-01

    The make-or-buy decision of organizations is an important issue in the transaction cost theory, but is usually not analyzed from an efficiency perspective. Hospitals frequently have to decide whether to outsource or not. The main question we address is: Is the make-or-buy decision affected by the efficiency of hospitals? A one-stage stochastic cost frontier equation is estimated for Dutch hospitals. The make-or-buy decisions of ten different hospital services are used as explanatory variables to explain efficiency of hospitals. It is found that for most services the make-or-buy decision is not related to efficiency. Kitchen services are an important exception to this. Large hospitals tend to outsource less, which is supported by efficiency reasons. For most hospital services, outsourcing does not significantly affect the efficiency of hospitals. The focus on the make-or-buy decision may therefore be less important than often assumed.

  18. A new approach to stochastic transport via the functional Volterra expansion

    Ziya Akcasu, A.; Corngold, N.

    2005-01-01

    In this paper we present a new algorithm (FDA) for the calculation of the mean and the variance of the flux in stochastic transport when the transport equation contains a spatially random parameter θ(r), such as the density of the medium. The approach is based on the renormalized functional Volterra expansion of the flux around its mean. The attractive feature of the approach is that it explicitly displays the functional dependence of the flux on the products of θ(r_i), and hence enables one to take ensemble averages directly to calculate the moments of the flux in terms of the correlation functions of the underlying random process. The renormalized deterministic transport equation for the mean flux has been obtained to the second order in θ(r), and a functional relationship between the variance and the mean flux has been derived to calculate the variance to this order. The feasibility and accuracy of FDA have been demonstrated in the case of stochastic diffusion, using the diffusion equation with a spatially random diffusion coefficient. We also discuss the connection of FDA with the well-established approximation schemes in the field of stochastic linear differential equations, such as the Bourret approximation, developed by Van Kampen using cumulant expansion and by Terwiel using projection operator formalism, which has recently been extended to stochastic transport by Corngold. We hope that FDA's potential will be explored numerically in more realistic applications of stochastic transport. (authors)

  19. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties in the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation in gravity/gradiometry above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
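
    The Monte Carlo logic is easy to reproduce at toy scale. The numpy sketch below perturbs the density contrast of a buried sphere and propagates the draws to the vertical attraction g_z along a borehole; it uses a hand-coded point-mass formula rather than the Fatiando a Terra API used in the paper, and the geometry and distributions are assumptions.

      import numpy as np

      G = 6.674e-11                                # gravitational constant (SI)
      rng = np.random.default_rng(42)

      z_obs = np.linspace(0.0, 2000.0, 81)         # observation depths (m)
      x_off = 500.0                                # borehole-to-body offset (m)
      zc, radius = 1200.0, 150.0                   # body centre depth, radius (m)

      n_sim = 10_000
      drho = rng.normal(300.0, 50.0, n_sim)        # density contrast (kg/m^3)
      mass = drho * (4.0 / 3.0) * np.pi * radius**3

      dz = zc - z_obs[:, None]                     # vertical separations
      r3 = (x_off**2 + dz**2) ** 1.5
      gz = G * mass[None, :] * dz / r3 * 1e8       # vertical attraction, microGal

      print(gz.mean(axis=1).max(), gz.std(axis=1).max())  # peak mean, 1-sigma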

  20. Temporal gravity field modeling based on least square collocation with short-arc approach

    Ran, Jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, and two main objectives are discussed. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least-squares collocation method. The method has received less attention than the least-squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters provide valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced based on the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed from the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.
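
    For context, the generic least-squares collocation estimate of a signal s from observations ℓ has the form (standard geodesy notation, not the authors' exact formulation):

      \hat{s} = C_{s\ell}\,\bigl(C_{\ell\ell} + D\bigr)^{-1}\,\ell,

    where C_{s\ell} and C_{\ell\ell} are signal and observation covariance matrices and D is the noise covariance; in the short-arc setting, the pseudo-observations and their variance-covariance information enter through \ell and D.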

  1. The Approach to Defining Gravity Factors of Influence on the Foreign Trade Relations of Countries

    Kalyuzhna Nataliya G.

    2017-03-01

    Full Text Available The aim of the article is to determine the gravity factors influencing the foreign trade relations of countries, based on a comparative analysis of the classical specifications of the gravity model of foreign trade and the domestic experience in gravity modeling. It is substantiated that a gravity model is one of the tools of economic and mathematical modeling whose use is characterized by a high level of adequacy and ensures prediction of foreign trade conditions. The main approaches to the definition of explanatory variables in the gravity equation of foreign trade are analyzed, and the author's approach to the selection of the factors of the gravity model is proposed. As the first explanatory variable in the specification of the gravity model of foreign trade, characterizing the importance of the economies of the foreign trade partners, it is proposed to use GDP calculated at purchasing power parity, with an expected positive and statistically significant coefficient. As the second explanatory variable of the gravity equation of foreign trade, it is proposed to use a complex characteristic of the "trade distance" between countries, which reflects the current conditions of bilateral trade and depends on factors influencing the foreign trade turnover between countries both directly (the static proportionality of transport costs to geographical remoteness) and indirectly (the dynamic institutional conditions of bilateral relations). The expediency of using the world average annual price for oil as the quantitative equivalent of the "trade distance" index is substantiated. Prospects for further research in this direction include identifying the form and strength of influence of the basic gravity variables on the foreign trade relations of particular partner countries and determining the appropriateness of including additional factors in the gravity equation of foreign trade.

  2. Handover management in dense cellular networks: A stochastic geometry approach

    Arshad, Rabe; Elsawy, Hesham; Sorour, Sameh; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    Cellular operators are continuously densifying their networks to cope with the ever-increasing capacity demand. Furthermore, an extreme densification phase for cellular networks is foreseen to fulfill the ambitious fifth generation (5G) performance requirements. Network densification improves spectrum utilization and network capacity by shrinking base stations' (BSs) footprints and reusing the same spectrum more frequently over the spatial domain. However, network densification also increases the handover (HO) rate, which may diminish the capacity gains for mobile users due to HO delays. In highly dense 5G cellular networks, HO delays may neutralize or even negate the gains offered by network densification. In this paper, we present an analytical paradigm, based on stochastic geometry, to quantify the effect of HO delay on the average user rate in cellular networks. To this end, we propose a flexible handover scheme to reduce HO delay in case of highly dense cellular networks. This scheme allows skipping the HO procedure with some BSs along users' trajectories. The performance evaluation and testing of this scheme for only single HO skipping shows considerable gains in many practical scenarios. © 2016 IEEE.
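
    A flavor of the stochastic-geometry machinery: for base stations forming a Poisson point process of intensity λ and a user moving at speed v, a standard result gives the Poisson-Voronoi cell-boundary crossing (handover) rate as

      H = \frac{4\, v\, \sqrt{\lambda}}{\pi},

    so densification raises the HO rate as the square root of the BS density, which is the effect the HO-skipping scheme is designed to offset. This is a textbook expression quoted for orientation, not a formula lifted from the paper.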

  3. A stochastic programming approach towards optimization of biofuel supply chain

    Azadeh, Ali; Vafa Arani, Hamed; Dashti, Hossein

    2014-01-01

    Bioenergy has been recognized as an important source of energy that will reduce dependency on petroleum. It would have a positive impact on the economy, environment, and society. Production of bioenergy is expected to increase. As a result, we foresee an increase in the number of biorefineries in the near future. This paper analyzes challenges with supplying biomass to a biorefinery and shipping biofuel to demand centers. A stochastic linear programming model is proposed within a multi-period planning framework to maximize the expected profit. The model deals with a time-staged, multi-commodity, production/distribution system, facility locations and capacities, technologies, and material flows. We illustrate the model outputs and discuss the results through numerical examples considering disruptions in the biofuel supply chain. Finally, sensitivity analyses are performed to gain managerial insights on how profit changes due to existing uncertainties. - Highlights: • A robust model of the biofuel SC is proposed and a sensitivity analysis implemented. • Demand for products is a function of price, and geometric Brownian motion (GBM) is used for biofuel prices. • Uncertainties in the SC network are captured through defining probabilistic scenarios. • Both traditional feedstock and lignocellulosic biomass are considered for biofuel production. • The developed model is applicable to any related biofuel supply chain regardless of region

  5. A Stochastic Approach to Noise Modeling for Barometric Altimeters

    Angelo Maria Sabatini

    2013-11-01

    Full Text Available The question whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior problems. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The barometric altimeter noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes, and a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes, the effects of which are prominent, respectively, for long-time and short-time motion tracking; and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive-moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise stationary GM component, and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
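
    The decomposition lends itself to a compact simulation. The sketch below synthesizes the three noise components: a slow deterministic trend, a first-order Gauss-Markov process with correlation time tau, and white noise. The value of tau and the standard deviations are illustrative assumptions, not the identified values from the paper.

      import numpy as np

      dt, n = 0.1, 6000                          # 10 Hz samples, 10 minutes
      tau, sigma_gm, sigma_w = 30.0, 0.4, 0.15   # correlation time (s), sigmas (m)
      phi = np.exp(-dt / tau)                    # GM transition coefficient

      rng = np.random.default_rng(7)
      gm = np.zeros(n)
      # driving-noise sigma keeps the GM process stationary at sigma_gm
      q = sigma_gm * np.sqrt(1.0 - phi**2)
      for k in range(1, n):
          gm[k] = phi * gm[k - 1] + q * rng.standard_normal()

      mean_drift = 0.002 * np.arange(n) * dt     # deterministic slow trend (m)
      noise = mean_drift + gm + sigma_w * rng.standard_normal(n)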

  6. A stochastic approach to the derivation of exemption and clearance levels

    Deckert, A.

    1997-01-01

    Deciding what clearance levels are appropriate for a particular waste stream inherently involves a number of uncertainties. Some of these uncertainties can be quantified using stochastic modeling techniques, which can aid the process of decision making. In this presentation the German approach to dealing with the uncertainties involved in setting clearance levels is addressed. (author)

  7. Stochastic Discount Factor Approach to International Risk-Sharing: Evidence from Fixed Exchange Rate Episodes

    Hadzi-Vaskov, M.; Kool, C.J.M.

    2007-01-01

    This paper presents evidence of the stochastic discount factor approach to international risk-sharing applied to fixed exchange rate regimes. We calculate risk-sharing indices for two episodes of fixed or very rigid exchange rates: the Eurozone before and after the introduction of the Euro, and

  8. Market Efficiency of Oil Spot and Futures: A Stochastic Dominance Approach

    H.H. Lean (Hooi Hooi); M.J. McAleer (Michael); W.-K. Wong (Wing-Keung)

    2010-01-01

    This paper examines the market efficiency of oil spot and futures prices by using a stochastic dominance (SD) approach. As there is no evidence of an SD relationship between oil spot and futures, we conclude that there is no arbitrage opportunity between these two markets, and that both

  9. A classical approach to higher-derivative gravity

    Accioly, A.J.

    1988-01-01

    Two classical routes towards higher-derivative gravity theory are described. The first one is a geometrical route, starting from first principles. The second route is a formal one, and is based on a recent theorem by Castagnino et al. [J. Math. Phys. 28 (1987) 1854]. A cosmological solution of the higher-derivative field equations is exhibited which, in a classical framework, singles out this gravitation theory. (author)

  10. Continuous strong Markov processes in dimension one a stochastic calculus approach

    Assing, Sigurd

    1998-01-01

    The book presents an in-depth study of arbitrary one-dimensional continuous strong Markov processes using methods of stochastic calculus. Departing from the classical approaches, a unified investigation of regular as well as arbitrary non-regular diffusions is provided. A general construction method for such processes, based on a generalization of the concept of a perfect additive functional, is developed. The intrinsic decomposition of a continuous strong Markov semimartingale is discovered. The book also investigates relations to stochastic differential equations and fundamental examples of irregular diffusions.

  11. Stochastic dynamics of new inflation

    Nakao, Ken-ichi; Nambu, Yasusada; Sasaki, Misao.

    1988-07-01

    We investigate thoroughly the dynamics of an inflation-driving scalar field in terms of an extended version of the stochastic approach proposed by Starobinsky and discuss the spacetime structure of the inflationary universe. To avoid any complications which might arise due to quantum gravity, we concentrate our discussions on the new inflationary universe scenario in which all the energy scales involved are well below the Planck mass. The investigation is done both analytically and numerically. In particular, we present a full numerical analysis of the stochastic scalar field dynamics on the phase space. Then implications of the results are discussed. (author)

  12. Field-theoretic approach to gravity in the flat space-time

    Cavalleri, G [Centro Informazioni Studi Esperienze, Milan (Italy); Milan Univ. (Italy), Ist. di Fisica]; Spinelli, G [Istituto di Matematica del Politecnico di Milano, Milano (Italy)]

    1980-01-01

    This paper discusses how the field-theoretical approach to gravity, starting from flat space-time, is wider than the Einstein approach. The flat approach is able to predict the structure of the observable space as a consequence of the behaviour of the particle proper masses. The field equations are formally equal to Einstein's equations without the cosmological term.

  13. On the stochastic approach to marine population dynamics

    Eduardo Ferrandis

    2007-03-01

    Full Text Available The purpose of this article is to deepen and structure the statistical basis of marine population dynamics. The starting point is the correspondence between the concepts of mortality, survival and lifetime distribution. This is the kernel of the possibilities that survival analysis techniques offer to marine population dynamics. A rigorous definition of survival and mortality based on their properties and their probabilistic versions is briefly presented. Some well-established models for lifetime distribution, which generalise the usual simple exponential distribution, might be used with their corresponding survivals and mortalities. A critical review of some published models is also made, including original models proposed in the way opened by Caddy (1991) and Sparholt (1990), which allow for a continuously decreasing natural mortality. Considering these elements, the pure death process dealt with in the literature is used as a theoretical basis for the evolution of a marine cohort. The elaboration of this process is based on Chiang's study of the probability distribution of the life table (Chiang, 1960) and provides specific structured models for stock evolution as a Markovian process. These models may introduce new ideas in the line of thinking developed by Gudmundsson (1987) and Sampson (1990) in order to model the evolution of a marine cohort by stochastic processes. The suitable approximation of these processes by means of Gaussian processes may allow theoretical and computational multivariate Gaussian analysis to be applied to the probabilistic treatment of fisheries issues. As a consequence, the necessary catch equation appears as a stochastic integral with respect to the mentioned Markovian process of the stock. The solution of this equation is available when the mortalities are proportional, hence the use of the proportional hazards model (Cox, 1959). The assumption of these proportional mortalities leads naturally to the construction of a
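
    The pure death process invoked here has a closed-form state distribution. For a cohort of initial size N_0 subject to a constant total mortality rate Z, Chiang's life-table argument gives the binomial law

      P\{N(t) = n \mid N(0) = N_0\} = \binom{N_0}{n}\, e^{-nZt}\,\bigl(1 - e^{-Zt}\bigr)^{N_0 - n},

    i.e. each individual survives to time t independently with probability e^{-Zt}; the time-varying mortalities discussed in the article generalize this exponent accordingly.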

  14. Stochastic Fractional Programming Approach to a Mean and Variance Model of a Transportation Problem

    V. Charles

    2011-01-01

    Full Text Available In this paper, we propose a stochastic programming model which considers a ratio of two nonlinear functions and probabilistic constraints. In the former, only the expected model has been proposed, without accounting for variability in the model. On the other hand, in the variance model, variability played a vital role without concern for its counterpart, namely the expected model. Further, the expected model optimizes the ratio of two linear cost functions, whereas the variance model optimizes the ratio of two non-linear functions; that is, the stochastic nature of the denominator and numerator, together with consideration of both expectation and variability, leads to a non-linear fractional program. In this paper, a transportation model with a stochastic fractional programming (SFP) problem approach is proposed, which strikes a balance between the previous models available in the literature.

  15. A polynomial-chaos-expansion-based building block approach for stochastic analysis of photonic circuits

    Waqas, Abi; Melati, Daniele; Manfredi, Paolo; Grassi, Flavia; Melloni, Andrea

    2018-02-01

    The Building Block (BB) approach has recently emerged in photonics as a suitable strategy for the analysis and design of complex circuits. Each BB can be foundry related and contains a mathematical macro-model of its functionality. As is well known, statistical variations in fabrication processes can have a strong effect on their functionality and ultimately affect the yield. In order to predict the statistical behavior of the circuit, proper analysis of the effects of uncertainties is crucial. This paper presents a method to build a novel class of Stochastic Process Design Kits for the analysis of photonic circuits. The proposed design kits directly store the information on the stochastic behavior of each building block in the form of a generalized-polynomial-chaos-based augmented macro-model obtained by properly exploiting stochastic collocation and Galerkin methods. Using this approach, we demonstrate that the augmented macro-models of the BBs can be calculated once and stored in a BB (foundry dependent) library and then used for the analysis of any desired circuit. The main advantage of this approach, shown here for the first time in photonics, is that the stochastic moments of an arbitrary photonic circuit can be evaluated by a single simulation only, without the need for repeated simulations. The accuracy and the significant speed-up with respect to classical Monte Carlo analysis are verified by means of a classical photonic circuit example with multiple uncertain variables.

  16. Approach of regional gravity field modeling from GRACE data for improvement of geoid modeling for Japan

    Kuroishi, Y.; Lemoine, F. G.; Rowlands, D. D.

    2006-12-01

    The latest gravimetric geoid model for Japan, JGEOID2004, suffers from errors at long wavelengths (around 1000 km) in a range of +/- 30 cm. The model was developed by combining surface gravity data with a global marine altimetric gravity model, using EGM96 as a foundation, and the errors at long wavelength are presumably attributed to EGM96 errors. The Japanese islands and their vicinity are located in a region of plate convergence boundaries, producing substantial gravity and geoid undulations in a wide range of wavelengths. Because of the geometry of the islands and trenches, precise information on gravity in the surrounding oceans should be incorporated in detail, even if the geoid model is required to be accurate only over land. The Kuroshio Current, which runs south of Japan, causes high sea surface variability, making altimetric gravity field determination complicated. To reduce the long-wavelength errors in the geoid model, we are investigating GRACE data for regional gravity field modeling at long wavelengths in the vicinity of Japan. Our approach is based on exclusive use of inter- satellite range-rate data with calibrated accelerometer data and attitude data, for regional or global gravity field recovery. In the first step, we calibrate accelerometer data in terms of scales and biases by fitting dynamically calculated orbits to GPS-determined precise orbits. The calibration parameters of accelerometer data thus obtained are used in the second step to recover a global/regional gravity anomaly field. This approach is applied to GRACE data obtained for the year 2005 and resulting global/regional gravity models are presented and discussed.

  17. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    S. Sofana Reka

    2016-09-01

    Full Text Available This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub supporting real-time computing for handling huge storage of data. A stochastic programming model is developed with the cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.

  18. Stochastic optimization in insurance a dynamic programming approach

    Azcue, Pablo

    2014-01-01

    The main purpose of the book is to show how a viscosity approach can be used to tackle control problems in insurance. The problems covered are the maximization of survival probability as well as the maximization of dividends in the classical collective risk model. The authors consider the possibility of controlling the risk process by reinsurance as well as by investments. They show that optimal value functions are characterized as either the unique or the smallest viscosity solution of the associated Hamilton-Jacobi-Bellman equation; they also study the structure of the optimal strategies and show how to find them. The viscosity approach was widely used in control problems related to mathematical finance but until quite recently it was not used to solve control problems related to actuarial mathematics. This book is designed to familiarize the reader with this approach. The intended audience is graduate students as well as researchers in this area.

  19. An Augmented Incomplete Factorization Approach for Computing the Schur Complement in Stochastic Optimization

    Petra, Cosmin G.; Schenk, Olaf; Lubin, Miles; Gärtner, Klaus

    2014-01-01

    We present a scalable approach and implementation for solving stochastic optimization problems on high-performance computers. In this work we revisit the sparse linear algebra computations of the parallel solver PIPS with the goal of improving the shared-memory performance and decreasing the time to solution. These computations consist of solving sparse linear systems with multiple sparse right-hand sides and are needed in our Schur-complement decomposition approach to compute the contribution of each scenario to the Schur matrix. Our novel approach uses an incomplete augmented factorization implemented within the PARDISO linear solver and an outer BiCGStab iteration to efficiently absorb pivot perturbations occurring during factorization. This approach is capable of both efficiently using the cores inside a computational node and exploiting sparsity of the right-hand sides. We report on the performance of the approach on high-performance computers when solving stochastic unit commitment problems of unprecedented size (billions of variables and constraints) that arise in the optimization and control of electrical power grids. Our numerical experiments suggest that supercomputers can be efficiently used to solve power grid stochastic optimization problems with thousands of scenarios under the strict "real-time" requirements of power grid operators. To our knowledge, this has not been possible prior to the present work. © 2014 Society for Industrial and Applied Mathematics.
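
    The per-scenario kernel of the decomposition is easy to state in dense form. The numpy sketch below accumulates S = B - sum_s C_s A_s^{-1} C_s^T by solving each scenario block against its multiple right-hand sides; it is a dense stand-in for the sparse PARDISO-based computation described above, with random matrices as placeholders.

      import numpy as np

      rng = np.random.default_rng(3)
      n1, n2, S = 6, 4, 8                    # first-stage size, scenario size, count
      B = np.eye(n1) * 10.0                  # first-stage block (placeholder)
      schur = B.copy()
      for _ in range(S):
          A = rng.standard_normal((n2, n2)) + np.eye(n2) * 5.0  # scenario block
          C = rng.standard_normal((n1, n2))                     # coupling block
          # solve A X = C^T for the multiple right-hand sides, then subtract
          X = np.linalg.solve(A, C.T)
          schur -= C @ X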

  20. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    Khan, Junaid Ali; Raja, Muhammad Asif Zahoor; Qureshi, Ijaz Mansoor

    2011-01-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with a genetic algorithm and local search by the pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike other numerical techniques of comparable accuracy. With the advent of neuroprocessors and digital signal processors the method becomes particularly interesting due to the expected substantial gains in execution speed. (general)
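
    The unsupervised-error idea can be demonstrated on y' = -y, y(0) = 1, with a one-hidden-layer tanh network and a trial solution that enforces the initial condition by construction. The crude random search below stands in for the paper's genetic-algorithm-plus-pattern-search hybrid; the network size and search settings are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 25)              # collocation points

      def residual(params):
          a, w, b = params.reshape(3, -1)        # one hidden layer of 5 tanh units
          h = np.tanh(np.outer(t, w) + b)        # hidden activations, shape (25, 5)
          N = h @ a                              # network output N(t)
          dN = ((1.0 - h**2) * w) @ a            # dN/dt by the chain rule
          y = 1.0 + t * N                        # trial solution satisfies y(0) = 1
          dy = N + t * dN
          return np.mean((dy + y) ** 2)          # unsupervised residual of y' = -y

      best, best_err = rng.normal(size=15), np.inf
      for _ in range(20000):                     # crude global + local random search
          cand = best + rng.normal(scale=0.1, size=15)
          err = residual(cand)
          if err < best_err:
              best, best_err = cand, err

      a, w, b = best.reshape(3, -1)
      y1 = 1.0 + 1.0 * (np.tanh(1.0 * w + b) @ a)
      print(y1, np.exp(-1.0))                    # trial y(1) vs exact e^{-1}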

  1. A GOCE-only global gravity field model by the space-wise approach

    Migliaccio, Frederica; Reguzzoni, Mirko; Gatti, Andrea

    2011-01-01

    The global gravity field model computed by the space-wise approach is one of three official solutions delivered by ESA from the analysis of the GOCE data. The model consists of a set of spherical harmonic coefficients and the corresponding error covariance matrix. The main idea behind this approach...... the orbit to reduce the noise variance and correlation before gridding the data. In the first release of the space-wise approach, based on a period of about two months, some prior information coming from existing gravity field models entered into the solution especially at low degrees and low orders...... degrees; the second is an internally computed GOCE-only prior model to be used in place of the official quick-look model, thus removing the dependency on EIGEN5C especially in the polar gaps. Once the procedure to obtain a GOCE-only solution has been outlined, a new global gravity field model has been...

  2. Higgs inflation and quantum gravity: an exact renormalisation group approach

    Saltas, Ippocratis D.

    2016-01-01

    We use the Wilsonian functional Renormalisation Group (RG) to study quantum corrections for the Higgs inflationary action including the effect of gravitons, and analyse the leading-order quantum gravitational corrections to the Higgs quartic coupling, as well as its non-minimal coupling to gravity and Newton's constant, in the inflationary regime and beyond. We explain how within this framework the effect of Higgs and graviton loops can be sufficiently suppressed during inflation, and we also place a bound on the corresponding value of the infrared RG cut-off scale during inflation. Finally, we briefly discuss the potential embedding of the model within the scenario of Asymptotic Safety, while all main equations are explicitly presented

  3. Hamiltonian Approach to 2+1 Dimensional Gravity

    Cantini, L.; Menotti, P.; Seminara, D.

    2002-12-01

    It is shown that the reduced particle dynamics of 2+1 dimensional gravity in the maximally slicing gauge has hamiltonian form. We give the exact diffeomorphism which transforms the spinning cone metric in the Deser, Jackiw, 't Hooft gauge to the maximally slicing gauge. It is explicitly shown that the boundary term in the action, written in hamiltonian form, gives the hamiltonian for the reduced particle dynamics. The quantum mechanical translation of the two-particle hamiltonian gives rise to the logarithm of the Laplace-Beltrami operator on a cone whose angular deficit is given by the total energy of the system, irrespective of the masses of the particles, thus proving at the quantum level a conjecture by 't Hooft on the two-particle dynamics.

  4. Covariant approach of perturbations in Lovelock type brane gravity

    Bagatella-Flores, Norma; Campuzano, Cuauhtemoc; Cruz, Miguel; Rojas, Efraín

    2016-12-01

    We develop a covariant scheme to describe the dynamics of small perturbations on Lovelock type extended objects propagating in a flat Minkowski spacetime. The higher-dimensional analogue of the Jacobi equation in this theory becomes a wave type equation for a scalar field Φ. Within this framework, we analyse the stability of membranes with a de Sitter geometry, where we find that the Jacobi equation specializes to a Klein-Gordon (KG) equation for Φ possessing a tachyonic mass. This shows that, to some extent, these types of extended objects share the symmetries of the Dirac-Nambu-Goto (DNG) action, which is by no means coincidental because the DNG model is the simplest included in this type of gravity.

  5. Technical Efficiency in the Chilean Agribusiness Sector - a Stochastic Meta-Frontier Approach

    Larkner, Sebastian; Brenes Muñoz, Thelma; Aedo, Edinson Rivera; Brümmer, Bernhard

    2013-01-01

    The Chilean economy is strongly export-oriented, which is also true for the Chilean agribusiness industry. This paper investigates the technical efficiency of the Chilean food processing industry between 2001 and 2007. We use a dataset of 2,471 firms in the food processing industry. The observations are from the 'Annual National Industrial Survey'. A stochastic meta-frontier approach is used in order to analyse the drivers of technical efficiency. We include variables capturing the effec...

  6. A perturbative approach to neutron stars in f(T, T)-gravity

    Pace, Mark; Said, Jackson Levi [University of Malta, Department of Physics, Msida (Malta); University of Malta, Institute of Space Sciences and Astronomy, Msida (Malta)

    2017-05-15

    We derive a Tolman-Oppenheimer-Volkoff equation in neutron star systems within the modified f(T, T)-gravity class of models using a perturbative approach. In our approach, f(T, T)-gravity is considered on a static spherically symmetric space-time. In this instance the metric is built from a more fundamental vierbein which can be used to relate inertial and global coordinates. A linear function f = T(r) + T(r) + χh(T, T) + O(χ²) is taken as the Lagrangian density for the gravitational action. Finally, we impose the polytropic equation of state of the neutron star upon the derived equations in order to derive the mass profile and mass-central density relations of the neutron star in f(T, T)-gravity. (orig.)
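
    For reference, the GR limit that the perturbative expansion is built around (χ → 0) is the standard Tolman-Oppenheimer-Volkoff system, in units G = c = 1:

      \frac{dp}{dr} = -\,\frac{(\rho + p)\,\bigl(m(r) + 4\pi r^{3} p\bigr)}{r\,\bigl(r - 2m(r)\bigr)}, \qquad \frac{dm}{dr} = 4\pi r^{2}\rho,

    closed by a polytropic equation of state p = K\rho^{\gamma}; the f(T, T) corrections then enter at order χ. The displayed equations are the standard GR ones, not the modified equations derived in the paper.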

  7. Squeezing more information out of time variable gravity data with a temporal decomposition approach

    Barletta, Valentina Roberta; Bordoni, A.; Aoudia, A.

    2012-01-01

    an explorative approach based on a suitable time series decomposition, which does not rely on predefined time signatures. The comparison and validation against the fitting approach commonly used in the GRACE literature shows a very good agreement for what concerns trends and periodic signals on one side......A measure of the Earth's gravity contains contributions from the solid Earth as well as climate-related phenomena that cannot be easily distinguished in either time or space. After more than 7 years, the GRACE gravity data available now support more elaborate analysis of the time series. We propose...... used to assess the possibility of finding evidence of meaningful geophysical signals different from hydrology over Africa in GRACE data. In this case we conclude that hydrological phenomena are dominant and so time variable gravity data in Africa can be directly used to calibrate hydrological models.

  8. Lignin Formation and the Effects of Gravity: A New Approach

    Lewis, Norman G.

    1997-01-01

    Two aspects of considerable importance in the enigmatic processes associated with lignification have made excellent progress. The first is that, even in a microgravity environment, compression wood formation, and hence altered lignin deposition, can be induced upon mechanically bending the stems of woody gymnosperms. It now needs to be established if an organism reorientating its woody stem tissue will generate this tissue in microgravity, in the absence of externally applied pressure. If it does not, then gravity has no effect on its formation, and instead it results from alterations in the stress gradient experienced by the organism impacted. The second area of progress involves establishing how the biochemical pathway to lignin is regulated, particularly with respect to selective monolignol biosynthesis. This is an important question since individual monomer deposition occurs in a temporally and spatially specific manner. In this regard, the elusive metabolic switch between E-p-coumaryl alcohol and E-coniferyl alcohol synthesis has been detected, the significance of which now needs to be defined at the enzyme and gene level. Switching between monolignol synthesis is important, since it is viewed to be a consequence of different perceptions by plants in the gravitational load experienced, and thus in the control of the type of lignification response. Additional experiments also revealed the rate-limiting processes involved in monolignol synthesis, and suggest that a biological system (involving metabolite concentrations, as well as enzymatic and gene (in)activation processes) is involved, rather than a single rate-limiting step.

  9. Cosmological histories in bimetric gravity: a graphical approach

    Mörtsell, E.

    2017-01-01

    The bimetric generalization of general relativity has been proven to be able to give an accelerated background expansion consistent with observations. Apart from the energy densities coupling to one or both of the metrics, the expansion will depend on the cosmological constant contribution to each of them, as well as the three parameters describing the interaction between the two metrics. Even for fixed values of these parameters, several possible solutions, so-called branches, can exist. Different branches can give similar background expansion histories for the observable metric, but may have different properties regarding, for example, the existence of ghosts and the rate of structure growth. In this paper, we outline a method to find viable solution branches for arbitrary parameter values. We show how possible expansion histories in bimetric gravity can be inferred qualitatively, by picturing the ratio of the scale factors of the two metrics as the spatial coordinate of a particle rolling along a frictionless track. A particularly interesting example discussed is a specific set of parameter values, where a cosmological dark matter background is mimicked without introducing ghost modes into the theory.

  10. Stochastic Loewner evolution as an approach to conformal field theory

    Mueller-Lohmann, Annekathrin

    2008-01-01

    The main focus of this work lies on the relationship between two-dimensional boundary Conformal Field Theories (BCFTs) and SCHRAMM-LOEWNER Evolutions (SLEs) as motivated by their connection to the scaling limit of Statistical Physics models at criticality. The BCFT approach used for the past 25 years is based on the algebraic formulation of local objects such as fields and their correlations in these models. Introduced in 1999, SLE describes the physical properties from a probabilistic point of view, studying measures on growing curves, i.e. global objects such as cluster interfaces. After a short motivation of the topic, followed by a more detailed introduction to two-dimensional boundary Conformal Field Theory and SCHRAMM-LOEWNER Evolution, we present the results of our original work. We extend the method of obtaining SLE variants for a change of measure of the single SLE to derive the most general BCFT model that can be related to SLE. Moreover, we interpret the change of the measure in the context of physics and Probability Theory. In addition, we discuss the meaning of bulk fields in BCFT as bulk force-points for the SLE variant SLE (κ, vector ρ). Furthermore, we investigate the short-distance expansion of the boundary condition changing fields, creating cluster interfaces that can be described by SLE, with other boundary or bulk fields. Thereby we derive new SLE martingales related to the existence of boundary fields with vanishing descendant on level three. We motivate that the short-distance scaling law of these martingales as adjustment of the measure can be interpreted as the SLE probability of curves coming close to the location of the second field. Finally, we extend the algebraic κ-relation for the allowed variances in multiple SLE, arising due to the commutation requirement of the infinitesimal growth operators, to the joint growth of two SLE traces. The analysis straightforwardly suggests the form of the infinitesimal LOEWNER mapping of joint

  11. Revisiting the Cape Cod bacteria injection experiment using a stochastic modeling approach

    Maxwell, R.M.; Welty, C.; Harvey, R.W.

    2007-01-01

    Bromide and resting-cell bacteria tracer tests conducted in a sandy aquifer at the U.S. Geological Survey Cape Cod site in 1987 were reinterpreted using a three-dimensional stochastic approach. Bacteria transport was coupled to colloid filtration theory through functional dependence of local-scale colloid transport parameters upon hydraulic conductivity and seepage velocity in a stochastic advection-dispersion/attachment-detachment model. Geostatistical information on the hydraulic conductivity (K) field that was unavailable at the time of the original test was utilized as input. Using geostatistical parameters, a groundwater flow and particle-tracking model of conservative solute transport was calibrated to the bromide-tracer breakthrough data. An optimization routine was employed over 100 realizations to adjust the mean and variance of the natural logarithm of the hydraulic conductivity (ln K) field to achieve the best fit of a simulated, average bromide breakthrough curve. A stochastic particle-tracking model for the bacteria was run without adjustments to the local-scale colloid transport parameters. Good predictions of mean bacteria breakthrough were achieved using several approaches for modeling components of the system. Simulations incorporating the recent Tufenkji and Elimelech (Environ. Sci. Technol. 2004, 38, 529-536) correlation equation for estimating single collector efficiency were compared to those using the older Rajagopalan and Tien (AIChE J. 1976, 22, 523-533) model. Both appeared to work equally well at predicting mean bacteria breakthrough using a constant mean bacteria diameter for this set of field conditions. Simulations using a distribution of bacterial cell diameters available from original field notes yielded a slight improvement in the model and data agreement compared to simulations using an average bacterial diameter. The stochastic approach based on estimates of local-scale parameters for the bacteria-transport process reasonably captured
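
    The coupling at the heart of this approach can be sketched in a few lines: local hydraulic conductivity sets the seepage velocity, which sets the attachment rate through clean-bed colloid filtration theory. The snippet below uses a placeholder power law for the single collector efficiency where the paper would use the Tufenkji-Elimelech or Rajagopalan-Tien correlations, and all parameter values are illustrative rather than Cape Cod calibrations.

    # Sketch of the stochastic coupling: local hydraulic conductivity K
    # controls seepage velocity v, which controls the colloid attachment rate
    # through clean-bed filtration theory,
    #   k_att = 3 (1 - theta) / (2 * d_c) * eta * alpha * v,
    # so the fraction surviving transport over length L is exp(-k_att * L / v).
    import numpy as np

    rng = np.random.default_rng(42)
    n_real = 100                          # number of lnK realizations, as in the paper
    mu_lnK, sigma_lnK = np.log(1e-3), 1.0 # lnK statistics (assumed), K in m/s
    theta, grad = 0.35, 0.005             # porosity and hydraulic gradient (assumed)
    d_c, alpha, L = 5e-4, 0.01, 6.9       # grain size (m), sticking efficiency, distance (m)

    K = rng.lognormal(mu_lnK, sigma_lnK, n_real)
    v = K * grad / theta                  # local seepage velocity

    # Placeholder for a collector-efficiency correlation (NOT Tufenkji-Elimelech).
    eta = 0.01 * (v / v.mean()) ** (-0.4)
    k_att = 3.0 * (1.0 - theta) / (2.0 * d_c) * eta * alpha * v
    frac_breakthrough = np.exp(-k_att * L / v)

    print(f"mean surviving fraction over {L} m: {frac_breakthrough.mean():.3e}")
    print("5th-95th percentile:", np.percentile(frac_breakthrough, [5, 95]))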

  12. All-loop calculations of total, elastic and single diffractive cross sections in RFT via the stochastic approach

    Kolevatov, R. S.; Boreskov, K. G.

    2013-01-01

    We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single diffractive cross sections, with all Pomeron loops taken into account, are obtained.

  13. All-loop calculations of total, elastic and single diffractive cross sections in RFT via the stochastic approach

    Kolevatov, R. S. [SUBATECH, Ecole des Mines de Nantes, 4 rue Alfred Kastler, 44307 Nantes Cedex 3 (France); Boreskov, K. G. [Institute of Theoretical and Experimental Physics, 117259, Moscow (Russian Federation)

    2013-04-15

    We apply the stochastic approach to the calculation of the Reggeon Field Theory (RFT) elastic amplitude and its single diffractive cut. The results for the total, elastic and single diffractive cross sections, with all Pomeron loops taken into account, are obtained.

  14. Benchmarking the stochastic time-dependent variational approach for excitation dynamics in molecular aggregates

    Chorošajev, Vladimir [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Gelzinis, Andrius; Valkunas, Leonas [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania); Department of Molecular Compound Physics, Center for Physical Sciences and Technology, Sauletekio 3, 10222 Vilnius (Lithuania); Abramavicius, Darius, E-mail: darius.abramavicius@ff.vu.lt [Department of Theoretical Physics, Faculty of Physics, Vilnius University, Sauletekio 9-III, 10222 Vilnius (Lithuania)

    2016-12-20

    Highlights: • The Davydov ansätze can be used for finite temperature simulations with an extension. • The accuracy is high if the system is strongly coupled to the environmental phonons. • The approach can simulate time-resolved fluorescence spectra. - Abstract: The time-dependent variational approach is a convenient method to characterize the excitation dynamics in molecular aggregates for different strengths of system-bath interaction, and it does not require any additional perturbative schemes. Until recently, however, this method was only applicable in the zero-temperature case. It has become possible to extend this method to finite temperatures with the introduction of the stochastic time-dependent variational approach. Here we present a comparison between this approach and the exact hierarchical equations of motion approach for describing excitation dynamics in a broad range of temperatures. We calculate electronic population evolution, absorption and auxiliary time-resolved fluorescence spectra in different regimes and find that the stochastic approach shows excellent agreement with the exact approach when the system-bath coupling is sufficiently large and temperatures are high. The differences between the two methods are larger when temperatures are lower or the system-bath coupling is small.

  15. Multi-period natural gas market modeling: Applications, stochastic extensions and solution approaches

    Egging, Rudolf Gerardus

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables of which 763 first-stage variables, however using BD did not result in

  16. Multi-Period Natural Gas Market Modeling. Applications, Stochastic Extensions and Solution Approaches

    Egging, R.G.

    2010-11-01

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables of which 763 first-stage variables, however using BD did not result in
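
    The value of the stochastic formulation can be seen on a deliberately tiny analogue: a first-stage capacity choice made before the demand scenario is revealed, solved by brute force below. This is not the dissertation's mixed complementarity model or its Benders decomposition, and all numbers are invented.

    # Toy two-stage stochastic capacity-expansion problem: a first-stage
    # pipeline capacity x is chosen before knowing which demand scenario
    # occurs; recourse profit is then earned by selling min(capacity, demand).
    # Brute force over a grid is enough to show how hedging shifts the
    # investment relative to planning for the expected demand.
    import numpy as np

    scenarios = {                    # demand level and probability (assumed)
        "cartel":         (80.0, 0.25),
        "baseline":       (120.0, 0.50),
        "fast_depletion": (60.0, 0.25),
    }
    price, unit_capex = 4.0, 2.5     # sale price and annualized capacity cost (assumed)

    def expected_profit(x):
        rec = sum(p * price * min(x, d) for d, p in scenarios.values())
        return rec - unit_capex * x

    grid = np.linspace(0.0, 150.0, 1501)
    profits = np.array([expected_profit(x) for x in grid])
    x_stoch = grid[profits.argmax()]

    # Deterministic benchmark: plan for the expected demand instead of hedging.
    d_mean = sum(p * d for d, p in scenarios.values())
    print(f"expected demand:             {d_mean:.1f}")
    print(f"stochastic optimal capacity: {x_stoch:.1f}")
    print(f"expected profit at optimum:  {expected_profit(x_stoch):.1f}")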

  17. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
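
    A minimal SROM construction is sketched below: a random input is replaced by a handful of samples whose probabilities are optimized to match the input's moments and CDF, after which response statistics come from the same number of independent deterministic model calls. Real SROM implementations also optimize the sample locations, and the "deterministic model" here is a stand-in for a topology-optimization solve; the input distribution and all values are assumptions.

    # Minimal stochastic reduced order model (SROM): replace a random input by
    # m samples with probabilities chosen to match its first two moments and
    # its CDF, then propagate through m independent deterministic calls.
    import numpy as np
    from scipy import optimize, stats

    dist = stats.lognorm(s=0.25, scale=1.0)   # uncertain input (assumed)
    m = 5
    x = dist.ppf(np.linspace(0.1, 0.9, m))    # fixed sample locations (quantiles)
    grid = np.linspace(dist.ppf(0.01), dist.ppf(0.99), 200)

    def srom_error(p):
        """Mismatch in mean, variance, and CDF between the SROM and the target."""
        mean_err = (p @ x - dist.mean()) ** 2
        var_err = (p @ x**2 - (p @ x) ** 2 - dist.var()) ** 2
        srom_cdf = np.array([p[x <= g].sum() for g in grid])
        cdf_err = np.mean((srom_cdf - dist.cdf(grid)) ** 2)
        return mean_err + var_err + cdf_err

    res = optimize.minimize(
        srom_error, np.full(m, 1.0 / m), method="SLSQP",
        bounds=[(0.0, 1.0)] * m,
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    )
    p = res.x

    def deterministic_model(load):
        """Placeholder for an expensive physics solve (e.g. compliance vs. load)."""
        return load ** 1.5

    responses = np.array([deterministic_model(xi) for xi in x])
    print("SROM probabilities:", np.round(p, 3))
    print("estimated mean response:", responses @ p)
    print("reference (MC, 10^5 samples):",
          np.mean(deterministic_model(dist.rvs(size=100_000, random_state=1))))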

  18. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
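
    A Monte Carlo baseline of the kind used for verification can be sketched directly: interferers are drawn from a Poisson point process, channels are Rayleigh, and the QPSK symbol error probability is averaged over realizations. Evaluating the error probability from the instantaneous SINR treats the conditional interference as Gaussian, which is precisely the simplification the Equivalent-in-Distribution analysis avoids; all system parameters below are illustrative.

    # Monte Carlo ASEP in a Poisson field of interferers: PPP locations in an
    # annulus, Rayleigh fading, QPSK symbol error 2Q(sqrt(g)) - Q(sqrt(g))^2
    # evaluated at the instantaneous SINR g.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    n_mc = 20_000
    lam = 1e-5                      # interferer density per m^2 (assumed)
    r_ex, r_max = 100.0, 5_000.0    # exclusion and simulation radii (m)
    alpha, p_tx, noise = 4.0, 1.0, 1e-12  # path-loss exponent, power, noise (assumed)
    d_link = 150.0                  # distance of the intended transmitter (m)

    def qfunc(x):
        return norm.sf(x)

    sep = np.empty(n_mc)
    area = np.pi * (r_max**2 - r_ex**2)
    for i in range(n_mc):
        n_int = rng.poisson(lam * area)
        r = np.sqrt(rng.uniform(r_ex**2, r_max**2, n_int))  # uniform in annulus
        fading = rng.exponential(1.0, n_int)                # Rayleigh power gains
        interference = np.sum(p_tx * fading * r ** (-alpha))
        signal = p_tx * rng.exponential(1.0) * d_link ** (-alpha)
        g = signal / (interference + noise)
        q = qfunc(np.sqrt(g))
        sep[i] = 2.0 * q - q**2

    print(f"average symbol error probability (QPSK): {sep.mean():.4f}")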

  19. Optimising stochastic trajectories in exact quantum jump approaches of interacting systems

    Lacroix, D.

    2004-11-01

    The standard methods used to substitute the quantum dynamics of two interacting systems by a quantum jump approach based on the Stochastic Schroedinger Equation (SSE) are described. It turns out that for a given situation, there exists an infinite number of SSE reformulations. This fact is used to propose general strategies to optimise the stochastic paths in order to reduce the statistical fluctuations. In this procedure, called the 'adaptive noise method', a specific SSE is obtained for which the noise depends explicitly on both the initial state and on the properties of the interaction Hamiltonian. It is also shown that this method can be further improved by the introduction of a mean-field dynamics. The different optimisation procedures are illustrated quantitatively in the case of interacting spins. A significant reduction of the statistical fluctuations is obtained. Consequently, a much smaller number of trajectories is needed to accurately reproduce the exact dynamics as compared to the standard SSE method. (author)

  20. Modular and Stochastic Approaches to Molecular Pathway Models of ATM, TGF beta, and WNT Signaling

    Cucinotta, Francis A.; O'Neill, Peter; Ponomarev, Artem; Carra, Claudio; Whalen, Mary; Pluth, Janice M.

    2009-01-01

    Deterministic pathway models that describe the biochemical interactions of a group of related proteins, their complexes, activation through kinases, etc. are often the basis for many systems biology models. Low dose radiation effects present a unique set of challenges to these models, including the importance of stochastic effects due to the nature of radiation tracks and the small number of molecules activated, and the search for infrequent events that contribute to cancer risks. We have been studying models of the ATM, TGF-β-Smad and WNT signaling pathways with the goal of applying pathway models to the investigation of low dose radiation cancer risks. Modeling challenges include the introduction of stochastic models of radiation tracks, their relationships to more than one substrate species that perturb pathways, and the identification of a representative set of enzymes that act on the dominant substrates. Because several pathways are activated concurrently by radiation, the development of a modular pathway approach is of interest.

  1. International Diversification Versus Domestic Diversification: Mean-Variance Portfolio Optimization and Stochastic Dominance Approaches

    Fathi Abid

    2014-05-01

    This paper applies the mean-variance portfolio optimization (PO) approach and the stochastic dominance (SD) test to examine preferences for international diversification versus domestic diversification from American investors' viewpoints. Our PO results imply that the domestic diversification strategy dominates the international diversification strategy at a lower risk level, and the reverse is true at a higher risk level. Our SD analysis shows that there is no arbitrage opportunity between international and domestic stock markets; domestically diversified portfolios with smaller risk dominate internationally diversified portfolios with larger risk and vice versa; and at the same risk level, there is no difference between the domestically and internationally diversified portfolios. Nonetheless, we cannot find any domestically diversified portfolios that stochastically dominate all internationally diversified portfolios, but we find some internationally diversified portfolios with small risk that dominate all the domestically diversified portfolios.
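
    The mean-variance half of such an analysis reduces to tracing two efficient frontiers, as in the sketch below, which uses the closed-form two-fund solution on simulated returns; the study itself uses historical US and international data, so the universes and numbers here are purely illustrative.

    # Trace the efficient frontiers of a "domestic" universe and an enlarged
    # "international" universe (domestic plus foreign assets) and compare their
    # risk-return ranges.  Returns are simulated; long/short positions allowed.
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs = 500
    # 4 domestic assets; 4 foreign assets with higher mean and volatility,
    # sampled independently of the domestic block (low cross-correlation).
    dom = rng.multivariate_normal([0.006] * 4, 0.0004 * (0.3 + 0.7 * np.eye(4)), n_obs)
    frn = rng.multivariate_normal([0.009] * 4, 0.0009 * (0.3 + 0.7 * np.eye(4)), n_obs)
    rets = {"domestic": dom, "international": np.hstack([dom, frn])}

    def frontier(R, n_pts=15):
        """Minimum-variance portfolios for a range of target means."""
        mu, S = R.mean(axis=0), np.cov(R.T)
        ones = np.ones(len(mu))
        Si = np.linalg.inv(S)
        a, b, c = ones @ Si @ ones, ones @ Si @ mu, mu @ Si @ mu
        out = []
        for m in np.linspace(mu.min(), mu.max(), n_pts):
            # Closed-form two-fund solution of min w'Sw s.t. w'mu = m, w'1 = 1.
            lam = (c - b * m) / (a * c - b * b)
            gam = (a * m - b) / (a * c - b * b)
            w = Si @ (lam * ones + gam * mu)
            out.append((np.sqrt(w @ S @ w), m))
        return np.array(out)

    for name, R in rets.items():
        f = frontier(R)
        print(f"{name:14s} frontier: sigma in [{f[0,0]:.4f}, {f[-1,0]:.4f}], "
              f"mean in [{f[0,1]:.4f}, {f[-1,1]:.4f}]")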

  2. Monthly gravity field recovery from GRACE orbits and K-band measurements using variational equations approach

    Changqing Wang

    2015-07-01

    The Gravity Recovery and Climate Experiment (GRACE) mission can significantly improve our knowledge of the temporal variability of the Earth's gravity field. We obtained monthly gravity field solutions based on a variational equations approach from GPS-derived positions of the GRACE satellites and K-band range-rate measurements. The impact of different fixed data weighting ratios in temporal gravity field recovery while combining the two types of data was investigated for the purpose of deriving the best combined solution. The monthly gravity field solutions obtained through the above procedures were named the Institute of Geodesy and Geophysics (IGG) temporal gravity field models. IGG temporal gravity field models were compared with GRACE Release05 (RL05) products in the following aspects: (i) the trend of the mass anomaly in China and its nearby regions within 2005-2010; (ii) the root mean squares of the global mass anomaly during 2005-2010; (iii) time-series changes in the mean water storage in the region of the Amazon Basin and the Sahara Desert between 2005 and 2010. The results showed that IGG solutions were almost consistent with GRACE RL05 products in the above aspects (i)-(iii). Changes in the annual amplitude of mean water storage in the Amazon Basin were 14.7 ± 1.2 cm for IGG, 17.1 ± 1.3 cm for the Centre for Space Research (CSR), 16.4 ± 0.9 cm for the GeoForschungsZentrum (GFZ) and 16.9 ± 1.2 cm for the Jet Propulsion Laboratory (JPL) in terms of equivalent water height (EWH), respectively. The root mean squares of the mean mass anomaly in the Sahara were 1.2 cm, 0.9 cm, 0.9 cm and 1.2 cm for the temporal gravity field models of IGG, CSR, GFZ and JPL, respectively. The comparison suggested that IGG temporal gravity field solutions were at the same accuracy level as the latest temporal gravity field solutions published by CSR, GFZ and JPL.
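
    The weighting experiment described here boils down to scanning a fixed ratio between two sets of normal equations, which the following toy example reproduces with synthetic linear observations standing in for the GPS and K-band data; real gravity field recovery assembles the normal equations from the variational equations rather than from random design matrices.

    # Combined weighted least squares: two observation types constrain the same
    # parameters, and the combined normal equations are solved for a range of
    # fixed weight ratios to show how the ratio shifts the solution quality.
    import numpy as np

    rng = np.random.default_rng(11)
    n_par = 20
    x_true = rng.standard_normal(n_par)

    # Type 1: many, noisy (GPS-like).  Type 2: fewer, very precise (KBR-like).
    A1 = rng.standard_normal((400, n_par))
    y1 = A1 @ x_true + 1e-2 * rng.standard_normal(400)
    A2 = rng.standard_normal((80, n_par))
    y2 = A2 @ x_true + 1e-5 * rng.standard_normal(80)

    for ratio in [1e0, 1e2, 1e4, 1e6]:
        # Combined normal equations: (N1 + ratio * N2) x = b1 + ratio * b2.
        N = A1.T @ A1 + ratio * A2.T @ A2
        b = A1.T @ y1 + ratio * A2.T @ y2
        x_hat = np.linalg.solve(N, b)
        err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        print(f"weight ratio {ratio:8.0e}: relative error {err:.2e}")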

  3. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    Samin, Adib J. [The Department of Mechanical and Aerospace Engineering, The Ohio State University, 201 W 19th Avenue, Columbus, Ohio 43210 (United States)

    2016-05-15

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.

  4. A one-dimensional stochastic approach to the study of cyclic voltammetry with adsorption effects

    Samin, Adib J.

    2016-01-01

    In this study, a one-dimensional stochastic model based on the random walk approach is used to simulate cyclic voltammetry. The model takes into account mass transport, kinetics of the redox reactions, adsorption effects and changes in the morphology of the electrode. The model is shown to display the expected behavior. Furthermore, the model shows consistent qualitative agreement with a finite difference solution. This approach allows for an understanding of phenomena on a microscopic level and may be useful for analyzing qualitative features observed in experimentally recorded signals.
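
    The essence of the random-walk treatment can be sketched as follows: reduced molecules walk on a one-dimensional lattice and are oxidized at the electrode with a potential-dependent (Nernstian) probability while the potential is swept up and back down. The sketch omits the adsorption and electrode-morphology effects of the model above, considers oxidation only, and uses invented parameter values.

    # Minimal random-walk cyclic voltammetry in one dimension: count the
    # number of oxidations per time step as the "current" during a triangular
    # potential sweep.
    import numpy as np

    rng = np.random.default_rng(5)
    n_sites, n_mol, n_steps = 200, 5000, 4000
    f = 38.9                          # F/RT at room temperature, 1/V
    e0, e_lo, e_hi = 0.0, -0.3, 0.3   # formal and sweep potentials (V, assumed)

    pos = rng.integers(1, n_sites, n_mol)  # positions of reduced molecules
    alive = np.ones(n_mol, dtype=bool)     # False once a molecule is oxidized

    half = n_steps // 2
    sweep = np.concatenate([np.linspace(e_lo, e_hi, half),            # forward
                            np.linspace(e_hi, e_lo, n_steps - half)]) # reverse

    current = np.zeros(n_steps)
    for t, E in enumerate(sweep):
        # Unbiased random walk with a reflecting wall at the far boundary.
        step = rng.choice([-1, 1], n_mol)
        pos = np.clip(pos + np.where(alive, step, 0), 0, n_sites - 1)
        # Nernstian oxidation probability at the electrode surface (site 0).
        p_ox = 1.0 / (1.0 + np.exp(-f * (E - e0)))
        at_surf = alive & (pos == 0)
        oxidized = at_surf & (rng.random(n_mol) < p_ox)
        current[t] = oxidized.sum()
        alive &= ~oxidized
        pos[pos == 0] = 1   # survivors at the surface bounce back into solution

    print(f"forward-scan peak current at E = {sweep[current.argmax()]:+.3f} V")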

  5. Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults

    Abdelrahman, E. M.; Essa, K. S.

    2015-02-01

    We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
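
    The flavor of the inverse problem can be shown in its simplest limit, fitting the anomaly of a thin slab truncated by a vertical step, g(x) = A(π/2 + arctan(x/z)), to noisy synthetic data; the authors' successive depth/dip/amplitude scheme on moving-average residuals, and the dipping-fault geometry, are not reproduced here.

    # Fit depth z and amplitude coefficient A of a vertical-step (thin slab)
    # gravity anomaly to noisy synthetic data with nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    def fault_anomaly(x, A, z):
        """Gravity anomaly (mGal) of a semi-infinite thin slab with edge depth z."""
        return A * (np.pi / 2.0 + np.arctan(x / z))

    rng = np.random.default_rng(2)
    x = np.linspace(-2000.0, 2000.0, 201)   # profile coordinate (m)
    A_true, z_true = 1.5, 350.0             # synthetic truth (assumed)
    data = fault_anomaly(x, A_true, z_true) + 0.05 * rng.standard_normal(x.size)

    popt, pcov = curve_fit(fault_anomaly, x, data, p0=[1.0, 100.0])
    print(f"recovered A = {popt[0]:.3f} (true {A_true}), "
          f"z = {popt[1]:.1f} m (true {z_true} m)")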

  6. Superconformal gravity in Hamiltonian form: another approach to the renormalization of gravitation

    Kaku, M.

    1983-01-01

    We reexpress superconformal gravity in Hamiltonian form, explicitly displaying all 24 generators of the group as Dirac constraints on the Hilbert space. From this, we can establish a firm foundation for the canonical quantization of superconformal gravity. The purpose of writing down the Hamiltonian form of the theory is to reexamine the question of renormalization and unitarity. Usually, we start with unitary theories of gravity, such as the Einstein-Hilbert action or supergravity, both of which are probably not renormalizable. In this series of papers, we take the opposite approach and start with a theory which is renormalizable but has problems with unitarity. Conformal and superconformal gravity are both plagued with dipole ghosts when we use perturbation theory to quantize the theories. It is difficult to interpret the results of perturbation theory because the asymptotic states have zero norm and the potential between particles grows linearly with the separation distance. The purpose of writing the Hamiltonian form of these theories is to approach the question of unitarity from a different point of view. For example, a strong-coupling approach to these theories may yield a totally different perturbation expansion. We speculate that canonically quantizing the theory by power expanding in the strong-coupling regime may yield a different set of asymptotic states, somewhat similar to the situation in gauge theories. In this series of papers, we wish to reopen the question of the unitarity of conformal theories. We conjecture that ghosts are "confined."

  7. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage in discovering relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and the Random Forest model, which combines decision trees with an ensemble method using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for the Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performance of the models, one-step-ahead and multi-step-ahead forecasting were applied. The root mean squared error and mean absolute error of the two models were compared.
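
    A compact version of such a comparison is sketched below: a seasonal ARIMA model and a Random Forest on lagged inputs are fit to a synthetic monthly series and scored on one-step-ahead forecasts. Model orders, lag depth, and the series itself are assumptions for illustration, not the Chungju dam configuration.

    # Compare SARIMA and Random Forest one-step-ahead forecasts on a synthetic
    # monthly inflow series with annual seasonality.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 360  # 30 years of monthly data
    t = np.arange(n)
    y = 100 + 60 * np.sin(2 * np.pi * t / 12) + 15 * rng.standard_normal(n)
    h = 24   # hold out the final two years
    train, test = y[:-h], y[-h:]

    # Seasonal ARIMA: rolling one-step forecasts, appending each observation
    # to the fitted results without re-estimating the parameters.
    res = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
    preds_arima = []
    for obs in test:
        preds_arima.append(res.forecast(1)[0])
        res = res.append([obs])
    preds_arima = np.array(preds_arima)

    # Random Forest on 12 lagged values as predictors.
    lags = 12
    X = np.column_stack([y[i:n - lags + i] for i in range(lags)])
    target = y[lags:]
    split = n - lags - h
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X[:split], target[:split])
    preds_rf = rf.predict(X[split:])

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    print(f"SARIMA one-step RMSE:        {rmse(preds_arima, test):.2f}")
    print(f"Random Forest one-step RMSE: {rmse(preds_rf, test):.2f}")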

  8. A Q-Learning Approach to Flocking With UAVs in a Stochastic Environment.

    Hung, Shao-Ming; Givigi, Sidney N

    2017-01-01

    In the past two decades, unmanned aerial vehicles (UAVs) have demonstrated their efficacy in supporting both military and civilian applications, where tasks can be dull, dirty, dangerous, or simply too costly with conventional methods. Many of the applications contain tasks that can be executed in parallel, hence the natural progression is to deploy multiple UAVs working together as a force multiplier. However, to do so requires autonomous coordination among the UAVs, similar to swarming behaviors seen in animals and insects. This paper looks at flocking with small fixed-wing UAVs in the context of a model-free reinforcement learning problem. In particular, Peng's Q(λ) with a variable learning rate is employed by the followers to learn a control policy that facilitates flocking in a leader-follower topology. The problem is structured as a Markov decision process, where the agents are modeled as small fixed-wing UAVs that experience stochasticity due to disturbances such as winds and control noises, as well as weight and balance issues. Learned policies are compared to ones solved using stochastic optimal control (i.e., dynamic programming) by evaluating the average cost incurred during flight according to a cost function. Simulation results demonstrate the feasibility of the proposed learning approach at enabling agents to learn how to flock in a leader-follower topology, while operating in a nonstationary stochastic environment.
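
    The learning machinery involved can be illustrated on a toy problem. The sketch below implements tabular Q-learning with eligibility traces, Watkins-style, on a small stochastic chain; Peng's Q(λ), used in the paper, treats traces after exploratory actions differently and is applied to continuous UAV dynamics rather than a five-state chain.

    # Tabular Q(lambda) on a stochastic chain: the agent must reach the right
    # end; intended moves succeed 80% of the time (a crude stand-in for wind
    # and control noise).  Traces are cut after exploratory actions (Watkins).
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
    alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1

    def step(s, a):
        """Stochastic transition: intended move succeeds 80% of the time."""
        intended = 1 if a == 1 else -1
        move = intended if rng.random() < 0.8 else -intended
        s2 = int(np.clip(s + move, 0, n_states - 1))
        r = 1.0 if s2 == n_states - 1 else 0.0   # goal at the right end
        return s2, r, s2 == n_states - 1

    Q = np.zeros((n_states, n_actions))
    for episode in range(500):
        e = np.zeros_like(Q)            # eligibility traces
        s, done = 0, False
        while not done:
            a = int(Q[s].argmax()) if rng.random() >= eps else int(rng.integers(n_actions))
            was_greedy = a == int(Q[s].argmax())
            s2, r, done = step(s, a)
            delta = r + gamma * Q[s2].max() * (not done) - Q[s, a]
            e[s, a] += 1.0              # accumulating trace
            Q += alpha * delta * e
            # Decay traces only while acting greedily; cut them otherwise.
            e = e * (gamma * lam) if was_greedy else np.zeros_like(Q)
            s = s2

    print("greedy policy (0=left, 1=right):", Q.argmax(axis=1))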

  9. Backward-stochastic-differential-equation approach to modeling of gene expression.

    Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo

    2017-03-01

    In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
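
    The forward benchmark generator referred to here, Gillespie's SSA, is easy to sketch for a one-gene birth-death model; the rates below are assumptions, and the BSDE machinery itself is not reproduced.

    # Gillespie's stochastic simulation algorithm (SSA) for the birth-death
    # gene expression model  0 --k_syn--> P,  P --k_deg--> 0.  The stationary
    # protein copy number is Poisson(k_syn / k_deg), which the ensemble of
    # final states should reproduce (mean k_syn/k_deg, Fano factor 1).
    import numpy as np

    rng = np.random.default_rng(0)
    k_syn, k_deg = 10.0, 0.1   # synthesis and degradation rates (assumed)
    t_end = 50.0

    def ssa_trajectory():
        t, n = 0.0, 0          # time and protein copy number
        while True:
            rates = np.array([k_syn, k_deg * n])
            total = rates.sum()
            t += rng.exponential(1.0 / total)  # waiting time to the next event
            if t > t_end:
                return n
            # Pick which reaction fires, proportionally to its propensity.
            n += 1 if rng.random() < rates[0] / total else -1

    finals = np.array([ssa_trajectory() for _ in range(500)])
    print(f"mean protein number: {finals.mean():.1f}  (theory {k_syn / k_deg:.1f})")
    print(f"Fano factor:         {finals.var() / finals.mean():.2f}  (Poisson: 1)")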

  10. Stochastic Boolean networks: An efficient approach to modeling gene regulatory networks

    Liang Jinghang

    2012-08-01

    network inferred from a T cell immune response dataset. An SBN can also implement the function of an asynchronous PBN and is potentially useful in a hybrid approach in combination with a continuous or single-molecule level stochastic model. Conclusions: Stochastic Boolean networks (SBNs) are proposed as an efficient approach to modelling gene regulatory networks (GRNs). The SBN approach is able to recover biologically-proven regulatory behaviours, such as the oscillatory dynamics of the p53-Mdm2 network and the dynamic attractors in a T cell immune response network. The proposed approach can further predict the network dynamics when the genes are under perturbation, thus providing biologically meaningful insights for a better understanding of the dynamics of GRNs. The algorithms and methods described in this paper have been implemented in Matlab packages, which are attached as Additional files.

  11. Stochastic quantization

    Klauder, J.R.

    1983-01-01

    The author provides an introductory survey to stochastic quantization in which he outlines this new approach for scalar fields, gauge fields, fermion fields, and condensed matter problems such as electrons in solids and the statistical mechanics of quantum spins. (Auth.)

  12. A stochastic security approach to energy and spinning reserve scheduling considering demand response program

    Partovi, Farzad; Nikzad, Mehdi; Mozafari, Babak; Ranjbar, Ali Mohamad

    2011-01-01

    In this paper a new algorithm for allocating energy and determining the optimum amount of network active power reserve capacity and the share of generating units and demand side contribution in providing reserve capacity requirements for day-ahead market is presented. In the proposed method, the optimum amount of reserve requirement is determined based on network security set by operator. In this regard, Expected Load Not Supplied (ELNS) is used to evaluate system security in each hour. The proposed method has been implemented over the IEEE 24-bus test system and the results are compared with a deterministic security approach, which considers certain and fixed amount of reserve capacity in each hour. This comparison is done from economic and technical points of view. The promising results show the effectiveness of the proposed model which is formulated as mixed integer linear programming (MILP) and solved by GAMS software. -- Highlights: → Determination of optimal spinning reserve capacity requirement in order to satisfy desired security level set by system operator based on stochastic approach. → Scheduling energy and spinning reserve markets simultaneously. → Comparing the stochastic approach with deterministic approach to determine the advantages and disadvantages of each. → Examine the effect of demand response participation in reserve market to provide spinning reserve.

  13. A Direct Approach to Determine the External Disturbing Gravity Field by Applying Green Integral with the Ground Boundary Value

    TIAN Jialei

    2015-11-01

    Using the ground as the boundary, the Molodensky problem usually yields a solution in the form of a series. Higher order terms reflect the correction between a smooth surface and the ground boundary. Application difficulties arise not only from computational complexity and stability maintenance, but also from data-intensiveness. Therefore, in this paper, starting from the application of the external gravity disturbance, Green's formula is used on the digital terrain surface. When the influence of the horizontal component of the integral is ignored, expressions for the external disturbing potential are obtained in terms of boundary values consisting of ground gravity anomalies and height anomaly differences, whose kernel functions are the reciprocal of the distance and the Poisson kernel, respectively. With this method, there is no need for continuation of ground data, and the kernel functions are concise and suitable for the stochastic computation of the external disturbing gravity field.

  14. Restructuring of workflows to minimise errors via stochastic model checking: An automated evolutionary approach

    Herbert, L.T.; Hansen, Z.N.L.

    2016-01-01

    This paper presents a framework for the automated restructuring of stochastic workflows to reduce the impact of faults. The framework allows for the modelling of workflows by means of a formalised subset of the BPMN workflow language. We extend this modelling formalism to describe faults and incorporate an intention-preserving stochastic semantics able to model both probabilistic and non-deterministic behaviour. Stochastic model checking techniques are employed to generate the state-space of a given workflow. Possible improvements obtained by restructuring are measured by employing the framework's capacity for tracking real-valued quantities associated with states and transitions of the workflow. The space of possible restructurings of a workflow is explored by means of an evolutionary algorithm, where the goals for improvement are defined in terms of optimising quantities, typically employed to model resources, associated with a workflow. The approach is fully automated, and only the modelling of the production workflows, the potential faults and the expression of the goals require manual input. We present the design of a software tool implementing this framework and explore the practical utility of this approach through an industrial case study in which the risk of production failures and their impact are reduced by restructuring the workflow. - Highlights: • We present a framework which allows for the automated restructuring of workflows. • This framework seeks to minimise the impact of errors on the workflow. • We illustrate a scalable software implementation of this framework. • We explore the practical utility of this approach through an industry case. • The impact of errors can be substantially reduced by restructuring the workflow.

  15. Study of stochastic approaches of the n-bodies problem: application to the nuclear fragmentation

    Guarnera, A.

    1996-01-01

    In the last decade nuclear physics research has found, with the observation of phenomena such as multifragmentation or vaporization, the possibility to get a deeper insight into the nuclear matter phase diagram. For example, a spinodal decomposition scenario has been proposed to explain multifragmentation: because of the initial compression, the system may enter a region, the spinodal zone, in which the nuclear matter is no longer stable, and so any fluctuation leads to the formation of fragments. This thesis deals with spinodal decomposition within the theoretical framework of stochastic mean-field approaches, in which the one-body density function may experience a stochastic evolution. We have shown that these approaches are able to describe phenomena, such as first order phase transitions, in which fluctuations and many-body correlations play an important role. In the framework of stochastic mean-field approaches we have shown that fragment production by spinodal decomposition is characterized by typical time scales of the order of 100 fm/c and by typical size scales around the Neon mass. We have also shown that these features are robust and that they are not affected significantly by a possible expansion of the system or by the finite size of nuclei. We have proposed as a signature of spinodal decomposition some typical partitions of the largest fragments. The study and the comparison with experimental data, performed for the reactions Xe + Cu at 45 MeV/A and Xe + Sn at 50 MeV/A, have shown a remarkable agreement. Moreover we would like to stress that the theory does not contain any adjustable parameter. These results seem to give a strong indication of the possibility to observe a spinodal decomposition of nuclei. (author)

  16. A Column Generation Approach to the Capacitated Vehicle Routing Problem with Stochastic Demands

    Christiansen, Christian Holk; Lysgaard, Jens

    In this article we introduce a new exact solution approach to the Capacitated Vehicle Routing Problem with Stochastic Demands (CVRPSD). In particular, we consider the case where all customer demands are distributed independently and where each customer's demand follows a Poisson distribution. The CVRPSD can be formulated as a Set Partitioning Problem. We show that, under the above assumptions on demands, the associated column generation subproblem can be solved using a dynamic programming scheme which is similar to that used in the case of deterministic demands. To evaluate the potential of our ...

  17. Stochastic approach for round-off error analysis in computing application to signal processing algorithms

    Vignes, J.

    1986-01-01

    Any result of algorithms provided by a computer always contains an error resulting from floating-point arithmetic round-off error propagation. Furthermore signal processing algorithms are also generally performed with data containing errors. The permutation-perturbation method, also known under the name CESTAC (controle et estimation stochastique d'arrondi de calcul) is a very efficient practical method for evaluating these errors and consequently for estimating the exact significant decimal figures of any result of algorithms performed on a computer. The stochastic approach of this method, its probabilistic proof, and the perfect agreement between the theoretical and practical aspects are described in this paper.

  18. On the stochastic approach to inflation and the initial conditions in the universe

    Pollock, M. D.

    1988-03-01

    By the application of stochastic methods to a theory in which a potential V(φ) causes a period of quasi-exponential expansion of the universe, an expression for the probability distribution P(V) appropriate for chaotic inflation has recently been derived. The method was developed by Starobinsky and by Linde. Beyond some critical point φ_c, long-wavelength quantum fluctuations δφ ~ H/2π cannot be ignored. The effect of these fluctuations in general relativity for values of φ such that V(φ) > V(φ_c) has been considered by Linde, who concluded that most of the present universe arises as a result of the expansion of domains with the maximum possible value of φ, such that V(φ_max) ~ m_p^4. We obtain the corresponding expression for P in a broken-symmetry theory of gravity, in which the Newtonian gravitational constant is replaced by G = (8πεφ²)⁻¹, and also for a theory which includes the higher-derivative terms γR² + βR² ln(R/μ²), so that the trace anomaly is T_anom ~ βR², and in which an effective inflaton field φ_e can be defined by φ_e² = 24γR. Conclusions analogous to those of Linde can be drawn in both these theories.

  19. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation, with input data defined by probability density distributions, and is solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P_tot). The model assumes independence of the input variables, which was verified for the dry-weather situation. Discharges and P_tot concentrations, both in the study creek and in the WWTP effluent, follow log-normal probability distributions. Variation coefficients of P_tot concentrations differ considerably along the stream (c_v = 0.415-0.884). The selected value of the variation coefficient (c_v = 0.420) affects the derived mean value (C_mean = 0.13 mg/l) of the P_tot EQS (C_90 = 0.2 mg/l). Even after the assumed improvement of water quality upstream of the WWTP to the level of the P_tot EQS, the calculated WWTP emission limits would be lower than the values achievable by the best available technology (BAT). Thus, minimum dilution ratios for a meaningful application of the combined approach to the derivation of P_tot emission limits for Czech streams are discussed.
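
    The core of the procedure is a Monte Carlo mixing (mass balance) calculation, sketched below: candidate effluent limits are screened against the probabilistically defined EQS at the downstream 90th percentile. All parameter values are invented for illustration and are not those of the study catchment.

    # Monte Carlo screening of WWTP effluent limits with the mixing equation
    #   C_mix = (Q_r * C_r + Q_e * C_e) / (Q_r + Q_e),
    # log-normal inputs assumed independent (as verified for dry weather).
    import numpy as np

    rng = np.random.default_rng(9)
    n_mc = 100_000
    eqs_c90 = 0.2                          # P_tot EQS as a 90th percentile (mg/l)

    def lognormal(mean, cv, size):
        """Sample a log-normal variable given its mean and coefficient of variation."""
        sigma2 = np.log(1.0 + cv**2)
        mu = np.log(mean) - 0.5 * sigma2
        return rng.lognormal(mu, np.sqrt(sigma2), size)

    q_r = lognormal(0.50, 0.60, n_mc)      # creek discharge (m3/s), assumed
    c_r = lognormal(0.10, 0.42, n_mc)      # upstream P_tot (mg/l), assumed
    q_e = lognormal(0.05, 0.20, n_mc)      # effluent discharge (m3/s), assumed

    for c_e_mean in [2.0, 1.0, 0.5, 0.25]: # candidate mean effluent limits
        c_e = lognormal(c_e_mean, 0.42, n_mc)
        c_mix = (q_r * c_r + q_e * c_e) / (q_r + q_e)
        p90 = np.percentile(c_mix, 90)
        flag = "meets EQS" if p90 <= eqs_c90 else "fails EQS"
        print(f"effluent mean {c_e_mean:>5.2f} mg/l -> downstream C90 = {p90:.3f} mg/l ({flag})")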

  20. (Non-) homomorphic approaches to denoise intensity SAR images with non-local means and stochastic distances

    Penna, Pedro A. A.; Mascarenhas, Nelson D. A.

    2018-02-01

    The development of new methods to denoise images still attracts researchers, who seek to combat the noise with minimal loss of resolution and details, like edges and fine structures. Many algorithms have the goal of removing additive white Gaussian noise (AWGN). However, it is not the only type of noise that interferes in the analysis and interpretation of images. Therefore, it is extremely important to extend the capacity of these filters to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper develops two approaches based on the non-local means (NLM) algorithm, originally developed for AWGN. In our research, we extended its capacity to speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution, without transforming the data to the logarithm domain as in the homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in NLM. The second method applies a priori NLM denoising with a homomorphic transformation and uses the inverse Gamma distribution to estimate the parameters that are used in NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters for the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.

  1. A Monte Carlo Study on Multiple Output Stochastic Frontiers: Comparison of Two Approaches

    Henningsen, Geraldine; Henningsen, Arne; Jensen, Uwe

    In the estimation of multiple output technologies in a primal approach, the main question is how to handle the multiple outputs. Often an output distance function is used, where the classical approach is to exploit its homogeneity property by selecting one output quantity as the dependent variable, dividing all other output quantities by the selected output quantity, and using these ratios as regressors (OD). Another approach is the stochastic ray production frontier (SR), which transforms the output quantities into their Euclidean distance as the dependent variable and their polar coordinates as regressors. In a Monte Carlo study we compare the performance of both specifications for the case of a Translog output distance function with respect to different common statistical problems as well as problems arising as a consequence of zero values in the output quantities. Although our results partly show clear reactions to statistical misspecifications

  2. A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation

    Rieser, Daniel; Mayer-Guerr, Torsten

    2014-05-01

    The depth of the Moho discontinuity is commonly derived from either seismic observations, gravity measurements, or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for the estimation of the Moho depth on a regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information, in particular in the medium wavelengths of the gravity field spectrum, which are also of special interest for the crust-mantle boundary. In contrast to other studies, we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are directly derived by LSC on a regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem already with a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced, and the most recent developments and results using the currently available GOCE data will be presented.

  3. On the foundations of the random lattice approach to quantum gravity

    Levin, A.; Morozov, A.

    1990-01-01

    We discuss a problem which can arise in the identification of conventional 2D quantum gravity, involving the sum over Riemann surfaces, with the results of the lattice approach, based on the enumeration of the Feynman graphs of matrix models. A potential difficulty is related to the (hypothetical) fact that the arithmetic curves are badly distributed in the moduli spaces for high enough genera (at least for g ≥ 17). (orig.)

  4. A stochastic approach for quantifying immigrant integration: the Spanish test case

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.

  5. A stochastic approach for quantifying immigrant integration: the Spanish test case

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-01-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999–2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build. (paper)

  6. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    Seif, Dariush; Ghoniem, Nasr M.

    2014-12-01

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô's calculus, rate equations for the first five moments of the size distribution in helium-vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations on the evolution of helium-vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. Discussion is given into how the model can be extended to include full spatial resolution and how the

  7. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    Seif, Dariush; Ghoniem, Nasr M.

    2014-01-01

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô’s calculus, rate equations for the first five moments of the size distribution in helium–vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations on the evolution of helium–vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. Discussion is given into how the model can be extended to include full spatial resolution and how the

  8. A stochastic differential equations approach for the description of helium bubble size distributions in irradiated metals

    Seif, Dariush, E-mail: dariush.seif@iwm-extern.fraunhofer.de [Fraunhofer Institut für Werkstoffmechanik, Freiburg 79108 (Germany); Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA 90095-1597 (United States); Ghoniem, Nasr M. [Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA 90095-1597 (United States)

    2014-12-15

    A rate theory model based on the theory of nonlinear stochastic differential equations (SDEs) is developed to estimate the time-dependent size distribution of helium bubbles in metals under irradiation. Using approaches derived from Itô’s calculus, rate equations for the first five moments of the size distribution in helium–vacancy space are derived, accounting for the stochastic nature of the atomic processes involved. In the first iteration of the model, the distribution is represented as a bivariate Gaussian distribution. The spread of the distribution about the mean is obtained by white-noise terms in the second-order moments, driven by fluctuations in the general absorption and emission of point defects by bubbles, and fluctuations stemming from collision cascades. This statistical model for the reconstruction of the distribution by its moments is coupled to a previously developed reduced-set, mean-field, rate theory model. As an illustrative case study, the model is applied to a tungsten plasma facing component under irradiation. Our findings highlight the important role of stochastic atomic fluctuations on the evolution of helium–vacancy cluster size distributions. It is found that when the average bubble size is small (at low dpa levels), the relative spread of the distribution is large and average bubble pressures may be very large. As bubbles begin to grow in size, average bubble pressures decrease, and stochastic fluctuations have a lessened effect. The distribution becomes tighter as it evolves in time, corresponding to a more uniform bubble population. The model is formulated in a general way, capable of including point defect drift due to internal temperature and/or stress gradients. These arise during pulsed irradiation, and also during steady irradiation as a result of externally applied or internally generated non-homogeneous stress fields. Discussion is given into how the model can be extended to include full spatial resolution and how the
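
    A scalar toy version of the moment approach shows the idea: for a bubble absorbing and emitting point defects at constant rates, the Itô SDE and its first two moment equations can be compared directly, as in the sketch below. The paper's model evolves five coupled moments in helium-vacancy space with cascade-driven noise, which is not reproduced; the rates here are constants chosen for brevity.

    # Euler-Maruyama ensemble for the birth-death diffusion
    #   dN = (g - s) dt + sqrt(g + s) dW,
    # whose exact moment equations are d<N>/dt = g - s and dVar/dt = g + s.
    import numpy as np

    rng = np.random.default_rng(4)
    g, s = 5.0, 2.0               # absorption / emission rates (assumed, 1/s)
    dt, n_steps, n_paths = 2.5e-3, 2000, 5000
    N = np.full(n_paths, 10.0)    # initial helium content of each bubble

    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(n_paths)
        N += (g - s) * dt + np.sqrt(g + s) * dW

    t = n_steps * dt
    print(f"ensemble mean {N.mean():8.3f}   moment ODE {10.0 + (g - s) * t:8.3f}")
    print(f"ensemble var  {N.var():8.3f}   moment ODE {(g + s) * t:8.3f}")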

  9. Dissection of a Complex Disease Susceptibility Region Using a Bayesian Stochastic Search Approach to Fine Mapping.

    Chris Wallace

    2015-06-01

    Identification of candidate causal variants in regions associated with risk of common diseases is complicated by linkage disequilibrium (LD) and multiple association signals. Nonetheless, accurate maps of these variants are needed, both to fully exploit detailed cell specific chromatin annotation data to highlight disease causal mechanisms and cells, and for design of the functional studies that will ultimately be required to confirm causal mechanisms. We adapted a Bayesian evolutionary stochastic search algorithm to the fine mapping problem, and demonstrated its improved performance over conventional stepwise and regularised regression through simulation studies. We then applied it to fine map the established multiple sclerosis (MS) and type 1 diabetes (T1D) associations in the IL-2RA (CD25) gene region. For T1D, both stepwise and stochastic search approaches identified four T1D association signals, with the major effect tagged by the single nucleotide polymorphism, rs12722496. In contrast, for MS, the stochastic search found two distinct competing models: a single candidate causal variant, tagged by rs2104286 and reported previously using stepwise analysis; and a more complex model with two association signals, one of which was tagged by the major T1D associated rs12722496 and the other by rs56382813. There is low to moderate LD between rs2104286 and both rs12722496 and rs56382813 (r² ≈ 0.3) and our two SNP model could not be recovered through a forward stepwise search after conditioning on rs2104286. Both signals in the two variant model for MS affect CD25 expression on distinct subpopulations of CD4+ T cells, which are key cells in the autoimmune process. The results support a shared causal variant for T1D and MS. Our study illustrates the benefit of using a purposely designed model search strategy for fine mapping and the advantage of combining disease and protein expression data.

  10. Intercepting virtual balls approaching under different gravity conditions: evidence for spatial prediction.

    Russo, Marta; Cesqui, Benedetta; La Scaleia, Barbara; Ceccarelli, Francesca; Maselli, Antonella; Moscatelli, Alessandro; Zago, Myrka; Lacquaniti, Francesco; d'Avella, Andrea

    2017-10-01

    To accurately time motor responses when intercepting falling balls we rely on an internal model of gravity. However, whether and how such a model is also used to estimate the spatial location of interception is still an open question. Here we addressed this issue by asking 25 participants to intercept balls projected from a fixed location 6 m in front of them and approaching along trajectories with different arrival locations, flight durations, and gravity accelerations (0 g and 1 g). The trajectories were displayed in an immersive virtual reality system with a wide field of view. Participants intercepted approaching balls with a racket, and they were free to choose the time and place of interception. We found that participants often achieved a better performance with 1 g than 0 g balls. Moreover, the interception points were distributed along the direction of a 1 g path for both 1 g and 0 g balls. In the latter case, interceptions tended to cluster on the upper half of the racket, indicating that participants aimed at a lower position than the actual 0 g path. These results suggest that an internal model of gravity was probably used in predicting the interception locations. However, we found that the difference in performance between 1 g and 0 g balls was modulated by flight duration, the difference being larger for faster balls. In addition, the number of peaks in the hand speed profiles increased with flight duration, suggesting that visual information was used to adjust the motor response, correcting the prediction to some extent. NEW & NOTEWORTHY Here we show that an internal model of gravity plays a key role in predicting where to intercept a fast-moving target. Participants also assumed an accelerated motion when intercepting balls approaching in a virtual environment at constant velocity. We also show that the role of visual information in guiding interceptive movement increases when more time is available. Copyright © 2017 the American Physiological Society.

  11. Modeling flow in fractured medium. Uncertainty analysis with stochastic continuum approach

    Niemi, A.

    1994-01-01

    For modeling groundwater flow in formation-scale fractured media, no general method exists for scaling the highly heterogeneous hydraulic conductivity data to model parameters. The deterministic approach is limited in representing the heterogeneity of a medium and the application of fracture network models has both conceptual and practical limitations as far as site-scale studies are concerned. The study investigates the applicability of stochastic continuum modeling at the scale of data support. No scaling of the field data is involved, and the original variability is preserved throughout the modeling. Contributions of various aspects to the total uncertainty in the modeling prediction can also be determined with this approach. Data from five crystalline rock sites in Finland are analyzed. (107 refs., 63 figs., 7 tabs.)

  12. US residential energy demand and energy efficiency: A stochastic demand frontier approach

    Filippini, Massimo; Hunt, Lester C.

    2012-01-01

    This paper estimates a US frontier residential aggregate energy demand function using panel data for 48 ‘states’ over the period 1995 to 2007 using stochastic frontier analysis (SFA). Utilizing an econometric energy demand model, the (in)efficiency of each state is modeled and it is argued that this represents a measure of the inefficient use of residential energy in each state (i.e. ‘waste energy’). This underlying efficiency for the US is therefore observed for each state as well as the relative efficiency across the states. Moreover, the analysis suggests that energy intensity is not necessarily a good indicator of energy efficiency, whereas by controlling for a range of economic and other factors, the measure of energy efficiency obtained via this approach is. This is a novel approach to model residential energy demand and efficiency and it is arguably particularly relevant given current US energy policy discussions related to energy efficiency.

  13. Tail-constraining stochastic linear–quadratic control: a large deviation and statistical physics approach

    Chertkov, Michael; Kolokolov, Igor; Lebedev, Vladimir

    2012-01-01

    The standard definition of the stochastic risk-sensitive linear–quadratic (RS-LQ) control depends on the risk parameter, which is normally left to be set exogenously. We reconsider the classical approach and suggest two alternatives, resolving the spurious freedom naturally. One approach consists in seeking the minimum of the tail of the probability distribution function (PDF) of the cost functional at some large fixed value. Another option suggests minimizing the expectation value of the cost functional under a constraint on the value of the PDF tail. Under the assumption of resulting control stability, both problems are reduced to static optimizations over a stationary control matrix. The solutions are illustrated using the examples of scalar and 1D chain (string) systems. The large deviation self-similar asymptotic of the cost functional PDF is analyzed. (paper)

  14. Kinetics of subdiffusion-assisted reactions: non-Markovian stochastic Liouville equation approach

    Shushin, A I

    2005-01-01

    Anomalous specific features of the kinetics of subdiffusion-assisted bimolecular reactions (time-dependence, dependence on parameters of systems, etc) are analysed in detail with the use of the non-Markovian stochastic Liouville equation (SLE), which has been recently derived within the continuous-time random-walk (CTRW) approach. In the CTRW approach, subdiffusive motion of particles is modelled by jumps whose onset probability distribution function is of a long-tailed form. The non-Markovian SLE allows for a rigorous description of some peculiarities of these reactions; for example, very slow long-time behaviour of the kinetics, non-analytical dependence of the reaction rate on the reactivity of particles, strong manifestation of fluctuation kinetics showing itself in very slowly decreasing behaviour of the kinetics at very long times, etc
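
    As a minimal numerical companion to the CTRW picture, the sketch below draws unit jumps separated by heavy-tailed Pareto waiting times, psi(t) ~ t^-(1+alpha), which produces subdiffusive spreading <x^2> ~ t^alpha. The exponent alpha and the observation times are assumed purely for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        alpha = 0.6                          # assumed tail exponent, 0 < alpha < 1
        obs_times = np.array([10.0, 100.0, 1000.0])
        n_walkers = 5000
        x_at_obs = np.zeros((n_walkers, len(obs_times)))

        for w in range(n_walkers):
            t, x, i = 0.0, 0, 0
            while i < len(obs_times):
                wait = rng.pareto(alpha) + 1.0          # heavy-tailed waiting time
                while i < len(obs_times) and t + wait >= obs_times[i]:
                    x_at_obs[w, i] = x                  # still at x when observed
                    i += 1
                t += wait
                x += rng.choice((-1, 1))                # unbiased unit jump

        msd = (x_at_obs ** 2).mean(axis=0)
        print(dict(zip(obs_times, msd.round(1))))       # grows roughly like t**alpha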

  15. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  16. Variational approach to gravity field theories from Newton to Einstein and beyond

    Vecchiato, Alberto

    2017-01-01

    This book offers a detailed and stimulating account of the Lagrangian, or variational, approach to general relativity and beyond. The approach more usually adopted when describing general relativity is to introduce the required concepts of differential geometry and derive the field and geodesic equations from purely geometrical properties. Demonstration of the physical meaning then requires the weak field approximation of these equations to recover their Newtonian counterparts. The potential downside of this approach is that it tends to suit the mathematical mind and requires the physicist to study and work in a completely unfamiliar environment. In contrast, the approach to general relativity described in this book will be especially suited to physics students. After an introduction to field theories and the variational approach, individual sections focus on the variational approach in relation to special relativity, general relativity, and alternative theories of gravity. Throughout the text, solved exercis...

  17. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance

  18. Fixation of Cs to marine sediments estimated by a stochastic modelling approach.

    Børretzen, Peer; Salbu, Brit

    2002-01-01

    ...irreversible sediment phase, while about 12.5 years are needed before 99.7% of the Cs ions are fixed. Thus, according to the model estimates, the contact time between 137Cs ions leached from dumped waste and the Stepovogo Fjord sediment should be about 3 years before the sediment will act as an efficient permanent sink. Until then a significant fraction of 137Cs should be considered mobile. The stochastic modelling approach provides useful tools when assessing sediment-seawater interactions over time, and should be easily applicable to all sediment-seawater systems including a sink term.

  19. Stochastic rainfall modeling in West Africa: Parsimonious approaches for domestic rainwater harvesting assessment

    Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.

    2008-10-01

    Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications on the health and economy of the region. Domestic rainwater harvesting (DRWH) is proposed as a potential mechanism for water supply enhancement, especially for the poor urban households in the region, which is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option for unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small roof area systems for many locations in the region.
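
    The selected model class is compact enough to sketch directly; the occurrence chain and mixed-exponential parameters below are invented stand-ins, not the fitted West African values, which would in practice be estimated from station records.

        import numpy as np

        rng = np.random.default_rng(2)
        p_wd, p_ww = 0.25, 0.60      # P(wet | dry yesterday), P(wet | wet yesterday)
        w, mu1, mu2 = 0.7, 4.0, 18.0 # mixed exponential: weight and mean depths (mm)

        def simulate_rainfall(n_days):
            wet, depths = False, []
            for _ in range(n_days):
                wet = rng.random() < (p_ww if wet else p_wd)   # first-order Markov
                if wet:
                    mu = mu1 if rng.random() < w else mu2      # pick mixture branch
                    depths.append(rng.exponential(mu))
                else:
                    depths.append(0.0)
            return np.array(depths)

        year = simulate_rainfall(365)
        print("wet days:", int((year > 0).sum()), "total depth (mm):", round(year.sum(), 1))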

  20. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Hasan, Md Zobaer; Kamil, Anton Abdulbasah; Mustafa, Adli; Baten, Md Azizul

    2012-01-01

    The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in a time-varying environment, whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in a time-invariant situation.

  1. Stochastic frontier model approach for measuring stock market efficiency with different distributions.

    Md Zobaer Hasan

    Full Text Available The stock market is considered essential for economic growth and expected to contribute to improved productivity. An efficient pricing mechanism of the stock market can be a driving force for channeling savings into profitable investments and thus facilitating optimal allocation of capital. This study investigated the technical efficiency of selected groups of companies of the Bangladesh stock market, that is, the Dhaka Stock Exchange (DSE), using the stochastic frontier production function approach. For this, the authors considered the Cobb-Douglas Stochastic frontier in which the technical inefficiency effects are defined by a model with two distributional assumptions. Truncated normal and half-normal distributions were used in the model and both time-variant and time-invariant inefficiency effects were estimated. The results reveal that technical efficiency decreased gradually over the reference period and that truncated normal distribution is preferable to half-normal distribution for technical inefficiency effects. The value of technical efficiency was high for the investment group and low for the bank group, as compared with other groups in the DSE market for both distributions in a time-varying environment, whereas it was high for the investment group but low for the ceramic group as compared with other groups in the DSE market for both distributions in a time-invariant situation.
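
    For illustration, a frontier of this general type can be estimated by maximum likelihood. The sketch below uses the standard cross-sectional half-normal composed-error model, ln y = b0 + b1 ln x + v - u with v ~ N(0, s_v^2) and u ~ |N(0, s_u^2)|, on simulated data; the study's panel specification with time-varying inefficiency is richer.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        n = 500
        lx = rng.normal(2.0, 0.5, n)                    # log input
        v = rng.normal(0.0, 0.2, n)                     # symmetric noise
        u = np.abs(rng.normal(0.0, 0.4, n))             # one-sided inefficiency
        ly = 1.0 + 0.6 * lx + v - u                     # log output

        def negloglik(theta):
            b0, b1, ls_v, ls_u = theta
            s_v, s_u = np.exp(ls_v), np.exp(ls_u)       # keep variances positive
            sigma, lam = np.hypot(s_v, s_u), s_u / s_v
            eps = ly - b0 - b1 * lx
            ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))
            return -ll.sum()

        fit = minimize(negloglik, np.array([0.0, 0.5, -1.0, -1.0]), method="Nelder-Mead")
        print("frontier coefficients:", fit.x[:2].round(2))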

  2. Stochastic approaches for time series forecasting of boron: a case study of Western Turkey.

    Durdu, Omer Faruk

    2010-10-01

    In the present study, seasonal and non-seasonal predictions of boron concentration time series data for the period 1996-2004 from the Büyük Menderes river in western Turkey are addressed by means of linear stochastic models. The methodology presented here is to develop adequate linear stochastic models known as autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) to predict boron content in the Büyük Menderes catchment. Initially, the Box-Whisker plots and Kendall's tau test are used to identify the trends during the study period. The measurement locations do not show a significant overall trend in boron concentrations, though marginal increasing and decreasing trends are observed for certain periods at some locations. The ARIMA modeling approach involves the following three steps: model identification, parameter estimation, and diagnostic checking. In the model identification step, considering the autocorrelation function (ACF) and partial autocorrelation function (PACF) results of the boron data series, different ARIMA models are identified. The model giving the minimum Akaike information criterion (AIC) is selected as the best-fit model. The parameter estimation step indicates that the estimated model parameters are significantly different from zero. The diagnostic check step is applied to the residuals of the selected ARIMA models and the results indicate that the residuals are independent, normally distributed, and homoscedastic. For model validation purposes, the predicted results using the best ARIMA models are compared to the observed data. The predicted data show reasonably good agreement with the actual data. The comparison of the mean and variance of 3-year (2002-2004) observed data vs predicted data from the selected best models shows that the boron model from ARIMA modeling approaches could be used in a safe manner since the predicted values from these models preserve the basic
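
    The identify-estimate-check cycle described above can be sketched with statsmodels; the series below is synthetic (the boron data are not reproduced here), and model identification is reduced to minimizing the AIC over a small candidate grid.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(4)
        t = np.arange(120)
        y = pd.Series(np.cumsum(rng.normal(0.0, 1.0, 120))     # stochastic trend
                      + 2.0 * np.sin(2 * np.pi * t / 12))      # seasonal component

        best = None
        for p in range(3):                                     # candidate AR orders
            for q in range(3):                                 # candidate MA orders
                res = ARIMA(y, order=(p, 1, q)).fit()
                if best is None or res.aic < best[0]:
                    best = (res.aic, (p, 1, q), res)

        aic, order, res = best
        print("selected order:", order, "AIC:", round(aic, 1))
        print(res.forecast(steps=12))                          # 12-step-ahead forecast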

  3. Noether symmetries of a modified model in teleparallel gravity and a new approach for exact solutions

    Tajahmad, Behzad [University of Tabriz, Faculty of Physics, Tabriz (Iran, Islamic Republic of)

    2017-04-15

    In this paper, we present the Noether symmetries of flat FRW spacetime in the context of a new action in teleparallel gravity which we construct based on the f(R) version. This modified action contains a coupling between the scalar field potential and magnetism. Also, we introduce an innovative approach, the beyond Noether symmetry (B.N.S.) approach, for exact solutions which carry more conserved currents than the Noether approach. By data analysis of the exact solutions, obtained from the Noether approach, late-time acceleration and phase crossing are realized, and some deep connections with observational data such as the age of the universe, the present value of the scale factor as well as the state and deceleration parameters are observed. In the B.N.S. approach, we consider the dark energy dominated era. (orig.)

  4. Noether symmetries of a modified model in teleparallel gravity and a new approach for exact solutions

    Tajahmad, Behzad

    2017-01-01

    In this paper, we present the Noether symmetries of flat FRW spacetime in the context of a new action in teleparallel gravity which we construct based on the f(R) version. This modified action contains a coupling between the scalar field potential and magnetism. Also, we introduce an innovative approach, the beyond Noether symmetry (B.N.S.) approach, for exact solutions which carry more conserved currents than the Noether approach. By data analysis of the exact solutions, obtained from the Noether approach, late-time acceleration and phase crossing are realized, and some deep connections with observational data such as the age of the universe, the present value of the scale factor as well as the state and deceleration parameters are observed. In the B.N.S. approach, we consider the dark energy dominated era. (orig.)

  5. A patchwork approach to stochastic simulation: A route towards the analysis of morphology in multiphase systems

    El Ouassini, Ayoub [Ecole Polytechnique de Montreal, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: ayoub.el-ouassini@polymtl.ca; Saucier, Antoine [Ecole Polytechnique de Montreal, departement de mathematiques et de genie industriel, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: antoine.saucier@polymtl.ca; Marcotte, Denis [Ecole Polytechnique de Montreal, departement de genie civil, geologique et minier, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: denis.marcotte@polymtl.ca; Favis, Basil D. [Ecole Polytechnique de Montreal, departement de genie chimique, C.P. 6079, Station centre-ville, Montreal, Que., H3C-3A7 (Canada)], E-mail: basil.favis@polymtl.ca

    2008-04-15

    We propose a new sequential stochastic simulation approach for black and white images in which we focus on the accurate reproduction of the small scale geometry. Our approach aims at reproducing correctly the connectivity properties and the geometry of clusters which are small with respect to a given length scale called block size. Our method is based on the analysis of statistical relationships between adjacent square pieces of image called blocks. We estimate the transition probabilities between adjacent blocks of pixels in a training image. The simulations are constructed by juxtaposing one by one square blocks of pixels, hence the term patchwork simulations. We compare the performance of patchwork simulations with Strebelle's multipoint simulation algorithm on several types of images of increasing complexity. For images composed of clusters which are small with respect to the block size (e.g. squares, discs and sticks), our patchwork approach produces better results than Strebelle's method. The most noticeable improvement is that the cluster geometry is usually reproduced accurately. The accuracy of the patchwork approach is limited primarily by the block size. Clusters which are significantly larger than the block size are usually not reproduced accurately. As an example, we applied this approach to the analysis of a co-continuous polymer blend morphology as derived from an electron microscope micrograph.

  6. A patchwork approach to stochastic simulation: A route towards the analysis of morphology in multiphase systems

    El Ouassini, Ayoub; Saucier, Antoine; Marcotte, Denis; Favis, Basil D.

    2008-01-01

    We propose a new sequential stochastic simulation approach for black and white images in which we focus on the accurate reproduction of the small scale geometry. Our approach aims at reproducing correctly the connectivity properties and the geometry of clusters which are small with respect to a given length scale called block size. Our method is based on the analysis of statistical relationships between adjacent square pieces of image called blocks. We estimate the transition probabilities between adjacent blocks of pixels in a training image. The simulations are constructed by juxtaposing one by one square blocks of pixels, hence the term patchwork simulations. We compare the performance of patchwork simulations with Strebelle's multipoint simulation algorithm on several types of images of increasing complexity. For images composed of clusters which are small with respect to the block size (e.g. squares, discs and sticks), our patchwork approach produces better results than Strebelle's method. The most noticeable improvement is that the cluster geometry is usually reproduced accurately. The accuracy of the patchwork approach is limited primarily by the block size. Clusters which are significantly larger than the block size are usually not reproduced accurately. As an example, we applied this approach to the analysis of a co-continuous polymer blend morphology as derived from an electron microscope micrograph
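
    A drastically simplified sketch of the patchwork idea, assuming conditioning on the left neighbour only (the published method exploits richer neighbourhood statistics and a training image with real structure): empirical transitions between horizontally adjacent BxB blocks are collected, then a strip is simulated block by block.

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(5)
        B = 4
        train = (rng.random((64, 64)) < 0.4).astype(np.uint8)  # stand-in training image

        transitions = defaultdict(list)                        # left block -> right blocks
        for i in range(0, train.shape[0] - B + 1, B):
            for j in range(0, train.shape[1] - 2 * B + 1, B):
                left = train[i:i+B, j:j+B].tobytes()
                transitions[left].append(train[i:i+B, j+B:j+2*B])

        keys = list(transitions.keys())
        def random_block():
            raw = keys[rng.integers(len(keys))]
            return np.frombuffer(raw, dtype=np.uint8).reshape(B, B)

        row = [random_block()]
        for _ in range(9):                                     # juxtapose 10 blocks
            options = transitions.get(row[-1].tobytes())
            row.append(options[rng.integers(len(options))] if options else random_block())
        print(np.hstack(row).shape)                            # a (4, 40) simulated strip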

  7. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    Lean, Hooi Hooi; McAleer, Michael; Wong, Wing-Keung

    2010-01-01

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing in spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  8. Market efficiency of oil spot and futures: A mean-variance and stochastic dominance approach

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, and, Tinbergen Institute (Netherlands); Wong, Wing-Keung, E-mail: awong@hkbu.edu.h [Department of Economics, Hong Kong Baptist University (Hong Kong)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing in spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification.

  9. Market efficiency of oil spot and futures. A mean-variance and stochastic dominance approach

    Lean, Hooi Hooi [Economics Program, School of Social Sciences, Universiti Sains Malaysia (Malaysia); McAleer, Michael [Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam (Netherlands); Wong, Wing-Keung [Department of Economics, Hong Kong Baptist University (China); Tinbergen Institute (Netherlands)

    2010-09-15

    This paper examines the market efficiency of oil spot and futures prices by using both mean-variance (MV) and stochastic dominance (SD) approaches. Based on the West Texas Intermediate crude oil data for the sample period 1989-2008, we find no evidence of any MV and SD relationships between oil spot and futures indices. This implies that there is no arbitrage opportunity between these two markets, spot and futures do not dominate one another, investors are indifferent to investing in spot or futures, and the spot and futures oil markets are efficient and rational. The empirical findings are robust to each sub-period before and after the crises for different crises, and also to portfolio diversification. (author)

  10. A Stochastic Flows Approach for Asset Allocation with Hidden Economic Environment

    Tak Kuen Siu

    2015-01-01

    Full Text Available An optimal asset allocation problem for a quite general class of utility functions is discussed in a simple two-state Markovian regime-switching model, where the appreciation rate of a risky share changes over time according to the state of a hidden economy. As usual, standard filtering theory is used to transform a financial model with hidden information into one with complete information, where a martingale approach is applied to discuss the optimal asset allocation problem. Using a martingale representation coupled with stochastic flows of diffeomorphisms for the filtering equation, the integrand in the martingale representation is identified which gives rise to an optimal portfolio strategy under some differentiability conditions.

  11. SLFP: a stochastic linear fractional programming approach for sustainable waste management.

    Zhu, H; Huang, G H

    2011-12-01

    A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
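
    To make the two ingredients concrete, the toy below replaces a chance constraint with a normally distributed right-hand side by its conservative quantile and linearises the fractional objective (c.x + a0)/(d.x + b0) with the Charnes-Cooper substitution y = t*x, d.y + b0*t = 1. All coefficients are invented; the paper's MSW case study is far larger.

        import numpy as np
        from scipy.optimize import linprog
        from scipy.stats import norm

        c, a0 = np.array([3.0, 5.0]), 1.0        # numerator: system benefit
        d, b0 = np.array([1.0, 2.0]), 4.0        # denominator: system cost
        A = np.array([[1.0, 1.0]])               # waste-flow constraint row
        b_mean, b_sd, p = 10.0, 1.5, 0.95        # random capacity, reliability level
        b_det = norm.ppf(1.0 - p, b_mean, b_sd)  # Pr(A x <= b) >= p  =>  A x <= quantile

        # Charnes-Cooper LP in variables z = (y1, y2, t); linprog minimizes.
        obj = -np.concatenate([c, [a0]])
        A_ub = np.hstack([A, [[-b_det]]])
        A_eq = [np.concatenate([d, [b0]])]
        res = linprog(obj, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * 3)
        x = res.x[:2] / res.x[2]                 # recover the original decision
        print("optimal ratio:", round(-res.fun, 3), "decision:", x.round(3))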

  12. An approach to the drone fleet survivability assessment based on a stochastic continuous-time model

    Kharchenko, Vyacheslav; Fesenko, Herman; Doukas, Nikos

    2017-09-01

    An approach and the algorithm to the drone fleet survivability assessment based on a stochastic continuous-time model are proposed. The input data are the number of the drones, the drone fleet redundancy coefficient, the drone stability and restoration rate, the limit deviation from the norms of the drone fleet recovery, the drone fleet operational availability coefficient, the probability of the drone failure-free operation, and the time needed for performing the required tasks by the drone fleet. Ways for improving the recoverable drone fleet survivability, taking into account amazing factors of system accident, are suggested. Dependencies of the drone fleet survivability rate both on the drone stability and the number of the drones are analysed.

  13. Relating covariant and canonical approaches to triangulated models of quantum gravity

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  14. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless, might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.
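
    The two-phase multi-start strategy itself is generic and easy to sketch; the objective below is a stand-in nonconvex test function, not the heat exchanger network cost model of the paper.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)

        def total_annual_cost(v):                # hypothetical nonconvex surrogate
            x, y = v
            return (x**2 + y - 11)**2 + (x + y**2 - 7)**2 + 0.1 * (x + y)

        # Phase 1: scatter random starts and run cheap, rough local searches.
        starts = rng.uniform(-5.0, 5.0, size=(50, 2))
        rough = [minimize(total_annual_cost, s, method="Nelder-Mead",
                          options={"maxiter": 80}) for s in starts]

        # Phase 2: polish the best candidates with a tighter local method.
        best = sorted(rough, key=lambda r: r.fun)[:5]
        polished = [minimize(total_annual_cost, r.x, method="BFGS") for r in best]
        winner = min(polished, key=lambda r: r.fun)
        print("best minimum found:", winner.x.round(3), round(winner.fun, 4))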

  15. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  16. Monthly gravity field solutions based on GRACE observations generated with the Celestial Mechanics Approach

    Meyer, Ulrich; Jäggi, Adrian; Beutler, Gerhard

    2012-09-01

    The main objective of the Gravity Recovery And Climate Experiment (GRACE) satellite mission consists of determining the temporal variations of the Earth's gravity field. These variations are captured by time series of gravity field models of limited resolution at, e.g., monthly intervals. We present a new time series of monthly models, which was computed with the so-called Celestial Mechanics Approach (CMA), developed at the Astronomical Institute of the University of Bern (AIUB). The secular and seasonal variations in the monthly models are tested for statistical significance. Calibrated errors are derived from inter-annual variations. The time-variable signal can be extracted at least up to degree 60, but the gravity field coefficients of orders above 45 are heavily contaminated by noise. This is why a series of monthly models is computed up to a maximum degree of 60, but only a maximum order of 45. Spectral analysis of the residual time-variable signal shows a distinctive peak at a period of 160 days, which shows up in particular in the C20 spherical harmonic coefficient. Basic filter- and scaling-techniques are introduced to evaluate the monthly models. For this purpose, the variability over the oceans is investigated, which serves as a measure for the noisiness of the models. The models in selected regions show the expected seasonal and secular variations, which are in good agreement with the monthly models of the Helmholtz Centre Potsdam, German Research Centre for Geosciences (GFZ). The results also reveal a few small outliers, illustrating the necessity for improved data screening. Our monthly models are available at the web page of the International Centre for Global Earth Models (ICGEM).

  17. Quantum group structure and local fields in the algebraic approach to 2D gravity

    Schnittger, Jens

    1994-01-01

    This review contains a summary of work by J.-L. Gervais and the author on the operator approach to 2d gravity. Special emphasis is placed on the construction of local observables - the Liouville exponentials and the Liouville field itself - and the underlying algebra of chiral vertex operators. The double quantum group structure arising from the presence of two screening charges is discussed and the generalized algebra and field operators are derived. In the last part, we show that our construction gives rise to a natural definition of a quantum tau function, which is a noncommutative version of the classical group-theoretic representation of the Liouville fields by Leznov and Saveliev.

  18. Distance measurement and wave dispersion in a Liouville-string approach to quantum gravity

    Amelino-Camelia, G; Mavromatos, Nikolaos E; Nanopoulos, Dimitri V

    1997-01-01

    Within a Liouville approach to non-critical string theory, we discuss space-time foam effects on the propagation of low-energy particles. We find an induced frequency-dependent dispersion in the propagation of a wave packet, and observe that this would affect the outcome of measurements involving low-energy particles as probes. In particular, the maximum possible order of magnitude of the space-time foam effects would give rise to an error in the measurement of distance comparable to that independently obtained in some recent heuristic quantum-gravity analyses. We also briefly compare these error estimates with the precision of astrophysical measurements.

  19. Fast image reconstruction for Compton camera using stochastic origin ensemble approach.

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2011-01-01

    The Compton camera has been proposed as a potential imaging tool in astronomy, industry, homeland security, and medical diagnostics. Due to the inherent geometrical complexity of Compton camera data, image reconstruction of distributed sources can be ineffective and/or time-consuming when using standard techniques such as filtered backprojection or maximum likelihood-expectation maximization (ML-EM). In this article, the authors demonstrate a fast reconstruction of Compton camera data using a novel stochastic origin ensembles (SOE) approach based on Markov chains. During image reconstruction, the origins of the measured events are randomly assigned to locations on conical surfaces, which are the Compton camera analogs of lines of response in PET. Therefore, the image is defined as an ensemble of origin locations of all possible event origins. During the course of reconstruction, the origins of events are stochastically moved and the acceptance of the new event origin is determined by the predefined acceptance probability, which is proportional to the change in event density. For example, if the event density at the new location is higher than in the previous location, the new position is always accepted. After several iterations, the reconstructed distribution of origins converges to a quasistationary state which can be voxelized and displayed. Comparison with the list-mode ML-EM reveals that the postfiltered SOE algorithm has similar performance in terms of image quality while clearly outperforming ML-EM in relation to reconstruction time. In this study, the authors have implemented and tested a new image reconstruction algorithm for the Compton camera based on the stochastic origin ensembles with Markov chains. The algorithm uses list-mode data, is parallelizable, and can be used for any Compton camera geometry. The SOE algorithm clearly outperforms list-mode ML-EM for simple Compton camera geometry in terms of reconstruction time. The difference in computational time
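
    A one-dimensional toy of the SOE mechanics, with an interval of possible origins standing in for a Compton cone: each event's origin is re-proposed uniformly within its ambiguity interval and accepted with a probability set by the destination-to-source density ratio, so the ensemble relaxes toward a self-consistent image. Geometry and acceptance details are heavily simplified.

        import numpy as np

        rng = np.random.default_rng(7)
        n_events, n_sweeps = 2000, 100
        true_src = rng.normal(25.0, 2.0, n_events)           # hidden source positions
        half = rng.uniform(5.0, 15.0, n_events)              # per-event ambiguity
        lo = np.clip(true_src - half, 0.0, 50.0)
        hi = np.clip(true_src + half, 0.0, 50.0)

        origins = rng.uniform(lo, hi)                        # random initial origins
        bins = np.clip(origins, 0.0, 49.999).astype(int)     # unit-width image bins
        counts = np.bincount(bins, minlength=50)

        for _ in range(n_sweeps):
            for k in rng.integers(0, n_events, n_events):
                new = rng.uniform(lo[k], hi[k])
                b_old, b_new = bins[k], int(min(new, 49.999))
                # accept with probability min(1, density(new)/density(old))
                if b_new == b_old or rng.random() * counts[b_old] < counts[b_new] + 1:
                    counts[b_old] -= 1
                    counts[b_new] += 1
                    origins[k], bins[k] = new, b_new

        print("reconstructed peak near x =", counts.argmax())  # true source sits at 25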

  20. Dynamic-Programming Approaches to Single- and Multi-Stage Stochastic Knapsack Problems for Portfolio Optimization

    Khoo, Wai

    1999-01-01

    .... These problems model stochastic portfolio optimization problems (SPOPs) which assume deterministic unit weight, and normally distributed unit return with known mean and variance for each item type...

  1. A real-space stochastic density matrix approach for density functional electronic structure.

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.

  2. A DG approach to the numerical solution of the Stein-Stein stochastic volatility option pricing model

    Hozman, J.; Tichý, T.

    2017-12-01

    Stochastic volatility models make it possible to capture the real-world features of options better than the classical Black-Scholes treatment. Here we focus on the pricing of European-style options under the Stein-Stein stochastic volatility model, when the option value depends on the time, on the price of the underlying asset, and on the volatility as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation of two spatial variables completed by the system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
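
    The abstract's pricing is done with a DG solver; as an independent, much simpler cross-check one can Monte Carlo the same Stein-Stein dynamics, dS = r S dt + |v| S dW1 and dv = kappa (theta - v) dt + xi dW2 with corr(dW1, dW2) = rho, using a log-Euler step for the asset. All parameter values below are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        S0, K, r, T = 100.0, 100.0, 0.02, 1.0
        kappa, theta, xi, rho, v0 = 4.0, 0.2, 0.1, -0.5, 0.2
        n_steps, n_paths = 250, 50_000
        dt = T / n_steps

        S = np.full(n_paths, S0)
        v = np.full(n_paths, v0)
        for _ in range(n_steps):
            z1 = rng.normal(size=n_paths)
            z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.normal(size=n_paths)
            S *= np.exp((r - 0.5 * v**2) * dt + np.abs(v) * np.sqrt(dt) * z1)
            v += kappa * (theta - v) * dt + xi * np.sqrt(dt) * z2

        call = np.exp(-r * T) * np.maximum(S - K, 0.0).mean()
        print("European call estimate:", round(call, 3))      # discounted payoff mean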

  3. A Stochastic Programming Approach for a Multi-Site Supply Chain Planning in Textile and Apparel Industry under Demand Uncertainty

    Houssem Felfel

    2015-11-01

    Full Text Available In this study, a new stochastic model is proposed to deal with a multi-product, multi-period, multi-stage, multi-site production and transportation supply chain planning problem under demand uncertainty. A two-stage stochastic linear programming approach is used to maximize the expected profit. Decisions such as the production amount, the inventory level of finished and semi-finished products, the amount of backorder, and the quantity of products to be transported between upstream and downstream plants in each period are considered. The robustness of the production supply chain plan is then evaluated using statistical and risk measures. A case study from a real textile and apparel industry is shown in order to compare the performances of the proposed stochastic programming model and the deterministic model.

  4. New 3D Gravity Model of the Lithosphere and new Approach of the Gravity Field Transformation in the Western Carpathian-Pannonian Region

    Bielik, M.; Tasarova, Z. A.; Goetze, H.; Mikuska, J.; Pasteka, R.

    2007-12-01

    The 3-D forward modeling was performed for the Western Carpathians and the Pannonian Basin system. The density model includes 31 cross-sections and extends to a depth of 220 km. By means of the combined 3-D modeling, new estimates of the density distribution of the crust and upper mantle, as well as depths of the Moho, were derived. These data allowed us to perform gravity stripping, which in the area of the Pannonian Basin is crucial for the signal analysis of the gravity field. In this region, namely, two pronounced features (i.e. the deep sedimentary basins and shallow Moho) with opposite gravity effects make it impossible to analyze the Bouguer anomaly by field separation or filtering. The results revealed a significantly different nature of the Western Carpathian-Pannonian region (ALCAPA and Tisza-Dacia microplates) from the European Platform lithosphere (i.e. these microplates are much less dense than the surrounding European Platform lithosphere). The calculation of the transformed gravity maps by means of the new method provided additional information on the lithospheric structure. The use of existing elevation information represents an independent approach to the problem of transformation of gravity maps. Instead of standard separation and transformation methods, both in the wave-number and spatial domains, this method is based on estimating really existing linear trends within the values of complete Bouguer anomalies (CBA), which are understood as a function defined in 3D space. An important assumption that the points with known input values of CBA lie on a horizontal plane is therefore not required. Instead, the points with known CBA and elevation values are treated in their original positions, i.e. on the Earth surface.

  5. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and thereby the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear-scaling GW calculations, breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_e > 3000 electrons.

  6. A simplified BBGKY hierarchy for correlated fermions from a stochastic mean-field approach

    Lacroix, Denis; Tanimura, Yusuke; Ayik, Sakir; Yilmaz, Bulent

    2016-01-01

    The stochastic mean-field (SMF) approach allows one to treat correlations beyond the mean field using a set of independent mean-field trajectories with an appropriate choice of fluctuating initial conditions. We show here that this approach is equivalent to a simplified version of the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy between one-, two-, ..., N-body degrees of freedom. In this simplified version, one-body degrees of freedom are coupled to fluctuations to all orders, while retaining only specific terms of the general BBGKY hierarchy. The use of the simplified BBGKY is illustrated with the Lipkin-Meshkov-Glick (LMG) model. We show that a truncated version of this hierarchy can be useful, as an alternative to the SMF, especially in the weak coupling regime, to gain physical insight into effects beyond the mean field. In particular, it leads to approximate analytical expressions for the quantum fluctuations in both the weak and strong coupling regimes. In the strong coupling regime, it can only be used for short time evolution. In that case, it gives information on the evolution time-scale close to a saddle point associated with a quantum phase transition. For long time evolution and strong coupling, we observed that the simplified BBGKY hierarchy cannot be truncated and only the full SMF with initial sampling leads to reasonable results. (orig.)

  7. A Stochastic Approach for Blurred Image Restoration and Optical Flow Computation on Field Image Sequence

    高文; 陈熙霖

    1997-01-01

    The blur in target images caused by camera vibration due to robot motion or hand shaking, and by object(s) moving in the background scene, is difficult to deal with in computer vision systems. In this paper, the authors study the relation model between motion and blur in the case of object motion existing in video image sequences, and work on a practical computation algorithm for both motion analysis and blurred image restoration. Combining the general optical flow and stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. On the other hand, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iteration algorithm and the obtained motion velocity are used. The experiment shows that the proposed approach for both motion velocity computation and blurred image restoration works well.

  8. A hybrid stochastic approach for self-location of wireless sensors in indoor environments.

    Lloret, Jaime; Tomas, Jesus; Garcia, Miguel; Canovas, Alejandro

    2009-01-01

    Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed using WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the position of the radio-stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors could determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without a loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.

  9. A Hybrid Stochastic Approach for Self-Location of Wireless Sensors in Indoor Environments

    Alejandro Canovas

    2009-05-01

    Full Text Available Indoor location systems, especially those using wireless sensor networks, are used in many application areas. While the need for these systems is widely proven, there is a clear lack of accuracy. Many of the implemented applications have high errors in their location estimation because of the issues arising in the indoor environment. Two different approaches have been proposed using WLAN location systems: on the one hand, the so-called deductive methods take into account the physical properties of signal propagation. These systems require a propagation model, an environment map, and the position of the radio-stations. On the other hand, the so-called inductive methods require a previous training phase where the system learns the received signal strength (RSS) in each location. This phase can be very time consuming. This paper proposes a new stochastic approach which is based on a combination of deductive and inductive methods whereby wireless sensors could determine their positions using WLAN technology inside a floor of a building. Our goal is to reduce the training phase in an indoor environment, but without a loss of precision. Finally, we compare the measurements taken using our proposed method in a real environment with the measurements taken by other developed systems. Comparisons between the proposed system and other hybrid methods are also provided.
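
    One way to picture the deductive/inductive mix, under an invented geometry: a log-distance path-loss model, RSS = P0 - 10 n log10(d), supplies the deductive physics, a handful of labelled fingerprints are used only to calibrate its two constants (shortening the survey phase), and positions are then estimated by matching predicted to measured RSS. AP layout, constants and noise level are all assumptions.

        import numpy as np

        rng = np.random.default_rng(9)
        aps = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])   # AP coordinates (m)

        def rss_model(pos, P0, n):
            d = np.linalg.norm(aps - pos, axis=1)
            return P0 - 10.0 * n * np.log10(np.maximum(d, 0.1))

        # Inductive step: calibrate (P0, n) from three labelled fingerprints.
        train_pts = np.array([[5.0, 5.0], [15.0, 5.0], [10.0, 10.0]])
        train_rss = np.array([rss_model(p, -30.0, 2.7) for p in train_pts])
        train_rss += rng.normal(0.0, 1.0, train_rss.shape)        # measurement noise
        grid = [(P0, n) for P0 in np.arange(-40.0, -20.0, 0.5)
                        for n in np.arange(2.0, 4.0, 0.05)]
        P0, n = min(grid, key=lambda pn: sum(
            np.sum((rss_model(p, *pn) - r) ** 2)
            for p, r in zip(train_pts, train_rss)))

        # Deductive step: locate a sensor from a fresh reading by grid search.
        reading = rss_model(np.array([12.0, 7.0]), -30.0, 2.7)
        xs, ys = np.meshgrid(np.linspace(0, 20, 81), np.linspace(0, 15, 61))
        err = [np.sum((rss_model(np.array([x, y]), P0, n) - reading) ** 2)
               for x, y in zip(xs.ravel(), ys.ravel())]
        best = int(np.argmin(err))
        print("estimated position:", xs.ravel()[best], ys.ravel()[best])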

  10. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries

  11. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Spill, Fabian, E-mail: fspill@bu.edu [Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Guerrero, Pilar [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Alarcon, Tomas [Centre de Recerca Matematica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona) (Spain); Departament de Matemàtiques, Universitat Autònoma de Barcelona, 08193 Bellaterra (Barcelona) (Spain); Maini, Philip K. [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Byrne, Helen [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Computational Biology Group, Department of Computer Science, University of Oxford, Oxford OX1 3QD (United Kingdom)]

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  12. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  13. On a new approach for constructing wormholes in Einstein-Born-Infeld gravity

    Kim, Jin Young [Kunsan National University, Department of Physics, Kunsan (Korea, Republic of); Park, Mu-In [Sogang University, Research Institute for Basic Science, Seoul (Korea, Republic of)

    2016-11-15

    We study a new approach for the wormhole construction in Einstein-Born-Infeld gravity, which does not require exotic matter in the Einstein equation. The Born-Infeld field equation is not modified by coordinate independent conditions of continuous metric tensor and its derivatives, even though the Born-Infeld fields have discontinuities in their derivatives at the throat in general. We study the relation of the newly introduced conditions with the usual continuity equation for the energy-momentum tensor and the gravitational Bianchi identity. We find that there is no violation of energy conditions for the Born-Infeld fields, contrary to the usual approaches. The exoticity of the energy-momentum tensor is not essential for sustaining wormholes. Some open problems are discussed. (orig.)

  14. A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach for contaminated sites management

    Hu, Yan; Wen, Jing-ya; Li, Xiao-li; Wang, Da-zhou; Li, Yu

    2013-01-01

    Highlights: • Using interval mathematics to describe spatial and temporal variability and parameter uncertainty. • Using fuzzy theory to quantify variability of environmental guideline values. • Using probabilistic approach to integrate interval concentrations and fuzzy environmental guideline. • Establishment of dynamic multimedia environmental integrated risk assessment framework. -- Abstract: A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach was developed for contaminated sites management. The contaminant concentrations were simulated by a validated interval dynamic multimedia fugacity model, and different guideline values for the same contaminant were represented as a fuzzy environmental guideline. Then, the probability of violating environmental guideline (Pv) can be determined by comparison between the modeled concentrations and the fuzzy environmental guideline, and the constructed relationship between the Pvs and environmental risk levels was used to assess the environmental risk level. The developed approach was applied to assess the integrated environmental risk at a case study site in China, simulated from 1985 to 2020. Four scenarios were analyzed, including “residential land” and “industrial land” environmental guidelines under “strict” and “loose” strictness. It was found that PAH concentrations will increase steadily over time, with soil found to be the dominant sink. Source emission in soil was the leading input and atmospheric sedimentation was the dominant transfer process. The integrated environmental risks primarily resulted from petroleum spills and coke ovens, while the soil environmental risks came from coal combustion. The developed approach offers an effective tool for quantifying variability and uncertainty in the dynamic multimedia integrated environmental risk assessment and the contaminated site management.
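    A toy illustration of the Pv computation described above, with invented numbers: the modeled interval concentration is sampled uniformly (an assumption) and compared against a triangular fuzzy guideline whose membership function is treated, purely for this sketch, as if it defined a distribution of guideline values.

    ```python
    import numpy as np

    def guideline_below(x, low=5.0, mode=10.0, high=15.0):
        """Chance that the (triangular) fuzzy guideline lies below concentration x;
        the [0, 0.5, 1] mapping is a crude stand-in for a proper defuzzification."""
        return np.clip(np.interp(x, [low, mode, high], [0.0, 0.5, 1.0]), 0.0, 1.0)

    def prob_violation(conc_low, conc_high, samples=10_000, seed=1):
        """Pv: sample the interval concentration (uniformly, an assumption) and
        average the chance that each sample exceeds the fuzzy guideline."""
        c = np.random.default_rng(seed).uniform(conc_low, conc_high, samples)
        return guideline_below(c).mean()

    print(prob_violation(8.0, 14.0))   # Pv for a modeled interval of 8-14 (units arbitrary)
    ```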

  15. A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach for contaminated sites management

    Hu, Yan; Wen, Jing-ya; Li, Xiao-li; Wang, Da-zhou; Li, Yu, E-mail: liyuxx8@hotmail.com

    2013-10-15

    Highlights: • Using interval mathematics to describe spatial and temporal variability and parameter uncertainty. • Using fuzzy theory to quantify variability of environmental guideline values. • Using probabilistic approach to integrate interval concentrations and fuzzy environmental guideline. • Establishment of dynamic multimedia environmental integrated risk assessment framework. -- Abstract: A dynamic multimedia fuzzy-stochastic integrated environmental risk assessment approach was developed for contaminated sites management. The contaminant concentrations were simulated by a validated interval dynamic multimedia fugacity model, and different guideline values for the same contaminant were represented as a fuzzy environmental guideline. Then, the probability of violating environmental guideline (Pv) can be determined by comparison between the modeled concentrations and the fuzzy environmental guideline, and the constructed relationship between the Pvs and environmental risk levels was used to assess the environmental risk level. The developed approach was applied to assess the integrated environmental risk at a case study site in China, simulated from 1985 to 2020. Four scenarios were analyzed, including “residential land” and “industrial land” environmental guidelines under “strict” and “loose” strictness. It was found that PAH concentrations will increase steadily over time, with soil found to be the dominant sink. Source emission in soil was the leading input and atmospheric sedimentation was the dominant transfer process. The integrated environmental risks primarily resulted from petroleum spills and coke ovens, while the soil environmental risks came from coal combustion. The developed approach offers an effective tool for quantifying variability and uncertainty in the dynamic multimedia integrated environmental risk assessment and the contaminated site management.

  16. Advanced Computational Approaches for Characterizing Stochastic Cellular Responses to Low Dose, Low Dose Rate Exposures

    Scott, Bobby, R., Ph.D.

    2003-06-27

    OAK - B135 This project final report summarizes modeling research conducted in the U.S. Department of Energy (DOE), Low Dose Radiation Research Program at the Lovelace Respiratory Research Institute from October 1998 through June 2003. The modeling research described involves critically evaluating the validity of the linear nonthreshold (LNT) risk model as it relates to stochastic effects induced in cells by low doses of ionizing radiation and genotoxic chemicals. The LNT model plays a central role in low-dose risk assessment for humans. With the LNT model, any radiation (or genotoxic chemical) exposure is assumed to increase one's risk of cancer. Based on the LNT model, others have predicted tens of thousands of cancer deaths related to environmental exposure to radioactive material from nuclear accidents (e.g., Chernobyl) and fallout from nuclear weapons testing. Our research has focused on developing biologically based models that explain the shape of dose-response curves for low-dose radiation and genotoxic chemical-induced stochastic effects in cells. Understanding the shape of the dose-response curve for radiation and genotoxic chemical-induced stochastic effects in cells helps to better understand the shape of the dose-response curve for cancer induction in humans. We have used a modeling approach that facilitated model revisions over time, allowing for timely incorporation of new knowledge gained related to the biological basis for low-dose-induced stochastic effects in cells. Both deleterious (e.g., genomic instability, mutations, and neoplastic transformation) and protective (e.g., DNA repair and apoptosis) effects have been included in our modeling. Our most advanced model, NEOTRANS2, involves differing levels of genomic instability. Persistent genomic instability is presumed to be associated with nonspecific, nonlethal mutations and to increase both the risk for neoplastic transformation and for cancer occurrence. Our research results, based on

  17. Cosmology of f(R) gravity in the metric variational approach

    Li, Baojiu; Barrow, John D.

    2007-04-01

    We consider the cosmologies that arise in a subclass of f(R) gravity with f(R) = R + μ^(2n+2)/(−R)^n and n ∈ (−1, 0) in the metric (as opposed to the Palatini) variational approach to deriving the gravitational field equations. The calculations of the isotropic and homogeneous cosmological models are undertaken in the Jordan frame and at both the background and the perturbation levels. For the former, we also discuss the connection to the Einstein frame in which the extra degree of freedom in the theory is associated with a scalar field sharing some of the properties of a “chameleon” field. For the latter, we derive the cosmological perturbation equations in general theories of f(R) gravity in covariant form and implement them numerically to calculate the cosmic microwave background (CMB) temperature and matter power spectra of the cosmological model. The CMB power is shown to reduce at low l’s, and the matter power spectrum is almost scale independent at small scales, thus having a similar shape to that in standard general relativity. These are in stark contrast with what was found in the Palatini f(R) gravity, where the CMB power is largely amplified at low l’s and the matter spectrum is strongly scale dependent at small scales. These features make the present model more adaptable than that arising from the Palatini f(R) field equations, and none of the data on background evolution, CMB power spectrum, or matter power spectrum currently rule it out.
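    For reference, the field equations obtained from the metric variation of a general f(R) action take the standard form below (conventional notation; the paper specialises them to the f(R) given above):

    ```latex
    % Metric-variation field equations for a Lagrangian density f(R):
    f'(R)\,R_{\mu\nu} - \tfrac{1}{2} f(R)\, g_{\mu\nu}
      - \left(\nabla_\mu \nabla_\nu - g_{\mu\nu}\,\Box\right) f'(R)
      = 8\pi G\, T_{\mu\nu}
    ```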

  18. Group theory approach to unification of gravity with internal symmetry gauge interactions. Part 1

    Samokhvalov, S.E.; Vanyashin, V.S.

    1990-12-01

    The infinite group of deformed diffeomorphisms of the space-time continuum is taken as the basis of the Gauge Theory of Gravity. This gives rise to some new ways of unifying gravity with other gauge interactions. (author). 7 refs

  19. Sensitivity of Base-Isolated Systems to Ground Motion Characteristics: A Stochastic Approach

    Kaya, Yavuz; Safak, Erdal

    2008-01-01

    Base isolators dissipate energy through their nonlinear behavior when subjected to earthquake-induced loads. A widely used base isolation system for structures involves installing lead-rubber bearings (LRB) at the foundation level. The force-deformation behavior of LRB isolators can be modeled by a bilinear hysteretic model. This paper investigates the effects of ground motion characteristics on the response of bilinear hysteretic oscillators by using a stochastic approach. Ground shaking is characterized by its power spectral density function (PSDF), which includes corner frequency, seismic moment, moment magnitude, and site effects as its parameters. The PSDF of the oscillator response is calculated by using the equivalent-linearization techniques of random vibration theory for hysteretic nonlinear systems. Knowing the PSDF of the response, we can calculate the mean square and the expected maximum response spectra for a range of natural periods and ductility values. The results show that moment magnitude is a critical factor determining the response. Site effects do not seem to have a significant influence.
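    The random-vibration step the abstract describes can be summarised by the standard equivalent-linearization relations below (our notation; the calibration of the equivalent frequency and damping to the bilinear LRB model is the paper's contribution, not this sketch):

    ```latex
    % Response PSDF of the equivalent linear oscillator under ground-motion
    % input with PSDF S_g(\omega); \sigma_x^2 gives the mean-square response:
    S_x(\omega) = |H(\omega)|^2 \, S_g(\omega), \qquad
    H(\omega) = \frac{1}{\omega_{eq}^2 - \omega^2 + 2\, i\, \zeta_{eq}\, \omega_{eq}\, \omega},
    \qquad
    \sigma_x^2 = \int_{-\infty}^{\infty} S_x(\omega)\, \mathrm{d}\omega
    ```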

  20. Studying the intervention of an unusual term in f(T) gravity via the Noether symmetry approach. On a new term for gravity actions

    Tajahmad, Behzad [University of Tabriz, Faculty of Physics, Tabriz (Iran, Islamic Republic of)

    2017-08-15

    As has been done before, we study an unknown coupling function, i.e. F(φ), together with a function of torsion and also curvature, i.e. f(T) and f(R), generally depending upon a scalar field. In the f(R) case, it comes from quantum correlations and other sources. Now, what if, besides this term in the f(T) gravity context, we enhance the action with another term which depends upon both the scalar field and its derivatives? In this paper, we have added such an unprecedented term to the generic common action of f(T) gravity such that, in this new term, an unknown function of torsion is coupled with an unknown function of both the scalar field and its derivatives. We explain in detail why we can append such a term. By the Noether symmetry approach, we consider its behavior and effect. We show that it does not produce an anomaly, but rather it works successfully, and numerical analysis of the exact solutions of the field equations coincides with the most important observational data, particularly late-time accelerated expansion. So, this new term may be added to the gravitational actions of f(T) gravity. (orig.)

  1. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    Cotter, Simon L.; Zygalakis, Konstantinos C.; Kevrekidis, Ioannis G.; Erban, Radek

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem.

  2. A chance-constrained stochastic approach to intermodal container routing problems.

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
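    To make the chance constraints concrete: if a travel time t were normally distributed (an assumption made here only for illustration), a constraint of the stated type has a familiar deterministic equivalent,

    ```latex
    % Chance constraint and its deterministic equivalent for t ~ N(\mu, \sigma^2):
    \Pr\left(t \le T\right) \ge \alpha
    \quad\Longleftrightarrow\quad
    \mu + \Phi^{-1}(\alpha)\,\sigma \le T
    ```

    where Φ⁻¹ is the standard normal quantile; the paper's hybrid heuristic handles the general, not-necessarily-normal case.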

  3. A new approach to the analysis of the phase space of f(R)-gravity

    Carloni, S., E-mail: sante.carloni@tecnico.ulisboa.pt [Centro Multidisciplinar de Astrofisica—CENTRA, Instituto Superior Tecnico – IST, Universidade de Lisboa – UL, Avenida Rovisco Pais 1, 1049-001 (Portugal)

    2015-09-01

    We propose a new dynamical system formalism for the analysis of f(R) cosmologies. The new approach eliminates the need for cumbersome inversions to close the dynamical system and allows the analysis of the phase space of f(R)-gravity models which cannot be investigated using the standard technique. Differently from previously proposed similar techniques, the new method is constructed in such a way as to associate with the fixed points scale factors which contain four integration constants (i.e. solutions of fourth order differential equations). In this way new light is shed on the physical meaning of the fixed points. We apply this technique to some f(R) Lagrangians relevant for inflationary and dark energy models.

  4. Quantum group structure and local fields in the algebraic approach to 2D gravity

    Schnittger, J.

    1995-07-01

    This review contains a summary of the work by J.-L. Gervais and the author on the operator approach to 2d gravity. Special emphasis is placed on the construction of local observables — the Liouville exponentials and the Liouville field itself — and the underlying algebra of chiral vertex operators. The double quantum group structure arising from the presence of two screening charges is discussed and the generalized algebra and field operators are derived. In the last part, we show that our construction gives rise to a natural definition of a quantum tau function, which is a noncommutative version of the classical group-theoretic representation of the Liouville fields by Leznov and Saveliev.

  5. Edwards' approach to horizontal and vertical segregation in a mixture of hard spheres under gravity

    Fierro, Annalisa; Nicodemi, Mario; Coniglio, Antonio

    2003-01-01

    We study the phenomenon of size segregation, observed in models of vibrated granular mixtures such as powders or sand. This consists of the de-mixing of the different components of the system under shaking. Several mechanisms have been proposed to explain this phenomenon. However, the criteria for predicting segregation in a mixture, an issue of great practical importance, are largely unknown. In the present paper we study a binary hard-sphere mixture under gravity on a three-dimensional lattice using Monte Carlo simulations. The vertical and horizontal segregation observed during the tap dynamics is interpreted in the framework of a statistical mechanics approach to granular media in the manner of Edwards. A phase diagram for the vertical segregation is derived, and compared with the simulation data.

  6. Cloud's Center of Gravity – a compact approach to analyze convective cloud development

    I. Koren

    2009-01-01

    Full Text Available As cloud resolving models become more detailed, with higher resolution outputs, it is often complicated to isolate the physical processes that control the cloud attributes. Moreover, due to the high dimensionality and complexity of the model output, the analysis and interpretation of the results can be very complicated. Here we suggest a novel approach to convective cloud analysis that yields more insight into the physical and temporal evolution of clouds, and is compact and efficient. The different (3-D) cloud attributes are weighted and projected onto a single point in space and in time that has properties of, or similar to, the Center Of Gravity (COG). The location, magnitude and spread of this variable are followed in time. The implications of the COG approach are demonstrated for a study of aerosol effects on a warm convective cloud. We show that in addition to reducing dramatically the dimensionality of the output, such an approach often enhances the signal, adds more information, and makes the physical description of cloud evolution clearer, allowing unambiguous comparison of clouds evolving in different environmental conditions. This approach may also be useful for analysis of cloud data retrieved from surface or space-based cloud radars.

  7. A stochastic approach for model reduction and memory function design in hydrogeophysical inversion

    Hou, Z.; Kellogg, A.; Terry, N.

    2009-12-01

    Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually a serious problem. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to get the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the

  8. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  9. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    Spill, Fabian

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  10. Hybrid approaches for multiple-species stochastic reaction-diffusion models.

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K; Byrne, Helen

    2015-01-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.

  11. A multivariate and stochastic approach to identify key variables to rank dairy farms on profitability.

    Atzori, A S; Tedeschi, L O; Cannas, A

    2013-05-01

    The economic efficiency of dairy farms is the main goal of farmers. The objective of this work was to use routinely available information at the dairy farm level to develop an index of profitability to rank dairy farms and to assist the decision-making process of farmers to increase the economic efficiency of the entire system. A stochastic modeling approach was used to study the relationships between inputs and profitability (i.e., income over feed cost; IOFC) of dairy cattle farms. The IOFC was calculated as: milk revenue + value of male calves + culling revenue - herd feed costs. Two databases were created. The first one was a development database, which was created from technical and economic variables collected in 135 dairy farms. The second one was a synthetic database (sDB) created from 5,000 synthetic dairy farms using the Monte Carlo technique and based on the characteristics of the development database data. The sDB was used to develop a ranking index as follows: (1) principal component analysis (PCA), excluding IOFC, was used to identify principal components (sPC); and (2) coefficient estimates of a multiple regression of the IOFC on the sPC were obtained. Then, the eigenvectors of the sPC were used to compute the principal component values for the original 135 dairy farms that were used with the multiple regression coefficient estimates to predict IOFC (dRI; ranking index from development database). The dRI was used to rank the original 135 dairy farms. The PCA explained 77.6% of the sDB variability and 4 sPC were selected. The sPC were associated with herd profile, milk quality and payment, poor management, and reproduction based on the significant variables of the sPC. The mean IOFC in the sDB was 0.1377 ± 0.0162 euros per liter of milk (€/L). The dRI explained 81% of the variability of the IOFC calculated for the 135 original farms. When the number of farms below and above 1 standard deviation (SD) of the dRI were calculated, we found that 21
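    A schematic of the two-step construction described above (PCA on the synthetic farms, regression of IOFC on the components, then scoring the real farms with the fitted coefficients) might look as follows; all data here are random stand-ins for the farm variables, and scikit-learn is assumed to be available.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    # Random stand-ins for the 5,000 synthetic (sDB) farms and the 135 real farms.
    X_syn = rng.normal(size=(5000, 8))
    iofc_syn = X_syn[:, :4] @ np.array([0.02, 0.01, -0.015, 0.005]) \
               + rng.normal(0.0, 0.01, 5000)
    X_real = rng.normal(size=(135, 8))

    pca = PCA(n_components=4).fit(X_syn)                          # step 1: components (sPC)
    reg = LinearRegression().fit(pca.transform(X_syn), iofc_syn)  # step 2: IOFC on sPC
    dRI = reg.predict(pca.transform(X_real))                      # ranking index for real farms

    print(np.argsort(dRI)[::-1][:10])                             # ten best-ranked farms
    ```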

  12. A stochastic approach for the description of the water balance dynamics in a river basin

    S. Manfreda

    2008-09-01

    Full Text Available The present paper introduces an analytical approach for the description of the soil water balance dynamics over a schematic river basin. The model is based on a stochastic differential equation where the rainfall forcing is interpreted as an additive noise in the soil water balance. This equation can be solved by assuming that the spatial distribution of soil moisture over the basin is known, transforming the two-dimensional problem in space into a one-dimensional one. This assumption holds particularly well in humid and semi-humid environments, where spatial redistribution becomes dominant and produces a well defined soil moisture pattern. The model allows one to derive the probability density function of the saturated portion of a basin and of its relative saturation. The theory is based on the assumption that the soil water storage capacity varies across the basin following a parabolic distribution and that the basin has homogeneous soil texture and vegetation cover. The methodology outlines the role played by the distribution of soil water storage capacity in the basin's soil water balance. In particular, the resulting probability density functions of the relative basin saturation were found to be strongly controlled by the maximum water storage capacity of the basin, while the probability density functions of the relative saturated portion of the basin are strongly influenced by the spatial heterogeneity of the soil water storage capacity. Moreover, the saturated areas reach their maximum variability when the mean rainfall rate is almost equal to the soil water loss coefficient, given by the sum of the maximum rate of evapotranspiration and the leakage loss in the soil water balance. The model was tested against the results of a continuous numerical simulation performed with a semi-distributed model in order to validate the proposed theoretical distributions.
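    Schematically, and in our own notation rather than the paper's, the balance equation has the form

    ```latex
    % Soil water balance with rainfall as additive (marked Poisson) noise:
    \frac{\mathrm{d}w}{\mathrm{d}t} = \xi(t) - \rho(w)
    ```

    with ξ(t) the rainfall forcing treated as additive noise from a marked Poisson process of storm arrivals, and ρ(w) the evapotranspiration-plus-leakage loss; the stationary probability density of w then follows from the master equation associated with this jump process.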

  13. Effects of extrinsic mortality on the evolution of aging: a stochastic modeling approach.

    Maxim Nikolaievich Shokhirev

    Full Text Available The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both

  14. Cost and technical efficiency of physician practices: a stochastic frontier approach using panel data.

    Heimeshoff, Mareike; Schreyögg, Jonas; Kwietniewski, Lukas

    2014-06-01

    This is the first study to use stochastic frontier analysis to estimate both the technical and cost efficiency of physician practices. The analysis is based on panel data from 3,126 physician practices for the years 2006 through 2008. We specified the technical and cost frontiers as translog function, using the one-step approach of Battese and Coelli to detect factors that influence the efficiency of general practitioners and specialists. Variables that were not analyzed previously in this context (e.g., the degree of practice specialization) and a range of control variables such as a patients' case-mix were included in the estimation. Our results suggest that it is important to investigate both technical and cost efficiency, as results may depend on the type of efficiency analyzed. For example, the technical efficiency of group practices was significantly higher than that of solo practices, whereas the results for cost efficiency differed. This may be due to indivisibilities in expensive technical equipment, which can lead to different types of health care services being provided by different practice types (i.e., with group practices using more expensive inputs, leading to higher costs per case despite these practices being technically more efficient). Other practice characteristics such as participation in disease management programs show the same impact throughout both cost and technical efficiency: participation in disease management programs led to an increase in both, technical and cost efficiency, and may also have had positive effects on the quality of care. Future studies should take quality-related issues into account.

  15. Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach

    Ferrari, Giorgio; Riedel, Frank; Steg, Jan-Henrik

    2017-01-01

    In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner’s optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.

  16. Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach

    Ferrari, Giorgio, E-mail: giorgio.ferrari@uni-bielefeld.de; Riedel, Frank, E-mail: frank.riedel@uni-bielefeld.de; Steg, Jan-Henrik, E-mail: jsteg@uni-bielefeld.de [Bielefeld University, Center for Mathematical Economics (Germany)

    2017-06-15

    In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner’s optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.

  17. Elitism and Stochastic Dominance

    Bazen, Stephen; Moyes, Patrick

    2011-01-01

    Stochastic dominance has typically been used with a special emphasis on risk and inequality reduction, something captured by the concavity of the utility function in the expected utility model. We claim that the applicability of the stochastic dominance approach goes far beyond risk and inequality measurement, provided suitable adaptations are made. In this paper we apply the stochastic dominance approach to the measurement of elitism, which may be considered the opposite of egalitarianism. While the...

  18. A delay fractioning approach to global synchronization of delayed complex networks with stochastic disturbances

    Wang Yao; Wang Zidong; Liang Jinling

    2008-01-01

    In this Letter, the synchronization problem is investigated for a class of stochastic complex networks with time delays. By utilizing a new Lyapunov functional form based on the idea of 'delay fractioning', we employ the stochastic analysis techniques and the properties of the Kronecker product to establish delay-dependent synchronization criteria that guarantee the globally asymptotically mean-square synchronization of the addressed delayed networks with stochastic disturbances. These sufficient conditions, which are formulated in terms of linear matrix inequalities (LMIs), can be solved efficiently by the LMI toolbox in Matlab. The main results are proved to be much less conservative, and the conservatism can be reduced further as the number of delay fractions gets bigger. A simulation example is exploited to demonstrate the advantage and applicability of the proposed result.

  19. A constrained approach to multiscale stochastic simulation of chemically reacting systems

    Cotter, Simon L.

    2011-01-01

    Stochastic simulation of coupled chemical reactions is often computationally intensive, especially if a chemical system contains reactions occurring on different time scales. In this paper, we introduce a multiscale methodology suitable to address this problem, assuming that the evolution of the slow species in the system is well approximated by a Langevin process. It is based on the conditional stochastic simulation algorithm (CSSA) which samples from the conditional distribution of the suitably defined fast variables, given values for the slow variables. In the constrained multiscale algorithm (CMA) a single realization of the CSSA is then used for each value of the slow variable to approximate the effective drift and diffusion terms, in a similar manner to the constrained mean-force computations in other applications such as molecular dynamics. We then show how using the ensuing Fokker-Planck equation approximation, we can in turn approximate average switching times in stochastic chemical systems. © 2011 American Institute of Physics.
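    A toy sketch of the constrained idea on an invented two-species system: clamp the slow variable, simulate the fast subsystem, and average the slow propensities to estimate the effective Langevin drift and diffusion. The rates, species and the use of the embedded jump chain (rather than a time-weighted average) are all simplifications of the CMA described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fast_samples(s, steps=2000):
        """Jump-chain simulation of a fast species F with the slow variable S
        clamped at s (toy rates: birth 5*s, death 1*f)."""
        f = 10
        for _ in range(steps):
            birth, death = 5.0 * s, 1.0 * f
            if rng.random() < birth / (birth + death):
                f += 1
            else:
                f -= 1
            yield f

    def effective_drift_diffusion(s, k_prod=1.0, k_deg=0.1):
        """Average the slow propensities over the clamped fast dynamics to get
        Langevin drift and (chemical-Langevin-style) diffusion for S."""
        f_mean = np.mean(list(fast_samples(s)))
        drift = k_prod * f_mean - k_deg * s      # mean net slow propensity
        diffusion = k_prod * f_mean + k_deg * s  # sum of slow propensities
        return drift, diffusion

    print(effective_drift_diffusion(s=20))
    ```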

  20. Nonlinear Damping Identification in Nonlinear Dynamic System Based on Stochastic Inverse Approach

    S. L. Han

    2012-01-01

    Full Text Available The nonlinear model is crucial to prepare, supervise, and analyze mechanical systems. In this paper, a new nonparametric and output-only identification procedure for nonlinear damping is studied. By introducing the concept of the stochastic state space, we formulate a stochastic inverse problem for nonlinear damping. The solution of the stochastic inverse problem is designed as a probabilistic expression via the hierarchical Bayesian formulation by considering various uncertainties, such as insufficient information on the parameters of interest or errors in measurement. The probability space is estimated using Markov chain Monte Carlo (MCMC). The applicability of the proposed method is demonstrated through a numerical experiment and a particular application to a realistic problem related to ship roll motion.
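    A stripped-down Metropolis sketch of the Bayesian step, on synthetic free-decay data for x″ + c x′ + k x = 0; the single-parameter posterior and fixed noise level are simplifications of the hierarchical formulation above, and all numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    k, dt = 4.0, 0.01                        # known stiffness and time step (invented)
    t = np.arange(0.0, 10.0, dt)

    def simulate(c):
        """Explicit-Euler free decay of x'' + c x' + k x = 0 from x=1, v=0."""
        x, v, out = 1.0, 0.0, []
        for _ in t:
            x, v = x + v * dt, v + (-c * v - k * x) * dt
            out.append(x)
        return np.array(out)

    data = simulate(0.5) + rng.normal(0.0, 0.05, t.size)   # synthetic observations

    def log_post(c, sigma=0.05):
        """Gaussian likelihood with fixed noise sigma and a positivity prior on c."""
        if c <= 0.0:
            return -np.inf
        return -0.5 * np.sum((data - simulate(c)) ** 2) / sigma**2

    c = 1.0
    lp = log_post(c)
    chain = []
    for _ in range(2000):
        prop = c + rng.normal(0.0, 0.05)     # random-walk Metropolis proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            c, lp = prop, lp_prop
        chain.append(c)

    print(np.mean(chain[500:]))              # posterior mean, close to the true 0.5
    ```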

  1. A Stochastic Bi-Level Scheduling Approach for the Participation of EV Aggregators in Competitive Electricity Markets

    Rashidizaheh-Kermani, Homa; Vahedipour-Dahraie, Mostafa; Najafi, Hamid Reza

    2017-01-01

    This paper proposes a stochastic bi-level decision-making model for an electric vehicle (EV) aggregator in a competitive environment. In this approach, the EV aggregator decides to participate in day-ahead (DA) and balancing markets and provides energy price offers to the EV owners in order ... are modeled via stochastic programming. Therefore, a two-level problem is formulated here, in which the aggregator makes decisions in the upper level and the EV clients purchase energy to charge their EVs in the lower level. Then the obtained nonlinear bi-level framework is transformed into a single-level problem ... The model is assessed in a realistic case study and the results show that the proposed model would be effective for an EV aggregator decision-making problem in a competitive environment.

  2. Thirty years of precise gravity measurements at Mt. Vesuvius: an approach to detect underground mass movements

    Giovanna Berrino

    2013-11-01

    Full Text Available Since 1982, high precision gravity measurements have been routinely carried out on Mt. Vesuvius. The gravity network consists of selected sites, most of them coinciding with, or very close to, leveling benchmarks, in order to remove the effect of elevation changes from the gravity variations. The reference station is located in Napoli, outside the volcanic area. Since 1986, absolute gravity measurements have been periodically made at a station on Mt. Vesuvius, close to a permanent gravity station established in 1987, and at the reference in Napoli. The results of the gravity measurements since 1982 are presented and discussed. Moderate short-term gravity changes were generally observed. Over the long term, significant gravity changes occurred and the overall fields displayed well-defined patterns. Several periods of evolution may be recognized. Gravity changes revealed by the relative surveys have been confirmed by repeated absolute measurements, which also confirmed the long-term stability of the reference site. The gravity changes over the recognized periods appear correlated with the seismic crises and with changes of the tidal parameters obtained by continuous measurements. The absence of significant ground deformation implies mass redistribution, essentially density changes without significant volume changes, such as fluid migration at the depth of the seismic foci, i.e. at a few kilometers. The fluid migration may occur through pre-existing geological structures, as also suggested by hydrological studies, and/or through new fractures generated by seismic activity. This interpretation is supported by the analyses of the spatial gravity changes overlapping the most significant and recent seismic crises.

  3. The gravity anomaly of Mount Amiata; different approaches for understanding anomaly source distribution

    Girolami, C.; Barchi, M. R.; Heyde, I.; Pauselli, C.; Vetere, F.; Cannata, A.

    2017-11-01

    In this work, the gravity anomaly signal beneath Mount Amiata and its surroundings has been analysed to reconstruct the subsurface setting. In particular, the work focuses on the investigation of the geological bodies responsible for the Bouguer gravity minimum observed in this area.

  4. A primal-dual decomposition based interior point approach to two-stage stochastic linear programming

    A.B. Berkelaar (Arjan); C.L. Dert (Cees); K.P.B. Oldenkamp; S. Zhang (Shuzhong)

    1999-01-01

    Decision making under uncertainty is a challenge faced by many decision makers. Stochastic programming is a major tool developed to deal with optimization under uncertainty that has found applications in, e.g., finance, such as asset-liability and bond-portfolio management.

  5. Parametric inference for stochastic differential equations: a smooth and match approach

    Gugushvili, S.; Spreij, P.

    2012-01-01

    We study the problem of parameter estimation for a univariate discretely observed ergodic diffusion process given as a solution to a stochastic differential equation. The estimation procedure we propose consists of two steps. In the first step, which is referred to as a smoothing step, we smooth the

  6. Stochastic approach to pitting-corrosion-extreme modelling in low-carbon steel

    Valor, A. [Facultad de Fisica, Universidad de La Habana, San Lazaro y L, Vedado 10400, La Habana (Cuba); Caleyo, F. [Departamento de Ingenieria Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)], E-mail: fcaleyo@gmail.com; Rivas, D.; Hallen, J.M. [Departamento de Ingenieria Metalurgica, IPN-ESIQIE, UPALM Edif. 7, Zacatenco, Mexico DF 07738 (Mexico)

    2010-03-15

    A stochastic model previously developed by the authors using Markov chains has been improved in the light of new experimental evidence. The new model has been successfully applied to reproduce the time evolution of extreme pitting corrosion depths in low-carbon steel. The model is shown to provide a better physical understanding of the pitting process.
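    A toy pure-birth Markov chain in this spirit, with an invented decaying transition intensity rather than the paper's calibrated one: each pit advances through discrete depth cells, and the extreme (deepest) pit is tracked across realizations.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def deepest_pit(n_pits=100, t_max=20.0, dt=0.1, lam0=1.0, nu=0.4):
        """One realization: every pit independently advances one depth cell with
        probability lam(t)*dt per step, lam(t) = lam0*nu/t (invented intensity)."""
        depth = np.ones(n_pits)                  # all pits start in cell 1
        for step in range(int(t_max / dt)):
            t = (step + 1) * dt
            lam = lam0 * nu / t                  # transition intensity decays in time
            depth += rng.random(n_pits) < lam * dt
        return depth.max()

    extremes = [deepest_pit() for _ in range(200)]
    print(np.mean(extremes), np.std(extremes))   # extreme pit-depth statistics
    ```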

  7. A Volterra series approach to the approximation of stochastic nonlinear dynamics

    Wouw, van de N.; Nijmeijer, H.; Campen, van D.H.

    2002-01-01

    A response approximation method for stochastically excited, nonlinear, dynamic systems is presented. Herein, the output of the nonlinear system is approximated by a finite-order Volterra series. The original nonlinear system is replaced by a bilinear system in order to determine the kernels of this

  8. The mass transfer approach to multivariate discrete first order stochastic dominance

    Østerdal, Lars Peter Raahave

    2010-01-01

    A fundamental result in the theory of stochastic dominance states that first order dominance between two finite multivariate distributions is equivalent to the property that the one can be obtained from the other by shifting probability mass from one outcome to another that is worse, a finite number of times.
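    In the univariate baseline that this multivariate result generalises, first order dominance reduces to a pointwise CDF comparison, as in the small check below (our illustration, not the paper's construction).

    ```python
    import numpy as np

    def first_order_dominates(p, q):
        """True if p first-order dominates q over the same ordered outcome grid,
        i.e. p's CDF never exceeds q's."""
        return bool(np.all(np.cumsum(p) <= np.cumsum(q)))

    # p puts more mass on the better (later) outcomes, so it dominates q.
    print(first_order_dominates([0.1, 0.2, 0.7], [0.3, 0.3, 0.4]))  # True
    ```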

  9. Stochastic approach to pitting-corrosion-extreme modelling in low-carbon steel

    Valor, A.; Caleyo, F.; Rivas, D.; Hallen, J.M.

    2010-01-01

    A stochastic model previously developed by the authors using Markov chains has been improved in the light of new experimental evidence. The new model has been successfully applied to reproduce the time evolution of extreme pitting corrosion depths in low-carbon steel. The model is shown to provide a better physical understanding of the pitting process.

  10. Restructuring of workflows to minimise errors via stochastic model checking: An automated evolutionary approach

    Herbert, Luke Thomas; Hansen, Zaza Nadja Lee

    2016-01-01

    This article presents a framework for the automated restructuring of stochastic workflows to reduce the impact of faults. The framework allows for the modelling of workflows by means of a formalised subset of the BPMN workflow language. We extend this modelling formalism to describe faults...

  11. A new approach for controlling a hybrid stochastic manufacturing / remanufacturing system with inventories and different leadtimes

    Kiesmüller, G.P.

    2003-01-01

    This paper addresses the control problem of a stochastic recovery system with two stocking points and different leadtimes for production and remanufacturing. For such systems the optimal control policy for a linear cost model is not known. Therefore, in the literature several heuristic policies are

  12. Economic Reforms and Cost Efficiency of Coffee Farmers in Central Kenya: A Stochastic-Translog Approach

    Karanja, A.M.; Kuyvenhoven, A.; Moll, H.A.J.

    2007-01-01

    Work reported in this paper analyses the cost efficiency levels of small-holder coffee farmers in four districts in Central Province, Kenya. The level of efficiency is analysed using a stochastic cost frontier model based on household cross-sectional data collected in 1999 and 2000. The 200 surveyed

  13. Stochastic Approach to Determine CO2 Hydrate Induction Time in Clay Mineral Suspensions

    Lee, K.; Lee, S.; Lee, W.

    2008-12-01

    A large number of induction time data for carbon dioxide hydrate formation were obtained from a batch reactor consisting of four independent reaction cells. Using resistance temperature detectors (RTDs) and a digital microscope, we successfully monitored the whole process of hydrate formation (i.e., nucleation and crystal growth) and detected the induction time. The experiments were carried out in kaolinite and montmorillonite suspensions at temperatures between 274 and 277 K and pressures ranging from 3.0 to 4.0 MPa. Each data set was first analyzed to determine whether it should be treated stochastically. Geochemical factors potentially influencing the hydrate induction time under different experimental conditions were investigated by stochastic analyses. We observed that clay mineral type, pressure, and temperature significantly affect the stochastic behavior of the induction times for CO2 hydrate formation in this study. The hydrate formation kinetics, along with the stochastic analyses, can provide a basic understanding of CO2 hydrate storage in deep-sea sediments and geologic formations, and of securing its stability under these environments.

  14. Stochastic Optimal Control of a Heave Point Wave Energy Converter Based on a Modified LQG Approach

    Sun, Tao; Nielsen, Søren R. K.

    2018-01-01

    and actuator force are approximately considered by counteracting the absorbed power in the objective quadratic functional. Based on rational approximations to the radiation force and the wave load, the integrated dynamic system can be reformulated as a linear stochastic differential equation which is driven...

  15. Stochastic Real-World Drive Cycle Generation Based on a Two Stage Markov Chain Approach

    Balau, A.E.; Kooijman, D.; Vazquez Rodarte, I.; Ligterink, N.

    2015-01-01

    This paper presents a methodology and tool that stochastically generates drive cycles based on measured data, with the purpose of testing and benchmarking light duty vehicles in a simulation environment or on a test-bench. The WLTP database, containing real world driving measurements, was used as

  16. A stochastic logical system approach to model and optimal control of cyclic variation of residual gas fraction in combustion engines

    Wu, Yuhu; Kumar, Madan; Shen, Tielong

    2016-01-01

    Highlights: • An in-cylinder pressure based measuring method for the RGF is derived. • A stochastic logical dynamical model is proposed to represent the transient behavior of the RGF. • The receding horizon controller is designed to reduce the variance of the RGF. • The effectiveness of the proposed model and control approach is validated by the experimental evidence. - Abstract: In four stroke internal combustion engines, residual gas from the previous cycle is an important factor influencing the combustion quality of the current cycle, and the residual gas fraction (RGF) is a popular index to monitor the influence of residual gas. This paper investigates the cycle-to-cycle transient behavior of the RGF from the viewpoint of systems theory and proposes a multi-valued logic-based control strategy for attenuating RGF fluctuation. First, an in-cylinder pressure sensor-based method for measuring the RGF is provided by following the physics of the in-cylinder transient state of four-stroke internal combustion engines. Then, the stochastic property of the RGF is examined based on statistical data obtained by conducting experiments on a full-scale gasoline engine test bench. Based on these observations, a stochastic logical transient model is proposed to represent the cycle-to-cycle transient behavior of the RGF, and with this model an optimal feedback control law, which targets rejection of the RGF fluctuation, is derived in the framework of stochastic logical system theory. Finally, experimental results are presented to show the effectiveness of the proposed model and the control strategy.

  17. Stochastic processes

    Borodin, Andrei N

    2017-01-01

    This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.

  18. A Langevin Canonical Approach to the Study of Quantum Stochastic Resonance in Chiral Molecules

    Germán Rojas-Lorenzo

    2016-09-01

    Full Text Available A Langevin canonical framework for a chiral two-level system coupled to a bath of harmonic oscillators is used within a coupling scheme different from the well-known spin-boson model to study the quantum stochastic resonance for chiral molecules. This process refers to the amplification of the response to an external periodic signal at a certain value of the noise strength, being a cooperative effect of friction, noise, and periodic driving occurring in a bistable system. Furthermore, from this stochastic dynamics within the Markovian regime and Ohmic friction, the competing process between tunneling and the parity violating energy difference present in this type of chiral systems plays a fundamental role. This mechanism is finally proposed to observe the so-far elusive parity-violating energy difference in chiral molecules.

  19. A stochastic optimization approach to reduce greenhouse gas emissions from buildings and transportation

    Karan, Ebrahim; Asadi, Somayeh; Ntaimo, Lewis

    2016-01-01

    The magnitude of building- and transportation-related GHG (greenhouse gas) emissions makes the adoption of all-electric vehicles (EVs) powered with renewable energy one of the most effective strategies to reduce GHG emissions. This paper formulates the problem of GHG mitigation strategy under uncertain conditions and optimizes strategies in which EVs are powered by solar energy. Under a pre-specified budget, the objective is to determine the type of EV and the power generation capacity of the solar system so as to maximize GHG emission reductions. The model supports the three primary solar systems: off-grid, grid-tied, and hybrid. First, a stochastic optimization model using probability distributions of the stochastic variables and EV and solar system specifications is developed. The model is then validated by comparing the estimated values of the optimal strategies with actual values. It is found that mitigation strategies in which EVs are powered by a hybrid solar system lead to the best cost to expected CO2 emission reduction ratio. The results show an accuracy of about 4% for mitigation strategies in which EVs are powered by a grid-tied or hybrid solar system and 11% when applied to estimate the CO2 emission reductions of an off-grid system. - Highlights: • The problem of GHG mitigation is formulated as a stochastic optimization problem. • The objective is to maximize CO2 emission reductions within a specified budget. • The stochastic model is validated using actual data. • The results show an estimation accuracy of 4-11%.
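    A minimal sketch of the scenario-based logic: enumerate EV/solar combinations that fit the budget, sample solar-yield scenarios, and keep the combination with the largest expected CO2 reduction. All prices, yields and the grid emission factor are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
budget = 40_000.0   # USD, hypothetical

# Hypothetical candidates: EV type (cost, annual kWh demand) and PV sizes.
evs = {"compact EV": (25_000.0, 3_000.0), "SUV EV": (35_000.0, 4_500.0)}
pv_sizes_kw = [2.0, 4.0, 6.0]
cost_per_kw = 2_500.0       # installed PV cost, assumed
grid_intensity = 0.4        # kg CO2 per kWh of grid energy displaced, assumed

best = None
for name, (ev_cost, demand) in evs.items():
    for kw in pv_sizes_kw:
        if ev_cost + kw * cost_per_kw > budget:
            continue        # strategy not affordable
        # Stochastic annual PV yield (kWh per kW), an assumed distribution.
        yield_kwh = rng.normal(1_400.0, 200.0, size=10_000) * kw
        displaced = np.clip(yield_kwh, 0.0, demand)   # cap at EV demand
        expected_cut = grid_intensity * displaced.mean()
        if best is None or expected_cut > best[0]:
            best = (expected_cut, name, kw)

cut, name, kw = best
print(f"best strategy: {name} + {kw:.0f} kW PV, "
      f"expected reduction = {cut:.0f} kg CO2/yr")
```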

  20. Global height datum unification: a new approach in gravity potential space

    Ardalan, A. A.; Safari, A.

    2005-12-01

    The problem of “global height datum unification” is solved in the gravity potential space based on: (1) high-resolution local gravity field modeling, (2) geocentric coordinates of the reference benchmark, and (3) a known value of the geoid’s potential. The high-resolution local gravity field model is derived based on a solution of the fixed-free two-boundary-value problem of the Earth’s gravity field using (a) potential difference values (from precise leveling), (b) modulus of the gravity vector (from gravimetry), (c) astronomical longitude and latitude (from geodetic astronomy and/or combination of (GNSS) Global Navigation Satellite System observations with total station measurements), and (d) satellite altimetry. Knowing the height of the reference benchmark in the national height system and its geocentric GNSS coordinates, and using the derived high-resolution local gravity field model, the gravity potential value of the zero point of the height system is computed. The difference between the derived gravity potential value of the zero point of the height system and the geoid’s potential value is computed. This potential difference gives the offset of the zero point of the height system from the geoid in the “potential space”, which is transferred into “geometry space” using the transformation formula derived in this paper. The method was applied to the computation of the offset of the zero point of the Iranian height datum from the geoid’s potential value W0 = 62636855.8 m2/s2. According to the geometry space computations, the height datum of Iran is 0.09 m below the geoid.
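    The final transfer from "potential space" to "geometry space" reduces, to first order, to dividing the potential offset by mean gravity along the plumb line. The snippet below reproduces the order of magnitude of the quoted Iranian result; the datum potential value and the mean gravity are assumed for illustration.

```python
# Conversion of a height-datum offset from potential space to geometry space.
W0 = 62636855.8        # adopted geoid potential (m^2/s^2), as in the paper
W_datum = 62636856.7   # potential of the datum zero point -- hypothetical
g_mean = 9.81          # mean gravity along the plumb line (m/s^2), assumed

# W increases downward, so a positive difference puts the datum below geoid.
offset_m = (W_datum - W0) / g_mean
print(f"datum offset = {offset_m:.2f} m below the geoid")
```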

  1. Measuring energy efficiency under heterogeneous technologies using a latent class stochastic frontier approach: An application to Chinese energy economy

    Lin, Boqiang; Du, Kerui

    2014-01-01

    The importance of technology heterogeneity in estimating economy-wide energy efficiency has been emphasized by recent literature. Some studies use the metafrontier analysis approach to estimate energy efficiency. However, such studies require reliable prior information to divide the sample observations properly, which makes unbiased estimation of energy efficiency difficult. Moreover, separately estimating group-specific frontiers might lose some common information across different groups. In order to overcome these weaknesses, this paper introduces a latent class stochastic frontier approach to measure energy efficiency under heterogeneous technologies. An application of the proposed model to the Chinese energy economy is presented. Results show that the overall energy efficiency of China's provinces is not high, with an average score of 0.632 during the period from 1997 to 2010. - Highlights: • We introduce a latent class stochastic frontier approach to measure energy efficiency. • Ignoring technological heterogeneity would cause biased estimates of energy efficiency. • An application of the proposed model to the Chinese energy economy is presented. • There is still a long way for China to go to develop an energy-efficient regime.
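    A minimal sketch of the single-class building block of such a model -- the normal/half-normal stochastic frontier of Aigner, Lovell and Schmidt -- fitted by maximum likelihood on synthetic data; the latent class extension adds class-membership probabilities on top of this likelihood. Data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(12)

# Synthetic frontier data (illustrative; not the Chinese provincial panel):
# y = 1.0 + 0.6 x + v - u,  v ~ N(0, 0.2^2),  u ~ |N(0, 0.4^2)| inefficiency.
n = 400
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 0.6 * x + rng.normal(0.0, 0.2, n) - np.abs(rng.normal(0.0, 0.4, n))

def negloglik(p):
    """Normal/half-normal stochastic frontier (Aigner-Lovell-Schmidt)."""
    b0, b1, ln_sv, ln_su = p
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = y - b0 - b1 * x
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.5, np.log(0.3), np.log(0.3)],
               method="Nelder-Mead", options={"maxiter": 5_000})
b0, b1, ln_sv, ln_su = res.x
print(f"frontier: y = {b0:.2f} + {b1:.2f} x   "
      f"sigma_v = {np.exp(ln_sv):.2f}, sigma_u = {np.exp(ln_su):.2f}")
```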

  2. Public Transportation Hub Location with Stochastic Demand: An Improved Approach Based on Multiple Attribute Group Decision-Making

    Sen Liu

    2015-01-01

    Full Text Available Urban public transportation hubs are the key nodes of the public transportation system. The location of such hubs is a combinatorial problem. Many factors can affect the location decision, including both quantitative and qualitative factors; however, most current research focuses solely on either the quantitative or the qualitative factors. Little has been done to combine these two approaches. To fill this gap in the research, this paper proposes a novel approach to the public transportation hub location problem which takes both quantitative and qualitative factors into account. In this paper, an improved multiple attribute group decision-making (MAGDM) method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and deviation is proposed to convert the qualitative factors of each hub into quantitative evaluation values. A location model with stochastic passenger flows is then established based on the above evaluation values. Finally, stochastic programming theory is applied to solve the model and to determine the location result. A numerical study shows that this approach is applicable and effective.
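    The TOPSIS step that converts scores into a ranking can be sketched compactly; the decision matrix, weights and criterion directions below are hypothetical, and the paper's deviation-based group aggregation is omitted.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS. `benefit[j]` is True when larger
    values of criterion j are better. All inputs here are illustrative."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize the columns, then apply the criterion weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights, dtype=float)
    # Ideal and anti-ideal points depend on each criterion's direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)   # closeness to the ideal solution

# Hypothetical hub candidates scored on (connectivity, land cost, disruption).
scores = topsis([[7, 120, 3], [9, 150, 4], [6, 90, 2]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, False, False])
print(scores)   # higher closeness = better hub candidate
```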

  3. Stochastic thermodynamics

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
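    The trajectory-wise definitions can be made concrete with the textbook example of an overdamped bead in a dragged harmonic trap: Sekimoto's stochastic energetics identifies the work increment with the explicit time dependence of the potential, and the Jarzynski relation provides a second-law-like consistency check. The parameters and discretization below are ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(13)

# Overdamped bead in a harmonic trap dragged at constant speed:
# U(x, t) = 0.5 * k * (x - v t)^2.  Stochastic energetics identifies the
# work increment with dW = (dU/dt at fixed x) dt = -k v (x - v t) dt.
k, v, kBT, gamma = 1.0, 1.0, 1.0, 1.0
dt, n_steps, n_traj = 1e-3, 2_000, 20_000

x = rng.normal(0.0, np.sqrt(kBT / k), n_traj)    # equilibrium initial state
W = np.zeros(n_traj)
for i in range(n_steps):
    t = i * dt
    W += -k * v * (x - v * t) * dt               # trajectory-wise work
    # Euler-Maruyama step of the overdamped Langevin equation.
    x += (-k * (x - v * t) / gamma) * dt \
         + np.sqrt(2.0 * kBT * dt / gamma) * rng.normal(size=n_traj)

# Jarzynski check: for this protocol Delta F = 0, so <exp(-W/kBT)> = 1.
print(f"<W> = {W.mean():.3f}  (positive dissipated work)")
print(f"<exp(-W/kBT)> = {np.mean(np.exp(-W / kBT)):.3f}  (should be ~1)")
```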

  4. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated: they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, while fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess the stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  5. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Y. H. Subagadis

    2014-09-01

    Full Text Available The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated: they normally incorporate only homogeneous information at a time and aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, while fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess the stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  6. Gravity from entanglement and RG flow in a top-down approach

    Kwon, O.-Kab; Jang, Dongmin; Kim, Yoonbai; Tolla, D. D.

    2018-05-01

    The duality between a d-dimensional conformal field theory with relevant deformation and a gravity theory on an asymptotically AdS_{d+1} geometry has become a suitable tool in the investigation of the emergence of gravity from quantum entanglement in field theory. Recently, we have tested the duality between the mass-deformed ABJM theory and an asymptotically AdS4 gravity theory, which is obtained from the KK reduction of 11-dimensional supergravity on the LLM geometry. In this paper, we extend the KK reduction procedure beyond the linear order and establish non-trivial KK maps between 4-dimensional fields and 11-dimensional fluctuations. We rely on this gauge/gravity duality to calculate the entanglement entropy using the Ryu-Takayanagi holographic formula and the path integral method developed by Faulkner. We show that the entanglement entropies obtained using these two methods agree when the asymptotically AdS4 metric satisfies the linearized Einstein equation with non-vanishing energy-momentum tensor for two scalar fields. These scalar fields encode the information of the relevant deformation of the ABJM theory. This confirms that the asymptotic limit of the LLM geometry is the gravity emerging from quantum entanglement in the mass-deformed ABJM theory with a small mass parameter. We also comment on the issue of the relative entropy and the Fisher information in our setup.

  7. Governance Mechanism for Global Greenhouse Gas Emissions: A Stochastic Differential Game Approach

    Wei Yu

    2013-01-01

    Full Text Available Today developed and developing countries have to admit the fact that global warming is affecting the earth, but the fundamental problem of how to divide the necessary greenhouse gas reductions between developed and developing countries remains. In this paper, we propose cooperative and noncooperative stochastic differential game models to describe the greenhouse gas emission decision-making of developed and developing countries, calculate their feedback Nash equilibrium and the Pareto optimal solution, characterize the parameter spaces in which developed and developing countries can cooperate, design cooperative conditions under which participants accept the cooperative payoff, and distribute the cooperative payoff with the Nash bargaining solution. Lastly, numerical simulations are employed to illustrate the above results.

  8. The stochastic versus the Euclidean approach to quantum fields on a static space-time

    De Angelis, G.F.; de Falco, D.

    1986-01-01

    Equations are presented which modify the definition of the Gaussian field in the Rindler chart in order to make contact with the Wightman state, the Hartle-Hawking state, and the Euclidean field. By taking Ornstein-Uhlenbeck processes the authors have chosen, in the sense of stochastic mechanics, to place precisely the Fulling modes in their harmonic oscillator ground state. In this respect, together with the periodicity of Minkowski space-time, the authors observe that the covariance of the Ornstein-Uhlenbeck process can be obtained by analytical continuation of the Wightman function of the harmonic oscillator at zero temperature
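    The closing remark is easy to check numerically: the stationary Ornstein-Uhlenbeck covariance exp(-omega*|tau|)/(2*omega) coincides (with hbar = m = 1) with the Euclidean ground-state two-point function of the harmonic oscillator. A minimal sketch, with all discretization choices ours:

```python
import numpy as np

rng = np.random.default_rng(2)
omega, dt, n = 1.0, 0.01, 200_000

# Euler-Maruyama simulation of dX = -omega * X dt + dW, started in the
# stationary state so the process is "in equilibrium" from the outset.
x = np.empty(n)
x[0] = rng.normal(0.0, np.sqrt(1.0 / (2.0 * omega)))
dW = rng.normal(0.0, np.sqrt(dt), size=n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - omega * x[i] * dt + dW[i]

# Empirical covariance vs. exp(-omega*tau)/(2*omega), the Euclidean
# ground-state two-point function of the harmonic oscillator.
for lag in (0, 50, 100, 200):
    tau = lag * dt
    emp = np.mean(x[: n - lag] * x[lag:])
    print(f"tau = {tau:4.2f}   empirical = {emp:.4f}   "
          f"theory = {np.exp(-omega * tau) / (2.0 * omega):.4f}")
```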

  9. Effect of trapping on transport coherence III: Dissipation in the stochastic Liouville equation approach

    Barvik, I [International Centre for Theoretical Physics, Trieste (Italy); Polasek, M [Charles Univ., Prague (Czech Republic). Inst. of Physics; Herman, P [Pedagogical Univ., Hradec Kralove (Czech Republic)

    1995-08-01

    We used the formal stochastic Liouville equations within the Haken-Strobl-Reineker parametrization to describe the influence of the bath on the memory functions (MFs) entering the GME for a dimer and a linear trimer with a trap (here modeled as a sink). The often-used inclusion of the noncoherent regime in the MF through an exponentially damped prefactor (after Kenkre's prescription) does not hold for finite systems. The analytical form of the MF is changed more pronouncedly, and the influence of the sink in the center of the trimer runs parallel with the influence of the bath in destroying the coherence. (author). 60 refs.

  10. Stochastic light-cone CTMRG: a new DMRG approach to stochastic models

    Kemper, A; Nishino, T; Schadschneider, A; Zittartz, J

    2003-01-01

    We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps of more than 10^5 shows a considerable improvement on the old stochastic TMRG algorithm.

  11. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take days, weeks, or even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or data modifications, compound computation times even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass, and non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to within 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation points, without having to filter data into a synthesized grid.
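    The vertical line element has a closed-form vertical attraction, which is what makes such a method fast: no volume discretization or gridding of observations is needed. A sketch of that kernel (a standard potential-theory integral; the numbers are illustrative and this is not the SIGMA code):

```python
import numpy as np

G = 6.674e-11   # gravitational constant (m^3 kg^-1 s^-2)

def gz_line_element(r, z1, z2, lam):
    """Vertical attraction (m/s^2) at a surface point from a buried vertical
    line element at horizontal distance r, spanning depths z1 < z2, with
    linear density lam (kg/m). Closed form of the standard integral
    G * lam * int_{z1}^{z2} z * (r^2 + z^2)^(-3/2) dz."""
    return G * lam * (1.0 / np.hypot(r, z1) - 1.0 / np.hypot(r, z2))

# A density-anomaly column approximated as a line: 100 m x 100 m cross
# section with a 300 kg/m^3 contrast -> lam = 3.0e6 kg/m (illustrative).
lam = 300.0 * 100.0 * 100.0
gz = gz_line_element(r=500.0, z1=100.0, z2=600.0, lam=lam)
print(f"g_z = {gz * 1e5:.4f} mGal")   # 1 mGal = 1e-5 m/s^2
```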

  12. Stochastic kinetics

    Colombino, A.; Mosiello, R.; Norelli, F.; Jorio, V.M.; Pacilio, N.

    1975-01-01

    The kinetics of a nuclear system is formulated according to a stochastic approach. The detailed probability balance equations are written for the probability of finding the mixed population of neutrons and detected neutrons, i.e. detectrons, at a given level at a given instant of time. The equations are integrated in search of a probability profile: a series of cases is analyzed through a progressive criterion that takes into account an increasing number of physical processes within the chosen model. The most important contribution is that the solutions analytically interpret experimental conditions of equilibrium (noise analysis) and non-equilibrium (pulsed neutron measurements, source drop technique, start-up procedures).

  13. Models of gas-grain chemistry in interstellar cloud cores with a stochastic approach to surface chemistry

    Stantcheva, T.; Herbst, E.

    2004-08-01

    We present a gas-grain model of homogeneous cold cloud cores with time-independent physical conditions. In the model, the gas-phase chemistry is treated via rate equations while the diffusive granular chemistry is treated stochastically. The two phases are coupled through accretion and evaporation. A small network of surface reactions accounts for the surface production of the stable molecules water, formaldehyde, methanol, carbon dioxide, ammonia, and methane. The calculations are run for a time of 10^7 years at three different temperatures: 10 K, 15 K, and 20 K. The results are compared with those produced in a totally deterministic gas-grain model that utilizes the rate equation method for both the gas-phase and surface chemistry. The results of the different models are in agreement for the abundances of the gaseous species except for later times when the surface chemistry begins to affect the gas. The agreement for the surface species, however, is somewhat mixed. The average abundances of highly reactive surface species can be orders of magnitude larger in the stochastic-deterministic model than in the purely deterministic one. For non-reactive species, the results of the models can disagree strongly at early times, but agree to well within an order of magnitude at later times for most molecules. Strong exceptions occur for CO and H2CO at 10 K, and for CO2 at 20 K. The agreement seems to be best at a temperature of 15 K. As opposed to the use of the normal rate equation method of surface chemistry, the modified rate method is in significantly better agreement with the stochastic-deterministic approach. Comparison with observations of molecular ices in dense clouds shows mixed agreement.

  14. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes, and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.
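    In the special case of exponential dwell times this is the telegraph model, which a short Gillespie simulation reproduces; the paper's generalization to non-exponential dwell times would replace the exponential gene-switching draws. All rates below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Telegraph-model rates (illustrative): gene on/off switching, transcription
# while active, and first-order mRNA degradation.
k_on, k_off, k_tx, k_deg = 0.1, 0.3, 5.0, 1.0

t, t_end, gene, m, acc = 0.0, 5_000.0, 0, 0, 0.0
while t < t_end:
    rates = [k_on * (1 - gene), k_off * gene, k_tx * gene, k_deg * m]
    total = sum(rates)
    dt = rng.exponential(1.0 / total)
    acc += m * dt                      # dwell-time-weighted mRNA count
    t += dt
    u = rng.random() * total
    if u < rates[0]:
        gene = 1                       # activation
    elif u < rates[0] + rates[1]:
        gene = 0                       # deactivation
    elif u < rates[0] + rates[1] + rates[2]:
        m += 1                         # transcription event
    else:
        m -= 1                         # degradation event

print(f"time-averaged mRNA = {acc / t:.2f}")
print(f"theory <m> = k_tx/k_deg * k_on/(k_on+k_off) = "
      f"{k_tx / k_deg * k_on / (k_on + k_off):.2f}")
```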

  15. Feasibility Assessment of an ISS Artificial Gravity Conditioning Facility by Means of Multi-Body Approach

    Toso, Mario; Baldesi, Gianluigi; Moratto, Claudio; De Wilde, Don; Bureo Dacal, Rafael; Castellsaguer, Joaquim

    2012-07-01

    Even though human exploration of Mars is a distant objective, it is well understood that, for human space voyages of several years duration, crews would be at risk of catastrophic consequences should any of the systems that provide adequate air, water, food, or thermal protection fail. Moreover, crews will face serious health and/or safety risks resulting from severe physiologic deconditioning associated with prolonged weightlessness. The principal ones are related to physical and functional deterioration of the regulation of the blood circulation, decreased aerobic capacity, impaired musculo-skeletal systems, and altered sensory-motor system performance. As the reliance of future space programmes on virtual modelling, simulation and justification has substantially grown together with the proto-flight hardware development approach, a range of simulation capabilities have become increasingly important in the requirements specification, design, verification, testing, launch and operation of new space systems. In this frame, multibody software is a key tool in providing a more coordinated and consistent approach from the preliminary development phases of the most complex systems. From a scientific perspective, an artificial gravity facility, such as the one evaluated in this paper, would be the first in-flight test of the effectiveness and acceptability of a short-radius centrifuge as a countermeasure to human deconditioning on orbit. The ISS represents a unique opportunity to perform this research. From an engineering point of view, the preliminary assessment described in this paper highlights the difficult engineering challenges of such a facility. The outcome proves that a human can be accommodated in the available volume, while respecting basic human ergonomic requirements and preserving the global structural integrity of the hosting ISS module. In particular, the analysis shows that, although the load capacity of the structural interfaces imposes a very low

  16. Assessment of impact distances for particulate matter dispersion: A stochastic approach

    Godoy, S.M.; Mores, P.L.; Santa Cruz, A.S.M.; Scenna, N.J.

    2009-01-01

    It is known that pollutants can be dispersed from emission sources by the wind, or settled on the ground. Particle size, stack height, topography and meteorological conditions strongly affect particulate matter (PM) dispersion. In this work, an impact distance calculation methodology considering different particulate sizes is presented. A Gaussian-type dispersion model for PM that handles particle sizes larger than 0.1 μm is used. The model considers primary particles and continuous emissions. The PM concentration distribution at every affected geographical point defined by a grid is computed. Stochastic uncertainty caused by the natural variability of atmospheric parameters is taken into consideration in the dispersion model by applying a Monte Carlo methodology. The prototype package STRRAP, which takes into account the stochastic behaviour of atmospheric variables and was developed for risk assessment and safe distance calculation [Godoy SM, Santa Cruz ASM, Scenna NJ. STRRAP SYSTEM - A software for hazardous materials risk assessment and safe distances calculation. Reliability Engineering and System Safety 2007;92(7):847-57], is extended here to the analysis of PM air dispersion. STRRAP computes distances from the source to every affected receptor in each trial and generates the impact distance distribution for each particulate size. In addition, a representative impact distance value to delimit the affected area can be obtained. The dispersion of fuel oil stack effluents in Rosario city is simulated as a case study. Mass concentration distributions and impact distances are computed for the range of interest in environmental air quality evaluations (PM2.5-PM10).
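    The Monte Carlo idea can be sketched with a ground-level centreline Gaussian plume and a random wind speed: each trial yields an impact distance (the farthest distance at which the concentration still exceeds a threshold), and the trials build up its distribution. The dispersion coefficients, source terms and wind-speed law below are assumed, not STRRAP's.

```python
import numpy as np

rng = np.random.default_rng(4)

Q, H = 10.0, 30.0        # emission rate (g/s) and stack height (m), assumed
threshold = 50e-6        # ground-level concentration limit (g/m^3), assumed
x = np.linspace(50.0, 10_000.0, 2_000)    # downwind distances (m)

def centreline_conc(u):
    # Assumed power-law dispersion coefficients (order of magnitude only).
    sig_y, sig_z = 0.08 * x**0.9, 0.06 * x**0.9
    return (Q / (np.pi * u * sig_y * sig_z)) * np.exp(-H**2 / (2 * sig_z**2))

# Monte Carlo over wind speed (Weibull, a common choice for wind statistics).
impact = []
for u in rng.weibull(2.0, size=5_000) * 4.0 + 0.5:   # m/s, kept above 0.5
    c = centreline_conc(u)
    above = np.nonzero(c >= threshold)[0]
    impact.append(x[above[-1]] if above.size else 0.0)

impact = np.array(impact)
print(f"median impact distance = {np.median(impact):.0f} m")
print(f"95th percentile        = {np.percentile(impact, 95):.0f} m")
```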

  17. Monitoring and pollution control: A stochastic process approach to model oil spills

    Viladrich-Grau, M.

    1991-01-01

    The first chapter analyzes the behavior of a firm in an environment with pollution externalities and technological progress. It is assumed that firms may not purposely violate the pollution control regulations but nonetheless generate some pollution due to negligence. The model allows firms two possible actions: either increase the level of treated waste or pay an expected penalty if illegal pollution is detected. The results of the first chapter show that in a world with pollution externalities, technological progress does not guarantee increases in the welfare level. The second chapter models the occurrence of an oil spill as a stochastic event. The stochastic model developed allows one to see how each step of the spilling process is affected by each policy measure and to compare the relative efficiency of different measures in reducing spills. The third chapter estimates the parameters that govern oil spill frequency and size distribution. The author models how these parameters depend on two pollution prevention measures: monitoring of transfer operations and assessment of penalties. He shows that these measures reduce the frequency of oil spills.

  18. Stochastic volatility and stochastic leverage

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new models.
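    A sketch of the construction: a Heston-type model in which the correlation follows a Jacobi process (mean-reverting and confined to (-1, 1)), simulated with a simple Euler scheme. All parameter values and discretization choices are ours, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Euler scheme for a Heston-type model in which the usual constant
# correlation rho is replaced by a Jacobi process on (-1, 1).
mu, kappa_v, theta_v, xi = 0.05, 2.0, 0.04, 0.3    # price/variance dynamics
kappa_r, theta_r, sig_r = 1.0, -0.5, 0.4           # stochastic leverage
dt, n = 1.0 / 252.0, 252                           # one year of daily steps

s, v, rho = 100.0, 0.04, -0.5
for _ in range(n):
    z1, z2, z3 = rng.normal(size=3)
    # Correlate the price shock with the variance shock via the current rho.
    w_s = rho * z2 + np.sqrt(1.0 - rho * rho) * z1
    s *= np.exp((mu - 0.5 * v) * dt + np.sqrt(v * dt) * w_s)
    v = max(v + kappa_v * (theta_v - v) * dt
            + xi * np.sqrt(v * dt) * z2, 1e-8)
    # Jacobi dynamics: the sqrt(1 - rho^2) diffusion keeps rho in (-1, 1).
    rho += kappa_r * (theta_r - rho) * dt \
           + sig_r * np.sqrt(max(1.0 - rho * rho, 0.0) * dt) * z3
    rho = float(np.clip(rho, -0.999, 0.999))

print(f"terminal price = {s:.2f}, variance = {v:.4f}, leverage rho = {rho:.3f}")
```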

  19. A new approach to developing and optimizing organization strategy based on stochastic quantitative model of strategic performance

    Marko Hell

    2014-03-01

    Full Text Available This paper presents a highly formalized approach to strategy formulation and the optimization of strategic performance through proper resource allocation. A stochastic quantitative model of strategic performance (SQMSP) is used to evaluate the efficiency of the strategy developed. The SQMSP follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Parameters of the SQMSP are treated as random variables evaluated by experts, who give two-point (optimistic and pessimistic) and three-point (optimistic, most probable and pessimistic) estimates. The Monte Carlo method is used to simulate strategic performance. Having been implemented within a computer application and applied to a real problem (planning of an IT strategy at the Faculty of Economics, University of Split), the proposed approach demonstrated its high potential as a basis for the development of decision support tools related to strategic planning.
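    The Monte Carlo step with three-point expert estimates is commonly implemented with triangular distributions; a minimal sketch with hypothetical objectives and weights (not the SQMSP itself):

```python
import numpy as np

rng = np.random.default_rng(5)

# Three-point expert estimates (pessimistic, most probable, optimistic)
# for the achievement of three strategic objectives -- all illustrative.
estimates = {"learning": (0.3, 0.5, 0.8),
             "process":  (0.4, 0.6, 0.9),
             "finance":  (0.2, 0.5, 0.7)}
weights = {"learning": 0.3, "process": 0.3, "finance": 0.4}

n = 100_000
performance = np.zeros(n)
for key, (lo, mode, hi) in estimates.items():
    # Triangular distribution built from the three-point estimate.
    performance += weights[key] * rng.triangular(lo, mode, hi, size=n)

print(f"expected strategic performance = {performance.mean():.3f}")
print(f"P(performance < 0.5)           = {(performance < 0.5).mean():.3f}")
```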

  20. Quantum Gravity

    Giribet, G E

    2005-01-01

    Claus Kiefer presents his book, Quantum Gravity, with his hope that '[the] book will convince readers of [the] outstanding problem [of unification and quantum gravity] and encourage them to work on its solution'. With this aim, the author presents a clear exposition of the fundamental concepts of gravity and the steps towards the understanding of its quantum aspects. The main part of the text is dedicated to the analysis of standard topics in the formulation of general relativity. An analysis of the Hamiltonian formulation of general relativity and the canonical quantization of gravity is performed in detail. Chapters four, five and eight provide a pedagogical introduction to the basic concepts of gravitational physics. In particular, aspects such as the quantization of constrained systems, the role played by the quadratic constraint, the ADM decomposition, the Wheeler-DeWitt equation and the problem of time are treated in an expert and concise way. Moreover, other specific topics, such as the minisuperspace approach and the feasibility of defining extrinsic times for certain models, are discussed as well. The ninth chapter of the book is dedicated to the quantum gravitational aspects of string theory. Here, a minimalistic but clear introduction to string theory is presented, with emphasis on gravity. It is worth mentioning that no hard (or explicit) computations are presented, even though the exposition covers the main features of the topic. For instance, black hole statistical physics (within the framework of string theory) is developed in a pedagogical and concise way by means of heuristic arguments. As the author asserts in the epilogue, the hope of the book is to give 'some impressions from progress' made in the study of quantum gravity since its beginning, i.e., since the end of the 1920s. In my opinion, Kiefer's book does achieve this goal and gives an extensive review of the subject. (book review)

  1. Environmental and Economic Optimization Model for Electric System Planning in Ningxia, China: Inexact Stochastic Risk-Aversion Programming Approach

    L. Ji

    2015-01-01

    Full Text Available The main goal of this paper is to provide a novel risk-aversion model for long-term electric power system planning from the manager's perspective, with consideration of various uncertainties. In the proposed method, interval parameter programming and two-stage stochastic programming are integrated to deal with technical, economic, and policy uncertainties. Moreover, downside risk theory is introduced to balance the trade-off between profit and risk according to the decision-maker's risk-aversion attitude. To verify the effectiveness and practical applicability of this approach, an inexact stochastic risk-aversion model is developed for regional electric system planning and management in the Ningxia Hui Autonomous Region, China. The series of solutions provides the decision-maker with the optimal investment strategy and operation management under different future emission reduction scenarios and risk-aversion levels. The results indicate that pollution control devices are still the main measure for achieving the current mitigation goal, and that adjustment of the generation structure would play an important role in a future cleaner electricity system under stricter environmental policy. In addition, the model can be used for generating decision alternatives and helping decision-makers identify desired energy structure adjustments and pollutant/carbon mitigation abatement policies under various economic and system-reliability constraints.

  2. A new approach for gravity localization in six-dimensional geometries

    Santos, Victor Pereira do Nascimento; Almeida, Carlos Alberto Santos de

    2011-01-01

    Full text: The idea that spacetime may have more than four dimensions is old, originally presented as an attempt to unify Maxwell's theory of electromagnetism with the brand-new gravitation theory of Einstein. Such extra dimensions are in principle unobservable at the energy scales currently available. However, their effects can be seen in short-distance gravity experiments and in cosmological observations. Extra dimensions are also used as a mechanism to explain the difference between the energy scales of the weak force and gravity, which is called the hierarchy problem. The current framework for the extra-dimension scenario is to consider the known four-dimensional universe as embedded in a higher-dimensional space called the bulk. The form of this bulk determines how we perceive gravity in our universe; the behaviour of the gravitational field thus depends on the geometry of the bulk. Metric solutions have already been presented for a string-like defect, with and without matter sources, where it was shown that the correction to the Newtonian potential falls off with the inverse cube of the distance. Such a correction arises from a very particular mass spectrum for the gravitational field, which already contains the orbital angular momentum contributions. In this work we study the behaviour of the gravitational field in an extra-dimensional braneworld scenario, using non-factorizable geometries (which preserve Poincare symmetry) and setting suitable matter distributions in order to verify its localization for several geometries. For such geometries it is possible to find explicit solutions for the tensor fluctuations of the metric. (author)

  3. Graph-based stochastic control with constraints: A unified approach with perfect and imperfect measurements

    Agha-mohammadi, Ali-akbar

    2013-06-01

    This paper is concerned with the problem of stochastic optimal control (possibly with imperfect measurements) in the presence of constraints. We propose a computationally tractable framework to address this problem. The method lends itself to sampling-based methods where we construct a graph in the state space of the problem, on which a Dynamic Programming (DP) problem is solved and a closed-loop feedback policy is computed. The constraints are seamlessly incorporated into the control policy selection by including their effect on the transition probabilities of the graph edges. We present a unified framework that is applicable both in the state space (with perfect measurements) and in the information space (with imperfect measurements).
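    A minimal sketch of the graph-based DP step: value iteration on a small hypothetical graph in which constraint violations are folded into the transition probabilities as absorption into a high-cost "failure" node. The discount factor, costs and probabilities are ours, not the paper's.

```python
import numpy as np

gamma = 0.95   # discount factor (needed because the failure state recurs)

# Hypothetical 4-node graph; node 3 is the goal, node 0 a "failure" node
# that absorbs constraint-violating transitions. P[a][s, s'] gives the
# transition probability from s to s' under action (edge choice) a.
P = [np.array([[1.0, 0.0, 0.0, 0.0],      # action 0: "short" edges
               [0.2, 0.0, 0.8, 0.0],
               [0.1, 0.0, 0.0, 0.9],
               [0.0, 0.0, 0.0, 1.0]]),
     np.array([[1.0, 0.0, 0.0, 0.0],      # action 1: "safe" edges
               [0.05, 0.45, 0.5, 0.0],
               [0.02, 0.0, 0.48, 0.5],
               [0.0, 0.0, 0.0, 1.0]])]
cost = np.array([[100.0, 100.0],          # dwelling in failure is expensive
                 [1.0, 2.0],              # safe action costs more per step
                 [1.0, 2.0],
                 [0.0, 0.0]])             # goal is absorbing and free

V = np.zeros(4)
for _ in range(500):                      # value iteration to a fixed point
    Q = np.stack([cost[:, a] + gamma * (P[a] @ V) for a in range(2)], axis=1)
    V = Q.min(axis=1)
print("V =", np.round(V, 1), " policy =", Q.argmin(axis=1))
```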

  4. Assessment of impact distances for particulate matter dispersion: A stochastic approach

    Godoy, S.M.; Mores, P.L.; Santa Cruz, A.S.M. [CAIMI - Centro de Aplicaciones Informaticas y Modelado en Ingenieria, Universidad Tecnologica Nacional-Facultad Regional Rosario, Zeballos 1341-S2000 BQA Rosario, Santa Fe (Argentina); Scenna, N.J. [CAIMI - Centro de Aplicaciones Informaticas y Modelado en Ingenieria, Universidad Tecnologica Nacional-Facultad Regional Rosario, Zeballos 1341-S2000 BQA Rosario, Santa Fe (Argentina); INGAR - Instituto de Desarrollo y Diseno (Fundacion ARCIEN - CONICET), Avellaneda 3657, S3002 GJC Santa Fe (Argentina)], E-mail: nscenna@santafe-conicet.gov.ar

    2009-10-15

    It is known that pollutants can be dispersed from emission sources by the wind, or settled on the ground. Particle size, stack height, topography and meteorological conditions strongly affect particulate matter (PM) dispersion. In this work, an impact distance calculation methodology considering different particulate sizes is presented. A Gaussian-type dispersion model for PM that handles particle sizes larger than 0.1 μm is used. The model considers primary particles and continuous emissions. The PM concentration distribution at every affected geographical point defined by a grid is computed. Stochastic uncertainty caused by the natural variability of atmospheric parameters is taken into consideration in the dispersion model by applying a Monte Carlo methodology. The prototype package STRRAP, which takes into account the stochastic behaviour of atmospheric variables and was developed for risk assessment and safe distance calculation [Godoy SM, Santa Cruz ASM, Scenna NJ. STRRAP SYSTEM - A software for hazardous materials risk assessment and safe distances calculation. Reliability Engineering and System Safety 2007;92(7):847-57], is extended here to the analysis of PM air dispersion. STRRAP computes distances from the source to every affected receptor in each trial and generates the impact distance distribution for each particulate size. In addition, a representative impact distance value to delimit the affected area can be obtained. The dispersion of fuel oil stack effluents in Rosario city is simulated as a case study. Mass concentration distributions and impact distances are computed for the range of interest in environmental air quality evaluations (PM2.5-PM10).

  5. Exploring energy efficiency in China's iron and steel industry: A stochastic frontier approach

    Lin, Boqiang; Wang, Xiaolei

    2014-01-01

    The iron and steel industry is one of the major energy-consuming industries in China. Given the limited research on effective energy conservation in China's industrial sectors, this paper analyzes the total factor energy efficiency and the corresponding energy conservation potential of China's iron and steel industry using the excessive energy-input stochastic frontier model. The results show that there was an increasing trend in energy efficiency between 2005 and 2011, with an average energy efficiency of 0.699 and a cumulative energy conservation potential of 723.44 million tons of coal equivalent (Mtce). We further analyze the regional differences in energy efficiency and find that energy efficiency in Northeastern China is high while that in Central and Western China is low. Energy conservation potential for the iron and steel industry is therefore concentrated in the Central and Western areas. In addition, we examine the factors behind inefficiency and find that structural defects in the economic system are an important impediment to energy efficiency, making economic restructuring the key to improving it. - Highlights: • A stochastic frontier model is adopted to analyze energy efficiency. • Industry concentration and ownership structure are the main factors affecting inefficiency. • Energy efficiency of China's iron and steel industry shows a fluctuating increase. • Regional differences in energy efficiency are further analyzed. • Future policy for energy conservation in China's iron and steel sector is suggested.

  6. A stochastic framework for the grid integration of wind power using flexible load approach

    Heydarian-Forushani, E.; Moghaddam, M.P.; Sheikh-El-Eslami, M.K.; Shafie-khah, M.; Catalão, J.P.S.

    2014-01-01

    Highlights: • This paper focuses on the potential of Demand Response Programs (DRPs) to contribute to flexibility. • A stochastic network-constrained unit commitment associated with DR is presented. • DR participation levels and electricity tariffs are evaluated with respect to providing a flexible load profile. • Novel quantitative indices for evaluating flexibility are defined to assess the success of DRPs. • DR types and customer participation levels are the main factors that modify the system load profile. - Abstract: Wind power integration has always been a key research area due to the green future power system target. However, the intermittent nature of wind power may impose technical and economic challenges on Independent System Operators (ISOs) and increase the need for additional flexibility. Motivated by this need, this paper focuses on the potential of Demand Response Programs (DRPs) as an option to contribute to the flexible operation of power systems. On this basis, in order to account for the uncertain nature of wind power and the realities of the electricity market, a Stochastic Network Constrained Unit Commitment associated with DR (SNCUCDR) is presented to schedule both generation units and responsive loads in power systems with high penetration of wind power. Afterwards, the effects of both price-based and incentive-based DRPs are evaluated, as well as DR participation levels and electricity tariffs, with respect to providing a flexible load profile and facilitating grid integration of wind power. For this purpose, novel quantitative flexibility indices are defined to assess the success of DRPs in terms of wind integration. Sensitivity studies indicate that DR types and customer participation levels are the main factors that modify the system load profile to support wind power integration.

  7. BRS symmetry in stochastic quantization of the gravitational field

    Nakazawa, Naohito.

    1989-12-01

    We study stochastic quantization of gravity in terms of a BRS-invariant canonical operator formalism. By artificially introducing canonical momentum variables for the original field variables, a canonical formulation of stochastic quantization is proposed in the sense that the Fokker-Planck Hamiltonian is the generator of the fictitious time translation. We then show that there exists a nilpotent BRS symmetry in an enlarged phase space for gravity (in general, for first-class constrained systems). The stochastic action of gravity explicitly includes a unique DeWitt-type superspace metric, which leads to a geometrical interpretation of quantum gravity analogous to nonlinear σ-models. (author)

  8. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. In regulatory circuits that require precise coordination, ODE modeling is thus still
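    The fixed-point/mode comparison can be reproduced in miniature with a self-activating birth-death scheme: the ODE is bistable, and a Gillespie simulation of the same propensities shows where the stationary probability mass actually sits. The parameters are chosen by us for bistability, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Self-activating birth-death scheme (parameters ours, tuned for bistability):
#   production propensity  a+(n) = k0 + k1 * n^h / (K^h + n^h)
#   degradation propensity a-(n) = gam * n
k0, k1, K, h, gam = 2.0, 20.0, 10.0, 4, 1.0
f = lambda n: k0 + k1 * n**h / (K**h + n**h) - gam * n   # ODE right-hand side

# Deterministic fixed points located as sign changes of f on a grid.
grid = np.linspace(0.0, 40.0, 4_001)
sign_change = np.sign(f(grid[:-1])) != np.sign(f(grid[1:]))
print("ODE fixed points near:", np.round(grid[:-1][sign_change], 1))

# Gillespie simulation of the same propensities; dwell-time-weighted histogram.
t, n, t_end = 0.0, 0, 5_000.0
hist = np.zeros(200)
while t < t_end:
    a_prod = k0 + k1 * n**h / (K**h + n**h)
    a_deg = gam * n
    total = a_prod + a_deg
    dt = rng.exponential(1.0 / total)
    hist[n] += dt                      # time spent in state n
    t += dt
    n += 1 if rng.random() < a_prod / total else -1

p = hist / hist.sum()
mean = float(np.arange(200) @ p)
modes = [i for i in range(1, 199)
         if p[i] > 5e-3 and p[i] > p[i - 1] and p[i] >= p[i + 1]]
print(f"CME mean = {mean:.1f}, modes near n = {modes}")
```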

  9. String duality transformations in f(R) gravity from Noether symmetry approach

    Capozziello, Salvatore [Dipartimento di Fisica, Università di Napoli ' ' Federico II' ' , Compl. Univ. di Monte S. Angelo, Edificio G, Via Cinthia, I-80126, Napoli (Italy); Gionti, Gabriele S.J. [Specola Vaticana, Vatican City, V-00120, Vatican City State (Vatican City State, Holy See); Vernieri, Daniele, E-mail: capozziello@na.inf.it, E-mail: ggionti@as.arizona.edu, E-mail: vernieri@iap.fr [Sorbonne Universités, UPMC Univ Paris 6 et CNRS, UMR 7095, Institut d' Astrophysique de Paris, GReCO, 98bis Bd Arago, 75014 Paris (France)

    2016-01-01

    We select f(R) gravity models that undergo scale factor duality transformations. As a starting point, we consider the tree-level effective gravitational action of bosonic String Theory coupled with the dilaton field. This theory inherits the Buscher duality of its parent String Theory. Using conformal transformations of the metric tensor, it is possible to map the tree-level dilaton-graviton string effective action into f(R) gravity, relating the dilaton field to the Ricci scalar curvature. Furthermore, the duality can be framed under the standard of Noether symmetries and exact cosmological solutions are derived. Using suitable changes of variables, the string-based f(R) Lagrangians are shown in cases where the duality transformation becomes a parity inversion.

  10. String duality transformations in f(R) gravity from Noether symmetry approach

    Capozziello, Salvatore; Gionti, Gabriele S.J.; Vernieri, Daniele

    2016-01-01

    We select f(R) gravity models that undergo scale factor duality transformations. As a starting point, we consider the tree-level effective gravitational action of bosonic String Theory coupled with the dilaton field. This theory inherits the Buscher duality of its parent String Theory. Using conformal transformations of the metric tensor, it is possible to map the tree-level dilaton-graviton string effective action into f(R) gravity, relating the dilaton field to the Ricci scalar curvature. Furthermore, the duality can be framed under the standard of Noether symmetries and exact cosmological solutions are derived. Using suitable changes of variables, the string-based f(R) Lagrangians are shown in cases where the duality transformation becomes a parity inversion.

  11. The Gravity Model Approach: An Application to the ECOWAS Trading Bloc

    Luqman Afolabi O.

    2016-04-01

    Full Text Available This study examines bilateral trade flows across the ECOWAS-15 nations using panel and cross-section data for the period 1981-2013. To achieve this objective, various estimation techniques for the gravity model (static and dynamic) are employed. More specifically, the study investigates the impact of the formation of regional trade integration agreements (RTAs) on trade flows within a group of countries sharing the same currencies, and within ECOWAS at large. Regional variables are included in the gravity models to determine whether RTAs lead to trade creation or diversion. The results show a strong relationship between RTAs and trade flows.
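    The workhorse behind such studies is the log-linear gravity equation, ln T_ij = b0 + b1 ln GDP_i + b2 ln GDP_j + b3 ln dist_ij + b4 RTA_ij. A sketch of its estimation on synthetic data (standing in for the ECOWAS panel, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Synthetic bilateral observations standing in for the ECOWAS panel.
ln_gdp_i = rng.normal(10.0, 1.0, n)
ln_gdp_j = rng.normal(10.0, 1.0, n)
ln_dist = rng.normal(7.0, 0.5, n)
rta = rng.integers(0, 2, n)          # 1 if the pair belongs to the bloc

# "True" gravity relationship used to generate the trade flows.
ln_trade = (-5.0 + 0.9 * ln_gdp_i + 0.8 * ln_gdp_j
            - 1.1 * ln_dist + 0.4 * rta + rng.normal(0.0, 0.3, n))

# OLS estimation of the log-linear gravity equation.
X = np.column_stack([np.ones(n), ln_gdp_i, ln_gdp_j, ln_dist, rta])
beta, *_ = np.linalg.lstsq(X, ln_trade, rcond=None)
for name, b in zip(["const", "ln GDP_i", "ln GDP_j", "ln dist", "RTA"], beta):
    print(f"{name:9s} {b:+.3f}")
# A positive RTA coefficient indicates trade creation within the bloc.
```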

  12. Black hole state counting in loop quantum gravity: a number-theoretical approach.

    Agulló, Iván; Barbero G, J Fernando; Díaz-Polo, Jacobo; Fernández-Borja, Enrique; Villaseñor, Eduardo J S

    2008-05-30

    We give an efficient method, combining number-theoretic and combinatorial ideas, to exactly compute black hole entropy in the framework of loop quantum gravity. Along the way we provide a complete characterization of the relevant sector of the spectrum of the area operator, including degeneracies, and explicitly determine the number of solutions to the projection constraint. We use a computer implementation of the proposed algorithm to confirm and extend previous results on the detailed structure of the black hole degeneracy spectrum.

  13. The Principle of the Fermionic Projector: An Approach for Quantum Gravity?

    Finster, Felix

    In this short article we introduce the mathematical framework of the principle of the fermionic projector and set up a variational principle in discrete space-time. The underlying physical principles are discussed. We outline the connection to the continuum theory and state recent results. In the last two sections, we speculate on how it might be possible to describe quantum gravity within this framework.

  14. Late-time cosmological approach in mimetic f(R, T) gravity

    Baffou, E.H. [Institut de Mathematiques et de Sciences Physiques (IMSP), Porto-Novo (Benin); Houndjo, M.J.S. [Institut de Mathematiques et de Sciences Physiques (IMSP), Porto-Novo (Benin); Faculte des Sciences et Techniques de Natitingou, Natitingou (Benin); Hamani-Daouda, M. [Universite de Niamey, Departement de Physique, Niamey (Niger); Alvarenga, F.G. [Universidade Federal do Espirito Santo, Departamento de Engenharia e Ciencias Naturais, CEUNES, Sao Mateus, ES (Brazil)

    2017-10-15

    In this paper, we investigate late-time cosmic acceleration in mimetic f(R, T) gravity with a Lagrange multiplier and potential in a Universe containing, besides radiation and dark energy, self-interacting (collisional) matter. Through the modified Friedmann equations we obtain the main equation that describes the cosmological evolution. Then, for several models of Q(z) and the well-known particular model of f(R, T), we perform an analysis of the late-time evolution. We examine the behavior of the Hubble parameter, the dark energy equation of state and the total effective equation of state, and in each case we compare the resulting picture with that for non-collisional matter (assumed as dust) and also with collisional matter in mimetic f(R, T) gravity. The results obtained are in good agreement with the observational data and show that in the presence of collisional matter the dark energy oscillations in mimetic f(R, T) gravity can be damped. (orig.)

  15. Risk Management of Interest Rate Derivative Portfolios: A Stochastic Control Approach

    Konstantinos Kiriakopoulos

    2014-10-01

    Full Text Available In this paper we formulate the risk management control problem in the interest rate area as a constrained stochastic portfolio optimization problem. The utility that we use can be any continuous function, and, based on viscosity theory, the unique solution of the problem is guaranteed. The numerical approximation scheme is presented and applied using a single-factor interest rate model. It is shown how the whole methodology works in practice, with the implementation of the algorithm for a specific interest rate portfolio. The recent financial crisis showed that risk management of derivatives portfolios, especially in the interest rate market, is crucial for the stability of the financial system. Modern Value at Risk (VaR) and Conditional Value at Risk (CVaR) techniques, although very useful and easy to understand, fail to grasp the need for on-line control and monitoring of a derivatives portfolio. Portfolios should be designed in a way that risk and return can be quantified and controlled in every possible state of the world. We hope that this methodology contributes towards this direction.
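    For a single-factor model the monitoring loop can be sketched directly: simulate the short rate one risk horizon ahead, revalue the position with the closed-form bond price, and read VaR/CVaR off the simulated P&L. The Vasicek parameters and the zero-coupon position are illustrative, and discounting/carry over the horizon is ignored.

```python
import numpy as np

rng = np.random.default_rng(8)

# Vasicek single-factor short-rate model dr = a(b - r)dt + sigma dW
# (all parameters assumed for illustration).
a, b, sigma, r0 = 0.5, 0.03, 0.01, 0.02

def zcb_price(r, tau):
    """Closed-form Vasicek zero-coupon bond price for maturity tau."""
    B = (1.0 - np.exp(-a * tau)) / a
    A = np.exp((B - tau) * (a * a * b - 0.5 * sigma**2) / a**2
               - sigma**2 * B**2 / (4.0 * a))
    return A * np.exp(-B * r)

# One-month P&L of a long 5-year zero-coupon bond (carry ignored).
horizon, tau = 1.0 / 12.0, 5.0
mu = b + (r0 - b) * np.exp(-a * horizon)                  # exact transition
sd = sigma * np.sqrt((1.0 - np.exp(-2.0 * a * horizon)) / (2.0 * a))
r_h = rng.normal(mu, sd, size=100_000)
pnl = zcb_price(r_h, tau - horizon) - zcb_price(r0, tau)

losses = -pnl
var95 = np.percentile(losses, 95)
cvar95 = losses[losses >= var95].mean()
print(f"95% VaR  = {var95:.4f} per unit notional")
print(f"95% CVaR = {cvar95:.4f} per unit notional")
```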

  16. Maximum power tracking in WECS (Wind energy conversion systems) via numerical and stochastic approaches

    Elnaggar, M.; Abdel Fattah, H.A.; Elshafei, A.L.

    2014-01-01

    This paper presents a complete design of a two-level control system to capture maximum power in wind energy conversion systems. The upper level of the proposed control system adopts a modified line-search optimization algorithm to determine a setpoint for the wind turbine speed. The calculated speed setpoint corresponds to the maximum power point at the given operating conditions. The speed setpoint is fed to a generalized predictive controller at the lower level of the control system. A different formulation, which treats the aerodynamic torque as a disturbance, is postulated to derive the control law. The objective is to track the setpoint accurately while keeping the control action free from unacceptably fast or frequent variations. Simulation results based on a realistic model of a 1.5 MW wind turbine confirm the superiority of the proposed control scheme to conventional ones. - Highlights: • The structure of an MPPT (maximum power point tracking) scheme is presented. • The scheme is divided into the optimization algorithm and the tracking controller. • The optimization algorithm is based on an online line-search numerical algorithm. • The tracking controller treats the aerodynamic torque as a loop disturbance. • The control technique is simulated with stochastic wind speed in Simulink and FAST.

  17. A Stochastic Geometry Approach to Full-Duplex MIMO Relay Network

    Mhd Nour Hindia

    2018-01-01

    Full Text Available Cellular networks are extensively modeled by placing the base stations on a grid, with relays and destinations placed deterministically. Such models are idealized in that they neglect interference when evaluating coverage/outage and capacity. Realistic models that can overcome this limitation are desirable. Specifically, in a cellular downlink environment, full-duplex (FD) relays and destinations are prone to interference from unintended sources and relays. This paper considers a two-hop cellular network in which mobile nodes aid the sources by relaying the signal to the dead zone. Further, we model the locations of the sources, relays, and destination nodes as a point process on the plane and analyze the performance of the two hops in the downlink. We then obtain the success probability and the ergodic capacity of the two-hop MIMO relay scheme, accounting for the interference from all other adjacent cells. We deploy stochastic geometry and point process theory to rigorously analyze the two-hop scheme with and without interference cancellation. The resulting expressions are amenable to numerical evaluation and are corroborated by simulation results.
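    The stochastic-geometry calculation behind such success probabilities can be sketched for a single link: interferers form a Poisson point process, all links see Rayleigh fading, and success means the SIR exceeds a threshold. For path-loss exponent 4 the Monte Carlo estimate can be checked against the standard closed form exp(-pi*lam*d^2*sqrt(theta)*pi/2). Densities and distances below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

lam = 1e-5        # interferer density per m^2 (assumed)
alpha = 4.0       # path-loss exponent
d = 100.0         # source-to-relay link distance (m)
theta = 1.0       # SIR threshold (0 dB)
R = 5_000.0       # radius of the simulated disc

succ, trials = 0, 20_000
for _ in range(trials):
    # Poisson number of interferers, placed uniformly on the disc.
    k = rng.poisson(lam * np.pi * R * R)
    r = R * np.sqrt(rng.random(k))       # radii with uniform area density
    # Rayleigh fading on every link gives exponential power gains.
    signal = rng.exponential() * d**(-alpha)
    interference = np.sum(rng.exponential(size=k) * r**(-alpha))
    succ += signal > theta * interference

print(f"success probability = {succ / trials:.3f}")
p_theory = np.exp(-lam * np.pi * d**2 * np.sqrt(theta) * np.pi / 2.0)
print(f"closed form         = {p_theory:.3f}")
```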

  18. A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem

    Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao

    A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses have recently attracted much attention thanks to their enhanced ability to save energy by making full use of renewable energy and by helping to maintain power grid stability despite the increased power flows from installed PV systems. Yet running a smart house's power system, with its multiple power sources and storage units, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed-integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek the optimal power schedule running at the minimum expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.
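
    The structure of a two-stage stochastic program with recourse can be illustrated on a deliberately small toy version of the problem: a binary on/off schedule for the fuel cell is fixed in the first stage, and grid purchases act as recourse once demand and PV output are revealed. All numbers below are hypothetical.

```python
import itertools
import numpy as np

# Toy two-stage problem: first-stage on/off schedule for a fuel cell (FC)
# over 4 periods; second-stage recourse buys grid power per scenario.
fc_output, fc_cost = 0.7, 0.10           # kWh per period, $ per period on
grid_price, T = 0.25, 4                  # $ per kWh bought as recourse

scenarios = [                            # (probability, net demand per period)
    (0.5, np.array([1.0, 1.2, 0.8, 1.1])),   # average day
    (0.3, np.array([1.4, 1.5, 1.3, 1.4])),   # cloudy: little PV, high demand
    (0.2, np.array([0.6, 0.5, 0.7, 0.6])),   # sunny: PV covers most demand
]

best = None
for schedule in itertools.product([0, 1], repeat=T):   # enumerate first stage
    s = np.array(schedule)
    first_stage = fc_cost * s.sum()
    recourse = sum(p * grid_price * np.maximum(d - fc_output * s, 0).sum()
                   for p, d in scenarios)              # expected recourse cost
    total = first_stage + recourse
    if best is None or total < best[0]:
        best = (total, schedule)

print(f"optimal schedule {best[1]}, expected cost ${best[0]:.3f}")
```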

  19. PARTICLE ACCELERATION AT THE HELIOSPHERIC TERMINATION SHOCK WITH A STOCHASTIC SHOCK OBLIQUITY APPROACH

    Arthur, Aaron D.; Le Roux, Jakobus A.

    2013-01-01

    Observations by the plasma and magnetic field instruments on board the Voyager 2 spacecraft suggest that the termination shock is weak, with a compression ratio of ∼2. However, this is contrary to the observations of accelerated particle spectra at the termination shock, for which standard diffusive shock acceleration theory predicts a compression ratio closer to ∼2.9. Using our focused transport model, we investigate pickup proton acceleration at a stationary spherical termination shock with a moderately strong compression ratio of 2.8, including both the subshock and precursor. We show that for the particle energies observed by the Voyager 2 Low Energy Charged Particle (LECP) instrument, pickup protons have effective length scales of diffusion that are larger than the combined subshock and precursor termination shock structure observed. As a result, the particles experience a total effective termination shock compression ratio that is larger than the values inferred by the plasma and magnetic field instruments for the subshock, and similar to the value predicted by diffusive shock acceleration theory. Furthermore, using a stochastically varying magnetic field angle, we are able to qualitatively reproduce the multiple power-law structure observed for the LECP spectra downstream of the termination shock.

  20. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to obtain a reliable distribution of input influences and a trustworthy interpretation of the mathematical model results.
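
    The core numerical ingredient, multidimensional integration with Sobol sequences versus plain Monte Carlo, can be reproduced in a few lines with SciPy's quasi-Monte Carlo module. The integrand below is a standard separable test function standing in for the air pollution model, with the dimension matching the six reaction rates.

```python
import numpy as np
from scipy.stats import qmc

# Integrate f over the 6-dimensional unit cube; the exact value is 1.
f = lambda x: np.prod((np.pi / 2) * np.sin(np.pi * x), axis=1)

d, n = 6, 2**12
rng = np.random.default_rng(2)

mc = f(rng.random((n, d))).mean()                                # plain MC
sobol = f(qmc.Sobol(d, scramble=True, seed=2).random(n)).mean()  # Sobol QMC

print(f"plain MC: {mc:.5f}   Sobol QMC: {sobol:.5f}   exact: 1.0")
```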

  1. Linear stochastic systems a geometric approach to modeling, estimation and identification

    Lindquist, Anders

    2015-01-01

    This book presents a treatise on the theory and modeling of second-order stationary processes, including an exposition on selected application areas that are important in the engineering and applied sciences. The foundational issues regarding stationary processes dealt with in the beginning of the book have a long history, starting in the 1940s with the work of Kolmogorov, Wiener, Cramér and his students, in particular Wold, and have since been refined and complemented by many others. Problems concerning the filtering and modeling of stationary random signals and systems have also been addressed and studied, fostered by the advent of modern digital computers, since the fundamental work of R.E. Kalman in the early 1960s. The book offers a unified and logically consistent view of the subject based on simple ideas from Hilbert space geometry and coordinate-free thinking. In this framework, the concepts of stochastic state space and state space modeling, based on the notion of the conditional independence of pas...

  2. Technical efficiency of certified maize seed in Palpa district, Nepal: A stochastic frontier production approach

    Mahima Bajracharya

    2017-12-01

    Full Text Available The cereal crop maize is regarded as a staple food, mainly in the hill areas of Nepal. Seed is one of the vital inputs that determine the production and yield of any crop. Farmers are found to use the required inputs in a haphazard way, which has increased the cost of production and the inefficiency of the resources used. Research on the seed sector is limited. Against this backdrop, this study aimed to assess the level of technical efficiency (TE) of certified maize seed production. A total of 164 certified seed producers were interviewed in June 2016, selected by simple random sampling, in the Palpa district of Nepal. The results revealed that increasing the amounts of seed and labor by one percent would increase the yield of certified maize seed by 0.29 and 0.34 percent, respectively. TE was estimated using a stochastic production frontier model in the Stata software. The average TE was found to be 70 percent, which reveals scope for increasing TE by 30 percent using the existing available resources. About 29 percent of farmers had a TE of ≥0.7-0.8, followed by 27.44 percent at ≥0.8-0.9. The government and other stakeholders should prioritize providing technical knowledge via training and increasing extension-worker visits to raise the TE of certified maize seed producers in the district.
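
    A minimal sketch of the normal/half-normal stochastic production frontier behind such TE estimates is given below, using simulated data in place of the survey (the paper uses Stata; here the log-likelihood is maximized directly and technical efficiency is recovered with the standard Jondrow et al. formula).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Simulated Cobb-Douglas data in logs: y = b0 + b1*seed + b2*labor + v - u,
# with half-normal inefficiency u >= 0 (a stand-in for the survey data).
n = 164
X = np.column_stack([np.ones(n), rng.normal(0, 1, n), rng.normal(0, 1, n)])
beta_true, sv, su = np.array([1.0, 0.29, 0.34]), 0.2, 0.4
y = X @ beta_true + rng.normal(0, sv, n) - np.abs(rng.normal(0, su, n))

def negll(theta):
    """Negative log-likelihood of the normal/half-normal frontier model."""
    b, sv_, su_ = theta[:3], np.exp(theta[3]), np.exp(theta[4])
    sigma, lam = np.hypot(sv_, su_), su_ / sv_
    eps = y - X @ b
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(negll, x0=np.zeros(5), method="BFGS")
b, sv_, su_ = res.x[:3], np.exp(res.x[3]), np.exp(res.x[4])

# Jondrow et al. point estimate of u_i, then technical efficiency exp(-u_i).
sigma, lam = np.hypot(sv_, su_), su_ / sv_
eps = y - X @ b
z = eps * lam / sigma
sstar = su_ * sv_ / sigma
u_hat = sstar * (norm.pdf(z) / (1 - norm.cdf(z)) - z)
print("mean technical efficiency:", np.exp(-u_hat).mean().round(3))
```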

  3. Stochastic estimation approach for the evaluation of thermal-hydraulic parameters in pressurized water reactors

    Shieh, D.J.; Upadhyaya, M.G.

    1986-01-01

    A method based on the extended Kalman filter is developed for the estimation of the core coolant mass flow rate in pressurized water reactors. The need for flow calibration can be avoided by a direct estimation of this parameter. A reduced-order neutronic and thermal-hydraulic model is developed for the Loss-of-Fluid Test (LOFT) reactor. The neutron detector and core-exit coolant temperature signals from the LOFT reactor are used as measurements in the parameter estimation algorithm. The estimation sensitivity to model uncertainties was evaluated using ambiguity function analysis, which also provides a lower bound on the measurement sample size necessary to achieve a given estimation accuracy. A sequential technique was developed to minimize the computational effort needed to discretize the continuous-time equations, and thus achieve faster convergence to the true parameter value. The stochastic approximation method was first evaluated using simulated random data, and then applied to the estimation of the coolant flow rate using operational data from the LOFT reactor at 100 and 65% flow rate conditions.
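
    The essence of the technique, treating an unknown physical parameter as an extra state and estimating it with an extended Kalman filter, can be shown on a scalar toy system. The dynamics and noise levels below are assumptions chosen for illustration, not the LOFT reactor model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy joint state/parameter estimation: x[k+1] = a*x[k] + 0.05 + w with
# unknown coefficient a (a stand-in for the flow parameter). The EKF
# augments the state to (x, a) and linearizes the dynamics.
a_true, q, r = 0.95, 1e-4, 1e-2
x, ys = 1.0, []
for _ in range(500):
    x = a_true * x + 0.05 + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))      # noisy measurement of x

z = np.array([1.0, 0.5])                # augmented state estimate (x, a)
P = np.diag([1.0, 1.0])                 # its covariance
Q = np.diag([q, 1e-8])                  # tiny noise on a keeps it adaptive
H = np.array([[1.0, 0.0]])              # we measure x only
for y in ys:
    # predict: f(x, a) = (a*x + 0.05, a); Jacobian F = [[a, x], [0, 1]]
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + 0.05, z[1]])
    P = F @ P @ F.T + Q
    # update with the scalar measurement
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated a = {z[1]:.3f} (true {a_true})")
```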

  4. Modeling Cellular Networks with Full Duplex D2D Communication: A Stochastic Geometry Approach

    Ali, Konpal S.

    2016-08-24

    Full-duplex (FD) communication is optimistically promoted to double the spectral efficiency if sufficient self-interference cancellation (SIC) is achieved. However, this is not true when deploying FD-communication in a large-scale setup due to the induced mutual interference. Therefore, a large-scale study is necessary to draw legitimate conclusions about gains associated with FD-communication. This paper studies the FD operation for underlay device-to-device (D2D) communication sharing the uplink resources in cellular networks. We propose a disjoint fine-tuned selection criterion for the D2D and FD modes of operation. Then, we develop a tractable analytical paradigm, based on stochastic geometry, to calculate the outage probability and rate for cellular and D2D users. The results reveal that even in the case of perfect SIC, due to the increased interference injected to the network by FD-D2D communication, having all proximity UEs transmit in FD-D2D is not beneficial for the network. However, if the system parameters are carefully tuned, non-trivial network spectral-efficiency gains (64% shown) can be harvested. We also investigate the effects of imperfect SIC and D2D-link distance distribution on the harvested FD gains.

  5. In-Band α-Duplex Scheme for Cellular Networks: A Stochastic Geometry Approach

    Alammouri, Ahmad

    2016-07-13

    In-band full-duplex (FD) communications have been optimistically promoted to improve the spectrum utilization and efficiency. However, the penetration of FD communications to the cellular networks domain is challenging due to the imposed uplink/downlink interference. This paper presents a tractable framework, based on stochastic geometry, to study FD communications in cellular networks. Particularly, we assess the FD communications effect on the network performance and quantify the associated gains. The study proves the vulnerability of the uplink to the downlink interference and shows that FD rate gains harvested in the downlink (up to 97%) come at the expense of a significant degradation in the uplink rate (up to 94%). Therefore, we propose a novel fine-grained duplexing scheme, denoted as α-duplex scheme, which allows a partial overlap between the uplink and the downlink frequency bands. We derive the required conditions to harvest rate gains from the α-duplex scheme and show its superiority to both the FD and half-duplex (HD) schemes. In particular, we show that the α-duplex scheme provides a simultaneous improvement of 28% for the downlink rate and 56% for the uplink rate. Finally, we show that the amount of the overlap can be optimized based on the network design objective.

  6. A geometric stochastic approach based on marked point processes for road mark detection from high resolution aerial images

    Tournaire, O.; Paparoditis, N.

    Road detection has been a topic of great interest in the photogrammetric and remote sensing communities since the end of the 70s. Many approaches dealing with various sensor resolutions, the nature of the scene or the desired accuracy of the extracted objects have been presented. This topic remains challenging today as the need for accurate and up-to-date data becomes more and more important. In this context, we study in this paper the road network from a particular point of view, focusing on road marks, and in particular dashed lines. Indeed, they are very useful clues for evidence of a road, but also for tasks of a higher level. For instance, they can be used to enhance quality and to improve road databases. It is also possible to delineate the different circulation lanes, their width and functionality (speed limit, special lanes for buses or bicycles...). In this paper, we propose a new robust and accurate top-down approach for dashed line detection based on stochastic geometry. Our approach is automatic in the sense that no intervention from a human operator is necessary to initialise the algorithm or to track errors during the process. The core of our approach relies on defining geometric, radiometric and relational models for dashed-line objects. The model also has to deal with the interactions between the different objects making up a line, meaning that it introduces external knowledge taken from specifications. Our strategy is based on a stochastic method, in particular marked point processes. Our goal is to find the object configuration minimising an energy function made up of a data attachment term, measuring the consistency of the image with respect to the objects, and a regularising term managing the relationships between neighbouring objects. To sample the energy function, we use Green's algorithm coupled with simulated annealing to find its minimum. Results from aerial images at various resolutions are presented showing that our
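
    The paper samples a trans-dimensional configuration space with Green's reversible-jump algorithm; the sketch below shows only the simulated annealing acceptance rule on a fixed-dimension toy energy, which is the part that drives the minimisation.

```python
import math
import random

random.seed(5)

def energy(x):
    """Toy multimodal energy standing in for the data-attachment-plus-prior
    energy over dashed-line configurations."""
    return x**2 / 10 + math.sin(3 * x) ** 2

def simulated_annealing(x0, t0=1.0, cooling=0.999, steps=20000):
    x, e, t = x0, energy(x0), t0
    for _ in range(steps):
        cand = x + random.gauss(0, 0.5)        # propose a local perturbation
        de = energy(cand) - e
        # accept downhill always, uphill with Boltzmann probability
        if de < 0 or random.random() < math.exp(-de / t):
            x, e = cand, energy(cand)
        t *= cooling                           # geometric cooling schedule
    return x, e

x_min, e_min = simulated_annealing(8.0)
print(f"found x = {x_min:.3f}, energy = {e_min:.4f}")
```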

  7. A dynamically adaptive wavelet approach to stochastic computations based on polynomial chaos - capturing all scales of random modes on independent grids

    Ren Xiaoan; Wu Wenquan; Xanthis, Leonidas S.

    2011-01-01

    Highlights: → New approach for stochastic computations based on polynomial chaos. → Development of dynamically adaptive wavelet multiscale solver using space refinement. → Accurate capture of steep gradients and multiscale features in stochastic problems. → All scales of each random mode are captured on independent grids. → Numerical examples demonstrate the need for different space resolutions per mode. - Abstract: In stochastic computations, or uncertainty quantification methods, the spectral approach based on the polynomial chaos expansion in random space leads to a coupled system of deterministic equations for the coefficients of the expansion. The size of this system increases drastically when the number of independent random variables and/or order of polynomial chaos expansions increases. This is invariably the case for large scale simulations and/or problems involving steep gradients and other multiscale features; such features are variously reflected on each solution component or random/uncertainty mode requiring the development of adaptive methods for their accurate resolution. In this paper we propose a new approach for treating such problems based on a dynamically adaptive wavelet methodology involving space-refinement on physical space that allows all scales of each solution component to be refined independently of the rest. We exemplify this using the convection-diffusion model with random input data and present three numerical examples demonstrating the salient features of the proposed method. Thus we establish a new, elegant and flexible approach for stochastic problems with steep gradients and multiscale features based on polynomial chaos expansions.
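
    A one-dimensional polynomial chaos expansion, the building block that the adaptive wavelet solver refines mode by mode, can be written down directly with probabilists' Hermite polynomials. The test function exp(ξ) below has known mean and variance, which makes the sketch self-checking.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# 1-D polynomial chaos sketch: expand u(xi) = exp(xi), xi ~ N(0,1), in
# probabilists' Hermite polynomials He_k and recover mean and variance.
order, nquad = 8, 40
nodes, weights = He.hermegauss(nquad)         # weight function exp(-x^2/2)
weights /= np.sqrt(2 * np.pi)                 # normalize to the Gaussian pdf

u = np.exp(nodes)
fact = np.cumprod([1, *range(1, order + 1)])  # k! for k = 0..order
coeff = [(weights * u * He.hermeval(nodes, [0] * k + [1])).sum() / fact[k]
         for k in range(order + 1)]

mean = coeff[0]
var = sum(c**2 * fact[k] for k, c in enumerate(coeff) if k > 0)
print(f"PCE mean {mean:.5f} (exact {np.e**0.5:.5f})")
print(f"PCE var  {var:.5f} (exact {(np.e - 1) * np.e:.5f})")
```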

  8. Development of the negative gravity anomaly of the 85 degrees E Ridge, northeastern Indian Ocean – A process oriented modelling approach

    Sreejith, K.M.; Radhakrishna, M.; Krishna, K.S.; Majumdar, T.J.

    Te value. Entire process is repeated for different Te values ranging from 0 to 25 km, until a good fit is obtained between the observed and calculated gravity anomalies considering RMS error as well as amplitude and wavelength of the anomalies... as the goodness of fit. The model parameters used in the computations are given in table 1. 5. Crustal structure and elastic plate thickness (Te) beneath the ridge Following the approach described above, we have computed individual gravity anomalies contributed...

  9. A New Approach to Predict Microbial Community Assembly and Function Using a Stochastic, Genome-Enabled Modeling Framework

    King, E.; Brodie, E.; Anantharaman, K.; Karaoz, U.; Bouskill, N.; Banfield, J. F.; Steefel, C. I.; Molins, S.

    2016-12-01

    Characterizing and predicting the microbial and chemical compositions of subsurface aquatic systems necessitates an understanding of the metabolism and physiology of organisms that are often uncultured or studied under conditions not relevant for one's environment of interest. Cultivation-independent approaches are therefore important and have greatly enhanced our ability to characterize functional microbial diversity. The capability to reconstruct genomes representing thousands of populations from microbial communities using metagenomic techniques provides a foundation for development of predictive models for community structure and function. Here, we discuss a genome-informed stochastic trait-based model incorporated into a reactive transport framework to represent the activities of coupled guilds of hypothetical microorganisms. Metabolic pathways for each microbe within a functional guild are parameterized from metagenomic data with a unique combination of traits governing organism fitness under dynamic environmental conditions. We simulate the thermodynamics of coupled electron donor and acceptor reactions to predict the energy available for cellular maintenance, respiration, biomass development, and enzyme production. While `omics analyses can now characterize the metabolic potential of microbial communities, it is functionally redundant as well as computationally prohibitive to explicitly include the thousands of recovered organisms into biogeochemical models. However, one can derive potential metabolic pathways from genomes along with trait-linkages to build probability distributions of traits. These distributions are used to assemble groups of microbes that couple one or more of these pathways. From the initial ensemble of microbes, only a subset will persist based on the interaction of their physiological and metabolic traits with environmental conditions, competing organisms, etc. Here, we analyze the predicted niches of these hypothetical microbes and

  10. A stochastic chemical dynamic approach to correlate autoimmunity and optimal vitamin-D range.

    Roy, Susmita; Shrinivas, Krishna; Bagchi, Biman

    2014-01-01

    Motivated by several recent experimental observations that vitamin-D could interact with antigen presenting cells (APCs) and T-lymphocyte cells (T-cells) to promote and to regulate different stages of immune response, we developed a coarse-grained but general kinetic model in an attempt to capture the role of vitamin-D in immunomodulatory responses. Our kinetic model, developed using the ideas of chemical network theory, leads to a system of nine coupled equations that we solve both by direct and by stochastic (Gillespie) methods. Both analyses consistently provide detailed information on the dependence of the immune response on variations of the critical rate parameters. We find that although vitamin-D plays a negligible role in the initial immune response, it exerts a profound influence in the long term, especially in helping the system to achieve a new, stable steady state. The study explores the role of vitamin-D in preserving an observed bistability in the phase diagram (spanned by system parameters) of immune regulation, thus allowing the response to tolerate a wide range of pathogenic stimulation, which could help in resisting autoimmune diseases. We also study how vitamin-D affects the time-dependent population of dendritic cells that connect innate and adaptive immune responses. Variations in the dose-dependent response of anti-inflammatory and pro-inflammatory T-cell populations to vitamin-D correlate well with recent experimental results. Our kinetic model allows for an estimation of the optimum range of vitamin-D required for smooth functioning of the immune system and for control of both hyper-regulation and inflammation. Most importantly, the present study reveals that an overdose or toxic level of vitamin-D or any steroid analogue could give rise to too large a tolerant response, leading to an inefficacy in adaptive immune function.
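
    The stochastic (Gillespie) side of such an analysis follows the standard direct method: draw an exponential waiting time from the total propensity, then pick which reaction fires. The sketch below applies it to a single production/degradation node, a stand-in for one of the nine coupled species.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gillespie direct-method sketch for a toy activation/decay network:
#   (1) production:   0 -> A   at rate k1
#   (2) degradation:  A -> 0   at rate k2 * A
k1, k2 = 10.0, 0.1
t, a, t_end = 0.0, 0, 200.0
times, counts = [t], [a]

while t < t_end:
    rates = np.array([k1, k2 * a])
    total = rates.sum()
    t += rng.exponential(1 / total)          # time to the next reaction
    if rng.random() < rates[0] / total:      # choose which reaction fires
        a += 1
    else:
        a -= 1
    times.append(t); counts.append(a)

print(f"steady-state mean ≈ {np.mean(counts[len(counts)//2:]):.1f} "
      f"(deterministic k1/k2 = {k1/k2:.1f})")
```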

  11. Microbial and Organic Fine Particle Transport Dynamics in Streams - a Combined Experimental and Stochastic Modeling Approach

    Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron

    2014-05-01

    Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams, with a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling

  13. Stochastic modeling of catalytic processes in nanoporous materials: Beyond mean-field approach

    Garcia, Andres [Iowa State Univ., Ames, IA (United States)

    2017-08-05

    Transport and reaction in zeolites and other porous materials, such as mesoporous silica particles, have been a focus of interest in recent years. This is in part due to the possibility of anomalous transport effects (e.g. single-file diffusion) and their impact on the reaction yield in catalytic processes. Computational simulations are often used to study these complex nonequilibrium systems. Computer simulations using Molecular Dynamics (MD) techniques are prohibitive, so coarse-grained one-dimensional models are used instead, with the aid of Kinetic Monte Carlo (KMC) simulations. Both techniques can be computationally expensive in both time and resources. These coarse-grained systems can be exactly described by a set of coupled stochastic master equations that describe the reaction-diffusion kinetics of the system. The equations can be written exactly; however, coupling between the equations and terms within the equations makes it impossible to solve them exactly, and approximations must be made. One of the most common methods to obtain approximate solutions is Mean Field (MF) theory. MF treatments yield reasonable results at high ratios of the reaction rate k to the hop rate h of the particles, but fail completely at low k/h due to the over-estimation of fluxes of particles within the pore. We develop a method to estimate fluxes and intrapore diffusivity in simple one-dimensional reaction-diffusion models at high and low k/h, where the pores are coupled to an equilibrated three-dimensional fluid. We thus successfully describe these simple one-dimensional reaction-diffusion systems analytically. Extensions to models considering behavior with long-range steric interactions and wider pores require the determination of multiple boundary conditions. We give a prescription to estimate the required parameters for these simulations. For one-dimensional systems, if single-file diffusion is relaxed, additional parameters to describe particle exchange have to be introduced. We use
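
    A minimal kinetic Monte Carlo sketch of the kind of model described, a one-dimensional pore with single-file hopping, conversion A → B, and exchange with an external reservoir at the pore ends, is given below; all rates are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)

L, h, k, w = 20, 1.0, 0.05, 0.5      # sites, hop rate, reaction rate, exchange
pore = np.zeros(L, dtype=int)        # 0 = empty, 1 = A, 2 = B
t, t_end, exited_B = 0.0, 5000.0, 0

while t < t_end:
    events = []                                        # (rate, action)
    for end in (0, L - 1):
        if pore[end] == 0:
            events.append((w, ("in", end)))            # adsorption of A
        else:
            events.append((w, ("out", end)))           # desorption
    for i in range(L):
        if pore[i] == 1:
            events.append((k, ("react", i)))           # A -> B
        if pore[i] != 0:
            for j in (i - 1, i + 1):                   # single-file hops
                if 0 <= j < L and pore[j] == 0:
                    events.append((h, ("hop", i, j)))
    rates = np.array([e[0] for e in events])
    t += rng.exponential(1 / rates.sum())              # Gillespie time step
    act = events[rng.choice(len(events), p=rates / rates.sum())][1]
    if act[0] == "in":
        pore[act[1]] = 1
    elif act[0] == "out":
        exited_B += pore[act[1]] == 2
        pore[act[1]] = 0
    elif act[0] == "react":
        pore[act[1]] = 2
    else:
        _, i, j = act
        pore[j], pore[i] = pore[i], 0

print("B molecules released:", exited_B, "| pore occupancy:", (pore > 0).mean())
```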

  14. Family of columns isospectral to gravity-loaded columns with tip force: A discrete approach

    Ramachandran, Nirmal; Ganguli, Ranjan

    2018-06-01

    A discrete model is introduced to analyze the transverse vibration of straight, clamped-free (CF) columns of variable cross-sectional geometry under the influence of gravity and a constant axial force at the tip. The discrete model is used to determine critical combinations of the loading parameters, a gravity parameter and a tip force parameter, that cause the onset of dynamic instability in the CF column. A methodology based on matrix factorization is described to transform the discrete model into a family of models corresponding to weightless and unloaded clamped-free (WUCF) columns, each with a transverse vibration spectrum isospectral to the original model. Characteristics of models in this isospectral family depend on three transformation parameters. A procedure is discussed to convert the isospectral discrete model description into a geometric description of realistic columns, i.e. from the discrete model we construct isospectral WUCF columns with rectangular cross-sections varying in width and depth. As part of numerical studies demonstrating the efficacy of the presented techniques, frequency parameters of a uniform column and three types of tapered CF columns under different combinations of loading parameters are obtained from the discrete model. Critical combinations of these parameters for a typical tapered column are derived. These results match published results. Example CF columns under arbitrarily chosen combinations of loading parameters are considered, and for each combination isospectral WUCF columns are constructed. The role of the transformation parameters in determining the characteristics of the isospectral columns is discussed and optimum values are deduced. Natural frequencies of these WUCF columns computed using the Finite Element Method (FEM) match well with those of the given gravity-loaded CF column with tip force, hence confirming isospectrality.
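
    The matrix-factorization idea behind isospectral families can be demonstrated in a few lines: factor a positive-definite stiffness-like matrix and reverse the factors, which produces a similar (hence isospectral) matrix. This is a toy analogue, not the authors' specific transformation.

```python
import numpy as np

# For A = L @ L.T (Cholesky), the reversed product L.T @ L equals
# L^{-1} A L, so it is similar to A and shares its full spectrum.
n = 8
main = 2.0 + 0.3 * np.arange(n)          # varying "cross-section" stiffness
A = np.diag(main) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

L = np.linalg.cholesky(A)
A_iso = L.T @ L                          # isospectral transform of A

ev_a = np.sort(np.linalg.eigvalsh(A))
ev_b = np.sort(np.linalg.eigvalsh(A_iso))
print("max eigenvalue difference:", np.abs(ev_a - ev_b).max())
```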

  15. Downward continuation of airborne gravity data by means of the change of boundary approach

    Mansi, A. H.; Capponi, M.; Sampietro, D.

    2018-03-01

    Within the modelling of gravity data, a common practice is the upward/downward continuation of the signal, i.e. the process of continuing the gravitational signal in the vertical direction away from or closer to the sources, respectively. The gravity field, being a potential field, satisfies Laplace's equation outside the masses, which means that this analytical continuation can be performed unambiguously only in a source-free domain. The analytical continuation problem has been solved both in the space and spectral domains by exploiting different algorithms. As is well known, the downward continuation operator, differently from the upward one, is unstable, due to its spectral characteristics, which are similar to those of a high-pass filter, and several regularization methods have been proposed to stabilize it. In this work, an iterative procedure to downward/upward continue gravity field observations acquired at different altitudes is proposed. This methodology is based on the change of boundary principle and has been expressly designed for aerogravimetric observations for geophysical exploration purposes. Within this field of application several simplifications can usually be applied, basically due to the specific characteristics of airborne surveys, which are usually flown at almost constant altitude as close as possible to the terrain. For instance, these characteristics, as shown in the present work, allow one to perform the downward continuation without the need of any regularization. The performance of the proposed methodology has been evaluated by means of a numerical test on real data acquired in the South of Australia. The test shows that it is possible to move the aerogravimetric data, acquired along tracks with a maximum height difference of about 250 m, with accuracies of the order of 10^{-3} mGal.
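
    The instability the abstract refers to is easiest to see in the spectral domain, where upward continuation damps each wavenumber by exp(-|k| dz) and downward continuation amplifies it by exp(+|k| dz). A minimal 1-D sketch with a synthetic anomaly:

```python
import numpy as np

n, dx, dz = 512, 100.0, 250.0             # samples, spacing (m), height (m)
x = np.arange(n) * dx
g = np.exp(-((x - 25_000) / 3_000) ** 2)  # synthetic anomaly at flight level

k = np.abs(2 * np.pi * np.fft.fftfreq(n, dx))
up = np.fft.ifft(np.fft.fft(g) * np.exp(-k * dz)).real     # stable
down = np.fft.ifft(np.fft.fft(up) * np.exp(+k * dz)).real  # inverse, unstable
# on noisy data the exp(+k dz) factor blows up short wavelengths, which is
# why downward continuation normally needs filtering or iteration

print("round-trip max error:", np.abs(down - g).max())
```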

  16. A quantum group approach to c_L > 1 Liouville gravity

    Suzuki, Takashi.

    1995-03-01

    A candidate of c_L > 1 Liouville gravity is studied via infinite dimensional representations of U_q(sl(2,C)) with q at a root of unity. We show that vertex operators in this Liouville theory are factorized into classical vertex operators and those which are constructed from finite dimensional representations of U_q(sl(2,C)). Expressions of correlation functions and transition amplitudes are presented. We discuss our results and find an intimate relation between our quantization of the Liouville theory and the geometric quantization of the moduli space of Riemann surfaces. An interpretation of quantum space-time is also given within this formulation. (author)

  17. An elementary introduction to the Gauge theory approach to gravity. 23

    Mukunda, N.

    1989-01-01

    Can all the forces be unified by a gauge group? Can we get a clue by studying gravity itself, which is also a gauge theory, obtained by gauging the Poincaré group? The main problems have been in understanding the role of the invariants of the Lie algebra of the group when one has general covariance. One is led to theories more general than general relativity in that, in addition to curvature, one also has torsion. These and other aspects of gravitation as a gauge theory are treated. (author). 11 refs.; 1 fig

  18. Topological charged black holes in massive gravity's rainbow and their thermodynamical analysis through various approaches

    Hendi, S.H.; Eslam Panah, B.; Panahiyan, S.

    2017-01-01

    Violation of Lorentz invariance in high-energy quantum gravity motivates one to consider an energy-dependent spacetime with a massive deformation of standard general relativity. In this paper, we take into account an energy-dependent metric in the context of a massive gravity model to obtain exact solutions. We investigate the geometry of the black hole solutions and also calculate the conserved and thermodynamic quantities, which are fully reproduced by the analysis performed with the standard techniques. After examining the validity of the first law of thermodynamics, we conduct a study regarding the effects of different parameters on the thermal stability of the solutions. In addition, we employ the relation between the cosmological constant and thermodynamical pressure to study the possibility of phase transitions. Interestingly, we show that for the specific configuration considered in this paper, van der Waals like behavior is observed for different topologies. In other words, for flat and hyperbolic horizons, similar to the spherical horizon, a second-order phase transition and van der Waals like behavior are observed. Furthermore, we use a geometrical method to construct the phase space and study phase transitions and bound points for these black holes. Finally, we obtain critical values in the extended phase space through the use of a new method.

  19. A simplified approach for the simulation of water-in-oil emulsions in gravity separators

    Lakehal, D.; Narayanan, C. [ASCOMP GmbH, Zurich (Switzerland); Vilagines, R.; Akhras, A.R. [Saudi Aramco, Dhahran (Saudi Arabia). Research and Development Center

    2009-07-01

    A new method of simulating 3-phase flow separation processes in a crude oil product was presented. The aim of the study was to increase the liquid capacity of the vessels and to develop methods of testing variable flow entry procedures. The simulated system was based on gravity separation. Oil well streams were injected into large tanks where gas, oil and water were separated under the action of inertia and gravity. An interface tracking technique was combined with an Euler-Euler model developed as part of a computational fluid dynamics (CFD) program. Emulsion physics were modelled by interface tracking between the gas and the oil-in-water liquid mixture. Additional scalar transport equations were solved in order to account for the diffusive process between the oil and water. Various settling velocity models were used to consider the settling of the dispersed water phase in the oil. Changes in viscosity and non-Newtonian emulsion behaviour were also considered. The study showed that the interface tracking technique accurately predicted flow when combined with an emulsion model designed to account for the settling of water in the oil phase. Further research is now being conducted to validate the computational results against in situ measurements. 13 refs., 1 tab., 8 figs.

  1. Anomalies and gravity

    Mielke, Eckehard W.

    2006-01-01

    Anomalies in Yang-Mills type gauge theories of gravity are reviewed. Particular attention is paid to the relation between the Dirac spin, the axial current j_5 and the non-covariant gauge spin C. Using diagrammatic techniques, we show that only generalizations of the U(1)-Pontrjagin four-form F ∧ F = dC arise in the chiral anomaly, even when coupled to gravity. Implications for Ashtekar's canonical approach to quantum gravity are discussed.

  2. The optimal approach of detecting stochastic gravitational wave from string cosmology using multiple detectors

    Fan Xilong; Zhu Zonghong

    2008-01-01

    String cosmology models predict a relic background of gravitational waves produced during dilaton-driven inflation. Its spectrum is most likely to be detected by ground-based gravitational wave laser interferometers (IFOs), like LIGO, Virgo and GEO, as the energy density grows rapidly with frequency. We show the ranges of the parameters of the underlying string cosmology model that can be probed using two approaches, associated with a 5% false alarm rate and a 95% detection rate. The results show that the approach of combining multiple pairs of IFOs is better than directly combining the outputs of multiple IFOs for LIGOH, LIGOL, Virgo and GEO.
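
    The standard detection statistic for a stochastic background with a detector pair is the cross-correlation of the two outputs, in which a common signal survives while independent instrument noise averages away. A white-noise toy version (no overlap reduction function, hypothetical amplitude):

```python
import numpy as np

rng = np.random.default_rng(8)

n, h = 2**20, 0.1                       # samples, signal-to-noise amplitude
common = rng.normal(size=n)             # stochastic GW strain (white, toy)
s1 = h * common + rng.normal(size=n)    # detector 1 output
s2 = h * common + rng.normal(size=n)    # detector 2 output

y = np.mean(s1 * s2)                    # cross-correlation statistic
sigma = np.std(s1 * s2) / np.sqrt(n)    # its standard error
print(f"Y = {y:.5f}, expected h^2 = {h**2:.5f}, detection SNR = {y/sigma:.1f}")
```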

  3. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
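
    The effect of a discrete detection limit xi can be reproduced by simulating alternating exponential sojourns and merging any interval shorter than xi into its neighbours. The sketch below filters brief closures only, which is enough to show how strongly the observed (xi-)open-time distribution is biased; rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

rate_open, rate_closed, xi = 1.0, 5.0, 0.2    # 1/mean sojourns, dead time
n = 200_000
opens = rng.exponential(1 / rate_open, n)     # true open sojourns
closeds = rng.exponential(1 / rate_closed, n) # true closed sojourns

observed, cur = [], opens[0]
for o, c in zip(opens[1:], closeds):
    if c < xi:            # closure too brief to resolve: extend the open time
        cur += c + o
    else:                 # resolvable closure ends the observed open period
        observed.append(cur)
        cur = o
observed = np.array(observed)

print(f"true mean open time      {opens.mean():.3f}")
print(f"observed (xi-)open time  {observed.mean():.3f}")
```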

  4. An Artificial Gravity Spacecraft Approach which Minimizes Mass, Fuel and Orbital Assembly Requirements

    Bell, L.

    2002-01-01

    The Sasakawa International Center for Space Architecture (SICSA) is undertaking a multi-year research and design study that is exploring near and long-term commercial space development opportunities. Space tourism in low-Earth orbit (LEO), and possibly beyond LEO, comprises one business element of this plan. Supported by a financial gift from the owner of a national U.S. hotel chain, SICSA has examined opportunities, requirements and facility concepts to accommodate up to 100 private citizens and crewmembers in LEO, as well as on lunar/planetary rendezvous voyages. SICSA's artificial gravity Science Excursion Vehicle ("AGSEV") design which is featured in this presentation was conceived as an option for consideration to enable round-trip travel to Moon and Mars orbits and back from LEO. During the course of its development, the AGSEV would also serve other important purposes. An early assembly stage would provide an orbital science and technology testbed for artificial gravity demonstration experiments. An ultimate mature stage application would carry crews of up to 12 people on Mars rendezvous missions, consuming approximately the same propellant mass required for lunar excursions. Since artificial gravity spacecraft that rotate to create centripetal accelerations must have long spin radii to limit adverse effects of Coriolis forces upon inhabitants, SICSA's AGSEV design embodies a unique tethered body concept which is highly efficient in terms of structural mass and on-orbit assembly requirements. The design also incorporates "inflatable" as well as "hard" habitat modules to optimize internal volume/mass relationships. Other important considerations and features include: maximizing safety through element and system redundancy; means to avoid destabilizing mass imbalances throughout all construction and operational stages; optimizing ease of on-orbit servicing between missions; and maximizing comfort and performance through careful attention to human needs. A
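
    The spin-radius constraint mentioned above follows from r = g / ω²: holding the spin rate to a few rpm for comfort forces the radius, and hence the tether length, to hundreds of metres. A quick check with illustrative spin rates:

```python
import math

# Radius needed for 1 g of centripetal acceleration at a given spin rate.
# Comfort studies often cap spin at a few rpm to limit Coriolis effects,
# which is what drives the long tether in a tethered-body design.
g = 9.81
for rpm in (1, 2, 4, 6):
    omega = rpm * 2 * math.pi / 60          # rad/s
    print(f"{rpm} rpm -> radius {g / omega**2:7.1f} m for 1 g")
```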

  5. Modeling and Analysis of Inter-Vehicle Communication: A Stochastic Geometry Approach

    Farooq, Muhammad Junaid

    2015-05-01

    Vehicular communication is the enabling technology for the development of intelligent transportation systems (ITS), which aim to improve the efficiency and safety of transportation. It can be used for a variety of useful applications such as adaptive traffic control, coordinated braking, emergency messaging, peer-to-peer networking for infotainment services, and automatic toll collection. Accurate yet simple models for vehicular networks are required in order to understand and optimize their operation. For reliable communication between vehicles, spectrum access is coordinated via the carrier sense multiple access (CSMA) protocol. Existing models either use a simplified network abstraction and access control scheme for analysis or depend on simulation studies. It is therefore important to develop an analytical model for CSMA-coordinated communication between vehicles. In the first part of the thesis, stochastic geometry is exploited to develop a modeling framework for CSMA-coordinated inter-vehicle communication (IVC) in a multi-lane highway scenario. The performance of IVC is studied in multi-lane highways taking into account the inter-lane separations and the number of traffic lanes, and it is shown that for wide multi-lane highways the line abstraction model that is widely used in the literature loses accuracy and hence the analysis is not reliable. Since the analysis of CSMA in the vehicular setting is intractable, an aggressive interference approximation and a conservative interference approximation are proposed for the probability of transmission success. These approximations are tight at low and high traffic densities, respectively. In the subsequent part of the thesis, the developed model is extended to multi-hop IVC, because several vehicular applications require going beyond local communication to efficiently disseminate information across the roads via multiple hops. Two well-known greedy packet forwarding schemes are

  7. Hydrology signal from GRACE gravity data in the Nelson River basin, Canada: a comparison of two approaches

    Li, Tanghua; Wu, Patrick; Wang, Hansheng; Jia, Lulu; Steffen, Holger

    2018-03-01

    The Gravity Recovery and Climate Experiment (GRACE) satellite mission measures the combined gravity signal of several overlapping processes. A common approach to separating the hydrological signal in previously ice-covered regions is to apply numerical models to simulate the glacial isostatic adjustment (GIA) signals related to the vanished ice load and then remove them from the observed GRACE data. However, the results of this method are strongly affected by the uncertainties of the ice and viscosity models of GIA. To avoid this, Wang et al. (Nat Geosci 6(1):38-42, 2013. https://doi.org/10.1038/NGEO1652; Geodesy Geodyn 6(4):267-273, 2015) followed the theory of Wahr et al. (Geophys Res Lett 22(8):977-980, 1995) and isolated water storage changes from GRACE in North America and Scandinavia with the help of Global Positioning System (GPS) data. Lambert et al. (Postglacial rebound and total water storage variations in the Nelson River drainage basin: a gravity GPS study, Geological Survey of Canada Open File, 7317, 2013a; Geophys Res Lett 40(23):6118-6122, https://doi.org/10.1002/2013GL057973, 2013b) did a similar study for the Nelson River basin in North America, applying GPS and absolute gravity measurements. However, the results of the two studies in the Nelson River basin differ considerably; in particular, the magnitude of the hydrology signal differs by about 35%. Through a detailed comparison and analysis of the input data, data post-processing techniques, methods and results of these two works, we find that the different GRACE data post-processing techniques may explain this difference. Also, the GRACE input has a larger effect on the hydrology signal amplitude than the GPS input in the Nelson River basin, due to the relatively small uplift signal in this region. Meanwhile, the influence of the value of α, which represents the ratio between the GIA-induced uplift rate and the GIA-induced gravity rate of change (before the correction for surface uplift), is more obvious in

  8. Spinor matter fields in SL(2,C) gauge theories of gravity: Lagrangian and Hamiltonian approaches

    Antonowicz, Marek; Szczyrba, Wiktor

    1985-06-01

    We consider the SL(2,C)-covariant Lagrangian formulation of gravitational theories with the presence of spinor matter fields. The invariance properties of such theories give rise to the conservation laws (the contracted Bianchi identities) having in the presence of matter fields a more complicated form than those known in the literature previously. A general SL(2,C) gauge theory of gravity is cast into an SL(2,C)-covariant Hamiltonian formulation. Breaking the SL(2,C) symmetry of the system to the SU(2) symmetry, by introducing a spacelike slicing of spacetime, we get an SU(2)-covariant Hamiltonian picture. The qualitative analysis of SL(2,C) gauge theories of gravity in the SU(2)-covariant formulation enables us to define the dynamical symplectic variables and the gauge variables of the theory under consideration as well as to divide the set of field equations into the dynamical equations and the constraints. In the SU(2)-covariant Hamiltonian formulation the primary constraints, which are generic for first-order matter Lagrangians (Dirac, Weyl, Fierz-Pauli), can be reduced. The effective matter symplectic variables are given by SU(2)-spinor-valued half-forms on three-dimensional slices of spacetime. The coupled Einstein-Cartan-Dirac (Weyl, Fierz-Pauli) system is analyzed from the (3+1) point of view. This analysis is complete; the field equations of the Einstein-Cartan-Dirac theory split into 18 gravitational dynamical equations, 8 dynamical Dirac equations, and 7 first-class constraints. The system has 4+8=12 independent degrees of freedom in the phase space.

  10. A bottom-up approach of stochastic demand allocation in water quality modelling

    Blokker, E.J.M.; Vreeburg, J.H.G.; Beverloo, H.; Klein Arfman, M.; Van Dijk, J.C.

    2010-01-01

    An “all pipes” hydraulic model of a drinking water distribution system was constructed with two types of demand allocations. One is constructed with the conventional top-down approach, i.e. a demand multiplier pattern from the booster station is allocated to all demand nodes with a correction factor

  11. A robust decision-making approach for p-hub median location problems based on two-stage stochastic programming and mean-variance theory : a real case study

    Ahmadi, T.; Karimi, H.; Davoudpour, H.

    2015-01-01

    The stochastic location-allocation p-hub median problems are related to long-term decisions made in risky situations. Due to the importance of this type of problems in real-world applications, the authors were motivated to propose an approach to obtain more reliable policies in stochastic

  12. Stochastic convergence of renewable energy consumption in OECD countries: a fractional integration approach.

    Solarin, Sakiru Adebola; Gil-Alana, Luis Alberiko; Al-Mulali, Usama

    2018-04-13

    In this article, we have examined the hypothesis of convergence of renewable energy consumption in 27 OECD countries. However, instead of relying on classical techniques, which are based on the dichotomy between stationarity I(0) and nonstationarity I(1), we consider a more flexible approach based on fractional integration. We employ both parametric and semiparametric techniques. Using parametric methods, evidence of convergence is found in the cases of Mexico, Switzerland and Sweden along with the USA, Portugal, the Czech Republic, South Korea and Spain, and employing semiparametric approaches, we found evidence of convergence in all these eight countries along with Australia, France, Japan, Greece, Italy and Poland. For the remaining 13 countries, even though the orders of integration of the series are smaller than one in all cases except Germany, the confidence intervals are so wide that we cannot reject the hypothesis of unit roots thus not finding support for the hypothesis of convergence.
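
    A common semiparametric estimator of the fractional integration order d is the Geweke/Porter-Hudak log-periodogram regression; whether it matches the authors' exact estimator is not stated here, so the sketch below is a generic illustration on simulated I(0) and I(1) series.

```python
import numpy as np

rng = np.random.default_rng(10)

def gph_estimate(x, frac=0.5):
    """Geweke/Porter-Hudak log-periodogram estimate of the fractional
    integration order d: regress log I(lambda_j) on log(4 sin^2(lambda_j/2))
    over the lowest m ~ n^frac Fourier frequencies; d = -slope."""
    n = len(x)
    m = int(n ** frac)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    pgram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(pgram), 1)[0]
    return -slope

# White noise should give d ~ 0; a random walk (I(1) series) d ~ 1.
noise = rng.normal(size=2048)
walk = np.cumsum(rng.normal(size=2048))
print(f"d(white noise) ≈ {gph_estimate(noise):.2f}")
print(f"d(random walk) ≈ {gph_estimate(walk):.2f}")
```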

  13. Stochastic Real-Time Optimal Control: A Pseudospectral Approach for Bearing-Only Trajectory Optimization

    2011-09-01

    measurements suitable for algorithms such as Ekelund or Spiess ranging [104], followed by one extra turn to eliminate ambiguities.

  14. Evaluating the impact of built environment characteristics on urban boundary layer dynamics using an advanced stochastic approach

    J. Song

    2016-05-01

    Full Text Available Urban land–atmosphere interactions can be captured by a numerical modeling framework with coupled land surface and atmospheric processes, while the model performance depends largely on accurate input parameters. In this study, we use an advanced stochastic approach to quantify the parameter uncertainty and model sensitivity of a coupled numerical framework for urban land–atmosphere interactions. It is found that the development of the urban boundary layer is highly sensitive to the surface characteristics of built terrains. Changes in both urban land use and geometry impose significant impacts on the overlying urban boundary layer dynamics through modifications of the bottom boundary conditions, i.e., by altering surface energy partitioning and surface aerodynamic resistance, respectively. Hydrothermal properties of conventional and green roofs have different impacts on atmospheric dynamics due to different surface energy partitioning mechanisms. Urban geometry (represented by the canyon aspect ratio), however, has a significant nonlinear impact on boundary layer structure and temperature. Besides, managing rooftop roughness provides an alternative option to change the boundary layer thermal state through modification of the vertical turbulent transport. The sensitivity analysis deepens our insight into the fundamental physics of urban land–atmosphere interactions and provides useful guidance for urban planning under the challenges of changing climate and continuous global urbanization.

  15. Link removal for the control of stochastically evolving epidemics over networks: a comparison of approaches.

    Enns, Eva A; Brandeau, Margaret L

    2015-04-21

    For many communicable diseases, knowledge of the underlying contact network through which the disease spreads is essential to determining appropriate control measures. When behavior change is the primary intervention for disease prevention, it is important to understand how to best modify network connectivity using the limited resources available to control disease spread. We describe and compare four algorithms for selecting a limited number of links to remove from a network: two "preventive" approaches (edge centrality, R0 minimization), where the decision of which links to remove is made prior to any disease outbreak and depends only on the network structure; and two "reactive" approaches (S-I edge centrality, optimal quarantining), where information about the initial disease states of the nodes is incorporated into the decision of which links to remove. We evaluate the performance of these algorithms in minimizing the total number of infections that occur over the course of an acute outbreak of disease. We consider different network structures, including both static and dynamic Erdös-Rényi random networks with varying levels of connectivity, a real-world network of residential hotels connected through injection drug use, and a network exhibiting community structure. We show that reactive approaches outperform preventive approaches in averting infections. Among reactive approaches, removing links in order of S-I edge centrality is favored when the link removal budget is small, while optimal quarantining performs best when the link removal budget is sufficiently large. The budget threshold above which optimal quarantining outperforms the S-I edge centrality algorithm is a function of both network structure (higher for unstructured Erdös-Rényi random networks compared to networks with community structure or the real-world network) and disease infectiousness (lower for highly infectious diseases). We conduct a value-of-information analysis of knowing which
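
    A minimal version of the comparison can be run with networkx: score edges, remove the top-ranked ones under a budget, and measure mean outbreak size with a discrete-time SIR simulation. Only the preventive edge-centrality strategy is sketched; all parameters are illustrative.

```python
import networkx as nx
import random

random.seed(11)

def sir_outbreak(G, beta=0.3, seeds=3):
    """Discrete-time SIR outbreak; returns the number ever infected."""
    infected = set(random.sample(list(G), seeds))
    recovered = set()
    while infected:
        new = {v for u in infected for v in G[u]
               if v not in infected and v not in recovered
               and random.random() < beta}
        recovered |= infected
        infected = new
    return len(recovered)

def remove_top_edges(G, budget, scores):
    H = G.copy()
    for e in sorted(scores, key=scores.get, reverse=True)[:budget]:
        H.remove_edge(*e)
    return H

G = nx.erdos_renyi_graph(200, 0.05, seed=11)
# "Preventive" strategy: remove high edge-betweenness links pre-outbreak.
H = remove_top_edges(G, 100, nx.edge_betweenness_centrality(G))

trials = 200
base = sum(sir_outbreak(G) for _ in range(trials)) / trials
cut = sum(sir_outbreak(H) for _ in range(trials)) / trials
print(f"mean outbreak size: {base:.1f} (intact) vs {cut:.1f} (edges removed)")
```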

  16. Gravity wave control on ESF day-to-day variability: An empirical approach

    Aswathy, R. P.; Manju, G.

    2017-06-01

    The gravity wave control on the day-to-day variability of nighttime ionization irregularity occurrence is studied using ionosonde data for the period 2002-2007 at the magnetic equatorial location of Trivandrum. Recent studies during a low solar activity period have revealed that seed perturbations must have a threshold amplitude, at a particular altitude, to trigger equatorial spread F (ESF), and that this threshold amplitude undergoes seasonal and solar cycle changes. In the present study, the altitude variation of the threshold seed perturbation is examined for the autumnal equinox of different years. Thereafter, a unique empirical model incorporating the electrodynamical effects and the gravity wave modulation is developed. Using the model, the threshold curve for the autumnal equinox season of any year may be delineated if the solar flux index (F10.7) is known. The empirical model is validated using data for high, moderate, and low solar epochs in 2001, 2004, and 1995, respectively. This model has the potential to be developed further to forecast ESF incidence, provided the base height of the ionosphere is in the altitude region where electrodynamics controls the occurrence of ESF. ESF irregularities are harmful to communication and navigation systems, and research is therefore ongoing globally to predict them. In this context, this study is crucial for evolving a methodology to predict communication and navigation outages. Plain Language Summary: The manifestation of nocturnal ionospheric irregularities at magnetic equatorial regions poses a major hazard for communication and navigation systems. It is therefore essential to arrive at prediction methodologies for these irregularities. The present study puts forth a novel empirical model which, using only the solar flux index, successfully differentiates between days with and without nocturnal ionization irregularity occurrence. The model-derived curve is obtained such that the days with and without occurrence of

  17. Optimal coupling of heat and electricity systems: A stochastic hierarchical approach

    Mitridati, Lesia Marie-Jeanne Mariane; Pinson, Pierre

    2016-01-01

    This work addresses the optimal coupling of heat and electricity systems under the increasing penetration of CHPs and wind. The objective of the optimization problem is to minimize the heat production cost, subject to constraints describing day-ahead electricity market clearing scenarios. Uncertainties concerning wind power production, electricity demand and rival participants' offers are efficiently modelled using a finite set of scenarios. The model takes advantage of existing market structures and provides a decision-making tool for heat system operators. The proposed model is implemented in a case study and results are discussed to show the benefits and applicability of this approach.
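    A minimal sketch of the expected-cost flavor of this idea, assuming a toy two-unit heat system and an illustrative scenario set for day-ahead electricity prices; it is not the paper's hierarchical market-clearing model.

```python
# Hedged sketch: scenario-based heat dispatch; all numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

heat_demand = 100.0                          # MWh of heat to supply
scenarios   = np.array([20.0, 45.0, 80.0])   # day-ahead electricity prices (EUR/MWh)
probs       = np.array([0.3, 0.5, 0.2])      # scenario probabilities

c_chp, c_boiler = 35.0, 35.0                 # marginal heat costs (EUR/MWh)
power_per_heat  = 0.8                        # CHP electricity sold per unit heat

# Expected net cost of CHP heat: production cost minus expected power revenue.
exp_chp_cost = c_chp - power_per_heat * float(probs @ scenarios)
c = [exp_chp_cost, c_boiler]                 # objective over [q_chp, q_boiler]

res = linprog(c,
              A_eq=[[1.0, 1.0]], b_eq=[heat_demand],   # meet the heat demand
              bounds=[(0, 70), (0, 100)])              # unit capacities
print("CHP heat:", res.x[0], "boiler heat:", res.x[1], "expected cost:", res.fun)
```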

  18. Parallel computations of molecular dynamics trajectories using the stochastic path approach

    Zaloj, Veaceslav; Elber, Ron

    2000-06-01

    A novel protocol to parallelize molecular dynamics trajectories is discussed and tested on a cluster of PCs running the NT operating system. The new technique does not propagate the solution in small time steps but instead performs a global optimization of a functional of the whole trajectory. The new approach is especially attractive for parallel and distributed computing, and its advantages (and disadvantages) are presented. Two numerical examples are discussed: (a) a conformational transition in a solvated dipeptide, and (b) the R→T conformational transition in solvated hemoglobin.
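    A minimal sketch of the whole-trajectory idea under simplified assumptions: a one-dimensional double-well stand-in for the molecular system, fixed endpoints, and a discretized functional minimized by numerical gradient descent; it is not Elber's SDP code. Since each interior time slice's gradient is local, the updates could be distributed across processors, which is the attraction for parallel computing.

```python
# Hedged sketch: optimize a whole discretized path instead of time-stepping.
import numpy as np

def force(x):
    return -4.0 * x * (x**2 - 1.0)        # double-well potential gradient

def action(path, dt):
    # discretized Onsager-Machlup-like cost: penalize deviation from dynamics
    resid = (path[1:] - path[:-1]) / dt - force(path[:-1])
    return 0.5 * dt * np.sum(resid**2)

def optimize_path(x0, x1, n=50, dt=0.05, iters=2000, lr=1e-3, eps=1e-6):
    path = np.linspace(x0, x1, n)         # initial guess: straight line
    for _ in range(iters):
        g = np.zeros_like(path)
        for i in range(1, n - 1):         # endpoints stay fixed
            p = path.copy(); p[i] += eps
            g[i] = (action(p, dt) - action(path, dt)) / eps
        path -= lr * g                    # all slices could update concurrently
    return path

path = optimize_path(-1.0, 1.0)
print("final cost:", action(path, 0.05))
```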

  19. Integrated approach to characterize fouling on a flat sheet membrane gravity driven submerged membrane bioreactor

    Fortunato, Luca

    2016-10-07

    Fouling in membrane bioreactors (MBR) is acknowledged to be complex and poorly understood. An integrated characterization methodology was employed in this study to understand fouling on a gravity-driven submerged MBR (GD-SMBR). It involved the use of different analytical tools, including optical coherence tomography (OCT), liquid chromatography with organic carbon detection (LC-OCD), total organic carbon (TOC), flow cytometry (FCM), adenosine triphosphate analysis (ATP) and scanning electron microscopy (SEM). The three-dimensional (3D) biomass morphology was acquired in real time through non-destructive and in situ OCT scanning of 75% of the total membrane surface directly in the tank. Results showed that the biomass layer was homogeneously distributed on the membrane surface. The amount of biomass was then selectively linked with results from the final destructive autopsy techniques. The LC-OCD analysis indicated the abundance of low molecular weight (LMW) organics in the fouling composition. Three different SEM techniques were applied to investigate the detailed fouling morphology on the membrane. © 2016 Elsevier Ltd

  20. Determinants of Romanians' Migration within the European Union: Static and Dynamic Panel Gravity Approaches

    Adriana Ana Maria Davidescu

    2017-08-01

    Full Text Available The 1st of January 2007 marked Romania's accession to the European Union (EU) and represented its 'ticket' to free access to the common market. This soon evolved into an important trigger for increased migration flows from Romania towards the more developed western member countries of the EU. Not only the opportunity of free movement of persons, which emerged with the integration, but also the existing socio-economic disparities between Romania and the more developed western countries in the EU led to unidirectional migration flows. Using both static and dynamic panel gravity models, we aim to identify the main determinants of Romanians' migration towards 10 EU member states - Czech Republic, Denmark, Italy, Netherlands, Finland, Germany, Norway, Poland, Spain, and Sweden - for the period 2007-2014. Our empirical findings support the results of other studies performed on different economies. The most important pull factor for Romanians' migration is the economic conditions in the destination countries, proxied by GDP per capita. Other important pull factors fuelling Romanians' migration are the unemployment rate, life expectancy, education spending, and population density. A key role is also played by the existing social networks in the destination countries, which are proxied in our model by the lagged migration flows.
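    A hedged sketch of a static log-linear gravity specification in this spirit, using synthetic placeholder data (the variable names mirror the pull factors discussed, not the authors' dataset) and statsmodels for estimation.

```python
# Hedged sketch: log-linear panel gravity regression on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 80  # e.g. 10 destinations x 8 years
df = pd.DataFrame({
    "log_gdp_pc":  rng.normal(10.0, 0.5, n),   # destination GDP per capita
    "unemp":       rng.uniform(3, 15, n),      # destination unemployment rate
    "log_mig_lag": rng.normal(6.0, 1.0, n),    # network-effect proxy
})
# Synthetic outcome: migration rises with income and networks, falls with unemployment.
df["log_mig"] = (0.8 * df.log_gdp_pc - 0.05 * df.unemp
                 + 0.5 * df.log_mig_lag + rng.normal(0, 0.3, n))

model = smf.ols("log_mig ~ log_gdp_pc + unemp + log_mig_lag", data=df).fit()
print(model.params)
```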

  1. PROBABILISTIC APPROACH FOR THE DETERMINATION OF CUTS PERMISSIBLE BRAKING MODES ON THE GRAVITY HUMPS

    Volodymyr BOBROVSKYI

    2016-03-01

    Full Text Available The paper presents research results on cut braking modes on gravity humps. The objective of this paper is to develop methods for assessing the braking modes of cuts under conditions of fuzziness of their rolling properties, as well as for selecting the permissible range of speeds at which cuts leave the retarder positions. As a criterion for assessing the modes of target control of cut rolling speed, it was proposed to use the average gap size on a classification track, given established norms on the probability of exceeding the permissible car collision speed and of cars stopping in the retarders. As a criterion for evaluating the modes of interval control of cut rolling speed, it was proposed to use the risk of cuts failing to separate at the switches. Using simulation modeling and mathematical statistics, the configuration of the range of permissible speeds of cuts coming out of the retarder positions has been established. The research conducted simplifies the choice of cut braking modes in systems for automatic control of cut rolling speed.
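    An illustrative Monte Carlo reading of the probabilistic criterion, with toy rolling physics: rolling resistance is treated as random, and candidate retarder exit speeds are scanned for the probability of exceeding the permissible coupling speed. All numbers are invented for the sketch.

```python
# Hedged sketch: permissible exit speed under random rolling resistance.
import numpy as np

g, dist, v_max = 9.81, 150.0, 1.4       # gravity (m/s^2), run length (m), limit (m/s)
rng = np.random.default_rng(1)

def coupling_speed(v_exit, n=20000):
    # net acceleration = gravity component of grade minus random resistance
    grade = 0.0008                       # illustrative mean effective grade
    w = rng.normal(0.0005, 0.0002, n)    # uncertain ("fuzzy") rolling resistance
    v2 = v_exit**2 + 2 * g * (grade - w) * dist
    return np.sqrt(np.clip(v2, 0.0, None))

def exceedance_prob(v_exit):
    return np.mean(coupling_speed(v_exit) > v_max)

# scan candidate exit speeds; the permissible range keeps the risk below a norm
for v in np.arange(0.8, 1.6, 0.1):
    print(f"v_exit={v:.1f} m/s  P(v_coupling > v_max) = {exceedance_prob(v):.3f}")
```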

  2. Integrated approach to characterize fouling on a flat sheet membrane gravity driven submerged membrane bioreactor.

    Fortunato, Luca; Jeong, Sanghyun; Wang, Yiran; Behzad, Ali R; Leiknes, TorOve

    2016-12-01

    Fouling in membrane bioreactors (MBR) is acknowledged to be complex and poorly understood. An integrated characterization methodology was employed in this study to understand fouling on a gravity-driven submerged MBR (GD-SMBR). It involved the use of different analytical tools, including optical coherence tomography (OCT), liquid chromatography with organic carbon detection (LC-OCD), total organic carbon (TOC), flow cytometry (FCM), adenosine triphosphate analysis (ATP) and scanning electron microscopy (SEM). The three-dimensional (3D) biomass morphology was acquired in real time through non-destructive and in situ OCT scanning of 75% of the total membrane surface directly in the tank. Results showed that the biomass layer was homogeneously distributed on the membrane surface. The amount of biomass was then selectively linked with results from the final destructive autopsy techniques. The LC-OCD analysis indicated the abundance of low molecular weight (LMW) organics in the fouling composition. Three different SEM techniques were applied to investigate the detailed fouling morphology on the membrane. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Hamiltonian approach to GR. Pt. 2. Covariant theory of quantum gravity

    Cremaschini, Claudio [Faculty of Philosophy and Science, Silesian University in Opava, Institute of Physics and Research Center for Theoretical Physics and Astrophysics, Opava (Czech Republic); Tessarotto, Massimo [University of Trieste, Department of Mathematics and Geosciences, Trieste (Italy); Faculty of Philosophy and Science, Silesian University in Opava, Institute of Physics, Opava (Czech Republic)

    2017-05-15

    A non-perturbative quantum field theory of General Relativity is presented which leads to a new realization of the theory of covariant quantum gravity (CQG-theory). The treatment is founded on the recently identified Hamiltonian structure associated with the classical space-time, i.e., the corresponding manifestly covariant Hamilton equations and the related Hamilton-Jacobi theory. The quantum Hamiltonian operator and the CQG-wave equation for the corresponding CQG-state and wave function are realized in 4-scalar form. The new quantum wave equation is shown to be equivalent to a set of quantum hydrodynamic equations which warrant the consistency with the classical GR Hamilton-Jacobi equation in the semiclassical limit. A perturbative approximation scheme is developed, which permits the adoption of the harmonic oscillator approximation for the treatment of the Hamiltonian potential. As an application of the theory, the stationary vacuum CQG-wave equation is studied, yielding a stationary equation for the CQG-state in terms of the 4-scalar invariant-energy eigenvalue associated with the corresponding approximate quantum Hamiltonian operator. The conditions for the existence of a discrete invariant-energy spectrum are pointed out. This yields a possible estimate for the graviton mass together with a new interpretation about the quantum origin of the cosmological constant. (orig.)

  4. A retrodictive stochastic simulation algorithm

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
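    A minimal sketch of the retrodictive idea on a birth-death process, assuming a uniform prior over candidate initial states: ordinary forward SSA runs are reweighted by how often they hit the observed final state, yielding a retrodicted posterior. Rates and horizons are illustrative.

```python
# Hedged sketch: retrodiction via forward Gillespie SSA plus Bayes reweighting.
import numpy as np

rng = np.random.default_rng(2)

def ssa_birth_death(n0, birth, death, t_end):
    """Ordinary (predictive) SSA for a birth-death process; returns state at t_end."""
    n, t = n0, 0.0
    while True:
        rates = np.array([birth, death * n])
        total = rates.sum()
        if total == 0.0:
            return n
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return n
        n += 1 if rng.random() < rates[0] / total else -1

observed_final, candidates, runs = 5, range(0, 11), 400
post = {}
for n0 in candidates:                      # uniform prior over initial states
    hits = sum(ssa_birth_death(n0, 1.0, 0.2, 2.0) == observed_final
               for _ in range(runs))
    post[n0] = hits / runs

Z = sum(post.values())
for n0, w in post.items():
    print(n0, round(w / Z, 3))             # retrodicted P(initial state | final state)
```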

  5. A stochastic-deterministic approach for evaluation of uncertainty in the predicted maximum fuel bundle enthalpy in a CANDU postulated LBLOCA event

    Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2014-07-01

    A stochastic-deterministic approach, based on the representation of uncertainties by subjective probabilities, is proposed for evaluating bounding values of functional failure probability and for assessing probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA, evaluating the possibility distribution function of the maximum bundle enthalpy while considering only the reactor physics part of the LBLOCA power pulse simulation. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure, driven by the computer code DAKOTA, to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions, with deterministic constraints and fixed transient bundle-wise thermal-hydraulic conditions. With this information, a bounding estimate of the functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in the evaluation of core damage frequency. (author)

  6. Stochastic approach to error estimation for image-guided robotic systems.

    Haidegger, Tamas; Győri, Sándor; Benyo, Balazs; Benyó, Zoltán

    2010-01-01

    Image-guided surgical systems and surgical robots are primarily developed to provide patient safety through increased precision and minimal invasiveness. Moreover, robotic devices should allow for refined treatments that are not possible by other means. It is crucial to determine the accuracy of a system in order to define the expected overall task execution error. A major step toward this aim is to quantitatively analyze the effect of registration and tracking, i.e., of a series of multiplications of erroneous homogeneous transformations. First, the currently used models and algorithms are introduced along with their limitations, and a new method based on probability distributions is described. The new approach has several advantages, as demonstrated in our simulations. Primarily, it determines the full 6-degree-of-freedom accuracy of the point of interest, allowing for the more accurate use of advanced application-oriented concepts, such as Virtual Fixtures. It also becomes feasible to consider different surgical scenarios with varying weighting factors.
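    A hedged sketch of the distribution-based viewpoint: registration and tracking errors are sampled as small random perturbations of 4x4 homogeneous transforms and pushed through to a point of interest, giving its full positional spread rather than a single worst-case number. The error magnitudes and transforms are placeholders.

```python
# Hedged sketch: Monte Carlo propagation of noisy homogeneous transformations.
import numpy as np

rng = np.random.default_rng(3)

def noisy_transform(T, rot_sd=0.002, trans_sd=0.5):
    """Perturb a 4x4 homogeneous transform (rotation in radians, translation in mm)."""
    ax, ay, az = rng.normal(0, rot_sd, 3)
    Rx = np.array([[1,0,0],[0,np.cos(ax),-np.sin(ax)],[0,np.sin(ax),np.cos(ax)]])
    Ry = np.array([[np.cos(ay),0,np.sin(ay)],[0,1,0],[-np.sin(ay),0,np.cos(ay)]])
    Rz = np.array([[np.cos(az),-np.sin(az),0],[np.sin(az),np.cos(az),0],[0,0,1]])
    E = np.eye(4)
    E[:3, :3] = Rz @ Ry @ Rx               # small random rotation error
    E[:3, 3] = rng.normal(0, trans_sd, 3)  # small random translation error
    return E @ T

T_reg = np.eye(4); T_reg[:3, 3] = [100.0, 0.0, 50.0]   # registration transform
T_trk = np.eye(4); T_trk[:3, 3] = [0.0, 200.0, 0.0]    # tracker-to-tool transform
p = np.array([10.0, 10.0, 10.0, 1.0])                  # point of interest (homogeneous)

samples = np.array([(noisy_transform(T_reg) @ noisy_transform(T_trk) @ p)[:3]
                    for _ in range(5000)])
print("mean position:", samples.mean(axis=0))
print("per-axis sd (mm):", samples.std(axis=0))
```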

  7. Evaluation of soil characterization technologies using a stochastic, value-of-information approach

    Kaplan, P.G.

    1993-01-01

    The US Department of Energy has initiated an integrated demonstration program to develop and compare new technologies for the characterization of uranium-contaminated soils. As part of this effort, a performance-assessment task was funded in February 1993 to evaluate the field-tested technologies. Performance assessment can be defined as the analysis that evaluates a system's, or technology's, ability to meet the criteria specified for performance. Four new technologies were field-tested at the Fernald Environmental Management Restoration Co. in Ohio. In the next section, the goals of this performance-assessment task are discussed. The following section discusses issues that must be resolved if the goals are to be successfully met. The author concludes with a discussion of the potential benefits to performance assessment of the approach taken. This paper is intended to be the first in a series of documents describing the work. Also in this proceedings are a paper on the field demonstration at the Fernald site with a description of the technologies (Tidwell et al., 1993) and a paper on the application of advanced geostatistical techniques (Rautman, 1993). The overall approach is simply to demonstrate the applicability of concepts that are well described in the literature but are not routinely applied to problems in environmental remediation, restoration, and waste management. The basic geostatistical concepts are documented in Clark (1979) and in Isaaks and Srivastava (1989). Advanced concepts and applications, along with software, are discussed in Deutsch and Journel (1992). Integration of geostatistical modeling with a decision-analytic framework is discussed in Freeze et al. (1992). Information-theoretic and probabilistic concepts are borrowed from the work of Shannon (1948), Jaynes (1957), and Harr (1987). The author sees the task as one of introducing and applying robust methodologies with demonstrated applicability in other fields to the problem at hand.

  8. Quantum Gravity Effects in Cosmology

    Gu Je-An

    2018-01-01

    Full Text Available Within the geometrodynamic approach to quantum cosmology, we study quantum gravity effects in cosmology. The Gibbons-Hawking temperature is corrected by quantum gravity due to spacetime fluctuations, and the power spectrum, as well as any probe field, will experience this effective temperature, a quantum gravity effect.

  9. New approach of financial volatility duration dynamics by stochastic finite-range interacting voter system.

    Wang, Guochao; Wang, Jun

    2017-01-01

    We investigate the fluctuation behaviors of financial volatility duration dynamics. A new concept of volatility two-component range intensity (VTRI) is developed, which combines the maximal variation range of volatility intensity with the shortest passage time of duration, and can quantify the investment risk in financial markets. To study and describe the nonlinear complex properties of VTRI, a random agent-based financial price model is developed from a finite-range interacting biased voter system. The autocorrelation behaviors and the power-law scaling behaviors of the return series and the VTRI series are investigated. The complexity of the VTRI series of the real markets and of the proposed model is then analyzed by fuzzy entropy (FuzzyEn) and Lempel-Ziv complexity, and cross-fuzzy entropy (C-FuzzyEn) is applied to study the asynchrony of pairs of VTRI series. The empirical results reveal that the proposed model reproduces complex behaviors similar to those of the actual markets, and indicate that the proposed VTRI series analysis and the financial model are meaningful and feasible to some extent.

  10. A tightly-coupled domain-decomposition approach for highly nonlinear stochastic multiphysics systems

    Taverniers, Søren; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2017-02-01

    Multiphysics simulations often involve nonlinear components that are driven by internally generated or externally imposed random fluctuations. When used with a domain-decomposition (DD) algorithm, such components have to be coupled in a way that both accurately propagates the noise between the subdomains and lends itself to a stable and cost-effective temporal integration. We develop a conservative DD approach in which tight coupling is obtained by using a Jacobian-free Newton–Krylov (JfNK) method with a generalized minimum residual iterative linear solver. This strategy is tested on a coupled nonlinear diffusion system forced by a truncated Gaussian noise at the boundary. Enforcement of path-wise continuity of the state variable and its flux, as opposed to continuity in the mean, at interfaces between subdomains enables the DD algorithm to correctly propagate boundary fluctuations throughout the computational domain. Reliance on a single Newton iteration (explicit coupling), rather than on the fully converged JfNK (implicit) coupling, may increase the solution error by an order of magnitude. Increase in communication frequency between the DD components reduces the explicit coupling's error, but makes it less efficient than the implicit coupling at comparable error levels for all noise strengths considered. Finally, the DD algorithm with the implicit JfNK coupling resolves temporally-correlated fluctuations of the boundary noise when the correlation time of the latter exceeds some multiple of an appropriately defined characteristic diffusion time.
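    For flavor, the sketch below takes one backward-Euler step of a single nonlinear diffusion equation with a noisy boundary value and solves it with SciPy's Jacobian-free Newton-Krylov solver, the class of solver named above; the grid, diffusivity law and noise amplitude are illustrative, and the paper's two-subdomain coupling is not reproduced.

```python
# Hedged sketch: one implicit step of nonlinear diffusion via JfNK.
import numpy as np
from scipy.optimize import newton_krylov

n, dx, dt = 64, 1.0 / 64, 1e-3
rng = np.random.default_rng(4)
u_old = np.ones(n)
u_old[0] += 0.1 * rng.standard_normal()      # noisy boundary forcing

def residual(u):
    # F(u) = 0 for backward Euler with state-dependent diffusivity D(u) = u
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return u - u_old - dt * u * lap          # endpoints held fixed (lap = 0 there)

u_new = newton_krylov(residual, u_old.copy(), method="lgmres", f_tol=1e-8)
print("max update:", np.abs(u_new - u_old).max())
```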

  11. Beyond the SCS curve number: A new stochastic spatial runoff approach

    Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.

    2015-12-01

    The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description in which runoff is initiated by a pure threshold, i.e., saturation excess, complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type, but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and for runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
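    For reference, the pure-threshold baseline being generalized is the classical SCS-CN event runoff relation, sketched here (standard formula; the curve number and rainfall depths are arbitrary):

```python
# Standard SCS-CN event runoff, the threshold response the paper extends.
def scs_cn_runoff(P, CN, lam=0.2):
    """Event runoff depth Q (inches) from rainfall P (inches) and curve number CN."""
    S = 1000.0 / CN - 10.0        # potential maximum retention
    Ia = lam * S                  # initial abstraction (the threshold)
    if P <= Ia:
        return 0.0                # below threshold: no runoff
    return (P - Ia) ** 2 / (P - Ia + S)

for P in (0.5, 1.0, 2.0, 4.0):
    print(P, round(scs_cn_runoff(P, CN=75), 3))
```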

  12. Stochastic processes

    Parzen, Emanuel

    1962-01-01

    Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building.Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine

  13. Adaptive filtering of GOCE-derived gravity gradients of the disturbing potential in the context of the space-wise approach

    Piretzidis, Dimitrios; Sideris, Michael G.

    2017-09-01

    connected to and employed in the first computational steps of the space-wise approach, where a time-wise Wiener filter is applied at the first stage of GOCE gravity gradient filtering. The results of this work can be extended to using other adaptive filtering algorithms, such as the recursive least-squares and recursive least-squares lattice filters.
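    As a toy stand-in for the adaptive-filtering family mentioned (LMS being the simplest relative of the RLS and RLS-lattice filters), the sketch below runs a one-step LMS predictor that passes the correlated, slowly varying component of a noisy series and suppresses the white noise; the signal and step size are illustrative, not GOCE processing code.

```python
# Hedged sketch: LMS one-step predictor as an adaptive line enhancer.
import numpy as np

def lms_predict(x, taps=32, mu=0.005):
    """Predict x[n] from its past; passes correlated signal, suppresses white noise."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(taps, len(x)):
        past = x[n - taps:n][::-1]
        y[n] = w @ past
        w += mu * (x[n] - y[n]) * past     # stochastic-gradient weight update
    return y

rng = np.random.default_rng(5)
t = np.arange(4000)
signal = np.sin(2 * np.pi * t / 400.0)     # slowly varying "gradient" component
noisy = signal + 0.5 * rng.standard_normal(t.size)
est = lms_predict(noisy)
print("rms error before:", np.std(noisy[500:] - signal[500:]),
      "after:", np.std(est[500:] - signal[500:]))
```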

  14. Discrete quantum gravity

    Williams, Ruth M

    2006-01-01

    A review is given of a number of approaches to discrete quantum gravity, with a restriction to those likely to be relevant in four dimensions. This paper is dedicated to Rafael Sorkin on the occasion of his sixtieth birthday

  15. Enhancing Productivity and Resource Conservation by Eliminating Inefficiency of Thai Rice Farmers: A Zero Inefficiency Stochastic Frontier Approach

    Jianxu Liu

    2017-05-01

    Full Text Available The study first identified fully efficient farmers and then estimated the technical efficiency of inefficient farmers and identified its determinants by applying a Zero Inefficiency Stochastic Frontier Model (ZISFM) to a sample of 300 rice farmers from central-northern Thailand. Next, the study developed scenarios of the potential production increase and resource conservation achievable if technical inefficiency were eliminated. Results revealed that 13% of the sampled farmers were fully efficient, thereby justifying the use of our approach. The estimated mean technical efficiency was 91%, implying that rice production could be increased by 9% by reallocating resources. Land and labor were the major productivity drivers. Education significantly improved technical efficiency. Farmers who transplanted seedlings were relatively technically efficient compared to those who practised manual and/or mechanical direct seeding methods. Elimination of technical inefficiency could increase output by 8.64% per ha, or generate 5.7-6.4 million tons of additional rice output for Thailand each year. Similarly, elimination of technical inefficiency would potentially conserve 19.44% of person-days of labor, 11.95% of land area, 11.46% of material inputs and 8.67% of mechanical power services for every ton of rice produced. This translates into conservation of 2.9-3.0 million person-days of labor, 3.7-4.5 thousand km2 of land, 10.0-14.5 billion baht of material input and 7.6-12.8 billion baht of mechanical power costs to produce the current level of rice output in Thailand each year. Policy implications include investment in educating farmers and in improving technical knowledge of seeding technology, to boost rice production and conserve scarce resources in Thailand.

  16. Lattice gravity and strings

    Jevicki, A.; Ninomiya, M.

    1985-01-01

    We are concerned with applications of the simplicial discretization method (Regge calculus) to two-dimensional quantum gravity, with emphasis on the physically relevant string model. Beginning with the discretization of gravity and matter, we exhibit a discrete version of the conformal trace anomaly. Proceeding to the string problem, we show how the direct approach of (finite difference) discretization based on the Nambu action corresponds to an unsatisfactory treatment of the gravitational degrees of freedom. Based on the Regge approach, we then propose a discretization corresponding to the Polyakov string. In this context we are led to a natural geometric version of the associated Liouville model and two-dimensional gravity. (orig.)

  17. Massive Gravity

    de Rham, Claudia

    2014-01-01

    We review recent progress in massive gravity. We start by showing how different theories of massive gravity emerge from a higher-dimensional theory of general relativity, leading to the Dvali–Gabadadze–Porrati model (DGP), cascading gravity, and ghost-free massive gravity. We then explore their theoretical and phenomenological consistency, proving the absence of Boulware–Deser ghosts and reviewing the Vainshtein mechanism and the cosmological solutions in these models. Finally, we present alt...

  18. Modelling and simulating decision processes of linked lives: An approach based on concurrent processes and stochastic race.

    Warnke, Tom; Reinhardt, Oliver; Klabunde, Anna; Willekens, Frans; Uhrmacher, Adelinde M

    2017-10-01

    Individuals' decision processes play a central role in understanding modern migration phenomena and other demographic processes. Their integration into agent-based computational demography depends largely on suitable support by a modelling language. We are developing the Modelling Language for Linked Lives (ML3) to describe the diverse decision processes of linked lives succinctly in continuous time. The context of individuals is modelled by networks the individual is part of, such as family ties and other social networks. Central concepts, such as behaviour conditional on agent attributes, age-dependent behaviour, and stochastic waiting times, are tightly integrated in the language. Thereby, alternative decisions are modelled by concurrent processes that compete by stochastic race. Using a migration model, we demonstrate how this allows for compact description of complex decisions, here based on the Theory of Planned Behaviour. We describe the challenges for the simulation algorithm posed by stochastic race between multiple concurrent complex decisions.
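    A minimal sketch of the stochastic-race semantics, assuming exponential waiting times: each eligible decision draws a firing time and the earliest one wins, as in a continuous-time Markov chain. The decision names and rates are invented for illustration.

```python
# Hedged sketch: concurrent decision processes competing by stochastic race.
import random

rng = random.Random(6)

def race(options):
    """options: {name: rate}; returns (winner, time) of the stochastic race."""
    draws = {name: rng.expovariate(rate) for name, rate in options.items()}
    winner = min(draws, key=draws.get)     # earliest firing pre-empts the rest
    return winner, draws[winner]

counts = {"migrate": 0, "stay_employed": 0, "start_family": 0}
for _ in range(10000):
    winner, _ = race({"migrate": 0.2, "stay_employed": 1.0, "start_family": 0.3})
    counts[winner] += 1
print(counts)   # proportions approach rate / total_rate, as in a CTMC
```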

  19. Stochastic calculations for radiation risk assessment: a Monte-Carlo approach to the simulation of radiocesium transport in the pasture-cow-milk food chain

    Mathies, M; Eisfeld, K; Paretzke, H; Wirth, E [Gesellschaft fuer Strahlen- und Umweltforschung m.b.H. Muenchen, Neuherberg (Germany, F.R.). Inst. fuer Strahlenschutz

    1981-05-01

    The effects of introducing probability distributions of the parameters in radionuclide transport models are investigated. Results from a Monte-Carlo simulation are presented for the transport of ^137Cs via the pasture-cow-milk pathway, taking into account the uncertainties and naturally occurring fluctuations in the rate constants. The results of the stochastic model calculations characterize the activity concentrations at a given time t and provide a great deal more information for the analysis of the environmental transport of radionuclides than deterministic calculations, in which the variation of parameters is not taken into consideration. Moreover, the stochastic model permits an estimate of the variation of the physico-chemical behaviour of radionuclides in the environment in a more realistic way than using only the highest transfer coefficients in deterministic approaches, which can lead to unrealistic overestimates of the probability with which high activity levels will be encountered.
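    A hedged sketch of the Monte-Carlo idea on a toy linear pasture-cow-milk chain: the transfer equations are re-solved many times with rate constants drawn from distributions rather than fixed, and the spread of outcomes is summarized by percentiles. The compartments, rates and milk-activity proxy are illustrative, not the paper's parameter set.

```python
# Hedged sketch: Monte Carlo over rate constants in a linear transfer chain.
import numpy as np
from scipy.integrate import odeint

rng = np.random.default_rng(7)

def chain(y, t, k_pm, k_cm, lam):
    pasture, cow = y
    d_pasture = -(k_pm + lam) * pasture           # loss to cow intake + decay
    d_cow = k_pm * pasture - (k_cm + lam) * cow   # transfer in, clearance out
    return [d_pasture, d_cow]

t = np.linspace(0.0, 60.0, 121)                   # days after deposition
milk_peak = []
for _ in range(2000):
    k_pm = rng.lognormal(np.log(0.05), 0.4)       # pasture-to-cow rate (1/d), random
    k_cm = rng.lognormal(np.log(0.3), 0.4)        # cow-to-milk clearance (1/d), random
    sol = odeint(chain, [1.0, 0.0], t, args=(k_pm, k_cm, 0.0008))
    milk_peak.append((k_cm * sol[:, 1]).max())    # illustrative milk-activity proxy

print("median peak:", np.median(milk_peak),
      "95th percentile:", np.percentile(milk_peak, 95))
```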

  20. Polarimetric and angular light-scattering from dense media: Comparison of a vectorial radiative transfer model with analytical, stochastic and experimental approaches

    Riviere, Nicolas; Ceolato, Romain; Hespel, Laurent

    2013-01-01

    Our work presents computations, via a vectorial radiative transfer model, of the polarimetric and angular light scattered by a stratified dense medium of small and intermediate optical thickness. We report the validation of this model against analytical results and different computational methods such as stochastic algorithms. Moreover, we check the model against experimental data from a specific scatterometer developed at Onera. The advantages and disadvantages of a radiative approach are discussed. This paper represents a step toward the characterization of particles in dense media involving multiple scattering. -- Highlights: • A vectorial radiative transfer model to simulate the light scattered by stratified layers is developed. • The vectorial radiative transfer equation is solved using an adding-doubling technique. • The results are compared to analytical and stochastic data. • Validation with experimental data from a scatterometer developed at Onera is presented

  1. Statistical mechanics of stochastic neural networks: Relationship between the self-consistent signal-to-noise analysis, Thouless-Anderson-Palmer equation, and replica symmetric calculation approaches

    Shiino, Masatoshi; Yamana, Michiko

    2004-01-01

    We study the statistical mechanical aspects of stochastic analog neural network models for associative memory with correlation type learning. We take three approaches to derive the set of the order parameter equations for investigating statistical properties of retrieval states: the self-consistent signal-to-noise analysis (SCSNA), the Thouless-Anderson-Palmer (TAP) equation, and the replica symmetric calculation. On the basis of the cavity method the SCSNA can be generalized to deal with stochastic networks. We establish the close connection between the TAP equation and the SCSNA to elucidate the relationship between the Onsager reaction term of the TAP equation and the output proportional term of the SCSNA that appear in the expressions for the local fields

  2. SU-F-T-182: A Stochastic Approach to Daily QA Tolerances On Spot Properties for Proton Pencil Beam Scanning

    St James, S; Bloch, C; Saini, J

    2016-01-01

    Purpose: Proton pencil beam scanning is used clinically across the United States. There are no current guidelines on tolerances for daily QA specific to pencil beam scanning, in particular for individual spot properties (spot width). Using a stochastic method to determine tolerances has the potential to optimize tolerances on individual spots and decrease the number of false-positive failures in daily QA. Individual and global spot tolerances were evaluated. Methods: As part of daily QA for proton pencil beam scanning, a field of 16 spots (corresponding to 8 energies) is measured using an array of ion chambers (Matrixx, IBA). Each individual spot is fit to two Gaussian functions (x, y). The spot widths (σ) in x and y are recorded (32 parameters). Results from the daily QA were retrospectively analyzed for 100 days of data. The deviations of the spot widths were histogrammed and fit to a Gaussian function. The stochastic spot tolerance was taken to be the mean ± 3σ. Using these results, tolerances were developed and tested against known deviations in spot width. Results: The individual spot tolerances derived with the stochastic method decreased in 30/32 instances. Using the previous tolerances (± 20% of width), the daily QA would have detected the deviation on 0/20 days. Using a tolerance of any 6 spots failing the stochastic tolerance, the deviation would have been detected on 18/20 days. Conclusion: Using a stochastic method, we have been able to tighten the daily tolerances on spot widths for 30/32 of the spot widths measured. The stochastic tolerances can lead to detection of deviations that previously would have been picked up only at monthly QA and missed by daily QA. This method could easily be extended to the evaluation of other QA parameters in proton spot scanning.
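    A minimal sketch of the recipe described above, with simulated history standing in for the 100 days of QA data: per-spot tolerances are set at mean ± 3σ of historical width deviations, and a field is flagged when at least six spots fall outside them.

```python
# Hedged sketch: stochastic daily-QA tolerances from historical spot widths.
import numpy as np

rng = np.random.default_rng(8)
history = rng.normal(0.0, 0.03, (100, 32))    # 100 days x 32 width deviations (simulated)

mu = history.mean(axis=0)
sigma = history.std(axis=0, ddof=1)
lo, hi = mu - 3 * sigma, mu + 3 * sigma       # per-spot stochastic tolerance

def daily_qa(deviations, max_failures=6):
    fails = np.sum((deviations < lo) | (deviations > hi))
    return ("PASS", int(fails)) if fails < max_failures else ("FAIL", int(fails))

today = rng.normal(0.0, 0.03, 32)
today[:8] += 0.12                              # inject a broad-spot deviation
print(daily_qa(today))
```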

  3. A stochastic frontier approach to study the relationship between gastrointestinal nematode infections and technical efficiency of dairy farms.

    van der Voort, Mariska; Van Meensel, Jef; Lauwers, Ludwig; Vercruysse, Jozef; Van Huylenbroeck, Guido; Charlier, Johannes

    2014-01-01

    The impact of gastrointestinal (GI) nematode infections in dairy farming has traditionally been assessed using partial productivity indicators. Such approaches, however, ignore the impact of infection on the performance of the whole farm. In this study, efficiency analysis was used to study the association of the GI nematode Ostertagia ostertagi with the technical efficiency of dairy farms. Five years of accountancy data were linked to GI nematode infection data obtained from a longitudinal parasitic monitoring campaign. The level of exposure to GI nematodes was based on bulk-tank milk ELISA tests, which measure the antibodies to O. ostertagi, and was expressed as an optical density ratio (ODR). Two unbalanced data panels were created for the period 2006 to 2010. The first panel contained 198 observations from the Belgian Farm Accountancy Data Network (Brussels, Belgium) and the second contained 622 observations from the Boerenbond Flemish farmers' union (Leuven, Belgium) accountancy system (Tiber Farm Accounting System). We used the stochastic frontier analysis approach and defined inefficiency effect models specified with the Cobb-Douglas and transcendental logarithmic (Translog) functional forms. To assess the efficiency scores, milk production was considered the main output variable. Six input variables were used: concentrates, roughage, pasture, number of dairy cows, animal health costs, and labor. The ODR of each individual farm served as an explanatory variable of inefficiency. An increase in the level of exposure to GI nematodes was associated with a decrease in technical efficiency. Exposure to GI nematodes constrains the productivity of pasture, health, and labor but does not cause inefficiency in the use of concentrates, roughage, and dairy cows. Lowering the level of infection over the interquartile range (0.271 ODR) was associated with an average milk production increase of 27, 19, and 9 L/cow per year for Farm Accountancy Data Network farms and 63, 49, and

  4. STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS

    eobe

    STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS ... abandoned bridges with defects only in their decks in both rural and urban locations can be effectively .... which can be seen as the detection of rare physical.

  5. An SDP Approach for Multiperiod Mixed 0–1 Linear Programming Models with Stochastic Dominance Constraints for Risk Management

    Escudero, Laureano F.; Monge, Juan Francisco; Morales, Dolores Romero

    2015-01-01

    In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty. We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multist...

  6. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher-level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher-level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.

  7. Modeling pitting corrosion damage of high-level radioactive-waste containers, with emphasis on the stochastic approach

    Henshall, G.A.; Halsey, W.G.; Clarke, W.L.; McCright, R.D.

    1993-01-01

    Recent efforts to identify methods of modeling pitting corrosion damage of high-level radioactive-waste containers are described. The need to develop models that can provide information useful to higher-level system performance assessment models is emphasized, and examples of how this could be accomplished are described. Work to date has focused upon physically-based phenomenological stochastic models of pit initiation and growth. These models may provide a way to distill information from mechanistic theories in a way that provides the necessary information to the less detailed performance assessment models. Monte Carlo implementations of the stochastic theory have resulted in simulations that are, at least qualitatively, consistent with a wide variety of experimental data. The effects of environment on pitting corrosion have been included in the model using a set of simple phenomenological equations relating the parameters of the stochastic model to key environmental variables. The results suggest that stochastic models might be useful for extrapolating accelerated test data and for predicting the effects of changes in the environment on pit initiation and growth. Preliminary ideas for integrating pitting models with performance assessment models are discussed. These ideas include improving the concept of container "failure", and the use of "rules-of-thumb" to take information from the detailed process models and provide it to the higher-level system and subsystem models. Finally, directions for future work are described, with emphasis on additional experimental work since it is an integral part of the modeling process.

  8. Variance decomposition in stochastic simulators.

    Le Maître, O P; Knio, O M; Moraes, A

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  9. Variance decomposition in stochastic simulators

    Le Maître, O. P.; Knio, O. M.; Moraes, A.

    2015-06-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  10. Variance decomposition in stochastic simulators

    Le Maître, O. P., E-mail: olm@limsi.fr [LIMSI-CNRS, UPR 3251, Orsay (France); Knio, O. M., E-mail: knio@duke.edu [Department of Mechanical Engineering and Materials Science, Duke University, Durham, North Carolina 27708 (United States); Moraes, A., E-mail: alvaro.moraesgutierrez@kaust.edu.sa [King Abdullah University of Science and Technology, Thuwal (Saudi Arabia)

    2015-06-28

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

  11. Variance decomposition in stochastic simulators

    Le Maître, O. P.; Knio, O. M.; Moraes, Alvaro

    2015-01-01

    This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity and of the corresponding sensitivities in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated in light of simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.
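    A hedged sketch of the channel-decomposition idea on a toy birth-death model: each reaction channel is driven by its own independent random stream (here via tau-leaping with separate generators), and a pick-freeze estimator gives the first-order variance contribution of one channel. This is an illustration of the decomposition, not the paper's algorithm.

```python
# Hedged sketch: pick-freeze Sobol index of one reaction channel's noise stream.
import numpy as np

def tau_leap(seed_birth, seed_death, b=1.0, d=0.1, n0=10, T=5.0, dt=0.05):
    gb = np.random.default_rng(seed_birth)     # birth-channel stream
    gd = np.random.default_rng(seed_death)     # death-channel stream
    n = n0
    for _ in range(int(T / dt)):
        n += gb.poisson(b * dt) - gd.poisson(d * n * dt)
        n = max(n, 0)
    return n

rng = np.random.default_rng(9)
N = 4000
sb  = rng.integers(0, 2**31, N)                # birth seeds
sd  = rng.integers(0, 2**31, N)                # death seeds
sd2 = rng.integers(0, 2**31, N)                # fresh death seeds (resampled)

y   = np.array([tau_leap(a, b) for a, b in zip(sb, sd)])
y_b = np.array([tau_leap(a, b) for a, b in zip(sb, sd2)])  # birth stream frozen

var = y.var()
S_birth = (np.mean(y * y_b) - y.mean() * y_b.mean()) / var  # pick-freeze estimator
print("Var:", round(float(var), 2),
      "first-order index of birth channel:", round(float(S_birth), 3))
```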

  12. Stochastic modeling and analysis of telecoms networks

    Decreusefond, Laurent

    2012-01-01

    This book addresses the stochastic modeling of telecommunication networks, introducing the main mathematical tools for that purpose, such as Markov processes, real and spatial point processes and stochastic recursions, and presenting a wide range of results on the stability, performance and comparison of systems. The authors propose a comprehensive mathematical construction of the foundations of stochastic network theory: Markov chains and continuous-time Markov chains are extensively studied using an original martingale-based approach. A complete presentation of stochastic recursions from an

  13. Stochastic resonance

    Wellens, Thomas; Shatokhin, Vyacheslav; Buchleitner, Andreas

    2004-01-01

    We are taught by conventional wisdom that the transmission and detection of signals is hindered by noise. However, during the last two decades, the paradigm of stochastic resonance (SR) proved this assertion wrong: indeed, addition of the appropriate amount of noise can boost a signal and hence facilitate its detection in a noisy environment. Due to its simplicity and robustness, SR has been implemented by mother nature on almost every scale, thus attracting interdisciplinary interest from physicists, geologists, engineers, biologists and medical doctors, who nowadays use it as an instrument for their specific purposes. At the present time, there exists a wide variety of SR models. Taking into account the progress achieved in both theoretical understanding and practical application of this phenomenon, we put the focus of the present review not on discussing in depth the technical details of different models and approaches but rather on presenting a general and clear physical picture of SR on a pedagogical level. Particular emphasis will be given to the implementation of SR in generic quantum systems, an issue that has received limited attention in earlier review papers on the topic. The major part of our presentation relies on the two-state model of SR (or on simple variants thereof), which is general enough to exhibit the main features of SR and, in fact, covers many (if not most) of the examples of SR published so far. In order to highlight the diversity of the two-state model, we shall discuss several examples from such different fields as condensed matter, nonlinear and quantum optics and biophysics. Finally, we also discuss some situations that go beyond the generic SR scenario but are still characterized by a constructive role of noise.
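    A minimal sketch of the generic SR scenario referenced above: an overdamped double-well driven by a weak periodic signal plus noise, integrated by Euler-Maruyama, with the spectral power of the two-state (sign-filtered) output evaluated at the drive frequency; it should peak at an intermediate noise strength. All parameters are illustrative.

```python
# Hedged sketch: stochastic resonance in a noisy, periodically driven double-well.
import numpy as np

def spectral_peak(D, A=0.1, f0=0.01, T=5000.0, dt=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    t = np.arange(n) * dt
    x = np.empty(n); x[0] = -1.0
    for i in range(n - 1):                       # Euler-Maruyama integration
        drift = x[i] - x[i]**3 + A * np.sin(2 * np.pi * f0 * t[i])
        x[i + 1] = x[i] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    spec = np.abs(np.fft.rfft(np.sign(x)))**2    # two-state (sign-filtered) output
    freqs = np.fft.rfftfreq(n, dt)
    return spec[np.argmin(np.abs(freqs - f0))]   # power at the drive frequency

for D in (0.05, 0.15, 0.3, 0.6):                 # scan the noise strength
    print(f"D={D:.2f}  power at drive frequency: {spectral_peak(D):.1f}")
```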

  14. A stochastic approach to the reconstruction of prehistoric human diet in the Pacific region from bone isotope signatures

    Leach, B.F.; Quinn, C.J.; Lyon, G.L.

    1996-01-01

    A theoretical constraint on dietary reconstructions using isotope analyses of human bones is that, for a given number of isotopes N, one cannot calculate the proportions of more than N+1 food types. This strict algebraic limitation can be relaxed by adopting a stochastic approach, recommended by Minagawa (1992). This strategy is investigated for prehistoric diet in the South Pacific region, focusing on seven of the main food types available to these people: C3 plants, C4 plants, land herbivores, marine shellfish, coral reef fish, non-reef fish, and marine mammals. Sixty-three underlying assumptions were identified and examined in detail. These consist of the mean values for each food type of protein, energy, δ13C, δ15N and δ34S; the offset values for each isotope from food to human bone collagen; fractionation effects from flesh to collagen in animals; and acceptable daily intake ranges for protein and energy in the human diet. Because of the complexity of the environmental regimes in the Pacific, it was also found necessary to tabulate these assumptions into two groups: one set relevant to prehistoric people whose environment is dominated by maritime conditions, such as atolls, and a second set where the land is the dominant influence. A computer simulation algorithm is developed which is based on Minagawa's method. This was tested using a 'reverse experiment' procedure: by taking a diet of known percentage weight composition, the isotope composition of human bone was forward-calculated from this diet. The algorithm was then employed on this isotope signature to see if the original food composition could be calculated in reverse. The differences between real and calculated food weight percentages for the seven foods were 4.8, 0.1, 4.5, 1.8, 1.5, 1.8 and 1.4% respectively. These were all within acceptable statistical limits. Using the full set of assumptions, it was then tested on isotope results for δ13C, δ15N and δ34S for a prehistoric Pacific
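    A hedged sketch of the stochastic strategy, with invented placeholder food values: candidate diets are drawn from a Dirichlet prior, the collagen signature is forward-calculated as a mixture plus a diet-to-collagen offset, and only candidates reproducing the measured values within a tolerance are kept (only two isotopes are used here for brevity).

```python
# Hedged sketch: rejection sampling of diet compositions from isotope signatures.
import numpy as np

rng = np.random.default_rng(10)
foods = ["C3 plants", "C4 plants", "herbivores", "shellfish",
         "reef fish", "non-reef fish", "marine mammals"]
d13C = np.array([-26.0, -12.0, -21.0, -14.0, -13.0, -17.0, -16.0])  # placeholders
d15N = np.array([3.0, 4.0, 6.0, 9.0, 10.0, 13.0, 16.0])             # placeholders
offset_C, offset_N = 5.0, 4.0            # assumed diet-to-collagen enrichment
measured = np.array([-14.5, 12.0])       # measured bone collagen (d13C, d15N)

# draw many candidate diets and keep those matching the measurement
W = rng.dirichlet(np.ones(len(foods)), size=200000)
pred = np.column_stack([W @ d13C + offset_C, W @ d15N + offset_N])
keep = np.all(np.abs(pred - measured) < 0.5, axis=1)
post = W[keep]

print("accepted samples:", len(post))
for name, m in zip(foods, post.mean(axis=0)):
    print(f"{name:15s} {100 * m:5.1f} %")
```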

  15. Stochastic dynamics and irreversibility

    Tomé, Tânia

    2015-01-01

    This textbook presents an exposition of stochastic dynamics and irreversibility. It comprises the principles of probability theory and stochastic dynamics in continuous spaces, described by Langevin and Fokker-Planck equations, and in discrete spaces, described by Markov chains and master equations. Special concern is given to the study of irreversibility, both in systems that evolve to equilibrium and in nonequilibrium stationary states. Attention is also given to the study of models displaying phase transitions and critical phenomena, both in thermodynamic equilibrium and out of equilibrium. These models include the linear Glauber model, the Glauber-Ising model, lattice models with absorbing states such as the contact process and those used in population dynamics and the spreading of epidemics, probabilistic cellular automata, reaction-diffusion processes, random sequential adsorption and dynamic percolation. A stochastic approach to chemical reactions is also presented. The textbook is intended for students of ...

  16. Quantum stochastics

    Chang, Mou-Hsiung

    2015-01-01

    The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigrou...

  17. Nonmetricity and torsion: Facts and fancies in gauge approaches to gravity

    Baekler, P.; Hehl, F.W.; Mielke, E.W.

    1986-04-01

    In general relativity, the Riemannian connection of spacetime is symmetric and metric-compatible. If we relax at first the symmetry, we arrive at a Riemann-Cartan spacetime U4 with torsion. If we relax, additionally, the metric-compatibility, then we are led to a metric-affine spacetime (L4, g) with nonmetricity and torsion. In Part 1 we turn to the (L4, g) spacetime and review an appropriate framework for corresponding gravitational model theories. They can be understood as gauge approaches to the 4-dimensional affine group GL(4,R) x R^4. They embody, in addition to the ordinary "weak" gravitational field, a "strong" piece, which is mediated by the connection and coupled to the hypermomentum current. In Part 2, by putting the nonmetricity to zero, we turn to the subcase of the Poincare gauge theory. We show in some detail how this dynamic torsion theory can look effectively Einsteinian from a macroscopic point of view. This applies also to the Einstein-Cartan theory, which is a special case of the Poincare gauge theory for "frozen" torsion. In Part 3 we present new exact solutions of the Poincare gauge theory with mass, electric charge, and NUT parameter. The properties of the new solutions are discussed. (author)

  18. Stochastic Funding of a Defined Contribution Pension Plan with Proportional Administrative Costs and Taxation under Mean-Variance Optimization Approach

    Charles I Nkeki

    2014-11-01

    Full Text Available This paper aims at studying a mean-variance portfolio selection problem with stochastic salary, proportional administrative costs and taxation in the accumulation phase of a defined contribution (DC) pension scheme. The fund process is subject to taxation while the contributions of the pension plan member (PPM) are tax exempt. It is assumed that the flow of contributions of a PPM is invested into a market characterized by a cash account and a stock. The optimal portfolio processes and expected wealth for the PPM are established. The efficient and parabolic frontiers of a PPM's portfolios in mean-variance space are obtained. It was found that the capital market line can be attained when the initial fund and the contribution rate are zero. It was also found that the optimal portfolio process involves an inter-temporal hedging term that offsets any shocks to the stochastic salary of the PPM.

  19. Enhancement of ohmic and stochastic heating by resonance effects in capacitive radio frequency discharges: a theoretical approach.

    Mussenbrock, T; Brinkmann, R P; Lieberman, M A; Lichtenberg, A J; Kawamura, E

    2008-08-22

    In low-pressure capacitive radio frequency discharges, two mechanisms of electron heating are dominant: (i) Ohmic heating due to collisions of electrons with neutrals of the background gas, and (ii) stochastic heating due to momentum transfer from the oscillating boundary sheath. In this work we show, by means of a nonlinear global model, that the self-excitation of the plasma series resonance, which arises in asymmetric capacitive discharges due to the nonlinear interaction of plasma bulk and sheath, significantly affects both Ohmic heating and stochastic heating. We observe that the series resonance effect increases the dissipation by factors of 2-5. We conclude that the nonlinear plasma dynamics should be taken into account in order to quantitatively describe electron heating in asymmetric capacitive radio frequency discharges.

  20. Extended Theories of Gravity

    Capozziello, Salvatore; De Laurentis, Mariafelicia

    2011-01-01

    Extended Theories of Gravity can be considered as a new paradigm to cure shortcomings of General Relativity at infrared and ultraviolet scales. They are an approach that, by preserving the undoubtedly positive results of Einstein’s theory, is aimed to address conceptual and experimental problems recently emerged in astrophysics, cosmology and High Energy Physics. In particular, the goal is to encompass, in a self-consistent scheme, problems like inflation, dark energy, dark matter, large scale structure and, first of all, to give at least an effective description of Quantum Gravity. We review the basic principles that any gravitational theory has to follow. The geometrical interpretation is discussed in a broad perspective in order to highlight the basic assumptions of General Relativity and its possible extensions in the general framework of gauge theories. Principles of such modifications are presented, focusing on specific classes of theories like f(R)-gravity and scalar–tensor gravity in the metric and Palatini approaches. The special role of torsion is also discussed. The conceptual features of these theories are fully explored and attention is paid to the issues of dynamical and conformal equivalence between them considering also the initial value problem. A number of viability criteria are presented considering the post-Newtonian and the post-Minkowskian limits. In particular, we discuss the problems of neutrino oscillations and gravitational waves in extended gravity. Finally, future perspectives of extended gravity are considered with possibility to go beyond a trial and error approach.

  1. Continuous stochastic approach to birth and death processes and co-operative behaviour of systems far from equilibrium

    Chechetkin, V.R.; Lutovinov, V.S.

    1986-09-11

    The continuous stochastic formalism for the description of systems with birth and death processes randomly distributed in space is developed with the use of local birth and death operators and a local generalization of the corresponding Chapman-Kolmogorov equation. The functional stochastic equation for the evolution of the probability functional is derived, and its modifications for the evolution of the characteristic functional and for the first passage time problem are given. The corresponding evolution equations for equal-time correlators are also derived. The results are then generalized to exothermic and endothermic chemical reactions. As examples of particular applications of the results, small fluctuations near a stable equilibrium state, fluctuations in mono-molecular reactions, the Lotka-Volterra model, the Schloegl reaction and the Brusselator are considered. It is shown that the two-dimensional Lotka-Volterra model may exhibit a synergetic phase transition analogous to the topological transition of the Kosterlitz-Thouless-Berezinskii type. At the end of the paper some general consequences of the stochastic evolution of birth and death processes are discussed, and arguments for their importance in the evolution of populations, in cellular dynamics and in applications to various chemical and biological problems are presented.

  2. Stochastic cooling

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker, so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large, number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti-p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an anti-p accumulator for the Tevatron.
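
    The sampling argument in the abstract can be caricatured in a few lines: if the beam is re-mixed every turn and a pickup-kicker loop subtracts a fraction g of each random sample's mean offset, the rms shrinks per turn by roughly (2g - g^2)/(2Ns). A toy sketch with invented numbers:

```python
import numpy as np

# Toy stochastic cooling: each "turn" the beam is randomly re-mixed into
# samples of Ns particles, a pickup measures each sample's mean offset, and
# the kicker applies a correction -g * mean to every particle in the sample.
# The rms offset then shrinks per turn by roughly (2g - g^2)/(2*Ns).
rng = np.random.default_rng(1)
N, Ns, g, turns = 100_000, 100, 0.5, 500
x = rng.standard_normal(N)                     # betatron offsets

for _ in range(turns):
    idx = rng.permutation(N).reshape(-1, Ns)   # mixing between turns
    x[idx] -= g * x[idx].mean(axis=1, keepdims=True)

predicted = np.exp(-0.5 * (2*g - g**2) / Ns * turns)
print(f"rms after cooling: {x.std():.3f}   (predicted ~ {predicted:.3f})")
```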

  3. A stochastic approach to long term operation planning of hydrothermal systems; Uma abordagem estocastica para o planejamento a longo prazo da operacao de sistemas hidrotermicos

    Andrade, Marinho G. [Sao Paulo Univ., Sao Carlos, SP (Brazil). Inst. de Ciencias Matematicas; Soares, Secundino; Cruz Junior, Gelson da; Vinhal, Cassio D.N. [Universidade Estadual de Campinas, SP (Brazil). Faculdade de Engenharia Eletrica

    1996-01-01

    This paper is concerned with the long term operation of hydrothermal power systems. The problem is approached by a deterministic optimization technique coupled to an inflow forecasting model in an open-loop feedback framework on a monthly basis. The paper aims to compare the solution obtained by this approach with Stochastic Dynamic Programming (SDP), which has been accepted for more than two decades as the best way to deal with inflow uncertainty in long term planning. The comparison was carried out on systems with a single plant, simulating the operation over a period of five years under historical inflow conditions and evaluating the cost of the complementary thermal generation. Results show that the proposed approach can handle uncertainty as effectively as SDP. Furthermore, it does not require modeling simplifications, such as composite reservoirs, to deal with multi hydro plant systems. 10 refs., 1 tab.

  4. Nonlocal gravity

    Mashhoon, Bahram

    2017-01-01

    Relativity theory is based on a postulate of locality, which means that the past history of the observer is not directly taken into account. This book argues that the past history should be taken into account. In this way, nonlocality---in the sense of history dependence---is introduced into relativity theory. The deep connection between inertia and gravitation suggests that gravity could be nonlocal, and in nonlocal gravity the fading gravitational memory of past events must then be taken into account. Along this line of thought, a classical nonlocal generalization of Einstein's theory of gravitation has recently been developed. A significant consequence of this theory is that the nonlocal aspect of gravity appears to simulate dark matter. According to nonlocal gravity theory, what astronomers attribute to dark matter should instead be due to the nonlocality of gravitation. Nonlocality dominates on the scale of galaxies and beyond. Memory fades with time; therefore, the nonlocal aspect of gravity becomes wea...

  5. Quantum gravity from noncommutative spacetime

    Lee, Jungjai; Yang, Hyunseok

    2014-01-01

    We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative *-algebra) of quantum gravity.

  6. Quantum gravity from noncommutative spacetime

    Lee, Jungjai [Daejin University, Pocheon (Korea, Republic of); Yang, Hyunseok [Korea Institute for Advanced Study, Seoul (Korea, Republic of)

    2014-12-15

    We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative *-algebra) of quantum gravity.

  7. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: case study.

    Bieda, Bogusław

    2014-05-15

    The purpose of the paper is to present the results of the application of a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. Copyright © 2013 Elsevier B.V. All rights reserved.
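
    A schematic of this workflow, with plain NumPy standing in for CrystalBall and invented distribution parameters rather than MSP plant data, might look as follows:

```python
import numpy as np

# Schematic Monte Carlo LCI uncertainty propagation: each inventory input is
# drawn from a normal distribution and an aggregate indicator is re-evaluated
# per trial. All values are illustrative, not MSP plant data.
rng = np.random.default_rng(42)
trials = 10_000

inputs = {                      # (mean, std) in kt/year, hypothetical
    "steel":  (5000, 250),
    "coke":   (1200,  90),
    "sinter": (4100, 300),
}
samples = {k: rng.normal(m, s, trials) for k, (m, s) in inputs.items()}

# Hypothetical emission factors (t CO2 per t of product) for the indicator.
factors = {"steel": 1.8, "coke": 0.6, "sinter": 0.2}
emissions = sum(factors[k] * samples[k] for k in inputs)

lo, med, hi = np.percentile(emissions, [2.5, 50, 97.5])
print(f"mean={emissions.mean():.0f}  sd={emissions.std():.0f}")
print(f"95% interval: [{lo:.0f}, {hi:.0f}]  median={med:.0f}")
```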

  8. Measuring Gravity in International Trade Flows

    E. Young Song

    2004-12-01

    Full Text Available The purpose of this paper is two-fold. One is to clarify the concept of gravity in international trade flows. The other is to measure the strength of gravity in international trade flows in a way that is consistent with a well-defined concept of gravity. This paper shows that the widely accepted belief that specialization is the source of gravity is not well grounded in theory. We propose to define gravity in international trade as the force that makes the market shares of an exporting country constant in all importing countries, regardless of their sizes. In a stochastic context, we should interpret it as implying that the strength of gravity increases (i) as the correlation between market shares and market sizes gets weaker and (ii) as the variance of market shares gets smaller. We estimate an empirical gravity equation thoroughly based on this definition of gravity. We find that a strong degree of gravity exists in most bilateral trade, regardless of income levels of countries, and in trade of most man...
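
    In code, the two statistics in (i) and (ii) reduce to a correlation and a variance over importing countries; a toy check with an invented trade matrix:

```python
import numpy as np

# Toy check of the two gravity statistics: for one exporter, compute its
# market share in each importing country, then (i) the correlation of shares
# with market sizes and (ii) the variance of shares. Data are invented.
rng = np.random.default_rng(7)
n_importers = 50
market_size = rng.lognormal(mean=3.0, sigma=1.0, size=n_importers)

# Exporter's sales: roughly a constant share of each market, plus noise.
true_share = 0.12
sales = market_size * true_share * rng.lognormal(0, 0.1, n_importers)
share = sales / market_size

corr = np.corrcoef(share, market_size)[0, 1]
print(f"corr(share, size) = {corr:+.3f}   var(share) = {share.var():.5f}")
# Strong gravity: correlation near zero and small share variance.
```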

  9. Theoretical approach on microscopic bases of stochastic functional self-organization: quantitative measures of the organizational degree of the environment

    Oprisan, Sorinel Adrian [Department of Psychology, University of New Orleans, New Orleans, LA (United States)]. E-mail: soprisan@uno.edu

    2001-11-30

    There has been increased theoretical and experimental research interest in autonomous mobile robots exhibiting cooperative behaviour. This paper provides consistent quantitative measures of the organizational degree of a two-dimensional environment. We prove, by way of numerical simulations, that the theoretically derived values of the feature are reliable measures of the aggregation degree. The slope of the feature's dependence on memory radius leads to an optimization criterion for stochastic functional self-organization. We also describe the intellectual heritage that has guided our research, as well as possible future developments. (author)

  10. Prospective national and regional environmental performance: Boundary estimations using a combined data envelopment - stochastic frontier analysis approach

    Vaninsky, Alexander

    2010-01-01

    The environmental performance of regions and the largest economies of the world - in effect, the efficiency of their energy sectors - is estimated for the period 2010-2030 by using forecasted values of the main economic indicators. Two essentially different methodologies, data envelopment analysis and stochastic frontier analysis, are used to obtain upper and lower boundaries of the environmental efficiency index. Greenhouse gas emission per unit of area is used as the resulting indicator, with GDP, energy consumption, and population forming the background for comparable estimations. The dynamics of the upper and lower boundaries and of their average is analyzed. Regions and national economies having a low level or negative dynamics of environmental efficiency are identified.

  11. A Solution Approach from an Analytic Model to Heuristic Algorithm for Special Case of Vehicle Routing Problem with Stochastic Demands

    2009-03-01

    Full Text Available We define a special case of the vehicle routing problem with stochastic demands (SC-VRPSD) where customer demands are normally distributed. We propose a new linear model for computing the expected length of a tour in SC-VRPSD. The proposed model is based on the integration of the “Traveling Salesman Problem” (TSP) and the Assignment Problem. For large-scale problems, we also use an Iterated Local Search (ILS) algorithm in order to reach an effective solution.
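
    As an illustration of the ILS component only (a generic 2-opt local search with a double-bridge perturbation on a Euclidean instance; the plain tour length stands in for the paper's expected-length model):

```python
import numpy as np

# Generic Iterated Local Search skeleton: 2-opt local search plus a random
# "double bridge" perturbation. The Euclidean tour length stands in for the
# paper's expected tour length under stochastic demands.
rng = np.random.default_rng(3)
pts = rng.random((60, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def length(t):
    return D[t, np.roll(t, -1)].sum()

def two_opt(t):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(t) - 2):
            for j in range(i + 1, len(t) - 1):
                if (D[t[i-1], t[j]] + D[t[i], t[j+1]]
                        < D[t[i-1], t[i]] + D[t[j], t[j+1]] - 1e-12):
                    t[i:j+1] = t[i:j+1][::-1]   # reverse the segment
                    improved = True
    return t

def perturb(t):
    a, b, c = sorted(rng.choice(np.arange(1, len(t)), 3, replace=False))
    return np.concatenate([t[:a], t[b:c], t[a:b], t[c:]])

best = two_opt(np.arange(len(pts)))
for _ in range(50):                             # ILS iterations
    cand = two_opt(perturb(best))
    if length(cand) < length(best):
        best = cand
print(f"tour length: {length(best):.3f}")
```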

  12. Stochastic Analysis 2010

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  13. Massive gravity from bimetric gravity

    Baccetti, Valentina; Martín-Moruno, Prado; Visser, Matt

    2013-01-01

    We discuss the subtle relationship between massive gravity and bimetric gravity, focusing particularly on the manner in which massive gravity may be viewed as a suitable limit of bimetric gravity. The limiting procedure is more delicate than currently appreciated. Specifically, this limiting procedure should not unnecessarily constrain the background metric, which must be externally specified by the theory of massive gravity itself. The fact that in bimetric theories one always has two sets of metric equations of motion continues to have an effect even in the massive gravity limit, leading to additional constraints besides the one set of equations of motion naively expected. Thus, since solutions of bimetric gravity in the limit of vanishing kinetic term are also solutions of massive gravity, but the contrary statement is not necessarily true, there is no complete continuity in the parameter space of the theory. In particular, we study the massive cosmological solutions which are continuous in the parameter space, showing that many interesting cosmologies belong to this class. (paper)

  14. Spectrum-efficient multi-channel design for coexisting IEEE 802.15.4 networks: A stochastic geometry approach

    Elsawy, Hesham

    2014-07-01

    For networks with random topologies (e.g., wireless ad-hoc and sensor networks) and dynamically varying channel gains, choosing the long term operating parameters that optimize the network performance metrics is very challenging. In this paper, we use stochastic geometry analysis to develop a novel framework for designing spectrum-efficient multi-channel random wireless networks based on the IEEE 802.15.4 standard. The proposed framework maximizes both spatial and time domain frequency utilization under channel gain uncertainties in order to minimize the number of frequency channels required to accommodate a certain population of coexisting IEEE 802.15.4 networks. The performance metrics are the outage probability and the self admission failure probability. We relax the single channel assumption that has traditionally been used in stochastic geometry analysis. We show that the intensity of the admitted networks does not increase linearly with the number of channels and that its rate of increase decreases with the number of channels. By using graph theory, we obtain the minimum number of channels required to accommodate a certain intensity of coexisting networks under a self admission failure probability constraint. To this end, we design a superframe structure for the coexisting IEEE 802.15.4 networks and a method for time-domain interference alignment. © 2002-2012 IEEE.

  15. Municipal solid waste management planning for Xiamen City, China: a stochastic fractional inventory-theory-based approach.

    Chen, Xiujuan; Huang, Guohe; Zhao, Shan; Cheng, Guanhui; Wu, Yinghui; Zhu, Hua

    2017-11-01

    In this study, a stochastic fractional inventory-theory-based waste management planning (SFIWP) model was developed and applied for supporting long-term planning of municipal solid waste (MSW) management in Xiamen City, the special economic zone of Fujian Province, China. In the SFIWP model, the techniques of inventory modeling, stochastic linear fractional programming, and mixed-integer linear programming were integrated in one framework. Issues of waste inventory in the MSW management system were addressed, and the system efficiency was maximized through considering maximum net-diverted wastes under various constraint-violation risks. Decision alternatives for waste allocation and capacity expansion were also provided for MSW management planning in Xiamen. The obtained results show that about 4.24 × 10^6 t of waste would be diverted from landfills when p_i is 0.01, accounting for 93% of the waste in Xiamen City, and the waste diversion per unit of cost would be 26.327 × 10^3 t per $10^6. The capacities of MSW management facilities, including incinerators, composting facilities, and landfills, would be expanded due to the increasing waste generation rate.

  16. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMMING APPROACH

    P.S.A. Cunha

    Full Text Available ABSTRACT Here, we propose a novel methodology for replenishment and control systems for inventories of two-echelon logistics networks using two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer service, we introduce a variable rationing rule to address shortages of the item. The devised models are reformulated into their deterministic equivalents, resulting in nonlinear mixed-integer programming models, which are then approximately linearized. To deal with the uncertain nature of the item demand levels, we apply a Monte Carlo simulation-based method to generate finite and discrete sets of scenarios. Moreover, the proposed approach does not require restrictive assumptions about the behavior of the probabilistic phenomena, as do several existing methods in the literature. Numerical experiments with the proposed approach for randomly generated instances of the problem show results with errors around 1%.
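
    The scenario-generation step can be illustrated by a single-echelon sample-average approximation (an order-up-to level under normally distributed demand; costs and distribution parameters are invented, and the paper's model is far richer):

```python
import numpy as np

# Sample-average approximation of a periodic-review, order-up-to policy:
# generate a finite set of demand scenarios by Monte Carlo, then pick the
# base-stock level S minimizing expected holding + shortage cost.
# All costs and demand parameters are illustrative.
rng = np.random.default_rng(11)
scenarios = rng.normal(100, 25, size=5000).clip(min=0)   # demand per period
h, b = 1.0, 9.0                                          # holding, backorder cost

def expected_cost(S):
    over = np.maximum(S - scenarios, 0)    # leftover stock
    under = np.maximum(scenarios - S, 0)   # unmet demand
    return h * over.mean() + b * under.mean()

grid = np.arange(50, 201)
S_best = grid[np.argmin([expected_cost(S) for S in grid])]
print(f"best order-up-to level: {S_best}")
print(f"analytic newsvendor quantile: {np.percentile(scenarios, 100*b/(h+b)):.1f}")
```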

  17. Dynamic stochastic optimization

    Ermoliev, Yuri; Pflug, Georg

    2004-01-01

    Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the accent is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...

  18. Stochastic quantization and gauge theories

    Kolck, U. van.

    1987-01-01

    Stochastic quantization is presented taking the Fluctuation-Dissipation Theorem as a guide. It is shown that the original approach of Parisi and Wu to gauge theories fails to give the right results for gauge invariant quantities when dimensional regularization is used. Although there is a simple solution in an abelian theory, in the non-abelian case it is probably necessary to start from a BRST invariant action instead of a gauge invariant one. Stochastic regularizations are also discussed. (author)

  19. Stochastic efficiency: five case studies

    Proesmans, Karel; Broeck, Christian Van den

    2015-01-01

    Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information to work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)

  20. Memory effects on stochastic resonance

    Neiman, Alexander; Sung, Wokyung

    1996-02-01

    We study the phenomenon of stochastic resonance (SR) in a bistable system with internal colored noise. In this situation the system possesses time-dependent memory friction connected with noise via the fluctuation-dissipation theorem, so that in the absence of periodic driving the system approaches the thermodynamic equilibrium state. For this non-Markovian case we find that memory usually suppresses stochastic resonance. However, for a large memory time SR can be enhanced by the memory.

  1. Probability, Statistics, and Stochastic Processes

    Olofsson, Peter

    2011-01-01

    A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d...

  2. Gravity brake

    Lujan, Richard E.

    2001-01-01

    A mechanical gravity brake that prevents hoisted loads within a shaft from free-falling when a loss of hoisting force occurs. A loss of hoist lifting force may occur in a number of situations, for example if a hoist cable were to break, the brakes were to fail on a winch, or the hoist mechanism itself were to fail. Under normal hoisting conditions, the gravity brake of the invention is subject to an upward lifting force from the hoist and a downward pulling force from a suspended load. If the lifting force should suddenly cease, the loss of differential forces on the gravity brake in free-fall is translated to extend a set of brakes against the walls of the shaft to stop the free fall descent of the gravity brake and attached load.

  3. Analogue Gravity

    Barceló Carlos

    2005-12-01

    Full Text Available Analogue models of (and for) gravity have a long and distinguished history dating back to the earliest years of general relativity. In this review article we will discuss the history, aims, results, and future prospects for the various analogue models. We start the discussion by presenting a particularly simple example of an analogue model, before exploring the rich history and complex tapestry of models discussed in the literature. The last decade in particular has seen a remarkable and sustained development of analogue gravity ideas, leading to some hundreds of published articles, a workshop, two books, and this review article. Future prospects for the analogue gravity programme also look promising, both on the experimental front (where technology is rapidly advancing) and on the theoretical front (where variants of analogue models can be used as a springboard for radical attacks on the problem of quantum gravity).

  4. Quantum Gravity

    Alvarez, Enrique

    2004-01-01

    Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter; and because of the inter-connectedness of space, cause the...

  5. Stochastic Frontier Approach and Data Envelopment Analysis to Total Factor Productivity and Efficiency Measurement of Bangladeshi Rice

    Hossain, Md. Kamrul; Kamil, Anton Abdulbasah; Baten, Md. Azizul; Mustafa, Adli

    2012-01-01

    The objective of this paper is to apply the Translog Stochastic Frontier production model (SFA) and Data Envelopment Analysis (DEA) to estimate efficiencies over time and the Total Factor Productivity (TFP) growth rate for Bangladeshi rice crops (Aus, Aman and Boro), using the most recent data available, covering the period 1989-2008. Results indicate that technical efficiency was higher for Boro among the three types of rice, but the overall technical efficiency of rice production was found to be around 50%. Although positive changes exist in TFP for the sample analyzed, the average growth rate of TFP for rice production was estimated at almost the same level by both the Translog SFA with half-normal distribution and DEA. The TFP estimated from SFA is forecasted with an ARIMA (2, 0, 0) model; an ARIMA (1, 0, 0) model is used to forecast the TFP of Aman from the DEA estimation. PMID:23077500
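
    For reference, the DEA side of such a comparison solves one small linear program per decision-making unit; a minimal input-oriented CCR sketch with invented data:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA: for each unit o, minimize theta subject to a
# convex combination of all units using at most theta * inputs of o while
# producing at least the outputs of o. Data below are invented.
X = np.array([[20.,  30., 40., 20., 10.],     # inputs  (rows x units), e.g. land
              [ 5.,   8., 10.,  8.,  3.]])    #                         e.g. labour
Y = np.array([[100., 130., 160., 120., 60.]]) # outputs (rows x units), e.g. yield
m, n = X.shape
s = Y.shape[0]

for o in range(n):
    # decision variables: [theta, lambda_1 .. lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [o]], X],          # X @ lam <= theta * x_o
                     [np.zeros((s, 1)), -Y]])  # Y @ lam >= y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    print(f"unit {o}: efficiency = {res.x[0]:.3f}")
```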

  6. Newton-Cartan gravity revisited

    Andringa, Roel

    2016-01-01

    In this research Newton's old theory of gravity is rederived using an algebraic approach known as the gauging procedure. The resulting theory is Newton's theory in the mathematical language of Einstein's General Relativity theory, in which gravity is spacetime curvature. The gauging procedure sheds...

  7. Combined Deterministic and Stochastic Approach to Determine Spatial Distribution of Drought Frequency and Duration in the Great Hungarian Plain

    Szabó, J. A.; Kuti, L.; Bakacsi, Zs.; Pásztor, L.; Tahy, Á.

    2009-04-01

    Drought is one of the major weather-driven natural hazards, with more harmful impacts on the environment, agriculture and hydrology than most other hazards. Although Hungary, a country situated in Central Europe, belongs to the continental climate zone (influenced by Atlantic and Mediterranean streams) and these weather conditions should be favourable for agricultural production, drought is a serious risk factor in Hungary, especially on the so-called "Great Hungarian Plain", an area which has been hit by severe drought events. These drought events encouraged the Ministry of Environment and Water of Hungary to embark on a countrywide drought planning programme to coordinate drought planning efforts throughout the country, to ensure that available water is used efficiently, and to provide guidance on how drought planning can be accomplished. With regard to this plan, it is indispensable to analyze the regional drought frequency and duration in the target region of the programme as fundamental information for further work. According to these aims, we first initiated a methodological development for simulating drought in a non-contributing area. As a result of this work, it has been agreed that the most appropriate model structure for our purposes is a spatially distributed, physically based Soil-Vegetation-Atmosphere Transfer (SVAT) model embedded into a Markov Chain-Monte Carlo (MCMC) algorithm to estimate multi-year drought frequency and duration. In this framework, the spatially distributed SVAT component simulates all the fundamental SVAT processes (such as interception, snow accumulation and melting, infiltration, water uptake by vegetation and evapotranspiration, vertical and horizontal distribution of soil moisture, etc.), taking the groundwater table as the lower and the hydrometeorological fields as the upper boundary condition, while the MCMC-based stochastic component generates time series of daily weather...
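
    The coupling described above can be caricatured in a few lines (a two-state Markov chain for daily rain occurrence driving a single soil-moisture bucket, with drought spells counted against a threshold; every number is invented):

```python
import numpy as np

# Caricature of the MCMC-driven SVAT idea: a two-state Markov chain generates
# daily wet/dry sequences, a single "bucket" tracks soil moisture, and drought
# events are spells with moisture below a threshold. Parameters are invented.
rng = np.random.default_rng(5)
p_wd, p_ww = 0.25, 0.65          # P(wet | dry), P(wet | wet)
years, et, thresh = 100, 3.3, 30.0

wet, moisture, spells, run = False, 50.0, [], 0
for day in range(365 * years):
    wet = rng.random() < (p_ww if wet else p_wd)
    rain = rng.exponential(8.0) if wet else 0.0
    moisture = np.clip(moisture + rain - et, 0.0, 100.0)
    if moisture < thresh:
        run += 1
    elif run:
        spells.append(run)
        run = 0

spells = np.array(spells)
print(f"droughts/year: {len(spells)/years:.2f}  mean duration: {spells.mean():.1f} d")
```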

  8. Stochastic Jeux

    Romanu Ekaterini

    2006-01-01

    Full Text Available This article shows the similarities between Claude Debussy’s and Iannis Xenakis’ philosophy of music and work, in particular the former’s Jeux and the latter’s Metastasis and the stochastic works succeeding it, which seem to proceed in parallel (with no personal contact) with what is perceived as the evolution of 20th century Western music. Those two composers observed the dominant (German) tradition as outsiders, and negated some of its elements considered constant or natural by "traditional" innovators (i.e. serialists): the linearity of musical texture, its form and rhythm.

  9. Open problems and results in the group theoretic approach to quantum gravity via the BMS group and its generalizations

    Melas, Evangelos

    2011-01-01

    The Bondi-Metzner-Sachs group B is the common asymptotic group of all asymptotically flat (Lorentzian) space-times, and is the best candidate for the universal symmetry group of General Relativity. However, in quantum gravity, complexified or Euclidean versions of General Relativity are frequently considered. McCarthy has shown that there are forty-two generalizations of B for these versions of the theory and a variety of further ones, either real in any signature, or complex. A firm foundation for quantum gravity can be laid by following through the analogue of Wigner's programme for special relativity with B replacing the Poincare group P. Here the main results obtained so far in this research programme are reported and the more important open problems are stated.

  10. Hawking radiation and entropy of a black hole in Lovelock-Born-Infeld gravity from the quantum tunneling approach

    Li, Gu-Qiang

    2017-04-01

    The tunneling radiation of particles from black holes in Lovelock-Born-Infeld (LBI) gravity is studied by using the Parikh-Wilczek (PW) method, and the emission rate of a particle is calculated. It is shown that the emission spectrum deviates from the purely thermal spectrum but is consistent with an underlying unitary theory. Compared to the conventional tunneling rate related to the increment of black hole entropy, the entropy of the black hole in LBI gravity is obtained. The entropy does not obey the area law unless all the Lovelock coefficients equal zero, but it satisfies the first law of thermodynamics and is in accordance with earlier results. It is distinctly shown that the PW tunneling framework is related to the thermodynamic laws of the black hole. Supported by Guangdong Natural Science Foundation (2016A030307051, 2015A030313789)

  11. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  12. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) to the steel process chain: Case study

    Bieda, Bogusław

    2014-05-01

    The purpose of the paper is to present the results of the application of a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on the Mittal Steel Poland (MSP) complex in Kraków, Poland date back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.

  13. Loop quantum gravity

    Pullin, J.

    2015-01-01

    Loop quantum gravity is one of the approaches that are being studied to apply the rules of quantum mechanics to the gravitational field described by the theory of General Relativity. We present an introductory summary of the main ideas and recent results. (Author)

  14. Aggregation, impaired degradation and immunization targeting of amyloid-beta dimers in Alzheimer’s disease: a stochastic modelling approach

    Proctor Carole J

    2012-07-01

    Full Text Available Abstract Background Alzheimer’s disease (AD) is the most frequently diagnosed neurodegenerative disorder affecting humans, with advanced age being the most prominent risk factor for developing AD. Despite intense research efforts aimed at elucidating the precise molecular underpinnings of AD, a definitive answer is still lacking. In recent years, consensus has grown that dimerisation of the polypeptide amyloid-beta (Aβ), particularly Aβ42, plays a crucial role in the neuropathology that characterises AD-affected post-mortem brains, including the large-scale accumulation of fibrils, also referred to as senile plaques. This has led to the realistic hope that targeting Aβ42 immunotherapeutically could drastically reduce plaque burden in the ageing brain, thus delaying AD onset or symptom progression. Stochastic modelling is a useful tool for increasing understanding of the processes underlying complex systems-affecting disorders such as AD, providing a rapid and inexpensive strategy for testing putative new therapies. In light of the tool’s utility, we developed computer simulation models to examine Aβ42 turnover and its aggregation in detail and to test the effect of immunization against Aβ dimers. Results Our model demonstrates for the first time that even a slight decrease in the clearance rate of Aβ42 monomers is sufficient to increase the chance of dimers forming, which could act as instigators of protofibril and fibril formation, resulting in increased plaque levels. As the process is slow and levels of Aβ are normally low, stochastic effects are important. Our model predicts that reducing the rate of dimerisation leads to a significant reduction in plaque levels and delays the onset of plaque formation. The model was used to test the effect of an antibody-mediated immunological response. Our results showed that plaque levels were reduced compared to conditions where antibodies are not present. Conclusion Our model supports the current...
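
    The kind of stochastic simulation the authors describe is typically Gillespie-style; a generic sketch with only monomer production, clearance and dimerisation, and invented rates (the paper's model tracks many more species):

```python
import numpy as np

# Gillespie-style stochastic simulation of a toy aggregation scheme:
# production of monomers (M), clearance of monomers, and irreversible
# dimerisation M + M -> D. Rates are invented, not the paper's.
rng = np.random.default_rng(9)
k_prod, k_clear, k_dim = 5.0, 0.05, 1e-4
M, D, t, t_end = 0, 0, 0.0, 2000.0

while t < t_end:
    rates = np.array([k_prod, k_clear * M, k_dim * M * (M - 1) / 2])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    r = rng.random() * total
    if r < rates[0]:
        M += 1                 # production event
    elif r < rates[0] + rates[1]:
        M -= 1                 # clearance event
    else:
        M -= 2; D += 1         # dimerisation event
print(f"monomers={M}  dimers={D}  at t={t_end}")
```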

  15. Analogue Gravity

    Carlos Barceló

    2011-05-01

    Full Text Available Analogue gravity is a research programme which investigates analogues of general relativistic gravitational fields within other physical systems, typically but not exclusively condensed matter systems, with the aim of gaining new insights into their corresponding problems. Analogue models of (and for) gravity have a long and distinguished history dating back to the earliest years of general relativity. In this review article we will discuss the history, aims, results, and future prospects for the various analogue models. We start the discussion by presenting a particularly simple example of an analogue model, before exploring the rich history and complex tapestry of models discussed in the literature. The last decade in particular has seen a remarkable and sustained development of analogue gravity ideas, leading to some hundreds of published articles, a workshop, two books, and this review article. Future prospects for the analogue gravity programme also look promising, both on the experimental front (where technology is rapidly advancing) and on the theoretical front (where variants of analogue models can be used as a springboard for radical attacks on the problem of quantum gravity).

  16. Stochastic forward and inverse groundwater flow and solute transport modeling

    Janssen, G.M.C.M.

    2008-01-01

    Keywords: calibration, inverse modeling, stochastic modeling, nonlinear biodegradation, stochastic-convective, advective-dispersive, travel time, network design, non-Gaussian distribution, multimodal distribution, representers

    This thesis offers three new approaches that contribute...

  17. Stochastic calculus in physics

    Fox, R.F.

    1987-01-01

    The relationship of Ito-Stratonovich stochastic calculus to studies of weakly colored noise is explained. A functional calculus approach is used to obtain an effective Fokker-Planck equation for the weakly colored noise regime. In a smooth limit, this representation produces the Stratonovich version of the Ito-Stratonovich calculus for white noise. It also provides an approach to steady state behavior for strongly colored noise. Numerical simulation algorithms are explored, and a novel suggestion is made for efficient and accurate simulation of white noise equations.
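
    The Ito-Stratonovich distinction is easy to exhibit numerically; a minimal sketch comparing Euler-Maruyama (Ito) with the Heun predictor-corrector scheme (Stratonovich) on geometric Brownian motion:

```python
import numpy as np

# dX = b*X dW interpreted in the Ito vs the Stratonovich sense.
# Euler-Maruyama converges to the Ito solution; the Heun predictor-
# corrector scheme converges to the Stratonovich one. Their terminal
# means differ by the drift correction factor exp(0.5*b^2*T).
rng = np.random.default_rng(2)
b, T, n, paths = 0.5, 1.0, 1000, 10_000
dt = T / n

X_ito = np.ones(paths)
X_str = np.ones(paths)
for _ in range(n):
    dW = rng.standard_normal(paths) * np.sqrt(dt)
    X_ito = X_ito + b * X_ito * dW                   # Euler-Maruyama (Ito)
    pred = X_str + b * X_str * dW                    # Heun predictor
    X_str = X_str + 0.5 * b * (X_str + pred) * dW    # Heun corrector (Strat.)

print(f"Ito mean          ~ {X_ito.mean():.3f}  (exact 1.000)")
print(f"Stratonovich mean ~ {X_str.mean():.3f}  (exact {np.exp(0.5*b*b*T):.3f})")
```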

  18. A stochastic approach for automatic registration and fusion of left atrial electroanatomic maps with 3D CT anatomical images

    Cristoforetti, Alessandro; Mase, Michela; Faes, Luca; Centonze, Maurizio; Greco, Maurizio Del; Antolini, Renzo; Nollo, Giandomenico; Ravelli, Flavia

    2007-01-01

    The integration of electroanatomic maps with highly resolved computed tomography cardiac images plays an important role in the successful planning of the ablation procedure for arrhythmias. In this paper, we present and validate a fully automated strategy for the registration and fusion of sparse atrial endocardial electroanatomic maps (CARTO maps) with detailed left atrial (LA) anatomical reconstructions segmented from a pre-procedural MDCT scan. Registration is accomplished by a parameterized geometric transformation of the CARTO points and by a stochastic search for the best parameter set which minimizes the misalignment between the transformed CARTO points and the LA surface. The subsequent fusion of electrophysiological information onto the registered CT atrium is obtained through radial basis function interpolation. The algorithm is validated by simulation and by real data from 14 patients referred to CT imaging prior to the ablation procedure. Results are presented which show the validity of the algorithmic scheme as well as the accuracy and reproducibility of the integration process. The obtained results encourage the application of the integration method in post-intervention ablation assessment and basic AF research, and suggest development towards real-time application in catheter guidance during ablation interventions.
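
    Stripped to essentials, the registration step is a stochastic search over a rigid transform minimizing a point-to-surface misalignment; a 2-D toy version with annealed random search (not the authors' implementation):

```python
import numpy as np

# Toy version of the registration idea: find the rotation + translation that
# minimize the mean nearest-neighbour distance between a sparse point set
# (standing in for CARTO points) and a dense cloud (standing in for the CT
# surface), using a simple annealed random search. 2-D stand-in for 3-D.
rng = np.random.default_rng(4)
surface = rng.random((500, 2)) * 10.0
true_th, true_tr = 0.4, np.array([1.5, -0.8])
R = np.array([[np.cos(true_th), -np.sin(true_th)],
              [np.sin(true_th),  np.cos(true_th)]])
points = surface[rng.choice(500, 40, replace=False)] @ R.T + true_tr

def misfit(p):
    th, tx, ty = p
    Rm = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    moved = (points - [tx, ty]) @ Rm            # apply the inverse transform
    d = np.linalg.norm(moved[:, None] - surface[None], axis=-1)
    return d.min(axis=1).mean()

best = np.zeros(3)
best_f = misfit(best)
step = np.array([0.5, 2.0, 2.0])
for k in range(3000):                           # annealed random search
    cand = best + rng.normal(0.0, step * 0.999**k)
    f = misfit(cand)
    if f < best_f:
        best, best_f = cand, f
print(f"estimated (theta, tx, ty) = {best.round(3)}, misfit = {best_f:.4f}")
```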

  19. Modelling and application of stochastic processes

    1986-01-01

    The subject of modelling and application of stochastic processes is too vast to be exhausted in a single volume. In this book, attention is focused on a small subset of this vast subject. The primary emphasis is on realization and approximation of stochastic systems. Recently there has been considerable interest in the stochastic realization problem, and hence, an attempt has been made here to collect in one place some of the more recent approaches and algorithms for solving the stochastic realization problem. Various different approaches for realizing linear minimum-phase systems, linear nonminimum-phase systems, and bilinear systems are presented. These approaches range from time-domain methods to spectral-domain methods. An overview of the chapter contents briefly describes these approaches. Also, in most of these chapters special attention is given to the problem of developing numerically efficient algorithms for obtaining reduced-order (approximate) stochastic realizations. On the application side,...

  20. Gravity inversion predicts the nature of the amundsen basin and its continental borderlands near greenland

    Døssing, Arne; Hansen, Thomas Mejer; Olesen, Arne Vestergaard

    2014-01-01

    ...the results of 3-D gravity inversion for predicting the sediment thickness and basement geometry within the Amundsen Basin and along its borderlands. We use the recently published LOMGRAV-09 gravity compilation and adopt a process-oriented iterative cycle approach that minimizes misfit between an Earth model... and observations. The sensitivity of our results to lateral variations in depth and density contrast of the Moho is further tested by a stochastic inversion. Within their limitations, the approach and setup used herein provide the first detailed model of the sediment thickness and basement geometry in the Arctic... above high-relief basement in the central Amundsen Basin. Significantly, an up to 7 km deep elongated sedimentary basin is predicted along the northern edge of the Morris Jesup Rise. This basin continues into the Klenova Valley south of the Lomonosov Ridge and correlates with an offshore continuation...

  1. Stochastic modeling

    Lanchier, Nicolas

    2017-01-01

    Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. In this coverage the reader is quickly introduced to several different topics enriched with 175 exercises which focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...

  2. Loop Quantum Gravity

    Rovelli Carlo

    1998-01-01

    Full Text Available The problem of finding the quantum theory of the gravitational field, and thus understanding what is quantum spacetime, is still open. One of the most active of the current approaches is loop quantum gravity. Loop quantum gravity is a mathematically well-defined, non-perturbative and background independent quantization of general relativity, with its conventional matter couplings. Research in loop quantum gravity today forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained are: (i) the computation of the physical spectra of geometrical quantities such as area and volume, which yields quantitative predictions on Planck-scale physics; (ii) a derivation of the Bekenstein-Hawking black hole entropy formula; (iii) an intriguing physical picture of the microstructure of quantum physical space, characterized by a polymer-like Planck scale discreteness. This discreteness emerges naturally from the quantum theory and provides a mathematically well-defined realization of Wheeler's intuition of a spacetime "foam". Long standing open problems within the approach (lack of a scalar product, over-completeness of the loop basis, implementation of reality conditions) have been fully solved. The weak part of the approach is the treatment of the dynamics: at present there exist several proposals, which are intensely debated. Here, I provide a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.

  3. Random manifolds and quantum gravity

    Krzywicki, A.

    2000-01-01

    The non-perturbative, lattice field theory approach towards the quantization of Euclidean gravity is reviewed. Included is a tentative summary of the most significant results and a presentation of the current state of the art.

  4. Payload Mass Identification of a Single-Link Flexible Arm Moving under Gravity: An Algebraic Identification Approach

    Juan Carlos Cambera

    2015-01-01

    Full Text Available We deal with the online identification of the payload mass carried by a single-link flexible arm that moves on a vertical plane and therefore is affected by the gravity force. Specifically, we follow a frequency domain design methodology to develop an algebraic identifier. This identifier is capable of achieving robust and efficient mass estimates even in the presence of sensor noise. In order to highlight its performance, the proposed estimator is experimentally tested and compared with other classical methods in several situations that resemble the most typical operation of a manipulator.

  5. A solution of nonlinear equation for the gravity wave spectra from Adomian decomposition method: a first approach

    Antonio Gledson Goulart

    2013-12-01

    Full Text Available In this paper, the equation for the gravity wave spectra in the mean atmosphere is analytically solved without linearization by the Adomian decomposition method. As a consequence, the nonlinear nature of the problem is preserved and the errors found in the results are only due to the parameterization. The results, with the parameterization applied in the simulations, indicate that the linear solution of the equation is a good approximation only for heights below ten kilometers, because linearizing the equation leads to a solution that does not correctly describe the kinetic energy spectra.

  6. Multi-technique approach for deriving a VLBI signal extra-path variation model induced by gravity: the example of Medicina

    Sarti, P.; Abbondanza, C.; Negusini, M.; Vittuari, L.

    2009-09-01

    During measurement sessions gravity may induce significant deformations in large VLBI telescopes. If neglected or mismodelled, these deformations might bias the phase of the incoming signal, thus corrupting the estimate of some crucial geodetic parameters (e.g. the height component of the VLBI Reference Point). This paper describes a multi-technique approach implemented for measuring and quantifying the gravity-dependent deformations experienced by the 32-m diameter VLBI antenna of Medicina (Northern Italy). The approach integrates three different methods: Terrestrial Triangulations and Trilaterations (TTT), Laser Scanning (LS) and a Finite Element Model (FEM) of the antenna. The combination of the observations performed with these methods allows us to accurately define an elevation-dependent model of the signal path variation, which for the Medicina telescope turns out to be non-negligible: in the range [0, 90] deg the signal path increases monotonically by almost 2 cm. The effect of such a variation has not yet been introduced in actual VLBI analysis; nevertheless, this is a task we plan to pursue in the near future.

  7. A stochastic analysis approach on the cost-time profile for selecting the best future state MA

    Seyedhosseini, Seyed Mohammad

    2015-05-01

    Full Text Available In the literature on value stream mapping (VSM), the only basis for choosing the best future state map (FSM) among the proposed alternatives is the time factor. As a result, the FSM is selected as the best option because it has the least total production lead time (TPLT). In this paper, the cost factor is considered in the FSM selection process in addition to the time factor. Thus, for each of the proposed FSMs, the cost-time profile (CTP) is used. Two factors that are of particular importance for the customer and the manufacturer – the TPLT and the direct cost of the product – are reviewed and analysed by calculating the sub-area of the CTP curve, called the cost-time investment (CTI). In addition, variability in the generated data has been studied for each of the CTPs in order to choose the best FSM more precisely and accurately. Based on a proposed step-by-step stochastic analysis method, and by using non-parametric kernel estimation methods for estimating the probability density function of the CTIs, the process of choosing the best FSM is carried out based not only on the minimum expected CTI, but also on the minimum expected variability in CTIs among the proposed alternatives. By implementing this method during the process of choosing the best FSM, manufacturing organisations can consider both the cost factor and the variability in the generated data, in addition to the time factor. Accordingly, the decision-making process proceeds more easily and logically than with traditional methods. Finally, to describe the effectiveness and applicability of the proposed method, it is applied to a case study of an industrial parts manufacturing company in Iran.
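
    A sketch of the kernel-density step, with scipy's gaussian_kde standing in for the paper's estimator and invented CTI samples for three hypothetical FSMs:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Kernel-density comparison of cost-time investment (CTI) samples for three
# hypothetical future state maps. The preferred FSM combines a low expected
# CTI with low spread; all numbers are invented for illustration.
rng = np.random.default_rng(8)
cti = {
    "FSM-A": rng.normal(120, 15, 400),
    "FSM-B": rng.normal(110, 30, 400),   # cheaper on average, more variable
    "FSM-C": rng.normal(125,  8, 400),
}
for name, x in cti.items():
    kde = gaussian_kde(x)                       # nonparametric pdf estimate
    grid = np.linspace(x.min(), x.max(), 512)
    mode = grid[np.argmax(kde(grid))]
    print(f"{name}: E[CTI]={x.mean():6.1f}  sd={x.std():5.1f}  mode~{mode:6.1f}")
```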

  8. Using stockpile delegation to improve China's strategic oil policy: A multi-dimension stochastic dynamic programming approach

    Chen, Xin; Mu, Hailin; Li, Huanan; Gui, Shusen

    2014-01-01

    There has been much attention paid to oil security in China in recent years. Although China has begun to establish its own strategic petroleum reserve (SPR) to prevent potential losses caused by oil supply interruptions, the system aiming to ensure China's oil security is still incomplete. This paper describes and provides evidence for the benefits of an auxiliary strategic oil policy, which aims to strengthen China's oil supply security and offer a solution for strategic oil operations with different holding costs. In this paper, we develop a multi-dimension stochastic dynamic programming model to analyze the oil stockpile delegation policy, which is an intermediate policy between public and private oil stockpiles and is appropriate for China's immature private oil stockpile sector. The model examines the effects of the oil stockpile delegation policy in several distinct situations, including normal world oil market conditions, slight oil supply interruption, and serious oil supply interruption. Operating strategies that respond to different oil supply situations for both the SPR and the delegated oil stockpile are obtained. Different time horizons, interruption times and holding costs of delegated oil stockpiles are examined. The construction process of China's SPR is also taken into account. - Highlights: • We provide an auxiliary strategic oil policy rooted in Chinese local conditions. • The policy strengthens China's capability for preventing oil supply interruptions. • We build a model to obtain operating strategies for China's strategic petroleum reserve. • Both the public and the delegated oil stockpile are taken into consideration. • The three-phase construction process of China's SPR is taken into account.
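
    A toy finite-horizon stochastic dynamic program conveys the flavour (one stock dimension, a two-state oil market, invented costs and probabilities; the paper's model is multi-dimensional):

```python
import numpy as np

# Toy stochastic dynamic program for a strategic stockpile: state = stock
# level, random market state (normal vs interrupted), decision = units to
# buy (>0) or release (<0). Costs, probabilities and horizon are invented.
S = np.arange(0, 11)                 # stock levels
A = np.arange(-2, 3)                 # release / buy per period
T, p_int = 12, 0.1                   # horizon, P(interruption)
hold, buy = 0.2, 1.0                 # holding and purchase cost per unit
shortage = 8.0                       # penalty per unit of unmet demand

V = np.zeros(len(S))                 # terminal value
policy = np.zeros((T, len(S)), dtype=int)
for t in reversed(range(T)):
    V_new = np.full(len(S), np.inf)
    for i, s in enumerate(S):
        for a in A:
            s2 = s + a
            if not 0 <= s2 <= S[-1]:
                continue
            c_norm = buy * max(a, 0) + hold * s2          # normal market
            unmet = max(3 - max(-a, 0), 0)                # interruption demand
            c_int = shortage * unmet + hold * s2
            cost = (1 - p_int) * c_norm + p_int * c_int + V[s2]
            if cost < V_new[i]:
                V_new[i], policy[t, i] = cost, a
    V = V_new
print("period-0 policy (by stock level):", dict(zip(S.tolist(), policy[0])))
```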

  9. Dealing with equality and benefit for water allocation in a lake watershed: A Gini-coefficient based stochastic optimization approach

    Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.

    2018-06-01

    A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating a hydrological model, a water balance model, the Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at the watershed scale. The framework is advantageous in reflecting the conflicting equity and benefit objectives of water allocation, maintaining the water balance of the watershed, and dealing with system uncertainties. GBSO was solved by the Non-dominated Sorting Genetic Algorithm II (NSGA-II), after the parameter uncertainties of the hydrological model had been quantified into the probability distribution of runoff as the input of the CCP model, and the chance constraints had been converted to their corresponding deterministic equivalents. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The optimal Pareto-front results reflect the tradeoff between system benefit (αSB) and the Gini coefficient (αG) under different significance levels (i.e. q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints, and a worse drought intensity scenario corresponds to less available water; both lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework can help obtain Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
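
    Two of the building blocks are compact enough to sketch: the Gini equity objective and the deterministic equivalent of a normal chance constraint (all numbers and symbols below are illustrative):

```python
import numpy as np
from scipy.stats import norm

# Two building blocks of a GBSO-style model. (1) Gini coefficient of the
# per-user allocation/demand ratios, the equity objective. (2) A chance
# constraint sum(x) <= Q with random supply Q ~ N(mu, sd), required to hold
# with probability 1-q, becomes sum(x) <= mu + sd * Phi^{-1}(q).
def gini(r):
    r = np.sort(np.asarray(r, float))
    n = r.size
    return (2 * np.arange(1, n + 1) - n - 1) @ r / (n * r.sum())

alloc = np.array([25., 40., 15., 45.])     # water allocated to 4 users
demand = np.array([40., 50., 30., 70.])
print(f"Gini of satisfaction ratios: {gini(alloc / demand):.3f}")

mu, sd, q = 160.0, 20.0, 0.05              # runoff mean/sd, significance level
cap = mu + sd * norm.ppf(q)                # deterministic equivalent capacity
print(f"total allocation must stay below {cap:.1f} (here {alloc.sum():.1f})")
```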

  10. Stochasticity Modeling in Memristors

    Naous, Rawan; Al-Shedivat, Maruan; Salama, Khaled N.

    2015-01-01

    Diverse models have been proposed over the past years to explain the behavior exhibited by memristors, the fourth fundamental circuit element. The models vary in complexity, ranging from descriptions of physical mechanisms to more generalized mathematical models. Nonetheless, stochasticity, a widely observed phenomenon, has been largely overlooked from the modeling perspective. This inherent variability within the operation of the memristor is a vital feature for the integration of this nonlinear device into the field of stochastic electronics. In this paper, experimentally observed innate stochasticity is modeled in a circuit-compatible format. The proposed model is generic and can be incorporated into variants of threshold-based memristor models, in which apparent variations in the output hysteresis convey the shift of the switching threshold. Its further application as a noise-injection alternative paves the way for novel approaches in neuromorphic circuit design. On the other hand, extra caution is needed for variability-intolerant digital designs based on non-deterministic memristor logic.
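
    A minimal sketch of the idea, assuming a toy voltage-threshold switching model with a Gaussian-perturbed threshold (parameter values are hypothetical, not the paper's fitted model):

```python
import numpy as np

# Toy threshold-switching memristor with a stochastic threshold.
rng = np.random.default_rng(1)
R_ON, R_OFF = 1e3, 1e5          # bounding resistances (hypothetical)
V_TH, SIGMA = 1.0, 0.15         # nominal switching threshold and its spread

def sweep(voltages):
    R, out = R_OFF, []
    vth = V_TH + SIGMA * rng.standard_normal()   # threshold redrawn per sweep
    for v in voltages:
        if v >= vth:
            R = R_ON                             # SET above the (noisy) threshold
        elif v <= -vth:
            R = R_OFF                            # RESET below the negative threshold
        out.append(v / R)                        # current through the device
    return np.array(out)

v = np.concatenate([np.linspace(0, 1.5, 50), np.linspace(1.5, -1.5, 100)])
i1, i2 = sweep(v), sweep(v)      # two sweeps: the hysteresis loops differ stochastically
```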

  12. Topics in string theory and quantum gravity

    Alvarez-Gaume, Luis

    1992-01-01

    These are the lecture notes for the Les Houches Summer School on Quantum Gravity held in July 1992. The notes present some general critical assessment of other (non-string) approaches to quantum gravity, and a selected set of topics concerning what we have learned so far about the subject from string theory. Since these lectures are long (133 A4 pages), we include in this abstract the table of contents, which should help the user of the bulletin board in deciding whether to latex and print the full file. 1-FIELD THEORETICAL APPROACH TO QUANTUM GRAVITY: Linearized gravity; Supergravity; Kaluza-Klein theories; Quantum field theory and classical gravity; Euclidean approach to Quantum Gravity; Canonical quantization of gravity; Gravitational Instantons. 2-CONSISTENCY CONDITIONS: ANOMALIES: Generalities about anomalies; Spinors in 2n dimensions; When can we expect to find anomalies?; The Atiyah-Singer Index Theorem and the computation of anomalies; Examples: Green-Schwarz cancellation mechanism and Witten's SU(2) ...

  13. Simulating Gravity

    Pipinos, Savas

    2010-01-01

    This article describes one classroom activity in which the author simulates Newtonian gravity and employs Euclidean geometry with the use of new technologies (NT). The prerequisites for this activity were some knowledge of the formulae for a particle in free fall in physics and, most certainly, a good understanding of the notion of similarity…

  14. Cellular gravity

    F.C. Gruau; J.T. Tromp (John)

    1999-01-01

    We consider the problem of establishing gravity in cellular automata. In particular, when cellular automata states can be partitioned into empty, particle, and wall types, with the latter enclosing rectangular areas, we desire rules that will make the particles fall down and pile up on

  15. Stochastic methods in quantum mechanics

    Gudder, Stanley P

    2005-01-01

    Practical developments in such fields as optical coherence, communication engineering, and laser technology have emerged from the applications of stochastic methods. This introductory survey offers a broad view of some of the most useful stochastic methods and techniques in quantum physics, functional analysis, probability theory, communications, and electrical engineering. Starting with a history of quantum mechanics, it examines both the quantum logic approach and the operational approach, with explorations of random fields and quantum field theory. The text assumes a basic knowledge of fun

  16. STOCHASTIC FLOWS OF MAPPINGS

    2007-01-01

    In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow generalizes superprocesses of stochastic flows and the stochastic diffeomorphisms induced by the strong solutions of stochastic differential equations.

  17. Stochastic Averaging and Stochastic Extremum Seeking

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
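
    A minimal sketch of gradient-based extremum seeking with a stochastic (rather than sinusoidal) perturbation, on a toy quadratic map with hypothetical gains:

```python
import numpy as np

# Gradient-based extremum seeking on an unknown map f, using a bounded
# stochastic dither to estimate the gradient from output measurements.
rng = np.random.default_rng(2)
f = lambda th: -(th - 3.0) ** 2          # unknown map with maximum at theta* = 3

theta, a, k = 0.0, 0.2, 0.5              # estimate, dither amplitude, adaptation gain
for _ in range(2000):
    eta = rng.choice([-1.0, 1.0])        # zero-mean, unit-variance dither
    grad_est = f(theta + a * eta) * eta / a   # correlating output with the dither
    theta += 0.01 * k * grad_est         # ascend the estimated gradient
print(theta)                              # settles in a neighborhood of 3
```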

  18. A Spatial-Filtering Zero-Inflated Approach to the Estimation of the Gravity Model of Trade

    Rodolfo Metulini

    2018-02-01

    Nonlinear estimation of the gravity model with Poisson-type regression methods has become popular for modelling international trade flows, because it permits better accounting for zero flows and extreme values in the distribution tail. Nevertheless, as trade flows are not independent from each other due to spatial and network autocorrelation, these methods may lead to biased parameter estimates. To overcome this problem, eigenvector spatial filtering (ESF) variants of the Poisson/negative binomial specifications have been proposed in the literature on gravity modelling of trade. However, no specific treatment has been developed for cases in which many zero flows are present. This paper contributes to the literature in two ways. First, it employs a stepwise selection criterion for spatial filters that is based on robust (sandwich) p-values and does not require likelihood-based indicators; in this respect, we develop an ad hoc backward stepwise function in R. Second, using this function, we select a reduced set of spatial filters that properly accounts for importer-side and exporter-side specific spatial effects, as well as network effects, in both the count and the logit processes of zero-inflated methods. Applying this estimation strategy to a cross-section of bilateral trade flows between a set of 64 countries for the year 2000, we find that our specification outperforms the benchmark models in terms of model fit, both in terms of the AIC and in predicting zero (and small) flows.
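
    A stripped-down version of the count process can be sketched with statsmodels' ZeroInflatedPoisson on synthetic flows; the eigenvector spatial filters the paper selects would enter as additional columns of the design matrices (all data and coefficients below are synthetic, not the paper's estimates):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Zero-inflated Poisson gravity regression on synthetic bilateral flows.
rng = np.random.default_rng(3)
n = 500
log_gdp_o = rng.normal(10, 1, n)          # exporter GDP (log), synthetic
log_gdp_d = rng.normal(10, 1, n)          # importer GDP (log), synthetic
log_dist = rng.normal(7, 0.5, n)          # bilateral distance (log), synthetic
mu = np.exp(-8 + 0.8 * log_gdp_o + 0.7 * log_gdp_d - 0.9 * log_dist)
flows = rng.poisson(mu) * rng.binomial(1, 0.7, n)   # extra zeros from inflation process

X = sm.add_constant(np.column_stack([log_gdp_o, log_gdp_d, log_dist]))
model = ZeroInflatedPoisson(flows, X, exog_infl=np.ones((n, 1)), inflation='logit')
res = model.fit(method='bfgs', maxiter=500, disp=False)
print(res.params)   # count-process coefficients plus the inflation intercept
```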

  19. Topological charged black holes in massive gravity's rainbow and their thermodynamical analysis through various approaches

    Hendi, S.H., E-mail: hendi@shirazu.ac.ir [Physics Department and Biruni Observatory, College of Sciences, Shiraz University, Shiraz 71454 (Iran, Islamic Republic of); Research Institute for Astronomy and Astrophysics of Maragha (RIAAM), P.O. Box 55134-441, Maragha (Iran, Islamic Republic of); Eslam Panah, B. [Physics Department and Biruni Observatory, College of Sciences, Shiraz University, Shiraz 71454 (Iran, Islamic Republic of); Panahiyan, S. [Physics Department and Biruni Observatory, College of Sciences, Shiraz University, Shiraz 71454 (Iran, Islamic Republic of); Physics Department, Shahid Beheshti University, Tehran 19839 (Iran, Islamic Republic of)

    2017-06-10

    Violation of Lorentz invariance in high energy quantum gravity motivates one to consider an energy dependent spacetime with a massive deformation of standard general relativity. In this paper, we take into account an energy dependent metric in the context of a massive gravity model to obtain exact solutions. We investigate the geometry of the black hole solutions and calculate the conserved and thermodynamic quantities, which are fully reproduced by the analysis performed with standard techniques. After examining the validity of the first law of thermodynamics, we study the effects of different parameters on the thermal stability of the solutions. In addition, we employ the relation between the cosmological constant and thermodynamic pressure to study the possibility of phase transition. Interestingly, we show that for the specific configuration considered in this paper, van der Waals-like behavior is observed for different topologies. In other words, for flat and hyperbolic horizons, similar to the spherical horizon, a second order phase transition and van der Waals-like behavior are observed. Furthermore, we use a geometrical method to construct the phase space and study phase transitions and bound points for these black holes. Finally, we obtain critical values in the extended phase space through the use of a new method.

  20. Best Longitudinal Adjustment of Satellite Trajectories for the Observation of Forest Fires (Blastoff): A Stochastic Programming Approach to Satellite System Design

    Hoskins, Aaron B.

    Forest fires cause a significant amount of damage and destruction each year. Optimally dispatching resources reduces the amount of damage a forest fire can cause. Models predict the fire spread to provide the data required to optimally dispatch resources. However, the models are only as accurate as the data used to build them. Satellites are one valuable tool in the collection of data for forest fire models. Satellites provide data on the types of vegetation, the wind speed and direction, the soil moisture content, etc. The current operating paradigm is to passively collect data when possible. However, images from directly overhead provide better resolution and are easier to process. Maneuvering a constellation of satellites to fly directly over the forest fire provides higher quality data than is achieved with the current operating paradigm. Before launch, the location of the forest fire is unknown, so it is impossible to optimize the initial orbits for the satellites. Instead, the expected cost of maneuvering to observe the forest fire determines the optimal initial orbits. A two-stage stochastic programming approach is well suited for this class of problem, where initial decisions are made under an uncertain future and subsequent decisions are made once a scenario is realized. A repeat ground track orbit provides a non-maneuvering, natural solution with a daily flyover of the forest fire, while additional maneuvers provide a second daily flyover. The additional maneuvering comes at a significant cost in terms of fuel, but provides more data collection opportunities. After data are collected, ground stations receive the data for processing. Optimally selecting the ground station locations reduces the number of ground stations that must be built and mitigates data fusion issues. However, the location of the forest fire alters the optimal ground station sites. A two-stage stochastic programming approach optimizes the
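
    The essence of the two-stage setup can be sketched with a toy problem: choose an orbit inclination before the fire location is known, then pay a scenario-dependent maneuver (recourse) cost; all numbers are hypothetical stand-ins, not the paper's astrodynamics:

```python
import numpy as np

# Toy two-stage stochastic program solved by enumeration over first-stage choices.
orbits = np.linspace(20, 60, 41)               # candidate first-stage inclinations (deg)
fire_lat = np.array([25.0, 35.0, 48.0])        # scenario fire latitudes
prob = np.array([0.5, 0.3, 0.2])               # scenario probabilities
launch_cost = 0.05 * (orbits - 30) ** 2        # first-stage cost (favors 30 deg)

def recourse(inc, lat):
    return 2.0 * abs(inc - lat)                # fuel cost of the plane-change maneuver

expected = launch_cost + np.array(
    [prob @ np.array([recourse(inc, la) for la in fire_lat]) for inc in orbits])
best = orbits[np.argmin(expected)]
print(best)   # optimal first-stage orbit under uncertainty
```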

  1. A Neuro-fuzzy-stochastic frontier analysis approach for long-term natural gas consumption forecasting and behavior analysis: The cases of Bahrain, Saudi Arabia, Syria, and UAE

    Azadeh, A.; Asadzadeh, S.M.; Saberi, M.; Nadimi, V.; Tajvidi, A.; Sheikalishahi, M.

    2011-01-01

    Highlights: → This paper presents a unique approach for long-term natural gas consumption estimation. → It is applied to selected Arab countries to show its superiority and applicability. → It may be used for other real cases for optimum gas consumption estimation. → It is compared with current studies to show its advantages. → It is capable of dealing with complexity, ambiguity, fuzziness, and randomness. -- Abstract: This paper presents an adaptive network-based fuzzy inference system-stochastic frontier analysis (ANFIS-SFA) approach for long-term natural gas (NG) consumption prediction and for analysis of the behavior of NG consumption. The proposed models use Gross Domestic Product (GDP) and population (POP) as input variables. Six distinct models based on different inputs are defined. All trained ANFIS models are then compared with respect to mean absolute percentage error (MAPE). To obtain the best performance from the intelligent approaches, the data are pre-processed (scaled) and the outputs are finally post-processed (returned to their original scale). To show the applicability and superiority of the integrated ANFIS-SFA approach, gas consumption in four Middle Eastern countries, i.e., Bahrain, Saudi Arabia, Syria, and the United Arab Emirates, is forecasted and analyzed based on data for the period 1980-2007. With the aid of an autoregressive model, GDP and population are projected for the period 2008-2015. These projected data are used as inputs of the ANFIS model to predict gas consumption in the selected countries for 2008-2015. SFA is then used to examine the behavior of gas consumption in the past and to provide insights for the forthcoming years. The ANFIS-SFA approach is capable of dealing with complexity, uncertainty, and randomness, as well as several other unique features discussed in this paper.

  2. A hybrid stochastic hierarchy equations of motion approach to treat the low temperature dynamics of non-Markovian open quantum systems

    Moix, Jeremy M.; Cao, Jianshu

    2013-10-01

    The hierarchical equations of motion technique has found widespread success as a tool to generate the numerically exact dynamics of non-Markovian open quantum systems. However, its application to low temperature environments remains a serious challenge due to the need for a deep hierarchy that arises from the Matsubara expansion of the bath correlation function. Here we present a hybrid stochastic hierarchical equation of motion (sHEOM) approach that alleviates this bottleneck and leads to a numerical cost that is nearly independent of temperature. Additionally, the sHEOM method generally converges with fewer hierarchy tiers, allowing for the treatment of larger systems. Benchmark calculations are presented on the dynamics of two-level systems at both high and low temperatures to demonstrate the efficacy of the approach. The hybrid method is then used to generate the exact dynamics of systems that are nearly impossible to treat by the standard hierarchy. First, exact energy transfer rates are calculated across a broad range of temperatures, revealing the deviations from the Förster rates. This is followed by computations of the entanglement dynamics in a system of two qubits at low temperature, spanning the weak to strong system-bath coupling regimes.
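
    The sHEOM machinery itself is beyond a short sketch, but the underlying idea of replacing a bath by a stochastic process can be illustrated with the classic Kubo toy model (not the authors' method): a two-level coherence dephased by an Ornstein-Uhlenbeck frequency fluctuation, averaged over realizations:

```python
import numpy as np

# Kubo-oscillator toy: ensemble-averaged coherence of a two-level system whose
# energy gap fluctuates as an Ornstein-Uhlenbeck (OU) "bath" process.
rng = np.random.default_rng(4)
ntraj, nsteps, dt = 2000, 1000, 0.01
tau, sigma = 1.0, 2.0                      # bath correlation time and coupling strength

eps = sigma * rng.standard_normal(ntraj)   # stationary initial fluctuations
phase = np.zeros(ntraj)
coh = np.empty(nsteps)
for k in range(nsteps):
    phase += eps * dt                      # accumulated random phase per trajectory
    coh[k] = np.abs(np.exp(1j * phase).mean())   # ensemble-averaged coherence
    # Euler-Maruyama step of the OU process for the energy-gap fluctuation
    eps += (-eps / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(ntraj)
print(coh[-1])   # decays from 1 toward 0 (dephasing)
```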

  3. Consequences of energy conservation violation: late time solutions of Λ(T)CDM subclass of f(R,T) gravity using dynamical system approach

    Shabani, Hamid [University of Sistan and Baluchestan, Physics Department, Faculty of Sciences, Zahedan (Iran, Islamic Republic of); Ziaie, Amir Hadi [Islamic Azad University, Department of Physics, Kahnooj Branch, Kerman (Iran, Islamic Republic of)

    2017-05-15

    Very recently, Josset and Perez (Phys. Rev. Lett. 118:021102, 2017) have shown that a violation of energy-momentum tensor (EMT) conservation could result in an accelerated expansion state via the appearance of an effective cosmological constant, in the context of unimodular gravity. Inspired by this outcome, in this paper we investigate the cosmological consequences of a violation of EMT conservation in a particular class of f(R,T) gravity when only the pressure-less fluid is present. In this respect, we focus on the late time solutions of models of the type f(R,T) = R + βΛ(-T). As the first task, we study the solutions when the conservation of the EMT is respected, and then we proceed with those in which violation occurs. We find, provided that EMT conservation is violated, that there generally exist two accelerated expansion solutions whose stability properties depend on the underlying model. More exactly, we obtain a dark energy solution for which the effective equation of state depends on the model parameters, and a de Sitter solution. We present a method to parametrize the Λ(-T) function, which is useful in a dynamical system approach and has been employed in the model. Also, we discuss the cosmological solutions for models with Λ(-T) = 8πG(-T)^α in the presence of ultra-relativistic matter. (orig.)

  4. Simple stochastic simulation.

    Schilstra, Maria J; Martin, Stephen R

    2009-01-01

    Stochastic simulations may be used to describe changes with time of a reaction system in a way that explicitly accounts for the fact that molecules show a significant degree of randomness in their dynamic behavior. The stochastic approach is almost invariably used when small numbers of molecules or molecular assemblies are involved, because this randomness leads to significant deviations from the predictions of the conventional deterministic (or continuous) approach to the simulation of biochemical kinetics. Advances in computational methods over the three decades that have elapsed since the publication of Daniel Gillespie's seminal paper in 1977 (J. Phys. Chem. 81, 2340-2361) have allowed researchers to produce highly sophisticated models of complex biological systems. However, these models are frequently highly specific to a particular application, and their description often involves mathematical treatments inaccessible to the nonspecialist. For anyone completely new to the field, applying such techniques in their own work might at first sight seem a rather intimidating prospect. However, the fundamental principles underlying the approach are in essence rather simple, and the aim of this article is to provide an entry point to the field for a newcomer. It focuses mainly on these general principles, both kinetic and computational, which tend not to be particularly well covered in the specialist literature, and shows that interesting information may be obtained using even very simple operations in a conventional spreadsheet.
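
    The core of the approach the article introduces is Gillespie's direct method; a minimal version for a reversible isomerization A ⇌ B (with hypothetical rate constants) is:

```python
import numpy as np

# Gillespie's direct method for A <-> B: draw the waiting time from the total
# propensity, then pick which reaction fires in proportion to its rate.
rng = np.random.default_rng(5)
k1, k2 = 1.0, 0.5            # rate constants for A->B and B->A (hypothetical)
a, b, t, t_end = 100, 0, 0.0, 10.0
times, traj = [0.0], [a]
while t < t_end:
    rates = np.array([k1 * a, k2 * b])
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)          # time to the next reaction event
    if rng.random() < rates[0] / total:        # which reaction fires
        a, b = a - 1, b + 1
    else:
        a, b = a + 1, b - 1
    times.append(t); traj.append(a)
print(traj[-1])   # fluctuates around the deterministic value k2/(k1+k2)*100 ≈ 33
```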

  5. Quantum gravity

    Isham, C.

    1989-01-01

    Gravitational effects are seen as arising from a curvature in spacetime. This must be reconciled with gravity's apparently passive role in quantum theory to achieve a satisfactory quantum theory of gravity. The development of grand unified theories has spurred the search, with forces being of equal strength at a unification energy of 10^15 - 10^18 GeV, near the "Planck length", L_p ≅ 10^-35 m. Fundamental principles of general relativity and quantum mechanics are outlined. Gravitons are shown to have spin-0, as mediators of the gravitational force in the classical sense, or spin-2, related to the quantisation of general relativity. Applying the ideas of supersymmetry to gravitation implies partners for the graviton, especially the massless spin-3/2 fermion called the gravitino. The concept of supersymmetric strings is introduced and discussed. (U.K.)

  6. Quantum gravity

    Markov, M.A.; West, P.C.

    1984-01-01

    This book discusses the state of the art of quantum gravity, quantum effects in cosmology, quantum black-hole physics, recent developments in supergravity, and quantum gauge theories. Topics considered include the problems of general relativity, pregeometry, complete cosmological theories, quantum fluctuations in cosmology and galaxy formation, a new inflationary universe scenario, grand unified phase transitions and the early Universe, the generalized second law of thermodynamics, vacuum polarization near black holes, the relativity of vacuum, black hole evaporations and their cosmological consequences, currents in supersymmetric theories, the Kaluza-Klein theories, gauge algebra and quantization, and twistor theory. This volume constitutes the proceedings of the Second Seminar on Quantum Gravity held in Moscow in 1981.

  7. Is nonrelativistic gravity possible?

    Kocharyan, A. A.

    2009-01-01

    We study nonrelativistic gravity using the Hamiltonian formalism. For the dynamics of general relativity (relativistic gravity) the formalism is well known and called the Arnowitt-Deser-Misner (ADM) formalism. We show that if the lapse function is constrained correctly, then nonrelativistic gravity is described by a consistent Hamiltonian system. Surprisingly, nonrelativistic gravity can have solutions identical to those of relativistic gravity. In particular, (anti-)de Sitter black holes of Einstein gravity and the IR limit of Horava gravity are locally identical.

  8. Algebraic and stochastic coding theory

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.

  9. Minimal Length, Measurability and Gravity

    Alexander Shalyt-Margolin

    2016-03-01

    The present work is a continuation of the author's previous papers on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are then used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.

  10. The quest for quantum gravity

    Au, G.

    1995-03-01

    One of the greatest challenges facing theoretical physics lies in reconciling Einstein's classical theory of gravity - general relativity - with quantum field theory. Although both theories have been experimentally supported in their respective regimes, they are as compatible as a square peg and a round hole. This article summarises the current status of the superstring approach to the problem, the status of the Ashtekar program, and the problem of time in quantum gravity.

  12. The Determinants of FDI Flows from the EU‐15 to the Visegrad Group Countries – A Panel Gravity Model Approach

    Liwiusz Wojciechowski

    2013-03-01

    The objective of this paper is to evaluate the determinants of the general FDI flow to the Visegrad countries and the effect of participation in the EMU and the EU. We investigate how an augmented gravity model of trade allows identifying and evaluating the significance of pull and push factors of FDI. In the empirical analysis of panel data, the Hausman-Taylor estimator was used because of the presence of time-invariant variables. While investment decisions regarding the choice of country are determined by the size of the target market, distance is still a negative factor in the creation of FDI volume. Additionally, it was shown that membership in the EMU, differences in taxation, historical background, access to the sea, and price stability have a significant impact on FDI stock formation in each country belonging to the V4. It was also noted that Poland has become the leader of the V4, as well as of the EU-12 FDI market sourcing from the old EU Member States. It is necessary to develop an "FDI attracting mechanism" using existing resources. Business regulations and taxation policy, as well as the main macroeconomic variables responsible for the standing of the economy, are also examined as factors attracting FDI flows. The originality of this work lies in studying aspects of FDI inflow into a group of countries that are both similar and different in terms of economic measures.

  13. A Bayesian approach for the stochastic modeling error reduction of magnetic material identification of an electromagnetic device

    Abdallh, A; Crevecoeur, G; Dupré, L

    2012-01-01

    Magnetic material properties of an electromagnetic device can be recovered by solving an inverse problem, where measurements are adequately interpreted by a mathematical forward model. The accuracy of these forward models dramatically affects the accuracy of the material properties recovered by the inverse problem: the more accurate the forward model is, the more accurate the recovered data are. However, the more accurate 'fine' models demand high computational time and memory storage. Alternatively, less accurate 'coarse' models can be used, at the expense of higher expected recovery errors. This paper uses the Bayesian approximation error approach to improve the inverse problem results when coarse models are utilized. The proposed approach adapts the objective function to be minimized with the a priori misfit between fine and coarse forward model responses. In this paper, two different electromagnetic devices, namely a switched reluctance motor and an EI core inductor, are used as case studies. The proposed methodology is validated on both purely numerical and real experimental results. The results show a significant reduction in the recovery error within an acceptable computational time. (paper)
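
    Schematically, the approach premarginalizes the fine-minus-coarse discrepancy: its sample mean and covariance, estimated offline from prior draws, are folded into the least-squares misfit. The one-parameter forward models below are invented for illustration only:

```python
import numpy as np

# Bayesian approximation error sketch: estimate discrepancy statistics offline,
# then invert with the cheap coarse model plus the discrepancy correction.
rng = np.random.default_rng(6)
fine = lambda x: np.array([np.sinh(x), x ** 3 + x])      # "accurate" forward model (toy)
coarse = lambda x: np.array([x, x])                      # cheap linearized model (toy)

samples = rng.uniform(0.0, 1.5, 500)                     # prior draws of the parameter
errs = np.array([fine(x) - coarse(x) for x in samples])  # discrepancy samples
mu_e, cov_e = errs.mean(axis=0), np.cov(errs.T)

x_true = 1.1
noise_cov = 1e-4 * np.eye(2)
y = fine(x_true) + rng.multivariate_normal(np.zeros(2), noise_cov)

W = np.linalg.inv(noise_cov + cov_e)                     # combined error metric
grid = np.linspace(0.0, 1.5, 1501)
obj = [(y - coarse(x) - mu_e) @ W @ (y - coarse(x) - mu_e) for x in grid]
print(grid[int(np.argmin(obj))])                         # close to x_true despite the coarse model
```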

  14. Quantum Gravity Mathematical Models and Experimental Bounds

    Fauser, Bertfried; Zeidler, Eberhard

    2007-01-01

    The construction of a quantum theory of gravity is the most fundamental challenge confronting contemporary theoretical physics. The different physical ideas which evolved while developing a theory of quantum gravity require highly advanced mathematical methods. This book presents different mathematical approaches to formulate a theory of quantum gravity. It represents a carefully selected cross-section of lively discussions about the issue of quantum gravity which took place at the second workshop "Mathematical and Physical Aspects of Quantum Gravity" in Blaubeuren, Germany. This collection covers in a unique way aspects of various competing approaches. A unique feature of the book is the presentation of different approaches to quantum gravity making comparison feasible. This feature is supported by an extensive index. The book is mainly addressed to mathematicians and physicists who are interested in questions related to mathematical physics. It allows the reader to obtain a broad and up-to-date overview on ...

  15. Noncommutative gravity

    Schupp, P.

    2007-01-01

    Heuristic arguments suggest that the classical picture of smooth commutative spacetime should be replaced by some kind of quantum / noncommutative geometry at length scales and energies where quantum as well as gravitational effects are important. Motivated by this idea much research has been devoted to the study of quantum field theory on noncommutative spacetimes. More recently the focus has started to shift back to gravity in this context. We give an introductory overview to the formulation of general relativity in a noncommutative spacetime background and discuss the possibility of exact solutions. (author)

  16. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful, since uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of $[1.197, 6.311] × 10^9, was obtained, as well as the corresponding land use allocations in the three planning periods. The resulting soil erosion amount was also found to be decreased and controlled at a tolerable level over the watershed. The results confirm that the developed model is a useful tool for implementing land use management, as it not only allows local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  18. A Reliability Comparison of Classical and Stochastic Thickness Margin Approaches to Address Material Property Uncertainties for the Orion Heat Shield

    Sepka, Steve; Vander Kam, Jeremy; McGuire, Kathy

    2018-01-01

    The Orion Thermal Protection System (TPS) margin process uses a root-sum-square approach, with branches addressing trajectory, aerothermodynamic, and material response uncertainties in ablator thickness design. The material response branch applies a reduction of the allowed bond line temperature between the Avcoat ablator and EA9394 adhesive of 60 C (108 F) from its peak allowed value of 260 C (500 F). This process is known as the Bond Line Temperature Material Margin (BTMM) and is intended to cover material property and performance uncertainties. The value of 60 C (108 F) is a constant, applied at any spacecraft body location and for any trajectory. By varying only material properties in a random (Monte Carlo) manner, the perl-based script mcCHAR is used to investigate the confidence interval provided by the BTMM. In particular, this study looks at various locations on the Orion heat shield forebody for a guided and an abort (ballistic) trajectory.

  19. Stochastic tools in turbulence

    Lumley, John L

    2012-01-01

    Stochastic Tools in Turbulence discusses the mathematical tools available for describing stochastic vector fields in order to solve problems related to these fields. The book deals with the needs of turbulence in relation to stochastic vector fields, focusing in particular on three-dimensional aspects, linear problems, and stochastic model building. The text describes probability distributions and densities, including Lebesgue integration, conditional probabilities, conditional expectations, statistical independence, and lack of correlation. The book also explains the significance of the moments, the properties of the

  20. Conformal Gravity

    't Hooft, G.

    2012-01-01

    The dynamical degree of freedom for the gravitational force is the metric tensor, having 10 locally independent degrees of freedom (of which 4 can be used to fix the coordinate choice). In conformal gravity, we split this field into an overall scalar factor and a nine-component remainder. All unrenormalizable infinities are in this remainder, while the scalar component can be handled like any other scalar field such as the Higgs field. In this formalism, conformal symmetry is spontaneously broken. An imperative demand on any healthy quantum gravity theory is that black holes should be described as quantum systems with micro-states as dictated by the Hawking-Bekenstein theory. This requires conformal symmetry that may be broken spontaneously but not explicitly, and this means that all conformal anomalies must cancel out. Cancellation of conformal anomalies yields constraints on the matter sector as described by some universal field theory. Thus black hole physics may eventually be of help in the construction of unified field theories. (author)

  1. Southern Africa Gravity Data

    National Oceanic and Atmospheric Administration, Department of Commerce — This data base (14,559 records) was received in January 1986. Principal gravity parameters include elevation and observed gravity. The observed gravity values are...

  2. NGS Absolute Gravity Data

    National Oceanic and Atmospheric Administration, Department of Commerce — The NGS Absolute Gravity data (78 stations) was received in July 1993. Principal gravity parameters include Gravity Value, Uncertainty, and Vertical Gradient. The...

  3. Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

    Fu, Michael C; Jin, Xing

    2005-01-01

    .... These results have practical implications for Monte Carlo simulation-based solution approaches to stochastic dynamic programming problems where it is impractical to extract the explicit transition...

  4. The relativistic gravity train

    Seel, Max

    2018-05-01

    The gravity train that takes 42.2 min from any point A to any other point B connected by a straight-line tunnel through Earth has captured the imagination more than most other applications in calculus or introductory physics courses. Brachistochrone and, most recently, nonlinear density solutions have been discussed. Here relativistic corrections are presented. It is discussed how the corrections affect the time to fall through Earth, the Sun, a white dwarf, a neutron star, and—the ultimate limit—the difference in time measured by a moving, a stationary, and the fiducial observer at infinity if the density of the sphere approaches the density of a black hole. The relativistic gravity train can serve as a problem with approximate and exact analytic solutions and as a numerical exercise in any introductory course on relativity.
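
    The Newtonian 42.2 min baseline, against which the relativistic corrections are measured, follows from simple harmonic motion through a uniform-density Earth and is easy to verify:

```python
import math

# A straight-line fall through a uniform-density Earth is simple harmonic
# motion; the one-way trip time is pi*sqrt(R/g), independent of the chord.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
R = 6.371e6            # Earth radius, m
g = G * M / R ** 2     # surface gravity, ~9.82 m/s^2

t = math.pi * math.sqrt(R / g)
print(t / 60)          # ~42.2 minutes
```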

  5. Lectures on Quantum Gravity

    Gomberoff, Andres

    2006-01-01

    The 2002 Pan-American Advanced Studies Institute School on Quantum Gravity was held at the Centro de Estudios Cientificos (CECS),Valdivia, Chile, January 4-14, 2002. The school featured lectures by ten speakers, and was attended by nearly 70 students from over 14 countries. A primary goal was to foster interaction and communication between participants from different cultures, both in the layman’s sense of the term and in terms of approaches to quantum gravity. We hope that the links formed by students and the school will persist throughout their professional lives, continuing to promote interaction and the essential exchange of ideas that drives research forward. This volume contains improved and updated versions of the lectures given at the School. It has been prepared both as a reminder for the participants, and so that these pedagogical introductions can be made available to others who were unable to attend. We expect them to serve students of all ages well.

  6. Simplicial quantum gravity

    Hartle, J.B.

    1985-01-01

    Simplicial approximation and the ideas associated with the Regge calculus provide a concrete way of implementing a sum-over-histories formulation of quantum gravity. A simplicial geometry is made up of flat simplices joined together in a prescribed way, together with an assignment of lengths to their edges. A sum over simplicial geometries is a sum over the different ways the simplices can be joined together, with an integral over their edge lengths. The construction of the simplicial Euclidean action for this approach to quantum general relativity is illustrated. The recovery of the diffeomorphism group in the continuum limit is discussed. Some possible classes of simplicial complexes with which to define a sum over topologies are described. In two-dimensional quantum gravity, it is argued that a reasonable class is the class of pseudomanifolds

  7. Quantum gravity and quantum cosmology

    Papantonopoulos, Lefteris; Siopsis, George; Tsamis, Nikos

    2013-01-01

    Quantum gravity has developed into a fast-growing subject in physics and it is expected that probing the high-energy and high-curvature regimes of gravitating systems will shed some light on how to eventually achieve an ultraviolet complete quantum theory of gravity. Such a theory would provide the much needed information about fundamental problems of classical gravity, such as the initial big-bang singularity, the cosmological constant problem, Planck scale physics and the early-time inflationary evolution of our Universe.   While in the first part of this book concepts of quantum gravity are introduced and approached from different angles, the second part discusses these theories in connection with cosmological models and observations, thereby exploring which types of signatures of modern and mathematically rigorous frameworks can be detected by experiments. The third and final part briefly reviews the observational status of dark matter and dark energy, and introduces alternative cosmological models.   ...

  8. Lithium-ion battery capacity fading dynamics modelling for formulation optimization: A stochastic approach to accelerate the design process

    Tao, Laifa; Cheng, Yujie; Lu, Chen; Su, Yuzhuan; Chong, Jin; Jin, Haizu; Lin, Yongshou; Noktehdan, Azadeh

    2017-01-01

    Highlights: •The model is linked to known physicochemical degradation processes and material properties. •The aging dynamics of various battery formulations can be understood with the proposed model. •A large number of experiments can be avoided, accelerating the battery design process. •The approach can describe batteries under various operating conditions. •The proposed model is simple and easily implemented. -- Abstract: A five-state nonhomogeneous Markov chain model, which is an effective and promising way to accelerate the Li-ion battery design process by investigating the capacity fading dynamics of different formulations during the battery design phase, is reported. The parameters of this model are linked to known physicochemical degradation dynamics and material properties. Herein, the states and behaviors of the active materials in Li-ion batteries are modelled. To verify the efficiency of the proposed model, a dataset from approximately 3 years of cycling capacity fading experiments on various formulations using several different materials, provided by Contemporary Amperex Technology Limited (CATL), as well as a NASA dataset, are employed. The capabilities of the proposed model for different amounts (50%, 70%, and 90%) of available experimental capacity data are tested and analyzed to assist with the final design determination for manufacturers. The average relative errors of cycle life prediction acquired from these tests are less than 2.4%, 0.8%, and 0.3% when only 50%, 70%, and 90% of the data, respectively, are available for different anode materials, electrolyte materials, and individual batteries. Furthermore, the variance is 0.518% when only 50% of the data are available; i.e., one can save at least 50% of the total experimental time and cost with an accuracy greater than 97% in the design phase, which demonstrates an effective and promising way to accelerate the Li-ion battery design process. The qualitative and quantitative analyses
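
    As an illustration of the model class (not CATL's fitted parameters), a five-state nonhomogeneous chain with a cycle-dependent degradation probability yields an expected capacity-fade curve:

```python
import numpy as np

# Toy five-state nonhomogeneous Markov chain for capacity fade: each state
# carries a capacity fraction, and the per-cycle degradation probability p
# grows with cycle number n (all numbers hypothetical).
capacity = np.array([1.00, 0.95, 0.88, 0.80, 0.70])   # capacity fraction per state

def transition(n):
    p = min(0.002 + 1e-6 * n, 0.05)        # cycle-dependent degradation probability
    P = np.eye(5) * (1 - p)
    P[np.arange(4), np.arange(4) + 1] = p  # move one state "worse" with prob p
    P[4, 4] = 1.0                          # the last state is absorbing
    return P

dist = np.array([1.0, 0, 0, 0, 0])         # all cells start fresh
fade = []
for n in range(2000):
    dist = dist @ transition(n)
    fade.append(dist @ capacity)           # expected capacity after cycle n
print(fade[-1])
```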

  9. Exploring Ackermann and LQR stability control of stochastic state-space model of hexacopter equipped with robotic arm

    Ibrahim, I. N.; Akkad, M. A. Al; Abramov, I. V.

    2018-05-01

    This paper discusses the control of Unmanned Aerial Vehicles (UAVs) for active interaction with and manipulation of objects. The manipulator motion with an unknown payload was analysed with respect to force and moment disturbances, which influence the mass distribution and the centre of gravity (CG). Therefore, a general mathematical model of hexacopter dynamics was formulated, from which a stochastic state-space model was extracted in order to build anti-disturbance controllers. Based on the compound pendulum method, the disturbance model that simulates the robotic arm with a payload was inserted into the stochastic model. This study investigates two types of controllers in order to assess the stability of the hexacopter: a controller based on Ackermann's method and another based on the linear quadratic regulator (LQR) approach. The latter constitutes a challenge for UAV control performance, especially in the presence of uncertainties and disturbances.
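
    For the LQR side, a gain can be sketched for a toy double integrator standing in for one translational axis of the linearized vehicle, assuming SciPy's solve_continuous_are; the weights are hypothetical:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR gain K = R^-1 B^T P from the continuous algebraic Riccati equation,
# for a double integrator (position/velocity of one translational axis).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])       # penalize position error more than velocity
R = np.array([[0.1]])          # control effort penalty

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)         # state feedback u = -K x
print(K, np.linalg.eigvals(A - B @ K))  # closed-loop poles in the left half-plane
```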

  10. Finance & Stochastic

    Giandomenico, Rossano

    2014-01-01

    The study analyses quantitative models for financial markets, starting from the geometric Brownian process and the Wiener process and analyzing Itô's lemma and the first passage model. Furthermore, the prices of options, Vanilla & Exotic, are analyzed by using the expected value and numerical models with geometric applications. From a contingent claim approach, ALM strategies are also analyzed so as to obtain the effective duration measure of liabilities, by assuming that clients buy options for protection and ...
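
    The geometric Brownian motion the study starts from can be simulated directly from its exact solution S_t = S_0 exp((μ − σ²/2)t + σW_t); the drift, volatility, and horizon below are arbitrary illustration values:

```python
import numpy as np

# Monte Carlo paths of geometric Brownian motion via the exact solution.
rng = np.random.default_rng(7)
S0, mu, sigma, T, n, paths = 100.0, 0.05, 0.2, 1.0, 252, 10_000
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal((paths, n))
W = dW.cumsum(axis=1)                      # Wiener paths at each time step
t = np.linspace(dt, T, n)
S = S0 * np.exp((mu - 0.5 * sigma ** 2) * t + sigma * W)
print(S[:, -1].mean())                     # ~ S0 * exp(mu * T) = 105.1
```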

  11. Stochastic integration and differential equations

    Protter, Philip E

    2003-01-01

    It has been 15 years since the first edition of Stochastic Integration and Differential Equations, A New Approach appeared, and in those years many other texts on the same subject have been published, often with connections to applications, especially mathematical finance. Yet in spite of the apparent simplicity of approach, none of these books has used the functional analytic method of presenting semimartingales and stochastic integration. Thus a 2nd edition seems worthwhile and timely, though it is no longer appropriate to call it "a new approach". The new edition has several significant changes, most prominently the addition of exercises for solution. These are intended to supplement the text, but lemmas needed in a proof are never relegated to the exercises. Many of the exercises have been tested by graduate students at Purdue and Cornell Universities. Chapter 3 has been completely redone, with a new, more intuitive and simultaneously elementary proof of the fundamental Doob-Meyer decomposition theorem, t...

  12. Noncausal stochastic calculus

    Ogawa, Shigeyoshi

    2017-01-01

    This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô Calculus is essentially based on the "hypothesis of causality", asking random functions to be adapted to a natural filtration generated by Brownian motion or more generally by square integrable martingale. The intention in this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such theory of noncausal stochastic calculus but ...

  13. Newtonian gravity in loop quantum gravity

    Smolin, Lee

    2010-01-01

    We apply a recent argument of Verlinde to loop quantum gravity, to conclude that Newton's law of gravity emerges in an appropriate limit and setting. This is possible because the relationship between area and entropy is realized in loop quantum gravity when boundaries are imposed on a quantum spacetime.

  14. Identifying the determinants of South Africa’s extensive and intensive trade margins: A gravity model approach

    Marianne Matthee

    2017-03-01

    Background: The significance of the paper is twofold. Firstly, it adds to the small but growing body of literature focusing on the decomposition of South Africa's export growth. Secondly, it identifies the determinants of the intensive and extensive margins of South Africa's exports – a topic that (as far as the authors are aware) has not been explored before. Aim: This paper aims to investigate a wide range of market access determinants that affect South Africa's export growth along the intensive and extensive margins. Setting: Export diversification has been identified as one of the critical pillars of South Africa's much-hoped-for economic revival. Although recent years have seen the country's export product mix evolving, there is still insufficient diversification into new markets with high value-added products. This is putting a damper on export performance as a whole and, in turn, hindering South Africa's economic growth. Methods: A Heckman selection gravity model is applied using highly disaggregated data. The first stage of the process revealed the factors affecting the probability of South Africa exporting to a particular destination (extensive margin). The second stage, which modelled trade flows, revealed the variables that affect export volumes (intensive margin). Results: The results showed that South Africa's export product mix is relatively varied, but the number of export markets is limited. In terms of the extensive margin (or the probability of exporting), economic variables such as the importing country's GDP and population have a positive impact on firms' decision to export. Other factors affecting the extensive margin are distance to the market (negative impact), cultural or language fit (positive impact), presence of a South African embassy abroad (positive impact), an existing free trade agreement with the Southern African Development Community (positive impact), and trade regulations and costs (negative impact).

  15. Stochastic geometry of critical curves, Schramm-Loewner evolutions and conformal field theory

    Gruzberg, Ilya A

    2006-01-01

    Conformally invariant curves that appear at critical points in two-dimensional statistical mechanics systems, and their fractal geometry, have received a lot of attention in recent years. On the one hand, Schramm (2000 Israel J. Math. 118 221 (Preprint math.PR/9904022)) has invented a new rigorous as well as practical calculational approach to critical curves, based on a beautiful unification of conformal maps and stochastic processes, by now known as Schramm-Loewner evolution (SLE). On the other hand, Duplantier (2000 Phys. Rev. Lett. 84 1363; Fractal Geometry and Applications: A Jubilee of Benoît Mandelbrot: Part 2 (Proc. Symp. Pure Math. vol 72) (Providence, RI: American Mathematical Society) p 365 (Preprint math-ph/0303034)) has applied boundary quantum gravity methods to calculate exact multifractal exponents associated with critical curves. In the first part of this paper, I provide a pedagogical introduction to SLE. I present mathematical facts from the theory of conformal maps and stochastic processes related to SLE. Then I review basic properties of SLE and provide practical derivations of various interesting quantities related to critical curves, including fractal dimensions and crossing probabilities. The second part of the paper is devoted to a way of describing critical curves using boundary conformal field theory (CFT) in the so-called Coulomb gas formalism. This description provides an alternative (to quantum gravity) way of obtaining the multifractal spectrum of critical curves using only traditional methods of CFT based on free bosonic fields.

  16. Applied stochastic modelling

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Introduction and Examples: Introduction; Examples of data sets. Basic Model Fitting: Introduction; Maximum-likelihood estimation for a geometric model; Maximum-likelihood for the beta-geometric model; Modelling polyspermy; Which model?; What is a model for?; Mechanistic models. Function Optimisation: Introduction; MATLAB: graphs and finite differences; Deterministic search methods; Stochastic search methods; Accuracy and a hybrid approach. Basic Likelihood Tools: Introduction; Estimating standard errors and correlations; Looking at surfaces: profile log-likelihoods; Confidence regions from profiles; Hypothesis testing in model selection; Score and Wald tests; Classical goodness of fit; Model selection bias. General Principles: Introduction; Parameterisation; Parameter redundancy; Boundary estimates; Regression and influence; The EM algorithm; Alternative methods of model fitting; Non-regular problems. Simulation Techniques: Introduction; Simulating random variables; Integral estimation; Verification; Monte Carlo inference; Estimating sampling distributi...

  17. Stochastic population theories

    Ludwig, Donald

    1974-01-01

    These notes serve as an introduction to stochastic theories which are useful in population biology; they are based on a course given at the Courant Institute, New York, in the Spring of 1974. In order to make the material accessible to a wide audience, it is assumed that the reader has only a slight acquaintance with probability theory and differential equations. The more sophisticated topics, such as the qualitative behavior of nonlinear models, are approached through a succession of simpler problems. Emphasis is placed upon intuitive interpretations, rather than upon formal proofs. In most cases, the reader is referred elsewhere for a rigorous development. On the other hand, an attempt has been made to treat simple, useful models in some detail. Thus these notes complement the existing mathematical literature, and there appears to be little duplication of existing works. The authors are indebted to Miss Jeanette Figueroa for her beautiful and speedy typing of this work. The research was supported by the Na...

  18. Removal of muscle artifact from EEG data: comparison between stochastic (ICA and CCA) and deterministic (EMD and wavelet-based) approaches

    Safieddine, Doha; Kachenoura, Amar; Albera, Laurent; Birot, Gwénaël; Karfoul, Ahmad; Pasnicu, Anca; Biraben, Arnaud; Wendling, Fabrice; Senhadji, Lotfi; Merlet, Isabelle

    2012-12-01

    Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifact, as EEG is a key diagnostic tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches of blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and the wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT in correcting the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained on simulations and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. When spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the better denoising result for less noisy data. These results suggest that
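
    A toy version of the ICA route, assuming scikit-learn's FastICA and a crude high-frequency-power criterion for flagging the muscle component (real pipelines use more careful criteria and real recordings):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Unmix synthetic "EEG" channels, drop the component with the most
# high-frequency power (a crude muscle signature), and reconstruct.
rng = np.random.default_rng(8)
t = np.arange(0, 10, 1 / 250)                        # 250 Hz sampling, 10 s
brain = np.sin(2 * np.pi * 6 * t)                     # slow "neural" rhythm
muscle = rng.standard_normal(t.size) * (t > 5)        # broadband burst after 5 s
mix = np.array([[1.0, 0.4], [0.7, 0.9], [0.3, 1.2]])  # 3 channels, 2 sources
X = (mix @ np.vstack([brain, muscle])).T              # samples x channels

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                              # estimated sources
hf_power = [np.mean(np.abs(np.diff(S[:, k])) ** 2) for k in range(S.shape[1])]
S[:, int(np.argmax(hf_power))] = 0.0                  # zero out the "muscle" component
X_clean = ica.inverse_transform(S)                    # denoised channels
```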

  19. A stochastic approach to fission

    Boilley, D.; Suraud, E.; Abe, Yasuhisa

    1992-01-01

    A microscopically derived Langevin equation is applied to thermally induced nuclear fission. An important memory effect is pointed out. A strong friction coefficient, calculated from microscopic quantities, tends to decrease the stationary limit of the fission rate and to increase the transient time. Fission is described as diffusion of a collective variable over a barrier, and a Langevin equation (LE) is used to study the phenomenon. A study of the stationary flow over the saddle point with a Fokker-Planck equation (FPE), equivalent to the LE, was used to give a formula for the stationary fission rate (or the reaction rate in chemistry applications). More recently, a complete study of the fission process was performed numerically with both the FPE and the LE. A long transient time, which could allow more pre-scission neutrons to evaporate, was pointed out. The derivation of this new LE is recalled, followed by a description of the memory dependence and of the effect of a large friction coefficient on the fission rate. (author) 6 refs., 3 figs
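
    A memoryless toy version of the Langevin picture (without the paper's microscopically derived memory kernel) estimates an escape rate by simulating overdamped diffusion over a quartic barrier; all parameters are hypothetical:

```python
import numpy as np

# Overdamped Langevin escape over the barrier of V(q) = q^4/4 - q^2/2:
# trajectories start in the left well, and "fission" is first passage to q = 1.
rng = np.random.default_rng(9)
dV = lambda q: q ** 3 - q            # force term -dV/dq enters with a minus sign
gamma, temp, dt, ntraj = 1.0, 0.2, 1e-3, 200

escape_times = []
for _ in range(ntraj):
    q, t = -1.0, 0.0                 # start in the left well
    while q < 1.0 and t < 500.0:     # stop at first passage (or a time cap)
        q += -dV(q) / gamma * dt + np.sqrt(2 * temp * dt / gamma) * rng.standard_normal()
        t += dt
    escape_times.append(t)
print(1.0 / np.mean(escape_times))   # crude rate; compare with the Kramers estimate
```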

  20. Loop Quantum Gravity.

    Rovelli, Carlo

    2008-01-01

    The problem of describing the quantum behavior of gravity, and thus understanding quantum spacetime, is still open. Loop quantum gravity is a well-developed approach to this problem. It is a mathematically well-defined background-independent quantization of general relativity, with its conventional matter couplings. Today research in loop quantum gravity forms a vast area, ranging from mathematical foundations to physical applications. Among the most significant results obtained so far are: (i) The computation of the spectra of geometrical quantities such as area and volume, which yield tentative quantitative predictions for Planck-scale physics. (ii) A physical picture of the microstructure of quantum spacetime, characterized by Planck-scale discreteness. Discreteness emerges as a standard quantum effect from the discrete spectra, and provides a mathematical realization of Wheeler's "spacetime foam" intuition. (iii) Control of spacetime singularities, such as those in the interior of black holes and the cosmological one. This, in particular, has opened up the possibility of a theoretical investigation into the very early universe and the spacetime regions beyond the Big Bang. (iv) A derivation of the Bekenstein-Hawking black-hole entropy. (v) Low-energy calculations, yielding n-point functions well defined in a background-independent context. The theory is at the roots of, or strictly related to, a number of formalisms that have been developed for describing background-independent quantum field theory, such as spin foams, group field theory, causal spin networks, and others. I give here a general overview of ideas, techniques, results and open problems of this candidate theory of quantum gravity, and a guide to the relevant literature.