WorldWideScience

Sample records for large scale stochastic

  1. Planning under uncertainty: solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research; Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft]

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to implement the methodology efficiently on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
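
    To make the decomposition-plus-sampling idea concrete, here is a minimal sketch of an L-shaped (Benders) method for a toy two-stage capacity problem. It uses plain Monte Carlo where the paper uses importance sampling, and all problem data (unit costs, the demand distribution, the capacity bound of 20) are invented for illustration; this is a sketch of the technique, not Infanger's implementation.

```python
# L-shaped (Benders) method with Monte Carlo sampling for a toy two-stage
# stochastic LP: choose capacity x, then pay a recourse penalty for unmet
# demand in each sampled scenario.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c0 = 1.0                                     # first-stage unit cost of x
q0 = 10.0                                    # penalty per unit of unmet demand
demands = rng.uniform(0.0, 10.0, size=200)   # Monte Carlo demand scenarios

def second_stage(x, d):
    """min q0*y s.t. y >= d - x, y >= 0: value and a subgradient in x."""
    y = max(d - x, 0.0)
    return q0 * y, (-q0 if d > x else 0.0)

x, theta = 0.0, -np.inf
cuts = []                                    # optimality cuts (slope, intercept)
for it in range(50):
    vals, slopes = zip(*(second_stage(x, d) for d in demands))
    Q, g = float(np.mean(vals)), float(np.mean(slopes))
    if Q <= theta + 1e-6:                    # recourse estimate matched: stop
        break
    cuts.append((g, Q - g * x))              # theta >= g*x + intercept
    A_ub = [[s, -1.0] for s, _ in cuts]      # rewritten as s*x - theta <= -b
    b_ub = [-b for _, b in cuts]
    res = linprog([c0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 20.0), (None, None)])
    x, theta = res.x
print(f"capacity x ~ {x:.2f} after {it + 1} cuts")   # ~9 for these costs
```

    Each iteration averages the scenario subproblem values and subgradients into a single optimality cut for the master problem; with these costs the iterates converge near the newsvendor solution x = 9.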

  2. Large-scale stochasticity in Hamiltonian systems

    International Nuclear Information System (INIS)

    Escande, D.F.

    1982-01-01

    Large scale stochasticity (L.S.S.) in Hamiltonian systems is defined on the paradigm Hamiltonian H(v,x,t) = v^2/2 - M cos x - P cos k(x-t), which describes the motion of one particle in two electrostatic waves. A renormalization transformation T_r is described which acts as a microscope that focusses on a given KAM (Kolmogorov-Arnold-Moser) torus in phase space. Though approximate, T_r yields the threshold of L.S.S. in H with an error of 5-10%. The universal behaviour of KAM tori is predicted: for instance the scale invariance of KAM tori and the critical exponent of the Lyapunov exponent of cantori. The Fourier expansion of KAM tori is computed and several conjectures by L. Kadanoff and S. Shenker are proved. Chirikov's standard mapping for stochastic layers is derived in a simpler way and the width of the layers is computed. A simpler renormalization scheme for these layers is defined. A Mathieu equation for describing the stability of a discrete family of cycles is derived. When combined with T_r, it allows one to prove the link between KAM tori and nearby cycles, conjectured by J. Greene, and, in particular, to compute the mean residue of a torus. The fractal diagrams defined by G. Schmidt are computed. A sketch of a methodology for computing the L.S.S. threshold in any two-degree-of-freedom Hamiltonian system is given. (Auth.)
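
    The onset of large-scale stochasticity is easy to observe numerically in the closely related Chirikov standard map mentioned in the abstract. The sketch below is not Escande's renormalization; it simply bisects on the kick strength K for global momentum diffusion, giving a rough upper bound that approaches the known threshold K_c ~ 0.9716 as the orbit and iteration budgets grow (all counts here are arbitrary choices).

```python
# Crude numerical estimate of the large-scale stochasticity threshold in
# Chirikov's standard map: below K_c, KAM tori confine momentum to one cell;
# above it, orbits in the chaotic layer of the p=0 resonance diffuse freely.
import numpy as np

def diffuses(K, n_orbits=50, n_iter=20000):
    """True if any orbit launched near p=0 crosses a full cell |p| > 2*pi."""
    rng = np.random.default_rng(1)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_orbits)
    p = 1e-3 * rng.normal(size=n_orbits)
    for _ in range(n_iter):
        p = p + K * np.sin(theta)
        theta = (theta + p) % (2.0 * np.pi)
        if np.any(np.abs(p) > 2.0 * np.pi):
            return True
    return False

lo, hi = 0.5, 2.0
for _ in range(15):                 # bisect between confined and diffusive
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if diffuses(mid) else (mid, hi)
print(f"threshold estimate K ~ {0.5 * (lo + hi):.2f}")  # upper bound for 0.9716
```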

  3. Bonus algorithm for large scale stochastic nonlinear programming problems

    CERN Document Server

    Diwekar, Urmila

    2015-01-01

    This book presents the details of the BONUS algorithm and its real-world applications in areas like sensor placement in large-scale drinking water networks, sensor placement in advanced power systems, water management in power systems, and capacity expansion of energy systems. A generalized method for stochastic nonlinear programming, based on a sampling-based approach for uncertainty analysis and statistical reweighting to obtain probability information, is demonstrated in this book. Stochastic optimization problems are difficult to solve since they involve nested optimization and uncertainty loops. There are two fundamental approaches used to solve such problems: the first is decomposition techniques, and the second identifies problem-specific structures and transforms the problem into a deterministic nonlinear programming problem. These techniques have significant limitations on either the objective function type or the underlying distributions for the uncertain variables. Moreover, these ...
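
    The reweighting idea at the heart of BONUS can be sketched in a few lines: draw one base sample, then estimate the expected objective at a new design point by reweighting that same sample with a density ratio instead of re-sampling. The toy objective, the Gaussian target density and its width below are invented; this is a schematic of the concept, not Diwekar's code.

```python
# Sampling-based reweighting: estimate E[f(d, u)] near a new design d_new
# from one fixed base sample, using kernel-density weight ratios.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
f = lambda d, u: (d - 2.0) ** 2 + d * u        # toy objective, uncertain u

d_base = rng.uniform(0.0, 4.0, 500)            # broad proposal over designs
u_base = rng.normal(1.0, 0.2, 500)             # uncertain parameter sample
f_base = f(d_base, u_base)
base_pdf = gaussian_kde(d_base)                # estimated base density

def expected_f(d_new, width=0.2):
    """Reweight the base sample as if designs clustered around d_new."""
    target = np.exp(-0.5 * ((d_base - d_new) / width) ** 2)
    w = target / base_pdf(d_base)              # density-ratio weights
    return np.sum(w * f_base) / np.sum(w)

# the reweighted estimator can drive a deterministic optimizer over d
for d in (1.0, 2.0, 3.0):
    print(d, round(expected_f(d), 3))
```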

  4. Minimizing the stochasticity of halos in large-scale structure surveys

    Science.gov (United States)

    Hamaus, Nico; Seljak, Uroš; Desjacques, Vincent; Smith, Robert E.; Baldauf, Tobias

    2010-08-01

    In recent work (Seljak, Hamaus, and Desjacques 2009) it was found that weighting central halo galaxies by halo mass can significantly suppress their stochasticity relative to the dark matter, well below the Poisson model expectation. This is useful for constraining relations between galaxies and the dark matter, such as the galaxy bias, especially in situations where sampling variance errors can be eliminated. In this paper we extend this study with the goal of finding the optimal mass-dependent halo weighting. We use N-body simulations to perform a general analysis of halo stochasticity and its dependence on halo mass. We investigate the stochasticity matrix, defined as C_ij ≡ ⟨(δ_i − b_i δ_m)(δ_j − b_j δ_m)⟩, where δ_m is the dark matter overdensity in Fourier space, δ_i the halo overdensity of the i-th halo mass bin, and b_i the corresponding halo bias. In contrast to the Poisson model predictions we detect nonvanishing correlations between different mass bins. We also find the diagonal terms to be sub-Poissonian for the highest-mass halos. The diagonalization of this matrix results in one large and one low eigenvalue, with the remaining eigenvalues close to the Poisson prediction 1/n̄, where n̄ is the mean halo number density. The eigenmode with the lowest eigenvalue contains most of the information and the corresponding eigenvector provides an optimal weighting function to minimize the stochasticity between halos and dark matter. We find this optimal weighting function to match linear mass weighting at high masses, while at the low-mass end the weights approach a constant whose value depends on the low-mass cut in the halo mass function. This weighting further suppresses the stochasticity as compared to the previously explored mass weighting. Finally, we employ the halo model to derive the stochasticity matrix and the scale-dependent bias from an analytical perspective. It is remarkably successful in reproducing our numerical results and predicts that the
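
    The stochasticity matrix and its minimum-eigenvalue weighting are straightforward to compute once the overdensity fields are binned. The sketch below uses synthetic Gaussian modes with an assumed bias vector, per-bin noise amplitudes and a common correlated component in place of the paper's N-body halo catalogues.

```python
# Compute C_ij = <(delta_i - b_i delta_m)(delta_j - b_j delta_m)> for
# synthetic halo bins and extract the lowest-eigenvalue weighting.
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_bins = 5000, 4
delta_m = rng.normal(size=n_modes)                  # dark matter modes
bias = np.array([1.0, 1.5, 2.0, 3.0])               # assumed bias per mass bin
amp = np.array([0.6, 0.5, 0.4, 0.2])                # per-bin stochastic noise
shot = rng.normal(size=(n_bins, n_modes))           # independent scatter
common = rng.normal(size=n_modes)                   # correlated non-Poisson part
delta_h = bias[:, None] * delta_m + amp[:, None] * shot + 0.3 * common

b_hat = (delta_h @ delta_m) / (delta_m @ delta_m)   # measured bias b_i
resid = delta_h - b_hat[:, None] * delta_m
C = resid @ resid.T / n_modes                       # stochasticity matrix

evals, evecs = np.linalg.eigh(C)                    # ascending eigenvalues
w = evecs[:, 0]                                     # lowest-eigenvalue mode
print("eigenvalues:", np.round(evals, 4))           # one large, one low, rest mid
print("optimal weights:", np.round(w / np.abs(w).max(), 3))
```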

  5. A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high-dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently used as a preventive countermeasure, but they do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
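
    As a cartoon of the ADP machinery (value functions fitted over sampled inflow scenarios), the sketch below runs fitted value iteration for a single toy reservoir with a quadratic value approximator. The paper's policy-iteration scheme over 13 reservoirs is far richer; the reward, capacity, discount factor and inflow distribution here are all invented.

```python
# Fitted value iteration for one reservoir with stochastic inflows:
# V(s) is approximated by a quadratic in storage s.
import numpy as np

rng = np.random.default_rng(0)
S_MAX, gamma = 100.0, 0.95
inflows = rng.lognormal(mean=2.0, sigma=0.5, size=500)  # inflow scenarios

def reward(release):
    return np.sqrt(release)                 # concave hydropower revenue (toy)

phi = lambda s: np.stack([np.ones_like(s), s, s ** 2], axis=-1)
theta = np.zeros(3)                         # V(s) ~ phi(s) @ theta

for sweep in range(50):
    s = rng.uniform(0.0, S_MAX, 200)        # sampled storage states
    targets = np.empty_like(s)
    for i, si in enumerate(s):
        best = -np.inf
        for a in np.linspace(0.0, si, 11):  # candidate releases
            nxt = np.clip(si - a + inflows, 0.0, S_MAX)
            best = max(best, reward(a) + gamma * np.mean(phi(nxt) @ theta))
        targets[i] = best
    theta, *_ = np.linalg.lstsq(phi(s), targets, rcond=None)

print("fitted value-function coefficients:", theta)
```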

  6. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-12-01

    The ever-growing demand for wireless technologies necessitates the evolution of next-generation wireless networks that fulfill the diverse requirements of wireless users. However, upscaling existing wireless networks implies upscaling an intrinsic component in the wireless domain: the aggregate network interference. As interference is the main performance-limiting factor, it becomes crucial to develop a rigorous analytical framework that accurately characterizes the out-of-cell interference, in order to reap the benefits of emerging networks. Due to the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on stochastic geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact aggregate network interference characterization, by accounting for each of the interferers' signals in the large-scale network. Second, we show that the information about the interferers' symbols can be approximated via the Gaussian signaling approach. The developed mathematical model provides a twofold unification of the uplink and downlink cellular network analyses in the literature: it aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analyses, and it unifies the analysis for different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored
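
    The canonical stochastic-geometry calculation underlying such frameworks is the SIR distribution at a typical user when interferers form a Poisson point process. The Monte Carlo sketch below is textbook material rather than the thesis's unified model; the density, path-loss exponent, fixed serving distance and Rayleigh fading are all assumptions.

```python
# SIR coverage probability in a Poisson field of interferers.
import numpy as np

rng = np.random.default_rng(0)
lam = 1e-3            # interferer density per m^2
radius = 1000.0       # simulation disc radius (m)
alpha = 4.0           # path-loss exponent
serve = 50.0          # fixed serving-link distance (m), an assumption
theta = 1.0           # SIR threshold (linear, i.e. 0 dB)

def covered():
    n = rng.poisson(lam * np.pi * radius ** 2)
    r = radius * np.sqrt(rng.uniform(size=n))        # uniform points in a disc
    interference = np.sum(rng.exponential(size=n) * r ** -alpha)
    signal = rng.exponential() * serve ** -alpha     # Rayleigh fading
    return signal > theta * interference

coverage = np.mean([covered() for _ in range(2000)])
print(f"SIR coverage probability ~ {coverage:.2f}")
```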

  7. Stochastically Estimating Modular Criticality in Large-Scale Logic Circuits Using Sparsity Regularization and Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Mohammed Alawad

    2015-03-01

    This paper considers the problem of how to efficiently measure a large and complex information field with optimally few observations. Specifically, we investigate how to stochastically estimate modular criticality values in a large-scale digital circuit with a very limited number of measurements in order to minimize the total measurement effort and time. We prove that, through sparsity-promoting transform-domain regularization and by strategically integrating compressive sensing with Bayesian learning, more than 98% of the overall measurement accuracy can be achieved with fewer than 10% of the measurements required by a conventional approach that uses exhaustive measurements. Furthermore, we illustrate that the obtained criticality results can be utilized to selectively fortify large-scale digital circuits for operation with narrow voltage headrooms and in the presence of soft errors arising at near-threshold voltage levels, without excessive hardware overheads. Our numerical simulation results show that, by optimally allocating only 10% circuit redundancy, for some large-scale benchmark circuits we can achieve more than a threefold reduction in overall error probability, whereas randomly distributing the same 10% hardware resource yields less than a 2% improvement in the target circuit's overall robustness. Finally, we conjecture that our proposed approach can be readily applied to estimate other essential properties of digital circuits that are critical to designing and analyzing them, such as the observability measure in reliability analysis and the path delay estimation in stochastic timing analysis. The only key requirement of our proposed methodology is that these global information fields exhibit a certain degree of smoothness, which is universally true for almost any physical phenomenon.
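
    The smoothness-plus-few-measurements premise is the standard compressed-sensing setting. Below is a simplified sketch: a "criticality" field that is sparse in the DCT basis is reconstructed from roughly 15% random measurements with a plain l1 solver (ISTA), whereas the paper couples compressive sensing with Bayesian learning. Sizes, the sparsity pattern and the regularization weight are invented.

```python
# Recover a DCT-sparse field from a few random point measurements via ISTA.
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m = 256, 40
coef_true = np.zeros(n)
coef_true[[0, 3, 7]] = [5.0, 1.0, 0.5]           # smooth = few DCT modes
Psi = idct(np.eye(n), axis=0, norm="ortho")      # DCT synthesis basis
field = Psi @ coef_true                          # "criticality" over modules

idx = rng.choice(n, size=m, replace=False)       # sparse random measurements
A, y = Psi[idx, :], field[idx]

c, lam = np.zeros(n), 1e-3                       # minimize 0.5||Ac-y||^2 + lam||c||_1
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of gradient
for _ in range(3000):
    c = c - A.T @ (A @ c - y) / L                # gradient step
    c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)   # soft threshold

rel_err = np.linalg.norm(Psi @ c - field) / np.linalg.norm(field)
print(f"relative reconstruction error ~ {rel_err:.3f}")
```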

  8. Urban Freight Management with Stochastic Time-Dependent Travel Times and Application to Large-Scale Transportation Networks

    Directory of Open Access Journals (Sweden)

    Shichao Sun

    2015-01-01

    This paper addresses the vehicle routing problem (VRP) in large-scale urban transportation networks with stochastic time-dependent (STD) travel times. The subproblem of finding the optimal path connecting any pair of customer nodes in an STD network was solved through a robust approach that does not require the probability distributions of link travel times. Based on that, the proposed STD-VRP model can be converted into a normal time-dependent VRP (TD-VRP), and algorithms for such TD-VRPs can be introduced to obtain the solution. Numerical experiments were conducted to address STD-VRPTW instances of practical size on a real-world urban network, demonstrated here on the road network of Shenzhen, China. The stochastic time-dependent link travel times of the network were calibrated using historical floating-car data. A route construction algorithm was applied to solve the STD problem in 4 delivery scenarios efficiently. The computational results showed that the proposed STD-VRPTW model can improve the level of customer service by satisfying the time-window constraint under any circumstances. The improvement can be very significant, especially for large-scale network delivery tasks, with no further increase in cost or environmental impact.

  9. Low-frequency scaling applied to stochastic finite-fault modeling

    Science.gov (United States)

    Crane, Stephen; Motazedian, Dariush

    2014-01-01

    Stochastic finite-fault modeling is an important tool for simulating moderate to large earthquakes. It has proven to be useful in applications that require a reliable estimation of ground motions, mostly in the spectral frequency range of 1 to 10 Hz, which is the range of most interest to engineers. However, since there can be little resemblance between the low-frequency spectra of large and small earthquakes, this portion can be difficult to simulate using stochastic finite-fault techniques. This paper introduces two different methods of scaling low-frequency spectra for stochastic finite-fault modeling. One method multiplies the subfault source spectrum by an empirical function with three parameters: the level of scaling and the start and end frequencies of the taper. This empirical function adjusts the earthquake spectrum only between the desired frequencies, conserving seismic moment in the simulated spectra. The other method adds an empirical low-frequency coefficient to the subfault corner frequency, changing the ratio between high and low frequencies. In this case the entire earthquake spectrum is adjusted for each simulation, so the seismic moment may not be conserved for a simulated earthquake. These low-frequency scaling methods were used to reproduce recorded earthquake spectra from several earthquakes in the Pacific Earthquake Engineering Research Center (PEER) Next Generation Attenuation Models (NGA) database. Two methods were used to determine the stochastic parameters of best fit for each earthquake: a general residual analysis and an earthquake-specific residual analysis. Both methods resulted in comparable values for stress drop and the low-frequency scaling parameters; however, the earthquake-specific residual analysis obtained a more accurate distribution of the averaged residuals.
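
    A minimal sketch of the first method as described: a Brune-type displacement source spectrum is multiplied by a factor that equals 1 outside the taper band, so the zero-frequency plateau, and hence the seismic moment, is untouched. The abstract does not give the exact functional form, so the bump shape below is one plausible reading, and all frequencies and levels are illustrative.

```python
# Band-limited low-frequency scaling of a subfault source spectrum.
import numpy as np

f = np.logspace(-2, 1.3, 400)                  # frequency grid (Hz)
f_c, M0 = 0.2, 1.0                             # corner frequency, moment (scaled)
spectrum = M0 / (1 + (f / f_c) ** 2)           # Brune displacement spectrum

def lf_scaling(f, level, f1, f2):
    """Factor equal to 1 outside [f1, f2], reaching `level` mid-band."""
    x = np.clip(np.log(f / f1) / np.log(f2 / f1), 0.0, 1.0)
    return level ** np.sin(np.pi * x)          # smooth bump, 1 at both ends

scaled = spectrum * lf_scaling(f, level=1.8, f1=0.02, f2=0.5)
# the plateau below f1 is unchanged, so seismic moment is conserved; the
# paper's second method instead perturbs f_c itself, rescaling the whole
# spectrum so that moment need not be conserved
```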

  10. Adaptive Fuzzy Output-Constrained Fault-Tolerant Control of Nonlinear Stochastic Large-Scale Systems With Actuator Faults.

    Science.gov (United States)

    Li, Yongming; Ma, Zhiyao; Tong, Shaocheng

    2017-09-01

    The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems in pure-feedback form. The nonlinear systems considered in this paper possess unstructured uncertainties, unknown interconnection terms and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problem of unstructured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the problem of unmeasurable states. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability, and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.

  11. Calculation of large scale relative permeabilities from stochastic properties of the permeability field and fluid properties

    Energy Technology Data Exchange (ETDEWEB)

    Lenormand, R.; Thiele, M.R. [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    The paper describes the method and presents preliminary results for the calculation of homogenized relative permeabilities using stochastic properties of the permeability field. In heterogeneous media, the spreading of an injected fluid is mainly due to the permeability heterogeneity and viscous fingering. At large scale, when the heterogeneous medium is replaced by a homogeneous one, we need to introduce a homogenized (or pseudo) relative permeability to obtain the same spreading. Generally, it is derived using fine-grid numerical simulations (Kyte and Berry). However, this operation is time consuming and cannot be performed for all the meshes of the reservoir. We propose an alternative method which uses the information given by the stochastic properties of the field without any numerical simulation. The method is based on recent developments on homogenized transport equations (the "MHD" equation, Lenormand SPE 30797). The MHD equation accounts for the three basic mechanisms of spreading of the injected fluid: (1) dispersive spreading due to small-scale randomness, characterized by a macrodispersion coefficient D; (2) convective spreading due to large-scale heterogeneities (layers), characterized by a heterogeneity factor H; (3) viscous fingering, characterized by an apparent viscosity ratio M. In the paper, we first derive the parameters D and H as functions of the variance and correlation length of the permeability field. The results are shown to be in good agreement with fine-grid simulations. The pseudo relative permeabilities are then derived as functions of D, H and M. The main result is that this approach leads to time-dependent pseudo relative permeabilities. Finally, the calculated pseudo relative permeabilities are compared to the values derived by history matching using fine-grid numerical simulations.

  12. Stochastic structure of annual discharges of large European rivers

    Directory of Open Access Journals (Sweden)

    Stojković Milan

    2015-03-01

    Water resources have become a guarantee for sustainable development on both local and global scales. Exploiting water resources involves the development of hydrological models for water management planning. In this paper we present a new stochastic model for the generation of mean annual flows. The model is based on historical characteristics of the time series of annual flows and consists of a trend component, a long-term periodic component and a stochastic component. The residual left after removing the deterministic components represents the model error, which is treated as a random time series and generated by the single bootstrap model (SBM). A stochastic ensemble of error terms at a single hydrological station is formed using the SBM method. The resulting stochastic model provides simulated annual flows and is a useful tool for integrated river basin planning and water management studies. The model is applied to ten large European rivers with long observation records. Validation of the model results suggests that the stochastic flows simulated by the model can be used for hydrological simulations in river basins.
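
    A compact sketch of the described composition: deterministic trend plus long-term periodic component, with bootstrapped residuals supplying the stochastic part. The synthetic "observed" record, the 35-year period and all coefficients are invented; the real model fits these pieces to gauged annual flows.

```python
# Trend + periodic fit by least squares, then bootstrap the residuals to
# generate a stochastic ensemble of annual flow series.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2015)
t = (years - years[0]).astype(float)
signal = 1000.0 - 0.8 * t + 60.0 * np.sin(2 * np.pi * t / 35.0)
flows = signal + rng.normal(0.0, 40.0, t.size)    # synthetic "observed" record

X = np.column_stack([np.ones_like(t), t,           # trend + periodic design
                     np.sin(2 * np.pi * t / 35.0),
                     np.cos(2 * np.pi * t / 35.0)])
coef, *_ = np.linalg.lstsq(X, flows, rcond=None)
resid = flows - X @ coef                           # stochastic component

def generate(n_series):
    """Single bootstrap model: resample residuals with replacement."""
    err = rng.choice(resid, size=(n_series, t.size), replace=True)
    return X @ coef + err

ensemble = generate(100)                           # stochastic flow ensemble
```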

  13. WKB theory of large deviations in stochastic populations

    Science.gov (United States)

    Assaf, Michael; Meerson, Baruch

    2017-06-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells and on to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of the dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
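
    For a single well-mixed population, the WKB machinery reduces to a small set of formulas. The following is the textbook version in our notation, standard in this literature, rather than a result specific to this review.

```latex
% WKB (eikonal) ansatz for the quasi-stationary distribution of a
% birth-death process with rates N*lambda(q) and N*mu(q), q = n/N:
P_n \;\sim\; e^{-N S(q)} .
% Substituting into the master equation gives, to leading order in 1/N,
% a stationary Hamilton-Jacobi equation H(q, S'(q)) = 0 with Hamiltonian
H(q,p) \;=\; \lambda(q)\left(e^{p}-1\right) + \mu(q)\left(e^{-p}-1\right) .
% The mean extinction time is then exponentially long in the population
% size, controlled by the action along the zero-energy activation path:
\tau \;\sim\; e^{N S_0}, \qquad
S_0 \;=\; \int_0^{q^*} \ln\!\frac{\lambda(q)}{\mu(q)}\, dq ,
% where q^* is the attracting fixed point of the deterministic dynamics.
```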

  14. WKB theory of large deviations in stochastic populations

    International Nuclear Information System (INIS)

    Assaf, Michael; Meerson, Baruch

    2017-01-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells and on to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of the dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work. (topical review)

  15. Large-deviation principles, stochastic effective actions, path entropies, and the structure and meaning of thermodynamic descriptions

    International Nuclear Information System (INIS)

    Smith, Eric

    2011-01-01

    The meaning of thermodynamic descriptions is found in large-deviations scaling (Ellis 1985 Entropy, Large Deviations, and Statistical Mechanics (New York: Springer); Touchette 2009 Phys. Rep. 478 1-69) of the probabilities for fluctuations of averaged quantities. The central function expressing large-deviations scaling is the entropy, which is the basis both for fluctuation theorems and for characterizing the thermodynamic interactions of systems. Freidlin-Wentzell theory (Freidlin and Wentzell 1998 Random Perturbations in Dynamical Systems 2nd edn (New York: Springer)) provides a quite general formulation of large-deviations scaling for non-equilibrium stochastic processes, through a remarkable representation in terms of a Hamiltonian dynamical system. A number of related methods now exist to construct the Freidlin-Wentzell Hamiltonian for many kinds of stochastic processes; one method, due to Doi (1976 J. Phys. A: Math. Gen. 9 1465-78; 1976 J. Phys. A: Math. Gen. 9 1479) and Peliti (1985 J. Physique 46 1469; 1986 J. Phys. A: Math. Gen. 19 L365), appropriate to integer counting statistics, is widely used in reaction-diffusion theory. Using these tools together with a path-entropy method due to Jaynes (1980 Annu. Rev. Phys. Chem. 31 579-601), this review shows how to construct entropy functions that both express large-deviations scaling of fluctuations, and describe system-environment interactions, for discrete stochastic processes either at or away from equilibrium. A collection of variational methods familiar within quantum field theory, but less commonly applied to the Doi-Peliti construction, is used to define a 'stochastic effective action', which is the large-deviations rate function for arbitrary non-equilibrium paths. We show how common principles of entropy maximization, applied to different ensembles of states or of histories, lead to different entropy functions and different sets of thermodynamic state variables. Yet the relations among all these levels of

  16. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium-size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate the spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that requires overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale and Shape (GAMLSS).

  17. Anomalous scaling of stochastic processes and the Moses effect.

    Science.gov (United States)

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
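
    As a concrete illustration of measuring an overall scaling exponent (the Hurst-type exponent that the abstract relates to the Joseph, Noah and Moses exponents), the sketch below estimates H from the growth of the ensemble median of |X_t| for a toy process with heavy-tailed but finite-variance increments. The finer decomposition into the three separate effects is not reproduced here, and the process itself is an invented example.

```python
# Estimate the overall scaling exponent H from how the distribution width
# of X_t grows with time, using a robust (median-based) width measure.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 2000, 1024
increments = rng.standard_t(df=3, size=(n_paths, n_steps))  # heavy-tailed
X = np.cumsum(increments, axis=1)                           # ensemble of paths

ts = 2 ** np.arange(4, 11)                                  # probe times
width = [np.median(np.abs(X[:, t - 1])) for t in ts]
H = np.polyfit(np.log(ts), np.log(width), 1)[0]
print(f"overall scaling exponent H ~ {H:.2f}")   # ~0.5 here: the increments
# are stationary, uncorrelated and of finite variance, so none of the three
# anomalous-scaling mechanisms is active
```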

  18. Anomalous scaling of stochastic processes and the Moses effect

    Science.gov (United States)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  19. Reduced linear noise approximation for biochemical reaction networks with time-scale separation: The stochastic tQSSA+

    Science.gov (United States)

    Herath, Narmada; Del Vecchio, Domitilla

    2018-03-01

    Biochemical reaction networks often involve reactions that take place on different time scales, giving rise to "slow" and "fast" system variables. This property is widely used in the analysis of systems to obtain dynamical models with reduced dimensions. In this paper, we consider stochastic dynamics of biochemical reaction networks modeled using the Linear Noise Approximation (LNA). Under time-scale separation conditions, we obtain a reduced-order LNA that approximates both the slow and fast variables in the system. We mathematically prove that the first and second moments of this reduced-order model converge to those of the full system as the time-scale separation becomes large. These mathematical results, in particular, provide a rigorous justification to the accuracy of LNA models derived using the stochastic total quasi-steady state approximation (tQSSA). Since, in contrast to the stochastic tQSSA, our reduced-order model also provides approximations for the fast variable stochastic properties, we term our method the "stochastic tQSSA+". Finally, we demonstrate the application of our approach on two biochemical network motifs found in gene-regulatory and signal transduction networks.

  20. Bridging time scales in cellular decision making with a stochastic bistable switch

    Directory of Open Access Journals (Sweden)

    Waldherr Steffen

    2010-08-01

    Background: Cellular transformations which involve a significant phenotypical change of the cell's state use bistable biochemical switches as underlying decision systems. Some of these transformations act over a very long time scale on the cell population level, up to the entire lifespan of the organism. Results: In this work, we aim at linking cellular decisions taking place on a time scale of years to decades with the biochemical dynamics in signal transduction and gene regulation, occurring on a time scale of minutes to hours. We show that a stochastic bistable switch forms a viable biochemical mechanism to implement decision processes on long time scales. As a case study, the mechanism is applied to model the initiation of follicle growth in mammalian ovaries, where the physiological time scale of follicle pool depletion is on the order of the organism's lifespan. We construct a simple mathematical model for this process based on experimental evidence for the involved genetic mechanisms. Conclusions: Despite the underlying stochasticity, the proposed mechanism turns out to yield reliable behavior in large populations of cells subject to the considered decision process. Our model explains how the physiological time constant may emerge from the intrinsic stochasticity of the underlying gene regulatory network. Apart from ovarian follicles, the proposed mechanism may also be of relevance for other physiological systems where cells take binary decisions over a long time scale.
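
    The time-scale separation the authors exploit is easy to reproduce with a generic stochastic bistable switch. The Gillespie simulation below uses a self-activating gene with invented rates (it is not the paper's follicle model): the molecule count dwells near one of two stable levels for times far longer than any individual reaction, with rare noise-driven switches in between.

```python
# Gillespie (SSA) simulation of a self-activating gene: basal production
# plus a steep Hill-type positive feedback, linear degradation.
import numpy as np

rng = np.random.default_rng(2)

def propensities(x):
    prod = 4.0 + 16.0 * x ** 4 / (10.0 ** 4 + x ** 4)   # basal + feedback
    return np.array([prod, 1.0 * x])                    # production, decay

x, t = 4, 0.0
times, states = [t], [x]
while t < 500.0:
    a = propensities(x)
    t += rng.exponential(1.0 / a.sum())                 # time to next event
    x += 1 if rng.uniform() < a[0] / a.sum() else -1    # which event fires
    times.append(t)
    states.append(x)
# `states` dwells near the two stable levels (~4 and ~19 for these rates)
# for many reaction lifetimes: a slow binary decision emerging from fast
# biochemical events
```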

  1. Stochastic models for structured populations scaling limits and long time behavior

    CERN Document Server

    Meleard, Sylvie

    2015-01-01

    In this contribution, several probabilistic tools to study population dynamics are developed. The focus is on scaling limits of qualitatively different stochastic individual based models and the long time behavior of some classes of limiting processes. Structured population dynamics are modeled by measure-valued processes describing the individual behaviors and taking into account the demographic and mutational parameters, and possible interactions between individuals. Many quantitative parameters appear in these models and several relevant normalizations are considered, leading to infinite-dimensional deterministic or stochastic large-population approximations. Biologically relevant questions are considered, such as extinction criteria, the effect of large birth events, the impact of environmental catastrophes, the mutation-selection trade-off, recovery criteria in parasite infections, and genealogical properties of a sample of individuals. These notes originated from a lecture series on Structured P...

  2. Stochastic time scale for the Universe

    International Nuclear Information System (INIS)

    Szydlowski, M.; Golda, Z.

    1986-01-01

    An intrinsic time scale is naturally defined within stochastic gradient dynamical systems. It should be interpreted as a "relaxation time" to a local potential minimum after the system has been randomly perturbed. It is shown that for a flat Friedman-like cosmological model this time scale is of the order of the age of the Universe. 7 refs. (author)

  3. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions; consequently, the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE, using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because, in stochastic chemical kinetics, the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
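
    For readers unfamiliar with CLEs: a chemical Langevin equation replaces discrete reaction counts by a diffusion whose noise amplitude is the square root of the summed propensities. A minimal Euler-Maruyama discretization for a single birth-death (gene-expression) species is sketched below; the rates and step size are arbitrary, and this is not the paper's two-time-scale reduction.

```python
# Euler-Maruyama integration of the CLE dX = (k - g X) dt + sqrt(k + g X) dW
# for constant production k and linear degradation g.
import numpy as np

rng = np.random.default_rng(0)
k, g = 50.0, 1.0                              # production, degradation rates
dt, n_steps = 1e-3, 100_000
x = np.empty(n_steps)
x[0] = 10.0
for i in range(1, n_steps):
    xp = x[i - 1]
    drift = k - g * xp
    noise = np.sqrt(max(k + g * xp, 0.0))     # sum of propensities under sqrt
    x[i] = xp + drift * dt + noise * np.sqrt(dt) * rng.normal()

half = x[n_steps // 2:]                       # discard transient
print(half.mean(), half.var())                # both ~ k/g = 50 at stationarity
```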

  4. On the probability distribution of the stochastic saturation scale in QCD

    International Nuclear Information System (INIS)

    Marquet, C.; Soyez, G.; Xiao Bowen

    2006-01-01

    It was recently noticed that high-energy scattering processes in QCD have a stochastic nature. An event-by-event scattering amplitude is characterised by a saturation scale, which is a random variable. The statistical ensemble of saturation scales formed with all the events is distributed according to a probability law whose cumulants have recently been computed. In this work, we obtain the probability distribution from the cumulants. We prove that it can be considered as Gaussian over a large domain, which we specify, and our results are confirmed by numerical simulations.

  5. Stochastic four-way coupling of gas-solid flows for Large Eddy Simulations

    Science.gov (United States)

    Curran, Thomas; Denner, Fabian; van Wachem, Berend

    2017-11-01

    The interaction of solid particles with turbulence has long been a topic of interest for predicting the behavior of industrially relevant flows. For the turbulent fluid phase, Large Eddy Simulation (LES) methods are widely used for their low computational cost, leaving only the sub-grid scales (SGS) of turbulence to be modelled. Although LES has seen great success in predicting the behavior of turbulent single-phase flows, the development of LES for turbulent gas-solid flows is still in its infancy. This contribution aims at constructing a model to describe the four-way coupling of particles in an LES framework, by considering the role particles play in the transport of turbulent kinetic energy across the scales. Firstly, a stochastic model reconstructing the sub-grid velocities for the particle tracking is presented. Secondly, whereas most models treat particle-particle interaction deterministically, we introduce a stochastic model for estimating the collision probability. All results are validated against fully resolved DNS-DPS simulations. The final goal of this contribution is to propose a global stochastic method adapted to two-phase LES simulation where the number of particles considered can be significantly increased. Financial support from PetroBras is gratefully acknowledged.

  6. Stochastic layer scaling in the two-wire model for divertor tokamaks

    Science.gov (United States)

    Ali, Halima; Punjabi, Alkesh; Boozer, Allen

    2009-06-01

    The question of magnetic field structure in the vicinity of the separatrix in divertor tokamaks is studied. The authors have investigated this problem earlier in a series of papers, using various mathematical techniques. In the present paper, the two-wire model (TWM) [Reiman, A. 1996 Phys. Plasmas 3, 906] is considered. It is noted that, in the TWM, it is useful to consider an extra equation expressing magnetic flux conservation. This equation does not add any more information to the TWM, since the equation is derived from the TWM. This equation is useful for controlling the step size in the numerical integration of the TWM equations. The TWM with the extra equation is called the flux-preserving TWM. Nevertheless, the technique is apparently still plagued by numerical inaccuracies when the perturbation level is low, resulting in an incorrect scaling of the stochastic layer width. The stochastic broadening of the separatrix in the flux-preserving TWM is compared with that in the low mn (poloidal mode number m and toroidal mode number n) map (LMN) [Ali, H., Punjabi, A., Boozer, A. and Evans, T. 2004 Phys. Plasmas 11, 1908]. The flux-preserving TWM and LMN both give Boozer-Rechester 0.5 power scaling of the stochastic layer width with the amplitude of magnetic perturbation when the perturbation is sufficiently large [Boozer, A. and Rechester, A. 1978, Phys. Fluids 21, 682]. The flux-preserving TWM gives a larger stochastic layer width when the perturbation is low, while the LMN gives correct scaling in the low perturbation region. Area-preserving maps such as the LMN respect the Hamiltonian structure of field line trajectories, and have the added advantage of computational efficiency. Also, for a 1½ degree-of-freedom Hamiltonian system such as field lines, maps do not give Arnold diffusion.

  7. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    Science.gov (United States)

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  8. Efficient stochastic approaches for sensitivity studies of an Eulerian large-scale air pollution model

    Science.gov (United States)

    Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.

    2017-10-01

    Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs with respect to variations or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been carried out, including a comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been carried out for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices that are small in value. This is crucial, since even small indices may need to be estimated accurately in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
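
    First-order Sobol indices have a simple Monte Carlo (pick-and-freeze) estimator, sketched below on the standard Ishigami test function rather than the Danish Eulerian Model; the sample size and the test function are illustrative choices.

```python
# Pick-and-freeze Monte Carlo estimator of first-order Sobol indices for
# the Ishigami function (a = 7, b = 0.1), inputs uniform on [-pi, pi].
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 3

def f(x):
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = f(A), f(B)
var = fA.var()
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # resample only coordinate i
    S_i = np.mean(fB * (f(ABi) - fA)) / var
    print(f"S_{i + 1} ~ {S_i:.3f}")       # known values ~0.31, 0.44, 0.00
```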

  9. Multiple-scale stochastic processes: Decimation, averaging and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Bo, Stefano, E-mail: stefano.bo@nordita.org [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Celani, Antonio [Quantitative Life Sciences, The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, I-34151 - Trieste (Italy)

    2017-02-07

    Recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For efficient modeling one usually focuses on the slower degrees of freedom, and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories, such as residence times, counting statistics, fluxes and entropy production, which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties, and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we pedagogically present them here, as natural extensions of the ones employed for the trajectories. We also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.

  10. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    International Nuclear Information System (INIS)

    Cruz, Roberto de la; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-01-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge

  11. Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth

    Science.gov (United States)

    de la Cruz, Roberto; Guerrero, Pilar; Calvo, Juan; Alarcón, Tomás

    2017-12-01

    The development of hybrid methodologies is of current interest in both multi-scale modelling and stochastic reaction-diffusion systems regarding their applications to biology. We formulate a hybrid method for stochastic multi-scale models of cell populations that extends the remit of existing hybrid methods for reaction-diffusion systems. The method is developed for a stochastic multi-scale model of tumour growth, i.e. population-dynamical models which account for the effects of intrinsic noise affecting both the number of cells and the intracellular dynamics. In order to formulate this method, we develop a coarse-grained approximation for both the full stochastic model and its mean-field limit. This approximation involves averaging out the age-structure (which accounts for the multi-scale nature of the model) by assuming that the age distribution of the population settles onto equilibrium very fast. We then couple the coarse-grained mean-field model to the full stochastic multi-scale model. By doing so, within the mean-field region, we neglect noise in both cell numbers (population) and their birth rates (structure). This implies that, in addition to the issues that arise in stochastic reaction-diffusion systems, we need to account for the age-structure of the population when attempting to couple both descriptions. We exploit our coarse-grained model so that, within the mean-field region, the age distribution is in equilibrium and we know its explicit form. This allows us to couple both domains consistently, as upon transference of cells from the mean-field to the stochastic region, we sample the equilibrium age distribution. Furthermore, our method allows us to investigate the effects of intracellular noise, i.e. fluctuations of the birth rate, on collective properties such as travelling wave velocity. We show that the combination of population and birth-rate noise gives rise to large fluctuations of the birth rate in the region at the leading edge of

  12. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    Science.gov (United States)

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear-scaling GW calculations, breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_e > 3000 electrons.
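
    The enabling trick behind such linear scaling, replacing sums over explicit orbitals by averages over random vectors, is generic. The sketch below shows it in its simplest guise: a Hutchinson-style stochastic estimate of tr(M^2) using only matrix-vector products. It illustrates the scaling argument, not the sGW implementation; the test matrix and sample count are arbitrary.

```python
# Hutchinson trace estimation with random +/-1 (Rademacher) vectors:
# E[chi^T M^2 chi] = tr(M^2), computed via matrix-vector products only.
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 2000, 100
G = rng.normal(size=(n, n))
M = (G + G.T) / np.sqrt(8 * n)                   # symmetric test matrix

exact = np.sum(M * M)                            # tr(M^2) = ||M||_F^2 here
chis = np.sign(rng.normal(size=(n_samples, n)))  # random +/-1 vectors
est = np.mean([chi @ (M @ (M @ chi)) for chi in chis])
print(f"exact {exact:.1f} vs stochastic {est:.1f}")  # few-percent agreement
```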

  13. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic block modelling with MCMC sampling on large complex networks

  14. Advanced Dynamically Adaptive Algorithms for Stochastic Simulations on Extreme Scales

    Energy Technology Data Exchange (ETDEWEB)

    Xiu, Dongbin [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-03

    The focus of the project is the development of mathematical methods and high-performance computational tools for stochastic simulations, with a particular emphasis on computations on extreme scales. The core of the project revolves around the design of highly efficient and scalable numerical algorithms that can adaptively and accurately, in high dimensional spaces, resolve stochastic problems with limited smoothness, even containing discontinuities.

  15. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    Science.gov (United States)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on combining the vortex-shedding-resolved flow available from an Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, the Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application to two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and brought into the equation. Encouraging results have been obtained for the benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.

  16. The restricted stochastic user equilibrium with threshold model: Large-scale application and parameter testing

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Nielsen, Otto Anker; Watling, David P.

    2017-01-01

    Equilibrium model (DUE), by combining the strengths of the Boundedly Rational User Equilibrium model and the Restricted Stochastic User Equilibrium model (RSUE). Thereby, the RSUET model reaches an equilibrated solution in which the flow is distributed according to Random Utility Theory among a consistently...... model improves the behavioural realism, especially for high congestion cases. Also, fast and well-behaved convergence to equilibrated solutions among non-universal choice sets is observed across different congestion levels, choice model scale parameters, and algorithm step sizes. Clearly, the results...... highlight that the RSUET outperforms the MNP SUE in terms of convergence, calculation time and behavioural realism. The choice set composition is validated by using 16,618 observed route choices collected by GPS devices in the same network and observing their reproduction within the equilibrated choice sets...

  17. Large deviations for solutions to stochastic recurrence equations under Kesten's condition

    DEFF Research Database (Denmark)

    Buraczewski, Dariusz; Damek, Ewa; Mikosch, Thomas Valentin

    2013-01-01

    In this paper we prove large deviations results for partial sums constructed from the solution to a stochastic recurrence equation. We assume Kesten’s condition [17] under which the solution of the stochastic recurrence equation has a marginal distribution with power law tails, while the noise...... sequence of the equations can have light tails. The results of the paper are analogs of those obtained by A.V. and S.V. Nagaev [21, 22] in the case of partial sums of iid random variables. In the latter case, the large deviation probabilities of the partial sums are essentially determined by the largest...... step size of the partial sum. For the solution to a stochastic recurrence equation, the magnitude of the large deviation probabilities is again given by the tail of the maximum summand, but the exact asymptotic tail behavior is also influenced by clusters of extreme values, due to dependencies...
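
    In symbols (a standard sketch of this setting, not quoted from the paper): the stochastic recurrence equation is

        \[ X_t = A_t X_{t-1} + B_t, \qquad t \in \mathbb{Z}, \]

    with iid pairs (A_t, B_t), and Kesten's condition requires, in particular, an exponent \alpha > 0 with

        \[ \mathbb{E}\,|A|^{\alpha} = 1, \]

    under which the stationary solution exhibits the power-law tail P(X > x) \sim c\, x^{-\alpha} as x \to \infty.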

  18. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  19. A stochastic mathematical model to locate field hospitals under disruption uncertainty for large-scale disaster preparedness

    Directory of Open Access Journals (Sweden)

    Nezir Aydin

    2016-03-01

    Full Text Available In this study, we consider field hospital location decisions for emergency treatment points in response to large scale disasters. Specifically, we developed a two-stage stochastic model that determines the number and locations of field hospitals and the allocation of injured victims to these field hospitals. Our model considers the locations as well as the failings of the existing public hospitals while deciding on the location of field hospitals that are anticipated to be opened. The model that we developed is a variant of the P-median location model and it integrates capacity restrictions both on field hospitals that are planned to be opened and the disruptions that occur in existing public hospitals. We conducted experiments to demonstrate how the proposed model can be utilized in practice in a real life problem case scenario. Results show the effects of the failings of existing hospitals, the level of failure probability and the capacity of projected field hospitals to deal with the assessment of any given emergency treatment system’s performance. Crucially, it also specifically provides an assessment on the average distance within which a victim needs to be transferred in order to be treated properly and then from this assessment, the proportion of total satisfied demand is then calculated.
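
    A minimal sketch of this two-stage structure (notation ours, for illustration only): with first-stage binaries y_j for opening field hospital j, scenario probabilities p_s encoding the disruption states of the public hospitals, and second-stage allocations x_{ij}^s of victims i in scenario s,

        \[ \min_{y \in \{0,1\}^J} \; \sum_{s} p_s\, Q(y, s) \quad \text{s.t.} \quad \sum_j y_j = P, \]
        \[ Q(y, s) = \min_{x \ge 0} \; \sum_{i,j} d_{ij}\, x_{ij}^{s} \quad \text{s.t. all demand met; capacities of open field hospitals and undisrupted public hospitals respected,} \]

    which is the P-median skeleton with recourse; the paper's variant adds capacity limits on the projected field hospitals and failure states of the existing public hospitals.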

  20. Doubly stochastic Poisson process models for precipitation at fine time-scales

    Science.gov (United States)

    Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao

    2012-09-01

    This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
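
    As a toy illustration of the doubly stochastic idea (our own sketch, not the authors' fitted model), one can drive a Poisson process with a randomly switching intensity, here a two-state Markov-modulated rate:

        import numpy as np

        rng = np.random.default_rng(1)

        def simulate_mmpp(T, lam=(0.2, 5.0), switch=(0.05, 0.5)):
            """Markov-modulated Poisson process on [0, T] hours.
            lam: event intensities (per hour) in the 'dry' and 'burst' states;
            switch: rates of leaving state 0 and state 1 (values assumed)."""
            t, state, events = 0.0, 0, []
            while t < T:
                # Within a state the intensity is constant, so the next event
                # and the next state switch are competing exponential clocks.
                tau_switch = rng.exponential(1.0 / switch[state])
                tau_event = rng.exponential(1.0 / lam[state])
                if tau_event < tau_switch and t + tau_event < T:
                    t += tau_event
                    events.append(t)
                else:
                    t += tau_switch
                    state = 1 - state
            return np.array(events)

        ticks = simulate_mmpp(24 * 30)  # one month of tipping-bucket tick times
        print(f"{ticks.size} events, mean rate {ticks.size / (24 * 30):.2f} per hour")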

  1. Generation Expansion Planning With Large Amounts of Wind Power via Decision-Dependent Stochastic Programming

    Energy Technology Data Exchange (ETDEWEB)

    Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui; Pinson, Pierre

    2017-07-01

    Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
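
    The distinguishing ingredient can be stated compactly (our paraphrase of the abstract): the scenario probabilities become functions of the decisions, so the objective contains terms of the form

        \[ \min_{x} \; c^\top x + \sum_{s} p_s(x)\, Q(x, \xi_s), \]

    in contrast to the exogenous case p_s(x) \equiv p_s. The products p_s(x)\,Q(x, \xi_s) are the nonlinearities that the quasi-exact approach reformulates into a mixed-integer linear program.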

  2. Economic and environmental optimization of a large scale sustainable dual feedstock lignocellulosic-based bioethanol supply chain in a stochastic environment

    International Nuclear Information System (INIS)

    Osmani, Atif; Zhang, Jun

    2014-01-01

    Highlights: • Two-stage stochastic MILP model for optimizing the performance of a sustainable lignocellulosic-based biofuel supply chain. • Multiple uncertainties in biomass supply, purchase price of biomass, bioethanol demand, and sale price of bioethanol. • Stochastic parameters significantly impact the allocation of biomass processing capacities of biorefineries. • Location of biorefineries and choice of conversion technology are found to be insensitive to the stochastic environment. • Use of the Sample Average Approximation (SAA) algorithm as a decomposition technique. - Abstract: This work proposes a two-stage stochastic optimization model to maximize the expected profit and simultaneously minimize carbon emissions of a dual-feedstock lignocellulosic-based bioethanol supply chain (LBSC) under uncertainties in supply, demand and prices. The model decides the optimal first-stage decisions and the expected values of the second-stage decisions. A case study based on a 4-state Midwestern region in the US demonstrates the effectiveness of the proposed stochastic model over a deterministic model under uncertainties. Two regional modes are considered for the geographic scale of the LBSC. Under the co-operation mode the 4 states are considered as a combined region, while under the stand-alone mode each of the 4 states is considered as an individual region. Each state achieves better financial and environmental outcomes under the co-operation mode than under the stand-alone mode. Uncertainty has a significant impact on the biomass processing capacity of biorefineries, while the location and the choice of conversion technology for biorefineries (i.e., biochemical vs. thermochemical) are insensitive to the stochastic environment. As the variability of the stochastic parameters increases, the financial and environmental performance is degraded. Sensitivity analysis shows that levels of tax credit and carbon price have a major impact on the choice of conversion technology for a selected
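
    The SAA scheme named in the highlights replaces the expectation by a sample average (standard form, not the paper's notation): one solves

        \[ \hat v_N = \max_{x} \; \frac{1}{N} \sum_{n=1}^{N} F(x, \xi^{n}) \]

    over N sampled scenarios. Because E[\hat v_N] \ge v^* for a maximization problem, averaging \hat v_N across independent batches gives a statistical upper bound, while re-evaluating a fixed candidate solution on a large fresh sample gives a lower bound on its true expected profit.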

  3. Fluid-Mediated Stochastic Self-Assembly at Centimetric and Sub-Millimetric Scales: Design, Modeling, and Control

    Directory of Open Access Journals (Sweden)

    Bahar Haghighat

    2016-08-01

    Full Text Available Stochastic self-assembly provides promising means for building micro-/nano-structures with a variety of properties and functionalities. Numerous studies have been conducted on the control and modeling of the process in engineered self-assembling systems constituted of modules with varied capabilities ranging from completely reactive nano-/micro-particles to intelligent miniaturized robots. Depending on the capabilities of the constituting modules, different approaches have been utilized for controlling and modeling these systems. In the quest of a unifying control and modeling framework and within the broader perspective of investigating how stochastic control strategies can be adapted from the centimeter-scale down to the (sub-millimeter-scale, as well as from mechatronic to MEMS-based technology, this work presents the outcomes of our research on self-assembly during the past few years. As the first step, we leverage an experimental platform to study self-assembly of water-floating passive modules at the centimeter scale. A dedicated computational framework is developed for real-time tracking, modeling and control of the formation of specific structures. Using a similar approach, we then demonstrate controlled self-assembly of microparticles into clusters of a preset dimension in a microfluidic chamber, where the control loop is closed again through real-time tracking customized for a much faster system dynamics. Finally, with the aim of distributing the intelligence and realizing programmable self-assembly, we present a novel experimental system for fluid-mediated programmable stochastic self-assembly of active modules at the centimeter scale. The system is built around the water-floating 3-cm-sized Lily robots specifically designed to be operative in large swarms and allows for exploring the whole range of fully-centralized to fully-distributed control strategies. The outcomes of our research efforts extend the state-of-the-art methodologies

  4. Fractional Stochastic Field Theory

    Science.gov (United States)

    Honkonen, Juha

    2018-02-01

    Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.

  5. General Large Deviations and Functional Iterated Logarithm Law for Multivalued Stochastic Differential Equations

    OpenAIRE

    Ren, Jiagang; Wu, Jing; Zhang, Hua

    2015-01-01

    In this paper, we prove a large deviation principle of Freidlin-Wentzell type for multivalued stochastic differential equations. As an application, we derive a functional iterated logarithm law for the solutions of multivalued stochastic differential equations.

  6. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    Science.gov (United States)

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.
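
    The learning setup is conventional supervised regression; a minimal scikit-learn sketch (the feature matrix and response here are synthetic placeholders, not Envision's actual training data):

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Stand-in design matrix: rows = missense variants, columns = features
        # (substitution scores, structural context, ...); y = measured effects.
        X = rng.normal(size=(21026, 10))
        y = 0.8 * X[:, 0] - 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=21026)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                          max_depth=3, subsample=0.8)
        model.fit(X_tr, y_tr)  # subsample < 1 makes the boosting stochastic
        print("held-out R^2:", round(model.score(X_te, y_te), 3))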

  7. Analytical Solutions for Multi-Time Scale Fractional Stochastic Differential Equations Driven by Fractional Brownian Motion and Their Applications

    OpenAIRE

    Xiao-Li Ding; Juan J. Nieto

    2018-01-01

    In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We firstly decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations, and give their analytical solutions. Then, we use the variation of constant parameters to obtain the solutions of nonhomogeneous multi-time scale fractional stochast...

  8. A stochastic method for computing hadronic matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration

    2013-02-15

    We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.

  9. Comparative study of large scale simulation of underground explosions in alluvium and in fractured granite using stochastic characterization

    Science.gov (United States)

    Vorobiev, O.; Ezzedine, S. M.; Antoun, T.; Glenn, L.

    2014-12-01

    This work describes a methodology used for large scale modeling of wave propagation from underground explosions conducted at the Nevada Test Site (NTS) in two different geological settings: fractured granitic rock mass and alluvium deposits. We show that the discrete nature of rock masses as well as the spatial variability of the fabric of alluvium is very important to understand ground motions induced by underground explosions. In order to build a credible conceptual model of the subsurface we integrated the geological, geomechanical and geophysical characterizations conducted during recent tests at the NTS as well as historical data from the characterization during the underground nuclear tests conducted at the NTS. Because detailed site characterization is limited, expensive and, in some instances, impossible, we have numerically investigated the effects of the characterization gaps on the overall response of the system. We performed several computational studies to identify the key geologic features specific to fractured media, mainly the joints, and those specific to alluvium porous media, mainly the spatial variability of geological alluvium facies characterized by their variances and their integral scales. We have also explored key features common to both geological environments, such as saturation and topography, and assessed which characteristics affect the ground motion the most in the near-field and in the far-field. Stochastic representations of these features based on the field characterizations have been implemented in the Geodyn and GeodynL hydrocodes. Both codes were used to guide site characterization efforts in order to provide the essential data to the modeling community. We validate our computational results by comparing the measured and computed ground motion at various ranges. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. Modeling Group Perceptions Using Stochastic Simulation: Scaling Issues in the Multiplicative AHP

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; van den Honert, Robin; Salling, Kim Bang

    2016-01-01

    This paper proposes a new decision support approach for applying stochastic simulation to the multiplicative analytic hierarchy process (AHP) in order to deal with issues concerning the scale parameter. The paper suggests a new approach that captures the influence from the scale parameter by maki...

  11. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    Science.gov (United States)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.

  12. Analytical Solutions for Multi-Time Scale Fractional Stochastic Differential Equations Driven by Fractional Brownian Motion and Their Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Li Ding

    2018-01-01

    Full Text Available In this paper, we investigate analytical solutions of multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. We firstly decompose homogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions into independent differential subequations, and give their analytical solutions. Then, we use the variation of constant parameters to obtain the solutions of nonhomogeneous multi-time scale fractional stochastic differential equations driven by fractional Brownian motions. Finally, we give three examples to demonstrate the applicability of our obtained results.

  13. Perturbative QCD Lagrangian at large distances and stochastic dimensionality reduction. Pt. 2

    International Nuclear Information System (INIS)

    Shintani, M.

    1986-11-01

    Using the method of stochastic dimensional reduction, we derive a four-dimensional quantum effective Lagrangian for the classical Yang-Mills system coupled to the Gaussian white noise. It is found that the Lagrangian coincides with the perturbative QCD at large distances constructed in our previous paper. That formalism is based on the local covariant operator formalism which maintains the unitarity of the S-matrix. Furthermore, we show the non-perturbative equivalence between super-Lorentz invariant sectors of the effective Lagrangian and two dimensional QCD coupled to the adjoint pseudo-scalars. This implies that stochastic dimensionality reduction by two is approximately operative in QCD at large distances. (orig.)

  14. Stochastic Wake Modelling Based on POD Analysis

    Directory of Open Access Journals (Sweden)

    David Bastine

    2018-03-01

    Full Text Available In this work, large eddy simulation data is analysed to investigate a new stochastic modeling approach for the wake of a wind turbine. The data is generated by the large eddy simulation (LES model PALM combined with an actuator disk with rotation representing the turbine. After applying a proper orthogonal decomposition (POD, three different stochastic models for the weighting coefficients of the POD modes are deduced resulting in three different wake models. Their performance is investigated mainly on the basis of aeroelastic simulations of a wind turbine in the wake. Three different load cases and their statistical characteristics are compared for the original LES, truncated PODs and the stochastic wake models including different numbers of POD modes. It is shown that approximately six POD modes are enough to capture the load dynamics on large temporal scales. Modeling the weighting coefficients as independent stochastic processes leads to similar load characteristics as in the case of the truncated POD. To complete this simplified wake description, we show evidence that the small-scale dynamics can be captured by adding to our model a homogeneous turbulent field. In this way, we present a procedure to derive stochastic wake models from costly computational fluid dynamics (CFD calculations or elaborated experimental investigations. These numerically efficient models provide the added value of possible long-term studies. Depending on the aspects of interest, different minimalized models may be obtained.
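
    The modelling chain - POD of snapshot data, then an independent stochastic process per mode weight - can be sketched as follows (a toy version; the paper fits the processes to LES statistics):

        import numpy as np

        rng = np.random.default_rng(2)
        # Snapshot matrix: each column is one flattened wake snapshot. Random
        # data stands in here for the LES fields used in the paper.
        n_points, n_snap = 400, 2000
        U = rng.normal(size=(n_points, n_snap))
        U -= U.mean(axis=1, keepdims=True)

        # POD via SVD: columns of Phi are spatial modes; A holds the weighting
        # coefficients of the first n_modes modes over time.
        Phi, s, Vt = np.linalg.svd(U, full_matrices=False)
        n_modes = 6  # roughly enough for the large-scale load dynamics
        A = (s[:, None] * Vt)[:n_modes]

        # Model each coefficient as an independent Ornstein-Uhlenbeck process,
        # discretised exactly as an AR(1) with variance and lag-1
        # autocorrelation matched to the POD time series.
        n_steps = 2000
        a = np.zeros((n_modes, n_steps))
        for k in range(n_modes):
            var = A[k].var()
            phi = np.clip(np.corrcoef(A[k][:-1], A[k][1:])[0, 1], 0.0, 0.999)
            for t in range(1, n_steps):
                a[k, t] = phi * a[k, t - 1] + np.sqrt(var * (1 - phi ** 2)) * rng.normal()

        surrogate = Phi[:, :n_modes] @ a  # stochastic wake realisations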

  15. Representative elements: A step to large-scale fracture system simulation

    International Nuclear Information System (INIS)

    Clemo, T.M.

    1987-01-01

    Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual permeability modeling combines discrete fracture simulation of the most important fractures with the distributed modeling of the less important fracture of a fracture system. This study investigates the use of stochastic analysis to determine properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities and hence flow velocities can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their mean leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs
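
    The flavour of the stochastic analysis can be conveyed with a toy computation (our sketch, using the standard cubic law; the report derives explicit conductivity equations instead): sample fracture apertures for an element and observe the induced spread in its conductivity.

        import numpy as np

        rng = np.random.default_rng(3)

        def element_conductivity(n_frac, mean_b=1e-4, sigma=0.5, spacing=0.5):
            """Hydraulic conductivity of an element crossed by parallel
            fractures; by the cubic law each fracture transmits ~ aperture^3."""
            # Log-normal apertures (m); all parameter values are illustrative.
            b = rng.lognormal(mean=np.log(mean_b), sigma=sigma, size=n_frac)
            g, nu = 9.81, 1e-6  # gravity (m/s^2), kinematic viscosity (m^2/s)
            T = g * b ** 3 / (12 * nu)           # fracture transmissivities
            return T.sum() / (n_frac * spacing)  # smeared over element width

        K = np.array([element_conductivity(20) for _ in range(5000)])
        print(f"mean K = {K.mean():.2e} m/s, CV = {K.std() / K.mean():.2f}")

    The spread of K across realizations is the kind of quantity that feeds transport dispersion in the dual permeability picture.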

  16. Stochasticity and determinism in models of hematopoiesis.

    Science.gov (United States)

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  17. Integration of stochastic generation in power systems

    NARCIS (Netherlands)

    Papaefthymiou, G.; Schavemaker, P.H.; Sluis, van der L.; Kling, W.L.; Kurowicka, D.; Cooke, R.M.

    2006-01-01

    Stochastic generation, i.e., electrical power production by an uncontrolled primary energy source, is expected to play an important role in future power systems. A new power system structure is created due to the large-scale implementation of this small-scale, distributed, non-dispatchable

  18. Visual attention mitigates information loss in small- and large-scale neural codes

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  19. Stochastic parameterizing manifolds and non-Markovian reduced equations stochastic manifolds for nonlinear SPDEs II

    CERN Document Server

    Chekroun, Mickaël D; Wang, Shouhong

    2015-01-01

    In this second volume, a general approach is developed to provide approximate parameterizations of the "small" scales by the "large" ones for a broad class of stochastic partial differential equations (SPDEs). This is accomplished via the concept of parameterizing manifolds (PMs), which are stochastic manifolds that improve, for a given realization of the noise, in mean square error the partial knowledge of the full SPDE solution when compared to its projection onto some resolved modes. Backward-forward systems are designed to give access to such PMs in practice. The key idea consists of representing the modes with high wave numbers as a pullback limit depending on the time-history of the modes with low wave numbers. Non-Markovian stochastic reduced systems are then derived based on such a PM approach. The reduced systems take the form of stochastic differential equations involving random coefficients that convey memory effects. The theory is illustrated on a stochastic Burgers-type equation.

  20. Large Deviations for Stochastic Tamed 3D Navier-Stokes Equations

    International Nuclear Information System (INIS)

    Roeckner, Michael; Zhang, Tusheng; Zhang Xicheng

    2010-01-01

    In this paper, using weak convergence method, we prove a large deviation principle of Freidlin-Wentzell type for the stochastic tamed 3D Navier-Stokes equations driven by multiplicative noise, which was investigated in (Roeckner and Zhang in Probab. Theory Relat. Fields 145(1-2), 211-267, 2009).

  1. A dynamically adaptive wavelet approach to stochastic computations based on polynomial chaos - capturing all scales of random modes on independent grids

    International Nuclear Information System (INIS)

    Ren Xiaoan; Wu Wenquan; Xanthis, Leonidas S.

    2011-01-01

    Highlights: → New approach for stochastic computations based on polynomial chaos. → Development of dynamically adaptive wavelet multiscale solver using space refinement. → Accurate capture of steep gradients and multiscale features in stochastic problems. → All scales of each random mode are captured on independent grids. → Numerical examples demonstrate the need for different space resolutions per mode. - Abstract: In stochastic computations, or uncertainty quantification methods, the spectral approach based on the polynomial chaos expansion in random space leads to a coupled system of deterministic equations for the coefficients of the expansion. The size of this system increases drastically when the number of independent random variables and/or order of polynomial chaos expansions increases. This is invariably the case for large scale simulations and/or problems involving steep gradients and other multiscale features; such features are variously reflected on each solution component or random/uncertainty mode requiring the development of adaptive methods for their accurate resolution. In this paper we propose a new approach for treating such problems based on a dynamically adaptive wavelet methodology involving space-refinement on physical space that allows all scales of each solution component to be refined independently of the rest. We exemplify this using the convection-diffusion model with random input data and present three numerical examples demonstrating the salient features of the proposed method. Thus we establish a new, elegant and flexible approach for stochastic problems with steep gradients and multiscale features based on polynomial chaos expansions.
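
    For readers unfamiliar with the baseline the wavelet refinement builds on, here is a minimal polynomial chaos projection for a single Gaussian variable (a generic sketch, unrelated to the paper's convection-diffusion example):

        import math
        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        # Project u(xi) = exp(xi), xi ~ N(0,1), onto probabilists' Hermite
        # polynomials: u = sum_k c_k He_k(xi) with c_k = E[u He_k] / k!.
        order, n_quad = 8, 40
        nodes, weights = hermegauss(n_quad)      # weight function exp(-x^2/2)
        weights = weights / np.sqrt(2 * np.pi)   # normalise to the Gaussian pdf
        u = np.exp(nodes)

        coeffs = np.empty(order + 1)
        for k in range(order + 1):
            He_k = hermeval(nodes, np.eye(order + 1)[k])
            coeffs[k] = (weights * u * He_k).sum() / math.factorial(k)

        # Exact coefficients are exp(1/2) / k!; compare the first few.
        print(coeffs[:4])
        print(np.exp(0.5) / np.array([1.0, 1.0, 2.0, 6.0]))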

  2. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  3. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  4. Application of stochastic models in identification and apportionment of heavy metal pollution sources in the surface soils of a large-scale region.

    Science.gov (United States)

    Hu, Yuanan; Cheng, Hefa

    2013-04-16

    As heavy metals occur naturally in soils at measurable concentrations and their natural background contents have significant spatial variations, identification and apportionment of heavy metal pollution sources across large-scale regions is a challenging task. Stochastic models, including the recently developed conditional inference tree (CIT) and the finite mixture distribution model (FMDM), were applied to identify the sources of heavy metals found in the surface soils of the Pearl River Delta, China, and to apportion the contributions from natural background and human activities. Regression trees were successfully developed for the concentrations of Cd, Cu, Zn, Pb, Cr, Ni, As, and Hg in 227 soil samples from a region of over 7.2 × 10^4 km^2 based on seven specific predictors relevant to the source and behavior of heavy metals: land use, soil type, soil organic carbon content, population density, gross domestic product per capita, and the lengths and classes of the roads surrounding the sampling sites. The CIT and FMDM results consistently indicate that Cd, Zn, Cu, Pb, and Cr in the surface soils of the PRD were contributed largely by anthropogenic sources, whereas As, Ni, and Hg in the surface soils mostly originated from the soil parent materials.
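
    The FMDM idea - decomposing an observed concentration distribution into a natural background component and an anthropogenic component - can be sketched in a few lines (illustrative synthetic data, not the paper's 227 field samples):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(4)
        # Synthetic log-concentrations (mg/kg): a natural background population
        # plus a smaller, higher-mean population from anthropogenic inputs.
        background = rng.normal(loc=np.log(20), scale=0.3, size=180)
        polluted = rng.normal(loc=np.log(80), scale=0.5, size=47)
        logc = np.concatenate([background, polluted]).reshape(-1, 1)

        gmm = GaussianMixture(n_components=2, n_init=10, random_state=0).fit(logc)
        bg = int(np.argmin(gmm.means_.ravel()))  # lower-mean = background
        print("background geometric mean:", float(np.exp(gmm.means_.ravel()[bg])))
        print("weight attributed to pollution:", float(1 - gmm.weights_[bg]))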

  5. Large scale electrolysers

    International Nuclear Information System (INIS)

    B Bello; M Junker

    2006-01-01

    Hydrogen production by water electrolysis represents nearly 4 % of the world hydrogen production. Future development of hydrogen vehicles will require large quantities of hydrogen. Installation of large scale hydrogen production plants will be needed. In this context, development of low cost large scale electrolysers that could use 'clean power' seems necessary. ALPHEA HYDROGEN, a European network and center of expertise on hydrogen and fuel cells, performed a study for its members in 2005 to evaluate the potential of large scale electrolysers to produce hydrogen in the future. The different electrolysis technologies were compared. Then, a state of the art of the electrolysis modules currently available was made. A review of the large scale electrolysis plants that have been installed in the world was also carried out. The main projects related to large scale electrolysis were listed as well. The economics of large scale electrolysers was discussed, and the influence of energy prices on the hydrogen production cost by large scale electrolysis was evaluated. (authors)

  6. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
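
    In outline (standard notation, not copied from the report): for the two-stage problem

        \[ \min_{x} \; c^\top x + \mathbb{E}_{\omega}\Big[ \min_{y \ge 0} \; q^\top y \;:\; W y = h_\omega - T_\omega x \Big], \]

    Benders decomposition adds cuts of the form \theta \ge \mathbb{E}_\omega[\pi_\omega^\top (h_\omega - T_\omega x)] to the master problem, with the expectations over the dual multipliers \pi_\omega and right-hand sides estimated by importance sampling rather than exhaustive scenario enumeration; the same samples yield the probabilistic lower and upper bounds mentioned above.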

  7. Visual attention mitigates information loss in small- and large-scale neural codes.

    Science.gov (United States)

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Stochastic Unit Commitment via Progressive Hedging - Extensive Analysis of Solution Methods

    DEFF Research Database (Denmark)

    Ordoudis, Christos; Pinson, Pierre; Zugno, Marco

    2015-01-01

    Owing to the massive deployment of renewable power production units over the last couple of decades, the use of stochastic optimization methods to solve the unit commitment problem has gained increasing attention. Solving stochastic unit commitment problems in large-scale power systems requires h...

  9. Climate and weather across scales: singularities and stochastic Levy-Clifford algebra

    Science.gov (United States)

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2016-04-01

    There have been several attempts to understand and simulate the fluctuations of weather and climate across scales. Beyond mono/uni-scaling approaches (e.g. using spectral analysis), this was done with the help of multifractal techniques that aim to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale truncated simulations of these equations (Royer et al., 2008, Lovejoy and Schertzer, 2013). However, these techniques were limited to dealing with scalar fields, instead of dealing directly with a system of complex interactions and non trivial symmetries. The latter is unfortunately indispensable to answer the challenging question of being able to assess the climatology of (exo-) planets based on first principles (Pierrehumbert, 2013), or to fully address the question of the relevance of quasi-geostrophic turbulence and to define an effective, fractal dimension of the atmospheric motions (Schertzer et al., 2012). In this talk, we present a plausible candidate based on the combination of Levy stable processes and Clifford algebra. Together they combine stochastic and structural properties that are strongly universal. They therefore define, with the help of a few physically meaningful parameters, a wide class of stochastic symmetries, as well as high dimensional vector- or manifold-valued fields respecting these symmetries (Schertzer and Tchiguirinskaia, 2015). Lovejoy, S. & Schertzer, D., 2013. The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge, U.K.: Cambridge University Press. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp.81-83. Royer, J.F. et al., 2008. Multifractal analysis of the evolution of simulated precipitation over France in a climate scenario. C.R. Geoscience, 340(431-440). Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp.327-336. Schertzer, D

  10. Stochastic Stokes' Drift, Homogenized Functional Inequalities, and Large Time Behavior of Brownian Ratchets

    KAUST Repository

    Blanchet, Adrien

    2009-01-01

    A periodic perturbation of a Gaussian measure modifies the sharp constants in Poincaré and logarithmic Sobolev inequalities in the homogenization limit, that is, when the period of a periodic perturbation converges to zero. We use variational techniques to determine the homogenized constants and get optimal convergence rates towards equilibrium of the solutions of the perturbed diffusion equations. The study of these sharp constants is motivated by the study of the stochastic Stokes' drift. It also applies to Brownian ratchets and molecular motors in biology. We first establish a transport phenomenon. Asymptotically, the center of mass of the solution moves with a constant velocity, which is determined by a doubly periodic problem. In the reference frame attached to the center of mass, the behavior of the solution is governed at large scale by a diffusion with a modified diffusion coefficient. Using the homogenized logarithmic Sobolev inequality, we prove that the solution converges in self-similar variables attached to the center of mass to a stationary solution of a Fokker-Planck equation modulated by a periodic perturbation with fast oscillations, with an explicit rate. We also give an asymptotic expansion of the traveling diffusion front corresponding to the stochastic Stokes' drift with given potential flow. © 2009 Society for Industrial and Applied Mathematics.

  11. Large scale Brownian dynamics of confined suspensions of rigid particles

    Science.gov (United States)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose
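
    The random finite difference idea can be conveyed in a scalar caricature (our sketch for one particle above a wall, not the paper's rigid multiblob implementation): the stochastic drift kT dM/dq is estimated with two extra mobility evaluations per step instead of an analytic divergence.

        import numpy as np

        rng = np.random.default_rng(5)
        kT, dt, delta = 1.0, 1e-3, 1e-4

        def mobility(h):
            """Toy wall-corrected mobility for height h above a no-slip wall
            (functional form and cutoff are illustrative assumptions)."""
            return 1.0 - 9.0 / (16.0 * max(h, 0.6))

        h, F = 2.0, -1.0  # initial height; constant gravitational force
        samples = []
        for step in range(100000):
            M = mobility(h)
            W = rng.normal()
            # Random finite difference: E[(M(h+dW/2) - M(h-dW/2)) W / d] = dM/dh,
            # so this term supplies the required kT dM/dh drift on average.
            rfd = kT * (mobility(h + delta * W / 2)
                        - mobility(h - delta * W / 2)) * W / delta
            h += (M * F + rfd) * dt + np.sqrt(2 * kT * M * dt) * rng.normal()
            h = max(h, 0.6)  # crude steric exclusion at the wall
            samples.append(h)
        print("mean sedimented height:", np.mean(samples[20000:]))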

  12. Perturbative QCD lagrangian at large distances and stochastic dimensionality reduction

    International Nuclear Information System (INIS)

    Shintani, M.

    1986-10-01

    We construct a Lagrangian for perturbative QCD at large distances within the covariant operator formalism which explains the color confinement of quarks and gluons while maintaining unitarity of the S-matrix. It is also shown that when interactions are switched off, the mechanism of stochastic dimensionality reduction is operative in the system due to exact super-Lorentz symmetries. (orig.)

  13. Stochastic partial differential fluid equations as a diffusive limit of deterministic Lagrangian multi-time dynamics.

    Science.gov (United States)

    Cotter, C J; Gottwald, G A; Holm, D D

    2017-09-01

    In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic partial differential fluid equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow.

  14. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, Christian [Monash Univ., Melbourne, VIC (Australia)

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  15. Solving the problem of imaging resolution: stochastic multi-scale image fusion

    Science.gov (United States)

    Karsanina, Marina; Mallants, Dirk; Gilyazetdinova, Dina; Gerke, Kiril

    2016-04-01

    Structural features of porous materials define the majority of their physical properties, including water infiltration and redistribution, multi-phase flow (e.g. simultaneous water/air flow, gas exchange between the biologically active soil root zone and the atmosphere, etc.) and solute transport. To characterize soil and rock microstructure, X-ray microtomography is extremely useful. However, as any other imaging technique, this one also has a significant drawback - a trade-off between sample size and resolution. The latter is a significant problem for multi-scale complex structures, especially soils and carbonates. Other imaging techniques, for example SEM/FIB-SEM or X-ray macrotomography, can be helpful in obtaining higher resolution or a wider field of view. The ultimate goal is to create a single dataset containing information from all scales, or to characterize such multi-scale structure. In this contribution we demonstrate a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images representing macro, micro and nanoscale spatial information on porous media structure. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Potential practical applications of this method are abundant in soil science, hydrology and petroleum engineering, as well as other geosciences. This work was partially supported by RSF grant 14-17-00658 (X-ray microtomography study of shale
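
    The correlation functions at the heart of the method are cheap to compute; for instance, the two-point probability function S2 of a binary pore/solid image via an FFT autocorrelation (a generic sketch; the paper rescales such functions across resolutions before reconstruction):

        import numpy as np

        rng = np.random.default_rng(6)
        img = (rng.random((256, 256)) < 0.35).astype(float)  # toy pore map

        # S2(r): probability that two points separated by lag r both lie in
        # the pore phase; with periodic boundaries it is the normalised
        # autocorrelation, computed in O(N log N) with the FFT.
        F = np.fft.rfftn(img)
        corr = np.fft.irfftn(F * np.conj(F), s=img.shape) / img.size
        print("porosity S2(0):", round(float(corr[0, 0]), 3))
        print("S2 along x, r=1..5:", np.round(corr[0, 1:6], 3))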

  16. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT; Marzouk, Youssef [MIT

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  17. SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations

    International Nuclear Information System (INIS)

    Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir

    2014-01-01

    The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep penetration Monte Carlo (MC) shielding problem of a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (biased source distribution and importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim in this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role for computer memory management. We investigated the possibility of using the deterministic module (memory intense) with the broad group library v7-27n19g as opposed to the fine group library v7-200n47g used with the MC module, to fully capture low energy particle transport and secondary gamma emission. Compared with
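
    For reference, the CADIS construction underlying MAVRIC can be written compactly (standard formulas, not specific to this containment model): given an adjoint flux \phi^\dagger from the deterministic SN solve and a true source q, the biased source and target weights are

        \[ \hat q(\vec r, E) = \frac{\phi^\dagger(\vec r, E)\, q(\vec r, E)}{R}, \qquad R = \iint \phi^\dagger\, q \; dE\, d\vec r, \qquad w(\vec r, E) = \frac{R}{\phi^\dagger(\vec r, E)}, \]

    so that source particles are born with weights consistent with the weight-window map. FW-CADIS differs in constructing the adjoint source from a forward-flux-weighted response, so that multiple tallies (here, the dose field across the whole containment) converge uniformly.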

  18. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming; Cheng, Yichen; Song, Qifan; Park, Jincheol; Yang, Ping

    2013-01-01

    large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate

  19. A Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer

    Science.gov (United States)

    Parsakhoo, Zahra; Shao, Yaping

    2017-04-01

    Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modelling and computational challenge, since the small eddies are not fully resolved in Eulerian models directly. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer based on the Ito Stochastic Differential Equation (SDE) for air parcels (particles). Due to the complexity of the mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: 1) to make the deterministic term in the Ito equation non-linear; 2) to make the random term in the Ito equation fractional; and 3) to modify the Ito equation by including Levy flights. We focus on the third strategy and interpret mixing as an interaction between at least two stochastic processes with different Lagrangian time scales. The model is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: some land surface patterns are generated and then coupled with a Large Eddy Simulation (LES).
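
    The third strategy can be illustrated with a one-dimensional caricature (our sketch; the actual model is three-dimensional and coupled to LES surface fields): replace the Gaussian increments of the Ito SDE with heavy-tailed alpha-stable (Levy-flight) increments.

        import numpy as np
        from scipy.stats import levy_stable

        rng = np.random.default_rng(7)
        n, dt, steps, alpha = 1000, 1.0, 200, 1.5
        sigma_w = 0.3  # vertical velocity scale (m/s), an assumed value

        z_gauss = np.zeros(n)
        z_levy = np.zeros(n)
        for _ in range(steps):
            # Classical Ito step: Brownian (Gaussian) increments.
            z_gauss += sigma_w * np.sqrt(dt) * rng.normal(size=n)
            # Levy-flight step: rare long jumps mimic convective excursions
            # that a linear Ito SDE cannot represent; dt**(1/alpha) is the
            # self-similar scaling of an alpha-stable process.
            z_levy += sigma_w * dt ** (1 / alpha) * levy_stable.rvs(
                alpha, 0.0, size=n, random_state=rng)

        print("Gaussian plume std:", round(float(z_gauss.std()), 1))
        print("Levy plume 99th pct |z|:",
              round(float(np.percentile(np.abs(z_levy), 99)), 1))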

  19. Computational singular perturbation analysis of stochastic chemical systems with stiffness

    Science.gov (United States)

    Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.

    2017-04-01

    Computational singular perturbation (CSP) is a useful method for the analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum, deterministic level. On the other hand, CSP is not directly applicable to chemical reaction systems at the micro- or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.

  1. Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach

    NARCIS (Netherlands)

    Hu, L.; Chen, Y.; Scanlon, W.G.

    2011-01-01

    The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that

  2. Large-scale derived flood frequency analysis based on continuous simulation

    Science.gov (United States)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the inherent spatial heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input into the catchment models. A long-term simulation of this combined system enables the derivation of very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large scale approach overcomes the several

  3. Effects of demographic stochasticity on biological community assembly on evolutionary time scales

    KAUST Repository

    Murase, Yohsuke; Shimada, Takashi; Ito, Nobuyasu; Rikvold, Per Arne

    2010-01-01

    We study the effects of demographic stochasticity on the long-term dynamics of biological coevolution models of community assembly. The noise is induced in order to check the validity of deterministic population dynamics. While mutualistic communities show little dependence on the stochastic population fluctuations, predator-prey models show strong dependence on the stochasticity, indicating the relevance of the finiteness of the populations. For a predator-prey model, the noise causes drastic decreases in diversity and total population size. The communities that emerge under the influence of the noise consist of species strongly coupled with each other and have stronger linear stability around the fixed-point populations than the corresponding noiseless model. The dynamics on evolutionary time scales for the predator-prey model are also altered by the noise. Approximate 1/f fluctuations are observed with noise, while 1/f^2 fluctuations are found for the model without demographic noise. © 2010 The American Physical Society.

  4. Stochastic inflation lattice simulations: Ultra-large scale structure of the universe

    International Nuclear Information System (INIS)

    Salopek, D.S.

    1990-11-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients a^{-1}∇ small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a ''toy model'' with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits. 21 refs., 3 figs

  5. Pseudo-stochastic signal characterization in wavelet-domain

    International Nuclear Information System (INIS)

    Zaytsev, Kirill I; Zhirnov, Andrei A; Alekhnovich, Valentin I; Yurchenko, Stanislav O

    2015-01-01

    In this paper we present a method for the fast and accurate characterization of pseudo-stochastic signals, which contain a large number of similar but randomly-located fragments. This method allows estimating the statistical characteristics of a pseudo-stochastic signal, and it is based on digital signal processing in the wavelet domain. The continuous wavelet transform and a criterion for the wavelet scale power density are utilized. We experimentally implement this method for the purpose of sand granulometry, estimating the statistical parameters of test sand fractions.
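
    As a rough sketch of this kind of wavelet-domain processing (not the paper's exact criterion), the code below computes a continuous wavelet transform by direct convolution with unit-energy Ricker wavelets and uses the mean squared coefficient per scale as a scale power density; the synthetic test signal and all parameter values are assumptions.

      import numpy as np

      def ricker(n, a):
          """Unit-energy Ricker (Mexican-hat) wavelet of width a, length n."""
          t = np.arange(n) - (n - 1) / 2.0
          amp = 2.0 / (np.sqrt(3.0 * a) * np.pi**0.25)
          return amp * (1.0 - (t / a)**2) * np.exp(-0.5 * (t / a)**2)

      def scale_power_density(signal, scales):
          """Mean squared CWT coefficient at each scale (simple power density)."""
          power = []
          for a in scales:
              w = ricker(min(10 * int(a), len(signal)), a)
              coef = np.convolve(signal, w, mode='same')
              power.append(np.mean(coef**2))
          return np.array(power)

      # Pseudo-stochastic test signal: similar fragments at random locations.
      rng = np.random.default_rng(1)
      sig = np.zeros(4096)
      for loc in rng.integers(50, 4046, size=200):
          sig[loc - 25:loc + 25] += ricker(50, 8.0)

      scales = np.arange(2, 40)
      density = scale_power_density(sig, scales)
      print(scales[np.argmax(density)])  # should sit near the fragment width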

  6. STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2016-12-10

    Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.

  7. Stochastic Heterogeneity Mapping around a Mediterranean salt lens

    Directory of Open Access Journals (Sweden)

    G. G. Buffett

    2010-03-01

    We present the first application of Stochastic Heterogeneity Mapping based on the band-limited von Kármán function to a seismic reflection stack of a Mediterranean water eddy (meddy), a large salt lens of Mediterranean water. This process extracts two stochastic parameters directly from the reflectivity field of the seismic data: the Hurst number, which ranges from 0 to 1, and the correlation length (scale length). Lower Hurst numbers represent a richer range of high wavenumbers and correspond to a broader range of heterogeneity in reflection events. The Hurst number estimate for the top of the meddy (0.39) compares well with recent theoretical work, which required values between 0.25 and 0.5 to model internal wave surfaces in open ocean conditions based on simulating a Garrett-Munk spectrum (GM76) slope of −2. The scale lengths obtained do not fit as well to seismic reflection events as those used in other studies to model internal waves. We suggest two explanations for this discrepancy: (1) because the stochastic parameters are derived from the reflectivity field rather than the impedance field, the estimated scale lengths may be underestimated, as has been reported; and (2) because the meddy seismic image is a two-dimensional slice of a complex and dynamic three-dimensional object, the derived scale lengths are biased to the direction of flow. Nonetheless, varying stochastic parameters, which correspond to different spectral slopes in the Garrett-Munk spectrum (horizontal wavenumber spectrum), can provide an estimate of different internal wave scales from seismic data alone. We hence introduce Stochastic Heterogeneity Mapping as a novel tool in physical oceanography.
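
    For reference, a standard (non-band-limited) form of the von Kármán power spectrum used in heterogeneity studies is shown below in LaTeX; the abstract does not give the exact band-limited variant, so this is an assumed textbook form with correlation length a, Hurst number nu and Euclidean dimension E.

      % von Karman power spectrum (band limits omitted):
      %   a : correlation length, \nu : Hurst number, E : dimension
      P(k) \;\propto\; \frac{a^{E}}{\left(1 + k^{2} a^{2}\right)^{\nu + E/2}}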

  8. Biochemical Network Stochastic Simulator (BioNetS): software for stochastic modeling of biochemical networks

    Directory of Open Access Journals (Sweden)

    Elston Timothy C

    2004-03-01

    Background: Intrinsic fluctuations due to the stochastic nature of biochemical reactions can have large effects on the response of biochemical networks. This is particularly true for pathways that involve transcriptional regulation, where generally there are two copies of each gene and the number of messenger RNA (mRNA) molecules can be small. Therefore, there is a need for computational tools for developing and investigating stochastic models of biochemical networks. Results: We have developed the software package Biochemical Network Stochastic Simulator (BioNetS) for efficiently and accurately simulating stochastic models of biochemical networks. BioNetS has a graphical user interface that allows models to be entered in a straightforward manner, and allows the user to specify the type of random variable (discrete or continuous) for each chemical species in the network. The discrete variables are simulated using an efficient implementation of the Gillespie algorithm. For the continuous random variables, BioNetS constructs and numerically solves the appropriate chemical Langevin equations. The software package has been developed to scale efficiently with network size, thereby allowing large systems to be studied. BioNetS runs as a BioSpice agent and can be downloaded from http://www.biospice.org. BioNetS also can be run as a stand-alone package. All the required files are accessible from http://x.amath.unc.edu/BioNetS. Conclusions: We have developed BioNetS to be a reliable tool for studying the stochastic dynamics of large biochemical networks. Important features of BioNetS are its ability to handle hybrid models that consist of both continuous and discrete random variables and its ability to model cell growth and division. We have verified the accuracy and efficiency of the numerical methods by considering several test systems.
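
    To illustrate the discrete part of such a simulator, here is a minimal Gillespie direct-method sketch for a one-species birth-death model (constitutive transcription plus first-order mRNA degradation); the rate constants are arbitrary and this is not BioNetS code.

      import numpy as np

      def gillespie_mrna(k_tx=2.0, k_deg=0.1, t_end=200.0, seed=0):
          """Direct method for 0 -> mRNA (rate k_tx), mRNA -> 0 (rate k_deg*m)."""
          rng = np.random.default_rng(seed)
          t, m = 0.0, 0
          times, counts = [0.0], [0]
          while t < t_end:
              a_birth, a_death = k_tx, k_deg * m
              a_total = a_birth + a_death
              t += rng.exponential(1.0 / a_total)   # time to next reaction
              m += 1 if rng.random() < a_birth / a_total else -1
              times.append(t)
              counts.append(m)
          return np.array(times), np.array(counts)

      t, m = gillespie_mrna()
      print(m.mean())  # long-run mean should approach k_tx / k_deg = 20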

  9. Stochastic Thermodynamics: A Dynamical Systems Approach

    Directory of Open Access Journals (Sweden)

    Tanmay Rajpurohit

    2017-12-01

    In this paper, we develop an energy-based, large-scale dynamical system model driven by Markov diffusion processes to present a unified framework for statistical thermodynamics predicated on a stochastic dynamical systems formalism. Specifically, using a stochastic state space formulation, we develop a nonlinear stochastic compartmental dynamical system model characterized by energy conservation laws that is consistent with statistical thermodynamic principles. In particular, we show that the difference between the average supplied system energy and the average stored system energy for our stochastic thermodynamic model is a martingale with respect to the system filtration. In addition, we show that the average stored system energy is equal to the mean energy that can be extracted from the system and the mean energy that can be delivered to the system in order to transfer it from a zero energy level to an arbitrary nonempty subset in the state space over a finite stopping time.

  10. A Resampling-Based Stochastic Approximation Method for Analysis of Large Geostatistical Data

    KAUST Repository

    Liang, Faming

    2013-03-01

    The Gaussian geostatistical model has been widely used in modeling of spatial data. However, it is challenging to computationally implement this method because it requires the inversion of a large covariance matrix, particularly when there is a large number of observations. This article proposes a resampling-based stochastic approximation method to address this challenge. At each iteration of the proposed method, a small subsample is drawn from the full dataset, and then the current estimate of the parameters is updated accordingly under the framework of stochastic approximation. Since the proposed method makes use of only a small proportion of the data at each iteration, it avoids inverting large covariance matrices and thus is scalable to large datasets. The proposed method also leads to a general parameter estimation approach, maximum mean log-likelihood estimation, which includes the popular maximum (log)-likelihood estimation (MLE) approach as a special case and is expected to play an important role in analyzing large datasets. Under mild conditions, it is shown that the estimator resulting from the proposed method converges in probability to a set of parameter values of equivalent Gaussian probability measures, and that the estimator is asymptotically normally distributed. To the best of the authors' knowledge, the present study is the first one on asymptotic normality under infill asymptotics for general covariance functions. The proposed method is illustrated with large datasets, both simulated and real. Supplementary materials for this article are available online. © 2013 American Statistical Association.
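
    The update scheme can be sketched generically as follows; a Gaussian i.i.d. likelihood stands in for the paper's geostatistical model, and the subsample size and gain sequence are assumptions for illustration.

      import numpy as np

      def resampling_sa(data, subsample=100, n_iter=3000, seed=0):
          """Robbins-Monro stochastic approximation with resampling: each
          iteration scores only a small random subsample, so the full dataset
          is never processed at once.  Estimates (mu, v) of N(mu, v) data."""
          rng = np.random.default_rng(seed)
          mu, v = 0.0, 1.0
          for k in range(1, n_iter + 1):
              y = data[rng.choice(len(data), size=subsample, replace=False)]
              gamma = 5.0 / (k + 50)                       # decaying gain
              g_mu = np.mean(y - mu) / v                   # subsample score for mu
              g_v = np.mean((y - mu)**2 - v) / (2 * v**2)  # subsample score for v
              mu += gamma * g_mu
              v = max(v + gamma * g_v, 1e-6)
          return mu, v

      data = np.random.default_rng(1).normal(3.0, 2.0, size=1_000_000)
      print(resampling_sa(data))  # estimates should be close to (3.0, 4.0)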

  11. Stochastic multi-scale analysis of homogenised properties considering uncertainties in cellular solid microstructures using a first-order perturbation

    Directory of Open Access Journals (Sweden)

    Khairul Salleh Basaruddin

    Randomness in the microstructure due to variations in microscopic properties and geometrical information is used to predict the stochastically homogenised properties of cellular media. Two stochastic problems at the micro-scale level that commonly occur due to fabrication inaccuracies, degradation mechanisms or natural heterogeneity were analysed using a stochastic homogenisation method based on a first-order perturbation. First, the influence of Young's modulus variation in an adhesive on the macroscopic properties of an aluminium-adhesive honeycomb structure was investigated. The fluctuations in the microscopic properties were then combined by varying the microstructure periodicity in a corrugated-core sandwich plate to obtain the variation of the homogenised property. The numerical results show that the uncertainties in the microstructure affect the dispersion of the homogenised property. These results indicate the importance of the presented stochastic multi-scale analysis for the design and fabrication of cellular solids when considering microscopic random variation.

  12. Long-time analytic approximation of large stochastic oscillators: Simulation, analysis and inference.

    Directory of Open Access Journals (Sweden)

    Giorgos Minas

    2017-07-01

    In order to analyse large complex stochastic dynamical models such as those studied in systems biology, there is currently a great need both for analytical tools and for algorithms for accurate and fast simulation and estimation. We present a new stochastic approximation of biological oscillators that addresses these needs. Our method, called phase-corrected LNA (pcLNA), overcomes the main limitations of the standard Linear Noise Approximation (LNA) and remains uniformly accurate for long times, still maintaining the speed and analytical tractability of the LNA. As part of this, we develop analytical expressions for key probability distributions and associated quantities, such as the Fisher Information Matrix and Kullback-Leibler divergence, and we introduce a new approach to system-global sensitivity analysis. We also present algorithms for statistical inference and for long-term simulation of oscillating systems that are shown to be as accurate as, but much faster than, leaping algorithms and algorithms for integration of diffusion equations. Stochastic versions of published models of the circadian clock and NF-κB system are used to illustrate our results.

  13. KNO scaling functions given by Buras and Koba and by Barshay and Yamaguchi, and stochastic Rayleigh and Ornstein-Uhlenbeck processes

    International Nuclear Information System (INIS)

    Biyajima, M.

    1984-01-01

    The stochastic backgrounds of the KNO scaling functions given by Buras and Koba and by Barshay and Yamaguchi are investigated. It is found that they are connected with the stochastic Rayleigh process and with the (1+2)- and (1+4)-dimensional Ornstein-Uhlenbeck processes. Moreover, those KNO scaling functions are transformed into the KNO scaling functions given by the Perina-McGill formula via a nonlinear transformation. Data are analysed by means of these functions. Probability distributions of the former KNO scaling functions are also calculated by the Poisson transformation. (orig.)

  14. Multi-Period Natural Gas Market Modeling. Applications, Stochastic Extensions and Solution Approaches

    International Nuclear Information System (INIS)

    Egging, R.G.

    2010-11-01

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in the combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050 with nineteen regions and containing 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and varying depletion rates of gas reserves in the major gas importing regions. Outcomes for hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in

  15. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    Science.gov (United States)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising not only for gas storage in MOFs but also for many other materials science projects.
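
    A hedged sketch of such a screening workflow with scikit-learn, using synthetic descriptors and a synthetic uptake target in place of real MOF data (the paper's actual features, dataset and learner are not reproduced here):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.metrics import r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      # Hypothetical cheap descriptors, e.g. void fraction, surface area, ...
      X = rng.random((5000, 4))
      y = 10 * X[:, 0] + 5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, 5000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0)
      model.fit(X_tr, y_tr)
      print("held-out R^2:", r2_score(y_te, model.predict(X_te)))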

  16. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  17. A stochastic multiscale framework for modeling flow through random heterogeneous porous media

    International Nuclear Information System (INIS)

    Ganapathysubramanian, B.; Zabaras, N.

    2009-01-01

    Flow through porous media is ubiquitous, occurring from large geological scales down to the microscopic scales. Several critical engineering phenomena like contaminant spread, nuclear waste disposal and oil recovery rely on accurate analysis and prediction of these multiscale phenomena. Such analysis is complicated by inherent uncertainties as well as the limited information available to characterize the system. Any realistic modeling of these transport phenomena has to resolve two key issues: (i) the multi-length scale variations in permeability that these systems exhibit, and (ii) the inherently limited information available to quantify these property variations that necessitates posing these phenomena as stochastic processes. A stochastic variational multiscale formulation is developed to incorporate uncertain multiscale features. A stochastic analogue to a mixed multiscale finite element framework is used to formulate the physical stochastic multiscale process. Recent developments in linear and non-linear model reduction techniques are used to convert the limited information available about the permeability variation into a viable stochastic input model. An adaptive sparse grid collocation strategy is used to efficiently solve the resulting stochastic partial differential equations (SPDEs). The framework is applied to analyze flow through random heterogeneous media when only limited statistics about the permeability variation are given

  18. Suppression of large edge-localized modes in high-confinement DIII-D plasmas with a stochastic magnetic boundary.

    Science.gov (United States)

    Evans, T E; Moyer, R A; Thomas, P R; Watkins, J G; Osborne, T H; Boedo, J A; Doyle, E J; Fenstermacher, M E; Finken, K H; Groebner, R J; Groth, M; Harris, J H; La Haye, R J; Lasnier, C J; Masuzaki, S; Ohyabu, N; Pretty, D G; Rhodes, T L; Reimerdes, H; Rudakov, D L; Schaffer, M J; Wang, G; Zeng, L

    2004-06-11

    A stochastic magnetic boundary, produced by an applied edge resonant magnetic perturbation, is used to suppress most large edge-localized modes (ELMs) in high confinement (H-mode) plasmas. The resulting H mode displays rapid, small oscillations with a bursty character modulated by a coherent 130 Hz envelope. The H mode transport barrier and core confinement are unaffected by the stochastic boundary, despite a threefold drop in the toroidal rotation. These results demonstrate that stochastic boundaries are compatible with H modes and may be attractive for ELM control in next-step fusion tokamaks.

  1. Optimizing basin-scale coupled water quantity and water quality management with stochastic dynamic programming

    DEFF Research Database (Denmark)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo

    2015-01-01

    Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for oxygen deficit to compute the resulting minimum dissolved oxygen
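
    A minimal backward-recursion SDP for a single reservoir illustrates the optimization machinery; the discrete storage grid, three-point inflow distribution and quadratic curtailment penalty are assumptions, and the paper's coupled quantity-quality model is far richer.

      import numpy as np

      S = np.arange(0, 11)                      # discrete storage levels
      inflows = np.array([0, 2, 4])             # possible stage inflows
      p_inflow = np.array([0.3, 0.4, 0.3])      # their probabilities
      demand, T = 4, 12                         # demand per stage, no. of stages

      V = np.zeros(len(S))                      # terminal value function
      for t in reversed(range(T)):
          V_new = np.full(len(S), np.inf)
          for i, s in enumerate(S):
              for release in range(0, s + 1):   # feasible release decisions
                  cost = float(max(demand - release, 0))**2  # curtailment cost
                  future = sum(p * V[min(s - release + q, S[-1])]  # spill at top
                               for q, p in zip(inflows, p_inflow))
                  V_new[i] = min(V_new[i], cost + future)
          V = V_new
      print(V)  # expected minimum cost-to-go from each storage level at t = 0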

  2. Optimisation of timetable-based, stochastic transit assignment models based on MSA

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr

    2006-01-01

    (CRM), such a large-scale transit assignment model was developed and estimated. The Stochastic User Equilibrium problem was solved by the Method of Successive Averages (MSA). However, the model suffered from very large calculation times. The paper focuses on how to optimise transit assignment models...
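
    The averaging step of MSA is itself simple, which is why the optimisation effort concentrates on the loading step; a toy sketch with an assumed two-route logit loading (not the CRM model) is given below.

      import numpy as np

      def msa(load, n_iter=50):
          """Method of Successive Averages: blend each new stochastic loading
          y_k into the current flows x_k with the deterministic step 1/(k+1)."""
          x = load(None)                        # initial loading
          for k in range(1, n_iter):
              y = load(x)                       # auxiliary flows given x
              x = x + (y - x) / (k + 1)
          return x

      def load(x, demand=100.0):
          """Toy loading: logit split over two routes with flow-dependent costs."""
          x = np.zeros(2) if x is None else x
          cost = np.array([10 + 0.10 * x[0], 12 + 0.05 * x[1]])
          p = np.exp(-0.5 * cost)
          return demand * p / p.sum()

      print(msa(load))  # approximate stochastic user equilibrium route flows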

  3. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  4. Homogenization of the stochastic Navier–Stokes equation with a stochastic slip boundary condition

    KAUST Repository

    Bessaih, Hakima

    2015-11-02

    The two-dimensional Navier–Stokes equation in a perforated domain with a dynamical slip boundary condition is considered. We assume that the dynamics is driven by a stochastic perturbation on the interior of the domain and another stochastic perturbation on the boundaries of the holes. We consider a scaling (ε for the viscosity and 1 for the density) that will lead to a time-dependent limit problem. However, the noncritical scaling (ε^β, β > 1) is considered in front of the nonlinear term. The homogenized system in the limit is obtained as a Darcy's law with memory with two permeabilities and an extra term that is due to the stochastic perturbation on the boundary of the holes. The nonhomogeneity on the boundary contains a stochastic part that yields in the limit an additional term in the Darcy's law. We use the two-scale convergence method, after extending the solution with 0 inside the holes, to pass to the limit. By Itô stochastic calculus, we get uniform estimates on the solution in appropriate spaces. Due to the stochastic integral, the pressure that appears in the variational formulation does not have enough regularity in time. This fact made us rely only on the variational formulation for the passage to the limit on the solution. We obtain a variational formulation for the limit that is the solution of a Stokes system with two pressures. This two-scale limit gives rise to three cell problems: two of them give the permeabilities, while the third one gives an extra term in the Darcy's law due to the stochastic perturbation on the boundary of the holes.

  5. The Schroedinger-Poisson equations as the large-N limit of the Newtonian N-body system. Applications to the large scale dark matter dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Briscese, Fabio [Northumbria University, Department of Mathematics, Physics and Electrical Engineering, Newcastle upon Tyne (United Kingdom); Citta Universitaria, Istituto Nazionale di Alta Matematica Francesco Severi, Gruppo Nazionale di Fisica Matematica, Rome (Italy)

    2017-09-15

    In this paper it is argued how the dynamics of the classical Newtonian N-body system can be described in terms of the Schroedinger-Poisson equations in the large N limit. This result is based on the stochastic quantization introduced by Nelson, and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as ℏ ∝ M^{5/3} G^{1/2} (N/⟨ρ⟩)^{1/6}, where G is the gravitational constant, N and M are the number and the mass of the bodies, and ⟨ρ⟩ is their average density. The relevance of this result in the context of large scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schroedinger method as a numerical double of the N-body simulations of dark matter dynamics at large cosmological scales. (orig.)

  6. Large-scale solar purchasing

    International Nuclear Information System (INIS)

    1999-01-01

    The principal objective of the project was to participate in the definition of a new IEA task concerning solar procurement (''the Task'') and to assess whether involvement in the task would be in the interest of the UK active solar heating industry. The project also aimed to assess the importance of large scale solar purchasing to UK active solar heating market development and to evaluate the level of interest in large scale solar purchasing amongst potential large scale purchasers (in particular housing associations and housing developers). A further aim of the project was to consider means of stimulating large scale active solar heating purchasing activity within the UK. (author)

  7. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Ultra-large multi-agent systems are becoming increasingly popular due to the rapid decline of individual production costs and the potential for speeding up the solving of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, and cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually treated within the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated behaviour, which may even be autonomous. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the component autonomous activity, and the entire swarm can be abstracted away as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  8. Stochastic background of gravitational waves from hybrid preheating.

    Science.gov (United States)

    García-Bellido, Juan; Figueroa, Daniel G

    2007-02-09

    The process of reheating the Universe after hybrid inflation is extremely violent. It proceeds through the nucleation and subsequent collision of large concentrations of energy density in bubblelike structures, which generate a significant fraction of energy in the form of gravitational waves. We study the power spectrum of the stochastic background of gravitational waves produced at reheating after hybrid inflation. We find that the amplitude could be significant for high-scale models, although the typical frequencies are well beyond what could be reached by planned gravitational wave observatories. On the other hand, low-scale models could still produce a detectable stochastic background at frequencies accessible to those detectors. The discovery of such a background would open a new window into the very early Universe.

  9. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, the Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging method and a Dynamic Kriging (DKG) method, is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver
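
    For concreteness, a bare-bones Gaussian RBF metamodel of the kind being compared can be hand-rolled in a few lines; the kernel width, regularization jitter and test function below are assumptions.

      import numpy as np

      def rbf_fit_predict(X_tr, y_tr, X_te, eps=1.0):
          """Gaussian RBF interpolation: solve Phi w = y on training points,
          then evaluate the weighted kernel sum at the test points."""
          d = np.linalg.norm(X_tr[:, None, :] - X_tr[None, :, :], axis=-1)
          Phi = np.exp(-(eps * d)**2)
          w = np.linalg.solve(Phi + 1e-10 * np.eye(len(X_tr)), y_tr)
          d_te = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=-1)
          return np.exp(-(eps * d_te)**2) @ w

      rng = np.random.default_rng(0)
      X = rng.random((100, 2))                  # "expensive" training points
      y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])
      X_test = rng.random((1000, 2))
      y_test = np.sin(3 * X_test[:, 0]) * np.cos(3 * X_test[:, 1])
      err = np.sqrt(np.mean((rbf_fit_predict(X, y, X_test) - y_test)**2))
      print("RMS metamodel error:", err)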

  10. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.

  11. arXiv Stochastic locality and master-field simulations of very large lattices

    CERN Document Server

    Lüscher, Martin

    2018-01-01

    In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.

  12. Research on unit commitment with large-scale wind power connected power system

    Science.gov (United States)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. After classifying the models according to their objective functions and constraints, their structures and characteristics are summarized. Finally, the issues still to be solved and possible directions of future research and development are discussed, with a view to the requirements of the electricity market, energy-saving generation dispatch and the smart grid, providing a reference for researchers and practitioners in this field.

  13. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
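
    The structural fact behind this model choice — an AR(1) signal observed through additive white noise is exactly an ARMA(1,1) process — can be checked with a short sketch (statsmodels provides the ML fit; all numbers are illustrative, not the RB reactor data):

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      n, phi = 5000, 0.95           # phi plays the role of exp(-alpha*dt)
      x = np.zeros(n)
      for t in range(1, n):         # latent AR(1) "neutron" signal
          x[t] = phi * x[t - 1] + rng.normal()
      observed = x + 5.0 * rng.normal(size=n)   # large measurement noise

      fit = ARIMA(observed, order=(1, 0, 1)).fit()   # ARMA(1,1) via ML
      print(fit.params)  # the AR coefficient should recover phi despite the noise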

  14. Assessing Effects of Joining Common Currency Area with Large-Scale DSGE model: A Case of Poland

    OpenAIRE

    Maciej Bukowski; Sebastian Dyrda; Pawe³ Kowal

    2008-01-01

    In this paper we present a large-scale dynamic stochastic general equilibrium model in order to analyze and simulate the effects of Euro introduction in Poland. The presented framework is based on a two-country open economy model, where the foreign economy acts as the Eurozone and the home economy as a candidate country. We have implemented various types of structural frictions in the open economy block that generate empirically observable deviations from the purchasing power parity rule. We consider such mechanisms as a d...

  15. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    likelihood that a change in the activity of one neuronal population affects the activity in another. We argue that these measures access the inherently probabilistic nature of causal influences in the brain, and are thus better suited for large-scale brain network analysis than are DC-based measures. Our work is consistent with recent advances in the philosophical study of probabilistic causality, which originated from inherent conceptual problems with deterministic regularity theories. It also resonates with concepts of stochasticity that were involved in establishing modern physics. In summary, we argue that probabilistic causality is a conceptually appropriate foundation for describing neural causality in the brain.

  16. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  17. Collective enhancement of inclusive cross sections at large transverse momentum in stochastic-field multiparticle theory

    International Nuclear Information System (INIS)

    Arnold, R.C.

    1976-01-01

    A stochastic-field calculus, previously discussed in connection with Regge intercepts and instability questions, is applied to inclusive cross sections and is shown to predict a growth with energy of large-P⊥ inclusives

  18. Stochastic neuron models

    CERN Document Server

    Greenwood, Priscilla E

    2016-01-01

    This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...

  1. SUPPRESSION OF LARGE EDGE LOCALIZED MODES IN HIGH CONFINEMENT DIII-D PLASMAS WITH A STOCHASTIC MAGNETIC BOUNDARY

    International Nuclear Information System (INIS)

    EVANS, TE; MOYER, RA; THOMAS, PR; WATKINS, JG; OSBORNE, TH; BOEDO, JA; FENSTERMACHER, ME; FINKEN, KH; GROEBNER, RJ; GROTH, M; HARRIS, JH; LAHAYE, RJ; LASNIER, CJ; MASUZAKI, S; OHYABU, N; PRETTY, D; RHODES, TL; REIMERDES, H; RUDAKOV, DL; SCHAFFER, MJ; WANG, G; ZENG, L.

    2003-01-01

    A stochastic magnetic boundary, produced by an externally applied edge resonant magnetic perturbation, is used to suppress large edge localized modes (ELMs) in high confinement (H-mode) plasmas. The resulting H-mode displays rapid, small oscillations with a bursty character modulated by a coherent 130 Hz envelope. The H-mode transport barrier is unaffected by the stochastic boundary. The core confinement of these discharges is unaffected, despite a three-fold drop in the toroidal rotation in the plasma core. These results demonstrate that stochastic boundaries are compatible with H-modes and may be attractive for ELM control in next-step burning fusion tokamaks

  2. Stochastic growth of localized plasma waves

    International Nuclear Information System (INIS)

    Robinson, P.A.; Cairns, Iver H.

    2001-01-01

    Localized bursty plasma waves are detected by spacecraft in many space plasmas. The large spatiotemporal scales involved imply that beam and other instabilities relax to marginal stability and that mean wave energies are low. Stochastic wave growth occurs when ambient fluctuations perturb the system, causing fluctuations about marginal stability. This yields regions where growth is enhanced and others where damping is increased; bursts are associated with enhanced growth and can occur even when the mean growth rate is negative. In stochastic growth, energy loss from the source is suppressed relative to secular growth, preserving it far longer than otherwise possible. Linear stochastic growth can operate at wave levels below thresholds of nonlinear wave-clumping mechanisms such as strong-turbulence modulational instability and is not subject to their coherence and wavelength limits. These mechanisms can be distinguished by statistics of the fields, whose strengths are lognormally distributed if stochastically growing and power-law distributed in strong turbulence. Recent applications of stochastic growth theory (SGT) are described, involving bursty plasma waves and unstable particle distributions in type III solar radio sources, the Earth's foreshock, magnetosheath, and polar cap regions. It is shown that when combined with wave-wave processes, SGT also accounts for associated radio emissions

  3. Flows, scaling, and the control of moment hierarchies for stochastic chemical reaction networks

    Science.gov (United States)

    Smith, Eric; Krishnamurthy, Supriya

    2017-12-01

    Stochastic chemical reaction networks (CRNs) are complex systems that combine the features of concurrent transformation of multiple variables in each elementary reaction event and nonlinear relations between states and their rates of change. Most general results concerning CRNs are limited to restricted cases where a topological characteristic known as deficiency takes a value 0 or 1, implying uniqueness and positivity of steady states and surprising, low-information forms for their associated probability distributions. Here we derive equations of motion for fluctuation moments at all orders for stochastic CRNs at general deficiency. We show, for the standard base case of proportional sampling without replacement (which underlies the mass-action rate law), that the generator of the stochastic process acts on the hierarchy of factorial moments with a finite representation. Whereas simulation of high-order moments for many-particle systems is costly, this representation reduces the solution of moment hierarchies to a complexity comparable to solving a heat equation. At steady states, moment hierarchies for finite CRNs interpolate between low-order and high-order scaling regimes, which may be approximated separately by distributions similar to those for deficiency-zero networks and connected through matched asymptotic expansions. In CRNs with multiple stable or metastable steady states, boundedness of high-order moments provides the starting condition for recursive solution downward to low-order moments, reversing the order usually used to solve moment hierarchies. A basis for a subset of network flows defined by having the same mean-regressing property as the flows in deficiency-zero networks gives the leading contribution to low-order moments in CRNs at general deficiency, in a 1/n expansion in large particle numbers. Our results give a physical picture of the different informational roles of mean-regressing and non-mean-regressing flows and clarify the dynamical

  4. Stochastic lag time in nucleated linear self-assembly

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, Nitin S. [Group Theory of Polymers and Soft Matter, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Schoot, Paul van der [Group Theory of Polymers and Soft Matter, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands); Institute for Theoretical Physics, Utrecht University, Leuvenlaan 4, 3584 CE Utrecht (Netherlands)

    2016-06-21

    Protein aggregation is of great importance in biology, e.g., in amyloid fibrillation. The aggregation processes that occur at the cellular scale must be highly stochastic in nature because of the statistical number fluctuations that arise on account of the small system size at the cellular scale. We study the nucleated reversible self-assembly of monomeric building blocks into polymer-like aggregates using the method of kinetic Monte Carlo. Kinetic Monte Carlo, being inherently stochastic, allows us to study the impact of fluctuations on the polymerization reactions. One of the most important characteristic features in this kind of problem is the existence of a lag phase before self-assembly takes off, which is what we focus attention on. We study the associated lag time as a function of system size and kinetic pathway. We find that the leading order stochastic contribution to the lag time before polymerization commences is inversely proportional to the system volume for large-enough system size for all nine reaction pathways tested. Finite-size corrections to this do depend on the kinetic pathway.
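    The kinetic Monte Carlo method used here is, for well-mixed reactions, the Gillespie stochastic simulation algorithm. A minimal sketch for one nucleated, reversible pathway (a dimeric nucleus, growth and shrinkage by single monomers; the rate constants are illustrative and not the paper's nine pathways):

```python
import numpy as np

rng = np.random.default_rng(1)

def lag_time(n_total, k_nuc=1e-6, k_grow=1e-3, k_off=0.1, frac=0.1):
    """Gillespie SSA for nucleated, reversible polymerization.
    Returns the lag time, defined here as the time until a fraction
    `frac` of all monomers has polymerized."""
    m, n_pol, mass, t = n_total, 0, 0, 0.0
    while mass < frac * n_total:
        a = np.array([
            k_nuc * m * (m - 1) / 2.0,                   # nucleation: 2 M -> new polymer
            k_grow * n_pol * m,                          # growth: polymer end captures a monomer
            k_off * n_pol if mass > 2 * n_pol else 0.0,  # shrinkage (never below nucleus size)
        ])
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)                   # exponential waiting time
        r = np.searchsorted(np.cumsum(a), a0 * rng.random(), side="right")
        if r == 0:
            m, n_pol, mass = m - 2, n_pol + 1, mass + 2
        elif r == 1:
            m, mass = m - 1, mass + 1
        else:
            m, mass = m + 1, mass - 1
    return t

# run-to-run spread of the lag time narrows as the system grows
for n in (200, 400, 800):
    lags = [lag_time(n) for _ in range(25)]
    print(n, round(np.mean(lags), 1), round(np.std(lags), 1))
```

    A strict volume scan would also rescale the bimolecular rate constants by 1/V; the sketch only illustrates how the stochastic spread of the lag time narrows with system size.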

  5. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
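    The paper's solvers are intrusive (the polynomial chaos coefficients are coupled into one large system); the simplest non-intrusive point of contrast is Monte Carlo sampling of the random coefficient, with one small deterministic solve per realization, as in this 1-D sketch (lognormal coefficient built from a few random Fourier modes; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples = 100, 500
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]

def solve(kappa):
    """Solve -(kappa u')' = 1 on (0,1), u(0)=u(1)=0, kappa given on cell midpoints."""
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (kappa[i] + kappa[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -kappa[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -kappa[i + 1] / h**2
    return np.linalg.solve(A, np.ones(n - 1))

mid = 0.5 * (x[:-1] + x[1:])
sols = []
for _ in range(n_samples):
    # random smooth log-coefficient (illustrative, not a Karhunen-Loeve expansion)
    g = sum(rng.normal(0, 1.0 / (k + 1)) * np.sin((k + 1) * np.pi * mid) for k in range(8))
    sols.append(solve(np.exp(0.5 * g)))
sols = np.asarray(sols)
print("mid-point solution:", sols[:, (n - 1) // 2].mean(), "+/-", sols[:, (n - 1) // 2].std())
```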

  6. Application of Stochastic Unsaturated Flow Theory, Numerical Simulations, and Comparisons to Field Observations

    DEFF Research Database (Denmark)

    Jensen, Karsten Høgh; Mantoglou, Aristotelis

    1992-01-01

    unsaturated flow equation representing the mean system behavior is solved using a finite difference numerical solution technique. The effective parameters are evaluated from the stochastic theory formulas before entering them into the numerical solution for each iteration. The stochastic model is applied...... seems to offer a rational framework for modeling large-scale unsaturated flow and estimating areal averages of soil-hydrological processes in spatially variable soils....

  7. The stochastic effects on the Brazilian Electrical Sector

    International Nuclear Information System (INIS)

    Ferreira, Pedro Guilherme Costa; Oliveira, Fernando Luiz Cyrino; Souza, Reinaldo Castro

    2015-01-01

    The size and characteristics of the Brazilian Electrical Sector (BES) are unique. The system includes a large-scale hydrothermal power system with many hydroelectric plants and multiple owners. Due to the historical harnessing of natural resources, the National Interconnected System (NIS) was developed outside of the economic scale of the BES. The central components of the NIS enable energy generated in any part of Brazil to be consumed in distant regions, considering certain technical configurations. This interconnection results in a large-scale complex system and is controlled by robust computational models, used to support the planning and operation of the NIS. This study presents a different vision of the BES, demonstrating the intrinsic relationship between hydrological stochasticity and the activities executed by the system, which is an important sector of the infrastructure in Brazil. The simulation of energy scenarios is crucial to operating the sector optimally and to supporting decisions about whether expansion is necessary, thus avoiding unnecessary costs and/or losses. These scenarios are an imposing factor in the determination of the spot cost of electrical energy, given that the simulated quantities of water in the reservoirs are one of the determinants of the short-term energy price. - Highlights: • The relationship between the hydrological regimes and energy policy and planning in Brazil; • An overview of the stochastic effects on the Brazilian Electrical Sector; • The stochasticity associated with Brazilian electrical planning; • The importance of hydro resources management for energy generation in Brazil;

  8. Energy transfers in large-scale and small-scale dynamos

    Science.gov (United States)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and for Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy flux, similar to SSD.

  9. Projection Effects of Large-scale Structures on Weak-lensing Peak Abundances

    Science.gov (United States)

    Yuan, Shuo; Liu, Xiangkun; Pan, Chuzhong; Wang, Qiao; Fan, Zuhui

    2018-04-01

    High peaks in weak lensing (WL) maps originate dominantly from the lensing effects of single massive halos. Their abundance is therefore closely related to the halo mass function and thus a powerful cosmological probe. However, besides individual massive halos, large-scale structures (LSS) along lines of sight also contribute to the peak signals. In this paper, with ray-tracing simulations, we investigate the LSS projection effects. We show that for current surveys with a large shape noise, the stochastic LSS effects are subdominant. For future WL surveys with source galaxies having a median redshift z_med ∼ 1 or higher, however, they are significant. For the cosmological constraints derived from observed WL high-peak counts, severe biases can occur if the LSS effects are not taken into account properly. We extend the model of Fan et al. by incorporating the LSS projection effects into the theoretical considerations. By comparing with simulation results, we demonstrate the good performance of the improved model and its applicability in cosmological studies.

  10. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented for heat exchanger networks. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by the Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors are deduced that describe the heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.
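    The record does not spell out the PPSO update rules, but the combination it names, a stochastic particle swarm global phase followed by Powell's deterministic derivative-free local polish, can be sketched generically (the Rastrigin function below stands in for the sub-network cost model):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(z):                      # stand-in for the sub-network cost
    return float(np.sum(z**2 - 10 * np.cos(2 * np.pi * z) + 10))

dim, n_part, iters = 4, 30, 100
lo, hi = -5.0, 5.0
x = rng.uniform(lo, hi, (n_part, dim))          # particle positions
v = np.zeros_like(x)                            # velocities
p_best = x.copy()
p_val = np.array([objective(p) for p in x])
g_best = p_best[p_val.argmin()].copy()

for _ in range(iters):                          # stochastic global phase: PSO
    r1, r2 = rng.random((2, n_part, dim))
    v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([objective(p) for p in x])
    better = f < p_val
    p_best[better], p_val[better] = x[better], f[better]
    g_best = p_best[p_val.argmin()].copy()

# deterministic local phase: polish the swarm's best point with Powell's method
res = minimize(objective, g_best, method="Powell")
print(res.x, res.fun)
```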

  11. Scaling A Moment-Rate Function For Small To Large Magnitude Events

    Science.gov (United States)

    Archuleta, Ralph; Ji, Chen

    2017-04-01

    Since the 1980s seismologists have recognized that peak ground acceleration (PGA) and peak ground velocity (PGV) scale differently with magnitude for large and moderate earthquakes. In a recent paper (Archuleta and Ji, GRL 2016) we introduced an apparent moment-rate function (aMRF) that accurately predicts the scaling with magnitude of PGA, PGV, PWA (Wood-Anderson displacement) and the ratio PGA/(2πPGV) (dominant frequency) for earthquakes 3.3 ≤ M ≤ 5.3. This apparent moment-rate function is controlled by two temporal parameters, t_p and t_d, which are related to the time for the moment-rate function to reach its peak amplitude and the total duration of the earthquake, respectively. These two temporal parameters lead to a Fourier amplitude spectrum (FAS) of displacement that has two corners, between which the spectral amplitudes decay as 1/f, where f denotes frequency. At higher or lower frequencies, the FAS of the aMRF looks like a single-corner Aki-Brune omega-squared spectrum. However, in the presence of attenuation the higher corner is almost certainly masked. Attempting to correct the spectrum to an Aki-Brune omega-squared spectrum will produce an "apparent" corner frequency that falls between the two corner frequencies of the aMRF. We reason that the two corners of the aMRF are the reason that seismologists deduce a stress drop (e.g., Allmann and Shearer, JGR 2009) that is generally much smaller than the stress parameter used to produce ground motions from stochastic simulations (e.g., Boore, 2003 Pageoph.). The presence of two corners for the smaller magnitude earthquakes leads to several questions. Can deconvolution be successfully used to determine scaling from small to large earthquakes? Equivalently will large earthquakes have a double corner? If large earthquakes are the sum of many smaller magnitude earthquakes, what should the displacement FAS look like for a large magnitude earthquake? Can a combination of such a double-corner spectrum and random
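    The abstract does not quote the functional form of the aMRF spectrum, but any displacement FAS that is flat below a first corner f_1, decays as 1/f between the corners, and rolls off as 1/f² above the second corner f_2 can be sketched as a product of two single-pole factors:

```latex
|\hat{u}(f)| \;\propto\; \frac{M_0}{\bigl(1 + f/f_1\bigr)\,\bigl(1 + f/f_2\bigr)},
\qquad f_1 \sim \frac{1}{t_d}, \qquad f_2 \sim \frac{1}{t_p}.
```

    The order-of-magnitude association of the corners with the duration t_d and the peak time t_p is our reading of the abstract, not a quoted result.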

  12. What Shapes the Phylogenetic Structure of Anuran Communities in a Seasonal Environment? The Influence of Determinism at Regional Scale to Stochasticity or Antagonistic Forces at Local Scale.

    Science.gov (United States)

    Martins, Clarissa de Araújo; Roque, Fabio de Oliveira; Santos, Bráulio A; Ferreira, Vanda Lúcia; Strüssmann, Christine; Tomas, Walfrido Moraes

    2015-01-01

    Ecological communities are structured by both deterministic and stochastic processes. We investigated phylogenetic patterns at regional and local scales to understand the influences of seasonal processes in shaping the structure of anuran communities in the southern Pantanal wetland, Brazil. We assessed the phylogenetic structure at different scales, using the Net Relatedness Index (NRI), the Nearest Taxon Index (NTI), and phylobetadiversity indexes, as well as a permutation test, to evaluate the effect of seasonality. The anuran community was represented by a non-random set of species with a high degree of phylogenetic relatedness at the regional scale. However, at the local scale the phylogenetic structure of the community was weakly related with the seasonality of the system, indicating that oriented stochastic processes (e.g. colonization, extinction and ecological drift) and/or antagonist forces drive the structure of such communities in the southern Pantanal.

  13. On the Fluctuating Component of the Sun's Large-Scale Magnetic Field

    Science.gov (United States)

    Wang, Y.-M.; Sheeley, N. R., Jr.

    2003-06-01

    The Sun's large-scale magnetic field and its proxies are known to undergo substantial variations on timescales much less than a solar cycle but longer than a rotation period. Examples of such variations include the double activity maximum inferred by Gnevyshev, the large peaks in the interplanetary field strength observed in 1982 and 1991, and the 1.3-1.4 yr periodicities detected over limited time intervals in solar wind speed and geomagnetic activity. We consider the question of the extent to which these variations are stochastic in nature. For this purpose, we simulate the evolution of the Sun's equatorial dipole strength and total open flux under the assumption that the active region sources (BMRs) are distributed randomly in longitude. The results are then interpreted with the help of a simple random walk model including dissipation. We find that the equatorial dipole and open flux generally exhibit multiple peaks during each 11 yr cycle, with the highest peak as likely to occur during the declining phase as at sunspot maximum. The widths of the peaks are determined by the timescale τ ~ 1 yr for the equatorial dipole to decay through the combined action of meridional flow, differential rotation, and supergranular diffusion. The amplitudes of the fluctuations depend on the strengths and longitudinal phase relations of the BMRs, as well as on the relative rates of flux emergence and decay. We conclude that stochastic processes provide a viable explanation for the "Gnevyshev gaps" and for the existence of quasi-periodicities in the range ~1-3 yr.
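    A minimal version of the random-walk-with-dissipation picture: randomly phased BMR sources kick a two-component equatorial dipole vector, which decays on the timescale τ ≈ 1 yr (the source rate and kick amplitude below are illustrative, not the paper's calibrated values):

```python
import numpy as np

rng = np.random.default_rng(2)
tau = 1.0       # decay time of the equatorial dipole, years
dt = 0.01       # time step, years
t_end = 11.0    # one activity cycle
rate = 100.0    # BMR emergence rate per year (illustrative)

d = np.zeros(2)  # equatorial dipole vector (two components)
history = []
for _ in range(int(t_end / dt)):
    d *= np.exp(-dt / tau)                     # dissipation
    for _ in range(rng.poisson(rate * dt)):    # random BMR sources
        phi = rng.uniform(0, 2 * np.pi)        # random longitude
        d += 0.1 * np.array([np.cos(phi), np.sin(phi)])
    history.append(np.hypot(*d))

history = np.array(history)
# multiple peaks per simulated cycle, with widths set by tau
print("max/mean dipole strength:", history.max() / history.mean())
```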

  14. Large-scale data analytics

    CERN Document Server

    Gkoulalas-Divanis, Aris

    2014-01-01

    Provides cutting-edge research in large-scale data analytics from diverse scientific areas Surveys varied subject areas and reports on individual results of research in the field Shares many tips and insights into large-scale data analytics from authors and editors with long-term experience and specialization in the field

  15. Hill functions for stochastic gene regulatory networks from master equations with split nodes and time-scale separation

    Science.gov (United States)

    Lipan, Ovidiu; Ferwerda, Cameron

    2018-02-01

    The deterministic Hill function depends only on the average values of molecule numbers. To account for the fluctuations in the molecule numbers, the argument of the Hill function needs to contain the means, the standard deviations, and the correlations. Here we present a method that allows for stochastic Hill functions to be constructed from the dynamical evolution of stochastic biocircuits with specific topologies. These stochastic Hill functions are presented in a closed analytical form so that they can be easily incorporated in models for large genetic regulatory networks. Using a repressive biocircuit as an example, we show by Monte Carlo simulations that the traditional deterministic Hill function mispredicts the time of repression by two orders of magnitude. However, the stochastic Hill function was able to capture the fluctuations and thus accurately predicted the time of repression.
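    The paper's closed-form stochastic Hill functions are not reproduced here, but the reason the argument must carry standard deviations as well as means can be illustrated with the generic second-order moment correction E[H(X)] ≈ H(μ) + ½ H''(μ) σ², which already departs strongly from H(μ) when fluctuations are large:

```python
import numpy as np

rng = np.random.default_rng(0)

def hill_repress(x, K=50.0, n=4):
    """Deterministic repressive Hill function of repressor copy number x."""
    return 1.0 / (1.0 + (x / K) ** n)

def hill_fluct(mu, sigma, K=50.0, n=4):
    """Generic second-order fluctuation correction E[H(X)] ~ H(mu) + 0.5*H''(mu)*sigma^2.
    An illustration of why the argument needs the standard deviation;
    NOT the paper's closed analytical form."""
    h = 1e-3 * K
    d2 = (hill_repress(mu + h, K, n) - 2 * hill_repress(mu, K, n)
          + hill_repress(mu - h, K, n)) / h**2
    return hill_repress(mu, K, n) + 0.5 * d2 * sigma**2

mu = 45.0
for sigma in (0.0, 10.0, 20.0):
    x = np.clip(rng.normal(mu, sigma, 200_000), 0.0, None)  # copy numbers >= 0
    print(f"sigma={sigma:5.1f}  H(mu)={hill_repress(mu):.3f}  "
          f"corrected={hill_fluct(mu, sigma):.3f}  MC={hill_repress(x).mean():.3f}")
```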

  16. Cosmic microwave background bispectrum from primordial magnetic fields on large angular scales.

    Science.gov (United States)

    Seshadri, T R; Subramanian, Kandaswamy

    2009-08-21

    Primordial magnetic fields lead to non-Gaussian signals in the cosmic microwave background (CMB) even at the lowest order, as magnetic stresses and the temperature anisotropy they induce depend quadratically on the magnetic field. In contrast, CMB non-Gaussianity due to inflationary scalar perturbations arises only as a higher-order effect. We propose a novel probe of stochastic primordial magnetic fields that exploits the characteristic CMB non-Gaussianity that they induce. We compute the CMB bispectrum b(l1,l2,l3) induced by such fields on large angular scales. We find a typical value of l1(l1+1) l3(l3+1) b(l1,l2,l3) ≈ 10^(-22), for magnetic fields of strength B0 ≈ 3 nG and with a nearly scale invariant magnetic spectrum. Observational limits on the bispectrum allow us to set upper limits of B0 ≈ 35 nG.

  17. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research done so far by SINTEF Energy Research shows that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  18. Stochastic dynamics of genetic broadcasting networks

    Science.gov (United States)

    Potoyan, Davit A.; Wolynes, Peter G.

    2017-11-01

    The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this time-scale crisis for clearance in a large broadcasting network can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying a model of the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.

  19. The interpolation method of stochastic functions and the stochastic variational principle

    International Nuclear Information System (INIS)

    Liu Xianbin; Chen Qiu

    1993-01-01

    Uncertainties have been attracting increasing attention in modern engineering structural design. Viewed on an appropriate scale, the inherent physical attributes (material properties) of many structural systems always exhibit some patterns of random variation in space and time; generally the random variation appears as a small parameter fluctuation. For a linear mechanical system, the random variation is modeled as a random variation of a linear partial differential operator and, in the stochastic finite element method, as a random variation of a stiffness matrix. Besides the stochasticity of the structural physical properties, the influences of random loads, which always represent themselves as random boundary conditions, bring about many more complexities in structural analysis. Now the stochastic finite element method, or the probabilistic finite element method, is used to study structural systems with random physical parameters, whether or not the loads are random. Differing from general finite element theory, the main difficulty the stochastic finite element method faces is the inverse operation of stochastic operators and stochastic matrices, since the inverse operators and inverse matrices are statistically correlated to the random parameters and random loads. So far, many efforts have been made to obtain reasonably approximate expressions of the inverse operators and inverse matrices, such as the Perturbation Method, the Neumann Expansion Method, the Galerkin Method (in appropriate Hilbert spaces defined for random functions), and the Orthogonal Expansion Method. Among these methods, the Perturbation Method appears to be the most practical. The advantage of these methods is that fairly accurate response statistics can be obtained under the condition of finite information on the input. However, the second-order statistics obtained by use of the Perturbation Method and the Neumann Expansion Method are not always the appropriate ones, because the relevant second

  20. Ethics of large-scale change

    OpenAIRE

    Arler, Finn

    2006-01-01

      The subject of this paper is long-term large-scale changes in human society. Some very significant examples of large-scale change are presented: human population growth, human appropriation of land and primary production, the human use of fossil fuels, and climate change. The question is posed, which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, th...

  1. Emergence of fractal scale-free networks from stochastic evolution on the Cayley tree

    Energy Technology Data Exchange (ETDEWEB)

    Chełminiak, Przemysław, E-mail: geronimo@amu.edu.pl

    2013-11-29

    An unexpected recognition of fractal topology in some real-world scale-free networks has revived interest in the mechanisms stimulating their evolution. To explain this phenomenon a few models of deterministic construction as well as of probabilistic growth controlled by a tunable parameter have been proposed so far. A quite different approach, based on the fully stochastic evolution of fractal scale-free networks and presented in this Letter, counterpoises these former ideas. It is argued that the diffusive evolution of the network on the Cayley tree shapes its fractality, self-similarity and branching-number criticality without any control parameter. The last attribute of the scale-free network is an intrinsic property of the skeleton, a special type of spanning tree which determines its fractality.

  2. Stochastic fractional differential equations: Modeling, method and analysis

    International Nuclear Information System (INIS)

    Pedjeu, Jean-C.; Ladde, Gangaram S.

    2012-01-01

    By introducing a concept of dynamic processes operating under multiple time scales in science and engineering, a mathematical model described by a system of multi-time scale stochastic differential equations is formulated. The classical Picard–Lindelöf successive approximations scheme is applied to the model validation problem, namely, existence and uniqueness of the solution process. Naturally, this leads to the problem of finding closed form solutions of both linear and nonlinear multi-time scale stochastic differential equations of Itô–Doob type. Finally, to illustrate the scope of the ideas and results presented, multi-time scale stochastic models for ecological and epidemiological processes in population dynamics are outlined.
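    When closed-form solutions of such Itô-type equations are out of reach, the standard numerical workhorse is the Euler–Maruyama scheme; a minimal sketch for a linear two-time-scale pair (a fast Ornstein–Uhlenbeck component driving a slow one; coefficients are illustrative, and the fractional terms of the paper's Itô–Doob models are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# dX = -(1/eps) * X dt + sigma dW : fast Ornstein-Uhlenbeck component
# dY = (X - Y) dt                 : slow component driven by the fast one
eps, sigma = 0.01, 1.0
dt, n = 1e-4, 100_000

x, y = 0.0, 0.0
xs = np.empty(n)
for i in range(n):
    dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
    x += -(x / eps) * dt + sigma * dw          # Euler-Maruyama step, fast scale
    y += (x - y) * dt                          # slow scale
    xs[i] = x

# the fast component equilibrates to variance sigma^2 * eps / 2
print("Var(X) ~", xs[n // 2:].var(), "theory:", sigma**2 * eps / 2)
```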

  3. Multi-scenario modelling of uncertainty in stochastic chemical systems

    International Nuclear Information System (INIS)

    Evans, R. David; Ricardez-Sandoval, Luis A.

    2014-01-01

    Uncertainty analysis has not been well studied at the molecular scale, despite extensive knowledge of uncertainty in macroscale systems. The ability to predict the effect of uncertainty allows for robust control of small scale systems such as nanoreactors, surface reactions, and gene toggle switches. However, it is difficult to model uncertainty in such chemical systems as they are stochastic in nature and require a large computational cost. To address this issue, a new model of uncertainty propagation in stochastic chemical systems, based on the Chemical Master Equation, is proposed in the present study. The uncertain solution is approximated by a composite state composed of the averaged effect of samples from the uncertain parameter distributions. This model is then used to study the effect of uncertainty on an isomerization system and a two-gene regulation network called a repressilator. The results of this model show that uncertainty in stochastic systems is dependent on both the uncertain distribution and the system under investigation. -- Highlights: • A method to model uncertainty in stochastic systems was developed. • The method is based on the Chemical Master Equation. • Uncertainty in an isomerization reaction and a gene regulation network was modelled. • Effects were significant and dependent on the uncertain input and reaction system. • The model was computationally more efficient than kinetic Monte Carlo.
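    The composite-state idea can be made concrete on the isomerization example: for A ⇌ B with a fixed total of N molecules, the Chemical Master Equation is a linear ODE for the distribution over n_A, so each sampled parameter yields a distribution via a matrix exponential, and the composite state is their average (the rates and the lognormal uncertainty below are illustrative):

```python
import numpy as np
from scipy.linalg import expm

N, t = 30, 2.0
rng = np.random.default_rng(0)

def cme_generator(k_f, k_b):
    """CME generator for A <-> B over states n_A = 0..N (dp/dt = Q p)."""
    Q = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n > 0:                 # A -> B, propensity k_f * n_A
            Q[n - 1, n] += k_f * n
            Q[n, n] -= k_f * n
        if n < N:                 # B -> A, propensity k_b * (N - n_A)
            Q[n + 1, n] += k_b * (N - n)
            Q[n, n] -= k_b * (N - n)
    return Q

p0 = np.zeros(N + 1); p0[N] = 1.0   # all molecules start as A

# composite state: average the CME solution over sampled uncertain forward rates
samples = [expm(cme_generator(rng.lognormal(0.0, 0.3), 1.0) * t) @ p0
           for _ in range(200)]
composite = np.mean(samples, axis=0)
print("mean n_A under uncertainty:", composite @ np.arange(N + 1))
```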

  4. A Stochastic Route Choice Model for Car Travellers in the Copenhagen Region

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr; Daly, A.

    2002-01-01

    The paper presents a large-scale stochastic road traffic assignment model for the Copenhagen Region. The model considers several classes of passenger cars (different trip purposes), vans and trucks, each with its own utility function on which route choices are based. The utility functions include...

  5. Stochastic growth of localized plasma waves

    International Nuclear Information System (INIS)

    Robinson, P.A.; Cairns, I.H.

    2000-01-01

    Full text: Localized bursty plasma waves occur in many natural systems, where they are detected by spacecraft. The large spatiotemporal scales involved imply that beam and other instabilities relax to marginal stability and that mean wave energies are low. Stochastic wave growth occurs when ambient fluctuations perturb the wave-driver interaction, causing fluctuations about marginal stability. This yields regions where growth is enhanced and others where damping is increased; observed bursts are associated with enhanced growth and can occur even when the mean growth rate is negative. In stochastic growth, energy loss from the source is suppressed relative to secular growth, preserving it for much longer times and distances than otherwise possible. Linear stochastic growth can operate at wave levels below thresholds of nonlinear wave-clumping mechanisms such as strong-turbulence modulational instability and is not subject to their coherence and wavelength limits. Growth mechanisms can be distinguished by statistics of the fields, whose strengths are lognormally distributed if stochastically growing, power-law distributed in strong turbulence, and uniformly distributed in log under secular growth. After delineating stochastic growth and strong-turbulence regimes, recent applications of stochastic growth theory (SGT) are described, involving bursty plasma waves and unstable particle distributions in type II and III solar radio sources, foreshock regions upstream of the bow shocks of Earth and planets, and Earth's magnetosheath, auroras, and polar caps. It is shown that when combined with wave-wave processes, SGT accounts for type II and III solar radio emissions. SGT thus removes longstanding problems in understanding persistent unstable distributions, bursty fields, and radio emissions observed in space.

  6. Political consultation and large-scale research

    International Nuclear Information System (INIS)

    Bechmann, G.; Folkers, H.

    1977-01-01

    Large-scale research and policy consulting have an intermediary position between sociological sub-systems. While large-scale research coordinates science, policy, and production, policy consulting coordinates science, policy and the political sphere. In this very position, large-scale research and policy consulting lack the institutional guarantees and rational background that are characteristic of their sociological environment. Large-scale research can neither deal with the production of innovative goods under consideration of profitability, nor can it hope for full recognition by the basis-oriented scientific community. Policy consulting has neither the political system's assigned competence to make decisions, nor can it judge successfully by the critical standards of the established social sciences, at least as far as the present situation is concerned. This intermediary position of large-scale research and policy consulting supports, in three points, the thesis that this is a new form of institutionalization of science. These are: (1) external control, (2) the organization form, and (3) the theoretical conception of large-scale research and policy consulting. (orig.) [de

  7. Statistical inference and comparison of stochastic models for the hydraulic conductivity at the Finnsjoen-site

    International Nuclear Information System (INIS)

    Norman, S.

    1992-04-01

    The origin of this study was to find a good, or even the best, stochastic model for the hydraulic conductivity field at the Finnsjoe site. The conductivity fields in question are regularized, that is, upscaled. The reason for performing regularization of measurement data is primarily the need for long correlation scales. This is needed in order to model reasonably large domains that can be used to describe regional groundwater flow accurately. A theory of regularization is discussed in this report. In order to find the best model, jackknifing is employed to compare different stochastic models. The theory for this method is described. In doing so we also take a look at linear predictor theory, so-called kriging, and include a general discussion of stochastic functions and intrinsic random functions. The statistical inference methods for finding the models are also described, in particular regression, iterative generalized least squares estimation (IGLSE) and non-parametric variogram estimators. A large number of results are presented for a regularization scale of 36 metres. (30 refs.) (au)
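    Jackknifing in this setting means leave-one-out prediction: each observation is removed in turn, kriged back from the rest under each candidate covariance model, and the models are ranked by prediction error. A compact 1-D sketch with simple kriging and an exponential covariance (synthetic data; the candidate correlation ranges are the "models" being compared):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 100, 60))        # measurement locations (metres)

def cov(d, var, rng_len):
    return var * np.exp(-d / rng_len)       # exponential covariance model

# synthetic log-conductivity with a "true" range of 10 m
D = np.abs(x[:, None] - x[None, :])
C_true = cov(D, 1.0, 10.0)
z = np.linalg.cholesky(C_true + 1e-10 * np.eye(len(x))) @ rng.normal(size=len(x))

def jackknife_score(rng_len, var=1.0):
    """Mean squared leave-one-out simple-kriging error under a candidate model."""
    C = cov(D, var, rng_len)
    err = []
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        w = np.linalg.solve(C[np.ix_(keep, keep)], C[keep, i])  # kriging weights
        err.append((z[i] - w @ z[keep]) ** 2)
    return np.mean(err)

for r in (2.0, 10.0, 50.0):
    print(f"range {r:5.1f} m -> jackknife MSE {jackknife_score(r):.3f}")
```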

  8. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  9. Drift Scale Modeling: Study of Unsaturated Flow into a Drift Using a Stochastic Continuum Model

    International Nuclear Information System (INIS)

    Birkholzer, J.T.; Tsang, C.F.; Tsang, Y.W.; Wang, J.S.

    1996-01-01

    Unsaturated flow in heterogeneous fractured porous rock was simulated using a stochastic continuum model (SCM). In this model, both the more conductive fractures and the less permeable matrix are generated within the framework of a single-continuum stochastic approach, based on non-parametric indicator statistics. High-permeability fracture zones are distinguished from low-permeability matrix zones in that they are assigned a long-range correlation structure in prescribed directions. The SCM was applied to study small-scale flow in the vicinity of an access tunnel, which is currently being drilled in the unsaturated fractured tuff formations at Yucca Mountain, Nevada. Extensive underground testing is underway in this tunnel to investigate the suitability of Yucca Mountain as an underground nuclear waste repository. Different flow scenarios were studied in the present paper, considering the flow conditions before and after the tunnel emplacement, and assuming steady-state net infiltration as well as episodic pulse infiltration. Although the capability of the stochastic continuum model has not yet been fully explored, it has been demonstrated that the SCM is a good alternative model capable of describing heterogeneous flow processes in unsaturated fractured tuff at Yucca Mountain.

  10. Spatial scale affects the relative role of stochasticity versus determinism in soil bacterial communities in wheat fields across the North China Plain.

    Science.gov (United States)

    Shi, Yu; Li, Yuntao; Xiang, Xingjia; Sun, Ruibo; Yang, Teng; He, Dan; Zhang, Kaoping; Ni, Yingying; Zhu, Yong-Guan; Adams, Jonathan M; Chu, Haiyan

    2018-02-05

    The relative importance of stochasticity versus determinism in soil bacterial communities is unclear, as are the possible influences that alter the balance between these. Here, we investigated the influence of spatial scale on the relative role of stochasticity and determinism in agricultural monocultures consisting only of wheat, thereby minimizing the influence of differences in plant species cover and in cultivation/disturbance regime, extending across a wide range of soils and climates of the North China Plain (NCP). We sampled 243 sites across 1092 km and sequenced the 16S rRNA bacterial gene using MiSeq. We hypothesized that determinism would play a relatively stronger role at the broadest scales, due to the strong influence of climate and soil differences in selecting many distinct OTUs of bacteria adapted to the different environments. In order to test the more general applicability of the hypothesis, we also compared with a natural ecosystem on the Tibetan Plateau. Our results revealed that the relative importance of stochasticity vs. determinism did vary with spatial scale, in the direction predicted. On the North China Plain, stochasticity played a dominant role from 150 to 900 km (separation between pairs of sites) and determinism dominated at more than 900 km (broad scale). On the Tibetan Plateau, determinism played a dominant role from 130 to 1200 km and stochasticity dominated at less than 130 km. Among the identifiable deterministic factors, soil pH showed the strongest influence on soil bacterial community structure and diversity across the North China Plain. Together, 23.9% of variation in soil microbial community composition could be explained, with environmental factors accounting for 19.7% and spatial parameters 4.1%. Our findings revealed that (1) stochastic processes are relatively more important on the North China Plain, while deterministic processes are more important on the Tibetan Plateau; (2) soil pH was the major factor in shaping

  11. Decentralized Large-Scale Power Balancing

    DEFF Research Database (Denmark)

    Halvgaard, Rasmus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2013-01-01

    problem is formulated as a centralized large-scale optimization problem but is then decomposed into smaller subproblems that are solved locally by each unit connected to an aggregator. For large-scale systems the method is faster than solving the full problem and can be distributed to include an arbitrary...

  12. Stochastic TDHF and the Boltzmann-Langevin equation

    International Nuclear Information System (INIS)

    Suraud, E.; Reinhard, P.G.

    1991-01-01

    Starting from a time-dependent theory of correlations, we present a stochastic differential equation for the propagation of ensembles of Slater determinants, called Stochastic Time-Dependent Hartree-Fock (Stochastic TDHF). These ensembles are allowed to develop large fluctuations in the Hartree-Fock mean fields. An alternative stochastic differential equation, the Boltzmann-Langevin equation, can be derived from Stochastic TDHF by averaging over subensembles with small fluctuations.

  13. Binary Stochastic Representations for Large Multi-class Classification

    KAUST Repository

    Gerald, Thomas

    2017-10-23

    Classification with a large number of classes is a key problem in machine learning and corresponds to many real-world applications like tagging of images or textual documents in social networks. While one-vs-all methods usually reach top performance in this context, they suffer from a high inference complexity, linear w.r.t. the number of categories. Different models based on the notion of binary codes have been proposed to overcome this limitation, achieving a sublinear inference complexity. But they need to decide a priori, using more or less complex heuristics, which binary code to associate with which category before learning. We propose a new end-to-end model which aims at simultaneously learning to associate binary codes with categories and learning to map inputs to binary codes. This approach, called Deep Stochastic Neural Codes (DSNC), keeps the sublinear inference complexity but does not need any a priori tuning. Experimental results on different datasets show the effectiveness of the approach w.r.t. baseline methods.
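    The record gives no architectural detail beyond inputs being mapped to binary codes that identify categories; the inference step such models share can be sketched as Hamming decoding against a code table (the random projection below is a stand-in for a learned encoder):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_bits, dim = 10_000, 64, 128

codes = rng.integers(0, 2, (n_classes, n_bits)).astype(np.uint8)  # one code per category
W = rng.normal(size=(dim, n_bits))           # stand-in for a learned encoder

def predict(x):
    bits = (x @ W > 0).astype(np.uint8)      # map the input to a binary code
    # decode: nearest category code in Hamming distance
    return int(np.argmin((codes != bits).sum(axis=1)))

print(predict(rng.normal(size=dim)))
```

    Note that the brute-force argmin above is still linear in the number of classes; the sublinear complexity claimed for code-based models comes from indexing the code table, which this sketch omits.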

  14. Synergy of Stochastic and Systematic Energization of Plasmas during Turbulent Reconnection

    Science.gov (United States)

    Pisokas, Theophilos; Vlahos, Loukas; Isliker, Heinz

    2018-01-01

    The important characteristic of turbulent reconnection is that it combines large-scale magnetic disturbances (δ B/B∼ 1) with randomly distributed unstable current sheets (UCSs). Many well-known nonlinear MHD structures (strong turbulence, current sheet(s), shock(s)) lead asymptotically to the state of turbulent reconnection. We analyze in this article, for the first time, the energization of electrons and ions in a large-scale environment that combines large-amplitude disturbances propagating with sub-Alfvénic speed with UCSs. The magnetic disturbances interact stochastically (second-order Fermi) with the charged particles and play a crucial role in the heating of the particles, while the UCSs interact systematically (first-order Fermi) and play a crucial role in the formation of the high-energy tail. The synergy of stochastic and systematic acceleration provided by the mixture of magnetic disturbances and UCSs influences the energetics of the thermal and nonthermal particles, the power-law index, and the length of time the particles remain inside the energy release volume. We show that this synergy can explain the observed very fast and impulsive particle acceleration and the slightly delayed formation of a superhot particle population.
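    The first-order/second-order labels refer to the classic Fermi scalings for the mean fractional energy gain per interaction with scatterers moving at speed V:

```latex
\left\langle \frac{\Delta E}{E} \right\rangle_{\text{systematic}} \propto \frac{V}{c},
\qquad
\left\langle \frac{\Delta E}{E} \right\rangle_{\text{stochastic}} \propto \left(\frac{V}{c}\right)^{2},
```

    so the UCSs, which accelerate systematically, build the high-energy tail faster than the sub-Alfvénic magnetic disturbances, which mainly heat the bulk of the particles.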

  15. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  16. Applications of stochastic models to solute transport in fractured rocks

    International Nuclear Information System (INIS)

    Gelhar, L.W.

    1987-01-01

    A stochastic theory for flow and solute transport in a single variable-aperture fracture, bounded by a sorbing porous matrix into which solutes may diffuse, is developed using a perturbation approximation and spectral solution techniques which assume local statistical homogeneity. The theory predicts that the effective aperture of the fracture for mean solute displacement will be larger than the aperture required to calculate the large-scale flow resistance of the fracture. This ratio of apertures is a function of the variance of the logarithm of the apertures. The theory also predicts the macrodispersion coefficient for large-scale transport in the fracture. The resulting macrodispersivity is proportional to the variance of the log-aperture and to its correlation scale. When variable surface sorption is included, it is found that the macrodispersivity is increased significantly, in some cases by more than an order of magnitude. It is also shown that the effective retardation coefficient for the sorptively heterogeneous fracture is found by simply taking the arithmetic mean of the local surface sorption coefficient. Matrix diffusion is also shown to increase the fracture macrodispersivity at very large times. A reexamination of the results of four different field tracer tests in crystalline rock in Sweden and Canada shows aperture ratios and dispersivities that are consistent with the stochastic theory. The variance of the natural logarithm of the aperture is found to be in the range of 3 to 6, and the correlation scale for log-aperture ranges from 0.2 to 1.2 meters. Detailed recommendations for additional field investigations at scales ranging from a few meters up to a kilometer are presented. (orig.)
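    Two of the abstract's statements translate directly into formulas, kept here at the proportionality level actually claimed:

```latex
A_L \;\propto\; \sigma^{2}_{\ln b}\,\lambda_{\ln b},
\qquad
R_{\mathrm{eff}} \;=\; \frac{1}{N}\sum_{i=1}^{N} R_i,
```

    with A_L the fracture macrodispersivity, σ²_{ln b} and λ_{ln b} the variance and correlation scale of the log-aperture, and R_eff the arithmetic mean of the local surface sorption coefficients R_i.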

  17. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes an evaluation method for the faultless functioning of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors which determine the faultlessness of integrated circuits, an analysis of existing methods, and a model for evaluating the faultless functioning of LSI and VLSI. The main part describes a proposed algorithm and program for the analysis of fault rates in LSI and VLSI circuits.

  18. Stochastic bias-correction of daily rainfall scenarios for hydrological applications

    Directory of Open Access Journals (Sweden)

    I. Portoghese

    2011-09-01

    Full Text Available The accuracy of rainfall predictions provided by climate models is crucial for the assessment of climate change impacts on hydrological processes. In fact, the presence of bias in downscaled precipitation may produce large bias in the assessment of soil moisture dynamics, river flows and groundwater recharge.

    In this study, a comparison between statistical properties of rainfall observations and model control simulations from a Regional Climate Model (RCM) was performed through a robust and meaningful representation of the precipitation process. The output of the adopted RCM was analysed and re-scaled exploiting the structure of a stochastic model of the point rainfall process. In particular, the stochastic model is able to adequately reproduce the rainfall intermittency at the synoptic scale, which is one of the crucial aspects for the Mediterranean environments. Possible alteration in the local rainfall regime was investigated by means of the historical daily time-series from a dense rain-gauge network, which were also used for the analysis of the RCM bias in terms of dry and wet periods and storm intensity. The result is a stochastic scheme for bias-correction at the RCM-cell scale, which produces a realistic representation of the daily rainfall intermittency and precipitation depths, though a residual bias in the storm intensity of longer storm events persists.

  19. A method for stochastic constrained optimization using derivative-free surrogate pattern search and collocation

    International Nuclear Information System (INIS)

    Sankaran, Sethuraman; Audet, Charles; Marsden, Alison L.

    2010-01-01

    Recent advances in coupling novel optimization methods to large-scale computing problems have opened the door to tackling a diverse set of physically realistic engineering design problems. A large computational overhead is associated with computing the cost function for most practical problems involving complex physical phenomena. Such problems are also plagued with uncertainties in a diverse set of parameters. We present a novel stochastic derivative-free optimization approach for tackling such problems. Our method extends the previously developed surrogate management framework (SMF) to allow for uncertainties in both simulation parameters and design variables. The stochastic collocation scheme is employed for stochastic variables whereas Kriging based surrogate functions are employed for the cost function. This approach is tested on four numerical optimization problems and is shown to have significant improvement in efficiency over traditional Monte-Carlo schemes. Problems with multiple probabilistic constraints are also discussed.
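    The collocation ingredient can be shown in isolation: replace the Monte Carlo average of the cost over an uncertain parameter by a Gauss–Hermite quadrature and hand the resulting deterministic function to a derivative-free optimizer (the quadratic cost and Nelder–Mead below stand in for the paper's cost functions and surrogate pattern search):

```python
import numpy as np
from scipy.optimize import minimize

nodes, weights = np.polynomial.hermite.hermgauss(7)   # Gauss-Hermite, 7 points

def cost(x, xi):
    """Stand-in cost with an uncertain parameter xi."""
    return (x[0] - 1 + 0.5 * xi) ** 2 + (x[1] + xi * x[0]) ** 2

def expected_cost(x):
    # E[cost] for xi ~ N(0,1): (1/sqrt(pi)) * sum_i w_i f(sqrt(2) t_i)
    vals = np.array([cost(x, np.sqrt(2.0) * t) for t in nodes])
    return float(weights @ vals / np.sqrt(np.pi))

res = minimize(expected_cost, x0=np.zeros(2), method="Nelder-Mead")
print(res.x, res.fun)
```

    Seven collocation points here replace the thousands of samples a Monte Carlo estimate of the expectation would need at each design iterate, which is the efficiency gain the abstract reports.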

  20. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...
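    The linear transformation of a Jacobi process mentioned above has a standard concrete form: with X a Jacobi diffusion on [0, 1],

```latex
dX_t = \kappa\,(\theta - X_t)\,dt + \sigma\,\sqrt{X_t\,(1 - X_t)}\;dW_t,
\qquad
\rho_t = 2X_t - 1 \in [-1, 1],
```

    so ρ can serve as a bounded stochastic correlation (leverage) process; the paper's exact parametrisation may differ from this generic form.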

  1. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Abstract Background The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally-identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
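    Detrended Fluctuation Analysis as used here is short enough to state in full: integrate the numerically coded sequence, remove a local polynomial trend in windows of size n, and track how the residual fluctuation F(n) scales with n; local changes in the fitted exponent flag isochore-like patchiness. A generic sketch (coding each base as 1 for G/C and 0 for A/T is a common choice and an assumption here, not the paper's exact pipeline):

```python
import numpy as np

def dfa(signal, window_sizes, order=1):
    """Return F(n) for each window size n; the slope of log F vs log n is alpha."""
    y = np.cumsum(signal - np.mean(signal))      # integrated profile
    F = []
    for n in window_sizes:
        n_win = len(y) // n
        ms = []
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, order), t)  # local detrending
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.asarray(F)

# toy "sequence": 1 where the base is G or C, 0 otherwise
rng = np.random.default_rng(0)
seq = (rng.random(2**14) < 0.41).astype(float)
ns = np.array([16, 32, 64, 128, 256, 512])
F = dfa(seq, ns)
alpha = np.polyfit(np.log(ns), np.log(F), 1)[0]
print("scaling exponent alpha ~", alpha)   # ~0.5 for an uncorrelated sequence
```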

  2. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified, then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases

  3. Large Scale Self-Organizing Information Distribution System

    National Research Council Canada - National Science Library

    Low, Steven

    2005-01-01

    This project investigates issues in "large-scale" networks. Here "large-scale" refers to networks with large number of high capacity nodes and transmission links, and shared by a large number of users...

  4. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between the large scale structure formation and the baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120 h⁻¹ Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale 120 h⁻¹ Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate, formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  5. Automatic management software for large-scale cluster system

    International Nuclear Information System (INIS)

    Weng Yunjian; Chinese Academy of Sciences, Beijing; Sun Gongxing

    2007-01-01

    At present, large-scale cluster systems are difficult to manage: the administrator's workload is heavy, and much time must be spent on the management and maintenance of the system. The nodes in a large-scale cluster system easily fall into disorder; with thousands of nodes installed in large machine rooms, administrators can easily confuse one machine with another. How, then, can a large-scale cluster system be managed accurately and effectively? The article introduces ELFms for the large-scale cluster system and proposes a way to realize automatic management of such systems. (authors)

  6. Simplified reactive power management strategy for complex power grids under stochastic operation and incomplete information

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John)

    2009-01-01

    In the current liberalized energy market, the large-scale complex transmission networks and the distribution ones with dispersed energy sources and "intelligent" components operate under uncertainties, stochastic and prior incomplete information. A safe and reliable operation of such complex power grids is a major issue for system operators. Under these circumstances an online reactive power management strategy with minimum risk concerning all uncertain and stochastic parameters is proposed. Therefore, new concepts such as reactive power-weighted node-to-node linking and reactive power control capability are introduced. A distributed and interconnected stochastic learning automata system is implemented to manage, in a unified and unique way, the reactive power in complex power grids with stochastic reactive power demand and detect the vulnerable part. The proposed simplified strategy can also...

  7. Synthetic Sediments and Stochastic Groundwater Hydrology

    Science.gov (United States)

    Wilson, J. L.

    2002-12-01

    For over twenty years the groundwater community has pursued the somewhat elusive goal of describing the effects of aquifer heterogeneity on subsurface flow and chemical transport. While small perturbation stochastic moment methods have significantly advanced theoretical understanding, why is it that stochastic applications use instead simulations of flow and transport through multiple realizations of synthetic geology? Allan Gutjahr was a principal proponent of the Fast Fourier Transform method for the synthetic generation of aquifer properties and recently explored new, more geologically sound, synthetic methods based on multi-scale Markov random fields. Focusing on sedimentary aquifers, how has the state-of-the-art of synthetic generation changed and what new developments can be expected, for example, to deal with issues like conceptual model uncertainty, the differences between measurement and modeling scales, and subgrid scale variability? What will it take to get stochastic methods, whether based on moments, multiple realizations, or some other approach, into widespread application?
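    The FFT method Gutjahr championed generates a stationary Gaussian field by colouring white noise in Fourier space with the square root of a target spectral density; a 2-D sketch with a squared-exponential-like spectrum (grid size, correlation length and log-conductivity variance all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, corr_len = 256, 10.0

# wavenumber grid and a squared-exponential-like spectral density
k = 2 * np.pi * np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
S = np.exp(-(kx**2 + ky**2) * corr_len**2 / 2)

# colour white noise in Fourier space with sqrt(S)
white = rng.normal(size=(n, n))
field = np.fft.ifft2(np.sqrt(S) * np.fft.fft2(white)).real
field /= field.std()                 # normalise to unit variance

ln_K = 1.5 * field                   # log-conductivity, sigma = 1.5 (illustrative)
K = np.exp(ln_K)                     # a synthetic lognormal conductivity field
print(f"geometric mean K = {np.exp(ln_K.mean()):.3f}")
```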

  8. Stochastic inflation in phase space: is slow roll a stochastic attractor?

    Energy Technology Data Exchange (ETDEWEB)

    Grain, Julien [Institut d'Astrophysique Spatiale, UMR8617, CNRS, Univ. Paris Sud, Université Paris-Saclay, Bât. 121, Orsay, F-91405 (France); Vennin, Vincent, E-mail: julien.grain@ias.u-psud.fr, E-mail: vincent.vennin@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO13FX (United Kingdom)]

    2017-05-01

    An appealing feature of inflationary cosmology is the presence of a phase-space attractor, 'slow roll', which washes out the dependence on initial field velocities. We investigate the robustness of this property under backreaction from quantum fluctuations using the stochastic inflation formalism in the phase-space approach. A Hamiltonian formulation of stochastic inflation is presented, where it is shown that the coarse-graining procedure—where wavelengths smaller than the Hubble radius are integrated out—preserves the canonical structure of free fields. This means that different sets of canonical variables give rise to the same probability distribution, which clarifies the literature with respect to this issue. The role played by the quantum-to-classical transition is also analysed and is shown to constrain the coarse-graining scale. In the case of free fields, we find that quantum diffusion is aligned in phase space with the slow-roll direction. This implies that the classical slow-roll attractor is immune to stochastic effects and thus generalises to a stochastic attractor regardless of initial conditions, with a relaxation time at least as short as in the classical system. For non-test fields or for test fields with non-linear self-interactions, however, quantum diffusion and the classical slow-roll flow are misaligned. We derive a condition on the coarse-graining scale so that observational corrections from this misalignment are negligible at leading order in slow roll.

  9. Dynamic Stochastic Superresolution of sparsely observed turbulent systems

    International Nuclear Information System (INIS)

    Branicki, M.; Majda, A.J.

    2013-01-01

    Real-time capture of the relevant features of the unresolved turbulent dynamics of complex natural systems from sparse noisy observations and imperfect models is a notoriously difficult problem. The resulting lack of observational resolution and statistical accuracy in estimating the important turbulent processes, which intermittently send significant energy to the large-scale fluctuations, hinders efficient parameterization and real-time prediction using discretized PDE models. This issue is particularly subtle and important when dealing with turbulent geophysical systems with a vast range of interacting spatio-temporal scales and rough energy spectra near the mesh scale of numerical models. Here, we introduce and study a suite of general Dynamic Stochastic Superresolution (DSS) algorithms and show that, by appropriately filtering sparse regular observations with the help of cheap stochastic exactly solvable models, one can derive stochastically ‘superresolved’ velocity fields and gain insight into the important characteristics of the unresolved dynamics, including the detection of the so-called black swans. The DSS algorithms operate in the Fourier domain and exploit the fact that the coarse observation network aliases high-wavenumber information into the resolved waveband. It is shown that these cheap algorithms are robust and have significant skill on a test bed of turbulent solutions from realistic nonlinear turbulent spatially extended systems in the presence of a significant model error. In particular, the DSS algorithms are capable of successfully capturing time-localized extreme events in the unresolved modes, and they provide good and robust skill for recovery of the unresolved processes in terms of pattern correlation. Moreover, we show that DSS improves the skill for recovering the primary modes associated with the sparse observation mesh, which is equally important in applications. The skill of the various DSS algorithms depends on the energy spectrum

  10. The ScaLIng Macroweather Model (SLIMM): using scaling to forecast global-scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-09-01

    On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the linear inverse modelling - LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent
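
    A minimal sketch of the fGn machinery behind such a model may help fix ideas: sample fractional Gaussian noise exactly from its autocovariance (a Cholesky factorization is adequate at this length) and form the minimum-variance linear predictor of the next value from the entire past, which is where the long memory is exploited. The Hurst exponent and series length below are illustrative assumptions, not SLIMM's calibrated values.

    ```python
    import numpy as np

    def fgn_autocov(n, H, sigma=1.0):
        """Exact autocovariance of fractional Gaussian noise with Hurst exponent H."""
        k = np.arange(n)
        return 0.5 * sigma**2 * (np.abs(k + 1)**(2 * H)
                                 - 2 * np.abs(k)**(2 * H)
                                 + np.abs(k - 1)**(2 * H))

    n, H = 512, 0.85                     # H is an illustrative value with long memory
    gamma = fgn_autocov(n + 1, H)
    lags = np.abs(np.arange(n + 1)[:, None] - np.arange(n + 1)[None, :])
    cov = gamma[lags]                    # Toeplitz covariance matrix

    rng = np.random.default_rng(1)
    x = np.linalg.cholesky(cov) @ rng.normal(size=n + 1)   # one exact fGn realization

    # Minimum-variance linear forecast of x[n] from x[0..n-1]:
    # the weights solve  C_past w = c,  with c the past/future cross-covariance.
    w = np.linalg.solve(cov[:n, :n], cov[:n, n])
    forecast = w @ x[:n]
    print(f"truth {x[n]:+.3f}   forecast {forecast:+.3f}")
    ```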

  11. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large-scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large-scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  12. Universality in stochastic exponential growth.

    Science.gov (United States)

    Iyer-Biswas, Srividya; Crooks, Gavin E; Scherer, Norbert F; Dinner, Aaron R

    2014-07-11

    Recent imaging data for single bacterial cells reveal that their mean sizes grow exponentially in time and that their size distributions collapse to a single curve when rescaled by their means. An analogous result holds for the division-time distributions. A model is needed to delineate the minimal requirements for these scaling behaviors. We formulate a microscopic theory of stochastic exponential growth as a Master Equation that accounts for these observations, in contrast to existing quantitative models of stochastic exponential growth (e.g., the Black-Scholes equation or geometric Brownian motion). Our model, the stochastic Hinshelwood cycle (SHC), is an autocatalytic reaction cycle in which each molecular species catalyzes the production of the next. By finding exact analytical solutions to the SHC and the corresponding first passage time problem, we uncover universal signatures of fluctuations in exponential growth and division. The model makes minimal assumptions, and we describe how more complex reaction networks can reduce to such a cycle. We thus expect similar scalings to be discovered in stochastic processes resulting in exponential growth that appear in diverse contexts such as cosmology, finance, technology, and population growth.
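
    To make the flavour of the SHC concrete, here is a minimal Gillespie-type simulation of a two-species cycle in which X1 catalyses production of X2 and X2 catalyses production of X1; the rate constants and initial copy numbers are illustrative. For this two-species case the mean copy numbers grow as exp(sqrt(k1·k2)·t), the geometric mean of the rates setting the exponential growth rate.

    ```python
    import numpy as np

    def hinshelwood_ssa(k1=1.0, k2=2.0, x0=(10, 10), t_end=4.0, seed=0):
        """Gillespie simulation of a two-species autocatalytic (Hinshelwood) cycle:
        X1 -> X1 + X2 with propensity k1*x1,  X2 -> X2 + X1 with propensity k2*x2."""
        rng = np.random.default_rng(seed)
        t, (x1, x2) = 0.0, x0
        times, traj = [t], [(x1, x2)]
        while t < t_end:
            a1, a2 = k1 * x1, k2 * x2
            t += rng.exponential(1.0 / (a1 + a2))   # waiting time to next reaction
            if rng.random() < a1 / (a1 + a2):
                x2 += 1                              # X1 catalyses a new X2
            else:
                x1 += 1                              # X2 catalyses a new X1
            times.append(t)
            traj.append((x1, x2))
        return np.array(times), np.array(traj)

    t, x = hinshelwood_ssa()
    # Mean copy numbers grow ~ exp(sqrt(k1*k2) t); a single trajectory fluctuates.
    print("observed log-growth rate:", np.log(x[-1].sum() / x[0].sum()) / t[-1],
          "  predicted:", np.sqrt(1.0 * 2.0))
    ```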

  13. Large-Scale Outflows in Seyfert Galaxies

    Science.gov (United States)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  14. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele

    2015-08-23

    The interaction between scales is investigated in a turbulent mixing layer. The large-scale amplitude modulation of the small scales already observed in other works depends on the crosswise location. Large-scale positive fluctuations correlate with a stronger activity of the small scales on the low-speed side of the mixing layer, and a reduced activity on the high-speed side. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has been additionally investigated.

  15. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    ...create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Conclusion: Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.

  16. Tail-constraining stochastic linear–quadratic control: a large deviation and statistical physics approach

    International Nuclear Information System (INIS)

    Chertkov, Michael; Kolokolov, Igor; Lebedev, Vladimir

    2012-01-01

    The standard definition of stochastic risk-sensitive linear–quadratic (RS-LQ) control depends on the risk parameter, which is normally left to be set exogenously. We reconsider the classical approach and suggest two alternatives, resolving the spurious freedom naturally. One approach consists in seeking the minimum of the tail of the probability distribution function (PDF) of the cost functional at some large fixed value. The other minimizes the expectation value of the cost functional under a constraint on the value of the PDF tail. Under the assumption of resulting control stability, both problems are reduced to static optimizations over a stationary control matrix. The solutions are illustrated using the examples of scalar and 1D chain (string) systems. The large deviation self-similar asymptotic of the cost functional PDF is analyzed. (paper)
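
    In symbols, and hedging on notation (the cost functional C, stationary control matrix K, tail threshold c* and tail level ε are placeholders rather than the paper's exact symbols), the two alternatives read:

    ```latex
    % Two ways to remove the exogenous risk parameter (illustrative notation):
    % (i)  minimize the tail of the cost PDF at a fixed large value c*,
    % (ii) minimize the expected cost subject to a bound on that tail.
    \begin{align}
      \text{(i)}\quad  & \min_{K}\; \Pr\!\left[\,\mathcal{C}(K) \ge c^{*}\,\right], \\
      \text{(ii)}\quad & \min_{K}\; \mathbb{E}\!\left[\,\mathcal{C}(K)\,\right]
                         \quad \text{s.t.}\quad
                         \Pr\!\left[\,\mathcal{C}(K) \ge c^{*}\,\right] \le \varepsilon .
    \end{align}
    ```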

  17. Simplified reactive power management strategy for complex power grids under stochastic operation and incomplete information

    International Nuclear Information System (INIS)

    Vlachogiannis, John G.

    2009-01-01

    In the currently deregulated energy market, large-scale complex transmission networks and distribution networks with dispersed energy sources and 'intelligent' components operate under uncertainty, stochasticity and prior incomplete information. A safe and reliable operation of such complex power grids is a major issue for system operators. Under these circumstances an online reactive power management strategy with minimum risk concerning all uncertain and stochastic parameters is proposed. Therefore, new concepts such as reactive power-weighted node-to-node linking and reactive power control capability are introduced. A distributed and interconnected stochastic learning automata system is implemented to manage, in a unified and unique way, the reactive power in complex power grids with stochastic reactive power demand and to detect the vulnerable part. The proposed simplified strategy can also consider further stochastic aspects such as a variable grid topology. Results obtained on the networks of the IEEE 30-bus and IEEE 118-bus systems demonstrate the effectiveness of the proposed strategy.

  18. Improved Large-Eddy Simulation Using a Stochastic Backscatter Model: Application to the Neutral Atmospheric Boundary Layer and Urban Street Canyon Flow

    Science.gov (United States)

    O'Neill, J. J.; Cai, X.; Kinnersley, R.

    2015-12-01

    Large-eddy simulation (LES) provides a powerful tool for developing our understanding of atmospheric boundary layer (ABL) dynamics, which in turn can be used to improve the parameterisations of simpler operational models. However, LES modelling is not without its own limitations - most notably, the need to parameterise the effects of all subgrid-scale (SGS) turbulence. Here, we employ a stochastic backscatter SGS model, which explicitly handles the effects of both forward and reverse energy transfer to/from the subgrid scales, to simulate the neutrally stratified ABL as well as flow within an idealised urban street canyon. In both cases, a clear improvement in LES output statistics is observed when compared with the performance of a SGS model that handles forward energy transfer only. In the neutral ABL case, the near-surface velocity profile is brought significantly closer towards its expected logarithmic form. In the street canyon case, the strength of the primary vortex that forms within the canyon is more accurately reproduced when compared to wind tunnel measurements. Our results indicate that grid-scale backscatter plays an important role in both these modelled situations.

  19. Dissecting the large-scale galactic conformity

    Science.gov (United States)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties, such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other (“two-halo conformity” or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal fields induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  20. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    Science.gov (United States)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear dark matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear or non-linear, deterministic or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out
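
    The effect of the mapping is easy to demonstrate on a toy lognormal mock of the non-linear density contrast δ, an illustrative stand-in for an evolved field rather than any survey data: after A = ln(1 + δ), the field is Gaussian again, so its two-point statistics carry information that had leaked into the higher moments of δ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Lognormal mock of a non-linear density contrast (delta > -1 by construction);
    # sigma_G is an illustrative amplitude of the underlying Gaussian field.
    sigma_G = 1.0
    g = rng.normal(0.0, sigma_G, size=100_000)
    delta = np.exp(g - 0.5 * sigma_G**2) - 1.0   # <delta> = 0 for a lognormal field

    A = np.log1p(delta)                           # the logarithmic mapping

    def skew_excess_kurt(x):
        """Sample skewness and excess kurtosis (both 0 for a Gaussian)."""
        x = x - x.mean()
        s = x.std()
        return (x**3).mean() / s**3, (x**4).mean() / s**4 - 3.0

    # delta is strongly non-Gaussian; A is Gaussian up to sampling noise, so its
    # two-point statistics recapture the information hidden in higher orders.
    print("delta skew / excess kurtosis:", skew_excess_kurt(delta))
    print("A     skew / excess kurtosis:", skew_excess_kurt(A))
    ```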

  1. Large-scale dynamo action due to α fluctuations in a linear shear flow

    Science.gov (United States)

    Sridhar, S.; Singh, Nishant K.

    2014-12-01

    We present a model of large-scale dynamo action in a shear flow that has stochastic, zero-mean fluctuations of the α parameter. This is based on a minimal extension of the Kraichnan-Moffatt model, to include a background linear shear and Galilean-invariant α-statistics. Using the first-order smoothing approximation we derive a linear integro-differential equation for the large-scale magnetic field, which is non-perturbative in the shearing rate S and the α-correlation time τα. The white-noise case, τα = 0, is solved exactly, and it is concluded that the necessary condition for dynamo action is identical to the Kraichnan-Moffatt model without shear; this is because white noise does not allow for memory effects, whereas shear needs time to act. To explore memory effects we reduce the integro-differential equation to a partial differential equation, valid for slowly varying fields when τα is small but non-zero. Seeking exponential modal solutions, we solve the modal dispersion relation and obtain an explicit expression for the growth rate as a function of the six independent parameters of the problem. A non-zero τα gives rise to new physical scales, and dynamo action is completely different from the white-noise case; e.g. even weak α fluctuations can give rise to a dynamo. We argue that, at any wavenumber, both Moffatt drift and shear always contribute to increasing the growth rate. Two examples are presented: (a) a Moffatt drift dynamo in the absence of shear and (b) a shear dynamo in the absence of Moffatt drift.

  2. Large-scale perspective as a challenge

    NARCIS (Netherlands)

    Plomp, M.G.A.

    2012-01-01

    1. Scale forms a challenge for chain researchers: when exactly is something ‘large-scale’? What are the underlying factors (e.g. number of parties, data, objects in the chain, complexity) that determine this? It appears to be a continuum between small- and large-scale, where positioning on that

  3. Algorithm 896: LSA: Algorithms for Large-Scale Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2009-01-01

    Roč. 36, č. 3 (2009), 16-1-16-29 ISSN 0098-3500 R&D Projects: GA AV ČR IAA1030405; GA ČR GP201/06/P397 Institutional research plan: CEZ:AV0Z10300504 Keywords: algorithms * design * large-scale optimization * large-scale nonsmooth optimization * large-scale nonlinear least squares * large-scale nonlinear minimax * large-scale systems of nonlinear equations * sparse problems * partially separable problems * limited-memory methods * discrete Newton methods * quasi-Newton methods * primal interior-point methods Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.904, year: 2009

  4. Scale interactions in a mixing layer – the role of the large-scale gradients

    KAUST Repository

    Fiscaletti, D.

    2016-02-15

    The interaction between the large and the small scales of turbulence is investigated in a mixing layer, at a Reynolds number based on the Taylor microscale, via direct numerical simulations. The analysis is performed in physical space, and the local vorticity root-mean-square (r.m.s.) is taken as a measure of the small-scale activity. It is found that positive large-scale velocity fluctuations correspond to large vorticity r.m.s. on the low-speed side of the mixing layer, whereas they correspond to low vorticity r.m.s. on the high-speed side. The relationship between large and small scales thus depends on position if the vorticity r.m.s. is correlated with the large-scale velocity fluctuations. On the contrary, the correlation coefficient is nearly constant throughout the mixing layer and close to unity if the vorticity r.m.s. is correlated with the large-scale velocity gradients. Therefore, the small-scale activity appears closely related to large-scale gradients, while the correlation between the small-scale activity and the large-scale velocity fluctuations is shown to reflect a property of the large scales. Furthermore, the vorticity from unfiltered (small scales) and from low-pass filtered (large scales) velocity fields tend to be aligned when examined within vortical tubes. These results provide evidence for the so-called 'scale invariance' (Meneveau & Katz, Annu. Rev. Fluid Mech., vol. 32, 2000, pp. 1-32), and suggest that some of the large-scale characteristics are not lost at the small scales, at least at the Reynolds number achieved in the present simulation.

  5. Stochasticity in materials structure, properties, and processing—A review

    Science.gov (United States)

    Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai

    2018-03-01

    We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.

  6. Large-scale matrix-handling subroutines 'ATLAS'

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Fujita, Keiichi; Matsuura, Toshihiko; Tahara, Nobuo

    1978-03-01

    Subroutine package "ATLAS" has been developed for handling large-scale matrices. The package is composed of four kinds of subroutines: basic arithmetic routines, routines for solving linear simultaneous equations, routines for solving general eigenvalue problems, and utility routines. The subroutines are useful in large-scale plasma-fluid simulations. (auth.)

  7. Tensor B mode and stochastic Faraday mixing

    CERN Document Server

    Giovannini, Massimo

    2014-01-01

    This paper investigates the Faraday effect as a different source of B mode polarization. The E mode polarization is Faraday rotated provided a stochastic large-scale magnetic field is present prior to photon decoupling. In the first part of the paper we discuss the case where the tensor modes of the geometry are absent and we argue that the B mode recently detected by the Bicep2 collaboration cannot be explained by a large-scale magnetic field rotating, through the Faraday effect, the well established E mode polarization. In this case, the observed temperature autocorrelations would be excessively distorted by the magnetic field. In the second part of the paper the formation of Faraday rotation is treated as a stationary, random and Markovian process with the aim of generalizing a set of scaling laws originally derived in the absence of the tensor modes of the geometry. We show that the scalar, vector and tensor modes of the brightness perturbations can all be Faraday rotated even if the vector and tensor par...

  8. Large-scale solar heat

    Energy Technology Data Exchange (ETDEWEB)

    Tolonen, J.; Konttinen, P.; Lund, P. [Helsinki Univ. of Technology, Otaniemi (Finland). Dept. of Engineering Physics and Mathematics

    1998-12-31

    In this project a large domestic solar heating system was built and a solar district heating system was modelled and simulated. Objectives were to improve the performance and reduce costs of a large-scale solar heating system. As a result of the project the benefit/cost ratio can be increased by 40 % through dimensioning and optimising the system at the designing stage. (orig.)

  9. Fabrication of large-scale one-dimensional Au nanochain and nanowire networks by interfacial self-assembly

    International Nuclear Information System (INIS)

    Wang Minhua; Li Yongjun; Xie Zhaoxiong; Liu Cai; Yeung, Edward S.

    2010-01-01

    By utilizing the strong capillary attraction between interfacial nanoparticles, large-scale one-dimensional Au nanochain networks were fabricated at the n-butanol/water interface, and could be conveniently transferred onto hydrophilic substrates. Furthermore, the length of the nanochains could be adjusted simply by controlling the density of Au nanoparticles (AuNPs) at the n-butanol/water interface. Surprisingly, the resultant Au nanochains could further transform into smooth nanowires with increasing aging time, forming a nanowire network. Combined characterization by HRTEM and UV-vis spectroscopy indicates that the formation of Au nanochains stemmed from a stochastic assembly of interfacial AuNPs due to strong capillary attraction, and that the evolution of nanochains into nanowires follows an Ostwald ripening mechanism rather than oriented attachment. This method can produce large-area nanochain or nanowire networks on solid substrates more uniformly than evaporating a solution of nanochain colloid, since it eliminates three-dimensional aggregation.

  10. Experimental description and stochastic modelling of transfers using a scaling factor for the hydrodynamic properties of the soils

    International Nuclear Information System (INIS)

    Vauclin, M.; Vachaud, G.; Imbernon, J.; Dancette, C.

    1983-01-01

    It is well known that natural soils do not have constant hydrodynamic properties on the plot scale. Experimentally, this means that a water balance obtained in an access tube by means of a neutron moisture gauge and tensiometers is not necessarily representative of the whole area studied. For modelling purposes the deterministic aspect of transfers should be associated with a stochastic description of the hydrodynamic parameters (pressure, water content, hydraulic conductivity). An experiment was carried out in a one-hectare plot of bare soil at Bambey (Senegal) in order to characterize its variability: 28 infiltration tests were performed at the points of a 23x23 m grid. At each of these points, the insertion of a neutron access tube to a depth of 2.0 m and the positioning of three tensiometers at depths of 100, 110 and 120 cm also made it possible to monitor the redistribution of water and to derive the pressure-water content relationships. In addition, internal drainage tests were made in four 1.5x1.5 m soil monoliths so as to find the hydraulic conductivity-water content relationships at different depths. On the assumption of similarity in porous media (verified in this study), all the results were analysed in terms of the theory of scaling factors. The data obtained in bare soil were then used as the basis for solving the stochastic equations for infiltration and drainage. The results show that, apart from satisfactory agreement with the experiment, the mean solution obtained from the mean parameters (deterministic solution) is clearly different from the mean of the solutions (stochastic solution). These differences, as well as the variance, depend strongly on the variability of the soil, expressed here as the coefficient of variation of the scaling factors. This obviously calls in question the concept of equivalent porous media. (author)

  11. 100 years after Smoluchowski: stochastic processes in cell biology

    International Nuclear Information System (INIS)

    Holcman, D; Schuss, Z

    2017-01-01

    100 years after Smoluchowski introduced his approach to stochastic processes, they are now at the basis of mathematical and physical modeling in cellular biology: they are used, for example, to analyse and extract features from a large number (tens of thousands) of single molecular trajectories or to study the diffusive motion of molecules, proteins or receptors. Stochastic modeling is a new step in large data analysis that serves to extract cell biology concepts. We review here Smoluchowski’s approach to stochastic processes and provide several applications for coarse-graining diffusion, studying polymer models for understanding nuclear organization and, finally, we discuss the stochastic jump dynamics of telomeres across cell division and stochastic gene regulation. (topical review)
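
    As a minimal illustration of the kind of trajectory analysis described here, the sketch below simulates overdamped Brownian (Smoluchowski) trajectories and recovers the diffusion coefficient from the ensemble mean-squared displacement; the diffusion coefficient, time step, and trajectory counts are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    D, dt, n_steps, n_traj = 0.05, 1e-3, 2000, 200   # illustrative values

    # Drift-free overdamped (Smoluchowski) dynamics: dx = sqrt(2 D dt) * N(0, 1)
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps, 2))
    paths = np.cumsum(steps, axis=1)                  # 2-D trajectories

    # In two dimensions MSD(t) = 4 D t; estimate D by a least-squares fit
    # of the ensemble MSD through the origin.
    t = dt * np.arange(1, n_steps + 1)
    msd = (paths**2).sum(axis=2).mean(axis=0)
    D_hat = np.sum(msd * t) / (4 * np.sum(t**2))      # fitted slope divided by 4
    print(f"true D = {D},  estimated D = {D_hat:.4f}")
    ```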

  12. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    Science.gov (United States)

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.

  13. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  14. Stochastic spin-one massive field

    International Nuclear Information System (INIS)

    Lim, S.C.

    1984-01-01

    Stochastic quantization schemes of Nelson and of Parisi and Wu are applied to a spin-one massive field. Unlike the scalar case, Nelson's stochastic spin-one massive field cannot be identified with the corresponding euclidean field even if the fourth component of the euclidean coordinate is taken as equal to the real physical time. In the Parisi-Wu quantization scheme the stochastic Proca vector field has a similar property to the scalar field: it has an asymptotically stationary part and a transient part. In the large equal-time limit the expectation values of the stochastic Proca field are equal to the expectation values of the corresponding euclidean field. In the Stueckelberg formalism the Parisi-Wu scheme gives rise to a stochastic vector field which differs from the massless gauge field in that the gauge cannot be fixed by the choice of boundary condition. (orig.)

  15. A Macroscopic Multifractal Analysis of Parabolic Stochastic PDEs

    Science.gov (United States)

    Khoshnevisan, Davar; Kim, Kunwoo; Xiao, Yimin

    2018-05-01

    It is generally argued that the solution to a stochastic PDE with multiplicative noise, such as $\dot{u} = \tfrac{1}{2}u'' + u\xi$, where $\xi$ denotes space-time white noise, routinely produces exceptionally large peaks that are "macroscopically multifractal." See, for example, Gibbon and Doering (Arch Ration Mech Anal 177:115-150, 2005), Gibbon and Titi (Proc R Soc A 461:3089-3097, 2005), and Zimmermann et al. (Phys Rev Lett 85(17):3612-3615, 2000). A few years ago, we proved that the spatial peaks of the solution to the mentioned stochastic PDE indeed form a random multifractal in the macroscopic sense of Barlow and Taylor (J Phys A 22(13):2621-2626, 1989; Proc Lond Math Soc (3) 64:125-152, 1992). The main result of the present paper is a proof of a rigorous formulation of the assertion that the spatio-temporal peaks of the solution form infinitely many different multifractals on infinitely many different scales, which we sometimes refer to as "stretch factors." A simpler, though still complex, such structure is shown to also exist for the constant-coefficient version of the said stochastic PDE.

  16. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) a broader focus (Broad Band, Multi Utility, ...) and (2) bigger units with large networks and more customers. Research by SINTEF Energy Research shows that, so far, the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  17. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time

    Science.gov (United States)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
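
    A minimal sketch of the population-dynamics (cloning) estimator the paper analyses, written in discrete time for simplicity (the paper works in continuous time) and on a toy two-state Markov chain: each copy carries a weight exp(-s·k) per step (k = 1 when it jumps), the population is resampled in proportion to the weights, and the running average of ln⟨weight⟩ estimates the scaled cumulant generating function of the activity. All rates, biases and sizes are illustrative; the finite-population, finite-time biases of this estimator are exactly what the scaling analysis above corrects.

    ```python
    import numpy as np

    def cloning_scgf(s=0.5, p_jump=0.3, n_clones=1000, n_steps=2000, seed=4):
        """Population-dynamics estimate of the SCGF psi(s) of the activity
        (number of jumps) for a symmetric two-state chain in discrete time."""
        rng = np.random.default_rng(seed)
        state = np.zeros(n_clones, dtype=int)
        log_z = 0.0
        for _ in range(n_steps):
            jump = rng.random(n_clones) < p_jump
            state = np.where(jump, 1 - state, state)
            w = np.exp(-s * jump.astype(float))     # bias on the activity k
            log_z += np.log(w.mean())               # running estimate of ln <W>
            # resample clones in proportion to their weights (the selection step);
            # its finite-n_clones bias is what finite-size scaling removes
            idx = rng.choice(n_clones, size=n_clones, p=w / w.sum())
            state = state[idx]
        return log_z / n_steps

    # Exact SCGF for this toy chain: psi(s) = ln(1 - p + p e^{-s})
    s, p = 0.5, 0.3
    print("cloning:", cloning_scgf(), "  exact:", np.log(1 - p + p * np.exp(-s)))
    ```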

  18. Japanese large-scale interferometers

    CERN Document Server

    Kuroda, K; Miyoki, S; Ishizuka, H; Taylor, C T; Yamamoto, K; Miyakawa, O; Fujimoto, M K; Kawamura, S; Takahashi, R; Yamazaki, T; Arai, K; Tatsumi, D; Ueda, A; Fukushima, M; Sato, S; Shintomi, T; Yamamoto, A; Suzuki, T; Saitô, Y; Haruyama, T; Sato, N; Higashi, Y; Uchiyama, T; Tomaru, T; Tsubono, K; Ando, M; Takamori, A; Numata, K; Ueda, K I; Yoneda, H; Nakagawa, K; Musha, M; Mio, N; Moriwaki, S; Somiya, K; Araya, A; Kanda, N; Telada, S; Sasaki, M; Tagoshi, H; Nakamura, T; Tanaka, T; Ohara, K

    2002-01-01

    The objective of the TAMA 300 interferometer was to develop advanced technologies for kilometre-scale interferometers and to observe gravitational wave events in nearby galaxies. It was designed as a power-recycled Fabry-Perot-Michelson interferometer and was intended as a step towards a final interferometer in Japan. The present successful status of TAMA is presented. TAMA forms a basis for LCGT (large-scale cryogenic gravitational wave telescope), a 3 km scale cryogenic interferometer to be built in the Kamioka mine in Japan, implementing cryogenic mirror techniques. The plan of LCGT is schematically described along with its associated R&D.

  1. Decentralized adaptive neural control for high-order interconnected stochastic nonlinear time-delay systems with unknown system dynamics.

    Science.gov (United States)

    Si, Wenjie; Dong, Xunde; Yang, Feifei

    2018-03-01

    This paper is concerned with the problem of decentralized adaptive backstepping state-feedback control for uncertain high-order large-scale stochastic nonlinear time-delay systems. For the control design of high-order large-scale nonlinear systems, only one adaptive parameter is constructed to overcome over-parameterization, and neural networks are employed to cope with the difficulties raised by the completely unknown system dynamics and stochastic disturbances. An appropriate Lyapunov-Krasovskii functional and the property of hyperbolic tangent functions are then used, for the first time, to deal with the unknown unmatched time-delay interactions of high-order large-scale systems. Finally, on the basis of Lyapunov stability theory, a decentralized adaptive neural controller is developed that decreases the number of learning parameters. The actual controller can be designed so as to ensure that all the signals in the closed-loop system are semi-globally uniformly ultimately bounded (SGUUB) and that the tracking error converges to a small neighborhood of zero. A simulation example further shows the validity of the design method.

  2. Stochastic excitation of low frequency variability in the midlatitude atmosphere

    International Nuclear Information System (INIS)

    Ioannou, P.J.; Farrell, B.F.

    1994-01-01

    Spectral analysis of the transient geopotential variance of the midlatitude atmosphere reveals a sharp peak in the wavenumber-period spectra concentrated at large scales (low zonal wave numbers m and periods longer than 10 days). This is surprising because conventional baroclinic instability calculations predict a broad maximum of the variance at synoptic scale (8 < m < 12) with associated periods of a few days. In this work we review the method for calculating the maintained variance and associated fluxes and then discuss some results pertaining to the interpretation of the EOFs which arise from the stochastic dynamics of non-normal dynamical systems.

  3. Asymptotic Limits for Transport in Binary Stochastic Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Prinja, A. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-01

    The Karhunen-Loeve stochastic spectral expansion of a random binary mixture of immiscible fluids in planar geometry is used to explore asymptotic limits of radiation transport in such mixtures. Under appropriate scalings of mixing parameters - correlation length, volume fraction, and material cross sections - and employing a multiple-scale expansion of the angular flux, previously established atomic mix and diffusion limits are reproduced. When applied to highly contrasting material properties in the small correlation length limit, the methodology yields a nonstandard reflective medium transport equation that merits further investigation. Finally, a hybrid closure is proposed that produces both small and large correlation length limits of the closure condition for the material averaged equations.

  4. Reserves and cash flows under stochastic retirement

    DEFF Research Database (Denmark)

    Gad, Kamille Sofie Tågholt; Nielsen, Jeppe Woetmann

    2016-01-01

    Uncertain time of retirement and uncertain structure of retirement benefits are risk factors for life insurance companies. Nevertheless, classical life insurance models assume these are deterministic. In this paper, we include the risk from stochastic time of retirement and stochastic benefit structure in a classical finite-state Markov model for a life insurance contract. We include discontinuities in the distribution of the retirement time. First, we derive formulas for appropriate scaling of the benefits according to the time of retirement and discuss the link between the scaling and the guarantees provided. Stochastic retirement creates a need to rethink the construction of disability products for high ages, and ways to handle this are discussed. We show how to calculate market reserves and how to use modified transition probabilities to calculate expected cash flows without significantly...

  5. Large scale scenario analysis of future low carbon energy options

    International Nuclear Information System (INIS)

    Olaleye, Olaitan; Baker, Erin

    2015-01-01

    In this study, we use a multi-model framework to examine a set of possible future energy scenarios resulting from R&D investments in Solar, Nuclear, Carbon Capture and Storage (CCS), Bio-fuels, Bio-electricity, and Batteries for Electric Transportation. Based on a global scenario analysis, we examine the impact on the economy of advancement in energy technologies, considering both individual technologies and the interactions between pairs of technologies, with a focus on the role of uncertainty. Nuclear and CCS have the most impact on abatement costs, with CCS mostly important at high levels of abatement. We show that CCS and Bio-electricity are complements, while most of the other energy technology pairs are substitutes. We also test for stochastic dominance between R&D portfolios: given the uncertainty in R&D outcomes, we examine which portfolios would be preferred by all decision-makers, regardless of their attitude toward risk. We observe that portfolios with CCS tend to stochastically dominate those without CCS, and portfolios lacking CCS and Nuclear tend to be stochastically dominated by others. We find that the dominance of CCS becomes even stronger as uncertainty in climate damages increases. Finally, we show that there is significant value in carefully choosing a portfolio, as relatively small portfolios can dominate large portfolios. - Highlights: • We examine future energy scenarios in the face of R&D and climate uncertainty. • We examine the impact of advancement in energy technologies and pairs of technologies. • CCS complements Bio-electricity while most technology pairs are substitutes. • R&D portfolios without CCS are stochastically dominated by portfolios with CCS. • Higher damage uncertainty favors R&D development of CCS and Bio-electricity

  6. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large-scale model testing performed using the large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. Results are described from tests of the material resistance to non-ductile fracture. The testing included the base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  7. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    Science.gov (United States)

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a heavy workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production would imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.

  8. Distributed large-scale dimensional metrology new insights

    CERN Document Server

    Franceschini, Fiorenzo; Maisano, Domenico

    2011-01-01

    Focuses on the latest insights into and challenges of distributed large-scale dimensional metrology. Enables practitioners to study distributed large-scale dimensional metrology independently. Includes specific examples of the development of new system prototypes.

  9. SCALE INTERACTION IN A MIXING LAYER. THE ROLE OF THE LARGE-SCALE GRADIENTS

    KAUST Repository

    Fiscaletti, Daniele; Attili, Antonio; Bisetti, Fabrizio; Elsinga, Gerrit E.

    2015-01-01

    The large-scale amplitude modulation of the small scales depends on the crosswise location in the mixing layer. However, from physical considerations we would expect the scales to interact in a qualitatively similar way within the flow and across different turbulent flows. Therefore, instead of the large-scale fluctuations, the modulation of the small scales by the large-scale gradients has been additionally investigated.

  10. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequency approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
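
    The MaxEnt prior and the stochastic constraint are the paper's contribution and are not reproduced here; the sketch below is only the generic MCMC skeleton on the simplest member of the class (a plain log-normal FT model with vague priors), with all data and settings illustrative, to show where a reliability posterior comes from.

    ```python
    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(5)

    # Illustrative data: log-normal failure times; y are the log-lifetimes
    y = np.log(rng.lognormal(mean=2.0, sigma=0.5, size=30))

    def log_post(mu, sig):
        """Log-posterior for a log-normal FT model with vague priors
        (flat on mu, flat on log sig); a stand-in for the paper's MaxEnt prior."""
        if sig <= 0.0:
            return -np.inf
        return -len(y) * np.log(sig) - 0.5 * np.sum((y - mu)**2) / sig**2 - np.log(sig)

    # Random-walk Metropolis over (mu, sig)
    mu, sig, lp, chain = 0.0, 1.0, log_post(0.0, 1.0), []
    for _ in range(20_000):
        mu_p, sig_p = mu + 0.1 * rng.normal(), sig + 0.1 * rng.normal()
        lp_p = log_post(mu_p, sig_p)
        if np.log(rng.random()) < lp_p - lp:        # accept/reject step
            mu, sig, lp = mu_p, sig_p, lp_p
        chain.append((mu, sig))
    chain = np.array(chain[5000:])                   # discard burn-in

    # Posterior of the reliability R(t*) = P(T > t*) at an illustrative t* = 10
    z = (np.log(10.0) - chain[:, 0]) / chain[:, 1]
    R = 0.5 * erfc(z / np.sqrt(2.0))
    print("posterior mean R(10):", R.mean())
    ```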

  11. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    Science.gov (United States)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (a Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud-tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a
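
    The flavour of the resulting stochastic sampler can be sketched as follows, using the Craig and Cohen statistics named above: a Poisson number of clouds per grid box with exponentially distributed cloud-base mass fluxes, so that the relative fluctuation of the grid-box total grows as the box (and hence the mean cloud number) shrinks. Mean cloud number and mean per-cloud flux are illustrative assumptions, not values from the LES study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def gridbox_mass_flux(mean_n, mean_m, n_samples=100_000):
        """Total cloud-base mass flux per grid box: a Poisson number of clouds
        with exponential per-cloud fluxes (a compound Poisson; the sum of n
        exponentials is Gamma(shape=n, scale=mean_m))."""
        n = rng.poisson(mean_n, size=n_samples)
        return rng.gamma(np.maximum(n, 1), mean_m) * (n > 0)   # zero flux if n = 0

    # Theory: std/mean of the total flux is sqrt(2 / <N>), so coarse grid boxes
    # (large <N>) look nearly deterministic while fine ones fluctuate strongly.
    for mean_n in (1000.0, 10.0):
        M = gridbox_mass_flux(mean_n, mean_m=0.1)
        print(f"<N> = {mean_n:6.0f}:  std/mean = {M.std() / M.mean():.3f},"
              f"  theory = {np.sqrt(2.0 / mean_n):.3f}")
    ```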

  12. Fast stochastic algorithm for simulating evolutionary population dynamics

    Science.gov (United States)

    Tsimring, Lev; Hasty, Jeff; Mather, William

    2012-02-01

    Evolution and co-evolution of ecological communities are stochastic processes often characterized by vastly different rates of reproduction and mutation and a coexistence of very large and very small sub-populations of co-evolving species. This creates serious difficulties for accurate statistical modeling of evolutionary dynamics. In this talk, we introduce a new exact algorithm for fast fully stochastic simulations of birth/death/mutation processes. It produces a significant speedup compared to the direct stochastic simulation algorithm in a typical case when the total population size is large and the mutation rates are much smaller than birth/death rates. We illustrate the performance of the algorithm on several representative examples: evolution on a smooth fitness landscape, NK model, and stochastic predator-prey system.

  13. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (e.g., in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representations of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  14. Brownian motion and stochastic calculus

    CERN Document Server

    Karatzas, Ioannis

    1998-01-01

    This book is designed as a text for graduate courses in stochastic processes. It is written for readers familiar with measure-theoretic probability and discrete-time processes who wish to explore stochastic processes in continuous time. The vehicle chosen for this exposition is Brownian motion, which is presented as the canonical example of both a martingale and a Markov process with continuous paths. In this context, the theory of stochastic integration and stochastic calculus is developed. The power of this calculus is illustrated by results concerning representations of martingales and change of measure on Wiener space, and these in turn permit a presentation of recent advances in financial economics (option pricing and consumption/investment optimization). This book contains a detailed discussion of weak and strong solutions of stochastic differential equations and a study of local time for semimartingales, with special emphasis on the theory of Brownian local time. The text is complemented by a large num...

  15. Stochastic dynamic modeling of regular and slow earthquakes

    Science.gov (United States)

    Aso, N.; Ando, R.; Ide, S.

    2017-12-01

    Both regular and slow earthquakes are slip phenomena on plate boundaries and are simulated by (quasi-)dynamic modeling [Liu and Rice, 2005]. In these numerical simulations, spatial heterogeneity is usually considered not only for explaining real physical properties but also for evaluating the stability of the calculations or the sensitivity of the results to the conditions. However, even if we discretize the model space with small grids, heterogeneity at scales smaller than the grid size is not considered in models with deterministic governing equations. To evaluate the effect of heterogeneity at the smaller scales, we need to consider stochastic interactions between slip and stress in a dynamic modeling. Tidal stress is known to trigger or affect both regular and slow earthquakes [Yabe et al., 2015; Ide et al., 2016], and such an external force with fluctuation can also be considered as a stochastic external force. A healing process of faults may also be stochastic, so we introduce a stochastic friction law. In the present study, we propose a stochastic dynamic model to explain both regular and slow earthquakes. We solve a mode III problem, which corresponds to rupture propagation along the strike direction. We use a BIEM (boundary integral equation method) scheme to simulate slip evolution, but we add stochastic perturbations to the governing equations, which are usually written in a deterministic manner. As the simplest type of perturbation, we adopt Gaussian deviations in the formulation of the slip-stress kernel, the external force, and the friction. By increasing the amplitude of the perturbations of the slip-stress kernel, we reproduce the complicated rupture processes of regular earthquakes, including unilateral and bilateral ruptures. By perturbing the external force, we reproduce slow rupture propagation at a scale of km/day. The slow propagation generated by a combination of fast interactions at S-wave velocity is analogous to the kinetic theory of gases: thermal ...

  16. Spatiotemporal Stochastic Resonance: Theory and Experiment

    Science.gov (United States)

    Jung, Peter

    1996-03-01

    The amplification of weak periodic signals in bistable or excitable systems via stochastic resonance has been studied intensively over recent years. We go one step further and ask: can noise enhance spatiotemporal patterns in excitable media, and can this effect be observed in nature? To this end, we look at large, two-dimensional arrays of coupled excitable elements. Due to the coupling, excitation can propagate through the array in the form of nonlinear waves. We observe target waves, rotating spiral waves and other wave forms. If the coupling between the elements is below a critical threshold, any excitational pattern will die out in the absence of noise. Below this threshold, large scale rotating spiral waves - as they are observed above threshold - can be maintained by a proper level of the noise[1]. Furthermore, their geometric features, such as the curvature, can be controlled by the homogeneous noise level[2]. If the noise level is too large, break-up of spiral waves and collisions with spontaneously nucleated waves yield spiral turbulence. Driving our array with a spatiotemporal pattern, e.g. a rotating spiral wave, we show that for weak coupling the excitational response of the array shows stochastic resonance - an effect we have termed spatiotemporal stochastic resonance. In the last part of the talk I'll make contact with calcium waves, observed in astrocyte cultures and hippocampus slices[3]. A. Cornell-Bell and collaborators[3] have pointed out the role of calcium waves for long-range glial signaling. We demonstrate the similarity of calcium waves with nonlinear waves in noisy excitable media. The noise level in the tissue is characterized by spontaneous activity and can be controlled by applying neuro-transmitter substances[3]. Noise effects in our model are compared with the effect of neuro-transmitters on calcium waves. [1] P. Jung and G. Mayer-Kress, CHAOS 5, 458 (1995). [2] P. Jung and G. Mayer-Kress, Phys. Rev. Lett. 62, 2682 (1995). [3 ...
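
    As a toy analogue of noise-sustained waves in a subthreshold medium (not the model of the talk), consider a stochastic three-state excitable lattice: with per-neighbor coupling probability below the wave-propagation threshold, activity dies without noise, while a small spontaneous-excitation probability keeps waves alive. All parameters are assumptions chosen for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        N, steps = 128, 200
        p_couple = 0.18    # subthreshold per-neighbor excitation probability
        p_noise = 1e-4     # spontaneous excitation rate (the noise level)
        REST, EXC, REF = 0, 1, 2

        grid = np.zeros((N, N), dtype=np.int8)
        grid[N // 2, N // 2] = EXC                  # seed an excitation

        for _ in range(steps):
            excited = grid == EXC
            # number of excited 4-neighbors (periodic boundaries)
            nbr = (np.roll(excited, 1, 0) + np.roll(excited, -1, 0) +
                   np.roll(excited, 1, 1) + np.roll(excited, -1, 1)).astype(int)
            # each excited neighbor fires a resting cell independently;
            # noise adds a small spontaneous excitation probability
            p_exc = 1.0 - (1.0 - p_couple) ** nbr * (1.0 - p_noise)
            fire = (grid == REST) & (rng.random((N, N)) < p_exc)
            new = grid.copy()
            new[excited] = REF                      # excited -> refractory
            new[grid == REF] = REST                 # refractory -> rest
            new[fire] = EXC
            grid = new

        print("active fraction:", (grid == EXC).mean())

    Sweeping p_noise up from zero first sustains coherent wave fragments and then, at high noise, produces the wave break-up described above - the qualitative signature of spatiotemporal stochastic resonance.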

  17. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  18. Stochastic efficiency: five case studies

    International Nuclear Information System (INIS)

    Proesmans, Karel; Broeck, Christian Van den

    2015-01-01

    Stochastic efficiency is evaluated in five case studies: driven Brownian motion, effusion with a thermo-chemical and thermo-velocity gradient, a quantum dot and a model for information-to-work conversion. The salient features of stochastic efficiency, including the maximum of the large deviation function at the reversible efficiency, are reproduced. The approach to and extrapolation into the asymptotic time regime are documented. (paper)

  19. Large-Scale 3D Printing: The Way Forward

    Science.gov (United States)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, and numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  20. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  1. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems ... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its ...

  2. Accelerating sustainability in large-scale facilities

    CERN Multimedia

    Marina Giampietro

    2011-01-01

    Scientific research centres and large-scale facilities are intrinsically energy intensive, but how can big science improve its energy management and eventually contribute to the environmental cause with new cleantech? CERN’s commitment to providing tangible answers to these questions was sealed at the first workshop on energy management for large scale scientific infrastructures held in Lund, Sweden, on 13-14 October. Participants at the energy management for large scale scientific infrastructures workshop. The workshop, co-organised with the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF), tackled a recognised need for addressing energy issues in relation to science and technology policies. It brought together more than 150 representatives of Research Infrastructures (RIs) and energy experts from Europe and North America. “Without compromising our scientific projects, we can ...

  3. Large scale reflood test

    International Nuclear Information System (INIS)

    Hirano, Kemmei; Murao, Yoshio

    1980-01-01

    The large-scale reflood test, with a view to ensuring the safety of light water reactors, was started in fiscal 1976 under the special account act for power source development promotion measures, by entrustment from the Science and Technology Agency. Thereafter, to establish the safety of PWRs in loss-of-coolant accidents through joint international efforts, the Japan-West Germany-U.S. research cooperation program was started in April 1980, and the large-scale reflood test is now included in this program. It consists of two tests: one using a cylindrical core testing apparatus for examining the overall system effect, and one using a plate core testing apparatus for testing individual effects. Each apparatus is composed of mock-ups of the pressure vessel, primary loop, containment vessel and ECCS. The testing method, the test results and the research cooperation program are described. (J.P.N.)

  4. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  5. Stochastic optimization-based study of dimerization kinetics

    Indian Academy of Sciences (India)

    To this end, we study the dimerization kinetics of a protein as a model system. We follow the dimerization kinetics using a stochastic simulation algorithm and ... Keywords: optimization; dimerization kinetics; sensitivity analysis; stochastic simulation ...

  6. Large Scale Cosmological Anomalies and Inhomogeneous Dark Energy

    Directory of Open Access Journals (Sweden)

    Leandros Perivolaropoulos

    2014-01-01

    A wide range of large scale observations hint towards possible modifications of the standard cosmological model, which is based on a homogeneous and isotropic universe with a small cosmological constant and matter. These observations, also known as “cosmic anomalies”, include unexpected Cosmic Microwave Background perturbations on large angular scales, large dipolar peculiar velocity flows of galaxies (“bulk flows”), the measurement of inhomogeneous values of the fine structure constant on cosmological scales (“alpha dipole”) and other effects. The presence of the observational anomalies could either be a large statistical fluctuation in the context of ΛCDM or it could indicate a non-trivial departure from the cosmological principle on Hubble scales. Such a departure is very much constrained by cosmological observations for matter. For dark energy, however, there are no significant observational constraints for Hubble scale inhomogeneities. In this brief review I discuss some of the theoretical models that can naturally lead to inhomogeneous dark energy, their observational constraints and their potential to explain the large scale cosmic anomalies.

  7. Large-scale patterns in Rayleigh-Benard convection

    International Nuclear Information System (INIS)

    Hardenberg, J. von; Parodi, A.; Passoni, G.; Provenzale, A.; Spiegel, E.A.

    2008-01-01

    Rayleigh-Benard convection at large Rayleigh number is characterized by the presence of intense, vertically moving plumes. Both laboratory and numerical experiments reveal that the rising and descending plumes aggregate into separate clusters so as to produce large-scale updrafts and downdrafts. The horizontal scales of the aggregates reported so far have been comparable to the horizontal extent of the containers, but it has not been clear whether that represents a limitation imposed by domain size. In this work, we present numerical simulations of convection at sufficiently large aspect ratio to ascertain whether there is an intrinsic saturation scale for the clustering process when that ratio is large enough. From a series of simulations of Rayleigh-Benard convection with Rayleigh numbers between 10^5 and 10^8 and with aspect ratios up to 12π, we conclude that the clustering process has a finite horizontal saturation scale with at most a weak dependence on Rayleigh number in the range studied

  8. A stochastic model of depolarization enhancement due to large energy spread in electron storage rings

    International Nuclear Information System (INIS)

    Buon, J.

    1988-10-01

    A new semiclassical and stochastic model of spin diffusion is used to obtain numerical predictions for depolarization enhancement due to beam energy spread. It confirms the results of previous models for the synchrotron sidebands of isolated spin resonances. A satisfactory agreement is obtained with the width of a synchrotron satellite observed at SPEAR. For HERA and LEP, at Z^0 energy, the depolarization enhancement is of the order of a few units and increases very rapidly with the energy spread. A large reduction of the polarization degree is expected in these rings

  9. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shi, E-mail: sjin@wisc.edu [Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706 (United States); Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240 (China); Lu, Hanqing, E-mail: hanqing@math.wisc.edu [Department of Mathematics, University of Wisconsin-Madison, Madison, WI 53706 (United States)

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in the cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro–macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
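
    The record does not reproduce the scheme, but the backbone of any gPC-SG method is the truncated expansion of the random solution in polynomials orthonormal with respect to the law \pi(z) of the random input z (generic notation, not copied from the paper):

        u(t, x, z) \approx \sum_{k=1}^{K} \hat{u}_k(t, x)\, \Phi_k(z),
        \qquad
        \int \Phi_j(z)\, \Phi_k(z)\, \pi(z)\, dz = \delta_{jk} .

    Inserting the expansion into the PDE and projecting onto each \Phi_j yields a coupled deterministic system for the coefficients \hat{u}_j(t, x); the micro-macro decomposition and AP time discretization are then applied to that system, and spectral accuracy in z follows from the regularity of u in the random space.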

  10. Manufacturing test of large scale hollow capsule and long length cladding in the large scale oxide dispersion strengthened (ODS) martensitic steel

    International Nuclear Information System (INIS)

    Narita, Takeshi; Ukai, Shigeharu; Kaito, Takeji; Ohtsuka, Satoshi; Fujiwara, Masayuki

    2004-04-01

    The mass production capability of oxide dispersion strengthened (ODS) martensitic steel cladding (9Cr) has been evaluated in Phase II of the Feasibility Studies on Commercialized Fast Reactor Cycle System. The cost of manufacturing the mother tube (raw materials powder production, mechanical alloying (MA) by ball mill, canning, hot extrusion, and machining) is a dominant factor in the total cost of manufacturing ODS ferritic steel cladding. In this study, a large-scale 9Cr-ODS martensitic steel mother tube, made with a large-scale hollow capsule, and long-length claddings were manufactured, and the applicability of these processes was evaluated. The following results were obtained. (1) Manufacture of the large scale mother tube with dimensions of 32 mm OD, 21 mm ID, and 2 m length has been successfully carried out using a large scale hollow capsule. This mother tube has a high degree of accuracy in size. (2) The chemical composition and the microstructure of the manufactured mother tube are similar to those of the existing mother tube manufactured with a small scale can, and no remarkable difference between the bottom and top ends of the manufactured mother tube has been observed. (3) The long-length cladding has been successfully manufactured from the large scale mother tube which was made using a large scale hollow capsule. (4) For reducing the manufacturing cost of ODS steel claddings, the manufacturing process of mother tubes using large scale hollow capsules is promising. (author)

  11. Toward Development of a Stochastic Wake Model: Validation Using LES and Turbine Loads

    Directory of Open Access Journals (Sweden)

    Jae Sang Moon

    2017-12-01

    Wind turbines within an array do not experience free-stream undisturbed flow fields. Rather, the flow fields on internal turbines are influenced by wakes generated by upwind units and exhibit different dynamic characteristics relative to the free stream. The International Electrotechnical Commission (IEC) standard 61400-1 for the design of wind turbines only considers a deterministic wake model for the design of a wind plant. This study is focused on the development of a stochastic model for waked wind fields. First, high-fidelity physics-based waked wind velocity fields are generated using Large-Eddy Simulation (LES). Stochastic characteristics of these LES waked wind velocity fields, including mean and turbulence components, are analyzed. Wake-related mean and turbulence field-related parameters are then estimated for use with a stochastic model, using Multivariate Multiple Linear Regression (MMLR) with the LES data. To validate the simulated wind fields based on the stochastic model, wind turbine tower and blade loads are generated using aeroelastic simulation for utility-scale wind turbine models and compared with those based directly on the LES inflow. The study’s overall objective is to offer efficient and validated stochastic approaches that are computationally tractable for assessing the performance and loads of turbines operating in wakes.
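
    The regression step is standard enough to sketch: MMLR fits several wake-field parameters jointly as linear functions of a common set of flow descriptors with one least-squares solve. Everything below (predictor and response names, sizes, noise level) is an illustrative assumption, not the study's data.

        import numpy as np

        rng = np.random.default_rng(3)
        n, p, q = 200, 3, 4           # samples, predictors, wake parameters
        X = rng.normal(size=(n, p))   # e.g. spacing, ambient TI, shear (assumed)
        X1 = np.column_stack([np.ones(n), X])            # add intercept column
        B_true = rng.normal(size=(p + 1, q))
        Y = X1 @ B_true + 0.1 * rng.normal(size=(n, q))  # synthetic responses

        # MMLR: a single least-squares solve estimates all q response
        # columns simultaneously.
        B_hat, *_ = np.linalg.lstsq(X1, Y, rcond=None)

        x_new = np.column_stack([np.ones(1), rng.normal(size=(1, p))])
        print("predicted wake parameters:", (x_new @ B_hat).ravel())

    The fitted parameters then drive a stochastic wind-field generator, whose output is validated against turbine loads as the abstract describes.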

  12. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    KAUST Repository

    Kumar, Rohit

    2017-08-11

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of a large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.

  13. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustration, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.

  14. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    Science.gov (United States)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach ...
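
    The wavelet multiresolution step can be sketched concretely. Using PyWavelets, a monthly series is split into additive components, one per time-scale, and each component can then be regressed on its own large-scale predictor; the series, wavelet choice and level count below are assumptions for illustration.

        import numpy as np
        import pywt

        rng = np.random.default_rng(4)
        t = np.arange(600)                       # 50 years of monthly values (toy)
        x = (np.sin(2 * np.pi * t / 12)          # annual cycle
             + 0.5 * np.sin(2 * np.pi * t / 96)  # slow, multi-year oscillation
             + 0.3 * rng.normal(size=t.size))    # noise

        level = 5
        coeffs = pywt.wavedec(x, 'db4', level=level)

        # Reconstruct the signal component carried by each scale by zeroing
        # all other coefficient arrays; the components sum to the original.
        components = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            components.append(pywt.waverec(kept, 'db4')[: x.size])

        recon = np.sum(components, axis=0)
        print("max reconstruction error:", np.abs(recon - x).max())

    Because the decomposition is additive, scale-specific downscaling models can be summed back into a full reconstructed predictand, which is the core of the ESD approach described above.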

  15. Superconducting materials for large scale applications

    International Nuclear Information System (INIS)

    Dew-Hughes, D.

    1975-01-01

    Applications of superconductors capable of carrying large current densities in large-scale electrical devices are examined. Discussions are included on critical current density, superconducting materials available, and future prospects for improved superconducting materials. (JRD)

  16. Large-scale influences in near-wall turbulence.

    Science.gov (United States)

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
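
    The modulation diagnostic lends itself to a short sketch: low-pass the velocity signal to isolate the large scales, take the Hilbert envelope of the small-scale remainder, and correlate the two. The synthetic signal, cutoff frequency and filter below are assumptions, not the paper's hot-wire processing chain.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        rng = np.random.default_rng(5)
        fs, n = 1000.0, 2 ** 15
        t = np.arange(n) / fs
        u_L = np.sin(2 * np.pi * 2.0 * t)             # large-scale motion
        u_S = (1 + 0.5 * u_L) * rng.normal(size=n)    # AM small scales
        u = u_L + 0.2 * u_S                           # composite signal

        b, a = butter(4, 20.0 / (fs / 2), btype='low')  # 20 Hz cutoff (assumed)
        uL_hat = filtfilt(b, a, u)                    # large-scale component
        uS_hat = u - uL_hat                           # small-scale component

        env = np.abs(hilbert(uS_hat))                 # small-scale envelope
        env_L = filtfilt(b, a, env)                   # low-passed envelope

        print("AM correlation:", np.corrcoef(uL_hat, env_L)[0, 1])

    A clearly positive correlation recovers the built-in modulation; applied to real near-wall signals, the same statistic quantifies the large-scale influence reported above.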

  17. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    Science.gov (United States)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
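
    A toy numerical analogue of the eigenvector solution (not the paper's full network model) can be set up directly: with input coefficients A[i, j] and firm-specific stochastic profit rates r_j, the monetary balance p_j = (1 + r_j) * sum_i p_i A[i, j] makes the price vector a left Perron eigenvector. All numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(6)
        nf = 5                                      # number of firms (toy)
        A = rng.uniform(0.0, 0.4, size=(nf, nf))    # A[i, j]: input of good i
        A /= 1.5 * A.sum(axis=0)                    # ...per unit of good j
        r = rng.normal(0.10, 0.03, size=nf)         # stochastic profit rates

        # Balance p = p M with M[i, j] = A[i, j] * (1 + r[j]); take the
        # Perron (largest-eigenvalue) left eigenvector as the price direction.
        M = A * (1.0 + r)[None, :]
        w, V = np.linalg.eig(M.T)
        p = np.abs(V[:, np.argmax(w.real)].real)
        p /= p[0]                                   # numeraire: good 0
        print("relative prices:", p.round(3))

    Repeating the draw of r gives a distribution of price vectors, from which a relation between expected price and labor content can be examined numerically.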

  18. The role of large scale storage in a GB low carbon energy future: Issues and policy challenges

    International Nuclear Information System (INIS)

    Gruenewald, Philipp; Cockerill, Tim; Contestabile, Marcello; Pearson, Peter

    2011-01-01

    Large scale storage offers the prospect of capturing and using excess electricity within a low carbon energy system, which otherwise might have to be wasted. Incorporating the role of storage into current scenario tools is challenging, because it requires high temporal resolution to reflect the effects of intermittent sources on system balancing. This study draws on results from a model with such resolution. It concludes that large scale storage could become economically viable for scenarios with high penetration of renewables. As the proportion of intermittent sources increases, the optimal type of storage shifts towards solutions with low energy related costs, even at the expense of efficiency. However, a range of uncertainties have been identified, concerning storage technology development, the regulatory environment, alternatives to storage and the stochastic uncertainty of year-on-year revenues. All of these negatively affect the cost of finance and the chances of successful market uptake. We argue, therefore, that, if the possible wider system and social benefits from the presence of storage are to be achieved, stronger and more strategic policy support may be necessary. More work on the social and system benefits of storage is needed to gauge the appropriate extent of support measures. - Highlights: → Time resolved modelling shows future potential for large scale power storage in GB. → The value of storage is highly sensitive to a range of parameters. → Uncertainty over the revenue from storage could pose a barrier to investment. → To realise wider system benefits stronger and more strategic policy support may be necessary.

  19. Signaling in large-scale neural networks

    DEFF Research Database (Denmark)

    Berg, Rune W; Hounsgaard, Jørn

    2009-01-01

    We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.

  20. Stochastic cooling at Fermilab

    International Nuclear Information System (INIS)

    Marriner, J.

    1986-08-01

    The topics discussed are the stochastic cooling systems in use at Fermilab and some of the techniques that have been employed to meet the particular requirements of the anti-proton source. Stochastic cooling at Fermilab became of paramount importance about 5 years ago when the anti-proton source group at Fermilab abandoned the electron cooling ring in favor of a high flux anti-proton source which relied solely on stochastic cooling to achieve the phase space densities necessary for colliding proton and anti-proton beams. The Fermilab systems have constituted a substantial advance in the techniques of cooling including: large pickup arrays operating at microwave frequencies, extensive use of cryogenic techniques to reduce thermal noise, super-conducting notch filters, and the development of tools for controlling and for accurately phasing the system

  1. Stochastic process variation in deep-submicron CMOS circuits and algorithms

    CERN Document Server

    Zjajo, Amir

    2014-01-01

    One of the most notable features of nanometer scale CMOS technology is the increasing magnitude of variability of the key device parameters affecting the performance of integrated circuits. The growth of variability can be attributed to multiple factors, including the difficulty of manufacturing control, the emergence of new systematic variation-generating mechanisms, and most importantly, the increase in atomic-scale randomness, where device operation must be described as a stochastic process. In addition to wide-sense stationary stochastic device variability and temperature variation, the existence of non-stationary stochastic electrical noise associated with fundamental processes in integrated-circuit devices represents an elementary limit on the performance of electronic circuits. In an attempt to address these issues, Stochastic Process Variation in Deep-Submicron CMOS: Circuits and Algorithms offers a unique combination of mathematical treatment of random process variation, electrical noise and temperature and ne...

  2. Memory effects on stochastic resonance

    Science.gov (United States)

    Neiman, Alexander; Sung, Wokyung

    1996-02-01

    We study the phenomenon of stochastic resonance (SR) in a bistable system with internal colored noise. In this situation the system possesses time-dependent memory friction connected with noise via the fluctuation-dissipation theorem, so that in the absence of periodic driving the system approaches the thermodynamic equilibrium state. For this non-Markovian case we find that memory usually suppresses stochastic resonance. However, for a large memory time SR can be enhanced by the memory.
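
    A common surrogate for such internal colored noise (a sketch, not the paper's memory-friction model with the full fluctuation-dissipation structure) is an overdamped bistable oscillator driven by an Ornstein-Uhlenbeck process, whose correlation time tau plays the role of the memory time:

        import numpy as np

        rng = np.random.default_rng(11)
        dt, n = 1e-3, 500_000
        tau, D = 0.5, 0.1                 # noise memory time and intensity (toy)
        A, Om = 0.1, 2 * np.pi * 0.01     # weak periodic drive

        x, eta = -1.0, 0.0
        xs = np.empty(n)
        for i in range(n):
            # bistable dynamics x' = x - x^3 + drive + colored noise eta
            x += (x - x ** 3 + A * np.cos(Om * i * dt) + eta) * dt
            # Ornstein-Uhlenbeck noise with correlation time tau
            eta += -eta / tau * dt + np.sqrt(2 * D * dt) / tau * rng.normal()
            xs[i] = x

        # power at the drive frequency as a crude SR measure
        X = np.fft.rfft(xs - xs.mean())
        freqs = np.fft.rfftfreq(n, dt)
        k = np.argmin(np.abs(freqs - Om / (2 * np.pi)))
        print("power at drive frequency:", np.abs(X[k]) ** 2 / n)

    Scanning D at fixed tau traces the usual SR peak; increasing tau at fixed D typically lowers that peak, consistent with the suppression by memory reported above.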

  3. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKIs; in particular, there are many challenges for PKIs deployed over large-scale healthcare networks. In this paper, we propose a PKI for ensuring security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI facilitates the trust issues that arise in a large-scale healthcare network, including multi-domain PKIs.

  4. Effects of intrinsic stochasticity on delayed reaction-diffusion patterning systems

    KAUST Repository

    Woolley, Thomas E.; Baker, Ruth E.; Gaffney, Eamonn A.; Maini, Philip K.; Seirin-Lee, Sungrim

    2012-01-01

    Cellular gene expression is a complex process involving many steps, including the transcription of DNA and translation of mRNA; hence the synthesis of proteins requires a considerable amount of time, from ten minutes to several hours. Since diffusion-driven instability has been observed to be sensitive to perturbations in kinetic delays, the application of Turing patterning mechanisms to the problem of producing spatially heterogeneous differential gene expression has been questioned. In deterministic systems a small delay in the reactions can cause a large increase in the time it takes a system to pattern. Recently, it has been observed that in undelayed systems intrinsic stochasticity can cause pattern initiation to occur earlier than in the analogous deterministic simulations. Here we are interested in adding both stochasticity and delays to Turing systems in order to assess whether stochasticity can reduce the patterning time scale in delayed Turing systems. As analytical insights to this problem are difficult to attain and often limited in their use, we focus on stochastically simulating delayed systems. We consider four different Turing systems and two different forms of delay. Our results are mixed and lead to the conclusion that, although the sensitivity to delays in the Turing mechanism is not completely removed by the addition of intrinsic noise, the effects of the delays are clearly ameliorated in certain specific cases. © 2012 American Physical Society.

  5. Effects of intrinsic stochasticity on delayed reaction-diffusion patterning systems

    KAUST Repository

    Woolley, Thomas E.

    2012-05-22

    Cellular gene expression is a complex process involving many steps, including the transcription of DNA and translation of mRNA; hence the synthesis of proteins requires a considerable amount of time, from ten minutes to several hours. Since diffusion-driven instability has been observed to be sensitive to perturbations in kinetic delays, the application of Turing patterning mechanisms to the problem of producing spatially heterogeneous differential gene expression has been questioned. In deterministic systems a small delay in the reactions can cause a large increase in the time it takes a system to pattern. Recently, it has been observed that in undelayed systems intrinsic stochasticity can cause pattern initiation to occur earlier than in the analogous deterministic simulations. Here we are interested in adding both stochasticity and delays to Turing systems in order to assess whether stochasticity can reduce the patterning time scale in delayed Turing systems. As analytical insights to this problem are difficult to attain and often limited in their use, we focus on stochastically simulating delayed systems. We consider four different Turing systems and two different forms of delay. Our results are mixed and lead to the conclusion that, although the sensitivity to delays in the Turing mechanism is not completely removed by the addition of intrinsic noise, the effects of the delays are clearly ameliorated in certain specific cases. © 2012 American Physical Society.

  6. Emerging large-scale solar heating applications

    International Nuclear Information System (INIS)

    Wong, W.P.; McClung, J.L.

    2009-01-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  7. Emerging large-scale solar heating applications

    Energy Technology Data Exchange (ETDEWEB)

    Wong, W.P.; McClung, J.L. [Science Applications International Corporation (SAIC Canada), Ottawa, Ontario (Canada)

    2009-07-01

    Currently the market for solar heating applications in Canada is dominated by outdoor swimming pool heating, make-up air pre-heating and domestic water heating in homes, commercial and institutional buildings. All of these involve relatively small systems, except for a few air pre-heating systems on very large buildings. Together these applications make up well over 90% of the solar thermal collectors installed in Canada during 2007. These three applications, along with the recent re-emergence of large-scale concentrated solar thermal for generating electricity, also dominate the world markets. This paper examines some emerging markets for large scale solar heating applications, with a focus on the Canadian climate and market. (author)

  8. Stochastic heterogeneous interaction promotes cooperation in spatial prisoner's dilemma game.

    Directory of Open Access Journals (Sweden)

    Ping Zhu

    Previous studies mostly investigate players' cooperative behavior as affected by game time-scale or individual diversity. In this paper, by involving both time-scale and diversity simultaneously, we explore the effect of stochastic heterogeneous interaction. In our model, the occurrence of a game interaction between each pair of linked players obeys a random probability, which is further described by certain distributions. Simulations on a 4-neighbor square lattice show that the cooperation level is remarkably promoted when stochastic heterogeneous interaction is considered. The results are then explained by investigating the mean payoffs, the mean boundary payoffs and the transition probabilities between cooperators and defectors. We also show some typical snapshots and evolution time series of the system. Finally, results on the 8-neighbor square lattice and BA scale-free network indicate that stochastic heterogeneous interaction can be robust against different network topologies. Our work may sharpen the understanding of the joint effect of game time-scale and individual diversity on spatial games.
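
    A minimal sketch of the mechanism (parameter values and the uniform weight distribution are assumptions; the paper explores several distributions): each lattice pair plays the weak prisoner's dilemma only with a probability set by the players' heterogeneous interaction weights, followed by standard Fermi imitation.

        import numpy as np

        rng = np.random.default_rng(7)
        L, b, K, steps = 50, 1.05, 0.1, 300   # lattice, temptation, noise, rounds
        strat = rng.integers(0, 2, size=(L, L))   # 1 = cooperator, 0 = defector
        w = rng.uniform(size=(L, L))              # per-player interaction weight
        shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]

        for _ in range(steps):
            pay = np.zeros((L, L))
            for dx, dy in shifts:
                s_n = np.roll(np.roll(strat, dx, 0), dy, 1)
                w_n = np.roll(np.roll(w, dx, 0), dy, 1)
                play = rng.random((L, L)) < w * w_n   # stochastic interaction
                # weak PD payoffs R=1, T=b, S=P=0 against this neighbor
                pay += play * np.where(strat == 1, 1.0 * s_n, b * s_n)
            # Fermi imitation of one randomly chosen neighbor
            dx, dy = shifts[rng.integers(4)]
            pay_n = np.roll(np.roll(pay, dx, 0), dy, 1)
            s_n = np.roll(np.roll(strat, dx, 0), dy, 1)
            adopt = rng.random((L, L)) < 1.0 / (1.0 + np.exp((pay - pay_n) / K))
            strat = np.where(adopt, s_n, strat)

        print("cooperator fraction:", strat.mean())

    Setting w to 1 everywhere recovers the usual deterministic-interaction lattice game, which is the natural baseline for the cooperation-enhancement comparison.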

  9. On the stochastic stability of MHD equilibria

    International Nuclear Information System (INIS)

    Teichmann, J.

    1979-07-01

    The stochastic stability in the large of stationary equilibria of ideal and dissipative magnetohydrodynamics under the influence of stationary random fluctuations is studied using the direct Liapunov method. Sufficient and necessary conditions for stability of the linearized Euler-Lagrangian systems are given. The destabilizing effect of stochastic fluctuations is demonstrated. (orig.)

  10. Capture of fixation by rotational flow; a deterministic hypothesis regarding scaling and stochasticity in fixational eye movements

    Directory of Open Access Journals (Sweden)

    Nicholas Mansel Wilkinson

    2014-02-01

    Visual scan paths exhibit complex, stochastic dynamics. Even during visual fixation, the eye is in constant motion. Fixational drift and tremor are thought to reflect fluctuations in the persistent neural activity of neural integrators in the oculomotor brainstem, which integrate sequences of transient saccadic velocity signals into a short term memory of eye position. Despite intensive research and much progress, the precise mechanisms by which oculomotor posture is maintained remain elusive. Drift exhibits a stochastic statistical profile which has been modelled using random walk formalisms. Tremor is widely dismissed as noise. Here we focus on the dynamical profile of fixational tremor, and argue that tremor may be a signal which usefully reflects the workings of oculomotor postural control. We identify signatures reminiscent of a certain flavour of transient neurodynamics: toric travelling waves which rotate around a central phase singularity. Spiral waves play an organisational role in dynamical systems at many scales throughout nature, though their potential functional role in brain activity remains a matter of educated speculation. Spiral waves have a repertoire of functionally interesting dynamical properties, including persistence, which suggest that they could in theory contribute to persistent neural activity in the oculomotor postural control system. Whilst speculative, the singularity hypothesis of oculomotor postural control implies testable predictions, and could provide the beginnings of an integrated dynamical framework for eye movements across scales.

  11. Large-scale regions of antimatter

    International Nuclear Information System (INIS)

    Grobov, A. V.; Rubin, S. G.

    2015-01-01

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  12. Large-scale regions of antimatter

    Energy Technology Data Exchange (ETDEWEB)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru [National Research Nuclear University MEPhI (Russian Federation)

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  13. Nonlinear stochastic dynamics of mesoscopic homogeneous biochemical reaction systems—an analytical theory

    International Nuclear Information System (INIS)

    Qian, Hong

    2011-01-01

    The nonlinear dynamics of biochemical reactions in a small-sized system on the order of a cell are stochastic. Assuming spatial homogeneity, the populations of n molecular species follow a multi-dimensional birth-and-death process on Z^n. We introduce the Delbrück–Gillespie process, a continuous-time Markov jump process, whose Kolmogorov forward equation has been known as the chemical master equation, and whose stochastic trajectories can be computed via the Gillespie algorithm. Using simple models, we illustrate that a system of nonlinear ordinary differential equations on R^n emerges in the infinite system size limit. For finite system size, transitions among multiple attractors of the nonlinear dynamical system are rare events with exponentially long transit times. There is a separation of time scales between the deterministic ODEs and the stochastic Markov jumps between attractors. No diffusion process can provide a global representation that is accurate on both short and long time scales for the nonlinear, stochastic population dynamics. On the short time scale and near deterministic stable fixed points, Ornstein–Uhlenbeck Gaussian processes give linear stochastic dynamics that exhibit time-irreversible circular motion for open, driven chemical systems. Extending this individual stochastic behaviour-based nonlinear population theory of molecular species to other biological systems is discussed. (invited article)

  14. Option Pricing with Stochastic Volatility and Jump Diffusion Processes

    Directory of Open Access Journals (Sweden)

    Radu Lupu

    2006-03-01

    Option pricing by use of the Black-Scholes-Merton (BSM) model is based on the assumption that asset prices have a lognormal distribution. In spite of the use of these models on a large scale, by both practitioners and academics, the assumption of lognormality is rejected by the history of returns. The objective of this article is to present the methods developed after the Black-Scholes-Merton environment that deal with adjusting option pricing models to the empirical properties of asset returns. The main models that appeared after BSM allowed for special changes of the returns that materialized in jump-diffusion and stochastic volatility processes. The article presents the foundations of risk-neutral option valuation and the empirical evidence that fed the amendment of the lognormal assumption in the first part, and shows the valuation procedure under the assumption of stock prices following the jump-diffusion process and the stochastic volatility process.
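
    The jump-diffusion branch is easy to illustrate with the classic Merton model, which belongs to the post-BSM family the article surveys; the Monte Carlo sketch below uses illustrative market and jump parameters and the standard risk-neutral drift compensation.

        import numpy as np

        rng = np.random.default_rng(8)
        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # toy market inputs
        lam, mu_j, sig_j = 0.5, -0.1, 0.15   # jump intensity, log-jump mean/std
        n = 200_000

        # Compensate the drift so that E[S_T] = S0 * exp(r*T) under
        # the risk-neutral measure.
        kappa = np.exp(mu_j + 0.5 * sig_j ** 2) - 1.0
        N = rng.poisson(lam * T, size=n)                 # number of jumps
        J = rng.normal(mu_j * N, sig_j * np.sqrt(N))     # summed log-jumps
        Z = rng.normal(size=n)
        logST = (np.log(S0) + (r - 0.5 * sigma ** 2 - lam * kappa) * T
                 + sigma * np.sqrt(T) * Z + J)
        payoff = np.maximum(np.exp(logST) - K, 0.0)
        print("Merton call price:", np.exp(-r * T) * payoff.mean())

    Setting lam to zero collapses the estimate to the plain BSM Monte Carlo price, which makes the contribution of the jump component directly visible.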

  15. Large-Scale Analysis of Art Proportions

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2014-01-01

    While literature often tries to impute mathematical constants into art, this large-scale study (11 databases of paintings and photos, around 200,000 items) shows a different truth. The analysis, consisting of the width/height proportions, shows a value of rarely if ever one (square) and with majo ...

  16. The Expanded Large Scale Gap Test

    Science.gov (United States)

    1987-03-01

    NSWC TR 86-32: The Expanded Large Scale Gap Test, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public ... arises, to reduce the spread in the LSGT 50% gap value.) The worst charges, such as those with the highest or lowest densities, the largest re-pressed ...

  17. Redshift space correlations and scale-dependent stochastic biasing of density peaks

    Science.gov (United States)

    Desjacques, Vincent; Sheth, Ravi K.

    2010-01-01

    ... dependent, so the configuration-space bias is stochastic and scale dependent, both in real and redshift space. We provide expressions for this stochasticity and its evolution.

  18. Large scale and big data processing and management

    CERN Document Server

    Sakr, Sherif

    2014-01-01

    Large Scale and Big Data: Processing and Management provides readers with a central source of reference on the data management techniques currently available for large-scale data processing. Presenting chapters written by leading researchers, academics, and practitioners, it addresses the fundamental challenges associated with Big Data processing tools and techniques across a range of computing environments.The book begins by discussing the basic concepts and tools of large-scale Big Data processing and cloud computing. It also provides an overview of different programming models and cloud-bas

  19. Stochastic cooling

    International Nuclear Information System (INIS)

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pick-up. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for a anti p accumulator for the Tevatron

  20. Large scale cluster computing workshop

    International Nuclear Information System (INIS)

    Dane Skow; Alan Silverman

    2002-01-01

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, to be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; (4) to identify and connect groups with similar interests within HENP and the larger clustering community

  1. Stochastic Effects; Application in Nuclear Physics

    International Nuclear Information System (INIS)

    Mazonka, O.

    2000-04-01

    Stochastic effects in nuclear physics refer to the study of the dynamics of nuclear systems evolving under stochastic equations of motion. In this dissertation we restrict our attention to classical scattering models. We begin with an introduction of the model of nuclear dynamics and the deterministic equations of evolution. We apply a Langevin approach - an additional property of the model which reflects the statistical nature of low-energy nuclear behaviour. We then concentrate our attention on the problem of calculating tails of distribution functions, which is in fact the problem of calculating probabilities of rare outcomes. Two general strategies are proposed. Results and discussion follow. Finally, in the appendix we consider stochastic effects in nonequilibrium systems. A few exactly solvable models are presented. For one model we show explicitly that stochastic behaviour in a microscopic description can lead to ordered collective effects on the macroscopic scale. Two others are solved to confirm the predictions of the fluctuation theorem. (author)
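
    The dissertation's two strategies are not named in the record; one standard strategy for such rare-outcome tails is importance sampling, sketched here on the toy problem of estimating a Gaussian tail probability by shifting the sampling distribution into the tail and reweighting.

        import numpy as np

        rng = np.random.default_rng(9)
        a, n = 5.0, 100_000

        # Naive Monte Carlo: essentially no samples land beyond a = 5
        naive = (rng.normal(size=n) > a).mean()

        # Importance sampling from N(a, 1) with likelihood-ratio weights
        # phi(x) / phi(x - a) = exp(a^2 / 2 - a * x)
        x = rng.normal(a, 1.0, size=n)
        weights = np.exp(0.5 * a ** 2 - a * x)
        is_est = (weights * (x > a)).mean()

        print("naive:", naive, "importance sampling:", is_est)
        # exact value: 1 - Phi(5) ~ 2.87e-7

    The same reweighting idea carries over to Langevin trajectories: bias the noise toward the rare outcome and correct with the corresponding path weight.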

  2. Hybrid framework for the simulation of stochastic chemical kinetics

    International Nuclear Information System (INIS)

    Duncan, Andrew; Erban, Radek; Zygalakis, Konstantinos

    2016-01-01

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.
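
    The diffusive end of the bridge is the chemical Langevin equation; a minimal Euler-Maruyama sketch for the simplest birth-death network (rates are illustrative, and the hybrid switching logic that is the paper's actual contribution is not reproduced) looks like this:

        import numpy as np

        rng = np.random.default_rng(10)
        k1, k2 = 100.0, 1.0        # production 0 -> X and degradation X -> 0
        x, t, dt, t_end = 0.0, 0.0, 1e-3, 10.0

        # CLE for this network: dX = (k1 - k2*X) dt
        #                            + sqrt(k1) dW1 - sqrt(k2*X) dW2
        while t < t_end:
            a1, a2 = k1, k2 * max(x, 0.0)        # propensities, clamped at 0
            x += ((a1 - a2) * dt
                  + np.sqrt(a1 * dt) * rng.normal()
                  - np.sqrt(a2 * dt) * rng.normal())
            t += dt

        print("CLE endpoint:", x, "(stationary mean k1/k2 =", k1 / k2, ")")

    The approximation is good here because copy numbers stay near 100; when a species drops to a handful of molecules, the sqrt terms misbehave and discrete SSA jumps are needed - exactly the regime in which the hybrid jump-diffusion model keeps the discrete description.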

  3. Hybrid framework for the simulation of stochastic chemical kinetics

    Science.gov (United States)

    Duncan, Andrew; Erban, Radek; Zygalakis, Konstantinos

    2016-12-01

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the "fast" reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.

  4. Hybrid framework for the simulation of stochastic chemical kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Duncan, Andrew, E-mail: a.duncan@imperial.ac.uk [Department of Mathematics, Imperial College, South Kensington Campus, London, SW7 2AZ (United Kingdom); Erban, Radek, E-mail: erban@maths.ox.ac.uk [Mathematical Institute, University of Oxford, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Zygalakis, Konstantinos, E-mail: k.zygalakis@ed.ac.uk [School of Mathematics, University of Edinburgh, Peter Guthrie Tait Road, Edinburgh, EH9 3FD (United Kingdom)

    2016-12-01

    Stochasticity plays a fundamental role in various biochemical processes, such as cell regulatory networks and enzyme cascades. Isothermal, well-mixed systems can be modelled as Markov processes, typically simulated using the Gillespie Stochastic Simulation Algorithm (SSA) [25]. While easy to implement and exact, the computational cost of using the Gillespie SSA to simulate such systems can become prohibitive as the frequency of reaction events increases. This has motivated numerous coarse-grained schemes, where the “fast” reactions are approximated either using Langevin dynamics or deterministically. While such approaches provide a good approximation when all reactants are abundant, the approximation breaks down when one or more species exist only in small concentrations and the fluctuations arising from the discrete nature of the reactions become significant. This is particularly problematic when using such methods to compute statistics of extinction times for chemical species, as well as simulating non-equilibrium systems such as cell-cycle models in which a single species can cycle between abundance and scarcity. In this paper, a hybrid jump-diffusion model for simulating well-mixed stochastic kinetics is derived. It acts as a bridge between the Gillespie SSA and the chemical Langevin equation. For low reactant reactions the underlying behaviour is purely discrete, while purely diffusive when the concentrations of all species are large, with the two different behaviours coexisting in the intermediate region. A bound on the weak error in the classical large volume scaling limit is obtained, and three different numerical discretisations of the jump-diffusion model are described. The benefits of such a formalism are illustrated using computational examples.

  5. Redesign of a supply network by considering stochastic demand

    Directory of Open Access Journals (Sweden)

    Juan Camilo Paz

    2015-09-01

    This paper presents the problem of redesigning a large-scale supply network under demand variability. The central problem lies in determining strategic decisions on the closure and capacity adjustment of some network echelons, and tactical decisions concerning the distribution channels used for transporting products. We have formulated a deterministic Mixed Integer Linear Programming model (MILP) and a stochastic MILP model (SMILP) whose objective functions are the maximization of EBITDA (Earnings before Interest, Taxes, Depreciation and Amortization). In the stochastic model, the network design decisions, such as capacities, number of warehouses in operation, and material and product flows between echelons, are determined in a single stage, by defining an objective function that penalizes unsatisfied demand and demand surpluses caused by demand changes. The solution strategy adopted for the stochastic model is the Sample Average Approximation (SAA) scheme. The model is based on the case of a Colombian company dedicated to the production and marketing of foodstuffs and supplies for the bakery industry. The results show that the proposed methodology provides a solid reference for decision support in supply network redesign, accounting for the expected economic contribution of products and the variability of demand.
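
    The Sample Average Approximation scheme mentioned above replaces the expectation in the objective with an average over sampled demand scenarios and then optimizes that deterministic surrogate. A toy sketch of the idea on a newsvendor-style capacity choice follows; the prices, demand distribution and grid search are hypothetical stand-ins for the paper's MILP.

        # Sample Average Approximation (SAA) sketch on a toy newsvendor problem:
        # choose capacity q to maximize expected profit under random demand.
        # All parameters are hypothetical; the paper applies SAA to a MILP.
        import numpy as np

        rng = np.random.default_rng(2)
        price, cost = 5.0, 3.0
        demand = rng.lognormal(mean=4.0, sigma=0.5, size=1000)  # scenarios

        def saa_profit(q, scenarios):
            # average profit over the sampled scenarios (the SAA objective)
            return np.mean(price * np.minimum(q, scenarios) - cost * q)

        qs = np.linspace(0, 200, 401)
        profits = [saa_profit(q, demand) for q in qs]
        q_star = qs[int(np.argmax(profits))]
        print(f"SAA-optimal capacity ~ {q_star:.1f}")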

  6. Perturbation expansions of stochastic wavefunctions for open quantum systems

    Science.gov (United States)

    Ke, Yaling; Zhao, Yi

    2017-11-01

    Based on the stochastic unravelling of the reduced density operator in the Feynman path integral formalism for an open quantum system coupled to harmonic environments, a new non-Markovian stochastic Schrödinger equation (NMSSE) has been established that allows for the systematic perturbation expansion in the system-bath coupling to arbitrary order. This NMSSE can be transformed in a facile manner into the other two NMSSEs, i.e., the non-Markovian quantum state diffusion and time-dependent wavepacket diffusion methods. Benchmarked by numerically exact results, we have conducted a comparative study of the proposed method in its lowest order approximation, with perturbative quantum master equations in the symmetric spin-boson model and the realistic Fenna-Matthews-Olson complex. It is found that our method outperforms the second-order time-convolutionless quantum master equation in the whole parameter regime and even performs far better than the fourth-order one in the slow-bath and high-temperature cases. Besides, the method is applicable on an equal footing for any kind of spectral density function and is expected to be a powerful tool to explore the quantum dynamics of large-scale systems, benefiting from the wavefunction framework and the time-local appearance within a single stochastic trajectory.

  7. Renormalization of an abelian gauge theory in stochastic quantization

    International Nuclear Information System (INIS)

    Chaturvedi, S.; Kapoor, A.K.; Srinivasan, V.

    1987-01-01

    The renormalization of an abelian gauge field coupled to a complex scalar field is discussed in the stochastic quantization method. The superspace formulation of the stochastic quantization method is used to derive the Ward-Takahashi identities associated with supersymmetry. These Ward-Takahashi identities, together with previously derived Ward-Takahashi identities associated with gauge invariance, are shown to be sufficient to fix all the renormalization constants in terms of the scaling of the fields and of the parameters appearing in the stochastic theory. (orig.)

  8. Large-Scale Agriculture and Outgrower Schemes in Ethiopia

    DEFF Research Database (Denmark)

    Wendimu, Mengistu Assefa

    , the impact of large-scale agriculture and outgrower schemes on productivity, household welfare and wages in developing countries is highly contentious. Chapter 1 of this thesis provides an introduction to the study, while also reviewing the key debate in the contemporary land ‘grabbing’ and historical large...... sugarcane outgrower scheme on household income and asset stocks. Chapter 5 examines the wages and working conditions in ‘formal’ large-scale and ‘informal’ small-scale irrigated agriculture. The results in Chapter 2 show that moisture stress, the use of untested planting materials, and conflict over land...... commands a higher wage than ‘formal’ large-scale agriculture, while rather different wage determination mechanisms exist in the two sectors. Human capital characteristics (education and experience) partly explain the differences in wages within the formal sector, but play no significant role...

  9. The role of stochasticity in sawtooth oscillation

    International Nuclear Information System (INIS)

    Lichtenberg, A.J.; Itoh, Kimitaka; Itoh, Sanae; Fukuyama, Atsushi.

    1991-08-01

    In this paper we have demonstrated that stochastization of field lines, resulting from the interaction of the fundamental m/n=1/1 helical mode with other periodicities, plays an important role in sawtooth oscillations. The time scale for the stochastic temperature diffusion has been determined. It was shown to be sufficiently fast to account for the fast sawtooth crash, and is generally shorter than the time scales for the redistribution of current. The enhancement of the electron and ion viscosity, arising from the stochastic field lines, has been calculated. The enhanced electron viscosity always leads to an initial increase in the growth rate of the mode; the enhanced ion viscosity can ultimately lead to mode stabilization before a complete temperature redistribution or flux reconnection has occurred. A dynamical model has been introduced to calculate the path of the sawtooth oscillation through a parameter space of shear and amplitude of the helical perturbation. The stochastic trigger to the enhanced growth rate and the stabilization by the ion viscosity are also included in the model. A reasonable prescription for the flux reconnection at the end of the growth phase allows us to determine the initial q-value for the successive sawtooth ramps. (J.P.N.)

  10. Multiple fields in stochastic inflation

    Energy Technology Data Exchange (ETDEWEB)

    Assadullahi, Hooshyar [Institute of Cosmology & Gravitation, University of Portsmouth,Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom); Firouzjahi, Hassan [School of Astronomy, Institute for Research in Fundamental Sciences (IPM),P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Noorbala, Mahdiyar [Department of Physics, University of Tehran,P.O. Box 14395-547, Tehran (Iran, Islamic Republic of); School of Astronomy, Institute for Research in Fundamental Sciences (IPM),P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Vennin, Vincent; Wands, David [Institute of Cosmology & Gravitation, University of Portsmouth,Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom)

    2016-06-24

    Stochastic effects in multi-field inflationary scenarios are investigated. A hierarchy of diffusion equations is derived, the solutions of which yield moments of the numbers of inflationary e-folds. Solving the resulting partial differential equations in multi-dimensional field space is more challenging than the single-field case. A few tractable examples are discussed, which show that the number of fields is, in general, a critical parameter. When more than two fields are present for instance, the probability to explore arbitrarily large-field regions of the potential, otherwise inaccessible to single-field dynamics, becomes non-zero. In some configurations, this gives rise to an infinite mean number of e-folds, regardless of the initial conditions. Another difference with respect to single-field scenarios is that multi-field stochastic effects can be large even at sub-Planckian energy. This opens interesting new possibilities for probing quantum effects in inflationary dynamics, since the moments of the numbers of e-folds can be used to calculate the distribution of primordial density perturbations in the stochastic-δN formalism.

  11. Economically viable large-scale hydrogen liquefaction

    Science.gov (United States)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  12. Large scale chromatographic separations using continuous displacement chromatography (CDC)

    International Nuclear Information System (INIS)

    Taniguchi, V.T.; Doty, A.W.; Byers, C.H.

    1988-01-01

    A process for large scale chromatographic separations using a continuous chromatography technique is described. The process combines the advantages of large scale batch fixed column displacement chromatography with conventional analytical or elution continuous annular chromatography (CAC) to enable large scale displacement chromatography to be performed on a continuous basis (CDC). Such large scale, continuous displacement chromatography separations have not been reported in the literature. The process is demonstrated with the ion exchange separation of a binary lanthanide (Nd/Pr) mixture. The process is, however, applicable to any displacement chromatography separation that can be performed using conventional batch, fixed column chromatography

  13. Entropy Production in Stochastics

    Directory of Open Access Journals (Sweden)

    Demetris Koutsoyiannis

    2017-10-01

    While the modern definition of entropy is genuinely probabilistic, in entropy production the classical thermodynamic definition, as in heat transfer, is typically used. Here we explore the concept of entropy production within stochastics and, particularly, two forms of entropy production in logarithmic time, unconditionally (EPLT) or conditionally on the past and present having been observed (CEPLT). We study the theoretical properties of both forms, in general and in application to a broad set of stochastic processes. A main question investigated, related to model identification and fitting from data, is how to estimate the entropy production from a time series. It turns out that there is a link of the EPLT with the climacogram, and of the CEPLT with two additional tools introduced here, namely the differenced climacogram and the climacospectrum. In particular, EPLT and CEPLT are related to slopes of log-log plots of these tools, with the asymptotic slopes at the tails being most important as they justify the emergence of scaling laws of second-order characteristics of stochastic processes. As a real-world application, we use an extraordinarily long time series of turbulent velocity and show how a parsimonious stochastic model can be identified and fitted using the tools developed.
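
    The climacogram used above is simply the variance of the time-averaged process as a function of the averaging scale, with the entropy production read from slopes of its log-log plot. A minimal sketch on a synthetic series follows; the white-noise input and scale range are placeholders for the long turbulence record analysed in the paper.

        # Minimal climacogram sketch: variance of the scale-k averaged series
        # versus averaging scale k, examined on log-log axes. Synthetic
        # white-noise input stands in for a measured series.
        import numpy as np

        rng = np.random.default_rng(3)
        x = rng.standard_normal(2**16)

        scales = [2**i for i in range(0, 12)]
        gamma = []
        for k in scales:
            n = len(x) // k
            means = x[: n * k].reshape(n, k).mean(axis=1)  # block averages
            gamma.append(means.var())

        # For white noise the log-log slope is -1; persistent (Hurst)
        # processes give shallower slopes.
        slope = np.polyfit(np.log(scales), np.log(gamma), 1)[0]
        print(f"asymptotic log-log slope ~ {slope:.2f}")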

  14. Non-smooth optimization methods for large-scale problems: applications to mid-term power generation planning

    International Nuclear Information System (INIS)

    Emiel, G.

    2008-01-01

    This manuscript deals with large-scale non-smooth optimization as typically arises when performing Lagrangian relaxation of difficult problems. This technique is commonly used to tackle mixed-integer linear programs or large-scale convex problems. For example, a classical approach when dealing with power generation planning problems in a stochastic environment is to perform a Lagrangian relaxation of the coupling demand constraints. In this approach, a master problem coordinates local subproblems, specific to each generation unit. The master problem deals with a separable non-smooth dual function which can be maximized with, for example, bundle algorithms. In chapter 2, we introduce basic tools of non-smooth analysis and some recent results regarding incremental or inexact instances of non-smooth algorithms. However, in some situations, the dual problem may still be very hard to solve. For instance, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization may no longer be possible or the update of dual variables may fail. In order to reduce the dual dimension, different heuristics have been proposed. They involve a separation procedure to dynamically select a restricted set of constraints to be dualized along the iterations. This relax-and-cut type approach has shown its numerical efficiency in many combinatorial problems. In chapter 3, we show primal-dual convergence of such a strategy when using an adapted subgradient method for the dual step and under minimal assumptions on the separation procedure. Another limit of Lagrangian relaxation may appear when the dual function is separable into highly numerous or complex sub-functions. In such situations, the computational burden of solving all local subproblems may be preponderant in the whole iterative process. A natural strategy here would be to take full advantage of the dual separable structure, performing a dual iteration after having
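
    The Lagrangian-relaxation setup described above can be made concrete with a toy example: dualize the coupling demand constraint of a two-unit dispatch problem and maximize the resulting concave, separable dual. All data below are hypothetical, and a simple diminishing-step subgradient update stands in for the bundle methods discussed in the manuscript.

        # Lagrangian relaxation sketch: dualize the coupling constraint
        # sum_i g_i = d in a two-unit dispatch toy (quadratic costs), and
        # maximize the concave dual by a subgradient method.
        import numpy as np

        c = np.array([0.5, 1.0])        # unit cost coefficients
        gmax = np.array([60.0, 60.0])   # unit capacities
        d = 80.0                        # coupled demand
        lam, step = 0.0, 1.0

        for it in range(200):
            # local subproblems: min_g c_i g^2 - lam * g  =>  g = lam / (2 c_i)
            g = np.clip(lam / (2 * c), 0.0, gmax)
            subgrad = d - g.sum()                # subgradient of the dual
            lam += (step / (1 + it)) * subgrad   # diminishing-step ascent

        print(f"lam ~ {lam:.2f}, dispatch {g.round(2)}, "
              f"demand mismatch {d - g.sum():.3f}")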

  15. Stochastic dynamics of new inflation

    International Nuclear Information System (INIS)

    Nakao, Ken-ichi; Nambu, Yasusada; Sasaki, Misao.

    1988-07-01

    We investigate thoroughly the dynamics of an inflation-driving scalar field in terms of an extended version of the stochastic approach proposed by Starobinsky and discuss the spacetime structure of the inflationary universe. To avoid any complications which might arise due to quantum gravity, we concentrate our discussions on the new inflationary universe scenario in which all the energy scales involved are well below the Planck mass. The investigation is done both analytically and numerically. In particular, we present a full numerical analysis of the stochastic scalar field dynamics on the phase space. Then implications of the results are discussed. (author)

  16. Stochastic modelling of turbulence

    DEFF Research Database (Denmark)

    Sørensen, Emil Hedevang Lohse

    previously been shown to be closely connected to the energy dissipation. The incorporation of the small scale dynamics into the spatial model opens the door to a fully fledged stochastic model of turbulence. Concerning the interaction of wind and wind turbine, a new method is proposed to extract wind turbine...

  17. Large Scale Processes and Extreme Floods in Brazil

    Science.gov (United States)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in the last years as a new tool to improve the traditional, stationary based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the exceedance probability in which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).
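
    The dimension-reduction-plus-clustering step described above can be sketched with scikit-learn, using the unsupervised KernelPCA as a stand-in for the supervised variant cited in the abstract; the random matrix below is a placeholder for vertically integrated moisture-flux fields.

        # Schematic of the dimension-reduction + clustering step: embed
        # high-dimensional circulation fields in a low-dimensional space
        # with kernel PCA, then cluster the embedded events. Random data
        # stands in for moisture-flux fields; unsupervised KernelPCA is a
        # stand-in for the supervised variant used in the study.
        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        fields = rng.standard_normal((200, 500))  # 200 events x 500 grid pts

        embedding = KernelPCA(n_components=2, kernel="rbf").fit_transform(fields)
        labels = KMeans(n_clusters=3, n_init=10).fit_predict(embedding)
        print(np.bincount(labels))                # events per cluster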

  18. Climate SPHINX: evaluating the impact of resolution and stochastic physics parameterisations in the EC-Earth global climate model

    Science.gov (United States)

    Davini, Paolo; von Hardenberg, Jost; Corti, Susanna; Christensen, Hannah M.; Juricke, Stephan; Subramanian, Aneesh; Watson, Peter A. G.; Weisheimer, Antje; Palmer, Tim N.

    2017-03-01

    The Climate SPHINX (Stochastic Physics HIgh resolutioN eXperiments) project is a comprehensive set of ensemble simulations aimed at evaluating the sensitivity of present and future climate to model resolution and stochastic parameterisation. The EC-Earth Earth system model is used to explore the impact of stochastic physics in a large ensemble of 30-year climate integrations at five different atmospheric horizontal resolutions (from 125 up to 16 km). The project includes more than 120 simulations in both a historical scenario (1979-2008) and a climate change projection (2039-2068), together with coupled transient runs (1850-2100). A total of 20.4 million core hours have been used, made available from a single year grant from PRACE (the Partnership for Advanced Computing in Europe), and close to 1.5 PB of output data have been produced on SuperMUC IBM Petascale System at the Leibniz Supercomputing Centre (LRZ) in Garching, Germany. About 140 TB of post-processed data are stored on the CINECA supercomputing centre archives and are freely accessible to the community thanks to an EUDAT data pilot project. This paper presents the technical and scientific set-up of the experiments, including the details on the forcing used for the simulations performed, defining the SPHINX v1.0 protocol. In addition, an overview of preliminary results is given. An improvement in the simulation of Euro-Atlantic atmospheric blocking following resolution increase is observed. It is also shown that including stochastic parameterisation in the low-resolution runs helps to improve some aspects of the tropical climate - specifically the Madden-Julian Oscillation and the tropical rainfall variability. These findings show the importance of representing the impact of small-scale processes on the large-scale climate variability either explicitly (with high-resolution simulations) or stochastically (in low-resolution simulations).

  19. Computing in Large-Scale Dynamic Systems

    NARCIS (Netherlands)

    Pruteanu, A.S.

    2013-01-01

    Software applications developed for large-scale systems have always been difficult to develop due to problems caused by the large number of computing devices involved. Above a certain network size (roughly one hundred), necessary services such as code updating, topology discovery and data

  20. Fires in large scale ventilation systems

    International Nuclear Information System (INIS)

    Gregory, W.S.; Martin, R.A.; White, B.W.; Nichols, B.D.; Smith, P.R.; Leslie, I.H.; Fenton, D.L.; Gunaji, M.V.; Blythe, J.P.

    1991-01-01

    This paper summarizes the experience gained simulating fires in large scale ventilation systems patterned after ventilation systems found in nuclear fuel cycle facilities. The series of experiments discussed included: (1) combustion aerosol loading of 0.61 x 0.61 m HEPA filters with the combustion products of two organic fuels, polystyrene and polymethylemethacrylate; (2) gas dynamic and heat transport through a large scale ventilation system consisting of a 0.61 x 0.61 m duct 90 m in length, with dampers, HEPA filters, blowers, etc.; (3) gas dynamic and simultaneous transport of heat and solid particulate (consisting of glass beads with a mean aerodynamic diameter of 10 μm) through the large scale ventilation system; and (4) the transport of heat and soot, generated by kerosene pool fires, through the large scale ventilation system. The FIRAC computer code, designed to predict fire-induced transients in nuclear fuel cycle facility ventilation systems, was used to predict the results of experiments (2) through (4). In general, the results of the predictions were satisfactory. The code predictions for the gas dynamics, heat transport, and particulate transport and deposition were within 10% of the experimentally measured values. However, the code was less successful in predicting the amount of soot generation from kerosene pool fires, probably due to the fire module of the code being a one-dimensional zone model. The experiments revealed a complicated three-dimensional combustion pattern within the fire room of the ventilation system. Further refinement of the fire module within FIRAC is needed. (orig.)

  1. Stochastic thermodynamics

    Science.gov (United States)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    theory for small deviations from equilibrium, in which a general framework is constructed from the analysis of non-equilibrium states close to equilibrium. In a next step, Prigogine and others developed linear irreversible thermodynamics, which establishes relations between transport coefficients and entropy production on a phenomenological level in terms of thermodynamic forces and fluxes. However, beyond the realm of linear response no general theoretical results were available for quite a long time. This situation has changed drastically over the last 20 years with the development of stochastic thermodynamics, revealing that the range of validity of thermodynamic statements can indeed be extended deep into the non-equilibrium regime. Early developments in that direction trace back to the observations of symmetry relations between the probabilities for entropy production and entropy annihilation in non-equilibrium steady states [5-8] (nowadays categorized in the class of so-called detailed fluctuation theorems), and the derivations of the Bochkov-Kuzovlev [9, 10] and Jarzynski relations [11] (which are now classified as so-called integral fluctuation theorems). Apart from its fundamental theoretical interest, the developments in stochastic thermodynamics have experienced an additional boost from the recent experimental progress in fabricating, manipulating, controlling and observing systems on the micro- and nano-scale. These advances are not only of formidable use for probing and monitoring biological processes on the cellular, sub-cellular and molecular level, but even include the realization of a microscopic thermodynamic heat engine [12] or the experimental verification of Landauer's principle in a colloidal system [13]. The scientific program Stochastic Thermodynamics held between 4 and 15 March 2013, and hosted by The Nordic Institute for Theoretical Physics (Nordita), was attended by more than 50 scientists from the Nordic countries and elsewhere, amongst them

  2. Stochastic Modeling and Optimization in a Microgrid: A Survey

    Directory of Open Access Journals (Sweden)

    Hao Liang

    2014-03-01

    The future smart grid is expected to be an interconnected network of small-scale and self-contained microgrids, in addition to a large-scale electric power backbone. By utilizing microsources, such as renewable energy sources and combined heat and power plants, microgrids can supply electrical and heat loads in local areas in an economic and environmentally friendly way. To better accommodate the intermittent and weather-dependent renewable power generation, energy storage devices, such as batteries, heat buffers and plug-in electric vehicles (PEVs) with vehicle-to-grid systems can be integrated in microgrids. However, significant technical challenges arise in the planning, operation and control of microgrids, due to the randomness in renewable power generation, the buffering effect of energy storage devices and the high mobility of PEVs. The two-way communication functionalities of the future smart grid provide an opportunity to address these challenges, by offering the communication links for microgrid status information collection. However, how to utilize stochastic modeling and optimization tools for efficient, reliable and economic planning, operation and control of microgrids remains an open issue. In this paper, we investigate the key features of microgrids and provide a comprehensive literature survey on the stochastic modeling and optimization tools for a microgrid. Future research directions are also identified.

  3. Pan-European stochastic flood event set

    Science.gov (United States)

    Kadlec, Martin; Pinto, Joaquim G.; He, Yi; Punčochář, Petr; Kelemen, Fanni D.; Manful, Desmond; Palán, Ladislav

    2017-04-01

    Impact Forecasting (IF), the model development center of Aon Benfield, has been developing a large suite of catastrophe flood models on probabilistic bases for individual countries in Europe. Such natural catastrophes do not follow national boundaries: for example, the major flood in 2016 was responsible for Europe's largest insured loss of USD3.4bn and affected Germany, France, Belgium, Austria and parts of several other countries. Reflecting such needs, IF initiated a pan-European flood event set development which combines cross-country exposures with country-based loss distributions to provide more insightful data to re/insurers. Because the observed discharge data are not available across the whole of Europe in sufficient quantity and quality to permit detailed loss evaluation, a top-down approach was chosen. This approach is based on simulating precipitation from a GCM/RCM model chain followed by a calculation of discharges using rainfall-runoff modelling. IF set up this project in close collaboration with Karlsruhe Institute of Technology (KIT) regarding the precipitation estimates and with University of East Anglia (UEA) in terms of the rainfall-runoff modelling. KIT's main objective is to provide high resolution daily historical and stochastic time series of key meteorological variables. A purely dynamical downscaling approach with the regional climate model COSMO-CLM (CCLM) is used to generate the historical time series, using re-analysis data as boundary conditions. The resulting time series are validated against the gridded observational dataset E-OBS, and different bias-correction methods are employed. The generation of the stochastic time series requires transfer functions between large-scale atmospheric variables and regional temperature and precipitation fields. These transfer functions are developed for the historical time series using reanalysis data as predictors and bias-corrected CCLM simulated precipitation and temperature as

  4. Composite stochastic processes

    NARCIS (Netherlands)

    Kampen, N.G. van

    Certain problems in physics and chemistry lead to the definition of a class of stochastic processes. Although they are not Markovian they can be treated explicitly to some extent. In particular, the probability distribution for large times can be found. It is shown to obey a master equation. This

  5. Study on impurity screening in stochastic magnetic boundary of the Large Helical Device

    International Nuclear Information System (INIS)

    Kobayashi, M.; Morita, S.; Feng, Y.

    2008-10-01

    The impurity transport characteristics in the scrape-off layer associated with a stochastic magnetic boundary of LHD are analyzed. The remnant islands with very small internal field line pitch in the stochastic region play a key role in reducing the impurity influx. The thermal force driven impurity influx is significantly suppressed when the perpendicular energy flux exceeds the parallel one inside the islands due to the small pitch. Application of the 3D edge transport code, EMC3-EIRENE, confirmed the impurity retention (screening) effect in the edge region. It is also found that the edge surface layers are the most effective region to retain (screen) impurities because of the flow acceleration and plasma cooling via short flux tubes. The carbon emission obtained in experiments is in good agreement with the modelling results, showing the impurity retention (screening) potential of the stochastic magnetic boundary. (author)

  6. Assessment of Future Whole-System Value of Large-Scale Pumped Storage Plants in Europe

    Directory of Open Access Journals (Sweden)

    Fei Teng

    2018-01-01

    This paper analyses the impacts and benefits of the pumped storage plant (PSP) and its upgrade to variable speed on generation and transmission capacity requirements, capital costs, system operating costs and carbon emissions in the future European electricity system. The combination of a deterministic system planning tool, the Whole-electricity System Investment Model (WeSIM), and a stochastic system operation optimisation tool, Advanced Stochastic Unit Commitment (ASUC), is used to analyse the whole-system value of PSP technology and to quantify the impact of European balancing market integration and other competing flexible technologies on the value of the PSP. Case studies on the Pan-European system demonstrate that PSPs can reduce the total system cost by up to €13 billion per annum by 2050 in a scenario with a high share of renewables. Upgrading the PSP to variable-speed drive enhances its long-term benefits by 10–20%. On the other hand, balancing market integration across Europe may potentially reduce the overall value of the variable-speed PSP, although the effect can vary across different European regions. The results also suggest that large-scale deployment of demand-side response (DSR) leads to a significant reduction in the value of PSPs, while the value of PSPs increases by circa 18% when the total European interconnection capacity is halved. The benefit of PSPs in reducing emissions is relatively negligible by 2030 but constitutes around 6–10% of total annual carbon emissions from the European power sector by 2050.

  7. Large-scale Complex IT Systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2011-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that identifies the major challen...

  8. Large-scale complex IT systems

    OpenAIRE

    Sommerville, Ian; Cliff, Dave; Calinescu, Radu; Keen, Justin; Kelly, Tim; Kwiatkowska, Marta; McDermid, John; Paige, Richard

    2012-01-01

    This paper explores the issues around the construction of large-scale complex systems which are built as 'systems of systems' and suggests that there are fundamental reasons, derived from the inherent complexity in these systems, why our current software engineering methods and techniques cannot be scaled up to cope with the engineering challenges of constructing such systems. It then goes on to propose a research and education agenda for software engineering that ident...

  9. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed; Elsawy, Hesham; Gharbieh, Mohammad; Alouini, Mohamed-Slim; Adinoyi, Abdulkareem; Alshaalan, Furaih

    2017-01-01

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end

  10. A stochastic parameterization for deep convection using cellular automata

    Science.gov (United States)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    large-scale variables in regions where convective activity is large. A two month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138 (2012), 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
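
    A stochastic cellular automaton of the general kind referenced above can be sketched in a few lines: cells switch convectively "active" with a probability that grows with the number of active neighbours, and decay at a constant rate. The grid size and probabilities below are invented, not those of the cited scheme.

        # Toy stochastic cellular automaton: birth probability rises with the
        # number of active neighbours; death probability is constant.
        # All probabilities are hypothetical.
        import numpy as np

        rng = np.random.default_rng(5)
        grid = (rng.random((64, 64)) < 0.05).astype(int)  # sparse activity

        for step in range(50):
            # active neighbours (4-neighbourhood, periodic boundaries)
            nbrs = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0)
                    + np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
            p_on = 0.02 + 0.2 * nbrs   # organization by active neighbours
            p_off = 0.3
            birth = (grid == 0) & (rng.random(grid.shape) < p_on)
            death = (grid == 1) & (rng.random(grid.shape) < p_off)
            grid = grid + birth - death

        print(f"active fraction after 50 steps: {grid.mean():.3f}")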

  11. Ion stochastic heating by obliquely propagating magnetosonic waves

    International Nuclear Information System (INIS)

    Gao Xinliang; Lu Quanming; Wu Mingyu; Wang Shui

    2012-01-01

    The ion motions in obliquely propagating Alfven waves with sufficiently large amplitudes have already been studied by Chen et al. [Phys. Plasmas 8, 4713 (2001)], and it was found that the ion motions are stochastic when the wave frequency is at a fraction of the ion gyro-frequency. In this paper, with test particle simulations, we investigate the ion motions in obliquely propagating magnetosonic waves and find that the ion motions also become stochastic when the amplitude of the magnetosonic waves is sufficiently large due to the resonance at sub-cyclotron frequencies. Similar to the Alfven wave, the increase of the propagating angle, wave frequency, and the number of the wave modes can lower the stochastic threshold of the ion motions. However, because the magnetosonic waves become more and more compressive with the increase of the propagating angle, the decrease of the stochastic threshold with the increase of the propagating angle is more obvious in the magnetosonic waves than that in the Alfven waves.

  12. The scaling of population persistence with carrying capacity does not asymptote in populations of a fish experiencing extreme climate variability.

    Science.gov (United States)

    White, Richard S A; Wintle, Brendan A; McHugh, Peter A; Booker, Douglas J; McIntosh, Angus R

    2017-06-14

    Despite growing concerns regarding increasing frequency of extreme climate events and declining population sizes, the influence of environmental stochasticity on the relationship between population carrying capacity and time-to-extinction has received little empirical attention. While time-to-extinction increases exponentially with carrying capacity in constant environments, theoretical models suggest increasing environmental stochasticity causes asymptotic scaling, thus making minimum viable carrying capacity vastly uncertain in variable environments. Using empirical estimates of environmental stochasticity in fish metapopulations, we showed that increasing environmental stochasticity resulting from extreme droughts was insufficient to create asymptotic scaling of time-to-extinction with carrying capacity in local populations as predicted by theory. Local time-to-extinction increased with carrying capacity due to declining sensitivity to demographic stochasticity, and the slope of this relationship declined significantly as environmental stochasticity increased. However, recent 1 in 25 yr extreme droughts were insufficient to extirpate populations with large carrying capacity. Consequently, large populations may be more resilient to environmental stochasticity than previously thought. The lack of carrying capacity-related asymptotes in persistence under extreme climate variability reveals how small populations affected by habitat loss or overharvesting, may be disproportionately threatened by increases in extreme climate events with global warming. © 2017 The Author(s).
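
    The relationship at issue, time-to-extinction versus carrying capacity under combined demographic and environmental stochasticity, can be illustrated with a toy Ricker-type simulation. The parameters below are hypothetical and unrelated to the paper's fish data; extinction times are capped at t_max.

        # Toy illustration of time-to-extinction versus carrying capacity K
        # under demographic + environmental stochasticity (Ricker dynamics).
        import numpy as np

        rng = np.random.default_rng(6)

        def time_to_extinction(K, sigma_env, r=1.0, t_max=2_000):
            # lognormal environmental noise (mean 1) on Ricker growth,
            # Poisson demographic noise; first generation with n == 0.
            n = K
            for t in range(1, t_max + 1):
                env = np.exp(sigma_env * rng.standard_normal()
                             - 0.5 * sigma_env**2)
                n = rng.poisson(n * np.exp(r * (1.0 - n / K)) * env)
                if n == 0:
                    return t
            return t_max  # censored: no extinction within t_max

        for K in (10, 50, 250):
            times = [time_to_extinction(K, sigma_env=0.5) for _ in range(100)]
            print(f"K={K:4d}: mean time to extinction ~ "
                  f"{np.mean(times):.0f} generations")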

  13. Coordinated phenotype switching with large-scale chromosome flip-flop inversion observed in bacteria.

    Science.gov (United States)

    Cui, Longzhu; Neoh, Hui-min; Iwamoto, Akira; Hiramatsu, Keiichi

    2012-06-19

    Genome inversions are ubiquitous in organisms ranging from prokaryotes to eukaryotes. Typical examples can be identified by comparing the genomes of two or more closely related organisms, where genome inversion footprints are clearly visible. Although the evolutionary implications of this phenomenon are huge, little is known about the function and biological meaning of this process. Here, we report our findings on a bacterium that generates a reversible, large-scale inversion of its chromosome (about half of its total genome) at high frequencies of up to once every four generations. This inversion switches on or off bacterial phenotypes, including colony morphology, antibiotic susceptibility, hemolytic activity, and expression of dozens of genes. Quantitative measurements and mathematical analyses indicate that this reversible switching is stochastic but self-organized so as to maintain two forms of stable cell populations (i.e., small colony variant, normal colony variant) as a bet-hedging strategy. Thus, this heritable and reversible genome fluctuation seems to govern the bacterial life cycle; it has a profound impact on the course and outcomes of bacterial infections.

  14. Prospects for large scale electricity storage in Denmark

    DEFF Research Database (Denmark)

    Krog Ekman, Claus; Jensen, Søren Højgaard

    2010-01-01

    In a future power systems with additional wind power capacity there will be an increased need for large scale power management as well as reliable balancing and reserve capabilities. Different technologies for large scale electricity storage provide solutions to the different challenges arising w...

  15. Collective, stochastic and nonequilibrium behavior of highly excited hadronic matter

    Energy Technology Data Exchange (ETDEWEB)

    Carruthers, P [Los Alamos National Lab., NM (USA). Theoretical Div.

    1984-04-23

    We discuss selected problems concerning the dynamics and stochastic behavior of highly excited matter, particularly the QCD plasma. For the latter we consider the equation of state, kinetics, quasiparticles, flow properties and possible chaos and turbulence. The promise of phase space distribution functions for covariant transport and kinetic theory is stressed. The possibility and implications of a stochastic bag are spelled out. A simplified space-time model of hadronic collisions is pursued, with applications to A-A collisions and other matters. The domain wall between hadronic and plasma phase is of potential importance: its thickness and relation to surface tension is noticed. Finally, we review the recently developed stochastic cell model of multiparticle distributions and KNO scaling. This topic leads to the notion that fractional dimensions are involved in a rather general dynamical context. We speculate that various scaling phenomena are independent of the full dynamical structure, depending only on a general stochastic framework having to do with simple maps and strange attractors. 42 refs.

  16. Derivation of stochastic differential equations for scrape-off layer plasma fluctuations from experimentally measured statistics

    Energy Technology Data Exchange (ETDEWEB)

    Mekkaoui, Abdessamad [IEK-4 Forschungszentrum Juelich 52428 (Germany)

    2013-07-01

    A method to derive stochastic differential equations for intermittent plasma density dynamics in magnetic fusion edge plasma is presented. It uses the measured first four moments (mean, variance, skewness and kurtosis) and the correlation time of turbulence to write a Pearson equation for the probability distribution function of fluctuations. The Fokker-Planck equation is then used to derive a Langevin equation for the plasma density fluctuations. Theoretical expectations are used as constraints to fix the nonlinearity structure of the stochastic differential equation. In particular, when quadratically nonlinear dynamics is assumed, it is shown that the plasma density is driven by a multiplicative Wiener process and evolves on the turbulence correlation time scale, while the linear growth is quadratically damped by the fluctuation level. Strong criteria for the statistical discrimination of experimental time series are proposed as an alternative to the kurtosis-skewness scaling. This scaling is broadly used in the contemporary literature to characterize edge turbulence, but it is inappropriate because a large family of distributions could share it. The strong criteria allow us to focus on the relevant candidate distribution and to approach a nonlinear structure of the edge turbulence model.
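
    A Langevin equation of the general form described above, linear growth quadratically damped and driven by multiplicative Wiener noise, can be integrated with the Euler-Maruyama scheme and the first four moments estimated from the resulting record. The coefficients below are invented for illustration, not fitted to any experiment.

        # Euler-Maruyama sketch for dn = (gamma*n - beta*n^2) dt + sigma*n dW,
        # followed by the first four moments of the stationary record.
        # Coefficients are illustrative only.
        import numpy as np

        rng = np.random.default_rng(7)
        gamma, beta, sigma = 1.0, 1.0, 0.5
        dt, n_steps = 1e-3, 200_000
        dW = rng.standard_normal(n_steps) * np.sqrt(dt)

        n = 1.0
        samples = np.empty(n_steps)
        for i in range(n_steps):
            n += (gamma * n - beta * n**2) * dt + sigma * n * dW[i]
            n = max(n, 1e-12)   # keep the density positive under discretisation
            samples[i] = n

        m, v = samples.mean(), samples.var()
        skew = ((samples - m)**3).mean() / v**1.5
        kurt = ((samples - m)**4).mean() / v**2
        print(f"mean={m:.3f} var={v:.3f} skewness={skew:.3f} kurtosis={kurt:.3f}")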

  17. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    Science.gov (United States)

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified so far. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help us understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease.
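
    Both scalings can be measured on any token stream: Zipf's law as the log-log slope of frequency against rank, Heaps' law as the growth exponent of vocabulary size with stream length. The sketch below checks both on a synthetic Zipfian stream; the vocabulary size and exponent are arbitrary choices, not taken from the article.

        # Check Zipf's law (rank-frequency) and Heaps' law (vocabulary growth)
        # on a synthetic stream drawn from a Zipfian distribution.
        import numpy as np

        rng = np.random.default_rng(8)
        vocab = 10_000
        p = 1.0 / np.arange(1, vocab + 1)   # Zipfian weights, exponent 1
        p /= p.sum()
        stream = rng.choice(vocab, size=100_000, p=p)

        # Zipf: log-log slope of frequency vs rank (expect roughly -1 here).
        freq = np.sort(np.bincount(stream))[::-1]
        freq = freq[freq > 0]
        ranks = np.arange(1, len(freq) + 1)
        zipf_slope = np.polyfit(np.log(ranks), np.log(freq), 1)[0]

        # Heaps: distinct tokens after t tokens, V(t) ~ t^lambda.
        seen, growth = set(), []
        for tok in stream:
            seen.add(tok)
            growth.append(len(seen))
        heaps_slope = np.polyfit(np.log(np.arange(1, len(stream) + 1)),
                                 np.log(growth), 1)[0]
        print(f"Zipf slope ~ {zipf_slope:.2f}, Heaps exponent ~ {heaps_slope:.2f}")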

  18. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  19. Stochastic processes

    CERN Document Server

    Borodin, Andrei N

    2017-01-01

    This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.

  20. A stochastic SIS epidemic model with vaccination

    Science.gov (United States)

    Cao, Boqiang; Shan, Meijing; Zhang, Qimin; Wang, Weiming

    2017-11-01

    In this paper, we investigate the basic features of an SIS-type infectious disease model with varying population size and vaccination in the presence of environmental noise. By applying the Markov semigroup theory, we propose a stochastic reproduction number R0s which can be seen as a threshold parameter for identifying stochastic extinction and persistence: if R0s < 1, the stochastic epidemic model has a disease-free absorbing set, which implies that the disease dies out with probability one; while if R0s > 1, under some mild extra conditions, the SDE model has an endemic stationary distribution which results in the stochastic persistence of the infectious disease. The most interesting finding is that large environmental noise can suppress the outbreak of the disease.
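
    An SDE of the broad type studied above can be simulated with the Euler-Maruyama scheme. The sketch below uses a generic SIS drift with a vaccination term and environmental noise on transmission; the functional form and coefficients are illustrative assumptions, not the paper's exact system or its R0s threshold.

        # Euler-Maruyama sketch of an SIS-type SDE with vaccination (nu) and
        # environmental noise on transmission. Generic form, hypothetical
        # coefficients; i is the infected fraction of the population.
        import numpy as np

        rng = np.random.default_rng(9)
        beta, gamma, nu, sigma = 0.5, 0.2, 0.05, 0.1
        dt, n_steps = 0.01, 50_000
        i = 0.01

        for step in range(n_steps):
            dW = rng.standard_normal() * np.sqrt(dt)
            drift = beta * i * (1 - i) - (gamma + nu) * i
            diffusion = sigma * i * (1 - i)
            i = np.clip(i + drift * dt + diffusion * dW, 0.0, 1.0)

        print(f"infected fraction after t = {n_steps * dt:.0f}: {i:.4f}")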

  1. Community Detection for Large Graphs

    KAUST Repository

    Peng, Chengbin

    2014-05-04

    Many real-world networks have inherent community structures, including social networks, transportation networks, biological networks, etc. For large-scale networks with millions or billions of nodes in real-world applications, accelerating current community detection algorithms is in demand. We present two approaches to tackle this issue: a K-core based framework that can accelerate existing community detection algorithms significantly, and a parallel inference algorithm via stochastic block models that can distribute the workload.
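
    The K-core idea mentioned above prunes low-degree nodes so that the (usually much smaller) dense core is what the community detection algorithm actually processes. A minimal peeling sketch in pure Python follows, on a hypothetical toy graph.

        # Minimal k-core peeling sketch: iteratively remove nodes whose degree
        # within the remaining graph is below k, leaving the dense core.
        # The toy edge list is hypothetical.
        from collections import defaultdict

        edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5), (4, 6)]
        adj = defaultdict(set)
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)

        def k_core(adj, k):
            nodes = set(adj)
            changed = True
            while changed:
                low = {u for u in nodes if len(adj[u] & nodes) < k}
                nodes -= low
                changed = bool(low)
            return nodes

        print(f"2-core: {sorted(k_core(adj, 2))}")   # node 6 is peeled off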

  2. Effects of stochastic field lines on the pressure driven MHD instabilities in the Large Helical Device

    Science.gov (United States)

    Ohdachi, Satoshi; Watanabe, Kiyomasa; Sakakibara, Satoru; Suzuki, Yasuhiro; Tsuchiya, Hayato; Ming, Tingfeng; Du, Xiaodi; LHD Expriment Group Team

    2014-10-01

    In the Large Helical Device (LHD), the plasma is surrounded by a so-called stochastic magnetic region, where the Kolmogorov length of the magnetic field lines is very short, from several tens of meters to thousands of meters. Finite pressure gradients form in this region, and MHD instabilities localized there are observed, since the edge region of the LHD is always unstable against the pressure-driven mode. Therefore, the saturation level of the instabilities is the key issue in evaluating the risk posed by this kind of MHD instability. The saturation level depends on the pressure gradient and on the magnetic Reynolds number; these results are similar to those for MHD modes in the closed-magnetic-surface region. The saturation level in the stochastic region is also affected by the stochasticity itself. The parameter dependence of the saturation level of the MHD activities in this region is discussed in detail. This work is supported by NIFS budget codes ULPP021 and 028 and is also partially supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Scientific Research 26249144, and by the JSPS-NRF-NSFC A3 Foresight Program NSFC: No. 11261140328.

  3. Neuro-Inspired Computing with Stochastic Electronics

    KAUST Repository

    Naous, Rawan

    2016-01-06

    The extensive scaling and integration within electronic systems have set the standards for what is referred to as stochastic electronics. The individual components increasingly deviate from their reliable behavior and produce non-deterministic outputs. This stochastic operation closely mimics the biological medium of the brain. Hence, building on this inherent variability, particularly within novel non-volatile memory technologies, paves the way for unconventional neuromorphic designs. Neuro-inspired networks with brain-like structures of neurons and synapses allow for computation and levels of learning for diverse recognition tasks and applications.

  4. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics.

    Science.gov (United States)

    Helaers, Raphaël; Milinkovitch, Michel C

    2010-07-15

    The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s) but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA), together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for parameters setting, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these algorithms. MetaPIGA v2.0 gives access both to high

  5. Online Censoring for Large-Scale Regressions with Application to Streaming Big Data.

    Science.gov (United States)

    Berberidis, Dimitris; Kekatos, Vassilis; Giannakis, Georgios B

    2016-08-01

    On par with data-intensive applications, the sheer size of modern linear regression problems creates an ever-growing demand for efficient solvers. Fortunately, a significant percentage of the data accrued can be omitted while maintaining a certain quality of statistical inference with an affordable computational budget. This work introduces means of identifying and omitting less informative observations in an online and data-adaptive fashion. Given streaming data, the related maximum-likelihood estimator is sequentially found using first- and second-order stochastic approximation algorithms. These schemes are well suited when data are inherently censored or when the aim is to save communication overhead in decentralized learning setups. In a different operational scenario, the task of joint censoring and estimation is put forth to solve large-scale linear regressions in a centralized setup. Novel online algorithms are developed enjoying simple closed-form updates and provable (non)asymptotic convergence guarantees. To attain desired censoring patterns and levels of dimensionality reduction, thresholding rules are investigated too. Numerical tests on real and synthetic datasets corroborate the efficacy of the proposed data-adaptive methods compared to data-agnostic random projection-based alternatives.
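
    A minimal sketch of the censoring idea (not the authors' exact algorithm; the threshold tau, step size mu and noise scale sigma below are illustrative assumptions): a first-order update is applied only when the normalized prediction residual is large enough to be informative, so most observations are discarded at little cost to the final estimate.

        import numpy as np

        def censored_lms(stream, dim, tau=1.5, mu=0.01, sigma=1.0):
            """First-order stochastic approximation with data-adaptive
            censoring: observations with small residuals are skipped."""
            theta = np.zeros(dim)
            kept = 0
            for x, y in stream:
                r = y - x @ theta             # prediction residual
                if abs(r) <= tau * sigma:     # uninformative: censor it
                    continue
                theta += mu * r * x           # update on the kept datum
                kept += 1
            return theta, kept

        # synthetic streaming regression: only a fraction of data is used
        rng = np.random.default_rng(0)
        true_w = np.array([1.0, -2.0, 0.5])
        stream = ((x, x @ true_w + rng.normal(0.0, 1.0))
                  for x in rng.normal(size=(50_000, 3)))
        theta, kept = censored_lms(stream, dim=3)
        print(theta, "updates used:", kept, "of 50000")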

  6. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    International Nuclear Information System (INIS)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-01-01

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries
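
    The coupling can be illustrated on a toy one-dimensional, diffusion-only problem (a minimal sketch assuming a single species, unit lattice spacing and a fixed interface; the published scheme also handles reactions, multiple species and moving interfaces): discrete particles occupy the left half of the lattice, a mean-field density occupies the right half, and mass crosses the single interface site in both directions.

        import numpy as np

        rng = np.random.default_rng(1)

        L, I, d = 40, 20, 0.2        # lattice size, interface site, jump prob.
        n = np.zeros(L, dtype=int)   # particle counts (stochastic region, sites 0..I-1)
        u = np.zeros(L)              # mean-field density (deterministic region, sites I..L-1)
        n[:I] = 25                   # all mass starts on the stochastic side

        for _ in range(2000):
            # stochastic region: each particle jumps right or left with prob. d
            right = rng.binomial(n[:I], d)
            left = rng.binomial(n[:I] - right, d / (1.0 - d))
            n[:I] -= right + left
            n[1:I] += right[:I - 1]
            n[:I - 1] += left[1:]
            n[0] += left[0]                  # reflecting outer wall
            # interface: stochastic-to-deterministic flux becomes density;
            # the deterministic flux back is realised as Poisson arrivals
            u[I] += right[I - 1]
            flux = d * u[I]
            u[I] -= flux
            n[I - 1] += rng.poisson(flux)    # toy choice: conserves mass in expectation only
            # deterministic region: explicit finite-difference diffusion step
            u_new = u.copy()
            u_new[I] = u[I] + d * (u[I + 1] - u[I])
            u_new[I + 1:L - 1] = u[I + 1:L - 1] + d * (u[I + 2:] - 2 * u[I + 1:L - 1] + u[I:L - 2])
            u_new[L - 1] = u[L - 1] + d * (u[L - 2] - u[L - 1])   # reflecting wall
            u = u_new

        print("stochastic mass:", n[:I].sum(), " deterministic mass:", round(u[I:].sum(), 1))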

  7. Hybrid approaches for multiple-species stochastic reaction–diffusion models

    Energy Technology Data Exchange (ETDEWEB)

    Spill, Fabian, E-mail: fspill@bu.edu [Department of Biomedical Engineering, Boston University, 44 Cummington Street, Boston, MA 02215 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 (United States); Guerrero, Pilar [Department of Mathematics, University College London, Gower Street, London WC1E 6BT (United Kingdom); Alarcon, Tomas [Centre de Recerca Matematica, Campus de Bellaterra, Edifici C, 08193 Bellaterra (Barcelona) (Spain); Departament de Matemàtiques, Universitat Atonòma de Barcelona, 08193 Bellaterra (Barcelona) (Spain); Maini, Philip K. [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Byrne, Helen [Wolfson Centre for Mathematical Biology, Mathematical Institute, University of Oxford, Oxford OX2 6GG (United Kingdom); Computational Biology Group, Department of Computer Science, University of Oxford, Oxford OX1 3QD (United Kingdom)

    2015-10-15

    Reaction–diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction–diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface in between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model. - Highlights: • A novel hybrid stochastic/deterministic reaction–diffusion simulation method is given. • Can massively speed up stochastic simulations while preserving stochastic effects. • Can handle multiple reacting species. • Can handle moving boundaries.

  8. On Nash Equilibria in Stochastic Games

    Science.gov (United States)

    2003-10-01

    Traditionally, automata theory and verification have considered zero-sum or strictly competitive versions of stochastic games. In these games there are two players... zero-sum discrete-time stochastic dynamic games. SIAM J. Control and Optimization, 19(5):617-634, 1981. 18. R.J. Lipton, E. Markakis, and A. Mehta... Playing large games using simple strategies. In EC 03: Electronic Commerce, pages 36-41. ACM Press, 2003. 19. A. Maitra and W. Sudderth. Finitely

  9. Modeling Stochastic Route Choice Behaviors with Equivalent Impedance

    Directory of Open Access Journals (Sweden)

    Jun Li

    2015-01-01

    A Logit-based route choice model is proposed to address the overlapping and scaling problems in the traditional multinomial Logit model. The non-overlapping links are defined as a subnetwork, and its equivalent impedance is explicitly calculated in order to simplify network analysis. The overlapping links are repeatedly merged into subnetworks with Logit-based equivalent travel costs. The choice set at each intersection then comprises only virtual equivalent routes without overlapping. In order to capture heterogeneity in the perception errors across networks of different sizes, different scale parameters are assigned to the subnetworks and are linked to the topological relationships to avoid estimation burden. The proposed model provides an alternative way to model stochastic route choice behaviors without the overlapping and scaling problems, and it still maintains the simple closed-form expression of the MNL model. A link-based loading algorithm based on Dial's algorithm is proposed to obviate route enumeration, and it is suitable for application to large-scale networks. Finally, a comparison between the proposed model and other route choice models is given through numerical examples.
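
    The core of the merging step is the Logit composite cost: a set of parallel, non-overlapping links collapses into one virtual link whose equivalent impedance preserves the multinomial-Logit choice probabilities. A minimal sketch (the paper's recursive subnetwork merging and scale-parameter assignment are more elaborate):

        import numpy as np

        def equivalent_impedance(costs, theta=1.0):
            """Logit-equivalent cost of parallel links with travel costs
            `costs` and perception scale parameter `theta`:
                c_eq = -(1/theta) * ln( sum_i exp(-theta * c_i) )
            computed as a shifted log-sum-exp for numerical stability."""
            costs = np.asarray(costs, dtype=float)
            m = costs.min()
            return m - np.log(np.exp(-theta * (costs - m)).sum()) / theta

        print(equivalent_impedance([10.0, 12.0]))   # about 9.87

    Note that the composite cost falls below the cheapest link (here 10): offering a second parallel alternative makes the merged virtual route more attractive, which is exactly the Logit log-sum effect.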

  10. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  11. Large-scale fracture mechanics testing -- requirements and possibilities

    International Nuclear Information System (INIS)

    Brumovsky, M.

    1993-01-01

    Application of fracture mechanics to very important and/or complicated structures, like reactor pressure vessels, also raises questions about the reliability and precision of such calculations. These problems become more pronounced in cases of elastic-plastic loading conditions and/or in parts with non-homogeneous materials (base metal and austenitic cladding, property gradient changes through material thickness) or with non-homogeneous stress fields (nozzles, bolt threads, residual stresses etc.). For such special cases some verification by large-scale testing is necessary and valuable. This paper discusses problems connected with the planning of such experiments with respect to their limitations and the requirements for a good transfer of the results to an actual vessel. At the same time, the possibilities of small-scale model experiments are analysed, mostly in connection with the transfer of results between standard, small-scale and large-scale experiments. Experience from 30 years of large-scale testing at SKODA is used as an example to support this analysis. 1 fig

  12. Ethics of large-scale change

    DEFF Research Database (Denmark)

    Arler, Finn

    2006-01-01

    ...which kind of attitude is appropriate when dealing with large-scale changes like these from an ethical point of view. Three kinds of approaches are discussed: Aldo Leopold's mountain thinking, the neoclassical economists' approach, and finally the so-called Concentric Circle Theories approach...

  13. The Black-Scholes option pricing problem in mathematical finance: generalization and extensions for a large class of stochastic processes

    Science.gov (United States)

    Bouchaud, Jean-Philippe; Sornette, Didier

    1994-06-01

    The ability to price risks and devise optimal investment strategies in the presence of an uncertain "random" market is the cornerstone of modern finance theory. We first consider the simplest such problem, the so-called "European call option", initially solved by Black and Scholes using Ito stochastic calculus for markets modelled by a log-Brownian stochastic process. A simple and powerful formalism is presented which allows us to generalize the analysis to a large class of stochastic processes, such as ARCH, jump or Lévy processes. We also address the case of correlated Gaussian processes, which is shown to be a good description of three different market indices (MATIF, CAC40, FTSE100). Our main result is the introduction of the concept of an optimal strategy in the sense of (functional) minimization of the risk with respect to the portfolio. While the risk may be made to vanish for particular continuous uncorrelated 'quasi-Gaussian' stochastic processes (including the Black and Scholes model), this is no longer the case for more general stochastic processes. The value of the residual risk is obtained and suggests the concept of risk-corrected option prices. In the presence of very large deviations such as in Lévy processes, new criteria for rational fixing of the option prices are discussed. We also apply our method to other types of options, 'Asian' and 'American', and discuss new possibilities ('double-decker'...). The inclusion of transaction costs leads to the appearance of a natural characteristic trading time scale.

  14. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sank into the voids between the stones on the crest. For low overtopping scale effects......

  15. Development of stochastic indicator models of lithology, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Rautman, C.A.; Robey, T.H.

    1994-01-01

    Indicator geostatistical techniques have been used to produce a number of fully three-dimensional stochastic simulations of large-scale lithologic categories at the Yucca Mountain site. Each realization reproduces the available drill hole data used to condition the simulation. Information is propagated away from each point of observation in accordance with a mathematical model of spatial continuity inferred through soft data taken from published geologic cross sections. Variations among the simulated models collectively represent uncertainty in the lithology at unsampled locations. These stochastic models succeed in capturing many major features of the welded-nonwelded lithologic framework of Yucca Mountain. However, contacts between welded and nonwelded rock types in individual simulations appear more complex than suggested by field observation, and a number of probable numerical artifacts exist in these models. Many of the apparent discrepancies between the simulated models and the general geology of Yucca Mountain represent characterization uncertainty, and can be traced to the sparse site data used to condition the simulations. Several vertical stratigraphic columns have been extracted from the three-dimensional stochastic models for use in simplified total-system performance assessment exercises. Simple, manual adjustments are required to eliminate the more obvious simulation artifacts and to impose a secondary set of deterministic geologic features on the overall stratigraphic framework provided by the indicator models

  16. Production and efficiency of large wildland fire suppression effort: A stochastic frontier analysis.

    Science.gov (United States)

    Katuwal, Hari; Calkin, David E; Hand, Michael S

    2016-01-15

    This study examines the production and efficiency of wildland fire suppression effort. We estimate the effectiveness of suppression resource inputs to produce controlled fire lines that contain large wildland fires using stochastic frontier analysis. Determinants of inefficiency are identified and the effects of these determinants on the daily production of controlled fire line are examined. Results indicate that the use of bulldozers and fire engines increase the production of controlled fire line, while firefighter crews do not tend to contribute to controlled fire line production. Production of controlled fire line is more efficient if it occurs along natural or built breaks, such as rivers and roads, and within areas previously burned by wildfires. However, results also indicate that productivity and efficiency of the controlled fire line are sensitive to weather, landscape and fire characteristics. Copyright © 2015 Elsevier Ltd. All rights reserved.
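
    For readers unfamiliar with stochastic frontier analysis, the standard normal/half-normal specification can be estimated by maximum likelihood in a few lines. This is a generic textbook sketch on synthetic data (the Aigner-Lovell-Schmidt form), not the authors' model of fire-line production:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def neg_loglik(params, y, X):
            """Negative log-likelihood of y = X b + v - u with
            v ~ N(0, s_v^2) (noise) and u ~ |N(0, s_u^2)| (inefficiency),
            parameterised by (b, log s_v, log s_u)."""
            k = X.shape[1]
            b, s_v, s_u = params[:k], np.exp(params[k]), np.exp(params[k + 1])
            sigma, lam = np.hypot(s_v, s_u), s_u / s_v
            eps = y - X @ b                        # composed error
            ll = (np.log(2.0) - np.log(sigma)
                  + norm.logpdf(eps / sigma)
                  + norm.logcdf(-eps * lam / sigma))
            return -ll.sum()

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(500), rng.normal(size=500)])
        y = X @ [1.0, 0.5] + rng.normal(0, 0.2, 500) - np.abs(rng.normal(0, 0.4, 500))
        fit = minimize(neg_loglik, np.zeros(4), args=(y, X), method="BFGS")
        print(fit.x[:2])   # frontier coefficients; exp(fit.x[2:]) gives s_v, s_u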

  17. STOCHASTIC METHODS IN RISK ANALYSIS

    Directory of Open Access Journals (Sweden)

    Vladimíra OSADSKÁ

    2017-06-01

    In this paper, we review basic stochastic methods which can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods depend highly on the practical experience and knowledge of the evaluator and that stochastic methods should therefore be introduced. The new risk analysis methods should consider the uncertainties in input values. We demonstrate how large the impact on the results of an analysis can be by solving a practical FMECA example with uncertainties modelled using Monte Carlo sampling.
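
    A minimal sketch of the approach (the triangular expert ranges below are hypothetical): instead of single-point severity, occurrence and detection scores, Monte Carlo sampling propagates score uncertainty into a distribution of the risk priority number (RPN).

        import numpy as np

        rng = np.random.default_rng(42)
        N = 100_000

        # hypothetical expert ranges (min, mode, max) on the usual 1-10 scales
        severity = rng.triangular(6, 7, 9, N)
        occurrence = rng.triangular(2, 4, 6, N)
        detection = rng.triangular(3, 5, 8, N)

        rpn = severity * occurrence * detection   # RPN samples, not one number
        print(f"RPN mean {rpn.mean():.0f}, "
              f"90% interval [{np.percentile(rpn, 5):.0f}, {np.percentile(rpn, 95):.0f}]")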

  18. Stochastic inflation and nonlinear gravity

    International Nuclear Information System (INIS)

    Salopek, D.S.; Bond, J.R.

    1991-01-01

    We show how nonlinear effects of the metric and scalar fields may be included in stochastic inflation. Our formalism can be applied to non-Gaussian fluctuation models for galaxy formation. Fluctuations with wavelengths larger than the horizon length are governed by a network of Langevin equations for the physical fields. Stochastic noise terms arise from quantum fluctuations that are assumed to become classical at horizon crossing and that then contribute to the background. Using Hamilton-Jacobi methods, we solve the Arnowitt-Deser-Misner constraint equations which allows us to separate the growing modes from the decaying ones in the drift phase following each stochastic impulse. We argue that the most reasonable choice of time hypersurfaces for the Langevin system during inflation is T=ln(Ha), where H and a are the local values of the Hubble parameter and the scale factor, since T is the natural time for evolving the short-wavelength scalar field fluctuations in an inhomogeneous background

  19. Large-scale DCMs for resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Adeel Razi

    2017-01-01

    This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that the use of Bayesian model reduction to discover the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM, with functional connectivity priors, is ideally suited for directed graph theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.

  20. Needs, opportunities, and options for large scale systems research

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members, and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  1. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. The cell structure of the galaxy distribution in the Universe and the principles of mathematical modelling of galaxy distributions are described. Images of cell structures, obtained after processing with a computer, are given. Three hypotheses - vortical, entropic, and adiabatic - suggesting various processes for the origin of galaxies and galaxy clusters are discussed, and a considerable advantage of the adiabatic hypothesis is recognized. The relict radiation is considered as a method of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the study of the hot gas contained in galaxy clusters, and of the interactions within galaxy clusters and with the inter-galaxy medium, is recognized to be a notable contribution to the development of theoretical and observational cosmology

  2. Stochastic motion of particles in tandem mirror devices

    International Nuclear Information System (INIS)

    Ichikawa, Y.H.; Kamimura, T.

    1982-01-01

    Stochastic motion of particles in tandem mirror devices is examined on the basis of a nonlinear mapping of particle positions on the equatorial plane. Local stability analysis provides detailed information on particle trajectories. The rate of stochastic plasma diffusion is estimated from numerical observations of the motion of particles over a large number of time steps. (author)
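
    The tandem-mirror mapping itself is device-specific, but the procedure generalizes; the sketch below uses the Chirikov standard map as a stand-in nonlinear mapping and estimates the stochastic diffusion rate from the momentum spread of an ensemble of particles followed over many time steps (the values of K, the step count and the ensemble size are illustrative):

        import numpy as np

        def final_momentum(theta, p, K, steps):
            """Iterate the Chirikov standard map:
               p'     = p + K sin(theta)
               theta' = theta + p'  (mod 2 pi)"""
            for _ in range(steps):
                p = p + K * np.sin(theta)
                theta = (theta + p) % (2.0 * np.pi)
            return p

        K, steps, n_orbits = 2.5, 500, 400     # K > ~1: large-scale stochasticity
        rng = np.random.default_rng(0)
        p_final = np.array([final_momentum(th, 0.0, K, steps)
                            for th in rng.uniform(0.0, 2.0 * np.pi, n_orbits)])
        D = p_final.var() / (2.0 * steps)      # from <(dp)^2> = 2 D t
        print(f"estimated D = {D:.2f}  (quasilinear K^2/4 = {K * K / 4:.2f})")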

  3. Seismic safety in conducting large-scale blasts

    Science.gov (United States)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling-and-blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-digital converter with recording on a laptop. The results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out against the permissible value of vibration velocity. For cases in which permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.

  4. Uncertainty Reduction for Stochastic Processes on Complex Networks

    Science.gov (United States)

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.

  5. Image-based Exploration of Large-Scale Pathline Fields

    KAUST Repository

    Nagoor, Omniah H.

    2014-05-27

    While real-time applications are nowadays routinely used in visualizing large numerical simulations and volumes, handling these large-scale datasets requires high-end graphics clusters or supercomputers to process and visualize them. However, not all users have access to powerful clusters. Therefore, it is challenging to come up with a visualization approach that provides insight to large-scale datasets on a single computer. Explorable images (EI) is one of the methods that allows users to handle large data on a single workstation. Although it is a view-dependent method, it combines both exploration and modification of visual aspects without re-accessing the original huge data. In this thesis, we propose a novel image-based method that applies the concept of EI in visualizing large flow-field pathlines data. The goal of our work is to provide an optimized image-based method, which scales well with the dataset size. Our approach is based on constructing a per-pixel linked list data structure in which each pixel contains a list of pathlines segments. With this view-dependent method it is possible to filter, color-code and explore large-scale flow data in real-time. In addition, optimization techniques such as early-ray termination and deferred shading are applied, which further improves the performance and scalability of our approach.
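
    The central data structure is straightforward to sketch on the CPU (the thesis builds it on the GPU during rasterization; the names below are illustrative): a head-pointer image plus a growing node pool give each pixel its own linked list of covering pathline segments, which can then be filtered or color-coded without re-reading the original data.

        import numpy as np

        W, H = 512, 512
        head = -np.ones((H, W), dtype=int)   # -1 marks an empty per-pixel list
        next_node, payload = [], []          # node pool: link index + fragment data

        def insert(px, py, segment_id, depth):
            """Prepend one pathline fragment to pixel (px, py)'s list."""
            node = len(payload)
            payload.append((segment_id, depth))
            next_node.append(head[py, px])   # link new node to the old head
            head[py, px] = node

        def fragments(px, py):
            """Walk a pixel's list, e.g. to filter or color-code segments."""
            node = head[py, px]
            while node != -1:
                yield payload[node]
                node = next_node[node]

        insert(10, 20, segment_id=7, depth=0.3)
        insert(10, 20, segment_id=9, depth=0.1)
        print(list(fragments(10, 20)))       # [(9, 0.1), (7, 0.3)]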

  6. Stochastic backscatter modelling for the prediction of pollutant removal from an urban street canyon: A large-eddy simulation

    Science.gov (United States)

    O'Neill, J. J.; Cai, X.-M.; Kinnersley, R.

    2016-10-01

    The large-eddy simulation (LES) approach has recently exhibited its appealing capability of capturing turbulent processes inside street canyons and the urban boundary layer aloft, and its potential for deriving the bulk parameters adopted in low-cost operational urban dispersion models. However, the thin roof-level shear layer may be under-resolved in most LES set-ups and thus sophisticated subgrid-scale (SGS) parameterisations may be required. In this paper, we consider the important case of pollutant removal from an urban street canyon of unit aspect ratio (i.e. building height equal to street width) with the external flow perpendicular to the street. We show that by employing a stochastic SGS model that explicitly accounts for backscatter (energy transfer from unresolved to resolved scales), the pollutant removal process is better simulated compared with the use of a simpler (fully dissipative) but widely-used SGS model. The backscatter induces additional mixing within the shear layer which acts to increase the rate of pollutant removal from the street canyon, giving better agreement with a recent wind-tunnel experiment. The exchange velocity, an important parameter in many operational models that determines the mass transfer between the urban canopy and the external flow, is predicted to be around 15% larger with the backscatter SGS model; consequently, the steady-state mean pollutant concentration within the street canyon is around 15% lower. A database of exchange velocities for various other urban configurations could be generated and used as improved input for operational street canyon models.

  7. A probabilistic assessment of large scale wind power development for long-term energy resource planning

    Science.gov (United States)

    Kennedy, Scott Warren

    A steady decline in the cost of wind turbines and increased experience in their successful operation have brought this technology to the forefront of viable alternatives for large-scale power generation. Methodologies for understanding the costs and benefits of large-scale wind power development, however, are currently limited. In this thesis, a new and widely applicable technique for estimating the social benefit of large-scale wind power production is presented. The social benefit is based upon wind power's energy and capacity services and the avoidance of environmental damages. The approach uses probabilistic modeling techniques to account for the stochastic interaction between wind power availability, electricity demand, and conventional generator dispatch. A method for including the spatial smoothing effect of geographically dispersed wind farms is also introduced. The model has been used to analyze potential offshore wind power development to the south of Long Island, NY. If natural gas combined cycle (NGCC) and integrated gasifier combined cycle (IGCC) are the alternative generation sources, wind power exhibits a negative social benefit due to its high capacity cost and the relatively low emissions of these advanced fossil-fuel technologies. Environmental benefits increase significantly if charges for CO2 emissions are included. Results also reveal a diminishing social benefit as wind power penetration increases. The dependence of wind power benefits on natural gas and coal prices is also discussed. In power systems with a high penetration of wind generated electricity, the intermittent availability of wind power may influence hourly spot prices. A price responsive electricity demand model is introduced that shows a small increase in wind power value when consumers react to hourly spot prices. The effectiveness of this mechanism depends heavily on estimates of the own- and cross-price elasticities of aggregate electricity demand. This work makes a valuable

  8. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.

  9. The role of large-scale, extratropical dynamics in climate change

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, T.G. [ed.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database.

  10. The role of large-scale, extratropical dynamics in climate change

    International Nuclear Information System (INIS)

    Shepherd, T.G.

    1994-02-01

    The climate modeling community has focused recently on improving our understanding of certain processes, such as cloud feedbacks and ocean circulation, that are deemed critical to climate-change prediction. Although attention to such processes is warranted, emphasis on these areas has diminished a general appreciation of the role played by the large-scale dynamics of the extratropical atmosphere. Lack of interest in extratropical dynamics may reflect the assumption that these dynamical processes are a non-problem as far as climate modeling is concerned, since general circulation models (GCMs) calculate motions on this scale from first principles. Nevertheless, serious shortcomings in our ability to understand and simulate large-scale dynamics exist. Partly due to a paucity of standard GCM diagnostic calculations of large-scale motions and their transports of heat, momentum, potential vorticity, and moisture, a comprehensive understanding of the role of large-scale dynamics in GCM climate simulations has not been developed. Uncertainties remain in our understanding and simulation of large-scale extratropical dynamics and their interaction with other climatic processes, such as cloud feedbacks, large-scale ocean circulation, moist convection, air-sea interaction and land-surface processes. To address some of these issues, the 17th Stanstead Seminar was convened at Bishop's University in Lennoxville, Quebec. The purpose of the Seminar was to promote discussion of the role of large-scale extratropical dynamics in global climate change. Abstracts of the talks are included in this volume. On the basis of these talks, several key issues emerged concerning large-scale extratropical dynamics and their climatic role. Individual records are indexed separately for the database

  11. Status: Large-scale subatmospheric cryogenic systems

    International Nuclear Information System (INIS)

    Peterson, T.

    1989-01-01

    In the late 1960's and early 1970's, an interest in testing and operating RF cavities at 1.8K motivated the development and construction of four large (300 Watt) 1.8K refrigeration systems. In the past decade, development of successful superconducting RF cavities and interest in obtaining higher magnetic fields with the improved Niobium-Titanium superconductors have once again created interest in large-scale 1.8K refrigeration systems. The L'Air Liquide plant for Tore Supra is a recently commissioned 300 Watt 1.8K system which incorporates new technology, cold compressors, to obtain the low vapor pressure for low temperature cooling. CEBAF proposes to use cold compressors to obtain 5KW at 2.0K. Magnetic refrigerators of 10 Watt capacity or higher at 1.8K are now being developed. The state of the art of large-scale refrigeration in the range under 4K will be reviewed. 28 refs., 4 figs., 7 tabs

  12. Front Propagation in Stochastic Neural Fields

    KAUST Repository

    Bressloff, Paul C.

    2012-01-01

    We analyze the effects of extrinsic multiplicative noise on front propagation in a scalar neural field with excitatory connections. Using a separation of time scales, we represent the fluctuating front in terms of a diffusive-like displacement (wandering) of the front from its uniformly translating position at long time scales, and fluctuations in the front profile around its instantaneous position at short time scales. One major result of our analysis is a comparison between freely propagating fronts and fronts locked to an externally moving stimulus. We show that the latter are much more robust to noise, since the stochastic wandering of the mean front profile is described by an Ornstein-Uhlenbeck process rather than a Wiener process, so that the variance in front position saturates in the long time limit rather than increasing linearly with time. Finally, we consider a stochastic neural field that supports a pulled front in the deterministic limit, and show that the wandering of such a front is now subdiffusive. © 2012 Society for Industrial and Applied Mathematics.
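
    The contrast between the two wandering regimes is easy to reproduce numerically: the same noise increments drive a Wiener process (the freely propagating front) and an Ornstein-Uhlenbeck process (the stimulus-locked front), and only the latter's variance saturates. A minimal sketch with illustrative parameters:

        import numpy as np

        rng = np.random.default_rng(3)
        dt, T, n_paths = 0.01, 50.0, 2000
        kappa, sigma = 1.0, 0.5          # illustrative locking rate, noise strength

        wiener = np.zeros(n_paths)       # free front position
        ou = np.zeros(n_paths)           # stimulus-locked front position
        for _ in range(int(T / dt)):
            dW = rng.normal(0.0, np.sqrt(dt), n_paths)
            wiener += sigma * dW
            ou += -kappa * ou * dt + sigma * dW

        print(f"Wiener var: {wiener.var():.2f} (grows like sigma^2 T = {sigma**2 * T:.2f})")
        print(f"OU var: {ou.var():.3f} (saturates at sigma^2/(2 kappa) = {sigma**2 / (2 * kappa):.3f})")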

  13. Stochastic Analysis 2010

    CERN Document Server

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  14. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.

  15. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  16. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    Science.gov (United States)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicate that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical Pm
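
    Stochastic velocity models of this kind are commonly built by spectral filtering: white noise is shaped by a von Karman-type power spectrum with a prescribed correlation length and rms fluctuation level, and the result is superimposed on the deterministic model. A minimal 2-D sketch (the parameter values are illustrative, not those of the RISC or PASSCAL models):

        import numpy as np

        def von_karman_field(n, dx, a, eps, nu=0.0, seed=0):
            """Filter white noise with a von Karman-type spectrum of
            correlation length `a` (m), rms fractional fluctuation `eps`
            and Hurst exponent `nu`, on an n-by-n grid of spacing dx."""
            rng = np.random.default_rng(seed)
            k = 2.0 * np.pi * np.fft.fftfreq(n, dx)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            power = (1.0 + (kx**2 + ky**2) * a**2) ** (-(nu + 1.0))
            spec = np.fft.fft2(rng.normal(size=(n, n))) * np.sqrt(power)
            field = np.real(np.fft.ifft2(spec))
            return field * (eps / field.std())    # set the rms level

        # 5% rms heterogeneity, 600 m correlation length, 50 m grid spacing,
        # to be added to a smooth deterministic crustal velocity model
        dv = von_karman_field(n=256, dx=50.0, a=600.0, eps=0.05)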

  17. A Novel Architecture of Large-scale Communication in IOT

    Science.gov (United States)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have examined in detail the large-scale communication architecture of the IOT. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.

  18. Benefits of transactive memory systems in large-scale development

    OpenAIRE

    Aivars, Sablis

    2016-01-01

    Context. Large-scale software development projects are those consisting of a large number of teams, maybe even spread across multiple locations, and working on large and complex software tasks. That means that neither a team member individually nor an entire team holds all the knowledge about the software being developed and teams have to communicate and coordinate their knowledge. Therefore, teams and team members in large-scale software development projects must acquire and manage expertise...

  19. Study of a large scale neutron measurement channel

    International Nuclear Information System (INIS)

    Amarouayache, Anissa; Ben Hadid, Hayet.

    1982-12-01

    A large scale measurement channel allows the processing of the signal coming from a single neutron sensor in three different operating modes: pulse, fluctuation and current. The study described in this note comprises three parts: - A theoretical study of the large scale channel and a brief description of it are given, and the results obtained so far in this domain are presented. - The fluctuation mode is studied thoroughly and the improvements to be made are defined. The study of a linear fluctuation channel with automatic scale switching is described and the test results are given. In this large scale channel, the data processing method is analog. - To become independent of the problems generated by analog processing of the fluctuation signal, a digital data-processing method is tested and the validity of that method is proved. The results obtained on a test system realized according to this method are given, and a preliminary plan for further research is defined [fr]

  20. Reducing storage of global wind ensembles with stochastic generators

    KAUST Repository

    Jeong, Jaehong

    2018-03-09

    Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth’s orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.
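
    The storage argument can be illustrated with a drastically simplified stochastic generator (a toy sketch, not the paper's evolutionary-spectrum model with spatially varying parameters): per-location means and a pooled inter-annual AR(1) dependence are fitted from a small ensemble, after which new surrogate members are emitted from the handful of fitted parameters instead of storing further climate-model runs.

        import numpy as np

        rng = np.random.default_rng(0)
        n_members, n_years, n_locs = 5, 30, 1000

        # stand-in "climate model output": AR(1) anomalies around location means
        mu_true, phi_true = rng.uniform(4.0, 12.0, n_locs), 0.4
        ens = np.empty((n_members, n_years, n_locs))
        ens[:, 0] = mu_true + rng.normal(size=(n_members, n_locs))
        for t in range(1, n_years):
            ens[:, t] = (mu_true + phi_true * (ens[:, t - 1] - mu_true)
                         + rng.normal(size=(n_members, n_locs)))

        # fit the generator: per-location mean, pooled AR(1) coefficient
        mu = ens.mean(axis=(0, 1))
        a = ens - mu
        phi = (a[:, 1:] * a[:, :-1]).sum() / (a[:, :-1] ** 2).sum()
        s = a.std() * np.sqrt(1.0 - phi**2)       # innovation scale

        def surrogate_member():
            """One new ensemble member from the fitted parameters."""
            out = np.empty((n_years, n_locs))
            out[0] = rng.normal(0.0, a.std(), n_locs)
            for t in range(1, n_years):
                out[t] = phi * out[t - 1] + rng.normal(0.0, s, n_locs)
            return mu + out

        member = surrogate_member()   # stored: mu, phi, s (not the full runs)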

  1. Reducing storage of global wind ensembles with stochastic generators

    KAUST Repository

    Jeong, Jaehong; Castruccio, Stefano; Crippa, Paola; Genton, Marc G.

    2018-01-01

    Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty to store very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth’s orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.

  2. To what extent are stochastic the arithmetical progressions of the fractional parts?

    International Nuclear Information System (INIS)

    Arnold, V.

    2008-01-01

    It is proved that, for the residues of the division of the n members of an arithmetical progression by a real number N, the Kolmogorov stochasticity parameter λ n tends to 0 as n tends to infinity, provided that the progression step is commensurable with N. On the contrary, when the step is incommensurable with N, the paper describes examples where the stochasticity parameter λ n does not tend to zero, and even (infrequently) attains arbitrarily large values. Both too-small and too-large values of the stochasticity parameter indicate a small probability that the sequence for which it has been computed is random. Thus, the stochasticity degree of long arithmetical progressions is much smaller than that of geometrical progressions (which yield temperate values of the stochasticity parameter, similar to its values for genuinely random sequences). (author)
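
    The parameter itself is just the scaled Kolmogorov-Smirnov distance between the sample and its theoretical limit law. A minimal sketch of its computation for progression residues with a step incommensurable with N (for which the limit is uniform on [0, N)), compared against genuinely random draws:

        import numpy as np

        def lam(x, N):
            """Kolmogorov stochasticity parameter: sqrt(n) times the KS
            distance between the sample and the uniform law on [0, N)."""
            x = np.sort(np.asarray(x, dtype=float)) / N
            n = len(x)
            hi = np.arange(1, n + 1) / n - x     # F_n just above each point
            lo = x - np.arange(0, n) / n         # F_n just below each point
            return np.sqrt(n) * max(hi.max(), lo.max())

        rng = np.random.default_rng(0)
        N, step = 1.0, np.sqrt(2.0) - 1.0        # step incommensurable with N
        for n in (10**3, 10**4, 10**5):
            residues = (step * np.arange(1, n + 1)) % N
            print(n, round(lam(residues, N), 3), round(lam(rng.uniform(0, N, n), N), 3))

    For this step, the progression's λ stays far below the typical random value (around 0.87), illustrating the point that too small a stochasticity parameter also argues against randomness.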

  3. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-11-24

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of

  4. A scalable community detection algorithm for large graphs using stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of

  5. Stochastic model of financial markets reproducing scaling and memory in volatility return intervals

    Science.gov (United States)

    Gontis, V.; Havlin, S.; Kononovicius, A.; Podobnik, B.; Stanley, H. E.

    2016-11-01

    We investigate the volatility return intervals in the NYSE and FOREX markets. We explain previous empirical findings using a model based on the interacting-agent hypothesis instead of the widely used efficient market hypothesis. We derive macroscopic equations based on the microscopic herding interactions of agents and find that they are able to reproduce various stylized facts of different markets and different assets with the same set of model parameters. We show that the power-law properties and the scaling of return intervals and other financial variables have a similar origin and could be a result of a general class of non-linear stochastic differential equations derived from a master equation of an agent system that is coupled by herding interactions. Specifically, we find that this approach enables us to recover the volatility return interval statistics as well as volatility probability and spectral densities for the NYSE and FOREX markets, for different assets, and for different time scales. We find also that the historical S&P500 monthly series exhibits the same volatility return interval properties recovered by our proposed model. Our statistical results suggest that human herding is so strong that it persists even when other evolving fluctuations perturb the financial system.

  6. Capabilities of the Large-Scale Sediment Transport Facility

    Science.gov (United States)

    2016-04-01

    This technical note describes the Large-Scale Sediment Transport Facility (LSTF) and recent upgrades to the measurement systems, including pump flow meters, sediment trap weigh tanks, and beach profiling lidar. The purpose of these upgrades was to increase... A detailed discussion of the original LSTF features and capabilities can be... (ERDC/CHL CHETN-I-88, April 2016. Approved for public release; distribution is unlimited.)

  7. Spatiotemporal property and predictability of large-scale human mobility

    Science.gov (United States)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high degree of predictability. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
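
    The two model ingredients translate directly into code; the sketch below implements preferential return plus exploration in minimal form (the exploration probability rho * S**(-gamma) and the parameter values are illustrative assumptions, and a fixed exploration tendency stands in for the paper's Gaussian-distributed one):

        import numpy as np

        rng = np.random.default_rng(7)

        def simulate_mobility(steps, rho=0.6, gamma=0.2):
            """With probability rho * S**(-gamma) (S = distinct locations
            seen so far) the walker explores a new location; otherwise it
            returns to a known one with probability proportional to its
            past visit count (preferential return)."""
            visits = [1]                         # visit counts per location
            for _ in range(steps):
                S = len(visits)
                if rng.random() < rho * S ** (-gamma):
                    visits.append(1)             # exploration
                else:
                    w = np.asarray(visits, dtype=float)
                    visits[rng.choice(S, p=w / w.sum())] += 1
            return np.sort(np.asarray(visits))[::-1]

        freq = simulate_mobility(5000)
        print("distinct locations:", freq.size, "; top visit counts:", freq[:5])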

  8. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they concern: species limitations (in the market, biological and technological); site selection; feed; manpower needs; and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high-risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sectors.

  9. Large-scale computing with Quantum Espresso

    International Nuclear Information System (INIS)

    Giannozzi, P.; Cavazzoni, C.

    2009-01-01

    This paper gives a short introduction to Quantum Espresso, a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, and materials science, and to its usage in large-scale parallel computing.

  10. Hybrid approaches for multiple-species stochastic reaction-diffusion models

    Science.gov (United States)

    Spill, Fabian; Guerrero, Pilar; Alarcon, Tomas; Maini, Philip K.; Byrne, Helen

    2015-10-01

    Reaction-diffusion models are used to describe systems in fields as diverse as physics, chemistry, ecology and biology. The fundamental quantities in such models are individual entities such as atoms and molecules, bacteria, cells or animals, which move and/or react in a stochastic manner. If the number of entities is large, accounting for each individual is inefficient, and often partial differential equation (PDE) models are used in which the stochastic behaviour of individuals is replaced by a description of the averaged, or mean, behaviour of the system. In some situations the number of individuals is large in certain regions and small in others. In such cases, a stochastic model may be inefficient in one region, and a PDE model inaccurate in another. To overcome this problem, we develop a scheme which couples a stochastic reaction-diffusion system in one part of the domain with its mean field analogue, i.e. a discretised PDE model, in the other part of the domain. The interface between the two domains occupies exactly one lattice site and is chosen such that the mean field description is still accurate there. In this way, errors due to the flux between the domains are small. Our scheme can account for multiple dynamic interfaces separating multiple stochastic and deterministic domains, and the coupling between the domains conserves the total number of particles. The method preserves stochastic features such as extinction not observable in the mean field description, and is significantly faster to simulate on a computer than the pure stochastic model.
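
    A much-simplified version of this coupling idea, for a single diffusing species with no reactions in 1-D, is sketched below: one region carries integer particle counts updated by random jumps, the other carries continuous mean numbers updated by an explicit finite-difference scheme, and the two exchange mass at a single interface cell. The Poisson rounding of the continuum-to-discrete flux is an assumption of this sketch, not necessarily the authors' interface rule.

```python
import numpy as np

# Hybrid stochastic/PDE diffusion sketch: cells 0..m-1 are stochastic
# (integer counts), cells m..K-1 are continuous means; the interface sits
# between cells m-1 and m. All parameters are illustrative.
rng = np.random.default_rng(3)
K, m = 40, 20                    # total compartments, stochastic cells
d, dt, steps = 0.2, 0.05, 4000   # per-particle jump rate, time step

n = np.zeros(K)
n[:m] = rng.poisson(50, size=m)  # integer counts (stochastic side)
n[m:] = 50.0                     # continuous means (PDE side)

for _ in range(steps):
    new = n.copy()
    p = d * dt                               # jump probability per direction
    for i in range(m):                       # stochastic random-walk moves
        ni = int(n[i])
        jl = rng.binomial(ni, p)
        jr = rng.binomial(ni - jl, p / (1 - p))
        if i > 0:                            # reflecting wall at i == 0
            new[i] -= jl; new[i - 1] += jl
        new[i] -= jr; new[i + 1] += jr       # i == m-1 feeds the PDE cell m
    c = n[m:]                                # explicit PDE (mean) update
    new[m+1:K-1] += p * (c[:-2] + c[2:] - 2 * c[1:-1])
    new[K-1] += p * (c[-2] - c[-1])          # reflecting right wall
    new[m] += p * (c[1] - c[0])              # interface cell, right side only
    k = min(rng.poisson(p * c[0]), int(new[m]))  # continuum -> discrete flux
    new[m] -= k; new[m - 1] += k
    n = new

print("total mass:", n.sum())                # conserved up to the flux clip
```

    Note how the bookkeeping mirrors the conservation property emphasized in the abstract: discrete particles entering the continuum cell add whole units, and the continuum flux entering the stochastic region is converted to whole particles before transfer.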

  13. Stochastic Modeling Of Wind Turbine Drivetrain Components

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard

    2014-01-01

    reliable components are needed for wind turbines. In this paper the focus is on the reliability of critical drivetrain components such as bearings and shafts. High failure rates of these components imply a need for more reliable components. To estimate the reliability of these components, stochastic models...... are needed for initial defects and damage accumulation. In this paper, stochastic models are formulated considering some of the failure modes observed in these components. The models are based on theoretical considerations, manufacturing uncertainties, and size effects at different scales. It is illustrated how...

  14. Stochastic Effects in Microstructure

    Directory of Open Access Journals (Sweden)

    Glicksman M.E.

    2002-01-01

    We are currently studying microstructural responses to diffusion-limited coarsening in two-phase materials. A mathematical solution to late-stage multiparticle diffusion in finite systems is formulated with account taken of particle-particle interactions and their microstructural correlations, or "locales". The transition from finite-system behavior to that for an infinite microstructure is established analytically. Large-scale simulations of late-stage phase coarsening dynamics show increased fluctuations, with increasing volume fraction Vv, of the mean flux entering or leaving particles of a given size class. Fluctuations about the mean flux were found to depend on the scaled particle size R/⟨R⟩, where R is the radius of a particle and ⟨R⟩ is the radius of the dispersoid averaged over the population within the microstructure. Specifically, small (shrinking) particles tend to display weak fluctuations about their mean flux, whereas particles of average, or above-average, size exhibit strong fluctuations. Remarkably, even in cases of microstructures with a relatively small volume fraction (Vv ~ 10⁻⁴), the particle size distribution is broader than that for the well-known Lifshitz-Slyozov limit predicted at zero volume fraction. The simulation results reported here provide some additional surprising insights into the effects of diffusion interactions and stochastic effects during the evolution of a microstructure as it approaches its thermodynamic end-state.

  15. MetaPIGA v2.0: maximum likelihood large phylogeny estimation using the metapopulation genetic algorithm and other stochastic heuristics

    Directory of Open Access Journals (Sweden)

    Milinkovitch Michel C

    2010-07-01

    Abstract Background The development, in the last decade, of stochastic heuristics implemented in robust application software has made large phylogeny inference a key step in most comparative studies involving molecular sequences. Still, the choice of a phylogeny inference software is often dictated by a combination of parameters not related to the raw performance of the implemented algorithm(s), but rather by practical issues such as ergonomics and/or the availability of specific functionalities. Results Here, we present MetaPIGA v2.0, a robust implementation of several stochastic heuristics for large phylogeny inference (under maximum likelihood), including a Simulated Annealing algorithm, a classical Genetic Algorithm, and the Metapopulation Genetic Algorithm (metaGA), together with complex substitution models, discrete Gamma rate heterogeneity, and the possibility to partition data. MetaPIGA v2.0 also implements the Likelihood Ratio Test, the Akaike Information Criterion, and the Bayesian Information Criterion for automated selection of substitution models that best fit the data. Heuristics and substitution models are highly customizable through manual batch files and command-line processing. However, MetaPIGA v2.0 also offers an extensive graphical user interface for setting parameters, generating and running batch files, following run progress, and manipulating result trees. MetaPIGA v2.0 uses standard formats for data sets and trees, is platform independent, runs on 32- and 64-bit systems, and takes advantage of multiprocessor and multicore computers. Conclusions The metaGA resolves the major problem inherent to classical Genetic Algorithms by maintaining high inter-population variation even under strong intra-population selection. Implementation of the metaGA together with additional stochastic heuristics into a single software will allow rigorous optimization of each heuristic as well as a meaningful comparison of performances among these ...

  16. Non-stochastic effects of irradiation. Annex J

    International Nuclear Information System (INIS)

    1982-01-01

    The main purpose of this Annex is to review damage to normal tissues caused by ionizing radiation. Only non-stochastic effects are considered, that is, those effects resulting from changes taking place in large numbers of cells for which a threshold dose may occur. Therefore, in this Annex, the effects on normal tissues are reviewed, in animals and in man, in order to determine the threshold dose levels for non-stochastic effects.

  17. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Background Large-scale molecular evolutionary analyses of protein-coding sequences require a number of preparatory, inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can present significant challenges, particularly when working with entire proteomes (all protein-coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  18. Cooperative HARQ Assisted NOMA Scheme in Large-scale D2D Networks

    KAUST Repository

    Shi, Zheng

    2017-07-13

    This paper develops an interference-aware design for a cooperative hybrid automatic repeat request (HARQ) assisted non-orthogonal multiple access (NOMA) scheme for large-scale device-to-device (D2D) networks. Specifically, interference-aware rate selection and power allocation are considered to maximize the long term average throughput (LTAT) and the area spectral efficiency (ASE). The design framework is based on stochastic geometry and jointly accounts for the spatial interference correlation at the NOMA receivers as well as the temporal interference correlation across HARQ transmissions. It is found that ignoring the effect of the aggregate interference, or overlooking the spatial and temporal correlation in interference, highly overestimates the NOMA performance and produces misleading design insights. An interference-oblivious selection of the power and/or transmission rates leads to violating the network outage constraints. To this end, the results demonstrate the effectiveness of NOMA transmission and manifest the importance of cooperative HARQ to combat the negative effect of the network aggregate interference. For instance, compared to non-cooperative HARQ assisted NOMA, the proposed scheme can yield an outage probability reduction of 32%. Furthermore, an interference-aware optimal design that maximizes the LTAT given outage constraints leads to a 47% throughput improvement over the HARQ-assisted orthogonal multiple access (OMA) scheme.
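
    The stochastic-geometry ingredient of such a design can be illustrated with a small Monte Carlo experiment: drop interferers as a Poisson point process, apply Rayleigh fading and power-law path loss, and estimate the outage probability of a reference link. The HARQ and NOMA layers are omitted here, and the density, path-loss exponent, link distance, and SIR threshold are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: outage probability of one link under aggregate
# interference from a Poisson field of D2D transmitters (Rayleigh fading,
# power-law path loss). All parameter values are illustrative.
rng = np.random.default_rng(4)
lam = 1e-5             # interferer density per m^2
R_sim = 2000.0         # simulation disc radius (m)
r0, alpha = 50.0, 4.0  # link distance, path-loss exponent
theta = 1.0            # SIR threshold
n_trials = 20_000

outages = 0
area = np.pi * R_sim ** 2
for _ in range(n_trials):
    k = rng.poisson(lam * area)                        # number of interferers
    r = R_sim * np.sqrt(rng.random(k))                 # uniform in the disc
    I = np.sum(rng.exponential(size=k) * r ** -alpha)  # faded interference
    S = rng.exponential() * r0 ** -alpha               # faded signal power
    outages += (S / I) < theta if k > 0 else False     # no interferers: no outage
print("outage probability ~", outages / n_trials)
```

    Correlation across HARQ rounds, a key point of the abstract, would enter by keeping the same interferer locations (but redrawing fading) across repeated transmissions within a trial.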

  19. Generation Expansion Planning with Large Amounts of Wind Power via Decision-Dependent Stochastic Programming

    DEFF Research Database (Denmark)

    Zhan, Yiduo; Zheng, Qipeng; Wang, Jianhui

    2016-01-01

    Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined ... the probability distribution function is determined not only by input parameters but also by decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming ...

  20. RESTRUCTURING OF THE LARGE-SCALE SPRINKLERS

    Directory of Open Access Journals (Sweden)

    Paweł Kozaczyk

    2016-09-01

    One of the best ways for agriculture to become independent of precipitation shortages is irrigation. In the seventies and eighties of the last century a number of large-scale sprinkler systems were built in Wielkopolska. At the end of the 1970s, 67 sprinkler systems with a total area of 6400 ha were installed in the Poznan province. The average size of a sprinkler system reached 95 ha. In 1989 there were 98 sprinkler systems, and the area equipped with them was more than 10 130 ha. The study was conducted on 7 large sprinkler systems with areas ranging from 230 to 520 hectares in 1986-1998. After the introduction of the market economy in the early 1990s and ownership changes in agriculture, the large-scale sprinkler systems underwent significant or total devastation. Land of the State Farms was leased or sold by the State Agricultural Property Agency, and the new owners used the existing sprinklers to a very small extent. This involved a change in crop structure, a change in demand structure, and an increase in operating costs. There has also been a threefold increase in electricity prices. Operation of large-scale irrigation encountered all kinds of practical barriers: limitations of system solutions, supply difficulties, and high levels of equipment failure, none of which encouraged rational use of the available sprinklers. An on-site inspection documented the current status of the remaining irrigation infrastructure. The scheme adopted for the restructuring of Polish agriculture was not the best solution, causing massive destruction of assets previously invested in the sprinkler systems.

  1. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    ... structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords: sol-gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method. From the introduction: Zirconia has attracted the attention of many scientists because of its tremendous thermal, mechanical ...

  2. The Phoenix series large scale LNG pool fire experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills of 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  3. Fractional diffusion equation with distributed-order material derivative. Stochastic foundations

    International Nuclear Information System (INIS)

    Magdziarz, M; Teuerle, M

    2017-01-01

    In this paper, we present the stochastic foundations of fractional dynamics driven by the fractional material derivative of distributed-order type. Before stating our main result, we present the stochastic scenario which underlies the dynamics given by the fractional material derivative. Then we introduce the Lévy walk process of distributed-order type to establish our main result, which is the scaling limit of the considered process. It appears that the probability density function of the scaling limit process fulfills, in a weak sense, the fractional diffusion equation with the material derivative of distributed-order type. (paper)
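
    The Lévy walk underlying this scaling limit is easy to simulate directly: flights of Pareto-distributed duration at constant speed, with random direction. The sketch below uses a single tail index alpha for clarity (the paper's distributed-order setting mixes several such indices); alpha, the speed, and the observation time are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: 1-D Levy walk with Pareto(alpha) flight durations and
# constant speed; record walker displacements at a fixed time T.
rng = np.random.default_rng(5)
alpha, v, T, n_walkers = 1.5, 1.0, 500.0, 5000

positions = np.zeros(n_walkers)
for w in range(n_walkers):
    t, x = 0.0, 0.0
    while t < T:
        tau = (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto(alpha), min 1
        tau = min(tau, T - t)                          # truncate last flight
        x += v * tau * (1 if rng.random() < 0.5 else -1)
        t += tau
    positions[w] = x

# Superdiffusive scaling check: for 1 < alpha < 2 a Levy walk obeys
# <x^2> ~ T^(3 - alpha) rather than the diffusive ~ T.
print("mean square displacement:", np.mean(positions ** 2))
```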

  4. Stochastic description of heterogeneities of permeability within groundwater flow models

    International Nuclear Information System (INIS)

    Cacas, M.C.; Lachassagne, P.; Ledoux, E.; Marsily, G. de

    1991-01-01

    In order to model radionuclide migration in the geosphere realistically at the field scale, the hydrogeologist needs to be able to simulate groundwater flow in heterogeneous media. Heterogeneity of the medium can be described using a stochastic approach, which affects the way in which a flow model is formulated. In this paper, we discuss the problems that we have encountered in modelling both continuous and fractured media. The stochastic approach leads to a methodology that enables local measurements of permeability to be integrated into a model which gives a good prediction of groundwater flow on a regional scale. 5 Figs.; 8 Refs
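
    One standard way to realize such a stochastic description is to model log-permeability as a Gaussian random field with a prescribed covariance and exponentiate a sample of it. The sketch below draws a 1-D realization by Cholesky factorization of an exponential covariance; the mean, variance, and correlation length are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal sketch: log-normally distributed 1-D permeability field with an
# exponential covariance of log-permeability, sampled via Cholesky.
rng = np.random.default_rng(6)
n, L = 200, 100.0                                   # grid points, length (m)
mean_logk, var_logk, corr_len = -12.0, 1.0, 10.0    # ln(k), k in m^2

x = np.linspace(0.0, L, n)
C = var_logk * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
A = np.linalg.cholesky(C + 1e-10 * np.eye(n))       # jitter for stability
log_k = mean_logk + A @ rng.standard_normal(n)
k = np.exp(log_k)                                   # permeability realisation
print("k range:", k.min(), "to", k.max())
```

    Conditioning such fields on local permeability measurements (e.g., by kriging the mean) is how point data are integrated into regional flow models, as the abstract describes.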

  5. Asymptotic analysis for functional stochastic differential equations

    CERN Document Server

    Bao, Jianhai; Yuan, Chenggui

    2016-01-01

    This brief treats dynamical systems that involve delays and random disturbances. The study is motivated by a wide variety of systems in real life in which random noise has to be taken into consideration and the effect of delays cannot be ignored. Concentrating on such systems that are described by functional stochastic differential equations, this work focuses on the study of large time behavior, in particular, ergodicity. This brief is written for probabilists, applied mathematicians, engineers, and scientists who need to use delay systems and functional stochastic differential equations in their work. Selected topics from the brief can also be used in a graduate level topics course in probability and stochastic processes.
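
    A standard toy example of the class treated in this brief is a scalar linear SDE with a discrete delay, integrated by Euler-Maruyama. The equation, the constants, and the constant initial history below are our illustrative choices, not content from the brief itself.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama for the delay SDE
#   dX(t) = -a X(t - tau) dt + sigma dW(t),  with constant pre-history X = 1.
rng = np.random.default_rng(7)
a, sigma, tau = 1.0, 0.5, 1.0
dt, T = 0.01, 50.0
n, lag = int(T / dt), int(tau / dt)

X = np.empty(n + 1)
X[0] = 1.0
history = lambda j: X[j] if j >= 0 else 1.0   # constant pre-history
for t in range(n):
    X[t + 1] = X[t] - a * history(t - lag) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()

# Long-time behaviour (ergodicity): the time average should stabilise
print("time-average over second half:", X[n // 2:].mean())
```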

  6. Lectures on Topics in Spatial Stochastic Processes

    CERN Document Server

    Capasso, Vincenzo; Ivanoff, B Gail; Dozzi, Marco; Dalang, Robert C; Mountford, Thomas S

    2003-01-01

    The theory of stochastic processes indexed by a partially ordered set has been the subject of much research over the past twenty years. The objective of this CIME International Summer School was to bring to a large audience of young probabilists the general theory of spatial processes, including the theory of set-indexed martingales and to present the different branches of applications of this theory, including stochastic geometry, spatial statistics, empirical processes, spatial estimators and survival analysis. This theory has a broad variety of applications in environmental sciences, social sciences, structure of material and image analysis. In this volume, the reader will find different approaches which foster the development of tools to modelling the spatial aspects of stochastic problems.

  7. Stochastic samples versus vacuum expectation values in cosmology

    International Nuclear Information System (INIS)

    Tsamis, N.C.; Tzetzias, Aggelos; Woodard, R.P.

    2010-01-01

    Particle theorists typically use expectation values to study the quantum back-reaction on inflation, whereas many cosmologists stress the stochastic nature of the process. While expectation values certainly give misleading results for some things, such as the stress tensor, we argue that operators exist for which there is no essential problem. We quantify this by examining the stochastic properties of a noninteracting, massless, minimally coupled scalar on a locally de Sitter background. The square of the stochastic realization of this field seems to provide an example of great relevance for which expectation values are not misleading. We also examine the frequently expressed concern that significant back-reaction from expectation values necessarily implies large stochastic fluctuations between nearby spatial points. Rather than viewing the stochastic formalism in opposition to expectation values, we argue that it provides a marvelously simple way of capturing the leading infrared logarithm corrections to the latter, as advocated by Starobinsky
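
    For a massless, minimally coupled spectator field, the stochastic formalism mentioned above reduces to a random walk of the coarse-grained field: one kick of size H/2π per e-fold. The sketch below is a Monte Carlo check of the textbook linear growth of ⟨φ²⟩ in this picture (Starobinsky's formalism in its simplest form, not the authors' full calculation); units with H = 1 are our choice.

```python
import numpy as np

# Minimal sketch: stochastic realisation of a massless, minimally coupled
# scalar on de Sitter: phi -> phi + (H / 2pi) * xi per e-fold, xi ~ N(0,1).
# Known result after N e-folds: <phi^2> = H^2 N / (4 pi^2).
rng = np.random.default_rng(8)
H, N_efolds, n_samples = 1.0, 400, 100_000

kicks = (H / (2 * np.pi)) * rng.standard_normal((n_samples, N_efolds))
phi = kicks.sum(axis=1)                    # field value after N e-folds

print("sample <phi^2> :", np.mean(phi ** 2))
print("analytic value :", H ** 2 * N_efolds / (4 * np.pi ** 2))
```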

  8. Distributed EMPC of multiple microgrids for coordinated stochastic energy management

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Lin

    2017-01-01

    Highlights: • Reducing the system-wide operating cost compared to the no-cooperation energy management strategy. • Maintaining the supply and demand balance within each microgrid. • Handling the uncertainties in both supply and demand. • Converting the stochastic optimization problems to standard quadratic and linear programming problems. • Achieving a good balance between control performance and computational feasibility. - Abstract: The concept of multi-microgrids has the potential to improve the reliability and economic performance of a distribution system. To realize this potential, coordination among multiple microgrids is needed. In this context, this paper presents a new distributed economic model predictive control scheme for the coordinated stochastic energy management of multi-microgrids. By optimally coordinating the operation of individual microgrids, this scheme maintains the system-wide supply and demand balance in an economical manner. Based on the probabilistic forecasts of renewable power generation and microgrid load, this scheme effectively handles the uncertainties in both supply and demand. Using the Chebyshev inequality and the Delta method, the corresponding stochastic optimization problems have been converted to quadratic and linear programs. The proposed scheme is evaluated on a large-scale case that includes ten interconnected microgrids. The results indicate that the proposed scheme successfully reduces the system-wide operating cost, achieves the supply-demand balance in each microgrid, and brings the energy exchange between the DNO and the main grid to a predefined trajectory.
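
    The Chebyshev-based conversion mentioned in the abstract can be illustrated in a few lines: a chance constraint on an uncertain load with known mean and variance becomes a deterministic constraint via the one-sided Chebyshev (Cantelli) inequality. This is a generic, distribution-free tightening standing in for the paper's full EMPC formulation; the numbers are illustrative.

```python
import numpy as np

# Minimal sketch: convert  P(load <= capacity) >= 1 - eps  into the
# deterministic constraint  mu + sqrt((1 - eps)/eps) * std <= capacity
# using the one-sided Chebyshev (Cantelli) inequality.
mu, std = 100.0, 15.0    # forecast mean and standard deviation (kW)
eps = 0.05               # allowed violation probability

kappa = np.sqrt((1.0 - eps) / eps)
required_capacity = mu + kappa * std
print(f"deterministic capacity requirement: {required_capacity:.1f} kW")
# A Gaussian assumption would give the less conservative mu + 1.645 * std.
```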

  9. On estimation of stochastic forcing with application to El Niño

    Science.gov (United States)

    Penland, C.

    2014-12-01

    Although Linear Inverse Modeling (LIM) provides skillful forecasts of tropical ocean sea surface temperatures, LIM's diagnostic properties are at least as useful as its prognostic properties. In this presentation, we discuss an updated method for using LIM to obtain time series representing stochastic forcing of El Niño and to quantify particular unpredictable contributions to LIM forecast error. Attention is paid to the proper stochastic calculus and to the time scale separation between the stochastic forcing and El Niño's signal. The method yields seldom-considered sources of El Niño's stochastic forcing.
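
    The core LIM estimation step can be sketched compactly: fit the linear operator from lagged covariances, G(tau) = C(tau) C(0)^(-1) and L = log(G)/tau, then read off the stochastic forcing as the one-step residual. The example below uses synthetic two-dimensional data in place of SST anomalies; the "true" operator, noise level, and lag are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import logm, expm

# Minimal LIM sketch: estimate dx = L x dt + noise from a time series and
# recover the stochastic-forcing residual series.
rng = np.random.default_rng(9)
dt, tau_steps, n = 1.0, 5, 20_000
L_true = np.array([[-0.10, 0.05], [-0.02, -0.05]])   # assumed true dynamics

x = np.zeros((n, 2))                  # synthetic data from the linear SDE
for t in range(1, n):
    x[t] = x[t-1] + (L_true @ x[t-1]) * dt + 0.3 * rng.standard_normal(2)

xc = x - x.mean(axis=0)
C0 = xc.T @ xc / n                                   # C(0)
Ctau = xc[tau_steps:].T @ xc[:-tau_steps] / (n - tau_steps)  # C(tau)
G = Ctau @ np.linalg.inv(C0)
L_est = logm(G) / (tau_steps * dt)
print("estimated L:\n", L_est.real)

# Stochastic forcing time series: residual after removing linear dynamics
prop = expm(L_est * dt).real
forcing = xc[1:] - xc[:-1] @ prop.T
print("forcing std per component:", forcing.std(axis=0))
```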

  10. Control of stochastic resonance in bistable systems by using periodic signals

    International Nuclear Information System (INIS)

    Min, Lin; Li-Min, Fang; Yong-Jun, Zheng

    2009-01-01

    Based on the characteristic double-well structure of bistable systems, this paper analyses stochastic fluctuations in a single potential well and probability transitions between the two potential wells, and proposes a method of controlling stochastic resonance by using a periodic signal. Results of theoretical analysis and numerical simulation show that the phenomenon of stochastic resonance happens when the time scales of the periodic signal and the noise-induced probability transitions between the two potential wells achieve stochastic synchronization. By adding a controllable periodic signal to the bistable system, fluctuations in the single potential well can be effectively controlled, thus affecting the probability transitions between the two potential wells. In this way, an effective control can be achieved which allows one to either enhance or realize stochastic resonance
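
    The setting is the standard overdamped double-well Langevin equation with a weak periodic drive; counting inter-well transitions against the number of forcing periods shows whether hopping and drive are synchronized. The sketch below uses this textbook model with illustrative parameters, not the paper's specific control scheme.

```python
import numpy as np

# Minimal sketch: double-well Langevin dynamics
#   dx = (x - x^3) dt + A cos(w t) dt + sqrt(2 D) dW,
# counting noise-induced transitions between the wells. Tuning A (or D) so
# that hopping synchronizes with the drive is the control idea above.
rng = np.random.default_rng(10)
A, w, D = 0.25, 0.01, 0.12
dt, n = 0.05, 400_000

x = np.empty(n)
x[0] = -1.0
t = np.arange(n) * dt
for i in range(1, n):
    drift = x[i-1] - x[i-1] ** 3 + A * np.cos(w * t[i-1])
    x[i] = x[i-1] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

wells = np.sign(x)
transitions = np.count_nonzero(np.diff(wells) != 0)
print("inter-well transitions:", transitions,
      "| forcing periods:", n * dt * w / (2 * np.pi))
```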

  11. Geospatial Optimization of Siting Large-Scale Solar Projects

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Quinby, Ted [National Renewable Energy Lab. (NREL), Golden, CO (United States); Caulfield, Emmet [Stanford Univ., CA (United States); Gerritsen, Margot [Stanford Univ., CA (United States); Diffendorfer, Jay [U.S. Geological Survey, Boulder, CO (United States); Haines, Seth [U.S. Geological Survey, Boulder, CO (United States)

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  12. Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks

    KAUST Repository

    Ben Hammouda, Chiheb

    2015-01-01

    ... discrete state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases, the dynamics of fast and slow time scales can be well separated, and this is characterized by what is called stiffness ...
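
    Tau-leaping, the method named in the title, advances an SRN by firing each reaction channel a Poisson number of times over a fixed leap. A minimal single-level sketch for a birth-death network is given below; the multilevel Monte Carlo construction couples such leaps at fine and coarse step sizes, which we do not reproduce here. Rates and the leap size are illustrative.

```python
import numpy as np

# Minimal sketch: explicit tau-leaping for the birth-death network
#   0 -> X (rate b),   X -> 0 (rate d * X).
rng = np.random.default_rng(11)
b, d = 20.0, 1.0         # birth rate, per-capita death rate
tau, T = 0.01, 10.0      # leap size, final time

x, t = 0, 0.0
while t < T:
    n_birth = rng.poisson(b * tau)       # firings of each channel per leap
    n_death = rng.poisson(d * x * tau)
    x = max(x + n_birth - n_death, 0)    # crude guard against negativity
    t += tau
print("X(T) =", x, "| stationary mean b/d =", b / d)
```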

  13. Large-scale Agricultural Land Acquisitions in West Africa | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine large-scale agricultural land acquisitions in nine West African countries -Burkina Faso, Guinea-Bissau, Guinea, Benin, Mali, Togo, Senegal, Niger, and Côte d'Ivoire. ... They will use the results to increase public awareness and knowledge about the consequences of large-scale land acquisitions.

  14. Stochastic and non-stochastic effects - a conceptual analysis

    International Nuclear Information System (INIS)

    Karhausen, L.R.

    1980-01-01

    The attempt to divide radiation effects into stochastic and non-stochastic effects is discussed. It is argued that radiation or toxicological effects are contingently related to radiation or chemical exposure. Biological effects in general can be described by general laws but these laws never represent a necessary connection. Actually stochastic effects express contingent, or empirical, connections while non-stochastic effects represent semantic and non-factual connections. These two expressions stem from two different levels of discourse. The consequence of this analysis for radiation biology and radiation protection is discussed. (author)

  15. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need...... to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed...

  16. Particle Acceleration in Mildly Relativistic Shearing Flows: The Interplay of Systematic and Stochastic Effects, and the Origin of the Extended High-energy Emission in AGN Jets

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ruo-Yu; Rieger, F. M.; Aharonian, F. A., E-mail: ruoyu@mpi-hd.mpg.de, E-mail: frank.rieger@mpi-hd.mpg.de, E-mail: aharon@mpi-hd.mpg.de [Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany)

    2017-06-10

    The origin of the extended X-ray emission in the large-scale jets of active galactic nuclei (AGNs) poses challenges to conventional models of acceleration and emission. Although electron synchrotron radiation is considered the most feasible radiation mechanism, the formation of the continuous large-scale X-ray structure remains an open issue. As astrophysical jets are expected to exhibit some turbulence and shearing motion, we here investigate the potential of shearing flows to facilitate an extended acceleration of particles and evaluate its impact on the resultant particle distribution. Our treatment incorporates systematic shear and stochastic second-order Fermi effects. We show that for typical parameters applicable to large-scale AGN jets, stochastic second-order Fermi acceleration, which always accompanies shear particle acceleration, can play an important role in facilitating the whole process of particle energization. We study the time-dependent evolution of the resultant particle distribution in the presence of second-order Fermi acceleration, shear acceleration, and synchrotron losses using a simple Fokker–Planck approach and provide illustrations for the possible emergence of a complex (multicomponent) particle energy distribution with different spectral branches. We present examples for typical parameters applicable to large-scale AGN jets, indicating the relevance of the underlying processes for understanding the extended X-ray emission and the origin of ultrahigh-energy cosmic rays.

  17. Simulating biological processes: stochastic physics from whole cells to colonies

    Science.gov (United States)

    Earnest, Tyler M.; Cole, John A.; Luthey-Schulten, Zaida

    2018-05-01

    The last few decades have revealed the living cell to be a crowded spatially heterogeneous space teeming with biomolecules whose concentrations and activities are governed by intrinsically random forces. It is from this randomness, however, that a vast array of precisely timed and intricately coordinated biological functions emerge that give rise to the complex forms and behaviors we see in the biosphere around us. This seemingly paradoxical nature of life has drawn the interest of an increasing number of physicists, and recent years have seen stochastic modeling grow into a major subdiscipline within biological physics. Here we review some of the major advances that have shaped our understanding of stochasticity in biology. We begin with some historical context, outlining a string of important experimental results that motivated the development of stochastic modeling. We then embark upon a fairly rigorous treatment of the simulation methods that are currently available for the treatment of stochastic biological models, with an eye toward comparing and contrasting their realms of applicability, and the care that must be taken when parameterizing them. Following that, we describe how stochasticity impacts several key biological functions, including transcription, translation, ribosome biogenesis, chromosome replication, and metabolism, before considering how the functions may be coupled into a comprehensive model of a ‘minimal cell’. Finally, we close with our expectation for the future of the field, focusing on how mesoscopic stochastic methods may be augmented with atomic-scale molecular modeling approaches in order to understand life across a range of length and time scales.
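
    Among the simulation methods reviewed here, the foundational one is Gillespie's stochastic simulation algorithm (SSA). The sketch below applies it to a two-reaction transcription/degradation toy model; the rate constants are illustrative, not taken from the review.

```python
import numpy as np

# Minimal sketch: Gillespie SSA for the gene-expression toy model
#   DNA -> DNA + mRNA (rate k),   mRNA -> 0 (rate g * mRNA).
rng = np.random.default_rng(12)
k, g = 5.0, 0.5          # transcription rate, mRNA degradation rate
T = 100.0

t, m = 0.0, 0
times, counts = [0.0], [0]
while t < T:
    a1, a2 = k, g * m                # reaction propensities
    a0 = a1 + a2
    t += rng.exponential(1.0 / a0)   # time to next reaction event
    if rng.random() < a1 / a0:       # choose which reaction fires
        m += 1
    else:
        m -= 1
    times.append(t); counts.append(m)

print("final mRNA count:", m, "| stationary mean k/g =", k / g)
```

    Exactness comes at a cost: one event per iteration, which is what motivates the approximate and hybrid schemes discussed elsewhere in this collection.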

  18. Large-scale motions in the universe: a review

    International Nuclear Information System (INIS)

    Burstein, D.

    1990-01-01

    The expansion of the universe can be retarded in localised regions within the universe both by the presence of gravity and by non-gravitational motions generated in the post-recombination universe. The motions of galaxies thus generated are called 'peculiar motions', and the amplitudes, size scales and coherence of these peculiar motions are among the most direct records of the structure of the universe. As such, measurements of these properties of the present-day universe provide some of the severest tests of cosmological theories. This is a review of the current evidence for large-scale motions of galaxies out to a distance of ∼5000 km s⁻¹ (in an expanding universe, distance is proportional to radial velocity). 'Large-scale' in this context refers to motions that are correlated over size scales larger than the typical sizes of groups of galaxies, up to and including the size of the volume surveyed. To orient the reader into this relatively new field of study, a short modern history is given together with an explanation of the terminology. Careful consideration is given to the data used to measure the distances, and hence the peculiar motions, of galaxies. The evidence for large-scale motions is presented in a graphical fashion, using only the most reliable data for galaxies spanning a wide range in optical properties and over the complete range of galactic environments. The kinds of systematic errors that can affect this analysis are discussed, and the reliability of these motions is assessed. The predictions of two models of large-scale motion are compared to the observations, and special emphasis is placed on those motions in which our own Galaxy directly partakes. (author)

  19. State of the Art in Large-Scale Soil Moisture Monitoring

    Science.gov (United States)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  20. Stochastics introduction to probability and statistics

    CERN Document Server

    Georgii, Hans-Otto

    2012-01-01

    This second revised and extended edition presents the fundamental ideas and results of both probability theory and statistics, and comprises the material of a one-year course. It is addressed to students with an interest in the mathematical side of stochastics. Stochastic concepts, models and methods are motivated by examples and developed and analysed systematically. Some measure theory is included, but this is done at an elementary level that is in accordance with the introductory character of the book. A large number of problems offer applications and supplements to the text.

  1. Stochastic resonance a mathematical approach in the small noise limit

    CERN Document Server

    Herrmann, Samuel; Pavlyukevich, Ilya; Peithmann, Dierk

    2013-01-01

    Stochastic resonance is a phenomenon arising in a wide spectrum of areas in the sciences ranging from physics through neuroscience to chemistry and biology. This book presents a mathematical approach to stochastic resonance which is based on a large deviations principle (LDP) for randomly perturbed dynamical systems with a weak inhomogeneity given by an exogenous periodicity of small frequency. Resonance, the optimal tuning between period length and noise amplitude, is explained by optimizing the LDP's rate function. The authors show that not all physical measures of tuning quality are robust with respect to dimension reduction. They propose measures of tuning quality based on exponential transition rates explained by large deviations techniques and show that these measures are robust. The book sheds some light on the shortcomings and strengths of different concepts used in the theory and applications of stochastic resonance without attempting to give a comprehensive overview of the many facets of stochastic ...

  2. Large-Scale Habitat Corridors for Biodiversity Conservation: A Forest Corridor in Madagascar.

    Directory of Open Access Journals (Sweden)

    Tanjona Ramiadantsoa

    In biodiversity conservation, habitat corridors are assumed to increase landscape-level connectivity and to enhance the viability of otherwise isolated populations. While the role of corridors is supported by empirical evidence, studies have typically been conducted at small spatial scales. Here, we assess the quality and the functionality of a large, 95-km-long forest corridor connecting two large national parks (416 and 311 km²) in the southeastern escarpment of Madagascar. We analyze the occurrence of 300 species in 5 taxonomic groups in the parks and in the corridor, and combine high-resolution forest cover data with a simulation model to examine various scenarios of corridor destruction. At present, the corridor contains essentially the same communities as the national parks, reflecting its breadth, which on average matches that of the parks. In the simulation model, we consider three types of dispersers: passive dispersers, which settle randomly around the source population; active dispersers, which settle only in favorable habitat; and gap-avoiding active dispersers, which avoid dispersing across non-habitat. Our results suggest that long-distance passive dispersers are most sensitive to ongoing degradation of the corridor, because increasing numbers of propagules are lost outside the forest habitat. For a wide range of dispersal parameters, the national parks are large enough to sustain stable populations until the corridor becomes severely broken, which will happen around 2065 if the current rate of forest loss continues. A significant decrease in gene flow along the corridor is expected after 2040, and this will exacerbate the adverse consequences of isolation. Our results demonstrate that simulation studies assessing the role of habitat corridors should pay close attention to the mode of dispersal and the effects of regional stochasticity.

  3. A route to explosive large-scale magnetic reconnection in a super-ion-scale current sheet

    Directory of Open Access Journals (Sweden)

    K. G. Tanaka

    2009-01-01

    How to trigger magnetic reconnection is one of the most interesting and important problems in space plasma physics. Recently, electron temperature anisotropy (αeo=Te⊥/Te||) at the center of a current sheet and the non-local effect of the lower-hybrid drift instability (LHDI) that develops at the current sheet edges have attracted attention in this context. In addition to these effects, here we also study the effects of ion temperature anisotropy (αio=Ti⊥/Ti||). Electron anisotropy effects are known to be ineffective in a current sheet whose thickness is of ion scale. In this range of current sheet thickness, the LHDI effects are shown to weaken substantially with a small increase in thickness, and the obtained saturation level is too low for a large-scale reconnection to be achieved. Then we investigate whether introduction of electron and ion temperature anisotropies in the initial stage would couple with the LHDI effects to revive quick triggering of large-scale reconnection in a super-ion-scale current sheet. The results are as follows. (1) The initial electron temperature anisotropy is consumed very quickly when a number of minuscule magnetic islands (each lateral length is 1.5~3 times the ion inertial length) form. These minuscule islands do not coalesce into a large-scale island to enable large-scale reconnection. (2) The subsequent LHDI effects disturb the current sheet filled with the small islands. This substantially accelerates the triggering time scale but does not enhance the saturation level of reconnected flux. (3) When the ion temperature anisotropy is added, it survives through the small-island formation stage and enables even quicker triggering when the LHDI effects set in. Furthermore, the saturation level is seen to be elevated by a factor of ~2, and large-scale reconnection is achieved only in this case. Comparison with two-dimensional simulations that exclude the LHDI effects confirms that the saturation level ...

  4. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    Science.gov (United States)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  5. Hybrid stochastic simplifications for multiscale gene networks

    Directory of Open Access Journals (Sweden)

    Debussche Arnaud

    2009-09-01

    Abstract Background Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. The complexity of exact simulation algorithms scales with the number of discrete jumps to be performed. Approximate schemes reduce the computational time by reducing the number of simulated discrete events. Also, answering important questions about the relation between network topology and intrinsic noise generation and propagation should be based on general mathematical results. These general results are difficult to obtain for exact models. Results We propose a unified framework for hybrid simplifications of Markov models of multiscale stochastic gene network dynamics. We discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while other components are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. By averaging and variable aggregation we drastically reduce simulation time and eliminate non-critical reactions. Hybrid and averaged simplifications can be used for more effective simulation algorithms and for obtaining general design principles relating noise to topology and time scales. The simplified models reproduce with good accuracy the stochastic properties of the gene networks, including waiting times in intermittence phenomena, fluctuation amplitudes and stationary distributions. The methods are illustrated on several gene network examples. Conclusion Hybrid simplifications can be used for onion-like (multi-layered) approaches to multi-scale biochemical systems, in which various descriptions are used at various scales. Sets of discrete and continuous variables are treated with different methods and are coupled together in a physically justified approach.

  6. Scaling and criticality in a stochastic multi-agent model of a financial market

    Science.gov (United States)

    Lux, Thomas; Marchesi, Michele

    1999-02-01

    Financial prices have been found to exhibit some universal characteristics that resemble the scaling laws characterizing physical systems in which large numbers of units interact. This raises the question of whether scaling in finance emerges in a similar way - from the interactions of a large ensemble of market participants. However, such an explanation is in contradiction to the prevalent `efficient market hypothesis' in economics, which assumes that the movements of financial prices are an immediate and unbiased reflection of incoming news about future earning prospects. Within this hypothesis, scaling in price changes would simply reflect similar scaling in the `input' signals that influence them. Here we describe a multi-agent model of financial markets which supports the idea that scaling arises from mutual interactions of participants. Although the `news arrival process' in our model lacks both power-law scaling and any temporal dependence in volatility, we find that it generates such behaviour as a result of interactions between agents.

  7. Economic MPC for a linear stochastic system of energy units

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Sokoler, Leo Emil; Standardi, Laura

    2016-01-01

    This paper summarizes comprehensively the work in four recent PhD theses from the Technical University of Denmark related to Economic MPC of future power systems. Future power systems will consist of a large number of decentralized power producers and a large number of controllable power consumers...... in addition to stochastic power producers such as wind turbines and solar power plants. Control of such large-scale systems requires new control algorithms. In this paper, we formulate the control of such a system as an Economic Model Predictive Control (MPC) problem. When the power producers and controllable...... power consumers have linear dynamics, the Economic MPC may be expressed as a linear program. We provide linear models for a number of energy units in an energy system and formulate an Economic MPC for coordination of such a system. We indicate how advances in computational MPC make the solutions...
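
    The "linear dynamics imply a linear program" point can be illustrated with a deterministic single-shot dispatch over a horizon: two dispatchable producers with different marginal costs must meet a demand profile under capacity limits. The unit data and demand profile below are illustrative assumptions, and the stochastic and receding-horizon aspects of the theses are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch: economic dispatch over a horizon as one linear program.
N = 24                                    # horizon (hours)
demand = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, N))
c = np.concatenate([10 * np.ones(N),      # cheap unit cost per MWh
                    40 * np.ones(N)])     # expensive peaker cost per MWh

# Power balance p1[t] + p2[t] = demand[t] at every hour t
A_eq = np.hstack([np.eye(N), np.eye(N)])
bounds = [(0, 40)] * N + [(0, 60)] * N    # capacity limits per unit

res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
p1, p2 = res.x[:N], res.x[N:]
print("total cost:", res.fun, "| peaker used in hours:", int(np.sum(p2 > 1e-6)))
```

    In an MPC loop, this LP would be re-solved each hour with updated forecasts and only the first control action applied.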

  8. Impact of stochasticity in immigration and reintroduction on colonizing and extirpating populations.

    Science.gov (United States)

    Rajakaruna, Harshana; Potapov, Alexei; Lewis, Mark

    2013-05-01

    A thorough quantitative understanding of populations at the edge of extinction is needed to manage both invasive and extirpating populations. Immigration can govern the population dynamics when the population levels are low. It increases the probability of a population establishing (or reestablishing) before going extinct (EBE). However, the rate of immigration can be highly fluctuating. Here, we investigate how the stochasticity in immigration impacts the EBE probability for small populations in variable environments. We use a population model with an Allee effect described by a stochastic differential equation (SDE) and employ the Fokker-Planck diffusion approximation to quantify the EBE probability. We find that the effect of the stochasticity in immigration on the EBE probability depends on both the intrinsic growth rate (r) and the mean rate of immigration (p). In general, if r is large and positive (e.g. invasive species introduced to favorable habitats), or if p is greater than the rate of population decline due to the demographic Allee effect (e.g., effective stocking of declining populations), then the stochasticity in immigration decreases the EBE probability. If r is large and negative (e.g. endangered populations in unfavorable habitats), or if the rate of decline due to the demographic Allee effect is much greater than p (e.g., weak stocking of declining populations), then the stochasticity in immigration increases the EBE probability. However, the mean time for EBE decreases with increasing stochasticity in immigration for both positive and negative large r. Thus, the results suggest that ecological management of populations involves a tradeoff as to whether to increase or decrease the stochasticity in immigration in order to optimize the desired outcome. Moreover, the control of invasive species spread through stochastic means, for example, by stochastic monitoring and treatment of vectors such as ship-ballast water, may be suitable strategies
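
    The EBE probability can also be estimated by direct Monte Carlo on an Allee-effect SDE instead of the Fokker-Planck route used in the paper. In the sketch below, the cubic-type drift form, the establishment criterion (reaching K/2 before hitting 0), and every parameter value are illustrative stand-ins for the paper's model; sigma_p plays the role of the stochasticity in immigration discussed above.

```python
import numpy as np

# Minimal sketch: Euler-Maruyama Monte Carlo for
#   dN = [ r N (N/A - 1)(1 - N/K) + p ] dt + sigma_p dW_p + sigma_d sqrt(N) dW_d
# estimating P(establish before extinction), i.e. the EBE probability.
rng = np.random.default_rng(13)
r, A, K, p = 0.5, 20.0, 200.0, 1.0
sigma_p, sigma_d = 2.0, 0.5          # immigration noise, demographic noise
dt, N0, n_trials = 0.01, 5.0, 2000

established = 0
for _ in range(n_trials):
    N = N0
    for _ in range(500_000):         # step cap guarantees termination
        if N <= 0.0 or N >= K / 2:
            break
        drift = r * N * (N / A - 1.0) * (1.0 - N / K) + p
        noise = (sigma_p * rng.standard_normal()
                 + sigma_d * np.sqrt(N) * rng.standard_normal())
        N += drift * dt + np.sqrt(dt) * noise
    established += N >= K / 2
print("EBE probability ~", established / n_trials)
```

    Sweeping sigma_p at fixed r and p would reproduce, qualitatively, the sign change in the effect of immigration stochasticity described in the abstract.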

  10. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  11. Fatigue Analysis of Large-scale Wind turbine

    Directory of Open Access Journals (Sweden)

    Zhu Yongli

    2017-01-01

    Full Text Available The paper investigates fatigue damage of the top flange of a large-scale wind turbine generator. It establishes a finite element model of the top flange connection system with the finite element analysis software MSC.Marc/Mentat, analyzes its fatigue strain, simulates the flange fatigue load case with the Bladed software, and acquires the flange fatigue load spectrum with the rain-flow counting method; finally, it performs a fatigue analysis of the top flange with the fatigue analysis software MSC.Fatigue and the Palmgren-Miner linear cumulative damage theory. The results provide a new approach to flange fatigue analysis for large-scale wind turbine generators and possess practical engineering value.

  12. Real-time simulation of large-scale floods

    Science.gov (United States)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  13. Large-scale numerical simulations of plasmas

    International Nuclear Information System (INIS)

    Hamaguchi, Satoshi

    2004-01-01

    Recent trends in large-scale simulations of fusion plasmas and processing plasmas are briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to analyses of processing plasmas. (author)

  14. Electron thermal confinement in a partially stochastic magnetic structure

    Science.gov (United States)

    Morton, L. A.; Young, W. C.; Hegna, C. C.; Parke, E.; Reusch, J. A.; Den Hartog, D. J.

    2018-04-01

    Using a high-repetition-rate Thomson scattering diagnostic, we observe a peak in electron temperature Te coinciding with the location of a large magnetic island in the Madison Symmetric Torus. Magnetohydrodynamic modeling of this quasi-single helicity plasma indicates that smaller adjacent islands overlap with and destroy the large island flux surfaces. The estimated stochastic electron thermal conductivity (≈30 m²/s) is consistent with the conductivity inferred from the observed Te gradient and ohmic heating power. Island-shaped Te peaks can result from partially stochastic magnetic islands.

  15. Nearly incompressible fluids: Hydrodynamics and large scale inhomogeneity

    International Nuclear Information System (INIS)

    Hunana, P.; Zank, G. P.; Shaikh, D.

    2006-01-01

    A system of hydrodynamic equations in the presence of large-scale inhomogeneities for a high plasma beta solar wind is derived. The theory is derived under the assumption of low turbulent Mach number and is developed for the flows where the usual incompressible description is not satisfactory and a full compressible treatment is too complex for any analytical studies. When the effects of compressibility are incorporated only weakly, a new description, referred to as 'nearly incompressible hydrodynamics', is obtained. The nearly incompressible theory was originally applied to homogeneous flows. However, large-scale gradients in density, pressure, temperature, etc., are typical in the solar wind and it was unclear how inhomogeneities would affect the usual incompressible and nearly incompressible descriptions. In the homogeneous case, the lowest order expansion of the fully compressible equations leads to the usual incompressible equations, followed at higher orders by the nearly incompressible equations, as introduced by Zank and Matthaeus. With this work we show that the inclusion of large-scale inhomogeneities (in this case time-independent and radially symmetric background solar wind) modifies the leading-order incompressible description of solar wind flow. We find, for example, that the divergence of velocity fluctuations is nonsolenoidal and that density fluctuations can be described to leading order as a passive scalar. Locally (for small lengthscales), this system of equations converges to the usual incompressible equations and we therefore use the term 'locally incompressible' to describe the equations. This term should be distinguished from the term 'nearly incompressible', which is reserved for higher-order corrections. Furthermore, we find that density fluctuations scale with Mach number linearly, in contrast to the original homogeneous nearly incompressible theory, in which density fluctuations scale with the square of Mach number. Inhomogeneous nearly

  16. Performance Health Monitoring of Large-Scale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rajamony, Ram [IBM Research, Austin, TX (United States)

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  17. Susceptibility of optimal train schedules to stochastic disturbances of process times

    DEFF Research Database (Denmark)

    Larsen, Rune; Pranzo, Marco; D’Ariano, Andrea

    2013-01-01

    study, an advanced branch and bound algorithm, on average, outperforms a First In First Out scheduling rule both in deterministic and stochastic traffic scenarios. However, the characteristic of the stochastic processes and the way a stochastic instance is handled turn out to have a serious impact...... and dwell times). In fact, the objective of railway traffic management is to reduce delay propagation and to increase disturbance robustness of train schedules at a network scale. We present a quantitative study of traffic disturbances and their effects on the schedules computed by simple and advanced...

  18. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the better-known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
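
    As an illustration of the kind of linearization approach compared in this record, the sketch below applies the classical Frank-Wolfe method to a toy convex transportation objective, with SciPy's LP solver acting as the linearized subproblem; the quadratic cost term, supplies and demands are invented and are not the paper's test instances.

```python
import numpy as np
from scipy.optimize import linprog

m, n = 3, 4
rng = np.random.default_rng(1)
c = rng.uniform(1.0, 5.0, (m, n))             # linear transport costs
supply = np.array([20.0, 30.0, 25.0])
demand = np.array([15.0, 20.0, 25.0, 15.0])   # balanced: both sum to 75

# Equality constraints of the transportation polytope, built once
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0          # row sums equal supply
for j in range(n):
    A_eq[m + j, j::n] = 1.0                   # column sums equal demand
b_eq = np.concatenate([supply, demand])

def grad(x):
    return c + 0.1 * x                        # gradient of <c,x> + 0.05*||x||^2

def lmo(g):
    """Linearized subproblem: an ordinary transportation LP."""
    res = linprog(g.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.x.reshape(m, n)

x = lmo(c)                                    # a feasible starting vertex
for k in range(200):
    s = lmo(grad(x))                          # Frank-Wolfe direction
    x += 2.0 / (k + 2.0) * (s - x)            # classical step size 2/(k+2)

print("objective:", (c * x).sum() + 0.05 * (x ** 2).sum())
```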

  19. Front propagation and clustering in the stochastic nonlocal Fisher equation

    Science.gov (United States)

    Ganan, Yehuda A.; Kessler, David A.

    2018-04-01

    In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
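
    A minimal sketch of the deterministic cutoff approximation described above may help: a 1D nonlocal Fisher equation whose logistic term uses a Gaussian-kernel convolution and is switched off wherever the density falls below a critical cutoff. Grid sizes, rates, the interaction range and the cutoff value are all invented.

```python
import numpy as np

L, N = 200.0, 2000
dx = L / N
x = np.arange(N) * dx
D, r, R, u_c = 1.0, 1.0, 5.0, 1e-4     # diffusion, growth, range, cutoff

# Normalised Gaussian interaction kernel, centred at index 0 for FFT use
kern = np.exp(-0.5 * ((x - L / 2) / R) ** 2)
kern /= kern.sum()
kern_hat = np.fft.fft(np.roll(kern, N // 2))

u = np.where(x < 10.0, 1.0, 0.0)       # initially occupied region
dt = 0.2 * dx ** 2 / D                 # explicit diffusion stability limit
for _ in range(5000):
    conv = np.real(np.fft.ifft(np.fft.fft(u) * kern_hat))     # kernel * u
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    growth = r * u * (1.0 - conv) * (u > u_c)  # growth cut off below u_c
    u = np.clip(u + dt * (D * lap + growth), 0.0, None)

print("front position ~", x[np.argmax(u < 0.5)])
```

    The clustering regime the abstract reports for larger interaction ranges can be probed qualitatively by increasing R in this sketch.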

  20. Learning from large scale neural simulations

    DEFF Research Database (Denmark)

    Serban, Maria

    2017-01-01

    Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...

  1. Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing

    KAUST Repository

    Kumar, Abhishek; Verma, Mahendra K.; Sukhatme, Jai

    2017-01-01

    In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small scale turbulence. The VSHF consists of internal gravity waves and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k−3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k−3 power-law and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k−11/5 form at large scales, to a steeper approximate k−3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k−1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k−2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k−1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.

  2. Phenomenology of two-dimensional stably stratified turbulence under large-scale forcing

    KAUST Repository

    Kumar, Abhishek

    2017-01-11

    In this paper, we characterise the scaling of energy spectra, and the interscale transfer of energy and enstrophy, for strongly, moderately and weakly stably stratified two-dimensional (2D) turbulence, restricted in a vertical plane, under large-scale random forcing. In the strongly stratified case, a large-scale vertically sheared horizontal flow (VSHF) coexists with small scale turbulence. The VSHF consists of internal gravity waves and the turbulent flow has a kinetic energy (KE) spectrum that follows an approximate k−3 scaling with zero KE flux and a robust positive enstrophy flux. The spectrum of the turbulent potential energy (PE) also approximately follows a k−3 power-law and its flux is directed to small scales. For moderate stratification, there is no VSHF and the KE of the turbulent flow exhibits Bolgiano–Obukhov scaling that transitions from a shallow k−11/5 form at large scales, to a steeper approximate k−3 scaling at small scales. The entire range of scales shows a strong forward enstrophy flux, and interestingly, large (small) scales show an inverse (forward) KE flux. The PE flux in this regime is directed to small scales, and the PE spectrum is characterised by an approximate k−1.64 scaling. Finally, for weak stratification, KE is transferred upscale and its spectrum closely follows a k−2.5 scaling, while PE exhibits a forward transfer and its spectrum shows an approximate k−1.6 power-law. For all stratification strengths, the total energy always flows from large to small scales and almost all the spectral indices are well explained by accounting for the scale-dependent nature of the corresponding flux.

  3. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    Science.gov (United States)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

    Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10⁴, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = −0.0909 to Rot = 0.3 are simulated. First, the LES of TC is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase for increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, “over-damped” LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.

  4. Collective, stochastic and nonequilibrium behavior of highly excited hadronic matter

    International Nuclear Information System (INIS)

    Carruthers, P.

    1983-01-01

    We discuss selected problems concerning the dynamic and stochastic behavior of highly excited matter, particularly the QCD plasma. For the latter we consider the equation of state, kinetics, quasiparticles, flow properties and possible chaos and turbulence. The promise of phase space distribution functions for covariant transport and kinetic theory is stressed. The possibility and implications of a stochastic bag are spelled out. A simplified space-time model of hadronic collisions is pursued, with applications to A-A collisions and other matters. The domain wall between hadronic and plasma phase is of potential importance: its thickness and relation to surface tension are noticed. Finally we review the recently developed stochastic cell model of multiparticle distributions and KNO scaling. This topic leads to the notion that fractal dimensions are involved in a rather general dynamical context. We speculate that various scaling phenomena are independent of the full dynamical structure, depending only on a general stochastic framework having to do with simple maps and strange attractors. 42 references

  5. Dynamic stochastic optimization

    CERN Document Server

    Ermoliev, Yuri; Pflug, Georg

    2004-01-01

    Uncertainties and changes are pervasive characteristics of modern systems involving interactions between humans, economics, nature and technology. These systems are often too complex to allow for precise evaluations and, as a result, the lack of proper management (control) may create significant risks. In order to develop robust strategies we need approaches which explicitly deal with uncertainties, risks and changing conditions. One rather general approach is to characterize (explicitly or implicitly) uncertainties by objective or subjective probabilities (measures of confidence or belief). This leads us to stochastic optimization problems which can rarely be solved by using the standard deterministic optimization and optimal control methods. In stochastic optimization the accent is on problems with a large number of decision and random variables, and consequently the focus of attention is directed to efficient solution procedures rather than to (analytical) closed-form solutions. Objective an...

  6. Large-scale preparation of hollow graphitic carbon nanospheres

    International Nuclear Information System (INIS)

    Feng, Jun; Li, Fu; Bai, Yu-Jun; Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning; Lu, Xi-Feng

    2013-01-01

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on a large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of about 10 nm and a wall thickness of a few graphene layers. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres can be prepared on a large scale by the simple reaction between glucose and Mg at 550 °C, and they exhibit electrochemical performance superior to graphite. Highlights: ► Hollow graphitic carbon nanospheres (HGCNSs) were prepared on a large scale at 550 °C. ► The preparation is simple, effective and eco-friendly. ► The in situ yielded MgO nanocrystals promote the graphitization. ► The HGCNSs exhibit electrochemical performance superior to graphite.

  7. Accelerating large-scale phase-field simulations with GPU

    Directory of Open Access Journals (Sweden)

    Xiaoming Shi

    2017-10-01

    Full Text Available A new package for accelerating large-scale phase-field simulations was developed by using GPU based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Through a specific algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interaction were solved with the algorithm running on GPU to test the performance of the package. A comparison of the calculation results between the solver executed on a single CPU and the one on GPU shows that the GPU version runs about 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
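
    A CPU sketch of the semi-implicit Fourier step underlying such packages, applied to the Allen-Cahn equation, is given below; the linear gradient term is treated implicitly in Fourier space while the double-well term stays explicit. The parameter values are invented, and a GPU version would run the same update through CUDA (for instance by replacing numpy with cupy).

```python
import numpy as np

N, dx = 256, 1.0
M, kappa, dt = 1.0, 2.0, 0.1            # mobility, gradient energy, time step

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2

rng = np.random.default_rng(0)
phi = 0.02 * rng.standard_normal((N, N))     # small random initial field

for _ in range(1000):
    dfdphi = phi ** 3 - phi                  # derivative of the double well
    # Semi-implicit update: nonlinear term explicit, Laplacian implicit
    phi_hat = (np.fft.fft2(phi) - dt * M * np.fft.fft2(dfdphi)) \
              / (1.0 + dt * M * kappa * k2)
    phi = np.real(np.fft.ifft2(phi_hat))

print("phase fractions:", (phi > 0).mean(), (phi < 0).mean())
```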

  8. First Mile Challenges for Large-Scale IoT

    KAUST Repository

    Bader, Ahmed

    2017-03-16

    The Internet of Things is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the sheer scale of spatial traffic intensity that must be accommodated, primarily in the uplink direction. To that end, cellular networks are indeed a strong first mile candidate to accommodate the data tsunami to be generated by the IoT. However, IoT devices are required in the cellular paradigm to undergo random access procedures as a precursor to resource allocation. Such procedures impose a major bottleneck that hinders cellular networks' ability to support large-scale IoT. In this article, we shed light on the random access dilemma and present a case study based on experimental data as well as system-level simulations. Accordingly, a case is built for the latent need to revisit random access procedures. A call for action is motivated by listing a few potential remedies and recommendations.

  9. Global existence and regularity for the 3D stochastic primitive equations of the ocean and atmosphere with multiplicative white noise

    Science.gov (United States)

    Debussche, A.; Glatt-Holtz, N.; Temam, R.; Ziane, M.

    2012-07-01

    The primitive equations (PEs) are a basic model in the study of large scale oceanic and atmospheric dynamics. These systems form the analytical core of the most advanced general circulation models. For this reason and due to their challenging nonlinear and anisotropic structure, the PEs have recently received considerable attention from the mathematical community. On the other hand, in view of the complex multi-scale nature of the earth's climate system, many uncertainties appear that should be accounted for in the basic dynamical models of atmospheric and oceanic processes. In the climate community stochastic methods have come into extensive use in this connection. For this reason there has appeared a need to further develop the foundations of nonlinear stochastic partial differential equations in connection with the PEs and more generally. In this work we study a stochastic version of the PEs. We establish the global existence and uniqueness of strong, pathwise solutions for these equations in dimension 3 for the case of a nonlinear multiplicative noise. The proof makes use of anisotropic estimates, L^{p}_{t}L^{q}_{x} estimates on the pressure and stopping time arguments.

  10. Global existence and regularity for the 3D stochastic primitive equations of the ocean and atmosphere with multiplicative white noise

    International Nuclear Information System (INIS)

    Debussche, A; Glatt-Holtz, N; Temam, R; Ziane, M

    2012-01-01

    The primitive equations (PEs) are a basic model in the study of large scale oceanic and atmospheric dynamics. These systems form the analytical core of the most advanced general circulation models. For this reason and due to their challenging nonlinear and anisotropic structure, the PEs have recently received considerable attention from the mathematical community. On the other hand, in view of the complex multi-scale nature of the earth's climate system, many uncertainties appear that should be accounted for in the basic dynamical models of atmospheric and oceanic processes. In the climate community stochastic methods have come into extensive use in this connection. For this reason there has appeared a need to further develop the foundations of nonlinear stochastic partial differential equations in connection with the PEs and more generally. In this work we study a stochastic version of the PEs. We establish the global existence and uniqueness of strong, pathwise solutions for these equations in dimension 3 for the case of a nonlinear multiplicative noise. The proof makes use of anisotropic estimates, L^p_t L^q_x estimates on the pressure and stopping time arguments.

  11. Stochastic Spiking Neural Networks Enabled by Magnetic Tunnel Junctions: From Nontelegraphic to Telegraphic Switching Regimes

    Science.gov (United States)

    Liyanagedera, Chamika M.; Sengupta, Abhronil; Jaiswal, Akhilesh; Roy, Kaushik

    2017-12-01

    Stochastic spiking neural networks based on nanoelectronic spin devices can be a possible pathway to achieving "brainlike" compact and energy-efficient cognitive intelligence. Such computational models attempt to exploit the intrinsic device stochasticity of nanoelectronic synaptic or neural components to perform learning or inference. However, there has been limited analysis of the scaling effect of stochastic spin devices and its impact on the operation of such stochastic networks at the system level. This work attempts to explore the design space and analyze the performance of nanomagnet-based stochastic neuromorphic computing architectures for magnets with different barrier heights. We illustrate how the underlying network architecture must be modified to account for the random telegraphic switching behavior displayed by magnets with low barrier heights as they are scaled into the superparamagnetic regime. We perform a device-to-system-level analysis on a deep neural-network architecture for a digit-recognition problem on the MNIST data set.
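
    Two standard abstractions, consistent with but not taken from this record, convey the barrier-height story: a Neel-Arrhenius dwell time separating stable ("nontelegraphic") magnets from superparamagnetic ("telegraphic") ones, and a neuron that spikes with a sigmoid probability of its input; the attempt time and gain below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_dwell_time(delta, tau0=1e-9):
    """Neel-Arrhenius mean dwell time for an energy barrier delta (in kT):
    high barriers give stable bits, low barriers random telegraph switching."""
    return tau0 * np.exp(delta)

def stochastic_neuron(inputs, beta=2.0):
    """Spike with sigmoid probability of the input, a common abstraction of
    a stochastic magnetic-tunnel-junction neuron (beta is an invented gain)."""
    prob = 1.0 / (1.0 + np.exp(-beta * np.asarray(inputs)))
    return rng.uniform(size=prob.shape) < prob

# A 40 kT barrier dwells ~ 2e8 s (a stable bit); 5 kT dwells ~ 0.15 microseconds
print(mean_dwell_time(40.0), mean_dwell_time(5.0))
print(stochastic_neuron([-1.0, 0.0, 1.0]))
```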

  12. Thermal power generation projects "Large Scale Solar Heating"; EU-Thermie-Projekte "Large Scale Solar Heating"

    Energy Technology Data Exchange (ETDEWEB)

    Kuebler, R.; Fisch, M.N. [Steinbeis-Transferzentrum Energie-, Gebaeude- und Solartechnik, Stuttgart (Germany)

    1998-12-31

    The aim of this project is the preparation of the "Large Scale Solar Heating" programme for a Europe-wide development of the technology. The resulting demonstration programme was judged favourably by the expert reviewers but was not immediately (1996) accepted for funding. In November 1997 the EU Commission provided 1.5 million ECU, which allowed an updated project proposal to be realised. Already by mid-1997 a smaller project had been approved, which had been requested under the lead of Chalmers Industriteknik (CIT) in Sweden and mainly serves technology transfer. (orig.)

  13. Modeling Stochastic Energy and Water Consumption to Manage Residential Water Uses

    Science.gov (United States)

    Abdallah, A. M.; Rosenberg, D. E.; Water; Energy Conservation

    2011-12-01

    and energy use, potential savings, and payback periods to install efficient water end-use appliances and fixtures. Stochastic model results show the distributions among households for (i) water end-use, (ii) energy consumed to use water, and (iii) financial payback periods. Compared to deterministic analysis, stochastic modeling results show that hot water fractions for appliances follow normal distributions with high standard deviation and reveal pronounced variations among households that significantly affect energy savings and payback period estimates. These distributions provide an important tool to select and size water conservation programs to simultaneously meet both water and energy conservation goals. They also provide a way to identify and target a small fraction of customers with potential to save large water volumes and energy from appliance retrofits. Future work will embed this household scale stochastic model in city-scale models to identify win-win water management opportunities where households save money by conserving water and energy while cities avoid costs, downsize, or delay infrastructure development.
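
    As a rough illustration of this kind of stochastic end-use modeling, and not the authors' model or data, the sketch below samples hypothetical hot-water fractions and daily use volumes across households and propagates them to a distribution of retrofit payback periods; every number is invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_households = 10_000

# Invented distributions: truncated-normal hot-water fraction, lognormal use
hot_frac = np.clip(rng.normal(0.35, 0.15, n_households), 0.0, 1.0)
use_L = rng.lognormal(mean=np.log(150.0), sigma=0.3, size=n_households)  # L/day

# Energy to heat the hot-water share: m * c * dT, assuming a 50 K rise
energy_kwh = use_L * hot_frac * 4.186 * 50.0 / 3600.0     # kWh/day

savings_frac = 0.4                                    # assumed retrofit saving
cost, water_price, energy_price = 600.0, 0.002, 0.15  # $, $/L, $/kWh
daily_saving = savings_frac * (use_L * water_price + energy_kwh * energy_price)
payback_years = cost / (daily_saving * 365.0)

print(f"median payback {np.median(payback_years):.1f} y; "
      f"10% of households under {np.quantile(payback_years, 0.1):.1f} y")
```

    The spread of payback_years across households is exactly the kind of distribution the abstract argues a single deterministic average would hide.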

  14. Large-scale retrieval for medical image analytics: A comprehensive review.

    Science.gov (United States)

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis.

  15. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments remains time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  16. Travelling fronts in stochastic Stokes’ drifts

    KAUST Repository

    Blanchet, Adrien; Dolbeault, Jean; Kowalczyk, Michał

    2008-01-01

    By analytical methods we study the large time properties of the solution of a simple one-dimensional model of stochastic Stokes' drift. Semi-explicit formulae allow us to characterize the behaviour of the solutions and compute global quantities

  17. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
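
    The low-rank kernel approximation the abstract refers to can be illustrated with the standard Nystrom construction; the sketch below uses random prototypes (k-means centres would be closer in spirit to prototypes chosen to minimise information loss), and the data set and kernel width are invented.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rbf(X, Y, gamma=0.05):
    """RBF kernel matrix between the rows of X and Y."""
    return np.exp(-gamma * cdist(X, Y, "sqeuclidean"))

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))           # stand-in data set
m = 50                                        # number of prototype vectors
P = X[rng.choice(len(X), size=m, replace=False)]

K_nm = rbf(X, P)                              # n x m cross-kernel
K_mm = rbf(P, P)                              # m x m prototype kernel
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T   # rank-m approximation

K = rbf(X, X)
err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"relative Frobenius error with {m} prototypes: {err:.3f}")
```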

  18. Large-eddy simulation and Lagrangian stochastic modelling of solid particle and droplet dispersion and mixing. Application to atmospheric pollution; Dispersion et melange turbulents de particules solides et de gouttelettes par une simulation des grandes echelles et une modelisation stochastique lagrangienne. Application a la pollution de l'atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Vinkovic, I.

    2005-07-15

    In order to study atmospheric pollution and the dispersion of industrial stack emissions, a large eddy simulation with the dynamic Smagorinsky-Germano sub-grid-scale model is coupled with Lagrangian tracking of fluid particles containing scalar, solid particles and droplets. The movement of fluid particles at a sub-grid level is given by a three-dimensional Langevin model. The stochastic model is written in terms of sub-grid-scale statistics at a mesh level. By introducing a diffusion model, the coupling between the large-eddy simulation and the modified three-dimensional Langevin model is applied to passive scalar dispersion. The results are validated by comparison with the wind-tunnel experiments of Fackrell and Robins (1982). The equation of motion of a small rigid sphere in a turbulent flow is introduced. Solid particles and droplets are tracked in a Lagrangian way. The velocity of solid particles and droplets is considered to have a large scale component (directly computed by the large-eddy simulation) and a sub-grid scale part. Because of inertia and gravity effects, solid particles and droplets deviate from the trajectories of the surrounding fluid particles. Therefore, a modified Lagrangian correlation timescale is introduced into the Langevin model previously developed for the sub-grid velocity of fluid particles. Two-way coupling and collisions are taken into account. The results of the large-eddy simulation with solid particles are compared with the wind-tunnel experiments of Nalpanis et al. (1993) and of Taniere et al. (1997) on sand particles in saltation and in modified saltation, respectively. A model for droplet coalescence and breakup is implemented, which allows droplet interactions under turbulent flow conditions to be predicted within the Euler/Lagrange framework. Coalescence and breakup are considered as a stochastic process with a simple scaling symmetry assumption for the droplet radius, initially proposed by Kolmogorov (1941). At high

  19. Stochastic switching in biology: from genotype to phenotype

    International Nuclear Information System (INIS)

    Bressloff, Paul C

    2017-01-01

    There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression are often of order 1–1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker–Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel–Kramers–Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. The major aim of
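
    The genotypic switching discussed here is often introduced through the random telegraph model of a promoter; the following Gillespie (stochastic simulation algorithm) sketch of a two-state gene producing mRNA in bursts uses invented rate constants and is a textbook illustration rather than anything specific to this record.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented rates: promoter OFF<->ON switching, synthesis (ON only), decay
k_on, k_off, beta, gamma = 0.1, 0.05, 2.0, 0.1

def gillespie(t_end=1000.0):
    t, state, mrna = 0.0, 0, 0
    times, copies = [0.0], [0]
    while t < t_end:
        rates = np.array([k_on if state == 0 else k_off,   # promoter flip
                          beta if state == 1 else 0.0,     # transcription
                          gamma * mrna])                   # degradation
        total = rates.sum()
        t += rng.exponential(1.0 / total)                  # waiting time
        u = rng.uniform(0.0, total)                        # pick a reaction
        if u < rates[0]:
            state = 1 - state
        elif u < rates[0] + rates[1]:
            mrna += 1
        else:
            mrna -= 1
        times.append(t); copies.append(mrna)
    return np.array(times), np.array(copies)

t, m = gillespie()
print(f"mean mRNA copy number ~ {m.mean():.1f}")   # bursty, bimodal traces
```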

  20. Stochastic analysis for Poisson point processes Malliavin calculus, Wiener-Itô chaos expansions and stochastic geometry

    CERN Document Server

    Peccati, Giovanni

    2016-01-01

    Stochastic geometry is the branch of mathematics that studies geometric structures associated with random configurations, such as random graphs, tilings and mosaics. Due to its close ties with stereology and spatial statistics, the results in this area are relevant for a large number of important applications, e.g. to the mathematical modeling and statistical analysis of telecommunication networks, geostatistics and image analysis. In recent years – due mainly to the impetus of the authors and their collaborators – a powerful connection has been established between stochastic geometry and the Malliavin calculus of variations, which is a collection of probabilistic techniques based on the properties of infinite-dimensional differential operators. This has led in particular to the discovery of a large number of new quantitative limit theorems for high-dimensional geometric objects. This unique book presents an organic collection of authoritative surveys written by the principal actors in this rapidly evolvi...

  1. Stochastic Differential Equations and Kondratiev Spaces

    Energy Technology Data Exchange (ETDEWEB)

    Vaage, G.

    1995-05-01

    The purpose of this mathematical thesis was to improve the understanding of physical processes such as fluid flow in porous media. An example is oil flowing in a reservoir. In the first of five included papers, Hilbert space methods for elliptic boundary value problems are used to prove the existence and uniqueness of solutions to a large family of elliptic differential equations with additive noise without using the Hermite transform. The ideas are then extended to the multidimensional case and used to prove existence and uniqueness of solutions of the Stokes equations with additive noise. The second paper uses functional analytic methods for partial differential equations and presents a general framework for proving existence and uniqueness of solutions to stochastic partial differential equations with multiplicative noise, for a large family of noises. The methods are applied to equations of elliptic, parabolic as well as hyperbolic type. The framework presented can be extended to the multidimensional case. The third paper shows how the ideas from the second paper can be extended to study the moving boundary value problem associated with the stochastic pressure equation. The fourth paper discusses a set of stochastic differential equations. The fifth paper studies the relationship between the two families of Kondratiev spaces used in the thesis. 102 refs.

  2. 3D stochastic inversion and joint inversion of potential fields for multi scale parameters

    Science.gov (United States)

    Shamsipour, Pejman

    stochastic joint inversion method based on cokriging is applied to estimate density and magnetic susceptibility distributions from gravity and total magnetic field data. The method fully integrates the physical relations between the properties (density and magnetic susceptibility) and the indirect observations (gravity and total magnetic field). As a consequence, when the data are considered noise-free, the inverted fields exactly reproduce the observed data. The required density and magnetic susceptibility auto- and cross covariance are assumed to follow a linear model of coregionalization (LCM). In all the methods presented in this thesis, compact and stochastic synthetic models are investigated. The results show the ability of the methods to invert surface and borehole data simultaneously on multiple scale parameters. A case study using ground measurements of total magnetic field and gravity data at the Perseverance mine (Quebec, Canada) is selected and tested with the 3 approaches presented. The recovered 3D susceptibility and density model provides beneficial information that can be used to analyze the geology of massive sulfides for the domain under study.

  3. Accelerating Relevance Vector Machine for Large-Scale Data on Spark

    Directory of Open Access Journals (Sweden)

    Liu Fang

    2017-01-01

    Full Text Available Relevance vector machine (RVM) is a machine learning algorithm based on a sparse Bayesian framework, which performs well when running classification and regression tasks on small-scale datasets. However, RVM also has certain drawbacks which restrict its practical applications, such as (1) a slow training process and (2) poor performance on large-scale training datasets. In order to solve these problems, we first propose Discrete AdaBoost RVM (DAB-RVM), which incorporates ensemble learning in RVM. This method performs well with large-scale low-dimensional datasets. However, as the number of features increases, the training time of DAB-RVM increases as well. To avoid this phenomenon, we utilize the abundant training samples of large-scale datasets and propose all-features boosting RVM (AFB-RVM), which modifies the way weak classifiers are obtained. In our experiments we study the differences between various boosting techniques with RVM, demonstrating the performance of the proposed approaches on Spark. As a result of this paper, two proposed approaches on Spark for different types of large-scale datasets are available.

  4. A concise course on stochastic partial differential equations

    CERN Document Server

    Prévôt, Claudia

    2007-01-01

    These lectures concentrate on (nonlinear) stochastic partial differential equations (SPDE) of evolutionary type. All kinds of dynamics with stochastic influence in nature or man-made complex systems can be modelled by such equations. To keep the technicalities minimal we confine ourselves to the case where the noise term is given by a stochastic integral w.r.t. a cylindrical Wiener process. But all results can be easily generalized to SPDE with more general noises such as, for instance, stochastic integral w.r.t. a continuous local martingale. There are basically three approaches to analyze SPDE: the "martingale measure approach", the "mild solution approach" and the "variational approach". The purpose of these notes is to give a concise and as self-contained as possible an introduction to the "variational approach". A large part of necessary background material, such as definitions and results from the theory of Hilbert spaces, are included in appendices.

  5. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
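
    The hierarchical model itself is not reproduced in the abstract; as a much simpler stand-in that shows why regularising a large covariance matrix helps, the sketch below shrinks the sample covariance toward a diagonal target with a fixed weight (the Bayesian approach would instead infer the weight and the dependency between covariance parameters from the data). All dimensions and values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 50                         # many variables, few samples
true_cov = np.eye(p)
X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)

S = np.cov(X, rowvar=False)            # sample covariance: singular, noisy
target = np.diag(np.diag(S))           # structured (diagonal) target
lam = 0.5                              # fixed shrinkage weight (invented)
S_shrunk = lam * target + (1.0 - lam) * S

for name, M in [("sample", S), ("shrunk", S_shrunk)]:
    err = np.linalg.norm(M - true_cov) / np.linalg.norm(true_cov)
    print(f"{name} estimator: relative error {err:.2f}")
```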

  6. Modeling real-time balancing power demands in wind power systems using stochastic differential equations

    International Nuclear Information System (INIS)

    Olsson, Magnus; Perninge, Magnus; Soeder, Lennart

    2010-01-01

    The inclusion of wind power into power systems has a significant impact on the demand for real-time balancing power due to the stochastic nature of wind power production. The overall aim of this paper is to present probabilistic models of the impact of large-scale integration of wind power on the continuous demand in MW for real-time balancing power. This is important not only for system operators, but also for producers and consumers, since in most systems they provide balancing power through various market solutions. Since situations can occur where the wind power variations cancel out other types of deviations in the system, models on an hourly basis are not sufficient. Therefore the developed model is in continuous time and is based on stochastic differential equations (SDE). The model can be used within an analytical framework or in Monte Carlo simulations. (author)
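
    The abstract does not give the SDE itself; a minimal continuous-time stand-in is an Ornstein-Uhlenbeck process for the MW imbalance, integrated by Euler-Maruyama with invented parameters.

```python
import numpy as np

theta, sigma = 0.5, 120.0          # mean reversion [1/h], noise [MW/sqrt(h)]
dt, hours = 1.0 / 60.0, 24         # one-minute steps over one day

rng = np.random.default_rng(7)
n = int(hours / dt)
x = np.zeros(n)                    # balancing demand deviation in MW
for i in range(1, n):
    # dX = -theta * X dt + sigma dW
    x[i] = x[i - 1] - theta * x[i - 1] * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# Stationary standard deviation of an OU process is sigma / sqrt(2 theta)
print(f"empirical std {x.std():.0f} MW, "
      f"theory {sigma / np.sqrt(2 * theta):.0f} MW")
```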

  7. Oxygen Distributions-Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network.

    Directory of Open Access Journals (Sweden)

    Jakob H Lagerlöf

    Full Text Available The aim was to develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour; these were used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm³, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase at lower oxygen values, resulting in the ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be made using high resolution and the CTM, applied to the entire tumour.

  8. A multiscale extension of the Margrabe formula under stochastic volatility

    International Nuclear Information System (INIS)

    Kim, Jeong-Hoon; Park, Chang-Rae

    2017-01-01

    Highlights: • A fast mean-reverting stochastic volatility model is chosen to extend the classical Margrabe formula. • The resultant formula is explicitly given by the greeks of the Margrabe price itself. • We show how the stochastic volatility corrects the Margrabe price behavior. - Abstract: The pricing of financial derivatives based on stochastic volatility models has been a popular subject in computational finance. Although exact or approximate closed form formulas of the prices of many options under stochastic volatility have been obtained so that the option prices can be easily computed, such formulas for exchange options leave much to be desired. In this paper, we consider two different risky assets with two different scales of mean-reversion rate of volatility and use asymptotic analysis to extend the classical Margrabe formula, which corresponds to a geometric Brownian motion model, and obtain a pricing formula under a stochastic volatility. The resultant formula can be computed easily, simply by taking derivatives of the Margrabe price itself. Based on the formula, we show how the stochastic volatility corrects the Margrabe price behavior depending on the moneyness and the correlation coefficient between the two asset prices.
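
    For reference, the classical constant-volatility Margrabe price that the paper extends can be computed directly; the stochastic-volatility correction described above, built from the greeks of this price, is not reproduced here. The numbers in the example call are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def margrabe(S1, S2, T, sigma1, sigma2, rho, q1=0.0, q2=0.0):
    """Classical Margrabe price of the option to exchange asset 2 for asset 1
    at time T, under correlated geometric Brownian motions."""
    sigma = np.sqrt(sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (np.log(S1 / S2) + (q2 - q1 + 0.5 * sigma**2) * T) \
         / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S1 * np.exp(-q1 * T) * norm.cdf(d1) \
         - S2 * np.exp(-q2 * T) * norm.cdf(d2)

print(f"exchange option price: {margrabe(100, 95, 1.0, 0.3, 0.25, 0.4):.2f}")
```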

  9. Creating Large Scale Database Servers

    International Nuclear Information System (INIS)

    Becla, Jacek

    2001-01-01

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region

  10. Creating Large Scale Database Servers

    Energy Technology Data Exchange (ETDEWEB)

    Becla, Jacek

    2001-12-14

    The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high precision investigation of the decays of the B-meson produced from electron-positron interactions. The experiment, started in May 1999, will generate approximately 300TB/year of data for 10 years. All of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). To date, over 70TB of data have been placed in Objectivity/DB, making it one of the largest databases in the world. Providing access to such a large quantity of data through a database server is a daunting task. A full-scale testbed environment had to be developed to tune various software parameters and a fundamental change had to occur in the AMS architecture to allow it to scale past several hundred terabytes of data. Additionally, several protocol extensions had to be implemented to provide practical access to large quantities of data. This paper will describe the design of the database and the changes that we needed to make in the AMS for scalability reasons and how the lessons we learned would be applicable to virtually any kind of database server seeking to operate in the Petabyte region.

  11. Large-scale pool fires

    Directory of Open Access Journals (Sweden)

    Steinhaus Thomas

    2007-01-01

    Full Text Available A review of research into the burning behavior of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low source Froude numbers and the radiative interaction with the fire source. In hydrocarbon fires, higher soot levels at increased diameters result in radiation blockage effects around the perimeter of large fire plumes; this yields lower emissive powers and a drastic reduction in the radiative loss fraction; whilst there are simplifying factors with these phenomena, arising from the fact that soot yield can saturate, there are other complications deriving from the intermittency of the behavior, with luminous regions of efficient combustion appearing randomly in the outer surface of the fire according to the turbulent fluctuations in the fire plume. Knowledge of the fluid flow instabilities, which lead to the formation of large eddies, is also key to understanding the behavior of large-scale fires. Here modeling tools can be effectively exploited in order to investigate the fluid flow phenomena, including RANS- and LES-based computational fluid dynamics codes. The latter are well-suited to representation of the turbulent motions, but a number of challenges remain with their practical application. Massively-parallel computational resources are likely to be necessary in order to be able to adequately address the complex coupled phenomena to the level of detail that is necessary.

  12. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  13. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    Science.gov (United States)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution a stochastic simulation-optimization framework for decision support for optimal planning and operation of water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the usage of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPF) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application on a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.

  14. Research on trading patterns of large users' direct power purchase considering consumption of clean energy

    Science.gov (United States)

    Guojun, He; Lin, Guo; Zhicheng, Yu; Xiaojun, Zhu; Lei, Wang; Zhiqiang, Zhao

    2017-03-01

    In order to reduce the stochastic volatility of supply and demand and to maintain power system stability after large-scale stochastic renewable energy sources are connected to the grid, their development and consumption should be promoted by market means. The bilateral contract transaction model of large users' direct power purchase conforms to the actual situation of our country. This paper analyzes the trading patterns of large users' direct power purchase, summarizes the characteristics of each type of power generation, and mainly introduces the centralized matching mode. Through the establishment of a priority evaluation index system for power generation enterprises and an analysis of their priority based on fuzzy clustering, a method for ranking power generation enterprises' priority in direct power purchase trading is put forward. This method supports suggestions for the trading mechanism of large users' direct power purchase, which helps to further promote direct power purchase by large users.

  15. A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall

    Science.gov (United States)

    Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.

    2017-06-01

    Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of the mean, variance, and ACFs of both the continuous and discrete components, respectively. To achieve full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study the rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of the main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to real-world rainfall time series is shown as a proof of concept.
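
    A minimal sketch of the disaggregation idea, using a Bernoulli-gamma mixed-type model and a simple proportional adjustment to restore the coarse totals (the paper's own adjusting procedure is more refined and preserves the stochastic structure exactly); all parameter values are illustrative:

        import numpy as np

        rng = np.random.default_rng(0)

        def disaggregate(coarse_total, k=4, p_wet=0.4, shape=0.7, scale=5.0):
            """Split one coarse-scale total into k fine-scale values: Bernoulli
            occurrences times gamma depths, then a proportional adjustment so
            the fine-scale values sum exactly to the coarse total."""
            wet = rng.random(k) < p_wet
            depths = np.where(wet, rng.gamma(shape, scale, k), 0.0)
            if depths.sum() == 0.0:            # all-dry draw: park the mass once
                depths[rng.integers(k)] = 1.0
            return depths * (coarse_total / depths.sum())

        six_hourly = disaggregate(23.5)        # split a 23.5 mm daily total
        print(six_hourly, six_hourly.sum())    # sums back to 23.5 by construction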

  16. Large Scale Survey Data in Career Development Research

    Science.gov (United States)

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  17. Threshold Dynamics of a Stochastic Chemostat Model with Two Nutrients and One Microorganism

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2017-01-01

    Full Text Available A new stochastic chemostat model with two substitutable nutrients and one microorganism is proposed and investigated. Firstly, for the corresponding deterministic model, the threshold for extinction and permanence of the microorganism is obtained by analyzing the stability of the equilibria. Then, for the stochastic model, the threshold of the stochastic chemostat for extinction and permanence of the microorganism is explored. The difference between the thresholds of the deterministic and stochastic models shows that a large stochastic disturbance can affect the persistence of the microorganism and is harmful to its cultivation. To illustrate this phenomenon, we give some computer simulations with different intensities of the stochastic noise disturbance.
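
    The role of the noise intensity can be illustrated with an Euler-Maruyama sketch of a single-nutrient Monod chemostat with multiplicative noise on the microorganism; this is a simplified stand-in, not the paper's two-nutrient model, and all parameters are assumed:

        import numpy as np

        rng = np.random.default_rng(1)

        def chemostat(sigma, T=200.0, dt=0.01, D=0.3, S_in=2.0, mu_max=1.0, K=0.5):
            """Euler-Maruyama for a single-nutrient Monod chemostat with
            multiplicative noise on the microorganism x (illustrative form)."""
            S, x = S_in, 0.1
            for _ in range(int(T / dt)):
                growth = mu_max * S / (K + S)
                dW = rng.normal(0.0, np.sqrt(dt))
                S += (D * (S_in - S) - growth * x) * dt
                x = max(x + (growth - D) * x * dt + sigma * x * dW, 0.0)
            return x

        # small noise: persistence; large noise: extinction despite a
        # deterministically viable growth rate
        for sigma in (0.1, 1.5):
            print(f"sigma={sigma}: final biomass ~ {chemostat(sigma):.4f}")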

  18. Similitude and scaling of large structural elements: Case study

    Directory of Open Access Journals (Sweden)

    M. Shehadeh

    2015-06-01

    Full Text Available Scaled-down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities and the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work, the Buckingham π theorem is used to develop the relations (i.e. geometry, loading and properties) between the model and a large structural element such as those found in existing large petroleum oil drilling rigs. The model is designed, loaded and treated according to a set of similitude requirements that relate it to the large structural element. Three independent scale factors, representing the three fundamental dimensions of mass, length and time, need to be selected for designing the scaled-down model. Numerical predictions of the stress distribution within the model and of its elastic deformation under steady loading are made. The results are compared with those obtained from numerical computations on the full-scale structure. The effect of the scaled-down model's size and material on the accuracy of the modeling technique is thoroughly examined.
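
    Once the three fundamental scale factors are chosen, every derived quantity follows from its dimensions, as the sketch below shows; the 1:10 geometric scale and same-material assumption are hypothetical examples, not values from the study:

        # Derived similitude scale factors from the three fundamental ones
        # (mass, length, time), as selected in a Buckingham-pi analysis.
        # Hypothetical choice: a 1:10 geometric model of the same material.
        L = 1 / 10            # length scale (model/prototype)
        M = L**3              # same material => density scale 1 => mass ~ L^3
        T = L                 # time scale chosen so that stress scales as 1

        scales = {
            "area":     L**2,
            "volume":   L**3,
            "velocity": L / T,           # = 1
            "force":    M * L / T**2,    # = L^2: applied loads shrink with area
            "stress":   M / (L * T**2),  # = 1: model stresses equal prototype's
        }
        for name, value in scales.items():
            print(f"{name:8s} scale factor: {value:g}")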

  19. Noncausal stochastic calculus

    CERN Document Server

    Ogawa, Shigeyoshi

    2017-01-01

    This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô Calculus is essentially based on the "hypothesis of causality", asking random functions to be adapted to a natural filtration generated by Brownian motion or more generally by square integrable martingale. The intention in this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such theory of noncausal stochastic calculus but ...

  20. Moral hazard in the credit market when the collateral value is stochastic

    OpenAIRE

    Niinimäki, Juha-Pekka

    2010-01-01

    This theoretical paper explores the effects of costly and non-costly collateral on moral hazard, when collateral value may fluctuate. Given that all collateral is costly, stochastic collateral will entail the same positive incentive effects as nonstochastic collateral, provided the variation in collateral value is modest. If it is large, the incentive effects are smaller under stochastic collateral. With non-costly collateral, stochastic collateral entails positive incentive effects or no eff...

  1. Large-scale preparation of hollow graphitic carbon nanospheres

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Jun; Li, Fu [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Bai, Yu-Jun, E-mail: byj97@126.com [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); State Key laboratory of Crystal Materials, Shandong University, Jinan 250100 (China); Han, Fu-Dong; Qi, Yong-Xin; Lun, Ning [Key Laboratory for Liquid-Solid Structural Evolution and Processing of Materials, Ministry of Education, Shandong University, Jinan 250061 (China); Lu, Xi-Feng [Lunan Institute of Coal Chemical Engineering, Jining 272000 (China)

    2013-01-15

    Hollow graphitic carbon nanospheres (HGCNSs) were synthesized on a large scale by a simple reaction between glucose and Mg at 550 °C in an autoclave. Characterization by X-ray diffraction, Raman spectroscopy and transmission electron microscopy demonstrates the formation of HGCNSs with an average diameter of about 10 nm and a wall thickness of a few graphene layers. The HGCNSs exhibit a reversible capacity of 391 mAh g⁻¹ after 60 cycles when used as anode materials for Li-ion batteries. -- Graphical abstract: Hollow graphitic carbon nanospheres could be prepared on a large scale by the simple reaction between glucose and Mg at 550 °C, and they exhibit electrochemical performance superior to graphite. Highlights: • Hollow graphitic carbon nanospheres (HGCNSs) were prepared on a large scale at 550 °C. • The preparation is simple, effective and eco-friendly. • The in situ yielded MgO nanocrystals promote the graphitization. • The HGCNSs exhibit electrochemical performance superior to graphite.

  2. Large-scale impact cratering on the terrestrial planets

    International Nuclear Information System (INIS)

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the Earth and Moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. Abundant evidence is seen not only that impact cratering was an important surface process in planetary history but also that large impact events produced effects that were crustal in scale. By way of example, it is noted that the formation of multiring basins on the early Moon was as important in defining the planetary tectonic framework as plate tectonics is on the Earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though they no longer occur with the frequency and magnitude of early solar system history, large-scale impact events continue to affect the local geology of the planets. 92 references

  3. Optical interconnect for large-scale systems

    Science.gov (United States)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  4. Stochastic clustering of material surface under high-heat plasma load

    Science.gov (United States)

    Budaev, Viacheslav P.

    2017-11-01

    The results of a study of surfaces formed under high-temperature plasma loads on various materials such as tungsten, carbon and stainless steel are presented. High-temperature plasma irradiation leads to an inhomogeneous stochastic clustering of the surface with self-similar granularity - fractality - on scales from the nanoscale to the macroscale. Cauliflower-like structures of tungsten and carbon materials are formed under high heat plasma loads in fusion devices. The statistical characteristics of hierarchical granularity and scale invariance are estimated. They differ qualitatively from the roughness of an ordinary Brownian surface, which is possibly due to universal mechanisms of stochastic clustering of material surfaces under the influence of high-temperature plasma.

  5. The critical domain size of stochastic population models.

    Science.gov (United States)

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternate approaches; one is a scaling up to the population level using the Central Limit Theorem, and the other a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
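
    A minimal sketch of the branching-process approach: each individual leaves a Poisson number of recruits whose mean is discounted by the fraction of dispersers retained in the domain; the retention function and all parameters are hypothetical, chosen only to show the critical domain size emerging as an extinction-probability threshold:

        import numpy as np

        rng = np.random.default_rng(2)

        def extinction_prob(domain_size, R0=1.6, half_sat=5.0, gens=50, reps=2000):
            """Galton-Watson sketch: each individual leaves Poisson offspring
            with mean R0 times a (hypothetical, saturating) fraction of
            dispersers retained in a domain of the given size."""
            m = R0 * domain_size / (domain_size + half_sat)
            extinct = 0
            for _ in range(reps):
                n = 1
                for _ in range(gens):
                    if n == 0 or n > 10_000:   # extinct, or effectively established
                        break
                    n = rng.poisson(m * n)
                extinct += n == 0
            return extinct / reps

        for L in (2, 5, 10, 20, 40):
            print(f"domain size {L:2d}: P(extinction) ~ {extinction_prob(L):.2f}")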

  6. [A large-scale accident in Alpine terrain].

    Science.gov (United States)

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, the time factor, confounded by adverse weather conditions or darkness, creates enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, a helicopter should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  7. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts, the weather is a high-frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low-frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare
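
    The way fGn memory is exploited can be sketched with a conditional-Gaussian linear forecast built directly from the fGn autocovariance; the Hurst exponent and sample size are illustrative, and this is not the authors' innovations-based implementation:

        import numpy as np

        def fgn_cov(n, H):
            """Autocovariance matrix of unit-variance fractional Gaussian noise."""
            k = np.arange(n, dtype=float)
            g = 0.5 * ((k + 1)**(2*H) - 2*k**(2*H) + np.abs(k - 1)**(2*H))
            return g[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

        rng = np.random.default_rng(3)
        H, n = 0.9, 200                     # strong long-range memory
        C = fgn_cov(n + 1, H)
        x = np.linalg.cholesky(C) @ rng.normal(size=n + 1)   # one simulated path

        # Optimal linear (conditional-Gaussian) forecast of x[n] from x[0..n-1]
        past, cross = C[:n, :n], C[n, :n]
        w = np.linalg.solve(past, cross)    # regression weights on the whole past
        print("forecast:", w @ x[:n], " actual:", x[n])
        print("explained variance (skill):", w @ cross / C[n, n])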

  8. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, ' Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large scale structures is considered within a model with a string on a toroidal space-time. Firstly, the space-time geometry is presented. In this geometry, the Universe is represented by a string describing a torus surface. Thereafter, the large scale structure of the Universe is derived from the string oscillations. The results are in agreement with the cellular structure of the large scale distribution and with the theory of a Cantorian space-time.

  9. Scaling of the stochastic broadening from low mn, high mn, and peeling-ballooning magnetic perturbations in the DIII-D tokamak

    Science.gov (United States)

    Zhao, Michael; Punjabi, Alkesh; Ali, Halima

    2009-11-01

    The equilibrium EFIT data for the DIII-D shot 115467 is used to construct the equilibrium generating function for magnetic field line trajectories in the DIII-D tokamak in natural canonical coordinates [A. Punjabi and H. Ali, Phys. Plasmas 15, 122502 (2008)]. A canonical transformation is used to construct an area-preserving map for field line trajectories in the natural canonical coordinates in the DIII-D. Maps in natural canonical coordinates have the advantage that natural canonical coordinates can be inverted to calculate real space coordinates (R,Z,φ), and there is no problem in crossing the separatrix. This is not possible for magnetic coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. This map is applied to calculate stochastic broadening from the low mn (m,n)=(1,1)+(1,-1); high mn (m,n)=(4,1)+(3,1); and the peeling-ballooning (m,n)=(40,10)+(30,10) magnetic perturbations. In all three cases, the scaling of the widths of the stochastic layer near the X-point in the principal plane of the DIII-D deviates at most by 6% from the 1/2 power Boozer-Rechester scaling [A. Boozer and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.

  10. Large-scale Motion of Solar Filaments

    Indian Academy of Sciences (India)

    tribpo

    Large-scale Motion of Solar Filaments. Pavel Ambrož, Astronomical Institute of the Acad. Sci. of the Czech Republic, CZ-25165. Ondrejov, The Czech Republic. e-mail: pambroz@asu.cas.cz. Alfred Schroll, Kanzelhöehe Solar Observatory of the University of Graz, A-9521 Treffen,. Austria. e-mail: schroll@solobskh.ac.at.

  11. Sensitivity analysis for large-scale problems

    Science.gov (United States)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  12. Topology Optimization of Large Scale Stokes Flow Problems

    DEFF Research Database (Denmark)

    Aage, Niels; Poulsen, Thomas Harpsøe; Gersborg-Hansen, Allan

    2008-01-01

    This note considers topology optimization of large scale 2D and 3D Stokes flow problems using parallel computations. We solve problems with up to 1.125.000 elements in 2D and 128.000 elements in 3D on a shared memory computer consisting of Sun UltraSparc IV CPUs.

  13. The Cosmology Large Angular Scale Surveyor

    Science.gov (United States)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; et al.

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  14. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    Science.gov (United States)

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale showed equal performance, although simpler, when compared with other scales predicting ELVO. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
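
    Scoring with PASS reduces to counting abnormal items, roughly as in the sketch below; this is a simplified rendering of the three NIHSS-derived items for illustration only, not a clinical tool:

        def pass_score(loc_abnormal, gaze_deviation, arm_weakness):
            """PASS = number of abnormal items among: level of consciousness
            (month/age questions), gaze palsy/deviation, arm weakness."""
            return int(loc_abnormal) + int(gaze_deviation) + int(arm_weakness)

        score = pass_score(loc_abnormal=True, gaze_deviation=True, arm_weakness=False)
        if score >= 2:   # the cut point reported for suspecting ELVO
            print(f"PASS = {score}: suspect emergent large vessel occlusion")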

  15. Non-linear stochastic response of a shallow cable

    DEFF Research Database (Denmark)

    Larsen, Jesper Winther; Nielsen, Søren R.K.

    2004-01-01

    The paper considers the stochastic response of geometrical non-linear shallow cables. Large rain-wind induced cable oscillations with non-linear interactions have been observed in many large cable stayed bridges during the last decades. The response of the cable is investigated for a reduced two...

  16. Stochastic Differential Equation-Based Flexible Software Reliability Growth Model

    Directory of Open Access Journals (Sweden)

    P. K. Kapur

    2009-01-01

    Full Text Available Several software reliability growth models (SRGMs) have been developed by software developers for tracking and measuring the growth of reliability. As the size of a software system is large and the number of faults detected during the testing phase becomes large, the change in the number of faults that are detected and removed through each debugging becomes sufficiently small compared with the initial fault content at the beginning of the testing phase. In such a situation, we can model the software fault detection process as a stochastic process with a continuous state space. In this paper, we propose a new software reliability growth model based on an Itô-type stochastic differential equation. We consider an SDE-based generalized Erlang model with a logistic error detection function. The model is estimated and validated on real-life data sets cited in the literature to show its flexibility. The proposed model, integrated with the concept of stochastic differential equations, performs comparatively better than the existing NHPP-based models.
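
    A generic Itô-type SRGM of this family can be simulated with Euler-Maruyama; the drift/diffusion form and all parameter values below are illustrative assumptions, not the estimated model from the paper:

        import numpy as np

        rng = np.random.default_rng(4)

        def srgm_paths(a=500.0, b_max=0.1, beta=10.0, sigma=0.05,
                       T=200.0, dt=0.1, paths=5):
            """Euler-Maruyama for dN = b(t)(a - N) dt + sigma (a - N) dW,
            with a logistic detection-rate function b(t); a = initial faults."""
            n = int(T / dt)
            t = np.arange(n) * dt
            b = b_max / (1.0 + beta * np.exp(-b_max * t))
            N = np.zeros((paths, n))
            for i in range(1, n):
                dW = rng.normal(0.0, np.sqrt(dt), paths)
                rem = a - N[:, i - 1]
                N[:, i] = np.clip(N[:, i - 1] + b[i - 1] * rem * dt
                                  + sigma * rem * dW, 0.0, a)
            return t, N

        t, N = srgm_paths()
        print("faults detected by t=200 (per path):", N[:, -1].round(1))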

  17. Performance evaluation of full-scale tuned liquid dampers (TLDs) for vibration control of large wind turbines using real-time hybrid testing

    DEFF Research Database (Denmark)

    Zhang, Zili; Staino, Andrea; Basu, Biswajit

    2016-01-01

    Highlights •Performance evaluation of full-scale tuned liquid dampers carried out for wind turbines. •Coupled blade-tower model considered in the numerical sub-structure. •Stochastic turbulence due to rotationally sampled spectra considered. •Effect of damping screens experimentally investigated...

  18. Stochastic model of the near-to-injector spray formation assisted by a high-speed coaxial gas jet

    Energy Technology Data Exchange (ETDEWEB)

    Gorokhovski, M [Laboratoire de Mecanique des Fluides et d' Acoustique, CNRS-Ecole Centrale de Lyon-INSA Lyon-Universite Claude Bernard Lyon 1, 36 Avenue Guy de Collongue, 69131 Ecully Cedex (France); Jouanguy, J [Laboratoire de Mecanique de Lille, Ecole Centrale de Lille, Blvd Paul Langevin, 59655 Villeneuve d' Ascq Cedex (France); Chtab-Desportes, A [CD-adapco, 31 rue Delizy 93698 Pantin Cedex (France)], E-mail: mikhael.gorokhovski@ec-lyon.fr

    2009-06-01

    The stochastic model of spray formation in the vicinity of the air-blast atomizer has been described and assessed by comparison with measurements. In this model, the 3D configuration of a continuous liquid core is simulated by spatial trajectories of specifically introduced stochastic particles. The stochastic process is based on the assumption that due to a high Weber number, the exiting continuous liquid jet is depleted in the framework of statistical universalities of a cascade fragmentation under scaling symmetry. The parameters of the stochastic process have been determined according to observations from Lasheras's, Hopfinger's and Villermaux's scientific groups. The spray formation model, based on the computation of spatial distribution of the probability of finding the non-fragmented liquid jet in the near-to-injector region, is combined with the large-eddy simulation (LES) in the coaxial gas jet. Comparison with measurements reported in the literature for different values of the gas-to-liquid dynamic pressure ratio showed that the model predicts correctly the distribution of liquid in the close-to-injector region, the mean length of the liquid core, the spray angle and the typical size of droplets in the far field of spray.

  19. Solving stochastic inflation for arbitrary potentials

    International Nuclear Information System (INIS)

    Martin, Jerome; Musso, Marcello

    2006-01-01

    A perturbative method for solving the Langevin equation of inflationary cosmology in the presence of backreaction is presented. In the Gaussian approximation, the method permits an explicit calculation of the probability distribution of the inflaton field for an arbitrary potential, with or without the volume effects taken into account. The perturbative method is then applied to various concrete models, namely large field, small field, hybrid, and running mass inflation. New results on the stochastic behavior of the inflaton field in those models are obtained. In particular, it is confirmed that the stochastic effects can be important in new inflation, while it is demonstrated that they are negligible in (vacuum dominated) hybrid inflation. The case of stochastic running mass inflation is discussed in some detail, and it is argued that quantum effects blur the distinction between the four classical versions of this model. It is also shown that the self-reproducing regime is likely to be important in this case

  20. Approaching complexity by stochastic methods: From biological systems to turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, Rudolf [Institute for Theoretical Physics, University of Muenster, D-48149 Muenster (Germany); Peinke, Joachim [Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Sahimi, Muhammad [Mork Family Department of Chemical Engineering and Materials Science, University of Southern California, Los Angeles, CA 90089-1211 (United States); Reza Rahimi Tabar, M., E-mail: mohammed.r.rahimi.tabar@uni-oldenburg.de [Department of Physics, Sharif University of Technology, Tehran 11155-9161 (Iran, Islamic Republic of); Institute of Physics, Carl von Ossietzky University, D-26111 Oldenburg (Germany); Fachbereich Physik, Universitaet Osnabrueck, Barbarastrasse 7, 49076 Osnabrueck (Germany)

    2011-09-15

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
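
    Step (i), reconstructing the Langevin drift and diffusion from measured data, is commonly done by estimating conditional moments of the increments (Kramers-Moyal coefficients); a minimal sketch on synthetic Ornstein-Uhlenbeck data with assumed parameters:

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic Langevin data: Ornstein-Uhlenbeck dx = -x dt + 0.5 dW
        dt, n = 0.01, 200_000
        x = np.empty(n)
        x[0] = 0.0
        for i in range(1, n):
            x[i] = x[i - 1] - x[i - 1] * dt + 0.5 * np.sqrt(dt) * rng.normal()

        # Drift D1(x) and diffusion D2(x) from conditional moments of increments
        bins = np.linspace(-0.6, 0.6, 13)
        idx = np.digitize(x[:-1], bins)
        dx = np.diff(x)
        for j in (2, 5, 8, 11):
            sel = idx == j
            xc = 0.5 * (bins[j - 1] + bins[j])
            D1 = dx[sel].mean() / dt
            D2 = (dx[sel] ** 2).mean() / (2 * dt)
            print(f"x={xc:+.2f}: D1~{D1:+.2f} (true {-xc:+.2f}), "
                  f"D2~{D2:.3f} (true 0.125)")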

  1. Approaching complexity by stochastic methods: From biological systems to turbulence

    International Nuclear Information System (INIS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-01-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.

  2. Analysis using large-scale ringing data

    Directory of Open Access Journals (Sweden)

    Baillie, S. R.

    2004-06-01

    Full Text Available Birds are highly mobile organisms and there is increasing evidence that studies at large spatial scales are needed if we are to properly understand their population dynamics. While classical metapopulation models have rarely proved useful for birds, more general metapopulation ideas involving collections of populations interacting within spatially structured landscapes are highly relevant (Harrison, 1994). There is increasing interest in understanding patterns of synchrony, or lack of synchrony, between populations and the environmental and dispersal mechanisms that bring about these patterns (Paradis et al., 2000). To investigate these processes we need to measure abundance, demographic rates and dispersal at large spatial scales, in addition to gathering data on relevant environmental variables. There is an increasing realisation that conservation needs to address rapid declines of common and widespread species (they will not remain so if such trends continue) as well as the management of small populations that are at risk of extinction. While the knowledge needed to support the management of small populations can often be obtained from intensive studies in a few restricted areas, conservation of widespread species often requires information on population trends and processes measured at regional, national and continental scales (Baillie, 2001). While management prescriptions for widespread populations may initially be developed from a small number of local studies or experiments, there is an increasing need to understand how such results will scale up when applied across wider areas. There is also a vital role for monitoring at large spatial scales both in identifying such population declines and in assessing population recovery. Gathering data on avian abundance and demography at large spatial scales usually relies on the efforts of large numbers of skilled volunteers. Volunteer studies based on ringing (for example Constant Effort Sites [CES

  3. Optimal Rules for Single Machine Scheduling with Stochastic Breakdowns

    Directory of Open Access Journals (Sweden)

    Jinwei Gu

    2014-01-01

    Full Text Available This paper studies the problem of scheduling a set of jobs on a single machine subject to stochastic breakdowns, where jobs have to be restarted if preemptions occur because of breakdowns. The breakdown process of the machine is independent of the jobs processed on the machine. The processing times required to complete the jobs are constants if no breakdown occurs. The machine uptimes are independently and identically distributed (i.i.d.) and are subject to a uniform distribution. It is proved that the Longest Processing Time first (LPT) rule minimizes the expected makespan. For the large-scale problem, it is also shown that the Shortest Processing Time first (SPT) rule is optimal for minimizing the expected total completion time of all jobs.
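
    The two priority rules are simple sort orders; the sketch below shows the orderings together with a breakdown-free completion-time computation for illustration only (the paper's optimality results concern expectations under stochastic breakdowns with restarts):

        def lpt(processing_times):
            """Longest Processing Time first: minimizes expected makespan here."""
            return sorted(processing_times, reverse=True)

        def spt(processing_times):
            """Shortest Processing Time first: minimizes expected total
            completion time for the large-scale problem."""
            return sorted(processing_times)

        def total_completion_time(schedule):
            t = total = 0
            for p in schedule:        # breakdown-free case, for illustration
                t += p
                total += t
            return total

        jobs = [7, 2, 9, 4, 1]
        print("LPT order:", lpt(jobs))
        print("SPT order:", spt(jobs),
              "-> total completion time:", total_completion_time(spt(jobs)))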

  4. Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing

    OpenAIRE

    Qiang Liu; Yi Qin; Guodong Li

    2018-01-01

    Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...

  5. Managing Risk and Uncertainty in Large-Scale University Research Projects

    Science.gov (United States)

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  6. Parallel clustering algorithm for large-scale biological data sets.

    Science.gov (United States)

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed in our method in order to minimize the global communication cost among processes. A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies.
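
    A parallel similarity-matrix construction can be sketched with a process pool over row blocks; unlike the authors' memory-shared implementation, this toy version copies the data to each worker, and the data sizes are arbitrary:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def row_block(args):
            """Negative squared Euclidean similarities for one block of rows,
            the similarity used by affinity propagation."""
            X, lo, hi = args
            d = ((X[lo:hi, None, :] - X[None, :, :]) ** 2).sum(axis=2)
            return lo, -d

        def similarity_matrix(X, workers=4):
            n = len(X)
            S = np.empty((n, n))
            edges = np.linspace(0, n, workers + 1, dtype=int)
            tasks = [(X, lo, hi) for lo, hi in zip(edges[:-1], edges[1:])]
            with ProcessPoolExecutor(workers) as pool:
                for lo, block in pool.map(row_block, tasks):
                    S[lo:lo + len(block)] = block
            return S

        if __name__ == "__main__":
            X = np.random.default_rng(6).normal(size=(2000, 20))
            print(similarity_matrix(X).shape)   # (2000, 2000)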

  7. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationship between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives are overlapped within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)

  8. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  9. Stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses

    International Nuclear Information System (INIS)

    Wang, Jiang; Guo, Xinmeng; Yu, Haitao; Liu, Chen; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2014-01-01

    Highlights: •We study stochastic resonance in small-world neural networks with hybrid synapses. •The resonance effect depends largely on the probability of chemical synapse. •An optimal chemical synapse probability exists to evoke network resonance. •Network topology affects the stochastic resonance in hybrid neuronal networks. - Abstract: The dependence of stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses on the probability of chemical synapse and on the rewiring probability is investigated. A subthreshold periodic signal is imposed on one single neuron within the neuronal network as a pacemaker. It is shown that, irrespective of the probability of chemical synapse, there exists a moderate intensity of external noise optimizing the response of neuronal networks to the pacemaker. Moreover, the effect of pacemaker-driven stochastic resonance of the system depends largely on the probability of chemical synapse. A high probability of chemical synapse needs a lower noise intensity to evoke stochastic resonance in the networked neuronal systems. In addition, for fixed noise intensity, there is an optimal chemical synapse probability, which can promote the propagation of the localized subthreshold pacemaker across neural networks. And the optimal chemical synapse probability becomes even larger as the coupling strength decreases. Furthermore, the small-world topology has a significant impact on the stochastic resonance in hybrid neuronal networks. It is found that increasing the rewiring probability can always enhance the stochastic resonance until it approaches the random network limit

  10. Design study on sodium cooled large-scale reactor

    International Nuclear Information System (INIS)

    Murakami, Tsutomu; Hishida, Masahiko; Kisohara, Naoyuki

    2004-07-01

    In Phase 1 of the 'Feasibility Studies on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the possibility of fulfilling the design requirements of the F/S. In Phase 2, design improvement for further cost reduction and establishment of the plant concept has been performed. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2003, which is the third year of Phase 2. In the JFY2003 design study, critical subjects related to safety, structural integrity and thermal hydraulics which were found in the last fiscal year were examined and the plant concept was modified. Furthermore, fundamental specifications of the main systems and components were set, and the economics were evaluated. In addition, as the interim evaluation of the candidate concepts of the FBR fuel cycle is to be conducted, cost effectiveness and achievability of the development goal were evaluated and the data for the three large-scale reactor candidate concepts were prepared. As a result of this study, the plant concept of the sodium-cooled large-scale reactor has been constructed, which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection and narrowing down of candidate concepts at the end of Phase 2. (author)

  11. Design study on sodium-cooled large-scale reactor

    International Nuclear Information System (INIS)

    Shimakawa, Yoshio; Nibe, Nobuaki; Hori, Toru

    2002-05-01

    In Phase 1 of the 'Feasibility Study on Commercialized Fast Reactor Cycle Systems (F/S)', an advanced loop type reactor was selected as a promising concept for a sodium-cooled large-scale reactor with the possibility of fulfilling the design requirements of the F/S. In Phase 2 of the F/S, it is planned to proceed with a preliminary conceptual design of a sodium-cooled large-scale reactor based on the design of the advanced loop type reactor. Through the design study, it is intended to construct a plant concept that can demonstrate its attractiveness and competitiveness as a commercialized reactor. This report summarizes the results of the design study on the sodium-cooled large-scale reactor performed in JFY2001, which is the first year of Phase 2. In the JFY2001 design study, a plant concept was constructed based on the design of the advanced loop type reactor, and fundamental specifications of the main systems and components were set. Furthermore, critical subjects related to safety, structural integrity, thermal hydraulics, operability, maintainability and economy were examined and evaluated. As a result of this study, the plant concept of the sodium-cooled large-scale reactor has been constructed, which has a prospect of satisfying the economic goal (construction cost: less than 200,000 yen/kWe, etc.) and of resolving the critical subjects. From now on, reflecting the results of elemental experiments, the preliminary conceptual design of this plant will proceed toward the selection and narrowing down of candidate concepts at the end of Phase 2. (author)

  12. Stochastic Modeling and Analysis of Power System with Renewable Generation

    DEFF Research Database (Denmark)

    Chen, Peiyuan

    Unlike traditional fossil-fuel based power generation, renewable generation such as wind power relies on uncontrollable prime sources such as wind speed. Wind speed varies stochastically, which to a large extent determines the stochastic behavior of power generation from wind farms...... that such a stochastic model can be used to simulate the effect of load management on the load duration curve. As CHP units are turned on and off by regulating power, CHP generation has discrete output and thus can be modeled by a transition matrix based discrete Markov chain. As the CHP generation has a strong diurnal...
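
    A transition-matrix discrete Markov chain of the kind described for CHP generation can be sketched in a few lines; the states, output levels and probabilities below are hypothetical, not estimated CHP data:

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical 3-state CHP output (MW) and hourly transition matrix
        levels = np.array([0.0, 5.0, 10.0])        # off / part load / full load
        P = np.array([[0.90, 0.08, 0.02],
                      [0.10, 0.80, 0.10],
                      [0.02, 0.08, 0.90]])          # each row sums to 1

        def simulate(hours=8760, state=0):
            out = np.empty(hours)
            for t in range(hours):
                out[t] = levels[state]
                state = rng.choice(3, p=P[state])   # next state from current row
            return out

        chp = simulate()
        print(f"mean output {chp.mean():.2f} MW, "
              f"share at full load {100 * (chp == 10.0).mean():.1f}%")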

  13. Large scale CMB anomalies from thawing cosmic strings

    Energy Technology Data Exchange (ETDEWEB)

    Ringeval, Christophe [Centre for Cosmology, Particle Physics and Phenomenology, Institute of Mathematics and Physics, Louvain University, 2 Chemin du Cyclotron, 1348 Louvain-la-Neuve (Belgium); Yamauchi, Daisuke; Yokoyama, Jun' ichi [Research Center for the Early Universe (RESCEU), Graduate School of Science, The University of Tokyo, Tokyo 113-0033 (Japan); Bouchet, François R., E-mail: christophe.ringeval@uclouvain.be, E-mail: yamauchi@resceu.s.u-tokyo.ac.jp, E-mail: yokoyama@resceu.s.u-tokyo.ac.jp, E-mail: bouchet@iap.fr [Institut d' Astrophysique de Paris, UMR 7095-CNRS, Université Pierre et Marie Curie, 98bis boulevard Arago, 75014 Paris (France)

    2016-02-01

    Cosmic strings formed during inflation are expected to be either diluted over super-Hubble distances, i.e., invisible today, or to have crossed our past light cone very recently. We discuss the latter situation in which a few strings imprint their signature in the Cosmic Microwave Background (CMB) anisotropies after recombination. Being almost frozen in the Hubble flow, these strings are quasi-static and evade almost all of the previously derived constraints on their tension while being able to source large scale anisotropies in the CMB sky. Using a local variance estimator on thousands of numerically simulated Nambu-Goto all-sky maps, we compute the expected signal and show that it can mimic a dipole modulation at large angular scales while being negligible at small angles. Interestingly, such a scenario generically produces one cold spot from the thawing of a cosmic string loop. Mixed with anisotropies of inflationary origin, we find that a few strings of tension GU = O(1) × 10⁻⁶ match the amplitude of the dipole modulation reported in the Planck satellite measurements and could be at the origin of other large scale anomalies.

  14. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  15. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

    Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  16. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  17. Investment in different sized SMRs: Economic evaluation of stochastic scenarios by INCAS code

    International Nuclear Information System (INIS)

    Barenghi, S.; Boarin, S.; Ricotti, M. E.

    2012-01-01

    Small modular LWR concepts are being developed and proposed to investors worldwide. They capitalize on the operating track record of GEN II LWRs, while introducing innovative design enhancements allowed by the smaller size, together with additional benefits from the higher degree of modularization and from the deployment of multiple units on the same site (i.e. the 'Economy of Multiple' paradigm). Nevertheless, Small Modular Reactors pay for a diseconomy of scale that represents a relevant penalty on a capital-intensive investment. Investors in the nuclear power generation industry face a very high financial risk, due to high capital commitment and an exceptionally long pay-back time. Investment risk arises from the uncertainty that affects scenario conditions over such a long time horizon. Risk aversion is increased by adverse conditions of financial markets and a general economic downturn, as is the case nowadays. This work investigates both the profitability and the risk of alternative investments in a single Large Reactor (LR) or in multiple SMRs of different sizes deployed on a single site, with total installed power equivalent to the single LR, drawing information from the stochastic distribution of the project's Internal Rate of Return (IRR). Uncertain scenario conditions and stochastic input assumptions are included in the analysis, representing investment uncertainty and risk. Results show that, despite the combination of a much larger number of stochastic variables in SMR fleets, the uncertainty of project profitability is not increased compared to the LR: SMRs have features able to smooth the IRR variance and control investment risk. Despite the diseconomy of scale, SMRs represent a limited capital commitment and a scalable investment option that meets investors' interest, even in the developed and mature markets that are the traditional marketplace for LRs. (authors)
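
    The kind of stochastic profitability analysis performed by the INCAS code can be caricatured by Monte Carlo sampling of an IRR distribution; the cash-flow model and all figures below are invented for illustration and do not reproduce the code's assumptions:

        import numpy as np

        rng = np.random.default_rng(8)

        def irr(cashflows, lo=-0.5, hi=1.0, tol=1e-6):
            """Internal rate of return by bisection on NPV(r)."""
            t = np.arange(len(cashflows))
            npv = lambda r: float((cashflows / (1.0 + r) ** t).sum())
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if npv(mid) > 0 else (lo, mid)
            return 0.5 * (lo + hi)

        def sample_irr():
            # Invented stochastic inputs: overnight cost and electricity price
            capex = rng.normal(4000.0, 600.0)      # $/kWe, spread over 5 years
            price = rng.normal(70.0, 10.0)         # $/MWh over a 40-year life
            revenue = price * 8.0                  # ~8 MWh/yr net margin per kWe
            cf = np.r_[np.full(5, -capex / 5), np.full(40, revenue)]
            return irr(cf)

        irrs = np.array([sample_irr() for _ in range(2000)])
        p5, p95 = np.percentile(irrs, [5, 95])
        print(f"IRR mean {100*irrs.mean():.1f}%, 5th-95th pct "
              f"{100*p5:.1f}%..{100*p95:.1f}%")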

  18. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10^6 cores and sustained performance of over ∼2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  19. Evidencing `Tight Bound States' in the Hydrogen Atom: Empirical Manipulation of Large-Scale XD in Violation of QED

    Science.gov (United States)

    Amoroso, Richard L.; Vigier, Jean-Pierre

    2013-09-01

    In this work we extend Vigier's recent theory of `tight bound state' (TBS) physics and propose empirical protocols to test not only for their putative existence, but also for the claim that their existence, if demonstrated, would provide the first empirical evidence of string theory, because it occurs in the context of large-scale extra dimensionality (LSXD) cast in a unique M-theoretic vacuum corresponding to the new Holographic Anthropic Multiverse (HAM) cosmological paradigm. Physicists generally consider spacetime as a stochastic foam containing a zero-point field (ZPF) from which virtual particles, restricted by the quantum uncertainty principle to the Planck time, wink in and out of existence. According to the extended de Broglie-Bohm-Vigier causal stochastic interpretation of quantum theory, spacetime and the matter embedded within it are created, annihilated and recreated as a virtual locus of reality with a continuous quantum evolution (de Broglie matter waves) governed by a pilot wave - a `super quantum potential' extended in HAM cosmology to be synonymous with a `force of coherence' inherent in the Unified Field, UF. We consider this backcloth to be a covariant polarized vacuum of the Dirac type, generally ignored by contemporary physicists. We discuss open questions of the physics of point particles (fermionic nilpotent singularities). We propose a new set of experiments to test for TBS in a Dirac covariant polarized vacuum LSXD hyperspace, suggestive of a recently tested special case of the Lorentz transformation put forth by Kowalski and Vigier. These protocols reach far beyond the recent battery of atomic spectral violations of QED performed at NIST.

  20. Balancing modern Power System with large scale of wind power

    DEFF Research Database (Denmark)

    Basit, Abdul; Altin, Müfit; Hansen, Anca Daniela

    2014-01-01

    Power system operators must ensure robust, secure and reliable power system operation even with a large-scale integration of wind power. Electricity generated from intermittent wind in large proportion may affect the control of the power system balance, and thus cause deviations in the power system frequency in small or islanded power systems, or in tie-line power flows in interconnected power systems. Therefore, the large-scale integration of wind power into the power system strongly concerns secure and stable grid operation. To ensure stable power system operation, the evolving power system has to be analysed with improved analytical tools and techniques. This paper proposes techniques for active power balance control in future power systems with large-scale wind power integration, where a power balancing model provides the hour-ahead dispatch plan with a reduced planning horizon, and the real-time...
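
    As a toy illustration of the hour-ahead balancing idea (the load profile, wind figures and the forecast-error model are invented; the paper's balancing model is considerably more detailed):

      import numpy as np

      rng = np.random.default_rng(2)
      hours = np.arange(24)
      load = 1200.0 + 200.0 * np.sin(2 * np.pi * hours / 24)   # MW, hypothetical profile
      wind_forecast = rng.uniform(100.0, 500.0, 24)            # hour-ahead wind forecast
      wind_actual = wind_forecast * rng.normal(1.0, 0.15, 24)  # realised wind power

      dispatch = load - wind_forecast          # hour-ahead plan for conventional units
      imbalance = wind_forecast - wind_actual  # residual balanced in real time by reserves
      print(f"largest real-time regulation requirement: {np.abs(imbalance).max():.0f} MW")

    The point of the toy is the split itself: the dispatch plan absorbs the forecastable part of the variability, and only the forecast error has to be covered by real-time regulation.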

  1. On square-wave-driven stochastic resonance for energy harvesting in a bistable system

    Energy Technology Data Exchange (ETDEWEB)

    Su, Dongxu, E-mail: sudx@iis.u-tokyo.ac.jp [Graduate School of Engineering, The University of Tokyo, Tokyo 1538505 (Japan); Zheng, Rencheng; Nakano, Kimihiko [Institute of Industrial Science, The University of Tokyo, Tokyo 1538505 (Japan); Cartmell, Matthew P [Department of Mechanical Engineering, University of Sheffield, Sheffield S1 3JD (United Kingdom)

    2014-11-15

    Stochastic resonance is a physical phenomenon through which the throughput of energy within an oscillator excited by a stochastic source can be boosted by adding a small modulating excitation. This study investigates the feasibility of implementing square-wave-driven stochastic resonance to enhance energy harvesting. The motivating hypothesis was that such stochastic resonance can be efficiently realized in a bistable mechanism. However, the condition for the occurrence of stochastic resonance is conventionally defined by the Kramers rate; this definition is inadequate here because it requires an estimate of the white noise density, which is difficult to obtain. In this paper, a bistable mechanism is designed using an explicit analytical model, which suggests a new approach for achieving stochastic resonance. Experimental tests confirm that the addition of a small-scale force to the bistable system excited by a random signal apparently leads to a corresponding amplification of the response, which we now term square-wave-driven stochastic resonance. The study therefore indicates that this approach may be a promising way to improve the performance of an energy harvester under certain forms of random excitation.
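
    A minimal Euler-Maruyama sketch of the phenomenon, for an overdamped particle in the bistable potential U(x) = -x²/2 + x⁴/4 driven by noise plus a weak square wave (all parameters are illustrative; this is not the paper's harvester model):

      import numpy as np

      rng = np.random.default_rng(3)
      dt, n = 2e-3, 500_000
      t = np.arange(n) * dt
      A, omega, sigma = 0.20, 0.05, 0.45          # weak square wave plus broadband noise
      square = A * np.sign(np.sin(omega * t))     # the small modulating excitation

      x = np.empty(n)
      x[0] = -1.0                                 # start in one well of the potential
      dW = np.sqrt(dt) * rng.standard_normal(n - 1)
      for i in range(n - 1):
          # Euler-Maruyama step for dx = (x - x^3 + square) dt + sigma dW
          x[i + 1] = x[i] + (x[i] - x[i] ** 3 + square[i]) * dt + sigma * dW[i]

      hops = int(np.sum(np.diff(np.sign(x)) != 0))
      print(f"inter-well transitions observed: {hops}")

    Sweeping sigma and counting the transitions that synchronise with the square wave is the standard way to locate the resonance peak; at resonance the inter-well hopping, and hence the harvestable stroke, is maximised.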

  2. On square-wave-driven stochastic resonance for energy harvesting in a bistable system

    International Nuclear Information System (INIS)

    Su, Dongxu; Zheng, Rencheng; Nakano, Kimihiko; Cartmell, Matthew P

    2014-01-01

    Stochastic resonance is a physical phenomenon through which the throughput of energy within an oscillator excited by a stochastic source can be boosted by adding a small modulating excitation. This study investigates the feasibility of implementing square-wave-driven stochastic resonance to enhance energy harvesting. The motivating hypothesis was that such stochastic resonance can be efficiently realized in a bistable mechanism. However, the condition for the occurrence of stochastic resonance is conventionally defined by the Kramers rate; this definition is inadequate here because it requires an estimate of the white noise density, which is difficult to obtain. In this paper, a bistable mechanism is designed using an explicit analytical model, which suggests a new approach for achieving stochastic resonance. Experimental tests confirm that the addition of a small-scale force to the bistable system excited by a random signal apparently leads to a corresponding amplification of the response, which we now term square-wave-driven stochastic resonance. The study therefore indicates that this approach may be a promising way to improve the performance of an energy harvester under certain forms of random excitation.

  3. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While the focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high-resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce the initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high-resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped that these measurements will improve both holistic and process-based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low-resolution forecast model.
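
    The multiplicative form of SPPT is simple to state: the parametrised tendency is rescaled by (1 + e), where e is a correlated random field. A scalar sketch (the operational scheme uses a spatially correlated spectral pattern with several time scales; the time step, decorrelation time and amplitude below are only indicative):

      import numpy as np

      rng = np.random.default_rng(4)
      n_steps, dt = 500, 900.0                 # model steps of 900 s (assumed)
      tau, sigma = 6 * 3600.0, 0.5             # decorrelation time and sd, SPPT-like
      phi = np.exp(-dt / tau)                  # AR(1) autocorrelation over one step

      e = np.zeros(n_steps)
      for i in range(1, n_steps):
          # AR(1) red-noise process with stationary standard deviation sigma
          e[i] = phi * e[i - 1] + sigma * np.sqrt(1 - phi ** 2) * rng.standard_normal()

      tendency = np.full(n_steps, 2.0e-5)      # some parametrised tendency (placeholder)
      perturbed = (1.0 + np.clip(e, -1.0, 1.0)) * tendency   # multiplicative perturbation
      print(f"realised perturbation sd: {e.std():.2f}")

    The coarse-graining analysis in the abstract is precisely a test of whether the measured tendency `error' scales with the tendency itself, as this multiplicative form assumes.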

  4. STOCHASTIC FLOWS OF MAPPINGS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow generalizes both the superprocesses of stochastic flows and the stochastic diffeomorphisms induced by the strong solutions of stochastic differential equations.

  5. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif

    2017-01-07

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.

  6. Large-Scale Graph Processing Using Apache Giraph

    KAUST Repository

    Sakr, Sherif; Orakzai, Faisal Moeen; Abdelaziz, Ibrahim; Khayyat, Zuhair

    2017-01-01

    This book takes its reader on a journey through Apache Giraph, a popular distributed graph processing platform designed to bring the power of big data processing to graph data. Designed as a step-by-step self-study guide for everyone interested in large-scale graph processing, it describes the fundamental abstractions of the system, its programming models and various techniques for using the system to process graph data at scale, including the implementation of several popular and advanced graph analytics algorithms.
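
    Giraph programs are written in Java against a Pregel-style "think like a vertex" API. Purely as a conceptual sketch of that model, here is a toy single-source shortest-paths superstep loop in Python (the function name and message-passing bookkeeping are ours, not Giraph's):

      import math

      def sssp_pregel(edges, source):
          # edges are (src, dst, weight); vertices exchange messages per superstep
          vertices = {v for e in edges for v in e[:2]}
          value = {v: math.inf for v in vertices}
          messages = {source: [0.0]}                 # superstep 0: wake the source
          while messages:                            # run until no vertex is active
              next_messages = {}
              for v, inbox in messages.items():      # each active vertex computes
                  best = min(inbox)
                  if best < value[v]:                # improved distance: update and
                      value[v] = best                # notify out-neighbours
                      for u, w in ((u, w) for (s, u, w) in edges if s == v):
                          next_messages.setdefault(u, []).append(best + w)
              messages = next_messages               # synchronisation barrier
          return value

      print(sssp_pregel([("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 5.0)], "a"))

    In Giraph itself the same logic lives in a compute() method on a BasicComputation subclass, with the framework handling partitioning, message delivery and the barrier across workers.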

  7. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, common 3D display software such as MeshLab has difficulty achieving real-time display of, and interaction with, large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and the 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
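
    The heart of any view-dependent LOD scheme is the refinement test: use a coarser level whenever its geometric error, projected onto the screen, stays below a pixel tolerance. A minimal sketch (the pinhole-projection formula and all thresholds are generic assumptions, not this system's actual criterion):

      import math

      def choose_lod(distance, errors_coarse_to_fine, screen_height_px=1080,
                     fov_deg=60.0, max_px_error=1.0):
          # pixels per world unit for a pinhole camera at the given viewing distance
          px_per_unit = screen_height_px / (
              2.0 * distance * math.tan(math.radians(fov_deg) / 2.0))
          for lod, err in enumerate(errors_coarse_to_fine):  # largest error first
              if err * px_per_unit <= max_px_error:
                  return lod                  # coarsest level that still looks exact
          return len(errors_coarse_to_fine) - 1  # fall back to the finest level

      # a nearby node needs the finest level, a distant one can use a coarse level
      print(choose_lod(5.0, [4.0, 1.0, 0.25, 0.05]),
            choose_lod(500.0, [4.0, 1.0, 0.25, 0.05]))

    In an out-of-core renderer this test also drives paging: only the levels that pass it for the current viewpoint need to be resident in RAM.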

  8. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff therefore provides a foundation for approaching European hydrology with respect to observed large-scale patterns and the ability of models to capture them. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but are also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is a prerequisite for a reliable interpretation of simulation results. Model evaluations may also help detect shortcomings in model assumptions and thus enable a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large-scale...

  9. Large-scale perturbations from the waterfall field in hybrid inflation

    International Nuclear Information System (INIS)

    Fonseca, José; Wands, David; Sasaki, Misao

    2010-01-01

    We estimate large-scale curvature perturbations from isocurvature fluctuations in the waterfall field during hybrid inflation, in addition to the usual inflaton field perturbations. The tachyonic instability at the end of inflation leads to an explosive growth of super-Hubble-scale perturbations, but they retain the steep blue spectrum characteristic of vacuum fluctuations in a massive field during inflation. The power spectrum thus peaks around the Hubble-horizon scale at the end of inflation. We extend the usual δN formalism to include the essential role of these small fluctuations when estimating the large-scale curvature perturbation. The resulting curvature perturbation due to fluctuations in the waterfall field is second-order, and the spectrum is expected to be of order 10⁻⁵⁴ on cosmological scales.
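
    In δN language, the statement that the waterfall contribution is second order can be written compactly. A sketch of the standard expansion (notation ours: φ is the inflaton, χ the waterfall field; not copied from the paper):

      \zeta = \delta N \simeq N_{\varphi}\,\delta\varphi
            + \tfrac{1}{2}\,N_{\chi\chi}\!\left(\delta\chi^{2} - \langle\delta\chi^{2}\rangle\right),
      \qquad N_{\chi} = 0 \ \text{by the } \chi \to -\chi \ \text{symmetry,}

    so the waterfall field first enters ζ through δχ², which is why its contribution is second-order and so strongly suppressed (the ∼10⁻⁵⁴ quoted above) on cosmological scales.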

  10. Decoupling local mechanics from large-scale structure in modular metamaterials

    Science.gov (United States)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independently of its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures, we show that decoupling global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  11. Factors influencing lysis time stochasticity in bacteriophage λ

    Directory of Open Access Journals (Sweden)

    Dennehy John J

    2011-08-01

    Background: Despite identical genotypes and seemingly uniform environments, stochastic gene expression and other dynamic intracellular processes can produce considerable phenotypic diversity within clonal microbes. One trait that provides a good model to explore the molecular basis of stochastic variation is the timing of host lysis by bacteriophage (phage). Results: Individual lysis events of thermally-inducible λ lysogens were observed using a temperature-controlled perfusion chamber mounted on an inverted microscope. Both the mean lysis time (MLT) and its associated standard deviation (SD) were estimated. Using the SD as a measure of lysis time stochasticity, we showed that lysogenic cells in controlled environments varied widely in lysis times, and that the level of lysis time stochasticity depended on allelic variation in the holin sequence, late promoter (pR') activity, and host growth rate. In general, the MLT was positively correlated with the SD. Both lower pR' activities and lower host growth rates resulted in larger SDs. Results from premature lysis, induced by adding KCN at different time points after lysogen induction, showed a negative correlation between the timing of KCN addition and lysis time stochasticity. Conclusions: Taken together with results published by others, we conclude that a large fraction of λ lysis time stochasticity is the result of random events following the expression and diffusion of the holin protein. Consequently, factors influencing the timing of reaching critical holin concentrations in the cell membrane, such as the holin production rate, strongly influence both the mean lysis time and the lysis time stochasticity.
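
    The conclusion, that lysis time variability reflects random accumulation of holin up to a critical membrane concentration, maps naturally onto a first-passage-time picture. A toy model (the threshold, rates and pure-Poisson insertion assumption are ours, for illustration only):

      import numpy as np

      rng = np.random.default_rng(5)

      def lysis_time(rate, threshold=1000):
          # sum of `threshold` exponential waiting times: holin molecules insert
          # into the membrane one by one until a critical number triggers lysis
          return rng.exponential(1.0 / rate, threshold).sum()

      for rate in (50.0, 10.0):               # high vs low pR' activity (illustrative)
          s = np.array([lysis_time(rate) for _ in range(500)])
          print(f"rate {rate:4.0f}/min: MLT {s.mean():6.1f} min, SD {s.std():5.2f} min")

    Consistent with the abstract, the lower "promoter activity" yields both a longer MLT and a larger SD: for this Gamma first-passage time the SD is sqrt(threshold)/rate, so mean and spread rise together.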

  12. Gas contract portfolio management: a stochastic programming approach

    International Nuclear Information System (INIS)

    Haurie, A.; Smeers, Y.; Zaccour, G.

    1991-01-01

    This paper deals with a stochastic programming model that complements long-range market simulation models generating scenarios for the evolution of demand and prices for gas in different market segments. A gas company has to negotiate contracts with durations ranging from one to twenty years. This stochastic model is designed to assess the risk associated with committing the gas production capacity of the company to these market segments. Different approaches are presented to overcome the difficulties associated with the very large size of the resulting optimization problem.
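
    A two-stage recourse model of this kind can be written as a deterministic-equivalent LP over scenarios. A tiny illustration (our generic formulation, not the paper's model; three demand/spot-price scenarios with invented numbers):

      import numpy as np
      from scipy.optimize import linprog

      demand = np.array([80.0, 100.0, 130.0])   # gas demand scenarios
      spot = np.array([7.0, 8.0, 12.0])         # spot price in each scenario
      prob = np.array([0.3, 0.5, 0.2])          # scenario probabilities
      c_contract = 5.0                          # long-term contract price per unit

      S = len(demand)
      # variables [x, y_1..y_S]: contracted volume now, spot purchases per scenario
      cost = np.concatenate(([c_contract], prob * spot))
      # coverage in every scenario: x + y_s >= d_s, written as -x - y_s <= -d_s
      A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])
      res = linprog(cost, A_ub=A_ub, b_ub=-demand, bounds=[(0, None)] * (S + 1))
      print(f"contract volume {res.x[0]:.0f}, expected total cost {res.fun:.0f}")

    With twenty-year horizons and many scenarios per stage, this deterministic equivalent explodes combinatorially, which is exactly the size difficulty the paper's solution approaches are meant to overcome.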

  13. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large-scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biased galaxy formation are all discussed. (UK)

  14. A practical process for light-water detritiation at large scales

    Energy Technology Data Exchange (ETDEWEB)

    Boniface, H.A. [Atomic Energy of Canada Limited, Chalk River, ON (Canada); Robinson, J., E-mail: jr@tyne-engineering.com [Tyne Engineering, Burlington, ON (Canada); Gnanapragasam, N.V.; Castillo, I.; Suppiah, S. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2014-07-01

    AECL and Tyne Engineering have recently completed a preliminary engineering design for a modest-scale tritium removal plant for light water, intended for installation at AECL's Chalk River Laboratories (CRL). This plant design was based on the Combined Electrolysis and Catalytic Exchange (CECE) technology developed at CRL over many years and demonstrated there and elsewhere. The general features and capabilities of this design have been reported, as has the versatility of the design for separating any pair of the three hydrogen isotopes. The same CECE technology could be applied directly to very large-scale wastewater detritiation, such as is the case at the Fukushima Daiichi Nuclear Power Station. However, since the CECE process scales linearly with throughput, the required capital and operating costs are substantial for such large-scale applications. This paper discusses some options for reducing the costs of very large-scale detritiation. Options include: reducing tritium removal effectiveness; energy recovery; improving the tolerance of impurities; and use of less expensive or more efficient equipment. A brief comparison with alternative processes is also presented. (author)

  15. OffshoreDC DC grids for integration of large scale wind power

    DEFF Research Database (Denmark)

    Zeni, Lorenzo; Endegnanew, Atsede Gualu; Stamatiou, Georgios

    The present report summarizes the main findings of the Nordic Energy Research project “DC grids for large scale integration of offshore wind power – OffshoreDC”. The project was funded by Nordic Energy Research through the TFI programme and was active between 2011 and 2016. The overall objective of the project was to drive the development of the VSC-based HVDC technology for future large-scale offshore grids, supporting a standardised and commercial development of the technology, and improving the opportunities for the technology to support power system integration of large-scale offshore...

  16. Low-Complexity Transmit Antenna Selection and Beamforming for Large-Scale MIMO Communications

    Directory of Open Access Journals (Sweden)

    Kun Qian

    2014-01-01

    Transmit antenna selection plays an important role in large-scale multiple-input multiple-output (MIMO) communications, but optimal large-scale MIMO antenna selection is a technical challenge. Exhaustive search is often employed in antenna selection, but it cannot be efficiently implemented in large-scale MIMO communication systems due to its prohibitively high computational complexity. This paper proposes a low-complexity interactive multiple-parameter optimization method for joint transmit antenna selection and beamforming in large-scale MIMO communication systems. The objective is to jointly maximize the channel outage capacity and signal-to-noise ratio (SNR) performance and minimize the mean square error in transmit antenna selection and minimum variance distortionless response (MVDR) beamforming, without exhaustive search. The effectiveness of all the proposed methods is verified by extensive simulation results. It is shown that the required antenna selection processing time of the proposed method does not increase with the number of selected antennas, whereas the computational complexity of the conventional exhaustive search method increases significantly when large-scale antenna arrays are employed. This makes the method particularly useful for antenna selection in large-scale MIMO communication systems.
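
    The complexity gap the authors exploit is easy to see numerically. A generic sketch contrasting exhaustive selection with a simple norm-based heuristic (this is not the paper's interactive multiple-parameter method, just an illustration of the search-space problem; all dimensions and the SNR are arbitrary):

      import itertools
      import numpy as np

      rng = np.random.default_rng(6)

      def capacity(H, snr):
          # equal-power MIMO capacity over the selected transmit antennas
          k = H.shape[1]
          m = np.eye(H.shape[0]) + (snr / k) * (H @ H.conj().T)
          return np.log2(np.linalg.det(m).real)

      n_rx, n_tx, k, snr = 4, 16, 4, 10.0
      H = (rng.standard_normal((n_rx, n_tx))
           + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)

      # exhaustive search: optimal, but C(16, 4) = 1820 determinant evaluations
      best = max(itertools.combinations(range(n_tx), k),
                 key=lambda s: capacity(H[:, s], snr))
      # low-complexity heuristic: keep the k columns with the largest channel norms
      greedy = tuple(np.argsort(np.linalg.norm(H, axis=0))[-k:])

      print(f"exhaustive {capacity(H[:, best], snr):.2f} b/s/Hz, "
            f"norm-based {capacity(H[:, greedy], snr):.2f} b/s/Hz")

    The combinatorial count C(n_tx, k) grows so fast with array size that any per-antenna heuristic with a small capacity loss wins in practice, which is the motivation for the low-complexity joint selection/beamforming design above.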

  17. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvatha, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to those of lattice gauge theory with dynamical fermions. The construction incorporates the idea of an approximate spectral split of the determinant through a local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed.

  18. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    Directory of Open Access Journals (Sweden)

    Gianluca Calcagni

    2017-10-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  19. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    International Nuclear Information System (INIS)

    Calcagni, Gianluca; Ronco, Michele

    2017-01-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.

  20. Dimensional flow and fuzziness in quantum gravity: Emergence of stochastic spacetime

    Science.gov (United States)

    Calcagni, Gianluca; Ronco, Michele

    2017-10-01

    We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.