WorldWideScience

Sample records for large deviation approach

  1. Guessing Revisited: A Large Deviations Approach

    CERN Document Server

    Hanawal, Manjesh Kumar

    2010-01-01

    The problem of guessing a random string is revisited. A close relation between guessing and compression is first established. Then it is shown that if the sequence of distributions of the information spectrum satisfies the large deviation property with a certain rate function, then the limiting guessing exponent exists and is a scalar multiple of the Legendre-Fenchel dual of the rate function. Other sufficient conditions related to certain continuity properties of the information spectrum are briefly discussed. This approach highlights the importance of the information spectrum in determining the limiting guessing exponent. All known prior results are then re-derived as example applications of our unifying approach.

  2. Large deviations

    CERN Document Server

    Varadhan, S R S

    2016-01-01

    The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
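
The decay rates described above can be made concrete in the simplest setting: for fair coin flips, Cramér's theorem gives P(S_n/n ≥ a) ≈ exp(−n I(a)) with an explicit rate function. The following sketch (illustrative parameter choices, not taken from the book) compares the exact binomial tail with the rate function as n grows:

```python
import math

def log_binom_tail(n, k0, p=0.5):
    # exact log P(Bin(n, p) >= k0), computed stably via log-sum-exp over the upper tail
    logs = [math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p)
            for k in range(k0, n + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(L - m) for L in logs))

def rate(a, p=0.5):
    # Cramer rate function for Bernoulli(p): I(a) = a log(a/p) + (1-a) log((1-a)/(1-p))
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

a = 0.6
# empirical decay rate -(1/n) log P(S_n/n >= a); approaches I(a) from above as n grows
emp_rates = {n: -log_binom_tail(n, round(a * n)) / n for n in (100, 400, 1600)}
for n, r in emp_rates.items():
    print(n, round(r, 5), "vs I(a) =", round(rate(a), 5))
```

The sub-exponential prefactor explains why the empirical rate exceeds I(a) at finite n and only slowly converges to it.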

  3. Large deviations

    CERN Document Server

    Hollander, Frank den

    2008-01-01

    This book is an introduction to the theory and applications of large deviations, a branch of probability theory that describes the probability of rare events in terms of variational problems. By focusing the theory, in Part A of the book, on random sequences, the author succeeds in conveying the main ideas behind large deviations without a need for technicalities, thus providing a concise and accessible entry to this challenging and captivating subject. The selection of modern applications, described in Part B of the book, offers a good sample of what large deviation theory is able to achieve.

  4. Large deviations of ergodic counting processes: a statistical mechanics approach.

    Science.gov (United States)

    Budini, Adrián A

    2011-07-01

    The large-deviation method allows one to characterize an ergodic counting process in terms of a thermodynamic frame where a free energy function determines the asymptotic nonstationary statistical properties of its fluctuations. Here we study this formalism through a statistical mechanics approach, that is, with an auxiliary counting process that maximizes an entropy function associated with the thermodynamic potential. We show that the realizations of this auxiliary process can be obtained after applying a conditional measurement scheme to the original ones, providing in this way an alternative measurement interpretation of the thermodynamic approach. General results are obtained for renewal counting processes, that is, those where the time intervals between consecutive events are independent and defined by a unique waiting time distribution. The underlying statistical mechanics is controlled by the same waiting time distribution, rescaled by an exponential decay measured by the free energy function. Scale invariance, shift closure, and intermittency phenomena are obtained and interpreted in this context. Similar conclusions apply to nonrenewal processes when the memory between successive events is induced by a stochastic waiting time distribution.
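
The free-energy formalism above can be illustrated in the simplest case, a plain Poisson counting process, whose scaled cumulant generating function is known in closed form; the rate gamma and evaluation point k below are arbitrary illustrative choices, not taken from the paper:

```python
import math

gamma = 2.0  # Poisson event rate (illustrative)

def scgf(s):
    # scaled cumulant generating function of a Poisson counting process:
    # (1/t) log E[exp(s * N_t)] = gamma * (e^s - 1), exact for every t
    return gamma * (math.exp(s) - 1.0)

def rate(k):
    # its Legendre-Fenchel transform in closed form: I(k) = k log(k/gamma) - k + gamma
    return k * math.log(k / gamma) - k + gamma

k = 3.5
# numerical Legendre-Fenchel transform sup_s [s*k - scgf(s)] on a fine grid
num = max(s * k - scgf(s) for s in (i * 1e-4 for i in range(-50000, 50001)))
print(num, rate(k))  # the two values agree to grid accuracy
```

The maximizing s* = log(k/gamma) defines the exponentially tilted (auxiliary) process, here a Poisson process with rate gamma * e^{s*} = k.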

  5. Inertial Manifold and Large Deviations Approach to Reduced PDE Dynamics

    Science.gov (United States)

    Cardin, Franco; Favretti, Marco; Lovison, Alberto

    2017-09-01

    In this paper a certain type of reaction-diffusion equation (similar to the Allen-Cahn equation) is the starting point for setting up a genuine thermodynamic reduction, i.e. one involving a finite number of parameters or collective variables of the initial system. We first perform a finite Lyapunov-Schmidt reduction of the cited reaction-diffusion equation when reformulated as a variational problem. In this way we gain a finite-dimensional ODE description of the initial system which preserves the gradient structure of the original one and which is exact in the static case and only approximate in the dynamic case. Our main concern is how to deal with this approximate reduced description of the initial PDE. To start with, we note that our approximate reduced ODE is similar to the approximate inertial manifold introduced by Temam and coworkers for the Navier-Stokes equations. As a second approach, we take into account the uncertainty (loss of information) introduced by the above-mentioned approximate reduction by considering the stochastic version of the ODE. We study this reduced stochastic system using classical tools from large deviations, viscosity solutions and weak KAM Hamilton-Jacobi theory. In the last part we suggest a possible use of a result of our approach in the comprehensive treatment of non-equilibrium thermodynamics given by Macroscopic Fluctuation Theory.

  6. Optimal aggregation of noisy observations: A large deviations approach

    Energy Technology Data Exchange (ETDEWEB)

    Murayama, Tatsuto; Davis, Peter, E-mail: murayama@cslab.kecl.ntt.co.j, E-mail: davis@cslab.kecl.ntt.co.j [NTT Communication Science Laboratories, NTT Corporation, 2-4, Hikaridai, Seika-cho, Keihanna, Kyoto 619-0237 (Japan)

    2010-06-01

    Sensing and data aggregation tasks in distributed systems should not be considered as separate issues. The quality of collective estimation involves a fundamental tradeoff between sensing quality, which can be increased by increasing the number of sensors, and aggregation quality under a given capacity of the network, which decreases if the number of sensors is too large. In this paper, we examine a system-level strategy for optimal aggregation of data from an ensemble of independent sensors. In particular, we consider large-scale aggregation from very many sensors, in which case the network capacity diverges to infinity. Then, by applying large deviations techniques, we arrive at the following result: larger scale aggregation always outperforms smaller scale aggregation at higher noise levels, while below a critical value of noise there exist moderate-scale aggregation levels at which optimal estimation is realized. At the critical value of noise, there is an abrupt change in the behavior of a parameter characterizing the aggregation strategy, similar to a phase transition in statistical physics.

  7. Large Deviations and Metastability

    Science.gov (United States)

    Olivieri, Enzo; Eulália Vares, Maria

    2005-02-01

    This self-contained account of the main results in large deviation theory includes recent developments and emphasizes the Freidlin-Wentzell results on small random perturbations. Metastability is described on physical grounds, followed by the development of more exacting approaches to its description. The first part of the book then develops such pertinent tools as the theory of large deviations which is used to provide a physically relevant dynamical description of metastability. Written for graduate students, this book affords an excellent route into contemporary research as well.

  8. A large deviations approach to limit theory for heavy-tailed time series

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Wintenberger, Olivier

    2016-01-01

    In this paper we propagate a large deviations approach for proving limit theory for (generally) multivariate time series with heavy tails. We make this notion precise by introducing regularly varying time series. We provide general large deviation results for functionals acting on a sample path...... and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results...

  9. Large deviations and idempotent probability

    CERN Document Server

    Puhalskii, Anatolii

    2001-01-01

    In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...

  10. Living at the Edge: A Large Deviations Approach to the Outage MIMO Capacity

    CERN Document Server

    Kazakopoulos, P; Moustakas, A L; Caire, G

    2009-01-01

    Using a large deviations approach, we calculate the probability distribution of the mutual information of MIMO channels in the limit of large antenna numbers. In contrast to previous methods that focused only on the distribution close to its mean (thus obtaining an asymptotically Gaussian distribution), we calculate the full distribution, including its tails, which strongly deviate from the Gaussian behavior near the mean. The resulting distribution interpolates seamlessly between the Gaussian approximation for rates $R$ close to the ergodic value of the mutual information and the approach of Zheng and Tse for large signal to noise ratios $\\rho$. This calculation provides us with a tool to obtain outage probabilities analytically at any point in the $(R, \\rho, N)$ parameter space, as long as the number of antennas $N$ is not too small. In addition, this method also yields the probability distribution of eigenvalues constrained in the subspace where the mutual information per antenna is fixed to $R$ for a given ...

  11. Large deviations from freeness

    CERN Document Server

    Kargin, Vladislav

    2010-01-01

    Let H=A+UBU* where A and B are two N-by-N Hermitian matrices and U is a Haar-distributed random unitary matrix, and let \\mu_H, \\mu_A, and \\mu_B be empirical measures of eigenvalues of matrices H, A, and B, respectively. Then, it is known (see, for example, Pastur-Vasilchuk, CMP, 2000, v.214, pp.249-286) that for large N, measure \\mu_H is close to the free convolution of measures \\mu_A and \\mu_B, where the free convolution is a non-linear operation on probability measures. The large deviations of the cumulative distribution function of \\mu_H from its expectation have been studied by Chatterjee in JFA, 2007, v. 245, pp.379-389. In this paper we improve Chatterjee's estimate and show that P {\\sup_x |F_H (x) -F_+ (x)| > \\delta} < exp [-f(\\delta) N^2], where F_H (x) and F_+ (x) denote the cumulative distribution functions of \\mu_H and the free convolution of \\mu_A and \\mu_B, respectively, and where f(\\delta) is a specific function.

  12. Large deviations and portfolio optimization

    Science.gov (United States)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
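
The distinction the abstract draws between the average return and the typical return of a multiplicative process can be sketched in a few lines; the 1.5x/0.6x binary payoff and all other parameters below are illustrative assumptions, not taken from the paper:

```python
import math, random

# Binary multiplicative return: each period, wealth is multiplied by 1.5 or 0.6
up, down, p = 1.5, 0.6, 0.5

mean_growth = math.log(p * up + (1 - p) * down)          # log E[X] = ln 1.05 > 0
typical_growth = p * math.log(up) + (1 - p) * math.log(down)  # E[ln X] < 0

print(mean_growth, typical_growth)

# the average wealth grows, yet a typical path decays: check via the sample median
random.seed(0)
n, paths = 100, 2000
finals = []
for _ in range(paths):
    w = 1.0
    for _ in range(n):
        w *= up if random.random() < p else down
    finals.append(w)
finals.sort()
print(finals[paths // 2])  # median final wealth, far below the starting wealth of 1
```

The gap between log E[X] and E[log X] is exactly the large-deviations effect the abstract refers to: the mean is dominated by exponentially rare lucky paths.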

  13. A unified approach to the large deviations for small perturbations of random evolution equations

    Institute of Scientific and Technical Information of China (English)

    胡亦钧

    1997-01-01

    Let be the processes governed by the following stochastic differential equations: where v(t) is a random process independent of the Brownian motion B(·). Some large deviation (LD) properties of are proved. For a particular case, an explicit representation of the rate function is also given, which solves a problem posed by Eizenberg and Freidlin. In the meantime, an abstract LD theorem is obtained.

  14. Front propagation in steady cellular flows: A large-deviation approach

    Science.gov (United States)

    Tzella, Alexandra; Vanneste, Jacques

    2012-11-01

    We examine the speed of propagation of chemical fronts modelled by the Fisher-Kolmogorov-Petrovskii-Piskunov nonlinearity in steady cellular flows. A number of predictions have been previously derived assuming small molecular diffusivity (large Péclet number) and either very slow (small Damköhler number) or very fast (large Damköhler number) chemical reactions. Here, we employ the theory of large deviations to obtain a family of eigenvalue problems from whose solution the front speed is inferred. The matched-asymptotics solution of these eigenvalue problems in the limit of large Péclet number provides approximations for the front speed for a wide range of Damköhler numbers. Two distinguished regimes are identified; in both regimes the front speed is given by a non-trivial function of the Péclet and Damköhler numbers which we determine. Earlier results, characterised by power-law dependences on these numbers, are recovered as limiting cases. The theoretical results are illustrated by a number of numerical simulations. The authors acknowledge support from EPSRC grant EP/I028072/1.

  15. Large deviations in Taylor dispersion

    Science.gov (United States)

    Kahlen, Marcel; Engel, Andreas; Van den Broeck, Christian

    2017-01-01

    We establish a link between the phenomenon of Taylor dispersion and the theory of empirical distributions. Using this connection, we derive, upon applying the theory of large deviations, an alternative and much more precise description of the long-time regime for Taylor dispersion.

  16. A Large Deviation, Hamilton-Jacobi Equation Approach to a Statistical Theory for Turbulence

    Science.gov (United States)

    2012-09-03

    and its associated compressible Euler equations, Comptes Rendus Mathematique (09 2011): 973. doi: 10.1016/j.crma.2011.08.013. The Hamilton-Jacobi PDE is shown to be well-posed (joint work with T. Nguyen, Journal de Mathematiques Pures et Appliquees). Future work focusing on the large-time behavior of such equations is currently under way.

  17. Strong disorder renewal approach to DNA denaturation and wetting: typical and large deviation properties of the free energy

    Science.gov (United States)

    Monthus, Cécile

    2017-01-01

    For the DNA denaturation transition in the presence of random contact energies, or equivalently the disordered wetting transition, we introduce a strong disorder renewal approach to construct the optimal contacts in each disordered sample of size L. The transition is found to be of infinite order, with a correlation length diverging with the essential singularity \\ln \\xi(T) \\propto |T-T_c|^{-1}. In the critical region, we analyze the statistics over samples of the free-energy density f_L and of the contact density, which is the order parameter of the transition. At the critical point, both decay as a power law of the length L but remain distributed, in agreement with the general phenomenon of lack of self-averaging at random critical points. We also obtain that for any real q > 0, the moment \\overline{Z_L^q} of order q of the partition function at the critical point is dominated by exponentially rare samples displaying a finite free-energy density, i.e. by the large deviation sector of the probability distribution of the free-energy density.

  18. Stochastic gene expression conditioned on large deviations

    Science.gov (United States)

    Horowitz, Jordan M.; Kulkarni, Rahul V.

    2017-06-01

    The intrinsic stochasticity of gene expression can give rise to large fluctuations and rare events that drive phenotypic variation in a population of genetically identical cells. Characterizing the fluctuations that give rise to such rare events motivates the analysis of large deviations in stochastic models of gene expression. Recent developments in non-equilibrium statistical mechanics have led to a framework for analyzing Markovian processes conditioned on rare events and for representing such processes by conditioning-free driven Markovian processes. We use this framework, in combination with approaches based on queueing theory, to analyze a general class of stochastic models of gene expression. Modeling gene expression as a Batch Markovian Arrival Process (BMAP), we derive exact analytical results quantifying large deviations of time-integrated random variables such as promoter activity fluctuations. We find that the conditioning-free driven process can also be represented by a BMAP that has the same form as the original process, but with renormalized parameters. The results obtained can be used to quantify the likelihood of large deviations, to characterize system fluctuations conditional on rare events and to identify combinations of model parameters that can give rise to dynamical phase transitions in system dynamics.

  19. Large Deviations in Quantum Spin Chain

    CERN Document Server

    Ogata, Yoshiko

    2008-01-01

    We show the full large deviation principle for KMS-states and $C^*$-finitely correlated states on a quantum spin chain. We cover general local observables. Our main tool is Ruelle's transfer operator method.

  20. Large deviations for a random speed particle

    CERN Document Server

    Lefevere, Raphael; Zambotti, Lorenzo

    2011-01-01

    We investigate large deviations for the empirical measure of the position and momentum of a particle traveling in a box with hot walls. The particle travels with uniform speed from left to right, until it hits the right boundary. Then it is absorbed and re-emitted from the left boundary with a new random speed, taken from an i.i.d. sequence. It turns out that this simple model, often used to simulate a heat bath, displays unusually complex large deviations features, that we explain in detail. In particular, if the tail of the update distribution of the speed is sufficiently oscillating, then the empirical measure does not satisfy a large deviations principle, and we exhibit optimal lower and upper large deviations functionals.

  1. Large Deviations without Principle: Join the Shortest Queue

    NARCIS (Netherlands)

    Ridder, Ad; Shwartz, Adam

    2004-01-01

    We develop a methodology for studying "large deviations type" questions. Our approach does not require that the large deviations principle holds, and is thus applicable to a large class of systems. We study a system of queues with exponential servers, which share an arrival stream. Arrivals are routed ...

  2. Large deviations for fractional Poisson processes

    CERN Document Server

    Beghin, Luisa

    2012-01-01

    We present large deviation results for two versions of fractional Poisson processes: the main version which is a renewal process, and the alternative version where all the random variables are weighted Poisson distributed. We also present a sample path large deviation result for suitably normalized counting processes; finally we show how this result can be applied to the two versions of fractional Poisson processes considered in this paper.

  3. The large deviations theorem and ergodicity

    Energy Technology Data Exchange (ETDEWEB)

    Gu Rongbao [School of Finance, Nanjing University of Finance and Economics, Nanjing 210046 (China)

    2007-12-15

    In this paper, some relationships between stochastic and topological properties of dynamical systems are studied. For a continuous map f from a compact metric space X into itself, we show that if f satisfies the large deviations theorem then it is topologically ergodic. Moreover, we introduce the topologically strong ergodicity, and prove that if f is a topologically strongly ergodic map satisfying the large deviations theorem then it is sensitively dependent on initial conditions.

  4. On large deviations for ensembles of distributions

    Science.gov (United States)

    Khrychev, D. A.

    2013-11-01

    The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each \\varepsilon>0 the nonempty set \\mathscr P_\\varepsilon of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set \\{\\mathscr P_\\varepsilon,\\,\\varepsilon>0\\}, hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.

  5. Large deviations for tandem queueing systems

    Directory of Open Access Journals (Sweden)

    Roland L. Dobrushin

    1994-01-01

    The crude asymptotics of the large delay probability in a tandem queueing system is considered. The main result states that one of the two channels in the tandem system defines the crude asymptotics. The constant that determines the crude asymptotics is given. The results obtained are based on the large deviation principle for random processes with independent increments on an infinite interval, recently established by the authors.

  6. Large Deviation Strategy for Inverse Problem

    CERN Document Server

    Ojima, Izumi

    2011-01-01

    Traditionally taken as a no-go theorem against the theorization of inductive processes, the Duhem-Quine thesis may interfere with the essence of statistical inference. This difficulty can be resolved by "Micro-Macro duality" [Oj03, Oj05], which clarifies the importance of specifying the pertinent aspects and accuracy relevant to concrete contexts of scientific discussions and which ensures the matching between what is to be described and what describes it in the form of the validity of duality relations. This consolidates the foundations of the inverse problem, the induction method, and statistical inference, which are crucial for sound relations between theory and experiment. To achieve this purpose, we propose the Large Deviation Strategy (LDS for short) on the basis of Micro-Macro duality, the quadrality scheme, and the large deviation principle. According to the quadrality scheme emphasizing the basic roles played by the dynamics, algebra of observables together with its representations and ...

  7. On large deviations for ensembles of distributions

    Energy Technology Data Exchange (ETDEWEB)

    Khrychev, D A [Moscow State Institute of Radio-Engineering, Electronics and Automation (Technical University), Moscow (Russian Federation)

    2013-11-30

    The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each ε>0 the nonempty set P_ε of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set {P_ε, ε>0}, hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and a stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.

  8. Large Deviations and Asymptotic Methods in Finance

    CERN Document Server

    Gatheral, Jim; Gulisashvili, Archil; Jacquier, Antoine; Teichmann, Josef

    2015-01-01

    Topics covered in this volume (large deviations, differential geometry, asymptotic expansions, central limit theorems) give a full picture of the current advances in the application of asymptotic methods in mathematical finance, and thereby provide rigorous solutions to important mathematical and financial issues, such as implied volatility asymptotics, local volatility extrapolation, systemic risk and volatility estimation. This volume gathers together ground-breaking results in this field by some of its leading experts. Over the past decade, asymptotic methods have played an increasingly important role in the study of the behaviour of (financial) models. These methods provide a useful alternative to numerical methods in settings where the latter may lose accuracy (in extremes such as small and large strikes, and small maturities), and lead to a clearer understanding of the behaviour of models, and of the influence of parameters on this behaviour. Graduate students, researchers and practitioners will find th...

  9. LARGE DEVIATIONS AND MODERATE DEVIATIONS FOR SUMS OF NEGATIVELY DEPENDENT RANDOM VARIABLES

    Institute of Scientific and Technical Information of China (English)

    Liu Li; Wan Chenggao; Feng Yanqin

    2011-01-01

    In this article, we obtain the large deviations and moderate deviations for negatively dependent (ND) and non-identically distributed random variables defined on (-∞, +∞). The results show that for some non-identical random variables, precise large deviations and moderate deviations remain insensitive to negative dependence structure.

  10. Large Deviations for Random Matricial Moment Problems

    CERN Document Server

    Nagel, Jan; Gamboa, Fabrice; Rouault, Alain

    2010-01-01

    We consider the moment space $\\mathcal{M}_n^{K}$ corresponding to $p \\times p$ complex matrix measures defined on $K$ ($K=[0,1]$ or $K=\\D$). We endow this set with the uniform law. We are mainly interested in large deviations principles (LDP) when $n \\rightarrow \\infty$. First we fix an integer $k$ and study the vector of the first $k$ components of a random element of $\\mathcal{M}_n^{K}$. We obtain an LDP in the set of $k$-arrays of $p\\times p$ matrices. Then we lift a random element of $\\mathcal{M}_n^{K}$ into a random measure and prove an LDP at the level of random measures. We end with an LDP on Carath\\'eodory and Schur random functions. These last functions are well connected to the above random measure. In all these problems, we take advantage of the so-called canonical moments technique by introducing new (matricial) random variables that are independent and have explicit distributions.

  11. Large deviations in the random sieve

    Science.gov (United States)

    Grimmett, Geoffrey

    1997-05-01

    The proportion ρ_k of gaps with length k between square-free numbers is shown to satisfy log ρ_k = -(1+o(1))(6/π²) k log k as k → ∞. Such asymptotics are consistent with Erdős's challenge to prove that the gap following the square-free number t is smaller than c log t/log log t, for all t and some constant c satisfying c > π²/12. The results of this paper are achieved by studying the probabilities of large deviations in a certain 'random sieve', for which the proportions ρ_k have representations as probabilities. The asymptotic form of ρ_k may be obtained in situations of greater generality, when the squared primes are replaced by an arbitrary sequence (s_r) of relatively prime integers satisfying Σ_r 1/s_r < ∞, subject to two further conditions of regularity on this sequence.
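
The quantities in this abstract are easy to probe numerically: square-free numbers have density 6/π², and the gap-length proportions ρ_k decay rapidly in k. A small sketch (the cutoff 10^6 is an arbitrary choice, not from the paper):

```python
import math
from collections import Counter

N = 10**6
# sieve out every multiple of a perfect square d^2 >= 4
squarefree = [True] * (N + 1)
d = 2
while d * d <= N:
    for m in range(d * d, N + 1, d * d):
        squarefree[m] = False
    d += 1

sf = [i for i in range(1, N + 1) if squarefree[i]]
density = len(sf) / N
print(density, 6 / math.pi**2)  # both ~0.6079

# histogram of gap lengths between consecutive square-free numbers
gaps = Counter(b - a for a, b in zip(sf, sf[1:]))
print(sorted(gaps.items())[:5])
```

Short gaps dominate the histogram, consistent with the super-exponential decay of ρ_k stated in the abstract.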

  12. Large deviations for Glauber dynamics of continuous gas

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper is devoted to the large deviation principles of the Glauber-type dynamics of finite or infinite volume continuous particle systems. We prove that the level-2 empirical process satisfies the large deviation principle in the weak convergence topology, while it does not satisfy the large deviation principle in the τ-topology.

  13. Deviations From Newton's Law in Supersymmetric Large Extra Dimensions

    CERN Document Server

    Callin, P

    2006-01-01

    Deviations from Newton's inverse-square law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However, SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant natu...

  14. Distributed Detection over Time Varying Networks: Large Deviations Analysis

    CERN Document Server

    Bajovic, Dragana; Xavier, Joao; Sinopoli, Bruno; Moura, Jose M F

    2010-01-01

    We apply large deviations theory to study the asymptotic performance of running consensus distributed detection in sensor networks. Running consensus is a recently proposed stochastic-approximation-type algorithm. At each time step k, the state at each sensor is updated by a local averaging of the sensor's own state and the states of its neighbors (consensus) and by accounting for the new observations (innovation). We assume Gaussian, spatially correlated observations. We allow the underlying network to be time varying, provided that the graph that collects the union of links that are online at least once over a finite time window is connected. This paper shows through large deviations that, under the stated assumptions on the network connectivity and the sensors' observations, running consensus detection asymptotically approaches in performance the optimal centralized detection. That is, the Bayes probability of detection error (with the running consensus detector) decays exponentially to zero as k goes to infinity at...
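    For intuition, the consensus-plus-innovation recursion described above can be sketched in a few lines. This is a hedged toy version, not the paper's exact algorithm: the weight matrix, observation model, and the simple form x(k+1) = W x(k) + obs(k+1) are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def running_consensus(W: np.ndarray, obs: np.ndarray) -> np.ndarray:
        """Toy running-consensus recursion: each node averages its
        neighbors' states (consensus) and folds in its newest local
        observation (innovation).

        W   : (n, n) symmetric doubly stochastic network weight matrix
        obs : (T, n) per-step local observations (e.g. log-likelihood ratios)
        Returns the (T, n) trajectory of node states.
        """
        T, n = obs.shape
        x = np.zeros(n)
        traj = np.empty((T, n))
        for k in range(T):
            x = W @ x + obs[k]  # consensus step, then innovation step
            traj[k] = x
        return traj

    # 3-node ring with Metropolis-style weights (doubly stochastic)
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])
    obs = rng.normal(loc=0.1, scale=1.0, size=(500, 3))  # mean 0.1 under H1
    traj = running_consensus(W, obs)
    # Since W is doubly stochastic, the network-average state after T steps
    # equals the running sum of network-averaged observations, so each node's
    # time-normalized state tracks the centralized statistic obs.mean().
    ```

    The closeness of each node's normalized state to the centralized average is exactly the effect whose error exponent the paper quantifies via large deviations.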

  15. LARGE DEVIATIONS AND MODERATE DEVIATIONS FOR m-NEGATIVELY ASSOCIATED RANDOM VARIABLES

    Institute of Scientific and Technical Information of China (English)

    Hu Yijun; Ming Ruixing; Yang Wenquan

    2007-01-01

    m-negatively associated random variables, a concept which generalizes the classical notion of negatively associated random variables and includes m-dependent sequences as a particular case, are introduced and studied. Large deviation principles and moderate deviation upper bounds for stationary m-negatively associated random variables are proved. Kolmogorov-type and Marcinkiewicz-type strong laws of large numbers, as well as the three-series theorem, for m-negatively associated random variables are also given.

  16. Large deviations for stochastic flows and their applications

    Institute of Scientific and Technical Information of China (English)

    高付清; 任佳刚

    2001-01-01

    Large deviations for stochastic flow solutions to SDEs containing a small parameter are studied. The results obtained are applied to establish a C_{p,r}-large deviation principle for stochastic flows and for solutions to anticipating SDEs. Recent results of Millet, Nualart and Sanz and of Yoshida are improved and refined.

  17. Large Deviations and a Fluctuation Symmetry for Chaotic Homeomorphisms

    NARCIS (Netherlands)

    Maes, Christian; Verbitskiy, Evgeny

    2003-01-01

    We consider expansive homeomorphisms with the specification property. We give a new simple proof of a large deviation principle for Gibbs measures corresponding to a regular potential and we establish a general symmetry of the rate function for the large deviations of the antisymmetric part, under time-reversal, of the potential. This generalizes the Gallavotti-Cohen fluctuation theorem to a larger class of chaotic systems.

  18. Lyapunov exponents of linear cocycles continuity via large deviations

    CERN Document Server

    Duarte, Pedro

    2016-01-01

    The aim of this monograph is to present a general method of proving continuity of Lyapunov exponents of linear cocycles. The method uses an inductive procedure based on a general, geometric version of the Avalanche Principle. The main assumption required by this method is the availability of appropriate large deviation type estimates for quantities related to the iterates of the base and fiber dynamics associated with the linear cocycle. We establish such estimates for various models of random and quasi-periodic cocycles. Our method has its origins in a paper of M. Goldstein and W. Schlag. Our present work expands upon their approach in both depth and breadth. We conclude this monograph with a list of related open problems, some of which may be treated using a similar approach.

  19. Fluctuations and large deviations in non-equilibrium systems

    Indian Academy of Sciences (India)

    B Derrida

    2005-05-01

    For systems in contact with two reservoirs at different densities or with two thermostats at different temperatures, the large deviation function of the density gives a possible way of extending the notion of free energy to non-equilibrium systems. This large deviation function of the density can be calculated explicitly for exclusion models in one dimension with open boundary conditions. For these models, one can also obtain the distribution of the current of particles flowing through the system and the results lead to a simple conjecture for the large deviation function of the current of more general diffusive systems.

  20. Large Deviations for Multi-valued Stochastic Differential Equations

    CERN Document Server

    Ren, Jiagang; Zhang, Xicheng

    2009-01-01

    We prove a large deviation principle of Freidlin-Wentzell's type for the multivalued stochastic differential equations with monotone drifts, which in particular contains a class of SDEs with reflection in a convex domain.

  1. Static large deviations of boundary driven exclusion processes

    CERN Document Server

    Farfan, Jonathan

    2009-01-01

    We prove that the stationary measure associated to a boundary driven exclusion process in any dimension satisfies a large deviation principle with rate function given by the quasi potential of the Freidlin and Wentzell theory.

  2. Large Deviations: An Introduction to 2007 Abel Prize

    Indian Academy of Sciences (India)

    S Ramasubramanian

    2008-05-01

    The 2007 Abel Prize has been awarded to S R S Varadhan for creating a unified theory of large deviations. We attempt to give a flavour of this branch of probability theory, highlighting the role of Varadhan.

  3. Large Deviations Methods and the Join-the-Shortest-Queue Model

    NARCIS (Netherlands)

    Ridder, Ad; Shwartz, Adam

    2005-01-01

    We develop a methodology for studying ''large deviations type'' questions. Our approach does not require that the large deviations principle holds, and is thus applicable to a large class of systems. We study a system of queues with exponential servers, which share an arrival stream. Arrivals are routed to the shortest queue.

  4. General Freidlin-Wentzell large deviations and positive diffusions

    OpenAIRE

    P. Baldi; Caramellino, L.

    2011-01-01

    We prove Freidlin-Wentzell large deviation estimates under rather minimal assumptions. This allows one to derive Freidlin-Wentzell large deviation estimates for diffusions on the positive half line with coefficients that are neither bounded nor Lipschitz continuous. This applies to models of interest in finance, i.e. the CIR and the CEV models, which are positive diffusion processes whose diffusion coefficient is only Hölder continuous.

  5. Large Deviations and a Fluctuation Symmetry for Chaotic Homeomorphisms

    Science.gov (United States)

    Maes, Christian; Verbitskiy, Evgeny

    We consider expansive homeomorphisms with the specification property. We give a new simple proof of a large deviation principle for Gibbs measures corresponding to a regular potential and we establish a general symmetry of the rate function for the large deviations of the antisymmetric part, under time-reversal, of the potential. This generalizes the Gallavotti-Cohen fluctuation theorem to a larger class of chaotic systems.

  6. Sample-path Large Deviations in Credit Risk

    CERN Document Server

    Leijdekker, Vincent; Spreij, Peter

    2009-01-01

    The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a sample-path large deviation principle (LDP) for the portfolio's loss process, which enables the computation of the logarithmic decay rate of the probabilities of interest. In addition, we derive exact asymptotic results for a number of specific rare-event probabilities, such as the probability of the loss process exceeding some given function.

  7. Dynamical Gibbs-non-Gibbs transitions : a study via coupling and large deviations

    NARCIS (Netherlands)

    Wang, Feijia

    2012-01-01

    In this thesis we use both the two-layer and the large-deviation approach to study the conservation and loss of the Gibbs property for both lattice and mean-field spin systems. Chapter 1 gives general background on Gibbs and non-Gibbs measures and outlines the two-layer and the large-deviation approaches.

  8. Current Large Deviations for Asymmetric Exclusion Processes with Open Boundaries

    Science.gov (United States)

    Bodineau, T.; Derrida, B.

    2006-04-01

    We study the large deviation functional of the current for the Weakly Asymmetric Simple Exclusion Process in contact with two reservoirs. We compare this functional in the large drift limit to the one of the Totally Asymmetric Simple Exclusion Process, in particular to the Jensen-Varadhan functional. Conjectures for generalizing the Jensen-Varadhan functional to open systems are also stated.

  9. Freidlin-Wentzell's Large Deviations for Stochastic Evolution Equations

    OpenAIRE

    Ren, Jiagang; Zhang, Xicheng

    2008-01-01

    We prove a Freidlin-Wentzell large deviation principle for general stochastic evolution equations with small multiplicative noise perturbations. In particular, our general result can be used to deal with a large class of quasilinear stochastic partial differential equations, such as stochastic porous medium equations and stochastic reaction-diffusion equations with a polynomial-growth zero-order term and a $p$-Laplacian second-order term.

  10. Large deviation theory for coin tossing and turbulence.

    Science.gov (United States)

    Chakraborty, Sagar; Saha, Arnab; Bhattacharjee, Jayanta K

    2009-11-01

    Large deviations play a significant role in many branches of nonequilibrium statistical physics. They are difficult to handle because their effects, though small, are not amenable to perturbation theory. Even the Gaussian model, which is the usual initial step for most perturbation theories, fails to be a starting point when discussing intermittency in fluid turbulence, where large deviations dominate. Our contention is that, in large deviation theory, the central role is played by the distribution associated with the tossing of a coin, and that the simple coin toss is the "Gaussian model" of problems where rare events play a significant role. We illustrate this by applying it to calculate the multifractal exponents of the order structure factors in fully developed turbulence.
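    The coin-toss rate function invoked here is easy to make concrete. A short sketch (not from the paper) compares the exact binomial tail with Cramér's prediction P(S_n/n ≥ a) ≈ exp(−n I(a)) for a fair coin, where I(a) = a log 2a + (1−a) log 2(1−a):

    ```python
    import math

    def rate_function(a: float) -> float:
        """Cramér rate function for the fair coin:
        I(a) = a*log(2a) + (1-a)*log(2(1-a)), for 0 < a < 1."""
        return a * math.log(2 * a) + (1 - a) * math.log(2 * (1 - a))

    def exact_tail(n: int, a: float) -> float:
        """Exact P(S_n >= a*n) for n fair coin tosses (S_n = number of heads)."""
        k0 = math.ceil(a * n)
        return sum(math.comb(n, k) for k in range(k0, n + 1)) / 2 ** n

    a = 0.6
    for n in (100, 500, 2000):
        # The empirical decay rate -log(P)/n approaches I(a) as n grows;
        # the residual gap at finite n is the subexponential prefactor.
        print(n, -math.log(exact_tail(n, a)) / n, rate_function(a))
    ```

    The convergence is slow (the prefactor contributes O(log n / n) to the rate), which is typical of large deviation estimates.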

  11. Exact Large Deviation Function in the Asymmetric Exclusion Process

    Science.gov (United States)

    Derrida, Bernard; Lebowitz, Joel L.

    1998-01-01

    By an extension of the Bethe ansatz method used by Gwa and Spohn, we obtain an exact expression for the large deviation function of the time averaged current for the fully asymmetric exclusion process in a ring containing N sites and p particles. Using this expression we easily recover the exact diffusion constant obtained earlier and calculate as well some higher cumulants. The distribution of the deviation y of the average current is, in the limit N → ∞, skew and decays like exp(−A y^{5/2}) for y → +∞ and exp(−A′|y|^{3/2}) for y → −∞. Surprisingly, the large deviation function has an expression very similar to the pressure (as a function of the density) of an ideal Bose or Fermi gas in 3D.

  12. Large Deviation for Supercritical Branching Processes with Immigration

    Institute of Scientific and Technical Information of China (English)

    Jing Ning LIU; Mei ZHANG

    2016-01-01

    In this paper, we study the large deviations for a supercritical branching process with immigration controlled by a sequence of non-negative integer-valued independent identically distributed random variables, improving the previous results for processes without immigration. We rely heavily on a detailed description and limit properties of the generating function of the immigration processes.

  13. Small shape deviations cause complex dynamics in large electric generators

    Science.gov (United States)

    Lundström, Niklas L. P.; Grafström, Anton; Aidanpää, Jan-Olov

    2014-05-01

    We prove that combinations of small eccentricity, ovality and/or triangularity in the rotor and stator can produce complex whirling motions of an unbalanced rotor in large synchronous generators. We determine which structures of shape deviations are more harmful than others, in the sense of producing complex whirling motions. For each such structure, we derive simplified equations of motion from which we conclude analytically the relation between shape deviations and mass unbalance that yields non-smooth whirling motions. Finally, we discuss the validity of our results with respect to the modeling of the unbalanced magnetic pull force.

  14. Large Deviation Principle for Benedicks-Carleson Quadratic Maps

    Science.gov (United States)

    Chung, Yong Moo; Takahasi, Hiroki

    2012-11-01

    Since the pioneering works of Jakobson and Benedicks & Carleson and others, it has been known that a positive measure set of quadratic maps admit invariant probability measures absolutely continuous with respect to Lebesgue. These measures allow one to statistically predict the asymptotic fate of Lebesgue almost every initial condition. Estimating fluctuations of empirical distributions before they settle to equilibrium requires a fairly good control over large parts of the phase space. We use the sub-exponential slow recurrence condition of Benedicks & Carleson to build induced Markov maps of arbitrarily small scale and associated towers, to which the absolutely continuous measures can be lifted. These various lifts together enable us to obtain a control of recurrence that is sufficient to establish a level 2 large deviation principle, for the absolutely continuous measures. This result encompasses dynamics far from equilibrium, and thus significantly extends presently known local large deviations results for quadratic maps.

  15. Exact Moderate and Large Deviations for Linear Processes

    CERN Document Server

    Peligrad, Magda; Zhong, Yunda; Wu, Wei Biao

    2011-01-01

    Large and moderate deviation probabilities play an important role in many applied areas, such as insurance and risk analysis. This paper studies the exact moderate and large deviation asymptotics in non-logarithmic form for linear processes with independent innovations. The linear processes we analyze are general and therefore include the long-memory case. We give an asymptotic representation for the tail probability of the normalized sums and specify the zones in which it can be approximated either by a standard normal distribution or by the marginal distribution of the innovation process. The results are then applied to regression estimates, moving averages, fractionally integrated processes, linear processes with regularly varying exponents, and functions of linear processes. We also consider the computation of value at risk and expected shortfall, fundamental quantities in risk theory and finance.

  16. Magnetic Elements at Finite Temperature and Large Deviation Theory

    Science.gov (United States)

    Kohn, R. V.; Reznikoff, M. G.; vanden-Eijnden, E.

    2005-08-01

    We investigate thermally activated phenomena in micromagnetics using large deviation theory and concepts from stochastic resonance. We give a natural mathematical definition of finite-temperature astroids, finite-temperature hysteresis loops, etc. Generically, these objects emerge when the (generalized) Arrhenius timescale governing the thermally activated barrier crossing event of magnetic switching matches the timescale at which the magnetic element is pulsed or ramped by an external field; in the special and physically relevant case of multiple-pulse experiments, on the other hand, short-time switching can lead to non-Arrhenius behavior. We show how large deviation theory can be used to explain some properties of the astroids, like their shrinking and sharpening as the number of applied pulses is increased. We also investigate the influence of the dynamics, in particular the relative importance of the gyromagnetic and the damping terms. Finally, we discuss some issues and open questions regarding spatially nonuniform magnetization.

  17. Large Deviation Functional of the Weakly Asymmetric Exclusion Process

    Science.gov (United States)

    Enaud, C.; Derrida, B.

    2004-02-01

    We obtain the large deviation functional of a density profile for the asymmetric exclusion process of L sites with open boundary conditions when the asymmetry scales like 1/L. We recover as limiting cases the expressions derived recently for the symmetric (SSEP) and the asymmetric (ASEP) cases. In the ASEP limit, the nonlinear differential equation one needs to solve can be analysed by a method that resembles the WKB method.

  18. Large deviations for stochastic flows and their applications

    Institute of Scientific and Technical Information of China (English)

    GAO Fuqing

    2001-01-01

    [1] Yoshida, N., A large deviation principle for (r,p)-capacities on the Wiener space, Probab. Th. Rel. Fields, 1993, 94: 473-488. [2] Gao, F. Q., Large deviations of (r,p)-capacities for diffusion processes, Advances in Math. (in Chinese), 1996, 25: 500-509. [3] Millet, A., Nualart, D., Sanz, M., Large deviations for a class of anticipating stochastic differential equations, Ann. Prob., 1993, 20: 1902-1931. [4] Millet, A., Nualart, D., Sanz, M., Composition of large deviation principles and applications, in Stochastic Analysis (ed. Mayer, E.), San Diego: Academic Press, 1991, 383-395. [5] Ocone, D., Pardoux, E., A generalized Itô-Ventzell formula. Applications to a class of anticipating stochastic differential equations, Ann. Inst. Poincaré, Sect. B, 1989, 25: 39-71. [6] Malliavin, P., Nualart, D., Quasi sure analysis of stochastic flows and Banach space valued smooth functionals on the Wiener space, J. Funct. Anal., 1993, 112: 287-317. [7] Huang, Z., Ren, J., Quasi sure stochastic flows, Stoch. Stoch. Rep., 1990, 33: 149-157. [8] Gao, F. Q., Large deviations for diffusion processes in Hölder norm, Advances in Math. (in Chinese), 1997, 26: 147-158. [9] Ben Arous, G., Ledoux, M., Grandes déviations de Freidlin-Wentzell en norme hölderienne, Lect. Notes in Math., 1994, 1583. [10] Baldi, P., Sanz, M., Une remarque sur la théorie des grandes déviations, Lect. Notes in Math., 1991, 1485: 345-348. [11] Airault, H., Malliavin, P., Intégration géométrique sur l'espace de Wiener, Bull. Sci. Math., 1988, 112: 3-52. [12] Ikeda, N., Watanabe, S., Stochastic Differential Equations and Diffusion Processes, 2nd ed., Amsterdam-Kodansha-Tokyo: North-Holland, 1988. [13] Malliavin, P., Stochastic Analysis, Grundlehren der Mathematischen Wissenschaften 313, Berlin: Springer-Verlag, 1997. [14] Brzezniak, Z., Elworthy, K. D., Stochastic flows of diffeomorphisms, in Stochastic Analysis and Applications (eds. Davies, I. M., Truman

  19. Large Deviations for the Macroscopic Motion of an Interface

    Science.gov (United States)

    Birmpa, P.; Dirr, N.; Tsagkarogiannis, D.

    2017-03-01

    We study the most probable way an interface moves on a macroscopic scale from an initial to a final position within a fixed time in the context of large deviations for a stochastic microscopic lattice system of Ising spins with Kac interaction evolving in time according to Glauber (non-conservative) dynamics. Such interfaces separate two stable phases of a ferromagnetic system and in the macroscopic scale are represented by sharp transitions. We derive quantitative estimates for the upper and the lower bound of the cost functional that penalizes all possible deviations and obtain explicit error terms which are valid also in the macroscopic scale. Furthermore, using the result of a companion paper about the minimizers of this cost functional for the macroscopic motion of the interface in a fixed time, we prove that the probability of such events can concentrate on nucleations should the transition happen fast enough.

  20. Quenched Large Deviations for Interacting Diffusions in Random Media

    Science.gov (United States)

    Luçon, Eric

    2017-03-01

    The aim of the paper is to establish a large deviation principle (LDP) for the empirical measure of mean-field interacting diffusions in a random environment. The point is to derive such a result once the environment has been frozen (quenched model). The main theorem states that an LDP holds for every sequence of environments satisfying an appropriate convergence condition, with a rate function that does not depend on the disorder and is different from the rate function of the averaged model. Similar results concerning the empirical flow and local empirical measures are provided.

  1. A course on large deviations with an introduction to Gibbs measures

    CERN Document Server

    Rassoul-Agha, Firas

    2015-01-01

    This is an introductory course on the methods of computing asymptotics of probabilities of rare events: the theory of large deviations. The book combines large deviation theory with basic statistical mechanics, namely Gibbs measures with their variational characterization and the phase transition of the Ising model, in a text intended for a one-semester or one-quarter course. The book begins with a straightforward approach to the key ideas and results of large deviation theory in the context of independent identically distributed random variables. This includes Cramér's theorem, relative entropy, Sanov's theorem, process level large deviations, convex duality, and change of measure arguments. Dependence is introduced through the interaction potentials of equilibrium statistical mechanics. The phase transition of the Ising model is proved in two different ways: first in the classical way with the Peierls argument, Dobrushin's uniqueness condition, and correlation inequalities and then a second time through the ...
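    One of the techniques listed, the change-of-measure argument, is also the engine behind rare-event simulation. A hedged illustration (not from the book): estimating P(S_n/n ≥ a) for a fair coin by sampling from the exponentially tilted coin whose mean sits at the target a, then reweighting by the likelihood ratio.

    ```python
    import math
    import random

    def tilted_estimate(n: int, a: float, trials: int = 20_000,
                        seed: int = 2) -> float:
        """Importance-sampling estimate of P(S_n/n >= a) for a fair coin.
        Tosses are sampled with P(head) = a (the exponential tilt centered
        at the target), and each sample is reweighted by the likelihood
        ratio (1/2)^n / (a^s (1-a)^(n-s))."""
        random.seed(seed)
        lw_head = math.log(0.5 / a)        # per-head log likelihood ratio
        lw_tail = math.log(0.5 / (1 - a))  # per-tail log likelihood ratio
        acc = 0.0
        for _ in range(trials):
            s = sum(1 for _ in range(n) if random.random() < a)
            if s >= a * n:
                acc += math.exp(s * lw_head + (n - s) * lw_tail)
        return acc / trials

    n, a = 100, 0.7
    est = tilted_estimate(n, a)
    exact = sum(math.comb(n, k) for k in range(70, n + 1)) / 2 ** n
    # The target probability is ~4e-5: naive Monte Carlo with 20,000 runs
    # would see about one hit, while the tilted estimator is accurate to
    # a few percent with the same budget.
    print(est, exact)
    ```

    The tilt that centers the sampling distribution at the rare set is exactly the measure appearing in the proof of Cramér's lower bound.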

  2. Large Deviations for Stochastic Partial Differential Equations Driven by a Poisson Random Measure

    CERN Document Server

    Budhiraja, Amarjit; Dupuis, Paul

    2012-01-01

    Stochastic partial differential equations driven by Poisson random measures (PRM) have been proposed as models for many different physical systems, where they are viewed as refinements of corresponding noiseless partial differential equations (PDEs). A systematic framework for the study of probabilities of deviations of the stochastic PDE from the deterministic PDE is through the theory of large deviations. The goal of this work is to develop the large deviation theory for small Poisson noise perturbations of a general class of deterministic infinite dimensional models. Although the analogous questions for finite dimensional systems have been well studied, there are currently no general results in the infinite dimensional setting. This is in part due to the fact that in this setting solutions may have little spatial regularity, and thus classical approximation methods for large deviation analysis become intractable. The approach taken here, which is based on a variational representation for nonnegative func...

  3. Large deviations of the maximal eigenvalue of random matrices

    CERN Document Server

    Borot, Gaëtan; Majumdar, Satya; Nadal, Céline

    2011-01-01

    We present detailed computations of the 'at least finite' terms (three dominant orders) of the free energy in a one-cut matrix model with a hard edge a, in beta-ensembles, with any polynomial potential. Here beta is a positive number, not restricted to the standard values beta = 1 (hermitian matrices), beta = 1/2 (symmetric matrices), beta = 2 (quaternionic self-dual matrices). This model allows one to study the statistics of the maximum eigenvalue of random matrices. We compute the large deviation function to the left of the expected maximum. We specialize our results to the Gaussian beta-ensembles and check them numerically. Our method is based on general results and procedures already developed in the literature to solve the Pastur equations (also called "loop equations"). It allows one to compute the left tail of the analog of the Tracy-Widom laws for any beta, including the constant term.

  4. Large Deviation Results for Generalized Compound Negative Binomial Risk Models

    Institute of Scientific and Technical Information of China (English)

    Fan-chao Kong; Chen Shen

    2009-01-01

    In this paper we extend and improve some results on large deviations for random sums of random variables. Let {X_n; n ≥ 1} be a sequence of non-negative, independent and identically distributed random variables with common heavy-tailed distribution function F and finite mean μ ∈ R_+, let {N(n); n ≥ 0} be a sequence of negative binomial distributed random variables with parameter p ∈ (0,1), and let {M(n); n ≥ 0} be a Poisson process with intensity λ > 0. Suppose that {N(n); n ≥ 0}, {X_n; n ≥ 1} and {M(n); n ≥ 0} are mutually independent. We obtain large deviation results for the corresponding random sums; these results can be applied to certain problems in insurance and finance.

  5. WKB theory of large deviations in stochastic populations

    Science.gov (United States)

    Assaf, Michael; Meerson, Baruch

    2017-06-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics, such as those determining population extinction, fixation or switching between different states, are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of the dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
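    As a toy illustration of the WKB idea (my sketch, not taken from the review): for a logistic birth-death chain, the exact mean extinction time is available in closed form for one-step processes, and its logarithm can be compared with the leading-order WKB action, which for per-capita birth rate R0(1 − n/N) and per-capita death rate 1 predicts ln T ≈ N(ln R0 − 1 + 1/R0).

    ```python
    import math

    def mean_extinction_time(R0: float, N: int) -> float:
        """Exact mean extinction time from state 1 for the logistic chain
        on {0,...,N} with birth rate b_n = R0*n*(1-n/N) and death rate
        d_n = n, via the classical one-step formula
        tau_1 = sum_k (1/d_k) * prod_{j<k} (b_j/d_j)."""
        tau, prod = 0.0, 1.0
        for k in range(1, N + 1):
            tau += prod / k
            prod *= R0 * (1 - k / N)  # b_k / d_k
        return tau

    def wkb_exponent(R0: float, N: int) -> float:
        """Leading-order WKB action: ln(tau) ~ N * (ln R0 - 1 + 1/R0),
        the continuum limit of sum_j ln(b_j/d_j) up to the metastable state."""
        return N * (math.log(R0) - 1 + 1 / R0)

    R0 = 2.0
    for N in (100, 200):
        # ln(tau) grows linearly in N with the WKB action as the slope;
        # the offset comes from the subexponential prefactor.
        print(N, math.log(mean_extinction_time(R0, N)), wkb_exponent(R0, N))
    ```

    Doubling N roughly doubles ln T, the exponential sensitivity to population size that makes WKB the natural tool here.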

  6. Testing large-angle deviation from Gaussianity in CMB maps

    CERN Document Server

    Bernui, A; Teixeira, A F F

    2010-01-01

    A detection of the level of non-Gaussianity in the CMB data is essential to discriminate among inflationary models and also to test alternative primordial scenarios. However, the extraction of primordial non-Gaussianity is a difficult endeavor since several effects of non-primordial nature can produce non-Gaussianity. On the other hand, different statistical tools can in principle provide information about distinct forms of non-Gaussianity. Thus, no single statistical estimator can be sensitive to all possible forms of non-Gaussianity. In this context, to shed some light on the potential sources of deviation from Gaussianity in CMB data it is important to use different statistical indicators. In a recent paper we proposed two new large-angle non-Gaussianity indicators which provide measures of the departure from Gaussianity on large angular scales. We used these indicators to carry out analyses of non-Gaussianity of the bands and of the foreground-reduced WMAP maps with and without the KQ75 mask. Here we ...

  7. Large deviations of the shifted index number in the Gaussian ensemble

    Science.gov (United States)

    Pérez Castillo, Isaac

    2016-06-01

    We show that, using the Coulomb fluid approach, we are able to derive a rate function Ψ(c, x) of two variables that captures: (i) the large deviations of bulk eigenvalues; (ii) the large deviations of extreme eigenvalues (both left and right large deviations); (iii) the statistics of the fraction c of eigenvalues to the left of a position x. Thus, Ψ(c, x) explains the full order statistics of the eigenvalues of large random Gaussian matrices as well as the statistics of the shifted index number. All our analytical findings are thoroughly compared with Monte Carlo simulations, obtaining excellent agreement. A summary of preliminary results has already been presented in Pérez Castillo (2014 Phys. Rev. E 90 040102) in the context of one-dimensional trapped spinless fermions in a harmonic potential.
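    The index observable itself is straightforward to sample. A small numerical sketch (my own, with a scaling convention chosen so the semicircle edge sits at ±√2, as in the abstract) of the fraction of eigenvalues to the left of a position x:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def goe_eigenvalues(n: int) -> np.ndarray:
        """Eigenvalues of one n x n GOE matrix, scaled so that the spectrum
        fills the semicircle [-sqrt(2), sqrt(2)] as n grows."""
        a = rng.normal(size=(n, n))
        h = (a + a.T) / 2.0  # symmetric; off-diagonal variance 1/2
        return np.linalg.eigvalsh(h) / np.sqrt(n)

    def index_fraction(eigs: np.ndarray, x: float) -> float:
        """Fraction c of eigenvalues lying to the left of position x
        (the shifted-index observable of the abstract)."""
        return float(np.mean(eigs < x))

    n = 200
    fracs = [index_fraction(goe_eigenvalues(n), 0.0) for _ in range(50)]
    # By symmetry of the semicircle law, the index at x = 0 concentrates
    # sharply at 1/2; its typical fluctuations are only O(sqrt(log n)/n),
    # and larger excursions are governed by the rate function Psi(c, x).
    print(np.mean(fracs), np.std(fracs))
    ```

    Deviations of c far from its typical value at a given x are exponentially rare in n², which is why a Coulomb-fluid (large deviation) treatment is needed for them.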

  8. Large Deviations for the Branching Brownian Motion in Presence of Selection or Coalescence

    Science.gov (United States)

    Derrida, Bernard; Shi, Zhan

    2016-06-01

    The large deviation function has been known for a long time in the literature for the displacement of the rightmost particle in a branching random walk (BRW), or in a branching Brownian motion (BBM). More recently a number of generalizations of the BBM and of the BRW have been considered where selection or coalescence mechanisms tend to limit the exponential growth of the number of particles. Here we try to estimate the large deviation function of the position of the rightmost particle for several such generalizations: the L-BBM, the N-BBM, and the coalescing branching random walk (CBRW) which is closely related to the noisy FKPP equation. Our approach allows us to obtain only upper bounds on these large deviation functions. One noticeable feature of our results is their non analytic dependence on the parameters (such as the coalescence rate in the CBRW).

  9. Large-deviation statistics of vorticity stretching in isotropic turbulence.

    Science.gov (United States)

    Johnson, Perry L; Meneveau, Charles

    2016-03-01

    A key feature of three-dimensional fluid turbulence is the stretching and realignment of vorticity by the action of the strain rate. It is shown in this paper, using the cumulant-generating function, that the cumulative vorticity stretching along a Lagrangian path in isotropic turbulence obeys a large deviation principle. As a result, the relevant statistics can be described by the vorticity stretching Cramér function. This function is computed from a direct numerical simulation data set at a Taylor-scale Reynolds number of Re(λ)=433 and compared to those of the finite-time Lyapunov exponents (FTLE) for material deformation. As expected, the mean cumulative vorticity stretching is slightly less than that of the most-stretched material line (largest FTLE), due to the vorticity's preferential alignment with the second-largest eigenvalue of strain rate and the material line's preferential alignment with the largest eigenvalue. However, the vorticity stretching tends to be significantly larger than the second-largest FTLE, and the Cramér functions reveal that the statistics of vorticity stretching fluctuations are more similar to those of the largest FTLE. In an attempt to relate the vorticity stretching statistics to the vorticity magnitude probability density function in statistically stationary conditions, a model Kramers-Moyal equation is constructed using the statistics encoded in the Cramér function. The model predicts a stretched-exponential tail for the vorticity magnitude probability density function, with good agreement for the exponent but significant difference (35%) in the prefactor.
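    The route from samples to a Cramér function used above (estimate the scaled cumulant generating function, then Legendre-transform it) can be sketched generically. Below, a hedged stand-in observable with a known answer is used, the time average of iid standard normals, whose rate function is I(a) = a²/2; FTLE or vorticity-stretching samples would enter the same way.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def cramer_from_samples(samples: np.ndarray, t: float,
                            a_grid: np.ndarray, k_grid: np.ndarray) -> np.ndarray:
        """Estimate a Cramér (rate) function from samples of a time average A_t:
        scgf(k) = (1/t) * log E[exp(k * t * A_t)]   (empirical expectation)
        I(a)    = max_k [k*a - scgf(k)]             (Legendre transform)"""
        scgf = np.array([np.log(np.mean(np.exp(k * t * samples))) / t
                         for k in k_grid])
        return np.array([np.max(k_grid * a - scgf) for a in a_grid])

    # Stand-in observable: A_t = mean of t iid N(0,1) draws, so I(a) = a^2/2.
    t = 25
    samples = rng.normal(size=(200_000, t)).mean(axis=1)
    a_grid = np.linspace(-0.3, 0.3, 7)
    k_grid = np.linspace(-0.6, 0.6, 121)
    I_est = cramer_from_samples(samples, t, a_grid, k_grid)
    # I_est should track a^2/2 on this grid; the estimate degrades at large
    # |k|, where the empirical mean of exp(k*t*A_t) is dominated by the few
    # rarest samples.
    ```

    The breakdown at large |k| (the "linearization" of the empirical SCGF) limits how far into the tails a sample-based Cramér function can be trusted, which is why the paper's comparison is confined to the well-resolved range.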

  10. Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function

    Science.gov (United States)

    Tzella, Alexandra; Vanneste, Jacques

    2016-09-01

    The dispersion of a diffusive scalar in a fluid flowing through a network has many applications, including to biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.

  11. Large deviations for Markov chains in the positive quadrant

    Science.gov (United States)

    Borovkov, A. A.; Mogul'skii, A. A.

    2001-10-01

    The paper deals with so-called N-partially space-homogeneous time-homogeneous Markov chains X(y,n), n=0,1,2,\dots, X(y,0)=y, in the positive quadrant \mathbb{R}^2_+=\{x=(x_1,x_2):x_1\geqslant 0,\ x_2\geqslant 0\}. These Markov chains are characterized by the following property of the transition probabilities P(y,A)=\mathsf{P}(X(y,1)\in A): for some N\geqslant 0 the measure P(y,dx) depends only on x_2, y_2, and x_1-y_1 in the domain x_1>N, y_1>N, and only on x_1, y_1, and x_2-y_2 in the domain x_2>N, y_2>N. For such chains the asymptotic behaviour of \ln\mathsf{P}\bigl(\frac{1}{s}X(y,n)\in B\bigr) and \ln\mathsf{P}\bigl(X(y,n)\in x+B\bigr) is found for a fixed set B as s\to\infty, |x|\to\infty, and n\to\infty. Some other conditions on the growth of parameters are also considered, for example, |x-y|\to\infty, |y|\to\infty. A study is made of the structure of the most probable trajectories, which give the main contribution to this asymptotics, and a number of other results pertaining to the topic are established. Similar results are obtained for the narrower class of 0-partially homogeneous ergodic chains under less restrictive moment conditions on the transition probabilities P(y,dx). Moreover, exact asymptotic expressions for the probabilities \mathsf{P}(X(0,n)\in x+B) are found for 0-partially homogeneous ergodic chains under some additional conditions. The interest in partially homogeneous Markov chains in positive octants is due to mathematical aspects (new and interesting problems arise in the framework of general large deviation theory) as well as to applications, since such chains prove to be quite accurate mathematical models for numerous basic types of queueing and communication networks, such as the widely known Jackson networks, polling systems, or communication networks associated with the ALOHA algorithm. There is a vast literature dealing with the analysis of these objects.

  12. Weak convergence approach to the theory of large deviations

    CERN Document Server

    Dupuis, Paul

    2011-01-01

    PAUL DUPUIS is a professor in the Division of Applied Mathematics at Brown University in Providence, Rhode Island. RICHARD S. ELLIS is a professor in the Department of Mathematics and Statistics at the University of Massachusetts at Amherst.

  13. Large deviations estimates for the multiscale analysis of heart rate variability

    Science.gov (United States)

    Loiseau, Patrick; Médigue, Claire; Gonçalves, Paulo; Attia, Najmeddine; Seuret, Stéphane; Cottin, François; Chemla, Denis; Sorine, Michel; Barral, Julien

    2012-11-01

    In the realm of multiscale signal analysis, multifractal analysis provides a natural and rich framework for measuring the roughness of a time series. As such, it has drawn special attention from both mathematicians and practitioners, and has led them to characterize relevant physiological factors impacting heart rate variability. Notwithstanding this considerable progress, multifractal analysis has developed almost exclusively around the concept of the Legendre singularity spectrum, for which efficient and elaborate estimators exist, but which is structurally blind to subtle features like non-concavity or, to a certain extent, non-scaling of the distributions. Large deviations theory allows these limitations to be bypassed, but it is only very recently that effective estimators were proposed to reliably compute the corresponding large deviations singularity spectrum. In this article, we illustrate the relevance of this approach on both theoretical objects and human heart rate signals from the Physionet public database. As conjectured, we verify that large deviations principles reveal significant information that otherwise remains hidden with classical approaches, and which can be reminiscent of some physiological characteristics. In particular we quantify the presence/absence of scale invariance of RR signals.
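
    The Legendre route that this article contrasts with the large-deviation route is easy to sketch. Assuming a toy binomial-cascade scaling exponent τ(q) (a hypothetical stand-in for one estimated from data), the Legendre spectrum is the transform f(α) = inf_q (qα − τ(q)), which is concave by construction — exactly the structural blindness mentioned above:

    ```python
    import math

    def legendre_spectrum(tau, q_grid, alpha_grid):
        """Numerical Legendre transform f(alpha) = inf_q (q*alpha - tau(q)).
        The result is concave by construction, which is why the Legendre
        spectrum cannot resolve non-concave large-deviation spectra."""
        return [min(q * a - tau(q) for q in q_grid) for a in alpha_grid]

    def tau(q):
        # Toy multifractal: binomial cascade with weights p and 1 - p.
        p = 0.3
        return -math.log2(p ** q + (1 - p) ** q)

    qs = [i / 10.0 for i in range(-100, 101)]        # q in [-10, 10]
    alphas = [0.9 + 0.01 * i for i in range(90)]     # alpha in [0.9, 1.79]
    f = legendre_spectrum(tau, qs, alphas)
    ```

    For this cascade the maximum of f(α) is the support dimension 1; a large-deviation estimator would instead work directly with histograms of coarse-grained exponents and could recover non-concave parts that this transform necessarily erases.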

  14. Lower Current Large Deviations for Zero-Range Processes on a Ring

    Science.gov (United States)

    Chleboun, Paul; Grosskinsky, Stefan; Pizzoferrato, Andrea

    2017-04-01

    We study lower large deviations for the current of totally asymmetric zero-range processes on a ring with concave current-density relation. We use an approach by Jensen and Varadhan which has previously been applied to exclusion processes, to realize current fluctuations by travelling wave density profiles corresponding to non-entropic weak solutions of the hyperbolic scaling limit of the process. We further establish a dynamic transition, where large deviations of the current below a certain value are no longer typically attained by non-entropic weak solutions, but by condensed profiles, where a non-zero fraction of all the particles accumulates on a single fixed lattice site. This leads to a general characterization of the rate function, which is illustrated by providing detailed results for four generic examples of jump rates, including constant rates, decreasing rates, unbounded sublinear rates and asymptotically linear rates. Our results on the dynamic transition are supported by numerical simulations using a cloning algorithm.

  15. The large deviation function for entropy production: the optimal trajectory and the role of fluctuations

    Science.gov (United States)

    Speck, Thomas; Engel, Andreas; Seifert, Udo

    2012-12-01

    We study the large deviation function for the entropy production rate in two driven one-dimensional systems: the asymmetric random walk on a discrete lattice and Brownian motion in a continuous periodic potential. We compare two approaches: using the Donsker-Varadhan theory and using the Freidlin-Wentzell theory. We show that the wings of the large deviation function are dominated by a single optimal trajectory: either in the forward direction (positive rate) or in the backward direction (negative rate). The joining of the two branches at zero entropy production implies a non-differentiability and thus the appearance of a ‘kink’. However, around zero entropy production, many trajectories contribute and thus the ‘kink’ is smeared out.
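
    The origin of the 'kink' can be made explicit from the fluctuation-theorem symmetry. The following is a hedged sketch in generic notation, not a derivation taken from the paper:

    ```latex
    % Sketch (illustrative notation). Write the large deviation principle
    % for the time-averaged entropy production as
    \[
      P(\sigma_t/t \approx s) \asymp e^{-t I(s)}, \qquad
      I(-s) = I(s) + s \quad \text{(fluctuation theorem)}.
    \]
    % If the wing s > 0 is governed by a single forward optimal trajectory
    % with rate branch I_+, the symmetry forces the backward branch
    \[
      I_-(s) = I_+(-s) - s \qquad (s < 0),
    \]
    % so the one-sided slopes at the joining point satisfy
    \[
      I_-'(0^-) = -I_+'(0^+) - 1,
    \]
    % which differs from I_+'(0^+) unless I_+'(0^+) = -1/2: hence the
    % 'kink', smeared out at finite times when many trajectories
    % contribute near zero entropy production.
    ```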

  16. Finite Size Corrections to the Large Deviation Function of the Density in the One Dimensional Symmetric Simple Exclusion Process

    Science.gov (United States)

    Derrida, Bernard; Retaux, Martin

    2013-09-01

    The symmetric simple exclusion process is one of the simplest out-of-equilibrium systems for which the steady state is known. Its large deviation functional of the density has been computed in the past both by microscopic and macroscopic approaches. Here we obtain the leading finite size correction to this large deviation functional. The result is compared to the similar corrections for equilibrium systems.

  17. Large deviations of Rouse polymer chain: First passage problem

    Science.gov (United States)

    Cao, Jing; Zhu, Jian; Wang, Zuowei; Likhtman, Alexei E.

    2015-11-01

    The purpose of this paper is to investigate several analytical methods of solving the first passage (FP) problem for the Rouse model, one of the simplest models of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations and measure the mean first-passage time τ(z) for the free end to reach a certain distance z away from the origin. The results show that the mean FP time decreases when the Rouse chain is represented by more beads. Two scaling regimes of τ(z) are observed, with the transition between them varying with chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to give the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.

  18. Large Deviations for Parameter Estimators of Some Time Inhomogeneous Diffusion Process

    Institute of Scientific and Technical Information of China (English)

    Shou Jiang ZHAO; Fu Qing GAO

    2011-01-01

    The goal of this paper is to study large deviations for the estimator and score function of some time-inhomogeneous diffusion processes. Large deviations in the non-steepness case, with explicit rate functions, are obtained by using a parameter-dependent change of measure.

  19. Large Deviations for Empirical Measures of Not Necessarily Irreducible Countable Markov Chains with Arbitrary Initial Measures

    Institute of Scientific and Technical Information of China (English)

    Yi Wen JIANG; Li Ming WU

    2005-01-01

    All known results on large deviations of occupation measures of Markov processes are based on the assumption of (essential) irreducibility. In this paper we establish the weak* large deviation principle of occupation measures for any countable Markov chain with arbitrary initial measures. The new rate function that we obtain is not convex and depends on the initial measure, contrary to the (essentially) irreducible case.

  20. Rate Function of Large Deviation for a Class of Nonhomogeneous Markov Chains on Supercritical Percolation Network

    Institute of Scientific and Technical Information of China (English)

    Zhong Hao XU; Dong HAN

    2011-01-01

    We model an epidemic with a class of nonhomogeneous Markov chains on the supercritical percolation network on Z^d. The large deviations law for the Markov chain is given, and an explicit expression for the large deviation rate function is obtained.

  1. Convex Hulls of Multiple Random Walks: A Large-Deviation Study

    CERN Document Server

    Dewenter, Timo; Hartmann, Alexander K; Majumdar, Satya N

    2016-01-01

    We study the polygons governing the convex hull of a point set created by the steps of $n$ independent two-dimensional random walkers. Each such walk consists of $T$ discrete time steps, where $x$ and $y$ increments are i.i.d. Gaussian. We analyze the area $A$ and perimeter $L$ of the convex hulls. We obtain probability densities for these two quantities over a large range of the support by using a large-deviation approach allowing us to study densities below $10^{-900}$. We find that the densities exhibit a universal scaling behavior as a function of $A/T$ and $L/\sqrt{T}$, respectively. As in the case of one walker ($n=1$), the densities follow Gaussian distributions for $L$ and $\sqrt{A}$, respectively. We also obtained the rate functions for the area and perimeter, rescaled with the scaling behavior of their maximum possible values, and found limiting functions for $T \rightarrow \infty$, revealing that the densities follow the large-deviation principle. These rate functions can be described by a power law fo...
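
    The observables in this study are straightforward to compute for a single sampled walk. A self-contained sketch (Andrew's monotone chain; this is plain sampling, not the authors' large-deviation reweighting that reaches densities near 10^{-900}) builds the hull of one Gaussian walk and measures A and L:

    ```python
    import math
    import random

    def convex_hull(points):
        """Andrew's monotone chain; returns hull vertices in CCW order."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    def area_perimeter(hull):
        """Shoelace area and edge-length perimeter of a convex polygon."""
        area, perim = 0.0, 0.0
        for i in range(len(hull)):
            (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % len(hull)]
            area += x1 * y2 - x2 * y1
            perim += math.hypot(x2 - x1, y2 - y1)
        return abs(area) / 2.0, perim

    # Convex hull of a single T-step walk with i.i.d. Gaussian increments.
    rng = random.Random(3)
    walk, (x, y) = [], (0.0, 0.0)
    for _ in range(1000):
        x, y = x + rng.gauss(0, 1), y + rng.gauss(0, 1)
        walk.append((x, y))
    A, L = area_perimeter(convex_hull(walk))
    ```

    Repeating this over many walks gives the Gaussian bulk of the A and L densities; the far tails require the importance-sampling machinery of the paper.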

  2. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    Science.gov (United States)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the most simple cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is however usually extremely limited, due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability to observe flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.

  3. A contribution to large deviations for heavy-tailed random sums

    Institute of Scientific and Technical Information of China (English)

    SU; Chun

    2001-01-01

    [1] Nagaev, A. V., Integral limit theorems for large deviations when Cramér's condition is not fulfilled I, II, Theory Prob. Appl., 1969, 14: 51-64, 193-208.[2] Nagaev, A. V., Limit theorems for large deviations where Cramér's conditions are violated (In Russian), Izv. Akad. Nauk USSR Ser., Fiz-Mat Nauk., 1969, 7: 17.[3] Heyde, C. C., A contribution to the theory of large deviations for sums of independent random variables, Z. Wahrscheinlichkeitsth, 1967, 7: 303.[4] Heyde, C. C., On large deviation probabilities for sums of random variables which are not attracted to the normal law, Ann. Math. Statist., 1967, 38: 1575.[5] Heyde, C. C., On large deviation probabilities in the case of attraction to a nonnormal stable law, Sankhyā, 1968, 30: 253.[6] Nagaev, S. V., Large deviations for sums of independent random variables, in Sixth Prague Conf. on Information Theory, Random Processes and Statistical Decision Functions, Prague: Academic, 1973, 657-674.[7] Nagaev, S. V., Large deviations of sums of independent random variables, Ann. Prob., 1979, 7: 745.[8] Embrechts, P., Klüppelberg, C., Mikosch, T., Modelling Extremal Events for Insurance and Finance, Berlin-Heidelberg: Springer-Verlag, 1997.[9] Cline, D. B. H., Hsing, T., Large deviation probabilities for sums and maxima of random variables with heavy or subexponential tails, Preprint, Texas A&M University, 1991.[10] Klüppelberg, C., Mikosch, T., Large deviations of heavy-tailed random sums with applications to insurance and finance, J. Appl. Prob., 1997, 34: 293.

  4. Non-equilibrium steady states: fluctuations and large deviations of the density and of the current

    Science.gov (United States)

    Derrida, Bernard

    2007-07-01

    These lecture notes give a short review of methods such as the matrix ansatz, the additivity principle or the macroscopic fluctuation theory, developed recently in the theory of non-equilibrium phenomena. They show how these methods allow us to calculate the fluctuations and large deviations of the density and the current in non-equilibrium steady states of systems like exclusion processes. The properties of these fluctuations and large deviation functions in non-equilibrium steady states (for example, non-Gaussian fluctuations of density or non-convexity of the large deviation function which generalizes the notion of free energy) are compared with those of systems at equilibrium.

  5. Precise Large Deviations for Sums of Negatively Associated Random Variables with Common Dominatedly Varying Tails

    Institute of Scientific and Technical Information of China (English)

    Yue Bao WANG; Kai Yong WANG; Dong Ya CHENG

    2006-01-01

    In this paper, we obtain results on precise large deviations for non-random and random sums of negatively associated nonnegative random variables with common dominatedly varying tail distribution function. We discover that, under certain conditions, three precise large-deviation probabilities with different centering numbers are equivalent to each other. Furthermore, we investigate precise large deviations for sums of negatively associated nonnegative random variables with certain negatively dependent occurrences. The obtained results extend and improve the corresponding results of Ng, Tang, Yan and Yang (J. Appl. Prob., 41, 93-107, 2004).

  6. Large deviation functions in a system of diffusing particles with creation and annihilation.

    Science.gov (United States)

    Popkov, V; Schütz, G M

    2011-08-01

    Large deviation functions for an exactly solvable lattice gas model of diffusing particles on a ring, subject to pair annihilation and creation, are obtained analytically using exact free-fermion techniques. Our findings for the large deviation function for the current are compared to recent results of Appert-Rolland et al. [Phys. Rev. E 78, 021122 (2008)] for diffusive systems with conserved particle number. Unlike conservative dynamics, our nonconservative model has no universal finite-size corrections for the cumulants. However, the leading Gaussian part has the same variance as in the conservative case. We also elucidate some properties of the large deviation functions associated with particle creation and annihilation.

  7. Dispersion in the large-deviation regime. Part I: shear flows and periodic flows

    CERN Document Server

    Haynes, P H

    2014-01-01

    The dispersion of a passive scalar in a fluid through the combined action of advection and molecular diffusion is often described as a diffusive process, with an effective diffusivity that is enhanced compared to the molecular value. However, this description fails to capture the tails of the scalar concentration distribution in initial-value problems. To remedy this, we develop a large-deviation theory of scalar dispersion that provides an approximation to the scalar concentration valid at much larger distances away from the centre of mass, specifically distances that are $O(t)$ rather than $O(t^{1/2})$, where $t \gg 1$ is the time from the scalar release. The theory centres on the calculation of a rate function obtained by solving a one-parameter family of eigenvalue problems which we derive using two alternative approaches, one asymptotic, the other probabilistic. We emphasise the connection between large deviations and homogenisation: a perturbative solution of the eigenvalue problems reduces at leading o...

  8. Large-Deviation Results for Discriminant Statistics of Gaussian Locally Stationary Processes

    Directory of Open Access Journals (Sweden)

    Junichi Hirukawa

    2012-01-01

    This paper discusses the large-deviation principle of discriminant statistics for Gaussian locally stationary processes. First, large-deviation theorems for quadratic forms and the log-likelihood ratio for a Gaussian locally stationary process with a mean function are proved. Their asymptotics are described by the large deviation rate functions. Second, we consider situations where the processes are misspecified as stationary. In these misspecified cases, we formally construct the log-likelihood-ratio discriminant statistics and derive large deviation theorems for them. Since they are complicated, they are evaluated and illustrated by numerical examples. We find that misspecifying the process as stationary seriously affects the discrimination.

  9. General Large Deviations and Functional Iterated Logarithm Law for Multivalued Stochastic Differential Equations

    OpenAIRE

    Ren, Jiagang; Wu, Jing; Zhang, Hua

    2015-01-01

    In this paper, we prove a large deviation principle of Freidlin-Wentzell's type for the multivalued stochastic differential equations. As an application, we derive a functional iterated logarithm law for the solutions of multivalued stochastic differential equations.

  10. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  12. Non-classical large deviations for a noisy system with non-isolated attractors

    Science.gov (United States)

    Bouchet, Freddy; Touchette, Hugo

    2012-05-01

    We study the large deviations of a simple noise-perturbed dynamical system having continuous sets of steady states, which mimic those found in some partial differential equations related, for example, to turbulence problems. The system is a two-dimensional nonlinear Langevin equation involving a dissipative, non-potential force, which has the essential effect of creating a line of stable fixed points (attracting line) touching a line of unstable fixed points (repelling line). Using different analytical and numerical techniques, we show that the stationary distribution of this system satisfies, in the low-noise limit, a large deviation principle containing two competing terms: (i) a 'classical' but sub-dominant large deviation term, which can be derived from the Freidlin-Wentzell theory of large deviations by studying the fluctuation paths or instantons of the system near the attracting line, and (ii) a dominant large deviation term, which does not follow from the Freidlin-Wentzell theory, as it is related to fluctuation paths of zero action, referred to as sub-instantons, emanating from the repelling line. We discuss the nature of these sub-instantons, and show how they arise from the connection between the attracting and repelling lines. We also discuss in a more general way how we expect these to arise in more general stochastic systems having connected sets of stable and unstable fixed points, and how they should determine the large deviation properties of these systems.

  13. Back in the saddle: Large-deviation statistics of the cosmic log-density field

    CERN Document Server

    Uhlemann, Cora; Pichon, Christophe; Bernardeau, Francis; Reimberg, Paulo

    2015-01-01

    We present a first principle approach to obtain analytical predictions for spherically-averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading-order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few percent compared to the numerical integration, regardless of the density under consideration and in excellen...

  14. A Closer Look at Multi-Cell Cooperation via Stochastic Geometry and Large Deviations

    CERN Document Server

    Huang, Kaibin

    2012-01-01

    Multi-cell cooperation (MCC) is an approach for mitigating inter-cell interference in dense cellular networks. Existing studies on MCC performance typically rely on either over-simplified Wyner-type models or complex system-level simulations. The promising theoretical results (typically using Wyner models) seem to not materialize in either complex simulations or particularly in practice. To more accurately investigate the theoretical performance of MCC, this paper models an entire plane of interfering cells as a Poisson random tessellation. The base stations (BSs) are then clustered using a regular lattice, whereby BSs in the same cluster mitigate mutual interference by beamforming with perfect channel state information. Techniques from stochastic geometry and large deviation theory are applied to analyze the outage probability as a function of the mobile locations, scattering environment, and the average number of cooperating BSs per cluster, L. For mobiles near the centers of BS clusters, it is shown that a...

  15. Large deviation statistics of non-equilibrium fluctuations in a sheared model-fluid

    Science.gov (United States)

    Dolai, Pritha; Simha, Aditi

    2016-08-01

    We analyse the statistics of the shear stress in a one-dimensional model fluid that exhibits a rich phase behaviour akin to real complex fluids under shear. We show that the energy flux satisfies the Gallavotti-Cohen fluctuation theorem (FT) across all phases of the system. The theorem allows us to define an effective temperature which deviates considerably from the equilibrium temperature as the noise in the system increases; this deviation is negligible when the system size is small. How the effective temperature depends on the strain rate varies from phase to phase, but changes little at the phase boundaries. The effective temperature can also be determined from the large deviation function of the energy flux. The local strain rate statistics obey the large deviation principle and satisfy a fluctuation relation; they do not exhibit a distinct kink near zero strain rate because of the inertia of the rotors in our system.

  16. Cumulants and large deviations of the current through non-equilibrium steady states

    Science.gov (United States)

    Bodineau, Thierry; Derrida, Bernard

    2007-06-01

    Using a generalisation of detailed balance for systems maintained out of equilibrium by contact with two reservoirs at unequal temperatures or at unequal densities, one can recover the fluctuation theorem for the large deviation function of the current. For large diffusive systems, we show how the large deviation function of the current can be computed using a simple additivity principle. The validity of this additivity principle and the occurrence of phase transitions are discussed in the framework of the macroscopic fluctuation theory. To cite this article: T. Bodineau, B. Derrida, C. R. Physique 8 (2007).

  17. Universal Large-Deviation Function of the Kardar-Parisi-Zhang Equation in One Dimension

    Science.gov (United States)

    Derrida, B.; Appert, C.

    1999-01-01

    Using the Bethe ansatz, we calculate the whole large-deviation function of the displacement of particles in the asymmetric simple exclusion process (ASEP) on a ring. When the size of the ring is large, the central part of this large deviation function takes a scaling form independent of the density of particles. We suggest that this scaling function found for the ASEP is universal and should be characteristic of all the systems described by the Kardar-Parisi-Zhang equation in 1+1 dimension. Simulations done on two simple growth models are in reasonable agreement with this conjecture.
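
    The typical (Gaussian) centre of this large-deviation function corresponds to the mean current of the ASEP, which a direct simulation reproduces. A minimal sketch, assuming totally asymmetric random-sequential updates on a ring (parameters illustrative, not from the paper):

    ```python
    import random

    def tasep_ring_current(L, N, sweeps, seed=4):
        """Random-sequential TASEP on a ring: pick a random site; if it
        holds a particle and the next site is empty, the particle hops.
        Returns hops per update attempt, i.e. the stationary current."""
        rng = random.Random(seed)
        occ = [1] * N + [0] * (L - N)
        rng.shuffle(occ)        # the uniform measure is stationary on a ring
        hops = 0
        attempts = sweeps * L
        for _ in range(attempts):
            i = rng.randrange(L)
            j = (i + 1) % L
            if occ[i] == 1 and occ[j] == 0:
                occ[i], occ[j] = 0, 1
                hops += 1
        return hops / attempts

    # At density 1/2 the exact ring current is N(L-N)/(L(L-1)), close to 1/4.
    J = tasep_ring_current(L=200, N=100, sweeps=500)
    ```

    Sampling the full displacement distribution deep into its tails — where the universal KPZ scaling function lives — requires the Bethe-ansatz or rare-event techniques rather than this direct simulation.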

  18. A Maximum Likelihood Approach to Least Absolute Deviation Regression

    Directory of Open Access Journals (Sweden)

    Yinbo Li

    2004-09-01

    Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to the intrinsic robust characteristics of LAD. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure where a simple coordinate transformation is applied during each iteration to direct the optimization procedure along edge lines of the cost surface, followed by an MLE of location which is executed by a weighted median operation. Requiring weighted medians only, the new algorithm can be easily modularized for hardware implementation, as opposed to most of the other existing LAD methods which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulation shows that the new algorithm is superior in speed to Wesolowsky's algorithm, which is simple in structure as well. The new algorithm provides a better tradeoff solution between convergence speed and implementation complexity.
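
    The weighted-median core of such algorithms is easy to illustrate in one dimension. A hedged sketch (not the paper's full coordinate-transformation procedure; function names are hypothetical): for a line through the origin, min_a Σ|y_i − a·x_i| = min_a Σ|x_i|·|y_i/x_i − a| is attained at the weighted median of the ratios y_i/x_i with weights |x_i|:

    ```python
    def weighted_median(values, weights):
        """Return a weighted median: a minimizer of sum_i w_i * |v_i - m|."""
        total = sum(weights)
        acc = 0.0
        for v, w in sorted(zip(values, weights)):
            acc += w
            if acc >= total / 2.0:
                return v

    def lad_slope_through_origin(xs, ys):
        """argmin_a sum_i |y_i - a*x_i|: since the sum equals
        sum_i |x_i| * |y_i/x_i - a|, the optimum is a weighted median."""
        data = [(y / x, abs(x)) for x, y in zip(xs, ys) if x != 0]
        return weighted_median([r for r, _ in data], [w for _, w in data])

    xs = [1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [2.1, 3.9, 6.0, 8.2, 50.0]   # last response is a gross outlier
    a = lad_slope_through_origin(xs, ys)   # robust slope, close to 2
    ```

    A least-squares fit to the same data would be dragged far from 2 by the outlier; the LAD slope ignores it, which is the robustness the abstract refers to.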

  19. Large deviations for a stochastic Landau-Lifshitz equation, extended version

    CERN Document Server

    Brzeźniak, Z; Jegaraj, T

    2012-01-01

    We study a stochastic Landau-Lifshitz equation on a bounded interval and with finite dimensional noise; this could be a simple model of magnetization in a needle-shaped domain in magnetic media. After showing that a unique, regular solution exists, we obtain a large deviation principle for small noise asymptotics of solutions using the weak convergence method. We then apply the large deviation principle to show that small noise in the field can cause magnetization reversal and also to show the importance of the shape anisotropy parameter for reducing the disturbance of the magnetization caused by small noise in the field.

  20. Large deviations and occupation times for spin particle systems with long range interactions

    Institute of Scientific and Technical Information of China (English)

    Chen Jinwen

    2000-01-01

    The large deviation principle for spin particle systems with long range interactions has been studied. It is shown that most of the results in the previous papers of Chen J. W. and Dai Pra P. can be extended to the present situation. A particularly interesting result is the variational principle, which characterizes the stationary Markov measures of such systems as the zeros of the governing LD rate functions. Uniqueness of such measures is studied from this as well as other points of view. We then apply the results to the occupation times of the systems. New large deviation and convergence results are obtained.

  2. Kardar-Parisi-Zhang Equation and Large Deviations for Random Walks in Weak Random Environments

    Science.gov (United States)

    Corwin, Ivan; Gu, Yu

    2017-01-01

    We consider the transition probabilities for random walks in 1+1 dimensional space-time random environments (RWRE). For critically tuned weak disorder we prove a sharp large deviation result: after appropriate rescaling, the transition probabilities for the RWRE evaluated in the large deviation regime, converge to the solution to the stochastic heat equation (SHE) with multiplicative noise (the logarithm of which is the KPZ equation). We apply this to the exactly solvable Beta RWRE and additionally present a formal derivation of the convergence of certain moment formulas for that model to those for the SHE.

  3. Quasi-potential and Two-Scale Large Deviation Theory for Gillespie Dynamics

    KAUST Repository

    Li, Tiejun

    2016-01-07

    The construction of energy landscapes for bio-dynamics has been attracting more and more attention in recent years. In this talk, I will introduce a strategy to construct the landscape from its connection to rare events, which relies on the large deviation theory for Gillespie-type jump dynamics. In the application to a typical genetic switching model, a two-scale large deviation theory is developed to take into account the fast switching of DNA states. A comparison with other proposals is also discussed. We demonstrate that different diffusive limits arise when considering different regimes for the genetic translation and switching processes.
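A minimal Gillespie (SSA) loop, shown here for a plain birth–death process rather than the genetic-switch model of the talk (all rates below are illustrative assumptions): each iteration draws an exponential waiting time from the total rate, then picks a reaction in proportion to its rate.

```python
import random

def gillespie_birth_death(birth=10.0, death=1.0, t_end=500.0, seed=1):
    """Gillespie SSA for n -> n+1 at rate `birth` and n -> n-1 at rate
    death*n.  Returns the time-averaged copy number; the exact stationary
    mean is birth/death (the stationary law is Poisson)."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    acc = 0.0  # time-weighted sum of n
    while t < t_end:
        total = birth + death * n
        dt = rng.expovariate(total)          # waiting time to next event
        acc += n * min(dt, t_end - t)        # clip the last interval
        t += dt
        if rng.random() < birth / total:     # choose which reaction fires
            n += 1
        else:
            n -= 1
    return acc / t_end
```

With the parameters above the time average should sit near birth/death = 10; the quasi-potential machinery of the talk concerns the exponentially rare excursions away from such fixed points.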

  4. LARGE DEVIATION FOR THE EMPIRICAL CORRELATION COEFFICIENT OF TWO GAUSSIAN RANDOM VARIABLES

    Institute of Scientific and Technical Information of China (English)

    Shen Si

    2007-01-01

    In this article, the author obtains large deviation principles for the empirical correlation coefficient of two Gaussian random variables X and Y. In particular, for two independent Gaussian random variables X and Y with known means EX and EY, the author gives two different proofs that lead to the same result.

  5. On the Large Deviation Rate Function for the Empirical Measures of Reversible Jump Markov Processes

    Science.gov (United States)

    2013-09-12


  6. Large Deviations Theorems for Empirical Measures in Freidlin-Wentzell Exit Problems

    OpenAIRE

    Mikami, Toshio

    1991-01-01

    We consider the jump-type Markov processes which are small random perturbations of dynamical systems and their empirical processes. We prove large deviations theorems for empirical measures which are marginal measures of empirical processes at the exit time of Markov processes from a bounded domain in a $d$-dimensional Euclidean space $\\mathscr{R}^d$.

  7. Process-level quenched large deviations for random walk in random environment

    CERN Document Server

    Rassoul-Agha, Firas

    2009-01-01

    We consider a bounded step size random walk in an ergodic random environment with some ellipticity, on an integer lattice of arbitrary dimension. We prove a level 3 large deviation principle, under almost every environment, with rate function related to a relative entropy.

  8. Dynamical large deviations for a boundary driven stochastic lattice gas model with many conserved quantities

    CERN Document Server

    Farfan, Jonathan; Valentim, Fabio J

    2009-01-01

    We prove the dynamical large deviations for a particle system in which particles may have different velocities. We assume that we have two infinite reservoirs of particles at the boundary: this is the so-called boundary driven process. The dynamics we considered consists of a weakly asymmetric simple exclusion process with collision among particles having different velocities.

  9. Current fluctuations and statistics during a large deviation event in an exactly solvable transport model

    Science.gov (United States)

    Hurtado, Pablo I.; Garrido, Pedro L.

    2009-02-01

    We study the distribution of the time-integrated current in an exactly solvable toy model of heat conduction, both analytically and numerically. The simplicity of the model allows us to derive the full current large deviation function and the system statistics during a large deviation event. In this way we unveil a relation between system statistics at the end of a large deviation event and for intermediate times. The mid-time statistics is independent of the sign of the current, a reflection of the time-reversal symmetry of microscopic dynamics, while the end-time statistics does depend on the current sign, and also on its microscopic definition. We compare our exact results with simulations based on the direct evaluation of large deviation functions, analyzing the finite-size corrections of this simulation method and deriving detailed bounds for its applicability. We also show how the Gallavotti-Cohen fluctuation theorem can be used to determine the range of validity of simulation results.
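The Gallavotti–Cohen symmetry the authors use as a validity check can be illustrated on the simplest current-carrying model, a biased ±1 random walk (our toy example, not the paper's heat-conduction model): the scaled cumulant generating function λ(k) = ln(p·e^k + q·e^(−k)) satisfies λ(k) = λ(−k − E) with E = ln(p/q), and this identity can be verified numerically.

```python
import math

def scgf(k, p):
    """Scaled cumulant generating function of the current of a biased
    random walk taking step +1 with probability p and -1 with q = 1-p."""
    q = 1.0 - p
    return math.log(p * math.exp(k) + q * math.exp(-k))

def gc_symmetry_gap(k, p):
    """Gallavotti-Cohen check: lambda(k) - lambda(-k - E), E = ln(p/q),
    should vanish identically for this model."""
    E = math.log(p / (1.0 - p))
    return abs(scgf(k, p) - scgf(-k - E, p))
```

In simulations of current large deviations, monitoring a gap like this (which must be zero up to floating-point error here, and up to sampling error in a Monte Carlo estimate) is exactly the kind of consistency test the abstract describes.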

  10. Simulation of heat waves in climate models using large deviation algorithms

    Science.gov (United States)

    Ragone, Francesco; Bouchet, Freddy; Wouters, Jeroen

    2016-04-01

    One of the goals of climate science is to characterize the statistics of extreme, potentially dangerous events (e.g. exceptionally intense precipitation, wind gusts, heat waves) in the present and future climate. The study of extremes is, however, hindered both by a lack of past observational data for events with return times longer than decades or centuries, and by the large computational cost required to properly sample extreme statistics with state-of-the-art climate models. The study of the dynamics leading to extreme events is especially difficult, as it requires hundreds or thousands of realizations of the dynamical paths leading to similar extremes. We discuss here a new numerical algorithm, based on large deviation theory, that allows one to efficiently sample very rare events in complex climate models. A large ensemble of realizations is run in parallel, and selection and cloning procedures are applied in order to oversample the trajectories leading to the extremes of interest. The statistics and characteristic dynamics of the extremes can then be computed on a much larger sample of events. This kind of importance sampling method belongs to a class of genetic algorithms that have been successfully applied in other scientific fields (statistical mechanics, complex biomolecular dynamics), decreasing by orders of magnitude the numerical cost required to sample extremes relative to standard direct numerical sampling. We study the applicability of this method to the computation of the statistics of European surface temperatures with the Planet Simulator (Plasim), an intermediate-complexity general circulation model of the atmosphere. We demonstrate the efficiency of the method by comparing its performance against standard approaches. Dynamical paths leading to heat waves are studied, illuminating the relation of Plasim heat waves to blocking events and the dynamics leading to these events. We then discuss the feasibility of this
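The selection-and-cloning step can be sketched on a toy observable whose large-deviation behavior is known exactly; everything below (i.i.d. standard Gaussian increments, clone counts, step counts) is our illustrative assumption, not the Plasim setup. For N(0,1) increments the scaled cumulant generating function is λ(k) = k²/2, so the estimator can be checked against 0.125 at k = 0.5:

```python
import numpy as np

def cloning_scgf(k, n_clones=2000, n_steps=50, seed=0):
    """Giardina-Kurchan-style cloning estimate of the SCGF lambda(k) for
    the time average of i.i.d. N(0,1) increments (exact value k*k/2).

    Each step, trajectories are weighted by exp(k * increment) and then
    resampled in proportion to their weights; the accumulated log of the
    mean weight, divided by the number of steps, estimates lambda(k).
    """
    rng = np.random.default_rng(seed)
    log_mean_weights = 0.0
    for _ in range(n_steps):
        increments = rng.standard_normal(n_clones)
        w = np.exp(k * increments)
        log_mean_weights += np.log(w.mean())
        # Resampling ("cloning") step.  It has no effect here because the
        # increments carry no state, but it is the essential ingredient
        # once the dynamics is correlated, as in a climate model.
        idx = rng.choice(n_clones, size=n_clones, p=w / w.sum())
    return log_mean_weights / n_steps
```

The point of the method is that trajectories contributing to the chosen tail are exponentially oversampled, so tail statistics converge with far fewer total model-steps than direct sampling would need.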

  11. A framework for the direct evaluation of large deviations in non-Markovian processes

    Science.gov (United States)

    Cavallaro, Massimo; Harris, Rosemary J.

    2016-11-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and to efficiently evaluate large deviation functions associated with time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.

  12. Large deviations of heavy-tailed random sums with applications in insurance and finance

    NARCIS (Netherlands)

    Kluppelberg, C; Mikosch, T

    1997-01-01

    We prove large deviation results for the random sum $S(t)=\\sum_{i=1}^{N(t)} X_i$, $t \\geq 0$, where $(N(t))_{t\\geq 0}$ are non-negative integer-valued random variables and $(X_n)_{n\\in\\mathbb{N}}$ are i.i.d. non-negative random variables with common distribution f

  13. Large deviation principle of Freidlin-Wentzell type for pinned diffusion processes

    OpenAIRE

    Inahama, Yuzuru

    2012-01-01

    Since T. Lyons invented rough path theory, one of its most successful applications is a new proof of Freidlin-Wentzell's large deviation principle for diffusion processes. In this paper we extend this method to the case of pinned diffusion processes under a mild ellipticity assumption. Besides rough path theory, our main tool is quasi-sure analysis, which is a kind of potential theory in Malliavin calculus.

  15. Large deviations for heavy-tailed random sums of independent random variables with dominatedly varying tails

    Institute of Scientific and Technical Information of China (English)

    LIU Yan (刘艳); HU Yijun (胡亦钧)

    2003-01-01

    We prove large deviation results on the partial and random sums $S_n = \\sum_{i=1}^{n} X_i$, $n\\geq 1$, and $S(t) = \\sum_{i=1}^{N(t)} X_i$, $t\\geq 0$, where $\\{N(t); t\\geq 0\\}$ are non-negative integer-valued random variables and $\\{X_n; n\\geq 1\\}$ are independent non-negative random variables with distribution $F_n$ for $X_n$, independent of $\\{N(t); t\\geq 0\\}$. Special attention is paid to distributions of dominated variation.
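The flavor of such heavy-tailed large deviation results can be stated schematically (our paraphrase of the generic "single big jump" principle, not the paper's exact theorem): for i.i.d. non-negative $X_i$ with a sufficiently regular heavy tail $\bar F$,

```latex
\[
  \lim_{n\to\infty}\ \sup_{x \ge \gamma n}\
  \left| \frac{P(S_n - E S_n > x)}{n\,\bar F(x)} - 1 \right| = 0
  \qquad \text{for every fixed } \gamma > 0,
\]
```

i.e. the most likely way for the sum to be unusually large is that a single summand alone exceeds $x$, in sharp contrast with the light-tailed Cramér regime where all summands conspire.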

  16. Large deviations for self-intersection local times of stable random walks

    CERN Document Server

    Laurent, Clément

    2010-01-01

    Let $(X_t,t\\geq 0)$ be a random walk on $\\mathbb{Z}^d$. Let $l_T(x)= \\int_0^T \\delta_x(X_s)ds$ be the local time at the state $x$ and $I_T= \\sum\\limits_{x\\in\\mathbb{Z}^d} l_T(x)^q$ the $q$-fold self-intersection local time (SILT). In \\cite{Castell}, Castell proves a large deviations principle for the SILT of the simple random walk in the critical case $q(d-2)=d$. In the supercritical case $q(d-2)>d$, Chen and M\\"orters obtain in \\cite{ChenMorters} a large deviations principle for the intersection of $q$ independent random walks, and Asselah obtains in \\cite{Asselah5} a large deviations principle for the SILT with $q=2$. We extend these results to an $\\alpha$-stable process (i.e. $\\alpha\\in]0,2]$) in the case where $q(d-\\alpha)\\geq d$.

  17. Large deviations for self-intersection local times in subcritical dimensions

    CERN Document Server

    Laurent, Clément

    2010-01-01

    Let $(X_t,t\\geq 0)$ be a random walk on $\\mathbb{Z}^d$. Let $l_t(x)= \\int_0^t \\delta_x(X_s)ds$ be the local time at site $x$ and $I_t= \\sum\\limits_{x\\in\\mathbb{Z}^d} l_t(x)^p$ the $p$-fold self-intersection local time (SILT). Becker and K\\"onig have recently proved a large deviations principle for $I_t$ for all pairs $(p,d)$ such that $p(d-2)<2$. We extend these results to a broader scale of deviations and to the whole subcritical domain $p(d-2)<d$. We prove a large deviations principle using a method introduced by Castell for the critical case $p(d-2)=d$ and developed by Laurent for the critical and supercritical cases $p(d-\\alpha)\\geq d$ of $\\alpha$-stable random walks.

  18. Large Deviations, Guerra's and A.S.S. Schemes, and the Parisi Hypothesis

    Science.gov (United States)

    Talagrand, Michel

    2007-03-01

    We investigate the problem of computing $\\lim_{N \\to \\infty} \\frac{1}{aN} \\log E Z_N^a$ for any value of $a$, where $Z_N$ is the partition function of the celebrated Sherrington-Kirkpatrick (SK) model, or of some of its natural generalizations. This is a natural "large deviation" problem. Its study helps to get a fresh look at some of the recent ideas introduced in the area, and raises a number of natural questions. We provide a complete solution for a ≥ 0.

  19. Macroscopic properties and dynamical large deviations of the boundary driven Kawasaki process with long range interaction

    CERN Document Server

    Mourragui, Mustapha

    2011-01-01

    We consider a boundary driven exclusion process associated to particles evolving under Kawasaki (conservative) dynamics and long range interaction, in a regime in which phase separation might occur at equilibrium. We show that the empirical density under the diffusive scaling solves a nonlinear integro-differential evolution equation with Dirichlet boundary conditions, and we prove the associated dynamical large deviations principle. Further, suitably tuning the intensity of the interaction, in the uniqueness phase regime, we show that under the stationary measure the empirical density solves a non-local, stationary transport equation.

  20. How T-cells use large deviations to recognize foreign antigens

    CERN Document Server

    Zint, Natali; Hollander, Frank den

    2008-01-01

    A stochastic model for the activation of T-cells is analysed. T-cells are part of the immune system and recognize foreign antigens against a background of the body's own molecules. The model under consideration is a slight generalization of a model introduced by Van den Berg, Rand and Burroughs in 2001, and is capable of explaining how this recognition works on the basis of rare stochastic events. With the help of a refined large deviation theorem and numerical evaluation it is shown that, for a wide range of parameters, T-cells can distinguish reliably between foreign antigens and self-antigens.

  1. Large Deviation Generating Function for Currents in the Pauli-Fierz Model

    Science.gov (United States)

    de Roeck, Wojciech

    We consider a finite quantum system coupled to quasifree thermal reservoirs at different temperatures. We construct the statistics of energy transport between the reservoirs and show that the corresponding large deviation generating function exists and is analytic on a compact set. This result is valid for small coupling and exponentially decaying reservoir correlation functions. Our technique consists of a diagrammatic expansion that uses the Markovian limit of the system as a reference. As a corollary, we derive the Gallavotti-Cohen fluctuation relation for the entropy production.

  2. Large Deviations and Gallavotti-Cohen Principle for Dissipative PDEs with Rough Noise

    Science.gov (United States)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2015-05-01

    We study a class of dissipative PDEs perturbed by an unbounded kick force. Under some natural assumptions, the restrictions of solutions to integer times form a homogeneous Markov process. Assuming that the noise is rough with respect to the space variables and has a non-degenerate law, we prove that the system in question satisfies a large deviation principle (LDP) in τ-topology. Under some additional hypotheses, we establish a Gallavotti-Cohen type symmetry for the rate function of an entropy production functional and the strict positivity and finiteness of the mean entropy production rate in the stationary regime. The latter result is applicable to PDEs with strong nonlinear dissipation.

  3. Dispersion in rectangular networks: effective diffusivity and large-deviation rate function

    CERN Document Server

    Tzella, Alexandra

    2015-01-01

    We investigate the dispersion of a passive scalar released in a fluid flowing within a rectangular, Manhattan-style network. We use large-deviation theory to approximate the scalar concentration as it evolves under the combined action of advection and diffusion and derive an expression for the rate function that controls the form of the concentration at large times $t$. For moderately large distances $O(t^{1/2})$ from the centre of mass, this form reduces to a Gaussian parameterised by a (tensorial) effective diffusivity given in closed form. Further away, at distances $O(t)$, a more complex form reveals the strong imprint of the network geometry. Our theoretical predictions are verified against Monte Carlo simulations of Brownian particles.
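In symbols (our schematic notation, with $\mathsf{D}_{\mathrm{eff}}$ the effective diffusivity tensor; this is the generic structure of such large-deviation results, not the paper's exact formulas), the concentration $\theta$ at large times takes the form

```latex
\[
  \theta(x,t) \asymp e^{-t\, g(x/t)}, \qquad
  g(\xi) = \tfrac{1}{4}\, \xi \cdot \mathsf{D}_{\mathrm{eff}}^{-1}\, \xi + O(|\xi|^3),
\]
```

so for $|x| = O(t^{1/2})$ the concentration reduces to a Gaussian with covariance $2\,\mathsf{D}_{\mathrm{eff}}\, t$, while the cubic and higher-order terms of the rate function $g$ carry the imprint of the network geometry at distances $O(t)$.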

  4. Large deviations for the local times of a random walk among random conductances

    CERN Document Server

    König, Wolfgang; Wolff, Tilman

    2011-01-01

    We derive an annealed large deviation principle for the normalised local times of a continuous-time random walk among random conductances in a finite domain in $\\Z^d$ in the spirit of Donsker-Varadhan \\cite{DV75}. We work in the interesting case that the conductances may assume arbitrarily small values. Thus, the underlying picture of the principle is a joint strategy of small values of the conductances and large holding times of the walk. The speed and the rate function of our principle are explicit in terms of the lower tails of the conductance distribution. As an application, we identify the logarithmic asymptotics of the lower tails of the principal eigenvalue of the randomly perturbed negative Laplace operator in the domain.

  5. Boundary driven Kawasaki process with long-range interaction: dynamical large deviations and steady states

    Science.gov (United States)

    Mourragui, Mustapha; Orlandi, Enza

    2013-01-01

    A particle system with a single locally-conserved field (density) in a bounded interval with different densities maintained at the two endpoints of the interval is under study here. The particles interact in the bulk through a long-range potential parametrized by β⩾0 and evolve according to an exclusion rule. It is shown that the empirical particle density under the diffusive scaling solves a quasilinear integro-differential evolution equation with Dirichlet boundary conditions. The associated dynamical large deviation principle is proved. Furthermore, when β is small enough, it is also demonstrated that the empirical particle density obeys a law of large numbers with respect to the stationary measures (hydrostatic). The macroscopic particle density solves a non-local, stationary, transport equation.

  6. Two-parameter sample path large deviations for infinite-server queues

    Directory of Open Access Journals (Sweden)

    Jose H. Blanchet

    2014-09-01

    Let Qλ(t,y) be the number of people present at time t with y units of remaining service time in an infinite-server system with arrival rate equal to λ>0. In the presence of a non-lattice renewal arrival process, and assuming that the service times have a continuous distribution, we obtain a large deviations principle for Qλ(·,·)/λ under the topology of uniform convergence on [0,T]×[0,∞). We illustrate our results by obtaining the most likely path, represented as a surface, to ruin in life insurance portfolios, and also the most likely surfaces to overflow in the setting of loss queues.

  7. Synchronization of Stochastically Coupled Oscillators: Dynamical Phase Transitions and Large Deviations Theory (or Birds and Frogs)

    Science.gov (United States)

    Teodorescu, Razvan

    2009-10-01

    Systems of non-linearly coupled oscillators (stochastic or not) are ubiquitous in nature and can explain many complex phenomena: coupled Josephson junction arrays, cardiac pacemaker cells, swarms or flocks of insects and birds, etc. They are known to have a non-trivial phase diagram, which includes chaotic, partially synchronized, and fully synchronized phases. A traditional model for this class of problems is the Kuramoto system of oscillators, which has been studied extensively over the last three decades. The model is a canonical example of non-equilibrium dynamical phase transitions, which are still poorly understood in physics. From a stochastic analysis point of view, the transition is described by the large deviations principle, which offers little information on the scaling behavior near the critical point. I will discuss a special case of the model which allows a rigorous analysis of its critical properties and reveals a new, anomalous scaling behavior in the vicinity of the critical point.
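A minimal Kuramoto simulation illustrating the synchronization transition (the parameters, seed, and forward-Euler scheme are our choices; with Lorentzian natural frequencies of unit width the mean-field critical coupling is K_c = 2):

```python
import numpy as np

def kuramoto_order(K, n=200, dt=0.05, steps=400, seed=2):
    """Forward-Euler simulation of the mean-field Kuramoto model
    d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i),
    where r * exp(i*psi) = mean(exp(i*theta)) is the order parameter.
    Returns the final order parameter magnitude r in [0, 1].
    """
    rng = np.random.default_rng(seed)
    omega = rng.standard_cauchy(n)            # Lorentzian frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, n)  # random initial phases
    for _ in range(steps):
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return float(np.abs(np.exp(1j * theta).mean()))
```

Running this well above the critical coupling (say K = 5) yields a large order parameter, while well below it (K = 0.5) the oscillators stay incoherent and r remains of order n^(−1/2), a finite-size remnant of the dynamical phase transition discussed in the abstract.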

  8. On optimum parameter modulation-estimation from a large deviations perspective

    CERN Document Server

    Merhav, Neri

    2012-01-01

    We consider the problem of jointly optimum modulation and estimation of a real-valued random parameter, conveyed over an additive white Gaussian noise (AWGN) channel, where the performance metric is the large deviations behavior of the estimator, namely, the exponential decay rate (as a function of the observation time) of the probability that the estimation error would exceed a certain threshold. Our basic result is in providing an exact characterization of the fastest achievable exponential decay rate, among all possible modulator-estimator (transmitter-receiver) pairs, where the modulator is limited only in the signal power, but not in bandwidth. This exponential rate turns out to be given by the reliability function of the AWGN channel. We also discuss several ways to achieve this optimum performance, and one of them is based on quantization of the parameter, followed by optimum channel coding and modulation, which gives rise to a separation-based transmitter, if one views this setting from the perspectiv...

  9. Quenched Free Energy and Large Deviations for Random Walks in Random Potentials

    CERN Document Server

    Rassoul-Agha, Firas; Yilmaz, Atilla

    2011-01-01

    We study quenched distributions on random walks in a random potential on integer lattices of arbitrary dimension and with an arbitrary finite set of admissible steps. The potential can be unbounded and can depend on a few steps of the walk. Directed, undirected and stretched polymers, as well as random walk in random environment, are covered. The restriction needed is on the moment of the potential, in relation to the degree of mixing of the ergodic environment. We derive two variational formulas for the limiting quenched free energy and prove a process-level quenched large deviation principle for the empirical measure. As a corollary we obtain LDPs for types of random walk in random environment not covered by earlier results.

  10. Large deviations of the limiting distribution in the Shanks-R\\'enyi prime number race

    CERN Document Server

    Lamzouri, Youness

    2011-01-01

    Let $q\\geq 3$, $2\\leq r\\leq \\phi(q)$ and $a_1,...,a_r$ be distinct residue classes modulo $q$ that are relatively prime to $q$. Assuming the Generalized Riemann Hypothesis and the Grand Simplicity Hypothesis, M. Rubinstein and P. Sarnak showed that the vector-valued function $E_{q;a_1,...,a_r}(x)=(E(x;q,a_1),..., E(x;q,a_r)),$ where $E(x;q,a)= \\frac{\\log x}{\\sqrt{x}}(\\phi(q)\\pi(x;q,a)-\\pi(x))$, has a limiting distribution $\\mu_{q;a_1,...,a_r}$ which is absolutely continuous on $\\mathbb{R}^r$. Under the same assumptions, we determine the asymptotic behavior of the large deviations $\\mu_{q;a_1,...,a_r}(\\|x\\|>V)$ for different ranges of $V$, uniformly as $q\\to\\infty.$

  11. Dispersion in the large-deviation regime. Part II: cellular flow at large P\\'eclet number

    CERN Document Server

    Haynes, P H

    2014-01-01

    A standard model for the study of scalar dispersion through advection and molecular diffusion is a two-dimensional periodic flow with closed streamlines inside periodic cells. Over long time scales, the dispersion of a scalar in this flow can be characterised by an effective diffusivity that is a factor $\\mathrm{Pe}^{1/2}$ larger than molecular diffusivity when the P\\'eclet number $\\mathrm{Pe}$ is large. Here we provide a more complete description of dispersion in this regime by applying the large-deviation theory developed in Part I of this paper. We derive approximations to the rate function governing the scalar concentration at large time $t$ by carrying out an asymptotic analysis of the relevant family of eigenvalue problems. We identify two asymptotic regimes and make predictions for the rate function and spatial structure of the scalar. Regime I applies to distances from the release point that satisfy $|\\boldsymbol{x}| = O(\\mathrm{Pe}^{1/4} t)$ . The concentration in this regime is isotropic at large sc...

  12. Large-deviation joint statistics of the finite-time Lyapunov spectrum in isotropic turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Perry L., E-mail: pjohns86@jhu.edu; Meneveau, Charles [Department of Mechanical Engineering and Center for Environmental and Applied Fluid Mechanics, The Johns Hopkins University, 3400 N. Charles Street, Baltimore, Maryland 21218 (United States)

    2015-08-15

    One of the hallmarks of turbulent flows is the chaotic behavior of fluid particle paths, with exponentially growing separation among them while their distance does not exceed the viscous range. The maximal (positive) Lyapunov exponent represents the average strength of the exponential growth rate, while fluctuations in the rate of growth are characterized by the finite-time Lyapunov exponents (FTLEs). In the last decade or so, the notion of Lagrangian coherent structures (which are often computed using FTLEs) has gained attention as a tool for visualizing coherent trajectory patterns in a flow and distinguishing regions of the flow with different mixing properties. A quantitative statistical characterization of FTLEs can be accomplished using the statistical theory of large deviations, based on the so-called Cramér function. To obtain the Cramér function from data, we use both a moment-based method and a histogram-based method, and introduce a finite-size correction to the histogram-based method. We generalize the existing univariate formalism to the joint distributions of the two FTLEs needed to fully specify the Lyapunov spectrum in 3D flows. The joint Cramér function of turbulence is measured from two direct numerical simulation datasets of isotropic turbulence. Results are compared with joint statistics of FTLEs computed using only the symmetric part of the velocity gradient tensor, as well as with joint statistics of instantaneous strain-rate eigenvalues. When using only the strain contribution of the velocity gradient, the maximal FTLE nearly doubles in magnitude, highlighting the role of rotation in de-correlating the fluid deformations along particle paths. We also extend the large-deviation theory to study the statistics of the ratio of FTLEs. The most likely ratio of the FTLEs λ_1 : λ_2 : λ_3 is shown to be about 4:1:−5, compared to about 8:3:−11 when using only the strain-rate tensor for calculating fluid volume
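The moment-based route to a Cramér function can be sketched on data with a known answer (i.i.d. standard Gaussians, for which the rate function is I(a) = a²/2; the sample size and the grid of k values are our choices, not the paper's): estimate the cumulant generating function from empirical exponential moments, then take a numerical Legendre transform.

```python
import numpy as np

def cramer_function(samples, a, k_grid):
    """Estimate the Cramer (rate) function I(a) = sup_k [k*a - Lambda(k)],
    where Lambda(k) = log E[exp(k*X)] is replaced by the empirical
    exponential moment of the samples."""
    samples = np.asarray(samples, dtype=float)
    k_grid = np.asarray(k_grid, dtype=float)
    lam = np.array([np.log(np.mean(np.exp(kk * samples))) for kk in k_grid])
    return float(np.max(k_grid * a - lam))

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)
k = np.linspace(-3.0, 3.0, 121)
estimate = cramer_function(x, 1.0, k)   # exact value for N(0,1): 1/2
```

In practice (as the abstract notes) the empirical exponential moments become unreliable for large |k|, which is why finite-size corrections and histogram-based alternatives matter; here the supremum is attained at moderate k, where the moment estimate is accurate.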

  13. The most likely voltage path and large deviations approximations for integrate-and-fire neurons.

    Science.gov (United States)

    Paninski, Liam

    2006-08-01

    We develop theory and numerical methods for computing the most likely subthreshold voltage path of a noisy integrate-and-fire (IF) neuron, given observations of the neuron's superthreshold spiking activity. This optimal voltage path satisfies a second-order ordinary differential (Euler-Lagrange) equation which may be solved analytically in a number of special cases, and which may be solved numerically in general via a simple "shooting" algorithm. Our results are applicable for both linear and nonlinear subthreshold dynamics, and in certain cases may be extended to correlated subthreshold noise sources. We also show how this optimal voltage may be used to obtain approximations to (1) the likelihood that an IF cell with a given set of parameters was responsible for the observed spike train; and (2) the instantaneous firing rate and interspike interval distribution of a given noisy IF cell. The latter probability approximations are based on the classical Freidlin-Wentzell theory of large deviations principles for stochastic differential equations. We close by comparing this most likely voltage path to the true observed subthreshold voltage trace in a case when intracellular voltage recordings are available in vitro.
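As an illustration of the shooting idea (a generic sketch with made-up parameters τ = 1, constant input 0.5, threshold 1; not the paper's implementation): for a leaky IF drift −v/τ + I with white noise, the Euler–Lagrange equation for the action ½∫(v′ + v/τ − I)² dt reduces to an auxiliary "optimal noise" u = v′ + v/τ − I obeying u′ = u/τ, so the whole path is fixed by the unknown u(0), on which one bisects until v(T) hits threshold.

```python
def terminal_v(u0, v0=0.0, tau=1.0, drive=0.5, T=1.0, dt=1e-3):
    """Integrate the Euler-Lagrange system forward from v(0) = v0:
    v' = -v/tau + drive + u,  u' = u/tau,  u(0) = u0."""
    v, u = v0, u0
    for _ in range(int(T / dt)):
        v += dt * (-v / tau + drive + u)
        u += dt * (u / tau)
    return v

def most_likely_initial_noise(v_th=1.0, lo=-10.0, hi=10.0, iters=60):
    """Shooting: bisect on u(0) until the path reaches v(T) = v_th.
    Works because v(T) is monotonically increasing in u(0)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if terminal_v(mid) < v_th:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The resulting path is the most likely subthreshold voltage trajectory connecting reset to threshold, and exp(−action/σ²) gives the leading-order Freidlin–Wentzell estimate of its probability.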

  14. On the Hamiltonian structure of large deviations in stochastic hybrid systems

    Science.gov (United States)

    Bressloff, Paul C.; Faugeras, Olivier

    2017-03-01

    We present a new derivation of the classical action underlying a large deviation principle (LDP) for a stochastic hybrid system, which couples a piecewise deterministic dynamical system in $\\mathbb{R}^d$ with a time-homogeneous Markov chain on some discrete space Γ. We assume that the Markov chain on Γ is ergodic and that the discrete dynamics is much faster than the piecewise deterministic dynamics (separation of time-scales). Using the Perron–Frobenius theorem and the calculus of variations, we show that the resulting action Hamiltonian is given by the Perron eigenvalue of a |Γ|-dimensional linear equation. The corresponding linear operator depends on the transition rates of the Markov chain and the nonlinear functions of the piecewise deterministic system. We compare the Hamiltonian to one derived using WKB methods, and show that the latter is a reduction of the former. We also indicate how the analysis can be extended to a multi-scale stochastic process, in which the continuous dynamics is described by a piecewise stochastic differential equation (SDE). Finally, we illustrate the theory by considering applications to conductance-based models of membrane voltage fluctuations in the presence of stochastic ion channels.
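For a two-state chain the Perron-eigenvalue structure can be checked directly (the rates and drifts below are our toy choices, not from the paper): with drifts F₁, F₂ and jump rates a (1→2) and b (2→1), the Hamiltonian H(x, p) is the largest eigenvalue of a 2×2 matrix, which is also available in closed form from the characteristic quadratic.

```python
import numpy as np

def hybrid_hamiltonian(p, F1, F2, a, b):
    """Perron eigenvalue of the 2x2 operator for a two-state hybrid system:
    discrete state i carries drift F_i; jumps 1->2 at rate a, 2->1 at b."""
    M = np.array([[p * F1 - a, b],
                  [a, p * F2 - b]])
    return float(np.max(np.linalg.eigvals(M).real))

def hybrid_hamiltonian_exact(p, F1, F2, a, b):
    """Closed-form largest root of the characteristic quadratic of M."""
    tr = (p * F1 - a) + (p * F2 - b)
    det = (p * F1 - a) * (p * F2 - b) - a * b
    return 0.5 * (tr + np.sqrt(tr * tr - 4.0 * det))
```

A basic sanity check is H(x, 0) = 0: at p = 0 the matrix is the generator of the Markov chain, whose Perron eigenvalue vanishes, consistent with the zero-noise deterministic limit.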

  15. Deviation pattern approach for optimizing perturbative terms of QCD renormalization group invariant observables

    CERN Document Server

    Khellat, M

    2016-01-01

    We first consider the idea of renormalization-group-induced estimates, in the context of optimization procedures, for the Brodsky-Lepage-Mackenzie approach to generating higher-order contributions to QCD perturbative series. Secondly, we develop the deviation pattern approach (DPA), in which higher-order RG-induced estimates are modified through a series of comparisons between lower-order RG-induced estimates and the corresponding analytical calculations. Finally, using the normal estimation procedure and the DPA, we obtain estimates of the $\\alpha_s^4$ corrections to the Bjorken sum rule of polarized deep-inelastic scattering and to the non-singlet contribution to the Adler function.

  16. Deviation of Large Scale Gravitoelectromagnetic Field in Post-Newtonian Approximation

    CERN Document Server

    Jardim, I C

    2013-01-01

    In this work we study gravity using the Einstein equation in the post-Newtonian approximation. This method linearizes the equation and is used to treat non-relativistic objects. It enables us to construct, from metric-independent elements, fields that are governed by equations similar to the Maxwell equations in the Lorentz gauge. We average these equations for matter distributed in local systems, like solar systems or galaxies. Finally, we define the large-scale fields for this distribution, which include terms analogous to the electromagnetic case, like polarization, magnetization, and higher-order terms.

  17. Large deviations for local times and intersection local times of fractional Brownian motions and Riemann-Liouville processes

    CERN Document Server

    Chen, Xia; Rosinski, Jan; Shao, Qi-Man

    2009-01-01

    In this paper we prove exact forms of large deviations for local times and intersection local times of fractional Brownian motions and Riemann-Liouville processes. We also show that a fractional Brownian motion and the related Riemann-Liouville process behave like constant multiples of each other with regard to large deviations for their local and intersection local times. As a consequence of our large deviation estimates, we derive laws of iterated logarithm for the corresponding local times. The key points of our methods: (1) logarithmic superadditivity of a normalized sequence of moments of exponentially randomized local time of a fractional Brownian motion; (2) logarithmic subadditivity of a normalized sequence of moments of exponentially randomized intersection local time of Riemann-Liouville processes; (3) comparison of local and intersection local times based on embedding of a part of a fractional Brownian motion into the reproducing kernel Hilbert space of the Riemann-Liouville process.

  18. Large deviations for the local fluctuations of random walks and new insights into the "randomness" of Pi

    CERN Document Server

    Barral, Julien

    2010-01-01

    We establish large deviations properties valid for almost every sample path of a class of stationary mixing processes $(X_1,\\dots, X_n,\\dots)$. These large deviations properties are inherited from those of $S_n=\\sum_{i=1}^nX_i$ and they describe how the local fluctuations of almost every realization of $S_n$ deviate from the almost sure behavior provided by the strong law of large numbers. These results have interesting applications to the fluctuations of Brownian motion increments, the local fluctuations of Birkhoff averages on symbolic spaces and their geometric realizations, as well as the local fluctuations of branching random walks. Also, they lead to new insights into the "randomness" of the digits of their expansions in integer bases for fundamental constants such as Pi and the Euler constant. We formulate a new conjecture, supported by numerical experiments, implying the normality of these numbers.

  19. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    Science.gov (United States)

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  20. A large deviation principle for Minkowski sums of heavy-tailed random compact convex sets with finite expectation

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Pawlas, Zbynek; Samorodnitsky, Gennady

    2011-01-01

    We prove large deviation results for Minkowski sums Sn of independent and identically distributed random compact sets where we assume that the summands have a regularly varying distribution and finite expectation. The main focus is on random convex compact sets. The results confirm the heavy...

  1. Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model

    Science.gov (United States)

    Paga, Pierre; Kühn, Reimer

    2017-08-01

    We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories [specified by initial conditions and governed by first-order dynamics of the form m_{t+1} = f(m_t)], we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics m_{t+1} = f^{-1}(m_t). Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
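The forward/backward structure above can be illustrated with a standard Curie-Weiss mean-field map; the specific form f(m) = tanh(β(m + h)) and the parameter values below are illustrative assumptions, not claimed to be the paper's exact dynamics.

```python
import math

BETA, H = 1.5, 0.0  # hypothetical inverse temperature and field

def f(m):
    """Forward (relaxation) map: m_{t+1} = f(m_t)."""
    return math.tanh(BETA * (m + H))

def f_inv(m):
    """Backward map: m_{t+1} = f^{-1}(m_t), defined for |m| < 1."""
    return math.atanh(m) / BETA - H

# Forward iteration relaxes to a stable fixed point m* = f(m*);
# iterating f_inv instead traces the same trajectory backwards.
m = 0.2
for _ in range(200):
    m = f(m)
```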

  2. Large-deviation principles, stochastic effective actions, path entropies, and the structure and meaning of thermodynamic descriptions

    Science.gov (United States)

    Smith, Eric

    2011-04-01

    The meaning of thermodynamic descriptions is found in large-deviations scaling (Ellis 1985 Entropy, Large Deviations, and Statistical Mechanics (New York: Springer); Touchette 2009 Phys. Rep. 478 1-69) of the probabilities for fluctuations of averaged quantities. The central function expressing large-deviations scaling is the entropy, which is the basis both for fluctuation theorems and for characterizing the thermodynamic interactions of systems. Freidlin-Wentzell theory (Freidlin and Wentzell 1998 Random Perturbations in Dynamical Systems 2nd edn (New York: Springer)) provides a quite general formulation of large-deviations scaling for non-equilibrium stochastic processes, through a remarkable representation in terms of a Hamiltonian dynamical system. A number of related methods now exist to construct the Freidlin-Wentzell Hamiltonian for many kinds of stochastic processes; one method due to Doi (1976 J. Phys. A: Math. Gen. 9 1465-78; 1976 J. Phys. A: Math. Gen. 9 1479) and Peliti (1985 J. Physique 46 1469; 1986 J. Phys. A: Math. Gen. 19 L365), appropriate to integer counting statistics, is widely used in reaction-diffusion theory. Using these tools together with a path-entropy method due to Jaynes (1980 Annu. Rev. Phys. Chem. 31 579-601), this review shows how to construct entropy functions that both express large-deviations scaling of fluctuations, and describe system-environment interactions, for discrete stochastic processes either at or away from equilibrium. A collection of variational methods familiar within quantum field theory, but less commonly applied to the Doi-Peliti construction, is used to define a 'stochastic effective action', which is the large-deviations rate function for arbitrary non-equilibrium paths. We show how common principles of entropy maximization, applied to different ensembles of states or of histories, lead to different entropy functions and different sets of thermodynamic state variables.
Yet the relations among all these levels of

  3. Large-deviation principles, stochastic effective actions, path entropies, and the structure and meaning of thermodynamic descriptions

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Eric [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 (United States)

    2011-04-15

    The meaning of thermodynamic descriptions is found in large-deviations scaling (Ellis 1985 Entropy, Large Deviations, and Statistical Mechanics (New York: Springer); Touchette 2009 Phys. Rep. 478 1-69) of the probabilities for fluctuations of averaged quantities. The central function expressing large-deviations scaling is the entropy, which is the basis both for fluctuation theorems and for characterizing the thermodynamic interactions of systems. Freidlin-Wentzell theory (Freidlin and Wentzell 1998 Random Perturbations in Dynamical Systems 2nd edn (New York: Springer)) provides a quite general formulation of large-deviations scaling for non-equilibrium stochastic processes, through a remarkable representation in terms of a Hamiltonian dynamical system. A number of related methods now exist to construct the Freidlin-Wentzell Hamiltonian for many kinds of stochastic processes; one method due to Doi (1976 J. Phys. A: Math. Gen. 9 1465-78; 1976 J. Phys. A: Math. Gen. 9 1479) and Peliti (1985 J. Physique 46 1469; 1986 J. Phys. A: Math. Gen. 19 L365), appropriate to integer counting statistics, is widely used in reaction-diffusion theory. Using these tools together with a path-entropy method due to Jaynes (1980 Annu. Rev. Phys. Chem. 31 579-601), this review shows how to construct entropy functions that both express large-deviations scaling of fluctuations, and describe system-environment interactions, for discrete stochastic processes either at or away from equilibrium. A collection of variational methods familiar within quantum field theory, but less commonly applied to the Doi-Peliti construction, is used to define a 'stochastic effective action', which is the large-deviations rate function for arbitrary non-equilibrium paths. We show how common principles of entropy maximization, applied to different ensembles of states or of histories, lead to different entropy functions and different sets of thermodynamic state variables. Yet the relations among all these

  4. A non-parametric approach to estimate the total deviation index for non-normal data.

    Science.gov (United States)

    Perez-Jaume, Sara; Carrasco, Josep L

    2015-11-10

    Concordance indices are used to assess the degree of agreement between different methods that measure the same characteristic. In this context, the total deviation index (TDI) is an unscaled concordance measure that quantifies the extent to which readings on the same subject obtained by different methods may differ with a given probability. Common approaches to estimate the TDI assume data are normally distributed and linearity between response and effects (subjects, methods and random error). Here, we introduce a new non-parametric methodology for estimation and inference of the TDI that can deal with any kind of quantitative data. The present study introduces this non-parametric approach and compares it with the already established methods in two real case examples that represent situations of non-normal data (more specifically, skewed data and count data). The performance of the already established methodologies and our approach in these contexts is assessed by means of a simulation study. Copyright © 2015 John Wiley & Sons, Ltd.
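The core of the non-parametric idea can be sketched as an empirical quantile of absolute paired differences; the paper's inference machinery (and its handling of repeated measurements) is omitted here, and the data below are synthetic.

```python
import numpy as np

def tdi_nonparametric(y1, y2, prob=0.9):
    """Non-parametric TDI estimate: the empirical `prob`-quantile of the
    absolute differences between paired readings from two methods. With
    probability ~prob, two readings on the same subject differ by at
    most this amount."""
    d = np.abs(np.asarray(y1, dtype=float) - np.asarray(y2, dtype=float))
    return float(np.quantile(d, prob))

# Toy skewed (log-normal) paired data -- hypothetical, for illustration.
rng = np.random.default_rng(0)
truth = rng.lognormal(mean=1.0, sigma=0.5, size=500)
method_a = truth + rng.normal(0.0, 0.2, size=500)
method_b = truth + rng.normal(0.0, 0.2, size=500)
tdi90 = tdi_nonparametric(method_a, method_b, prob=0.9)
```

No normality of the differences is assumed, which is the point of the approach.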

  5. Large-deviation probabilities for maxima of sums of subexponential random variables with application to finite-time ruin probabilities

    Institute of Scientific and Technical Information of China (English)

    JIANG Tao

    2008-01-01

    We establish an asymptotic relation for the large-deviation probabilities of the maxima of sums of subexponential random variables centered by multiples of order statistics of i.i.d. standard uniform random variables. This extends a corresponding result of Korshunov. As an application, we generalize a result of Tang, the uniform asymptotic estimate for the finite-time ruin probability, to the whole strongly subexponential class.

  7. Large deviation estimates for exceedance times of perpetuity sequences and their dual processes

    DEFF Research Database (Denmark)

    Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa

    2016-01-01

    In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \\cdots + (A_1 \\cdots A_{n-1}) B_n$, where $(A_i,B_i) \\in (0,\\infty) \\times \\mathbb{R}$. Estimates for the stationary tail...

  8. Large deviations and a Kramers’ type law for self-stabilizing diffusions

    OpenAIRE

    Herrmann, Samuel; Imkeller, Peter; Peithmann, Dierk

    2008-01-01

    We investigate exit times from domains of attraction for the motion of a self-stabilized particle traveling in a geometric (potential type) landscape and perturbed by Brownian noise of small amplitude. Self-stabilization is the effect of including an ensemble-average attraction in addition to the usual state-dependent drift, where the particle is supposed to be suspended in a large population of identical ones. A Kramers' type law for the particle's exit from the potential's domains of attrac...

  9. Large Deviations for 2-D Stochastic Navier-Stokes Equations with Jumps

    Institute of Scientific and Technical Information of China (English)

    赵辉艳

    2012-01-01

    In this paper, building on the existence and uniqueness of the solution of the stochastic 2-D Navier-Stokes equation with Poisson jumps, we prove a Freidlin-Wentzell type large deviation principle for the 2-D stochastic Navier-Stokes equation driven by multiplicative noise with Poisson jumps, using the weak convergence approach.

  10. Large Deviations of Estimators

    NARCIS (Netherlands)

    Kester, A.D.M.; Kallenberg, W.C.M.

    1986-01-01

    The performance of a sequence of estimators $\\{T_n\\}$ of $g(\\theta)$ can be measured by its inaccuracy rate $-\\lim \\inf_{n\\rightarrow\\infty} n^{-1} \\log \\mathbb{P}_\\theta(\\|T_n - g(\\theta)\\| > \\varepsilon)$. For fixed $\\varepsilon > 0$, optimality of consistent estimators with respect to the ina
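The inaccuracy rate above can be made concrete for the sample mean of Bernoulli trials, where Cramér's theorem identifies the rate with a Kullback-Leibler divergence; the sketch below uses exact binomial tails, and the choice of θ and ε is an arbitrary illustration.

```python
import math

def kl(a, b):
    """Kullback-Leibler divergence between Bernoulli(a) and Bernoulli(b)."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def tail_prob(n, theta, eps):
    """Exact P(|mean of n Bernoulli(theta) draws - theta| > eps)."""
    return sum(math.comb(n, k) * theta**k * (1 - theta)**(n - k)
               for k in range(n + 1)
               if not (theta - eps <= k / n <= theta + eps))

theta, eps = 0.5, 0.2
rate = min(kl(theta + eps, theta), kl(theta - eps, theta))  # Cramér rate
approx = -math.log(tail_prob(400, theta, eps)) / 400        # -> rate as n grows
```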

  11. Large deviation function for the Eden model and universality within the one-dimensional Kardar-Parisi-Zhang class

    Science.gov (United States)

    Appert

    2000-02-01

    It has been recently conjectured that for large systems, the shape of the central part of the large deviation function of the growth velocity would be universal for all growth systems described by the Kardar-Parisi-Zhang equation in 1+1 dimensions. One signature of this universality would be that the ratio of cumulants R(t) = ⟨n_t^3⟩_c^2/[⟨n_t^2⟩_c ⟨n_t^4⟩_c] would tend towards a universal value 0.41517… as t tends to infinity, provided periodic boundary conditions are used. This has recently been questioned by Stauffer. In this paper we summarize various numerical and analytical results supporting this conjecture, and report in particular some numerical measurements of the ratio R(t) for the Eden model.
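The universal number in question is a ratio built from the second, third and fourth cumulants, which can be estimated from samples with a generic sample-cumulant computation; the code below illustrates the estimator on an exponential distribution (where the exact value of the ratio is known), not on a growth model.

```python
import numpy as np

def cumulant_ratio(samples):
    """R = k3^2 / (k2 * k4) from sample central moments:
    k2 = m2, k3 = m3, k4 = m4 - 3*m2^2."""
    x = np.asarray(samples, dtype=float)
    c = x - x.mean()
    m2, m3, m4 = (float(np.mean(c**k)) for k in (2, 3, 4))
    return m3**2 / (m2 * (m4 - 3.0 * m2**2))

# Sanity check on Exp(1), whose cumulants are k_n = (n-1)!:
# R = 2^2 / (1 * 6) = 2/3.
rng = np.random.default_rng(1)
r = cumulant_ratio(rng.exponential(size=500_000))
```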

  12. Lower large deviations for the maximal flow through a domain of $\\mathbb{R}^d$ in first passage percolation

    CERN Document Server

    Cerf, Raphaël

    2009-01-01

    We consider the standard first passage percolation model in the rescaled graph $\\mathbb{Z}^d/n$ for $d\\geq 2$, and a domain $\\Omega$ of boundary $\\Gamma$ in $\\mathbb{R}^d$. Let $\\Gamma^1$ and $\\Gamma^2$ be two disjoint open subsets of $\\Gamma$, representing the parts of $\\Gamma$ through which some water can enter and escape from $\\Omega$. We investigate the asymptotic behaviour of the flow $\\phi_n$ through a discrete version $\\Omega_n$ of $\\Omega$ between the corresponding discrete sets $\\Gamma^1_n$ and $\\Gamma^2_n$. We prove that under some conditions on the regularity of the domain and on the law of the capacity of the edges, the lower large deviations of $\\phi_n/ n^{d-1}$ below a certain constant are of surface order.

  13. Efficient simulation of density and probability of large deviations of sum of random vectors using saddle point representations

    CERN Document Server

    Dey, Santanu

    2012-01-01

    We consider the problem of efficient simulation estimation of the density function at the tails, and the probability of large deviations for a sum of independent, identically distributed, light-tailed and non-lattice random vectors. The latter problem, besides being of independent interest, also forms a building block for more complex rare event problems that arise, for instance, in queuing and financial credit risk modeling. It has been extensively studied in the literature, where state-independent exponential twisting based importance sampling has been shown to be asymptotically efficient and a more nuanced state-dependent exponential twisting has been shown to have a stronger bounded relative error property. We exploit the saddle-point based representations that exist for these rare quantities, which rely on inverting the characteristic functions of the underlying random vectors. We note that these representations reduce the rare event estimation problem to evaluating certain integrals, which may via importance ...

  14. Precise lim sup behavior of probabilities of large deviations for sums of i.i.d. random variables

    Directory of Open Access Journals (Sweden)

    Andrew Rosalsky

    2004-12-01

    Let {X, X_n; n≥1} be a sequence of real-valued i.i.d. random variables and let S_n = ∑_{i=1}^{n} X_i, n≥1. In this paper, we study the probabilities of large deviations of the form P(S_n > t n^{1/p}), P(S_n < -t n^{1/p}), and P(|S_n| > t n^{1/p}), where t > 0 and 0 < p < 2. We show, for example, that if ϕ is regularly varying with index α and lim_{x→∞} P(|X| > x^{1/p})/ϕ(x) = 1, then for every t > 0, limsup_{n→∞} P(|S_n| > t n^{1/p})/(nϕ(n)) = t^{pα}.

  15. Large Deviation of the Density Profile in the Steady State of the Open Symmetric Simple Exclusion Process

    Science.gov (United States)

    Derrida, B.; Lebowitz, J. L.; Speer, E. R.

    2002-05-01

    We consider an open one-dimensional lattice gas on sites i = 1, ..., N, with particles jumping independently with rate 1 to neighboring interior empty sites, the simple symmetric exclusion process. The particle fluxes at the left and right boundaries, corresponding to exchanges with reservoirs at different chemical potentials, create a stationary nonequilibrium state (SNS) with a steady flux of particles through the system. The mean density profile in this state, which is linear, describes the typical behavior of a macroscopic system, i.e., this profile occurs with probability 1 when N → ∞. The probability of microscopic configurations corresponding to some other profile ρ(x), x = i/N, has the asymptotic form exp[-N F({ρ})]; F is the large deviation functional. In contrast to equilibrium systems, for which F_eq({ρ}) is just the integral of the appropriately normalized local free energy density, the F we find here for the nonequilibrium system is a nonlocal function of ρ. This gives rise to the long range correlations in the SNS predicted by fluctuating hydrodynamics and suggests similar non-local behavior of F in general SNS, where the long range correlations have been observed experimentally.

  16. Differing averaged and quenched large deviations for random walks in random environments in dimensions two and three

    CERN Document Server

    Yilmaz, Atilla

    2009-01-01

    We consider the quenched and averaged (or annealed) large deviation rate functions $I_q$ and $I_a$ for space-time and (the usual) space-only RWRE on $\\mathbb{Z}^d$. By Jensen's inequality, $I_a\\leq I_q$. In the space-time case, when $d\\geq3+1$, $I_q$ and $I_a$ are known to be equal on an open set containing the typical velocity $\\xi_o$. When $d=1+1$, we prove that $I_q$ and $I_a$ are equal only at $\\xi_o$. Similarly, when $d=2+1$, we show that $I_a

  17. Anomalously temperature-dependent thermal conductivity of monolayer GaN with large deviations from the traditional 1/T law

    Science.gov (United States)

    Qin, Guangzhao; Qin, Zhenzhen; Wang, Huimin; Hu, Ming

    2017-05-01

    Efficient heat dissipation, which is characterized by high thermal conductivity, is one of the crucial issues for the reliability and stability of nanodevices. However, due to the generally fast 1/T decrease of thermal conductivity with temperature increase, the efficiency of heat dissipation quickly drops down at an elevated temperature caused by the increase of work load in electronic devices. To this end, pursuing semiconductor materials that possess large thermal conductivity at high temperature, i.e., slower decrease of thermal conductivity with temperature increase than the traditional κ ~ 1/T relation, is extremely important to the development of disruptive nanoelectronics. Recently, monolayer gallium nitride (GaN) with a planar honeycomb structure has emerged as a promising new two-dimensional material with great potential for applications in nano- and optoelectronics. Here, we report that, despite the commonly established 1/T relation of thermal conductivity in plenty of materials, monolayer GaN exhibits the anomalous behavior that its thermal conductivity decreases almost linearly over a wide temperature range above 300 K, deviating largely from the traditional κ ~ 1/T law. The thermal conductivity at high temperature is much larger than the expected thermal conductivity that follows the general κ ~ 1/T trend, which would be beneficial for applications of monolayer GaN in nano- and optoelectronics in terms of efficient heat dissipation. We perform detailed analysis on the mechanisms underlying the anomalously temperature-dependent thermal conductivity of monolayer GaN in the framework of Boltzmann transport theory and further get insight from the view of electronic structure. Beyond that, we also propose two required conditions for materials that would exhibit similar anomalous temperature dependence of thermal conductivity: large difference in atom mass (huge phonon band gap) and electronegativity (LO-TO splitting due to strong polarization of the bond). Our

  18. Accurate reaction barrier heights of pericyclic reactions: Surprisingly large deviations for the CBS-QB3 composite method and their consequences in DFT benchmark studies.

    Science.gov (United States)

    Karton, Amir; Goerigk, Lars

    2015-04-05

    Accurate barrier heights are obtained for the 26 pericyclic reactions in the BHPERI dataset by means of the high-level Wn-F12 thermochemical protocols. Very often, the complete basis set (CBS)-type composite methods are used in similar situations, but herein it is shown that they in fact result in surprisingly large errors, with root mean square deviations (RMSDs) of about 2.5 kcal mol⁻¹. In comparison, other composite methods, particularly G4-type and estimated coupled cluster with singles, doubles, and quasiperturbative triple excitations [CCSD(T)/CBS] approaches, show deviations well below the chemical-accuracy threshold of 1 kcal mol⁻¹. With the exception of SCS-MP2 and the herein newly introduced MP3.5 approach, all other tested Møller-Plesset perturbative procedures give poor performance with RMSDs of up to 8.0 kcal mol⁻¹. The finding that CBS-type methods fail for barrier heights of these reactions is unexpected, and it is particularly troublesome given that they are often used to obtain reference values for benchmark studies. Significant differences are identified in the interpretation and final ranking of density functional theory (DFT) methods when using the original CBS-QB3 rather than the new Wn-F12 reference values for BHPERI. In particular, it is observed that the more accurate Wn-F12 benchmark results in lower statistical errors for those methods that are generally considered to be robust and accurate. Two examples are the PW6B95-D3(BJ) hybrid meta-generalized-gradient-approximation and the PWPB95-D3(BJ) double-hybrid functionals, which result in the lowest RMSDs of the entire DFT study (1.3 and 1.0 kcal mol⁻¹, respectively). These results indicate that CBS-QB3 should be applied with caution in computational modeling and benchmark studies involving related systems.
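The reference-dependence of the DFT rankings comes down to how the root mean square deviation is computed against a chosen benchmark set; the barrier heights below are made-up numbers purely to illustrate the mechanics.

```python
import math

def rmsd(values, reference):
    """Root mean square deviation of `values` from `reference`."""
    return math.sqrt(sum((v - r) ** 2 for v, r in zip(values, reference))
                     / len(values))

# Hypothetical barrier heights (kcal/mol) for three reactions.
dft     = [20.1, 31.5, 25.0]  # some DFT method
cbs_qb3 = [22.5, 33.8, 27.9]  # older reference values
wn_f12  = [20.6, 31.9, 25.8]  # revised reference values
# The same DFT numbers score very differently against the two references,
# which is how re-referencing a benchmark can reorder method rankings.
```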

  19. Large Deviations for Sums of Heavy-tailed Random Variables

    Institute of Scientific and Technical Information of China (English)

    郭晓燕; 孔繁超

    2007-01-01

    This paper is a further investigation of large deviations for the sums of random variables S_n = ∑_{i=1}^{n} X_i and S(t) = ∑_{i=1}^{N(t)} X_i (t ≥ 0), where {X_n, n ≥ 1} are independent identically distributed non-negative random variables, and {N(t), t ≥ 0} is a counting process of non-negative integer-valued random variables, independent of {X_n, n ≥ 1}. Under the assumption F ∈ G, a larger heavy-tailed class than C, we prove large deviation results for these sums.

  20. Standard deviations

    CERN Document Server

    Smith, Gary

    2015-01-01

    Did you know that having a messy room will make you racist? Or that human beings possess the ability to postpone death until after important ceremonial occasions? Or that people live three to five years longer if they have positive initials, like ACE? All of these ‘facts' have been argued with a straight face by researchers and backed up with reams of data and convincing statistics.As Nobel Prize-winning economist Ronald Coase once cynically observed, ‘If you torture data long enough, it will confess.' Lying with statistics is a time-honoured con. In Standard Deviations, ec

  1. A local probability exponential inequality for the large deviation of an empirical process indexed by an unbounded class of functions and its application

    Institute of Scientific and Technical Information of China (English)

    ZHANG Dixin

    2004-01-01

    A local probability exponential inequality for the tail of large deviation of an empirical process over an unbounded class of functions is proposed and studied. A new method of truncating the original probability space and a new symmetrization method are given. Using these methods, the local probability exponential inequalities for the tails of large deviations of empirical processes with non-i.i.d. independent samples over an unbounded class of functions are established. Some applications of the inequalities are discussed. As an additional result of this paper, under the conditions of Kolmogorov's theorem, the strong convergence results of Kolmogorov on sums of non-i.i.d. independent random variables are extended to the case of empirical processes indexed by unbounded classes of functions, and the local probability exponential inequalities and the laws of the logarithm for the empirical processes are obtained.

  2. Large Deviations and Ensembles of Trajectories in Stochastic Models(Frontiers in Nonequilibrium Physics-Fundamental Theory, Glassy & Granular Materials, and Computational Physics-)

    OpenAIRE

    Jack, Robert L.; Sollich, Peter; Department of Physics, University of Bath; Department of Mathematics, King's College London

    2010-01-01

    We consider ensembles of trajectories associated with large deviations of time-integrated quantities in stochastic models. Motivated by proposals that these ensembles are relevant for physical processes such as shearing and glassy relaxation, we show how they can be generated directly using auxiliary stochastic processes. We illustrate our results using the Glauber-Ising chain, for which biased ensembles of trajectories can exhibit ferromagnetic ordering. We discuss the relation between such ...

  3. On the Deviation of the Standard Model Predictions in the Large Hadron Collider Experiments (Letters to Progress in Physics)

    Directory of Open Access Journals (Sweden)

    Belyakov A. V.

    2016-01-01

    The newest Large Hadron Collider experiments targeting the search for New Physics have manifested the possibility of new heavy particles. Such particles are not predicted in the framework of the Standard Model; however, their existence is lawful in the framework of another model based on J. A. Wheeler's geometrodynamics.

  4. A large deviations analysis of the transient of a queue with many Markov fluid inputs: approximations and fast simulation

    NARCIS (Netherlands)

    Mandjes, M.R.H.; Ridder, Annemarie

    2002-01-01

    This article analyzes the transient buffer content distribution of a queue fed by a large number of Markov fluid sources. We characterize the probability of overflow at time t, given the current buffer level and the number of sources in the on-state. After scaling buffer and bandwidth resources by

  5. Cellular and molecular deviations in bovine in vitro-produced embryos are related to the large offspring syndrome

    NARCIS (Netherlands)

    Lazzari, G.; Wrenzycki, C.; Herrmann, D.; Duchi, R.; Kruip, T.; Niemann, H.; Galli, C.

    2002-01-01

    The large offspring syndrome (LOS) is observed in bovine and ovine offspring following transfer of in vitro-produced (IVP) or cloned embryos and is characterized by a multitude of pathologic changes, of which extended gestation length and increased birthweight are predominant features. In the

  7. Distribution of spectral linear statistics on random matrices beyond the large deviation function—Wigner time delay in multichannel disordered wires

    Science.gov (United States)

    Grabsch, Aurélien; Texier, Christophe

    2016-11-01

    An invariant ensemble of N × N random matrices can be characterised by a joint distribution of eigenvalues P(λ_1, …, λ_N). The distribution of linear statistics, i.e. of quantities of the form L = (1/N) ∑_i f(λ_i), where f(x) is a given function, appears in many physical problems. In the N → ∞ limit, L scales as L ~ N^η, where the scaling exponent η depends on the ensemble and the function f(x). Its distribution can be written in the form P_N(s = N^{-η} L) ≃ A_{N,β}(s) exp{-(βN²/2) Φ(s)}, where β ∈ {1, 2, 4} is the Dyson index. The Coulomb gas technique naturally provides the large deviation function Φ(s), which can be efficiently obtained thanks to a 'thermodynamic identity' introduced earlier. We conjecture the pre-exponential function A_{N,β}(s). We check our conjecture on several well controlled cases within the Laguerre and the Jacobi ensembles. Then we apply our main result to a situation where the large deviation function has no minimum (and L has infinite moments): this arises in the statistical analysis of the Wigner time delay for semi-infinite multichannel disordered wires (Laguerre ensemble). The statistical analysis of the Wigner time delay then crucially depends on the pre-exponential function A_{N,β}(s), which ensures the decay of the distribution for large argument.
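A linear statistic of the kind described can be sampled directly for small matrices; the ensemble (GOE), the function f, and the sizes below are illustrative choices, not those of the paper.

```python
import numpy as np

def linear_statistic(n, f, rng):
    """Draw one GOE matrix (semicircle spectrum on [-2, 2]) and return
    the linear statistic L = (1/n) * sum_i f(lambda_i)."""
    a = rng.normal(size=(n, n))
    h = (a + a.T) / np.sqrt(2.0 * n)  # GOE normalisation
    return float(np.mean(f(np.linalg.eigvalsh(h))))

rng = np.random.default_rng(2)
# For f(x) = x^2 the semicircle law gives E[L] -> 1 as n -> infinity.
vals = [linear_statistic(200, lambda x: x**2, rng) for _ in range(20)]
```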

  8. Extremely Large Images: Considerations for Contemporary Approach

    CERN Document Server

    Kitaeff, Slava; Wu, Chen; Taubman, David

    2013-01-01

    The new wide-field radio telescopes, such as ASKAP, MWA, LOFAR, eVLA and SKA, will produce spectral-imaging data-cubes (SIDC) of unprecedented volumes, in the order of hundreds of petabytes. Servicing such data as images to the end-user may encounter challenges unforeseen during the development of the IVOA SIAP. We discuss the requirements for extremely large SIDC, and in this light we analyse the applicability of the approach taken in the ISO/IEC 15444 (JPEG2000) standards.

  9. Large deviation principle for one-dimensional random walk in dynamic random environment: attractive spin-flips and simple symmetric exclusion

    CERN Document Server

    Avena, L; Redig, F

    2009-01-01

    Consider a one-dimensional shift-invariant attractive spin-flip system in equilibrium, constituting a dynamic random environment, together with a nearest-neighbor random walk that on occupied sites has a local drift to the right but on vacant sites has a local drift to the left. In previous work we proved a law of large numbers for dynamic random environments satisfying a space-time mixing property called cone-mixing. If an attractive spin-flip system has a finite average coupling time at the origin for two copies starting from the all-occupied and the all-vacant configuration, respectively, then it is cone-mixing. In the present paper we prove a large deviation principle for the empirical speed of the random walk, both quenched and annealed, and exhibit some properties of the associated rate functions. Under an exponential space-time mixing condition for the spin-flip system, which is stronger than cone-mixing, the two rate functions have a unique zero, i.e., the slow-down phenomenon known to be possible in ...

  10. Intensity Thresholds on Raw Acceleration Data: Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) Approaches

    Science.gov (United States)

    Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V.; Esliger, Dale W.; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L.

    2016-01-01

    Objectives (1) To develop and internally-validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performances between the ENMO and MAD metrics. Methods Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m2; 20 females (60.6%)] wore four accelerometers; an ActiGraph GT3X+ and a GENEActiv on the right hip; and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory-conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performances and thresholds were assessed via executing leave-one-out-cross-validations. Results For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability for separating sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs >0.95), with validation indicating robustness. However, poor classification was experienced when attempting to isolate standing still from sedentary behaviours (in general, AUROCs <0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. 
Conclusions Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD
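
    The two metrics have simple closed forms. A sketch of both, assuming NumPy and tri-axial samples already calibrated to units of g (the function names are ours):

```python
import numpy as np

def enmo(acc_g):
    """Euclidean Norm Minus One, per sample.
    Negative values are truncated to zero, as is conventional for ENMO."""
    r = np.linalg.norm(acc_g, axis=1)      # vector magnitude per sample
    return np.maximum(r - 1.0, 0.0)

def mad(acc_g):
    """Mean Amplitude Deviation over one epoch: mean absolute deviation
    of the vector magnitude from its epoch mean."""
    r = np.linalg.norm(acc_g, axis=1)
    return np.mean(np.abs(r - r.mean()))

# A stationary device reads a ~1 g resultant, so both metrics are ~0,
# which is why low thresholds can separate sedentary from light activity.
still = np.array([[0.0, 0.0, 1.0]] * 100)
```

Thresholding these per-epoch values against the derived cut-points is then a one-line comparison.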

  11. Large Deviations for Random Sums on Some Kind of Heavy-tailed Classes in Risk Models

    Institute of Scientific and Technical Information of China (English)

    孔繁超; 王金亮

    2006-01-01

    This paper is a further investigation into large deviations for random sums of heavy-tailed random variables; we extend and improve some results in refs. [1] and [2]. These results can be applied to some questions in insurance and finance.

  12. A local probability exponential inequality for the large deviation of an empirical process indexed by an unbounded class of functions and its application

    Institute of Scientific and Technical Information of China (English)

    2004-01-01


  13. Semantic Deviation in Oliver Twist

    Institute of Scientific and Technical Information of China (English)

    康艺凡

    2016-01-01

    Dickens, with his adeptness with language, applies semantic deviation skillfully in his realistic novel Oliver Twist. However, most studies and comments on it, at home and abroad, mainly focus on aspects such as humanity, society, and characters. This thesis therefore takes a stylistic approach to Oliver Twist from the perspective of semantic deviation, which is achieved through irony, hyperbole, and pun, and analyzes how the application of the technique makes the novel attractive.

  14. Dissociated Vertical Deviation

    Science.gov (United States)

    What is Dissociated Vertical Deviation (DVD)? DVD is a condition in which ...

  15. Automatic Fastening Large Structures: a New Approach

    Science.gov (United States)

    Lumley, D. F.

    1985-01-01

    The external tank (ET) intertank structure for the space shuttle, a 27.5 ft diameter, 22.5 ft long, externally stiffened, mechanically fastened skin-stringer-frame structure, was a labor-intensive manual structure built on a modified Saturn tooling position. A new approach was developed based on half-section subassemblies. The heart of this manufacturing approach will be a 33 ft high vertical automatic riveting system with a 28 ft rotary positioner coming on-line in mid-1985. The automatic riveting system incorporates many of the latest automatic riveting technologies. Key features include: vertical columns with two sets of independently operating CNC drill-riveting heads; the capability to drill, insert, and upset any one-piece fastener up to 3/8 inch diameter, including slugs, without displacing the workpiece; an offset bucking ram with programmable rotation and deep retraction; a vision system for automatic parts-program re-synchronization and part edge-margin control; and an automatic rivet selection/handling system.

  16. General Approach to Characterize Reservoir Fluids Using a Large PVT Database

    DEFF Research Database (Denmark)

    Varzandeh, Farhad; Yan, Wei; Stenby, Erling Halfdan

    2016-01-01

    ...The adjustment was made to minimize the deviation in key PVT properties such as saturation pressures, densities at reservoir temperature, and Stock Tank Oil (STO) densities, while keeping the n-alkane limit of the correlations unchanged. As an improvement of a previously suggested characterization method... We proposed a general approach to developing correlations for model parameters and applied it to the characterization of the PC-SAFT EoS. The approach consists of first developing the correlations based on the DIPPR database, and then adjusting the correlations based on a large PVT database...

  17. A Survey on Delay-Aware Resource Control for Wireless Systems --- Large Deviation Theory, Stochastic Lyapunov Drift and Distributed Stochastic Learning

    CERN Document Server

    Cui, Ying; Wang, Rui; Huang, Huang; Zhang, Shunqing

    2011-01-01

    In this tutorial paper, a comprehensive survey is given on several major systematic approaches in dealing with delay-aware control problems, namely the equivalent rate constraint approach, the Lyapunov stability drift approach and the approximate Markov Decision Process (MDP) approach using stochastic learning. These approaches essentially embrace most of the existing literature regarding delay-aware resource control in wireless systems. They have their relative pros and cons in terms of performance, complexity and implementation issues. For each of the approaches, the problem setup, the general solution and the design methodology are discussed. Applications of these approaches to delay-aware resource allocation are illustrated with examples in single-hop wireless networks. Furthermore, recent results regarding delay-aware multi-hop routing designs in general multi-hop networks are elaborated. Finally, the delay performance of the various approaches are compared through simulations using an example of the upl...

  18. An iteratively reweighted least-squares approach to adaptive robust adjustment of parameters in linear regression models with autoregressive and t-distributed deviations

    Science.gov (United States)

    Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza

    2017-09-01

    In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
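
    The reweighting step at the heart of such an algorithm is compact. A simplified sketch, assuming NumPy, i.i.d. errors, and fixed degrees of freedom (the function name is ours; the paper's ECME algorithm additionally estimates the AR coefficients, the scale, and the degrees of freedom):

```python
import numpy as np

def irls_t(A, y, nu=4.0, n_iter=50):
    """Iteratively reweighted least squares for y = A x + e, where the
    errors e follow a scaled t-distribution with nu degrees of freedom.
    Observations with large residuals receive small weights, giving a
    robust fit in the presence of outliers."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # OLS start
    sigma2 = np.mean((y - A @ x) ** 2)
    for _ in range(n_iter):
        r = y - A @ x
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)       # t-model weights
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
        sigma2 = np.sum(w * (y - A @ x) ** 2) / len(y)
    return x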

  19. Some Large Deviation Results for Generalized Compound Binomial Risk Models

    Institute of Scientific and Technical Information of China (English)

    孔繁超; 赵朋

    2009-01-01

    This paper is a further investigation of large deviations for partial and random sums of random variables, where {X_n, n ≥ 1} are non-negative, independent, identically distributed random variables with a common heavy-tailed distribution function F on the real line R and finite mean μ ∈ R; {N(n), n ≥ 0} is a binomial process with parameter p ∈ (0, 1), independent of {X_n, n ≥ 1}; {M(n), n ≥ 0} is a Poisson process with intensity λ > 0; and S_n = Σ_{i=1}^{N(n)} X_i − cM(n). Supposing F ∈ C, we further extend and improve some large deviation results. These results can be applied to certain problems in insurance and finance.
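
    Heavy-tailed large deviation asymptotics of this kind rest on the "single big jump" principle: a centred sum exceeds a high level essentially only when one summand does. A Monte Carlo sketch with Pareto claims, assuming NumPy (this illustrates the plain i.i.d. sum, not the paper's binomial/Poisson-modulated model):

```python
import numpy as np

def tail_vs_single_jump(n=50, x=200.0, alpha=1.5, n_sims=100_000, seed=0):
    """Compare P(S_n - E S_n > x) with n * P(X_1 > x) for i.i.d.
    Pareto(alpha) claims on [1, inf). For subexponential claim sizes the
    two agree as x grows: the 'single big jump' heuristic."""
    rng = np.random.default_rng(seed)
    # rng.pareto draws Lomax samples; +1 gives classical Pareto on [1, inf)
    X = rng.pareto(alpha, size=(n_sims, n)) + 1.0
    mean = alpha / (alpha - 1.0)                  # E X_1 for this Pareto
    lhs = np.mean(X.sum(axis=1) - n * mean > x)   # P(S_n - E S_n > x)
    rhs = n * x ** (-alpha)                       # n * P(X_1 > x)
    return lhs, rhs
```

At moderate levels the two quantities already agree to within a modest factor, and the agreement sharpens as x increases.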

  20. A novel approach for evaluating acceptable intra-operative correction of lower limb alignment in femoral and tibial malunion using the deviation angle of the normal contralateral knee.

    Science.gov (United States)

    Wu, Chi-Chuan

    2014-03-01

    A simple and appropriate approach for evaluating an acceptable alignment of bone around the knee during operation has not yet been reported. Thirty-five men and 35 women presenting with nonunion or malunion of the unilateral femoral shaft were included in the first study. Using the standing scanograph, the contralateral normal lower extremity was measured to determine the normal deviation angle (DA) of the medial malleolus when the medial aspect of the knee was placed in the midline of the body. In the second study, the normal DA from individual patients was used as a reference to evaluate knee alignment during operation in 40 other patients presenting with distal femoral or proximal tibial nonunion or malunion. The clinical and knee functional outcomes of these 40 patients were investigated. The average normal DA was 4.2° in men and 6.0° in women (p<0.001). Thirty-four of the 40 patients presenting with disorders around the knee were followed up for an average of 3.6 years (range, 1.1-6.5 years). Thirty fractures healed, with a union rate of 88% and an average union period of 4.2 months (range, 2.5-6.5 months). Ideal knee alignment was maintained in all 30 patients with fracture union. Satisfactory function of the knee was achieved in 28 patients (82%, p<0.001). Using a normal DA as a reference may be a feasible and effective technique for evaluating an acceptable alignment of bone around the knee during operation. Level IV, Case series.

  1. Segmentation Using Symmetry Deviation

    DEFF Research Database (Denmark)

    Hollensen, Christian; Højgaard, L.; Specht, L.

    2011-01-01

    and evaluate the method. The method uses deformable registration on computed tomography (CT) to find anatomical symmetry deviations of Head & Neck squamous cell carcinoma, combining it with positron emission tomography (PET) images. The method allows the use of the anatomical and symmetry information of CT scans... to improve automatic delineations. Materials: PET/CT scans from 30 patients were used for this study, 20 without cancer in the hypopharyngeal volume and 10 with hypopharyngeal carcinoma. A head and neck atlas was created from the 20 normal patients. The atlas was created using affine and non-rigid registration... of the CT scans into a single atlas. Afterwards, the standard deviation of anatomical symmetry for the 20 normal patients was evaluated using non-rigid registration and registered onto the atlas to create an atlas of normal anatomical symmetry deviation. The same non-rigid registration was used on the 10...

  2. [Management of large marine ecosystem based on ecosystem approach].

    Science.gov (United States)

    Chu, Jian-song

    2011-09-01

    A large marine ecosystem (LME) is a large area of ocean characterized by distinct oceanography and ecology. Its natural characteristics require management based on an ecosystem approach. A series of international treaties and regulations directly or indirectly support adopting an ecosystem approach to manage LMEs so as to achieve the sustainable utilization of marine resources. In practice, countries such as Canada, Australia, and the U.S.A. have adopted an ecosystem-based approach to manage their oceans, and international organizations such as the Global Environment Facility have carried out a number of LME programs based on the ecosystem approach. Aiming at the sustainable development of their fisheries, regional organizations such as the Caribbean Community have established regional fisheries mechanisms. However, adopting an ecosystem approach to manage LMEs is not only a scientific and legal issue, but also a political matter, depending largely on the political will and the degree of mutual cooperation of the countries concerned.

  3. Extended Precise Large Deviations of Random Sums in the Presence of Dominated Variation END Structure

    Institute of Scientific and Technical Information of China (English)

    胡怡玉; 何基娇; 周之寒

    2014-01-01

    The precise large deviations of random sums of heavy-tailed random variables are an important research topic in modern insurance and finance. Let the claims be a sequence of END, identically distributed, real-valued random variables with dominatedly varying tails, and let the claim-arrival process be a general counting process independent of the claim sequence. Under weaker conditions, the extended precise large deviation results for consistently varying tails obtained in reference [5] are extended to dominatedly varying tails, and the corresponding large deviation result for the deviation from the mean is obtained.

  4. Structure of deviations from optimality in biological systems.

    Science.gov (United States)

    Pérez-Escudero, Alfonso; Rivera-Alba, Marta; de Polavieja, Gonzalo G

    2009-12-01

    Optimization theory has been used to analyze evolutionary adaptation. This theory has explained many features of biological systems, from the genetic code to animal behavior. However, these systems show important deviations from optimality. Typically, these deviations are large in some particular components of the system, whereas others seem to be almost optimal. Deviations from optimality may be due to many factors in evolution, including stochastic effects and finite time, that may not allow the system to reach the ideal optimum. However, we still expect the system to have a higher probability of reaching a state with a higher value of the proposed indirect measure of fitness. In systems of many components, this implies that the largest deviations are expected in those components with less impact on the indirect measure of fitness. Here, we show that this simple probabilistic rule explains deviations from optimality in two very different biological systems. In Caenorhabditis elegans, this rule successfully explains the experimental deviations of the position of neurons from the configuration of minimal wiring cost. In Escherichia coli, the probabilistic rule correctly obtains the structure of the experimental deviations of metabolic fluxes from the configuration that maximizes biomass production. This approach is proposed to explain or predict more data than optimization theory while using no extra parameters. Thus, it can also be used to find and refine hypotheses about which constraints have shaped biological structures in evolution.

  5. Why do TD-DFT excitation energies of BODIPY/Aza-BODIPY families largely deviate from experiment? Answers from electron correlated and multireference methods.

    Science.gov (United States)

    Momeni, Mohammad R; Brown, Alex

    2015-06-09

    The vertical excitation energies of 17 boron-dipyrromethene (BODIPY) core structures with a variety of substituents and ring sizes are benchmarked using time-dependent density functional theory (TD-DFT) with nine different functionals combined with the cc-pVTZ basis set. When compared to experimental measurements, all functionals provide mean absolute errors (mean AEs) greater than 0.3 eV, larger than the 0.1-0.3 eV differences typically expected from TD-DFT. Due to the high linear correlation of TD-DFT results with experiment, most functionals can be used to predict excitation energies if corrected empirically. Using the CAM-B3LYP functional, 0-0 transition energies are determined, and while the absolute difference is improved (mean AE = 0.478 eV compared to 0.579 eV), the correlation diminishes substantially (R(2) = 0.961 to 0.862). Two very recently introduced charge transfer (CT) indices, q(CT) and d(CT), and electron density difference (EDD) plots demonstrate that CT does not play a significant role for most of the BODIPYs examined and, thus, cannot be the source of error in TD-DFT. To assess TD-DFT methods, vertical excitation energies are determined utilizing TD-HF, configuration interaction CIS and CIS(D), equation of motion EOM-CCSD, SAC-CI, and Laplace-transform based local coupled-cluster singles and approximate doubles LCC2* methods. Moreover, multireference CASSCF and CASPT2 vertical excitation energies were also obtained for all species (except CASPT2 was not feasible for the four largest systems). The SAC-CI/cc-pVDZ, LCC2*/cc-pVDZ, and CASPT2/cc-pVDZ approaches are shown to have the smallest mean AEs of 0.154, 0.109, and 0.100 eV, respectively; the utility of the LCC2* approach is demonstrated for eight extended BODIPYs and aza-BODIPYs. 
We found that the problems with TD-DFT arise from difficulties in dealing with the differential electron correlation (as assessed by comparing CCS, CC2, LR-CCSD, CCSDR(T), and CCSDR(3) vertical excitation energies for
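
    The empirical correction mentioned above is just a least-squares line mapping computed onto experimental energies. A sketch with made-up numbers, assuming NumPy (not the paper's actual data or functionals):

```python
import numpy as np

def empirical_correction(e_calc, e_exp):
    """Fit E_exp ~ a * E_calc + b by least squares and report R^2.
    With a high R^2, systematically shifted TD-DFT excitation energies
    can still be used predictively after this linear correction."""
    e_calc = np.asarray(e_calc, dtype=float)
    e_exp = np.asarray(e_exp, dtype=float)
    a, b = np.polyfit(e_calc, e_exp, 1)
    pred = a * e_calc + b
    ss_res = np.sum((e_exp - pred) ** 2)
    ss_tot = np.sum((e_exp - e_exp.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot
```

A slope below one with R² near unity is exactly the situation described for most functionals: large absolute errors, yet excellent linear correlation.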

  6. Testing deviations from $\Lambda$CDM with growth rate measurements from 6 Large Scale Structure Surveys at z = 0.06 to 1

    CERN Document Server

    Alam, Shadab; Silvestri, Alessandra

    2015-01-01

    We use measurements from the Planck satellite mission and galaxy redshift surveys over the last decade to test three of the basic assumptions of the standard model of cosmology, $\Lambda$CDM: the spatial curvature of the universe, the nature of dark energy and the laws of gravity on large scales. We obtain improved constraints on several scenarios that violate one or more of these assumptions. We measure $w_0=-0.94\pm0.17$ (18\% measurement) and $1+w_a=1.16\pm0.36$ (31\% measurement) for models with a time-dependent equation of state, which is an improvement over current best constraints (Aubourg et al. 2014). In the context of modified gravity, we consider popular scalar tensor models as well as a parametrization of the growth factor. In the case of one-parameter $f(R)$ gravity models with a $\Lambda$CDM background, we constrain $B_0 < 1.36 \times 10^{-5}$ (1$\sigma$ C.L.), which is an improvement by a factor of 4 on the current best (Xu et al. 2015). We provide the very first constraint on the coupling para...

  7. Impact of different setup approaches in image-guided radiotherapy as primary treatment for prostate cancer. A study of 2940 setup deviations in 980 MVCTs

    Energy Technology Data Exchange (ETDEWEB)

    Schiller, Kilian; Specht, Hanno; Kampfer, Severin; Duma, Marciana Nona [Technische Universitaet Muenchen Klinikum rechts der Isar, Department of Radiation Oncology, Muenchen (Germany); Petrucci, Alessia [University of Florence, Department of Radiation Oncology, Florence (Italy); Geinitz, Hans [Krankenhaus der Barmherzigen Schwestern Linz, Department of Radiation Oncology, Linz (Austria); Schuster, Tibor [Klinikum Rechts der Isar, Technische Universitaet Muenchen, Institute for Medical Statistics and Epidemiology, Muenchen (Germany)

    2014-08-15

    The goal of this study was to assess the impact of different setup approaches in image-guided radiotherapy (IGRT) of the prostatic gland. In all, 28 patients with prostate cancer were enrolled in this study. After the placement of an endorectal balloon, the planning target volume (PTV) was treated to a dose of 70 Gy in 35 fractions. A simultaneously integrated boost (SIB) of 76 Gy (2.17 Gy per fraction and per day) was delivered to a smaller target volume. All patients underwent daily prostate-aligned IGRT by megavoltage CT (MVCT). Retrospectively, three different setup approaches were evaluated by comparison to the prostate alignment: setup by skin marks, endorectal balloon alignment, and automatic registration by bones. A total of 2,940 setup deviations were analyzed in 980 fractions. Compared to prostate alignment, skin mark alignment was associated with substantial displacements, which were ≥ 8 mm in 13 %, 5 %, and 44 % of all fractions in the lateral, longitudinal, and vertical directions, respectively. Endorectal balloon alignment yielded displacements ≥ 8 mm in 3 %, 19 %, and 1 % of all setups, and ≥ 3 mm in 27 %, 58 %, and 18 % of all fractions, respectively. For bone matching, the corresponding values were 1 %, 1 %, and 2 %, and 3 %, 11 %, and 34 %, respectively. For prostate radiotherapy, setup by skin marks alone is inappropriate for patient positioning, because during almost half of the fractions parts of the prostate would not be targeted successfully with an 8-mm safety margin. Bone matching performs better, but not sufficiently for safety margins ≤ 3 mm. Endorectal balloon matching can be combined with bone alignment to increase accuracy in the vertical direction when prostate-based setup is not available. Daily prostate alignment remains the gold standard for high-precision radiotherapy with small safety margins. (orig.)
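
    The per-axis threshold statistics reported above reduce to a simple count over the recorded deviations. A sketch assuming NumPy, with made-up deviations (the function name and array layout are our own convention):

```python
import numpy as np

def displacement_report(deviations_mm, thresholds=(3.0, 8.0)):
    """Fraction of fractions whose absolute setup deviation from the
    prostate-aligned reference meets or exceeds each threshold, per axis.
    `deviations_mm` has shape (n_fractions, 3): lateral, longitudinal,
    vertical."""
    d = np.abs(np.asarray(deviations_mm, dtype=float))
    return {t: (d >= t).mean(axis=0) for t in thresholds}
```

Applied to the 980 MVCT-derived deviation triplets per setup approach, this yields exactly the percentage tables quoted in the abstract.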

  8. The approach curve method for large anode-cathode distances

    Energy Technology Data Exchange (ETDEWEB)

    Mammana, Victor P.; Monteiro, Othon R.; Fonseca, Leo R.C.

    2003-09-20

    An important technique used to characterize field emission is the measurement of the emitted current against electric field (IxE). In this work we discuss a procedure for obtaining IxE data based on multiple approach curves. We show that the simulated features obtained for an idealized uniform surface match available experimental data for small anode-cathode distances, while for large distances the simulation predicts a departure from the linear regime. We also discuss the shape of the approach curves at large anode-cathode distances for a cathode made of carbon nanotubes.

  9. LIMSUP DEVIATIONS ON TREES

    Institute of Scientific and Technical Information of China (English)

    Fan Aihua

    2004-01-01

    The vertices of an infinite, locally finite tree T are labelled by a collection of i.i.d. real random variables {X_σ}_{σ∈T}, which defines a tree-indexed walk S_σ = Σ_{θ<r≤σ} X_r. We introduce and study the oscillations of the walk; the exact Hausdorff dimension of the set of boundary points ξ exhibiting a given oscillation is calculated. An application is given to the study of the local variation of Brownian motion. A general limsup deviation problem on trees is also studied.

  10. Large Deviations for Processes with Independent Increments.

    Science.gov (United States)

    1984-10-01

    ... generating function of the increments exists, and thus the sample paths of such stochastic processes lie in the space of functions of bounded variation. ... BV[0,1], the space of functions of bounded variation, with the topology of weak*-convergence. Varadhan (1966) studied the LDP for similar ... increments and no Gaussian component, which are considered as elements of BV[0,1], the space of functions of bounded variation. The final section ...

  11. Chaotic Hypothesis and Universal Large Deviations Properties

    CERN Document Server

    Gallavotti, G

    1998-01-01

    Chaotic systems arise naturally in statistical mechanics and in fluid dynamics. A paradigm for their modelling is smooth hyperbolic systems. Are there consequences that can be drawn simply by assuming that a system is hyperbolic? Here we present a few model-independent general consequences which may have some relevance for the physics of chaotic systems. Expanded version of a talk at ICM98, Berlin.

  12. A modified SPH approach for fluids with large density differences

    CERN Document Server

    Ott, F; Ott, Frank; Schnetter, Erik

    2003-01-01

    We introduce a modified SPH approach that is based on discretising the particle density instead of the mass density. This approach makes it possible to use SPH particles with very different masses to simulate multi-phase flows with large differences in mass density between the phases. We test our formulation with a simple advection problem, with sound waves encountering a density discontinuity, and with shock tubes containing an interface between air and Diesel oil. For all examined problems where particles have different masses, the new formulation yields better results than standard SPH, even in the case of a single-phase flow.
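
    The modification can be illustrated in one dimension. A minimal sketch assuming NumPy (kernel and function names are ours), contrasting the standard mass-density sum with the number-density variant described above:

```python
import numpy as np

def cubic_kernel(r, h):
    """Standard cubic spline SPH kernel in 1D (normalised, support 2h)."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def densities(x, m, h):
    """Standard SPH mass-density sum rho_i = sum_j m_j W_ij, versus the
    modified scheme, which discretises the particle number density
    n_i = sum_j W_ij and sets rho_i = m_i * n_i, so that particles of
    very different masses can coexist at a density discontinuity."""
    dx = x[:, None] - x[None, :]
    W = cubic_kernel(dx, h)
    rho_std = W @ m          # smears mass density across the interface
    n = W.sum(axis=1)        # particle number density
    rho_mod = m * n          # keeps the density jump sharp
    return rho_std, rho_mod
```

With equally spaced particles whose masses jump by a factor of 1000 at an interface, the standard sum produces spurious intermediate densities near the jump, while the number-density variant recovers each phase's density right up to the discontinuity.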

  13. A Large Deviation Principle of Capacity for the Markov Process Modulated by the Stochastic Evolution Equation

    Institute of Scientific and Technical Information of China (English)

    马小翠

    2011-01-01

    In this paper, we discuss a large deviation principle of capacity for {(X^ε(t), Z^ε(t)); ε > 0, t ∈ [0,T]}, where X^ε(t) satisfies the stochastic differential equation dX^ε(t) = √ε σ(t) dw(t) + b(X^ε(t), Z^ε(t)) dt, and Z^ε(t) is a random process with finitely many states.

  14. A Technical Approach on Large Data Distributed Over a Network

    Directory of Open Access Journals (Sweden)

    Suhasini G

    2011-12-01

    Full Text Available Data mining is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. For a database with a number of records and a set of classes such that each record belongs to one of the given classes, the problem of classification is to decide the class to which a given record belongs. The classification problem is also to generate a model for each class from a given data set. We make use of supervised classification, in which we have a training dataset of records, and for each record the class to which it belongs is known. There are many approaches to supervised classification. Decision trees are attractive in a data mining environment as they represent rules. Rules can readily be expressed in natural language, and they can even be mapped to database access languages. Classification based on decision trees is one of the important problems in data mining and has applications in many areas. Nowadays, database systems have become highly distributed and use many paradigms. We consider the problem of inducing decision trees in a large distributed network of highly distributed databases. Classification based on decision trees can be applied to the distributed databases existing in healthcare, bioinformatics, and human-computer interaction, and these databases are soon expected to contain large amounts of data characterized by high dimensionality. Current decision tree algorithms would require high communication bandwidth and memory; they are less efficient, and their scalability is reduced when executed on such large volumes of data. Approaches are therefore being developed to improve scalability and to analyse data distributed over a network. [Keywords: data mining, decision tree, decision tree induction, distributed data, classification]
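
    The node-splitting step that such distributed algorithms must coordinate is the classic information-gain computation. A minimal centralised sketch for categorical attributes, using only the Python standard library (the function names are ours; real distributed systems aggregate these counts across the network instead of centralising the records):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(records, labels):
    """Choose the attribute index with the highest information gain,
    the core step repeated at every node of decision tree induction."""
    base = entropy(labels)
    best, best_gain = None, -1.0
    for a in range(len(records[0])):
        groups = {}
        for rec, lab in zip(records, labels):
            groups.setdefault(rec[a], []).append(lab)
        remainder = sum(len(g) / len(labels) * entropy(g)
                        for g in groups.values())
        gain = base - remainder
        if gain > best_gain:
            best, best_gain = a, gain
    return best, best_gain
```

In a distributed setting, only the per-value label counts need to travel over the network, which is the bandwidth saving the surveyed approaches exploit.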

  15. Chemical Abstracts Service approach to management of large data bases.

    Science.gov (United States)

    Huffenberger, M A; Wigington, R L

    1975-02-01

    When information handling is "the business," as it is at Chemical Abstracts Service (CAS), the total organization must be involved in information management. Since 1967, when, as a result of long-range planning efforts, CAS adopted a "data-base approach" to management of both the processing system and the distribution of information files, CAS has been grappling with the problems of managing large collections of information in computer-based systems. This paper describes what has been done at CAS in the management of large files and what we see, as a result of our experience, as necessary to improve and complete the information management system that is the foundation of our production processes.

  16. A Formal Approach for Agent Based Large Concurrent Intelligent Systems

    CERN Document Server

    Chaudhary, Ankit

    2011-01-01

    Large intelligent systems are now so complex that there is an urgent need to design them in the best available way. Modeling is a useful technique for representing a complex real-world system as an abstraction, so that analysis and implementation of the intelligent system become easier; it is also useful for gathering prior knowledge of a system when experimenting with the real-world complex system is not possible. This paper discusses a formal approach to agent-based modeling of large intelligent systems, which describes design-level precautions, challenges, and techniques using autonomous agents as its fundamental modeling abstraction. We discuss an ad-hoc network system as a case study, in which we use mobile agents and nodes are free to relocate as they form an intelligent system. The design is very critical in this scenario, and it can reduce the whole cost, time duration, and risk involved in the project.

  17. Surgical removal of large central neurocytomas with small incision approach

    Directory of Open Access Journals (Sweden)

    Shu-mao LU

    2014-01-01

    Full Text Available Objective To investigate the strategy and technique of small incision surgery through the interhemispheric transcallosal approach for removal of large central neurocytomas in the supratentorial ventricle. Methods The clinical data and treatment of 6 cases of central neurocytoma were retrospectively studied. All tumors were removed through a small incision interhemispheric transcallosal approach, and the clinical data were analyzed. Results Total resection was achieved in all cases. Three cases experienced transient mutism and one case experienced hemiparalysis. All of them received nerve-nurturing treatment and recovered within 2 weeks. Five cases were followed up for 6 months to 2 years, with no recurrence. Conclusions The advantages of the interhemispheric transcallosal approach include provision of a sufficient surgical visual field and space, protection of normal brain tissue by a natural cavity, and the shortest surgical pathway. Small incision surgery may not only reduce invalid brain exposure and hemorrhage during the operation, but also decrease operation time. Small incision surgery through the interhemispheric transcallosal approach is an effective choice for removal of central neurocytomas involving the supratentorial ventricle.

  18. Adiabatic hyperspherical approach to large-scale nuclear dynamics

    CERN Document Server

    Suzuki, Yasuyuki

    2015-01-01

    We formulate a fully microscopic approach to large-scale nuclear dynamics using a hyperradius as a collective coordinate. An adiabatic potential is defined by taking account of all possible configurations at a fixed hyperradius, and its hyperradius dependence plays a key role in governing the global nuclear motion. In order to go to larger systems beyond few-body systems, we suggest basis functions of a microscopic multicluster model, propose a method for calculating matrix elements of an adiabatic Hamiltonian with use of Fourier transforms, and test its effectiveness.

  19. Approaches for Scaling DBSCAN Algorithm to Large Spatial Databases

    Institute of Scientific and Technical Information of China (English)

    周傲英; 周水庚; 曹晶; 范晔; 胡运发

    2000-01-01

    The huge amount of information stored in databases owned by corporations (e.g., retail, financial, telecom) has spurred tremendous interest in the area of knowledge discovery and data mining. Clustering, in data mining, is a useful technique for discovering interesting data distributions and patterns in the underlying data, and has many application fields, such as statistical data analysis, pattern recognition, image processing, and other business applications. Although researchers have been working on clustering algorithms for decades, and a lot of algorithms for clustering have been developed, there is still no efficient algorithm for clustering very large databases and high dimensional data. As an outstanding representative of clustering algorithms, the DBSCAN algorithm shows good performance in spatial data clustering. However, for large spatial databases, DBSCAN requires a large amount of memory and can incur substantial I/O costs because it operates directly on the entire database. In this paper, several approaches are proposed to scale the DBSCAN algorithm to large spatial databases. To begin with, a fast DBSCAN algorithm is developed, which considerably speeds up the original DBSCAN algorithm. Then a sampling-based DBSCAN algorithm, a partitioning-based DBSCAN algorithm, and a parallel DBSCAN algorithm are introduced consecutively. Following that, based on the above-proposed algorithms, a synthetic algorithm is also given. Finally, experimental results are given to demonstrate the effectiveness and efficiency of these algorithms.
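The density-based idea at the core of DBSCAN can be sketched with a naive in-memory implementation; its O(n²) neighbor search and whole-database scan are exactly the costs that the sampling, partitioning, and parallel variants above are designed to avoid:

```python
# Naive in-memory DBSCAN sketch (O(n^2) neighbor queries), for illustration only.
def dbscan(points, eps=1.0, min_pts=3):
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)  # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1             # not a core point: tentatively noise
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:                   # expand the cluster from core points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster    # noise reachable from a core point: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:     # only core points propagate the cluster
                queue.extend(nj)
        cluster += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=3))  # two dense clusters plus one noise point
```

Every point calls `neighbors` over the whole dataset, which is why DBSCAN on a large spatial database needs index support (e.g. an R*-tree) or the scaling strategies the paper proposes.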

  20. An Analysis of the Linguistic Deviation in Chapter X of Oliver Twist

    Institute of Scientific and Technical Information of China (English)

    刘聪

    2013-01-01

    Charles Dickens is one of the greatest critical realist writers of the Victorian Age. In language, he is often compared with William Shakespeare for his adeptness with the vernacular and his large vocabulary. Charles Dickens achieved a recognizable place among English writers through the use of stylistic features in his fictional language. Oliver Twist is the best representative of Charles Dickens's style, which makes it the most appropriate choice for the present stylistic study of Charles Dickens. No one who has ever read the dehumanizing workhouse scenes of Oliver Twist and its dark, criminal underworld life can forget them. This thesis attempts to investigate Oliver Twist through the approach of modern stylistics, particularly the theory of linguistic deviation. The thesis consists of an introduction, the main body, and a conclusion. The introduction offers a brief summary of the comments on Charles Dickens and Chapter X of Oliver Twist, introduces the newly arisen theories of linguistic deviation, and identifies the theories on which this thesis settles. The main body explores the deviation effects produced from four aspects: lexical deviation, grammatical deviation, graphological deviation, and semantic deviation. It endeavors to show Dickens's manipulation of language and the effects achieved through this manipulation. The conclusion mainly sums up the previous analysis and reveals the theme of the novel, the positive effect of linguistic deviation, and the significance of deviation application.

  1. Efficient Graph Based Approach to Large Scale Role Engineering

    Directory of Open Access Journals (Sweden)

    Dana Zhang

    2014-04-01

    Full Text Available Role engineering is the process of defining a set of roles that offer administrative benefit for Role Based Access Control (RBAC), which ensures data privacy. It is a business-critical task required by enterprises wishing to migrate to RBAC. However, existing methods of role generation have not analysed what constitutes a beneficial role and, as a result, often produce inadequate solutions in a time-consuming manner. To address the urgent issue of identifying high quality RBAC structures in real enterprise environments, we present a cost-based analysis of the problem for both flat and hierarchical RBAC structures. Specifically, we propose two cost models to evaluate the administration cost of roles and provide a k-partite graph approach to role engineering. Existing role cost evaluations are approximations that overestimate the benefit of a role. Our method and cost models provide exact role cost and show when existing role cost evaluations can be used as a lower bound to improve efficiency without affecting the quality of results. In the first work to address role engineering using large scale real data sets, we propose RoleAnnealing, a fast solution space search algorithm with incremental computation and guided search space heuristics. Our experimental results on both real and synthetic data sets demonstrate that high quality RBAC configurations that maintain data privacy are identified efficiently by RoleAnnealing. Comparison with an existing approach shows RoleAnnealing is significantly faster and produces RBAC configurations with lower cost.
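The annealing idea behind a RoleAnnealing-style search can be illustrated with generic simulated annealing over a toy user-permission matrix. The cost model, neighborhood moves, and data below are hypothetical stand-ins of ours, not the paper's cost models or guided heuristics:

```python
# Illustrative simulated-annealing search over candidate role sets.
# Toy cost model (role count + per-user mismatch of the best-covering role);
# the paper's exact administration-cost models are not reproduced here.
import math
import random

random.seed(0)

# User-permission assignment matrix: rows are users, columns are permissions.
upa = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]

def cost(roles):
    """Number of roles plus, for each user, the mismatch of the single
    best-covering role (a deliberate simplification)."""
    total = len(roles)
    for row in upa:
        total += min(sum(r != u for r, u in zip(role, row)) for role in roles)
    return total

def neighbor(roles):
    """Flip one permission bit in one role."""
    new = [list(r) for r in roles]
    r = random.randrange(len(new))
    new[r][random.randrange(len(new[r]))] ^= 1
    return new

roles = [[random.randint(0, 1) for _ in range(4)] for _ in range(2)]
t = 2.0
best_c = cost(roles)
for _ in range(2000):
    cand = neighbor(roles)
    delta = cost(cand) - cost(roles)
    if delta <= 0 or random.random() < math.exp(-delta / t):
        roles = cand                   # accept downhill moves always, uphill
    best_c = min(best_c, cost(roles))  # moves with probability exp(-delta/t)
    t *= 0.995                         # geometric cooling schedule
print("best administration cost found:", best_c)
```

With this toy matrix the optimum is two roles matching the two permission patterns (cost 2); the annealer's uphill moves are what let it escape local minima that a pure greedy search would get stuck in.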

  2. Tailoring approach for obtaining molecular orbitals of large systems

    Indian Academy of Sciences (India)

    Anuja P Rahalkar; Shridhar R Gadre

    2012-01-01

    Molecular orbitals (MO's) within Hartree-Fock (HF) theory are of vital importance, as they provide preliminary information on bonding and on features such as electron localization and chemical reactivity. The contemporary literature treats the Kohn-Sham orbitals within density functional theory (DFT) as equivalent to the MO's obtained within the HF framework. The high scaling order of ab initio methods is the main hurdle in obtaining the MO's for large molecular systems. With this view, an attempt is made in the present work to employ the molecular tailoring approach (MTA) for obtaining the complete set of MO's, including occupied and virtual orbitals, for large molecules at the HF and B3LYP levels of theory. The energies of the highest occupied and lowest unoccupied molecular orbitals, and hence the band gaps, are accurately estimated by MTA for most of the test cases benchmarked in this study, which include π-conjugated molecules. Typically, the root mean square errors of the valence MO's are in the range of 0.001 to 0.010 a.u. for all the test cases examined. MTA shows a time-advantage factor of 2 to 3 over the corresponding actual calculation for many of the systems reported.

  3. A practical and automated approach to large area forest disturbance mapping with remote sensing.

    Science.gov (United States)

    Ozdogan, Mutlu

    2014-01-01

    In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. 
Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for forest cover
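Step ii) of the procedure, flagging training pixels that lie beyond a given number of standard deviations from local-window statistics, might look like the following sketch. The synthetic image, window size, threshold k, and the use of robust median/MAD statistics are illustrative choices of ours, not the paper's calibration:

```python
# Sketch of step ii): flag candidate training pixels whose SWIR-difference
# value lies more than k (robust) standard deviations from the local-window
# center. Synthetic data; parameters are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
diff = rng.normal(0.0, 1.0, size=(60, 60))   # synthetic SWIR difference image
diff[20:25, 30:35] += 6.0                    # implant a "disturbed" patch

k, w = 3.0, 15                               # threshold (in sigmas), window size
train_mask = np.zeros_like(diff, dtype=bool)
for i0 in range(0, diff.shape[0], w):
    for j0 in range(0, diff.shape[1], w):
        win = diff[i0:i0 + w, j0:j0 + w]
        mu = np.median(win)                        # robust local center
        sd = 1.4826 * np.median(np.abs(win - mu))  # robust sigma via the MAD
        train_mask[i0:i0 + w, j0:j0 + w] = np.abs(win - mu) > k * sd

print(int(train_mask.sum()), "candidate training pixels flagged")
```

The flagged pixels would then go through step iii) (cross-validated filtering of mislabeled samples) before training the final supervised classifier.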

  4. A practical and automated approach to large area forest disturbance mapping with remote sensing.

    Directory of Open Access Journals (Sweden)

    Mutlu Ozdogan

    Full Text Available In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: i) creating masks for water, non-forested areas, clouds, and cloud shadows; ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of mis-classification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. 
Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for

  5. Innovative design approaches for large wind turbine blades : final report.

    Energy Technology Data Exchange (ETDEWEB)

    2004-05-01

    The goal of the Blade System Design Study (BSDS) was investigation and evaluation of design and manufacturing issues for wind turbine blades in the one to ten megawatt size range. A series of analysis tasks were completed in support of the design effort. We began with a parametric scaling study to assess blade structure using current technology. This was followed by an economic study of the cost to manufacture, transport and install large blades. Subsequently we identified several innovative design approaches that showed potential for overcoming fundamental physical and manufacturing constraints. The final stage of the project was used to develop several preliminary 50m blade designs. The key design impacts identified in this study are: (1) blade cross-sections, (2) alternative materials, (3) IEC design class, and (4) root attachment. The results show that thick blade cross-sections can provide a large reduction in blade weight, while maintaining high aerodynamic performance. Increasing blade thickness for inboard sections is a key method for improving structural efficiency and reducing blade weight. Carbon/glass hybrid blades were found to provide good improvements in blade weight, stiffness, and deflection when used in the main structural elements of the blade. The addition of carbon resulted in modest cost increases and provided significant benefits, particularly with respect to deflection. The change in design loads between IEC classes is quite significant. Optimized blades should be designed for each IEC design class. A significant portion of blade weight is related to the root buildup and metal hardware for typical root attachment designs. The results show that increasing the number of blade fasteners has a positive effect on total weight, because it reduces the required root laminate thickness.

  6. General approach to characterizing reservoir fluids for EoS models using a large PVT database

    DEFF Research Database (Denmark)

    Varzandeh, Farhad; Stenby, Erling Halfdan; Yan, Wei

    2017-01-01

    database, and then adjusting the correlations based on a large PVT database. The adjustment was made to minimize the deviation in key PVT properties like saturation pressures, densities at reservoir temperature and stock tank oil densities, while keeping the n-alkane limit of the correlations unchanged...

  7. Paroxysmal upgaze deviation: case report

    OpenAIRE

    Echeverría-Palacio CM; Benavidez-Fierro MA

    2012-01-01

    The paroxysmal upgaze deviation is a syndrome first described in infants in 1988; only about 50 cases have been reported worldwide since then. Its etiology is unclear and its prognosis is variable; most case reports indicate that during growth the episodes tend to decrease in frequency and duration until they disappear. We describe a 16-month-old male child who since the age of 11 months had presented many episodes of variable conjugate upward deviation of the eyes, compensatory neck...

  8. Angle-deviation optical profilometer

    Institute of Scientific and Technical Information of China (English)

    Chen-Tai Tan; Yuan-Sheng Chan; Zhen-Chin Lin; Ming-Hung Chiu

    2011-01-01

    We propose a new optical profilometer for three-dimensional (3D) surface profile measurement in real time. The deviation angle is based on geometrical optics and is proportional to the apex angle of a test plate. Measuring the reflectivity of a parallelogram prism allows detection of the deviation angle when the beam is incident near the critical angle. The reflectivity is inversely proportional to the deviation angle and proportional to the apex angle and surface height. We use a charge-coupled device (CCD) camera at the image plane to capture the reflectivity profile and obtain the 3D surface profile directly.

  9. The Dutch approach to the escape from large compartments

    NARCIS (Netherlands)

    Janse, E.W.; Leur, P.H.E. van de

    1999-01-01

    In the Netherlands, the building regulations have no design rules for large fire compartments (over 1000 m2). With respect to the ability of people to escape from a fire in such large spaces, the Centre for Fire Research of TNO Building and Construction Research has developed a guideline that integra

  10. Investigation on standard deviation of high strength concrete in field and trial of its prediction at relatively large age. Kokyodo concrete no kyodo zoshin ni tomonau baratsuki no henka oyobi sono yosoku ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Tomatsuri, K. (Taisei Corp., Tokyo (Japan))

    1991-10-30

    Concrete strength varies with material proportioning and age, and concrete is usually mixed and designed at a specified age in order to manifest the specified strength. Concerning high strength concrete with design strength over 360kg/cm{sup 2}, however, there is no clear provision for estimating the increase and deviation of strength in the case where either age or cumulative temperature varies. In this study, the strength and its distribution for standard-cured concrete and for concrete after a long period of time were measured and analyzed statistically for 14 kinds of high strength concrete with nominal strengths between 360 and 465kg/cm{sup 2} from three construction projects. On the observation that the ratio of concrete strengths at two different cumulative temperatures followed a normal distribution, a method to predict the strength distribution of concrete after a long period of time was presented. In this method, for instance, the use of such parameters as the standard deviation of strength at 28 days of age and a strength index makes it possible to predict the average strength and the standard deviation at different ages. 9 refs., 15 figs., 6 tabs.
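The prediction idea is only sketched in the abstract; as a loudly hypothetical illustration, if the strength ratio r between two cumulative temperatures is treated as normally distributed and independent of the 28-day strength, the later-age mean and standard deviation could be projected as follows (all numbers are invented, not the paper's data):

```python
# Hypothetical sketch of projecting later-age strength statistics from 28-day
# statistics via an assumed strength-ratio distribution. All numbers invented.
mu28, sd28 = 420.0, 25.0   # 28-day mean strength and standard deviation (kg/cm2)
mu_r, sd_r = 1.15, 0.05    # assumed mean and std dev of the strength ratio r

# For independent r and X: E[rX] = mu_r * mu_x, and (exactly)
# Var(rX) = mu_r^2*sd_x^2 + mu_x^2*sd_r^2 + sd_r^2*sd_x^2.
mu_pred = mu_r * mu28
sd_pred = (mu_r**2 * sd28**2 + mu28**2 * sd_r**2 + sd_r**2 * sd28**2) ** 0.5
print(f"predicted later-age mean {mu_pred:.0f} kg/cm2, std dev {sd_pred:.1f}")
```

Note that the projected standard deviation grows relative to the 28-day value, which is the qualitative behavior the paper's statistical analysis of field data addresses.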

  11. Drilling axial deviation mechanism and its control program for large-diameter blasthole rock-drilling

    Institute of Scientific and Technical Information of China (English)

    吴万荣; 魏建华; 张永顺; 杨襄璧

    2001-01-01

    According to an analysis of the influencing factors and mechanical characteristics of drilling axial deviation, a mechanical model was set up for the drill bit under offset load, thus revealing the mechanism of drilling axial deviation in the rock-drilling process, and a feeding-force control program was proposed for controlling the deviation. The experimental results show that the control program can make the feeding force of the propulsion change automatically with the rod weight and rock properties, so drilling axial deviation can be effectively controlled.

  12. Standard Deviation for Small Samples

    Science.gov (United States)

    Joarder, Anwar H.; Latif, Raja M.

    2006-01-01

    Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
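One classical "neater representation" in the spirit of this abstract (offered as an illustration of ours, not necessarily one of the authors') is the pairwise-difference identity s² = Σ_{i<j}(x_i − x_j)² / (n(n−1)), which avoids computing the mean and is easy by hand for small integer samples:

```python
# Sample variance as the average squared pairwise difference:
#   s^2 = sum_{i<j} (x_i - x_j)^2 / (n(n-1))
# (an illustrative identity; the paper's own representations may differ).
from itertools import combinations
from statistics import variance

def pairwise_variance(xs):
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

sample = [3, 7, 8, 10]
print(pairwise_variance(sample))  # matches statistics.variance(sample)
print(variance(sample))
```

For n = 3 or 4 with integer observations, the squared differences are small integers, so the variance (and hence an upper bound on the standard deviation) can be read off without a calculator.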

  13. PROBABILISTIC MEASURES FOR INTERESTINGNESS OF DEVIATIONS – A SURVEY

    Directory of Open Access Journals (Sweden)

    Adnan Masood

    2013-03-01

    Full Text Available Association rule mining has long been plagued with the problem of finding meaningful, actionable knowledge in the large set of rules. In this age of data deluge with modern computing capabilities, we gather, distribute, and store information in vast amounts from diverse data sources. With such data profusion, the core knowledge discovery problem becomes efficient data retrieval rather than simply finding heaps of information. The most common approach is to employ measures of rule interestingness to filter the results of the association rule generation process. However, a study of the literature suggests that interestingness is difficult to define quantitatively and can best be summarized as: a record or pattern is interesting if it suggests a change in an established model. Almost twenty years ago, Gregory Piatetsky-Shapiro and Christopher J. Matheus, in their paper, “The Interestingness of Deviations,” argued that deviations should be grouped together in a finding and that the interestingness of a finding is the estimated benefit from a possible action connected to it. Since then, this field has progressed and new data mining techniques have been introduced to address subjective, objective, and semantic interestingness measures. In this brief survey, we review the current state of the literature on the interestingness of deviations, i.e. outliers, with specific interest in probabilistic measures using Bayesian belief networks.

  14. Temporal Approach to Removal of a Large Orbital Foreign Body

    Science.gov (United States)

    de Morais, Hécio Henrique Araújo; Barbalho, Jimmy Charles Melo; de Souza Dias, Tasiana Guedes; Grempel, Rafael Grotta; Vasconcellos, Ricardo José de Holanda

    2014-01-01

    Accidents with firearms can result in extensive orbital trauma. Moreover, gun parts can come loose and impale the maxillofacial region. These injuries can cause loss of visual acuity and impair eye movements. Multidisciplinary treatment is required for injuries associated with this type of trauma. Computed tomography with three-dimensional reconstruction is useful for determining the precise location and size of the object lodged in the facial skeleton, thereby facilitating the planning of the correct surgical approach. The temporal approach is a fast, simple technique with few complications that is indicated for access to the infratemporal fossa. This article describes the use of the temporal approach on a firearm victim in whom the breech of a rifle had impaled the orbital region, with its extremity lodged in the infratemporal fossa. PMID:26269733

  15. Large amplitude motion with a stochastic mean-field approach

    Directory of Open Access Journals (Sweden)

    Yilmaz Bulent

    2012-12-01

    Full Text Available In the stochastic mean-field approach, an ensemble of initial conditions is considered in order to incorporate correlations beyond the mean field. Each starting point is then propagated separately using the time-dependent Hartree-Fock equation of motion. This approach provides a rather simple tool to describe fluctuations better than standard TDHF does. Several illustrations are presented showing that this theory can be rather effective in treating dynamics close to a quantum phase transition. Applications to fusion and transfer reactions demonstrate the great improvement in the description of mass dispersion.

  16. Paroxysmal upgaze deviation: case report

    Directory of Open Access Journals (Sweden)

    Echeverría-Palacio CM

    2012-05-01

    Full Text Available The paroxysmal upgaze deviation is a syndrome first described in infants in 1988; only about 50 cases have been reported worldwide since then. Its etiology is unclear and its prognosis is variable; most case reports indicate that during growth the episodes tend to decrease in frequency and duration until they disappear. We describe a 16-month-old male child who since the age of 11 months had presented many episodes of variable conjugate upward deviation of the eyes, compensatory neck flexion, and down-beat saccades on attempted downgaze. These events are predominantly diurnal, are exacerbated by stressful situations such as fasting or insomnia, and improve with sleep. Neurologic and ophthalmologic examinations were normal, and neuroimaging and EEG findings were unremarkable.

  17. Perception of aircraft Deviation Cues

    Science.gov (United States)

    Martin, Lynne; Azuma, Ronald; Fox, Jason; Verma, Savita; Lozito, Sandra

    2005-01-01

    To begin to address the need for new displays, required by a future airspace concept to support new roles that will be assigned to flight crews, a study of potentially informative display cues was undertaken. Two cues were tested on a simple plan display: aircraft trajectory and flight corridor. Of particular interest was the speed and accuracy with which participants could detect an aircraft deviating outside its flight corridor. Presence of the trajectory cue significantly reduced participants' reaction time to a deviation, while the flight corridor cue did not. Although the effect was non-significant, the flight corridor cue appeared to be related to the accuracy of participants' judgments rather than their speed. As this is the second in a series of studies, these issues will be addressed further in future studies.

  18. [The crooked nose: correction of dorsal and caudal septal deviations].

    Science.gov (United States)

    Foda, H M T

    2010-09-01

    The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 800 patients seeking rhinoplasty to correct external nasal deviations; 71% of these suffered from variable degrees of nasal obstruction. Septal surgery was necessary in 736 (92%) patients, not only to improve breathing, but also to achieve a straight, symmetric external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the nasal dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.

  19. Parametric Approach in Designing Large-Scale Urban Architectural Objects

    Directory of Open Access Journals (Sweden)

    Arne Riekstiņš

    2011-04-01

    Full Text Available When the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author approaches digital architecture from a parametric viewpoint, revealing its generative capacity, which originates in the aeronautical, naval, automobile, and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining the materiality of what is being produced, presenting structural skin and structure simultaneously, bringing the object into contact with the ground, defining the interior program of the building with floors and possible spaces, working out the logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales.

  20. 48 CFR 2001.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2001... Individual deviations. In individual cases, deviations from either the FAR or the NRCAR will be authorized... deviations clearly in the best interest of the Government. Individual deviations must be authorized...

  1. 48 CFR 801.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 801... Individual deviations. (a) Authority to authorize individual deviations from the FAR and VAAR is delegated to... nature of the deviation. (d) The DSPE may authorize individual deviations from the FAR and VAAR when...

  2. A Novel Approach Towards Large Scale Cross-Media Retrieval

    Institute of Scientific and Technical Information of China (English)

    Bo Lu; Guo-Ren Wang; Ye Yuan

    2012-01-01

    With the rapid development of the Internet and multimedia technology, cross-media retrieval aims to retrieve all related media objects across modalities in response to a query media object. Unfortunately, the complexity and heterogeneity of multi-modality pose two major challenges for cross-media retrieval: 1) how to construct a unified and compact model for media objects with multiple modalities, and 2) how to improve retrieval performance for large-scale cross-media databases. In this paper, we propose a novel method dedicated to solving these issues to achieve effective and accurate cross-media retrieval. Firstly, a multi-modality semantic relationship graph (MSRG) is constructed using the semantic correlation among media objects with multiple modalities. Secondly, all the media objects in the MSRG are mapped onto an isomorphic semantic space. Further, an efficient index, the MK-tree, based on heterogeneous data distribution is proposed to manage the media objects within the semantic space and improve retrieval performance. Extensive experiments on real large-scale cross-media datasets indicate that our proposal dramatically improves the accuracy and efficiency of cross-media retrieval, significantly outperforming existing methods.

  3. Flood Hazard Mapping over Large Regions using Geomorphic Approaches

    Science.gov (United States)

    Samela, Caterina; Troy, Tara J.; Manfreda, Salvatore

    2016-04-01

    Historically, man has always preferred to settle and live near water. This tendency has not changed over time, and today nineteen of the twenty most populated agglomerations in the world (Demographia World Urban Areas, 2015) are located along watercourses or at the mouth of a river. On one hand, these locations are advantageous from many points of view. On the other, they expose significant populations and economic assets to a certain degree of flood hazard. Knowing the location and extent of the areas exposed to flood hazard is essential to any strategy for minimizing the risk. Unfortunately, in data-scarce regions the use of traditional floodplain mapping techniques is prevented by the lack of the extensive data required, and this scarcity is generally most pronounced in developing countries. The present work aims to overcome this limitation by defining an alternative simplified procedure for a preliminary, but efficient, floodplain delineation. To validate the method in a data-rich environment, eleven flood-related morphological descriptors derived from DEMs were used as linear binary classifiers over the Ohio River basin and its sub-catchments, measuring their performance in identifying floodplains as the topography and the size of the calibration area change. The best performing classifiers among those analysed were then applied and validated across the continental U.S. The results suggest that the classifier based on the index ln(hr/H), named the Geomorphic Flood Index (GFI), is the most suitable for detecting flood-prone areas in data-scarce environments and for large-scale applications, providing good accuracy with low requirements in terms of data and computational costs. Keywords: flood hazard, data-scarce regions, large-scale studies, binary classifiers, DEM, USA.
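Using ln(hr/H) as a linear binary classifier can be sketched as follows; the cell values, hr/H inputs, and threshold are synthetic stand-ins, not the paper's DEM-derived data or calibration:

```python
# Sketch of the Geomorphic Flood Index GFI = ln(hr/H) as a linear binary
# classifier over terrain cells. Synthetic data; illustrative threshold only.
import math

def gfi(hr, H):
    """hr: water level in the nearest element of the drainage network;
    H: elevation difference between the cell and that element."""
    return math.log(hr / H)

# Synthetic cells: (hr, H, truly flood-prone?). Low-lying cells near large
# channels (hr > H) should score high.
cells = [(5.0, 1.0, True), (4.0, 2.0, True), (3.0, 8.0, False),
         (1.0, 9.0, False), (6.0, 2.5, True), (0.5, 7.0, False)]

tau = 0.0  # classification threshold, normally calibrated on mapped floodplains
pred = [gfi(hr, H) >= tau for hr, H, _ in cells]
truth = [flood for _, _, flood in cells]
accuracy = sum(p == t for p, t in zip(pred, truth)) / len(cells)
print(pred, "accuracy:", accuracy)
```

In practice tau is calibrated against available flood maps in a data-rich basin and then transferred to data-scarce regions, which is the low-cost workflow the abstract advocates.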

  4. A new approach for defect inspection on large area masks

    Science.gov (United States)

    Scheuring, Gerd; Döbereiner, Stefan; Hillmann, Frank; Falk, Günther; Brück, Hans-Jürgen

    2007-02-01

    Besides the mask market for IC manufacturing, which mainly uses 6 inch sized masks, the market for so-called large area masks is growing very rapidly. Typical applications of these masks are mainly wafer bumping for current packaging processes, color filters on TFTs, and Flip Chip manufacturing. To expose bumps and similar features on 200 mm wafers under proximity exposure conditions, 9 inch masks are used, while 300 mm wafer bumping processes (Fig. 1) use 14 inch masks. Flip Chip manufacturing needs masks up to 28 by 32 inch. This current maximum mask dimension is expected to hold for the next 5 years in industrial production. On the other hand, shrinking feature sizes, just as in the case of IC masks, demand enhanced sensitivity of the inspection tools. A defect inspection tool for those masks is valuable for both the mask maker, who has to deliver a defect-free mask to his customer, and the mask user, who has to monitor the mask's condition during its lifetime. This is necessary because large area masks are mainly used for proximity exposures. During this process the mask is vulnerable because it contacts the resist on top of the wafers. Therefore the mask has to be inspected regularly, after every 25, 50, or 100 exposures, throughout its lifetime. Thus critical resist contamination and other defects, which lead to yield losses, can be recognized early. In the future, shrinking feature dimensions will require even more sensitive and reliable defect inspection methods than they do presently. Besides pure inspection capability, the tools should also provide highly precise measurement capabilities and extended review options.

  5. A Low Cost Approach to Large Smart Shelf Setups

    Directory of Open Access Journals (Sweden)

    MOGA, D.

    2011-11-01

    Full Text Available Recent years have shown a growing interest in the use of RFID technology in applications like distribution and storage of goods, supply chain and inventory. This paper analyses the current smart shelf solutions and presents the experience of developing an automatic reading system for smart shelves. The proposed system addresses the problem of reading RFID tags from items placed on multiple shelves. It allows the use of standard low cost readers and tags and uses a single antenna that can be positioned at specific, repeatable locations. The system proposes an alternative to the approaches with multiple antennas placed in fixed positions inside the shelf or around the shelves, offering a lower cost solution by means of dedicated electromechanical devices able to carry the antenna and the reader to the locations of interest along a rail system. Moreover, antenna position can be controlled along three axes of movement, allowing for extra flexibility and complete coverage of the shelves. The proposed setup is fully wireless. It contains a standard reader, electromechanical positioning actuators, and wireless communication and control hardware powered from integrated batteries.

  6. Large aperture freeform VIS telescope with smart alignment approach

    Science.gov (United States)

    Beier, Matthias; Fuhlrott, Wilko; Hartung, Johannes; Holota, Wolfgang; Gebhardt, Andreas; Risse, Stefan

    2016-07-01

    The development of smart alignment and integration strategies for imaging mirror systems to be used within astronomical instrumentation is especially important with regard to the increasing impact of non-rotationally symmetric optics. In the present work, well-known assembly approaches preferentially applied in the course of infrared instrumentation are transferred to visible applications and are verified during the integration of an anamorphic imaging telescope breadboard. The four-mirror imaging system is based on a modular concept using mechanically fixed arrangements of two freeform surfaces each, generated by servo-assisted diamond machining and corrected using Magnetorheological Finishing as a figuring and smoothing step. Surface testing includes optical CGH interferometry as well as tactile profilometry and is conducted with respect to diamond-milled fiducials at the mirror bodies. Strict compliance with surface referencing during all significant fabrication steps allows for easy integration and direct measurement of the system's wave aberration after initial assembly. The achievable imaging performance, as well as influences of the tight tolerance budget and mid-spatial frequency errors, are discussed and experimentally evaluated.

  7. K-minus Estimator Approach to Large Scale Structure

    CERN Document Server

    Martinis, M

    2007-01-01

    Self-similar 3D distributions of point particles, with a given quasifractal dimension D, were generated on a Menger sponge model and then compared with 2dFGRS and Virgo project data (http://www.mso.anu.edu.au/2dFGRS/, http://www.mpa-garching.mpg.de/Virgo/). Using the principle of local knowledge, it is argued that in a finite volume of space only the two-point minus estimator is acceptable in the correlation analysis of self-similar spatial distributions. In this sense, we have simplified the Pietronero-Labini correlative analysis by defining a K-minus estimator, which when applied to 2dFGRS data revealed the quasifractal dimension D ≈ 2, as expected. In our approach the K-minus estimator is used only locally. Dimensions between D = 1 and D = 1.7, as suggested by the standard ξ(r) analysis, were found to be a fallacy of the method. In order to visualize spatial quasifractal objects, we created a small software program called RoPo ("Rotate Points"). This program i...
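    The minus-estimator idea above can be illustrated on a homogeneous point set: at scale r, only centres farther than r from the sample boundary contribute to the counts, so no counting sphere leaks outside the surveyed volume, and the scaling of the mean count N(<r) ~ r^D recovers the dimension. A minimal sketch in a unit box (the names and the homogeneous test data are illustrative; this is not the paper's K-minus estimator itself):

```python
import numpy as np

def minus_estimator_counts(points, radii, box=1.0):
    """Two-point 'minus' estimator sketch: for each radius r, average
    the number of neighbours within r, using as centres only points
    farther than r from every face of the unit box."""
    counts = []
    for r in radii:
        inside = np.all((points > r) & (points < box - r), axis=1)
        centres = points[inside]
        d = np.linalg.norm(points[None, :, :] - centres[:, None, :], axis=2)
        # count neighbours within r, excluding the centre itself
        counts.append(((d < r).sum(axis=1) - 1).mean())
    return np.array(counts)

# Homogeneous points in 3D: N(<r) ~ r^D with D close to 3.
rng = np.random.default_rng(1)
pts = rng.random((2000, 3))
radii = np.array([0.05, 0.1, 0.2])
N = minus_estimator_counts(pts, radii)
D = np.polyfit(np.log(radii), np.log(N), 1)[0]  # log-log slope estimates D
print(round(D, 1))
```

    For a quasifractal distribution the same slope would come out below the embedding dimension, e.g. D ≈ 2 as reported for the 2dFGRS data.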

  8. Quenched moderate deviations principle for random walk in random environment

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    We derive a quenched moderate deviations principle for the one-dimensional nearest-neighbour random walk in random environment, where the environment is assumed to be stationary and ergodic. The approach is based on a hitting time decomposition.

  9. Efficient Approach for Harmonic Resonance Identification of Large Wind Power Plants

    DEFF Research Database (Denmark)

    Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei;

    2016-01-01

    and with passive components. This paper presents an efficient approach for identification of harmonic resonances in large WPPs containing power electronic converters, cable, transformer, capacitor banks, shunt reactors, etc. The proposed approach introduces a large WPP as a Multi-Input Multi-Output (MIMO) control...

  10. 48 CFR 1301.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... DEPARTMENT OF COMMERCE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 1301.403 Individual deviations. The designee authorized to approve individual deviations from the FAR is set forth in CAM 1301.70....

  11. 48 CFR 401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 401... AGRICULTURE ACQUISITION REGULATION SYSTEM Deviations From the FAR and AGAR 401.403 Individual deviations. In individual cases, deviations from either the FAR or the AGAR will be authorized only when essential to...

  12. 48 CFR 2801.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2801... OF JUSTICE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR and JAR 2801.403 Individual deviations. Individual deviations from the FAR or the JAR shall be approved by the head of the...

  13. 48 CFR 301.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 301... ACQUISITION REGULATION SYSTEM Deviations From the FAR 301.403 Individual deviations. Contracting activities shall prepare requests for individual deviations to either the FAR or HHSAR in accordance with 301.470....

  14. 48 CFR 1501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1501.403 Section 1501.403 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL GENERAL Deviations 1501.403 Individual deviations. Requests for individual deviations from the FAR and...

  15. 48 CFR 501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 501... Individual deviations. (a) An individual deviation affects only one contract action. (1) The Head of the Contracting Activity (HCA) must approve an individual deviation to the FAR. The authority to grant...

  16. 48 CFR 2401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2401... DEVELOPMENT GENERAL FEDERAL ACQUISITION REGULATION SYSTEM Deviations 2401.403 Individual deviations. In individual cases, proposed deviations from the FAR or HUDAR shall be submitted to the Senior...

  17. Mod-ϕ convergence normality zones and precise deviations

    CERN Document Server

    Féray, Valentin; Nikeghbali, Ashkan

    2016-01-01

    The canonical way to establish the central limit theorem for i.i.d. random variables is to use characteristic functions and Lévy’s continuity theorem. This monograph focuses on this characteristic function approach and presents a renormalization theory called mod-ϕ convergence. This type of convergence is a relatively new concept with many deep ramifications, and has not previously been published in a single accessible volume. The authors construct an extremely flexible framework using this concept in order to study limit theorems and large deviations for a number of probabilistic models related to classical probability, combinatorics, non-commutative random variables, as well as geometric and number-theoretical objects. Intended for researchers in probability theory, the text is carefully written and well-structured, containing a great amount of detail and interesting examples.

  18. Downhole control of deviation with steerable straight-hole turbodrills

    Energy Technology Data Exchange (ETDEWEB)

    Gaynor, T.M.

    1988-03-01

    Advances in directional drilling have until recently been confined to issues that are peripheral to the central problem of controlling assembly behavior downhole. Examples of these advances are measurement while drilling (MWD) and the increasing use of computer assistance in well planning. These were significant steps forward, but the major problem remained. Changes in formation deviation tendencies led to trips to change bottomhole assemblies (BHA's) to cope with the new conditions. There is almost no direct control of deviation behavior. The steerable straight-hole turbodrill (SST) addresses this problem directly, allowing alteration of the well course without the need to trip. The availability of such a system radically changes the way in which directional well planning may be approached. This paper describes the equipment used and the equipment's construction and operational requirements. It discusses the capabilities and current limitations of the system. Field results are presented for some 300,000 ft (91 500 m) of deviated drilling carried out over 2 years in Alaska and the North Sea. A series of four highly deviated wells totaling 35,000 ft (10 700 m) with only three deviation trips is included. The SST is the first deviation drilling system to achieve deviation control over long sections without tripping to change BHA's. Bits and downhole equipment are now more reliable and long-lived than ever; therefore, deviation trips are becoming a major target for well cost saving.

  19. Method for Assessing Grid Frequency Deviation Due to Wind Power Fluctuation Based on “Time-Frequency Transformation”

    DEFF Research Database (Denmark)

    Jin, Lin; Yuan-zhang, Sun; Sørensen, Poul Ejnar

    2012-01-01

    Grid frequency deviation caused by wind power fluctuation has been a major concern for secure operation of a power system with integrated large-scale wind power. Many approaches have been proposed to assess this negative effect on grid frequency due to wind power fluctuation. Unfortunately, most ...

  20. Deviation of the statistical fluctuation in heterogeneous anomalous diffusion

    CERN Document Server

    Itto, Yuichi

    2016-01-01

    The exponent of anomalous diffusion of virus in cytoplasm of a living cell is experimentally known to fluctuate depending on localized areas of the cytoplasm, indicating heterogeneity of diffusion. In a recent paper (Itto, 2012), a maximum-entropy-principle approach has been developed in order to propose an Ansatz for the statistical distribution of such exponent fluctuations. Based on this approach, here the deviation of the statistical distribution of the fluctuations from the proposed one is studied from the viewpoint of Einstein's theory of fluctuations (of the thermodynamic quantities). This may present a step toward understanding the statistical property of the deviation. It is shown in a certain class of small deviations that the deviation obeys the multivariate Gaussian distribution.

  1. Allan deviation analysis of financial return series

    Science.gov (United States)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in time and frequency metrology to characterize quantitatively the stability of frequency standards, since it has been demonstrated to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening price series for assets from different markets during a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for absolute return series at short scales (the first one or two decades) decrease following approximately a scaling relation up to a point that is different for almost every asset, after which the ADEV deviates from scaling, which suggests that the presence of clustering, long-range dependence and non-stationarity signatures in the series drives the results for large observation intervals.
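    The non-overlapping Allan deviation used above averages the series over windows of size m and takes half the mean squared difference of successive window means; for uncorrelated returns it should fall off roughly as 1/sqrt(m), consistent with the short-scale behaviour reported. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def allan_deviation(x, m):
    """Non-overlapping Allan deviation of series x at averaging window m:
    sqrt(0.5 * mean of squared differences of successive window means)."""
    n = len(x) // m
    means = x[:n * m].reshape(n, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

# White-noise 'returns': ADEV shrinks roughly as 1/sqrt(m).
rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 10_000)
for m in (1, 10, 100):
    print(m, allan_deviation(r, m))
```

    Departures from this 1/sqrt(m) decay at larger windows are the signatures of clustering and long-range dependence that the analysis looks for.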

  2. Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.

    In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...

  3. Geometry of Dynamic Large Networks: A Scaling and Renormalization Group Approach

    Science.gov (United States)

    2013-12-11

    Final performance report, Iraj Saniee, Lucent Technologies Inc., 12/11/2013. Grant title: Geometry of Dynamic Large Networks: A Scaling and Renormalization Group Approach. ...the test itself may be scaled to much larger graphs than those we examined via renormalization group methodology. Using well-understood mechanisms, we...

  4. 48 CFR 2501.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2501.403 Section 2501.403 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 2501.403 Individual deviations....

  5. 48 CFR 1901.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1901.403 Section 1901.403 Federal Acquisition Regulations System BROADCASTING BOARD OF GOVERNORS GENERAL... Individual deviations. Deviations from the IAAR or the FAR in individual cases shall be authorized by...

  6. 48 CFR 201.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Individual deviations. 201.403 Section 201.403 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Individual deviations. (1) Individual deviations, except those described in 201.402(1) and paragraph (2)...

  7. 48 CFR 1.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Individual deviations. 1.403 Section 1.403 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.403 Individual deviations....

  8. 48 CFR 601.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 601.403 Section 601.403 Federal Acquisition Regulations System DEPARTMENT OF STATE GENERAL DEPARTMENT OF STATE ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 601.403 Individual deviations....

  9. 48 CFR 3401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations. 3401.403 Section 3401.403 Federal Acquisition Regulations System DEPARTMENT OF EDUCATION ACQUISITION REGULATION GENERAL ED ACQUISITION REGULATION SYSTEM Deviations 3401.403 Individual deviations. An...

  10. Moderate deviations for the quenched mean of the super-Brownian motion with random immigration

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Moderate deviations for the quenched mean of the super-Brownian motion with random immigration are proved for 3 ≤ d ≤ 6, which fills in the gap between the central limit theorem (CLT) and the large deviation principle (LDP).

  11. Deviations in human gut microbiota

    DEFF Research Database (Denmark)

    Casén, C; Vebø, H C; Sekelja, M

    2015-01-01

    BACKGROUND: Dysbiosis is associated with many diseases, including irritable bowel syndrome (IBS), inflammatory bowel diseases (IBD), obesity and diabetes. Potential clinical impact of imbalance in the intestinal microbiota suggests need for new standardised diagnostic methods to facilitate microb... and improvement in new therapeutic approaches.

  12. Using Flipped Classroom Approach to Explore Deep Learning in Large Classrooms

    Science.gov (United States)

    Danker, Brenda

    2015-01-01

    This project used two Flipped Classroom approaches to stimulate deep learning in large classrooms during the teaching of a film module as part of a Diploma in Performing Arts course at Sunway University, Malaysia. The flipped classes utilized either a blended learning approach where students first watched online lectures as homework, and then…

  13. General Approach to Characterize Reservoir Fluids Using a Large PVT Database

    DEFF Research Database (Denmark)

    Varzandeh, Farhad; Yan, Wei; Stenby, Erling Halfdan

    2016-01-01

    ...methods. We proposed a general approach to develop correlations for model parameters and applied it to the characterization for the PC-SAFT EoS. The approach consists in first developing the correlations based on the DIPPR database, and then adjusting the correlations based on a large PVT database...

  14. Controller design approaches for large space structures using LQG control theory. [Linear Quadratic Gaussian

    Science.gov (United States)

    Joshi, S. M.; Groom, N. J.

    1979-01-01

    The paper presents several approaches for the design of reduced order controllers for large space structures. These approaches are shown to be based on LQG control theory and include truncation, modified truncation regulators and estimators, use of higher order estimators, selective modal suppression, and use of polynomial estimators. Further, the use of direct sensor feedback, as opposed to a state estimator, is investigated for some of these approaches. Finally, numerical results are given for a long free beam.

  15. A discursive look at large bodies--implications for discursive approaches in nursing and health research.

    Science.gov (United States)

    Knutsen, Ingrid Ruud

    2015-01-01

    This article illuminates discursive constructions of large bodies in contemporary society and discusses what discursive approaches might add to health care. Today, the World Health Organization describes a current "epidemic of obesity" and classifies large bodies as a medical condition. Texts on the obesity epidemic often draw upon alarming perspectives that involve associations of threat and catastrophe. The concern we see for body size in contemporary discourse is not new. Understandings of body size in Western societies are highly cultural and normative and could be different. The way we approach large bodies affects health care practice as well as subjects' self-perceptions.

  16. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  18. Spherical Model on a Cayley Tree: Large Deviations

    Science.gov (United States)

    Patrick, A. E.

    2017-01-01

    We study the spherical model of a ferromagnet on a Cayley tree and show that in the case of empty boundary conditions a ferromagnetic phase transition takes place at the critical temperature T_c = (6√2/5) J, where J is the interaction strength. For any temperature the equilibrium magnetization, m_n, tends to zero in the thermodynamic limit, and the true order parameter is the renormalized magnetization r_n = n^{3/2} m_n, where n is the number of generations in the Cayley tree. Below T_c, the equilibrium values of the order parameter are given by ±ρ*, where ρ* = (2π/(√2 − 1)²) √(1 − T/T_c). One more notable temperature in the model is the penetration temperature T_p = (J/W_Cayley(3/2)) (1 − (1/√2)(h/2J)²). Below T_p the influence of a homogeneous boundary field of magnitude h penetrates throughout the tree. The main new technical result of the paper is a complete set of orthonormal eigenvectors for the discrete Laplace operator on a Cayley tree.

  19. Large deviations for Gaussian queues modelling communication networks

    CERN Document Server

    Mandjes, Michel

    2007-01-01

    Michel Mandjes, Centre for Mathematics and Computer Science (CWI) Amsterdam, The Netherlands, and Professor, Faculty of Engineering, University of Twente. At CWI Mandjes is a senior researcher and Director of the Advanced Communications Network group. He has published over 60 papers on queueing theory, networks, scheduling, and pricing of networks.

  20. Endoscopic Endonasal Extended Approaches for the Management of Large Pituitary Adenomas.

    Science.gov (United States)

    Cappabianca, Paolo; Cavallo, Luigi Maria; de Divitiis, Oreste; de Angelis, Michelangelo; Chiaramonte, Carmela; Solari, Domenico

    2015-07-01

    The management of giant and large pituitary adenomas with wide intracranial extension or infrasellar involvement of nasal and paranasal cavities is a major challenge for neurosurgeons, and the indications for the best surgical approach are still controversial. Endoscopic extended endonasal approaches have been proposed as a new surgical technique for the treatment of such selected pituitary adenomas. Surgical series coming from many centers all around the world are flourishing, and results in terms of outcomes and complications seem encouraging. This technique could be considered a valid alternative to the transcranial route for the management of giant and large pituitary adenomas.

  1. A Project Management Approach to Using Simulation for Cost Estimation on Large, Complex Software Development Projects

    Science.gov (United States)

    Mizell, Carolyn; Malone, Linda

    2007-01-01

    It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.

  2. 48 CFR 1201.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... FEDERAL ACQUISITION REGULATIONS SYSTEM 70-Deviations From the FAR and TAR 1201.403 Individual... (48 CFR 1.405(e) applies). However, see TAM 1201.403.

  3. 48 CFR 1401.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 1401.403 Section 1401.403 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL DEPARTMENT OF THE INTERIOR ACQUISITION REGULATION SYSTEM Deviations from the FAR and DIAR 1401.403...

  4. 48 CFR 3001.403 - Individual deviations.

    Science.gov (United States)

    2010-10-01

    .... 3001.403 Section 3001.403 Federal Acquisition Regulations System DEPARTMENT OF HOMELAND SECURITY... from the FAR and HSAR 3001.403 Individual deviations. Unless precluded by law, executive order, or..., including complete documentation of the justification for the deviation (See HSAM 3001.403)....

  5. 41 CFR 101-1.110 - Deviation.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Deviation. 101-1.110 Section 101-1.110 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS GENERAL 1-INTRODUCTION 1.1-Regulation System § 101-1.110 Deviation...

  6. 20 CFR 435.4 - Deviations.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Deviations. 435.4 Section 435.4 Employees' Benefits SOCIAL SECURITY ADMINISTRATION UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND AGREEMENTS WITH... General § 435.4 Deviations. The Office of Management and Budget (OMB) may grant exceptions for classes...

  7. Optical vibration and deviation measurement of rotating machine parts

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    It is of interest to get appropriate information about the dynamic behaviour of rotating machinery parts in service. This paper presents an approach to optical vibration and deviation measurement of such parts. The essence of this method is an image derotator combined with a high-speed camera or a laser Doppler vibrometer (LDV).

  8. A Third-Quantized Approach to the Large-N Field Models

    CERN Document Server

    Maslov, V P

    1998-01-01

    Large-N field systems are considered from an unusual point of view. The Hamiltonian is presented in a third-quantized form analogously to the second-quantized formulation of the quantum theory of many particles. The semiclassical approximation is applied to the third-quantized Hamiltonian. The advantages of this approach in comparison with 1/N-expansion are discussed.

  9. An Alternative Approach to Large Historical Databases; Exploring Best Practices with Collaboratories

    NARCIS (Netherlands)

    Dormans, S.E.M.; Kok, J.

    2010-01-01

    In their exploration of an alternative approach to large historical databases, the authors aim to bridge the gap between the anticipations regarding Web-based collaborative work and the prevailing practices and academic culture in social and economic history. Until now, the collaboratory model

  10. An efficient approach of attractor calculation for large-scale Boolean gene regulatory networks.

    Science.gov (United States)

    He, Qinbin; Xia, Zhile; Lin, Bin

    2016-11-07

    Boolean network models provide an efficient way of studying gene regulatory networks. The main dynamics of a Boolean network are determined by its attractors, so attractor calculation plays a key role in analyzing Boolean gene regulatory networks. An approach to attractor calculation that improves on the predecessor-based approach was proposed in this study. Furthermore, the proposed approach was combined with the identification of constant nodes and simplification of the Boolean networks to accelerate attractor calculation. The proposed algorithm is effective for calculating all attractors of large-scale Boolean gene regulatory networks. If the average degree of the network is not too large, the algorithm can find all attractors of a Boolean network with dozens or even hundreds of nodes.
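    For context, attractors of a small synchronous Boolean network can be enumerated by brute force over all 2^n states, which is exactly the exponential blow-up the proposed algorithm is designed to avoid on large networks. A minimal sketch (the three-node update rules are hypothetical):

```python
from itertools import product

def find_attractors(update, n):
    """Enumerate all attractors of a synchronous Boolean network with
    n nodes by following every state's trajectory until a state repeats.
    Brute force: feasible only for small n."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = {}          # state -> position along the trajectory
        s = state
        while s not in seen:
            seen[s] = len(seen)
            s = update(s)
        # the cycle is the trajectory from the first occurrence of s onward
        trajectory = [t for t, _ in sorted(seen.items(), key=lambda kv: kv[1])]
        attractors.add(frozenset(trajectory[seen[s]:]))
    return attractors

# Toy 3-node network: x0' = x1, x1' = x0, x2' = x0 AND x1.
update = lambda s: (s[1], s[0], s[0] & s[1])
for a in find_attractors(update, 3):
    print(sorted(a))  # two fixed points and one 2-cycle
```

    The toy network has three attractors: the fixed points (0,0,0) and (1,1,1), and the 2-cycle {(0,1,0), (1,0,0)}.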

  11. An Efficient Approach to Prune Mined Association Rules in Large Databases

    Directory of Open Access Journals (Sweden)

    D. Narmadha

    2011-01-01

    Full Text Available Association rule mining finds interesting associations and/or correlation relationships among large sets of data items. However, when the number of association rules becomes large, the result becomes less interesting to the user. It is crucial to help the decision-maker with an efficient post-processing step in order to select interesting association rules from huge volumes of discovered rules. This motivates the need for association analysis. Thus, this paper presents a novel approach to prune mined association rules in large databases. Further, an analysis of different association rule mining techniques for market basket analysis, highlighting the strengths of each, is also provided. We also point out potential pitfalls as well as challenging issues that need to be addressed by an association rule mining technique. We believe that the results of this approach will help decision-makers make important decisions.
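
    As background, a minimal post-processing step of the kind the abstract motivates might filter mined rules on confidence and lift. The thresholds, measures, and toy baskets below are illustrative assumptions, not the paper's actual pruning criteria:

```python
def prune_rules(rules, transactions, min_conf=0.6, min_lift=1.0):
    """Keep only rules whose confidence and lift exceed user thresholds.

    rules: list of (antecedent, consequent) frozenset pairs.
    transactions: list of sets of items.
    """
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    kept = []
    for antecedent, consequent in rules:
        s_a = support(antecedent)
        if s_a == 0:
            continue
        conf = support(antecedent | consequent) / s_a
        s_c = support(consequent)
        lift = conf / s_c if s_c else 0.0
        if conf >= min_conf and lift >= min_lift:
            kept.append((antecedent, consequent, conf, lift))
    return kept

# Hypothetical market baskets
baskets = [{"bread", "milk"}, {"bread", "milk"}, {"bread"}, {"butter"}]
rules = [(frozenset({"bread"}), frozenset({"milk"})),
         (frozenset({"milk"}), frozenset({"butter"}))]
kept = prune_rules(rules, baskets)
```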

  12. The Large Marine Ecosystem Approach for 21st Century Ocean Health and International Sustainable Development

    Science.gov (United States)

    Honey, K. T.

    2014-12-01

    The global coastal ocean and watersheds are divided into 66 Large Marine Ecosystems (LMEs), which encompass regions from river basins, estuaries, and coasts to the seaward boundaries of continental shelves and margins of major currents. Approximately 80% of the global fisheries catch comes from LME waters. Ecosystem goods and services from LMEs contribute an estimated US$18-25 trillion annually to the global economy in market and non-market value. The critical importance of these large-scale systems, however, is threatened by human populations and pressures, including climate change. Fortunately, there is pragmatic reason for optimism. Interdisciplinary frameworks exist, such as the Large Marine Ecosystem (LME) approach for adaptive management, that can integrate both nature-centric and human-centric views into ecosystem monitoring, assessment, and adaptive management practices for long-term sustainability. Originally proposed almost 30 years ago, the LME approach rests on five modules: (i) productivity, (ii) fish and fisheries, (iii) pollution and ecosystem health, (iv) socioeconomics, and (v) governance, for iterative adaptive management at a large, international scale of 200,000 km2 or greater. The Global Environment Facility (GEF), World Bank, and United Nations agencies recognize and support the LME approach, as evidenced by over US$3.15 billion in financial assistance to date for LME projects. The year 2014 is an exciting milestone in LME history, after 20 years of the United Nations and GEF organizations adopting LMEs as a unit for ecosystem-based approaches to management. The LME approach, however, is neither perfect nor immutable. Like the adaptive management framework it promotes, the LME approach itself must adapt to new and emerging 21st Century technologies, science, and realities. The LME approach must further consider socioeconomics and governance. Within the socioeconomics module alone, several trillion-dollar opportunities exist

  13. Moderate Deviation Principles for Stochastic Differential Equations with Jumps

    Science.gov (United States)

    2014-01-15

    ...random measure and an infinite-dimensional Brownian motion) was derived. As in the Brownian motion case, the representation is motivated in part by... deviations of a smaller order than in large deviation theory. Consider, for example, an independent and identically distributed (iid) sequence {Y_i}, i >= 1, of...

  14. Large object investigation by digital holography with effective spectrum multiplexing under single-exposure approach

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Ning, E-mail: coolboy006@sohu.com; Zhang, Yingying; Xie, Jun [College of Physics and Electronics, Nanjing XiaoZhuang University, Nanjing, Jiangsu Province 211171 (China)

    2014-10-13

    We present a method to investigate a large object by digital holography with effective spectrum multiplexing under a single-exposure approach. This method splits the original reference beam and redirects one of its branches as a second object beam. Through the modified Mach-Zehnder interferometer, the two object beams illuminate different parts of the large object and create a spectrum-multiplexed hologram on the focal plane array of the charge-coupled device/complementary metal oxide semiconductor camera. After correct spectrum extraction and image reconstruction, the large object can be fully observed within a single snapshot. Its flexibility and strong performance make our method a very attractive and promising technique for large-object investigation under common 632.8 nm illumination.

  15. Spin-geodesic deviations in the Kerr spacetime

    Science.gov (United States)

    Bini, D.; Geralico, A.

    2011-11-01

    The dynamics of extended spinning bodies in the Kerr spacetime is investigated in the pole-dipole particle approximation and under the assumption that the spin-curvature force only slightly deviates the particle from a geodesic path. The spin parameter is thus assumed to be very small and the back reaction on the spacetime geometry neglected. This approach naturally leads to solve the Mathisson-Papapetrou-Dixon equations linearized in the spin variables as well as in the deviation vector, with the same initial conditions as for geodesic motion. General deviations from generic geodesic motion are studied, generalizing previous results limited to the very special case of an equatorial circular geodesic as the reference path.

  16. Spin-geodesic deviations in the Kerr spacetime

    CERN Document Server

    Bini, Donato

    2014-01-01

    The dynamics of extended spinning bodies in the Kerr spacetime is investigated in the pole-dipole particle approximation and under the assumption that the spin-curvature force only slightly deviates the particle from a geodesic path. The spin parameter is thus assumed to be very small and the back reaction on the spacetime geometry neglected. This approach naturally leads to solve the Mathisson-Papapetrou-Dixon equations linearized in the spin variables as well as in the deviation vector, with the same initial conditions as for geodesic motion. General deviations from generic geodesic motion are studied, generalizing previous results limited to the very special case of an equatorial circular geodesic as the reference path.

  17. Scaling Deviations for Neutrino Reactions in Asymptotically Free Field Theories

    Science.gov (United States)

    Wilczek, F. A.; Zee, A.; Treiman, S. B.

    1974-11-01

    Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the X and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.
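
    For reference, the Callan-Gross relation discussed in the last section holds for spin-1/2 constituents, and the scaling deviations in question replace exact Bjorken scaling with a logarithmic Q^2 dependence (standard notation, not necessarily the paper's):

```latex
% Callan-Gross relation for spin-1/2 partons
2x\,F_1(x) = F_2(x)
% scaling deviations: structure functions acquire Q^2 dependence
F_i(x) \;\longrightarrow\; F_i(x, Q^2), \qquad
\frac{\partial F_i(x, Q^2)}{\partial \ln Q^2} \neq 0
```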

  18. On geodesic deviation in Schwarzschild spacetime

    CERN Document Server

    Philipp, Dennis; Laemmerzahl, Claus; Deshpande, Kaustubh

    2015-01-01

    For metrology, geodesy and gravimetry in space, satellite-based instruments and measurement techniques are used, and the orbits of the satellites as well as possible deviations between nearby ones are of central interest. The measurement of this deviation itself gives insight into the underlying structure of the spacetime geometry, which is curved and therefore described by the theory of general relativity (GR). In the context of GR, the deviation of nearby geodesics can be described by the Jacobi equation, which results from linearizing the geodesic equation around a known reference geodesic with respect to the deviation vector and the relative velocity. We review the derivation of this Jacobi equation and restrict ourselves to the simple case of the spacetime outside a spherically symmetric mass distribution and circular reference geodesics, finding solutions by projecting the Jacobi equation onto a parallel-propagated tetrad as done by Fuchs. Using his results, we construct solutions of the Jacobi equation for...
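
    The Jacobi equation reviewed above takes the standard form (conventional GR notation; sign conventions may differ from the paper):

```latex
\frac{D^2 \xi^{\mu}}{d\tau^2}
  = -R^{\mu}{}_{\alpha\nu\beta}\, u^{\alpha}\, \xi^{\nu}\, u^{\beta}
```

    where \xi^{\mu} is the deviation vector between nearby geodesics, u^{\alpha} the four-velocity along the reference geodesic, and R^{\mu}{}_{\alpha\nu\beta} the Riemann curvature tensor.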

  19. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    Science.gov (United States)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  20. Submandibular approach for excision of a large schwannoma in the base of the tongue.

    Science.gov (United States)

    de Bree, R; Westerveld, G J; Smeele, L E

    2000-01-01

    A 24-year-old Turkish woman is described, who gradually developed progressive swallowing problems over 6 months due to a tumor in the base of the tongue. Magnetic resonance imaging showed a large, well-circumscribed solid mass. Histopathological examination of an incisional biopsy showed a schwannoma. The tumor was completely removed through a submandibular approach. The postoperative course was uneventful and her complaints disappeared. The submandibular approach gave excellent exposure of the base of the tongue with a less obvious scar than a lip-splitting incision.

  2. An approach towards the proof of the strong Goldbach's conjecture for sufficiently large even integers

    OpenAIRE

    Sabihi, Ahmad

    2016-01-01

    We develop a new proof of the strong Goldbach conjecture for sufficiently large even integers by applying Dirichlet series. Using the Perron formula and the Residue Theorem in complex variable integration, one can show that any large even integer can be written as a sum of two primes. The Riemann Hypothesis is assumed to be true throughout the paper. A novel function is defined on the set of natural numbers; this function is a typical sieve function. Then, based on this...

  3. A New Approach for Structural Monitoring of Large Dams with a Three-Dimensional Laser Scanner.

    Science.gov (United States)

    González-Aguilera, Diego; Gómez-Lahoz, Javier; Sánchez, José

    2008-09-24

    Driven by progress in sensor technology, computer methods and data processing capabilities, 3D laser scanning has found a wide range of new application fields in recent years. In particular, monitoring the static and dynamic behaviour of large dams has always been a topic of great importance, due to the impact these structures have on the whole landscape where they are built. The main goal of this paper is to show the relevance and novelty of the laser scanning methodology developed, which incorporates different statistical and modelling approaches not considered until now. As a result, the methods proposed in this paper have enabled the measurement and monitoring of the large "Las Cogotas" dam (Avila, Spain).

  4. Decay rates of large-l Rydberg states of multiply charged ions approaching solid surfaces

    Science.gov (United States)

    Nedeljkovic, N. N.; Mirkovic, M. A.; Bozanic, D. K.

    2008-07-01

    We investigate the ionization of large-l multiply charged Rydberg ions approaching solid surfaces within the framework of the decay model, applying the etalon equation method. The radial coordinate rho of the active electron is treated as a variational parameter, and therefore the parabolic symmetry is preserved in this procedure. The complex eigenenergies are calculated, from which the energy terms and the ionization rates are derived. We find that the large-l Rydberg states decay at approximately the same ion-surface distances as the low-l states oriented toward the vacuum, and considerably closer to the surface compared to the low-l states oriented towards the surface.

  5. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix-matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.

  6. Identification and Prediction of Large Pedestrian Flow in Urban Areas Based on a Hybrid Detection Approach

    OpenAIRE

    Kaisheng Zhang; Mei Wang; Bangyang Wei; Daniel(Jian) Sun

    2016-01-01

    Recently, population density has grown quickly with the increasing acceleration of urbanization. At the same time, overcrowded situations are more likely to occur in populous urban areas, increasing the risk of accidents. This paper proposes a synthetic approach to recognize and identify the large pedestrian flow. In particular, a hybrid pedestrian flow detection model was constructed by analyzing real data from major mobile phone operators in China, including information from smartphones and...

  7. Semiconductor Nanocrystal Quantum Dot Synthesis Approaches Towards Large-Scale Industrial Production for Energy Applications.

    Science.gov (United States)

    Hu, Michael Z; Zhu, Ting

    2015-12-01

    This paper reviews the experimental synthesis and engineering developments that have focused on various green approaches and large-scale production routes for quantum dots. Fundamental process engineering principles are illustrated. In relation to the small-scale hot-injection method, our discussion focuses on the non-injection route, which could be scaled up with engineered stirred-tank reactors. In addition, applications that demand quantum dots as "commodity" chemicals are discussed, including solar cells and solid-state lighting.

  9. Interaction of learning approach with concept integration and achievement in a large guided inquiry organic class

    Science.gov (United States)

    Mewhinney, Christina

    A study was conducted to investigate the relationship of students' concept integration and achievement with time spent within a topic and across related topics in a large first semester guided inquiry organic chemistry class. Achievement was based on evidence of algorithmic problem solving; and concept integration was based on demonstrated performance explaining, applying, and relating concepts to each other. Twelve individual assessments were made of both variables over three related topics---acid/base, nucleophilic substitution and electrophilic addition reactions. Measurements included written, free response and ordered multiple answer questions using a classroom response system. Results demonstrated that students can solve problems without conceptual understanding. A second study was conducted to compare the students' learning approach at the beginning and end of the course. Students were scored on their preferences for a deep, strategic, or surface approach to learning based on their responses to a pre and post survey. Results suggest that students significantly decreased their preference for a surface approach during the semester. Analysis of the data collected was performed to determine the relationship between students' learning approach and their concept integration and achievement in this class. Results show a correlation between a deep approach and concept integration and a strong negative correlation between a surface approach and concept integration.

  10. Flow adjustment inside large finite-size wind farms approaching the infinite wind farm regime

    Science.gov (United States)

    Wu, Ka Ling; Porté-Agel, Fernando

    2017-04-01

    Due to the increasing number and the growing size of wind farms, the distance among them continues to decrease. Thus, it is necessary to understand how these large finite-size wind farms and their wakes could interfere with the atmospheric boundary layer (ABL) dynamics and with adjacent wind farms. Fully-developed flow inside wind farms has been extensively studied through numerical simulations of infinite wind farms, in which the transport of momentum and energy is only vertical and their advection is neglected. However, less attention has been paid to the length of wind farms required to reach such an asymptotic regime, and to the ABL dynamics at the leading and trailing edges of large finite-size wind farms. Large eddy simulations are performed in this study to investigate the flow adjustment inside large finite-size wind farms in a conventionally-neutral boundary layer, including the effect of the Coriolis force and free-atmosphere stratification from 1 to 5 K/km. For the large finite-size wind farms considered in the present work, when the potential temperature lapse rate is 5 K/km, the wind farms must exceed the ABL height in length by two orders of magnitude for the incoming flow inside the farms to approach the fully-developed regime. An entrance fetch of approximately 40 times the ABL height is also required for such flow adjustment. At the fully-developed flow regime of the large finite-size wind farms, the flow characteristics match those of infinite wind farms, even though they have different adjustment length scales. The role of advection at the entrance and exit regions of the large finite-size wind farms is also examined. The interaction between the internal boundary layer developed above the large finite-size wind farms and the ABL under different potential temperature lapse rates is compared.
    It is shown that the potential temperature lapse rate plays a role in whether the flow inside the large finite-size wind farms adjusts to the fully

  11. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    Full Text Available This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to the voltage collapse phenomenon. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
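
    As background on the impedance-match idea behind the Thevenin equivalent method, a toy sketch: estimate the Thevenin source and impedance seen from a load bus from two phasor measurements, then measure proximity to collapse as the gap between load and Thevenin impedance magnitudes. The per-unit values are hypothetical, and the paper's cubic spline extrapolation, continuation step, and generator Q limits are not reproduced:

```python
def thevenin_from_two_measurements(v1, i1, v2, i2):
    """Fit V = E - Z*I (complex phasors) through two measurements taken
    at distinct loading levels (i1 != i2)."""
    z = (v1 - v2) / (i2 - i1)
    e = v1 + z * i1
    return e, z

def voltage_stability_margin(v, i, e, z):
    """Impedance-match criterion: collapse occurs when |Z_load| = |Z_thev|.
    A positive margin means the operating point is still stable."""
    z_load = v / i
    return (abs(z_load) - abs(z)) / abs(z_load)

# Hypothetical per-unit measurements at two loading levels
e, z = thevenin_from_two_measurements(1.00 + 0j, 0.50 + 0j,
                                      0.95 + 0j, 0.75 + 0j)
margin = voltage_stability_margin(0.95 + 0j, 0.75 + 0j, e, z)
```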

  12. Optimal simulations of ultrasonic fields produced by large thermal therapy arrays using the angular spectrum approach.

    Science.gov (United States)

    Zeng, Xiaozheng; McGough, Robert J

    2009-05-01

    The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters.
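
    The core of the angular spectrum approach, independent of the array-specific tuning discussed above, can be sketched in a few lines: FFT the input pressure plane, multiply by the propagation transfer function exp(i*kz*z), and inverse FFT. This is a scalar, lossless sketch with arbitrary grid values, not the paper's optimized implementation:

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, wavelength, z):
    """Propagate a sampled 2D pressure plane p0 a distance z
    using the angular spectrum approach."""
    ny, nx = p0.shape
    k = 2 * np.pi / wavelength
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    # kz is real for propagating components, imaginary for evanescent
    # ones (which then decay exponentially with z)
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    P0 = np.fft.fft2(p0)
    return np.fft.ifft2(P0 * np.exp(1j * kz * z))
```

    In practice the input plane would be computed with the fast nearfield method, slightly larger than the aperture and about one wavelength from the array, as the abstract recommends.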

  13. Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches.

    Science.gov (United States)

    Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand

    2017-03-20

    Large-scale public policy changes are often recommended to improve public health. Despite varying widely-from tobacco taxes to poverty-relief programs-such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches).
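
    The 2x2 difference-in-differences estimator mentioned above is the change in the treated group minus the change in the control group; the numbers below are hypothetical, purely to show the computation:

```python
def difference_in_differences(treated_pre, treated_post,
                              control_pre, control_post):
    """Classic 2x2 difference-in-differences estimate of a policy effect:
    (treated change) - (control change) nets out shared time trends."""
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(treated_post) - mean(treated_pre))
            - (mean(control_post) - mean(control_pre)))

# Hypothetical smoking rates (%) before/after a tobacco tax
effect = difference_in_differences([30, 32], [25, 27], [28, 30], [27, 29])
```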

  14. Approaches to large scale unsaturated flow in heterogeneous, stratified, and fractured geologic media

    Energy Technology Data Exchange (ETDEWEB)

    Ababou, R.

    1991-08-01

    This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies for large-scale flow simulations are assessed, including direct high-resolution simulation and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs.
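
    The abstract does not write out the local-scale model; the standard governing equation for unsaturated flow that such constitutive analyses build on is the Richards equation, shown here as general background rather than the report's notation:

```latex
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \big[\, K(\psi)\, \nabla(\psi + z) \,\big]
```

    where \theta is the volumetric water content, \psi the pressure head, K(\psi) the unsaturated hydraulic conductivity, and z the elevation head.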

  15. Local and global approaches of affinity propagation clustering for large scale data

    CERN Document Server

    Xia, Dingyin; Zhang, Xuqing; Zhuang, Yueting

    2009-01-01

    Recently a new clustering algorithm called 'affinity propagation' (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. However, in many cases we want to cluster large-scale data where the similarities are not sparse. This paper presents two variants of AP for grouping large-scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP) and the global method is landmark affinity propagation (LAP). PAP passes messages within subsets of the data first and then merges the subsets after an initial number of iterations; it can effectively reduce the number of clustering iterations. LAP passes messages between landmark data points first and then clusters the non-landmark data points; it is a global approximation method to speed up clustering. Experiments are conducted on many datasets, such as random data points, manifold subspaces, images of faces and Chinese calligraphy, and the results demonstrate that the two...
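
    The message-passing scheme both variants build on is standard affinity propagation: alternate responsibility and availability updates over a similarity matrix whose diagonal holds the exemplar preferences. A minimal dense-matrix sketch (damping and iteration count are illustrative; the paper's PAP partitioning and LAP landmark steps are not reproduced):

```python
import numpy as np

def affinity_propagation(S, n_iter=200, damping=0.5):
    """Minimal dense affinity propagation.
    S: n x n similarity matrix, diagonal = preferences.
    Returns the exemplar index chosen by each point."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    d = np.arange(n)
    for _ in range(n_iter):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        best = AS.argmax(axis=1)
        first = AS[d, best]
        AS[d, best] = -np.inf
        second = AS.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[d, best] = S[d, best] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[d, d] = R[d, d]
        Anew = Rp.sum(axis=0)[None, :] - Rp
        dA = Anew[d, d].copy()
        Anew = np.minimum(Anew, 0)
        Anew[d, d] = dA  # a(k,k) = sum_{i' != k} max(0, r(i',k))
        A = damping * A + (1 - damping) * Anew
    return (A + R).argmax(axis=1)
```

    With negative squared distances as similarities and the median similarity as the shared preference, two well-separated 1D clusters are recovered as two exemplars.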

  16. Anterior septal deviation and contralateral alar collapse.

    Science.gov (United States)

    Schalek, P; Hahn, A

    2011-01-01

    Septal deviation is often found in conjunction with other pathological conditions that adversely affect nasal patency. Anterior septal deviation, together with contralateral alar collapse, is a relatively rare type of anatomical and functional incompetence. In our experience, it can often be resolved with septoplasty, without the necessity of surgery involving the external valve. The aim of this paper was to verify this hypothesis prospectively. Twelve patients with anterior septal deviation and simultaneous alar collapse on the opposite side were prospectively enrolled in the study. Subjective assessment of nasal patency was made on post-operative day 1, and again 6 months after surgery, using a subjective evaluation of nasal breathing. The width of the nostril (alar-columellar distance) on the side with the alar collapse was measured during inspiration pre-operatively, 1 day after surgery and again 6 months after surgery. Immediately after surgery, all patients reported improved or excellent nasal breathing on the side of the original septal deviation. On the collapsed side, one patient reported no change in condition. With the exception of one patient, all measurements showed some degree of improvement in the extension of the alar-columellar distance. The average benefit 6 months after surgery was an improvement of 4.54 mm. In our group of patients (anterior septal deviation and simultaneous contralateral alar collapse and no obvious structural changes of the alar cartilage) we found septoplasty to be entirely suitable and we recommend it as the treatment of choice in such cases.

  17. Gene prediction in metagenomic fragments: A large scale machine learning approach

    Directory of Open Access Journals (Sweden)

    Morgenstern Burkhard

    2008-04-01

    Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast, while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short length (for example, Sanger sequencing yields on average 700 bp fragments) and unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability.
Conclusion Large scale machine learning methods are well-suited for gene
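
    Two of the second-stage features named above, open reading frame length and GC-content, are easy to illustrate. A toy extractor for forward-strand ORFs (the paper's codon-usage discriminants and neural network are not reproduced; the example sequence is made up):

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA string."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def find_orfs(seq, min_len=6):
    """Scan the three forward reading frames for ATG..stop open reading
    frames; returns (start, end, length, gc) tuples."""
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        start = None
        for pos in range(frame, len(seq) - 2, 3):
            codon = seq[pos:pos + 3]
            if codon == "ATG" and start is None:
                start = pos
            elif codon in stops and start is not None:
                end = pos + 3
                if end - start >= min_len:
                    orfs.append((start, end, end - start,
                                 gc_content(seq[start:end])))
                start = None
    return orfs

orfs = find_orfs("ATGGCGTGCTAA")
```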

  18. Investigating deviations from norms in court interpreting

    DEFF Research Database (Denmark)

    Dubslaff, Friedel; Martinsen, Bodil

    ..., in some cases, all professional users involved (judges, lawyers, prosecutors). As far as the non-Danish-speaking users are concerned, it has, with one notable exception, unfortunately not been possible to obtain data from this group via questionnaires. As this type of data, however, is important... behaviour, explore why the deviations in question occur, and find out what happens if deviations are perceived as such by the other participants involved in the interpreted event. We will reconstruct the norms in question by examining interpreters' and (mainly) professional users' behaviour in the course of... deviations and sanctions in every case. By way of example: several judges, who had given their consent to recordings of authentic data in connection with the research project, reported that they had experienced problems with insufficient language proficiency on the part of untrained interpreters speaking...

  19. Identification and Prediction of Large Pedestrian Flow in Urban Areas Based on a Hybrid Detection Approach

    Directory of Open Access Journals (Sweden)

    Kaisheng Zhang

    2016-12-01

    Full Text Available Recently, population density has grown quickly with accelerating urbanization. At the same time, overcrowded situations are more likely to occur in populous urban areas, increasing the risk of accidents. This paper proposes a synthetic approach to recognize and identify large pedestrian flows. In particular, a hybrid pedestrian flow detection model was constructed by analyzing real data from major mobile phone operators in China, including information from smartphones and base stations (BS). Within the hybrid model, the Log Distance Path Loss (LDPL) model was used to estimate the pedestrian density from raw network data, and information was retrieved with a Gaussian Process (GP) through supervised learning. Temporal-spatial prediction of the pedestrian data was carried out with Machine Learning (ML) approaches. Finally, a case study of a real Central Business District (CBD) scenario in Shanghai, China, using records of millions of cell phone users, was conducted. The results showed that the new approach significantly increases the utility and capacity of the mobile network. A more reasonable overcrowding detection and alert system can be developed to improve safety in subway lines and other hotspot landmark areas, such as the Bund, People's Square or Disneyland, where a large passenger flow generally exists.
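
    The Log Distance Path Loss (LDPL) model mentioned above relates path loss to distance, which is what lets density be inferred from raw network measurements. A minimal sketch, with assumed reference-loss and path-loss-exponent values (the paper's fitted parameters are not given here):

    ```python
    import numpy as np

    def ldpl_pathloss(d, pl0=40.0, n=3.0, d0=1.0):
        """Log Distance Path Loss: loss in dB at distance d (metres).
        pl0: loss at reference distance d0; n: path-loss exponent (assumed)."""
        return pl0 + 10.0 * n * np.log10(d / d0)

    def ldpl_distance(pl, pl0=40.0, n=3.0, d0=1.0):
        """Invert the LDPL model: estimate distance from an observed path loss."""
        return d0 * 10.0 ** ((pl - pl0) / (10.0 * n))

    # Round-trip sanity check: 100 m -> path loss in dB -> back to 100 m.
    pl = ldpl_pathloss(100.0)
    print(pl, round(ldpl_distance(pl), 6))
    ```

    In a density-estimation pipeline, the inverted distances from several base stations bound each handset's position, and counts per cell then approximate pedestrian density.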

  20. PoDMan: Policy Deviation Management

    Directory of Open Access Journals (Sweden)

    Aishwarya Bakshi

    2017-07-01

    Full Text Available Whenever an unexpected or exceptional situation occurs, complying with the existing policies may not be possible. The main objective of this work is to assist individuals and organizations in deciding whether to deviate from policies and perform a non-complying action. The paper proposes utilizing software agents as supportive tools to suggest the best non-complying action when deviating from policies. The article also introduces a process by which the decision on the choice of non-complying action can be made. The work is motivated by a real scenario observed in a hospital in Norway and is demonstrated in the same setting.

  1. Local and global approaches of affinity propagation clustering for large scale data

    Institute of Scientific and Technical Information of China (English)

    Ding-yin XIA; Fei WU; Xu-qing ZHANG; Yue-ting ZHUANG

    2008-01-01

    Recently a new clustering algorithm called 'affinity propagation' (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. However, in many cases we want to cluster large-scale data whose similarities are not sparse. This paper presents two variants of AP for grouping large-scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP) and the global method is landmark affinity propagation (LAP). PAP passes messages within subsets of the data first and then merges them in the initial steps of the iterations; it can effectively reduce the number of iterations of clustering. LAP passes messages between the landmark data points first and then clusters the non-landmark data points; it is a global approximation method to speed up clustering. Experiments are conducted on many datasets, such as random data points, manifold subspaces, images of faces and Chinese calligraphy, and the results demonstrate that the two approaches are feasible and practicable.
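
    Both PAP and LAP build on the standard AP message passing between responsibilities and availabilities. A compact numpy sketch of that core loop follows; the toy two-blob data and the median-similarity preference are illustrative choices, not the paper's setup:

    ```python
    import numpy as np

    def affinity_propagation(S, damping=0.5, iters=200):
        """Plain AP message passing. S: similarity matrix, preferences on its diagonal."""
        n = S.shape[0]
        R = np.zeros((n, n)); A = np.zeros((n, n))
        for _ in range(iters):
            # responsibilities: r(i,k) = s(i,k) - max_{k'!=k} (a(i,k') + s(i,k'))
            M = A + S
            idx = np.argmax(M, axis=1)
            first = M[np.arange(n), idx]
            M[np.arange(n), idx] = -np.inf
            second = M.max(axis=1)
            Rnew = S - first[:, None]
            Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
            R = damping * R + (1 - damping) * Rnew
            # availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
            Rp = np.maximum(R, 0)
            np.fill_diagonal(Rp, R.diagonal())
            Anew = Rp.sum(axis=0)[None, :] - Rp
            dA = Anew.diagonal().copy()          # a(k,k) = sum_{i'!=k} max(0, r(i',k))
            Anew = np.minimum(0, Anew)
            np.fill_diagonal(Anew, dA)
            A = damping * A + (1 - damping) * Anew
        exemplars = np.flatnonzero((A + R).diagonal() > 0)
        labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
        labels[exemplars] = exemplars
        return labels

    # two well-separated blobs; similarity = negative squared Euclidean distance
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.5, (10, 2)), rng.normal(8, 0.5, (10, 2))])
    S = -((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(S, np.median(S))            # preference = median similarity
    labels = affinity_propagation(S)
    print("clusters found:", len(set(labels.tolist())))
    ```

    PAP's trick is to run this loop on partitions of the data first; LAP runs it only on landmark points and then assigns the rest, which avoids the dense n-by-n message matrices above.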

  2. Moderate deviations for the eigenvalue counting function of Wigner matrices

    CERN Document Server

    Doering, Hanna

    2011-01-01

    We establish a moderate deviation principle (MDP) for the number of eigenvalues of a Wigner matrix in an interval. The proof relies on fine asymptotics of the variance of the eigenvalue counting function of GUE matrices due to Gustavsson. The extension to large families of Wigner matrices is based on the Tao and Vu Four Moment Theorem and applies localization results by Erdős, Yau and Yin. Moreover, we investigate families of covariance matrices as well.

  3. Large eddy simulation of atmospheric boundary layer over wind farms using a prescribed boundary layer approach

    DEFF Research Database (Denmark)

    Chivaee, Hamid Sarlak; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming

    2012-01-01

    Large eddy simulation (LES) of flow in a wind farm is studied in neutral as well as thermally stratified atmospheric boundary layers (ABL). An approach is employed to simulate the flow in a fully developed wind farm boundary layer. The approach is based on the Immersed Boundary Method (IBM) … and involves implementation of an arbitrary prescribed initial boundary layer (see [1]). A prescribed initial boundary layer profile is enforced through the computational domain using body forces to maintain a desired flow field. The body forces are then stored and applied on the domain throughout the simulation … and the boundary layer shape will be modified due to the interaction of the turbine wakes and buoyancy contributions. The implemented method is capable of capturing the most important features of the wakes of wind farms [1] while having the advantage of resolving the wall layer with a coarser grid than typically…

  4. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2014-10-01

    Full Text Available We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and revealed several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.

  5. A Coordinated Approach to Channel Estimation in Large-scale Multiple-antenna Systems

    CERN Document Server

    Yin, Haifan; Filippou, Miltiades; Liu, Yingzhuang

    2012-01-01

    This paper addresses the problem of channel estimation in multi-cell interference-limited cellular networks. We consider systems employing multiple antennas and are interested in both the finite and large-scale antenna number regimes (so-called "Massive MIMO"). Such systems deal with the multi-cell interference by way of per-cell beamforming applied at each base station. Channel estimation in such networks, which is known to be hampered by the pilot contamination effect, constitutes a major bottleneck for overall performance. We present a novel approach which tackles this problem by enabling a low-rate coordination between cells during the channel estimation phase itself. The coordination makes use of additional second-order statistical information about the user channels, which is shown to offer a powerful way of discriminating across interfering users with even strongly correlated pilot sequences. Importantly, we demonstrate analytically that in the large number of antennas regime the pilot contamination…

  6. Improved cluster-in-molecule local correlation approach for electron correlation calculation of large systems.

    Science.gov (United States)

    Guo, Yang; Li, Wei; Li, Shuhua

    2014-10-02

    An improved cluster-in-molecule (CIM) local correlation approach is developed to make electron correlation calculations of large systems more accurate and faster. We propose a refined strategy for constructing the virtual LMOs of various clusters, which is suitable for basis sets of various types. To recover medium-range electron correlation, which is important for quantitative descriptions of large systems, we find that a larger distance threshold (ξ) is necessary for highly accurate results. Our illustrative calculations show that the present CIM-MP2 (second-order Møller-Plesset perturbation theory, MP2) or CIM-CCSD (coupled cluster singles and doubles, CCSD) scheme with a suitable ξ value is capable of recovering more than 99.8% of the correlation energy for a wide range of systems with different basis sets. Furthermore, the present CIM-MP2 scheme can provide relative energy differences as reliable as those of the conventional MP2 method for secondary structures of polypeptides.

  7. A New Approach for Structural Monitoring of Large Dams with a Three-Dimensional Laser Scanner

    Directory of Open Access Journals (Sweden)

    José Sánchez

    2008-09-01

    Full Text Available Driven by progress in sensor technology, computer methods and data processing capabilities, 3D laser scanning has found a wide range of new application fields in recent years. In particular, monitoring the static and dynamic behaviour of large dams has always been a topic of great importance, due to the impact these structures have on the whole landscape where they are built. The main goal of this paper is to show the relevance and novelty of the laser-scanning methodology developed, which incorporates different statistical and modelling approaches not considered until now. As a result, the methods proposed in this paper have enabled the measurement and monitoring of the large “Las Cogotas” dam (Avila, Spain).

  8. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz

    2016-12-29

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.
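
    A minimal GA-based antenna selection sketch against a toy log-det throughput objective. The channel model, objective and GA operators here are illustrative assumptions, not the paper's exact scheme; the point is that the GA explores k-subsets of antennas and is compared to exhaustive search:

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    Nt, k = 10, 3                        # 10 transmit antennas, select 3 (toy sizes)
    H = rng.normal(size=(4, Nt)) + 1j * rng.normal(size=(4, Nt))  # 4-antenna receiver

    def throughput(sel):
        """Objective stand-in: log-det capacity of the selected sub-channel."""
        Hs = H[:, list(sel)]
        return np.log2(np.linalg.det(np.eye(k) + Hs.conj().T @ Hs).real)

    def ga_select(pop=30, gens=40, pmut=0.3):
        """Toy GA over k-subsets: tournament selection, crossover, mutation, elitism."""
        popu = [rng.choice(Nt, k, replace=False) for _ in range(pop)]
        best = max(popu, key=throughput)
        for _ in range(gens):
            nxt = [best]                              # elitism: keep the best subset
            while len(nxt) < pop:
                cand = [popu[i] for i in rng.integers(0, pop, 4)]
                a = max(cand[:2], key=throughput)     # two binary tournaments
                b = max(cand[2:], key=throughput)
                genes = np.union1d(a, b)              # crossover: mix parents' antennas
                child = rng.choice(genes, k, replace=False)
                if rng.random() < pmut:               # mutation: swap in a new antenna
                    out = rng.integers(k)
                    child[out] = rng.choice(np.setdiff1d(np.arange(Nt), child))
                nxt.append(child)
            popu = nxt
            best = max(popu, key=throughput)
        return best

    ga_best = throughput(ga_select())
    opt = max(throughput(c) for c in combinations(range(Nt), k))  # exhaustive search
    print("GA:", ga_best, "exhaustive:", opt)
    ```

    On this small instance the GA evaluates far fewer subsets per generation than the 120-combination exhaustive search would need at realistic sizes, which mirrors the complexity argument in the abstract.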

  9. A Connectionist Modeling Approach to Rapid Analysis of Emergent Social Cognition Properties in Large-Populations

    Energy Technology Data Exchange (ETDEWEB)

    Perumalla, Kalyan S [ORNL; Schryver, Jack C [ORNL

    2009-01-01

    Traditional modeling methodologies, such as those based on rule-based agent modeling, are exhibiting limitations in application to rich behavioral scenarios, especially when applied to large population aggregates. Here, we propose a new modeling methodology based on a well-known "connectionist approach," and articulate its pertinence in new applications of interest. This methodology is designed to address challenges such as speed of model development, model customization, model reuse across disparate geographic/cultural regions, and rapid and incremental updates to models over time.

  10. A Sustainable approach to large ICT Science based infrastructures; the case for Radio Astronomy

    CERN Document Server

    Barbosa, Domingos; Boonstra, Albert-Jan; Aguiar, Rui; van Ardenne, Arnold; de Santander-Vela, Juande; Verdes-Montenegro, Lourdes

    2014-01-01

    Large sensor-based infrastructures for radio astronomy will be among the most data-intensive projects in the world, facing very high power demands. The geographically wide distribution of these infrastructures and their associated High Performance Computing (HPC) facilities requires Green Information and Communications Technologies (ICT): a combination of low-power computing, efficient data storage, local data services, Smart Grid power management, and the inclusion of renewable energies. Here we outline the major characteristics and innovation approaches to address power efficiency and long-term power sustainability for radio astronomy projects, focusing on Green ICT for science.

  11. A new approach for inversion of large random matrices in massive MIMO systems.

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Raza Anjum

    Full Text Available We report a novel approach for the inversion of large random matrices in massive Multiple-Input Multiple-Output (MIMO) systems. It is based on the concept of inverse vectors, in which an inverse vector is defined for each column of the principal matrix. Such an inverse vector has to satisfy two constraints. Firstly, it has to be in the null space of all the remaining columns; we call this the null-space problem. Secondly, it has to form a projection of value equal to one in the direction of the selected column; we term this the normalization problem. The process essentially decomposes the inversion problem and distributes it over the columns. Each column can be thought of as a node in a network or a particle in a swarm seeking its own solution, the inverse vector, which lightens the computational load on it. Another benefit of this approach is its applicability to all three cases pertaining to a linear system: the fully determined, the over-determined, and the under-determined case. It eliminates the need to form the generalized inverse for the last two cases by providing a new way to solve the least-squares problem and the Moore-Penrose pseudoinverse problem. The approach makes no assumption regarding the size, structure or sparsity of the matrix. This makes it fully applicable to the large random matrices arising in massive MIMO systems. Also, the null-space problem opens the door for the many null-space computation methods available in the literature to enter the realm of matrix inversion. There is even the flexibility of finding an exact or an approximate inverse depending on the null-space method employed. We employ Householder's null-space method for an exact solution and present a complete exposition of the new approach. A detailed comparison with well-established matrix inversion methods in the literature is also given.

  12. A new approach for inversion of large random matrices in massive MIMO systems.

    Science.gov (United States)

    Anjum, Muhammad Ali Raza; Ahmed, Muhammad Mansoor

    2014-01-01

    We report a novel approach for the inversion of large random matrices in massive Multiple-Input Multiple-Output (MIMO) systems. It is based on the concept of inverse vectors, in which an inverse vector is defined for each column of the principal matrix. Such an inverse vector has to satisfy two constraints. Firstly, it has to be in the null space of all the remaining columns; we call this the null-space problem. Secondly, it has to form a projection of value equal to one in the direction of the selected column; we term this the normalization problem. The process essentially decomposes the inversion problem and distributes it over the columns. Each column can be thought of as a node in a network or a particle in a swarm seeking its own solution, the inverse vector, which lightens the computational load on it. Another benefit of this approach is its applicability to all three cases pertaining to a linear system: the fully determined, the over-determined, and the under-determined case. It eliminates the need to form the generalized inverse for the last two cases by providing a new way to solve the least-squares problem and the Moore-Penrose pseudoinverse problem. The approach makes no assumption regarding the size, structure or sparsity of the matrix. This makes it fully applicable to the large random matrices arising in massive MIMO systems. Also, the null-space problem opens the door for the many null-space computation methods available in the literature to enter the realm of matrix inversion. There is even the flexibility of finding an exact or an approximate inverse depending on the null-space method employed. We employ Householder's null-space method for an exact solution and present a complete exposition of the new approach. A detailed comparison with well-established matrix inversion methods in the literature is also given.
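
    The inverse-vector idea can be sketched directly: for each column, solve the null-space problem and then the normalization problem by rescaling. Here an SVD stands in for Householder's null-space method, purely for brevity; the resulting vectors are the rows of the inverse:

    ```python
    import numpy as np

    def inverse_via_null_space(A):
        """Column-wise inversion: for each column i of A, find v_i orthogonal to
        all remaining columns (null-space problem) and scale it so that
        v_i . a_i = 1 (normalization problem). The v_i are the rows of A^{-1}."""
        n = A.shape[1]
        rows = []
        for i in range(n):
            others = np.delete(A, i, axis=1)       # the remaining columns
            # null space of others^T: right-singular vector beyond the rank
            _, _, Vt = np.linalg.svd(others.T)
            v = Vt[-1]                             # spans the 1-D null space
            rows.append(v / (v @ A[:, i]))         # normalization: v . a_i = 1
        return np.array(rows)

    rng = np.random.default_rng(3)
    A = rng.normal(size=(5, 5))
    Ainv = inverse_via_null_space(A)
    print(np.allclose(Ainv @ A, np.eye(5)))
    ```

    Each column's computation is independent of the others, which is the decomposition-and-distribution property the abstract emphasizes: the loop body could run on separate nodes.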

  13. The ranking probability approach and its usage in design and analysis of large-scale studies.

    Science.gov (United States)

    Kuo, Chia-Ling; Zaykin, Dmitri

    2013-01-01

    In experiments with many statistical tests there is a need to balance type I and type II error rates while taking multiplicity into account. In the traditional approach, the nominal [Formula: see text]-level such as 0.05 is adjusted by the number of tests, [Formula: see text], i.e., as 0.05/[Formula: see text]. Assuming that some proportion of tests represent "true signals", that is, originate from a scenario where the null hypothesis is false, power depends on the number of true signals and the respective distribution of effect sizes. One way to define power is as the probability of making at least one correct rejection at the assumed [Formula: see text]-level. We advocate an alternative way of establishing how "well-powered" a study is. In our approach, useful for studies with multiple tests, the ranking probability [Formula: see text] is controlled, defined as the probability of making at least [Formula: see text] correct rejections while rejecting hypotheses with the [Formula: see text] smallest P-values. The two approaches are statistically related. The probability that the smallest P-value is a true signal (i.e., [Formula: see text]) is equal to the power at the level [Formula: see text], to an excellent approximation. Ranking probabilities are also related to the false discovery rate and to the Bayesian posterior probability of the null hypothesis. We study properties of our approach when the effect size distribution is replaced, for convenience, by a single "typical" value taken to be the mean of the underlying distribution. We conclude that its performance is often satisfactory under this simplification; however, substantial imprecision is to be expected when [Formula: see text] is very large and [Formula: see text] is small. Precision is largely restored when three values with their respective abundances are used instead of a single typical effect size value.
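
    The simplest ranking probability, the probability that the smallest P-value comes from a true signal, can be estimated by simulation. The test count, signal count and effect size below are arbitrary toy choices, not values from the paper:

    ```python
    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(4)
    m, m1, effect, trials = 100, 10, 3.0, 2000   # m tests, m1 true signals (toy values)

    hits = 0
    for _ in range(trials):
        z = rng.normal(0.0, 1.0, m)              # null z-statistics
        z[:m1] += effect                         # true signals get a mean shift
        p = np.array([erfc(abs(v) / sqrt(2)) for v in z])   # two-sided p-values
        hits += p.argmin() < m1                  # smallest p-value a true signal?

    P1 = hits / trials
    print("estimated ranking probability P1:", P1)
    ```

    Repeating this with the rejection-at-level-alpha/m rule instead of `argmin` would give the corresponding power estimate, so the relation between the two quantities claimed in the abstract can be checked numerically.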

  14. Bodily Deviations and Body Image in Adolescence

    Science.gov (United States)

    Vilhjalmsson, Runar; Kristjansdottir, Gudrun; Ward, Dianne S.

    2012-01-01

    Adolescents with unusually sized or shaped bodies may experience ridicule, rejection, or exclusion based on their negatively valued bodily characteristics. Such experiences can have negative consequences for a person's image and evaluation of self. This study focuses on the relationship between bodily deviations and body image and is based on a…

  16. 45 CFR 2543.4 - Deviations.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Deviations. 2543.4 Section 2543.4 Public Welfare Regulations Relating to Public Welfare (Continued) CORPORATION FOR NATIONAL AND COMMUNITY SERVICE GRANTS AND AGREEMENTS WITH INSTITUTIONS OF HIGHER EDUCATION, HOSPITALS, AND OTHER NON-PROFIT ORGANIZATIONS General...

  17. Voice Deviations and Coexisting Communication Disorders.

    Science.gov (United States)

    St. Louis, Kenneth O.; And Others

    1992-01-01

    This study examined the coexistence of other communicative disorders with voice disorders in about 3,400 children in grades 1-12 at 100 sites throughout the United States. The majority of voice-disordered children had coexisting articulation deviations and also differed from controls on two language measures and mean pure-tone hearing thresholds.…

  18. 41 CFR 109-1.5304 - Deviations.

    Science.gov (United States)

    2010-07-01

    ... Secretary for Procurement and Assistance Management. A HFO's decision not to provide life-cycle control... through the cognizant HFO to the Deputy Assistant Secretary for Procurement and Assistance Management. ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Deviations....

  19. 43 CFR 12.904 - Deviations.

    Science.gov (United States)

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Deviations. 12.904 Section 12.904 Public Lands: Interior Office of the Secretary of the Interior ADMINISTRATIVE AND AUDIT REQUIREMENTS AND COST PRINCIPLES FOR ASSISTANCE PROGRAMS Uniform Administrative Requirements for Grants and Agreements...

  20. Association between septal deviation and sinonasal papilloma.

    Science.gov (United States)

    Nomura, Kazuhiro; Ogawa, Takenori; Sugawara, Mitsuru; Honkura, Yohei; Oshima, Hidetoshi; Arakawa, Kazuya; Oshima, Takeshi; Katori, Yukio

    2013-12-01

    Sinonasal papilloma is a common benign epithelial tumor of the sinonasal tract and accounts for 0.5% to 4% of all nasal tumors. The etiology of sinonasal papilloma remains unclear, although human papilloma virus has been proposed as a major risk factor. Other etiological factors, such as anatomical variations of the nasal cavity, may be related to the pathogenesis of sinonasal papilloma, because a deviated nasal septum is seen in patients with chronic rhinosinusitis. We therefore investigated the involvement of a deviated nasal septum in the development of sinonasal papilloma. Preoperative computed tomography or magnetic resonance imaging findings of 83 patients with sinonasal papilloma were evaluated retrospectively. The side of the papilloma and the direction of septal deviation showed a significant correlation. The septum deviated to the intact side in 51 of 83 patients (61.4%) and to the affected side in 18 of 83 patients (21.7%). A straight or S-shaped septum was observed in 14 of 83 patients (16.9%). Even after excluding 27 patients who underwent revision surgery and 15 patients in whom the papilloma touched the concave portion of the nasal septum, the concave side of the septal deviation was associated with the development of sinonasal papilloma (p = 0.040). The high incidence of sinonasal papilloma on the concave side may reflect the consequences of traumatic effects caused by the wall shear stress of high-velocity airflow and the increased chance of inhaling viruses and pollutants. The present study supports the causative role of human papilloma virus and toxic chemicals in the occurrence of sinonasal papilloma.

  1. Adaptive combinatorial design to explore large experimental spaces: approach and validation.

    Science.gov (United States)

    Lejay, L V; Shasha, D E; Palenchar, P M; Kouranov, A Y; Cruikshank, A A; Chou, M F; Coruzzi, G M

    2004-12-01

    Systems biology requires mathematical tools not only to analyse large genomic datasets, but also to explore large experimental spaces in a systematic yet economical way. We demonstrate that two-factor combinatorial design (CD), shown to be useful in software testing, can be used to design a small set of experiments that would allow biologists to explore larger experimental spaces. Further, the results of an initial set of experiments can be used to seed further 'Adaptive' CD experimental designs. As a proof of principle, we demonstrate the usefulness of this Adaptive CD approach by analysing data from the effects of six binary inputs on the regulation of genes in the N-assimilation pathway of Arabidopsis. This CD approach identified the more important regulatory signals previously discovered by traditional experiments using far fewer experiments, and also identified examples of input interactions previously unknown. Tests using simulated data show that Adaptive CD suffers from fewer false positives than traditional experimental designs in determining decisive inputs, and succeeds far more often than traditional or random experimental designs in determining when genes are regulated by input interactions. We conclude that Adaptive CD offers an economical framework for discovering dominant inputs and interactions that affect different aspects of genomic outputs and organismal responses.
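
    The two-factor (pairwise) CD idea can be illustrated with a greedy construction of a small design that covers every level pair of six binary inputs, far fewer runs than the full factorial. This is a generic covering-array sketch, not the authors' exact design procedure:

    ```python
    from itertools import combinations, product

    factors = 6                                   # six binary inputs (as in the study)
    all_runs = list(product([0, 1], repeat=factors))   # full factorial: 64 runs

    def uncovered(design):
        """Factor-pair/level-pair combinations not yet exercised by the design."""
        need = {(i, j, a, b) for i, j in combinations(range(factors), 2)
                for a, b in product([0, 1], repeat=2)}
        for run in design:
            for i, j in combinations(range(factors), 2):
                need.discard((i, j, run[i], run[j]))
        return need

    design = []
    while uncovered(design):
        # greedily add the run covering the most still-uncovered pairs
        need = uncovered(design)
        design.append(max(all_runs, key=lambda r: sum(
            (i, j, r[i], r[j]) in need for i, j in combinations(range(factors), 2))))

    print(len(design), "runs instead of", len(all_runs))
    ```

    The adaptive step in the abstract corresponds to seeding `design` with the runs already performed and letting the greedy loop propose only the additional experiments needed.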

  2. A Cost-Effective Planning Graph Approach for Large-Scale Web Service Composition

    Directory of Open Access Journals (Sweden)

    Szu-Yin Lin

    2012-01-01

    Full Text Available Web Service Composition (WSC) problems can be considered as a service matching problem, in which the output parameters of one Web service can be used as inputs of another. However, when a very large number of Web services are deployed in the environment, service composition becomes a sophisticated and complicated process. In this study, we propose a novel cost-effective Web service composition mechanism. It utilizes a planning graph based on a backward search algorithm to find multiple feasible solutions and recommends the best composition solution according to the lowest service cost. In other words, the proposed approach is a goal-driven mechanism which recommends approximate solutions that consume fewer Web services and fewer nesting levels of composite services. Finally, we implement a simulation platform to validate the proposed cost-effective planning graph mechanism in a large-scale Web services environment. The simulation results show that our proposed algorithm based on the backward planning graph reduces service cost by 94% in three different service composition environments, compared with other existing service composition approaches based on a forward planning graph.
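
    A backward search over a toy service registry illustrates the matching idea, with one service's outputs feeding another's inputs. The registry and cost-free chaining below are hypothetical simplifications of the paper's planning-graph mechanism:

    ```python
    # Each service: name -> (inputs, outputs). Hypothetical toy registry.
    services = {
        "geocode":   ({"address"}, {"lat", "lon"}),
        "weather":   ({"lat", "lon"}, {"forecast"}),
        "translate": ({"forecast"}, {"forecast_fr"}),
    }

    def backward_compose(provided, goal):
        """Backward search: start from the goal outputs and chain services whose
        outputs satisfy still-missing inputs, until only provided data remain."""
        needed, plan = set(goal), []
        while not needed <= provided:
            for name, (ins, outs) in services.items():
                if outs & needed and name not in plan:
                    plan.append(name)
                    needed = (needed - outs) | ins   # this service's inputs now needed
                    break
            else:
                return None                          # no service supplies a missing item
        return list(reversed(plan))                  # execution order

    print(backward_compose({"address"}, {"forecast_fr"}))
    ```

    The cost-effective variant in the abstract would enumerate several such feasible plans and rank them by total service cost rather than returning the first one found.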

  3. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Full Text Available Abstract Background In recent years, large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increasing frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially structured metapopulation models. One major issue is thus to what extent the geotemporal spreading patterns found by different modeling approaches may differ, depending on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based model are synchronized in their initial conditions by using the same disease parameterization and by defining the same importation of infected cases from international travel. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the intra-population contact structure of the approaches. 
The age

  4. A topic clustering approach to finding similar questions from large question and answer archives.

    Directory of Open Access Journals (Sweden)

    Wei-Nan Zhang

    Full Text Available With the blooming of Web 2.0, Community Question Answering (CQA) services such as Yahoo! Answers (http://answers.yahoo.com), WikiAnswer (http://wiki.answers.com), and Baidu Zhidao (http://zhidao.baidu.com) have emerged as alternatives for knowledge and information acquisition. Over time, a large number of question and answer (Q&A) pairs of high quality contributed by human intelligence have been accumulated as a comprehensive knowledge base. Unlike search engines, which return long lists of results, searching the CQA services can obtain correct answers to question queries by automatically finding similar questions that have already been answered by other users. Hence, it greatly improves the efficiency of online information retrieval. However, given a question query, finding similar and well-answered questions is a non-trivial task. The main challenge is the word mismatch between the question query (query) and the candidate question for retrieval (question). To investigate this problem, in this study we capture the word semantic similarity between query and question by introducing a topic modeling approach. We then propose an unsupervised machine-learning approach to finding similar questions in CQA Q&A archives. The experimental results show that our proposed approach significantly outperforms the state-of-the-art methods.

  5. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    Science.gov (United States)

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
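
    As a rough illustration of graph-based ranking with prior (seed) knowledge, here is a personalized PageRank power iteration. This is a generic building block for semi-supervised graph ranking, not the SSP parameterization from the paper; the toy graph and seed choice are assumptions:

    ```python
    import numpy as np

    def personalized_pagerank(adj, seeds, alpha=0.85, iters=100):
        """Power iteration for PageRank with a personalization (restart) vector,
        so that prior knowledge about seed nodes biases the ranking."""
        n = adj.shape[0]
        deg = adj.sum(axis=1, keepdims=True)
        P = np.divide(adj, deg, out=np.zeros_like(adj), where=deg > 0)  # row-stochastic
        v = np.zeros(n)
        v[list(seeds)] = 1.0 / len(seeds)        # restart distribution over seeds
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = alpha * (P.T @ r) + (1 - alpha) * v
        return r

    # toy graph: a 0-1-2 chain plus a disconnected 3-4 pair; seed at node 0
    adj = np.array([[0, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 0, 0],
                    [0, 0, 0, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=float)
    r = personalized_pagerank(adj, seeds={0})
    print(r.round(3))    # ranking mass concentrates in the seed's component
    ```

    In a semi-supervised setting like SSLF-GR, the labeled nodes play the role of the seeds, while node and edge features would additionally reweight the transition matrix.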

  6. Deviations from LTE in a stellar atmosphere

    Science.gov (United States)

    Kalkofen, W.; Klein, R. I.; Stein, R. F.

    1979-01-01

    Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers when the emergent intensity can be described by a radiation temperature.

  7. The role of septal surgery in management of the deviated nose.

    Science.gov (United States)

    Foda, Hossam M T

    2005-02-01

    The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 260 patients seeking rhinoplasty to correct external nasal deviations; 75 percent of them had various degrees of nasal obstruction. Septal surgery was necessary in 232 patients (89 percent), not only to improve breathing but also to achieve a straight, symmetrical external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support of the dorsum or nasal tip. The approach depended on full mobilization of the deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position using bony splinting grafts through an external rhinoplasty approach.

  8. Contiguous Uniform Deviation for Multiple Linear Regression in Pattern Recognition

    Science.gov (United States)

    Andriana, A. S.; Prihatmanto, D.; Hidaya, E. M. I.; Supriana, I.; Machbub, C.

    2017-01-01

    Understanding images by recognizing their objects is still a challenging task. Face-element detection has been developed by researchers but does not yet provide enough information (it is low-resolution in information) for recognizing objects. Available face recognition methods still make classification errors and need a huge number of examples, which may still be incomplete. Another approach, still rare in image understanding, uses pattern structures or syntactic grammars that describe detailed shape features. Image pixel values are also processed as signal patterns, which are approximated by mathematical curve fitting. This paper adds a contiguous uniform deviation method to a curve-fitting algorithm to increase its applicability in image recognition systems involving object movement. The combination of multiple linear regression and the contiguous uniform deviation method is applied to the function of image pixel values, and yields a higher-resolution (more informative) description of visual object detail during object movement.
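    As an illustration of the regression component only (the contiguous uniform deviation step is not detailed in the abstract), a least-squares quadratic fit to pixel values along a hypothetical scanline can be computed via the normal equations:

    ```python
    def fit_quadratic(xs, ys):
        """Least-squares fit y ~ a + b*x + c*x^2 via the 3x3 normal equations."""
        # Build X^T X and X^T y for the design matrix [1, x, x^2]
        S = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
        t = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
        # Gaussian elimination with partial pivoting
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
            S[col], S[piv] = S[piv], S[col]
            t[col], t[piv] = t[piv], t[col]
            for r in range(col + 1, 3):
                f = S[r][col] / S[col][col]
                for c in range(col, 3):
                    S[r][c] -= f * S[col][c]
                t[r] -= f * t[col]
        coeffs = [0.0, 0.0, 0.0]
        for r in (2, 1, 0):   # back substitution
            coeffs[r] = (t[r] - sum(S[r][c] * coeffs[c] for c in range(r + 1, 3))) / S[r][r]
        return coeffs  # [a, b, c]

    # Hypothetical pixel intensities along one scanline, roughly 2 + x^2 plus noise
    a, b, c = fit_quadratic([0, 1, 2, 3, 4], [2.0, 3.1, 6.0, 10.9, 18.2])
    ```

    The fitted polynomial serves as the "mathematical function curve fitting" of pixel-value signals mentioned above; the paper's deviation criterion would then operate on the residuals of such fits.
    
    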

  9. Deviations in delineated GTV caused by artefacts in 4DCT

    DEFF Research Database (Denmark)

    Persson, Gitte Fredberg; Nygaard, Ditte Eklund; Brink, Carsten;

    2010-01-01

    BACKGROUND AND PURPOSE: Four-dimensional computed tomography (4DCT) is used for breathing-adapted radiotherapy planning. Irregular breathing, large tumour motion or interpolation of images can cause artefacts in the 4DCT. This study evaluates the impact of artefacts on gross tumour volume (GTV) size. MATERIAL AND METHODS: In 19 4DCT scans of patients with peripheral lung tumours, GTV was delineated in all bins. Variations in GTV size between bins in each 4DCT scan were analysed and correlated to tumour motion and variations in breathing signal amplitude and breathing signal period. End-expiration GTV size (GTVexp) was considered as reference for GTV size. Intra-session delineation error was estimated by re-delineation of GTV in eight of the 4DCT scans. RESULTS: In 16 of the 4DCT scans the maximum deviations from GTVexp were larger than could be explained by delineation error. The deviations...

  10. Moderate Deviation Principle for dynamical systems with small random perturbation

    CERN Document Server

    Ma, Yutao; Wu, Liming

    2011-01-01

    Consider the stochastic differential equation in $\mathbb{R}^d$ $$dX^{\varepsilon}_t = b(X^{\varepsilon}_t)\,dt + \sqrt{\varepsilon}\,\sigma(X^{\varepsilon}_t)\,dB_t, \qquad X^{\varepsilon}_0 = x_0, \quad x_0 \in \mathbb{R}^d,$$ where $b:\mathbb{R}^d\rightarrow\mathbb{R}^d$ is $C^1$ such that $\langle x, b(x)\rangle \leq C(1+|x|^2)$, $\sigma:\mathbb{R}^d\rightarrow \mathcal{M}(d\times n)$ is locally Lipschitzian with linear growth, and $B_t$ is a standard Brownian motion taking values in $\mathbb{R}^n$. Freidlin-Wentzell's theorem gives the large deviation principle for $X^{\varepsilon}$ for small $\varepsilon$. In this paper we establish its moderate deviation principle.
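    The small-noise regime can be explored numerically with an Euler-Maruyama scheme; the sketch below uses an illustrative one-dimensional drift $b(x) = -x$ and unit diffusion (not taken from the paper), so that as $\varepsilon \to 0$ the path concentrates on the deterministic flow $x' = -x$:

    ```python
    import math, random

    def euler_maruyama(b, sigma, x0, eps, T=1.0, n=1000, seed=0):
        """Simulate dX = b(X) dt + sqrt(eps) * sigma(X) dB by the Euler-Maruyama scheme."""
        rng = random.Random(seed)
        dt = T / n
        x = x0
        for _ in range(n):
            dB = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
            x += b(x) * dt + math.sqrt(eps) * sigma(x) * dB
        return x

    # Ornstein-Uhlenbeck-type example: for tiny eps, X_1 should be close to
    # the deterministic value x0 * exp(-1).
    x_small = euler_maruyama(lambda x: -x, lambda x: 1.0, x0=1.0, eps=1e-6)
    ```

    Large and moderate deviation principles quantify exactly how unlikely excursions away from this deterministic limit are as $\varepsilon$ shrinks.
    
    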

  11. A pyramid-based approach to visual exploration of a large volume of vehicle trajectory data

    Institute of Scientific and Technical Information of China (English)

    Jing SUN; Xiang LI

    2012-01-01

    Advances in positioning and wireless communication technologies make it possible to collect large volumes of trajectory data from moving vehicles in a fast and convenient fashion. These data can be applied to traffic studies. Behind this application, a methodological issue that still requires particular attention is the way these data should be spatially visualized. Trajectory data physically consist of a large number of positioning points. With the dramatic increase of data volume, it becomes a challenge to display and explore these data. Existing commercial software often employs vector-based indexing structures to facilitate the display of a large number of points, but their performance degrades quickly when the number of points is very large, for example, tens of millions. In this paper, a pyramid-based approach is proposed. The pyramid method was originally invented to facilitate the display of raster images through a trade-off between storage space and display time. A pyramid is a set of images at different levels with different resolutions. In this paper, we convert vector-based point data into raster data and build a grid-based indexing structure in a 2D plane. Then, an image pyramid is built. Moreover, at each level of the pyramid, the image is segmented into mosaics with respect to the requirements of data storage and management. Algorithms and procedures for the grid-based indexing structure, image pyramid, image segmentation, and visualization operations are given in this paper. A case study with taxi trajectory data in Shanghai is conducted. Results demonstrate that the proposed method outperforms the existing commercial software.
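    A toy version of the rasterize-then-pyramid idea can be sketched as follows (grid size and points are illustrative; a real implementation stores image tiles, not Python lists):

    ```python
    def rasterize(points, size):
        """Count trajectory points per cell of a size x size grid over the unit square."""
        grid = [[0] * size for _ in range(size)]
        for x, y in points:
            i = min(int(y * size), size - 1)
            j = min(int(x * size), size - 1)
            grid[i][j] += 1
        return grid

    def build_pyramid(grid):
        """Halve the resolution repeatedly by summing 2x2 blocks, down to 1x1.

        Assumes the grid side is a power of two."""
        levels = [grid]
        while len(grid) > 1:
            half = len(grid) // 2
            grid = [[grid[2*i][2*j] + grid[2*i][2*j+1] +
                     grid[2*i+1][2*j] + grid[2*i+1][2*j+1]
                     for j in range(half)] for i in range(half)]
            levels.append(grid)
        return levels

    pts = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.9)]   # hypothetical GPS fixes (normalized)
    levels = build_pyramid(rasterize(pts, 8))
    ```

    At display time the viewer picks the level whose resolution matches the current zoom, which is the storage-versus-display-time trade-off described above.
    
    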

  12. Using Flipped Classroom Approach to Explore Deep Learning in Large Classrooms

    Directory of Open Access Journals (Sweden)

    Brenda Danker

    2015-01-01

    Full Text Available This project used two Flipped Classroom approaches to stimulate deep learning in large classrooms during the teaching of a film module as part of a Diploma in Performing Arts course at Sunway University, Malaysia. The flipped classes utilized either a blended learning approach, where students first watched online lectures as homework and then completed their assignments and practical work in class, or a guided inquiry approach at the beginning of class using this same process. During class the lecturers were present to help the students, and in addition the students benefited from being able to help one another. The in-class learning activities also included inquiry-based learning, active learning, and peer learning. This project used an action research approach to progressively improve the in-class instructional design and achieve deep learning among the students. The in-class learning activities included in the later flipped classes merged aspects of blended learning with an inquiry-based learning cycle focused on the exploration of concepts. Data were gathered from questionnaires filled out by the students and from short interviews with the students, as well as from the teachers' reflective journals. The findings verified that the flipped classrooms were able to remodel large lecture classes into active-learning classes. The results also suggest a high potential for individualised learning, owing to the teacher's ability to provide one-on-one tutoring through technology-infused lessons. It is imperative that the in-class learning activities are purposefully designed, as the inclusion of exploratory learning through guided inquiry-based activities in the flipped classes was a successful way to engage students at a deeper level, increase their curiosity, and develop their higher-order thinking skills. This project also concluded that

  13. Selecting Video Key Frames Based on Relative Entropy and the Extreme Studentized Deviate Test

    Directory of Open Access Journals (Sweden)

    Yuejun Guo

    2016-03-01

    Full Text Available This paper studies the relative entropy and its square root as distance measures of neighboring video frames for video key frame extraction. We develop a novel approach handling both common and wavelet video sequences, in which the extreme Studentized deviate test is exploited to identify shot boundaries for segmenting a video sequence into shots. Then, video shots can be divided into different sub-shots, according to whether the video content change is large or not, and key frames are extracted from sub-shots. The proposed technique is general, effective and efficient to deal with video sequences of any kind. Our new approach can offer optional additional multiscale summarizations of video data, achieving a balance between having more details and maintaining less redundancy. Extensive experimental results show that the new scheme obtains very encouraging results in video key frame extraction, in terms of both objective evaluation metrics and subjective visual perception.
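    A minimal sketch of the distance measure described above, assuming frames have been reduced to intensity histograms (the bin counts below are hypothetical, and the smoothing constant is an implementation choice, not from the paper):

    ```python
    import math

    def relative_entropy(p, q, eps=1e-9):
        """KL divergence D(p || q) between two histograms (smoothed, then normalized)."""
        p = [pi + eps for pi in p]
        q = [qi + eps for qi in q]
        sp, sq = sum(p), sum(q)
        return sum((pi / sp) * math.log((pi / sp) / (qi / sq)) for pi, qi in zip(p, q))

    def frame_distance(p, q):
        """Square root of the symmetrized relative entropy, used as a frame distance."""
        return math.sqrt(relative_entropy(p, q) + relative_entropy(q, p))

    # Hypothetical 4-bin intensity histograms of consecutive frames
    frame1 = [10, 20, 30, 40]
    frame2 = [11, 19, 31, 39]   # same shot: small distance
    frame3 = [40, 30, 20, 10]   # candidate shot boundary: large jump
    ```

    A sequence of such distances between neighboring frames is the signal on which an outlier test like the extreme Studentized deviate test can flag shot boundaries.
    
    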

  14. Burnout of pulverized biomass particles in large scale boiler - Single particle model approach

    Energy Technology Data Exchange (ETDEWEB)

    Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)

    2010-05-15

    Burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large-scale utility boiler originally planned for coal. A simplified single-particle approach, where the particle combustion model is coupled with a one-dimensional equation of motion of the particle, is applied to calculate the burnout in the boiler. Owing to its lower density and greater reactivity, the particle size of biomass can be much larger than that of coal while still reaching complete burnout. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)

  15. Validity of the equation-of-motion approach to the Kondo problem in the large-N limit

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Jian-xin [Los Alamos National Laboratory; Ting, C S [UNIV OF HOUSTON; Qi, Yunong [UNIV OF HOUSTON

    2008-01-01

    The Anderson impurity model for the Kondo problem is investigated for arbitrary orbit-spin degeneracy N of the magnetic impurity by the equation-of-motion (EOM) method. By employing a new decoupling scheme, a self-consistent equation for the one-particle Green function is derived and numerically solved in the large-N approximation. For the particle-hole symmetric Anderson model with finite Coulomb interaction U, we show that the Kondo resonance at the impurity site exists for all N >= 2. The approach removes the pathology in the standard EOM for N = 2, and has the same level of applicability as the non-crossing approximation. For N = 2, an exchange field splits the Kondo resonance into only two peaks, consistent with the result from the more rigorous numerical renormalization group (NRG) method. The temperature dependence of the Kondo resonance peak is also discussed.

  16. PAPR Reduction in OFDM Systems with Large Number of Sub-Carriers by Carrier Interferometry Approaches

    Institute of Scientific and Technical Information of China (English)

    HE Jian-hui; QUAN Zi-yi; MEN Ai-dong

    2004-01-01

    High Peak-to-Average Power Ratio (PAPR) is one of the major drawbacks of Orthogonal Frequency Division Multiplexing (OFDM) systems. This paper presents the structures of the particular bit sequences leading to the maximum PAPR (PAPRmax) in Carrier-Interferometry OFDM (CI/OFDM) and Pseudo-Orthogonal Carrier-Interferometry OFDM (PO-CI/OFDM) systems for Binary Phase Shift Keying (BPSK) modulation. Furthermore, the simulation and analysis of PAPRmax and the PAPR cumulative distribution in CI/OFDM and PO-CI/OFDM systems with 2048 sub-carriers are presented. The results show that the PAPR of an OFDM system with a large number of sub-carriers is reduced markedly via CI approaches.
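    PAPR itself is straightforward to compute from the time-domain signal of one block of sub-carrier symbols; the sketch below uses a direct inverse DFT for clarity and shows the BPSK worst case in which all sub-carriers carry the same symbol:

    ```python
    import cmath, math

    def papr_db(symbols):
        """PAPR (in dB) of the time-domain OFDM signal for one block of sub-carrier symbols."""
        n = len(symbols)
        # Inverse DFT: time-domain samples from the frequency-domain symbols
        time = [sum(symbols[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
                for t in range(n)]
        powers = [abs(x) ** 2 for x in time]
        return 10 * math.log10(max(powers) / (sum(powers) / n))

    # Worst case for BPSK: identical symbols on all sub-carriers concentrate
    # the energy in a single time-domain sample, giving PAPR = 10*log10(N).
    worst = papr_db([1] * 8)
    ```

    For eight identical symbols this evaluates to 10·log10(8), about 9.03 dB; the CI approaches described above spread each symbol across all sub-carriers with distinct phase offsets, which the paper reports lowers this worst case.
    
    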

  17. REGULATION OF FOLLICULAR DEVIATION IN VIVO: A MOLECULAR APPROACH.

    OpenAIRE

    Bernardo Garziera Gasperin

    2012-01-01

    Local control of follicular selection in mammals is still poorly understood. The objective of the present study was to identify local factors, receptors, and signalling pathways involved in the selection of the dominant follicle and in the atresia of subordinate follicles in cattle. In a first study, the regulation and function of FGF10 and its receptor FGFR2b during follicular divergence were evaluated. The expression of FGF10 and FGFR2b was significantly higher in theca and granulosa cells, respectively...

  18. Discontinuous penalty approach with deviation integral for global constrained minimization

    Institute of Scientific and Technical Information of China (English)

    Liu CHEN; Yi-rong YAO; Quan ZHENG

    2009-01-01

    of the penalized minimization problems are proven. To implement the algorithm, the cross-entropy method and importance sampling are used, based on the Monte Carlo technique. Numerical tests show the effectiveness of the proposed algorithm.

  19. Molecular tailoring approach: a route for ab initio treatment of large clusters.

    Science.gov (United States)

    Sahu, Nityananda; Gadre, Shridhar R

    2014-09-16

    Conspectus: Chemistry on the scale of molecular clusters may be dramatically different from that in the macroscopic bulk. Greater understanding of chemistry in this size regime could greatly influence fields such as materials science and atmospheric and environmental chemistry. Recent advances in experimental techniques and computational resources have led to accurate investigations of the energies and spectral properties of weakly bonded molecular clusters. These have enabled researchers to learn how physicochemical properties evolve from individual molecules to bulk materials and to understand the growth patterns of clusters. Experimental techniques such as infrared, microwave, and photoelectron spectroscopy are the most popular and powerful tools for probing molecular clusters. In general, these experimental techniques do not directly reveal the atomistic details of the clusters but provide data from which the structural details need to be unearthed. Furthermore, the resolution of the spectral properties of energetically close cluster conformers can be prohibitively difficult. Thus, these investigations of molecular aggregates require a combination of experiment and theory. On the theoretical front, researchers have been actively engaged in quantum chemical ab initio calculations as well as simulation-based studies for the last few decades. To obtain reliable results, there is a need to use correlated methods such as second-order Møller-Plesset perturbation theory, coupled cluster theory, or dispersion-corrected density functional theory. However, due to the nonlinear scaling of these methods, optimizing the geometry of large clusters still remains a formidable quantum chemistry challenge. Fragment-based methods, such as divide-and-conquer, the molecular tailoring approach (MTA), fragment molecular orbitals, and the generalized energy-based fragmentation approach, provide alternatives for overcoming the scaling problem for spatially extended molecular systems. Within MTA, a large

  20. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    Science.gov (United States)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  1. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    Science.gov (United States)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. Increased precision for groups that are likely to be the focus of monitoring programs is best gained by increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced, etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
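    The images-versus-points trade-off can be reproduced with a small Monte Carlo simulation; the cover value, between-image variability, and effort allocations below are made up for illustration and are not the paper's settings:

    ```python
    import random, statistics

    def survey(n_images, n_points, true_cover=0.3, between_image_sd=0.1, seed=1):
        """One simulated survey: per-image cover varies, then points are scored per image."""
        rng = random.Random(seed)
        estimates = []
        for _ in range(n_images):
            p = min(max(rng.gauss(true_cover, between_image_sd), 0.0), 1.0)
            hits = sum(rng.random() < p for _ in range(n_points))
            estimates.append(hits / n_points)
        return sum(estimates) / n_images

    def precision(n_images, n_points, reps=300):
        """Standard deviation of the cover estimate over repeated simulated surveys."""
        return statistics.pstdev(survey(n_images, n_points, seed=s) for s in range(reps))

    # Same total effort (500 scored points), allocated differently:
    spread = precision(n_images=50, n_points=10)   # many images, few points
    packed = precision(n_images=10, n_points=50)   # few images, many points
    ```

    Because between-image variability dominates, spreading the effort over more images yields the smaller standard deviation, mirroring the finding reported above.
    
    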

  2. Stochastic switching in slow-fast systems: a large-fluctuation approach.

    Science.gov (United States)

    Heckman, Christoffer R; Schwartz, Ira B

    2014-02-01

    In this paper we develop a perturbation method to predict the rate of occurrence of rare events for singularly perturbed stochastic systems using a probability density function approach. In contrast to a stochastic normal form approach, we model rare event occurrences due to large fluctuations probabilistically and employ a WKB ansatz to approximate their rate of occurrence. This results in the generation of a two-point boundary value problem that models the interaction of the state variables and the most likely noise force required to induce a rare event. The resulting equations of motion describing the phenomenon are shown to be singularly perturbed. Vastly different time scales among the variables are leveraged to reduce the dimension and predict the dynamics on the slow manifold in a deterministic setting. The resulting constrained equations of motion may be used to directly compute an exponent that determines the probability of rare events. To verify the theory, a stochastic damped Duffing oscillator with three equilibrium points (two sinks separated by a saddle) is analyzed. The predicted switching time between states is computed using the optimal path that resides in an expanded phase space. We show that the exponential scaling of the switching rate as a function of system parameters agrees well with numerical simulations. Moreover, the dynamics of the original system and the reduced system via center manifolds are shown to agree in an exponentially scaling sense.

  3. Transanal minimally invasive surgery (TAMIS) approach for large juxta-anal gastrointestinal stromal tumour.

    Science.gov (United States)

    Wachter, Nicolas; Wörns, Marcus-Alexander; Dos Santos, Daniel Pinto; Lang, Hauke; Huber, Tobias; Kneist, Werner

    2016-01-01

    Gastrointestinal stromal tumours (GISTs) are rarely found in the rectum. Large rectal GISTs in the narrow pelvis sometimes require extended abdominal surgery to obtain free resection margins, and it is a challenge to preserve sufficient anal sphincter and urogenital function. Here we present a 56-year-old male with a locally advanced juxta-anal non-metastatic GIST of approximately 10 cm in diameter. Therapy with imatinib reduced the tumour size and allowed partial intersphincteric resection (pISR). The patient underwent an electrophysiology-controlled nerve-sparing hybrid of laparoscopic and transanal minimally invasive surgery (TAMIS) in a multimodal setting. The down-to-up approach provided sufficient dissection plane visualisation and allowed the confirmed nerve-sparing. Lateroterminal coloanal anastomosis was performed. Follow-up showed preserved urogenital function and good anorectal function, and the patient remains disease-free under adjuvant chemotherapy as of 12 months after surgery. This report suggests that the TAMIS approach enables extraluminal high-quality oncological and function-preserving excision of high-risk GISTs.

  4. Transanal minimally invasive surgery (TAMIS) approach for large juxta-anal gastrointestinal stromal tumour

    Directory of Open Access Journals (Sweden)

    Nicolas Wachter

    2016-01-01

    Full Text Available Gastrointestinal stromal tumours (GISTs) are rarely found in the rectum. Large rectal GISTs in the narrow pelvis sometimes require extended abdominal surgery to obtain free resection margins, and it is a challenge to preserve sufficient anal sphincter and urogenital function. Here we present a 56-year-old male with a locally advanced juxta-anal non-metastatic GIST of approximately 10 cm in diameter. Therapy with imatinib reduced the tumour size and allowed partial intersphincteric resection (pISR). The patient underwent an electrophysiology-controlled nerve-sparing hybrid of laparoscopic and transanal minimally invasive surgery (TAMIS) in a multimodal setting. The down-to-up approach provided sufficient dissection plane visualisation and allowed the confirmed nerve-sparing. Lateroterminal coloanal anastomosis was performed. Follow-up showed preserved urogenital function and good anorectal function, and the patient remains disease-free under adjuvant chemotherapy as of 12 months after surgery. This report suggests that the TAMIS approach enables extraluminal high-quality oncological and function-preserving excision of high-risk GISTs.

  5. Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach

    Directory of Open Access Journals (Sweden)

    Nima eBigdely-Shamlo

    2016-03-01

    Full Text Available Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data Levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).

  6. Retrograde versus Antegrade Approach for the Management of Large Proximal Ureteral Stones

    Science.gov (United States)

    Mykoniatis, Ioannis; Isid, Ayman; Gofrit, Ofer N.; Rosenberg, Shilo; Hidas, Guy; Landau, Ezekiel H.; Pode, Dov; Duvdevani, Mordechai

    2016-01-01

    Objective. To evaluate and compare the efficacy and safety of retrograde versus antegrade ureteroscopic lithotripsy for the treatment of large proximal ureteral stones. Patients and Methods. We retrospectively analyzed the medical records of patients with proximal ureteral stones >15 mm, treated in our institution from January 2011 to January 2016. Intraoperative parameters, postoperative outcomes, and complications were recorded and compared between the two techniques. Results. Our analysis included 57 patients. Thirty-four patients (59.6%) underwent retrograde and 23 patients (40.4%) underwent antegrade ureteroscopy. There was no significant difference in patients' demographics and stone characteristics between the groups. Stone-free rate was significantly higher (p = 0.033) in the antegrade group (100%) compared to retrograde one (82.4%). Fluoroscopy time, procedure duration, and length of hospitalization were significantly (p < 0.001) lower in retrograde approach. On the other hand, the need for postoperative stenting was significantly lower in the antegrade group (p < 0.001). No difference was found between the groups (p = 0.745) regarding postoperative complications. Conclusions. Antegrade ureteroscopy is an efficient and safe option for the management of large proximal ureteral stones. It may achieve high stone-free rates compared to retrograde ureteroscopy with the drawback of longer operative time, fluoroscopy time, and length of hospitalization. PMID:27766263

  7. The opportunities and challenges of large-scale molecular approaches to songbird neurobiology

    Science.gov (United States)

    Mello, C.V.; Clayton, D.F.

    2014-01-01

    High-throughput methods for analyzing genome structure and function are having a large impact in songbird neurobiology. Methods include genome sequencing and annotation, comparative genomics, DNA microarrays and transcriptomics, and the development of a brain atlas of gene expression. Key emerging findings include the identification of complex transcriptional programs active during singing, the robust brain expression of non-coding RNAs, evidence of profound variations in gene expression across brain regions, and the identification of molecular specializations within song production and learning circuits. Current challenges include the statistical analysis of large datasets, effective genome curations, the efficient localization of gene expression changes to specific neuronal circuits and cells, and the dissection of behavioral and environmental factors that influence brain gene expression. The field requires efficient methods for comparisons with organisms like chicken, which offer important anatomical, functional and behavioral contrasts. As sequencing costs plummet, opportunities emerge for comparative approaches that may help reveal evolutionary transitions contributing to vocal learning, social behavior and other properties that make songbirds such compelling research subjects. PMID:25280907

  8. The opportunities and challenges of large-scale molecular approaches to songbird neurobiology.

    Science.gov (United States)

    Mello, C V; Clayton, D F

    2015-03-01

    High-throughput methods for analyzing genome structure and function are having a large impact in songbird neurobiology. Methods include genome sequencing and annotation, comparative genomics, DNA microarrays and transcriptomics, and the development of a brain atlas of gene expression. Key emerging findings include the identification of complex transcriptional programs active during singing, the robust brain expression of non-coding RNAs, evidence of profound variations in gene expression across brain regions, and the identification of molecular specializations within song production and learning circuits. Current challenges include the statistical analysis of large datasets, effective genome curations, the efficient localization of gene expression changes to specific neuronal circuits and cells, and the dissection of behavioral and environmental factors that influence brain gene expression. The field requires efficient methods for comparisons with organisms like chicken, which offer important anatomical, functional and behavioral contrasts. As sequencing costs plummet, opportunities emerge for comparative approaches that may help reveal evolutionary transitions contributing to vocal learning, social behavior and other properties that make songbirds such compelling research subjects.

  9. Coarse graining approach to First principles modeling of radiation cascade in large Fe super-cells

    Science.gov (United States)

    Odbadrakh, Khorgolkhuu; Nicholson, Don; Rusanu, Aurelian; Wang, Yang; Stoller, Roger; Zhang, Xiaoguang; Stocks, George

    2012-02-01

    First principles techniques employed to understand systems at an atomistic level are not practical for large systems consisting of millions of atoms. We present an efficient coarse graining approach to bridge first principles calculations of local electronic properties to classical Molecular Dynamics (MD) simulations of large structures. Local atomic magnetic moments in crystalline Fe are perturbed by radiation-generated defects. The effects are most pronounced near the defect core and decay with distance. We develop a coarse-grained technique based on the Locally Self-consistent Multiple Scattering (LSMS) method that exploits the near-sightedness of the electron Green function. The atomic positions were determined by MD with an embedded-atom force field. The local moments in the neighborhood of the defect cores are calculated from first principles using full local structure information. Atoms in the rest of the system are modeled by representative atoms with approximated properties. This work was supported by the Center for Defect Physics, an Energy Frontier Research Center funded by the US Department of Energy, Office of Science, Office of Basic Energy Sciences.
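    The bookkeeping behind such a coarse-graining scheme can be sketched in a few lines: atoms within a cutoff of the defect are flagged for full first-principles treatment, while the rest fall back to a representative-atom description. This is an illustrative sketch only; the distance cutoff and the partitioning rule are assumptions, not the LSMS implementation.

```python
import math

def partition_atoms(positions, defect, cutoff):
    """Split atom indices into a 'core' set (within `cutoff` of the defect),
    to be treated with full first-principles detail, and a 'far' set, to be
    mapped onto representative atoms with approximated properties."""
    core, far = [], []
    for i, pos in enumerate(positions):
        (core if math.dist(pos, defect) <= cutoff else far).append(i)
    return core, far
```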

  10. Solving Large-Scale TSP Using a Fast Wedging Insertion Partitioning Approach

    Directory of Open Access Journals (Sweden)

    Zuoyong Xiang

    2015-01-01

    Full Text Available A new partitioning method, called Wedging Insertion, is proposed for solving the large-scale symmetric Traveling Salesman Problem (TSP). The idea of the proposed algorithm is to cut a TSP tour into four segments by nodes' coordinates (not by rectangles, as in Strip, FRP, and Karp). Each node, apart from four particular nodes, is located in exactly one segment, and no segment twists around another. After the partitioning process, the algorithm applies a traditional construction method, the insertion method, to each segment to improve the quality of the tour, and then connects the starting and ending nodes of the segments to obtain the complete tour. In order to test the performance of the proposed algorithm, we conducted experiments on various TSPLIB instances. The experimental results show that the proposed algorithm is more efficient for solving large-scale TSPs: it markedly reduces running time while losing only about 10% of solution quality.
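    A minimal sketch of this partition-then-insert style of construction is below. It is not the paper's exact Wedging Insertion: the four segments here are taken as quadrants around the centroid (an assumption), and cheapest insertion builds an open path per segment before the segments are chained into a tour.

```python
import math

def _insert_cost(path, p, i):
    # Extra length incurred by inserting p at position i of an open path.
    if i == 0:
        return math.dist(p, path[0])
    if i == len(path):
        return math.dist(path[-1], p)
    return math.dist(path[i - 1], p) + math.dist(p, path[i]) - math.dist(path[i - 1], path[i])

def insertion_path(points):
    """Cheapest-insertion construction of an open path through `points`."""
    points = list(points)
    if len(points) <= 2:
        return points
    path = points[:2]
    for p in points[2:]:
        best = min(range(len(path) + 1), key=lambda i: _insert_cost(path, p, i))
        path.insert(best, p)
    return path

def partition_tour(points):
    """Split nodes into four segments (here: quadrants around the centroid),
    build an insertion path per segment, then chain the segments into a tour."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    segments = [[], [], [], []]
    for p in points:
        ang = math.atan2(p[1] - cy, p[0] - cx)              # in (-pi, pi]
        segments[int((ang + math.pi) / (math.pi / 2)) % 4].append(p)
    tour = []
    for seg in segments:
        tour.extend(insertion_path(seg))
    return tour
```

Because each segment is built independently, the per-segment insertion work is much cheaper than one global insertion pass, which is the source of the speed-up the abstract describes.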

  11. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    Science.gov (United States)

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While working well for small datasets, the heterogeneity introduced from increased sample size inevitably reduces the sensitivity and specificity of these approaches. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232
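    The motivation for going beyond plain correlation can be illustrated with a toy query that scores each candidate gene on its best-fitting subset of conditions. This is a crude stand-in for the paper's Bayesian model selection, not the authors' algorithm: the linear fit, the residual trimming, and the `frac` parameter are all assumptions.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x)
    sy = sum((b - my) ** 2 for b in y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sx * sy) if sx > 0 and sy > 0 else 0.0

def subset_score(bait, gene, frac=0.5):
    """Score a candidate gene on its best-fitting subset of conditions:
    fit gene ~ a*bait + b by least squares, drop the worst residuals,
    and re-compute the correlation on the retained conditions."""
    n = len(bait)
    mx, my = sum(bait) / n, sum(gene) / n
    sx = sum((a - mx) ** 2 for a in bait)
    a = sum((p - mx) * (q - my) for p, q in zip(bait, gene)) / sx if sx else 0.0
    b = my - a * mx
    by_resid = sorted(range(n), key=lambda i: abs(gene[i] - (a * bait[i] + b)))
    keep = by_resid[:max(3, int(frac * n))]
    return abs(pearson([bait[i] for i in keep], [gene[i] for i in keep]))
```

A gene that tracks the bait under most conditions but carries one outlying sample still scores near 1, while an unrelated profile scores near 0, mirroring the robustness properties the abstract emphasizes.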

  12. QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.

    Directory of Open Access Journals (Sweden)

    Mario Inostroza-Ponta

    Full Text Available BACKGROUND: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities" and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. METHODOLOGY/PRINCIPAL FINDINGS: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. CONCLUSIONS/SIGNIFICANCE: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes. 
Finally, our Gene Ontology-based study on
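    The core of the QAP formulation is easy to state in code: given a similarity ("flow") matrix and an assignment of objects to grid cells, the objective sums similarity times grid distance, so similar objects are pulled onto nearby cells. The sketch below swaps the paper's Memetic Algorithm for a trivial random-swap local search, purely for illustration; Manhattan grid distance is an assumption.

```python
import random

def qap_cost(sim, perm, grid_w):
    """QAP objective: sum over object pairs of similarity times Manhattan
    distance between their assigned grid cells (perm[i] is object i's cell)."""
    cell = lambda p: divmod(p, grid_w)
    total = 0.0
    for i in range(len(perm)):
        r1, c1 = cell(perm[i])
        for j in range(len(perm)):
            r2, c2 = cell(perm[j])
            total += sim[i][j] * (abs(r1 - r2) + abs(c1 - c2))
    return total

def improve(sim, perm, grid_w, iters=200, seed=0):
    """Random-swap local search: keep a swap whenever it does not worsen
    the objective. Stands in here for the paper's Memetic Algorithm."""
    rng = random.Random(seed)
    best = qap_cost(sim, perm, grid_w)
    for _ in range(iters):
        i, j = rng.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
        cost = qap_cost(sim, perm, grid_w)
        if cost <= best:
            best = cost
        else:
            perm[i], perm[j] = perm[j], perm[i]  # revert worsening swap
    return perm, best
```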

  13. Large-scale co-expression approach to dissect secondary cell wall formation across plant species

    Directory of Open Access Journals (Sweden)

    Colin Ruprecht

    2011-07-01

    Full Text Available Plant cell walls are complex composites largely consisting of carbohydrate-based polymers, and are generally divided into primary and secondary walls based on content and characteristics. Cellulose microfibrils constitute a major component of both primary and secondary cell walls and are synthesized at the plasma membrane by cellulose synthase (CESA) complexes. Several studies in Arabidopsis have demonstrated the power of co-expression analyses to identify new genes associated with secondary wall cellulose biosynthesis. However, across-species comparative co-expression analyses remain largely unexplored. Here, we compared co-expressed gene vicinity networks of primary and secondary wall CESAs in Arabidopsis, barley, rice, poplar, soybean, Medicago and wheat, and identified gene families that are consistently co-regulated with cellulose biosynthesis. In addition to the expected polysaccharide-acting enzymes, we also found many gene families associated with cytoskeleton, signaling, transcriptional regulation, oxidation and protein degradation. Based on these analyses, we selected and biochemically analyzed T-DNA insertion lines corresponding to approximately twenty genes from gene families that re-occur in the co-expressed gene vicinity networks of secondary wall CESAs across the seven species. We developed a statistical pipeline using principal component analysis (PCA) and optimal clustering based on silhouette width to analyze sugar profiles. One of the mutants, corresponding to a pinoresinol reductase gene, displayed disturbed xylem morphology and contained lower levels of lignin. We propose that this type of large-scale co-expression approach, coupled with statistical analysis of the cell wall contents, will be useful to facilitate rapid knowledge transfer across plant species.
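    For a single bait gene, a "co-expressed gene vicinity network" reduces to ranking all other genes by correlation with the bait across samples. The sketch below assumes Pearson correlation and uses illustrative gene names and toy expression values; the actual pipeline's network construction across seven species is considerably more involved.

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x)
    sy = sum((b - my) ** 2 for b in y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sx * sy) if sx > 0 and sy > 0 else 0.0

def coexpressed_neighbors(bait_id, expr, k=3):
    """Rank all genes other than the bait by |Pearson correlation| with
    the bait's expression profile across samples; return the top k."""
    bait = expr[bait_id]
    ranked = sorted((g for g in expr if g != bait_id),
                    key=lambda g: -abs(pearson(bait, expr[g])))
    return ranked[:k]
```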

  14. Meiosis and its deviations in polyploid plants.

    Science.gov (United States)

    Grandont, L; Jenczewski, E; Lloyd, A

    2013-01-01

    Meiosis is a fundamental process in all sexual organisms that ensures fertility and genome stability and creates genetic diversity. For each of these outcomes, the exclusive formation of crossovers between homologous chromosomes is needed. This is more difficult to achieve in polyploid species which have more than 2 sets of chromosomes able to recombine. In this review, we describe how meiosis and meiotic recombination 'deviate' in polyploid plants compared to diploids, and give an overview of current knowledge on how they are regulated. See also the sister article focusing on animals by Stenberg and Saura in this themed issue.

  15. Examining Food Risk in the Large using a Complex, Networked System-of-systems Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, John [Los Alamos National Laboratory]; Newkirk, Ryan [U OF MINNESOTA]; Mc Donald, Mark P [VANDERBILT U]

    2010-12-03

    The food production infrastructure is a highly complex system of systems. Characterizing the risks of intentional contamination in multi-ingredient manufactured foods is extremely challenging because the risks depend on the vulnerabilities of food processing facilities and on the intricacies of the supply-distribution networks that link them. A pure engineering approach to modeling the system is impractical because of the overall system complexity and paucity of data. A methodology is needed to assess food contamination risk 'in the large', based on current, high-level information about manufacturing facilities, commodities and markets, that will indicate which food categories are most at risk of intentional contamination and warrant deeper analysis. The approach begins by decomposing the system for producing a multi-ingredient food into instances of two subsystem archetypes: (1) the relevant manufacturing and processing facilities, and (2) the networked commodity flows that link them to each other and consumers. Ingredient manufacturing subsystems are modeled as generic systems dynamics models with distributions of key parameters that span the configurations of real facilities. Networks representing the distribution systems are synthesized from general information about food commodities. This is done in a series of steps. First, probability networks representing the aggregated flows of food from manufacturers to wholesalers, retailers, other manufacturers, and direct consumers are inferred from high-level approximate information. This is followed by disaggregation of the general flows into flows connecting 'large' and 'small' categories of manufacturers, wholesalers, retailers, and consumers. Optimization methods are then used to determine the most likely network flows consistent with given data. Vulnerability can be assessed for a potential contamination point using a modified CARVER + Shock model. Once the facility and

  16. A DEM-based approach for large-scale floodplain mapping in ungauged watersheds

    Science.gov (United States)

    Jafarzadegan, Keighobad; Merwade, Venkatesh

    2017-07-01

    Binary threshold classifiers are a simple form of supervised classification methods that can be used in floodplain mapping. In these methods, a given watershed is examined as a grid of cells with a particular morphologic value. A reference map is a grid of cells labeled as flood and non-flood from hydraulic modeling or remote sensing observations. By using the reference map, a threshold on the morphologic feature is determined to label the unknown cells as flood and non-flood (binary classification). The main limitation of these methods is the threshold transferability assumption, in which a homogenous geomorphological and hydrological behavior is assumed for the entire region and the same threshold derived from the reference map (training area) is used for other locations (ungauged watersheds) inside the study area. In order to overcome this limitation and consider the threshold variability inside a large region, regression modeling is used in this paper to predict the threshold by relating it to watershed characteristics. Application of this approach for North Carolina shows that the threshold is related to main stream slope, average watershed elevation, and average watershed slope. By using the Fitness (F) and Correct (C) criteria of C > 0.9 and F > 0.6, results show the threshold prediction and the corresponding floodplain for the 100-year design flow are comparable to those from the Federal Emergency Management Agency's (FEMA) Flood Insurance Rate Maps (FIRMs) in the region. However, the proposed model underpredicts and overpredicts the floodplain in flat and steep watersheds (average watershed slope above 20% for the latter), respectively. Overall, the proposed approach provides an alternative way of mapping floodplains in data-scarce regions.
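    A binary threshold classifier and its evaluation can be sketched as follows. The definitions of the Correct (C) and Fitness (F) criteria used here, C = TP/(TP+FN) and F = TP/(TP+FP+FN), are assumptions consistent with common flood-mapping practice rather than a quotation of the paper's exact formulas.

```python
def classify(feature, threshold):
    """Binary threshold classifier: a cell is labeled flood-prone when its
    morphologic feature value (e.g. elevation above the nearest stream)
    does not exceed the threshold."""
    return [v <= threshold for v in feature]

def flood_metrics(pred, ref):
    """Evaluate a prediction against a reference flood map.
    C ('Correct')  = TP / (TP + FN): share of reference flood cells captured.
    F ('Fitness')  = TP / (TP + FP + FN): Jaccard-style fit index."""
    tp = sum(p and r for p, r in zip(pred, ref))
    fp = sum(p and not r for p, r in zip(pred, ref))
    fn = sum(r and not p for p, r in zip(pred, ref))
    c = tp / (tp + fn) if tp + fn else 0.0
    f = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return c, f
```

Sweeping the threshold over the training watershed and keeping the value that maximizes F is the calibration step; the regression model then predicts that threshold for ungauged watersheds.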

  17. Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling

    Directory of Open Access Journals (Sweden)

    G. Delmonaco

    2003-01-01

    Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km²) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area, triggered by the pluviometric event of 19 June 1996. In the last decades, landslide hazard and risk analysis have been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large-scale investigation (>1:10 000). In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for debris-flow hazard assessment are reported. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis on test slopes and laboratory tests; inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return times. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return times of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions for the simulation related to a 75-year return-time rainfall event, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
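    The infinite slope model at the heart of such analyses has a closed-form factor of safety. The sketch below uses a standard formulation with an assumed saturation fraction m; all symbols and parameter values are illustrative, not the paper's calibrated inputs.

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m=0.0, gamma_w=9.81):
    """Infinite-slope factor of safety (standard form).
    c: effective cohesion (kPa), phi_deg: friction angle (deg),
    gamma: soil unit weight (kN/m3), z: soil thickness (m),
    beta_deg: slope angle (deg), m: saturated fraction of the soil column."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving
```

Raising the saturation fraction m lowers the effective normal stress and hence the factor of safety, which is how rainfall events of different return times enter the hazard maps.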

  18. Understanding of large Far Eastern organizational cultures in approaches to new product development process : designing versus controlling

    OpenAIRE

    Hwangbo, Hyunwook; Tsekleves, Emmanuel

    2014-01-01

    This paper explores how approaches to new product design can differ nationally when examining large organizational cultures between the East and the West, especially looking at different approaches in the context of 'openness'. Currently, approaches to new product development in the digital landscape have shifted to evolutionary perspectives, which embrace an 'open' context in the design process ('designing') rather than a single hierarchical and closed strategy for efficiency ('controlling'). This

  19. Efficient Multidisciplinary Analysis Approach for Conceptual Design of Aircraft with Large Shape Change

    Science.gov (United States)

    Chwalowski, Pawel; Samareh, Jamshid A.; Horta, Lucas G.; Piatak, David J.; McGowan, Anna-Maria R.

    2009-01-01

    The conceptual and preliminary design processes for aircraft with large shape changes are generally difficult and time-consuming, and the processes are often customized for a specific shape change concept to streamline the vehicle design effort. Accordingly, several existing reports show excellent results of assessing a particular shape change concept or perturbations of a concept. The goal of the current effort was to develop a multidisciplinary analysis tool and process that would enable an aircraft designer to assess several very different morphing concepts early in the design phase and yet obtain second-order performance results so that design decisions can be made with better confidence. The approach uses an efficient parametric model formulation that allows automatic model generation for systems undergoing radical shape changes as a function of aerodynamic parameters, geometry parameters, and shape change parameters. In contrast to other more self-contained approaches, the approach utilizes off-the-shelf analysis modules to reduce development time and to make it accessible to many users. Because the analysis is loosely coupled, discipline modules like a multibody code can be easily swapped for other modules with similar capabilities. One of the advantages of this loosely coupled system is the ability to use the medium- to high-fidelity tools early in the design stages when the information can significantly influence and improve overall vehicle design. Data transfer among the analysis modules is based on an accurate and automated general purpose data transfer tool. In general, setup time for the integrated system presented in this paper is 2-4 days for simple shape change concepts and 1-2 weeks for more mechanically complicated concepts. 
Some of the key elements briefly described in the paper include parametric model development, aerodynamic database generation, multibody analysis, and the required software modules as well as examples for a telescoping wing

  20. Large eddy simulation of hydrogen/air scramjet combustion using tabulated thermo-chemistry approach

    Directory of Open Access Journals (Sweden)

    Cao Changmin

    2015-10-01

    Full Text Available Large eddy simulations (LES) have been performed to investigate the flow and combustion fields in the scramjet of the German Aerospace Center (DLR). Turbulent combustion is modeled by the tabulated thermo-chemistry approach in combination with a presumed probability density function (PDF). A β-function is used to model the distribution of the mixture fraction, while two different PDFs, a δ-function (Model I) and a β-function (Model II), are applied to model the reaction progress. Temperature is obtained by solving the filtered energy transport equation, and the reaction rate of the progress variable is rescaled by pressure to account for the effects of compressibility. The adaptive mesh refinement (AMR) technique is used to properly capture shock waves, boundary layers, shear layers and flame structures. Statistical results of temperature and velocity predicted by Model II show better accuracy than those predicted by Model I. The results for scatter points and mixture-fraction-conditional variables indicate significant differences between Model I and Model II. It is concluded that second-moment information in the presumed PDF of the reaction progress is very important in the simulation of supersonic combustion. It is also found that an unstable flame with extinction and ignition develops in the shear layers of the bluff body and a fuel-rich partially premixed flame stabilizes in the central recirculation bubble.
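    The presumed β-PDF is fully determined by the filtered mean and variance of the mixture fraction, which is what makes tabulation practical. A minimal sketch of the moment inversion and the density itself (using log-gamma for numerical stability) follows; this is a generic β-PDF implementation, not the authors' code.

```python
import math

def beta_parameters(mean, var):
    """Shape parameters (a, b) of a beta distribution with the given mean
    and variance; requires 0 < var < mean*(1-mean)."""
    if not (0.0 < mean < 1.0) or not (0.0 < var < mean * (1.0 - mean)):
        raise ValueError("need 0 < mean < 1 and 0 < var < mean*(1-mean)")
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

def beta_pdf(z, a, b):
    """Beta density at z in (0, 1), evaluated via log-gamma for stability."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1.0) * math.log(z) + (b - 1.0) * math.log(1.0 - z))
```

In a tabulated-chemistry solver, filtered quantities are obtained by integrating the laminar flamelet table against this PDF over the mixture fraction.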

  1. Large eddy simulation of hydrogen/air scramjet combustion using tabulated thermo-chemistry approach

    Institute of Scientific and Technical Information of China (English)

    Cao Changmin; Ye Taohong; Zhao Majie

    2015-01-01

    Large eddy simulations (LES) have been performed to investigate the flow and combustion fields in the scramjet of the German Aerospace Center (DLR). Turbulent combustion is modeled by the tabulated thermo-chemistry approach in combination with the presumed probability density function (PDF). A β-function is used to model the distribution of the mixture fraction, while two different PDFs, δ-function (Model I) and β-function (Model II), are applied to model the reaction progress. Temperature is obtained by solving filtered energy transport equation and the reaction rate of the progress variable is rescaled by pressure to consider the effects of compressibility. The adaptive mesh refinement (AMR) technique is used to properly capture shock waves, boundary layers, shear layers and flame structures. Statistical results of temperature and velocity predicted by Model II show better accuracy than that predicted by Model I. The results of scatter points and mixture fraction-conditional variables indicate the significant differences between Model I and Model II. It is concluded that second moment information in the presumed PDF of the reaction progress is very important in the simulation of supersonic combustion. It is also found that an unstable flame with extinction and ignition develops in the shear layers of bluff body and a fuel-rich partially premixed flame stabilizes in the central recirculation bubble.

  2. Neural ensemble communities: open-source approaches to hardware for large-scale electrophysiology.

    Science.gov (United States)

    Siegle, Joshua H; Hale, Gregory J; Newman, Jonathan P; Voigts, Jakob

    2015-06-01

    One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is 'open' or 'closed': that is, whether or not the system's schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. BAL31-NGS approach for identification of telomeres de novo in large genomes.

    Science.gov (United States)

    Peška, Vratislav; Sitová, Zdeňka; Fajkus, Petr; Fajkus, Jiří

    2017-02-01

    This article describes a novel method to identify as yet undiscovered telomere sequences, which combines next generation sequencing (NGS) with BAL31 digestion of high molecular weight DNA. The method was applied to two groups of plants: i) dicots, genus Cestrum, and ii) monocots, Allium species (e.g. A. ursinum and A. cepa). Both groups consist of species with large genomes (tens of Gb) and a low number of chromosomes (2n=14-16), full of repeat elements. Both genera lack typical telomeric repeats, and multiple studies have attempted to characterize alternative telomeric sequences. However, despite interesting hypotheses and suggestions of alternative candidate telomeres (retrotransposons, rDNA, satellite repeats), these studies have not resolved the question. In a novel approach based on the two most general features of eukaryotic telomeres, their repetitive character and sensitivity to BAL31 nuclease digestion, we have taken advantage of the capacity and current affordability of NGS in combination with the robustness of classical BAL31 nuclease digestion of chromosomal termini. While representative samples of most repeat elements were ensured by low-coverage (less than 5%) genomic shotgun NGS, candidate telomeres were identified as under-represented sequences in BAL31-treated samples.
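    The screening logic (telomere candidates are repetitive k-mers that are abundant in the untreated library but depleted after BAL31 digestion, since chromosome ends are degraded first) can be sketched with simple k-mer counting. The cutoffs (`min_count`, `ratio`) and the Arabidopsis-type repeat in the toy reads below are illustrative assumptions, not values from the study.

```python
from collections import Counter

def kmer_counts(reads, k):
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def telomere_candidates(control_reads, bal31_reads, k=7, min_count=5, ratio=0.5):
    """Flag k-mers that are abundant in the control library (repetitive)
    but depleted after BAL31 digestion of chromosomal termini."""
    control = kmer_counts(control_reads, k)
    treated = kmer_counts(bal31_reads, k)
    return sorted(km for km, n in control.items()
                  if n >= min_count and treated.get(km, 0) / n <= ratio)
```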

  4. A large health system's approach to utilization of the genetic counselor CPT® 96040 code.

    Science.gov (United States)

    Gustafson, Shanna L; Pfeiffer, Gail; Eng, Charis

    2011-12-01

    In 2007, CPT® code 96040 was approved for genetic counseling services provided by nonphysician providers. Because of professional recognition and licensure limitations, experiences in direct billing by genetic counselors for these services are limited. A minority of genetics clinics report using this code because of limitations, including perceived denial of the code and confusion regarding its compliant use. We present results of our approach to 96040 billing for genetic counseling services under a supervising physician's National Provider ID number, in a strategy for integration of genetics services within nongenetics specialty departments of a large academic medical center. The 96040 billing encounters were tracked for a 14-month period and analyzed for reimbursement by private payers. Association of denial with diagnosis code or specialty of genetics service was statistically analyzed. Descriptive data regarding appointment availability are also summarized. Of 350 encounters from January 2008 to February 2009, 289 (82%) were billed to private payers. Of these, 62.6% received some level of reimbursement. No association was seen for denial when analyzed by diagnosis code or by genetics focus. Through this model, genetics appointment availability at least doubled. Using 96040 allowed for expanded access to genetics services, increased appointment availability, and was successful in obtaining reimbursement for more than half of encounters billed.

  5. Theory of Deviation and Its Application in College English Teaching

    Institute of Scientific and Technical Information of China (English)

    Xu Yanqiu

    2008-01-01

    Deviation is an important concept in stylistics. Besides Shklovskij and Mukarovsky, who made a theoretical generalization of deviational phenomena, Leech is the one who studied deviation systematically and categorized it into groups. Applying the theory of deviation to College English teaching is an effective way to cultivate students' interest in English texts and their ability to appreciate them aesthetically.

  6. Hypotropic Dissociated Vertical Deviation; a Case Report

    Directory of Open Access Journals (Sweden)

    Zhale Rajavi

    2013-01-01

    Full Text Available Purpose: To report the clinical features of a rare case of hypotropic dissociated vertical deviation (DVD). Case report: A 25-year-old female was referred with unilateral esotropia, hypotropia and a slow, variable downward drift in her left eye. She had a history of esotropia since she was 3-4 months of age. Best corrected visual acuity was 20/20 in her right eye and 20/40 in the left one when hyperopia was corrected. She underwent bimedial rectus muscle recession of 5.25 mm for 45 prism diopters (PD) of esotropia. She was orthophoric 3 months after surgery and no further operation was planned for correction of the hypotropic DVD. Conclusion: This rare case of hypotropic DVD showed only mild amblyopia in the non-fixating eye. The etiology was most probably acquired, considering hyperopia as a sign of early-onset accommodative esotropia.

  7. Spotting deviations from R^2 inflation

    CERN Document Server

    de la Cruz-Dombriz, Alvaro; Odintsov, Sergei D; Saez-Gomez, Diego

    2016-01-01

    We discuss the soundness of inflationary scenarios in theories beyond the Starobinsky model, namely a class of theories described by arbitrary functions of the Ricci scalar and the K-essence field. We discuss the pathologies associated with higher-order equations of motion which will be shown to constrain the stability of this class of theories. We provide a general framework to calculate the slow-roll parameters and the corresponding mappings to the theory parameters. For paradigmatic gravitational models within the class of theories under consideration we illustrate the power of the Planck/Bicep2 latest results to constrain such gravitational Lagrangians. Finally, bounds for potential deviations from Starobinsky-like inflation are derived.

  8. A discussion on Graphological Deviation in Oliver Twist

    Institute of Scientific and Technical Information of China (English)

    Xiao Xiao

    2016-01-01

    In stylistic analysis, deviation serves as an important sign when identifying the stylistic features of literary works. According to Leech, there are eight types of deviation in poetry: lexical deviation, grammatical deviation, phonological deviation, graphological deviation, semantic deviation, dialectal deviation, deviation of register, and deviation of historical period. Realism marks a significant development in the history of fiction for its success in exposing the truth of people's real lives and pressing social problems, and foregrounding is an integral part of Dickens's language style. We focus on Oliver Twist because its distinctive writing style is worthy of investigation.

  9. Outcomes of Surgical Treatment in Cases of Dissociated Vertical Deviation

    Directory of Open Access Journals (Sweden)

    Serpil Akar

    2014-03-01

    Full Text Available Objectives: To investigate the results of different surgical techniques for treating cases of dissociated vertical deviation (DVD). Materials and Methods: A retrospective review of medical records was performed, including 94 eyes of 47 patients who had undergone bilateral superior rectus (SR) recessions (Group 1), bilateral SR recession with posterior fixation sutures (Group 2), or bilateral inferior oblique (IO) anterior transposition surgery (Group 3) for treatment of DVD. Nineteen patients underwent secondary procedures (SR weakening or IO anterior transposition) because of unsatisfactory results. The amount of the DVD in primary position before and after surgery, postoperative success ratios, and probable complications were evaluated. The Wilcoxon signed ranks test and chi-squared test were used for statistical evaluations. Results: In 69% of the 32 eyes in Group 1, 65% of the 20 eyes in Group 2, and 79% of the 42 eyes in Group 3, satisfactory control of the DVD in primary position was achieved. All eyes undergoing both SR weakening and IO anterior transposition had a residual DVD of less than 5 prism diopters (pd). Of the total of 94 eyes, in 26 (89.6%) of the 29 eyes that had a preoperative DVD angle of more than 15 pd [ten eyes from Group 1, seven eyes from Group 2, and nine eyes from Group 3], the residual DVD angle after surgery was more than 5 pd. However, in the 65 eyes with preoperative DVD of 15 pd or less (21 from Group 1, 12 from Group 2, and 32 from Group 3), the residual DVD angle after the operation was less than 5 pd. Two eyes of 2 patients had -1 limitation of elevation after surgery. Conclusion: IO anterior transposition or SR weakening surgery alone appears to be a successful surgical approach in the management of patients with mild- and moderate-angle (≤15 pd) DVD. Weakening both the SR and IO muscles yields greater success in the management of patients with large-angle (>15 pd) DVD. (Turk J Ophthalmol 2014; 44: 132-7)

  10. Quantification of in-channel large wood recruitment through a 3-D probabilistic approach

    Science.gov (United States)

    Cislaghi, Alessio; Rigon, Emanuel; Aristide Lenzi, Mario; Battista Bischetti, Gian

    2017-04-01

    Large wood (LW) is a relevant factor in the physical, chemical, environmental and biological aspects of low-order mountain stream systems. LW recruitment, in turn, is affected by many physical processes, such as debris flows, shallow landslides, bank erosion, and snow- and windthrow, and it increases the potential hazard for downstream populations and infrastructure during intense flood events. Despite this, the quantification of LW recruitment and the modelling of the related processes have received attention only in recent years, with particular reference to hillslope instabilities, which are the dominant source of LW recruitment in mountainous terrain at the regional scale. Models based on the infinite slope approach, commonly adopted for slope stability analysis, can be used to estimate probable LW volume and to identify the areas most prone to wood input, transport and deposition. Such models, however, generally require robust calibration against a landslide inventory and tend to overestimate unstable areas, and hence LW recruitment volumes. Against this background, this work proposes a new LW estimation procedure that combines the forest stand characteristics of the entire catchment with a three-dimensional probabilistic slope stability model. The slope stability model overcomes the limits of the infinite slope approach and accounts for the spatial variability and uncertainty of the model input parameters through a Monte Carlo analysis. The forest stand characteristics allow root reinforcement to be included in the stability model as a stochastic input parameter, and provide the information necessary to evaluate the forest wood volume likely to be recruited as LW and its position on the hillslopes. The procedure was tested on a small mountainous headwater catchment in the Eastern Italian Alps, covered with pasture and coniferous forest and prone to shallow landslide and debris flow phenomena, especially during late spring and early autumn. 
The results
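The Monte Carlo treatment of parameter uncertainty described above can be illustrated with a minimal sketch. It uses the classical 1-D infinite-slope factor of safety (the formulation the paper's 3-D model improves upon), with root reinforcement as a stochastic input; all distributions and parameter values are hypothetical placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                    # Monte Carlo realizations

# Hypothetical parameter distributions (illustrative only, not from the study)
phi = np.radians(rng.normal(33.0, 3.0, N))     # soil friction angle [rad]
c   = rng.lognormal(np.log(1.0), 0.5, N)       # soil cohesion [kPa]
c_r = rng.lognormal(np.log(2.0), 0.6, N)       # stochastic root reinforcement [kPa]
gamma, z, slope = 18.0, 1.2, np.radians(40.0)  # unit weight, soil depth, slope angle

# Infinite-slope factor of safety (dry case, for simplicity)
resisting = c + c_r + gamma * z * np.cos(slope)**2 * np.tan(phi)
driving   = gamma * z * np.sin(slope) * np.cos(slope)
FS = resisting / driving

p_failure = np.mean(FS < 1.0)                  # fraction of unstable realizations
print(f"P(FS < 1) = {p_failure:.3f}")
```

The failure probability, rather than a single deterministic factor of safety, is what feeds the downstream estimate of recruitable wood volume.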

  11. A modular approach to large-scale design optimization of aerospace systems

    Science.gov (United States)

    Hwang, John T.

    Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
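The efficiency of the adjoint method mentioned above comes from computing the gradient of an objective with one extra linear solve, independent of the number of design variables. A minimal sketch under assumed toy data (a hypothetical 2×2 state equation, not the thesis's actual framework):

```python
import numpy as np

# Toy model (hypothetical): state equation A(x) u = b with A(x) = A0 + x*A1,
# objective f = c^T u.
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, 2.0]])
b  = np.array([1.0, 2.0])
c  = np.array([1.0, 1.0])

def f(x):
    u = np.linalg.solve(A0 + x * A1, b)
    return c @ u

def df_dx_adjoint(x):
    A = A0 + x * A1
    u   = np.linalg.solve(A, b)    # one forward (state) solve
    lam = np.linalg.solve(A.T, c)  # one adjoint solve, regardless of #design vars
    return -lam @ (A1 @ u)         # df/dx = -lambda^T (dA/dx) u

x, h = 0.7, 1e-6
g_adj = df_dx_adjoint(x)
g_fd  = (f(x + h) - f(x - h)) / (2 * h)  # finite-difference check
print(g_adj, g_fd)
```

With thousands of design variables x_i, the same single adjoint solve yields every component of the gradient, which is why the approach scales to the 25,000-variable problems cited above.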

  12. What Is the Optimal Treatment of Large Brain Metastases? An Argument for a Multidisciplinary Approach

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Clara Y.H.; Chang, Steven D. [Department of Neurosurgery, Stanford University Medical Center, Stanford, California (United States); Gibbs, Iris C. [Department of Radiation Oncology, Stanford University Medical Center, Stanford, California (United States); Adler, John R.; Harsh, Griffith R. [Department of Neurosurgery, Stanford University Medical Center, Stanford, California (United States); Atalar, Banu [Department of Radiation Oncology, Acibadem University School of Medicine, Istanbul (Turkey); Lieberson, Robert E. [Department of Neurosurgery, Stanford University Medical Center, Stanford, California (United States); Soltys, Scott G., E-mail: sgsoltys@stanford.edu [Department of Radiation Oncology, Stanford University Medical Center, Stanford, California (United States)

    2012-11-01

    Purpose: Single-modality treatment of large brain metastases (>2 cm) with whole-brain irradiation, stereotactic radiosurgery (SRS) alone, or surgery alone is not effective, with local failure (LF) rates of 50% to 90%. Our goal was to improve local control (LC) by using multimodality therapy of surgery and adjuvant SRS targeting the resection cavity. Patients and Methods: We retrospectively evaluated 97 patients with brain metastases >2 cm in diameter treated with surgery and cavity SRS. Local and distant brain failure (DF) rates were analyzed with competing risk analysis, with death as a competing risk. The overall survival rate was calculated by the Kaplan-Meier product-limit method. Results: The median imaging follow-up duration for all patients was 10 months (range, 1-80 months). The 12-month cumulative incidence rate of LF, with death as a competing risk, was 9.3% (95% confidence interval [CI], 4.5%-16.1%), and the median time to LF was 6 months (range, 3-17 months). The 12-month cumulative incidence rate of DF, with death as a competing risk, was 53% (95% CI, 43%-63%). The median survival time for all patients was 15.6 months. The median survival times for recursive partitioning analysis classes 1, 2, and 3 were 33.8, 13.7, and 9.0 months, respectively (p = 0.022). On multivariate analysis, Karnofsky Performance Status (≥80 vs. <80; hazard ratio 0.54; 95% CI 0.31-0.94; p = 0.029) and maximum preoperative tumor diameter (hazard ratio 1.41; 95% CI 1.08-1.85; p = 0.013) were associated with survival. Five patients (5%) required intervention for Common Terminology Criteria for Adverse Events v4.02 grade 2 and 3 toxicity. Conclusion: Surgery and adjuvant resection cavity SRS yield excellent LC of large brain metastases. Compared with other multimodality treatment options, this approach allows patients to avoid or delay whole-brain irradiation without compromising LC.
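For reference, the Kaplan-Meier product-limit estimate used for overall survival above can be sketched in a few lines; the follow-up times below are toy, hypothetical data, not the study's:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: follow-up time; events: 1 = event observed, 0 = censored."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    S, surv = 1.0, []
    for t in np.unique(times[events == 1]):     # distinct event times, sorted
        at_risk = np.sum(times >= t)            # subjects still under follow-up
        d = np.sum((times == t) & (events == 1))
        S *= 1.0 - d / at_risk                  # product-limit update
        surv.append((t, S))
    return surv

# Toy follow-up data in months (hypothetical): 1 = event, 0 = censored
curve = kaplan_meier([3, 5, 5, 8, 12, 15, 15, 20],
                     [1, 1, 0, 1, 0, 1, 1, 0])
for t, s in curve:
    print(f"t={t:>4.0f}  S(t)={s:.3f}")
```

Note that the competing-risk (cumulative incidence) analysis in the abstract is a different estimator; the sketch covers only the overall-survival curve.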

  13. Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method

    Science.gov (United States)

    Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping

    2017-05-01

    Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal flat areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographical map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach on the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area collected at different water levels within a five-month period (31 December 2013-28 May 2014) was used to extract waterlines based on feature extraction techniques and further manual refinement. These 'remotely-sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of exposed tidal flats. Based on the 21 heighted 'remotely-sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. This new DEM was then used as input to the hydrodynamic model, and a new round of water level assignment to the waterlines was performed. A third and final output DEM was generated covering an area of approximately 1900 km2 of tidal flats in the RSR. 
The water level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m
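Each iteration hinges on interpolating the heighted waterline points onto a DEM grid. The sketch below uses inverse-distance weighting as a simple stand-in for the ANUDEM interpolation named in the abstract, with three synthetic shore-parallel waterlines (all coordinates and water levels hypothetical):

```python
import numpy as np

def idw_dem(points, values, grid_x, grid_y, power=2.0):
    """Interpolate scattered heighted-waterline points onto a DEM grid with
    inverse-distance weighting (illustrative stand-in for ANUDEM)."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    dem = np.zeros_like(gx)
    for i in range(gx.shape[0]):
        for j in range(gx.shape[1]):
            d = np.hypot(points[:, 0] - gx[i, j], points[:, 1] - gy[i, j])
            k = d.argmin()
            if d[k] < 1e-12:               # node coincides with a waterline point
                dem[i, j] = values[k]
            else:
                w = 1.0 / d**power         # convex combination of nearby heights
                dem[i, j] = np.sum(w * values) / np.sum(w)
    return dem

# Three synthetic shore-parallel 'waterlines' at water levels 0.5, 1.0, 1.5 m
xs  = np.linspace(0.0, 4.0, 9)
pts = np.vstack([np.column_stack([xs, np.full_like(xs, y)]) for y in (0.0, 1.0, 2.0)])
z   = np.concatenate([np.full_like(xs, h) for h in (0.5, 1.0, 1.5)])

dem = idw_dem(pts, z, np.linspace(0.0, 4.0, 5), np.linspace(0.0, 2.0, 5))
print(np.round(dem, 2))
```

In the paper's loop, this DEM would be fed back to the hydrodynamic model to reassign water levels to the waterlines before the next interpolation pass.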

  14. Large-n approach to thermodynamic Casimir effects in slabs with free surfaces.

    Science.gov (United States)

    Diehl, H W; Grüneberg, Daniel; Hasenbusch, Martin; Hucht, Alfred; Rutkevich, Sergei B; Schmidt, Felix M

    2014-06-01

    The classical n-vector φ^4 model with O(n)-symmetric Hamiltonian H is considered in an ∞^2 × L slab geometry bounded by a pair of parallel free surface planes at separation L. Standard quadratic boundary terms implying Robin boundary conditions are included in H. The temperature-dependent scaling functions of the excess free energy and the thermodynamic Casimir force are computed in the large-n limit for temperatures T at, above, and below the bulk critical temperature T_c. Their n = ∞ limits can be expressed exactly in terms of the spectrum and eigenfunctions of a self-consistent one-dimensional Schrödinger equation. This equation is solved by numerical means for two distinct discretized versions of the model: in the first ("model A"), only the coordinate z across the slab is discretized and the integrations over momenta conjugate to the lateral coordinates are regularized dimensionally; in the second ("model B"), a simple cubic lattice with periodic boundary conditions along the lateral directions is used. Renormalization-group ideas are invoked to show that, in addition to corrections to scaling ∝ L^{-1}, anomalous ones ∝ L^{-1} ln L should occur. They can be considerably decreased by taking an appropriate g → ∞ (T_c → ∞) limit of the φ^4 interaction constant g. Depending on the model, A or B, they can be absorbed completely or to a large extent in an effective thickness L_eff = L + δL. Excellent data collapses and consistent high-precision results for both models are obtained. The approach to the low-temperature Goldstone values of the scaling functions is shown to involve logarithmic anomalies. The scaling functions exhibit all qualitative features seen in experiments on the thinning of wetting layers of ^4He and Monte Carlo simulations of XY models, including a pronounced minimum of the Casimir force below T_c. The results are in conformity with various analytically known exact properties of the scaling functions.

  15. A Historical Study of Contemporary Human Rights: Deviation or Extinction?

    Directory of Open Access Journals (Sweden)

    Tanel Kerikmäe

    2016-10-01

    Full Text Available Human rights is a core issue of continuing political, legal and economic relevance. The current article discusses the historical perceptions of the very essence of human rights standards and poses the question whether the Realpolitik of the changed world and Europe can justify the deviation from the “purist” approach to human rights. The EU Charter, as the most eminent and contemporary “bill of rights”, is chosen as an example of the divergence from “traditional values”. The article does not offer solutions but rather focuses on the expansive development in the doctrinal approach of interpreting human rights that has not been conceptually agreed upon by historians, philosophers and legal scholars.

  16. Weighted measures based on maximizing deviation for alignment-free sequence comparison

    Science.gov (United States)

    Qian, Kun; Luan, Yihui

    2017-09-01

    Alignment-free sequence comparison is becoming fairly popular in many fields of computational biology, owing to its lighter requirements on the sequences themselves and its computational efficiency on large-scale sequence data sets. In particular, k-tuple-based approaches such as D2, D2S and D2∗ are widely and effectively used. However, these measures treat each k-tuple equally, without accounting for potential differences in importance among k-tuples. In this paper, we take advantage of the maximizing-deviation method proposed in multiple attribute decision making to evaluate the weights of different k-tuples. We modify D2, D2S and D2∗ with weights and test them by similarity search and by evaluation on functionally related regulatory sequences. The results demonstrate that the newly proposed measures are more efficient and robust than existing alignment-free methods.
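The general shape of a weighted k-tuple statistic can be sketched as follows. The weighting below, proportional to the total pairwise deviation of each k-tuple's frequency across the sequence set, is a simplified reading of the maximizing-deviation idea, not the authors' exact formulation:

```python
import itertools
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping k-tuples in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def maximizing_deviation_weights(seqs, k):
    """Weight each k-tuple by the total pairwise deviation of its frequency
    across the sequences, normalized to sum to 1 (simplified illustration)."""
    kmers = [''.join(p) for p in itertools.product('ACGT', repeat=k)]
    counts = [kmer_counts(s, k) for s in seqs]
    freqs = [{w: c[w] / max(1, sum(c.values())) for w in kmers} for c in counts]
    dev = {w: sum(abs(f1[w] - f2[w]) for f1 in freqs for f2 in freqs) for w in kmers}
    total = sum(dev.values()) or 1.0
    return {w: dev[w] / total for w in kmers}

def weighted_D2(s1, s2, k, weights):
    """D2-style statistic with per-k-tuple weights (plain D2 has weights = 1)."""
    c1, c2 = kmer_counts(s1, k), kmer_counts(s2, k)
    return sum(weights[w] * c1[w] * c2[w] for w in weights)

seqs = ["ACGTACGTGG", "ACGTTTACGA", "GGGCCCGGTA"]
w = maximizing_deviation_weights(seqs, 2)
d12 = weighted_D2(seqs[0], seqs[1], 2, w)
print(d12)
```

k-tuples whose frequencies vary most across the data set receive the largest weights, so they dominate the comparison, which is the intended effect of the maximizing-deviation criterion.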

  17. A Renewed Approach for Large Eddy Simulation of Complex Geometries Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The potential benefits of Large Eddy Simulation (LES) for aerodynamics and combustion simulation have largely been missed, due to the complexity of generating grids...

  18. Discovering Outliers of Potential Drug Toxicities Using a Large-scale Data-driven Approach.

    Science.gov (United States)

    Luo, Jake; Cisler, Ron A

    2016-01-01

    We systematically compared the adverse effects of cancer drugs to detect event outliers across different clinical trials using a data-driven approach. Because many cancer drugs are toxic to patients, a better understanding of the adverse events of cancer drugs is critical for developing therapies that minimize toxic effects. However, due to the large variability of adverse events across different cancer drugs, methods to efficiently compare adverse effects across drugs are lacking. To address this challenge, we present an exploratory study that integrates multiple adverse event reports from clinical trials in order to systematically compare adverse events across different cancer drugs. To demonstrate our methods, we first collected data on 186,339 clinical trials from ClinicalTrials.gov and selected 30 common cancer drugs. We identified 1602 cancer trials that studied the selected cancer drugs. Our methods effectively extracted 12,922 distinct adverse events from the clinical trial reports. Using the extracted data, we ranked all 12,922 adverse events based on their prevalence in the clinical trials, for example nausea (82%), fatigue (77%), and vomiting (75.97%). To detect drug outliers with a statistically significant likelihood of causing an event, we used the boxplot method to visualize adverse event outliers across different drugs and applied Grubbs' test to evaluate significance. Analyses showed that by systematically integrating cross-trial data from multiple clinical trial reports, adverse event outliers associated with cancer drugs can be detected. The method was demonstrated by detecting the following four statistically significant adverse event cases: the association of the drug axitinib with hypertension (Grubbs' test, P < 0.001), the association of the drug imatinib with muscle spasm (P < 0.001), the association of the drug vorinostat with deep vein thrombosis (P < 0.001), and the association of the drug afatinib
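Grubbs' test as applied above can be sketched with hypothetical per-drug event rates; the critical value is the standard tabulated one for n = 10 at α = 0.05, and none of the numbers below come from the study:

```python
from statistics import mean, stdev

def grubbs_statistic(x):
    """Grubbs' test statistic: the largest standardized deviation from the mean."""
    m, s = mean(x), stdev(x)
    return max(abs(v - m) for v in x) / s

# Hypothetical hypertension rates (%) across ten drugs; the last one is an outlier
rates = [4.1, 3.8, 5.0, 4.4, 3.9, 4.7, 4.2, 4.5, 4.0, 15.2]

G = grubbs_statistic(rates)
G_CRIT = 2.290   # tabulated two-sided critical value for n = 10, alpha = 0.05
print(f"G = {G:.3f}, outlier detected: {G > G_CRIT}")
```

In the paper's setting, `rates` would be one adverse event's prevalence across the 30 drugs, and the test flags the drug whose rate deviates most from the rest.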

  19. Qualitative Variation in Approaches to University Teaching and Learning in Large First-Year Classes

    Science.gov (United States)

    Prosser, Michael; Trigwell, Keith

    2014-01-01

    Research on teaching from a student learning perspective has identified two qualitatively different approaches to university teaching. They are an information transmission and teacher-focused approach, and a conceptual change and student-focused approach. The fundamental difference being in the former the intention is to transfer information to…

  20. On the relation between uncertainties of weighted frequency averages and the various types of Allan deviations

    CERN Document Server

    Benkler, Erik; Sterr, Uwe

    2015-01-01

    The power spectral density in the Fourier frequency domain and the different variants of the Allan deviation (ADEV) as a function of averaging time are well-established tools for analysing the fluctuation properties and frequency instability of an oscillatory signal. It is often assumed that the statistical uncertainty of a measured average frequency is given by the ADEV at a well-chosen averaging time. However, this approach requires further mathematical justification and refinement, which has already been carried out for the original ADEV for certain noise types. Here we provide the necessary background for using the modified Allan deviation (modADEV) and other two-sample deviations to determine the uncertainty of weighted frequency averages. The type of two-sample deviation used to determine the uncertainty depends on the method used to determine the average. We find that the modADEV, which is connected with $\Lambda$-weighted averaging, and the two-sample deviation associated with a linear phase regr...
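The two deviations discussed above can be computed from phase data with a short sketch: the second difference of the phase gives the (overlapping) ADEV, and additionally averaging that difference over m adjacent samples gives the modADEV. The white-phase-noise input is toy data:

```python
import numpy as np

def adev(x, m, tau0=1.0):
    """Overlapping Allan deviation from phase data x at averaging factor m."""
    d = x[2*m:] - 2*x[m:-m] + x[:-2*m]          # second difference of phase
    return np.sqrt(np.mean(d**2) / (2 * (m * tau0)**2))

def mod_adev(x, m, tau0=1.0):
    """Modified Allan deviation: the second difference is first averaged over
    m adjacent phase samples, which distinguishes white from flicker phase
    noise and corresponds to Lambda-weighted frequency averaging."""
    d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
    s = np.convolve(d, np.ones(m), mode='valid') / m   # moving average over m
    return np.sqrt(np.mean(s**2) / (2 * (m * tau0)**2))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                     # toy white phase noise
for m in (1, 4, 16):
    print(m, adev(x, m), mod_adev(x, m))
```

For white phase noise the ADEV falls as τ^(-1) while the modADEV falls as τ^(-3/2), which is visible in the printed values; at m = 1 the two estimators coincide.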

  1. Statistical characterization of deviations from planned flight trajectories in air traffic management

    CERN Document Server

    Bongiorno, C; Lillo, F; Mantegna, R N; Miccichè, S

    2016-01-01

    Understanding the relation between planned and realized flight trajectories and the determinants of flight deviations is of great importance in air traffic management. In this paper we perform an in-depth investigation of the statistical properties of planned and realized air traffic in the German airspace during a 28-day period corresponding to an AIRAC cycle. We find that realized trajectories are on average shorter than planned ones, and this effect is stronger during night-time than daytime. Flights are deviated more frequently close to the departure airport and at relatively large angles to the destination. Moreover, the probability of a deviation is higher in low-traffic phases. All this evidence indicates that deviations are mostly used by controllers to give direct routings to flights when traffic conditions allow it. Finally, we introduce a new metric, termed difork, which is able to characterize navigation points according to the likelihood that a deviation occurs there. Difork allows to identify in a statist...

  2. Meiosis and its deviations in polyploid animals.

    Science.gov (United States)

    Stenberg, P; Saura, A

    2013-01-01

    We review the different modes of meiosis and its deviations encountered in polyploid animals. Bisexual reproduction involving normal meiosis occurs in some allopolyploid frogs with variable degrees of polyploidy. Aberrant modes of bisexual reproduction include gynogenesis, where a sperm stimulates the egg to develop. The sperm may enter the egg but there is no fertilization and syngamy. In hybridogenesis, a genome is eliminated to produce haploid or diploid eggs or sperm. Ploidy can be elevated by fertilization with a haploid sperm in meiotic hybridogenesis, which elevates the ploidy of hybrid offspring such that they produce diploid gametes. Polyploids are then produced in the next generation. In kleptogenesis, females acquire full or partial genomes from their partners. In pre-equalizing hybrid meiosis, one genome is transmitted in the Mendelian fashion, while the other is transmitted clonally. Parthenogenetic animals have a very wide range of mechanisms for restoring or maintaining the mother's ploidy level, including gamete duplication, terminal fusion, central fusion, fusion of the first polar nucleus with the product of the first division, and premeiotic duplication followed by a normal meiosis. In apomictic parthenogenesis, meiosis is replaced by what is effectively mitotic cell division. The above modes have different evolutionary consequences, which are discussed. See also the sister article by Grandont et al. in this themed issue.

  3. Determination of real machine-tool settings and minimization of real surface deviation by computerized inspection

    Science.gov (United States)

    Litvin, Faydor L.; Kuan, Chihping; Zhang, YI

    1991-01-01

    A numerical method is developed for the minimization of deviations of real tooth surfaces from theoretical ones. The deviations are caused by errors of manufacturing, errors in the installation of machine-tool settings, and distortion of surfaces by heat treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of the initially applied machine-tool settings. The accomplished research project covers the following topics: (1) description of the principle of coordinate measurements of gear tooth surfaces; (2) derivation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) determination of the reference point and the grid; (4) determination of the deviations of real tooth surfaces at the points of the grid; and (5) determination of the required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on the numerical solution of an overdetermined system of n linear equations in m unknowns (m ≪ n), where n is the number of points of measurements and m is the number of parameters of applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
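The overdetermined system described above is a standard linear least-squares problem: n measured deviations are explained by m setting corrections through a sensitivity (Jacobian) matrix. A minimal sketch with a hypothetical sensitivity matrix (n = 45 measurement points, m = 3 settings; all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m_par = 45, 3                          # grid points (n) and settings (m << n)

J = rng.normal(size=(n, m_par))           # hypothetical sensitivity matrix
true = np.array([0.02, -0.015, 0.008])    # 'true' machine-tool setting errors
dev = J @ true + rng.normal(0, 1e-4, n)   # measured deviations + measurement noise

# Least-squares solution of the overdetermined system J @ delta ~= dev
delta, *_ = np.linalg.lstsq(J, dev, rcond=None)
residual = dev - J @ delta                # surface deviations after correction

print("corrections:", delta)
print("rms before/after:", np.sqrt(np.mean(dev**2)), np.sqrt(np.mean(residual**2)))
```

Applying the corrections `-delta` to the machine-tool settings drives the measured deviations down to the residual level, which is the minimization the paper describes.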

  4. Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach

    CERN Document Server

    Haddad, Wassim M

    2011-01-01

    Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment of vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami

  5. From the double-helix to novel approaches to the sequencing of large genomes.

    Science.gov (United States)

    Szybalski, W

    1993-12-15

    Elucidation of the structure of DNA by Watson and Crick [Nature 171 (1953) 737-738] has led to many crucial molecular experiments, including studies on DNA replication, transcription, physical mapping, and most recently to serious attempts directed toward the sequencing of large genomes [Watson, Science 248 (1990) 44-49]. I am totally convinced of the great importance of the Human Genome Project, and toward achieving this goal I strongly favor 'top-down' approaches consisting of the physical mapping and preparation of contiguous 50-100-kb fragments directly from the genome, followed by their automated sequencing based on the rapid assembly of primers by hexamer ligation together with primer walking. Our 'top-down' procedure totally avoids conventional cloning, subcloning and random sequencing, which are the elements of the present 'bottom-up' procedures. Fragments of 50-100 kb are prepared in sufficient quantities either by in vitro excision with rare-cutting restriction systems (including Achilles' heel cleavage [AC] or the RecA-AC procedures of Koob et al. [Nucleic Acids Res. 20 (1992) 5831-5836]) or by in vivo excision and amplification using the yeast FRT/Flp system or the phage lambda att/Int system. Such fragments, when derived directly from the Escherichia coli genome, are arranged in consecutive order, so that 50 specially constructed strains of E. coli would supply 50 end-to-end arranged approx. 100-kb fragments, which will cover the entire approx. 5-Mb E. coli genome. For the 150-Mb Drosophila melanogaster genome, 1500 such consecutive 100-kb fragments (supplied by 1500 strains) are required to cover the entire genome. The fragments will be sequenced by the SPEL-6 method involving hexamer ligation [Szybalski, Gene 90 (1990) 177-178; Fresenius J. Anal. Chem. 4 (1992) 343] and primer walking. The 18-mer primers are synthesized in only a few minutes from three contiguous hexamers annealed to the DNA strand to be sequenced when using an over 100-fold

  6. Isothermal study of effusion cooling flows using a large eddy simulation approach

    Institute of Scientific and Technical Information of China (English)

    W.P. Bennett; Z. Yang; J.J. McGuirk

    2009-01-01

    An isothermal numerical study of effusion cooling flow is conducted using a large eddy simulation (LES) approach. Two main types of cooling are considered, namely tangential film cooling and oblique patch effusion cooling. To represent tangential film cooling, a simplified model of a plane turbulent wall jet along a flat plate in quiescent surrounding fluid is considered. In contrast to a classic turbulent boundary layer flow, the plane turbulent wall jet possesses an outer free shear flow region, an inner near-wall region and an interaction region, characterised by substantial levels of turbulent shear stress transport. These shear stress characteristics hold significant implications for RANS modelling, implications that also apply to more complex tangential film cooling flows with non-zero free stream velocities. The LES technique used in the current study provides a satisfactory overall prediction of the plane turbulent wall jet flow, including the initial transition region, and the characteristic separation of the zero turbulent shear stress and zero shear strain locations. Oblique effusion patch cooling is modelled using a staggered array of 12 rows of effusion holes, drilled at 30° to the flat plate surface. The effusion holes connect two channels separated by the flat plate. Specifically, these comprise a channel representing the combustion chamber flow and a cooling air supply channel. A difference in pressure between the two channels forces air from the cooling supply side, through the effusion holes, and into the combustion chamber side. Air from successive effusion rows coalesces to form an aerodynamic film between the combustion chamber main flow and the flat plate. In practical applications, this film is used to separate the hot combustion gases from the combustion chamber liner. The numerical model is shown to be capable of accurately predicting the injection, penetration, downstream decay, and coalescence of the effusion jets. In addition, the

  7. An integrated approach to investigate the reach-averaged bend scale dynamics of large meandering rivers

    Science.gov (United States)

    Monegaglia, Federico; Henshaw, Alex; Zolezzi, Guido; Tubino, Marco

    2016-04-01

    Planform development of evolving meander bends is a beautiful and complex dynamic phenomenon, controlled by the interplay among hydrodynamics, sediments and floodplain characteristics. In the past decades, morphodynamic models of river meandering have provided a thorough understanding of the unit physical processes interacting at the reach scale during meander planform evolution. On the other hand, recent years have seen advances in satellite geosciences able to provide data with increasing resolution and earth coverage, which are becoming an important tool for studying and managing river systems. Analyses of the planform development of meandering rivers through Landsat satellite imagery have been provided in very recent works. Methodologies for the objective and automatic extraction of key river development metrics from multi-temporal satellite images have been proposed, though they are often limited to the extraction of channel centerlines and not always able to yield quantitative data on channel width, migration rates and bed morphology. Overcoming this gap would be a major step forward in integrating morphodynamic theories, models and real-world data for an increased understanding of meandering river dynamics. To fill this gap, a novel automatic procedure for extracting and analyzing the topography and planform dynamics of meandering rivers through time from satellite images is implemented. A robust algorithm able to compute the channel centerline in complex contexts, such as the presence of channel bifurcations and anabranching structures, is used. As a case study, the procedure is applied to the Landsat database for a reach of the well-known case of Rio Beni, a large, suspended-load-dominated, tropical meandering river flowing through the Bolivian Amazon Basin. The reach-averaged evolution of single bends along Rio Beni over a 30-year period is analyzed, in terms of bend amplification rates computed from the local centerline migration rate. A

  8. Phase field approach with anisotropic interface energy and interface stresses: Large strain formulation

    Science.gov (United States)

    Levitas, Valery I.; Warren, James A.

    2016-06-01

    A thermodynamically consistent, large-strain, multi-phase field approach (with consequent interface stresses) is generalized for the case with anisotropic interface (gradient) energy (e.g. an energy density that depends both on the magnitude and direction of the gradients in the phase fields). Such a generalization, if done in the "usual" manner, yields a theory that can be shown to be manifestly unphysical. These theories consider the gradient energy as anisotropic in the deformed configuration, and, due to this supposition, several fundamental contradictions arise. First, the Cauchy stress tensor is non-symmetric and, consequently, violates the moment of momentum principle, in essence the Herring (thermodynamic) torque is imparting an unphysical angular momentum to the system. In addition, this non-symmetric stress implies a violation of the principle of material objectivity. These problems in the formulation can be resolved by insisting that the gradient energy is an isotropic function of the gradient of the order parameters in the deformed configuration, but depends on the direction of the gradient of the order parameters (is anisotropic) in the undeformed configuration. We find that for a propagating nonequilibrium interface, the structural part of the interfacial Cauchy stress is symmetric and reduces to a biaxial tension with the magnitude equal to the temperature- and orientation-dependent interface energy. Ginzburg-Landau equations for the evolution of the order parameters and temperature evolution equation, as well as the boundary conditions for the order parameters are derived. Small strain simplifications are presented. Remarkably, this anisotropy yields a first order correction in the Ginzburg-Landau equation for small strains, which has been neglected in prior works. The next strain-related term is third order. For concreteness, specific orientation dependencies of the gradient energy coefficients are examined, using published molecular dynamics

  9. Evaluation of near-wall solution approaches for large-eddy simulations of flow in a centrifugal pump impeller

    Directory of Open Access Journals (Sweden)

    Zhi-Feng Yao

    2016-01-01

    Full Text Available The turbulent flow in a centrifugal pump impeller is bounded by complex surfaces, including blades, a hub and a shroud. The primary challenge of the flow simulation arises from the generation of a boundary layer between the surface of the impeller and the moving fluid. The principal objective is to evaluate the near-wall solution approaches that are typically used to deal with the flow in the boundary layer for the large-eddy simulation (LES) of a centrifugal pump impeller. Three near-wall solution approaches – the wall-function approach, the wall-resolved approach, and the hybrid Reynolds-averaged Navier–Stokes (RANS) and LES approach – are tested. The simulation results are compared with experimental results obtained through particle image velocimetry (PIV) and laser Doppler velocimetry (LDV). It is found that the wall-function approach is more sparing of computational resources, while the other two approaches have the important advantage of providing highly accurate boundary layer flow prediction. The hybrid RANS/LES approach is suitable for predicting steady-flow features, such as time-averaged velocities and hydraulic losses. Although the wall-resolved approach is expensive in terms of computing resources, it exhibits a strong ability to capture small-scale vortices and to predict instantaneous velocity in the near-wall region of the impeller. The wall-resolved approach is thus recommended for the transient simulation of flows in centrifugal pump impellers.

  10. Effect of stress on energy flux deviation of ultrasonic waves in GR/EP composites

    Science.gov (United States)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1990-01-01

    Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are a function of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.
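The flux-deviation angle itself comes from the Christoffel equation: for a wave normal n, the polarization U and phase velocity v solve the eigenproblem G_ik U_k = rho*v^2 U_i with G_ik = c_ijkl n_j n_l, and the energy-flux (group-velocity) direction is V_i = c_ijkl U_j U_k n_l / (rho*v). Stress enters through stress-dependent effective coefficients c_ijkl; the sketch below computes only the unstressed deviation, and the transversely isotropic constants and density are illustrative assumptions, not the paper's measured T300/5208 values.

```python
import numpy as np

# Map tensor index pairs to Voigt indices
VOIGT = {(0,0):0,(1,1):1,(2,2):2,(1,2):3,(2,1):3,(0,2):4,(2,0):4,(0,1):5,(1,0):5}

def stiffness_tensor(Cv):
    """Expand a 6x6 Voigt matrix into the full 3x3x3x3 stiffness tensor."""
    C = np.zeros((3,3,3,3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    C[i,j,k,l] = Cv[VOIGT[(i,j)], VOIGT[(k,l)]]
    return C

# Illustrative transversely isotropic constants (GPa), fibers along x3 (assumed values)
C11, C33, C44, C66, C13 = 14.9, 160.0, 7.1, 4.4, 6.7
C12 = C11 - 2*C66
Cv = np.zeros((6,6))
Cv[0,0] = Cv[1,1] = C11; Cv[2,2] = C33
Cv[3,3] = Cv[4,4] = C44; Cv[5,5] = C66
Cv[0,1] = Cv[1,0] = C12
Cv[0,2] = Cv[2,0] = Cv[1,2] = Cv[2,1] = C13
C = stiffness_tensor(Cv) * 1e9          # Pa
rho = 1600.0                            # kg/m^3, assumed density

def flux_deviation(theta):
    """Deviation angle (degrees) between the wave normal and the energy flux
    for the quasi-longitudinal wave, normal at angle theta from x3 in the x1x3 plane."""
    n = np.array([np.sin(theta), 0.0, np.cos(theta)])
    gamma = np.einsum('ijkl,j,l->ik', C, n, n)        # Christoffel matrix
    w2, U = np.linalg.eigh(gamma)                     # eigenvalues are rho*v^2
    v = np.sqrt(w2[-1]/rho); u = U[:, -1]             # fastest mode = quasi-longitudinal
    Vg = np.einsum('ijkl,j,k,l->i', C, u, u, n)/(rho*v)   # group (energy) velocity
    cosang = np.dot(Vg, n)/np.linalg.norm(Vg)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(flux_deviation(0.0))              # along the fiber axis: ~0 deviation
print(flux_deviation(np.radians(30)))   # off-axis: the flux bends toward the stiff fiber direction
```

With this strongly anisotropic (assumed) stiffness, the off-axis quasi-longitudinal flux deviates by tens of degrees, which is why even a small stress-induced change in the coefficients produces a measurable shift.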

  11. Assistive Technology Approaches for Large-Scale Assessment: Perceptions of Teachers of Students with Visual Impairments

    Science.gov (United States)

    Johnstone, Christopher; Thurlow, Martha; Altman, Jason; Timmons, Joe; Kato, Kentaro

    2009-01-01

    Assistive technology approaches to aid students with visual impairments are becoming commonplace in schools. These approaches, however, present challenges for assessment because students' level of access to different technologies may vary by school district and state. To better understand what assistive technology tools are used in reading…

  12. A top-down approach to construct execution views of a large software-intensive system

    NARCIS (Netherlands)

    Callo Arias, Trosky B.; America, Pierre; Avgeriou, Paris

    2013-01-01

    This paper presents an approach to construct execution views, which are views that describe what the software of a software-intensive system does at runtime and how it does it. The approach represents an architecture reconstruction solution based on a metamodel, a set of viewpoints, and a dynamic an

  13. An Approach to Stability Analysis of Embedded Large-Diameter Cylinder Quay

    Institute of Scientific and Technical Information of China (English)

    王元战; 祝振宇

    2002-01-01

    The large-diameter cylinder structure, which is made of large successive bottomless cylinders placed on foundation bed or partly driven into soil, is a recently developed retaining structure in China. It can be used in port, coastal and off-shore works. The method for stability analysis of the large-diameter cylinder structure, especially for stability analysis of the embedded large-diameter cylinder structure, is an important issue. In this paper, an idea is presented, that is, embedded large-diameter cylinder quays can be divided into two types, i.e. the gravity wall type and the cylinder pile wall type. A method for stability analysis of the large-diameter cylinder quay of the cylinder pile wall type is developed and a method for stability analysis of the large-diameter cylinder quay of the gravity wall type is also proposed. The effect of significant parameters on the stability of the large-diameter cylinder quay of the cylinder pile wall type is investigated through numerical calculation.

  14. Monotone Regression and Correction for Order Relation Deviations in Indicator Kriging

    Institute of Scientific and Technical Information of China (English)

    Han Yan; Yang Yiheng

    2008-01-01

    The indicator kriging (IK) is one of the most efficient nonparametric methods in geo-statistics. The order relation problem in the conditional cumulative distribution values obtained by IK is its most severe drawback. The correction of order relation deviations is an essential and important part of the IK approach. A monotone regression is proposed as a new correction method that minimizes the deviation from the original quantile values while ensuring all order relations.
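Monotone regression of this kind is commonly solved with the pool-adjacent-violators algorithm (PAVA): any decreasing run in the sequence of CCDF estimates is replaced by its weighted average, which yields the least-squares non-decreasing fit. The sketch below is a generic PAVA implementation; the paper's exact correction scheme for IK values may differ in detail.

```python
def pava(y, w=None):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y.

    Generic isotonic-regression sketch, not the paper's exact algorithm.
    """
    n = len(y)
    w = [1.0]*n if w is None else list(w)
    blocks = []  # each block: [value, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the last two blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            blocks.append([(v1*w1 + v2*w2)/(w1 + w2), w1 + w2, c1 + c2])
    out = []
    for v, _, c in blocks:
        out.extend([v]*c)
    return out

# CCDF values at increasing thresholds, with one order-relation
# violation (0.55 follows 0.62); PAVA pools the violating pair:
raw = [0.10, 0.35, 0.62, 0.55, 0.80, 0.95]
print(pava(raw))   # -> [0.10, 0.35, 0.585, 0.585, 0.80, 0.95]
```

The pooled value 0.585 is the average of the violating pair, so the corrected sequence deviates from the original in the least-squares sense while satisfying all order relations.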

  15. Gear transmission dynamic: Effects of tooth profile deviations and support flexibility

    OpenAIRE

    Fernández del Rincón, Alfonso; Iglesias Santamaría, Miguel; Juan de Luna, Ana Magdalena de; García Fernández, Pablo; Sancibrián Herrera, Ramón; Viadero Rueda, Fernando

    2014-01-01

    In this work a non-linear dynamic model of spur gear transmissions previously developed by the authors is extended to include both desired (relief) and undesired (manufacture errors) deviations in the tooth profile. The model uses a hybrid method for the calculation of meshing forces, which combines FE analysis and analytical formulation, so that it enables a very straightforward implementation of the tooth profile deviations. The model approach handles well non-linearity due to the variable ...

  16. Semi-Passive Oxidation-Based Approaches for Control of Large, Dilute Groundwater Plumes of Chlorinated Ethylenes

    Science.gov (United States)

    2014-04-01

    Numerous studies have reported that chemical oxidation of chlorinated ethylenes in aqueous solution is rapid (e.g. Yan and Schwartz, 1998; Huang et al

  17. A Blended Learning Approach for Teaching Computer Programming: Design for Large Classes in Sub-Saharan Africa

    Science.gov (United States)

    Bati, Tesfaye Bayu; Gelderblom, Helene; van Biljon, Judy

    2014-01-01

    The challenge of teaching programming in higher education is complicated by problems associated with large class teaching, a prevalent situation in many developing countries. This paper reports on an investigation into the use of a blended learning approach to teaching and learning of programming in a class of more than 200 students. A course and…

  18. Is rigorous retrospective harmonization possible? Application of the DataSHaPER approach across 53 large studies

    NARCIS (Netherlands)

    Fortier, Isabel; Doiron, Dany; Little, Julian; Ferretti, Vincent; L'Heureux, Francois; Stolk, Ronald P.; Knoppers, Bartha M.; Hudson, Thomas J.; Burton, Paul R.

    2011-01-01

    Methods This article examines the value of using the DataSHaPER for retrospective harmonization of established studies. Using the DataSHaPER approach, the potential to generate 148 harmonized variables from the questionnaires and physical measures collected in 53 large population-based studies (6.9

  19. Systematic Analysis of Self-Reported Comorbidities in Large Cohort Studies – A Novel Stepwise Approach by Evaluation of Medication

    Science.gov (United States)

    Wacker, Margarethe; Holle, Rolf; Biertz, Frank; Nowak, Dennis; Huber, Rudolf M.; Söhler, Sandra; Vogelmeier, Claus; Ficker, Joachim H.; Mückter, Harald; Jörres, Rudolf A.

    2016-01-01

    Objective In large cohort studies comorbidities are usually self-reported by the patients. This way to collect health information only represents conditions known, memorized and openly reported by the patients. Several studies addressed the relationship between self-reported comorbidities and medical records or pharmacy data, but none of them provided a structured, documented method of evaluation. We thus developed a detailed procedure to compare self-reported comorbidities with information on comorbidities derived from medication inspection. This was applied to the data of the German COPD cohort COSYCONET. Methods Approach I was based solely on ICD10-Codes for the diseases and the indications of medications. To overcome the limitations due to potential non-specificity of medications, Approach II was developed using more detailed information, such as ATC-Codes specific for one disease. The relationship between reported comorbidities and medication was expressed by a four-level concordance score. Results Approaches I and II demonstrated that the patterns of concordance scores markedly differed between comorbidities in the COSYCONET data. On average, Approach I resulted in more than 50% concordance of all reported diseases to at least one medication. The more specific Approach II showed larger differences in the matching with medications, due to large differences in the disease-specificity of drugs. The highest concordance was achieved for diabetes and three combined cardiovascular disorders, while it was substantial for dyslipidemia and hyperuricemia, and low for asthma. Conclusion Both approaches represent feasible strategies to confirm self-reported diagnoses via medication. Approach I covers a broad spectrum of diseases and medications but is limited regarding disease-specificity. Approach II uses the information from medications specific for a single disease and therefore can reach higher concordance scores. The strategies described in a detailed and

  20. 9 CFR 318.308 - Deviations in processing.

    Science.gov (United States)

    2010-01-01

    ... AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY PRODUCTS INSPECTION AND VOLUNTARY...) Deviations in processing (or process deviations) must be handled according to: (1)(i) A HACCP plan for canned...) of this section. (c) (d) Procedures for handling process deviations where the HACCP plan...

  1. 21 CFR 330.11 - NDA deviations from applicable monograph.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 5 2010-04-01 2010-04-01 false NDA deviations from applicable monograph. 330.11... EFFECTIVE AND NOT MISBRANDED Administrative Procedures § 330.11 NDA deviations from applicable monograph. A new drug application requesting approval of an OTC drug deviating in any respect from a monograph that...

  2. Interpreting spacetimes of any dimension using geodesic deviation

    CERN Document Server

    Podolsky, Jiri

    2012-01-01

    We present a general method which can be used for the geometrical and physical interpretation of an arbitrary spacetime in four or any higher number of dimensions. It is based on the systematic analysis of the relative motion of free test particles. We demonstrate that the local effect of the gravitational field on particles, as described by the equation of geodesic deviation with respect to a natural orthonormal frame, can always be decomposed into a canonical set of transverse, longitudinal and Newton-Coulomb-type components, the isotropic influence of a cosmological constant, and contributions arising from the specific matter content of the universe. In particular, exact gravitational waves in Einstein's theory always exhibit themselves via purely transverse effects with D(D-3)/2 independent polarization states. To illustrate the utility of this approach we study the family of pp-wave spacetimes in higher dimensions and discuss specific measurable effects on a detector located in four spacetime dimensions. For example, the corres...

  3. Deviations from Wick's theorem in the canonical ensemble

    Science.gov (United States)

    Schönhammer, K.

    2017-07-01

    Wick's theorem for the expectation values of products of field operators for a system of noninteracting fermions or bosons plays an important role in the perturbative approach to the quantum many-body problem. A finite-temperature version holds in the framework of the grand canonical ensemble, but not for the canonical ensemble appropriate for systems with fixed particle number such as ultracold quantum gases in optical lattices. Here we present formulas for expectation values of products of field operators in the canonical ensemble using a method in the spirit of Gaudin's proof of Wick's theorem for the grand canonical case. The deviations from Wick's theorem are examined quantitatively for two simple models of noninteracting fermions.
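The canonical-ensemble breakdown of Wick factorization can be seen by brute-force enumeration (a toy sketch, not the Gaudin-style derivation discussed above): for noninteracting fermions with fixed particle number N, the exact two-point correlator <n_i n_j> need not equal the factorized value <n_i><n_j>. The starkest case is two levels and one particle, where both levels can never be occupied simultaneously.

```python
from itertools import combinations
import math

def canonical_occupations(energies, N, beta=1.0):
    """Exact canonical averages <n_i> and <n_i n_j> for noninteracting
    fermions with fixed particle number N, by summing Boltzmann weights
    over all N-particle occupation configurations (toy sizes only)."""
    M = len(energies)
    Z = 0.0
    n = [0.0]*M
    nn = [[0.0]*M for _ in range(M)]
    for occ in combinations(range(M), N):
        w = math.exp(-beta*sum(energies[i] for i in occ))
        Z += w
        for i in occ:
            n[i] += w
            for j in occ:
                nn[i][j] += w
    n = [x/Z for x in n]
    nn = [[x/Z for x in row] for row in nn]
    return n, nn

# Two levels, one particle: <n_0 n_1> = 0 exactly, while the
# Wick-factorized estimate <n_0><n_1> is clearly nonzero.
n, nn = canonical_occupations([0.0, 1.0], N=1)
print(nn[0][1], n[0]*n[1])
```

In the grand canonical ensemble the factorized form would be exact for noninteracting fermions; the fixed particle number is precisely what spoils it here.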

  4. A simple multi-seeding approach to growth of large YBCO bulk with a diameter above 53 mm

    Science.gov (United States)

    Tang, Tian-wei; Wu, Dong-jie; Wu, Xing-da; Xu, Ke-Xi

    2015-12-01

    A successful and simple multi-seeding approach to growing large-size Y-Ba-Cu-O (YBCO) bulks is reported. Compared with the common single-seeding method, our multi-seeding method is more efficient. By using four SmBa2Cu3O7-δ (Sm-123) seeds cut from a large-size Sm-Ba-Cu-O (SmBCO) single domain, large YBCO samples up to 53 mm in diameter could be produced successfully, and 100 mm diameter samples can also be grown. Experimental results show that the processing time can be shortened greatly by using this new approach, and the superconducting properties can also be improved. Hall probe mapping shows that the trapped field distribution of the 53 mm diameter multi-seeded sample is homogeneous, with a peak value of up to 0.53 T. The magnetic levitation force density reaches 14.7 N/cm2 (77 K, 0.5 T).

  5. Near Capacity Approaching for Large MIMO Systems by Non-Binary LDPC Codes with MMSE Detection

    CERN Document Server

    Suthisopapan, Puripong; Meesomboon, Anupap; Imtawil, Virasit

    2012-01-01

    In this paper, we have investigated the application of non-binary LDPC codes to spatial multiplexing MIMO systems with a large number of low-power antennas. We demonstrate that such large MIMO systems, incorporating a low-complexity MMSE detector and non-binary LDPC codes, can achieve a low probability of bit error near MIMO capacity. The proposed non-binary LDPC coded system also performs better than other coded large MIMO systems known in the present literature. For instance, a non-binary LDPC coded BPSK-MIMO system with 600 transmit/receive antennas performs within 3.4 dB of the capacity, while the best known turbo coded system operates about 9.4 dB away from the capacity. Based on the simulation results provided in this paper, the proposed non-binary LDPC coded large MIMO system is capable of supporting ultra-high spectral efficiency at a low bit error rate.

  6. Moderate Deviations for Recursive Stochastic Algorithms

    Science.gov (United States)

    2014-08-02

    measures of Markov chains: Lower bounds. Ann. Probab., 25:259-284, 1997. [8] T. Dean and P. Dupuis. Splitting for rare event simulation: A large...we mention their usefulness in the design and analysis of Monte Carlo schemes. It is well known that accelerated Monte Carlo schemes (e.g...“fast” continuous-time ergodic Markov chain, and in [19] this is extended to a small-noise diffusion whose coefficients depend on the “fast” Markov chain

  7. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  8. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    Science.gov (United States)

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  9. An approach to the damping of local modes of oscillations resulting from large hydraulic transients

    Energy Technology Data Exchange (ETDEWEB)

    Dobrijevic, D.M.; Jankovic, M.V.

    1999-09-01

    A new method of damping of local modes of oscillations under large disturbance is presented in this paper. The digital governor controller is used. Controller operates in real time to improve the generating unit transients through the guide vane position and the runner blade position. The developed digital governor controller, whose control signals are adjusted using the on-line measurements, offers better damping effects for the generator oscillations under large disturbances than the conventional controller. Digital simulations of hydroelectric power plant equipped with low-head Kaplan turbine are performed and the comparisons between the digital governor control and the conventional governor control are presented. Simulation results show that the new controller offers better performances, than the conventional controller, when the system is subjected to large disturbances.

  10. Graph theory approach to the eigenvalue problem of large space structures

    Science.gov (United States)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphic interpretation of the determinant of a matrix is employed to reduce a higher-dimensional matrix into combinations of smaller-dimensional sub-matrices. The reduction is implemented by means of a Boolean equivalent of the original matrices, formulated to obtain smaller-dimensional equivalents of the original numerical matrix. Computation time is reduced and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
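The simplest instance of using a Boolean (nonzero-pattern) image of a matrix to shrink an eigenproblem is block decoupling: if the sparsity graph splits into connected components, each component's diagonal block can be diagonalized separately. The sketch below illustrates only this idea, under that assumption; the paper's determinant-expansion procedure is more general.

```python
import numpy as np

def block_eigenvalues(A, tol=0.0):
    """Eigenvalues of A computed component-by-component.

    Builds the symmetrized Boolean (nonzero-pattern) graph of A, finds its
    connected components by depth-first search, and diagonalizes each
    decoupled diagonal block separately. A simplified sketch of
    sparsity-driven reduction, not the paper's determinant expansion.
    """
    n = A.shape[0]
    B = (np.abs(A) > tol) | (np.abs(A.T) > tol)   # Boolean equivalent of A
    seen, comps = [False]*n, []
    for s in range(n):
        if seen[s]:
            continue
        comp, stack = [], [s]
        seen[s] = True
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in range(n):
                if B[u, v] and not seen[v]:
                    seen[v] = True
                    stack.append(v)
        comps.append(sorted(comp))
    eigs = []
    for comp in comps:
        eigs.extend(np.linalg.eigvals(A[np.ix_(comp, comp)]))
    return np.sort_complex(np.array(eigs))

# Two decoupled undamped 2x2 oscillator blocks inside a 4x4 system matrix
A = np.array([[0., 1., 0., 0.],
              [-4., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., -9., 0.]])
print(block_eigenvalues(A))   # matches the eigenvalues of the full matrix
```

Each 2x2 block is the state-space form of an oscillator (here with frequencies 2 and 3 rad/s), so the combined spectrum is purely imaginary, recovered from the small blocks without touching the full matrix.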

  11. An Accurate Approach to Large-Scale IP Traffic Matrix Estimation

    Science.gov (United States)

    Jiang, Dingde; Hu, Guangmin

    This letter proposes a novel method of large-scale IP traffic matrix (TM) estimation, called algebraic reconstruction technique inference (ARTI), which is based on the partial flow measurement and Fratar model. In contrast to previous methods, ARTI can accurately capture the spatio-temporal correlations of TM. Moreover, ARTI is computationally simple since it uses the algebraic reconstruction technique. We use the real data from the Abilene network to validate ARTI. Simulation results show that ARTI can accurately estimate large-scale IP TM and track its dynamics.
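The algebraic reconstruction technique at the core of such estimators is classically a Kaczmarz-style row projection: to solve the underdetermined link-load equations y = A x, the estimate is repeatedly projected onto each measurement hyperplane. The sketch below is a generic ART solver on a toy routing matrix (the names A, y, x and the non-negativity projection are illustrative assumptions, not the paper's ARTI algorithm, which additionally uses partial flow measurements and the Fratar model).

```python
import numpy as np

def art(A, y, n_sweeps=500, x0=None):
    """Kaczmarz-style algebraic reconstruction for y = A x.

    Each sweep projects the estimate onto every row's hyperplane;
    clipping keeps the estimated traffic demands non-negative.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norm2 = (A*A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norm2[i] > 0:
                x += A[i] * (y[i] - A[i] @ x) / row_norm2[i]
        x = np.clip(x, 0.0, None)
    return x

# Toy example: 3 link-load measurements over 4 origin-destination flows
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [1., 0., 0., 1.]])
x_true = np.array([3., 2., 4., 1.])
y = A @ x_true
x_hat = art(A, y)
print(np.allclose(A @ x_hat, y, atol=1e-6))
```

Because the system is underdetermined (4 unknowns, 3 equations), ART converges to one consistent non-negative solution rather than necessarily to x_true; in traffic matrix estimation the prior model supplies the missing information.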

  12. Challenges and Approaches of Teaching EFL to Large Classes in China

    Institute of Scientific and Technical Information of China (English)

    YE Wen-hong; LI Ling

    2011-01-01

    Compared with the smaller class sizes in many other countries, China's EFL classes are generally oversized, with gaps in learners' language proficiency and competence, different language acquisition abilities, and diverse motivations and attitudes, which constitute a multilevel class posing many challenges to class management and language teaching. Based on the views of researchers from China and other countries on the issue, this paper identifies the challenges of large EFL classes and discusses coping strategies in such a context by providing pragmatic guidelines for teaching EFL to large classes in China.

  13. Approaching Quantum-Limited Amplification with Large Gain Catalyzed by Optical Parametric Amplifier Medium

    Science.gov (United States)

    Zheng, Qiang; Li, Kai

    2017-07-01

    Amplifiers are at the heart of experiments carrying out the precise measurement of a weak signal. An ideal quantum amplifier should have a large gain and minimum added noise simultaneously. Here, we consider the quantum measurement properties of a cavity with an OPA medium in the op-amp mode to amplify an input signal. We show that our nonlinear-cavity quantum amplifier has large gain in the single-value stable regime and achieves the quantum limit unconditionally. Supported by the National Natural Science Foundation of China under Grant Nos. 11365006, 11364006, and the Natural Science Foundation of Guizhou Province QKHLHZ [2015]7767

  14. Descriptor-variable approach to modeling and optimization of large-scale systems. Final report, March 1976--February 1979

    Energy Technology Data Exchange (ETDEWEB)

    Stengel, D N; Luenberger, D G; Larson, R E; Cline, T B

    1979-02-01

    A new approach to modeling and analysis of systems is presented that exploits the underlying structure of the system. The development of the approach focuses on a new modeling form, called 'descriptor variable' systems, that was first introduced in this research. Key concepts concerning the classification and solution of descriptor-variable systems are identified, and theories are presented for the linear case, the time-invariant linear case, and the nonlinear case. Several standard systems notions are demonstrated to have interesting interpretations when analyzed via descriptor-variable theory. The approach developed also focuses on the optimization of large-scale systems. Descriptor variable models are convenient representations of subsystems in an interconnected network, and optimization of these models via dynamic programming is described. A general procedure for the optimization of large-scale systems, called spatial dynamic programming, is presented where the optimization is spatially decomposed in the way standard dynamic programming temporally decomposes the optimization of dynamical systems. Applications of this approach to large-scale economic markets and power systems are discussed.

  15. Matrix shaped pulsed laser deposition: New approach to large area and homogeneous deposition

    Energy Technology Data Exchange (ETDEWEB)

    Akkan, C.K.; May, A. [INM – Leibniz Institute for New Materials, CVD/Biosurfaces Group, Campus D2 2, 66123 Saarbrücken (Germany); Hammadeh, M. [Department for Obstetrics, Gynecology and Reproductive Medicine, IVF Laboratory, Saarland University Medical Center and Faculty of Medicine, Building 9, 66421 Homburg, Saar (Germany); Abdul-Khaliq, H. [Clinic for Pediatric Cardiology, Saarland University Medical Center and Faculty of Medicine, Building 9, 66421 Homburg, Saar (Germany); Aktas, O.C., E-mail: cenk.aktas@inm-gmbh.de [INM – Leibniz Institute for New Materials, CVD/Biosurfaces Group, Campus D2 2, 66123 Saarbrücken (Germany)

    2014-05-01

    Pulsed laser deposition (PLD) is one of the well-established physical vapor deposition methods used for the synthesis of ultra-thin layers. PLD is especially suitable for the preparation of thin films of complex alloys and ceramics where the conservation of the stoichiometry is critical. Besides the several advantages of PLD, inhomogeneity in thickness limits its use in some applications. There are several approaches, such as rotation of the substrate or scanning of the laser beam over the target, to achieve homogeneous layers. On the other hand, movement and transition create further complexity in process parameters. Here we present a new approach, which we call Matrix Shaped PLD, to control the thickness and homogeneity of deposited layers precisely. This new approach is based on shaping of the incoming laser beam by a microlens array and a Fourier lens. The beam is split into a much smaller multi-beam array over the target, and this leads to homogeneous plasma formation. The uniform intensity distribution over the target yields a very uniform deposit on the substrate. This approach is used to deposit carbide and oxide thin films for biomedical applications. As a case study, the coating of a stent, which has a complex geometry, is presented briefly.

  16. An approach to large scale identification of non-obvious structural similarities between proteins

    Directory of Open Access Journals (Sweden)

    Cherkasov Artem

    2004-05-01

    Full Text Available Abstract Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.

  17. Understanding Protein Synthesis: A Role-Play Approach in Large Undergraduate Human Anatomy and Physiology Classes

    Science.gov (United States)

    Sturges, Diana; Maurer, Trent W.; Cole, Oladipo

    2009-01-01

    This study investigated the effectiveness of role play in a large undergraduate science class. The targeted population consisted of 298 students enrolled in 2 sections of an undergraduate Human Anatomy and Physiology course taught by the same instructor. The section engaged in the role-play activity served as the study group, whereas the section…

  18. Unmasking Outliers in Large Distributed Databases Using Cluster Based Approach: CluBSOLD

    Directory of Open Access Journals (Sweden)

    A. Rama Satish

    2016-04-01

    Full Text Available Outliers are dissimilar or inconsistent data objects with respect to the remaining data objects in the data set, or objects which are far away from their cluster centroids. Detecting outliers in data is a very important concept in the Knowledge Data Discovery process for finding hidden knowledge. The task of detecting outliers has been studied in a large number of research areas like Financial Data Analysis, Large Distributed Systems, Biological Data Analysis, Data Mining, Scientific Applications, Health Monitoring, etc. Existing research on outlier detection shows that density-based outlier detection techniques are robust. Identifying outliers in a distributed environment is not a simple task, because processing with a distributed database raises two major issues. The first is rendering massive data which are generated from different databases. The second is data integration, which may cause data security violation and sensitive information leakage. Handling a distributed database is a difficult task. In this paper, we present a cluster-based outlier detection method to spot outliers in large and vibrant (dynamically updated) distributed databases, in which cell-density-based centralized detection is used to deal with the massive data rendering problem and the data integration problem. Experiments are conducted on various datasets, and the obtained results clearly show the robustness of the proposed technique for finding outliers in large distributed databases.
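The cell-density idea can be sketched on a single machine: grid the data into cells, count points per cell, and flag any point whose cell neighborhood is nearly empty. This minimal sketch (function name, cell size, and threshold are illustrative assumptions) omits the clustering and distributed coordination that the CluBSOLD approach described above adds on top.

```python
import random
from collections import defaultdict

def cell_density_outliers(points, cell, min_count=2):
    """Flag 2-D points whose grid cell and its 8 neighbors together hold
    fewer than min_count points. A minimal sketch of cell-density-based
    outlier detection, not the full CluBSOLD algorithm."""
    counts = defaultdict(int)
    keys = []
    for x, y in points:
        k = (int(x // cell), int(y // cell))
        keys.append(k)
        counts[k] += 1
    outliers = []
    for idx, (kx, ky) in enumerate(keys):
        neigh = sum(counts.get((kx + dx, ky + dy), 0)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1))
        if neigh < min_count:
            outliers.append(idx)
    return outliers

random.seed(0)
cluster = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(100)]
data = cluster + [(10.0, 10.0)]                # one far-away point at index 100
print(cell_density_outliers(data, cell=2.0))   # -> [100]
```

Counting over the 3x3 cell neighborhood makes the check robust to points that straddle a cell boundary, which is the same reason density-based schemes aggregate over neighboring cells rather than single cells.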

  19. A Logically Centralized Approach for Control and Management of Large Computer Networks

    Science.gov (United States)

    Iqbal, Hammad A.

    2012-01-01

    Management of large enterprise and Internet service provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these…

  20. A novel computational approach towards the certification of large-scale boson sampling

    Science.gov (United States)

    Huh, Joonsuk

    Recent proposals of boson sampling and the corresponding experiments exhibit the possible disproof of the extended Church-Turing Thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Till now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier components can show the fingerprint of large-scale boson sampling. This work was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.

  1. Strategic and Collaborative Crisis Management: A Partnership Approach to Large-Scale Crisis

    Science.gov (United States)

    Mann, Timothy

    2007-01-01

    Large-scale crisis such as natural disasters and acts of terrorism can have a paralyzing effect on the campus community and business continuity. Campus officials in these situations face significant challenges that go beyond the immediate response including re-building the physical plant, restoring campus infrastructure, retaining displaced…

  2. Understanding Protein Synthesis: A Role-Play Approach in Large Undergraduate Human Anatomy and Physiology Classes

    Science.gov (United States)

    Sturges, Diana; Maurer, Trent W.; Cole, Oladipo

    2009-01-01

    This study investigated the effectiveness of role play in a large undergraduate science class. The targeted population consisted of 298 students enrolled in 2 sections of an undergraduate Human Anatomy and Physiology course taught by the same instructor. The section engaged in the role-play activity served as the study group, whereas the section…

  3. Facilitating Learning in Large Lecture Classes: Testing the "Teaching Team" Approach to Peer Learning

    Science.gov (United States)

    Stanger-Hall, Kathrin F.; Lang, Sarah; Maas, Martha

    2010-01-01

    We tested the effect of voluntary peer-facilitated study groups on student learning in large introductory biology lecture classes. The peer facilitators (preceptors) were trained as part of a Teaching Team (faculty, graduate assistants, and preceptors) by faculty and Learning Center staff. Each preceptor offered one weekly study group to all…

  4. An Active-Learning Approach to Fostering Understanding of Research Methods in Large Classes

    Science.gov (United States)

    LaCosse, Jennifer; Ainsworth, Sarah E.; Shepherd, Melissa A.; Ent, Michael; Klein, Kelly M.; Holland-Carter, Lauren A.; Moss, Justin H.; Licht, Mark; Licht, Barbara

    2017-01-01

    The current investigation tested the effectiveness of an online student research project designed to supplement traditional methods (e.g., lectures, discussions, and assigned readings) of teaching research methods in a large-enrollment Introduction to Psychology course. Over the course of the semester, students completed seven assignments, each…

  5. Youth Subcultures: From Deviation to Fragmentation

    Directory of Open Access Journals (Sweden)

    Josef Smolík

    2015-04-01

Full Text Available This theoretical text introduces the issue of youth subcultures and defines the basic concepts essential for studying this issue in the context of social pedagogy and sociology. These terms include culture, dominant culture, subculture, counterculture, scene, etc. The article also deals with the basic definition of youth subcultures; it discusses this category on the basis of current debates and then introduces the various sociological schools which have dealt with this issue over a long period: the Chicago school of sociology, the Center for the Study of Popular Culture, and the post-subculture approaches. Finally, it is noted that over the last two decades particular styles have fragmented, leading to the gradual replacement of the sociological term subculture.

  6. The short crack fatigue approach in fitness for purpose evaluation of a turbine rotor with the large US indication zone

    Energy Technology Data Exchange (ETDEWEB)

    Grkovic, V. [Faculty of Technical Science, Novi Sad (Yugoslavia); Nedeljkovic, L. [Faculty of Technology and Metallurgy, Belgrade (Yugoslavia)

    1994-12-31

The short crack fatigue approach in fitness-for-purpose evaluation of a turbine rotor with the large US indication zone is analyzed and discussed. The approach is based on the available short fatigue crack growth rate equations. Coupled with the Paris formula, these equations enable the assessment of the total number of loading cycles to failure, provided the material-specific constants are available, as well as precise data on operating stresses and on non-metallic inclusions. Necessary data for the qualifying procedure are presented. Possible issues of the evaluation are discussed. (orig.)
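The cycles-to-failure assessment described above can be sketched numerically for the Paris-law part alone. The helper below is hypothetical, and the constants, geometry factor and crack sizes are illustrative placeholders, not the rotor data from this record:

```python
import math

def cycles_to_failure(a0, ac, C, m, dsigma, Y=1.0, steps=10000):
    """Integrate the Paris law da/dN = C * (dK)^m numerically.

    dK = Y * dsigma * sqrt(pi * a). The number of cycles to grow a crack
    from a0 to ac is N = integral of da / (C * dK^m), evaluated here with
    the midpoint rule. All constants are illustrative only.
    """
    n_cycles = 0.0
    da = (ac - a0) / steps
    a = a0
    for _ in range(steps):
        a_mid = a + 0.5 * da                       # midpoint of this step
        dk = Y * dsigma * math.sqrt(math.pi * a_mid)
        n_cycles += da / (C * dk ** m)             # cycles spent on this step
        a += da
    return n_cycles

# Illustrative steel-like constants (not rotor-specific data):
# C in (mm/cycle)/(MPa*sqrt(mm))^m, stress range in MPa, crack sizes in mm.
N = cycles_to_failure(a0=0.5, ac=10.0, C=1e-11, m=3.0, dsigma=200.0)
```

Doubling the stress range shortens the predicted life by roughly 2^m, which is why precise operating-stress data matter so much in the qualifying procedure.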

  7. Deviation Prevention Ability of Rollers in Continuous Annealing Furnace and Application

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yan; YANG Quan; HE An-rui; YAO Xi-jiang; GUO De-fu

    2012-01-01

In order to reduce strip deviation during production in the continuous annealing furnace, a dynamic simulation model was built with the finite element method (FEM) to quantify the effect of regular roller contour types on strip deviation. The results reveal that, compared to a flat roller, a forward roller contour can prevent strip deviation to some degree. In terms of prevention ability, the double-taper roller is the strongest, while the single-taper roller and crown roller are weaker; larger roller contour values raise the prevention ability. Accordingly, the optimization method was applied to the continuous annealing furnace, largely reducing accidents such as strip breaks and speed limitation caused by deviation.

  8. Authormagic – An Approach to Author Disambiguation in Large-Scale Digital Libraries

    CERN Document Server

    Weiler, Henning; Mele, Salvatore

    2011-01-01

    A collaboration of leading research centers in the field of High Energy Physics (HEP) has built INSPIRE, a novel information infrastructure, which comprises the entire corpus of about one million documents produced within the discipline, including a rich set of metadata, citation information and half a million full-text documents, and offers a unique opportunity for author disambiguation strategies. The presented approach features extended metadata comparison metrics and a three-step unsupervised graph clustering technique. The algorithm aided in identifying 200'000 individuals from 6'500'000 author signatures. Preliminary tests based on knowledge of external experts and a pilot of a crowd-sourcing system show a success rate of more than 96% within the selected test cases. The obtained author clusters serve as a recommendation for INSPIRE users to further clean the publication list in a crowd-sourced approach.
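The unsupervised clustering step behind such disambiguation can be caricatured as a single-link merge over a signature-similarity graph. The similarity measure, threshold and sample data below are invented stand-ins for the paper's extended metadata metrics and three-step algorithm:

```python
def jaccard(a, b):
    """Jaccard similarity of two metadata token sets (e.g. co-authors, keywords)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_signatures(signatures, threshold=0.3):
    """Single-link clustering of author signatures via union-find.

    `signatures` maps a signature id to a set of metadata tokens;
    signatures whose similarity reaches `threshold` are merged.
    """
    parent = {s: s for s in signatures}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    ids = list(signatures)
    for i, si in enumerate(ids):
        for sj in ids[i + 1:]:
            if jaccard(signatures[si], signatures[sj]) >= threshold:
                parent[find(si)] = find(sj)

    clusters = {}
    for s in ids:
        clusters.setdefault(find(s), []).append(s)
    return list(clusters.values())

sigs = {
    "Ellis, J. [CERN]":    {"cern", "hep-ph", "nanopoulos"},
    "Ellis, Jonathan":     {"cern", "hep-ph", "olive"},
    "Ellis, J. [Biology]": {"genomics", "sanger"},
}
# → two clusters: the two HEP signatures merge, the biology one stays apart
```

A real system at INSPIRE's scale would avoid the quadratic pairwise loop by blocking candidates on surname first; this sketch only shows the merge logic.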

  9. A Multi-Level Middle-Out Cross-Zooming Approach for Large Graph Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.; Mackey, Patrick S.; Cook, Kristin A.; Rohrer, Randall M.; Foote, Harlan P.; Whiting, Mark A.

    2009-10-11

    This paper presents a working graph analytics model that embraces the strengths of the traditional top-down and bottom-up approaches with a resilient crossover concept to exploit the vast middle-ground information overlooked by the two extreme analytical approaches. Our graph analytics model is developed in collaboration with researchers and users, who carefully studied the functional requirements that reflect the critical thinking and interaction pattern of a real-life intelligence analyst. To evaluate the model, we implement a system prototype, known as GreenHornet, which allows our analysts to test the theory in practice, identify the technological and usage-related gaps in the model, and then adapt the new technology in their work space. The paper describes the implementation of GreenHornet and compares its strengths and weaknesses against the other prevailing models and tools.

  10. A Lean Approach to Improving SE Visibility in Large Operational Systems Evolution

    Science.gov (United States)

    2013-06-01

engineering activities in such instances. An initial generalization of pull concepts using a standard kanban approach was developed. During the development... Kanban-based Scheduling System (KSS) (Turner, Lane, et al. 2012). The second phase of this research is describing an implementation of the KSS concept...software and systems engineering tasks and the required capabilities. Because kanban concepts have been primarily used with single level value streams

  11. Multi body model approach to obtain construction criteria for a large space structure

    Science.gov (United States)

    Shigehara, M.; Shigedomi, Y.

Natural environmental torques such as the gravity gradient can substantially influence the attitude behavior of a large space structure, especially in a low Earth orbit. This paper introduces basic criteria for constructing a large structure in the low-orbit environment, using the Solar Power Satellite as a model. The criteria are derived from the static stability map of the rigid body equations and from the dynamic behavior given by the multi-body equations. Multi-body octopus-type equations of motion have been introduced to examine transient behaviors during construction. Specifically, inertia matrix changes including unsymmetrical configuration changes, construction speed, and internal momentum changes are considered. The results from the transient behavior studies are included, at a general level, in a set of construction criteria.

  12. Management of large complex multi-stakeholders projects: a bibliometric approach

    Directory of Open Access Journals (Sweden)

    Aline Sacchi Homrich

    2017-06-01

The growing global importance of large infrastructure projects has piqued the interest of many researchers in a variety of issues related to the management of large, multi-stakeholder projects, characterized by their high complexity and intense interaction among numerous stakeholders with distinct levels of responsibility. The objective of this study is to provide an overview of the academic literature focused on the management of these kinds of projects, describing the main themes considered, the lines of research identified and prominent trends. Bibliometric analysis techniques were used, as well as network and content analysis. Information was retrieved from the scientific databases ISI Web of Knowledge and Scopus. The initial sample consisted of 144 papers published between 1984 and 2014 and was expanded with the references cited in these papers. The models identified in the literature converge on the following key processes: project delivery systems; risk-management models; project cost management; public-private partnership.

  13. RCD Large Aspect-Ratio Tokamak Equilibrium with Magnetic Islands: a Perturbed Approach

    Institute of Scientific and Technical Information of China (English)

    F.L.Braga

    2013-01-01

Solutions of the Grad-Shafranov (GS) equation with Reversed Current Density (RCD) profiles present magnetic islands when the magnetic flux is explicitly dependent on the poloidal angle. In this work it is shown that a typical cylindrical (large aspect-ratio) RCD equilibrium configuration perturbed by the magnetic field of a circular loop (simulating a divertor) is capable of generating magnetic islands, due to the poloidal symmetry breaking of the GS equilibrium solution.

  14. Application of the Maximum Entropy/optimal Projection Control Design Approach for Large Space Structures

    Science.gov (United States)

    Hyland, D. C.

    1985-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modelling and reduced order control design method for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed and the application of the methodology to several large space structure (LSS) problems of representative complexity is illustrated.

  15. RCD Large Aspect-Ratio Tokamak Equilibrium with Magnetic Islands: a Perturbed Approach

    Science.gov (United States)

    F. L., Braga

    2013-03-01

Solutions of the Grad-Shafranov (GS) equation with Reversed Current Density (RCD) profiles present magnetic islands when the magnetic flux is explicitly dependent on the poloidal angle. In this work it is shown that a typical cylindrical (large aspect-ratio) RCD equilibrium configuration perturbed by the magnetic field of a circular loop (simulating a divertor) is capable of generating magnetic islands, due to the poloidal symmetry breaking of the GS equilibrium solution.

  16. Facilitating Learning in Large Lecture Classes: Testing the “Teaching Team” Approach to Peer Learning

    OpenAIRE

    Stanger-Hall, Kathrin F.; Lang, Sarah; Maas, Martha

    2010-01-01

    We tested the effect of voluntary peer-facilitated study groups on student learning in large introductory biology lecture classes. The peer facilitators (preceptors) were trained as part of a Teaching Team (faculty, graduate assistants, and preceptors) by faculty and Learning Center staff. Each preceptor offered one weekly study group to all students in the class. All individual study groups were similar in that they applied active-learning strategies to the class material, but they differed ...

  17. Large-Scale Computations Leading to a First-Principles Approach to Nuclear Structure

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W E; Navratil, P

    2003-08-18

    We report on large-scale applications of the ab initio, no-core shell model with the primary goal of achieving an accurate description of nuclear structure from the fundamental inter-nucleon interactions. In particular, we show that realistic two-nucleon interactions are inadequate to describe the low-lying structure of {sup 10}B, and that realistic three-nucleon interactions are essential.

  18. Regulation on radial position deviation for vertical AMB systems

    Science.gov (United States)

    Tsai, Nan-Chyuan; Kuo, Chien-Hsien; Lee, Rong-Mao

    2007-10-01

As a source of model uncertainty, the gyroscopic effect, which depends on rotor speed, is studied for vertical active magnetic bearing (VAMB) systems, which are increasingly used in industries such as clean rooms, compressors and satellites. This research applies an H∞ controller to regulate the rotor position deviations of the VAMB system in four degrees of freedom. The performance of the H∞ controller is examined by simulations to inspect its closed-loop stiffness, rise time and capability to suppress high-frequency disturbances. Although the H∞ controller is inferior to the LQR in position deviation regulation, the required control current in the electromagnetic bearings is much less than that for LQR or PID, and performance robustness is well retained. In order to ensure the stability robustness of the H∞ controller, two approaches, Kharitonov polynomials and the TITO (two-input, two-output) Nyquist stability criterion, are employed to synthesize the control feedback loop. A test rig is built to further verify the efficacy of the proposed H∞ controller experimentally. Two eddy-current gap sensors, perpendicular to each other, are included in the realistic rotor-bearing system. A four-pole magnetic bearing is used as the actuator for generation of the control force. A commercial I/O module unit with A/D and D/A converters, dSPACE DS1104, is integrated with the VAMB, gap sensors, power amplifiers and signal processing circuits. The H∞ controller is designed on the basis of a rotor speed of 10 K rpm, but it is in fact significantly robust with respect to rotor speed varying from 6.5 to 13.5 K rpm.
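The Kharitonov-polynomial check mentioned in the abstract can be sketched directly. For a real polynomial whose coefficients vary independently in intervals (with the leading interval excluding zero), Kharitonov's theorem reduces robust Hurwitz stability to testing just four extreme polynomials. The coefficient intervals below are illustrative, not the VAMB loop's:

```python
import numpy as np

def kharitonov_polys(lows, highs):
    """Four Kharitonov polynomials, coefficients in ascending powers of s.

    lows/highs are the interval bounds [l0..ln], [u0..un]. Each pattern
    says which bound to take at index i, cycling with period 4.
    """
    patterns = ["llhh", "hhll", "lhhl", "hllh"]
    return [
        [(lows if pat[i % 4] == "l" else highs)[i] for i in range(len(lows))]
        for pat in patterns
    ]

def is_hurwitz(ascending_coeffs):
    """True if all roots lie strictly in the open left half-plane."""
    roots = np.roots(ascending_coeffs[::-1])   # np.roots wants descending order
    return bool(np.all(roots.real < 0))

def interval_poly_stable(lows, highs):
    """Robust stability of the whole interval family via the four extremes."""
    return all(is_hurwitz(p) for p in kharitonov_polys(lows, highs))

# s^2 + a1*s + a0 with a1 in [2, 3] and a0 in [1, 2]: robustly stable.
assert interval_poly_stable([1, 2, 1], [2, 3, 1])
```

For a closed loop whose characteristic polynomial coefficients depend on uncertain plant parameters, checking the four extremes is far cheaper than gridding the parameter box.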

  19. Solvers for large-displacement fluid structure interaction problems: segregated versus monolithic approaches

    Science.gov (United States)

    Heil, Matthias; Hazel, Andrew L.; Boyle, Jonathan

    2008-12-01

We compare the relative performance of monolithic and segregated (partitioned) solvers for large-displacement fluid-structure interaction (FSI) problems within the framework of oomph-lib, the object-oriented multi-physics finite-element library, available as open-source software at http://www.oomph-lib.org . Monolithic solvers are widely acknowledged to be more robust than their segregated counterparts, but are believed to be too expensive for use in large-scale problems. We demonstrate that monolithic solvers are competitive even for problems in which the fluid-solid coupling is weak and, hence, the segregated solvers converge within a moderate number of iterations. The efficient monolithic solution of large-scale FSI problems requires the development of preconditioners for the iterative solution of the linear systems that arise during the solution of the monolithically coupled fluid and solid equations by Newton's method. We demonstrate that recent improvements to oomph-lib's FSI preconditioner result in mesh-independent convergence rates under uniform and non-uniform (adaptive) mesh refinement, and explore its performance in a number of two- and three-dimensional test problems involving the interaction of finite-Reynolds-number flows with shell and beam structures, as well as finite-thickness solids.
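The monolithic-versus-segregated distinction can be illustrated on a toy two-equation stand-in for FSI. The equations below are invented for illustration (they are not oomph-lib's discretization): a "fluid" unknown f depends on the "solid" unknown s and vice versa, and we solve the pair either by alternating single-field solves (segregated) or by Newton's method on the full coupled residual (monolithic):

```python
import numpy as np

# Toy coupled system standing in for FSI:
#   fluid:  f - 0.5*cos(s) = 0   (fluid load depends on solid position s)
#   solid:  s - 0.5*sin(f) = 0   (solid deflection depends on fluid load f)

def residual(x):
    f, s = x
    return np.array([f - 0.5 * np.cos(s), s - 0.5 * np.sin(f)])

def segregated(tol=1e-12, max_it=200):
    """Fixed-point (partitioned) iteration: solve each field in turn."""
    f = s = 0.0
    for it in range(1, max_it + 1):
        f = 0.5 * np.cos(s)       # 'fluid solve' with the solid frozen
        s = 0.5 * np.sin(f)       # 'solid solve' with the fluid frozen
        if np.linalg.norm(residual([f, s])) < tol:
            return (f, s), it
    raise RuntimeError("no convergence")

def monolithic(tol=1e-12, max_it=50):
    """Newton's method on the fully coupled residual."""
    x = np.zeros(2)
    for it in range(1, max_it + 1):
        f, s = x
        J = np.array([[1.0,            0.5 * np.sin(s)],
                      [-0.5 * np.cos(f), 1.0          ]])  # exact Jacobian
        x = x - np.linalg.solve(J, residual(x))
        if np.linalg.norm(residual(x)) < tol:
            return x, it
    raise RuntimeError("no convergence")

(_, it_seg), (_, it_mono) = segregated(), monolithic()
```

With this weak coupling both converge, but Newton does so in far fewer iterations; as the coupling strengthens the fixed-point sweep slows down and eventually diverges, which is the robustness argument for monolithic solvers.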

  20. PLATO: data-oriented approach to collaborative large-scale brain system modeling.

    Science.gov (United States)

    Kannon, Takayuki; Inagaki, Keiichiro; Kamiji, Nilton L; Makimura, Kouji; Usui, Shiro

    2011-11-01

    The brain is a complex information processing system, which can be divided into sub-systems, such as the sensory organs, functional areas in the cortex, and motor control systems. In this sense, most of the mathematical models developed in the field of neuroscience have mainly targeted a specific sub-system. In order to understand the details of the brain as a whole, such sub-system models need to be integrated toward the development of a neurophysiologically plausible large-scale system model. In the present work, we propose a model integration library where models can be connected by means of a common data format. Here, the common data format should be portable so that models written in any programming language, computer architecture, and operating system can be connected. Moreover, the library should be simple so that models can be adapted to use the common data format without requiring any detailed knowledge on its use. Using this library, we have successfully connected existing models reproducing certain features of the visual system, toward the development of a large-scale visual system model. This library will enable users to reuse and integrate existing and newly developed models toward the development and simulation of a large-scale brain system model. The resulting model can also be executed on high performance computers using Message Passing Interface (MPI).
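The common-data-format idea can be illustrated with a toy pair of sub-models exchanging a serialized record. The abstract does not specify PLATO's actual format, so plain JSON and the model functions below are only stand-ins for the portability argument (any language with a JSON parser could sit on either end):

```python
import json

def retina_model(intensity):
    """Toy 'retina' sub-model: emits a spike-rate record as a JSON string."""
    record = {"unit": "spikes/s", "rate": min(100.0, 10.0 * intensity)}
    return json.dumps(record)

def cortex_model(message):
    """Toy 'cortex' sub-model: consumes the shared record, not the producer's internals."""
    record = json.loads(message)
    assert record["unit"] == "spikes/s"   # the contract lives in the format
    return record["rate"] > 50.0          # crude 'detection' decision

wire = retina_model(intensity=8.0)        # the only thing crossing the boundary
assert cortex_model(wire) is True
```

Because the two models share only the record on the wire, either side can be rewritten in another language or swapped for a more detailed model without touching the other, which is the reuse property the library aims for.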

  1. WHAT MATTERS TO STORE BRAND EQUITY? AN APPROACH TO SPANISH LARGE RETAILING IN A DOWNTURN CONTEXT

    Directory of Open Access Journals (Sweden)

    Calvo-Porral, Cristina

    2013-09-01

    Full Text Available Store brands account for 41% of the Spanish market share in 2011, and a further increase is expected in next years due to economic crisis, which makes up an increasingly competitive market with great research interest. In this context, our study aims to analyze which variables have a relevant influence on store Brand Equity from the consumers’ standpoint in the current downturn context, providing an empirical research on the Spanish large retailing. We carried out an on-line questionnaire to customers of store brands residing in Spain, obtaining a total amount of 362 valid responses regarding the Spanish large retailers Mercadona, Dia, Eroski, Carrefour and El Corte Inglés. Then, the analysis was performed by Structural Equation Modeling (SEM. Results obtained suggest that store commercial image has the higher influence on both store brand perceived quality and store brand awareness, and in relation with the sources of store Brand Equity, the dimension store brand awareness shows the greater influence on the formation of store Brand Equity. This study is of great interest for large retailers who wish to increase their store brands’ value proposition to the marketplace, especially during economic downturns.

  2. A blended learning approach for teaching computer programming: design for large classes in Sub-Saharan Africa

    Science.gov (United States)

    Bayu Bati, Tesfaye; Gelderblom, Helene; van Biljon, Judy

    2014-01-01

    The challenge of teaching programming in higher education is complicated by problems associated with large class teaching, a prevalent situation in many developing countries. This paper reports on an investigation into the use of a blended learning approach to teaching and learning of programming in a class of more than 200 students. A course and learning environment was designed by integrating constructivist learning models of Constructive Alignment, Conversational Framework and the Three-Stage Learning Model. Design science research is used for the course redesign and development of the learning environment, and action research is integrated to undertake participatory evaluation of the intervention. The action research involved the Students' Approach to Learning survey, a comparative analysis of students' performance, and qualitative data analysis of data gathered from various sources. The paper makes a theoretical contribution in presenting a design of a blended learning solution for large class teaching of programming grounded in constructivist learning theory and use of free and open source technologies.

  3. Experimental approach for mixed-mode fatigue delamination crack growth with large-scale bridging in polymer composites

    DEFF Research Database (Denmark)

    Holmes, John W.; Liu, Liu; Sørensen, Bent F.

    2014-01-01

An experimental apparatus utilizing double cantilever beam specimens loaded with uneven bending moments was developed to study mixed-mode fatigue crack growth in composites. The approach is suitable when large-scale bridging of cracks is present. To illustrate the testing method, cyclic growth of delaminations in a typical fibre-reinforced polymer composite was investigated under a constant cyclic loading amplitude. Pure mode I, mode II and mixed-mode crack growth conditions were examined. The results, analysed using a J-integral approach, show that the double cantilever beam loaded with uneven bending ... crack growth rate observed. In addition to details concerning the equipment, a general discussion of the development of cyclic bridging laws for delamination growth in the presence of large-scale bridging is provided.

  4. Interactions and Disorder in Quantum Dots: A New Large-g Approach

    Science.gov (United States)

    Murthy, Ganpathy

    2003-03-01

    Understanding the combined effects of disorder and interactions in electronic systems has emerged as one of the most challenging theoretical problems in condensed matter physics. It turns out[1,2] that one can solve this problem non-perturbatively in both disorder and interactions in the regime when the system is finite (as in a quantum dot) but its dimensionless conductance g under open-lead conditions is large. This regime is experimentally interesting for the statistics of Coulomb Blockade in quantum dots and persistent currents in rings threaded by a flux. First some RG work will be described[1] which shows that a disordered quantum dot with Fermi liquid interactions can be in one of two phases; one controlled by the Universal Hamiltonian[3] and another regime where interactions become large. These two are separated in the infinite-g limit by a second-order disordered Pomeranchuk phase transition. Next we solve the strong-coupling phase[2], which is characterized by a Fermi surface distortion, by a large-N approximation (where N=g is in fact large for realistic systems). Predictions will be presented for finite but large g for the statistics of the Coulomb Blockade peak spacings and other correlators. A connection will be made to ideas concerning "Fock space localization"[4]. Finally, the relationship of these results to puzzles[5] in persistent currents in mesoscopic rings will be presented. 1. G. Murthy and H. Mathur, Phys. Rev. Lett. 89, 126804 (2002). 2. G. Murthy and R. Shankar, cond-mat/0209136. 3. A. V. Andreev and A. Kamenev, Phys. Rev. Lett. 81, 3199 (1998); P. W. Brouwer, Y. Oreg, and B. I. Halperin, Phys. Rev. B 60, R13977 (1999); H. U. Baranger, D. Ullmo, and L. I. Glazman, Phys. Rev. B 61, R2425 (2000); I. L. Kurland, I. L. Aleiner, and B. L. Al'tshuler, Phys. Rev. B 62, 14886 (2000). 4. B. L. Al'tshuler, Y. Gefen, A. Kamanev, and L. S. Levitov, Phys. Rev. Lett. 78, 2803 (1997). 5. U. Eckern and P. Schwab, J. Low Temp. Phys. 126, 1291 (2002).

  6. Large scale production of megakaryocytes from human pluripotent stem cells by a chemically defined forward programming approach

    OpenAIRE

    Moreau, Thomas; Evans, Amanda L.; Vasquez, Louella; Tijssen, Marloes R.; Yan, Ying; Trotter, Matthew W.; Howard, Daniel; Colzani, Maria; Arumugam, Meera; Wu, Wing Han; Dalby, Amanda; Lampela, Riina; Bouet, Guenaelle; Hobbs, Catherine M.; Dean C Pask

    2016-01-01

    This is the author accepted manuscript. It is currently under an indefinite embargo pending publication by Nature Publishing Group. The production of megakaryocytes (MKs) – the precursors of blood platelets – from human pluripotent stem cells (hPSCs) offers exciting clinical opportunities for transfusion medicine. We describe an original approach for the large scale generation of MKs in chemically defined conditions using a forward programming strategy relying on the concurrent exogenous e...

  7. Slower deviations of the branching Brownian motion and of branching random walks

    Science.gov (United States)

    Derrida, Bernard; Shi, Zhan

    2017-08-01

We have shown recently how to calculate the large deviation function of the position X_max(t) of the rightmost particle of a branching Brownian motion at time t. This large deviation function exhibits a phase transition at a certain negative velocity. Here we extend this result to more general branching random walks and show that the probability distribution of X_max(t) has, asymptotically in time, a prefactor characterized by a non-trivial power law. Dedicated to John Cardy on the occasion of his 70th birthday.
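Schematically, and in our own notation rather than the paper's, a large deviation statement for the rightmost particle takes the form:

```latex
% I(v) denotes the rate function; the abstract's refinement is that the
% exponential decay carries a power-law prefactor, written here as t^{-\gamma}.
P\bigl(X_{\max}(t) \approx v\,t\bigr) \;\asymp\; t^{-\gamma}\, e^{-t\, I(v)},
\qquad t \to \infty,
```

with the phase transition mentioned in the abstract appearing as non-analytic behaviour of I(v) at a certain negative velocity.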

  8. Evaluating two model reduction approaches for large scale hedonic models sensitive to omitted variables and multicollinearity

    DEFF Research Database (Denmark)

    Panduro, Toke Emil; Thorsen, Bo Jellesmark

    2014-01-01

We evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive ...
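The variance-inflation side of the second approach is easy to sketch: VIF_j = 1/(1 - R_j^2), where R_j^2 comes from regressing variable j on the remaining regressors; variables with large VIF are candidates for removal. The data below are synthetic, not the Danish hedonic dataset:

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column of the design matrix X."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        # Regress column j on the others (intercept included).
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)               # independent
v = vif(np.column_stack([x1, x2, x3]))
# v[0] and v[1] are large (collinearity); v[2] stays near 1
```

A stepwise reduction would drop the worst-VIF variable, recompute, and repeat until all VIFs fall below a chosen cutoff (10 is a common rule of thumb).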

  9. Extracting Noun Phrases from Large-Scale Texts A Hybrid Approach and Its Automatic Evaluation

    CERN Document Server

    Chen, K; Chen, Kuang-hua; Chen, Hsin-Hsi

    1994-01-01

Acquiring noun phrases from running text is useful for many applications, such as word grouping, terminology indexing, etc. The reported literature adopts a pure probabilistic approach or a pure rule-based noun-phrase grammar to tackle this problem. In this paper, we apply a probabilistic chunker to decide the implicit boundaries of constituents and utilize linguistic knowledge to extract the noun phrases by a finite-state mechanism. The test texts are from the SUSANNE Corpus, and the results are evaluated by automatic comparison against the parse field of the SUSANNE Corpus. The results of this preliminary experiment are encouraging.
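The finite-state extraction step can be sketched as a regular pattern over POS tags. The pattern below, (DT)? (JJ)* (NN|NNS)+, is a common textbook baseline, not the grammar or the probabilistic chunker used in the paper:

```python
import re

def extract_noun_phrases(tagged):
    """Finite-state NP extraction over a POS-tagged sentence.

    `tagged` is a list of (word, tag) pairs with Penn-style tags.
    The pattern is matched against the space-joined tag sequence.
    """
    tags = " ".join(tag for _, tag in tagged)
    pattern = re.compile(r"(?:DT )?(?:JJ )*(?:NNS? ?)+")
    phrases = []
    for m in pattern.finditer(tags + " "):
        start = tags[: m.start()].count(" ")        # token index of match start
        length = m.group().strip().count(" ") + 1   # number of tokens matched
        phrases.append(" ".join(w for w, _ in tagged[start : start + length]))
    return phrases

sent = [("The", "DT"), ("probabilistic", "JJ"), ("chunker", "NN"),
        ("finds", "VBZ"), ("noun", "NN"), ("phrases", "NNS")]
# → ["The probabilistic chunker", "noun phrases"]
```

Matching over the tag string rather than token by token is a compact way to express the finite-state machine, at the cost of needing the index bookkeeping above.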

  10. A compact to revitalise large-scale irrigation systems: A ‘theory of change’ approach

    Directory of Open Access Journals (Sweden)

    Bruce A. Lankford

    2016-02-01

In countries with transitional economies such as those found in South Asia, large-scale irrigation systems (LSIS) with a history of public ownership account for about 115 million ha (Mha), or approximately 45% of their total area under irrigation. In terms of the global area of irrigation (320 Mha for all countries), LSIS are estimated at 130 Mha, or 40% of irrigated land. These systems can potentially deliver significant local, regional and global benefits in terms of food, water and energy security, employment, economic growth and ecosystem services. For example, primary crop production is conservatively valued at about US$355 billion. However, efforts to enhance these benefits and reform the sector have been costly, and outcomes have been underwhelming and short-lived. We propose the application of a 'theory of change' (ToC) as a foundation for promoting transformational change in large-scale irrigation, centred upon a 'global irrigation compact' that promotes new forms of leadership, partnership and ownership (LPO). The compact argues that LSIS can change by switching away from the current channelling of aid finances controlled by government irrigation agencies. Instead it is for irrigators, closely partnered by private, public and NGO advisory and regulatory services, to develop strong leadership models and to find new compensatory partnerships with cities and other river basin neighbours. The paper summarises key assumptions for change in the LSIS sector, including the need to initially test this change via a handful of volunteer systems. Our other key purpose is to demonstrate a ToC template by which large-scale irrigation policy can be better elaborated and discussed.

  11. Approaching total absorption at near infrared in a large area monolayer graphene by critical coupling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yonghao; Chadha, Arvinder; Zhao, Deyin; Shuai, Yichen; Menon, Laxmy; Yang, Hongjun; Zhou, Weidong, E-mail: wzhou@uta.edu [Nanophotonics Lab, Department of Electrical Engineering, University of Texas at Arlington, Arlington, Texas 76019 (United States); Piper, Jessica R.; Fan, Shanhui [Ginzton Laboratory, Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Jia, Yichen; Xia, Fengnian [Department of Electrical Engineering, Yale University, New Haven, Connecticut 06520 (United States); Ma, Zhenqiang [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States)

    2014-11-03

    We demonstrate experimentally close to total absorption in monolayer graphene based on critical coupling with guided resonances in transfer printed photonic crystal Fano resonance filters at near infrared. Measured peak absorptions of 35% and 85% were obtained from cavity coupled monolayer graphene for the structures without and with back reflectors, respectively. These measured values agree very well with the theoretical values predicted with the coupled mode theory based critical coupling design. Such strong light-matter interactions can lead to extremely compact and high performance photonic devices based on large area monolayer graphene and other two–dimensional materials.

  12. Large $N_{c}$, chiral approach to $M_{\eta}'$ at finite temperature

    CERN Document Server

    Escribano, R; Tytgat, M H G

    2000-01-01

We study the temperature dependence of the eta and eta' meson masses within the framework of U(3)_L x U(3)_R chiral perturbation theory, up to next-to-leading order in a simultaneous expansion in momenta, quark masses and number of colours. We find that both masses decrease at low temperatures, but only very slightly. We analyze higher order corrections and argue that large N_c suggests a discontinuous drop of M_eta' at the critical temperature of deconfinement T_c, consistent with a first order transition to a phase with approximate U(1)_A symmetry.

  13. A least square support vector machine-based approach for contingency classification and ranking in a large power system

    Directory of Open Access Journals (Sweden)

    Bhanu Pratap Soni

    2016-12-01

Full Text Available This paper proposes an effective supervised learning approach for static security assessment of a large power system. The approach employs a least square support vector machine (LS-SVM) to rank contingencies and predict the system severity level. The severity of a contingency is measured by two scalar performance indices (PIs): the line MVA performance index (PIMVA) and the voltage-reactive power performance index (PIVQ). The LS-SVM works in two steps: in Step I, both indices (PIMVA and PIVQ) are estimated under different operating scenarios; in Step II, contingency ranking is carried out based on the values of the PIs. The effectiveness of the proposed methodology is demonstrated on the IEEE 39-bus (New England) system. The approach can be a beneficial tool for fast and accurate security assessment and contingency analysis at an energy management center.
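As a sketch of the regression machinery (not the authors' implementation), an LS-SVM reduces training to a single linear solve of the standard dual system; here a toy one-dimensional "performance index" is fitted with an RBF kernel, with all hyperparameters assumed:

```python
import numpy as np

def rbf(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix between two sets of samples
    d2 = ((X1[:, None, :] - X2[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma, sigma):
    # LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma):
    return rbf(Xte, Xtr, sigma) @ alpha + b

# toy "performance index" regression problem
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y, gamma=100.0, sigma=0.3)
pred = lssvm_predict(X, b, alpha, X, sigma=0.3)
print(np.abs(pred - y).max())   # small training error on the smooth target
```

Unlike a standard SVM, the LS-SVM replaces the inequality constraints with equalities, so training needs no quadratic-programming solver.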

  14. Purely posterior midline approach resection for large intra- and extra-spinal dumbbell tumors extending into the thoracic cavity

    Directory of Open Access Journals (Sweden)

    Bo ZHANG

    2016-04-01

Full Text Available Objective To study the surgical technique and effect of purely posterior midline approach resection for large intra- and extra-spinal dumbbell tumors extending into the thoracic cavity.  Methods Twelve cases of large intra- and extra-spinal dumbbell tumors that extended into the thoracic cavity and were resected through a purely posterior midline approach were retrospectively analyzed. The clinical features and common surgical approaches to dumbbell tumors reported in the literature were reviewed to explore the advantages of the purely posterior midline approach.  Results There were 12 patients (5 males and 7 females) aged 34-58 years (average 45 years). Eleven cases underwent a first operation and one case underwent reoperation. There were 4 Eden type Ⅱ tumors, 5 Eden type Ⅲ tumors, and 3 Eden type Ⅳ tumors, with an average size of 4.50 cm × 4.00 cm × 3.00 cm. Total resection through the purely posterior midline approach was achieved in all cases, and one case received spinal fixation at the same time. Operation time ranged from 120-315 min (average 195 min), with an average blood loss of 205 ml. Postoperative pathological findings included schwannoma in 9 patients, neurofibroma in one patient, meningioma in one patient and cavernous hemangioma in one patient. The follow-up period was 6-26 months (average 18 months) after operation, and all patients recovered well. Preoperative symptoms such as root pain and spinal cord compression were relieved to various degrees. Neither new neurological deficits nor tumor recurrence was found.  Conclusions Most intra- and extra-spinal dumbbell tumors that extend into the thoracic cavity are schwannomas. With correct preoperative radiographic assessment, a purely posterior midline approach with piecemeal resection in the intercostal space can achieve total tumor resection in most cases without thoracotomy or an assisted incision. DOI: 10.3969/j.issn.1672-6731.2016.03.005

  15. High-throughput film-densitometry: An efficient approach to generate large data sets

    Energy Technology Data Exchange (ETDEWEB)

    Typke, Dieter; Nordmeyer, Robert A.; Jones, Arthur; Lee, Juyoung; Avila-Sakar, Agustin; Downing, Kenneth H.; Glaeser, Robert M.

    2004-07-14

A film-handling machine (robot) has been built which can, in conjunction with a commercially available film densitometer, exchange and digitize over 300 electron micrographs per day. Implementation of robotic film handling effectively eliminates the delay and tedium associated with digitizing images when data are initially recorded on photographic film. The modulation transfer function (MTF) of the commercially available densitometer is significantly worse than that of a high-end, scientific microdensitometer. Nevertheless, its signal-to-noise ratio (S/N) is excellent, allowing substantial restoration of the output to near-perfect performance. Due to the large area of the standard electron microscope film that can be digitized by the commercial densitometer (up to 10,000 x 13,680 pixels with an appropriately coded holder), automated film digitization offers a fast and inexpensive alternative to high-end CCD cameras as a means of acquiring large amounts of image data in electron microscopy.
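The kind of MTF restoration described can be illustrated with a simple Wiener-style deconvolution; the Gaussian MTF and the noise-to-signal ratio below are assumptions for the sketch, not the densitometer's measured response:

```python
import numpy as np

def wiener_restore(blurred, mtf, nsr):
    # Divide out the measured MTF in Fourier space, regularized by the
    # noise-to-signal ratio so frequencies with near-zero MTF are not
    # amplified blindly (which would explode the noise).
    G = np.fft.fft(blurred)
    W = np.conj(mtf) / (np.abs(mtf)**2 + nsr)
    return np.real(np.fft.ifft(W * G))

n = 256
x = np.zeros(n)
x[100:110] = 1.0                            # sharp test feature
freq = np.fft.fftfreq(n)
H = np.exp(-(freq / 0.12)**2)               # assumed Gaussian MTF roll-off
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))
restored = wiener_restore(blurred, H, nsr=1e-4)
print(np.abs(blurred - x).max(), np.abs(restored - x).max())
```

The restoration sharpens the blurred edges substantially; a high S/N (small `nsr`) is exactly what lets the low-MTF frequencies be recovered rather than suppressed.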

  16. A quantitative approach to the topology of large-scale structure. [for galactic clustering computation

    Science.gov (United States)

    Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.

    1987-01-01

    A quantitative measure of the topology of large-scale structure: the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.
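For a Gaussian (random phase) field, the genus per unit volume has the universal shape g(ν) ∝ (1 − ν²) exp(−ν²/2) as a function of the threshold ν in units of the field's standard deviation; a minimal sketch (with an arbitrary amplitude, since the true normalization depends on the power spectrum):

```python
import numpy as np

def genus_density(nu, amplitude=1.0):
    # Mean genus per unit volume of density contours of a Gaussian random
    # field at threshold nu (in units of the standard deviation).
    # The amplitude depends on the power spectrum; the shape is universal.
    return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

nu = np.linspace(-3, 3, 601)
g = genus_density(nu)
print(genus_density(0.0))   # positive genus at the median: sponge-like topology
print(genus_density(2.0))   # negative genus at high threshold: isolated clusters
```

The sign change at ν = ±1 is the diagnostic: positive genus near the median density indicates a sponge-like topology, while negative genus at extreme thresholds reflects isolated clusters or voids.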

  17. Differential-algebraic approach to large deformation analysis of frame structures subjected to dynamic loads

    Institute of Scientific and Technical Information of China (English)

    HU Yu-jia; ZHU Yuan-yuan; CHENG Chang-jun

    2008-01-01

A nonlinear mathematical model for the analysis of large deformation of frame structures with discontinuity conditions and initial displacements, subject to dynamic loads, is formulated with arc-coordinates. The differential quadrature element method (DQEM) is then applied to discretize the nonlinear mathematical model in the spatial domain. An effective method is presented to deal with discontinuity conditions of multi-variables in the application of DQEM. A set of DQEM discretization equations is obtained; these are nonlinear differential-algebraic equations with singularity in the time domain. This paper also presents a method to solve such nonlinear differential-algebraic equations. As applications, static and dynamic analyses of large deformation of frames and combined frame structures, subjected to concentrated and distributed forces, are presented. The obtained results are compared with those in the literature. Numerical results show that the proposed method is general and effective in dealing with discontinuity conditions of multi-variables and solving differential-algebraic equations. It requires only a small number of nodes and has low computational complexity with high precision and good convergence.

  18. Raman Optical Activity Spectra for Large Molecules through Molecules-in-Molecules Fragment-Based Approach.

    Science.gov (United States)

    Jovan Jose, K V; Raghavachari, Krishnan

    2016-02-09

We present an efficient method for the calculation of Raman optical activity (ROA) spectra for large molecules through the molecules-in-molecules (MIM) fragment-based method. The relevant higher energy derivatives from smaller fragments are used to build the property tensors of the parent molecule to enable the extension of the MIM method for evaluating ROA spectra (MIM-ROA). Two factors were found to be particularly important in yielding accurate results. First, the link-atom tensor components are projected back onto the corresponding host and supporting atoms through the Jacobian projection method, yielding a mathematically rigorous approach. Second, the long-range interactions between fragments are taken into account by using a less computationally expensive lower level of theory. The performance of the MIM-ROA model is calibrated on the enantiomeric pairs of 10 carbohydrate benchmark molecules with strong intramolecular interactions. The vibrational frequencies and ROA intensities are accurately reproduced relative to the full, unfragmented results for these systems. In addition, the MIM-ROA method is employed to predict the ROA spectra of D-maltose, α-D-cyclodextrin, and cryptophane-A, yielding spectra in excellent agreement with experiment. The accuracy and performance on the benchmark systems validate the MIM-ROA model for exploring ROA spectra of large molecules.

  19. Improving irrigation efficiency in Italian apple orchards: A large-scale approach

    Science.gov (United States)

    Della Chiesa, Stefano; la Cecilia, Daniele; Niedrist, Georg; Hafner, Hansjörg; Thalheimer, Martin; Tappeiner, Ulrike

    2016-04-01

The North Italian region of South Tyrol is Europe's largest apple-growing area. To achieve economically viable fruit quality and quantity, the relatively dry climate of the region (450-700 mm annual precipitation) is compensated by large-scale irrigation management, which until now has followed old, traditional rights. Due to ongoing climatic changes and rising public sensitivity toward the sustainable use of water resources, irrigation practices are increasingly being critically discussed. In order to establish an objective and quantitative base of information to optimise irrigation practice, 17 existing microclimatic stations were upgraded with soil moisture and soil water potential sensors. As a second information layer, a data set of 20,000 soil analyses has been geo-referenced and spatialized using a modern geostatistical method. Finally, to assess whether zones with a shallow aquifer influence soil water availability, data from 70 groundwater depth measuring stations were retrieved. The preliminary results highlight that in many locations, in particular in the valley bottoms, irrigation largely exceeds plant water needs, because either the shallow aquifer provides sufficient water supply by capillary rise into the root zone or irrigation is applied without accounting for the specific soil properties.

  20. An Analytical Approach for Optimal Clustering Architecture for Maximizing Lifetime in Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mr. Yogesh Rai

    2011-09-01

Full Text Available Many methods have been researched to prolong sensor network lifetime using mobile technologies. In mobile sink research, track-based methods and anchor-point-based methods are the representative operation methods for mobile sinks. However, the existing methods decrease Quality of Service (QoS) and create routing hotspots in the vicinity of the mobile sink. In large-scale wireless sensor networks, clustering is an effective technique for improving the utilization of limited energy and prolonging the network lifetime. However, the problem of unbalanced energy dissipation exists in the multi-hop clustering model, where the cluster heads closer to the sink have to relay heavier traffic and consume more energy than farther nodes. In this paper we analyze several aspects of the optimal clustering architecture for maximizing lifetime in large-scale wireless sensor networks. We also provide some analytical concepts for energy-aware head rotation and routing protocols to further balance the energy consumption among all nodes.
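One common concrete instance of energy-aware cluster-head rotation (not necessarily the scheme the authors analyze) is the LEACH threshold rule: each node elects itself with target probability p per round, and the threshold rises within an epoch so that every node serves exactly once per 1/p rounds:

```python
import random

def leach_threshold(p, r):
    # LEACH election threshold: T(n) = p / (1 - p * (r mod 1/p));
    # it reaches 1.0 in the last round of each epoch, so no node is skipped.
    return p / (1.0 - p * (r % int(round(1.0 / p))))

def elect_heads(node_ids, eligible, p, r, rng):
    return {n for n in node_ids
            if n in eligible and rng.random() < leach_threshold(p, r)}

rng = random.Random(0)
nodes = list(range(100))
p = 0.1
eligible = set(nodes)
head_counts = {n: 0 for n in nodes}
for r in range(50):                      # 5 epochs of 1/p = 10 rounds each
    if r % 10 == 0:
        eligible = set(nodes)            # new epoch: everyone eligible again
    heads = elect_heads(nodes, eligible, p, r, rng)
    eligible -= heads                    # a head sits out the rest of the epoch
    for h in heads:
        head_counts[h] += 1
avg = sum(head_counts.values()) / len(nodes)
print(avg)                               # heads served per node over 5 epochs
```

Rotating the energy-hungry head role this way spreads the relay burden, which is the balancing effect the abstract refers to.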

  1. An Efficient Approach to Obtaining Large Numbers of Distant Supernova Host Galaxy Redshifts

    CERN Document Server

    Lidman, C; Sullivan, M; Myzska, J; Dobbie, P; Glazebrook, K; Mould, J; Astier, P; Balland, C; Betoule, M; Carlberg, R; Conley, A; Fouchez, D; Guy, J; Hardin, D; Hook, I; Howell, D A; Pain, R; Palanque-Delabrouille, N; Perrett, K; Pritchet, C; Regnault, N; Rich, J

    2012-01-01

    We use the wide-field capabilities of the 2dF fibre positioner and the AAOmega spectrograph on the Anglo-Australian Telescope (AAT) to obtain redshifts of galaxies that hosted supernovae during the first three years of the Supernova Legacy Survey (SNLS). With exposure times ranging from 10 to 60 ksec per galaxy, we were able to obtain redshifts for 400 host galaxies in two SNLS fields, thereby substantially increasing the total number of SNLS supernovae with host galaxy redshifts. The median redshift of the galaxies in our sample that hosted photometrically classified Type Ia supernovae (SNe Ia) is 0.77, which is 25% higher than the median redshift of spectroscopically confirmed SNe Ia in the three-year sample of the SNLS. Our results demonstrate that one can use wide-field fibre-fed multi-object spectrographs on 4m telescopes to efficiently obtain redshifts for large numbers of supernova host galaxies over the large areas of sky that will be covered by future high-redshift supernova surveys, such as the Dark...

  2. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
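The Gauss-Marquardt-Levenberg iteration at the core of PEST-style estimation can be sketched in a few lines; this toy version (not PEST++ code) recovers the parameters of an exponential model from synthetic observations:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, tol=1e-10, max_iter=100):
    # Damped Gauss-Newton: solve (J^T J + lam*diag(J^T J)) dp = -J^T r,
    # shrinking the damping lam on success and growing it on failure.
    p = np.array(p0, float)
    lam = 1e-3
    for _ in range(max_iter):
        r = residual(p)
        J = jacobian(p)
        g = J.T @ r
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -g)
        if np.linalg.norm(step) < tol:
            break
        p_new = p + step
        if (residual(p_new)**2).sum() < (r**2).sum():
            p, lam = p_new, lam * 0.5
        else:
            lam *= 2.0
    return p

x = np.linspace(0, 4, 30)
y = 2.0 * np.exp(-0.7 * x)        # synthetic "observations", no noise

def residual(p):
    return p[0] * np.exp(-p[1] * x) - y

def jacobian(p):
    e = np.exp(-p[1] * x)
    return np.column_stack([e, -p[0] * x * e])

p_hat = levenberg_marquardt(residual, jacobian, [1.0, 1.0])
print(p_hat)   # recovers the true parameters [2.0, 0.7]
```

Real highly parameterized problems add regularization and run the many model calls in parallel, which is what the run manager mentioned above handles.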

  3. Mechanism Modeling and Simulation Based on Dimensional Deviation

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

To analyze the effects of dimensional variations on the motion characteristics of mechanisms, random dimensional deviation generation techniques for 3D models were studied on the basis of existing mechanical modeling software. The redevelopment interfaces provided by the modeling software were used to build a random dimensional deviation generation system with prescribed probability distribution characteristics. This system has been used to perform modeling and simulation of a specific mechanical time-delay mechanism under multiple deviation varieties. Simulation results indicate that the dynamic characteristics of the mechanism are influenced significantly by dimensional deviations within the tolerance range, which should be emphasized in the design.
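The idea of propagating random dimensional deviations through a model can be illustrated with a simple Monte Carlo stack-up; the dimensions, tolerances, and the assumption that each dimension is normal with 3σ equal to its tolerance are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def sample(nominal, tol, n):
    # assume normally distributed dimensions with 3*sigma = tolerance band
    return rng.normal(nominal, tol / 3.0, n)

# hypothetical assembly: a slot must clear three stacked parts (mm)
slot   = sample(30.00, 0.09, N)
part_a = sample(12.00, 0.06, N)
part_b = sample( 9.50, 0.06, N)
part_c = sample( 8.30, 0.06, N)
clearance = slot - (part_a + part_b + part_c)

print(clearance.mean())        # ~0.20 mm nominal clearance
print((clearance < 0).mean())  # fraction of interfering (failed) assemblies
```

Running the kinematic or dynamic model on each sampled geometry, instead of this algebraic stack-up, gives the distribution of motion characteristics the abstract studies.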

  4. Deviation and rotation of the larynx in computer tomography

    Energy Technology Data Exchange (ETDEWEB)

    Shibusawa, Mitsunobu (Tokyo Medical and Dental Univ., Tokyo (Japan). Medical Research Institute); Yano, Kazuhiko

    1990-01-01

Many authors have described the clinical importance of asymmetry of the laryngeal framework. However, its pathogenesis is generally unknown. In this study, CT images of 315 Japanese subjects were investigated to define the laryngeal position relative to the midline of the cervical vertebrae. The CT slice of each subject within 5 mm cephalad of the cricoarytenoid joint was traced, and the deviation and rotation angles were measured using our method. Seventy-one percent of the subjects' larynges deviated and/or rotated to the right side, while 17% deviated and/or rotated to the left side. Six percent showed neither deviation nor rotation. In the remaining 6%, deviation and rotation were in opposite directions. In addition, the lengths of the thyroid alae were measured in 282 subjects: the left ala was longer in 55%, the right in 23%, and the two were almost equal in 22%. The conclusions are as follows. The majority of the subjects' CT images showed deviation and/or rotation of the laryngeal framework to the right side. So-called idiopathic laryngeal deviation refers to cases with remarkable deviation and/or rotation of the laryngeal framework. Aging seemed to be an important factor in the acceleration of laryngeal deviation and rotation. The type of disease and the side of mass lesions had no statistical significance for deviation and rotation of the larynx. (author).

  5. Modular Approach for Continuous Cell-Level Balancing to Improve Performance of Large Battery Packs: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Muneed ur Rehman, M.; Evzelman, M.; Hathaway, K.; Zane, R.; Plett, G. L.; Smith, K.; Wood, E.; Maksimovic, D.

    2014-10-01

    Energy storage systems require battery cell balancing circuits to avoid divergence of cell state of charge (SOC). A modular approach based on distributed continuous cell-level control is presented that extends the balancing function to higher level pack performance objectives such as improving power capability and increasing pack lifetime. This is achieved by adding DC-DC converters in parallel with cells and using state estimation and control to autonomously bias individual cell SOC and SOC range, forcing healthier cells to be cycled deeper than weaker cells. The result is a pack with improved degradation characteristics and extended lifetime. The modular architecture and control concepts are developed and hardware results are demonstrated for a 91.2-Wh battery pack consisting of four series Li-ion battery cells and four dual active bridge (DAB) bypass DC-DC converters.
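A minimal sketch of the cell-level balancing idea, assuming a proportional bypass-current law (the paper's controller additionally biases SOC by estimated cell health, which is omitted here):

```python
import numpy as np

# four series cells with unequal initial SOC; each has a bypass DC-DC
# converter that diverts a small current proportional to its SOC error
soc = np.array([0.80, 0.74, 0.77, 0.70])
capacity_ah = 2.5
k_bal = 0.5            # assumed bypass gain, A per unit of SOC error
dt_h = 0.01            # simulation time step, hours

for _ in range(2000):
    err = soc - soc.mean()
    i_bypass = k_bal * err                  # positive -> drain the fuller cell
    soc = soc - i_bypass * dt_h / capacity_ah

print(soc.round(4))    # cells converge toward the common mean SOC
```

Because the bypass currents sum to zero, the pack-level charge is preserved while the cell-to-cell spread decays exponentially; biasing the individual setpoints away from the mean is what lets healthier cells be cycled deeper, as described above.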

  6. Large deformation solid-fluid interaction via a level set approach.

    Energy Technology Data Exchange (ETDEWEB)

    Schunk, Peter Randall; Noble, David R.; Baer, Thomas A.; Rao, Rekha Ranjana; Notz, Patrick K.; Wilkes, Edward Dean

    2003-12-01

    Solidification and blood flow seemingly have little in common, but each involves a fluid in contact with a deformable solid. In these systems, the solid-fluid interface moves as the solid advects and deforms, often traversing the entire domain of interest. Currently, these problems cannot be simulated without innumerable expensive remeshing steps, mesh manipulations or decoupling the solid and fluid motion. Despite the wealth of progress recently made in mechanics modeling, this glaring inadequacy persists. We propose a new technique that tracks the interface implicitly and circumvents the need for remeshing and remapping the solution onto the new mesh. The solid-fluid boundary is tracked with a level set algorithm that changes the equation type dynamically depending on the phases present. This novel approach to coupled mechanics problems promises to give accurate stresses, displacements and velocities in both phases, simultaneously.

  7. Analysis of the Drivetrain Performance of a Large Horizontal-Axis Wind Turbine: An Aeroelastic Approach

    DEFF Research Database (Denmark)

    Gebhardt, Cristian; Preidikman, Sergio; Massa, Julio C

    2010-01-01

Due to increasing environmental concern, and approaching limits to fossil fuel consumption, green sources of energy are gaining interest. Among the several energy sources being explored, wind energy shows much promise in selected areas where the average wind speed is high. Wind turbines are used... blades, the drivetrain and the generator. The blades are the part of the turbine that captures energy in the wind and rotates about an axis. Extracting energy from the wind is typically accomplished by first mechanically converting the velocity of the wind into a rotational motion of the wind turbine by means of the rotor blades, and then converting the rotational energy of the rotor blades into electrical energy by using a generator. The amount of available energy which the wind transfers to the rotor depends on the mass density of the air, the sweep area of the rotor blades, and the wind speed.
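The dependence on air density, swept area, and wind speed stated above is the standard rotor power formula P = ½ρAv³Cp; a quick sketch with illustrative numbers (not taken from the paper):

```python
import math

def wind_power(rho, radius, v, cp):
    # P = 0.5 * rho * A * v^3 * Cp; Cp is bounded by the Betz limit 16/27.
    area = math.pi * radius**2
    return 0.5 * rho * area * v**3 * cp

# illustrative numbers: 40 m rotor radius, 10 m/s wind, Cp = 0.45
p = wind_power(rho=1.225, radius=40.0, v=10.0, cp=0.45)
print(p / 1e6)   # ~1.39 MW
```

The cubic dependence on wind speed is why siting in high-average-wind areas dominates turbine economics.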

  8. Virtual reality approach for 3D large model browsing on web site

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Using virtual reality for interactive design gives a designer an intuitive vision of a design and allows the designer to achieve a viable, optimal solution in a timely manner. The article discusses the process of making the Virtual Reality System of the Humble Administrator's Garden. Translating building data to the Virtual Reality Modeling Language (VRML) is by far unsatisfactory. This creates a challenge for computer designers to do optimization to meet requirements. Five different approaches to optimize models have been presented in this paper. The other methods are to optimize VRML and to reduce the file size. This is done by keeping polygon counts to a minimum and by applying such techniques as object culling and level-of-detail switching.

  9. Multicontroller: an object programming approach to introduce advanced control algorithms for the GCS large scale project

    CERN Document Server

    Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department

    2007-01-01

The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a framework - UNICOS (UNified Industrial COntrol System) - based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were able to provide a PID (Proportional Integral Derivative) controller, whereas the Gas Systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Global Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system - PVSS - supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...
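Among the algorithms listed, the Smith predictor is the simplest to sketch: an internal model without dead time is fed back, corrected by the mismatch between the plant and the delayed model. The discrete-time toy below (plant, gains and delay are assumed values, not the UNICOS object) shows a PI loop stabilized despite a long transport delay:

```python
import numpy as np

# first-order plant with dead time: y[k+1] = a*y[k] + b*u[k - d - 1]
a, b, d = 0.9, 0.1, 10
N, setpoint = 300, 1.0

def simulate():
    y = ym = 0.0
    u_hist = [0.0] * (d + 1)       # transport delay on the plant input
    ym_hist = [0.0] * (d + 2)      # aligns the delayed model with the plant
    integ, kp, ki = 0.0, 0.5, 0.08
    out = []
    for _ in range(N):
        # Smith predictor: feed back the undelayed model output, corrected
        # by the plant/delayed-model mismatch (zero when the model is perfect)
        feedback = ym + (y - ym_hist[0])
        e = setpoint - feedback
        integ += e
        u = kp * e + ki * integ
        y = a * y + b * u_hist[0]          # plant sees the delayed input
        ym = a * ym + b * u                # internal model, no delay
        u_hist = u_hist[1:] + [u]
        ym_hist = ym_hist[1:] + [ym]
        out.append(y)
    return np.array(out)

y_out = simulate()
print(abs(y_out[-1] - setpoint))   # settles at the setpoint despite the delay
```

With a perfect model the controller effectively sees the delay-free plant, so the PI gains can be tuned as if the dead time were absent.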

  10. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously by selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, even though the feature space is reduced to less than 0.02% of the original one.
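A sparse autoencoder of the kind described can be sketched in plain NumPy: one tanh hidden layer trained to reconstruct its input, with an L1 penalty on the activations (architecture, data and hyperparameters are illustrative, not those used on the TJ-II database):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 200 samples living near a 2-D subspace of R^8
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 8))

n_in, n_hid = 8, 4
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)
lr, l1 = 0.01, 1e-3
n = len(X)

losses = []
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)               # encoder
    Xr = H @ W2 + b2                       # linear decoder
    err = Xr - X
    losses.append((err**2).mean() + l1 * np.abs(H).mean())
    # backpropagation of reconstruction loss + L1 sparsity penalty
    dXr = 2 * err / (n * n_in)
    dW2, db2 = H.T @ dXr, dXr.sum(0)
    dH = dXr @ W2.T + l1 * np.sign(H) / (n * n_hid)
    dZ = dH * (1 - H**2)                   # tanh derivative
    dW1, db1 = X.T @ dZ, dZ.sum(0)
    W1 -= lr * dW1; W2 -= lr * dW2; b1 -= lr * db1; b2 -= lr * db2

print(losses[0], losses[-1])   # combined loss decreases during training
```

After training, the hidden activations `H` serve as the automatically learned, low-dimensional feature representation fed to a downstream classifier.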

  11. Characterizing the role benthos plays in large coastal seas and estuaries: A modular approach

    Science.gov (United States)

    Tenore, K.R.; Zajac, R.N.; Terwin, J.; Andrade, F.; Blanton, J.; Boynton, W.; Carey, D.; Diaz, R.; Holland, Austin F.; Lopez-Jamar, E.; Montagna, P.; Nichols, F.; Rosenberg, R.; Queiroga, H.; Sprung, M.; Whitlatch, R.B.

    2006-01-01

Ecologists studying coastal and estuarine benthic communities have long taken a macroecological view, by relating benthic community patterns to environmental factors across several spatial scales. Although many general ecological patterns have been established, often a significant amount of the spatial and temporal variation in soft-sediment communities within and among systems remains unexplained. Here we propose a framework that may aid in unraveling the complex influence of environmental factors associated with the different components of coastal systems (i.e. the terrestrial and benthic landscapes, and the hydrological seascape) on benthic communities, and use this information to assess the role played by benthos in coastal ecosystems. A primary component of the approach is the recognition of system modules (e.g. marshes, dendritic systems, tidal rivers, enclosed basins, open bays, lagoons). The modules may differentially interact with key forcing functions (e.g. temperature, salinity, currents) that influence system processes and in turn benthic responses and functions. Modules may also constrain benthic characteristics and related processes within certain ecological boundaries and help explain their overall spatio-temporal variation. We present an example of how benthic community characteristics are related to the modular structure of 14 coastal seas and estuaries, and show that benthic functional group composition is significantly related to the modular structure of these systems. We also propose a framework for exploring the role of benthic communities in coastal systems using this modular approach and offer predictions of how benthic communities may vary depending on the modular composition and characteristics of a coastal system. © 2006 Elsevier B.V. All rights reserved.

  12. Generation of weakly nonlinear nonhydrostatic internal tides over large topography: a multi-modal approach

    Directory of Open Access Journals (Sweden)

    R. Maugé

    2008-03-01

Full Text Available A set of evolution equations is derived for the modal coefficients in a weakly nonlinear nonhydrostatic internal-tide generation problem. The equations allow for the presence of large-amplitude topography, e.g. a continental slope, which is formally assumed to have a length scale much larger than that of the internal tide. However, comparison with results from more sophisticated numerical models shows that this restriction can in practice be relaxed. It is shown that a topographically induced coupling between modes occurs that is distinct from nonlinear coupling. Nonlinear effects include the generation of higher harmonics by reflection from boundaries, i.e. steeper tidal beams at frequencies that are multiples of the basic tidal frequency. With a seasonal thermocline included, the model is capable of reproducing the phenomenon of local generation of internal solitary waves by a tidal beam impinging on the seasonal thermocline.
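The modal coefficients referred to come from the vertical eigenproblem for internal waves. For constant stratification N, the problem φ″ + (N/c)²φ = 0 with φ = 0 at surface and bottom has wave speeds c_n = NH/(nπ), which a finite-difference sketch (with illustrative N and depth, not values from the paper) reproduces:

```python
import numpy as np

N_buoy = 2e-3           # buoyancy frequency (1/s), assumed constant
H = 4000.0              # water depth (m)
m = 400                 # interior grid points
dz = H / (m + 1)

# second-derivative matrix with phi = 0 at surface and bottom
main = -2.0 * np.ones(m)
off = np.ones(m - 1)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dz**2

# -phi'' = (N/c)^2 phi  ->  eigenvalues lam_n = (N/c_n)^2
lam = np.linalg.eigvalsh(-D2)
c = N_buoy / np.sqrt(lam[:3])
print(c)   # ~ N*H/(n*pi): fastest (gravest) mode first
```

With realistic, depth-varying N(z) the same eigenproblem is solved numerically, and the internal tide is expanded in the resulting modes; the topographic coupling discussed above appears as coefficients linking these modal amplitudes.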

  13. ADN-Viewer: a 3D approach for bioinformatic analyses of large DNA sequences.

    Science.gov (United States)

    Hérisson, Joan; Ferey, Nicolas; Gros, Pierre-Emmanuel; Gherbi, Rachid

    2007-01-20

Most biologists work on textual DNA sequences, which are limited to a linear representation of DNA. In this paper, we address the potential offered by virtual reality for 3D modeling and immersive visualization of large genomic sequences. The representation of the 3D structure of naked DNA allows biologists to observe and analyze genomes interactively at different levels. We developed a powerful software platform that provides a new point of view for sequence analysis: ADN-Viewer. Nevertheless, a classical eukaryotic chromosome of 40 million base pairs requires about 6 Gbytes of 3D data. In order to manage these huge amounts of data in real time, we designed various scene-management algorithms and immersive human-computer interaction for user-friendly data exploration. In addition, one bioinformatics study scenario is proposed.

  14. Sub-bottom profiling for large-scale maritime archaeological survey An experience-based approach

    DEFF Research Database (Denmark)

    Grøn, Ole; Boldreel, Lars Ole

    2013-01-01

investigation of the sea floor. This commercial activity can take the form of aggregate extraction, fishing, installation of facilities such as windmills, cables or pipelines and the construction of bridges, harbours etc. Non-invasive acoustic survey methods play a significant role in the mapping of the submerged cultural heritage. Elements such as archaeological wreck sites exposed on the sea floor are mapped using side-scan and multi-beam techniques. These can also provide information on bathymetric patterns representing potential Stone Age settlements, whereas the detection of such archaeological sites and wrecks partially or wholly embedded in the sea-floor sediments demands the application of high-resolution sub-bottom profilers. This paper presents a strategy for the cost-effective large-scale mapping of unknown sediment-embedded sites such as submerged Stone Age settlements or wrecks, based on sub-bottom profiling.

  15. A phenomenological approach to the simulation of metabolism and proliferation dynamics of large tumour cell populations

    CERN Document Server

    Chignola, R; Chignola, Roberto; Milotti, Edoardo

    2005-01-01

    A major goal of modern computational biology is to simulate the collective behaviour of large cell populations starting from the intricate web of molecular interactions occurring at the microscopic level. In this paper we describe a simplified model of cell metabolism, growth and proliferation, suitable for inclusion in a multicell simulator now under development (Chignola R and Milotti E 2004 Physica A 338 261-6). Nutrients regulate the proliferation dynamics of tumor cells, which adapt their behaviour to respond to changes in the biochemical composition of the environment. This modeling of nutrient metabolism and the cell cycle at a mesoscopic scale leads to a continuous flow of information between the two disparate spatiotemporal scales of molecular and cellular dynamics that can be simulated with modern computers and tested experimentally.

  16. Estimation of melting points of a large set of persistent organic pollutants utilizing a QSPR approach.

    Science.gov (United States)

    Watkins, Marquita; Sizochenko, Natalia; Rasulev, Bakhtiyor; Leszczynski, Jerzy

    2016-03-01

    The presence of polyhalogenated persistent organic pollutants (POPs), such as Cl/Br-substituted benzenes, biphenyls, diphenyl ethers, and naphthalenes, has been identified in all environmental compartments. Exposure to these compounds can pose a potential risk not only for ecological systems, but also for human health. Therefore, efficient tools for comprehensive environmental risk assessment of POPs are required. Among the factors vital for environmental transport and fate processes is the melting point of a compound. In this study, we estimated the melting points of a large group (1419 compounds) of chloro- and bromo-derivatives of dibenzo-p-dioxins, dibenzofurans, biphenyls, naphthalenes, diphenyl ethers, and benzenes by utilizing quantitative structure-property relationship (QSPR) techniques. The compounds were classified by applying structure-based clustering methods followed by GA-PLS modeling. In addition, the random forest method was applied to develop more general models. The factors responsible for melting-point behavior and the predictive ability of each method are discussed.

  17. Reconstruction of large, irregularly sampled multidimensional images. A tensor-based approach.

    Science.gov (United States)

    Morozov, Oleksii Vyacheslav; Unser, Michael; Hunziker, Patrick

    2011-02-01

    Many practical applications require the reconstruction of images from irregularly sampled data. The spline formalism offers an attractive framework for solving this problem; the currently available methods, however, are hard to deploy for large-scale interpolation problems in dimensions greater than two (3-D, 3-D+time) because of an exponential increase in their computational cost (the curse of dimensionality). Here, we revisit the standard regularized least-squares formulation of the interpolation problem, and propose to perform the reconstruction in a uniform tensor-product B-spline basis as an alternative to the classical solution involving radial basis functions. Our analysis reveals that the underlying multilinear system of equations admits a tensor decomposition with an extreme sparsity of its one-dimensional components. We exploit this property for implementing a parallel, memory-efficient system solver. We show that the computational complexity of the proposed algorithm is essentially linear in the number of measurements and that its dependency on the number of dimensions is significantly less than that of the original sparse matrix-based implementation. The net benefit is a substantial reduction in memory requirements and operation count when compared to standard matrix-based algorithms, so that even 4-D problems with millions of samples become computationally feasible on desktop PCs in reasonable time. After validating the proposed algorithm in 3-D and 4-D, we apply it to a concrete imaging problem: the reconstruction of medical ultrasound images (3-D+time) from a large set of irregularly sampled measurements, acquired by a fast rotating ultrasound transducer.
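
    The regularized least-squares formulation the authors build on can be illustrated in one dimension. The sketch below is not the paper's tensor solver; the function names, the linear hat-function basis, and the second-difference penalty are illustrative choices for the generic problem: fit irregular samples in a uniform B-spline basis by solving the normal equations.

```python
import numpy as np

def hat(t):
    """Linear B-spline (hat function): 1 - |t| on [-1, 1], else 0."""
    return max(0.0, 1.0 - abs(t))

def fit_uniform_spline(x, y, n_coef, lam=1e-3):
    """Regularized least squares in a uniform B-spline basis:
    minimize ||A c - y||^2 + lam * ||D c||^2, where D is a
    second-difference (smoothness) penalty. x is assumed in [0, 1]."""
    A = np.zeros((len(x), n_coef))
    for i, xi in enumerate(x):
        u = xi * (n_coef - 1)            # map sample into knot-index space
        for j in range(n_coef):
            A[i, j] = hat(u - j)
    D = np.diff(np.eye(n_coef), n=2, axis=0)
    c = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ np.asarray(y, float))
    return A, c
```

    Because linear B-splines reproduce linear functions and the penalty vanishes on them, a linear signal is recovered exactly; the paper's contribution is performing the analogous solve in 3-D/4-D via a sparse tensor decomposition instead of assembling the full matrix.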

  18. Ranking of Simultaneous Equation Techniques to Small Sample Properties and Correlated Random Deviates

    Directory of Open Access Journals (Sweden)

    A. A. Adepoju

    2009-01-01

    Full Text Available Problem statement: All simultaneous-equation estimation methods have desirable asymptotic properties, and these properties become effective in large samples. This study is relevant because the samples available to researchers are mostly small in practice and are often plagued by mutual correlation between pairs of random deviates, which violates the assumption of mutual independence between such pairs. The objective of this research was to study the small-sample properties of these estimators when the errors are correlated, to determine whether the properties still hold when available samples are relatively small. Approach: Most of the evidence on the small-sample properties of simultaneous-equation estimators comes from sampling (Monte Carlo) experiments. It is important to rank estimators on their merit when applied to small samples. This study examined the performance of five simultaneous estimation techniques using some basic characteristics of the sampling distributions rather than their full description. The characteristics considered here are the mean, the total absolute bias, and the root mean square error. Results: The results revealed that the ranking of the five estimators with respect to the Average Total Absolute Bias (ATAB) is invariant to the choice of the upper (P1) or lower (P2) triangular matrix. The FIML results, judged by the RMSE of estimates, were outstandingly good in the open-ended intervals and outstandingly poor in the closed interval (-0.051 and P2 we re-combined. Conclusion: (i) The ranking of the various simultaneous estimation methods considered, based on their small-sample properties, differs according to the correlation status of the error term, the identifiability status of the equation, and the assumed triangular matrix. (ii) The nature of the relationship under study also determined which of the criteria for judging the
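
    The three sampling-distribution characteristics used for the ranking can be computed directly from Monte Carlo replications. A minimal sketch follows; the averaging convention for "total absolute bias" is an assumption (definitions vary across studies), and the replicated estimates are invented numbers for illustration only.

```python
import math

def monte_carlo_metrics(estimates, true_value):
    """Summarize the sampling distribution of an estimator by the three
    characteristics named in the study: mean, (average) total absolute
    bias, and root mean square error."""
    n = len(estimates)
    mean = sum(estimates) / n
    tab = sum(abs(e - true_value) for e in estimates) / n
    rmse = math.sqrt(sum((e - true_value) ** 2 for e in estimates) / n)
    return mean, tab, rmse

# Ranking two hypothetical estimators by RMSE (lower is better);
# the replicate values below are illustrative, not from the paper.
ols = [1.1, 0.8, 1.3, 0.9]
fiml = [1.05, 0.97, 1.02, 0.99]
ranked = sorted([("OLS", ols), ("FIML", fiml)],
                key=lambda kv: monte_carlo_metrics(kv[1], 1.0)[2])
```
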

  19. Lateral supracerebellar infratentorial approach for microsurgical resection of large midline pineal region tumors: techniques to expand the operative corridor.

    Science.gov (United States)

    Kulwin, Charles; Matsushima, Ken; Malekpour, Mahdi; Cohen-Gadol, Aaron A

    2016-01-01

    Pineal region tumors pose certain challenges in regard to their resection: a deep surgical field, associated critical surrounding neurovascular structures, and narrow operative working corridor due to obstruction by the apex of the culmen. The authors describe a lateral supracerebellar infratentorial approach that was successfully used in the treatment of 10 large (> 3 cm) midline pineal region tumors. The patients were placed in a modified lateral decubitus position. A small lateral suboccipital craniotomy exposed the transverse sinus. Tentorial retraction sutures were used to gently rotate and elevate the transverse sinus to expand the lateral supracerebellar operative corridor. This approach placed only unilateral normal structures at risk and minimized vermian venous sacrifice. The surgeon achieved generous exposure of the caudal midline mesencephalon through a "cross-court" oblique trajectory, while avoiding excessive retraction on the culmen. All patients underwent the lateral approach with no approach-related complication. The final pathological diagnoses were consistent with meningioma in 3 cases, pilocytic astrocytoma in 3 cases, intermediate grade pineal region tumor in 2 cases, and pineoblastoma in 2 cases. The entire extent of these tumors was readily reachable through the lateral supracerebellar route. Gross-total resection was achieved in 8 (80%) of the 10 cases; in 2 cases (20%) near-total resection was performed due to adherence of these tumors to deep diencephalic veins. Large midline pineal region tumors can be removed through a unilateral paramedian suboccipital craniotomy. This approach is simple, may spare some of the midline vermian bridging veins, and may be potentially less invasive and more efficient.

  20. A Large-Scale, Multiagency Approach to Defining a Reference Network for Pacific Northwest Streams

    Science.gov (United States)

    Miller, Stephanie; Eldred, Peter; Muldoon, Ariel; Anlauf-Dunn, Kara; Stein, Charlie; Hubler, Shannon; Merrick, Lesley; Haxton, Nick; Larson, Chad; Rehn, Andrew; Ode, Peter; Vander Laan, Jake

    2016-12-01

    Aquatic monitoring programs vary widely in objectives and design. However, each program faces the unifying challenge of assessing conditions and quantifying reasonable expectations for measured indicators. A common approach for setting resource expectations is to define reference conditions that represent areas of least human disturbance, or the most natural state of a resource, characterized by the range of natural variability across a region of interest. Identification of reference sites often relies heavily on professional judgment, resulting in varying and unrepeatable methods. Standardized methods for data collection, site characterization, and reference-site selection facilitate greater cooperation among assessment programs and the development of assessment tools that are readily shareable and comparable. We illustrate an example, which can serve the broader global monitoring community, of how to create a consistent and transparent reference network for multiple stream-resource agencies. We provide a case study that offers a simple example of how reference sites can be used, at the landscape level, to link upslope management practices to a specific in-channel response. We found that intensively managed areas, particularly those with high road densities, have more fine sediment than areas with fewer roads. While this example uses data from only one of the partner agencies, if data are collected in a similar manner they can be combined to create a larger, more robust dataset. We hope that this starts a dialogue on more standardized, inter-agency ways to evaluate data. Creating more consistency in physical and biological field protocols will increase the ability to share data.

  1. Conservative approach: using decompression procedure for management of a large unicystic ameloblastoma of the mandible.

    Science.gov (United States)

    Xavier, Samuel Porfirio; de Mello-Filho, Francisco Veríssimo; Rodrigues, Willian Caetano; Sonoda, Celso Koogi; de Melo, Willian Morais

    2014-05-01

    Ameloblastoma is a relatively uncommon benign odontogenic tumor that is locally aggressive and has a high tendency to recur, despite its benign histopathologic features. This pathology can be classified into 4 groups: unicystic, solid or multicystic, peripheral, and malignant. There are 3 variants of unicystic ameloblastoma: luminal, intraluminal, and mural. In mural ameloblastoma the fibrous wall of the cyst is infiltrated with tumor nodules, and for this reason it is considered the most aggressive variant of unicystic ameloblastomas. Various treatment techniques for ameloblastomas have been proposed, including decompression, enucleation/curettage, sclerosing solution, cryosurgery, marginal resection, and aggressive resection. The literature shows that treatment of this lesion continues to be a subject of intense interest and some controversy. Thus, the authors describe a case of mural unicystic ameloblastoma of the follicular subtype in a 19-year-old patient who was successfully treated with a conservative approach, decompression. The patient has been followed up for 3 years and has remained clinically and radiographically disease-free.

  2. Large discrete resource allocation: a hybrid approach based on DEA efficiency measurement

    Directory of Open Access Journals (Sweden)

    Eliane Gonçalves Gomes

    2008-12-01

    Full Text Available Resource allocation is one of the traditional Operations Research problems. In this paper we propose a hybrid model for resource allocation that uses Data Envelopment Analysis (DEA) efficiency measures. We use Zero Sum Gains DEA models as the starting point to decrease the computational work of the step-by-step algorithm for allocating integer resources in a DEA context. Our approach is illustrated with a numerical example.

  3. A 454 sequencing approach for large scale phylogenomic analysis of the common emperor scorpion (Pandinus imperator).

    Science.gov (United States)

    Roeding, Falko; Borner, Janus; Kube, Michael; Klages, Sven; Reinhardt, Richard; Burmester, Thorsten

    2009-12-01

    In recent years, phylogenetic tree reconstructions that rely on multiple gene alignments that had been deduced from expressed sequence tags (ESTs) have become a popular method in molecular systematics. Here, we present a 454 pyrosequencing approach to infer the transcriptome of the Emperor scorpion Pandinus imperator. We obtained 428,844 high-quality reads (mean length=223+/-50 b) from total cDNA, which were assembled into 8334 contigs (mean length 422+/-313 bp) and 26,147 singletons. About 1200 contigs were successfully annotated by BLAST and orthology search. Specific analyses of eight distinct hemocyanin sequences provided further proof for the quality of the 454 reads and the assembly process. The P. imperator sequences were included in a concatenated alignment of 149 orthologous genes of 67 metazoan taxa that covers 39,842 amino acids. After removal of low-quality regions, 11,168 positions were employed for phylogenetic reconstructions. Using Bayesian and maximum likelihood methods, we obtained strongly supported monophyletic Ecdysozoa, Arthropoda (excluding Tardigrada), Euarthropoda, Pancrustacea and Hexapoda. We also recovered the Myriochelata (Chelicerata+Myriapoda). Within the chelicerates, Pycnogonida form the sister group of Euchelicerata. However, Arachnida were found paraphyletic because the Acari (mites and ticks) were recovered as sister group of a clade comprising Xiphosura, Scorpiones and Araneae. In summary, we have shown that 454 pyrosequencing is a cost-effective method that provides sufficient data and coverage depth for gene detection and multigene-based phylogenetic analyses.

  4. Modeling large Mexican urban metropolitan areas by a Vicsek Szalay approach

    Science.gov (United States)

    Murcio, Roberto; Rodríguez-Romo, Suemi

    2011-08-01

    A modified Vicsek-Szalay model is introduced. With it, experiments are performed in order to simulate the spatial morphology of the largest metropolitan region of México: the set of clusters formed by the Valle de México metropolitan area (VMMA), the Puebla metropolitan area (PMA) and the Toluca metropolitan area (TMA). This case, here called the Central México metropolitan area (CMMA), is presented in detail. To verify the effectiveness of our approach we study two other cases, the set of clusters formed by the Monterrey zone (MZ, formed by the Monterrey metropolitan area and the Saltillo City metropolitan area) and the Chihuahua zone (ChZ, formed by the Chihuahua metropolitan area, Delicias City and Cuauthemoc City), with acceptable results. In addition, we compute three different fractal measures for all our areas of interest (AOI). In this paper, we focus on the global character of these fractal measures in the description of urban geography, and we obtain local information that normally comes from inner-city structures and small-scale human decisions. Finally, we verified that our simulated urban morphologies obey Zipf's law, as actual city-size distributions do. We intend to pave the way toward understanding the spatial distribution of population in geographical space.
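
    The Zipf-law check mentioned at the end amounts to verifying that the rank-size distribution of the simulated cluster sizes has a log-log slope near -1. A minimal sketch, where the least-squares fit on log-transformed ranks is one common convention and not necessarily the authors' procedure:

```python
import math

def zipf_exponent(sizes):
    """Least-squares slope of log(size) against log(rank); Zipf's law
    predicts a slope near -1 for settlement-size distributions."""
    s = sorted(sizes, reverse=True)
    xs = [math.log(r) for r in range(1, len(s) + 1)]
    ys = [math.log(v) for v in s]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

    An ideal Zipfian population (size proportional to 1/rank) returns a slope of exactly -1; simulated morphologies would be judged by how close their fitted exponent comes to that value.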

  5. Large-pT production of D mesons at the LHCb in the parton Reggeization approach

    Science.gov (United States)

    Karpishkov, A. V.; Saleev, V. A.; Shipilova, A. V.

    2016-12-01

    The production of D mesons in proton-proton collisions at the LHCb detector is studied. We consider the single production of D0/D̄0, D±, D*±, and Ds± mesons, and correlation spectra in the production of DD̄ and DD pairs, at √S = 7 TeV and √S = 13 TeV. In the case of single D-meson production we calculate differential cross sections over the transverse momentum pT, while in pair DD̄- and DD-meson production the cross sections are calculated over the azimuthal angle difference Δφ, the rapidity difference Δy, the invariant mass M of the pair, and the pT of one meson from the pair. The cross sections are obtained at the leading order of the parton Reggeization approach using Kimber-Martin-Ryskin unintegrated parton distribution functions in a proton. To describe the D-meson production we use universal scale-dependent c-quark and gluon fragmentation functions fitted to e+e- annihilation data from CERN LEP1. Our predictions agree well with the LHCb Collaboration data, within uncertainties and without free parameters.

  6. Large-p_T production of D mesons at the LHCb in the parton Reggeization approach

    CERN Document Server

    Karpishkov, Anton; Shipilova, Alexandera

    2016-01-01

    The production of D mesons in proton-proton collisions at the LHCb detector is studied. We consider the single production of D^0, D^+, D^*, and D_s^+ mesons and correlation spectra in the production of D\bar{D} and DD pairs at sqrt{S}=7 TeV and sqrt{S}=13 TeV. In the case of single D-meson production we calculate differential cross sections over the transverse momentum p_T, while in pair D\bar{D}- and DD-meson production the cross sections are calculated over the azimuthal angle difference, the rapidity difference, the invariant mass of the pair M, and the p_T of one meson from the pair. The cross sections are obtained at the leading order of the parton Reggeization approach using Kimber-Martin-Ryskin unintegrated parton distribution functions in a proton. To describe the D-meson production we use universal scale-dependent c-quark and gluon fragmentation functions fitted to e^+e^- annihilation data from CERN LEP1. Our predictions find a good agreement with the LHCb Collaboration data within uncertainties and without...

  7. A structured approach to transforming a large public hospital emergency department via lean methodologies.

    Science.gov (United States)

    Naik, Trushar; Duroseau, Yves; Zehtabchi, Shahriar; Rinnert, Stephan; Payne, Rosamond; McKenzie, Michele; Legome, Eric

    2012-01-01

    Emergency Departments (EDs) face significant challenges in providing efficient, quality, safe, cost-effective care. Lean methodologies are a proposed framework for redesigning ED practices and processes to meet these challenges. We outline a systematic way that lean principles can be applied across the entire ED patient experience to transform a high-volume ED in a safety-net hospital. We review the change in ED performance metrics before and after lean implementation, and we discuss critical insights and key lessons learned from our lean transformation to date. The steps to implementing lean principles across the patient's ED experience are described, with specific attention to executive planning of rapid improvement experiments and the subsequent roll-out of the lean transformation over an 18-month time frame. Basic ED performance data were compared to those of the prior year. Results of the exploratory analysis (using medians, interquartile ranges, and nonparametric tests for group comparisons) showed improvement in several performance metrics after initiating the lean transformation. The approach, lessons learned, and early data of our transformation can provide critical insights for EDs seeking to incorporate continuous improvement strategies. Key lessons and unique challenges encountered in safety-net hospitals are discussed.

  8. An innovative approach for very large landslide dynamic and hydrogeological triggering study by inverse modeling (Grand Ilet landslide, Reunion Island)

    Science.gov (United States)

    Belle, P.; Aunay, B.; Join, J.-L.; Bernardie, S.

    2012-04-01

    The study of landslide control mechanisms and the modeling of displacements have interested the scientific community for several decades, with a common objective: predicting landslides in order to protect people and infrastructure. However, deterministic models require the acquisition of many data, such as pore-water pressures or mechanical properties, which can be extremely complex for very large landslides in extreme climatic conditions. An innovative modeling method is proposed for characterizing the behaviour of very large landslides using the primary data of rainfall and displacement. Here we study two very large landslides (≈ 450 Mm3) in a humid tropical climate (Salazie cirque, Reunion Island). We use an inverse modeling tool based on a global approach, with Gaussian-exponential transfer functions. Transfer functions between the rainfall input signal and the velocity output signal (permanent GPS daily data) are determined. Because of gaps in the displacement data, the 2010 and 2011 hydrologic cycles were selected for the calibration of the transfer functions. Afterwards, we model the landslide velocity from the rainfall signal from 2004 to 2011. In the case of the Grand Ilet landslide, we study the relations between the transfer-function characteristics and the coupling between the displacements and the hydrogeological functioning. For cumulated displacements, the final difference between simulations and observations over 7 years of modeling is smaller than 5%. Seasonal variations in landslide velocity are accurately modeled over a period of 7 years. Bimodal transfer functions, which dissociate rapid and slow impulse responses, are particularly effective at reproducing the recorded displacements. In particular, the rapid response makes it possible to model velocity increases after cyclonic events. In the case of the Grand Ilet landslide, the transfer-function characteristics are strongly correlated with the functioning of the landslide aquifer. Indeed, the influence times of the rapid and slow responses are consistent with a double
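
    The core of this kind of global approach is a convolution of the rainfall input with calibrated transfer functions. A minimal sketch follows, with a bimodal kernel built from two exponentials standing in for the paper's Gaussian-exponential responses; the time constants, weights, and function names are illustrative, not calibrated values from the study.

```python
import math

def impulse_response(n, tau_fast, tau_slow, w_fast=0.5):
    """Bimodal kernel: weighted sum of a rapid and a slow exponential
    response (a stand-in for Gaussian-exponential transfer functions)."""
    return [w_fast * math.exp(-t / tau_fast) / tau_fast
            + (1.0 - w_fast) * math.exp(-t / tau_slow) / tau_slow
            for t in range(n)]

def convolve(rain, h):
    """Discrete convolution: modeled velocity from the rainfall signal."""
    return [sum(rain[t - k] * h[k] for k in range(min(len(h), t + 1)))
            for t in range(len(rain))]
```

    Convolving a unit rainfall impulse returns the kernel itself; with a real daily rainfall series, the fast term reproduces velocity peaks after cyclonic events while the slow term carries the seasonal trend.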

  9. A new fragment-based approach for calculating electronic excitation energies of large systems.

    Science.gov (United States)

    Ma, Yingjin; Liu, Yang; Ma, Haibo

    2012-01-14

    We present a new fragment-based scheme to calculate the excited states of large systems without the necessity of a Hartree-Fock (HF) solution of the whole system. This method is based on an implementation of the renormalized excitonic method [M. A. Hajj et al., Phys. Rev. B 72, 224412 (2005)] at the ab initio level, which assumes that the excitation of the whole system can be expressed as a linear combination of various local excitations. We decomposed the whole system into several blocks and then constructed the effective Hamiltonians for the intra- and inter-block interactions with block canonical molecular orbitals instead of the widely used localized molecular orbitals. Accordingly, we avoided the prerequisite HF solution and the orbital-localization procedure of the popular local correlation methods. Test calculations were performed for hydrogen molecule chains at the full configuration interaction, symmetry adapted cluster/symmetry adapted cluster configuration interaction, and HF/configuration interaction singles (CIS) levels, and for more realistic polyene systems at the HF/CIS level. The calculated vertical excitation energies for the lowest excited states are in reasonable accordance with those determined by calculations on the whole systems with traditional methods, showing that our new fragment-based method can give good estimates of the low-lying energy spectra of both weak and moderate interaction systems at economical computational cost.

  10. A large-scale functional approach to uncover human genes and pathways in Drosophila

    Institute of Scientific and Technical Information of China (English)

    Rong Xu; Yuan Zhuang; Tian Xu; Kejing Deng; Yi Zhu; Yue Wu; Jing Ren; Min Wan; Shouyuan Zhao; Xiaohui Wu; Min Han

    2008-01-01

    We demonstrate the feasibility of performing a systematic screen for human gene functions in Drosophila by assaying for their ability to induce overexpression phenotypes. Over 1,500 transgenic fly lines corresponding to 236 human genes have been established. In all, 51 lines are capable of eliciting a phenotype, suggesting that the human genes are functional. These heterologous genes are functionally relevant, as we have found a similar mutant phenotype caused either by a dominant negative mutant form of the human ribosomal protein L8 gene or by RNAi downregulation of the Drosophila RPL8. Significantly, the Drosophila RPL8 mutant can be rescued by wild-type human RPL8. We also provide genetic evidence that Drosophila RPL8 is a new member of the insulin signaling pathway. In summary, the functions of many human genes appear to be highly conserved, and the ability to identify them in Drosophila represents a powerful genetic tool for large-scale analysis of human transcripts in vivo.

  11. A New Approach to Probing Large Scale Power with Peculiar Velocities

    CERN Document Server

    Feldman, H A; Feldman, Hume A.; Watkins, Richard

    1997-01-01

    We propose a new strategy to probe the power spectrum on large scales using galaxy peculiar velocities. We explore the properties of surveys that cover only two small fields in opposing directions on the sky. Surveys of this type have several advantages over those that attempt to cover the entire sky; in particular, by concentrating galaxies in narrow cones these surveys are able to achieve the density needed to measure several moments of the velocity field with only a modest number of objects, even for surveys designed to probe scales ... them in terms of the three moments to which they are most sensitive. We calculate window functions for these moments and construct a $\chi^2$ statistic which can be used to put constraints on the power spectrum. In order to explore the sensitivity of these surveys, we calculate the expectation values of the moments and their associated measurement noise as a function of survey parameters such as density and depth, and for several popular models of structure formation. We fin...
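
    Once the survey moments and their covariance (signal window functions plus measurement noise) are in hand, a $\chi^2$ statistic of the kind described is a standard quadratic form. A generic sketch; the moment vector and covariance used below are placeholders, not the paper's window-function calculation.

```python
import numpy as np

def chi2_statistic(moments, covariance):
    """Quadratic form chi^2 = v^T R^{-1} v for a vector of measured
    velocity moments v with model-plus-noise covariance R."""
    v = np.asarray(moments, dtype=float)
    R = np.asarray(covariance, dtype=float)
    return float(v @ np.linalg.solve(R, v))
```

    Comparing the statistic against the chi-squared distribution with as many degrees of freedom as there are moments then tells whether a candidate power spectrum (which sets R) is consistent with the measured moments.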

  12. Data-Gathering Scheme Using AUVs in Large-Scale Underwater Sensor Networks: A Multihop Approach

    Directory of Open Access Journals (Sweden)

    Jawaad Ullah Khan

    2016-09-01

    Full Text Available In this paper, we propose a data-gathering scheme for hierarchical underwater sensor networks, where multiple Autonomous Underwater Vehicles (AUVs) are deployed over large-scale coverage areas. The deployed AUVs constitute an intermittently connected multihop network, through inter-AUV synchronization (in this paper, synchronization means an interconnection between nodes for communication), for forwarding data to the designated sink. In such a scenario, the performance of the multihop communication depends upon the synchronization among the vehicles. The mobility parameters of the vehicles vary continuously because of the constantly changing underwater currents. These variations in the AUV mobility parameters reduce the inter-AUV synchronization frequency, contributing to delays in the multihop communication. The proposed scheme improves the AUV synchronization frequency by permitting neighboring AUVs to share their status information via a pre-selected node, called an agent node, at the static layer of the network. We evaluate the proposed scheme in terms of the AUV synchronization frequency, vertical delay (node→AUV), horizontal delay (AUV→AUV), end-to-end delay, and packet loss ratio. Simulation results show that the proposed scheme significantly reduces the aforementioned delays without the synchronization time-out process employed in conventional works.

  13. Large Eddy Simulation of Autoignition in a Turbulent Hydrogen Jet Flame Using a Progress Variable Approach

    Directory of Open Access Journals (Sweden)

    Rohit Kulkarni

    2012-01-01

    Full Text Available The potential of a progress variable formulation for predicting autoignition and subsequent kernel development in a nonpremixed jet flame is explored in the LES (Large Eddy Simulation) context. The chemistry is tabulated as a function of the mixture fraction and a composite progress variable, which is defined as a combination of an intermediate and a product species. Transport equations are solved for the mixture fraction and the progress variable. The filtered mean source term for the progress variable is closed using a probability density function of presumed shape for the mixture fraction. Subgrid fluctuations of the progress variable conditioned on the mixture fraction are neglected. A diluted hydrogen jet issuing into a turbulent coflow of preheated air is chosen as a test case. The model predicts ignition lengths and subsequent kernel growth in good agreement with experiment, without any adjustment of model parameters. The autoignition length predicted by the model depends noticeably on the chemical mechanism on which the tabulated chemistry is based. Compared to models using detailed chemistry, a significant reduction in computational cost can be realized with the progress variable formulation.
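
    The closure described above, a filtered source term obtained by integrating the tabulated source against a presumed mixture-fraction PDF, can be sketched numerically. The beta shape is the usual presumed-PDF choice; the source function, the moment values, and the midpoint-rule quadrature below are illustrative assumptions, not the paper's tabulation.

```python
import math

def beta_pdf(z, mean, var):
    """Presumed beta PDF for the mixture fraction, 0 < z < 1
    (requires 0 < var < mean * (1 - mean))."""
    g = mean * (1.0 - mean) / var - 1.0
    a, b = mean * g, (1.0 - mean) * g
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return z ** (a - 1.0) * (1.0 - z) ** (b - 1.0) / B

def filtered_source(omega, mean, var, n=2000):
    """Midpoint-rule closure: filtered source = integral of
    omega(z) * P(z) dz, neglecting subgrid progress-variable
    fluctuations as in the model above."""
    dz = 1.0 / n
    return sum(omega((i + 0.5) * dz) * beta_pdf((i + 0.5) * dz, mean, var) * dz
               for i in range(n))
```

    In practice this integral is pre-computed over a table of mixture-fraction mean and variance, so the LES only performs table lookups at run time.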

  14. Information Theoretic Approaches to Rapid Discovery of Relationships in Large Climate Data Sets

    Science.gov (United States)

    Knuth, Kevin H.; Rossow, William B.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Mutual information, as the asymptotic Bayesian measure of independence, is an excellent starting point for investigating the existence of possible relationships among climate-relevant variables in large data sets. As mutual information is a nonlinear function of its arguments, it is not beholden to the assumption of a linear relationship between the variables in question and can reveal features missed in linear correlation analyses. However, as mutual information is symmetric in its arguments, it can only reveal the probability that two variables are related; it provides no information as to how they are related. Specifically, causal interactions or a relation based on a common cause cannot be detected. For this reason we also investigate the utility of a related quantity called the transfer entropy. The transfer entropy can be written as a difference between mutual informations and has the capability to reveal whether and how the variables are causally related. The application of these information-theoretic measures is tested on some familiar examples using data from the International Satellite Cloud Climatology Project (ISCCP) to identify relations between global cloud cover and other variables, including equatorial Pacific sea surface temperature (SST), over seasonal and El Niño Southern Oscillation (ENSO) cycles.
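
    A plug-in estimate of mutual information from paired discrete samples makes the symmetry property concrete. A minimal sketch; binning or discretization of continuous climate variables is assumed to have been done upstream, and the estimator is the simple empirical one, not whatever estimator the study used.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

    Identical balanced binary series give I = 1 bit and independent ones give 0; swapping the arguments never changes the value, which is exactly why the asymmetric transfer entropy is needed to extract directionality.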

  15. Artificial intelligence approach to planning the robotic assembly of large tetrahedral truss structures

    Science.gov (United States)

    Homemdemello, Luiz S.

    1992-01-01

    An assembly planner for tetrahedral truss structures is presented. To overcome the difficulties due to the large number of parts, the planner exploits the simplicity and uniformity of the shapes of the parts and the regularity of their interconnection. The planning automation is based on the computational formalism known as a production system. The global database consists of a hexagonal-grid representation of the truss structure. This representation captures the regularity of tetrahedral truss structures and their multiple hierarchies. It maps into quadratic grids and can be implemented in a computer by using a two-dimensional array data structure. By maintaining the multiple hierarchies explicitly in the model, the choice of a particular hierarchy is only made when needed, thus allowing a more informed decision. Furthermore, testing the preconditions of the production rules is simple because the patterned way in which the struts are interconnected is incorporated into the topology of the hexagonal grid. A directed graph representation of assembly sequences allows the use of both graph-search and backtracking control strategies.

  16. Gametic phase estimation over large genomic regions using an adaptive window approach

    Directory of Open Access Journals (Sweden)

    Excoffier Laurent

    2003-11-01

    The authors present ELB, an easy-to-program and computationally fast algorithm for inferring gametic phase in population samples of multilocus genotypes. Phase updates are made on the basis of a window of neighbouring loci, and the window size varies according to the local level of linkage disequilibrium. Thus, ELB is particularly well suited to problems involving many loci and/or relatively large genomic regions, including those with variable recombination rate. The authors have simulated population samples of single nucleotide polymorphism genotypes with varying levels of recombination and marker density, and find that ELB provides better local estimation of gametic phase than the PHASE or HTYPER programs, while its global accuracy is broadly similar. The relative improvement in local accuracy increases both with increasing recombination and with increasing marker density. Short tandem repeat (STR, or microsatellite) simulation studies demonstrate ELB's superiority over PHASE both globally and locally. Missing data are handled by ELB; simulations show that phase recovery is virtually unaffected by up to 2 per cent of missing data, but that phase estimation is noticeably impaired beyond this amount. The authors also applied ELB to datasets obtained from random pairings of 42 human X chromosomes typed at 97 diallelic markers in a 200 kb low-recombination region. Once again, they found ELB to have consistently better local accuracy than PHASE or HTYPER, while its global accuracy was close to the best.
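    The adaptive-window idea can be illustrated with a small sketch (this is not the ELB algorithm itself; the window-growing rule, the r² measure of linkage disequilibrium, and the threshold are all illustrative assumptions): a window around a focal locus keeps expanding while adjacent loci remain strongly correlated, so tightly linked regions get wider windows.

    ```python
    import numpy as np

    def adaptive_window(genotypes, focal, r2_min=0.3, max_half=10):
        """genotypes: (individuals, loci) array of 0/1/2 allele counts.
        Grow a window [lo, hi] around `focal` while the squared correlation
        (a simple linkage-disequilibrium proxy) between adjacent loci stays
        above r2_min. Returns the inclusive locus range (lo, hi)."""
        n_loci = genotypes.shape[1]

        def r2(i, j):
            c = np.corrcoef(genotypes[:, i], genotypes[:, j])[0, 1]
            return 0.0 if np.isnan(c) else c * c

        lo = hi = focal
        for _ in range(max_half):
            if lo > 0 and r2(lo - 1, lo) >= r2_min:
                lo -= 1
            if hi < n_loci - 1 and r2(hi, hi + 1) >= r2_min:
                hi += 1
        return lo, hi

    # Loci 0-4 are perfect copies of one another (complete LD);
    # loci 5-9 are independent draws (no LD).
    rng = np.random.default_rng(0)
    base = rng.integers(0, 3, size=200)
    G = np.column_stack([base] * 5 + [rng.integers(0, 3, size=200) for _ in range(5)])
    print(adaptive_window(G, focal=2))
    ```

    The window around locus 2 expands across the fully linked block (loci 0 through 4) and stops at the LD boundary, mirroring how ELB's window widens in low-recombination regions and narrows where recombination breaks up disequilibrium.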

  17. Facilitating learning in large lecture classes: testing the "teaching team" approach to peer learning.

    Science.gov (United States)

    Stanger-Hall, Kathrin F; Lang, Sarah; Maas, Martha

    2010-01-01

    We tested the effect of voluntary peer-facilitated study groups on student learning in large introductory biology lecture classes. The peer facilitators (preceptors) were trained as part of a Teaching Team (faculty, graduate assistants, and preceptors) by faculty and Learning Center staff. Each preceptor offered one weekly study group to all students in the class. All individual study groups were similar in that they applied active-learning strategies to the class material, but they differed in the actual topics or questions discussed, which were chosen by the individual study groups. Study group participation was correlated with reduced failing grades and course dropout rates in both semesters, and participants scored better on the final exam and earned higher course grades than nonparticipants. In the spring semester the higher scores were clearly due to a significant study group effect beyond ability (grade point average). In contrast, the fall study groups had a small but nonsignificant effect after accounting for student ability. We discuss the differences between the two semesters and offer suggestions on how to implement teaching teams to optimize learning outcomes, including student feedback on study groups.

  18. Binary Large Object-Based Approach for QR Code Detection in Uncontrolled Environments

    Directory of Open Access Journals (Sweden)

    Omar Lopez-Rincon

    2017-01-01

    Quick Response (QR) barcode detection in uncontrolled environments is still a challenging task despite many existing applications for finding 2D symbols. The main disadvantage of recent applications for QR code detection is low performance on rotated and distorted single or multiple symbols in images with variable illumination and the presence of noise. In this paper, a particular solution for QR code detection in uncontrolled environments is presented. The proposal consists of recognizing geometrical features of the QR code using a binary large object (BLOB)-based algorithm with subsequent iterative filtering of QR symbol position detection patterns, which does not require the complex processing and classifier training frequently used for these purposes. High precision and speed are achieved by adaptive threshold binarization using integral images. In contrast to well-known scanners, which fail to detect QR codes with medium to strong blurring, significant nonuniform illumination, considerable symbol deformation, and noise, the proposed technique provides a high recognition rate of 80%–100% at a speed compatible with real-time applications. In particular, processing time varies from 200 ms to 800 ms per image for single or multiple QR codes detected simultaneously, at resolutions from 640 × 480 to 4080 × 2720, respectively.
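    The integral-image adaptive threshold mentioned in the abstract can be sketched as follows (a minimal illustration, not the authors' implementation; the window size and offset constant are assumed values): the integral image makes the mean of any rectangular window available in four lookups, so each pixel can be compared against its local mean in constant time, which is what tolerates nonuniform illumination.

    ```python
    import numpy as np

    def adaptive_threshold(img, win=15, c=7):
        """Binarize a grayscale image: a pixel is white (255) if it exceeds
        the mean of its win x win neighbourhood minus offset c, else black."""
        h, w = img.shape
        # Integral image with one row/column of zero padding, so the sum of
        # any window is ii[y1,x1] - ii[y0,x1] - ii[y1,x0] + ii[y0,x0].
        ii = np.zeros((h + 1, w + 1), dtype=np.int64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        half = win // 2
        out = np.zeros_like(img, dtype=np.uint8)
        for y in range(h):
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            for x in range(w):
                x0, x1 = max(0, x - half), min(w, x + half + 1)
                area = (y1 - y0) * (x1 - x0)
                s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
                # pixel > mean - c  <=>  pixel * area > s - c * area
                out[y, x] = 255 if int(img[y, x]) * area > s - c * area else 0
        return out
    ```

    Because the threshold is local, a dark QR module on a brightly lit patch and the same module in shadow both binarize correctly, whereas a single global threshold would lose one of them.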

  19. An efficient approach for preprocessing data from a large-scale chemical sensor array.

    Science.gov (United States)

    Leo, Marco; Distante, Cosimo; Bernabei, Mara; Persaud, Krishna

    2014-09-24

    In this paper, an artificial olfactory system (Electronic Nose) that mimics the biological olfactory system is introduced. The device consists of a Large-Scale Chemical Sensor Array (16,384 sensors, made of 24 different kinds of conducting polymer materials) that supplies data to software modules, which perform advanced data processing. In particular, the paper concentrates on the software components, consisting first of a crucial step that normalizes the heterogeneous sensor data and reduces their inherent noise. The cleaned data are then supplied as input to a data reduction procedure that extracts the most informative and discriminant directions, in order to obtain an efficient representation in a lower-dimensional space where it is easier to find a robust mapping between the observed outputs and the characteristics of the odors input to the device. Experimental qualitative evidence of the validity of the procedure is given by analyzing data acquired for two different pure analytes and their binary mixtures. Moreover, a classification task is performed to explore the possibility of automatically recognizing pure compounds and predicting binary mixture concentrations.
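    A normalize-then-reduce pipeline of the kind described can be sketched as follows (an illustrative sketch under assumed choices, not the authors' pipeline: per-sensor z-score normalization and a PCA projection via SVD stand in for their normalization and direction-extraction steps).

    ```python
    import numpy as np

    def preprocess(X, n_components=3):
        """X: (samples, sensors) raw responses -> (samples, n_components).
        Z-scores each sensor channel, then projects onto the directions of
        largest variance (PCA via SVD of the normalized data)."""
        Xn = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # per-sensor z-score
        _, _, Vt = np.linalg.svd(Xn, full_matrices=False)    # rows of Vt = PCs
        return Xn @ Vt[:n_components].T

    # Heterogeneous sensor scales, as in a mixed-material array.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 64)) * rng.uniform(0.1, 10.0, size=64)
    Z = preprocess(X, n_components=3)
    print(Z.shape)
    ```

    The z-score step matters here: without it, the few sensors with the largest raw response ranges would dominate the extracted directions regardless of how informative they actually are.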

  20. A common sense approach to consequence analysis at a large DOE site. Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    O'Kula, K.R.; McKinley, M.S.; East, J.M.

    1992-12-31

    The primary objective of the Probabilistic Safety Assessment (PSA) at the U.S. Department of Energy (DOE) Savannah River Site (SRS) is to quantify the health and economic risks posed by K Reactor operation to the nearby offsite and onsite areas from highly unlikely severe accidents. The overall risk analyses have also been instrumental as defensible bases for analyzing existing safety margins of the restart configuration; determining component, human action, and engineering system vulnerabilities; comparing measures of risk to DOE and commercial guidelines; and prioritizing risk-significant improvements. The key final phase of these probabilistic risk calculations, a third level of analysis or Level 3 PSA, requires the determination of the conditional consequences to onsite workers and the DOE reservation facilities, given that low-probability, postulated fuel-melting accidents with accompanying atmospheric releases have occurred. A modified version of the commercial reactor-based MACCS 1.5 code, MACCS/ON, is used in the context of the SRS PSA to perform the consequence determinations. The updated code is applicable to other large DOE sites for risk analyses of facility operations, and is compatible with proposed modifications planned by the code developers, Sandia National Laboratories.