The large deviation approach to statistical mechanics
International Nuclear Information System (INIS)
Touchette, Hugo
2009-01-01
The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein's theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.
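To make the exponential-order estimates concrete, here is a small self-contained sketch (parameter choices are illustrative, not from the review) checking Cramér's theorem for the sample mean of fair coin flips: the tail probability P(S_n/n ≥ x) decays as e^{−nI(x)} with rate function I(x) = x ln(x/p) + (1−x) ln((1−x)/(1−p)).

```python
import math

def rate_function(x, p):
    # Cramer rate function for the sample mean of Bernoulli(p) variables,
    # valid for 0 < x < 1: I(x) = x ln(x/p) + (1 - x) ln((1 - x)/(1 - p)).
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

def log_prob_mean_at_least(n, x, p):
    # Exact log P(S_n / n >= x) for S_n ~ Binomial(n, p), computed with a
    # log-sum-exp over binomial log-probabilities to avoid underflow.
    k0 = math.ceil(n * x)
    logs = [math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p)
            for k in range(k0, n + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(v - m) for v in logs))

p, x = 0.5, 0.7
for n in (100, 1000, 10000):
    # -(1/n) ln P(S_n/n >= x) approaches I(x) as n grows.
    print(n, -log_prob_mean_at_least(n, x, p) / n, rate_function(x, p))
```

The subexponential prefactor shows up as an O(ln n / n) gap that closes as n increases, which is exactly the sense in which large deviation theory gives "exponential-order" estimates.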
Deuschel, Jean-Dominique
2001-01-01
This is the second printing of the book first published in 1988. The first four chapters of the volume are based on lectures given by Stroock at MIT in 1987. They form an introduction to the basic ideas of the theory of large deviations and make a suitable package on which to base a semester-length course for advanced graduate students with a strong background in analysis and some probability theory. A large selection of exercises presents important material and many applications. The last two chapters present various non-uniform results (Chapter 5) and outline the analytic approach that allow
Varadhan, S R S
2016-01-01
The theory of large deviations deals with rates at which probabilities of certain events decay as a natural parameter in the problem varies. This book, which is based on a graduate course on large deviations at the Courant Institute, focuses on three concrete sets of examples: (i) diffusions with small noise and the exit problem, (ii) large time behavior of Markov processes and their connection to the Feynman-Kac formula and the related large deviation behavior of the number of distinct sites visited by a random walk, and (iii) interacting particle systems, their scaling limits, and large deviations from their expected limits. For the most part the examples are worked out in detail, and in the process the subject of large deviations is developed. The book will give the reader a flavor of how large deviation theory can help in problems that are not posed directly in terms of large deviations. The reader is assumed to have some familiarity with probability, Markov processes, and interacting particle systems.
Large deviations and idempotent probability
Puhalskii, Anatolii
2001-01-01
In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...
International Nuclear Information System (INIS)
Chertkov, Michael; Kolokolov, Igor; Lebedev, Vladimir
2012-01-01
The standard definition of the stochastic risk-sensitive linear–quadratic (RS-LQ) control depends on the risk parameter, which is normally left to be set exogenously. We reconsider the classical approach and suggest two alternatives that resolve this spurious freedom naturally. One approach consists in seeking the minimum of the tail of the probability distribution function (PDF) of the cost functional at some large fixed value. The other option suggests minimizing the expectation value of the cost functional under a constraint on the value of the PDF tail. Under the assumption of resulting control stability, both problems are reduced to static optimizations over a stationary control matrix. The solutions are illustrated using the examples of scalar and 1D chain (string) systems. The large-deviation self-similar asymptotics of the cost functional PDF are analyzed.
A large deviations approach to the transient of the Erlang loss model
Mandjes, M.R.H.; Ridder, Annemarie
2001-01-01
This paper deals with the transient behavior of the Erlang loss model. After scaling both arrival rate and number of trunks, an asymptotic analysis of the blocking probability is given. Apart from that, the most likely path to blocking is given. Compared to Shwartz and Weiss [Large Deviations for
A large deviations approach to limit theory for heavy-tailed time series
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Wintenberger, Olivier
2016-01-01
and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results...
Large deviations and portfolio optimization
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A central point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use Cramér's theory of large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
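The distinction between average return and typical return in multiplicative processes, highlighted in the abstract above, can be sketched numerically. The two-point return distribution below is a hypothetical choice for illustration, not taken from the paper: it is picked so that the mean wealth grows while the typical (median) trajectory decays.

```python
import math, random

random.seed(1)

# Hypothetical one-step return factors: each step the wealth is multiplied
# by `up` or `down` with equal probability.
up, down = 2.0, 0.45
mean_factor = 0.5 * (up + down)                       # E[r] = 1.225 > 1
typical_rate = 0.5 * (math.log(up) + math.log(down))  # E[ln r] < 0

# Although the *average* wealth grows, the *typical* trajectory decays,
# because the mean is dominated by exponentially rare lucky paths.
n_steps, n_paths = 200, 5000
finals = []
for _ in range(n_paths):
    logw = 0.0
    for _ in range(n_steps):
        logw += math.log(up if random.random() < 0.5 else down)
    finals.append(logw)

median_logw = sorted(finals)[n_paths // 2]
print("E[r] per step:", mean_factor)
print("typical growth rate (theory):", typical_rate)
print("median log-wealth per step (simulated):", median_logw / n_steps)
```

The median per-step log-growth matches E[ln r], not ln E[r]; the gap between the two is precisely where large deviation effects enter portfolio choice.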
Approaching nanometre accuracy in measurement of the profile deviation of a large plane mirror
International Nuclear Information System (INIS)
Müller, Andreas; Hofmann, Norbert; Manske, Eberhard
2012-01-01
The interferometric nanoprofilometer (INP), developed at the Institute of Process Measurement and Sensor Technology at the Ilmenau University of Technology, is a precision device for measuring the profile deviations of plane mirrors with a profile length of up to 250 mm at the nanometre scale. As its expanded uncertainty of U(l) = 7.8 nm at a confidence level of p = 95% (k = 2) was mainly influenced by the uncertainty of the straightness standard (3.6 nm) and the uncertainty caused by the signal and demodulation errors of the interferometer signals (1.2 nm), these two sources of uncertainty have been the subject of recent analyses and modifications. To measure the profile deviation of the standard mirror we performed a classic three-flat test using the INP. The three-flat test consists of a combination of measurements between three different test flats. The shape deviations of the three flats can then be determined by applying a least-squares solution to the resulting equation system. The results of this three-flat test showed surprisingly good consistency, enabling us to correct this systematic error in profile deviation measurements and to reduce the uncertainty component of the standard mirror to 0.4 nm. Another area of research is the signal and demodulation error arising during the interpretation of the interferometer signals. In the case of the interferometric nanoprofilometer, the special challenge is that, during a scan of perfectly aligned 250 mm long mirrors, the maximum path length differences do not even cover half of an interference fringe and are therefore too small for proper interpolation and correction. By applying a simple weighting method to the interferometer data, the common ellipse fitting could be performed successfully and the demodulation error was greatly reduced. The remaining uncertainty component is less than 0.5 nm. In summary, we were successful in greatly reducing two major systematic errors. The
Entanglement transitions induced by large deviations
Bhosale, Udaysinh T.
2017-12-01
The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[−βN²Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, and the rate function Φ(ζ) is calculated exactly. The corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. The effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the partial transpose ρ₁₂^Γ of the density matrix. The density of states of ρ₁₂^Γ is found to be close to Wigner's semicircle law under these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts an entanglement transition across a critical large deviation parameter ζ. Logarithmic negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.
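A minimal numerical companion to the objects in this abstract (the dimensions and seed are arbitrary small choices, not from the paper): the Schmidt eigenvalues of a random bipartite pure state are the squared singular values of its amplitude matrix, and the von Neumann entropy of entanglement follows directly.

```python
import numpy as np

rng = np.random.default_rng(0)
NA, NB = 8, 8  # small illustrative subsystem dimensions

# Random pure state of a bipartite system, written as an NA x NB amplitude
# matrix with complex Gaussian entries, then normalized.
psi = rng.normal(size=(NA, NB)) + 1j * rng.normal(size=(NA, NB))
psi /= np.linalg.norm(psi)

# Schmidt eigenvalues = squared singular values of psi
# = eigenvalues of the reduced density matrix of subsystem A.
schmidt = np.linalg.svd(psi, compute_uv=False) ** 2

# von Neumann entropy of entanglement, bounded above by ln(NA).
S = -np.sum(schmidt * np.log(schmidt))
print("smallest Schmidt eigenvalue:", schmidt.min())
print("entanglement entropy:", S, "<= ln(NA) =", np.log(NA))
```

Sampling many such states and histogramming the smallest Schmidt eigenvalue is the brute-force counterpart of the Coulomb gas calculation, though it can only reach the bulk of the distribution, not the exp[−βN²Φ(ζ)] tails.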
Transport Coefficients from Large Deviation Functions
Gao, Chloe Ya; Limmer, David T.
2017-01-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green–Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
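The idea that a transport coefficient can be read off from a scaled cumulant generating function can be sketched on a toy current: an unbiased ±1 random walk, chosen here purely for illustration (the paper's diffusion Monte Carlo machinery is not reproduced, and brute-force averaging replaces importance sampling).

```python
import math, random

random.seed(2)

# Toy "current": J_t = sum of t i.i.d. +/-1 steps of an unbiased walker.
# Its scaled cumulant generating function is lambda(k) = ln cosh(k), and
# the transport coefficient (here the diffusion constant, i.e. variance
# per unit time) is lambda''(0) = 1, the analogue of a Green-Kubo integral.

def scgf_estimate(k, t=200, n_traj=5000):
    # Brute-force estimate of (1/t) ln <exp(k * J_t)>.  For larger k this
    # direct average is dominated by rare trajectories, which is exactly
    # why trajectory-based importance sampling becomes necessary.
    acc = 0.0
    for _ in range(n_traj):
        j = sum(1 if random.random() < 0.5 else -1 for _ in range(t))
        acc += math.exp(k * j)
    return math.log(acc / n_traj) / t

k = 0.1
est = scgf_estimate(k)
exact = math.log(math.cosh(k))
# Central second difference at k = 0 (lambda(0) = 0) gives lambda''(0).
diffusion = (est + scgf_estimate(-k)) / k**2
print("lambda(k):", est, "exact:", exact)
print("estimated diffusion constant:", diffusion)
```

The second derivative of the SCGF at zero bias reproduces the current's variance per unit time, which is how "arbitrary transport coefficients are derivable" once the large deviation function is known.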
The large deviations theorem and ergodicity
International Nuclear Information System (INIS)
Gu Rongbao
2007-01-01
In this paper, some relationships between stochastic and topological properties of dynamical systems are studied. For a continuous map f from a compact metric space X into itself, we show that if f satisfies the large deviations theorem then it is topologically ergodic. Moreover, we introduce the notion of topological strong ergodicity, and prove that if f is a topologically strongly ergodic map satisfying the large deviations theorem then it is sensitively dependent on initial conditions.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
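A stripped-down sketch of the population-dynamics idea, under strong simplifying assumptions: the dynamics here is i.i.d. fair coin flips (not the contact process), so the copies carry no state, the cloning step is trivial, and the exact answer is available for comparison. The population size and trajectory length below are arbitrary choices, so the finite-size and finite-time effects the paper analyzes are present but small.

```python
import math, random

random.seed(3)

def cloning_scgf(s, n_copies=2000, t_steps=500):
    # Population-dynamics estimate of lambda(s) = (1/t) ln <exp(-s * A_t)>,
    # where A_t counts the 1s among t fair coin flips.  Each step, every copy
    # draws a flip and receives weight exp(-s) (flip = 1) or 1 (flip = 0);
    # lambda accumulates the log of the population-averaged weight.
    log_total = 0.0
    for _ in range(t_steps):
        weights = [math.exp(-s) if random.random() < 0.5 else 1.0
                   for _ in range(n_copies)]
        log_total += math.log(sum(weights) / n_copies)
        # For i.i.d. flips the copies carry no state, so resampling is a
        # no-op; for correlated dynamics one would clone/kill copies in
        # proportion to these weights before the next step.
    return log_total / t_steps

est = cloning_scgf(1.0)
exact = math.log(0.5 * (1.0 + math.exp(-1.0)))  # closed form for i.i.d. flips
print(est, exact)
```

For genuinely correlated dynamics the resampling step is essential and introduces the finite-population bias whose scaling the paper exploits.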
Large Deviations and Asymptotic Methods in Finance
Gatheral, Jim; Gulisashvili, Archil; Jacquier, Antoine; Teichmann, Josef
2015-01-01
Topics covered in this volume (large deviations, differential geometry, asymptotic expansions, central limit theorems) give a full picture of the current advances in the application of asymptotic methods in mathematical finance, and thereby provide rigorous solutions to important mathematical and financial issues, such as implied volatility asymptotics, local volatility extrapolation, systemic risk and volatility estimation. This volume gathers together ground-breaking results in this field by some of its leading experts. Over the past decade, asymptotic methods have played an increasingly important role in the study of the behaviour of (financial) models. These methods provide a useful alternative to numerical methods in settings where the latter may lose accuracy (in extremes such as small and large strikes, and small maturities), and lead to a clearer understanding of the behaviour of models, and of the influence of parameters on this behaviour. Graduate students, researchers and practitioners will find th...
Large Deviations for Two-Time-Scale Diffusions, with Delays
International Nuclear Information System (INIS)
Kushner, Harold J.
2010-01-01
We consider the problem of large deviations for a two-time-scale reflected diffusion process, possibly with delays in the dynamical terms. The Dupuis-Ellis weak convergence approach is used. It is perhaps the most intuitive and simplest for the problems of concern. The results have applications to the problem of approximating optimal controls for two-time-scale systems via use of the averaged equation.
Deviations from Newton's law in supersymmetric large extra dimensions
International Nuclear Information System (INIS)
Callin, P.; Burgess, C.P.
2006-01-01
Deviations from Newton's inverse-square law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However, SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant naturally small also keeps the extra-dimensional moduli effectively massless, leading to deviations from general relativity of the scalar-tensor form in the far infrared. We here explore the deviations from Newton's law which are predicted over micron distances, and show the ways in which they differ from, and resemble, those of the non-supersymmetric case.
Large deviations for noninteracting infinite-particle systems
International Nuclear Information System (INIS)
Donsker, M.D.; Varadhan, S.R.S.
1987-01-01
A large deviation property is established for noninteracting infinite-particle systems. Previous large deviation results obtained by the authors involved a single I-function because the cases treated always involved a unique invariant measure for the process. In the context of this paper there is an infinite family of invariant measures and a corresponding infinite family of I-functions governing the large deviations.
Two examples of non strictly convex large deviations
De Marco, Stefano; Jacquier, Antoine; Roome, Patrick
2016-01-01
We present two examples of a large deviations principle where the rate function is not strictly convex. This is motivated by a model used in mathematical finance (the Heston model), and adds a new item to the zoology of non strictly convex large deviations. For one of these examples, we show that the rate function of the Cramér-type large deviations coincides with that of the Freidlin–Wentzell type when contraction principles are applied.
Towards a large deviation theory for strongly correlated systems
International Nuclear Information System (INIS)
Ruiz, Guiomar; Tsallis, Constantino
2012-01-01
A large-deviation connection of statistical mechanics is provided by N independent binary variables, the (N→∞) limit yielding Gaussian distributions. The probability of n≠N/2 out of N throws is governed by e^{−Nr}, with r related to the entropy. Large deviations for a strongly correlated model characterized by indices (Q,γ) are studied, the (N→∞) limit yielding Q-Gaussians (Q→1 recovers a Gaussian). Its large deviations are governed by e_q^{−Nr_q} (∝ 1/N^{1/(q−1)}, q>1), with q = (Q−1)/(γ[3−Q]) + 1. This illustration opens the door towards a large-deviation foundation of nonextensive statistical mechanics. Highlights: We introduce the formalism of relative entropy for a single random binary variable and its q-generalization. We study a model of N strongly correlated binary random variables and their large-deviation probabilities. The large-deviation probability of the strongly correlated model exhibits a q-exponential decay whose argument is proportional to N, as extensivity requires. Our results point to a q-generalized large deviation theory and suggest a large-deviation foundation of nonextensive statistical mechanics.
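The q-exponential governing these decays can be written down directly; the following sketch (with arbitrary sample values) checks that q → 1 recovers the ordinary exponential and that q > 1 gives the power-law decay ∝ 1/N^{1/(q−1)} quoted above.

```python
import math

def q_exp(x, q):
    # Tsallis q-exponential: e_q(x) = [1 + (1 - q) x]_+^{1/(1 - q)}; e_1 = exp.
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# q -> 1 recovers the ordinary exponential decay e^{-Nr}:
print(q_exp(-2.0, 1.0001), math.exp(-2.0))

# For q > 1, e_q(-N r_q) decays as a power law, asymptotically
# proportional to 1/N^{1/(q-1)} (here q = 1.5, so the exponent is -2):
N, r_q, q = 100.0, 1.0, 1.5
print(q_exp(-N * r_q, q), ((q - 1.0) * N * r_q) ** (-1.0 / (q - 1.0)))
```

The contrast between the exponential decay at q = 1 and the power-law decay at q > 1 is the quantitative content of the "q-exponential decay" in the highlights.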
Limiting values of large deviation probabilities of quadratic statistics
Jeurnink, Gerardus A.M.; Kallenberg, W.C.M.
1990-01-01
Application of exact Bahadur efficiencies in testing theory or exact inaccuracy rates in estimation theory needs evaluation of large deviation probabilities. Because of the complexity of the expressions, frequently a local limit of the nonlocal measure is considered. Local limits of large deviation
Large deviations in the presence of cooperativity and slow dynamics
Whitelam, Stephen
2018-06-01
We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
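For a two-state switching model like those studied above, the dynamical large-deviation formalism reduces to diagonalizing a tilted transition matrix. The sketch below (a discrete-time chain with an arbitrary switching probability, chosen for illustration rather than taken from the paper) computes the scaled cumulant generating function of the number of switches this way.

```python
import numpy as np

c = 0.3  # illustrative per-step switching probability
P = np.array([[1.0 - c, c],
              [c, 1.0 - c]])  # two-state switching model, discrete time

def scgf(s):
    # Scaled cumulant generating function of the number of switches:
    # the log of the largest eigenvalue of the tilted matrix in which
    # every switching element of P is weighted by exp(-s).
    tilt = np.array([[1.0, np.exp(-s)],
                     [np.exp(-s), 1.0]])
    return np.log(np.max(np.linalg.eigvals(P * tilt).real))

# For this symmetric chain the tilted eigenvalues are (1-c) +/- c e^{-s},
# so lambda(s) = ln(1 - c + c e^{-s}) and lambda'(0) = -c (mean switch rate).
print(scgf(0.5), np.log(1.0 - c + c * np.exp(-0.5)))
```

Here the SCGF is smooth in s; the singularities discussed in the paper arise when switching is cooperative or its time scale diverges, in which case this largest-eigenvalue picture develops the non-analyticities that may (or may not) signal a dynamical phase transition.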
Sample-path large deviations in credit risk
Leijdekker, V.J.G.; Mandjes, M.R.H.; Spreij, P.J.C.
2011-01-01
The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a
An explicit local uniform large deviation bound for Brownian bridges
Wittich, O.
2005-01-01
By comparing curve length in a manifold and a standard sphere, we prove a local uniform bound for the exponent in the Large Deviation formula that describes the concentration of Brownian bridges to geodesics.
An absolute deviation approach to assessing correlation.
Gorard, S.
2015-01-01
This paper describes two possible alternatives to the more traditional Pearson’s R correlation coefficient, both based on using the mean absolute deviation, rather than the standard deviation, as a measure of dispersion. Pearson’s R is well-established and has many advantages. However, these newer variants also have several advantages, including greater simplicity and ease of computation, and perhaps greater tolerance of underlying assumptions (such as the need for linearity). The first alter...
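One plausible absolute-deviation analogue of Pearson's R, sketched below, illustrates the general idea; it is not necessarily Gorard's exact definition. It normalizes the cross-deviation sum by the summed products of absolute deviations, which keeps the result in [−1, 1] by the triangle inequality.

```python
def abs_dev_corr(xs, ys):
    # Illustrative absolute-deviation correlation (hypothetical variant,
    # not necessarily the paper's): cross-deviation sum normalized by
    # products of absolute deviations.  Always lies in [-1, 1], and equals
    # +1 or -1 exactly for a perfect positive or negative linear relation.
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum(abs(x - mx) * abs(y - my) for x, y in zip(xs, ys))
    return num / den

xs = [1, 2, 3, 4, 5]
print(abs_dev_corr(xs, [2 * x + 1 for x in xs]))  # 1.0
print(abs_dev_corr(xs, [-x for x in xs]))         # -1.0
```

Unlike Pearson's R it never squares a deviation, which is the sense in which absolute-deviation measures are simpler to compute and arguably more tolerant of outliers.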
Large-deviation theory for diluted Wishart random matrices
Castillo, Isaac Pérez; Metz, Fernando L.
2018-03-01
Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economics. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ ℝ₊, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{−NΨ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
Large-deviation properties of resilience of power grids
International Nuclear Information System (INIS)
Dewenter, Timo; Hartmann, Alexander K
2015-01-01
We study the distributions of the resilience of power flow models against transmission line failures via a so-called backup capacity. We consider three ensembles of random networks, and in addition, the topology of the British transmission power grid. The three ensembles are Erdős–Rényi random graphs, Erdős–Rényi random graphs with a fixed number of links, and spatial networks where the nodes are embedded in a two-dimensional plane. We numerically investigate the probability density functions (pdfs) down to the tails to gain insight into very resilient and very vulnerable networks. This is achieved via large-deviation techniques, which allow us to study very rare values that occur with probability densities below 10^{−160}. We find that the right tail of the pdfs towards larger backup capacities follows an exponential with a strong curvature. This is confirmed by the rate function, which approaches a limiting curve for increasing network sizes. Very resilient networks are basically characterized by a small diameter and a large power sign ratio. In addition, networks can be made typically more resilient by adding more links.
Large deviation function for a driven underdamped particle in a periodic potential
Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo
2018-02-01
Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.
Large deviations for Gaussian processes in Hölder norm
International Nuclear Information System (INIS)
Fatalov, V R
2003-01-01
Some results are proved on the exact asymptotic representation of large deviation probabilities for Gaussian processes in the Hölder norm. The following classes of processes are considered: the Wiener process, the Brownian bridge, fractional Brownian motion, and stationary Gaussian processes with a power-law covariance function. The investigation uses the method of double sums for Gaussian fields.
On asymptotically efficient simulation of large deviation probabilities.
Dieker, A.B.; Mandjes, M.R.H.
2005-01-01
ABSTRACT: Consider a family of probabilities for which the decay is governed by a large deviation principle. To find an estimate for a fixed member of this family, one is often forced to use simulation techniques. Direct Monte Carlo simulation, however, is often impractical, particularly if the
Importance sampling large deviations in nonequilibrium steady states. I
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.
2018-03-01
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
Efficient characterisation of large deviations using population dynamics
Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.
2018-05-01
We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
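A minimal version of such a cloning (population dynamics) scheme can be sketched on a toy model: a two-state process that switches state with probability p at each step, with the observable K counting the switches. The exact SCGF is log(p*e^s + 1 - p), so the estimate can be checked. This is an illustrative sketch with hypothetical parameters, not the algorithm of the paper:

```python
import random
import math

def scgf_cloning(s, p_switch=0.3, n_clones=800, n_steps=300, seed=2):
    """Toy cloning estimate of psi(s) = lim (1/T) log <exp(s*K)>,
    K = number of state switches. Each step, every clone carries a
    weight exp(s) if it switched (1 otherwise); the population is then
    resampled in proportion to the weights, and the SCGF is read off
    from the average logarithmic growth rate of the total weight."""
    rng = random.Random(seed)
    states = [0] * n_clones
    log_growth = 0.0
    for _ in range(n_steps):
        weights, new_states = [], []
        for x in states:
            switched = rng.random() < p_switch
            new_states.append(1 - x if switched else x)
            weights.append(math.exp(s) if switched else 1.0)
        mean_w = sum(weights) / n_clones
        log_growth += math.log(mean_w)
        # resample clones proportionally to their weights
        states = rng.choices(new_states, weights=weights, k=n_clones)
    return log_growth / n_steps
```

In this trivial model the switching probability is state-independent, so convergence is easy; the paper's point is precisely that near a dynamical phase transition convergence in the number of clones and in time becomes much harder.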
A course on large deviations with an introduction to Gibbs measures
Rassoul-Agha, Firas
2015-01-01
This is an introductory course on the methods of computing asymptotics of probabilities of rare events: the theory of large deviations. The book combines large deviation theory with basic statistical mechanics, namely Gibbs measures with their variational characterization and the phase transition of the Ising model, in a text intended for a one-semester or one-quarter course. The book begins with a straightforward approach to the key ideas and results of large deviation theory in the context of independent identically distributed random variables. This includes Cramér's theorem, relative entropy, Sanov's theorem, process level large deviations, convex duality, and change of measure arguments. Dependence is introduced through the interaction potentials of equilibrium statistical mechanics. The phase transition of the Ising model is proved in two different ways: first in the classical way with the Peierls argument, Dobrushin's uniqueness condition, and correlation inequalities, and then a second time through the ...
Large deviations of the maximum eigenvalue in Wishart random matrices
International Nuclear Information System (INIS)
Vivo, Pierpaolo; Majumdar, Satya N; Bohigas, Oriol
2007-01-01
We analytically compute the probability of large fluctuations to the left of the mean of the largest eigenvalue in the Wishart (Laguerre) ensemble of positive definite random matrices. We show that the probability that all the eigenvalues of an (N × N) Wishart matrix W = XᵀX (where X is a rectangular M × N matrix with independent Gaussian entries) are smaller than the mean value ⟨λ⟩ = N/c decreases for large N as ∼ exp[-(β/2)N²Φ₋(2√c + 1; c)], where β = 1, 2 corresponds respectively to real and complex Wishart matrices, c = N/M ≤ 1, and Φ₋(x; c) is a rate (sometimes also called large deviation) function that we compute explicitly. The result for the anti-Wishart case (M < N) simply follows by exchanging M and N. We also analytically determine the average spectral density of an ensemble of Wishart matrices whose eigenvalues are constrained to be smaller than a fixed barrier. Numerical simulations are in excellent agreement with the analytical predictions.
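For small matrices the left-tail probability can be checked directly by simulation. The sketch below is illustrative only (pure Python, fixed at N = 2 so the eigenvalues of W have a closed form; M is a hypothetical choice): it estimates P(λ_max < N/c) = P(λ_max < M) for a real Wishart matrix.

```python
import random
import math

def prob_all_eigs_below_mean(M=4, n_samples=50000, seed=3):
    """Monte Carlo estimate of P(lambda_max(W) < M) for a real Wishart
    matrix W = X^T X with X an M x 2 Gaussian matrix (so N = 2,
    c = N/M, and the mean eigenvalue is N/c = M).  Uses the explicit
    2x2 symmetric-matrix eigenvalue formula."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = [[rng.gauss(0.0, 1.0) for _ in range(2)] for _ in range(M)]
        a = sum(r[0] * r[0] for r in x)   # W[0][0]
        b = sum(r[0] * r[1] for r in x)   # W[0][1] = W[1][0]
        d = sum(r[1] * r[1] for r in x)   # W[1][1]
        lam_max = 0.5 * (a + d) + math.sqrt(0.25 * (a - d) ** 2 + b * b)
        if lam_max < M:
            hits += 1
    return hits / n_samples
```

For large N such an event becomes exponentially rare (probability ∼ exp[-(β/2)N²Φ₋]), so direct sampling of this kind quickly becomes useless and the analytical rate function takes over.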
WKB theory of large deviations in stochastic populations
Assaf, Michael; Meerson, Baruch
2017-06-01
Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells, and on to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of the dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
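As a concrete, checkable instance of the extinction problem, consider a generic logistic birth-death process (an illustrative model chosen here, not a specific model from the review). Its exact mean extinction time follows from the standard first-passage formula for birth-death chains, and WKB theory predicts exponential growth with the typical population size N, here with per-capita action S = (B - 1 - ln B)/(B - 1):

```python
import math

def mean_extinction_time(N, B=2.0, n_max=None):
    """Exact mean time to extinction, starting from n = 1, for a
    logistic birth-death process with birth rate B*n and death rate
    n + (B-1)*n^2/N, whose deterministic fixed point is n* = N.
    Uses the standard birth-death first-passage formula
    tau_1 = sum_n (1/mu_n) prod_{k<n} (lambda_k/mu_k)."""
    if n_max is None:
        n_max = 5 * N              # truncate well above the fixed point
    lam = lambda n: B * n
    mu = lambda n: n + (B - 1) * n * n / N
    tau, prod = 0.0, 1.0
    for n in range(1, n_max + 1):
        tau += prod / mu(n)        # contribution of reaching state n
        prod *= lam(n) / mu(n)
    return tau
```

Comparing ln(tau) at two values of N recovers a slope close to the WKB action per individual (1 - ln 2 ≈ 0.31 for B = 2), up to prefactor corrections.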
Large deviations for Markov chains in the positive quadrant
Energy Technology Data Exchange (ETDEWEB)
Borovkov, A A; Mogul' skii, A A [S.L. Sobolev Institute for Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)
2001-10-31
The paper deals with so-called N-partially space-homogeneous time-homogeneous Markov chains X(y,n), n=0,1,2,..., X(y,0)=y, in the positive quadrant. These Markov chains are characterized by the following property of the transition probabilities P(y,A)=P(X(y,1) ∈ A): for some N ≥ 0 the measure P(y,dx) depends only on x₂, y₂, and x₁-y₁ in the domain x₁>N, y₁>N, and only on x₁, y₁, and x₂-y₂ in the domain x₂>N, y₂>N. For such chains the asymptotic behaviour is found for a fixed set B as s→∞, |x|→∞, and n→∞. Some other conditions on the growth of parameters are also considered, for example, |x-y|→∞, |y|→∞. A study is made of the structure of the most probable trajectories, which give the main contribution to this asymptotics, and a number of other results pertaining to the topic are established. Similar results are obtained for the narrower class of 0-partially homogeneous ergodic chains under less restrictive moment conditions on the transition probabilities P(y,dx). Moreover, exact asymptotic expressions for the probabilities P(X(0,n) ∈ x+B) are found for 0-partially homogeneous ergodic chains under some additional conditions. The interest in partially homogeneous Markov chains in positive octants is due to mathematical aspects (new and interesting problems arise in the framework of general large deviation theory) as well as applied issues, as such chains prove to be quite accurate mathematical models for numerous basic types of queueing and communication networks, such as the widely known Jackson networks, polling systems, or communication networks associated with the ALOHA algorithm. There is a vast literature dealing with the analysis of these objects. The present paper is an attempt to find the extent to which an asymptotic analysis is possible for Markov chains of this type in their general
Dupuis, Paul
2011-01-01
PAUL DUPUIS is a professor in the Division of Applied Mathematics at Brown University in Providence, Rhode Island. RICHARD S. ELLIS is a professor in the Department of Mathematics and Statistics at the University of Massachusetts at Amherst.
Two-scale large deviations for chemical reaction kinetics through second quantization path integral
International Nuclear Information System (INIS)
Li, Tiejun; Lin, Feng
2016-01-01
Motivated by the study of rare events for a typical genetic switching model in systems biology, in this paper we aim to establish general two-scale large deviations for chemical reaction systems. We build a formal approach to explicitly obtain the large deviation rate functionals for the considered two-scale processes based upon the second quantization path integral technique. We obtain three important types of large deviation results when the underlying two timescales are in three different regimes. This is realized by singular perturbation analysis of the rate functionals obtained by the path integral. We find that the three regimes possess the same deterministic mean-field limit but completely different chemical Langevin approximations. The obtained results are natural extensions of the classical large volume limit for chemical reactions. We also discuss their implications for single-molecule Michaelis–Menten kinetics. Our framework and results can be applied to understand general multi-scale systems including diffusion processes. (paper)
Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki
2018-03-01
We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our set-up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the event that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.
Large deviations and mixing for dissipative PDEs with unbounded random kicks
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer's criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. These tools combined together constitute a new approach to the LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
Large-Deviation Results for Discriminant Statistics of Gaussian Locally Stationary Processes
Directory of Open Access Journals (Sweden)
Junichi Hirukawa
2012-01-01
This paper discusses the large-deviation principle of discriminant statistics for Gaussian locally stationary processes. First, large-deviation theorems for quadratic forms and the log-likelihood ratio for a Gaussian locally stationary process with a mean function are proved. Their asymptotics are described by the large deviation rate functions. Second, we consider situations where the processes are misspecified as stationary. In these misspecified cases, we formally construct the log-likelihood ratio discriminant statistics and derive large deviation theorems for them. Since these are complicated, they are evaluated and illustrated by numerical examples. We find that misspecifying the process as stationary seriously affects the discrimination.
Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids
International Nuclear Information System (INIS)
Zhai, Jianliang; Zhang, Tusheng
2017-01-01
In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids
Energy Technology Data Exchange (ETDEWEB)
Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)
2017-06-15
In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
Ren, Jiagang; Wu, Jing; Zhang, Hua
2015-01-01
In this paper, we prove a large deviation principle of Freidlin-Wentzell's type for the multivalued stochastic differential equations. As an application, we derive a functional iterated logarithm law for the solutions of multivalued stochastic differential equations.
Large Deviation Bounds for a Polling System with Two Queues and Multiple Servers
Wei, Fen
2004-01-01
In this paper, we present large deviation bounds for a discrete-time polling system consisting of two parallel queues and m servers. The arrival process in each queue is an arbitrary, and possibly correlated, stochastic process. Each server independently serves the two queues according to a Bernoulli service schedule. Using large deviation techniques, we analyze the tail behavior of the stationary distribution of the queue length processes, and derive upper and lower bounds of the b...
Deviation-based spam-filtering method via stochastic approach
Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun
2018-03-01
In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play very important roles for a buyer making a final purchase decision. Perfectly objective rating is an impossible task to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with using a simple average rating is that it can easily be polluted by careless users whose evaluation of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern for all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
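The letter's exact weighting formulas are not reproduced here, but the general idea of down-weighting users whose ratings deviate strongly from the consensus can be sketched as an iterative reweighting. Everything below (the weight update, the epsilon constant, the toy data) is an illustrative assumption, not the authors' definition:

```python
def weighted_ratings(ratings, n_iter=20):
    """Iteratively (1) score each item by the weight-averaged ratings,
    (2) set each user's weight to the inverse of the mean squared
    deviation of her ratings from the current item scores (plus a
    small epsilon for stability)."""
    users = sorted({u for u, _, _ in ratings})
    items = sorted({i for _, i, _ in ratings})
    weight = {u: 1.0 for u in users}
    score = {i: 0.0 for i in items}
    for _ in range(n_iter):
        for i in items:
            num = sum(weight[u] * r for u, it, r in ratings if it == i)
            den = sum(weight[u] for u, it, r in ratings if it == i)
            score[i] = num / den
        for u in users:
            devs = [(r - score[i]) ** 2 for uu, i, r in ratings if uu == u]
            weight[u] = 1.0 / (1e-6 + sum(devs) / len(devs))
    return score, weight

# Two honest users agree; a spammer pushes item "b" up and "a" down.
data = [("u1", "a", 4), ("u2", "a", 4), ("spam", "a", 1),
        ("u1", "b", 2), ("u2", "b", 2), ("spam", "b", 5)]
scores, weights = weighted_ratings(data)
```

On this toy data the iteration drives the spammer's weight down and the item scores toward the honest consensus (a ≈ 4, b ≈ 2), which is the qualitative behaviour the letter aims for.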
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to a multiple of the bit cycle to remove the influence of the NH code. Secondly, maximum likelihood detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm under C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. This algorithm can remove the effect of the BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
Large deviations for solutions to stochastic recurrence equations under Kesten's condition
DEFF Research Database (Denmark)
Buraczewski, Dariusz; Damek, Ewa; Mikosch, Thomas Valentin
2013-01-01
In this paper we prove large deviations results for partial sums constructed from the solution to a stochastic recurrence equation. We assume Kesten's condition [17], under which the solution of the stochastic recurrence equation has a marginal distribution with power law tails, while the noise sequence of the equations can have light tails. The results of the paper are analogs of those obtained by A.V. and S.V. Nagaev [21, 22] in the case of partial sums of iid random variables. In the latter case, the large deviation probabilities of the partial sums are essentially determined by the largest step size of the partial sum. For the solution to a stochastic recurrence equation, the magnitude of the large deviation probabilities is again given by the tail of the maximum summand, but the exact asymptotic tail behavior is also influenced by clusters of extreme values, due to dependencies...
Quasi-potential and Two-Scale Large Deviation Theory for Gillespie Dynamics
Li, Tiejun
2016-01-07
The construction of energy landscapes for bio-dynamics has been attracting more and more attention in recent years. In this talk, I will introduce a strategy to construct the landscape from its connection to rare events, which relies on the large deviation theory for Gillespie-type jump dynamics. In the application to a typical genetic switching model, a two-scale large deviation theory is developed to take into account the fast switching of DNA states. The comparison with other proposals is also discussed. We demonstrate that different diffusive limits arise when considering different regimes for the genetic translation and switching processes.
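The jump dynamics in question is the one generated by Gillespie's stochastic simulation algorithm. For orientation, a minimal Gillespie loop for a birth-death process (a stand-in for simple gene expression; the rates are illustrative choices, not those of the genetic switching model) looks like:

```python
import random

def gillespie_birth_death(k_on=10.0, k_off=1.0, t_end=200.0, seed=4):
    """Minimal Gillespie simulation of a birth-death process:
    production at constant rate k_on, degradation at rate k_off * n.
    The stationary distribution is Poisson with mean k_on / k_off;
    the function returns the time-averaged copy number."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    acc_time, acc_n = 0.0, 0.0
    while t < t_end:
        rates = [k_on, k_off * n]
        total = sum(rates)
        dt = rng.expovariate(total)        # exponential waiting time
        acc_time += dt
        acc_n += n * dt                    # time-weighted average of n
        # pick the reaction in proportion to its rate
        n += 1 if rng.random() < rates[0] / total else -1
        t += dt
    return acc_n / acc_time
```

Large deviation theory then quantifies how unlikely it is for such a time average to deviate from k_on/k_off, and the two-scale structure of the talk arises when some reactions (DNA state switching) are much faster than others.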
International Nuclear Information System (INIS)
Hurtado, Pablo I; Garrido, Pedro L
2009-01-01
We study the distribution of the time-integrated current in an exactly solvable toy model of heat conduction, both analytically and numerically. The simplicity of the model allows us to derive the full current large deviation function and the system statistics during a large deviation event. In this way we unveil a relation between system statistics at the end of a large deviation event and for intermediate times. The mid-time statistics is independent of the sign of the current, a reflection of the time-reversal symmetry of microscopic dynamics, while the end-time statistics does depend on the current sign, and also on its microscopic definition. We compare our exact results with simulations based on the direct evaluation of large deviation functions, analyzing the finite-size corrections of this simulation method and deriving detailed bounds for its applicability. We also show how the Gallavotti–Cohen fluctuation theorem can be used to determine the range of validity of simulation results
Wasserstein gradient flows from large deviations of many-particle limits
Duong, M.H.; Laschos, V.; Renger, D.R.M.
2013-01-01
We study the Fokker–Planck equation as the many-particle limit of a stochastic particle system on one hand and as a Wasserstein gradient flow on the other. We write the path-space rate functional, which characterises the large deviations from the expected trajectories, in such a way that the free
Large Deviations for the Annealed Ising Model on Inhomogeneous Random Graphs: Spins and Degrees
Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; Hofstad, Remco van der
2018-04-01
We prove a large deviations principle for the total spin and the number of edges under the annealed Ising measure on generalized random graphs. We also give detailed results on how the annealing over the Ising model changes the degrees of the vertices in the graph and show how it gives rise to interesting correlated random graphs.
Large Deviations for Stochastic Tamed 3D Navier-Stokes Equations
International Nuclear Information System (INIS)
Roeckner, Michael; Zhang, Tusheng; Zhang Xicheng
2010-01-01
In this paper, using weak convergence method, we prove a large deviation principle of Freidlin-Wentzell type for the stochastic tamed 3D Navier-Stokes equations driven by multiplicative noise, which was investigated in (Roeckner and Zhang in Probab. Theory Relat. Fields 145(1-2), 211-267, 2009).
Large deviations for the Fleming-Viot process with neutral mutation and selection
Dawson, Donald; Feng, Shui
1998-01-01
Large deviation principles are established for the Fleming-Viot processes with neutral mutation and selection, and the corresponding equilibrium measures as the sampling rate goes to 0. All results are first proved for the finite allele model, and then generalized, through the projective limit technique, to the infinite allele model. Explicit expressions are obtained for the rate functions.
Large deviation estimates for a Non-Markovian Lévy generator of big order
International Nuclear Information System (INIS)
Léandre, Rémi
2015-01-01
We give large deviation estimates for a non-Markovian convolution semi-group with a non-local generator of Lévy type of big order and with the standard normalisation of semi-classical analysis. No stochastic process is associated to this semi-group. (paper)
A framework for the direct evaluation of large deviations in non-Markovian processes
International Nuclear Information System (INIS)
Cavallaro, Massimo; Harris, Rosemary J
2016-01-01
We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated to time-extensive observables. This extends the 'cloning' procedure of Giardinà et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means. (letter)
Large deviations of heavy-tailed random sums with applications in insurance and finance
Kluppelberg, C; Mikosch, T
We prove large deviation results for the random sum S(t) = Σ_{i=1}^{N(t)} X_i, t ≥ 0, where (N(t))_{t≥0} are non-negative integer-valued random variables and (X_n)_{n∈ℕ} are i.i.d. non-negative random variables with common distribution
International Nuclear Information System (INIS)
Khorsandi, Jahon; Aven, Terje
2017-01-01
Quantitative risk assessments (QRAs) of complex engineering systems are based on numerous assumptions and expert judgments, as there is limited information available for supporting the analysis. In addition to sensitivity analyses, the concept of assumption deviation risk has been suggested as a means for explicitly considering the risk related to inaccuracies and deviations in the assumptions, which can significantly impact the results of the QRAs. However, challenges remain for its practical implementation, considering the number of assumptions and magnitude of deviations to be considered. This paper presents an approach for integrating an assumption deviation risk analysis as part of QRAs. The approach begins with identifying the safety objectives for which the QRA aims to support, and then identifies critical assumptions with respect to ensuring the objectives are met. Key issues addressed include the deviations required to violate the safety objectives, the uncertainties related to the occurrence of such events, and the strength of knowledge supporting the assessments. Three levels of assumptions are considered, which include assumptions related to the system's structural and operational characteristics, the effectiveness of the established barriers, as well as the consequence analysis process. The approach is illustrated for the case of an offshore installation. - Highlights: • An approach for assessing the risk of deviations in QRA assumptions is presented. • Critical deviations and uncertainties related to their occurrence are addressed. • The analysis promotes critical thinking about the foundation and results of QRAs. • The approach is illustrated for the case of an offshore installation.
Duffy, Ken; Lobunets, Olena; Suhov, Yuri
2007-05-01
We propose a model of a loss-averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
A large deviation principle in Hölder norm for multiple fractional integrals
Sanz-Solé, Marta; Torrecilla-Tarantino, Iván
2007-01-01
For a fractional Brownian motion $B^H$ with Hurst parameter $H \in (1/4, 1/2) \cup (1/2, 1)$, multiple indefinite integrals on a simplex are constructed and the regularity of their sample paths is studied. Then, it is proved that the family of probability laws of the processes obtained by replacing $B^H$ by $\epsilon^{1/2} B^H$ satisfies a large deviation principle in Hölder norm. The definition of the multiple integrals relies upon a representation of the fractional Brownian motion in t...
Large deviations of a long-time average in the Ehrenfest urn model
Meerson, Baruch; Zilber, Pini
2018-05-01
Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed for key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, with 0 ≤ a ≤ 1. For a long observation time T, a Donsker–Varadhan large deviation principle holds, with a rate function depending on a and on additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the probability distribution.
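For the non-interacting case with K = 2 urns, the Donsker-Varadhan rate function for the fraction of time a single ball spends in one urn can be obtained from the principal eigenvalue of the tilted jump generator, followed by a numerical Legendre transform. The sketch below assumes a unit switching rate for illustration and recovers, e.g., I(1/2) = 0 at the typical value:

```python
import math

def scgf(s):
    """Per-ball SCGF: principal eigenvalue of the s-tilted generator
    [[s-1, 1], [1, -1]] of a two-state jump process with unit rates,
    tilted by the time spent in urn 1:
    lambda(s) = (s-2)/2 + sqrt(s^2/4 + 1)."""
    return 0.5 * (s - 2.0) + math.sqrt(0.25 * s * s + 1.0)

def rate_function(a, s_grid=None):
    """Numerical Legendre transform I(a) = sup_s [s*a - lambda(s)]."""
    if s_grid is None:
        s_grid = [i / 100.0 for i in range(-2000, 2001)]
    return max(s * a - scgf(s) for s in s_grid)
```

The minimum of I sits at a = 1/2 (equal occupation), and atypical occupations such as a = 0.9 carry a strictly positive rate, so their probability decays as exp(-T I(a)) for long observation times.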
International Nuclear Information System (INIS)
Mourragui, Mustapha; Orlandi, Enza
2013-01-01
A particle system with a single locally-conserved field (density) in a bounded interval with different densities maintained at the two endpoints of the interval is under study here. The particles interact in the bulk through a long-range potential parametrized by β⩾0 and evolve according to an exclusion rule. It is shown that the empirical particle density under the diffusive scaling solves a quasilinear integro-differential evolution equation with Dirichlet boundary conditions. The associated dynamical large deviation principle is proved. Furthermore, when β is small enough, it is also demonstrated that the empirical particle density obeys a law of large numbers with respect to the stationary measures (hydrostatic). The macroscopic particle density solves a non-local, stationary, transport equation. (paper)
Level 2 and level 2.5 large deviation functionals for systems with and without detailed balance
International Nuclear Information System (INIS)
Hoppenau, J; Nickelsen, D; Engel, A
2016-01-01
Large deviation functions are an essential tool in the statistics of rare events. Often they can be obtained by contraction from a so-called level 2 or level 2.5 large deviation functional characterizing the empirical density and current of the underlying stochastic process. For Langevin systems obeying detailed balance, the explicit form of the level 2 functional has been known ever since the mathematical work of Donsker and Varadhan. We rederive the Donsker–Varadhan result using stochastic path-integrals. We then generalize the derivation to level 2.5 large deviation functionals for non-equilibrium steady states and elucidate the relation between the large deviation functionals and different notions of entropy production in stochastic thermodynamics. Finally, we discuss some aspects of the contractions to level 1 large deviation functions and illustrate our findings with examples. (paper)
The Absolute Deviation Rank Diagnostic Approach to Gear Tooth Composite Fault
Directory of Open Access Journals (Sweden)
Guangbin Wang
2017-01-01
Full Text Available Aiming at the nonlinear and nonstationary characteristics of single gear faults (tooth breakage, pitting) and the composite tooth-breakage-pitting fault at different severity levels, a method for gear fault diagnosis based on absolute deviation rank is presented. Using ADAMS, dynamics models of the single faults (tooth breakage, pitting) and the composite tooth-breakage-pitting fault are set up, and simulation yields the meshing-frequency and frequency-doubling amplitudes for the single and compound faults at different degrees of severity. Sensitive fault features are then obtained by comparison with the normal state, and the absolute deviation rank diagnostic approach is used to identify the faults; the approach is validated experimentally. The results show that the absolute deviation rank diagnostic approach can recognize single and compound gear faults of different degrees and provides a quick reference for determining the degree of a gear fault.
Hessian matrix approach for determining error field sensitivity to coil deviations
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi
2018-05-01
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
From a large-deviations principle to the Wasserstein gradient flow : a new micro-macro passage
Adams, S.; Dirr, N.; Peletier, M.A.; Zimmer, J.
2011-01-01
We study the connection between a system of many independent Brownian particles on one hand and the deterministic diffusion equation on the other. For a fixed time step h > 0, a large-deviations rate functional J_h characterizes the behaviour of the particle system at t = h in terms of the initial
Das, Biswajit; Gangopadhyay, Gautam
2018-05-01
In the framework of large deviation theory, we have characterized the nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, such as the substrate injection rate and a mechanical force. In the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady-state regimes, and a similar symmetry rule is reflected in the large deviation rate function (LDRF) as a property of the dissipation rate through the boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by single-molecule techniques, which plays a key role in the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, we have provided a relation between the fluctuations of fluxes and dissipation rates; among them, the fluctuation of the turnover rate is routinely estimated, but the fluctuation in the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground for systematically understanding rare events within large deviation theory, beyond the fluctuation theorem and the central limit theorem.
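The SCGF symmetry and the ensuing fluctuation relation for the rate function can be illustrated on a toy model: a biased ±1 random walk standing in for the entropy production of the flow (an assumption of this sketch, not the paper's enzyme kinetics):

```python
import numpy as np

p, q = 0.7, 0.3
s = np.log(p / q)                    # entropy produced by one forward step

def scgf(k):
    """SCGF of the per-step entropy production of a biased +/- random walk."""
    return np.log(p * np.exp(k * s) + q * np.exp(-k * s))

kgrid = np.linspace(-6.0, 6.0, 120001)

def rate(a):
    """Legendre-Fenchel transform I(a) = sup_k [k a - lambda(k)]."""
    return np.max(kgrid * a - scgf(kgrid))

# Gallavotti-Cohen-type symmetry of the SCGF: lambda(k) = lambda(-1 - k)
print(scgf(1.3), scgf(-2.3))          # equal
# ...which is equivalent to the fluctuation relation I(-a) - I(a) = a
print(rate(-0.4) - rate(0.4))         # ~ 0.4
```

The same Legendre duality underlies the relation between flux fluctuations and their conjugate forces discussed in the abstract.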
Czech Academy of Sciences Publication Activity Database
De Roeck, W.; Maes, C.; Netočný, Karel; Schütz, M.
2015-01-01
Roč. 56, č. 2 (2015), "023301-1"-"023301-30" ISSN 0022-2488 Institutional support: RVO:68378271 Keywords : quantum systems * quantum large deviations * entanglement * cluster expansions Subject RIV: BE - Theoretical Physics Impact factor: 1.234, year: 2015
Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model
Paga, Pierre; Kühn, Reimer
2017-08-01
We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories [specified by initial conditions and governed by first-order dynamics of the form m_{t+1} = f(m_t)], we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics m_{t+1} = f^{-1}(m_t). Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
International Nuclear Information System (INIS)
Smith, Eric
2011-01-01
The meaning of thermodynamic descriptions is found in large-deviations scaling (Ellis 1985 Entropy, Large Deviations, and Statistical Mechanics (New York: Springer); Touchette 2009 Phys. Rep. 478 1-69) of the probabilities for fluctuations of averaged quantities. The central function expressing large-deviations scaling is the entropy, which is the basis both for fluctuation theorems and for characterizing the thermodynamic interactions of systems. Freidlin-Wentzell theory (Freidlin and Wentzell 1998 Random Perturbations of Dynamical Systems 2nd edn (New York: Springer)) provides a quite general formulation of large-deviations scaling for non-equilibrium stochastic processes, through a remarkable representation in terms of a Hamiltonian dynamical system. A number of related methods now exist to construct the Freidlin-Wentzell Hamiltonian for many kinds of stochastic processes; one method due to Doi (1976 J. Phys. A: Math. Gen. 9 1465-78; 1976 J. Phys. A: Math. Gen. 9 1479) and Peliti (1985 J. Physique 46 1469; 1986 J. Phys. A: Math. Gen. 19 L365), appropriate to integer counting statistics, is widely used in reaction-diffusion theory. Using these tools together with a path-entropy method due to Jaynes (1980 Annu. Rev. Phys. Chem. 31 579-601), this review shows how to construct entropy functions that both express large-deviations scaling of fluctuations, and describe system-environment interactions, for discrete stochastic processes either at or away from equilibrium. A collection of variational methods familiar within quantum field theory, but less commonly applied to the Doi-Peliti construction, is used to define a 'stochastic effective action', which is the large-deviations rate function for arbitrary non-equilibrium paths. We show how common principles of entropy maximization, applied to different ensembles of states or of histories, lead to different entropy functions and different sets of thermodynamic state variables. Yet the relations among all these levels of
International Nuclear Information System (INIS)
Sorel, C.; Pacary, V.
2010-01-01
The solvent extraction systems devoted to uranium purification from crude ore to spent fuel involve concentrated solutions in which deviation from ideality cannot be neglected. The Simple Solution Concept, based on the behaviour of isopiestic solutions, has been applied to quantify the activity coefficients of metals and acids in the aqueous phase in equilibrium with the organic phase. This approach has been validated on various solvent extraction systems such as trialkylphosphates, malonamides or acidic extracting agents, both in batch experiments and counter-current tests. Moreover, this concept has been successfully used to estimate the aqueous density, which is useful to quantify the variation of volume and to assess critical parameters such as the number density of nuclides. (author)
International Nuclear Information System (INIS)
Hanasaki, Itsuo; Kawano, Satoyuki
2013-01-01
Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility. (paper)
Large Deviations and Quasipotential for Finite State Mean Field Interacting Particle Systems
2014-05-01
The conclusion then follows by applying Lemma 4.4.2. Iterative solver: the widest neighborhood structure. We employ the Gauss–Seidel iterative method for our numerical experiments, with the nearest-neighborhood structure described in Section 4.4.2. … x ∈ B_h, M for x ∈ S_h\B_h, where M ∈ (V,∞) is a very large number, so that the iteration (4.5.1) converges quickly. For simplicity, we restrict our
Large deviation estimates for exceedance times of perpetuity sequences and their dual processes
DEFF Research Database (Denmark)
Buraczewski, Dariusz; Collamore, Jeffrey F.; Damek, Ewa
2016-01-01
In a variety of problems in pure and applied probability, it is relevant to study the large exceedance probabilities of the perpetuity sequence $Y_n := B_1 + A_1 B_2 + \cdots + (A_1 \cdots A_{n-1}) B_n$, where $(A_i,B_i) \subset (0,\infty) \times \mathbb{R}$. Estimates for the stationary tail distribution of $\{ Y_n \}$ have been developed in the seminal papers of Kesten (1973) and Goldie (1991). Specifically, it is well known that if $M := \sup_n Y_n$, then ${\mathbb P} \left\{ M > u \right\} \sim {\cal C}_M u^{-\xi}$ as $u \to \infty$. While much attention has been focused on extending … finite-time exceedance probabilities of $\{ M_n^\ast \}$, yielding a new result concerning the convergence of $\{ M_n^\ast \}$ to its stationary distribution.
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Constructively, such an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of this size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is now being developed in Russia in the frame of the "SPECTR-R" program. The external dimensions of the telescope exceed the size of the existing thermo-vacuum chambers used to prove SRT reflecting-surface accuracy parameters under the action of space environment factors. That is why numerical simulation turns out to be the basis required to accept the adopted designs. Such modeling should be based on experimental study of the basic construction materials and elements of the future reflector. In this article, computational modeling of the reflecting-surface deviations of a centimeter-band large-size deployable space reflector at the stage of its orbital functioning is considered. The factors that determine the deviations, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space), are analyzed. A finite-element model and a complex of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors, and that take into account the correction of deviations by the spacecraft orientation system. The results of modeling for two modes of functioning (orientation at the Sun) of the SRT are presented.
Large deviations and Lifshitz singularity of the integrated density of states of random Hamiltonians
International Nuclear Information System (INIS)
Kirsch, W.; Martinelli, F.
1983-01-01
We consider the integrated density of states (IDS) $\rho_\infty(\lambda)$ of the random Hamiltonian $H_\omega = -\Delta + V_\omega$, $V_\omega$ being a random field on $\mathbb{R}^d$ which satisfies a mixing condition. We prove that the probability of large fluctuations of the finite-volume IDS $|\Lambda|^{-1}\rho(\lambda, H_\Lambda(\omega))$, $\Lambda \subset \mathbb{R}^d$, around the thermodynamic limit $\rho_\infty(\lambda)$ is bounded from above by $\exp[-k|\Lambda|]$, $k > 0$. In this case $\rho_\infty(\lambda)$ can be recovered from a variational principle. Furthermore we show the existence of a Lifshitz-type singularity of $\rho_\infty(\lambda)$ as $\lambda \to 0^+$ in the case where $V_\omega$ is non-negative. More precisely we prove a bound of the form $\rho_\infty(\lambda) \le C \exp[-k\lambda^{-d/2}]$ as $\lambda \to 0^+$, $k > 0$. This last result is then discussed in some examples. (orig.)
Shaar, R.; Tauxe, L.; Ebert, Y.
2017-12-01
Continuous decadal-resolution paleomagnetic data from archaeological and sedimentary sources in the Levant revealed the existence of a local high-field anomaly, which spanned the first 350 years of the first millennium BCE. This so-called "Levantine Iron Age geomagnetic Anomaly" (LIAA) was characterized by a high average geomagnetic field (virtual axial dipole moments, VADM > 140 ZAm², nearly twice today's field), short decadal-scale geomagnetic spikes (VADM of 160-185 ZAm²), fast directional and intensity variations, and substantial deviation (20°-25°) from the dipole field direction. Similar high field values in the time frame of the LIAA have been observed north and northeast of the Levant: in Eastern Anatolia, Turkmenistan, and Georgia. West of the Levant, in the Balkans, field values at the same time are moderate to low. The overall data suggest that the LIAA is a manifestation of a local positive geomagnetic field anomaly, similar in magnitude and scale to the presently active negative South Atlantic Anomaly. In this presentation we review the archaeomagnetic and sedimentary evidence supporting the local anomaly hypothesis, and compare these observations with today's IGRF field. We analyze the global data during the first two millennia BCE, which suggest some unexpectedly large deviations from a simple dipolar geomagnetic structure.
Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di
2013-01-15
The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters in the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized by the PSO method through quantitative design of the lamellae deviation (Δy), a crucial parameter for the angular range. The field map of the wide-range reflection over a large area is given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist us in understanding the scientific principles involved and also in designing artificial optical materials.
Precise lim sup behavior of probabilities of large deviations for sums of i.i.d. random variables
Directory of Open Access Journals (Sweden)
Andrew Rosalsky
2004-12-01
Full Text Available Let {X, X_n; n ≥ 1} be a sequence of real-valued i.i.d. random variables and let S_n = ∑_{i=1}^{n} X_i, n ≥ 1. In this paper, we study the probabilities of large deviations of the form P(S_n > tn^{1/p}) and P(S_n < −tn^{1/p}), where t > 0 and 0 < p < 2. If … x^{1/p}/φ(x) … = 1, then for every t > 0, limsup_{n→∞} P(|S_n| > tn^{1/p})/(nφ(n)) = t^{−pα}.
Directory of Open Access Journals (Sweden)
Shuo Zhang
2017-04-01
Full Text Available In this paper, we consider a size-dependent renewal risk model with a stopping-time claim-number process. In this model, we do not make any assumption on the dependence structure of claim sizes and inter-arrival times. We study large deviations of the aggregate amount of claims. For the subexponential heavy-tailed case, we obtain a precise large-deviation formula; our method relies substantially on a martingale adapted to the structure of our models.
International Nuclear Information System (INIS)
Peletier, Mark A.; Redig, Frank; Vafayi, Kiamars
2014-01-01
We consider three one-dimensional continuous-time Markov processes on a lattice, each of which models the conduction of heat: the family of Brownian Energy Processes with parameter m (BEP(m)), a Generalized Brownian Energy Process, and the Kipnis-Marchioro-Presutti (KMP) process. The hydrodynamic limit of each of these three processes is a parabolic equation, the linear heat equation in the case of the BEP(m) and the KMP, and a nonlinear heat equation for the Generalized Brownian Energy Process with parameter a (GBEP(a)). We prove the hydrodynamic limit rigorously for the BEP(m), and give a formal derivation for the GBEP(a). We then formally derive the pathwise large-deviation rate functional for the empirical measure of the three processes. These rate functionals imply gradient-flow structures for the limiting linear and nonlinear heat equations. We contrast these gradient-flow structures with those for processes describing the diffusion of mass, most importantly the class of Wasserstein gradient-flow systems. The linear and nonlinear heat-equation gradient-flow structures are each driven by entropy terms of the form −log ρ; they involve dissipation or mobility terms of order ρ² for the linear heat equation, and a nonlinear function of ρ for the nonlinear heat equation.
Qin, Guangzhao; Qin, Zhenzhen; Wang, Huimin; Hu, Ming
2017-05-01
Efficient heat dissipation, which is featured by high thermal conductivity, is one of the crucial issues for the reliability and stability of nanodevices. However, due to the generally fast 1/T decrease of thermal conductivity with increasing temperature, the efficiency of heat dissipation quickly drops at the elevated temperatures caused by the increased work load in electronic devices. To this end, pursuing semiconductor materials that possess large thermal conductivity at high temperature, i.e., a slower decrease of thermal conductivity with temperature than the traditional κ ~ 1/T relation, is extremely important to the development of disruptive nanoelectronics. Recently, monolayer gallium nitride (GaN) with a planar honeycomb structure has emerged as a promising new two-dimensional material with great potential for applications in nano- and optoelectronics. Here, we report that, despite the commonly established 1/T relation of thermal conductivity in plenty of materials, monolayer GaN exhibits the anomalous behavior that its thermal conductivity decreases almost linearly over a wide temperature range above 300 K, deviating largely from the traditional κ ~ 1/T law. The thermal conductivity at high temperature is much larger than that expected from the general κ ~ 1/T trend, which would be beneficial for applications of monolayer GaN in nano- and optoelectronics in terms of efficient heat dissipation. We perform detailed analysis of the mechanisms underlying the anomalously temperature-dependent thermal conductivity of monolayer GaN in the framework of Boltzmann transport theory, and we further gain insight from the electronic structure. Beyond that, we also propose two conditions required for materials to exhibit a similar anomalous temperature dependence of thermal conductivity: a large difference in atomic mass (huge phonon band gap) and in electronegativity (LO-TO splitting due to strong polarization of bonds). Our
Nagaev, Alexander; Zaigraev, Alexander
2005-01-01
A class of absolutely continuous distributions in Rd is considered. Each distribution belongs to the domain of normal attraction of an α-stable law. The limit law is characterized by a spectral measure which is absolutely continuous with respect to the spherical Lebesgue measure. The large-deviation problem for sums of independent and identically distributed random vectors when the underlying distribution belongs to that class is studied. At the focus of attention are the deviations in the di...
Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An
2017-11-08
A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the GPC approximations compared with the MC simulations. Appropriate choices of the 4th-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, shares an error of about 1.1% with that of the 4th-order GPC method. It takes a probability of around 54.3% for the 4th-order GPC approximation to attain the mean test value of the residual stress. The corresponding yield exceeds 90% within two standard deviations of the mean.
Directory of Open Access Journals (Sweden)
Lili Gao
2017-11-01
Full Text Available A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with the currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. A doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the GPC approximations compared with the MC simulations. Appropriate choices of the 4th-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, shares an error of about 1.1% with that of the 4th-order GPC method. It takes a probability of around 54.3% for the 4th-order GPC approximation to attain the mean test value of the residual stress. The corresponding yield exceeds 90% within two standard deviations of the mean.
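As an illustration of the approach (a generic one-variable sketch with a hypothetical stress response, not the paper's beam model), a low-order GPC expansion in the probabilists' Hermite basis reproduces Monte Carlo moments at a fraction of the sampling cost:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def stress(xi):
    """Hypothetical residual-stress response (MPa) of one standardized
    process deviation xi ~ N(0, 1); an illustrative toy model only."""
    return 100.0 * np.exp(0.05 * xi)

# 4th-order GPC: project the response onto probabilists' Hermite polynomials
order = 4
nodes, weights = He.hermegauss(20)   # Gauss-Hermite quadrature, weight e^{-x^2/2}
weights = weights / weights.sum()    # normalize to a probability rule
coeffs = [np.sum(weights * stress(nodes) * He.hermeval(nodes, [0.0] * n + [1.0]))
          / math.factorial(n) for n in range(order + 1)]

gpc_mean = coeffs[0]                 # mean = 0th-order coefficient
gpc_var = sum(math.factorial(n) * coeffs[n] ** 2 for n in range(1, order + 1))

# Monte Carlo reference for comparison
rng = np.random.default_rng(0)
samples = stress(rng.standard_normal(200_000))
print(gpc_mean, samples.mean())      # nearly equal
print(gpc_var, samples.var())        # nearly equal
```

Twenty quadrature evaluations recover the mean and variance that Monte Carlo needs hundreds of thousands of samples to estimate, which is the labor reduction the abstract describes.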
Lazzari, G.; Wrenzycki, C.; Herrmann, D.; Duchi, R.; Kruip, T.; Niemann, H.; Galli, C.
2002-01-01
The large offspring syndrome (LOS) is observed in bovine and ovine offspring following transfer of in vitro-produced (IVP) or cloned embryos and is characterized by a multitude of pathologic changes, of which extended gestation length and increased birthweight are predominant features. In the
International Nuclear Information System (INIS)
Chen, Yong; Ge, Hao; Xiong, Jie; Xu, Lihu
2016-01-01
The fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics of the past two decades. There exist very few results on the steady-state fluctuation theorem for the sample entropy production rate in terms of a large deviation principle for diffusion processes, due to technical difficulties. Here we give a proof of the steady-state fluctuation theorem for a diffusion process in magnetic fields, with explicit expressions for the free energy function and the rate function. The proof is based on the Karhunen-Loève expansion of the complex-valued Ornstein-Uhlenbeck process.
Innovative Approaches to Large Component Packaging
International Nuclear Information System (INIS)
Freitag, A.; Hooper, M.; Posivak, E.; Sullivan, J.
2006-01-01
Radioactive waste disposal oftentimes requires creative approaches in packaging design, especially for large components. Innovative design techniques are required to meet the needs for handling, transporting, and disposing of these large packages. Large components (i.e., Reactor Pressure Vessel (RPV) heads and even RPVs themselves) require special packaging for shielding and contamination control, as well as for transport and disposal. WMG Inc designed and used standard packaging for five RPV heads without control rod drive mechanisms (CRDMs) attached, and has more recently met an even bigger challenge by developing the innovative Intact Vessel Head Transport System (IVHTS) for RPV heads with CRDMs intact. This packaging system has been given a manufacturer's exemption by the United States Department of Transportation (USDOT) for packaging RPV heads. The IVHTS packaging has now been successfully used at two commercial nuclear power plants. Another example of innovative packaging is the large component packaging that WMG designed, fabricated, and utilized at the West Valley Demonstration Project (WVDP). In 2002, West Valley's high-level waste vitrification process was shut down in preparation for D and D of the West Valley Vitrification Facility. Three of the major components of concern within the Vitrification Facility were the Melter, the Concentrate Feed Makeup Tank (CFMT), and the Melter Feed Holdup Tank (MFHT). The removal, packaging, and disposition of these three components presented significant radiological and handling challenges for the project. WMG designed, fabricated, and installed special packaging for the transport and disposal of each of these three components, which eliminated an otherwise time-intensive and costly segmentation process that WVDP was considering. Finally, WMG has also designed and fabricated special packaging for both the Connecticut Yankee (CY) and San Onofre Nuclear Generating Station (SONGS) RPVs. This paper
Mishra, Alok; Swati, D
2015-09-01
Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we have proposed a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and a preference for replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method with median replacement to estimate the standard deviation of normal-to-normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over the single use of the impulse rejection filter and the removal of abnormal beats, for the estimation of SDNN and the Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We have found the 22 ms value of SDNN and the 36 ms value of the SD2 descriptor of the Poincaré plot to be clinical indicators for discriminating the normal cases from the CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index, since Lyapunov exponents calculated after the proposed pre-processing are modified in such a way that they begin to follow the notion of less complex behaviour of diseased states.
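Once the RR series is clean, the descriptors themselves are simple to compute; a sketch on synthetic RR intervals (the combination filter itself is not reproduced here, and the test series is illustrative):

```python
import numpy as np

def hrv_descriptors(rr):
    """SDNN and Poincare-plot descriptors from RR intervals (ms).
    Assumes rr has already been pre-processed (abnormal beats replaced),
    which is what the combination filter described above is meant to ensure."""
    rr = np.asarray(rr, dtype=float)
    sdnn = rr.std(ddof=1)                             # overall variability
    diff = np.diff(rr)                                # successive differences
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)         # short-term (cloud width)
    sd2 = np.sqrt(max(2.0 * sdnn**2 - sd1**2, 0.0))   # long-term (cloud length)
    return sdnn, sd1, sd2, sd1 / sd2

# Synthetic RR series: slow drift plus beat-to-beat noise (illustrative only)
rng = np.random.default_rng(1)
rr = 800.0 + 0.1 * np.cumsum(rng.normal(0.0, 5.0, 500)) \
           + rng.normal(0.0, 20.0, 500)
sdnn, sd1, sd2, ratio = hrv_descriptors(rr)
print(sdnn, sd1, sd2, ratio)
```

By construction the descriptors satisfy the identity 2·SDNN² = SD1² + SD2², so SDNN, SD1 and SD2 are not independent measures.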
The Distance Standard Deviation
Edelmann, Dominic; Richards, Donald; Vogel, Daniel
2017-01-01
The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
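The empirical (plug-in) distance standard deviation can be computed from the double-centered pairwise distance matrix; a Gini mean difference helper is included so the two upper bounds stated in the abstract can be checked numerically. Function names are hypothetical.

```python
import numpy as np

def distance_sd(x):
    """Empirical distance standard deviation of a 1-D sample via the
    double-centered pairwise distance matrix (V-statistic version)."""
    x = np.asarray(x, dtype=float)
    a = np.abs(x[:, None] - x[None, :])                       # pairwise distances
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()  # double centering
    return np.sqrt((A ** 2).mean())

def gini_mean_difference(x):
    """Plug-in Gini mean difference: mean absolute pairwise distance."""
    x = np.asarray(x, dtype=float)
    return np.abs(x[:, None] - x[None, :]).mean()
```

For the empirical distribution of any sample, the distance standard deviation should not exceed the (population-form) classical standard deviation nor the Gini mean difference, mirroring the inequalities the abstract states.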
Computational approach to large quantum dynamical problems
International Nuclear Information System (INIS)
Friesner, R.A.; Brunet, J.P.; Wyatt, R.E.; Leforestier, C.; Binkley, S.
1987-01-01
The organizational structure is described for a new program that permits computations on a variety of quantum mechanical problems in chemical dynamics and spectroscopy. Particular attention is devoted to developing and using algorithms that exploit the capabilities of current vector supercomputers. A key component in this procedure is the recursive transformation of the large sparse Hamiltonian matrix into a much smaller tridiagonal matrix. An application to time-dependent laser molecule energy transfer is presented. Rate of energy deposition in the multimode molecule for systematic variations in the molecular intermode coupling parameters is emphasized
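The "recursive transformation of the large sparse Hamiltonian matrix into a much smaller tridiagonal matrix" is the Lanczos recursion. A minimal dense-matrix sketch (no reorthogonalization, no breakdown handling, hypothetical names) is:

```python
import numpy as np

def lanczos(H, v0, m):
    """Lanczos recursion: project a Hermitian matrix H onto an m-dimensional
    Krylov space, producing a small tridiagonal matrix T and the basis V."""
    n = len(v0)
    V = np.zeros((m, n))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    V[0] = v
    w = H @ v
    alpha[0] = v @ w
    w = w - alpha[0] * v
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)      # assumes no breakdown (beta > 0)
        v = w / beta[j - 1]
        V[j] = v
        w = H @ v - beta[j - 1] * V[j - 1]   # three-term recurrence
        alpha[j] = v @ w
        w = w - alpha[j] * v
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, V
```

The eigenvalues of the small T (Ritz values) approximate the extreme eigenvalues of the large H, which is what makes the reduction useful for sparse quantum Hamiltonians; a production code would exploit sparsity in the matrix-vector product H @ v.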
Segmentation Using Symmetry Deviation
DEFF Research Database (Denmark)
Hollensen, Christian; Højgaard, L.; Specht, L.
2011-01-01
… of the CT-scans into a single atlas. Afterwards, the standard deviation of anatomical symmetry for the 20 normal patients was evaluated using non-rigid registration and registered onto the atlas to create an atlas of normal anatomical symmetry deviation. The same non-rigid registration was used on the 10 hypopharyngeal cancer patients to find anatomical symmetry and evaluate it against the standard deviation of the normal patients to locate pathologic volumes. Combining the information with an absolute PET threshold of 3 standard uptake value (SUV), a volume was automatically delineated. The overlap of automated … The standard deviation of the anatomical symmetry, seen in the figure for one patient along CT and PET, was extracted for normal patients and compared with the deviation of cancer patients, giving a new way of determining cancer pathology location. Using the novel method an overlap concordance index …
Wu, Chi-Chuan
2014-03-01
A simple and appropriate approach for evaluating an acceptable alignment of bone around the knee during operation has not yet been reported. Thirty-five men and 35 women presenting with nonunion or malunion of the unilateral femoral shaft were included in the first study. Using the standing scanograph, the contralateral normal lower extremity was measured to determine the normal deviation angle (DA) of the medial malleolus when the medial aspect of the knee was placed in the midline of the body. In the second study, the normal DA from individual patients was used as a reference to evaluate knee alignment during operation in 40 other patients presenting with distal femoral or proximal tibial nonunion or malunion. The clinical and knee functional outcomes of these 40 patients were investigated. The average normal DA was 4.2° in men and 6.0° in women (p …). … alignment was maintained in all 30 patients with fracture union. Satisfactory function of the knee was achieved in 28 patients (82%, p …). … alignment of bone around the knee during operation. Level IV, Case series. Copyright © 2012 Elsevier B.V. All rights reserved.
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2018-03-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive, robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degrees of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
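The "easy-to-implement version of iteratively reweighted least squares" can be illustrated for the simplest special case: a pure regression model with independent scaled t-distributed errors and fixed degrees of freedom (no AR part, nu not estimated). The sketch below is therefore a deliberate simplification of the paper's full ECME algorithm, with hypothetical names.

```python
import numpy as np

def t_irls(X, y, nu=4.0, n_iter=50):
    """Robust regression with Student-t errors via iteratively reweighted
    least squares (EM with fixed degrees of freedom nu). Outliers receive
    small weights and barely influence the fit."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS start
    sigma2 = np.mean((y - X @ beta) ** 2)
    for _ in range(n_iter):
        r = y - X @ beta
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)      # E-step: latent weights
        W = np.sqrt(w)
        beta = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]  # M-step
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / n
    return beta, sigma2
```

With a handful of gross outliers in an otherwise clean linear trend, the t-weighted fit recovers the true coefficients where ordinary least squares would be visibly biased.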
International Nuclear Information System (INIS)
Schiller, Kilian; Specht, Hanno; Kampfer, Severin; Duma, Marciana Nona; Petrucci, Alessia; Geinitz, Hans; Schuster, Tibor
2014-01-01
The goal of this study was to assess the impact of different setup approaches in image-guided radiotherapy (IGRT) of the prostatic gland. In all, 28 patients with prostate cancer were enrolled in this study. After the placement of an endorectal balloon, the planning target volume (PTV) was treated to a dose of 70 Gy in 35 fractions. A simultaneously integrated boost (SIB) of 76 Gy (2.17 Gy per fraction and per day) was delivered to a smaller target volume. All patients underwent daily prostate-aligned IGRT by megavoltage CT (MVCT). Retrospectively, three different setup approaches were evaluated by comparison to the prostate alignment: setup by skin alignment, endorectal balloon alignment, and automatic registration by bones. A total of 2,940 setup deviations were analyzed in 980 fractions. Compared to prostate alignment, skin mark alignment was associated with substantial displacements, which were ≥ 8 mm in 13 %, 5 %, and 44 % of all fractions in the lateral, longitudinal, and vertical directions, respectively. Endorectal balloon alignment yielded displacements ≥ 8 mm in 3 %, 19 %, and 1 % of all setups; and ≥ 3 mm in 27 %, 58 %, and 18 % of all fractions, respectively. For bone matching, the values were 1 %, 1 %, and 2 % and 3 %, 11 %, and 34 %, respectively. For prostate radiotherapy, setup by skin marks alone is inappropriate for patient positioning because, during almost half of the fractions, parts of the prostate would not be targeted successfully with an 8-mm safety margin. Bone matching performs better, but not sufficiently for safety margins ≤ 3 mm. Endorectal balloon matching can be combined with bone alignment to increase accuracy in the vertical direction when prostate-based setup is not available. Daily prostate alignment remains the gold standard for high-precision radiotherapy with small safety margins. (orig.)
DEFF Research Database (Denmark)
Rocha, Vera; Van Praag, Mirjam; Carneiro, Anabela
This paper studies three related questions: To what extent do otherwise similar startups employ different quantities and qualities of human capital at the moment of entry? How persistent are initial human capital choices over time? And how does deviating from human capital benchmarks influence firm … , founders' human capital, and the ownership structure of startups (solo entrepreneurs versus entrepreneurial teams). We then study the survival implications of exogenous deviations from these benchmarks, based on spline models for survival data. Our results indicate that (especially negative) deviations from the benchmark can be substantial, are persistent over time, and hinder the survival of firms. The implications may, however, vary according to the sector and the ownership structure at entry. Given the stickiness of initial choices, wrong human capital decisions at entry turn out to be close to irreversible …
Avian surveys of large geographical areas: A systematic approach
Scott, J.M.; Jacobi, J.D.; Ramsey, F.L.
1981-01-01
A multidisciplinary team approach was used to simultaneously map the distribution of birds, selected food items, and major vegetation types in 34,000- to 140,000-ha tracts in native Hawaiian forests. By using a team approach, large savings in time can be realized over attempts to conduct similar surveys of smaller scope, and a systems approach to management problems is made easier. The methods used in survey design, training observers, and documenting bird numbers and habitat descriptions are discussed in detail.
Energy Technology Data Exchange (ETDEWEB)
Schiller, Kilian; Specht, Hanno; Kampfer, Severin; Duma, Marciana Nona [Technische Universitaet Muenchen Klinikum rechts der Isar, Department of Radiation Oncology, Muenchen (Germany); Petrucci, Alessia [University of Florence, Department of Radiation Oncology, Florence (Italy); Geinitz, Hans [Krankenhaus der Barmherzigen Schwestern Linz, Department of Radiation Oncology, Linz (Austria); Schuster, Tibor [Klinikum Rechts der Isar, Technische Universitaet Muenchen, Institute for Medical Statistics and Epidemiology, Muenchen (Germany)
2014-08-15
The goal of this study was to assess the impact of different setup approaches in image-guided radiotherapy (IGRT) of the prostatic gland. In all, 28 patients with prostate cancer were enrolled in this study. After the placement of an endorectal balloon, the planning target volume (PTV) was treated to a dose of 70 Gy in 35 fractions. A simultaneously integrated boost (SIB) of 76 Gy (2.17 Gy per fraction and per day) was delivered to a smaller target volume. All patients underwent daily prostate-aligned IGRT by megavoltage CT (MVCT). Retrospectively, three different setup approaches were evaluated by comparison to the prostate alignment: setup by skin alignment, endorectal balloon alignment, and automatic registration by bones. A total of 2,940 setup deviations were analyzed in 980 fractions. Compared to prostate alignment, skin mark alignment was associated with substantial displacements, which were ≥ 8 mm in 13 %, 5 %, and 44 % of all fractions in the lateral, longitudinal, and vertical directions, respectively. Endorectal balloon alignment yielded displacements ≥ 8 mm in 3 %, 19 %, and 1 % of all setups; and ≥ 3 mm in 27 %, 58 %, and 18 % of all fractions, respectively. For bone matching, the values were 1 %, 1 %, and 2 % and 3 %, 11 %, and 34 %, respectively. For prostate radiotherapy, setup by skin marks alone is inappropriate for patient positioning because, during almost half of the fractions, parts of the prostate would not be targeted successfully with an 8-mm safety margin. Bone matching performs better, but not sufficiently for safety margins ≤ 3 mm. Endorectal balloon matching can be combined with bone alignment to increase accuracy in the vertical direction when prostate-based setup is not available. Daily prostate alignment remains the gold standard for high-precision radiotherapy with small safety margins. (orig.)
Detecting deviating behaviors without models
Lu, X.; Fahland, D.; van den Biggelaar, F.J.H.M.; van der Aalst, W.M.P.; Reichert, M.; Reijers, H.A.
2016-01-01
Deviation detection is a set of techniques that identify deviations from normative processes in real process executions. These diagnostics are used to derive recommendations for improving business processes. Existing detection techniques identify deviations either only on the process instance level
Importance Sampling, Large Deviations, and Differential Games
2002-01-01
A modular approach to creating large engineered cartilage surfaces.
Ford, Audrey C; Chui, Wan Fung; Zeng, Anne Y; Nandy, Aditya; Liebenberg, Ellen; Carraro, Carlo; Kazakia, Galateia; Alliston, Tamara; O'Connell, Grace D
2018-01-23
Native articular cartilage has limited capacity to repair itself from focal defects or osteoarthritis. Tissue engineering has provided a promising biological treatment strategy that is currently being evaluated in clinical trials. However, translating these techniques to large engineered tissues remains a significant challenge. In this study, we present a method for developing large-scale engineered cartilage surfaces through modular fabrication. Modular Engineered Tissue Surfaces (METS) uses the well-known but largely under-utilized self-adhesion properties of de novo tissue to create large scaffolds with nutrient channels. Compressive mechanical properties were evaluated throughout METS specimens, and the tensile mechanical strength of the bonds between attached constructs was evaluated over time. Raman spectroscopy, biochemical assays, and histology were performed to investigate matrix distribution. Results showed that by Day 14, stable connections had formed between the constructs in the METS samples. By Day 21, bonds were robust enough to form a rigid sheet and continued to increase in size and strength over time. Compressive mechanical properties and glycosaminoglycan (GAG) content of METS and individual constructs increased significantly over time. The METS technique builds on established tissue engineering accomplishments of developing constructs with GAG composition and compressive properties approaching native cartilage. This study demonstrated that modular fabrication is a viable technique for creating large-scale engineered cartilage, which can be broadly applied to many tissue engineering applications and construct geometries. Copyright © 2017 Elsevier Ltd. All rights reserved.
TERMINOLOGY MANAGEMENT FRAMEWORK DEVIATIONS IN PROJECTS
Directory of Open Access Journals (Sweden)
Олена Борисівна ДАНЧЕНКО
2015-05-01
The article reviews new approaches to managing project deviations (risks, changes, problems). It proposes integrated control of these project parameters and, by analogy with medical terminological systems, builds a new system for managing terminological deviations in projects. Using an improved method of definition triads, the medical terms that make up the terminological basis are analyzed. Using the method of analogy, new definitions for managing deviations in projects are proposed. Based on triad integrity, a new triad system for project deviation management is built, from which a new methodology for managing deviations in projects can subsequently be developed by the same analogy.
Visualizing the Sample Standard Deviation
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
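The identity interpreted in this abstract is easy to verify numerically: the sample SD equals the square root of twice the mean squared pairwise half deviation (x_i - x_j)/2 over all unordered pairs. A short sketch (hypothetical function name):

```python
import numpy as np
from itertools import combinations

def sd_from_half_deviations(x):
    """Sample SD as sqrt(2 * mean of squared pairwise half deviations
    ((x_i - x_j)/2)^2 over all unordered pairs {i, j}."""
    x = np.asarray(x, dtype=float)
    sq = [((a - b) / 2.0) ** 2 for a, b in combinations(x, 2)]
    return np.sqrt(2.0 * np.mean(sq))
```

This agrees exactly with the usual ddof=1 sample standard deviation, since the sum of squared pairwise differences equals n(n-1) times the sample variance.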
A Technical Approach on Large Data Distributed Over a Network
Directory of Open Access Journals (Sweden)
Suhasini G
2011-12-01
Data mining is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. For a database with a number of records and a set of classes such that each record belongs to one of the given classes, the problem of classification is to decide the class to which a given record belongs. The classification problem is also to generate a model for each class from a given data set. We make use of supervised classification, in which we have a training dataset of records and, for each record, the class to which it belongs is known. There are many approaches to supervised classification. Decision trees are attractive in a data mining environment because they represent rules; rules can be readily expressed in natural language and can even be mapped to database access languages. Classification based on decision trees is one of the important problems in data mining and has applications in many areas. Database systems have become highly distributed, and many paradigms are in use. We consider the problem of inducing decision trees in a large network of highly distributed databases. Classification based on decision trees can exploit existing distributed databases in healthcare, bioinformatics, and human-computer interaction, and the view that these databases will soon contain large amounts of data characterized by high dimensionality. Current decision tree algorithms would require high communication bandwidth and memory; they are less efficient, and their scalability degrades when executed on such large volumes of data. Approaches are therefore being developed to improve scalability and to analyse data distributed over a network. [Keywords: data mining, decision tree, decision tree induction, distributed data, classification]
Simplified approach for estimating large early release frequency
International Nuclear Information System (INIS)
Pratt, W.T.; Mubayi, V.; Nourbakhsh, H.; Brown, T.; Gregory, J.
1998-04-01
The US Nuclear Regulatory Commission (NRC) Policy Statement related to Probabilistic Risk Analysis (PRA) encourages greater use of PRA techniques to improve safety decision-making and enhance regulatory efficiency. One activity in response to this policy statement is the use of PRA in support of decisions related to modifying a plant's current licensing basis (CLB). Risk metrics such as core damage frequency (CDF) and Large Early Release Frequency (LERF) are recommended for use in making risk-informed regulatory decisions and also for establishing acceptance guidelines. This paper describes a simplified approach for estimating LERF, and changes in LERF resulting from changes to a plant's CLB.
[Laparoscopic approach in large hiatal hernia--particular considerations].
Munteanu, R; Copăescu, C; Iosifescu, R; Timişescu, Lucia; Dragomirescu, C
2003-01-01
Large hiatal hernias are associated with permanent or intermittent protrusion of more than one third of the stomach into the chest, alone or together with other organs, a hiatal defect greater than 5 cm, and various complications related to the morphological and physiological modifications. While the laparoscopic approach to small hiatal hernias and gastro-esophageal reflux disease is a standard procedure, in large hiatal hernias a number of questions and controversies persists. Between 1995 and 2002, 23 patients with large hiatal hernias (9 men, 14 women), mean age 65.8 years (range 49 to 77), underwent laparoscopic surgery. The majority of the patients had complications of the disease (dysphagia, severe esophagitis, anemia, respiratory and cardiac failure). In 16 cases there was a sliding hernia (one recurrent after an open procedure), in 2 a paraesophageal hernia, and in 5 a mixed hernia (two of the "upside-down" type). In 7 cases we performed, in the same operation, cholecystectomy for gallbladder stones, and in one case a Heller myotomy for achalasia. In all cases the repair was performed using interrupted stitches to approximate the crura, but in three of them (recurrent and upside-down hernias) we considered it necessary to repair with a polypropylene mesh (10 x 5 cm) with a "keyhole" for the esophagus. In these particular cases we did not perform an antireflux procedure; in the other 20 cases a short floppy Nissen fundoplication was done. During the operation one patient developed a left pneumothorax and required pleural drainage. Postoperatively, one patient had dysphagia treated by pneumatic dilatation, and another died 3 weeks after surgery of severe respiratory and cardiac failure. The laparoscopic approach is a feasible and effective procedure with good postoperative results, but it requires good skills in minimally invasive technique.
Large neutrino mixings in MSSM and SUSY GUTs: Democratic approach
International Nuclear Information System (INIS)
Shafi, Qaisar; Tavartkiladze, Zurab
2003-01-01
We show how, with aid from a U(1) flavor symmetry, the hierarchical structure in the charged fermion sector and a democratic approach for neutrinos that yields large solar and atmospheric neutrino mixings can be simultaneously realized in the MSSM framework. In SU(5), due to the unified multiplets, we encounter difficulties. Namely, democracy for the neutrinos leads to a wrong hierarchical pattern for charged fermion masses and mixings. We discuss how this is overcome in flipped SU(5). We then proceed to an example based on a 5D SUSY SU(5) GUT in which the neutrino democracy idea can be realized. A crucial role is played by bulk states, the so-called 'copies', which are split by compactifying the fifth dimension on an S^1/(Z2 x Z2') orbifold
Fidelity deviation in quantum teleportation
Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir
2018-01-01
We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states and the fidelity deviation is their standard deviation, which is referred to as a concept of fluctuation or universality. In the analysis, we find the condition to optimize both measures under a noisy quantum channel---we here consider the so-called Werner channel. To characterize our resu...
A convex optimization approach for solving large scale linear systems
Directory of Open Access Journals (Sweden)
Debora Cores
2017-01-01
The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. To find the global minimizers of this new convex function, any low-cost iterative optimization technique could be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
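The paper's specific non-quadratic convex function is not reproduced in the abstract, so the sketch below illustrates the spectral gradient idea on the ordinary least-squares objective f(x) = 0.5*||Ax - b||^2 instead, using the Barzilai-Borwein (spectral) step and an identity projection. All names and defaults are assumptions, not the paper's algorithm.

```python
import numpy as np

def spg_least_squares(A, b, x0=None, tol=1e-10, max_iter=2000, proj=lambda x: x):
    """Spectral (Barzilai-Borwein) projected gradient on f(x) = 0.5||Ax-b||^2.
    proj is the projection onto the feasible set (identity = unconstrained)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    g = A.T @ (A @ x - b)          # gradient of f
    lam = 1.0                      # initial step length
    for _ in range(max_iter):
        x_new = proj(x - lam * g)  # projected gradient step
        g_new = A.T @ (A @ x_new - b)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g
        # Barzilai-Borwein spectral step: lam = s's / s'y (safeguarded)
        lam = (s @ s) / (s @ y) if s @ y > 1e-300 else 1.0
        x, g = x_new, g_new
    return x
```

For a consistent, full-rank rectangular system the iterates converge to the least-squares solution; swapping in a box or simplex projection for `proj` gives the constrained variants the abstract mentions.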
An Analysis of the Linguistic Deviation in Chapter X of Oliver Twist
Institute of Scientific and Technical Information of China (English)
刘聪
2013-01-01
Charles Dickens is one of the greatest critical realist writers of the Victorian Age. In language, he is often compared with William Shakespeare for his adeptness with the vernacular and his large vocabulary. Charles Dickens achieved a recognizable place among English writers through the use of the stylistic features of his fictional language. Oliver Twist is the best representative of Charles Dickens' style, which makes it the most appropriate choice for the present stylistic study of Charles Dickens. No one who has ever read the dehumanizing workhouse scenes of Oliver Twist and the dark, criminal underworld life can forget them. This thesis attempts to investigate Oliver Twist through the approach of modern stylistics, particularly the theory of linguistic deviation. This thesis consists of an introduction, the main body and a conclusion. The introduction offers a brief summary of the comments on Charles Dickens and Chapter X of Oliver Twist, introduces the newly developed linguistic deviation theories, and presents the theories on which this thesis is based. The main body explores the deviation effects produced from four aspects: lexical deviation, grammatical deviation, graphological deviation, and semantic deviation. It endeavors to show Dickens' manipulation of language and the effects achieved through this manipulation. The conclusion mainly sums up the previous analysis, and reveals the theme of the novel, the positive effects of linguistic deviation, and the significance of applying deviation analysis.
A Modular Approach To Developing A Large Deployable Reflector
Pittman, R.; Leidich, C.; Mascy, F.; Swenson, B.
1984-01-01
NASA is currently studying the feasibility of developing a Large Deployable Reflector (LDR) astronomical facility to perform astrophysical studies of the infrared and submillimeter portion of the spectrum in the mid-1990s. The LDR concept was recommended by the Astronomy Survey Committee of the National Academy of Sciences as one of two space based projects to be started this decade. The current baseline calls for a 20 m (65.6 ft) aperture telescope diffraction limited at 30 μm and automatically deployed from a single Shuttle launch. The volume, performance, and single launch constraints place great demands on the technology and place LDR beyond the state-of-the-art in certain areas such as lightweight reflector segments. The advent of the Shuttle is opening up many new options and capabilities for producing large space systems. Until now, LDR has always been conceived as an integrated system, deployed autonomously in a single launch. This paper will look at a combination of automatic deployment and on-orbit assembly that may reduce the technological complexity and cost of the LDR system. Many technological tools are now in use or under study that will greatly enhance our capabilities to do assembly in space. Two Shuttle volume budget scenarios will be examined to assess the potential of these tools to reduce the LDR system complexity. Further study will be required to reach the full optimal combination of deployment and assembly, since in most cases the capabilities of these new tools have not been demonstrated. In order to take maximum advantage of these concepts, the design of LDR must be flexible and allow one subsystem to be modified without adversely affecting the entire system. One method of achieving this flexibility is to use a modular design approach in which the major subsystems are physically separated during launch and assembled on orbit. A modular design approach facilitates this flexibility but requires that the subsystems be interfaced in a simple
A practical and automated approach to large area forest disturbance mapping with remote sensing.
Directory of Open Access Journals (Sweden)
Mutlu Ozdogan
In this paper, I describe a set of procedures that automate forest disturbance mapping using a pair of Landsat images. The approach is built on the traditional pair-wise change detection method, but is designed to extract training data without user interaction and uses a robust classification algorithm capable of handling incorrectly labeled training data. The steps in this procedure include: (i) creating masks for water, non-forested areas, clouds, and cloud shadows; (ii) identifying training pixels whose value is above or below a threshold defined by the number of standard deviations from the mean value of the histograms generated from local windows in the short-wave infrared (SWIR) difference image; (iii) filtering the original training data through a number of classification algorithms using an n-fold cross validation to eliminate mislabeled training samples; and finally, (iv) mapping forest disturbance using a supervised classification algorithm. When applied to 17 Landsat footprints across the U.S. at five-year intervals between 1985 and 2010, the proposed approach produced forest disturbance maps with 80 to 95% overall accuracy, comparable to those obtained from traditional approaches to forest change detection. The primary sources of misclassification errors included inaccurate identification of forests (errors of commission), issues related to the land/water mask, and clouds and cloud shadows missed during image screening. The approach requires images from the peak growing season, at least for the deciduous forest sites, and cannot readily distinguish forest harvest from natural disturbances or other types of land cover change. The accuracy of detecting forest disturbance diminishes with the number of years between the images that make up the image pair. Nevertheless, the relatively high accuracies, little or no user input needed for processing, speed of map production, and simplicity of the approach make the new method especially practical for
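Step (ii) above can be sketched with a single global window instead of the paper's local windows (a deliberate simplification); the k-standard-deviation threshold and the "stable" band are assumed parameters, and the function name is hypothetical.

```python
import numpy as np

def candidate_training_pixels(swir_diff, k=2.0, mask=None):
    """Simplified step (ii): label pixels whose SWIR difference lies more
    than k standard deviations above the mean as candidate 'disturbed'
    training pixels, and pixels near the mean as candidate 'stable' pixels.
    The paper computes these statistics in local windows, not globally."""
    d = np.asarray(swir_diff, dtype=float)
    valid = np.isfinite(d) if mask is None else (mask & np.isfinite(d))
    mu, sigma = d[valid].mean(), d[valid].std()
    disturbed = valid & (d > mu + k * sigma)          # large positive change
    stable = valid & (np.abs(d - mu) < 0.5 * sigma)   # near the background mode
    return disturbed, stable
```

The two boolean maps would then feed step (iii), where cross-validated classifiers prune mislabeled samples before the final supervised classification.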
Prospective detection of large prediction errors: a hypothesis testing approach
International Nuclear Information System (INIS)
Ruan, Dan
2010-01-01
Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
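The reduced test described above (flag a potentially large prediction error when the empirical variance of the predictive distribution exceeds a threshold) can be sketched as follows. Calibrating the threshold from a quantile of historical variances is an illustrative assumption, not the paper's method-of-moments procedure.

```python
import numpy as np

def calibrate_threshold(history_vars, alpha=0.05):
    """Choose the variance threshold as the (1 - alpha) quantile of the
    predictive variances observed during a calibration period."""
    return np.quantile(history_vars, 1.0 - alpha)

def flag_unreliable(ensemble, threshold):
    """Reduced LRT, per the abstract: declare a 'potentially large
    prediction error' when the empirical variance of the predictive
    ensemble for the next tumor position exceeds the threshold."""
    return float(np.var(ensemble)) > threshold
```

In use, a high-variance (uncertain) predictive ensemble triggers the flag, so the treatment beam can be paused or the prediction scheme adjusted, while low-variance predictions pass through.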
Solution approach for a large scale personnel transport system for a large company in Latin America
Energy Technology Data Exchange (ETDEWEB)
Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis
2017-07-01
The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico where many routes and vehicles are currently used to service 525 points. The routing system proposed can be applied to many cities in the Latin-American region. Design/methodology/approach: This system was modelled as a VRP model considering the use of real-world transit times, and the fact that routes start at the farthest point from the destination center. Experiments were performed on different-sized sets of service points. As the size of the instances was increased, the performance of the heuristic method was assessed in comparison with the results of an exact algorithm, the results remaining very close between both. When the size of the instance was full-scale and the exact algorithm took too much time to solve the problem, the heuristic algorithm provided a feasible solution. Supported by the validation with smaller-scale instances, where the difference between both solutions was close to 6%, the full-scale solution obtained with the heuristic algorithm was considered to be within that same range. Findings: The proposed modelling and solving method provided a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban distribution of the cities in Latin America is unlike that of other regions in the world. The general layout of the large cities in this region includes a small town center, usually antique, and a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. The use of a heuristic VRP combined with the results of an exact VRP allowed obtaining an improved routing plan specific to the requirements of the region.
Solution approach for a large scale personnel transport system for a large company in Latin America
International Nuclear Information System (INIS)
Garzón-Garnica, Eduardo-Arturo; Caballero-Morales, Santiago-Omar; Martínez-Flores, José-Luis
2017-01-01
The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The proposed routing system can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a vehicle routing problem (VRP) using real-world transit times, with routes starting at the farthest point from the destination center. Experiments were performed on service-point sets of different sizes. As instance size increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too long to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by validation on smaller instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic was considered to lie within the same range. Findings: The proposed modelling and solving method yielded a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban layout of Latin-American cities differs from that of other regions of the world: large cities typically combine a small, often antique town center with a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.
Solution approach for a large scale personnel transport system for a large company in Latin America
Directory of Open Access Journals (Sweden)
Eduardo-Arturo Garzón-Garnica
2017-10-01
Full Text Available Purpose: The present paper focuses on the modelling and solution of a large-scale personnel transportation system in Mexico, where many routes and vehicles are currently used to service 525 points. The proposed routing system can be applied to many cities in the Latin-American region. Design/methodology/approach: The system was modelled as a vehicle routing problem (VRP) using real-world transit times, with routes starting at the farthest point from the destination center. Experiments were performed on service-point sets of different sizes. As instance size increased, the performance of the heuristic method was assessed against the results of an exact algorithm, and the two remained very close. When the instance was full-scale and the exact algorithm took too long to solve the problem, the heuristic algorithm still provided a feasible solution. Supported by validation on smaller instances, where the difference between the two solutions was close to 6%, the full-scale solution obtained with the heuristic was considered to lie within the same range. Findings: The proposed modelling and solving method yielded a solution that would produce significant savings in the daily operation of the routes. Originality/value: The urban layout of Latin-American cities differs from that of other regions of the world: large cities typically combine a small, often antique town center with a somewhat disordered outer region. The lack of vehicle-centered urban planning poses distinct challenges for vehicle routing problems in the region. Combining a heuristic VRP with the results of an exact VRP made it possible to obtain an improved routing plan specific to the requirements of the region.
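The construction heuristic the abstracts describe (routes seeded at the point farthest from the destination center) can be sketched in a hypothetical minimal form, with Euclidean distances standing in for the paper's real-world transit times and a simple per-vehicle stop limit standing in for its capacity model:

```python
import math

def build_routes(depot, points, capacity):
    """Greedy VRP sketch: each route starts at the unserved point
    farthest from the depot, then extends to the nearest unserved
    neighbor until the vehicle's stop capacity is exhausted."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unserved = set(points)
    routes = []
    while unserved:
        # Seed the route at the farthest remaining service point.
        start = max(unserved, key=lambda p: dist(depot, p))
        route, load, current = [start], 1, start
        unserved.remove(start)
        # Extend by nearest-neighbor moves while capacity remains.
        while load < capacity and unserved:
            nxt = min(unserved, key=lambda p: dist(current, p))
            route.append(nxt)
            unserved.remove(nxt)
            load, current = load + 1, nxt
        routes.append(route)
    return routes
```

In the actual study, a transit-time matrix would replace the Euclidean `dist` and the full-scale instance would have 525 points; the heuristic's value there is producing a feasible plan when the exact solver times out.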
Computer generation of random deviates
International Nuclear Information System (INIS)
Cormack, John
1991-01-01
The need for random deviates arises in many scientific applications. In medical physics, Monte Carlo simulations have been used in radiology, radiation therapy and nuclear medicine. Specific instances include the modelling of x-ray scattering processes and the addition of random noise to images or curves in order to assess the effects of various processing procedures. Reliable sources of random deviates with statistical properties indistinguishable from true random deviates are a fundamental necessity for such tasks. This paper provides a review of computer algorithms which can be used to generate uniform random deviates and other distributions of interest to medical physicists, along with a few caveats relating to various problems and pitfalls which can occur. Source code listings for the generators discussed (in FORTRAN, Turbo-PASCAL and Data General ASSEMBLER) are available on request from the authors. 27 refs., 3 tabs., 5 figs
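The original listings are in FORTRAN, Turbo-PASCAL and assembler and are not reproduced here; as a hypothetical Python sketch, one standard technique such reviews cover is inverse-transform sampling, which turns uniform deviates into deviates from another distribution:

```python
import math
import random

def exponential_deviate(rate, u=None):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    -ln(1 - U) / rate follows an Exponential(rate) distribution.
    One pitfall of the kind such reviews warn about: U == 1.0
    would make log(0) diverge; random.random() returns [0, 1),
    so that case cannot occur here."""
    if u is None:
        u = random.random()          # uniform deviate in [0, 1)
    return -math.log1p(-u) / rate    # log1p keeps precision for small u

# Sanity check: the mean of Exponential(rate) is 1/rate,
# so a large sample mean should be close to it.
random.seed(42)
sample = [exponential_deviate(2.0) for _ in range(100_000)]
mean = sum(sample) / len(sample)
```

The same pattern applies to any distribution with an invertible CDF; distributions without one (e.g. the normal) need the rejection or ratio-of-uniforms methods the paper also reviews.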
Fidelity deviation in quantum teleportation
Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir
2018-04-01
We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states, and the fidelity deviation is their standard deviation, which captures the fluctuation, or universality, of the performance. In the analysis, we find the condition to optimize both measures under a noisy quantum channel (here, the so-called Werner channel). To characterize our results, we introduce a 2D space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point parametrized by the channel noise. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and to the Bell inequality violation.
Standard Deviation for Small Samples
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
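One representation of this kind (an assumption on our part, since the abstract is excerpted) writes the sample variance through pairwise differences, s^2 = sum over i<j of (x_i - x_j)^2 divided by n(n-1), so no mean need be computed and integer data stay in integers until the final division:

```python
from itertools import combinations
from statistics import variance

def pairwise_variance(xs):
    """Sample variance via the identity
    s^2 = sum_{i<j} (x_i - x_j)^2 / (n * (n - 1)).
    For n = 3 or 4 with integer observations this is easy to do
    by hand: square a handful of differences and divide."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))
```

For example, for {1, 2, 6} the pairwise squared differences are 1, 25 and 16; their sum 42 divided by 3·2 gives a variance of 7, matching the textbook formula.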
Fire management over large landscapes: a hierarchical approach
Kenneth G. Boykin
2008-01-01
Management planning for fires becomes increasingly difficult as scale increases. Stratification provides land managers with multiple scales in which to prepare plans. Using statistical techniques, Geographic Information Systems (GIS), and meetings with land managers, we divided a large landscape of over 2 million acres (White Sands Missile Range) into parcels useful in...
Disruptions in large value payment systems: an experimental approach
Abbink, K.; Bosman, R.; Heijmans, R.; van Winden, F.
2010-01-01
This experimental study investigates the behaviour of banks in a large value payment system. More specifically, we look at 1) the reactions of banks to disruptions in the payment system, 2) the way in which the history of disruptions affects the behaviour of banks (path dependency) and 3) the effect
Disruptions in large value payment systems: An experimental approach
Abbink, K.; Bosman, R.; Heijmans, R.; van Winden, F.; Hellqvist, M.; Laine, T.
2012-01-01
This experimental study investigates the behaviour of banks in a large value payment system. More specifically, we look at 1) the reactions of banks to disruptions in the payment system, 2) the way in which the history of disruptions affects the behaviour of banks (path dependency) and 3) the effect
Baryons in QCD_AS at large N_c: A roundabout approach
International Nuclear Information System (INIS)
Cohen, Thomas D.; Shafer, Daniel L.; Lebed, Richard F.
2010-01-01
QCD_AS, a variant of large-N_c QCD in which quarks transform under the color two-index antisymmetric representation, reduces to standard QCD at N_c = 3 and provides an alternative to the usual large-N_c extrapolation that uses fundamental-representation quarks. Previous strong plausibility arguments assert that the QCD_AS baryon mass scales as N_c^2; however, the complicated combinatoric problem associated with quarks carrying two color indices impeded a complete demonstration. We develop a diagrammatic technique to solve this problem. The key ingredient is the introduction of an effective multigluon vertex: a ''traffic circle'' or roundabout diagram. We show that arbitrarily complicated diagrams can be reduced to simple ones with the same leading N_c scaling using this device, and that the leading contribution to the baryon mass does, in fact, scale as N_c^2.
Parametric Approach in Designing Large-Scale Urban Architectural Objects
Directory of Open Access Journals (Sweden)
Arne Riekstiņš
2011-04-01
Full Text Available When all the disciplines of various science fields converge and develop, new approaches to contemporary architecture arise. The author approaches digital architecture from a parametric viewpoint, revealing its generative capacity, which originates from the aeronautical, naval, automobile and product-design industries. The author also goes explicitly through his design-cycle workflow for testing the latest methodologies in architectural design. The design process steps involved: extrapolating valuable statistical data about the site into three-dimensional diagrams, defining the materiality of what is being produced, ways of presenting structural skin and structure simultaneously, contacting the object with the ground, defining the interior program of the building with floors and possible spaces, the logic of fabrication, and CNC milling of the prototype. The tool developed by the author and reviewed in this article features enormous performative capacity and is applicable to various architectural design scales. Article in English
New approach to large haemorrhoidal prolapse: double stapled haemorrhoidopexy.
Naldini, Gabriele; Martellucci, Jacopo; Talento, Pasquale; Caviglia, Angelo; Moraldi, Luca; Rossi, Mauro
2009-12-01
To verify whether, in large haemorrhoidal prolapse (independently of degree) in patients with no symptoms of obstructed defaecation syndrome, the use of a stapled haemorrhoidopexy variant, double stapled haemorrhoidopexy (DSH), makes it possible to reduce the percentage of failures or relapses and to standardise an objective intraoperative parameter for quantifying internal prolapses, which can then guide the choice of treatment. Between June 2003 and June 2004, 353 patients were treated for haemorrhoidal prolapse. Patients suffering from a large haemorrhoidal prolapse occupying more than half the length of the anal dilator were intraoperatively selected for DSH. Eighty-three patients (23.5%) underwent a DSH. The degrees of the large haemorrhoidal prolapses selected intraoperatively for DSH were distributed as follows: 7.2% second degree, 24% third degree and 68.6% fourth degree. The follow-up period was 48 months. There were three cases (3.6%) of residual disease and five cases (6%) of relapse. The following complications were recorded: urgency at <3 months (7.2%), haemostasis revisions (2.4%) and spontaneously draining anterior haematoma (1.2%). The results for the 270 haemorrhoidal prolapses (38 second degree, 159 third degree and 130 fourth degree) treated with the procedure for prolapse and haemorrhoids were: nine (3.3%) cases of residual disease and 12 (4.4%) relapses. The following complications were recorded: urgency at <3 months (6.6%), haemostasis revisions (2.5%) and spontaneously draining anterior haematoma (0.7%). The intraoperative selection criterion was both efficacious and reproducible. This variant technique, applicable to large haemorrhoidal prolapses, could further improve the quality of treatment of haemorrhoidal conditions using stapled haemorrhoidopexy, without increasing complications.
Industrial approach to piezoelectric damping of large fighter aircraft components
Simpson, John; Schweiger, Johannes
1998-06-01
Different concepts to damp structural vibrations of the vertical tail of fighter aircraft are reported. The various requirements for a vertical tail bias an integrated approach for the design. Several active vibration suppression concepts had been investigated during the preparatory phase of a research programme shared by Daimler-Benz Aerospace Military Aircraft (Dasa), Daimler-Benz Forschung (DBF) and Deutsche Forschungsanstalt für Luft- und Raumfahrt (DLR). Now, in the main phase of the programme, four concepts were finally chosen: two with aerodynamic control surfaces and two with piezoelectric components. One piezo concept is described rigorously; the other concepts are briefly addressed. In the Dasa concept, thin surface piezo actuators are laid out carefully to flatten the dynamic portion of the combined static and dynamic maximum bending moment loading case directly in the shell structure. The second piezo concept, by DLR, involves pre-loaded lead zirconate titanate (PZT) block actuators at host structure fixtures. To this end, a research apparatus was designed and built as a full-scale simplified fin box with carbon fiber reinforced plastic skins and an aluminium stringer-rib substructure restrained by relevant aircraft fixtures. It constitutes a benchmark 3D structural impedance. The engineering design incorporates 7 kg of PZT surface actuators. The structural system should then be excited to more than 15 mm tip displacement amplitude. This prepares the final step to total A/C integration. Typical analysis methods using cyclic thermal analogies adapted to induced load levels are compared. Commercial approaches leading onto basic state-space model interpretation with respect to actuator sizing and positioning, structural integrity constraints, FE validation and testing are described. Both piezoelectric strategies are aimed at straight open-loop performance related to concept weight penalty and input electric power. The required actuators, power
48 CFR 801.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations. 801... Individual deviations. (a) Authority to authorize individual deviations from the FAR and VAAR is delegated to... nature of the deviation. (d) The DSPE may authorize individual deviations from the FAR and VAAR when an...
48 CFR 2001.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2001... Individual deviations. In individual cases, deviations from either the FAR or the NRCAR will be authorized... deviations clearly in the best interest of the Government. Individual deviations must be authorized in...
[The crooked nose: correction of dorsal and caudal septal deviations].
Foda, H M T
2010-09-01
The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 800 patients seeking rhinoplasty to correct external nasal deviations; 71% of these suffered from variable degrees of nasal obstruction. Septal surgery was necessary in 736 (92%) patients, not only to improve breathing, but also to achieve a straight, symmetric external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the nasal dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.
Preperitoneal approach to parastomal hernia with coexistent large incisional hernia.
Egun, A; Hill, J; MacLennan, I; Pearson, R. C
2002-03-01
OBJECTIVE: To assess the outcome of preperitoneal mesh repair of complex incisional herniae incorporating a stoma and large parastomal hernia. METHODS: From 1994 to 1998, symptomatic patients who had repair of combined incisional hernia and parastomal hernia were reviewed. Body mass index, co-morbidity, length of hospital stay, patient satisfaction and outcomes were recorded. RESULTS: Ten patients (seven females and three males), mean age 62 (range 48-80) years underwent primary repair. All had significant comorbidities (ASA grade 3) and mean body mass index was 31.1 (range 20-49). Median hospital stay was 15 (range 8-150) days. Complications were of varying clinical significance (seroma, superficial infection, major respiratory tract infection and stomal necrosis). There were no recurrences after a mean follow up of 54 (range 22-69) months. CONCLUSION: The combination of a parastomal hernia and generalised wound dehiscence is an uncommon but difficult problem. The application of the principles of low-tension mesh repair can provide a satisfactory outcome and low recurrence rate. This must be tempered by recognition of the potential for significant major postoperative complication.
Deviations in human gut microbiota
DEFF Research Database (Denmark)
Casén, C; Vebø, H C; Sekelja, M
2015-01-01
microbiome profiling. AIM: To develop and validate a novel diagnostic test using faecal samples to profile the intestinal microbiota and identify and characterise dysbiosis. METHODS: Fifty-four DNA probes targeting ≥300 bacteria on different taxonomic levels were selected based on ability to distinguish......, and potential clinically relevant deviation in the microbiome from normobiosis. This model was tested in different samples from healthy volunteers and IBS and IBD patients (n = 330) to determine the ability to detect dysbiosis. RESULTS: Validation confirms dysbiosis was detected in 73% of IBS patients, 70...
Note onset deviations as musical piece signatures.
Serrà, Joan; Özaslan, Tan Hakan; Arcos, Josep Lluis
2013-01-01
A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.
Note onset deviations as musical piece signatures.
Directory of Open Access Journals (Sweden)
Joan Serrà
Full Text Available A competent interpretation of a musical composition presents several non-explicit departures from the written score. Timing variations are perhaps the most important ones: they are fundamental for expressive performance and a key ingredient for conferring a human-like quality to machine-based music renditions. However, the nature of such variations is still an open research question, with diverse theories that indicate a multi-dimensional phenomenon. In the present study, we consider event-shift timing variations and show that sequences of note onset deviations are robust and reliable predictors of the musical piece being played, irrespective of the performer. In fact, our results suggest that only a few consecutive onset deviations are already enough to identify a musical composition with statistically significant accuracy. We consider a mid-size collection of commercial recordings of classical guitar pieces and follow a quantitative approach based on the combination of standard statistical tools and machine learning techniques with the semi-automatic estimation of onset deviations. Besides the reported results, we believe that the considered materials and the methodology followed widen the testing ground for studying musical timing and could open new perspectives in related research fields.
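The signature idea, sequences of onset deviations matched against stored pieces, can be sketched as a hypothetical nearest-signature classifier (the study itself combines standard statistical tools with machine learning; the function names and toy data below are illustrative only):

```python
def onset_deviations(nominal, performed):
    """Deviation of each performed note onset from its
    score-nominal onset time (seconds)."""
    return [p - n for n, p in zip(nominal, performed)]

def identify(query, signatures):
    """Nearest-signature match: return the stored piece whose
    onset-deviation sequence is closest, in squared Euclidean
    distance, to the query sequence."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda name: d2(query, signatures[name]))
```

The paper's claim is precisely that such sequences are piece-specific rather than performer-specific, so even a few consecutive deviations suffice for statistically significant identification.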
48 CFR 1301.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... DEPARTMENT OF COMMERCE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 1301.403 Individual deviations. The designee authorized to approve individual deviations from the FAR is set forth in CAM 1301.70. ...
48 CFR 301.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 301... ACQUISITION REGULATION SYSTEM Deviations From the FAR 301.403 Individual deviations. Contracting activities shall prepare requests for individual deviations to either the FAR or HHSAR in accordance with 301.470. ...
48 CFR 1501.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1501.403 Section 1501.403 Federal Acquisition Regulations System ENVIRONMENTAL PROTECTION AGENCY GENERAL GENERAL Deviations 1501.403 Individual deviations. Requests for individual deviations from the FAR and the...
48 CFR 501.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 501... Individual deviations. (a) An individual deviation affects only one contract action. (1) The Head of the Contracting Activity (HCA) must approve an individual deviation to the FAR. The authority to grant an...
48 CFR 1201.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Individual deviations... FEDERAL ACQUISITION REGULATIONS SYSTEM 70-Deviations From the FAR and TAR 1201.403 Individual deviations... Executive Service (SES) official or that of a Flag Officer, may authorize individual deviations (unless (FAR...
48 CFR 401.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 401... AGRICULTURE ACQUISITION REGULATION SYSTEM Deviations From the FAR and AGAR 401.403 Individual deviations. In individual cases, deviations from either the FAR or the AGAR will be authorized only when essential to effect...
48 CFR 2401.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2401... DEVELOPMENT GENERAL FEDERAL ACQUISITION REGULATION SYSTEM Deviations 2401.403 Individual deviations. In individual cases, proposed deviations from the FAR or HUDAR shall be submitted to the Senior Procurement...
48 CFR 2801.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2801... OF JUSTICE ACQUISITION REGULATIONS SYSTEM Deviations From the FAR and JAR 2801.403 Individual deviations. Individual deviations from the FAR or the JAR shall be approved by the head of the contracting...
Mod-ϕ convergence normality zones and precise deviations
Féray, Valentin; Nikeghbali, Ashkan
2016-01-01
The canonical way to establish the central limit theorem for i.i.d. random variables is to use characteristic functions and Lévy’s continuity theorem. This monograph focuses on this characteristic function approach and presents a renormalization theory called mod-ϕ convergence. This type of convergence is a relatively new concept with many deep ramifications, and has not previously been published in a single accessible volume. The authors construct an extremely flexible framework using this concept in order to study limit theorems and large deviations for a number of probabilistic models related to classical probability, combinatorics, non-commutative random variables, as well as geometric and number-theoretical objects. Intended for researchers in probability theory, the text is carefully written and well-structured, containing a great amount of detail and interesting examples.
Allan deviation analysis of financial return series
Hernández-Pérez, R.
2012-05-01
We perform a scaling analysis of the return series of different financial assets using the Allan deviation (ADEV), a quantity employed in time and frequency metrology to characterize the stability of frequency standards, since it has proven robust for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening-price series for assets from different markets spanning around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost every asset, after which the ADEV deviates from scaling. This suggests that clustering, long-range dependence and non-stationarity signatures in the series drive the results for large observation intervals.
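A minimal non-overlapping Allan deviation, of the kind used in time and frequency metrology, can be sketched as follows (the authors' exact estimator and preprocessing are not specified in the abstract, so this is an assumption about the standard form):

```python
def allan_deviation(series, m):
    """Non-overlapping Allan deviation at averaging length m:
    sigma(m) = sqrt( 0.5 * mean( (ybar_{k+1} - ybar_k)^2 ) ),
    where ybar_k are the means of consecutive, disjoint blocks
    of m samples.  Scanning m over decades gives the scaling
    curves described in the abstract."""
    blocks = [series[i:i + m] for i in range(0, len(series) - m + 1, m)]
    means = [sum(b) / m for b in blocks]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(len(means) - 1)]
    return (0.5 * sum(diffs) / len(diffs)) ** 0.5
```

A white-noise-like return series yields an ADEV falling roughly as 1/sqrt(m), which is the uncorrelated behaviour the paper reports at short scales.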
Various approaches to the modelling of large scale 3-dimensional circulation in the Ocean
Digital Repository Service at National Institute of Oceanography (India)
Shaji, C.; Bahulayan, N.; Rao, A.D.; Dube, S.K.
In this paper, the three different approaches to the modelling of large scale 3-dimensional flow in the ocean such as the diagnostic, semi-diagnostic (adaptation) and the prognostic are discussed in detail. Three-dimensional solutions are obtained...
The reinterpretation of standard deviation concept
Ye, Xiaoming
2017-01-01
Existing mathematical theory interprets the concept of standard deviation as a degree of dispersion. Therefore, in measurement theory, both the uncertainty concept and the precision concept, which are expressed as the standard deviation or multiples of it, are also defined as the dispersion of the measurement result, so the conceptual logic becomes tangled. Through comparative analysis of the standard deviation concept and a re-interpretation of the measurement error evaluation principle, this paper points o...
Introducing the Mean Absolute Deviation "Effect" Size
Gorard, Stephen
2015-01-01
This paper revisits the use of effect sizes in the analysis of experimental and similar results, and reminds readers of the relative advantages of the mean absolute deviation as a measure of variation, as opposed to the more complex standard deviation. The mean absolute deviation is easier to use and understand, and more tolerant of extreme…
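A hypothetical sketch of an "effect size" built on the mean absolute deviation rather than the standard deviation follows; the divisor convention (control-group MAD) is our assumption for illustration, since Gorard's exact formulation is not reproduced in the excerpt:

```python
def mean_absolute_deviation(xs):
    """Mean absolute deviation about the arithmetic mean:
    easier to compute and explain than the standard deviation,
    and less influenced by extreme values."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

def mad_effect_size(treated, control):
    """Difference in group means scaled by the control group's
    mean absolute deviation (illustrative convention)."""
    diff = sum(treated) / len(treated) - sum(control) / len(control)
    return diff / mean_absolute_deviation(control)
```

For {1,...,5} the MAD is 1.2 against a standard deviation of about 1.58, which illustrates the paper's point that the two scales differ but carry similar information.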
48 CFR 201.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Individual deviations. 201.403 Section 201.403 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM... Individual deviations. (1) Individual deviations, except those described in 201.402(1) and paragraph (2) of...
48 CFR 3401.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations. 3401.403 Section 3401.403 Federal Acquisition Regulations System DEPARTMENT OF EDUCATION ACQUISITION REGULATION GENERAL ED ACQUISITION REGULATION SYSTEM Deviations 3401.403 Individual deviations. An individual...
48 CFR 1.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Individual deviations. 1.403 Section 1.403 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 1.403 Individual deviations. Individual...
48 CFR 3001.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Individual deviations... from the FAR and HSAR 3001.403 Individual deviations. Unless precluded by law, executive order, or other regulation, the HCA is authorized to approve individual deviation (except with respect to (FAR) 48...
48 CFR 601.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Individual deviations. 601.403 Section 601.403 Federal Acquisition Regulations System DEPARTMENT OF STATE GENERAL DEPARTMENT OF STATE ACQUISITION REGULATIONS SYSTEM Deviations from the FAR 601.403 Individual deviations. The...
48 CFR 1901.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 1901.403 Section 1901.403 Federal Acquisition Regulations System BROADCASTING BOARD OF GOVERNORS GENERAL... Individual deviations. Deviations from the IAAR or the FAR in individual cases shall be authorized by the...
48 CFR 2501.403 - Individual deviations.
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Individual deviations. 2501.403 Section 2501.403 Federal Acquisition Regulations System NATIONAL SCIENCE FOUNDATION GENERAL FEDERAL ACQUISITION REGULATIONS SYSTEM Deviations From the FAR 2501.403 Individual deviations. Individual...
Using Flipped Classroom Approach to Explore Deep Learning in Large Classrooms
Danker, Brenda
2015-01-01
This project used two Flipped Classroom approaches to stimulate deep learning in large classrooms during the teaching of a film module as part of a Diploma in Performing Arts course at Sunway University, Malaysia. The flipped classes utilized either a blended learning approach where students first watched online lectures as homework, and then…
Direct evaluation of free energy for large system through structure integration approach.
Takeuchi, Kazuhito; Tanaka, Ryohei; Yuge, Koretaka
2015-09-30
We propose a new approach, 'structure integration', enabling direct evaluation of the configurational free energy of large systems. The present approach is based on statistical information about the lattice. Through first-principles-based simulation, we find that the present method evaluates the configurational free energy accurately in disordered states above the critical temperature.
A Top-Down Approach to Construct Execution Views of a Large Software-Intensive System
Callo Arias, T.B.; America, P.H.M.; Avgeriou, P.
2011-01-01
This paper presents a top-down approach to construct execution views of a large and complex software-intensive system. Execution views describe what the software does at runtime and how it does it. The presented approach represents a reverse architecting solution that follows a set of pre-defined
Phylogenetic rooting using minimal ancestor deviation.
Tria, Fernando Domingues Kümmel; Landan, Giddy; Dagan, Tal
2017-06-19
Ancestor-descendant relations play a cardinal role in evolutionary theory. Those relations are determined by rooting phylogenetic trees. Existing rooting methods are hampered by evolutionary rate heterogeneity or the unavailability of auxiliary phylogenetic information. Here we present a rooting approach, the minimal ancestor deviation (MAD) method, which accommodates heterotachy by using all pairwise topological and metric information in unrooted trees. We demonstrate the performance of the method, in comparison to existing rooting methods, by the analysis of phylogenies from eukaryotes and prokaryotes. MAD correctly recovers the known root of eukaryotes and uncovers evidence for the origin of cyanobacteria in the ocean. MAD is more robust and consistent than existing methods, provides measures of the root inference quality and is applicable to any tree with branch lengths.
The Standard Deviation of Launch Vehicle Environments
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
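The point about sparse data can be made concrete with a hedged sketch: with only a handful of measurements (the values below are invented for illustration, not flight data), the sample standard deviation is itself poorly constrained, which motivates substituting a typical value such as the 3 dB suggested above.

```python
import statistics

# Hypothetical acoustic levels (dB) from a handful of flights;
# illustrative numbers only, not real measurements.
levels_db = [138.2, 140.5, 139.1, 141.8, 137.9]

sd = statistics.stdev(levels_db)  # sample standard deviation
print(round(sd, 2))  # 1.64
```

A sample SD near 1.6 dB from five points could easily move by a decibel with a few more flights, which is why a conservative typical value is preferred.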
Linguistics deviation, a tool for teaching English grammar: evidence ...
African Journals Online (AJOL)
We have always advocated that those teaching the Use of English must seek out novel ways of teaching English grammar, to take the drudgery out of the present approach. Here, we propose using linguistic deviation as a tool for teaching English grammar. This approach will produce students who are both strong in ...
Partitioned based approach for very large scale database in Indian nuclear power plants
International Nuclear Information System (INIS)
Tiwari, Sachin; Upadhyay, Pushp; Sengupta, Nabarun; Bhandarkar, S.G.; Agilandaeswari
2012-01-01
This paper presents a partition-based approach for handling very large tables, with sizes running from gigabytes to terabytes. The scheme was developed from our experience in handling the large signal storage required in various computer-based data acquisition and control-room operator information systems, such as the Distribution Recording System (DRS) and the Computerised Operator Information System (COIS). Whenever there is a disturbance in an operating nuclear power plant, it triggers an action in which a large volume of data from multiple sources is generated, and this data needs to be stored. Concurrency issues (as the data comes from multiple sources) and the very large amount of data are the problems addressed in this paper by applying the partition-based approach. Advantages of the partition-based approach over other techniques are discussed. (author)
An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling
Directory of Open Access Journals (Sweden)
Theodore W. Manikas
2011-02-01
Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.
General Approach to Characterize Reservoir Fluids Using a Large PVT Database
DEFF Research Database (Denmark)
Varzandeh, Farhad; Yan, Wei; Stenby, Erling Halfdan
2016-01-01
methods. We proposed a general approach to develop correlations for model parameters and applied it to the characterization for the PC-SAFT EoS. The approach consists in first developing the correlations based on the DIPPR database, and then adjusting the correlations based on a large PVT database......, the approach gives better PVT calculation results for the tested systems. Comparison was also made between PC-SAFT with the proposed characterization method and other EoS models. The proposed approach can be applied to other EoS models for improving their fluid characterization. Besides, the challenges...
INDICATIVE MODEL OF DEVIATIONS IN PROJECT
Directory of Open Access Journals (Sweden)
Олена Борисівна ДАНЧЕНКО
2016-02-01
Full Text Available The article describes the process of constructing an indicator model of project deviations, based on a conceptual model of project deviations integrated management (PDIM). During a project, various causes (such as risks, changes, problems, crises, conflicts, stress) lead to deviations in the integrated project indicators: time, cost, quality, and content. To define more precisely where deviations occur in a project and how dangerous they are for the project as a whole, an indicative model of project deviations is needed; it allows identifying the most dangerous deviations, which require PDIM. The well-known IPMA Delta model was taken as the basis for evaluating project success. IPMA Delta assesses the project management competence of an organization in three modules: the I-module ("Individuals"), a self-assessment of personnel; the P-module ("Projects"), a self-assessment of projects and/or programs; and the O-module ("Organization"), used to conduct interviews with selected people during a company audit. In building the indicative model of deviations in the project, the first step is the assessment of project management in the organization using IPMA Delta. Next, a cognitive map and a matrix of the system interconnections of the project are built, simulations are conducted, and a scale of deviations is constructed for the selected project, determining the size and place of the deviations. To identify the detailed causes of deviations in project management, an extended system of indicators based on the Project Excellence project management model is proposed. The proposed indicative model of deviations in projects allows estimating the size of a deviation, identifies the place of negative deviations in the project more accurately, and provides the project manager with information for operational decision-making in managing deviations during project implementation
Phase separation and large deviations of lattice active matter
Whitelam, Stephen; Klymko, Katherine; Mandal, Dibyendu
2018-04-01
Off-lattice active Brownian particles form clusters and undergo phase separation even in the absence of attractions or velocity-alignment mechanisms. Arguments that explain this phenomenon appeal only to the ability of particles to move persistently in a direction that fluctuates, but existing lattice models of hard particles that account for this behavior do not exhibit phase separation. Here we present a lattice model of active matter that exhibits motility-induced phase separation in the absence of velocity alignment. Using direct and rare-event sampling of dynamical trajectories, we show that clustering and phase separation are accompanied by pronounced fluctuations of static and dynamic order parameters. This model provides a complement to off-lattice models for the study of motility-induced phase separation.
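A minimal sketch of the ingredients such a lattice model needs (persistent motion, occasional tumbling, hard-core exclusion) is shown below. It is a toy illustration assuming a 1D periodic lattice, not the model of the paper:

```python
import random

def step(positions, directions, size, flip_prob, rng):
    """One sweep of a minimal 1D hard-core 'active' lattice gas.

    Each particle keeps moving in its current direction (persistence),
    occasionally reverses it (tumbling), and a hop is rejected if the
    target site is occupied (hard-core exclusion). Illustrative only.
    """
    occupied = set(positions)
    for k in range(len(positions)):
        if rng.random() < flip_prob:
            directions[k] *= -1          # tumble: reverse direction
        target = (positions[k] + directions[k]) % size
        if target not in occupied:       # hop only onto empty sites
            occupied.discard(positions[k])
            occupied.add(target)
            positions[k] = target
    return positions, directions

rng = random.Random(0)
pos, dirs = [0, 1, 5], [1, 1, -1]
for _ in range(100):
    pos, dirs = step(pos, dirs, 10, 0.1, rng)
print(len(set(pos)))  # 3: particle number is conserved
```

Exclusion guarantees that the three particles never coincide; clustering statistics in the paper's 2D model emerge from the same two ingredients at much larger scales.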
A large deviation based splitting estimation of power flow reliability
W.S. Wadman (Wander); D.T. Crommelin (Daan); A.P. Zwart (Bert)
2016-01-01
Given the continued integration of intermittent renewable generators in electrical power grids, connection overloads are of increasing concern for grid operators. The risk of an overload due to injection variability can be described mathematically as a barrier crossing probability of a
Large deviations and queueing networks: Methods for rate function identification
Atar, Rami; Dupuis, Paul
1999-01-01
This paper considers the problem of rate function identification for multidimensional queueing models with feedback. A set of techniques is introduced which allows this identification when the model possesses certain structural properties. The main tools used are representation formulas for exponential integrals, weak convergence methods, and the regularity properties of associated Skorokhod Problems. Two examples are treated as special cases of the general theory: the classical Jackson netwo...
Fluctuations and large deviations in non-equilibrium systems
Indian Academy of Sciences (India)
When ρ_a = ρ_b = r, the steady state is a Bernoulli measure where all the … where the function F(x) is the monotone solution of the differential equation ρ(x) = F + … quantity is conserved (number of particles, energy, momentum, …) would also be.
Large deviations for Gaussian queues modelling communication networks
Mandjes, Michel
2007-01-01
Michel Mandjes, Centre for Mathematics and Computer Science (CWI), Amsterdam, The Netherlands, and Professor, Faculty of Engineering, University of Twente. At CWI, Mandjes is a senior researcher and Director of the Advanced Communications Network group. He has published over 60 papers on queueing theory, networks, scheduling, and pricing of networks.
Deviations from thermal equilibrium in plasmas
International Nuclear Information System (INIS)
Burm, K.T.A.L.
2004-01-01
A plasma system in local thermal equilibrium can usually be described with only two parameters. To describe deviations from equilibrium two extra parameters are needed. However, it will be shown that deviations from temperature equilibrium and deviations from Saha equilibrium depend on one another. As a result, non-equilibrium plasmas can be described with three parameters. This reduction in parameter space will ease the plasma describing effort enormously
Directory of Open Access Journals (Sweden)
Xianglin Meng
2018-03-01
Full Text Available Normal vector estimation for the large-scale scattered point cloud (LSSPC) plays an important role in point-based shape editing. However, existing normal vector estimation for LSSPC cannot meet the challenge of the sharp increase in point cloud size, mainly because of its low computational efficiency. In this paper, a novel, fast method based on bi-linear interpolation is reported for normal vector estimation on LSSPC. We divide the point set into many small cubes to speed up the local point search, and construct interpolation nodes on the isosurface expressed by the point cloud. After calculating the normal vectors of these interpolation nodes, a normal vector bi-linear interpolation of the points in each cube is performed. The proposed approach is accurate, simple, and highly efficient, because the algorithm only needs to search neighbors and calculate normal vectors for the interpolation nodes, which are usually far fewer than the points in the cloud. Experimental results on several real and simulated point sets show that our method is over three times faster than the Elliptic Gabriel Graph-based method, with an average deviation of less than 0.01 mm.
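The bi-linear interpolation step described above can be sketched as follows; the function, its arguments, and the corner layout are illustrative assumptions, not the paper's implementation. Four node normals at the corners of a cell are blended with bi-linear weights and renormalized to unit length:

```python
import math

def bilinear_normal(n00, n10, n01, n11, u, v):
    """Bilinearly interpolate four corner normals at (u, v) in [0,1]^2,
    then renormalize to unit length. Illustrative sketch of the idea."""
    blended = tuple(
        (1 - u) * (1 - v) * a + u * (1 - v) * b + (1 - u) * v * c + u * v * d
        for a, b, c, d in zip(n00, n10, n01, n11)
    )
    length = math.sqrt(sum(x * x for x in blended))
    return tuple(x / length for x in blended)

# When all four corners agree, the interpolated normal is the shared one:
n = bilinear_normal((0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 0, 1), 0.3, 0.7)
print(tuple(round(x, 6) for x in n))  # (0.0, 0.0, 1.0)
```

Each query point then needs only one cheap blend of precomputed node normals, which is the source of the speed-up the abstract reports.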
A large-scale multi-objective flights conflict avoidance approach supporting 4D trajectory operation
Guan, Xiangmin; Zhang, Xuejun; Lv, Renli; Chen, Jun; Weiszer, Michal
2017-01-01
Recently, the long-term conflict avoidance approaches based on large-scale flights scheduling have attracted much attention due to their ability to provide solutions from a global point of view. However, the current approaches which focus only on a single objective with the aim of minimizing the total delay and the number of conflicts, cannot provide the controllers with variety of optional solutions, representing different trade-offs. Furthermore, the flight track error is often overlooked i...
Directory of Open Access Journals (Sweden)
M. Ghayeni
2010-12-01
Full Text Available This paper proposes an algorithm for transmission cost allocation (TCA) in a large power system based on a nodal pricing approach using a multi-area scheme. The nodal pricing approach allocates transmission costs by controlling the nodal prices in a single-area network. As the number of equations depends on the number of buses and generators, this method is very time consuming for large power systems. To solve this problem, the present paper proposes a new algorithm based on a multi-area approach for regulating the nodal prices, so that the simulation time is greatly reduced and the TCA problem with the nodal pricing approach becomes applicable to large power systems. In addition, in this method the transmission costs are allocated to users more equitably: the higher transmission costs of an area with higher reliability are paid only by the users of that area, in contrast with the single-area method, in which these costs are allocated to all users regardless of their location. The proposed method is implemented on the IEEE 118-bus test system, which comprises three areas. Results show that with the application of the multi-area approach, the simulation time is greatly reduced and the transmission costs are allocated to users with less variation in the new nodal prices with respect to the single-area approach.
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with any high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project will be analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
Influence of asymmetrical drawing radius deviation in micro deep drawing
Heinrich, L.; Kobayashi, H.; Shimizu, T.; Yang, M.; Vollertsen, F.
2017-09-01
Nowadays, an increasing demand for small metal parts in the electronic and automotive industries can be observed. Deep drawing is a well-suited technology for the production of such parts due to its excellent qualities for mass production. However, downscaling the forming process leads to new challenges in tooling and process design, such as a high relative deviation of the tool geometry or blank displacement compared to the macro scale. FEM simulation has been a widely used tool to investigate the influence of symmetrical process deviations, for instance a global variance of the drawing radius. This study shows a different approach that allows determining the impact of asymmetrical process deviations on micro deep drawing. In this particular case, the impact of an asymmetrical drawing radius deviation and of blank displacement on cup geometry deviation was investigated for different drawing ratios by experiments and FEM simulation. It was found that both variations result in an increasing cup height deviation. Nevertheless, with increasing drawing ratio a constant drawing radius deviation has an increasing impact, while blank displacement results in a decreasing offset of the cup geometry. This is explained by different mechanisms that result in an uneven cup geometry. While blank displacement leads to a material surplus on one side of the cup, an asymmetrical radius deviation generates uneven stretching of the cup wall. This is intensified for higher drawing ratios. It can be concluded that the effect of uneven radius geometry is of major importance for the production of accurately shaped micro cups and cannot be compensated by intentional blank displacement.
Comparing Standard Deviation Effects across Contexts
Ost, Ben; Gangopadhyaya, Anuj; Schiman, Jeffrey C.
2017-01-01
Studies using test scores as the dependent variable often report point estimates in student standard deviation units. We note that a standard deviation is not a standard unit of measurement since the distribution of test scores can vary across contexts. As such, researchers should be cautious when interpreting differences in the numerical size of…
FINDING STANDARD DEVIATION OF A FUZZY NUMBER
Fokrul Alom Mazarbhuiya
2017-01-01
Two probability laws can be the root of a possibility law. Considering two probability densities over two disjoint ranges, we can define the fuzzy standard deviation of a fuzzy variable with the help of the standard deviations of two random variables on two disjoint spaces.
Exploring Students' Conceptions of the Standard Deviation
delMas, Robert; Liu, Yan
2005-01-01
This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…
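The conceptual coordination the study targets (how the spread of values about the mean determines the size of the standard deviation) is easy to demonstrate numerically; the two datasets below are invented for illustration:

```python
import statistics

tight = [48, 49, 50, 51, 52]   # values cluster near the mean
spread = [30, 40, 50, 60, 70]  # same mean, values far from it

# Same center, very different variation about it:
assert statistics.mean(tight) == statistics.mean(spread) == 50
print(round(statistics.stdev(tight), 3), round(statistics.stdev(spread), 3))
# 1.581 15.811
```

Each value in the second list deviates ten times as far from the mean, and the standard deviation scales by exactly that factor.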
48 CFR 1401.403 - Individual deviations.
2010-10-01
48 Federal Acquisition Regulations System; Department of the Interior; Department of the Interior Acquisition Regulation System; Deviations from the FAR and DIAR; Section 1401.403: Individual deviations.
2010-04-01
22 Foreign Relations; Agency for International Development; Administration of Assistance Awards to U.S. Non-Governmental Organizations; General; Section 226.4: Deviations. The Office of Management and Budget (OMB) may grant exceptions for...
41 CFR 115-1.110 - Deviations.
2010-07-01
41 Public Contracts and Property Management; Federal Property Management Regulations System (Continued); Environmental Protection Agency; 1-Introduction; 1.1-Regulation System; Section 115-1.110: Deviations...
2010-07-01
41 Public Contracts and Property Management; Federal Property Management Regulations System (Continued); General Services Administration; 1-Introduction; 1.1-Regulations System; Section 105-1.110: Deviation. (a...
2010-07-01
41 Public Contracts and Property Management; Federal Property Management Regulations System; Federal Property Management Regulations; General; 1-Introduction; 1.1-Regulation System; Section 101-1.110: Deviation...
A Flipped Mode Teaching Approach for Large and Advanced Electrical Engineering Courses
Ravishankar, Jayashri; Epps, Julien; Ambikairajah, Eliathamby
2018-01-01
A fully flipped mode teaching approach is challenging for students in advanced engineering courses, because of demanding pre-class preparation load, due to the complex and analytical nature of the topics. When this is applied to large classes, it brings an additional complexity in terms of promoting the intended active learning. This paper…
A flipped mode teaching approach for large and advanced electrical engineering courses
Ravishankar, Jayashri; Epps, Julien; Ambikairajah, Eliathamby
2018-05-01
A fully flipped mode teaching approach is challenging for students in advanced engineering courses, because of demanding pre-class preparation load, due to the complex and analytical nature of the topics. When this is applied to large classes, it brings an additional complexity in terms of promoting the intended active learning. This paper presents a novel selective flipped mode teaching approach designed for large and advanced courses that has two aspects: (i) it provides selective flipping of a few topics, while delivering others in traditional face-to-face teaching, to provide an effective trade-off between the two approaches according to the demands of individual topics and (ii) it introduces technology-enabled live in-class quizzes to obtain instant feedback and facilitate collaborative problem-solving exercises. The proposed approach was implemented for a large fourth year course in electrical power engineering over three successive years and the criteria for selecting between the flipped mode teaching and traditional teaching modes are outlined. Results confirmed that the proposed approach improved both students' academic achievements and their engagement in the course, without overloading them during the teaching period.
Efficient Approach for Harmonic Resonance Identification of Large Wind Power Plants
DEFF Research Database (Denmark)
Ebrahimzadeh, Esmaeil; Blaabjerg, Frede; Wang, Xiongfei
2016-01-01
Unlike conventional power systems where the resonance frequencies are mainly determined by the passive components parameters, large Wind Power Plants (WPPs) may introduce additional harmonic resonances because of the interactions of the wideband control systems of power converters with each other...... and with passive components. This paper presents an efficient approach for identification of harmonic resonances in large WPPs containing power electronic converters, cable, transformer, capacitor banks, shunt reactors, etc. The proposed approach introduces a large WPP as a Multi-Input Multi-Output (MIMO) control...... system by considering the linearized models of the inner control loops of grid-side converters. Therefore, the resonance frequencies of the WPP resulting from passive components and the control loop interactions are identified based on the determinant of the transfer function matrix of the introduced...
Honey, K. T.
2014-12-01
The global coastal ocean and watersheds are divided into 66 Large Marine Ecosystems (LMEs), which encompass regions from river basins, estuaries, and coasts to the seaward boundaries of continental shelves and the margins of major currents. Approximately 80% of the global fisheries catch comes from LME waters. Ecosystem goods and services from LMEs contribute an estimated US$18-25 trillion annually to the global economy in market and non-market value. The critical importance of these large-scale systems, however, is threatened by human populations and pressures, including climate change. Fortunately, there is pragmatic reason for optimism. Interdisciplinary frameworks exist, such as the Large Marine Ecosystem (LME) approach for adaptive management, that can integrate both nature-centric and human-centric views into ecosystem monitoring, assessment, and adaptive management practices for long-term sustainability. Originally proposed almost 30 years ago, the LME approach rests on five modules: (i) productivity, (ii) fish and fisheries, (iii) pollution and ecosystem health, (iv) socioeconomics, and (v) governance, for iterative adaptive management at a large, international scale of 200,000 km2 or greater. The Global Environment Facility (GEF), World Bank, and United Nations agencies recognize and support the LME approach, as evidenced by over US$3.15 billion in financial assistance to date for LME projects. The year 2014 is an exciting milestone in LME history, after 20 years of the United Nations and GEF organizations adopting LMEs as a unit for ecosystem-based approaches to management. The LME approach, however, is not perfect. Nor is it immutable. Like the adaptive management framework it proposes, the LME approach itself must adapt to new and emerging 21st Century technologies, science, and realities. The LME approach must further consider socioeconomics and governance. Within the socioeconomics module alone, several trillion-dollar opportunities exist
An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging
Linares, R.; Furfaro, R.
The field of Space Situational Awareness (SSA) has progressed over the last few decades with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope. Since the number of SOs in space is relatively high, each sensor will have a large number of possible actions at a given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA. This work develops an A3C-based method for DRL applied to SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high dimensional data: DRL methods have been applied, for example, to image processing for autonomous driving, where a 256x256 RGB image has 196,608 input values (256*256*3 = 196,608) and deep learning approaches routinely take such images as inputs. Therefore, when applied to the whole catalog, the DRL approach offers the ability to solve this high dimensional problem. This work has the potential to solve, for the first time, the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.
Melas, Ioannis N; Mitsos, Alexander; Messinis, Dimitris E; Weiss, Thomas S; Saez-Rodriguez, Julio; Alexopoulos, Leonidas G
2012-04-01
Construction of large and cell-specific signaling pathways is essential to understand information processing under normal and pathological conditions. On this front, gene-based approaches offer the advantage of large pathway exploration whereas phosphoproteomic approaches offer a more reliable view of pathway activities but are applicable to small pathway sizes. In this paper, we demonstrate an experimentally adaptive approach to construct large signaling pathways from phosphoproteomic data within a 3-day time frame. Our approach--taking advantage of the fast turnaround time of the xMAP technology--is carried out in four steps: (i) screen optimal pathway inducers, (ii) select the responsive ones, (iii) combine them in a combinatorial fashion to construct a phosphoproteomic dataset, and (iv) optimize a reduced generic pathway via an Integer Linear Programming formulation. As a case study, we uncover novel players and their corresponding pathways in primary human hepatocytes by interrogating the signal transduction downstream of 81 receptors of interest and constructing a detailed model for the responsive part of the network comprising 177 species (of which 14 are measured) and 365 interactions.
Directory of Open Access Journals (Sweden)
Michael J. Drinkwater
2014-09-01
Full Text Available Turning lectures into interactive, student-led question and answer sessions is known to increase learning, but enabling interaction in a large class seems an insurmountable task. This can discourage adoption of this new approach – who has time to individualize responses, address questions from over 200 students and encourage active participation in class? An approach adopted by a teaching team in large first-year classes at a research-intensive university appears to provide a means to do so. We describe the implementation of active learning strategies in a large first-year undergraduate physics unit of study, replacing traditional, content-heavy lectures with an integrated approach to question-driven learning. A key feature of our approach is that it facilitates intensive in-class discussions by requiring students to engage in preparatory reading and answer short written quizzes before every class. The lecturer uses software to rapidly analyze the student responses and identify the main issues faced by the students before the start of each class. We report the success of the integration of student preparation with this analysis and feedback framework, and the impact on the in-class discussions. We also address some of the difficulties commonly experienced by staff preparing for active learning classes.
Complex service recovery processes: how to avoid triple deviation
Edvardsson, Bo; Tronvoll, Bård; Höykinpuro, Ritva
2011-01-01
Purpose – This article seeks to develop a new framework to outline factors that influence the resolution of unfavourable service experiences as a result of double deviation. The focus is on understanding and managing complex service recovery processes. Design/methodology/approach – An inductive, explorative and narrative approach was selected. Data were collected in the form of narratives from the field through interviews with actors at various levels in organisations as well as with custo...
Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak
2003-01-01
In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.
Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms
Hasanov, Khalid
2014-03-04
© 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to the optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy, and hence more parallelism, in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix-matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
Wang, Chun; Zheng, Yi; Chang, Hua-Hua
2014-01-01
With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess test security, and the most often used index is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees), and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of the test-overlap rate, as we advocate in this paper. The standard deviation of the test-overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of the test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.
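The mean and SD of the test-overlap rate discussed above can be computed directly from examinees' item sets. The sketch below uses invented data and the plain definition (shared items divided by test length), not the paper's analytical bounds:

```python
import itertools
import statistics

def overlap_stats(forms, test_length):
    """Mean and SD of pairwise test-overlap rates.

    forms: list of item sets, one per examinee (hypothetical data);
    overlap rate for a pair = number of shared items / test length.
    """
    rates = [len(a & b) / test_length
             for a, b in itertools.combinations(forms, 2)]
    return statistics.mean(rates), statistics.stdev(rates)

# Three examinees, 4-item tests: two share 3 items, the third shares none.
forms = [{1, 2, 3, 4}, {1, 2, 3, 5}, {6, 7, 8, 9}]
mean, sd = overlap_stats(forms, 4)
print(round(mean, 3), round(sd, 3))  # 0.25 0.433
```

The example shows the paper's point in miniature: the mean alone (0.25) hides the fact that one pair of examinees shares most of its items, which only the SD reveals.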
Complexity analysis based on generalized deviation for financial markets
Li, Chao; Shang, Pengjian
2018-03-01
In this paper, a new modified method, the complexity analysis based on generalized deviation, is proposed as a measure for investigating the correlation between past price and future volatility in financial time series. In comparison with the former retarded volatility model, the new approach is both simple and computationally efficient. The method based on the generalized deviation function offers an exhaustive way of quantifying the rules of the financial market. Robustness of this method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the different characteristics of stock indices and individual stocks. Exponential functions can be used to successfully fit the volatility curves and quantify the changes of complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile periods and calm periods. After analyzing the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.
Wolter, Andrea Elaine
2014-01-01
I apply a forensic, multidisciplinary approach that integrates engineering geology field investigations, engineering geomorphology mapping, long-range terrestrial photogrammetry, and a numerical modelling toolbox to two large rock slope failures to study their causes, initiation, kinematics, and dynamics. I demonstrate the significance of endogenic and exogenic processes, both separately and in concert, in contributing to landscape evolution and conditioning slopes for failure, and use geomor...
International Nuclear Information System (INIS)
Ababou, R.
1991-08-01
This report develops a broad review and assessment of quantitative modeling approaches and data requirements for large-scale subsurface flow in a radioactive waste geologic repository. The data review includes discussions of controlled field experiments, existing contamination sites, and site-specific hydrogeologic conditions at Yucca Mountain. Local-scale constitutive models for the unsaturated hydrodynamic properties of geologic media are analyzed, with particular emphasis on the effect of structural characteristics of the medium. The report further reviews and analyzes large-scale hydrogeologic spatial variability from aquifer data, unsaturated soil data, and fracture network data gathered from the literature. Finally, various modeling strategies toward large-scale flow simulations are assessed, including direct high-resolution simulation, and coarse-scale simulation based on auxiliary hydrodynamic models such as the single equivalent continuum and the dual-porosity continuum. The roles of anisotropy, fracturing, and broad-band spatial variability are emphasized. 252 refs
Directory of Open Access Journals (Sweden)
Heng-Yi Su
2016-11-01
Full Text Available This paper proposes an efficient approach for the computation of voltage stability margin (VSM in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to voltage collapse phenomena. The proposed approach is based on the impedance match-based technique and the model-based technique. It combines the Thevenin equivalent (TE network method with cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan system are used to demonstrate the effectiveness of the proposed approach.
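A highly simplified sketch of the extrapolation step, standing in for the paper's Thevenin-equivalent and cubic-spline machinery: fit a low-order polynomial to continuation points of the PV curve and read the nose (maximum loadability) off the fit. The curve parameters in the example below are synthetic, not taken from the study.

```python
import numpy as np

def vsm_estimate(P, V, base_load):
    """Fit a quadratic P(V) near the nose of the PV curve and take
    its vertex as the loadability limit; VSM = Pmax - base_load.
    A sketch of the curve-fitting/extrapolation idea only."""
    c = np.polyfit(V, P, 2)          # local quadratic model of the nose
    v_nose = -c[1] / (2 * c[0])      # vertex: dP/dV = 0
    p_max = np.polyval(c, v_nose)
    return p_max - base_load
```

In the actual method, the fitted points come from a continuation power flow with generator Q limits enforced, and the fit is refined as the operating point approaches the nose.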
Gene prediction in metagenomic fragments: A large scale machine learning approach
Directory of Open Access Journals (Sweden)
Morgenstern Burkhard
2008-04-01
Full Text Available Abstract Background Metagenomics is an approach to the characterization of microbial genomes via the direct isolation of genomic sequences from the environment without prior cultivation. The amount of metagenomic sequence data is growing fast while computational methods for metagenome analysis are still in their infancy. In contrast to genomic sequences of single species, which can usually be assembled and analyzed by many available methods, a large proportion of metagenome data remains as unassembled anonymous sequencing reads. One of the aims of all metagenomic sequencing projects is the identification of novel genes. The short length of the fragments (Sanger sequencing, for example, yields 700 bp fragments on average) and the unknown phylogenetic origin of most fragments require approaches to gene prediction that are different from the currently available methods for genomes of single species. In particular, the large size of metagenomic samples requires fast and accurate methods with small numbers of false positive predictions. Results We introduce a novel gene prediction algorithm for metagenomic fragments based on a two-stage machine learning approach. In the first stage, we use linear discriminants for monocodon usage, dicodon usage and translation initiation sites to extract features from DNA sequences. In the second stage, an artificial neural network combines these features with open reading frame length and fragment GC-content to compute the probability that this open reading frame encodes a protein. This probability is used for the classification and scoring of gene candidates. With large-scale training, our method provides fast single-fragment predictions with good sensitivity and specificity on artificially fragmented genomic DNA. Additionally, this method is able to predict translation initiation sites accurately and distinguishes complete from incomplete genes with high reliability. Conclusion Large-scale machine learning methods are well-suited for gene
Energy Technology Data Exchange (ETDEWEB)
Inoue, Hiroshi K.; Negishi, Masatoshi; Kohga, Hideaki; Hirato, Masafumi; Ohye, Chihiro [Gunma Univ., Maebashi (Japan). School of Medicine; Shibazaki, Tohru
1998-09-01
A major aim of minimally invasive neurosurgery is to preserve function in the brain and cranial nerves. Based on previous results of radiosurgery for central lesions (19 craniopharyngiomas, 46 pituitary adenomas, 9 meningeal tumors), combined micro- and/or radiosurgery was applied for large lesions compressing the hypothalamus and/or brainstem. A basal interhemispheric approach via superomedial orbitotomy or a transcallosal-transforaminal approach was used for these large tumors. Tumors left behind in the hypothalamus or cavernous sinus were treated with radiosurgery using a gamma unit. Preoperative hypothalamo-pituitary functions were preserved in most of these patients. Radiosurgical results were evaluated in patients followed for more than 2 years after treatment. All 9 craniopharyngiomas decreased in size after radiosurgery, although a second treatment was required in 4 patients. All 20 pituitary adenomas were stable or decreased in size and 5 of 7 functioning adenomas showed normalized values of hormones in the serum. All 3 meningeal tumors were stable or decreased in size after treatment. No cavernous sinus symptoms developed after radiosurgery. We conclude that combined micro- and radio-neurosurgery is an effective and less invasive treatment for large central lesions compressing the hypothalamus and brainstem. (author)
International Nuclear Information System (INIS)
Inoue, Hiroshi K.; Negishi, Masatoshi; Kohga, Hideaki; Hirato, Masafumi; Ohye, Chihiro; Shibazaki, Tohru
1998-01-01
A major aim of minimally invasive neurosurgery is to preserve function in the brain and cranial nerves. Based on previous results of radiosurgery for central lesions (19 craniopharyngiomas, 46 pituitary adenomas, 9 meningeal tumors), combined micro- and/or radiosurgery was applied for large lesions compressing the hypothalamus and/or brainstem. A basal interhemispheric approach via superomedial orbitotomy or a transcallosal-transforaminal approach was used for these large tumors. Tumors left behind in the hypothalamus or cavernous sinus were treated with radiosurgery using a gamma unit. Preoperative hypothalamo-pituitary functions were preserved in most of these patients. Radiosurgical results were evaluated in patients followed for more than 2 years after treatment. All 9 craniopharyngiomas decreased in size after radiosurgery, although a second treatment was required in 4 patients. All 20 pituitary adenomas were stable or decreased in size and 5 of 7 functioning adenomas showed normalized values of hormones in the serum. All 3 meningeal tumors were stable or decreased in size after treatment. No cavernous sinus symptoms developed after radiosurgery. We conclude that combined micro- and radio-neurosurgery is an effective and less invasive treatment for large central lesions compressing the hypothalamus and brainstem. (author)
Time to "go large" on biofilm research: advantages of an omics approach.
Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J
2009-04-01
In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.
Directory of Open Access Journals (Sweden)
Kaisheng Zhang
2016-12-01
Full Text Available Recently, population density has grown quickly with the accelerating pace of urbanization. At the same time, overcrowded situations are more likely to occur in populous urban areas, increasing the risk of accidents. This paper proposes a synthetic approach to recognize and identify large pedestrian flows. In particular, a hybrid pedestrian flow detection model was constructed by analyzing real data from major mobile phone operators in China, including information from smartphones and base stations (BS). With the hybrid model, the Log Distance Path Loss (LDPL) model was used to estimate the pedestrian density from raw network data, and information was retrieved with a Gaussian Process (GP) through supervised learning. Temporal-spatial prediction of the pedestrian data was carried out with Machine Learning (ML) approaches. Finally, a case study of a real Central Business District (CBD) scenario in Shanghai, China, using the records of millions of cell phone users was conducted. The results showed that the new approach significantly increases the utility and capacity of the mobile network. A more reasonable overcrowding detection and alert system can be developed to improve safety in subway lines and other hotspot landmark areas, such as the Bund, People's Square or Disneyland, where a large passenger flow generally exists.
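The Log Distance Path Loss model used for density estimation has a standard closed form, PL(d) = PL0 + 10·n·log10(d/d0), which can be inverted to estimate distance from a measured loss. The reference loss `pl0`, path-loss exponent `n`, and reference distance `d0` below are illustrative values, not those calibrated in the study.

```python
import math

def ldpl_loss(d, pl0=40.0, n=3.0, d0=1.0):
    """Log-distance path loss in dB: PL(d) = PL0 + 10 n log10(d/d0).
    Parameter values are illustrative."""
    return pl0 + 10 * n * math.log10(d / d0)

def ldpl_distance(pl, pl0=40.0, n=3.0, d0=1.0):
    """Invert the LDPL model to estimate distance from a measured loss."""
    return d0 * 10 ** ((pl - pl0) / (10 * n))
```

In practice a zero-mean shadowing term is added to the loss, which is one reason the paper layers a Gaussian Process on top of the raw LDPL estimates.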
A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.
Shen, Lili; Guo, Jiming; Wang, Lei
2018-06-06
The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
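A minimal sketch of radius-threshold clustering with the 10 km threshold reported above; this is a greedy simplification for illustration, not the SOSC algorithm itself, and the planar (x, y) coordinates stand in for real GNSS positions.

```python
def cluster_users(positions, radius_km=10.0):
    """Greedy radius clustering: assign each user to the nearest
    existing cluster centre within `radius_km`, otherwise open a new
    cluster. Centres are running means of their members.
    Positions are (x, y) in km (a simplifying assumption)."""
    centres, members = [], []
    for p in positions:
        best, best_d = None, radius_km
        for i, c in enumerate(centres):
            d = ((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = i, d
        if best is None:
            centres.append(list(p))
            members.append([p])
        else:
            members[best].append(p)
            k = len(members[best])
            centres[best][0] += (p[0] - centres[best][0]) / k
            centres[best][1] += (p[1] - centres[best][1]) / k
    return centres, members
```

Unlike a predefined grid, the number of clusters here adapts to where users actually are, which mirrors the elasticity the paper attributes to SOSC.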
PoDMan: Policy Deviation Management
Directory of Open Access Journals (Sweden)
Aishwarya Bakshi
2017-07-01
Full Text Available Whenever an unexpected or exceptional situation occurs, complying with the existing policies may not be possible. The main objective of this work is to assist individuals and organizations in deciding whether to deviate from policies and perform a non-complying action. The paper proposes utilizing software agents as supportive tools to provide the best non-complying action while deviating from policies. The article also introduces a process in which the decision on the choice of non-complying action can be made. The work is motivated by a real scenario observed in a hospital in Norway and demonstrated in the same setting.
A New Approach for Structural Monitoring of Large Dams with a Three-Dimensional Laser Scanner
Directory of Open Access Journals (Sweden)
José Sánchez
2008-09-01
Full Text Available Driven by progress in sensor technology, computer methods and data processing capabilities, 3D laser scanning has found a wide range of new application fields in recent years. Particularly, monitoring the static and dynamic behaviour of large dams has always been a topic of great importance, due to the impact these structures have on the whole landscape where they are built. The main goal of this paper is to show the relevance and novelty of the laser scanning methodology developed, which incorporates different statistical and modelling approaches not considered until now. As a result, the methods proposed in this paper have enabled the measurement and monitoring of the large "Las Cogotas" dam (Ávila, Spain).
Standard and biological treatment in large vessel vasculitis: guidelines and current approaches.
Muratore, Francesco; Pipitone, Nicolò; Salvarani, Carlo
2017-04-01
Giant cell arteritis and Takayasu arteritis are the two major forms of idiopathic large vessel vasculitis. High doses of glucocorticoids are effective in inducing remission in both conditions, but relapses and recurrences are common, requiring prolonged glucocorticoid treatment with the risk of the related adverse events. Areas covered: In this article, we will review the standard and biological treatment strategies in large vessel vasculitis, and we will focus on the current approaches to these diseases. Expert commentary: The results of treatment trials with conventional immunosuppressive agents such as methotrexate, azathioprine, mycophenolate mofetil, and cyclophosphamide have overall been disappointing. TNF-α blockers are ineffective in giant cell arteritis, while observational evidence and a phase 2 randomized trial support the use of tocilizumab in relapsing giant cell arteritis. Observational evidence strongly supports the use of anti-TNF-α agents and tocilizumab in Takayasu patients with relapsing disease. However biological agents are not curative, and relapses remain common.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
Directory of Open Access Journals (Sweden)
Md. Rezaul Karim
2012-03-01
Full Text Available Mining interesting patterns from DNA sequences is one of the most challenging tasks in bioinformatics and computational biology. Maximal contiguous frequent patterns are preferable for expressing the function and structure of DNA sequences and hence can capture the common data characteristics among related sequences. Biologists are interested in finding frequent orderly arrangements of motifs that are responsible for similar expression of a group of genes. In order to reduce mining time and complexity, however, most existing sequence mining algorithms either focus on finding short DNA sequences or require explicit specification of sequence lengths in advance. The challenge is to find longer sequences without specifying sequence lengths in advance. In this paper, we propose an efficient approach to mining maximal contiguous frequent patterns from large DNA sequence datasets. The experimental results show that our proposed approach is memory-efficient and mines maximal contiguous frequent patterns within a reasonable time.
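A brute-force sketch of maximal contiguous frequent pattern mining illustrates the definitions: a pattern is frequent if it occurs in at least `min_support` sequences, and maximal if no longer frequent pattern contains it. This is correct but far less efficient than the approach proposed in the paper, which is designed to avoid enumerating all substrings.

```python
from collections import defaultdict

def maximal_contiguous_frequent(seqs, min_support):
    """Enumerate every substring, count in how many sequences it
    occurs, keep those with support >= min_support, then discard any
    pattern contained in a longer frequent pattern."""
    support = defaultdict(set)
    for sid, s in enumerate(seqs):
        for i in range(len(s)):
            for j in range(i + 1, len(s) + 1):
                support[s[i:j]].add(sid)
    frequent = [p for p, ids in support.items() if len(ids) >= min_support]
    return [p for p in frequent
            if not any(p != q and p in q for q in frequent)]
```

Reporting only maximal patterns keeps the output compact: every shorter frequent pattern is recoverable as a substring of some maximal one.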
DEFF Research Database (Denmark)
Chivaee, Hamid Sarlak; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming
2012-01-01
Large eddy simulation (LES) of flow in a wind farm is studied in neutral as well as thermally stratified atmospheric boundary layers (ABL). An approach has been developed to simulate the flow in a fully developed wind farm boundary layer. The approach is based on the Immersed Boundary Method (IBM......) and involves implementation of an arbitrary prescribed initial boundary layer (see [1]). A prescribed initial boundary layer profile is enforced through the computational domain using body forces to maintain a desired flow field. The body forces are then stored and applied on the domain through the simulation...... and the boundary layer shape will be modified due to the interaction of the turbine wakes and buoyancy contributions. The implemented method is capable of capturing the most important features of the wakes of wind farms [1] while having the advantage of resolving the wall layer with a coarser grid than typically...
A minimally invasive surgical approach for large cyst-like periapical lesions: a case series.
Shah, Naseem; Logani, Ajay; Kumar, Vijay
2014-01-01
Various conservative approaches have been utilized to manage large periapical lesions. This article presents a relatively new, very conservative technique known as surgical fenestration, which is both diagnostic and curative. The technique involves partially excising the cystic lining, gently curetting the cystic cavity, performing copious irrigation, and closing the surgical site. This technique allows for decompression and gives the clinician the freedom to take a biopsy of the lesion, as well as to perform other procedures such as root resection and retrograde sealing, if required. As the procedure does not involve complete excision of the cystic lining, it is both minimally invasive and cost-effective. The technique and the concepts involved are reviewed in 4 cases treated with this novel surgical approach.
Modeling and control of a large nuclear reactor. A three-time-scale approach
Energy Technology Data Exchange (ETDEWEB)
Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering
2013-07-01
This monograph presents recent research on the modeling and control of a large nuclear reactor via a three-time-scale approach, written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear, and of a complex structure not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for the mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods of controller design for systems exhibiting the multi-time-scale property, with emphasis on three-time-scale systems.
A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks
Makki, Behrooz
2016-12-29
We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering the continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.
A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks
Makki, Behrooz; Ide, Anatole; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim
2016-01-01
We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering the continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.
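A toy genetic algorithm for the antenna-selection idea: evolve subsets of k transmit antennas toward high fitness. The fitness proxy (a log-det capacity expression), population size, crossover and mutation rules below are illustrative assumptions, not the authors' design, which supports arbitrary objective functions and precoding methods.

```python
import numpy as np

def ga_antenna_select(H, k, pop=30, gens=40, seed=0):
    """Toy GA: choose k of N transmit antennas maximizing the proxy
    log2 det(I + H_S H_S^H). Keeps the best half each generation and
    refills by union-crossover plus occasional mutation."""
    rng = np.random.default_rng(seed)
    n_rx, n_tx = H.shape

    def fitness(sel):
        Hs = H[:, list(sel)]
        return np.log2(np.linalg.det(np.eye(n_rx) + Hs @ Hs.conj().T)).real

    def random_sel():
        return tuple(sorted(rng.choice(n_tx, k, replace=False)))

    population = [random_sel() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            genes = set(parents[a]) | set(parents[b])   # crossover: union
            if rng.random() < 0.2:                      # mutation
                genes.add(int(rng.integers(n_tx)))
            children.append(tuple(sorted(
                rng.choice(sorted(genes), k, replace=False))))
        population = parents + children
    return max(population, key=fitness)
```

Because each candidate is just a k-subset, the same loop works unchanged for other objective functions: only `fitness` needs to be swapped out, which is the genericity the abstract emphasizes.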
A dynamic programming approach for quickly estimating large network-based MEV models
DEFF Research Database (Denmark)
Mai, Tien; Frejinger, Emma; Fosgerau, Mogens
2017-01-01
We propose a way to estimate a family of static Multivariate Extreme Value (MEV) models with large choice sets in short computational time. The resulting model is also straightforward and fast to use for prediction. Following Daly and Bierlaire (2006), the correlation structure is defined by a ro...... to converge (4.3 h on an Intel(R) 3.2 GHz machine using a non-parallelized code). We also show that our approach makes it possible to estimate a cross-nested logit model of 111 nests with a real data set of more than 100,000 observations in 14 h....
Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach
Reznichenko, A. V.; Terekhov, I. S.
2018-04-01
We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimations at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, therefore increasing the earlier calculated capacity of a nondispersive nonlinear optical fiber channel in the intermediate power region. For the small-dispersion case we also find analytical expressions for simple correlators of the output signals in our noisy channel.
Labidi, Moujahed; Watanabe, Kentaro; Loit, Marie-Pier; Hanakita, Shunya; Froelich, Sébastien
2018-02-01
Objectives To discuss the use of the posterior petrosal approach for the resection of a retrochiasmatic craniopharyngioma. Design Operative video. Results In this case video, the authors discuss the surgical management of a large craniopharyngioma presenting with mass effect on the third ventricle and optic apparatus. A first surgical stage, through an endoscopic endonasal transtubercular approach, allowed satisfactory decompression of the optic chiasm and nerves in preparation for adjuvant therapy. However, accelerated growth of the tumor, with renewed visual deficits and mass effect on the hypothalamus and third ventricle, warranted a supplementary resection. A posterior transpetrosal approach [1, 2] (also called "retrolabyrinthine transtentorial") was performed to obtain a better exposure of the tumor and the surrounding anatomy (floor and walls of the third ventricle, perforating vessels, optic nerves, etc.) [3]. Nuances of technique and surgical pearls related to the posterior transpetrosal approach are discussed and illustrated in this operative video, including the posterior mobilization of the transverse-sigmoid sinus junction, preservation of the venous anatomy during the tentorial incision, identification and preservation of the floor of the third ventricle during tumor resection, and a careful multilayer closure. Conclusion Retrochiasmatic craniopharyngiomas are difficult-to-reach tumors that often require skull base approaches, either endoscopic endonasal or transcranial. The posterior transpetrosal approach is an important part of the surgical armamentarium to safely resect these complex tumors. The link to the video can be found at: https://youtu.be/2MyGLJ_v1kI.
SAMPLE STANDARD DEVIATION(s) CHART UNDER THE ASSUMPTION OF MODERATENESS AND ITS PERFORMANCE ANALYSIS
Kalpesh S. Tailor
2017-01-01
Moderate distribution, proposed by Naik V.D. and Desai J.M., is a sound alternative to the normal distribution; it has mean and mean deviation as pivotal parameters and has properties similar to those of the normal distribution. Mean deviation (δ) is a very good alternative to standard deviation (σ), as mean deviation is considered to be the most intuitively and rationally defined measure of dispersion. This fact can be very useful in the field of quality control to construct the control limits of the c...
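As a sketch of how the mean deviation can replace the standard deviation in a control chart: for normally distributed data σ = δ·sqrt(π/2), so 3σ-style limits can be built from an estimate of δ. The constants below assume normality and are not the moderateness-based chart constants derived in the paper.

```python
import math

def md_control_limits(sample):
    """Control limits built from the mean deviation (mean absolute
    deviation about the mean). For a normal process sigma =
    delta * sqrt(pi/2), giving 3-sigma-style limits from delta.
    A sketch under the normality assumption only."""
    n = len(sample)
    xbar = sum(sample) / n
    delta = sum(abs(x - xbar) for x in sample) / n   # mean deviation
    sigma_hat = delta * math.sqrt(math.pi / 2)
    return xbar - 3 * sigma_hat, xbar + 3 * sigma_hat
```

The appeal of δ is exactly what the abstract notes: it is an intuitively defined measure of dispersion, and under moderateness it plays the pivotal role that σ plays under normality.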
A robust standard deviation control chart
Schoonhoven, M.; Does, R.J.M.M.
2012-01-01
This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse
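As an illustration of the robust-versus-classical trade-off the article studies, one generic robust spread estimator (a common choice in the literature, not necessarily the estimator proposed there) is the interquartile range rescaled by its normal-consistency constant 1.349.

```python
import statistics

def robust_sigma(x):
    """Phase I spread estimate resistant to outliers: the
    interquartile range divided by 1.349, the constant that makes
    it consistent for sigma under normality. A generic robust
    alternative, not the article's specific estimator."""
    q = statistics.quantiles(x, n=4)   # [Q1, median, Q3]
    return (q[2] - q[0]) / 1.349
```

On clean data this tracks the sample SD reasonably well, while a single gross outlier inflates `statistics.stdev` dramatically but barely moves the IQR-based estimate, which is the resistance-to-disturbances property a Phase I estimator needs.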
Evolutionary implications of genetic code deviations
International Nuclear Information System (INIS)
Chela Flores, J.
1986-07-01
By extending the standard genetic code into a temperature dependent regime, we propose a train of molecular events leading to alternative coding. The first few examples of these deviations have already been reported in some ciliated protozoans and Gram positive bacteria. A possible range of further alternative coding, still within the context of universality, is pointed out. (author)
48 CFR 201.404 - Class deviations.
2010-10-01
..., and the Defense Logistics Agency, may approve any class deviation, other than those described in 201...) Diminish any preference given small business concerns by the FAR or DFARS; or (D) Extend to requirements imposed by statute or by regulations of other agencies such as the Small Business Administration and the...
Bodily Deviations and Body Image in Adolescence
Vilhjalmsson, Runar; Kristjansdottir, Gudrun; Ward, Dianne S.
2012-01-01
Adolescents with unusually sized or shaped bodies may experience ridicule, rejection, or exclusion based on their negatively valued bodily characteristics. Such experiences can have negative consequences for a person's image and evaluation of self. This study focuses on the relationship between bodily deviations and body image and is based on a…
Association between septal deviation and sinonasal papilloma.
Nomura, Kazuhiro; Ogawa, Takenori; Sugawara, Mitsuru; Honkura, Yohei; Oshima, Hidetoshi; Arakawa, Kazuya; Oshima, Takeshi; Katori, Yukio
2013-12-01
Sinonasal papilloma is a common benign epithelial tumor of the sinonasal tract and accounts for 0.5% to 4% of all nasal tumors. The etiology of sinonasal papilloma remains unclear, although human papilloma virus has been proposed as a major risk factor. Other etiological factors, such as anatomical variations of the nasal cavity, may be related to the pathogenesis of sinonasal papilloma, because deviated nasal septum is seen in patients with chronic rhinosinusitis. We, therefore, investigated the involvement of deviated nasal septum in the development of sinonasal papilloma. Preoperative computed tomography or magnetic resonance imaging findings of 83 patients with sinonasal papilloma were evaluated retrospectively. The side of the papilloma and the direction of septal deviation showed a significant correlation. The septum deviated to the intact side in 51 of 83 patients (61.4%) and to the affected side in 18 of 83 patients (21.7%). A straight or S-shaped septum was observed in 14 of 83 patients (16.9%). Even after excluding 27 patients who underwent revision surgery and 15 patients in whom the papilloma touched the concave portion of the nasal septum, the concave side of septal deviation was associated with the development of sinonasal papilloma (p = 0.040). The high incidence of sinonasal papilloma on the concave side may reflect the consequences of the traumatic effects caused by wall shear stress of the high-velocity airflow and the increased chance of inhaling viruses and pollutants. The present study supports the causative role of human papilloma virus and toxic chemicals in the occurrence of sinonasal papilloma.
Prediction of welding residual distortions of large structures using a local/global approach
International Nuclear Information System (INIS)
Duan, Y. G.; Bergheau, J. M.; Vincent, Y.; Boitour, F.; Leblond, J. B.
2007-01-01
Prediction of welding residual distortions is more difficult than that of the microstructure and residual stresses. On the one hand, a fine (often 3D) mesh has to be used in the heat-affected zone because of the sharp variations of the thermal, metallurgical and mechanical fields in this region. On the other hand, the whole structure has to be meshed to compute the residual distortions, and for large structures a fully detailed 3D mesh is impractical because of the computational cost. Numerous methods have been developed to reduce the size of the models. A local/global approach has been proposed to determine the welding residual distortions of large structures. The plastic strains and the microstructure due to welding are assumed to be determinable from a local 3D model covering only the weld and its vicinity. They are then projected as initial strains into a global 3D model of the whole structure, meshed much more coarsely in the welded zone than the local model. The residual distortions are finally calculated with a simple elastic analysis, which makes the method particularly effective in an industrial context. The aim of this article is to present the principle of the local/global approach, show the capacity of the method in an industrial context, and study the definition of the local model
Reduced representation approaches to interrogate genome diversity in large repetitive plant genomes.
Hirsch, Cory D; Evans, Joseph; Buell, C Robin; Hirsch, Candice N
2014-07-01
Technology and software improvements in the last decade now provide methodologies to access the genome sequence of not just a single accession, but multiple accessions of a plant species, providing a means to interrogate species diversity at the genome level. Ample diversity among accessions in a collection of species can be found, including single-nucleotide polymorphisms, insertions and deletions, copy number variation and presence/absence variation. For species with small genomes poor in repetitive sequence, re-sequencing of query accessions is robust, highly informative, and economically feasible. For species with moderate to large, repeat-rich genomes, however, technical and economic barriers prevent en masse genome re-sequencing of accessions. Multiple approaches to access a focused subset of loci in species with larger genomes have been developed, including reduced representation sequencing, exome capture and transcriptome sequencing. Collectively, these approaches have enabled interrogation of diversity on a genome scale for large plant genomes, including crop species important to worldwide food security.
A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection
Directory of Open Access Journals (Sweden)
Jiusheng Chen
2016-01-01
A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns, while maximizing the vector-angular margin between the surface of this optimal vector-angular region and the abnormal training patterns. To improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent quadratic programming (QP) solver takes O(n³) training time and at least O(n²) space, which can be computationally prohibitive for large-scale problems. Using a (1+ε)- and (1−ε)-approximation algorithm, a core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results on imbalanced datasets validate the favorable efficiency of the proposed approach in novelty detection.
Analyzing price and efficiency dynamics of large appliances with the experience curve approach
International Nuclear Information System (INIS)
Weiss, Martin; Patel, Martin K.; Junginger, Martin; Blok, Kornelis
2010-01-01
Large appliances are major power consumers in households of industrialized countries. Although their energy efficiency has increased substantially in past decades, additional efficiency potential remains. Energy policy that aims at realizing this potential faces, however, growing concerns about possible adverse effects on commodity prices. Here, we address these concerns by applying the experience curve approach to analyze long-term price and energy efficiency trends of three wet appliances (washing machines, laundry dryers, and dishwashers) and two cold appliances (refrigerators and freezers). We identify a robust long-term decline in both the specific price and the specific energy consumption of large appliances. Specific prices of wet appliances decline at learning rates (LR) of 29±8%, much faster than those of cold appliances (LR of 9±4%). Our results demonstrate that technological learning leads to substantial price decline, indicating that the introduction of novel and initially expensive energy efficiency technologies does not necessarily imply adverse price effects in the long term. By extending the conventional experience curve approach, we find a steady decline in the specific energy consumption of wet appliances (LR of 20-35%) and cold appliances (LR of 13-17%). Our analysis suggests that energy policy might be able to bend down energy experience curves. (author)
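The learning rates above follow the standard experience-curve relation P = P0·x^(−b) with LR = 1 − 2^(−b), the fractional price drop per doubling of cumulative production. A minimal sketch of fitting it, on synthetic data with the paper's 29% wet-appliance figure built in (not re-derived from the paper's data):

```python
import math

def learning_rate(cum_production, unit_price):
    """Fit P = P0 * x^-b on log-log axes by ordinary least squares and
    return the learning rate LR = 1 - 2^-b."""
    xs = [math.log(x) for x in cum_production]
    ys = [math.log(p) for p in unit_price]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return 1 - 2 ** (-b)

# Synthetic price series obeying a 29% learning rate; the fit recovers it.
b_true = -math.log2(1 - 0.29)
prod = [2 ** i for i in range(10)]
price = [1000 * x ** (-b_true) for x in prod]
print(round(learning_rate(prod, price), 2))  # -> 0.29
```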
Mangussi-Gomes, João; Vellutini, Eduardo A; Truong, Huy Q; Pahl, Felix H; Stamm, Aldo C
2018-04-01
Objectives To demonstrate an endoscopic endonasal transplanum transtuberculum approach for the resection of a large suprasellar craniopharyngioma. Design Single-case-based operative video. Setting Tertiary center with a dedicated skull base team. Participants A 72-year-old male patient diagnosed with a suprasellar craniopharyngioma. Main Outcomes Measured Surgical resection of the tumor and preservation of the normal surrounding neurovascular structures. Results A 72-year-old male patient presented with a 1-year history of progressive bitemporal visual loss. He also reported symptoms suggestive of hypogonadism. Neurological examination was unremarkable and endocrine workup demonstrated mildly elevated prolactin levels. Magnetic resonance images demonstrated a large solid-cystic suprasellar lesion, consistent with the diagnosis of craniopharyngioma. The lesion was retrochiasmatic, compressed the optic chiasm, and extended into the interpeduncular cistern (Fig. 1). Because of that, the patient underwent an endoscopic endonasal transplanum transtuberculum approach. The nasal stage consisted of a transnasal transseptal approach, with complete preservation of the patient's left nasal cavity. The cystic component of the tumor was decompressed and its solid part was resected. It was possible to preserve the surrounding normal neurovascular structures (Fig. 2). Skull base reconstruction was performed with a dural substitute, a fascia lata graft, and a right nasoseptal flap (Video 1). The patient did well after surgery and reported complete visual improvement. However, he also presented panhypopituitarism on long-term follow-up. Conclusions The endoscopic endonasal route is a good alternative for the resection of suprasellar lesions. It permits tumor resection and preservation of the surrounding neurovascular structures while avoiding external incisions and brain retraction. The link to the video can be found at: https://youtu.be/zmgxQe8w-JQ.
A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems
Directory of Open Access Journals (Sweden)
Lili Shen
2018-06-01
The network real-time kinematic (RTK) technique can provide centimeter-level real-time positioning solutions and plays a key role in geospatial infrastructure. With its ever-increasing popularity, network RTK systems will face issues in supporting large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial vehicles (UAVs), robotic equipment, etc.) require precise positioning services. The development of approaches to support large-scale network RTK systems is therefore urgent. In this study, we propose a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the server side of the network RTK system. The experimental results indicate that both the SOSC algorithm and a grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution across different datasets: it determines the number of clusters and the mean distance to cluster center (MDTCC) from the data set, whereas the grid approaches are fully predefined. The side effects of the clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can safely be used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
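As a rough illustration of radius-constrained user clustering, here is a greedy "leader" heuristic with a 10 km radius; this is a simplified stand-in, not the authors' SOSC algorithm:

```python
import math

EARTH_R_KM = 6371.0

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in degrees."""
    la1, lo1, la2, lo2 = map(math.radians, (*a, *b))
    h = (math.sin((la2 - la1) / 2) ** 2
         + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
    return 2 * EARTH_R_KM * math.asin(math.sqrt(h))

def leader_cluster(users, radius_km=10.0):
    """Assign each user to the first cluster centre within radius_km,
    opening a new cluster otherwise (greedy 'leader' clustering)."""
    centres, labels = [], []
    for u in users:
        for i, c in enumerate(centres):
            if haversine_km(u, c) <= radius_km:
                labels.append(i)
                break
        else:
            centres.append(u)
            labels.append(len(centres) - 1)
    return centres, labels

# Two groups of users ~200 km apart end up in two clusters.
users = [(52.00, 4.00), (52.02, 4.03), (50.10, 4.00), (50.12, 4.02)]
centres, labels = leader_cluster(users)
print(len(centres), labels)  # -> 2 [0, 0, 1, 1]
```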
A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.
Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang
2016-04-01
Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all of this information to improve ranking performance has become a new and challenging problem. Previous methods utilize only part of such information and rank graph nodes with link-based methods, whose ranking performance is severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit the rich heterogeneous information of the graph to improve ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach that parameterizes the derived information within a unified semi-supervised learning framework (SSLF-GR) and then simultaneously optimizes the parameters and the ranking scores of the graph nodes. Experiments on real-world large-scale graphs demonstrate that our method significantly outperforms algorithms that consider such graph information only partially.
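For context, the link-based baseline that SSP parameterizes is classical PageRank; a minimal power-iteration sketch (the semi-supervised extension itself is not reproduced here):

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank on an adjacency list {node: [out-neighbours]}.
    Plain link-based ranking; the SSP approach additionally learns
    parameters from node/edge features and supervision."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        nxt = {v: (1 - damping) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:
                share = damping * rank[v] / len(out)
                for w in out:
                    nxt[w] += share
            else:  # dangling node: spread its mass uniformly
                for w in nodes:
                    nxt[w] += damping * rank[v] / n
        rank = nxt
    return rank

# Toy graph: node "c" collects the most link mass.
adj = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
r = pagerank(adj)
print(max(r, key=r.get))  # -> c
```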
Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.
2012-01-01
This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope-and-interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amount of information, making them difficult to discern. In contrast, the envelope-and-interface method reduces both the amount and the complexity of the information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes benefiting from this approach, through reduced development and design cycle time, include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.
Modelling of large sodium fires: A coupled experimental and calculational approach
International Nuclear Information System (INIS)
Astegiano, J.C.; Balard, F.; Cartier, L.; De Pascale, C.; Forestier, A.; Merigot, C.; Roubin, P.; Tenchine, D.; Bakouta, N.
1996-01-01
The consequences of large sodium leaks in the secondary circuit of Super-Phenix have been studied mainly with the FEUMIX code, on the basis of sodium fire experiments. This paper presents the status of the coupled AIRBUS (water experiment)/FEUMIX approach under development, intended to strengthen the extrapolation made in the Super-Phenix secondary circuit calculations to large leakage flows. FEUMIX is a point code based on the concept of a global interfacial area between sodium and air; mass and heat transfer through this global area are assumed to be analogous, so the global interfacial transfer coefficient Sih is an important parameter of the model. Correlations for the interfacial area are extracted from a large number of sodium tests. For studies of a hypothetical large sodium leak in the secondary circuit of Super-Phenix, flow rates of more than 1 t/s have been considered, extrapolated from the existing results (maximum flow rate 225 kg/s). To strengthen this extrapolation, a water test was devised on the basis of a thermal-hydraulic similarity. The principle is to measure the interfacial area of a hot water jet in air, transpose the Sih to sodium without combustion, and use this value in FEUMIX with combustion modelling. The AIRBUS test section is a parallelepipedic gastight tank of 106 m³ (5.7 × 3.7 × 5 m), internally insulated. The water jet is injected into the cell from a heated external auxiliary tank, using a pressurized air tank and a specific valve. The main measurements performed during each test are the injected flow rate, air pressure, water temperature and gas temperature. A first series of tests was performed to qualify the methodology: typical FCA and IGNA sodium fire tests were represented in AIRBUS, and a comparison with FEUMIX calculations using Sih values deduced from the water experiments shows satisfactory agreement. A second series of tests addresses large flow rates, corresponding to a large sodium leak in the secondary circuit of Super-Phenix
Deviations from LTE in a stellar atmosphere
International Nuclear Information System (INIS)
Kalkofen, W.; Klein, R.I.; Stein, R.F.
1979-01-01
Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient b is smaller than unity when the radiative cross section α(ν) grows with frequency ν faster than ν²; b exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of α(ν). Overpopulation (b > 1) always implies that the kinetic temperature in the statistical equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers, when the emergent intensity can be described by a radiation temperature. (author)
Deviations from LTE in a stellar atmosphere
Kalkofen, W.; Klein, R. I.; Stein, R. F.
1979-01-01
Deviations from LTE are investigated in an atmosphere of hydrogen atoms with one bound level, satisfying the equations of radiative, hydrostatic, and statistical equilibrium. The departure coefficient and the kinetic temperature as functions of the frequency dependence of the radiative cross section are studied analytically and numerically. Near the outer boundary of the atmosphere, the departure coefficient is smaller than unity when the radiative cross section grows with frequency faster than with the square of frequency; it exceeds unity otherwise. Far from the boundary the departure coefficient tends to exceed unity for any frequency dependence of the radiative cross section. Overpopulation always implies that the kinetic temperature in the statistical-equilibrium atmosphere is higher than the temperature in the corresponding LTE atmosphere. Upper and lower bounds on the kinetic temperature are given for an atmosphere with deviations from LTE only in the optically shallow layers when the emergent intensity can be described by a radiation temperature.
Using Flipped Classroom Approach to Explore Deep Learning in Large Classrooms
Directory of Open Access Journals (Sweden)
Brenda Danker
2015-01-01
This project used two Flipped Classroom approaches to stimulate deep learning in large classrooms during the teaching of a film module, part of a Diploma in Performing Arts course at Sunway University, Malaysia. The flipped classes utilized either a blended learning approach, where students first watched online lectures as homework and then completed their assignments and practical work in class, or a guided inquiry approach at the beginning of class using this same process. During class the lecturers were present to help the students, and the students also benefited from being able to help one another. The in-class learning activities included inquiry-based learning, active learning, and peer learning. The project used an action research approach to improve the in-class instructional design progressively so as to achieve deep learning among the students. The in-class learning activities included in the later flipped classes merged aspects of blended learning with an inquiry-based learning cycle focused on the exploration of concepts. Data were gathered from questionnaires filled out by the students and from short interviews with the students, as well as from the teacher's reflective journals. The findings verified that the flipped classrooms were able to remodel large lecture classes into active-learning classes. The results also suggest that the potential for individualised learning was high, owing to the teacher's ability to provide one-on-one tutoring through technology-infused lessons. It is imperative that the in-class learning activities be purposefully designed, as the inclusion of exploratory learning through guided inquiry-based activities in the flipped classes was a successful way to engage students on a deeper level, increased the students' curiosity, and engaged them in developing higher-order thinking skills. This project also concluded that
Hearing protector performance and standard deviation.
Williams, W; Dillon, H
2005-01-01
The attenuation performance of a hearing protector is used to estimate the protected exposure level of the user, the aim being to reduce the exposure to an acceptable level. Users should expect the attenuation to fall within a reasonable range of values around a norm. However, an analysis of extensive test data indicates that there is a negative relationship between attenuation performance and its standard deviation. This result is deduced using a variation in the method of calculating a single-number rating of attenuation that is more amenable to drawing statistical inferences. Performance is typically specified as the mean attenuation minus one or two standard deviations, to ensure that more than 50% of the wearer population is well protected. The increase in standard deviation with decreasing attenuation found in this study therefore means that a significant number of users are, in fact, experiencing over-protection. These users may be disinclined to use their hearing protectors because of an increased feeling of acoustic isolation, a problem that is exacerbated in areas with lower noise levels.
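The single-number convention described, mean attenuation minus one or two standard deviations, can be sketched as follows (the attenuation data are hypothetical):

```python
from statistics import mean, stdev

def assumed_protection(attenuations_db, k=1.0):
    """Assumed protection value: mean measured attenuation minus k
    standard deviations (k = 1 or 2 in common single-number ratings),
    so that a majority of wearers achieve at least this attenuation."""
    return mean(attenuations_db) - k * stdev(attenuations_db)

def protected_level(exposure_db, attenuations_db, k=1.0):
    """Estimated at-ear level under the protector."""
    return exposure_db - assumed_protection(attenuations_db, k)

# Hypothetical per-subject attenuations (dB) for one protector model.
att = [28, 31, 25, 30, 27, 33, 26, 29]
print(round(protected_level(100, att, k=1.0), 1))  # -> 74.0
```

A larger standard deviation at the same mean lowers the assumed protection, which is exactly why the negative attenuation-variance relationship reported above implies over-protection for many wearers.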
Top Yukawa deviation in extra dimension
International Nuclear Information System (INIS)
Haba, Naoyuki; Oda, Kin-ya; Takahashi, Ryo
2009-01-01
We suggest a simple one-Higgs-doublet model living in the bulk of five-dimensional spacetime compactified on S¹/Z₂, in which the top Yukawa coupling can be smaller than the naive standard-model expectation, i.e. the top quark mass divided by the Higgs vacuum expectation value. If only a single Higgs particle is found at the LHC and the top Yukawa deviation is also observed, our scenario becomes a realistic candidate beyond the standard model. The Yukawa deviation comes from the fact that the wave function profile of the free physical Higgs field can differ from that of the vacuum expectation value, owing to the presence of the brane-localized Higgs potentials. In the Brane-Localized Fermion scenario, we find a sizable top Yukawa deviation, which could be checked at the LHC, with the dominant Higgs production channel being WW fusion. We also study the Bulk Fermion scenario with a brane-localized Higgs potential, which resembles the Universal Extra Dimension model with a stable dark matter candidate. We show that both scenarios are consistent with the current electroweak precision measurements.
Moderate Deviation Analysis for Classical Communication over Quantum Channels
Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco
2017-11-01
We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes, we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Given the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.
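Schematically, and in commonly used notation (C the capacity, V the dispersion, a_n a moderate sequence), the tradeoff has the following form; consult the paper for the precise statements and conditions:

```latex
% Moderate deviation scaling (schematic): for any sequence
% $a_n \to 0$ with $\sqrt{n}\,a_n \to \infty$, there exist codes of
% blocklength $n$ achieving
\begin{align}
  \text{rate:} \quad R_n &= C - \sqrt{V}\, a_n, \\
  \text{error:} \quad \varepsilon_n &= \mathrm{e}^{-\frac{n a_n^2}{2}\,(1 + o(1))},
\end{align}
% interpolating between the large deviation regime (rate fixed below C,
% error exponentially small) and the small deviation / second-order
% regime (rate $C - O(1/\sqrt{n})$, error constant).
```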
A Parallel Approach for Frequent Subgraph Mining in a Single Large Graph Using Spark
Directory of Open Access Journals (Sweden)
Fengcai Qiao
2018-02-01
Frequent subgraph mining (FSM) plays an important role in graph mining, attracting a great deal of attention in many areas, such as bioinformatics, web data mining and social networks. In this paper, we propose SSiGraM (Spark based Single Graph Mining), a Spark-based parallel frequent subgraph mining algorithm for a single large graph. To address the two computational challenges of FSM, we perform both subgraph extension and support evaluation in parallel across all the distributed cluster worker nodes. In addition, we employ a heuristic search strategy and three novel optimizations in the support evaluation process: load balancing, pre-search pruning and top-down pruning, which significantly improve performance. Extensive experiments with four different real-world datasets demonstrate that the proposed algorithm outperforms the existing GraMi (Graph Mining) algorithm by an order of magnitude on all datasets and can work with a lower support threshold.
Burnout of pulverized biomass particles in large scale boiler - Single particle model approach
Energy Technology Data Exchange (ETDEWEB)
Saastamoinen, Jaakko; Aho, Martti; Moilanen, Antero [VTT Technical Research Centre of Finland, Box 1603, 40101 Jyvaeskylae (Finland); Soerensen, Lasse Holst [ReaTech/ReAddit, Frederiksborgsveij 399, Niels Bohr, DK-4000 Roskilde (Denmark); Clausen, Soennik [Risoe National Laboratory, DK-4000 Roskilde (Denmark); Berg, Mogens [ENERGI E2 A/S, A.C. Meyers Vaenge 9, DK-2450 Copenhagen SV (Denmark)
2010-05-15
The burning of coal and biomass particles is studied and compared by measurements in an entrained flow reactor and by modelling. The results are applied to study the burning of pulverized biomass in a large-scale utility boiler originally designed for coal. A simplified single-particle approach, in which the particle combustion model is coupled with the one-dimensional equation of motion of the particle, is applied to calculate burnout in the boiler. Because of their lower density and greater reactivity, biomass particles can be much larger than coal particles and still reach complete burnout. The burner location and the trajectories of the particles might be optimised to maximise the residence time and burnout. (author)
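The density effect can be illustrated with a toy shrinking-sphere burnout model under surface-reaction control; the constants below are invented for illustration and are not taken from the measurements:

```python
def burnout_time(d0_um, rho, k_s=0.08, dt=1e-3):
    """Toy shrinking-sphere char burnout: the particle diameter shrinks
    at a constant rate dd/dt = 2*k_s/rho (k_s: surface mass flux,
    kg/m^2/s; rho: particle density, kg/m^3).  Returns the time (s)
    until the diameter reaches 1% of its initial value."""
    d = d0_um * 1e-6          # diameter in metres
    t = 0.0
    shrink = 2.0 * k_s / rho  # dd/dt in m/s
    while d > 0.01 * d0_um * 1e-6:
        d -= shrink * dt
        t += dt
    return t

# At equal diameter and surface flux, the low-density particle burns
# out faster -- which is why biomass may be milled coarser than coal.
t_coal = burnout_time(100, rho=1300)
t_biomass = burnout_time(100, rho=500)
print(t_biomass < t_coal)  # -> True
```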
Foster-wittig, T. A.; Thoma, E.; Green, R.; Hater, G.; Swan, N.; Chanton, J.
2013-12-01
Improved understanding of air emissions from large area sources such as landfills, waste water ponds, open-source processing, and agricultural operations is a topic of increasing environmental importance. In many cases, the size of the area source, coupled with spatial heterogeneity, makes direct (on-site) emission assessment difficult; methane emissions from landfills, for example, can be particularly complex [Thoma et al, 2009]. Recently, whole-facility (remote) measurement approaches based on tracer correlation have been utilized [Scheutz et al, 2011]. The approach uses a mobile platform to simultaneously measure a metered release of a conservative gas (the tracer) along with the target compound (methane in the case of landfills). The known-rate tracer release provides a measure of atmospheric dispersion at the downwind observing location, allowing the area-source emission to be determined by a ratio calculation [Green et al, 2010]. Although powerful in concept, the approach has been somewhat limited to research applications by the complexity and cost of the high-sensitivity measurement equipment required to quantify part-per-billion levels of tracer and target gas at kilometer-scale distances. The advent of compact, robust, and easy-to-use near-infrared optical measurement systems (such as cavity ring-down spectroscopy) allows the tracer correlation approach to be investigated for wider use. Over the last several years, Waste Management Inc., the U.S. EPA, and collaborators have conducted method evaluation activities to determine the viability of a standardized approach through a large number of field measurement trials at U.S. landfills. As opposed to previous studies [Scheutz et al, 2011] conducted at night (optimal plume transport conditions), the current work evaluated realistic use scenarios; these include execution by non-scientist personnel, daylight operation, and the full range of atmospheric conditions (all plume transport
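The ratio calculation at the heart of tracer correlation can be sketched as follows; the gas pair (SF6 as tracer) and the numbers are hypothetical, chosen only to show the arithmetic:

```python
def tracer_emission(release_kg_h, d_target_ppb, d_tracer_ppb,
                    mw_target=16.04, mw_tracer=146.06):
    """Tracer-correlation ratio calculation: scale the metered tracer
    release rate by the ratio of above-background mixing-ratio
    enhancements in the downwind plume, corrected for molar mass.
    Defaults assume CH4 as target and SF6 as tracer -- an illustrative
    choice, not necessarily the gases used in the study."""
    return release_kg_h * (d_target_ppb / d_tracer_ppb) * (mw_target / mw_tracer)

# Hypothetical transect: 0.5 kg/h tracer release; the plume shows
# 120 ppb CH4 and 1.5 ppb tracer above background.
q_ch4 = tracer_emission(0.5, 120.0, 1.5)
print(round(q_ch4, 2))  # -> 4.39 (kg CH4 per hour)
```

Because both gases experience the same atmospheric dispersion between source and observer, the dispersion itself cancels out of the ratio.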
Haer, Toon; Aerts, Jeroen
2015-04-01
Between 1998 and 2009, Europe suffered over 213 major damaging floods, which caused 1126 deaths and displaced around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk through flood defence structures, such as dikes and levees. It has been suggested, however, that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European member states, insurance and reinsurance markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model, allowing a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
The large break LOCA evaluation method with the simplified statistic approach
International Nuclear Information System (INIS)
Kamata, Shinya; Kubo, Kazuo
2004-01-01
The USNRC published the Code Scaling, Applicability and Uncertainty (CSAU) evaluation methodology for large break LOCA, which supported the revised rule for Emergency Core Cooling System performance in 1989. USNRC Regulatory Guide 1.157 requires that the peak cladding temperature (PCT) not exceed 2200 °F, with high probability, at the 95th percentile. In recent years, organizations overseas have developed statistical methodologies and best-estimate codes with models that provide a more realistic simulation of the phenomena, based on the CSAU evaluation methodology. To calculate the PCT probability distribution by Monte Carlo trials, there are approaches such as the response surface technique using polynomials, the order statistics method, etc. For the purpose of performing rational statistical analysis, Mitsubishi Heavy Industries, Ltd. (MHI) developed a statistical LOCA method using the best-estimate LOCA code MCOBRA/TRAC and the simplified code HOTSPOT. HOTSPOT is a Monte Carlo heat conduction solver used to evaluate the uncertainties of the significant fuel parameters at the PCT positions of the hot rod. Direct uncertainty sensitivity studies can be performed without a response surface, because the Monte Carlo simulation for the key parameters can be performed in a short time using HOTSPOT. With regard to the parameter uncertainties, MHI adopted the treatment that bounding conditions are given for the LOCA boundary and plant initial conditions, while the Monte Carlo simulation using HOTSPOT is applied to the significant fuel parameters. The paper describes the large break LOCA evaluation method with the simplified statistical approach and the results of applying the method to a representative four-loop nuclear power plant. (author)
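A minimal sketch of a HOTSPOT-style Monte Carlo percentile estimate; the PCT response function below is a made-up linear stand-in for the real heat conduction solver, and the parameter distributions are invented:

```python
import random

def percentile_95(samples):
    """Empirical 95th percentile of Monte Carlo results."""
    s = sorted(samples)
    return s[int(0.95 * (len(s) - 1))]

def pct_response(gap_conductance, power_peaking):
    """Dummy linearised PCT (degrees F) as a function of two uncertain
    fuel parameters -- purely illustrative, not a physical model."""
    return 1600.0 + 300.0 * power_peaking - 50.0 * gap_conductance

# Sample the uncertain fuel parameters, evaluate the PCT response,
# and compare the 95th percentile against the acceptance criterion.
random.seed(0)
pcts = [pct_response(random.gauss(1.0, 0.1), random.gauss(1.0, 0.05))
        for _ in range(10_000)]
p95 = percentile_95(pcts)
print(p95 < 2200.0)  # 95th-percentile PCT below the 2200 F limit -> True
```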
A long-term, continuous simulation approach for large-scale flood risk assessments
Falter, Daniela; Schröter, Kai; Viet Dung, Nguyen; Vorogushyn, Sergiy; Hundecha, Yeshewatesfa; Kreibich, Heidi; Apel, Heiko; Merz, Bruno
2014-05-01
The Regional Flood Model (RFM) is a process-based model cascade developed for flood risk assessments of large-scale basins. RFM consists of four parts: the rainfall-runoff model SWIM, a 1D channel routing model, a 2D hinterland inundation model, and the flood loss estimation model for residential buildings FLEMOps+r. The model cascade recently underwent a proof-of-concept study in the Elbe catchment (Germany) to demonstrate that flood risk assessments based on a continuous simulation approach, including rainfall-runoff, hydrodynamic and damage estimation models, are feasible for large catchments. The results of this study indicated significant uncertainties, especially in the hydrodynamic simulations, essentially a consequence of low data quality and the disregard of dike breaches. Therefore, RFM was applied with a refined hydraulic model setup to the Elbe tributary Mulde. The Mulde study area comprises about 6,000 km² and 380 river-km. The inclusion of more reliable information on overbank cross-sections and dikes considerably improved the results. Applying RFM to flood risk assessment requires long-term climate input data to drive the model chain. This input was provided by a multi-site, multi-variate weather generator that produces sets of synthetic meteorological data reproducing the current climate statistics; the data set comprises 100 realizations of 100 years of meteorological data. With the proposed continuous simulation approach of RFM, we simulated a virtual period of 10,000 years covering the entire flood risk chain, including hydrological, 1D/2D hydrodynamic and flood damage estimation models. This provided a record of around 2,000 inundation events affecting the study area, with spatially detailed information on inundation depths and damage to residential buildings at a resolution of 100 m. This serves as the basis for a spatially consistent flood risk assessment for the Mulde catchment presented in
The role of septal surgery in management of the deviated nose.
Foda, Hossam M T
2005-02-01
The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 260 patients seeking rhinoplasty to correct external nasal deviations; 75 percent of them had various degrees of nasal obstruction. Septal surgery was necessary in 232 patients (89 percent), not only to improve breathing but also to achieve a straight, symmetrical, external nose as well. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.
Geometry of river networks. I. Scaling, fluctuations, and deviations
International Nuclear Information System (INIS)
Dodds, Peter Sheridan; Rothman, Daniel H.
2001-01-01
This paper is the first in a series of three papers investigating the detailed geometry of river networks. Branching networks are a universal structure employed in the distribution and collection of material. Large-scale river networks mark an important class of two-dimensional branching networks, being not only of intrinsic interest but also a pervasive natural phenomenon. In the description of river network structure, scaling laws are uniformly observed. Reported values of scaling exponents vary, suggesting that no unique set of scaling exponents exists. To improve this current understanding of scaling in river networks and to provide a fuller description of branching network structure, here we report a theoretical and empirical study of fluctuations about and deviations from scaling. We examine data for continent-scale river networks such as the Mississippi and the Amazon and draw inspiration from a simple model of directed, random networks. We center our investigations on the scaling of the length of a subbasin's dominant stream with its area, a characterization of basin shape known as Hack's law. We generalize this relationship to a joint probability density, and provide observations and explanations of deviations from scaling. We show that fluctuations about scaling are substantial, and grow with system size. We find strong deviations from scaling at small scales which can be explained by the existence of a linear network structure. At intermediate scales, we find slow drifts in exponent values, indicating that scaling is only approximately obeyed and that universality remains indeterminate. At large scales, we observe a breakdown in scaling due to decreasing sample space and correlations with overall basin shape. The extent of approximate scaling is significantly restricted by these deviations, and will not be improved by increases in network resolution
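Hack's law relates the length L of a subbasin's dominant stream to its area A as L ∝ A^h. A minimal sketch (Python/NumPy) of estimating the Hack exponent by log-log regression on synthetic basins; the exponent, prefactor and scatter level below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic basins obeying Hack's law L = c * A^h with lognormal
# scatter. h = 0.57 is a commonly reported value; c and the noise
# level are illustrative assumptions.
h_true, c_true = 0.57, 1.4
areas = 10 ** rng.uniform(0, 6, 500)                # km^2
lengths = c_true * areas ** h_true * rng.lognormal(0.0, 0.1, 500)

# Hack exponent from least squares in log-log space:
# log10 L = log10 c + h * log10 A
h_est, logc_est = np.polyfit(np.log10(areas), np.log10(lengths), 1)
print(f"estimated Hack exponent h = {h_est:.3f}")
```

The paper's point is precisely that real networks show fluctuations about and deviations from such a clean power law; this sketch only illustrates the baseline scaling relationship.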
Preparing laboratory and real-world EEG data for large-scale analysis: A containerized approach
Directory of Open Access Journals (Sweden)
Nima eBigdely-Shamlo
2016-03-01
Full Text Available Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface (BCI) models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a containerized approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data Levels, each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).
Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs
Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.
2016-07-01
Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and
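The trade-off described above can be illustrated with a toy simulation (all parameter values are assumptions for illustration, not from the study): when biota are patchy, between-image variability dominates, so a fixed total scoring effort spread over more images gives a more precise cover estimate than more points per image:

```python
import random

random.seed(1)

# True mean cover 20%, with image-to-image patchiness (SD 0.10).
TRUE_COVER, PATCHINESS = 0.20, 0.10

def survey(n_images, n_points):
    """Mean scored cover over one simulated transect."""
    total = 0.0
    for _ in range(n_images):
        # per-image cover varies (patchiness), clipped to [0, 1]
        p = min(1.0, max(0.0, random.gauss(TRUE_COVER, PATCHINESS)))
        hits = sum(random.random() < p for _ in range(n_points))
        total += hits / n_points
    return total / n_images

def sd_of_estimate(n_images, n_points, reps=1000):
    """Standard deviation of the cover estimate over many surveys."""
    vals = [survey(n_images, n_points) for _ in range(reps)]
    m = sum(vals) / reps
    return (sum((v - m) ** 2 for v in vals) / reps) ** 0.5

# Same total effort (500 scored points), allocated differently:
sd_few_images = sd_of_estimate(n_images=10, n_points=50)
sd_many_images = sd_of_estimate(n_images=50, n_points=10)
print(f"SD with 10 images x 50 points: {sd_few_images:.4f}")
print(f"SD with 50 images x 10 points: {sd_many_images:.4f}")
```

Under these assumptions the 50-image allocation is clearly more precise, mirroring the study's conclusion that effort is better spent on more images than on more points within images.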
Solving Large-Scale TSP Using a Fast Wedging Insertion Partitioning Approach
Directory of Open Access Journals (Sweden)
Zuoyong Xiang
2015-01-01
Full Text Available A new partitioning method, called Wedging Insertion, is proposed for solving the large-scale symmetric Traveling Salesman Problem (TSP). The idea of the proposed algorithm is to cut a TSP tour into four segments by the nodes' coordinates (rather than by rectangles, as in Strip, FRP, and Karp). Each node, apart from four particular nodes, is located in one of the segments, and no segment twists with the others. After the partitioning process, the algorithm applies a traditional construction method, the insertion method, to each segment to improve the quality of the tour, and then connects the starting node and the ending node of each segment to obtain the complete tour. To test the performance of the proposed algorithm, we conduct experiments on various TSPLIB instances. The experimental results show that the proposed algorithm is more efficient for solving large-scale TSPs. Specifically, our approach markedly reduces the running time of the algorithm while losing only about 10% of the algorithm's performance.
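As a rough illustration of the partition-then-insert idea (a simplified sketch, not the paper's Wedging Insertion algorithm: the partition here is a naive angular split around the centroid, and a standard cheapest-insertion heuristic builds each segment):

```python
import math, random

random.seed(42)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cheapest_insertion(nodes):
    """Build a path by repeatedly inserting the next node where it
    causes the smallest detour (standard construction heuristic)."""
    path = nodes[:2]
    for n in nodes[2:]:
        best_i = min(range(1, len(path) + 1),
                     key=lambda i: dist(path[i - 1], n)
                     + (dist(n, path[i]) - dist(path[i - 1], path[i])
                        if i < len(path) else 0))
        path.insert(best_i, n)
    return path

# Partition the nodes into four angular "wedges" around the centroid
# (a stand-in for the paper's coordinate-based segmentation), solve
# each wedge as a path, then chain the segments into one tour.
pts = [(random.random(), random.random()) for _ in range(200)]
cx = sum(p[0] for p in pts) / len(pts)
cy = sum(p[1] for p in pts) / len(pts)
wedges = [[], [], [], []]
for p in pts:
    ang = math.atan2(p[1] - cy, p[0] - cx)        # -pi .. pi
    wedges[int((ang + math.pi) / (math.pi / 2)) % 4].append(p)

tour = []
for w in wedges:
    tour.extend(cheapest_insertion(w) if len(w) >= 2 else w)
tour_len = sum(dist(tour[i], tour[(i + 1) % len(tour)])
               for i in range(len(tour)))
print(f"tour over {len(tour)} nodes, length {tour_len:.2f}")
```

Because the segments do not overlap, each sub-problem is small and independent, which is where the running-time reduction comes from.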
Large Scale Proteomic Data and Network-Based Systems Biology Approaches to Explore the Plant World.
Di Silvestre, Dario; Bergamaschi, Andrea; Bellini, Edoardo; Mauri, PierLuigi
2018-06-03
The investigation of plant organisms by means of data-derived systems biology approaches based on network modeling is mainly characterized by genomic data, while the potential of proteomics is largely unexplored. This delay is mainly caused by the paucity of plant genomic/proteomic sequences and annotations which are fundamental to perform mass-spectrometry (MS) data interpretation. However, Next Generation Sequencing (NGS) techniques are contributing to filling this gap and an increasing number of studies are focusing on plant proteome profiling and protein-protein interactions (PPIs) identification. Interesting results were obtained by evaluating the topology of PPI networks in the context of organ-associated biological processes as well as plant-pathogen relationships. These examples foreshadow well the benefits that these approaches may provide to plant research. Thus, in addition to providing an overview of the main-omic technologies recently used on plant organisms, we will focus on studies that rely on concepts of module, hub and shortest path, and how they can contribute to the plant discovery processes. In this scenario, we will also consider gene co-expression networks, and some examples of integration with metabolomic data and genome-wide association studies (GWAS) to select candidate genes will be mentioned.
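The network concepts the review builds on (hubs, shortest paths) can be sketched on a toy PPI graph; the protein names and edges below are invented for illustration:

```python
from collections import deque

# Toy protein-protein interaction (PPI) network as an adjacency list.
ppi = {
    "P1": ["P2", "P3", "P4", "P5"],   # P1 is the hub
    "P2": ["P1"], "P3": ["P1"],
    "P4": ["P1", "P6"], "P5": ["P1"],
    "P6": ["P4", "P7"], "P7": ["P6"],
}

def degree_hub(graph):
    """Protein with the most interaction partners."""
    return max(graph, key=lambda p: len(graph[p]))

def shortest_path(graph, src, dst):
    """Breadth-first search: minimal number of interaction steps."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

hub = degree_hub(ppi)
path = shortest_path(ppi, "P2", "P7")
print(f"hub: {hub}, path P2->P7: {path}")
```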
Color Doppler Score: A New Approach for Monitoring a Large Placental Chorioangioma
Directory of Open Access Journals (Sweden)
Maria Angelica Zoppi
2014-01-01
Full Text Available We employed a color Doppler score as an innovative approach for the prenatal diagnosis and monitoring of a large placental chorioangioma diagnosed at 26 weeks, together with subjective semiquantitative assessment of the vascularization. The blood flow was assessed by a color Doppler score based on the intensity of the color signal, with the following values: (1) no flow, (2) minimal flow, (3) moderate flow, and (4) high vascular flow. Weekly examinations were programmed. Initially, a color Doppler score of 3 was assigned, remaining unchanged at the following two exams, decreasing to score 2 in the following two exams, and to score 1 thereafter. The ultrasonographic scan showed an increase of the mass size at the second and third exams, followed by an arrest of growth persisting for the rest of the pregnancy. Some hyperechogenic spots inside the mass appeared at the end. Expectant management was opted for; the delivery was at 39+2 weeks, and maternal and fetal outcomes were favourable. The color Doppler score employed for assessment of vascularization in successive examinations proved to be an important tool for the prediction of chorioangioma involution, and this new approach to monitoring allowed effective surveillance and successful tailored management.
QAPgrid: a two level QAP-based approach for large-scale data analysis and visualization.
Directory of Open Access Journals (Sweden)
Mario Inostroza-Ponta
Full Text Available BACKGROUND: The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain "hidden regularities", and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. METHODOLOGY/PRINCIPAL FINDINGS: We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for the assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. CONCLUSIONS/SIGNIFICANCE: Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between the clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need for an ad hoc weighting of attributes. Finally, our Gene Ontology-based study on
A Full-Maxwell Approach for Large-Angle Polar Wander of Viscoelastic Bodies
Hu, H.; van der Wal, W.; Vermeersen, L. L. A.
2017-12-01
For large-angle, long-term true polar wander (TPW) there are currently two types of nonlinear methods which give approximate solutions: those assuming that the rotational axis coincides with the axis of maximum moment of inertia (MoI), which simplifies the Liouville equation, and those based on the quasi-fluid approximation, which approximates the Love number. Recent studies show that both can have a significant bias for certain models. Therefore, we still lack a (semi-)analytical method which can give exact solutions for large-angle TPW for a model based on Maxwell rheology. This paper provides a method which analytically solves the MoI equation and adopts an extended iterative procedure introduced in Hu et al. (2017) to obtain a time-dependent solution. The new method can be used to simulate the effect of a remnant bulge or models in different hydrostatic states. We show the effect of the viscosity of the lithosphere on long-term, large-angle TPW. We also simulate models without hydrostatic equilibrium and show that the choice of the initial stress-free shape for the elastic (or highly viscous) lithosphere of a given model is as important as its thickness for obtaining a correct TPW behavior. The initial shape of the lithosphere can be an alternative explanation to mantle convection for the difference between the observed and model-predicted flattening. Finally, it is concluded that, based on the quasi-fluid approximation, TPW speed on Earth and Mars is underestimated, while the speed of the rotational axis approaching the end position on Venus is overestimated.
Savic, Ivana
2012-02-01
Decreasing the thermal conductivity of bulk materials by nanostructuring and dimensionality reduction, or by introducing some amount of disorder, represents a promising strategy in the search for efficient thermoelectric materials [1]. For example, considerable improvements of the thermoelectric efficiency in nanowires with surface roughness [2], superlattices [3] and nanocomposites [4] have been attributed to a significantly reduced thermal conductivity. In order to accurately describe thermal transport processes in complex nanostructured materials and directly compare with experiments, the development of theoretical and computational approaches that can account for both anharmonic and disorder effects in large samples is highly desirable. We will first summarize the strengths and weaknesses of the standard atomistic approaches to thermal transport (molecular dynamics [5], the Boltzmann transport equation [6] and the Green's function approach [7]). We will then focus on the methods based on the solution of the Boltzmann transport equation, which are at present computationally too demanding to treat large-scale systems and thus to investigate realistic materials. We will present a Monte Carlo method [8] to solve the Boltzmann transport equation in the relaxation time approximation [9], which enables computation of the thermal conductivity of ordered and disordered systems with a number of atoms up to an order of magnitude larger than feasible with straightforward integration. We will present a comparison between exact and Monte Carlo Boltzmann transport results for small SiGe nanostructures and then use the Monte Carlo method to analyze the thermal properties of realistic SiGe nanostructured materials. This work is done in collaboration with Davide Donadio, Francois Gygi, and Giulia Galli from UC Davis. [1] See e.g. A. J. Minnich, M. S. Dresselhaus, Z. F. Ren, and G. Chen, Energy Environ. Sci. 2, 466 (2009). [2] A. I. Hochbaum et al., Nature 451, 163 (2008).
Decreasing the amplitude deviation of Gaussian filter in surface roughness measurements
Liu, Bo; Wang, Yu
2008-12-01
A new approach for decreasing the amplitude characteristic deviation of the Gaussian filter in surface roughness measurements is presented in this paper. According to the central limit theorem, many different Gaussian approximation filters can be constructed. When a first-order Butterworth filter and a moving average filter are used to approximate the Gaussian filter, their amplitude deviations have opposite directions and their extreme values lie at nearby locations, so a linear combination of the two can greatly reduce the amplitude deviation. The maximum amplitude deviation is only about 0.11% when the two filters are combined in parallel. The algorithm of this new method is simple and highly efficient.
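The central limit theorem argument in the abstract can be illustrated numerically: cascading simple moving-average (boxcar) filters converges to a Gaussian kernel. A small sketch (kernel lengths are illustrative assumptions; this shows the convergence principle, not the paper's Butterworth/moving-average combination):

```python
import numpy as np

# One length-9 boxcar, then three more convolutions: 4 in cascade.
box = np.ones(9) / 9.0
kernel = box.copy()
for _ in range(3):
    kernel = np.convolve(kernel, box)

# Matching Gaussian: variance of a length-L discrete boxcar is
# (L^2 - 1) / 12, and variances add under convolution.
sigma = np.sqrt(4 * (9 ** 2 - 1) / 12.0)
x = np.arange(len(kernel)) - (len(kernel) - 1) / 2.0
gauss = np.exp(-x ** 2 / (2 * sigma ** 2))
gauss /= gauss.sum()

max_dev = np.abs(kernel - gauss).max()
print(f"max deviation from Gaussian: {max_dev:.5f}")
```

Even four cascaded boxcars already track the matching Gaussian closely; the paper's contribution is choosing two component filters whose residual deviations have opposite signs and largely cancel.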
Standard deviation of scatterometer measurements from space.
Fischer, R. E.
1972-01-01
The standard deviation of scatterometer measurements has been derived under assumptions applicable to spaceborne scatterometers. Numerical results are presented which show that, with sufficiently long integration times, input signal-to-noise ratios below unity do not cause excessive degradation of measurement accuracy. The effects on measurement accuracy due to varying integration times and changing the ratio of signal bandwidth to IF filter-noise bandwidth are also plotted. The results of the analysis may resolve a controversy by showing that in fact statistically useful scatterometer measurements can be made from space using a 20-W transmitter, such as will be used on the S-193 experiment for Skylab-A.
Boari, Nicola; Gagliardi, Filippo; Roberti, Fabio; Barzaghi, Lina Raffaella; Caputy, Anthony J; Mortini, Pietro
2013-05-01
Several surgical approaches have been previously reported for the treatment of olfactory groove meningiomas (OGM). The trans-frontal-sinus subcranial approach (TFSSA) for the removal of large OGMs is described, comparing it with other reported approaches in terms of advantages and drawbacks. The TFSSA was performed on cadaveric specimens to illustrate the surgical technique. The surgical steps of the TFSSA and the related anatomical pictures are reported. The approach was adopted in a clinical setting; a case illustration is reported to demonstrate the feasibility of the described approach and to provide intraoperative pictures. The TFSSA represents a possible route to treat large OGMs. The subcranial approach provides early devascularization of the tumor, direct tumor access from the base without traction on the frontal lobes, good overview of dissection of the optic nerves and anterior cerebral arteries, and dural reconstruction with pedicled pericranial flap. Georg Thieme Verlag KG Stuttgart · New York.
Examining Food Risk in the Large using a Complex, Networked System-of-systems Approach
Energy Technology Data Exchange (ETDEWEB)
Ambrosiano, John [Los Alamos National Laboratory; Newkirk, Ryan [U OF MINNESOTA; Mc Donald, Mark P [VANDERBILT U
2010-12-03
The food production infrastructure is a highly complex system of systems. Characterizing the risks of intentional contamination in multi-ingredient manufactured foods is extremely challenging because the risks depend on the vulnerabilities of food processing facilities and on the intricacies of the supply-distribution networks that link them. A pure engineering approach to modeling the system is impractical because of the overall system complexity and paucity of data. A methodology is needed to assess food contamination risk 'in the large', based on current, high-level information about manufacturing facilities, commodities and markets, that will indicate which food categories are most at risk of intentional contamination and warrant deeper analysis. The approach begins by decomposing the system for producing a multi-ingredient food into instances of two subsystem archetypes: (1) the relevant manufacturing and processing facilities, and (2) the networked commodity flows that link them to each other and consumers. Ingredient manufacturing subsystems are modeled as generic systems dynamics models with distributions of key parameters that span the configurations of real facilities. Networks representing the distribution systems are synthesized from general information about food commodities. This is done in a series of steps. First, probability networks representing the aggregated flows of food from manufacturers to wholesalers, retailers, other manufacturers, and direct consumers are inferred from high-level approximate information. This is followed by disaggregation of the general flows into flows connecting 'large' and 'small' categories of manufacturers, wholesalers, retailers, and consumers. Optimization methods are then used to determine the most likely network flows consistent with given data. Vulnerability can be assessed for a potential contamination point using a modified CARVER + Shock model. Once the facility and
Evaluation of digital soil mapping approaches with large sets of environmental covariates
Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas
2018-01-01
The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. RF tended to over
Computation of standard deviations in eigenvalue calculations
International Nuclear Information System (INIS)
Gelbard, E.M.; Prael, R.
1990-01-01
In Brissenden and Garlick (1985), the authors propose a modified Monte Carlo method for eigenvalue calculations, designed to decrease particle transport biases in the flux and eigenvalue estimates, and in corresponding estimates of standard deviations. Apparently a very similar method has been used by Soviet Monte Carlo specialists. The proposed method is based on the generation of "superhistories", chains of histories run in sequence without intervening renormalization of the fission source. This method appears to have some disadvantages, discussed elsewhere. Earlier numerical experiments suggest that biases in fluxes and eigenvalues are negligibly small, even for very small numbers of histories per generation. Now more recent experiments, run on the CRAY-XMP, tend to confirm these earlier conclusions. The new experiments, discussed in this paper, involve the solution of one-group 1D diffusion theory eigenvalue problems, in difference form, via Monte Carlo. Experiments covered a range of dominance ratios from ∼0.75 to ∼0.985. In all cases flux and eigenvalue biases were substantially smaller than one standard deviation. The conclusion that, in practice, the eigenvalue bias is negligible has strong theoretical support. (author)
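The generation-by-generation structure of such Monte Carlo eigenvalue calculations can be sketched on a toy two-region "fission matrix" (an arbitrary illustrative matrix, using conventional per-generation source renormalization rather than the superhistory scheme discussed above; its dominant eigenvalue is ≈1.0405):

```python
import random

random.seed(0)

# M[i][j]: expected next-generation particles born in region i per
# particle in region j. Dominant eigenvalue = (1.3 + sqrt(0.61))/2.
M = [[0.6, 0.5],
     [0.3, 0.7]]

def k_effective(n_particles=5000, n_gen=40, n_skip=10):
    pop = [0] * n_particles           # all particles start in region 0
    ks = []
    for _ in range(n_gen):
        newpop = []
        for j in pop:
            for i in (0, 1):
                mean = M[i][j]
                # integer offspring count with the correct mean
                n = int(mean) + (random.random() < mean - int(mean))
                newpop.extend([i] * n)
        ks.append(len(newpop) / len(pop))
        # renormalize the fission source back to n_particles
        pop = [random.choice(newpop) for _ in range(n_particles)]
    # skip early generations while the source converges
    return sum(ks[n_skip:]) / len(ks[n_skip:])

k_mc = k_effective()
print(f"Monte Carlo eigenvalue estimate: {k_mc:.3f}")
```

The `random.choice` renormalization step is exactly what the superhistory method removes between generations within a chain.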
Large scale debris-flow hazard assessment: a geotechnical approach and GIS modelling
Directory of Open Access Journals (Sweden)
G. Delmonaco
2003-01-01
Full Text Available A deterministic distributed model has been developed for large-scale debris-flow hazard analysis in the basin of the River Vezza (Tuscany Region, Italy). This area (51.6 km2) was affected by over 250 landslides, classified as debris/earth flows, mainly involving the metamorphic geological formations outcropping in the area and triggered by the pluviometric event of 19 June 1996. In recent decades, landslide hazard and risk analysis has been favoured by the development of GIS techniques permitting the generalisation, synthesis and modelling of stability conditions at a large scale of investigation (>1:10 000). In this work, we report the main results of applying a geotechnical model coupled with a hydrological model for debris-flow hazard assessment. The analysis was developed through the following steps: a landslide inventory map derived from aerial photo-interpretation and direct field survey; generation of a database and digital maps; elaboration of a DTM and derived themes (i.e. a slope angle map); definition of a superficial soil thickness map; geotechnical soil characterisation through back-analysis of test slopes and laboratory tests; inference of the influence of precipitation, for distinct return periods, on ponding time and pore pressure generation; implementation of a slope stability model (infinite slope model); and generalisation of the safety factor for estimated rainfall events with different return periods. This approach has allowed the identification of potential source areas of debris-flow triggering for precipitation events with estimated return periods of 10, 50, 75 and 100 years. The model shows a dramatic decrease of safety conditions for the simulation related to a 75-year return period rainfall event, corresponding to an estimated cumulated daily intensity of 280–330 mm. This value can be considered the hydrological triggering
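The infinite slope model at the core of such stability analyses can be sketched as follows (the classical slope-parallel-seepage formulation; the soil parameters below are illustrative assumptions, not values from the Vezza basin study):

```python
import math

def safety_factor(slope_deg, soil_depth, water_depth,
                  cohesion=5e3, phi_deg=30.0,
                  gamma_soil=18e3, gamma_w=9.81e3):
    """Factor of safety FS for an infinite slope with slope-parallel
    seepage; FS < 1 indicates potential instability.
    cohesion [Pa], unit weights [N/m^3], depths [m]."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal = gamma_soil * soil_depth * math.cos(beta) ** 2
    pore = gamma_w * water_depth * math.cos(beta) ** 2
    resisting = cohesion + (normal - pore) * math.tan(phi)
    driving = gamma_soil * soil_depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Dry vs. fully saturated 1 m soil column on a 35-degree slope:
fs_dry = safety_factor(35, 1.0, 0.0)
fs_wet = safety_factor(35, 1.0, 1.0)
print(f"FS dry: {fs_dry:.2f}, FS saturated: {fs_wet:.2f}")
```

Rising pore pressure from rainfall infiltration is what drives FS below 1 in the hazard simulations, which is why the safety factor is generalised over rainfall events of different return periods.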
Cislaghi, Alessio; Rigon, Emanuel; Lenzi, Mario Aristide; Bischetti, Gian Battista
2018-04-01
Large wood (LW) plays a key role in physical, chemical, environmental, and biological processes in most natural and seminatural streams. However, it is also a source of hydraulic hazard in anthropised territories. Recruitment from fluvial processes has been the subject of many studies, whereas less attention has been given to hillslope recruitment, which is linked to episodic and spatially distributed events and requires a reliable and accurate slope stability model and a hillslope-channel transfer model. The purpose of this study is to develop an innovative LW hillslope-recruitment estimation approach that combines forest stand characteristics in a spatially distributed form, a probabilistic multidimensional slope stability model able to include the reinforcement exerted by roots, and a hillslope-channel transfer procedure. The approach was tested on a small mountain headwater catchment in the eastern Italian Alps that is prone to shallow landslide and debris flow phenomena. The slope stability model (that had not been calibrated) provided accurate performances, in terms of unstable areas identification according to the landslide inventory (AUC = 0.832) and of LW volume estimation in comparison with LW volume produced by inventoried landslides (7702 m3 corresponding to a recurrence time of about 30 years in the susceptibility curve). The results showed that most LW potentially mobilised by landslides does not reach the channel network (only about 16%), in agreement with the few data reported by other studies, as well as the data normalized for unit length of channel and unit length of channel per year (0-116 m3/km and 0-4 m3/km y-1). This study represents an important contribution to LW research. A rigorous and site-specific estimation of LW hillslope recruitment should, in fact, be an integral part of more general studies on LW dynamics, for forest planning and management, and positioning in-channel wood retention structures.
McCowan, Brenda; Beisner, Brianne; Hannibal, Darcy
2017-12-07
Biomedical facilities across the nation and worldwide aim to develop cost-effective methods for the reproductive management of macaque breeding groups, typically by housing macaques in large, multi-male multi-female social groups that provide monkey subjects for research as well as appropriate socialization for their psychological well-being. One of the most difficult problems in managing socially housed macaques is their propensity for deleterious aggression. From a management perspective, deleterious aggression (as opposed to less intense aggression that serves to regulate social relationships) is undoubtedly the most problematic behavior observed in group-housed macaques, which can readily escalate to the degree that it causes social instability, increases serious physical trauma leading to group dissolution, and reduces psychological well-being. Thus, for both welfare and other management reasons, aggression among rhesus macaques at primate centers and facilities needs to be addressed with a more proactive approach. Management strategies need to be instituted that maximize social housing while also reducing problematic social aggression due to instability, using efficacious methods for detection and prevention in the most cost-effective manner. Herein we review a new proactive approach using social network analysis to assess and predict deleterious aggression in macaque groups. We discovered three major pathways leading to instability, such as unusually high rates and severity of trauma and social relocations. These pathways are linked either directly or indirectly to network structure in rhesus macaque societies. We define these pathways according to the key intrinsic and extrinsic variables (e.g., demographic, genetic or social factors) that influence network and behavioral measures of stability (see Fig. 1). They are: (1) presence of natal males, (2) matrilineal genetic fragmentation, and (3) the power structure and conflict policing behavior supported by this
Chen, Wei; Deng, Da
2014-11-11
We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by the successful cutting open of ∼100% of carbon nanospheres into nanobowls on a large scale, starting from Sn@C nanospheres, for the first time.
A Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks
Bottacin-Busolin, A.; Worman, A. L.
2013-12-01
A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high-dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year causes a large proliferation of mosquitoes, which is a major problem for the people living in the surroundings. Chemical pesticides are currently used as a preventive countermeasure, but they do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
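The policy-iteration scheme described in this abstract can be illustrated with a toy single-reservoir model. All numbers below (storage grid, inflow distribution, flood penalty) are invented for illustration; the actual Dalälven study involves 13 reservoirs, 36 power stations, and function approximators rather than this tabular value function.

```python
import numpy as np

S = np.arange(11)                # discretized storage levels (0..10)
A = np.arange(4)                 # candidate releases per stage (0..3)
inflows = np.arange(4)           # stochastic inflow scenarios
p_inflow = [0.2, 0.3, 0.3, 0.2]  # assumed inflow probabilities
gamma = 0.95                     # discount factor

def transition(s, a, q):
    a_eff = min(a, s)                      # cannot release more than stored
    s_next = min(s - a_eff + q, 10)        # storage capped at capacity
    r = a_eff - 5.0 * max(s_next - 9, 0)   # revenue minus flood penalty
    return s_next, r

def q_value(s, a, V):
    # expected one-stage reward plus discounted value over inflow scenarios
    total = 0.0
    for q, p in zip(inflows, p_inflow):
        s_next, r = transition(s, a, q)
        total += p * (r + gamma * V[s_next])
    return total

# Policy iteration: alternate evaluation and greedy improvement.
policy = np.zeros(len(S), dtype=int)
V = np.zeros(len(S))
for _ in range(30):
    for _ in range(200):                              # policy evaluation sweeps
        V = np.array([q_value(s, policy[s], V) for s in S])
    new_policy = np.array([max(A, key=lambda a: q_value(s, a, V))
                           for s in S])                # greedy improvement
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
```

The off-line learning loop in the paper has the same shape: simulate the current policy over inflow scenarios, fit a value function, and derive an improved policy until the policies stop changing.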
Chwalowski, Pawel; Samareh, Jamshid A.; Horta, Lucas G.; Piatak, David J.; McGowan, Anna-Maria R.
2009-01-01
The conceptual and preliminary design processes for aircraft with large shape changes are generally difficult and time-consuming, and the processes are often customized for a specific shape change concept to streamline the vehicle design effort. Accordingly, several existing reports show excellent results of assessing a particular shape change concept or perturbations of a concept. The goal of the current effort was to develop a multidisciplinary analysis tool and process that would enable an aircraft designer to assess several very different morphing concepts early in the design phase and yet obtain second-order performance results so that design decisions can be made with better confidence. The approach uses an efficient parametric model formulation that allows automatic model generation for systems undergoing radical shape changes as a function of aerodynamic parameters, geometry parameters, and shape change parameters. In contrast to other more self-contained approaches, the approach utilizes off-the-shelf analysis modules to reduce development time and to make it accessible to many users. Because the analysis is loosely coupled, discipline modules like a multibody code can be easily swapped for other modules with similar capabilities. One of the advantages of this loosely coupled system is the ability to use the medium- to high-fidelity tools early in the design stages when the information can significantly influence and improve overall vehicle design. Data transfer among the analysis modules is based on an accurate and automated general-purpose data transfer tool. In general, setup time for the integrated system presented in this paper is 2-4 days for simple shape change concepts and 1-2 weeks for more mechanically complicated concepts. Some of the key elements briefly described in the paper include parametric model development, aerodynamic database generation, multibody analysis, and the required software modules as well as examples for a telescoping wing
A reliable approach to the closure of large acquired midline defects of the back
International Nuclear Information System (INIS)
Casas, L.A.; Lewis, V.L. Jr.
1989-01-01
A systematic regionalized approach for the reconstruction of acquired thoracic and lumbar midline defects of the back is described. Twenty-three patients with wounds resulting from pressure necrosis, radiation injury, and postoperative wound infection and dehiscence were successfully reconstructed. The latissimus dorsi, trapezius, gluteus maximus, and paraspinous muscles are utilized individually or in combination as advancement, rotation, island, unipedicle, turnover, or bipedicle flaps. All flaps are designed so that their vascular pedicles are out of the field of injury. After thorough debridement, large, deep wounds are closed with two layers of muscle, while smaller, more superficial wounds are reconstructed with one layer. The trapezius muscle is utilized in the high thoracic area for the deep wound layer, while the paraspinous muscle is used for this layer in the thoracic and lumbar regions. Superficial layer and small wounds in the high thoracic area are reconstructed with either latissimus dorsi or trapezius muscle. Corresponding wounds in the thoracic and lumbar areas are closed with latissimus dorsi muscle alone or in combination with gluteus maximus muscle. The rationale for systematic regionalized reconstruction of acquired midline back wounds is described
International Nuclear Information System (INIS)
Rodriguez, A.
2005-01-01
Full text: Spanish experience holds a relatively important position in the field of the decommissioning of nuclear and radioactive facilities. Decommissioning projects of uranium concentrate mill facilities are near completion; some old uranium mine sites have already been restored; several projects for the dismantling of various small research nuclear reactors and a few pilot plants are at various phases of the dismantling process, with some already completed. The most notable Spanish project in this field is undoubtedly the decommissioning of the Vandellos 1 nuclear power plant, which is currently ready to enter a safe enclosure, or dormancy, period. The management of radioactive wastes in Spain is undertaken by 'Empresa Nacional de Residuos Radioactivos, S.A.' (ENRESA), the Spanish national radioactive waste company, constituted in 1984. ENRESA operates as a management company, whose role is to develop radioactive waste management programmes in accordance with the policy and strategy approved by the Spanish Government. Its responsibilities include the decommissioning and dismantling of nuclear installations. Decommissioning and dismantling nuclear installations is an increasingly important topic for governments, regulators, industries and civil society. There are many aspects that have to be carefully considered, planned and organised, in many cases well in advance of when they really need to be implemented. The goal of this paper is to describe proven approaches relevant to organizing and managing large decommissioning projects, in particular in the case of the Vandellos-1 NPP decommissioning. (author)
Neural ensemble communities: Open-source approaches to hardware for large-scale electrophysiology
Siegle, Joshua H.; Hale, Gregory J.; Newman, Jonathan P.; Voigts, Jakob
2014-01-01
One often-overlooked factor when selecting a platform for large-scale electrophysiology is whether or not a particular data acquisition system is “open” or “closed”: that is, whether or not the system’s schematics and source code are available to end users. Open systems have a reputation for being difficult to acquire, poorly documented, and hard to maintain. With the arrival of more powerful and compact integrated circuits, rapid prototyping services, and web-based tools for collaborative development, these stereotypes must be reconsidered. We discuss some of the reasons why multichannel extracellular electrophysiology could benefit from open-source approaches and describe examples of successful community-driven tool development within this field. In order to promote the adoption of open-source hardware and to reduce the need for redundant development efforts, we advocate a move toward standardized interfaces that connect each element of the data processing pipeline. This will give researchers the flexibility to modify their tools when necessary, while allowing them to continue to benefit from the high-quality products and expertise provided by commercial vendors. PMID:25528614
Treatment of Large Periapical Cyst Like Lesion: A Noninvasive Approach: A Report of Two Cases.
Sood, Nikhil; Maheshwari, Neha; Gothi, Rajat; Sood, Niti
2015-01-01
Periapical lesions develop as sequelae to pulp disease. Periapical radiolucent areas are generally diagnosed either during routine dental radiographic examination or following acute toothache. Various methods can be used in the nonsurgical management of periapical lesions: the conservative root canal treatment, decompression technique, active nonsurgical decompression technique, aspiration-irrigation technique, method using calcium hydroxide, lesion sterilization and repair therapy and the apexum procedure. Monitoring the healing of periapical lesions is essential through periodic follow-up examinations. The ultimate goal of endodontic therapy should be to return the involved teeth to a state of health and function without surgical intervention. All inflammatory periapical lesions should be initially treated with conservative nonsurgical procedures. Surgical intervention is recommended only after nonsurgical techniques have failed. Besides, surgery has many drawbacks, which limit its use in the management of periapical lesions. How to cite this article: Sood N, Maheshwari N, Gothi R, Sood N. Treatment of Large Periapical Cyst Like Lesion: A Noninvasive Approach: A Report of Two Cases. Int J Clin Pediatr Dent 2015;8(2):133-137.
Directory of Open Access Journals (Sweden)
İhsan Çaça
2004-01-01
Full Text Available We evaluated the correlation between success rates and the type and degree of deviation in horizontal concomitant deviations. 104 horizontal concomitant strabismus cases operated on in our clinic between January 1994 and December 2000 were included in the study. 56 cases underwent a recession-resection procedure in the same eye, 19 cases two-muscle recession and one-muscle resection, 20 cases two-muscle recession, and 9 cases only one-muscle recession. A deviation of 10 prism diopters or less at the postoperative sixth-month examination was accepted as surgical success. The surgical success rate was 90% and 89.3% in cases with a deviation angle of 15-30 and 31-50 prism diopters, respectively. The success rate was 78.9% if the angle was more than 50 prism diopters. When surgical success was examined by strabismus type, success was achieved in 88.33% of alternating esotropia, 84.6% of alternating exotropia, 88% of monocular esotropia and 83.3% of monocular exotropia. No statistically significant difference was found between strabismus type and surgical success rate. The rate of gaining binocular vision after treatment was 51.8%. In strabismus surgery, the preoperative deviation angle was found to be an effective factor in the success rate.
Transmission-type angle deviation microscopy
International Nuclear Information System (INIS)
Chiu, M.-H.; Lai, C.-W.; Tan, C.-T.; Lai, C.-F.
2008-01-01
We present a new microscopy technique that we call transmission angle deviation microscopy (TADM). It is based on common-path heterodyne interferometry and geometrical optics. An ultrahigh sensitivity surface plasmon resonance (SPR) angular sensor is used to expand dynamic measurement ranges and to improve the axial resolution in three-dimensional optical microscopy. When transmitted light is incident upon a specimen, the beam converges or diverges because of refractive and/or surface height variations. Advantages include high axial resolution (∼32 nm), nondestructive and noncontact measurement, and larger measurement ranges (± 80 μm) for a numerical aperture of 0.21 in a transparent measurement medium. The technique can be used without conductivity and pretreatment
Investigating deviations from norms in court interpreting
DEFF Research Database (Denmark)
Dubslaff, Friedel; Martinsen, Bodil
Since Shlesinger (1989) discussed the applicability of translational norms to the field of interpreting, a number of scholars have advocated the use of this concept as a frame of reference in interpreting research (e.g. Harris 1990, Schjoldager 1994, 1995, Jansen 1995, Gile 1999, Garzone 2002). Due...... for the study, we intend to conduct interviews instead. The purpose of the study is to investigate deviations from translational norms in court interpreting. More specifically, we aim to identify and describe instances of deviant behaviour on the part of the interpreters, discuss signs of possible deviant...... speaking these languages. This example does not immediately indicate that Translation Studies might be able to contribute to, for example, an improvement of the training situation for the group of court interpreters mentioned above. However, in our opinion, there is reason to believe that TS can make...
14 CFR 21.609 - Approval for deviation.
2010-01-01
... deviation. (a) Each manufacturer who requests approval to deviate from any performance standard of a TSO shall show that the standards from which a deviation is requested are compensated for by factors or... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Approval for deviation. 21.609 Section 21...
International Nuclear Information System (INIS)
Hoegl, A.
1996-01-01
This study investigates how, from a legal point of view, deviations in radiation protection measurements should be treated in comparisons between measured results and limits stipulated by nuclear legislation or goods transport regulations. A case-by-case distinction is proposed which is based on the legal consequences of the respective measurement. Commentaries on nuclear law contain no references to the legal assessment of deviating measurements in radiation protection. The examples quoted in legal commentaries on civil and criminal proceedings of the way in which errors made in measurements for speed control and determinations of the alcohol content in the blood are to be taken into account, and a commentary on ozone legislation, are examined for analogies with radiation protection measurements. Leading cases in the nuclear field are evaluated in the light of the requirements applying in case of deviations in measurements. The final section summarizes the most important findings and conclusions. (orig.) [de
A modular approach to large-scale design optimization of aerospace systems
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft
International Nuclear Information System (INIS)
Choi, Clara Y.H.; Chang, Steven D.; Gibbs, Iris C.; Adler, John R.; Harsh, Griffith R.; Atalar, Banu; Lieberson, Robert E.; Soltys, Scott G.
2012-01-01
Purpose: Single-modality treatment of large brain metastases (>2 cm) with whole-brain irradiation, stereotactic radiosurgery (SRS) alone, or surgery alone is not effective, with local failure (LF) rates of 50% to 90%. Our goal was to improve local control (LC) by using multimodality therapy of surgery and adjuvant SRS targeting the resection cavity. Patients and Methods: We retrospectively evaluated 97 patients with brain metastases >2 cm in diameter treated with surgery and cavity SRS. Local and distant brain failure (DF) rates were analyzed with competing risk analysis, with death as a competing risk. The overall survival rate was calculated by the Kaplan-Meier product-limit method. Results: The median imaging follow-up duration for all patients was 10 months (range, 1–80 months). The 12-month cumulative incidence rates of LF, with death as a competing risk, were 9.3% (95% confidence interval [CI], 4.5%–16.1%), and the median time to LF was 6 months (range, 3–17 months). The 12-month cumulative incidence rate of DF, with death as a competing risk, was 53% (95% CI, 43%–63%). The median survival time for all patients was 15.6 months. The median survival times for recursive partitioning analysis classes 1, 2, and 3 were 33.8, 13.7, and 9.0 months, respectively (p = 0.022). On multivariate analysis, Karnofsky Performance Status (≥80 vs. <80; hazard ratio 0.54; 95% CI 0.31–0.94; p = 0.029) and maximum preoperative tumor diameter (hazard ratio 1.41; 95% CI 1.08–1.85; p = 0.013) were associated with survival. Five patients (5%) required intervention for Common Terminology Criteria for Adverse Events v4.02 grade 2 and 3 toxicity. Conclusion: Surgery and adjuvant resection cavity SRS yields excellent LC of large brain metastases. Compared with other multimodality treatment options, this approach allows patients to avoid or delay whole-brain irradiation without compromising LC.
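The Kaplan-Meier product-limit method cited above can be sketched in a few lines. The follow-up times and censoring flags below are invented; the study's competing-risk cumulative incidence analysis is not shown.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up durations (e.g. months); events: 1 = event observed
    (death), 0 = censored. Returns a list of (time, survival) pairs.
    """
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    at_risk, surv = len(times), 1.0
    curve = []
    for t, d in zip(times, events):
        if d:                       # survival drops only at event times
            surv *= (at_risk - 1) / at_risk
        curve.append((t, surv))     # censored subjects just leave the risk set
        at_risk -= 1
    return curve
```

For example, `kaplan_meier([1, 2, 3], [1, 1, 1])` steps down to 2/3, 1/3 and 0, while censoring a subject (`events` flag 0) leaves the curve flat at that time point.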
Topographic mapping on large-scale tidal flats with an iterative approach on the waterline method
Kang, Yanyan; Ding, Xianrong; Xu, Fan; Zhang, Changkuan; Ge, Xiaoping
2017-05-01
Tidal flats, which are both a natural ecosystem and a type of landscape, are of significant importance to ecosystem function and land resource potential. Morphologic monitoring of tidal flats has become increasingly important with respect to achieving sustainable development targets. Remote sensing is an established technique for the measurement of topography over tidal flats; of the available methods, the waterline method is particularly effective for constructing a digital elevation model (DEM) of intertidal areas. However, application of the waterline method is more limited in large-scale, shifting tidal flats areas, where the tides are not synchronized and the waterline is not a quasi-contour line. For this study, a topographical map of the intertidal regions within the Radial Sand Ridges (RSR) along the Jiangsu Coast, China, was generated using an iterative approach on the waterline method. A series of 21 multi-temporal satellite images (18 HJ-1A/B CCD and three Landsat TM/OLI) of the RSR area collected at different water levels within a five-month period (31 December 2013-28 May 2014) was used to extract waterlines based on feature extraction techniques followed by manual correction. These 'remotely-sensed waterlines' were combined with the corresponding water levels from the 'model waterlines' simulated by a hydrodynamic model with an initial generalized DEM of exposed tidal flats. Based on the 21 heighted 'remotely-sensed waterlines', a DEM was constructed using the ANUDEM interpolation method. This new DEM was then used as input to the hydrodynamic model, and a new round of water-level assignment to the waterlines was performed. A third and final output DEM was generated covering an area of approximately 1900 km² of tidal flats in the RSR. The water level simulation accuracy of the hydrodynamic model was within 0.15 m based on five real-time tide stations, and the height accuracy (root mean square error) of the final DEM was 0.182 m
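The core idea of the waterline method, heighted shorelines interpolated into a DEM, can be sketched in one dimension. The profile, water levels and shoreline positions below are all assumed for illustration; the paper's iteration, which re-simulates water levels with a hydrodynamic model and repeats the interpolation, is omitted here.

```python
import numpy as np

# 1-D toy of the waterline method: each satellite scene observed at water
# level h contributes a "waterline" (the shoreline position x where the
# flat's elevation equals h). Interpolating the heighted (x, h) pairs
# yields an elevation profile of the intertidal zone.
def true_profile(x):
    return 0.5 + 1.5 * x / 1000.0          # assumed gentle linear slope (m)

levels = np.array([0.6, 0.9, 1.2, 1.5, 1.8])   # water levels of the scenes
# invert the profile: shoreline position for each level, i.e. what
# image-based waterline extraction would deliver for this toy flat
waterline_x = (levels - 0.5) * 1000.0 / 1.5

x_grid = np.linspace(0, 1000, 101)
dem = np.interp(x_grid, waterline_x, levels)   # heighted waterlines -> DEM

rmse = np.sqrt(np.mean((dem - true_profile(x_grid)) ** 2))
```

Between the lowest and highest observed waterlines the reconstruction is exact for this linear profile; the residual error comes from the zones never exposed or never flooded during the observation window, which is why the paper needs many scenes spanning a wide range of tide levels.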
KnowLife: a versatile approach for constructing a large knowledge graph for biomedical sciences.
Ernst, Patrick; Siu, Amy; Weikum, Gerhard
2015-05-14
Biomedical knowledge bases (KBs) have become important assets in life sciences. Prior work on KB construction has three major limitations. First, most biomedical KBs are manually built and curated, and cannot keep up with the rate at which new findings are published. Second, for automatic information extraction (IE), the text genre of choice has been scientific publications, neglecting sources like health portals and online communities. Third, most prior work on IE has focused on the molecular level or chemogenomics only, like protein-protein interactions or gene-drug relationships, or solely addresses highly specific topics such as drug effects. We address these three limitations by a versatile and scalable approach to automatic KB construction. Using a small number of seed facts for distant supervision of pattern-based extraction, we harvest a huge number of facts in an automated manner without requiring any explicit training. We extend previous techniques for pattern-based IE with confidence statistics, and we combine this recall-oriented stage with logical reasoning for consistency constraint checking to achieve high precision. To our knowledge, this is the first method that uses consistency checking for biomedical relations. Our approach can be easily extended to incorporate additional relations and constraints. We ran extensive experiments not only for scientific publications, but also for encyclopedic health portals and online communities, creating different KBs based on different configurations. We assess the size and quality of each KB, in terms of number of facts and precision. The best configured KB, KnowLife, contains more than 500,000 facts at a precision of 93% for 13 relations covering genes, organs, diseases, symptoms, treatments, as well as environmental and lifestyle risk factors. KnowLife is a large knowledge base for health and life sciences, automatically constructed from different Web sources. As a unique feature, KnowLife is harvested from
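The distant-supervision stage described above, in which seed facts label sentences, the connecting phrases become patterns, and the patterns harvest new candidate facts, can be sketched as follows. The corpus, seed pair and relation are invented; KnowLife's confidence statistics and consistency reasoning sit on top of this recall-oriented step and are not shown.

```python
import re

# Toy distant supervision for pattern-based extraction.
seeds = {("influenza", "fever")}          # assumed seed (disease, symptom) fact
corpus = [
    "influenza often causes fever in adults",
    "measles often causes rash in children",
    "aspirin is unrelated to rash",
]

# 1) learn patterns: the phrase connecting a seed pair in a sentence
patterns = set()
for e1, e2 in seeds:
    for sentence in corpus:
        m = re.search(rf"{e1}\s+(.+?)\s+{e2}", sentence)
        if m:
            patterns.add(m.group(1))      # e.g. "often causes"

# 2) apply patterns to harvest new candidate facts from the whole corpus
facts = set(seeds)
for p in patterns:
    for sentence in corpus:
        m = re.search(rf"(\w+)\s+{re.escape(p)}\s+(\w+)", sentence)
        if m:
            facts.add((m.group(1), m.group(2)))
```

From the single seed, the learned pattern also extracts the unseen pair ("measles", "rash"), while the unrelated aspirin sentence contributes nothing; at scale this is exactly why a subsequent precision-oriented consistency check is needed.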
Extremes of 2d Coulomb gas: universal intermediate deviation regime
Lacroix-A-Chez-Toine, Bertrand; Grabsch, Aurélien; Majumdar, Satya N.; Schehr, Grégory
2018-01-01
In this paper, we study the extreme statistics in the complex Ginibre ensemble of N × N random matrices with complex Gaussian entries, but with no other symmetries. All the N eigenvalues are complex random variables and their joint distribution can be interpreted as a 2d Coulomb gas with a logarithmic repulsion between any pair of particles and in presence of a confining harmonic potential v(r) ∝ r². We study the statistics of the eigenvalue with the largest modulus r_max in the complex plane. The typical and large fluctuations of r_max around its mean had been studied before, and they match smoothly to the right of the mean. However, it remained a puzzle to understand why the large and typical fluctuations to the left of the mean did not match. In this paper, we show that there is indeed an intermediate fluctuation regime that interpolates smoothly between the large and the typical fluctuations to the left of the mean. Moreover, we compute explicitly this 'intermediate deviation function' (IDF) and show that it is universal, i.e. independent of the confining potential v(r) as long as it is spherically symmetric and increases faster than ln r² for large r with an unbounded support. If the confining potential v(r) has a finite support, i.e. becomes infinite beyond a finite radius, we show via explicit computation that the corresponding IDF is different. Interestingly, in the borderline case where the confining potential grows very slowly as v(r) ∼ ln r² for r ≫ 1 with an unbounded support, the intermediate regime disappears and there is a smooth matching between the central part and the left large deviation regime.
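The setting can be checked numerically: sampling complex Ginibre matrices and computing the largest eigenvalue modulus r_max shows its concentration at the edge of the circular-law disk (with entries of variance 1/N the eigenvalue density fills the unit disk, so r_max approaches 1 for large N, with small edge fluctuations).

```python
import numpy as np

# Monte Carlo estimate of r_max for the complex Ginibre ensemble.
rng = np.random.default_rng(0)
N, trials = 200, 20
r_max = []
for _ in range(trials):
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    G /= np.sqrt(2 * N)                      # entry variance 1/N
    r_max.append(np.abs(np.linalg.eigvals(G)).max())
mean_rmax = float(np.mean(r_max))            # concentrates slightly above 1
```

Resolving the intermediate deviation regime discussed in the paper would require much larger N and many more samples than this sketch; the simulation only illustrates the typical-fluctuation scale at the spectral edge.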
Equations-of-motion approach to a quantum theory of large-amplitude collective motion
International Nuclear Information System (INIS)
Klein, A.
1984-01-01
The equations-of-motion approach to large-amplitude collective motion is implemented both for systems of coupled bosons, also studied in a previous paper, and for systems of coupled fermions. For the fermion case, the underlying formulation is that provided by the generalized Hartree-Fock approximation (or generalized density matrix method). To obtain results valid in the semi-classical limit, as in most previous work, we compute the Wigner transform of quantum matrices in the representation in which collective coordinates are diagonal and keep only the leading contributions. Higher-order contributions can be retained, however, and, in any case, there is no ambiguity of requantization. The semi-classical limit is seen to comprise the dynamics of time-dependent Hartree-Fock theory (TDHF) and a classical canonicity condition. By utilizing a well-known parametrization of the manifold of Slater determinants in terms of classical canonical variables, we are able to derive and understand the equations of the adiabatic limit in full parallelism with the boson case. As in the previous paper, we can thus show: (i) to zero and first order in the adiabatic limit the physics is contained in Villars' equations; (ii) to second order there is consistency and no new conditions. The structure of the solution space (discussed thoroughly in the previous paper) is summarized. A discussion of associated variational principles is given. A form of the theory equivalent to self-consistent cranking is described. A method of solution is illustrated by working out several elementary examples. The relationship to previous work, especially that of Zelevinsky and of Marumori and coworkers, is discussed briefly. Three appendices deal respectively with the equations-of-motion method, with useful properties of Slater determinants, and with some technical details associated with the fermion equations of motion. (orig.)
Qualitative Variation in Approaches to University Teaching and Learning in Large First-Year Classes
Prosser, Michael; Trigwell, Keith
2014-01-01
Research on teaching from a student learning perspective has identified two qualitatively different approaches to university teaching. They are an information transmission and teacher-focused approach, and a conceptual change and student-focused approach. The fundamental difference being in the former the intention is to transfer information to…
Stability and Control of Large-Scale Dynamical Systems A Vector Dissipative Systems Approach
Haddad, Wassim M
2011-01-01
Modern complex large-scale dynamical systems exist in virtually every aspect of science and engineering, and are associated with a wide variety of physical, technological, environmental, and social phenomena, including aerospace, power, communications, and network systems, to name just a few. This book develops a general stability analysis and control design framework for nonlinear large-scale interconnected dynamical systems, and presents the most complete treatment on vector Lyapunov function methods, vector dissipativity theory, and decentralized control architectures. Large-scale dynami
A Large Group Decision Making Approach Based on TOPSIS Framework with Unknown Weights Information
Li Yupeng; Lian Xiaozhen; Lu Cheng; Wang Zhaotong
2017-01-01
Large group decision making considering multiple attributes is imperative in many decision areas. The weights of the decision makers (DMs) are difficult to obtain when the number of DMs is large. To cope with this issue, an integrated multiple-attribute large group decision making framework is proposed in this article. The fuzziness and hesitation of the linguistic decision variables are described by interval-valued intuitionistic fuzzy sets. The weights of the DMs are optimized by constructing a...
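The underlying TOPSIS ranking framework can be sketched in its plain (crisp) form. The decision matrix and attribute weights below are invented; the paper's interval-valued intuitionistic fuzzy extension and the optimization of DM weights are not shown.

```python
import numpy as np

# Crisp TOPSIS: rank alternatives by relative closeness to the ideal solution.
X = np.array([[7., 9., 9.],      # 4 alternatives x 3 benefit attributes
              [8., 7., 8.],
              [9., 6., 8.],
              [6., 7., 8.]])
w = np.array([0.4, 0.3, 0.3])    # assumed attribute weights (sum to 1)

# 1) vector-normalize each column, 2) apply attribute weights
V = w * X / np.linalg.norm(X, axis=0)
# 3) ideal and anti-ideal solutions (all attributes treated as benefits)
v_pos, v_neg = V.max(axis=0), V.min(axis=0)
# 4) Euclidean distances to both, then relative closeness in [0, 1]
d_pos = np.linalg.norm(V - v_pos, axis=1)
d_neg = np.linalg.norm(V - v_neg, axis=1)
closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)             # best alternative first
```

In the large-group setting, each DM contributes such a matrix (in fuzzy form) and the aggregation across DMs, weighted by the optimized DM weights, happens before the closeness computation.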
Experience of Integrated Safeguards Approach for Large-scale Hot Cell Laboratory
International Nuclear Information System (INIS)
Miyaji, N.; Kawakami, Y.; Koizumi, A.; Otsuji, A.; Sasaki, K.
2010-01-01
The Japan Atomic Energy Agency (JAEA) has been operating a large-scale hot cell laboratory, the Fuels Monitoring Facility (FMF), located near the experimental fast reactor Joyo at the Oarai Research and Development Center (JNC-2 site). The FMF conducts post irradiation examinations (PIE) of fuel assemblies irradiated in Joyo. The assemblies are disassembled and non-destructive examinations, such as X-ray computed tomography tests, are carried out. Some of the fuel pins are cut into specimens and destructive examinations, such as ceramography and X-ray micro analyses, are performed. Following PIE, the tested material, in the form of a pin or segments, is shipped back to a Joyo spent fuel pond. In some cases, after reassembly of the examined irradiated fuel pins is completed, the fuel assemblies are shipped back to Joyo for further irradiation. For the IAEA to apply the integrated safeguards approach (ISA) to the FMF, a new verification system on material shipping and receiving process between Joyo and the FMF has been established by the IAEA under technical collaboration among the Japan Safeguard Office (JSGO) of MEXT, the Nuclear Material Control Center (NMCC) and the JAEA. The main concept of receipt/shipment verification under the ISA for JNC-2 site is as follows: under the IS, the FMF is treated as a Joyo-associated facility in terms of its safeguards system because it deals with the same spent fuels. Verification of the material shipping and receiving process between Joyo and the FMF can only be applied to the declared transport routes and transport casks. The verification of the nuclear material contained in the cask is performed with the method of gross defect at the time of short notice random interim inspections (RIIs) by measuring the surface neutron dose rate of the cask, filled with water to reduce radiation. The JAEA performed a series of preliminary tests with the IAEA, the JSGO and the NMCC, and confirmed from the standpoint of the operator that this
Clements, Hayley S.; Tambling, Craig J.; Hayward, Matt W.; Kerley, Graham I. H.
2014-01-01
Broad-scale models describing predator prey preferences serve as useful departure points for understanding predator-prey interactions at finer scales. Previous analyses used a subjective approach to identify prey weight preferences of the five large African carnivores, hence their accuracy is questionable. This study uses a segmented model of prey weight versus prey preference to objectively quantify the prey weight preferences of the five large African carnivores. Based on simulations of kno...
The Impact of Advanced Technologies on Treatment Deviations in Radiation Treatment Delivery
International Nuclear Information System (INIS)
Marks, Lawrence B.; Light, Kim L.; Hubbs, Jessica L.; Georgas, Debra L.; Jones, Ellen L.; Wright, Melanie C.; Willett, Christopher G.; Yin Fangfang
2007-01-01
Purpose: To assess the impact of new technologies on deviation rates in radiation therapy (RT). Methods and Materials: Treatment delivery deviations in RT were prospectively monitored during a time of technology upgrade. In January 2003, our department had three accelerators, none with 'modern' technologies (e.g., no multileaf collimators [MLC]). In 2003 to 2004, we upgraded to five new accelerators, four with MLC and associated advanced capabilities. The deviation rates among patients treated on 'high-technology' versus 'low-technology' machines (defined as those with vs. without MLC) were compared over time using the two-tailed Fisher's exact test. Results: In 2003, there was no significant difference between the deviation rates in the 'high-technology' versus 'low-technology' groups (0.16% vs. 0.11%, p = 0.45). In 2005 to 2006, the deviation rate for the 'high-technology' group was lower than for the 'low-technology' group (0.083% vs. 0.21%, p = 0.009). This difference was caused by a decline in deviations on the 'high-technology' machines over time (p = 0.053), as well as an unexpected trend toward an increase in deviations over time on the 'low-technology' machines (p = 0.15). Conclusions: Advances in RT delivery systems appear to reduce the rate of treatment deviations. Deviation rates on 'high-technology' machines with MLC declined over time, suggesting a learning curve after the introduction of new technologies. Associated with the adoption of 'high-technology' was an unexpected increase in the deviation rate with 'low-technology' approaches, which may reflect an over-reliance on tools inherent to 'high-technology' machines. With the introduction of new technologies, continued diligence is needed to ensure that staff remain proficient with 'low-technology' approaches.
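The group comparison above relies on the two-tailed Fisher's exact test. As a sketch of that computation, the following stand-alone Python function sums hypergeometric probabilities over all 2x2 tables sharing the observed margins; the patient counts in the example are hypothetical stand-ins, since the abstract reports only rates and p-values.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables (with the same margins) that are
    no more likely than the observed one -- the usual two-tailed convention.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # Hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    # Tiny tolerance guards against float round-off in the comparison
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: deviation vs. deviation-free fractions delivered on
# 'high-technology' and 'low-technology' machines (not the study's data)
p = fisher_exact_two_sided(5, 5995, 13, 5987)
```

For real analyses, `scipy.stats.fisher_exact` provides the same test with better numerics; the stdlib version above is only meant to make the mechanics explicit.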
Approach for growth of high-quality and large protein crystals
Energy Technology Data Exchange (ETDEWEB)
Matsumura, Hiroyoshi, E-mail: matsumura@chem.eng.osaka-u.ac.jp [Graduate School of Engineering, Osaka University, Suita, Osaka 565-0871 (Japan); JST (Japan); SOSHO Inc., Osaka 541-0053 (Japan); Sugiyama, Shigeru; Hirose, Mika; Kakinouchi, Keisuke; Maruyama, Mihoko; Murai, Ryota [Graduate School of Engineering, Osaka University, Suita, Osaka 565-0871 (Japan); JST (Japan); Adachi, Hiroaki; Takano, Kazufumi [Graduate School of Engineering, Osaka University, Suita, Osaka 565-0871 (Japan); JST (Japan); SOSHO Inc., Osaka 541-0053 (Japan); Murakami, Satoshi [JST (Japan); SOSHO Inc., Osaka 541-0053 (Japan); Graduate School of Bioscience and Biotechnology, Tokyo Institute of Technology, Nagatsuta, Midori-ku, Yokohama 226-8501 (Japan); Mori, Yusuke; Inoue, Tsuyoshi [Graduate School of Engineering, Osaka University, Suita, Osaka 565-0871 (Japan); JST (Japan); SOSHO Inc., Osaka 541-0053 (Japan)
2011-01-01
Three crystallization methods for growing large high-quality protein crystals, i.e. crystallization in the presence of a semi-solid agarose gel, top-seeded solution growth (TSSG) and a large-scale hanging-drop method, have previously been presented. In this study the effectiveness of crystallization in the presence of a semi-solid agarose gel has been further evaluated by crystallizing additional proteins in the presence of 2.0% (w/v) agarose gel, resulting in complete gelification with high mechanical strength. In TSSG the seed crystals are hung by a seed holder protruding from the top of the growth vessel to prevent polycrystallization. In the large-scale hanging-drop method, a cut pipette tip is used to maintain large-scale droplets consisting of protein–precipitant solution. Here a novel crystallization method that combines TSSG and the large-scale hanging-drop method is reported. A large single crystal of lysozyme was obtained by this method.
A top-down approach to construct execution views of a large software-intensive system
Callo Arias, Trosky B.; America, Pierre; Avgeriou, Paris
This paper presents an approach to construct execution views, which are views that describe what the software of a software-intensive system does at runtime and how it does it. The approach represents an architecture reconstruction solution based on a metamodel, a set of viewpoints, and a dynamic
Directory of Open Access Journals (Sweden)
Zhi-Feng Yao
2016-01-01
The turbulent flow in a centrifugal pump impeller is bounded by complex surfaces, including blades, a hub and a shroud. The primary challenge of the flow simulation arises from the generation of a boundary layer between the surface of the impeller and the moving fluid. The principal objective is to evaluate the near-wall solution approaches that are typically used to deal with the flow in the boundary layer for the large-eddy simulation (LES) of a centrifugal pump impeller. Three near-wall solution approaches – the wall-function approach, the wall-resolved approach and the hybrid Reynolds-averaged Navier–Stokes (RANS) and LES approach – are tested. The simulation results are compared with experimental results obtained through particle imaging velocimetry (PIV) and laser Doppler velocimetry (LDV). It is found that the wall-function approach is more sparing of computational resources, while the other two approaches have the important advantage of providing highly accurate boundary layer flow prediction. The hybrid RANS/LES approach is suitable for predicting steady-flow features, such as time-averaged velocities and hydraulic losses. Although the wall-resolved approach is expensive in terms of computing resources, it exhibits a strong ability to capture small-scale vortices and predict instantaneous velocity in the near-wall region of the impeller. The wall-resolved approach is thus recommended for the transient simulation of flows in centrifugal pump impellers.
International Nuclear Information System (INIS)
Ohdachi, S.; Watanabe, K.Y.; Sakakibara, S.
2008-10-01
Through detailed optimization of the configuration, a volume-averaged beta of ∼5% has been achieved in the Large Helical Device (LHD). While the heating efficiency was the main point optimized in this approach, forming a more peaked pressure profile is another promising approach towards the high-beta regime. A higher electron density profile with a steeper pressure gradient has been formed by pellet injection. According to the MHD stability analysis, this peaked pressure profile is stable against ideal MHD modes. By both approaches, the central plasma beta β0 reaches about 10%. (author)
A new approach to ductile tearing assessment of pipelines under large-scale yielding
Energy Technology Data Exchange (ETDEWEB)
Ostby, Erling [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)]. E-mail: Erling.Obstby@sintef.no; Thaulow, Christian [Norwegian University of Science and Technology, N-7491, Trondheim (Norway); Nyhus, Bard [SINTEF Materials and Chemistry, N-7465, Trondheim (Norway)
2007-06-15
In this paper we focus on the issue of ductile tearing assessment for cases with global plasticity, relevant for example to strain-based design of pipelines. A proposal for a set of simplified strain-based driving force equations is used as a basis for calculation of ductile tearing. We compare the traditional approach using the tangency criterion to predict unstable tearing, with a new alternative approach for ductile tearing calculations. A criterion to determine the CTOD at maximum load carrying capacity in the crack ligament is proposed, and used as the failure criterion in the new approach. Compared to numerical reference simulations, the tangency criterion predicts conservative results with regard to the strain capacity. The new approach yields results in better agreement with the reference numerical simulations.
Large-scale identification of polymorphic microsatellites using an in silico approach
Tang, J.; Baldwin, S.J.; Jacobs, J.M.E.; Linden, van der C.G.; Voorrips, R.E.; Leunissen, J.A.M.; Eck, van H.J.; Vosman, B.
2008-01-01
Background - Simple Sequence Repeat (SSR) or microsatellite markers are valuable for genetic research. Experimental methods to develop SSR markers are laborious, time consuming and expensive. In silico approaches have become a practicable and relatively inexpensive alternative during the last
Sims, Benjamin H.; Sinitsyn, Nikolai; Eidenbenz, Stephan J.
2014-01-01
This paper presents findings from a study of the email network of a large scientific research organization, focusing on methods for visualizing and modeling organizational hierarchies within large, complex network datasets. In the first part of the paper, we find that visualization and interpretation of complex organizational network data is facilitated by integration of network data with information on formal organizational divisions and levels. By aggregating and visualizing email traffic b...
3D asthenopia in horizontal deviation.
Kim, Seung-Hyun; Suh, Young-Woo; Yun, Cheol-Min; Yoo, Eun-Joo; Yeom, Ji-Hyun; Cho, Yoonae A
2013-05-01
This study was conducted to investigate asthenopic symptoms in patients with exotropia and esotropia while watching stereoscopic 3D (S3D) television (TV). A total of 77 subjects more than 9 years of age were enrolled and divided into three groups: 34 patients with exodeviation (Exo group), 11 patients with esodeviation (Eso group) and 32 volunteers with normal binocular vision (control group). The S3D images were shown to all subjects on an S3D high-definition TV for a period of 20 min. Best corrected visual acuity, refractive errors, angle of strabismus, stereopsis and history of strabismus surgery were evaluated. After watching S3D TV for 20 min, a survey of subjective symptoms was conducted with a questionnaire to evaluate the degree of S3D perception and asthenopic symptoms such as headache, dizziness and ocular fatigue while watching 3D TV. The mean amounts of deviation in the Exo and Eso groups were 11.2 PD and 7.73 PD, respectively. Mean stereoacuity was 102.7 arc sec in the Exo group and 1389.1 arc sec in the Eso group; in the control group it was 41.9 arc sec. Twenty-nine patients in the Exo group showed excellent stereopsis (≤60 arc sec at near), but all 11 subjects of the Eso group showed 140 arc sec or worse and showed poorer 3D perception than the Exo and control groups (p Kruskal-Wallis test). The Exo group reported more eye fatigue than the Eso and control groups (p Kruskal-Wallis test). However, ocular fatigue scores in Exo-group patients who had undergone corrective surgery were lower than in those who had not (p Kruskal-Wallis test), and the amount of exodeviation was not correlated with the asthenopic symptoms (dizziness, r = 0.034, p = 0.33; headache, r = 0.320, p = 0.119; eye fatigue, r = 0.135, p = 0.519; Spearman rank correlation test). Symptoms of 3D asthenopia were related to the presence of exodeviation but not to esodeviation. This may
Deviation from the mean in teaching uncertainties
Budini, N.; Giorgi, S.; Sarmiento, L. M.; Cámara, C.; Carreri, R.; Gómez Carrillo, S. C.
2017-07-01
In this work we present two simple and interactive web-based activities for introducing students to the concept of uncertainty in measurements. These activities are based on the real-time construction of histograms from students' measurements and their subsequent analysis through an active and dynamic approach.
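The activity described (building a histogram from repeated measurements and extracting a best estimate with its uncertainty) can be sketched as follows; the simulated readings, true value and bin width are assumptions for illustration, not the authors' data.

```python
import random
import statistics

# Simulated repeated measurements of a length (true value 12.50 cm,
# Gaussian reading error 0.05 cm) -- stand-ins for the students' data
random.seed(1)
data = [random.gauss(12.50, 0.05) for _ in range(200)]

mean = statistics.fmean(data)
s = statistics.stdev(data)            # spread of individual readings
u_mean = s / len(data) ** 0.5         # standard uncertainty of the mean

# Histogram: count readings in bins of width 0.02 cm
width = 0.02
bins = {}
for x in data:
    b = int((x - 12.30) / width)      # bin index from an arbitrary origin
    bins[b] = bins.get(b, 0) + 1

print(f"mean = {mean:.3f} cm, s = {s:.3f} cm, u(mean) = {u_mean:.4f} cm")
```

The key teaching point mirrors the abstract: the histogram's spread `s` characterizes a single reading, while the uncertainty of the mean shrinks as the square root of the number of measurements.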
9 CFR 318.308 - Deviations in processing.
2010-01-01
...) Deviations in processing (or process deviations) must be handled according to: (1)(i) A HACCP plan for canned...) of this section. (c) [Reserved] (d) Procedures for handling process deviations where the HACCP plan... accordance with the following procedures: (a) Emergency stops. (1) When retort jams or breakdowns occur...
7 CFR 400.204 - Notification of deviation from standards.
2010-01-01
... 7 Agriculture 6 2010-01-01 2010-01-01 false Notification of deviation from standards. 400.204... Contract-Standards for Approval § 400.204 Notification of deviation from standards. A Contractor shall advise the Corporation immediately if the Contractor deviates from the requirements of these standards...
A Visual Model for the Variance and Standard Deviation
Orris, J. B.
2011-01-01
This paper shows how the variance and standard deviation can be represented graphically by looking at each squared deviation as a graphical object--in particular, as a square. A series of displays show how the standard deviation is the size of the average square.
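The visual model can be mirrored numerically: each data point contributes a square whose area is its squared deviation, the variance is the average area, and the standard deviation is the side of that average square. A minimal sketch (the data set is an arbitrary textbook example, not taken from the paper):

```python
def variance_as_squares(data):
    """Population variance seen as the mean area of the deviation squares."""
    m = sum(data) / len(data)
    areas = [(x - m) ** 2 for x in data]   # one square (area) per data point
    var = sum(areas) / len(areas)          # average square = variance
    sd = var ** 0.5                        # side length of the average square
    return m, areas, var, sd

m, areas, var, sd = variance_as_squares([2, 4, 4, 4, 5, 5, 7, 9])
# → m = 5.0, var = 4.0, sd = 2.0 for this classic example
```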
21 CFR 330.11 - NDA deviations from applicable monograph.
2010-04-01
... 21 Food and Drugs 5 2010-04-01 2010-04-01 false NDA deviations from applicable monograph. 330.11... EFFECTIVE AND NOT MISBRANDED Administrative Procedures § 330.11 NDA deviations from applicable monograph. A new drug application requesting approval of an OTC drug deviating in any respect from a monograph that...
41 CFR 109-1.110-50 - Deviation procedures.
2010-07-01
... best interest of the Government; (3) If applicable, the name of the contractor and identification of... background information which will contribute to a full understanding of the desired deviation. (b)(1... authorized to grant deviations to the DOE-PMR. (d) Requests for deviations from the FPMR will be coordinated...
Xu, Min; Chai, Xiaoqi; Muthakana, Hariank; Liang, Xiaodan; Yang, Ge; Zeev-Ben-Mordehai, Tzviya; Xing, Eric P
2017-07-15
Cellular Electron CryoTomography (CECT) enables 3D visualization of cellular organization in a near-native state and at sub-molecular resolution, making it a powerful tool for analyzing the structures of macromolecular complexes and their spatial organization inside single cells. However, the high degree of structural complexity together with practical imaging limitations makes the systematic de novo discovery of structures within cells challenging. It would likely require averaging and classifying millions of subtomograms potentially containing hundreds of highly heterogeneous structural classes. Although it is no longer difficult to acquire CECT data containing such numbers of subtomograms due to advances in data-acquisition automation, existing computational approaches have very limited scalability or discrimination ability, making them incapable of processing such volumes of data. To complement existing approaches, in this article we propose a new approach for subdividing subtomograms into smaller but relatively homogeneous subsets. The structures in these subsets can then be separately recovered using existing computation-intensive methods. Our approach is based on supervised structural feature extraction using deep learning, in combination with unsupervised clustering and reference-free classification. Our experiments show that, compared with existing unsupervised rotation-invariant feature and pose-normalization based approaches, our new approach achieves significant improvements in both discrimination ability and scalability. More importantly, our new approach is able to discover new structural classes and recover structures that do not exist in the training data. Source code freely available at http://www.cs.cmu.edu/∼mxu1/software . Supplementary data are available at Bioinformatics online.
Deviations from Wick's theorem in the canonical ensemble
Schönhammer, K.
2017-07-01
Wick's theorem for the expectation values of products of field operators for a system of noninteracting fermions or bosons plays an important role in the perturbative approach to the quantum many-body problem. A finite-temperature version holds in the framework of the grand canonical ensemble, but not for the canonical ensemble appropriate for systems with fixed particle number such as ultracold quantum gases in optical lattices. Here we present formulas for expectation values of products of field operators in the canonical ensemble using a method in the spirit of Gaudin's proof of Wick's theorem for the grand canonical case. The deviations from Wick's theorem are examined quantitatively for two simple models of noninteracting fermions.
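The canonical-ensemble deviation from Wick's theorem can be seen in a tiny exact calculation: for noninteracting fermions at fixed particle number, <n_i n_j> need not factorize into <n_i><n_j> (the grand-canonical Wick prediction in the energy eigenbasis, where off-diagonal contractions vanish). The sketch below enumerates Fock states directly; the mode energies and temperature are arbitrary choices, not values from the paper.

```python
from itertools import product
from math import exp

def canonical_averages(energies, N, beta):
    """Exact <n_i> and <n_i n_j> for noninteracting fermions at fixed N."""
    M = len(energies)
    # All occupation patterns of M modes with exactly N particles
    states = [s for s in product((0, 1), repeat=M) if sum(s) == N]
    weights = [exp(-beta * sum(n * e for n, e in zip(s, energies)))
               for s in states]
    Z = sum(weights)  # canonical partition function
    n = [sum(w * s[i] for s, w in zip(states, weights)) / Z for i in range(M)]
    nn = [[sum(w * s[i] * s[j] for s, w in zip(states, weights)) / Z
           for j in range(M)] for i in range(M)]
    return n, nn

# Two modes, one particle: <n1 n2> must vanish exactly, yet the Wick
# factorization would predict <n1><n2> > 0 -- a finite deviation
n, nn = canonical_averages([0.0, 1.0], N=1, beta=1.0)
deviation = nn[0][1] - n[0] * n[1]
```

Because the particle number constraint correlates the mode occupations, the deviation here is strictly negative; it is suppressed as the system size grows, consistent with the grand-canonical result being recovered in the thermodynamic limit.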
Stationary deviations from quasineutrality in plasma dynamics
International Nuclear Information System (INIS)
Sholin, G.V.; Trushin, S.A.
1985-01-01
The general assumption of quasineutrality of plasmas is broken in some cases. A self-consistent method is presented to solve the nonlinear differential equations of two-fluid hydrodynamics. The method is based on the theory of singularly perturbed differential equations of A.N. Tikhonov. The case of a perpendicular magneto-acoustic wave of large amplitude is described. The rearrangement of the charges is related to the instability of a root of the degenerate system. (D.Gy.)
Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.
Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali
2015-01-01
Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.
An approach to the damping of local modes of oscillations resulting from large hydraulic transients
Energy Technology Data Exchange (ETDEWEB)
Dobrijevic, D.M.; Jankovic, M.V.
1999-09-01
A new method of damping of local modes of oscillations under large disturbance is presented in this paper. The digital governor controller is used. Controller operates in real time to improve the generating unit transients through the guide vane position and the runner blade position. The developed digital governor controller, whose control signals are adjusted using the on-line measurements, offers better damping effects for the generator oscillations under large disturbances than the conventional controller. Digital simulations of hydroelectric power plant equipped with low-head Kaplan turbine are performed and the comparisons between the digital governor control and the conventional governor control are presented. Simulation results show that the new controller offers better performances, than the conventional controller, when the system is subjected to large disturbances.
Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach
Shimjith, S R; Bandyopadhyay, B
2013-01-01
Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady-state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed are of prohibitively large order, non-linear and of complex structure, not readily amenable to control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller-order model in standard state-space form, thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting the multi-time-scale property,...
Gauging the ungauged basin: a top-down approach in a large semiarid watershed in China
Directory of Open Access Journals (Sweden)
F. K. Barthold
2008-06-01
A major research challenge in ungauged basins is to quickly assess the dominant hydrological processes of watersheds. In this paper we present a top-down approach from first field reconnaissance to perceptual model development, model conceptualization, evaluation, rejection and eventually, to a more substantial field campaign to build upon the initial modeling. This approach led us from an initial state where very little was known about catchment behavior towards a more complete view of catchment hydrological processes, including the preliminary identification of water sources and an assessment of the effectiveness of our sampling design.
D'Angelo, C.A.; Giuffrida, C.; Abramo, G.
2011-01-01
National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because
Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach
Hu, L.; Chen, Y.; Scanlon, W.G.
2011-01-01
The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that
An Active-Learning Approach to Fostering Understanding of Research Methods in Large Classes
LaCosse, Jennifer; Ainsworth, Sarah E.; Shepherd, Melissa A.; Ent, Michael; Klein, Kelly M.; Holland-Carter, Lauren A.; Moss, Justin H.; Licht, Mark; Licht, Barbara
2017-01-01
The current investigation tested the effectiveness of an online student research project designed to supplement traditional methods (e.g., lectures, discussions, and assigned readings) of teaching research methods in a large-enrollment Introduction to Psychology course. Over the course of the semester, students completed seven assignments, each…
Sturges, Diana; Maurer, Trent W.; Cole, Oladipo
2009-01-01
This study investigated the effectiveness of role play in a large undergraduate science class. The targeted population consisted of 298 students enrolled in 2 sections of an undergraduate Human Anatomy and Physiology course taught by the same instructor. The section engaged in the role-play activity served as the study group, whereas the section…
A Logically Centralized Approach for Control and Management of Large Computer Networks
Iqbal, Hammad A.
2012-01-01
Management of large enterprise and Internet service provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these…
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
Meyer, Arnd
2009-01-01
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation
International Nuclear Information System (INIS)
Meyer, Arnd
2010-01-01
A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.
DEFF Research Database (Denmark)
Lamarine, Marc; Hager, Jörg; Saris, Wim H M
2018-01-01
the EuroFIR resource. Two approaches were tested: the first was based solely on food name similarity (fuzzy matching). The second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using their original names and also an English… …not lead to any improvements compared to the fuzzy matching. However, it could increase substantially the recall rate for food items without any clear equivalent in the FCTs (+7 and +20% when mapping items using their original or English-translated names). Our approaches have been implemented as R packages… …and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs. We demonstrate that both high precision and recall can be achieved. Our solutions can be used with any FCT and do not require any programming background…
Matrix shaped pulsed laser deposition: New approach to large area and homogeneous deposition
Energy Technology Data Exchange (ETDEWEB)
Akkan, C.K.; May, A. [INM – Leibniz Institute for New Materials, CVD/Biosurfaces Group, Campus D2 2, 66123 Saarbrücken (Germany); Hammadeh, M. [Department for Obstetrics, Gynecology and Reproductive Medicine, IVF Laboratory, Saarland University Medical Center and Faculty of Medicine, Building 9, 66421 Homburg, Saar (Germany); Abdul-Khaliq, H. [Clinic for Pediatric Cardiology, Saarland University Medical Center and Faculty of Medicine, Building 9, 66421 Homburg, Saar (Germany); Aktas, O.C., E-mail: cenk.aktas@inm-gmbh.de [INM – Leibniz Institute for New Materials, CVD/Biosurfaces Group, Campus D2 2, 66123 Saarbrücken (Germany)
2014-05-01
Pulsed laser deposition (PLD) is one of the well-established physical vapor deposition methods used for the synthesis of ultra-thin layers. PLD is especially suitable for the preparation of thin films of complex alloys and ceramics where conservation of the stoichiometry is critical. Despite the several advantages of PLD, inhomogeneity in thickness limits its use in some applications. There are several approaches, such as rotation of the substrate or scanning of the laser beam over the target, to achieve homogeneous layers. On the other hand, such movement adds further complexity to the process parameters. Here we present a new approach, which we call Matrix-Shaped PLD, to control the thickness and homogeneity of deposited layers precisely. This new approach is based on shaping the incoming laser beam with a microlens array and a Fourier lens. The beam is split into a multi-beam array of much smaller beams over the target, which leads to homogeneous plasma formation. The uniform intensity distribution over the target yields a very uniform deposit on the substrate. This approach is used to deposit carbide and oxide thin films for biomedical applications. As a case study, the coating of a stent, which has a complex geometry, is presented briefly.
Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms
Hasanov, Khalid; Quintin, Jean-Noël; Lastovetsky, Alexey
2014-01-01
-scale parallelism in mind. Indeed, while in the 1990s a system with a few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel
An approach to large scale identification of non-obvious structural similarities between proteins
Cherkasov, Artem; Jones, Steven JM
2004-01-01
Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence. PMID:15147578
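The core step described, correlating the threading scores of two primary sequences against a library of standard folds, amounts to computing a correlation coefficient between two score vectors. A minimal sketch using Pearson correlation; the score vectors shown are hypothetical, not values from the study.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length score vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical threading scores of two sequences against the same fold library:
# a high correlation suggests the sequences prefer the same folds, hinting at
# structural similarity even when sequence identity is low
scores_a = [12.1, 3.4, 8.7, 1.2, 15.3, 2.2]
scores_b = [11.8, 2.9, 9.1, 1.5, 14.7, 2.8]
r = pearson(scores_a, scores_b)
```

In practice the sensitivity/specificity figures quoted in the abstract would come from thresholding such a correlation over many protein pairs with known structures.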
An approach to large scale identification of non-obvious structural similarities between proteins
Directory of Open Access Journals (Sweden)
Cherkasov Artem
2004-05-01
Background A new sequence independent bioinformatics approach allowing genome-wide search for proteins with similar three dimensional structures has been developed. By utilizing the numerical output of the sequence threading it establishes putative non-obvious structural similarities between proteins. When applied to the testing set of proteins with known three dimensional structures the developed approach was able to recognize structurally similar proteins with high accuracy. Results The method has been developed to identify pathogenic proteins with low sequence identity and high structural similarity to host analogues. Such protein structure relationships would be hypothesized to arise through convergent evolution or through ancient horizontal gene transfer events, now undetectable using current sequence alignment techniques. The pathogen proteins, which could mimic or interfere with host activities, would represent candidate virulence factors. The developed approach utilizes the numerical outputs from the sequence-structure threading. It identifies the potential structural similarity between a pair of proteins by correlating the threading scores of the corresponding two primary sequences against the library of the standard folds. This approach allowed up to 64% sensitivity and 99.9% specificity in distinguishing protein pairs with high structural similarity. Conclusion Preliminary results obtained by comparison of the genomes of Homo sapiens and several strains of Chlamydia trachomatis have demonstrated the potential usefulness of the method in the identification of bacterial proteins with known or potential roles in virulence.
Analysis of some fuel characteristics deviations and their influence over WWER-440 fuel cycle design
International Nuclear Information System (INIS)
Stoyanova, I.; Kamenov, K.
2001-01-01
The aim of this study is to estimate the influence of deviations in WWER-440 fuel assembly (FA) characteristics on fuel core design. A large number of different fresh fuel assemblies with an enrichment of 3.5 wt% are examined with respect to enrichment, initial uranium metal mass and assembly shroud thickness. The infinite multiplication factor (Kinf) of a fuel assembly has been calculated with the HELIOS spectral code for the basic assembly and for assemblies with a deviation in a single parameter. The effects of a single-parameter deviation (enrichment) and of two-parameter deviations (enrichment and wall thickness) on the neutron-physics characteristics of the core are estimated for different fuel assemblies. A relatively weak burnup dependence of Kinf is observed as a result of deviations in the fuel enrichment and in the assembly wall thickness. An assessment of the effects of single- and two-parameter FA deviations on design fuel cycle duration and the relative power peaking factor is also considered in the paper. It can be concluded that the maximum relative shortening of the fuel cycle is observed in the case of two-parameter FA deviations
Management of large complex multi-stakeholders projects: a bibliometric approach
Directory of Open Access Journals (Sweden)
Aline Sacchi Homrich
2017-06-01
Full Text Available The growing global importance of large infrastructure projects has piqued the interest of many researchers in a variety of issues related to the management of large, multi-stakeholder projects, characterized by their high complexity and intense interaction among numerous stakeholders with distinct levels of responsibility. The objective of this study is to provide an overview of the academic literature on the management of these kinds of projects, describing the main themes considered, the lines of research identified and prominent trends. Bibliometric analysis techniques were used, as well as network and content analysis. Information was retrieved from the scientific databases ISI Web of Knowledge and Scopus. The initial sample consisted of 144 papers published between 1984 and 2014 and was expanded with the references cited in these papers. The models identified in the literature converge on the following key processes: project delivery systems; risk-management models; project cost management; public-private partnership.
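The network-analysis side of such a bibliometric study can be illustrated with a minimal keyword co-occurrence count; the corpus and keywords below are invented examples, not data from the study itself:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(papers):
    """Build a weighted co-occurrence network: each edge counts how many
    papers mention both keywords."""
    edges = Counter()
    for keywords in papers:
        for a, b in combinations(sorted(set(keywords)), 2):
            edges[(a, b)] += 1
    return edges

# Toy corpus: each paper is represented by its keyword set.
corpus = [
    {"risk management", "PPP", "megaproject"},
    {"risk management", "megaproject"},
    {"PPP", "project cost"},
]
net = cooccurrence_network(corpus)
```

Clustering the resulting weighted graph is one common way to surface the thematic lines of research the abstract mentions.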
Directory of Open Access Journals (Sweden)
Artur Diaz-Carandell, MD
2014-08-01
Full Text Available Summary: The reconstruction of mandibular defects has always been of great concern, and it still represents a challenge for head-and-neck reconstructive surgeons. The mandible plays a major role in mastication, articulation, swallowing, respiration, and facial contour. Thus, when undertaking mandibular reconstruction, restoration of both function and cosmetics should be considered as the measure of success. Microsurgical reconstruction is the gold-standard method to repair a segmental mandibular defect. Reconstruction of sizeable defects often needs a large neck incision, leading to unsatisfactory cosmetic outcomes. Virtual surgical planning and stereolithographic modeling are new techniques that offer excellent results and can provide precise data for mandibular reconstruction and improve postoperative outcomes. We present a case of complete intraoral resection and reconstruction of a large ameloblastoma of the mandible.
A Multi-Level Middle-Out Cross-Zooming Approach for Large Graph Analytics
Energy Technology Data Exchange (ETDEWEB)
Wong, Pak C.; Mackey, Patrick S.; Cook, Kristin A.; Rohrer, Randall M.; Foote, Harlan P.; Whiting, Mark A.
2009-10-11
This paper presents a working graph analytics model that embraces the strengths of the traditional top-down and bottom-up approaches with a resilient crossover concept to exploit the vast middle-ground information overlooked by the two extreme analytical approaches. Our graph analytics model is developed in collaboration with researchers and users, who carefully studied the functional requirements that reflect the critical thinking and interaction pattern of a real-life intelligence analyst. To evaluate the model, we implement a system prototype, known as GreenHornet, which allows our analysts to test the theory in practice, identify the technological and usage-related gaps in the model, and then adapt the new technology in their work space. The paper describes the implementation of GreenHornet and compares its strengths and weaknesses against the other prevailing models and tools.
Authormagic – An Approach to Author Disambiguation in Large-Scale Digital Libraries
Weiler, Henning; Mele, Salvatore
2011-01-01
A collaboration of leading research centers in the field of High Energy Physics (HEP) has built INSPIRE, a novel information infrastructure, which comprises the entire corpus of about one million documents produced within the discipline, including a rich set of metadata, citation information and half a million full-text documents, and offers a unique opportunity for author disambiguation strategies. The presented approach features extended metadata comparison metrics and a three-step unsupervised graph clustering technique. The algorithm aided in identifying 200,000 individuals from 6,500,000 author signatures. Preliminary tests based on knowledge of external experts and a pilot of a crowd-sourcing system show a success rate of more than 96% within the selected test cases. The obtained author clusters serve as a recommendation for INSPIRE users to further clean the publication list in a crowd-sourced approach.
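A minimal sketch of the clustering idea, linking author signatures whose metadata similarity clears a threshold and reading off the connected components, is shown below; the similarity scores and the single-link rule are illustrative assumptions, not INSPIRE's actual three-step algorithm:

```python
def cluster_signatures(pair_similarity, signatures, threshold=0.7):
    """Greedy single-link clustering via union-find: signatures whose
    pairwise metadata similarity exceeds the threshold end up in the
    same putative-author cluster."""
    parent = {s: s for s in signatures}

    def find(x):
        # Find the cluster root, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), sim in pair_similarity.items():
        if sim >= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for s in signatures:
        clusters.setdefault(find(s), []).append(s)
    return list(clusters.values())
```

The resulting clusters play the role of the recommendations that users then confirm or correct in the crowd-sourced step.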
Jing-min CHENG; Jian-wen GU; Yong-qin KUANG; Wei-qi HE; Xue-min XING; Hai-dong HUANG; Yuan MA; Xun XIA; Tao YANG; Xiu-zhong ZHANG; Lin CHENG; Fan-jun ZENG
2011-01-01
Objective To explore the operative method and therapeutic efficacy of surgical resection of large invasive pituitary adenomas with an individualized approach under neuronavigator guidance. Methods Seventeen patients (10 males and 7 females, aged from 22 to 78 years with a mean of 39.2±9.2 years) suffering from large invasive pituitary adenomas of higher than Hardy grade IV, hospitalized from 2004 to 2009, were involved in the present study. All procedures were performed with the assistance of neuronavi...
International Nuclear Information System (INIS)
Mattson, G.E.
1983-01-01
As experiments continue to grow in size and complexity, a few technicians will no longer be able to maintain and operate the complete experiment. Specialization is becoming the norm. Subsystems are becoming very large and complex, requiring a great deal of experience and training for technicians to become qualified maintenance/operation personnel. Formal in-house and off-site programs supplement on-the-job training to fulfill the qualification criteria. This paper presents the Tandem Mirror Experiment-Upgrade (TMX-U) approach to manpower staffing, some problems encountered, possible improvements, and safety considerations for the successful operation of a large experimental facility
New approach to the readout system for a very large bismuth germanate calorimeter
International Nuclear Information System (INIS)
Sumner, R.
1982-01-01
This note presents a possible solution to the problem of data acquisition and control for a very large array of BGO crystals. The array is a total energy calorimeter, which is a part of a detector being designed for LEPC. After a brief description of the environment, we present a working definition of the calorimeter, followed by a statement of the desirable characteristics of the readout system. After a discussion of some alternatives, a complete system is described
A Study of Revenue Cost Dynamics in Large Data Centers: A Factorial Design Approach
Sampatrao, Gambhire Swati; Dey, Sudeepa Roy; Goswami, Bidisha; S, Sai Prasanna M.; Saha, Snehanshu
2016-01-01
Revenue optimization of large data centers is an open and challenging problem. The intricacy of the problem is due to the large number of parameters posing as costs or investments. This paper proposes a model to optimize the revenue in a cloud data center and analyzes the model, the revenue and the different investment or cost commitments of organizations investing in data centers. The model uses the Cobb-Douglas production function to quantify the boundaries and the most significant factors to gen...
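The Cobb-Douglas production function used by the model has the standard form Q = A · ∏ x_i^{a_i}. A minimal sketch (with invented inputs and exponents, not the paper's fitted parameters) illustrates, for example, constant returns to scale when the exponents sum to one:

```python
def cobb_douglas(inputs, exponents, scale=1.0):
    """Cobb-Douglas production/revenue function: Q = A * prod(x_i ** a_i),
    where inputs x_i might be cost factors such as power, hardware or
    labour, and exponents a_i are their output elasticities."""
    q = scale
    for x, a in zip(inputs, exponents):
        q *= x ** a
    return q
```

With exponents summing to one (e.g. [0.5, 0.5]), doubling every input doubles the output, which is the constant-returns-to-scale case.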
A Lean Approach to Improving SE Visibility in Large Operational Systems Evolution
2013-06-01
engineering activities in such instances. An initial generalization of pull concepts using a standard Kanban approach was developed. During the development... Kanban-based Scheduling System (KSS) (Turner, Lane, et al. 2012). The second phase of this research is describing an implementation of the KSS concept...software and systems engineering tasks and the required capabilities. Because Kanban concepts have been primarily used with single-level value streams
Price, C; Briggs, K; Brown, P J
1999-01-01
Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community, supports a strategic approach to evolutionary development and provides essential product differentiation to enable terminology providers and systems suppliers to focus on end-user requirements.
Map Archive Mining: Visual-Analytical Approaches to Explore Large Historical Map Collections
Directory of Open Access Journals (Sweden)
Johannes H. Uhl
2018-04-01
Full Text Available Historical maps are unique sources of retrospective geographical information. Recently, several map archives containing map series covering large spatial and temporal extents have been systematically scanned and made available to the public. The geographical information contained in such data archives makes it possible to extend geospatial analysis retrospectively beyond the era of digital cartography. However, given the large data volumes of such archives (e.g., more than 200,000 map sheets in the United States Geological Survey topographic map archive) and the low graphical quality of older, manually-produced map sheets, the process to extract geographical information from these map archives needs to be automated to the highest degree possible. To understand the potential challenges (e.g., salient map characteristics and data quality variations) in automating large-scale information extraction tasks for map archives, it is useful to efficiently assess spatio-temporal coverage, approximate map content, and spatial accuracy of georeferenced map sheets at different map scales. Such preliminary analytical steps are often neglected or ignored in the map processing literature but represent critical phases that lay the foundation for any subsequent computational processes including recognition. Exemplified for the United States Geological Survey topographic map and the Sanborn fire insurance map archives, we demonstrate how such preliminary analyses can be systematically conducted using traditional analytical and cartographic techniques, as well as visual-analytical data mining tools originating from machine learning and data science.
A Large Group Decision Making Approach Based on TOPSIS Framework with Unknown Weights Information
Directory of Open Access Journals (Sweden)
Li Yupeng
2017-01-01
Full Text Available Large group decision making considering multiple attributes is imperative in many decision areas. The weights of the decision makers (DMs) are difficult to obtain when the number of DMs is large. To cope with this issue, an integrated multiple-attribute large group decision making framework is proposed in this article. The fuzziness and hesitation of the linguistic decision variables are described by interval-valued intuitionistic fuzzy sets. The weights of the DMs are optimized by constructing a non-linear programming model, in which the original decision matrices are aggregated using the interval-valued intuitionistic fuzzy weighted average operator. By solving the non-linear programming model with MATLAB®, the weights of the DMs and the fuzzy comprehensive decision matrix are determined. Then the weights of the criteria are calculated based on information entropy theory. Finally, the TOPSIS framework is employed to establish the decision process. The divergence between interval-valued intuitionistic fuzzy numbers is calculated by interval-valued intuitionistic fuzzy cross entropy. A real-world case study is constructed to elaborate the feasibility and effectiveness of the proposed methodology.
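The final ranking step follows the standard TOPSIS pattern: normalise, weight, measure distances to the ideal and anti-ideal alternatives, and rank by relative closeness. The sketch below uses the classical crisp formulation with benefit criteria only, as a simplified stand-in for the interval-valued intuitionistic fuzzy version in the paper:

```python
import numpy as np

def topsis(matrix, weights):
    """Classical crisp TOPSIS (benefit criteria only).
    matrix: alternatives x criteria; weights: one weight per criterion.
    Returns the closeness coefficient of each alternative (higher is better)."""
    m = np.asarray(matrix, float)
    norm = m / np.sqrt((m ** 2).sum(axis=0))       # vector normalisation
    v = norm * np.asarray(weights, float)          # weighted decision matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)     # ideal / anti-ideal points
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                 # relative closeness
```

An alternative that dominates on every criterion receives closeness 1, the worst one 0; the fuzzy variant replaces the Euclidean distances with cross-entropy divergences between interval-valued intuitionistic fuzzy numbers.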
Heavy flavor at the large hadron collider in a strong coupling approach
Energy Technology Data Exchange (ETDEWEB)
He, Min [Department of Applied Physics, Nanjing University of Science and Technology, Nanjing 210094 (China); Fries, Rainer J.; Rapp, Ralf [Cyclotron Institute and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843-3366 (United States)
2014-07-30
Employing nonperturbative transport coefficients for heavy-flavor (HF) diffusion through quark–gluon plasma (QGP), hadronization and hadronic matter, we compute D- and B-meson observables in Pb+Pb (√s = 2.76 TeV) collisions at the LHC. Elastic heavy-quark scattering in the QGP is evaluated within a thermodynamic T-matrix approach, generating resonances close to the critical temperature which are utilized for recombination into D and B mesons, followed by hadronic diffusion using effective hadronic scattering amplitudes. The transport coefficients are implemented via Fokker–Planck Langevin dynamics within hydrodynamic simulations of the bulk medium in nuclear collisions. The hydro expansion is quantitatively constrained by transverse-momentum spectra and elliptic flow of light hadrons. Our approach thus incorporates the paradigm of a strongly coupled medium in both bulk and HF dynamics throughout the thermal evolution of the system. At low and intermediate p_T, HF observables at the LHC are reasonably well accounted for, while discrepancies at high p_T are indicative of radiative mechanisms not included in our approach.
Bayu Bati, Tesfaye; Gelderblom, Helene; van Biljon, Judy
2014-01-01
The challenge of teaching programming in higher education is complicated by problems associated with large class teaching, a prevalent situation in many developing countries. This paper reports on an investigation into the use of a blended learning approach to teaching and learning of programming in a class of more than 200 students. A course and learning environment was designed by integrating constructivist learning models of Constructive Alignment, Conversational Framework and the Three-Stage Learning Model. Design science research is used for the course redesign and development of the learning environment, and action research is integrated to undertake participatory evaluation of the intervention. The action research involved the Students' Approach to Learning survey, a comparative analysis of students' performance, and qualitative data analysis of data gathered from various sources. The paper makes a theoretical contribution in presenting a design of a blended learning solution for large class teaching of programming grounded in constructivist learning theory and use of free and open source technologies.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
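The paper's analytical error expression is not reproduced here, but the underlying effect, that the precision of a profile-SD measurement improves with the number of detected photons, can be checked with a small Monte Carlo sketch (Gaussian PSF, pixel binning, no background; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def profile_sd(photons, true_sd=1.2, pixel=0.5, n_pix=21):
    """Simulate one photon-limited image of a Gaussian intensity profile
    and return the measured standard deviation of the binned profile."""
    x = rng.normal(0.0, true_sd, size=photons)          # photon positions
    edges = (np.arange(n_pix + 1) - n_pix / 2) * pixel  # pixel boundaries
    counts, _ = np.histogram(x, bins=edges)
    centers = (edges[:-1] + edges[1:]) / 2
    mean = np.average(centers, weights=counts)
    return float(np.sqrt(np.average((centers - mean) ** 2, weights=counts)))

def sd_spread(photons, repeats=200):
    """Empirical measurement error: spread of the SD estimate over repeats."""
    return float(np.std([profile_sd(photons) for _ in range(repeats)]))
```

The spread of repeated SD estimates shrinks roughly as one over the square root of the photon count, mirroring the photon-number dependence of the analytical expression.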
A compact to revitalise large-scale irrigation systems: A ‘theory of change’ approach
Directory of Open Access Journals (Sweden)
Bruce A. Lankford
2016-02-01
Full Text Available In countries with transitional economies such as those found in South Asia, large-scale irrigation systems (LSIS) with a history of public ownership account for about 115 million ha (Mha), or approximately 45% of their total area under irrigation. In terms of the global area of irrigation (320 Mha for all countries), LSIS are estimated at 130 Mha, or 40% of irrigated land. These systems can potentially deliver significant local, regional and global benefits in terms of food, water and energy security, employment, economic growth and ecosystem services. For example, primary crop production is conservatively valued at about US$355 billion. However, efforts to enhance these benefits and reform the sector have been costly and outcomes have been underwhelming and short-lived. We propose the application of a 'theory of change' (ToC) as a foundation for promoting transformational change in large-scale irrigation, centred upon a 'global irrigation compact' that promotes new forms of leadership, partnership and ownership (LPO). The compact argues that LSIS can change by switching away from the current channelling of aid finances controlled by government irrigation agencies. Instead it is for irrigators, closely partnered by private, public and NGO advisory and regulatory services, to develop strong leadership models and to find new compensatory partnerships with cities and other river basin neighbours. The paper summarises key assumptions for change in the LSIS sector, including the need to initially test this change via a handful of volunteer systems. Our other key purpose is to demonstrate a ToC template by which large-scale irrigation policy can be better elaborated and discussed.
Energy Technology Data Exchange (ETDEWEB)
Liu, Yonghao; Chadha, Arvinder; Zhao, Deyin; Shuai, Yichen; Menon, Laxmy; Yang, Hongjun; Zhou, Weidong, E-mail: wzhou@uta.edu [Nanophotonics Lab, Department of Electrical Engineering, University of Texas at Arlington, Arlington, Texas 76019 (United States); Piper, Jessica R.; Fan, Shanhui [Ginzton Laboratory, Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Jia, Yichen; Xia, Fengnian [Department of Electrical Engineering, Yale University, New Haven, Connecticut 06520 (United States); Ma, Zhenqiang [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States)
2014-11-03
We demonstrate experimentally close-to-total absorption in monolayer graphene based on critical coupling with guided resonances in transfer-printed photonic crystal Fano resonance filters at near infrared. Measured peak absorptions of 35% and 85% were obtained from cavity-coupled monolayer graphene for structures without and with back reflectors, respectively. These measured values agree very well with the theoretical values predicted with the coupled-mode-theory-based critical coupling design. Such strong light-matter interactions can lead to extremely compact and high-performance photonic devices based on large-area monolayer graphene and other two-dimensional materials.
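The critical-coupling argument from temporal coupled-mode theory is compact: for a resonance with radiative decay rate γ_r and absorptive decay rate γ_a, the on-resonance absorption of a one-port system (with a back reflector) is A = 4γ_rγ_a/(γ_r+γ_a)², which reaches unity at critical coupling γ_r = γ_a, while a symmetric two-port (no reflector) is capped at 50%. This is consistent with the 85% vs. 35% trend reported above. A minimal sketch:

```python
def peak_absorption(gamma_rad, gamma_abs, one_port=True):
    """On-resonance absorption from temporal coupled-mode theory.
    One-port (back reflector): A = 4*g_r*g_a / (g_r + g_a)**2, reaching 1
    at critical coupling g_r == g_a.  Symmetric two-port (no reflector):
    the same expression halved, so at most 0.5."""
    a = 4 * gamma_rad * gamma_abs / (gamma_rad + gamma_abs) ** 2
    return a if one_port else a / 2
```

Only the ratio of the two decay rates matters, which is why the design freedom of the Fano filter (tuning γ_r toward the fixed graphene absorption rate γ_a) enables near-total absorption.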
Sub-bottom profiling for large-scale maritime archaeological survey An experience-based approach
DEFF Research Database (Denmark)
Grøn, Ole; Boldreel, Lars Ole
2013-01-01
and wrecks partially or wholly embedded in the sea-floor sediments demands the application of highresolution sub-bottom profilers. This paper presents a strategy for the cost-effective large-scale mapping of unknown sedimentembedded sites such as submerged Stone Age settlements or wrecks, based on sub...... of the submerged cultural heritage. Elements such as archaeological wreck sites exposed on the sea floor are mapped using side-scan and multi-beam techniques. These can also provide information on bathymetric patterns representing potential Stone Age settlements, whereas the detection of such archaeological sites...
The response to prism deviations in human infants.
Riddell, P M; Horwood, A M; Houston, S M; Turner, J E
1999-09-23
Previous research has suggested that infants are unable to make a corrective eye movement in response to a small base-out prism placed in front of one eye before 14-16 weeks [1]. Three hypotheses have been proposed to explain this early inability, and each of these makes different predictions for the time of onset of a response to a larger prism. The first proposes that infants have a 'degraded sensory capacity' and so require a larger retinal disparity (difference in the position of the image on the retina of each eye) to stimulate disparity detectors [2]. This predicts that infants might respond at an earlier age than previously reported [1] when tested using a larger prism. The second hypothesis proposes that infants learn to respond to larger retinal disparities through practice with small disparities [3]. According to this theory, using a larger prism will not result in developmentally earlier responses, and may even delay the response. The third hypothesis proposes that the ability to respond to prismatic deviation depends on maturational factors indicated by the onset of stereopsis (the ability to detect depth in an image on the basis of retinal disparity cues only) [4] [5], predicting that the size of the prism is irrelevant. To differentiate between these hypotheses, we tested 192 infants ranging from 2 to 52 weeks of age using a larger prism. Results showed that 63% of infants of 5-8 weeks of age produced a corrective eye movement in response to placement of a prism in front of the eye when in the dark. Both the percentage of infants who produced a response, and the speed of the response, increased with age. These results suggest that infants can make corrective eye movements in response to large prismatic deviations before 14-16 weeks of age. This, in combination with other recent results [6], discounts previous hypotheses.
The gait standard deviation, a single measure of kinematic variability.
Sangeux, Morgan; Passmore, Elyse; Graham, H Kerr; Tirosh, Oren
2016-05-01
Measurement of gait kinematic variability provides relevant clinical information in certain conditions affecting the neuromotor control of movement. In this article, we present a measure of overall gait kinematic variability, GaitSD, based on the combination of waveform standard deviations. The waveform standard deviation is the common numerator in established indices of variability such as Kadaba's coefficient of multiple correlation or Winter's waveform coefficient of variation. Gait data were collected on typically developing children aged 6-17 years. A large number of strides was captured for each child: on average 45 (SD: 11) for kinematics and 19 (SD: 5) for kinetics. We used a bootstrap procedure to determine the precision of GaitSD as a function of the number of strides processed. We compared the within-subject (stride-to-stride) variability with the between-subject variability of the normative pattern. Finally, we investigated the correlation between age and gait kinematic, kinetic and spatio-temporal variability. In typically developing children, the relative precision of GaitSD was 10% as soon as 6 strides were captured. By comparison, spatio-temporal parameters required 30 strides to reach the same relative precision. The ratio of stride-to-stride to normative pattern variability was smaller for kinematic variables (smallest for pelvic tilt, 28%) than for kinetic and spatio-temporal variables (largest for normalised stride length, 95%). GaitSD had a strong, negative correlation with age. We show that gait consistency may stabilise only at, or after, skeletal maturity. Copyright © 2016 Elsevier B.V. All rights reserved.
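One plausible reading of the waveform-SD building block, the pointwise across-stride standard deviation of a time-normalised kinematic waveform pooled over the gait cycle by RMS, can be sketched as follows; the exact pooling across joint angles in the published GaitSD index may differ:

```python
import numpy as np

def gait_sd(strides):
    """strides: array of shape (n_strides, n_points), one kinematic angle
    time-normalised to the gait cycle.  Returns the RMS over the cycle of
    the pointwise across-stride standard deviation (a sketch of the
    waveform-SD combination, not necessarily the published formula)."""
    sd_curve = np.std(strides, axis=0, ddof=1)   # SD at each % of gait cycle
    return float(np.sqrt(np.mean(sd_curve ** 2)))
```

Perfectly repeatable strides give zero, and the bootstrap precision analysis in the abstract amounts to recomputing this quantity on resampled subsets of strides.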
Constraints on deviations from ΛCDM within Horndeski gravity
Energy Technology Data Exchange (ETDEWEB)
Bellini, Emilio; Cuesta, Antonio J. [ICCUB, University of Barcelona (IEEC-UB), Martí i Franquès 1, E08028 Barcelona (Spain); Jimenez, Raul; Verde, Licia, E-mail: emilio.bellini@icc.ub.edu, E-mail: ajcuesta@icc.ub.edu, E-mail: rauljimenez@g.harvard.edu, E-mail: liciaverde@icc.ub.edu [Institució Catalana de Recerca i Estudis Avançats (ICREA), 08010 Barcelona (Spain)
2016-02-01
Recent anomalies found in cosmological datasets, such as the low multipoles of the Cosmic Microwave Background or the low redshift amplitude and growth of clustering measured by, e.g., the abundance of galaxy clusters and redshift space distortions in galaxy surveys, have motivated explorations of models beyond standard ΛCDM. Of particular interest are models where general relativity (GR) is modified on large cosmological scales. Here we consider deviations from ΛCDM+GR within the context of Horndeski gravity, which is the most general theory of gravity with second derivatives in the equations of motion. We adopt a parametrization in which the four additional Horndeski functions of time α_i(t) are proportional to the cosmological density of dark energy Ω_DE(t). Constraints on this extended parameter space using a suite of state-of-the-art cosmological observations are presented for the first time. Although the theory is able to accommodate the low multipoles of the Cosmic Microwave Background and the low amplitude of fluctuations from redshift space distortions, we find no significant tension with ΛCDM+GR when performing a global fit to recent cosmological data, and thus there is no evidence against ΛCDM+GR from an analysis of the Bayesian evidence ratio of the modified gravity models with respect to ΛCDM, despite the extra parameters. The posterior distributions of these extra parameters return strong constraints on any possible deviations from ΛCDM+GR in the context of Horndeski gravity. We illustrate how our results can be applied to more general frameworks of modified gravity models.
Conservative approach to the acute management of a large mesenteric cyst.
Leung, Billy C; Sankey, Ruth; Fronza, Matteo; Maatouk, Mohamed
2017-09-16
Mesenteric cysts are rare, benign gastrointestinal cystic lesions, which are often non-troublesome and present as an incidental radiological finding. However, surgery is often performed in the acute setting to remove lesions that are symptomatic. This report highlights the case of a large, symptomatic mesenteric cyst managed successfully with initial conservative measures followed by planned elective surgery. A 44-year-old female presented with a four-day history of generalised abdominal pain associated with distension, fever, diarrhoea and vomiting. Computed tomography revealed a large (21.7 cm × 11.8 cm × 14 cm) mesenteric cyst within the left abdominal cavity. She was admitted and treated conservatively with intravenous fluids and antibiotics for four days, which led to complete symptom resolution. Follow-up at intervals of one and three months revealed no return of symptoms. An elective laparotomy and excision of the mesenteric cyst was then scheduled and performed safely nine months after the initial presentation. Compared to acute surgery, acute conservative management followed by planned elective resection of a symptomatic mesenteric cyst may prove safer. Withholding an immediate operation may avoid unnecessary operative risk and should be considered in patients without obstructive and peritonitic symptoms. Our case demonstrates the safe use of initial conservative management followed by planned elective surgery for a mesenteric cyst found in the acute setting that was symptomatic but neither obstructive nor causing peritonitic symptoms.
High-throughput film-densitometry: An efficient approach to generate large data sets
Energy Technology Data Exchange (ETDEWEB)
Typke, Dieter; Nordmeyer, Robert A.; Jones, Arthur; Lee, Juyoung; Avila-Sakar, Agustin; Downing, Kenneth H.; Glaeser, Robert M.
2004-07-14
A film-handling machine (robot) has been built which can, in conjunction with a commercially available film densitometer, exchange and digitize over 300 electron micrographs per day. Implementation of robotic film handling effectively eliminates the delay and tedium associated with digitizing images when data are initially recorded on photographic film. The modulation transfer function (MTF) of the commercially available densitometer is significantly worse than that of a high-end, scientific microdensitometer. Nevertheless, its signal-to-noise ratio (S/N) is excellent, allowing substantial restoration of the output to "near-to-perfect" performance. Due to the large area of the standard electron microscope film that can be digitized by the commercial densitometer (up to 10,000 × 13,680 pixels with an appropriately coded holder), automated film digitization offers a fast and inexpensive alternative to high-end CCD cameras as a means of acquiring large amounts of image data in electron microscopy.
Software development and maintenance: An approach for a large accelerator control system
International Nuclear Information System (INIS)
Casalegno, L.; Orsini, L.; Sicard, C.H.
1990-01-01
Maintenance costs presently form a large part of the total life-cycle cost of a software system. In case of large systems, while the costs of eliminating bugs, fixing analysis and design errors and introducing updates must be taken into account, the coherence of the system as a whole must be maintained while its parts are evolving independently. The need to devise and supply tools to aid programmers in housekeeping and updating has been strongly felt in the case of the LEP preinjector control system. A set of utilities has been implemented to create a safe interface between the programmers and the files containing the control software. Through this interface consistent naming schemes, common compiling and object-building procedures can be enforced, so that development and maintenance staff need not be concerned with the details of executable code generation. Procedures have been built to verify the consistency, generate maintenance diagnostics and automatically update object and executable files, taking into account multiple releases and versions. The tools and the techniques reported in this paper are of general use in the UNIX environment and have already been adopted for other projects. (orig.)
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology for the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory, which may not be representable by the analytic reference model initially. To overcome the interference between subsystems and simplify the controller design, the proposed model reference decentralized adaptive control scheme first constructs a decoupled, well-designed reference model. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on optimal analog control and the prediction-based digital redesign technique for the sampled-data large-scale coupled system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has a robust closed-loop decoupling property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of the ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
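The iterative learning component can be illustrated with the textbook first-order ILC update u_{k+1}(t) = u_k(t) + L·e_k(t), applied over repeated trials of the same finite-horizon task; the toy scalar plant below is an assumption for illustration, not the paper's decentralized multivariable scheme:

```python
import numpy as np

def ilc_track(plant_step, reference, gain=0.5, iterations=30):
    """First-order iterative learning control: over repeated trials,
    correct the whole input trajectory with the previous trial's error,
    u_{k+1}(t) = u_k(t) + L * e_k(t)."""
    u = np.zeros_like(reference)
    for _ in range(iterations):
        y = plant_step(u)        # run one trial with the current input
        e = reference - y        # trajectory error of this trial
        u = u + gain * e         # learning update
    return u, e

# Toy static plant y = 2u: the learned input converges to r / 2.
r = np.array([1.0, 2.0, 3.0])
u, e = ilc_track(lambda u: 2 * u, r)
```

Convergence requires the contraction condition |1 − L·G| < 1 for plant gain G, which is the kind of condition the paper's evolutionary search over learning gains is tuning for.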
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Argo_CUDA: Exhaustive GPU based approach for motif discovery in large DNA datasets.
Vishnevsky, Oleg V; Bocharnikov, Andrey V; Kolchanov, Nikolay A
2018-02-01
The development of chromatin immunoprecipitation sequencing (ChIP-seq) technology has revolutionized the genetic analysis of the basic mechanisms underlying transcription regulation and led to the accumulation of information about a huge number of DNA sequences. Many web services are currently available for de novo motif discovery in datasets containing information about DNA/protein binding. The enormous diversity of motifs makes finding them challenging, and researchers therefore resort to various stochastic approaches. Unfortunately, the efficiency of motif discovery programs declines dramatically as the query set size increases, so that only a fraction of the top "peak" ChIP-seq segments can be analyzed, or the area of analysis must be narrowed. Motif discovery in massive datasets thus remains a challenging issue. The Argo_CUDA (Compute Unified Device Architecture) web service is designed to process massive DNA data. It is a program for the detection of degenerate oligonucleotide motifs of fixed length written in the 15-letter IUPAC code. Argo_CUDA is a fully exhaustive approach based on high-performance GPU technologies. Compared with existing motif discovery web services, Argo_CUDA shows good prediction quality on simulated sets. The analysis of ChIP-seq sequences revealed motifs which correspond to known transcription factor binding sites.
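The 15-letter IUPAC representation mentioned above can be illustrated with a small matcher for degenerate motifs. This is only a sketch of the representation Argo_CUDA searches exhaustively; the GPU kernels themselves are not reproduced, and the motif and sequence are invented:

```python
# Each IUPAC letter stands for a set of admissible nucleotides.
IUPAC = {
    'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
    'R': 'AG', 'Y': 'CT', 'S': 'CG', 'W': 'AT',
    'K': 'GT', 'M': 'AC',
    'B': 'CGT', 'D': 'AGT', 'H': 'ACT', 'V': 'ACG',
    'N': 'ACGT',
}

def motif_hits(motif, sequence):
    """Return all start positions where the degenerate motif matches."""
    m = len(motif)
    return [i for i in range(len(sequence) - m + 1)
            if all(sequence[i + j] in IUPAC[motif[j]] for j in range(m))]

# W matches A or T, so "TATAWA" hits both "TATATA" and "TATAAA":
print(motif_hits("TATAWA", "GGTATATAGGTATAAA"))  # → [2, 10]
```

An exhaustive search over all fixed-length IUPAC words amounts to running this test for every candidate motif against every segment, which is the embarrassingly parallel workload a GPU handles well.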
Robust mode space approach for atomistic modeling of realistically large nanowire transistors
Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard
2018-01-01
Nanoelectronic transistors have reached 3D length scales in which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight-binding non-equilibrium Green's function simulations of nanowire metal-oxide-semiconductor field-effect transistors (MOSFETs) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.
Directory of Open Access Journals (Sweden)
Aliyeh Kazemi
2016-09-01
Full Text Available Construction projects have always been complex, and as this complexity grows, implementing large-scale constructions becomes harder. Hence, evaluating and understanding these complexities is critical: a correct evaluation of a project's complexity gives executives and managers a sound basis to work from. Fuzzy analytic network process (ANP) is a logical and systematic approach to defining, evaluating, and ranking that allows complex systems to be analyzed and their complexity determined. In this study, fuzzy ANP is used to determine and prioritize the indexes that drive complexity in large-scale construction projects in Iran. The results show that the socio-political, project system interdependency, and technological complexity indexes ranked as the top three. Furthermore, in a comparison of three major large-scale projects: a commercial-administrative complex, a hospital, and a skyscraper, the hospital project was evaluated as the most complex. This model is beneficial for professionals managing large-scale projects.
On the field/string theory approach to theta dependence in large N Yang-Mills theory
International Nuclear Information System (INIS)
Gabadadze, Gregory
1999-01-01
The theta dependence of the vacuum energy in large N Yang-Mills theory was studied some time ago by Witten using a duality of large N gauge theories with string theory compactified on a certain space-time. We show that within the field theory context vacuum fluctuations of the topological charge give rise to a vacuum energy consistent with the string theory computation. Furthermore, we calculate 1/N suppressed corrections to the string theory result. The reconciliation of the string and field theory approaches is based on the fact that the gauge theory instantons carry zero-brane charge in the corresponding D-brane construction of Yang-Mills theory. Given the formula for the vacuum energy, we study certain aspects of the stability of the false vacua of the model for different realizations of the initial conditions. The vacuum structure appears to be different depending on whether N is infinite or, alternatively, large but finite.
International Nuclear Information System (INIS)
Sakai, Hirotada; Ikawa, Koji
1994-01-01
A preliminary study of a safeguards approach for the chemical processing area in a large scale reprocessing plant has been carried out. In this approach, the plutonium inventory at the plutonium evaporator will not be taken; instead, containment and surveillance (C/S) measures will be applied to ensure the integrity of an area specifically defined to include the plutonium evaporator. The plutonium evaporator area consists of the evaporator itself and two accounting points, one before and one after the plutonium evaporator. For the newly defined accounting points, two alternative measurement methods, i.e., accounting vessels with high accuracy and flow meters, were examined. Conditions to ensure the integrity of the plutonium evaporator area were also examined, as well as other technical aspects associated with this approach. The results showed that an appropriate combination of NRTA and C/S measures would be essential to realize a cost-effective safeguards approach for a large scale reprocessing plant. (author)
Iino, Yoichi; Kojima, Takeji
2012-08-01
This study investigated the validity of the top-down approach of inverse dynamics analysis in fast and large rotational movements of the trunk about three orthogonal axes of the pelvis for nine male collegiate students. The maximum angles of the upper trunk relative to the pelvis were approximately 47°, 49°, 32°, and 55° for lateral bending, flexion, extension, and axial rotation, respectively, with maximum angular velocities of 209°/s, 201°/s, 145°/s, and 288°/s, respectively. The pelvic moments about the axes during the movements were determined using the top-down and bottom-up approaches of inverse dynamics and compared between the two approaches. Three body segment inertial parameter sets were estimated using anthropometric data sets (Ae et al., Biomechanism 11, 1992; De Leva, J Biomech, 1996; Dumas et al., J Biomech, 2007). The root-mean-square errors of the moments and the absolute errors of the peaks of the moments were generally smaller than 10 N·m. The results suggest that the pelvic moment in motions involving fast and large trunk movements can be determined with a certain level of validity using the top-down approach in which the trunk is modeled as two or three rigid-link segments.
Directory of Open Access Journals (Sweden)
Bhanu Pratap Soni
2016-12-01
Full Text Available This paper proposes an effective supervised learning approach for static security assessment of a large power system. The approach employs a least squares support vector machine (LS-SVM) to rank contingencies and predict the system severity level. The severity of a contingency is measured by two scalar performance indices (PIs): the line MVA performance index (PIMVA) and the voltage-reactive power performance index (PIVQ). The LS-SVM works in two steps: in Step I, both standard indices (PIMVA and PIVQ) are estimated under different operating scenarios; in Step II, contingency ranking is carried out based on the values of the PIs. The effectiveness of the proposed methodology is demonstrated on the IEEE 39-bus (New England) system. The approach can be a beneficial tool for fast and accurate security assessment and contingency analysis at the energy management center.
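Line-loading performance indices of the kind ranked above are commonly written as a penalty sum over line flows. The sketch below uses the widespread form PI = Σ_l (w_l / 2n) (S_l / S_l^max)^(2n); the paper's exact PIMVA definition, weights, and exponent may differ, and the flow numbers are invented:

```python
import numpy as np

# Generic line-MVA performance index: lines loaded beyond their MVA limits
# contribute large terms, so a higher PI marks a more severe contingency.

def pi_mva(S, S_max, w=None, n=1):
    """PI = sum_l (w_l / 2n) * (S_l / S_l_max)**(2n)."""
    S, S_max = np.asarray(S, float), np.asarray(S_max, float)
    w = np.ones_like(S) if w is None else np.asarray(w, float)
    return float(np.sum(w / (2 * n) * (S / S_max) ** (2 * n)))

# Base case vs a contingency that overloads line 2 to 120% of its limit:
base = pi_mva([40, 55, 30], [100, 100, 100])
post = pi_mva([40, 120, 30], [100, 100, 100])
```

In a ranking scheme, contingencies are sorted by such PI values; a learning model like the paper's LS-SVM is then trained to predict the PIs directly from the operating state, avoiding a full power flow per contingency.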
An Integrated Approach for Monitoring Contemporary and Recruitable Large Woody Debris
Directory of Open Access Journals (Sweden)
Jeffrey J. Richardson
2016-09-01
Full Text Available Large woody debris (LWD plays a critical structural role in riparian ecosystems, but it can be difficult and time-consuming to quantify and survey in the field. We demonstrate an automated method for quantifying LWD using aerial LiDAR and object-based image analysis techniques, as well as a manual method for quantifying LWD using image interpretation derived from LiDAR rasters and aerial four-band imagery. In addition, we employ an established method for estimating the number of individual trees within the riparian forest. These methods are compared to field data showing high accuracies for the LWD method and moderate accuracy for the individual tree method. These methods can be integrated to quantify the contemporary and recruitable LWD in a river system.
ADN-Viewer: a 3D approach for bioinformatic analyses of large DNA sequences.
Hérisson, Joan; Ferey, Nicolas; Gros, Pierre-Emmanuel; Gherbi, Rachid
2007-01-20
Most biologists work with textual DNA sequences, which are limited to a linear representation of DNA. In this paper, we address the potential offered by Virtual Reality for 3D modeling and immersive visualization of large genomic sequences. Representing the 3D structure of naked DNA allows biologists to observe and analyze genomes interactively at different levels. We developed a powerful software platform that provides a new point of view for sequence analysis: ADN-Viewer. However, a classical eukaryotic chromosome of 40 million base pairs requires about 6 Gbytes of 3D data. In order to manage these huge amounts of data in real time, we designed various scene management algorithms and immersive human-computer interaction for user-friendly data exploration. In addition, one bioinformatics study scenario is proposed.
International Nuclear Information System (INIS)
Jean Jacques, M.; Maurel, J.J.; Maillet, J.
1994-01-01
Over the years, France has built up significant experience in dismantling nuclear fuel reprocessing facilities and various types of units representative of a modern reprocessing plant. However, only small or medium scale operations have been carried out so far. To prepare for the future decommissioning of large industrial facilities such as UP1 (Marcoule) and UP2 (La Hague), new technologies must be developed to maximize waste recycling and optimize direct operations by operators, taking the integrated dose and cost aspects into account. The decommissioning and dismantling methodology comprises: a preparation phase for inventory, choice and installation of tools and arrangement of working areas; a dismantling phase with decontamination; and a final contamination control phase. Detailed descriptions of the dismantling operations of the MA Pu finishing facility (La Hague) and of the RM2 radio-metallurgical laboratory (CEA-Fontenay-aux-Roses) are given as examples. (J.S.). 3 tabs
Directory of Open Access Journals (Sweden)
R. Maugé
2008-03-01
Full Text Available A set of evolution equations is derived for the modal coefficients in a weakly nonlinear nonhydrostatic internal-tide generation problem. The equations allow for the presence of large-amplitude topography, e.g. a continental slope, which is formally assumed to have a length scale much larger than that of the internal tide. However, comparison with results from more sophisticated numerical models show that this restriction can in practice be relaxed. It is shown that a topographically induced coupling between modes occurs that is distinct from nonlinear coupling. Nonlinear effects include the generation of higher harmonics by reflection from boundaries, i.e. steeper tidal beams at frequencies that are multiples of the basic tidal frequency. With a seasonal thermocline included, the model is capable of reproducing the phenomenon of local generation of internal solitary waves by a tidal beam impinging on the seasonal thermocline.
Management of a large radicular cyst: A non-surgical endodontic approach
Directory of Open Access Journals (Sweden)
Shweta Dwivedi
2014-01-01
Full Text Available A radicular cyst arises from epithelial remnants stimulated to proliferate by an inflammatory process originating from pulpal necrosis of a non-vital tooth. Radiographically, the classical description of the lesion is a round or oval, well-circumscribed radiolucent image involving the apex of the tooth. A radicular cyst is usually sterile unless it is secondarily infected. This paper presents a case report of conservative non-surgical management of a radicular cyst associated with permanent maxillary right central incisor, right lateral incisor and right canine in a 24-year-old female patient. Root canal treatment was done together with cystic aspiration of the lesion. The lesion was periodically followed up and significant bone formation was seen at the periapical region of affected teeth and at the palate at about 9 months. Thus, nonsurgical healing of a large radicular cyst with palatal swelling provided favorable clinical and radiographic response.
Cabaret, S; Coppier, H; Rachid, A; Barillère, R; CERN. Geneva. IT Department
2007-01-01
The GCS (Gas Control System) project team at CERN uses a Model Driven Approach with a Framework - UNICOS (UNified Industrial COntrol System) - based on PLC (Programmable Logic Controller) and SCADA (Supervisory Control And Data Acquisition) technologies. The first UNICOS versions were only able to provide a PID (Proportional Integral Derivative) controller, whereas the gas systems required more advanced control strategies. The MultiController is a new UNICOS object which provides the following advanced control algorithms: Smith Predictor, PFC (Predictive Function Control), RST* and GPC (Generalized Predictive Control). Its design is based on a monolithic entity with a global structure definition which is able to capture the desired set of parameters of any specific control algorithm supported by the object. The SCADA system - PVSS - supervises the MultiController operation. The PVSS interface provides users with a supervision faceplate; in particular it links any MultiController with recipes: the GCS experts are ab...
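One of the advanced strategies named above, the Smith predictor, can be sketched for a first-order plant with dead time. This is a minimal illustration of the algorithm family the MultiController object encapsulates, not CERN's PLC implementation; the plant parameters and PI gains are illustrative assumptions:

```python
# Smith predictor: a PI controller acts on the measured output corrected by
# the mismatch between a delay-free model and a delayed model, so the loop
# effectively controls the plant as if the dead time were absent.

a, b, d = 0.9, 0.1, 5     # plant: y[k+1] = a*y[k] + b*u[k-d]
Kp, Ki = 2.0, 0.4         # PI gains tuned for the delay-free model

def run(steps=200, sp=1.0):
    y = 0.0
    plant_delay = [0.0] * d   # control moves still inside the dead time
    model_delay = [0.0] * d   # model's copy of the delay line
    ym_free = ym_delay = 0.0
    integ = 0.0
    trace = []
    for _ in range(steps):
        feedback = y + ym_free - ym_delay   # Smith predictor correction
        e = sp - feedback
        integ += Ki * e
        u = Kp * e + integ
        y = a * y + b * plant_delay.pop(0)  # plant sees u delayed by d steps
        plant_delay.append(u)
        ym_free = a * ym_free + b * u       # delay-free internal model
        ym_delay = a * ym_delay + b * model_delay.pop(0)
        model_delay.append(u)
        trace.append(y)
    return trace

trace = run()
```

With a perfect internal model, the corrected feedback equals the delay-free model output, so the PI gains can be tuned aggressively despite the five-sample dead time.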
Laparoscopic approach in the treatment of large leiomyoma of the lower third of the esophagus.
Lipnickas, Vytautas; Beiša, Augustas; Makūnaitė, Gabija; Strupas, Kęstutis
2017-12-01
Leiomyoma of the lower third of the esophagus is a relatively rare disorder but the most common benign tumor of the esophagus. We present a case of an involuted esophageal leiomyoma, 11 cm in size, treated by the laparoscopic approach. The preoperative computed tomogram visualized a mass 3 × 1.5 cm in diameter in the lower esophagus without an eccentric lumen or compression of nearby organs. Resection of the tumor was indicated according to the patient's symptoms and to exclude malignancy. Laparoscopic enucleation of the esophageal leiomyoma was performed. The overall operative time was 205 min. The diagnosis of leiomyoma was established on histopathology and immunohistochemistry staining. The patient resumed the intake of a normal diet on the 5th postoperative day and was discharged from hospital 8 days after the surgery. We have found this minimally invasive operation to be an effective and well-tolerated treatment option, determined by the experience of the surgeon.
Energy Technology Data Exchange (ETDEWEB)
Muneed ur Rehman, M.; Evzelman, M.; Hathaway, K.; Zane, R.; Plett, G. L.; Smith, K.; Wood, E.; Maksimovic, D.
2014-10-01
Energy storage systems require battery cell balancing circuits to avoid divergence of cell state of charge (SOC). A modular approach based on distributed continuous cell-level control is presented that extends the balancing function to higher level pack performance objectives such as improving power capability and increasing pack lifetime. This is achieved by adding DC-DC converters in parallel with cells and using state estimation and control to autonomously bias individual cell SOC and SOC range, forcing healthier cells to be cycled deeper than weaker cells. The result is a pack with improved degradation characteristics and extended lifetime. The modular architecture and control concepts are developed and hardware results are demonstrated for a 91.2-Wh battery pack consisting of four series Li-ion battery cells and four dual active bridge (DAB) bypass DC-DC converters.
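The cell-balancing function described above can be reduced to a toy model: each bypass converter diverts current in proportion to its cell's deviation from the pack-mean SOC, so stronger cells are cycled deeper. The gain, step count, and SOC values are illustrative assumptions, not the paper's state-estimation-based controller:

```python
# Toy SOC balancing loop: per-cell bypass action proportional to the cell's
# deviation from the pack mean. The pack-mean SOC is preserved while the
# spread between cells contracts geometrically.

def balance(socs, gain=0.2, steps=50):
    socs = list(socs)
    for _ in range(steps):
        mean = sum(socs) / len(socs)
        socs = [s - gain * (s - mean) for s in socs]
    return socs

cells = balance([0.90, 0.80, 0.75, 0.70])
spread = max(cells) - min(cells)
```

Because each update subtracts a fraction of the deviation, the spread shrinks by a factor (1 - gain) per step; the paper's scheme additionally biases individual cell SOC targets to account for cell health, which this sketch omits.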
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
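The recovered-variance idea above can be sketched for a sum of two parameters, here the upper Bland-Altman limit of agreement mean + z·SD. The combination formula follows the general MOVER recipe; the sample statistics and the SD's chi-square-based limits below are illustrative assumptions, not the paper's worked example:

```python
import math

# MOVER-style interval for theta1 + theta2: variance estimates are recovered
# from each parameter's own confidence limits and combined in closed form.

def mover_sum(est1, l1, u1, est2, l2, u2):
    """Confidence interval for theta1 + theta2 from the individual CIs."""
    lower = est1 + est2 - math.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
    upper = est1 + est2 + math.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
    return lower, upper

# Upper limit of agreement mean + 1.96*SD from assumed sample statistics:
xbar, s, n, z = 50.0, 8.0, 30, 1.96
m_lo, m_hi = xbar - z * s / n ** 0.5, xbar + z * s / n ** 0.5  # CI for mean
sd_lo, sd_hi = 6.4, 10.8   # assumed chi-square-based CI for the SD
L, U = mover_sum(xbar, m_lo, m_hi, z * s, z * sd_lo, z * sd_hi)
```

The asymmetry of the resulting interval around the point estimate reflects the skewed sampling distribution of the SD, which a naive symmetric interval would ignore.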
Automatic feature extraction in large fusion databases by using deep learning approach
Energy Technology Data Exchange (ETDEWEB)
Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)
2016-11-15
Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously by selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in the deep learning approach offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to obtain more robust and accurate classifiers than in previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the subsequent steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in the deep learning approach to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results show that it is possible to obtain robust classifiers with a high success rate, despite the feature space being reduced to less than 0.02% of the original one.
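The sparse autoencoder mentioned above owes its name to a sparsity penalty added to the reconstruction loss. The sketch below computes the standard KL-divergence penalty that pushes each hidden unit's mean activation toward a small target; the network sizes, data, and hyperparameters are illustrative assumptions, not the TJ-II classifier:

```python
import numpy as np

# Sparsity penalty of a sparse autoencoder: beta * sum_j KL(rho || rho_hat_j),
# where rho_hat_j is hidden unit j's mean activation over the batch and rho
# is a small target (e.g. 0.05). Minimizing it keeps most units near-silent.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparsity_penalty(X, W, b, rho=0.05, beta=3.0):
    rho_hat = sigmoid(X @ W + b).mean(axis=0)   # mean activation per unit
    kl = (rho * np.log(rho / rho_hat)
          + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return float(beta * kl.sum())

X = rng.standard_normal((200, 16))       # 200 samples, 16 input features
W = 0.1 * rng.standard_normal((16, 8))   # encoder weights, 8 hidden units
b = np.zeros(8)

loose = sparsity_penalty(X, W, b)             # activations near 0.5: penalized
sparse = sparsity_penalty(X, W, b - 2.944)    # bias shifts activations near 0.05
```

During training this term is added to the reconstruction error, so gradient descent trades reconstruction fidelity against keeping the hidden representation sparse.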
Large-scaled biomonitoring of trace-element air pollution: goals and approaches
International Nuclear Information System (INIS)
Wolterbeek, H.T.
2000-01-01
Biomonitoring is often used in multi-parameter approaches, especially in larger-scale surveys. The information obtained may consist of thousands of data points, which can be processed by a variety of mathematical routines to permit a condensed and strongly smoothed presentation of results and conclusions. Although reports on larger-scale biomonitoring surveys are 'easy-to-read' and often include far-reaching interpretations, it is not possible to obtain an insight into the real meaningfulness or quality of the survey performed. In any set-up, the aims of the survey should be put forward as clearly as possible. Is the survey to provide information on atmospheric element levels, or on total, wet and dry deposition? What should be the time or geographical scale and resolution of the survey? Which elements should be determined? Is the survey to give information on emission or immission characteristics? Answers to all these questions are of paramount importance, not only regarding the choice of the biomonitoring species or the necessary handling/analysis techniques, but also with respect to planning and personnel, and, not to forget, the expected/available means of data interpretation. In considering a survey set-up, rough survey dimensions may follow directly from the goals; in practice, however, they will be governed by other aspects such as available personnel, handling means/capacity, costs, etc. In what sense and to what extent these factors may cause the survey to drift away from the pre-set goals should receive ample attention: in extreme cases the survey should not be carried out at all. Bearing in mind the above considerations, the present paper focuses on the goals, quality and approaches of larger-scale biomonitoring surveys of trace-element air pollution. The discussion comprises practical problems, options, decisions, analytical means, quality measures, and eventual survey results. (author)
Automatic feature extraction in large fusion databases by using deep learning approach
International Nuclear Information System (INIS)
Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín
2016-01-01
Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously by selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in the deep learning approach offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to obtain more robust and accurate classifiers than in previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the subsequent steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in the deep learning approach to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results show that it is possible to obtain robust classifiers with a high success rate, despite the feature space being reduced to less than 0.02% of the original one.
Heterodyne Angle Deviation Interferometry in Vibration and Bubble Measurements
Ming-Hung Chiu; Jia-Ze Shen; Jian-Ming Huang
2016-01-01
We proposed heterodyne angle deviation interferometry (HADI) for angle deviation measurements. The phase shift of an angular sensor (which can be a metal film or a surface plasmon resonance (SPR) prism) is proportional to the deviation angle of the test beam. The method has been demonstrated in bubble and speaker’s vibration measurements in this paper. In the speaker’s vibration measurement, the voltage from the phase channel of a lock-in amplifier includes the vibration level and frequency. ...
Quantum uncertainty relation based on the mean deviation
Sharma, Gautam; Mukhopadhyay, Chiranjib; Sazim, Sk; Pati, Arun Kumar
2018-01-01
Traditional forms of quantum uncertainty relations are invariably based on the standard deviation. This can be understood in the historical context of the simultaneous development of quantum theory and mathematical statistics. Here, we present alternative forms of uncertainty relations, in both state-dependent and state-independent forms, based on the mean deviation. We illustrate the robustness of this formulation in situations where the standard deviation based uncertainty relation is inapplicable...
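The relationship between the two dispersion measures compared above can be checked numerically for a two-outcome observable. This is an illustrative example, not one taken from the paper: for any outcome distribution, the mean deviation ⟨|A - ⟨A⟩|⟩ never exceeds the standard deviation (by Jensen's inequality), so it is finite whenever the variance is:

```python
import numpy as np

# Mean deviation vs standard deviation for a sigma_z-type measurement with
# outcomes +/-1 occurring with probabilities p and 1-p (assumed example).

def mean_deviation(outcomes, probs):
    outcomes, probs = np.asarray(outcomes, float), np.asarray(probs, float)
    mu = np.dot(probs, outcomes)
    return float(np.dot(probs, np.abs(outcomes - mu)))

def std_deviation(outcomes, probs):
    outcomes, probs = np.asarray(outcomes, float), np.asarray(probs, float)
    mu = np.dot(probs, outcomes)
    return float(np.sqrt(np.dot(probs, (outcomes - mu) ** 2)))

for p in (0.1, 0.3, 0.5, 0.9):
    md = mean_deviation([1.0, -1.0], [p, 1.0 - p])
    sd = std_deviation([1.0, -1.0], [p, 1.0 - p])
    assert md <= sd + 1e-12   # mean deviation is bounded by the SD
```

For this distribution the bound is explicit: md = 4p(1-p) while sd = 2√(p(1-p)), and 4p(1-p) ≤ 1 implies md ≤ sd, with equality only at p ∈ {0, 1/2, 1}.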
Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools
Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew
2017-11-01
We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy-galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
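One standard ingredient of the halo occupation distribution models Halotools supports can be written in a few lines: the mean central occupation of the Zheng et al. (2007) HOD, ⟨N_cen | M⟩ = ½[1 + erf((log₁₀M - log₁₀M_min)/σ)]. The parameter values below are illustrative, not Halotools defaults:

```python
import math

# Zheng07-style mean central occupation: a smoothed step in halo mass. Halos
# well above M_min almost surely host a central galaxy; halos well below
# almost surely do not; sigma sets the width of the transition.

def mean_ncen(mass, log_mmin=12.0, sigma=0.25):
    """Expected number of central galaxies in a halo of the given mass."""
    return 0.5 * (1.0 + math.erf((math.log10(mass) - log_mmin) / sigma))

# At M = M_min the occupation is exactly one half:
print(mean_ncen(1e12))  # → 0.5
```

In a mock-population step, each halo would host a central galaxy with this probability (a Bernoulli draw), which is the kind of interchangeable component the package lets users swap for alternatives such as assembly-biased occupations.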
A rational approach to resonance saturation in large-Nc QCD
International Nuclear Information System (INIS)
Masjuan, Pere; Peris, Santiago
2007-01-01
We point out that resonance saturation in QCD can be understood in the large-Nc limit from the mathematical theory of Padé approximants to meromorphic functions. These approximants are rational functions which encompass any saturation with a finite number of resonances as a particular example, explaining several results which have appeared in the literature. We review the main properties of Padé approximants with the help of a toy model for the (VV-AA) two-point correlator, paying particular attention to the relationship among the Chiral Expansion, the Operator Product Expansion and the resonance spectrum. In passing, we also comment on an old proposal made by Migdal in 1977 which has recently attracted much attention in the context of AdS/QCD models. Finally, we apply the simplest Padé approximant to the (VV-AA) correlator in the real case of QCD. The general conclusion is that a rational approximant may reliably describe a Green's function in the Euclidean regime, but the same is not true in the Minkowski regime, due to the appearance of unphysical poles and/or residues.
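The simplest case of the construction above, a [1/1] Padé approximant built from the first three Taylor coefficients, can be computed in closed form. The example function exp(x) is only a stand-in to show the mechanics, not the (VV-AA) correlator:

```python
import math

# [1/1] Pade approximant (a0 + a1*x) / (1 + b1*x) matched to the Taylor
# series c0 + c1*x + c2*x**2: a single-pole rational function, the analogue
# of saturating a correlator with one resonance.

def pade11(c0, c1, c2):
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + b1 * c0
    return a0, a1, b1

# exp(x) = 1 + x + x**2/2 + ...  gives  (1 + x/2) / (1 - x/2)
a0, a1, b1 = pade11(1.0, 1.0, 0.5)

def approx(x):
    return (a0 + a1 * x) / (1 + b1 * x)

pade_err = abs(approx(0.1) - math.exp(0.1))
taylor_err = abs(1 + 0.1 + 0.005 - math.exp(0.1))   # truncated series, same input
```

With the same three coefficients, the rational form already beats the truncated polynomial, and its pole (here at x = 2) mimics the single-resonance structure; this is the sense in which resonance saturation is a Padé approximation.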
Novel disease targets and management approaches for diffuse large B-cell lymphoma.
Wilson, Wyndham H; Hernandez-Ilizaliturri, Francisco J; Dunleavy, Kieron; Little, Richard F; O'Connor, Owen A
2010-08-01
Diffuse large B-cell lymphoma (DLBCL) responds well to treatment with CHOP and the R-CHOP regimen, but a subset of patients still fail to achieve complete or durable responses. Recent advances in gene expression profiling have led to the identification of three different subtypes of DLBCL, and confirmed that patients with the activated B-cell (ABC) disease subtype are less likely to respond well to CHOP-based regimens than those with germinal centre B-cell-type (GCB) disease. This discovery could herald the use of gene expression profiling to aid treatment decisions in DLBCL, and help identify the most effective management strategies for patients. Treatment options for patients with relapsed or refractory DLBCL are limited, and several novel agents are being developed to address this unmet clinical need. Novel agents developed to treat plasma cell disorders such as multiple myeloma have shown promising activity in patients with NHL. Indeed, the immunomodulatory agent lenalidomide and the proteasome inhibitors bortezomib and carfilzomib, as single agents or in combination with chemotherapy, have already demonstrated promising activity in patients with the ABC subtype of DLBCL. One should not be complacent, however, when applying these agents to new disease types, because dose and drug scheduling can have marked effects on the responses achieved with investigational agents. As more targeted agents are developed, the timing of administration with other agents in clinical trials will become increasingly important to ensure maximal efficacy while minimizing side effects.
Homemdemello, Luiz S.
1992-01-01
An assembly planner for tetrahedral truss structures is presented. To overcome the difficulties due to the large number of parts, the planner exploits the simplicity and uniformity of the shapes of the parts and the regularity of their interconnection. The planning automation is based on the computational formalism known as production system. The global data base consists of a hexagonal grid representation of the truss structure. This representation captures the regularity of tetrahedral truss structures and their multiple hierarchies. It maps into quadratic grids and can be implemented in a computer by using a two-dimensional array data structure. By maintaining the multiple hierarchies explicitly in the model, the choice of a particular hierarchy is only made when needed, thus allowing a more informed decision. Furthermore, testing the preconditions of the production rules is simple because the patterned way in which the struts are interconnected is incorporated into the topology of the hexagonal grid. A directed graph representation of assembly sequences allows the use of both graph search and backtracking control strategies.
A pragmatic approach to voltage stability analysis of large power systems
Energy Technology Data Exchange (ETDEWEB)
Sarmiento, H.G.; Pampin, G. [Inst. de Investigaciones Electricas, Morelos (Mexico); Diaz de Leon, J.A. [American Superconductor, Middleton, WI (United States)
2008-07-01
A methodology for performing voltage stability analyses of large power systems was presented. Modal and time-domain analyses were used to select and site solutions to potential voltage instability and collapse. Steady-state models were used to compute the smallest eigenvalues and associated eigenvectors of a reduced Jacobian matrix. The eigenvalues were used to provide a relative measure of proximity to voltage instability. The analysis was applied to provide an indication of a network's proximity to voltage collapse. Negative eigenvalues were representative of voltage instability conditions, while small positive values indicated proximity to voltage instability. The analysis technique was used to identify buses, lines, and generators prone to voltage instabilities for a 10-node network. A comparative analysis of results obtained from modal and time-domain analyses was used to identify areas vulnerable to voltage instability conditions. Pre-fault, fault, and post-fault conditions were analyzed statically and dynamically. Results of the study showed that the combined method can be used to identify and place reactive power compensation solutions for voltage collapses in electric networks. 20 refs., 5 tabs., 7 figs.
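As an illustration of the modal analysis described above, the following sketch computes the eigenvalues of a small reduced Jacobian and uses the smallest one as a proximity-to-instability indicator. The matrix values and the stability threshold are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical reduced Jacobian (dQ/dV sensitivities) for a 3-bus system;
# the small diagonal entry of bus 2 marks it as electrically weak.
J_R = np.array([[12.0, -4.0, -1.0],
                [-4.0, 10.0, -1.0],
                [-1.0, -1.0,  0.4]])

# Modal analysis: eigenvalues of J_R give a relative measure of
# proximity to voltage instability (negative => unstable).
eigvals, eigvecs = np.linalg.eigh(J_R)   # J_R is symmetric in this example
smallest = eigvals[0]

if smallest < 0:
    status = "voltage unstable"
elif smallest < 1.0:                     # illustrative threshold
    status = "close to instability"
else:
    status = "stable"

# Bus participation factors of the critical mode identify weak buses
# (for a symmetric J_R the left and right eigenvectors coincide).
participation = eigvecs[:, 0] ** 2
weak_bus = int(np.argmax(participation))
```

For a full network, the reduced Jacobian is obtained by eliminating the active-power equations from the power-flow Jacobian; the participation factors then rank candidate sites for reactive compensation.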
International Nuclear Information System (INIS)
Fang, X.W.; Zhang, G.P.; Yao, Y.X.; Wang, C.Z.; Ding, Z.J.; Ho, K.M.
2011-01-01
The conductance of a single-atom carbon chain (SACC) between two zigzag graphene nanoribbons (GNRs) is studied by an efficient scheme utilizing tight-binding (TB) parameters generated via quasi-atomic minimal basis set orbitals (QUAMBOs) and the non-equilibrium Green's function (NEGF) method. Large systems (SACCs containing more than 50 atoms) are investigated, and the electronic transport properties are found to correlate with the SACC's parity. The SACCs provide a stable off or on state in a broad energy region (0.1-1 eV) around the Fermi energy. The off state is not sensitive to the length of the SACC, while the corresponding energy region decreases as the width of the GNR increases. -- Highlights: → Graphene has many superior electronic properties. → First-principles calculations are accurate but limited in system size. → QUAMBOs construct tight-binding parameters with spatial localization, enabling a divide-and-conquer method. → SACC (single carbon atom chain): structure and transport show even-odd parity, and long chains are studied.
Dimov, I.; Georgieva, R.; Todorov, V.; Ostromsky, Tz.
2017-10-01
Reliability of large-scale mathematical models is an important issue when such models are used to support decision makers. Sensitivity analysis of model outputs to variation or natural uncertainties of model inputs is crucial for improving the reliability of mathematical models. A comprehensive experimental study of Monte Carlo algorithms based on Sobol sequences for multidimensional numerical integration has been done. A comparison with Latin hypercube sampling and a particular quasi-Monte Carlo lattice rule based on generalized Fibonacci numbers has been presented. The algorithms have been successfully applied to compute global Sobol sensitivity measures corresponding to the influence of several input parameters (six chemical reaction rates and four different groups of pollutants) on the concentrations of important air pollutants. The concentration values have been generated by the Unified Danish Eulerian Model. The sensitivity study has been done for the areas of several European cities with different geographical locations. The numerical tests show that the stochastic algorithms under consideration are efficient for multidimensional integration, and especially for computing sensitivity indices of small value. This is crucial, since even small indices may need to be estimated in order to achieve a more accurate distribution of the inputs' influence and a more reliable interpretation of the mathematical model results.
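The Fibonacci-lattice rule mentioned above is simple to sketch. Below is a minimal 2-D illustration (the integrand and point count are our own, not from the study): a rank-1 lattice with N = F(n) points and generator F(n-1), compared with plain Monte Carlo on a smooth integrand whose exact integral is 1:

```python
import numpy as np

def fibonacci_lattice(n_index):
    """2-D rank-1 lattice rule with N = F(n) points and generator F(n-1)."""
    F = [1, 1]
    while len(F) <= n_index:
        F.append(F[-1] + F[-2])
    N, g = F[n_index], F[n_index - 1]
    k = np.arange(N)
    return np.column_stack([k / N, (k * g / N) % 1.0])

# Smooth test integrand on [0,1]^2 with exact integral 1.
def f(x):
    return (np.pi / 2) ** 2 * np.sin(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])

pts = fibonacci_lattice(16)               # F(16) = 1597 lattice points
qmc_estimate = f(pts).mean()

rng = np.random.default_rng(0)
mc_estimate = f(rng.random((len(pts), 2))).mean()

qmc_err = abs(qmc_estimate - 1.0)
mc_err = abs(mc_estimate - 1.0)
```

For smooth integrands the lattice rule converges much faster than the O(N^-1/2) of plain Monte Carlo, which is why such rules are competitive for computing small-valued sensitivity indices.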
A novel approach to predict the stability limits of combustion chambers with large eddy simulation
Pritz, B.; Magagnato, F.; Gabi, M.
2010-06-01
Lean premixed combustion, which allows for reducing the production of thermal NOx, is prone to combustion instabilities. There is extensive research to develop a reduced physical model which allows, without time-consuming measurements, the calculation of the resonance characteristics of a combustion system consisting of Helmholtz resonator type components (burner plenum, combustion chamber). For the formulation of this model, numerical investigations by means of compressible Large Eddy Simulation (LES) were carried out. In these investigations the flow in the combustion chamber is isothermal, non-reacting, and excited with a sinusoidal mass flow rate. First a combustion chamber as a single resonator, and subsequently a coupled system of a burner plenum and a combustion chamber, were investigated. In this paper the results of additional investigations of the single resonator are presented. The flow in the combustion chamber was investigated without excitation at the inlet. It was found that the mass flow rate at the outlet cross section is pulsating once the flow in the chamber is turbulent. The fast Fourier transform of the signal showed that the dominant mode is at the resonance frequency of the combustion chamber. This result sheds light on a very important source of self-excited combustion instabilities. Furthermore, the LES can provide not only the damping ratio for the analytical model but also the eigenfrequency of the resonator.
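The resonance frequency that such a reduced model targets is the classical Helmholtz formula, f = (c / 2π) √(A / (V·L_eff)). A quick sketch, with illustrative geometry values that are not the paper's rig:

```python
import math

def helmholtz_frequency(c, A, V, L, end_correction=True):
    """Resonance frequency of a Helmholtz resonator: cavity volume V
    coupled to a neck of cross-section A and length L. An empirical end
    correction lengthens the effective neck by ~0.85 diameters."""
    L_eff = L + (0.85 * 2.0 * math.sqrt(A / math.pi) if end_correction else 0.0)
    return (c / (2.0 * math.pi)) * math.sqrt(A / (V * L_eff))

# Illustrative cold-flow geometry (hypothetical, not from the paper):
f0 = helmholtz_frequency(c=343.0,               # speed of sound, m/s
                         A=math.pi * 0.02 ** 2, # neck area, 2 cm radius
                         V=2.0e-3,              # cavity volume, 2 litres
                         L=0.05)                # neck length, 5 cm
```

Comparing such an analytical eigenfrequency against the dominant FFT peak of the simulated outlet mass flow is exactly the kind of cross-check the reduced model enables.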
Simulation-optimization of large agro-hydrosystems using a decomposition approach
Schuetze, Niels; Grundmann, Jens
2014-05-01
In this contribution a stochastic simulation-optimization framework for decision support for optimal planning and operation of water supply in large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPFs) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk for potential yield. In addition, microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.
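The Monte Carlo quantile estimation at the farm level can be sketched in a few lines. The demand distribution below is entirely hypothetical, standing in for the output of the numerical process model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical farm-level minimum water demand (mm/season) under weather
# uncertainty, in place of ensemble runs of the process model.
demand = 450.0 + 60.0 * rng.standard_normal(10_000)

# A higher quantile (here the 90th percentile) gives the demand that is
# sufficient in 90% of the simulated weather realizations.
q90 = np.quantile(demand, 0.90)
```

In the framework above, such quantiles feed the regional-scale management problem, so that supply decisions hedge against dry-year demand rather than the mean.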
Binary Large Object-Based Approach for QR Code Detection in Uncontrolled Environments
Directory of Open Access Journals (Sweden)
Omar Lopez-Rincon
2017-01-01
Quick Response (QR) barcode detection in uncontrolled environments is still a challenging task despite many existing applications for finding 2D symbols. The main disadvantage of recent applications for QR code detection is low performance for rotated and distorted single or multiple symbols in images with variable illumination and presence of noise. In this paper, a particular solution for QR code detection in uncontrolled environments is presented. The proposal consists in recognizing geometrical features of the QR code using a binary large object (BLOB) based algorithm with subsequent iterative filtering of QR symbol position detection patterns, which does not require the complex processing and training of classifiers frequently used for these purposes. The high precision and speed are achieved by adaptive threshold binarization of integral images. In contrast to well-known scanners, which fail to detect QR codes with medium to strong blurring, significant nonuniform illumination, considerable symbol deformations, and noise, the proposed technique provides a high recognition rate of 80%-100% at a speed compatible with real-time applications. In particular, speed varies from 200 ms to 800 ms per single or multiple QR codes detected simultaneously in images with resolution from 640 × 480 to 4080 × 2720, respectively.
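The integral-image binarization step can be sketched as follows. This is a generic Bradley-style adaptive threshold, our illustration rather than the authors' exact code; the window size and darkness margin are assumptions:

```python
import numpy as np

def adaptive_threshold(img, window=15, t=0.15):
    """Binarize a grayscale image: a pixel is foreground (dark) if it is
    more than t*100 % darker than the mean of its window x window
    neighbourhood. Window means come from an integral image in O(1) each."""
    h, w = img.shape
    # Integral image with a zero row/column so window sums need no special cases.
    ii = np.pad(img, ((1, 0), (1, 0)), mode="constant").cumsum(0).cumsum(1)
    r = window // 2
    ys, xs = np.mgrid[0:h, 0:w]
    y0, y1 = np.clip(ys - r, 0, h), np.clip(ys + r + 1, 0, h)
    x0, x1 = np.clip(xs - r, 0, w), np.clip(xs + r + 1, 0, w)
    area = (y1 - y0) * (x1 - x0)
    # Sum over img[y0:y1, x0:x1] via four integral-image lookups.
    s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    # Foreground where pixel < (1 - t) * local mean, i.e. img*area < s*(1-t).
    return (img * area <= s * (1.0 - t)).astype(np.uint8)
```

Because every window mean costs four lookups regardless of window size, the binarization cost is independent of the neighbourhood scale, which is what makes the approach real-time friendly.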
Whelan, Simon
2007-10-01
Phylogenetic tree estimation plays a critical role in a wide variety of molecular studies, including molecular systematics, phylogenetics, and comparative genomics. Finding the optimal tree relating a set of sequences using score-based (optimality criterion) methods, such as maximum likelihood and maximum parsimony, may require all possible trees to be considered, which is not feasible even for modest numbers of sequences. In practice, trees are estimated using heuristics that represent a trade-off between topological accuracy and speed. I present a series of novel algorithms suitable for score-based phylogenetic tree reconstruction that demonstrably improve the accuracy of tree estimates while maintaining high computational speeds. The heuristics function by allowing the efficient exploration of large numbers of trees through novel hill-climbing and resampling strategies. These heuristics, and other computational approximations, are implemented for maximum likelihood estimation of trees in the program Leaphy, and its performance is compared to other popular phylogenetic programs. Trees are estimated from 4059 different protein alignments using a selection of phylogenetic programs and the likelihoods of the tree estimates are compared. Trees estimated using Leaphy are found to have equal to or better likelihoods than trees estimated using other phylogenetic programs in 4004 (98.6%) families and provide a unique best tree that no other program found in 1102 (27.1%) families. The improvement is particularly marked for larger families (80 to 100 sequences), where Leaphy finds a unique best tree in 81.7% of families.
Forward Modeling of Large-scale Structure: An Open-source Approach with Halotools
Energy Technology Data Exchange (ETDEWEB)
Hearin, Andrew P.; Campbell, Duncan; Tollerud, Erik; Behroozi, Peter; Diemer, Benedikt; Goldbaum, Nathan J.; Jennings, Elise; Leauthaud, Alexie; Mao, Yao-Yuan; More, Surhud; Parejko, John; Sinha, Manodeep; Sipöcz, Brigitta; Zentner, Andrew
2017-10-18
We present the first stable release of Halotools (v0.2), a community-driven Python package designed to build and test models of the galaxy-halo connection. Halotools provides a modular platform for creating mock universes of galaxies starting from a catalog of dark matter halos obtained from a cosmological simulation. The package supports many of the common forms used to describe galaxy-halo models: the halo occupation distribution, the conditional luminosity function, abundance matching, and alternatives to these models that include effects such as environmental quenching or variable galaxy assembly bias. Satellite galaxies can be modeled to live in subhalos or to follow custom number density profiles within their halos, including spatial and/or velocity bias with respect to the dark matter profile. The package has an optimized toolkit to make mock observations on a synthetic galaxy population—including galaxy clustering, galaxy–galaxy lensing, galaxy group identification, RSD multipoles, void statistics, pairwise velocities and others—allowing direct comparison to observations. Halotools is object-oriented, enabling complex models to be built from a set of simple, interchangeable components, including those of your own creation. Halotools has an automated testing suite and is exhaustively documented on http://halotools.readthedocs.io, which includes quickstart guides, source code notes and a large collection of tutorials. The documentation is effectively an online textbook on how to build and study empirical models of galaxy formation with Python.
General Dynamics Convair Division approach to structural analysis of large superconducting coils
International Nuclear Information System (INIS)
Baldi, R.W.
1979-01-01
This paper describes the overall integrated analysis approach and highlights the results obtained. Most of the procedures and techniques described were developed over the past three years. Starting in late 1976, development began on high-accuracy computer codes for electromagnetic field and force analysis. This effort resulted in completion of a family of computer programs called MAGIC (MAGnetic Integration Calculation). Included in this group of programs is a post-processor called POSTMAGIC that links MAGIC to GDSAP (General Dynamics Structural Analysis Program) by automatically transferring force data. Integrating these computer programs afforded us the capability to readily analyze several different conditions that are anticipated to occur during tokamak operation. During 1977 we initiated the development of the CONVERT program that effectively links our THERMAL ANALYZER program to GDSAP by automatically transferring temperature data. The CONVERT program allowed us the capability to readily predict thermal stresses at several different time phases during the computer-simulated cooldown and warmup cycle. This feature aided us in determining the most crucial time phases and to adjust recommended operating procedure to minimize risk. (orig.)
Ritchie, Scott C; Watts, Stephen; Fearnley, Liam G; Holt, Kathryn E; Abraham, Gad; Inouye, Michael
2016-07-01
Network modules (topologically distinct groups of edges and nodes) that are preserved across datasets can reveal common features of organisms, tissues, cell types, and molecules. Many statistics to identify such modules have been developed, but testing their significance requires heuristics. Here, we demonstrate that current methods for assessing module preservation are systematically biased and produce skewed p values. We introduce NetRep, a rapid and computationally efficient method that uses a permutation approach to score module preservation without assuming data are normally distributed. NetRep produces unbiased p values and can distinguish between true and false positives during multiple hypothesis testing. We use NetRep to quantify preservation of gene coexpression modules across murine brain, liver, adipose, and muscle tissues. Complex patterns of multi-tissue preservation were revealed, including a liver-derived housekeeping module that displayed adipose- and muscle-specific association with body weight. Finally, we demonstrate the broader applicability of NetRep by quantifying preservation of bacterial networks in gut microbiota between men and women. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
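The permutation idea behind such distribution-free testing can be sketched generically. The preservation statistic below ("module density") and the data layout are our own illustration, not NetRep's actual statistics:

```python
import numpy as np

def permutation_pvalue(statistic, module_idx, test_data, n_perm=2000, seed=1):
    """Score preservation of a module (columns `module_idx`) in `test_data`
    without normality assumptions: compare the observed statistic against
    its null distribution over random same-sized sets of columns."""
    rng = np.random.default_rng(seed)
    observed = statistic(test_data[:, module_idx])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.choice(test_data.shape[1], size=len(module_idx), replace=False)
        null[i] = statistic(test_data[:, perm])
    # One-sided p with the +1 correction, so p is never exactly zero.
    return (1 + np.sum(null >= observed)) / (n_perm + 1)

# Example statistic: mean absolute pairwise correlation ("module density").
def density(x):
    c = np.corrcoef(x, rowvar=False)
    return np.abs(c[np.triu_indices_from(c, k=1)]).mean()
```

A module whose density in the test dataset exceeds what random gene sets achieve gets a small p value; the same scheme works for any preservation statistic, which is the point of the permutation approach.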
Choi, Leena; Carroll, Robert J; Beck, Cole; Mosley, Jonathan D; Roden, Dan M; Denny, Joshua C; Van Driest, Sara L
2018-04-18
Phenome-wide association studies (PheWAS) have been used to discover many genotype-phenotype relationships and have the potential to identify therapeutic and adverse drug outcomes using longitudinal data within electronic health records (EHRs). However, the statistical methods for PheWAS applied to longitudinal EHR medication data have not been established. In this study, we developed methods to address two challenges faced in reusing EHRs for this purpose: confounding by indication, and low exposure and event rates. We used Monte Carlo simulation to assess propensity score (PS) methods, focusing on two of the most commonly used approaches, PS matching and PS adjustment, to address confounding by indication. We also compared two logistic regression approaches (the default Wald method vs. Firth's penalized maximum likelihood, PML) to address complete separation due to sparse data with low exposure and event rates. PS adjustment resulted in greater power than PS matching, while controlling Type I error at 0.05. The PML method provided reasonable p-values, even in cases with complete separation, with well-controlled Type I error rates. Using PS adjustment and the PML method, we identified novel latent drug effects in pediatric patients exposed to two common antibiotic drugs, ampicillin and gentamicin. R packages PheWAS and EHR are available at https://github.com/PheWAS/PheWAS and at CRAN (https://www.r-project.org/), respectively. The R script for data processing and the main analysis is available at https://github.com/choileena/EHR. leena.choi@vanderbilt.edu. Supplementary data are available at Bioinformatics online.
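Firth's correction can be sketched in a few lines for logistic regression. With complete separation, as in the toy data below, the ordinary MLE diverges to infinity, while the Jeffreys-prior penalty modifies the score to U*(b) = X'(y − p + h(1/2 − p)), with h the hat-matrix diagonals, keeping the estimate finite. This is a generic illustration in Python, not the paper's R implementation:

```python
import numpy as np

def firth_logistic(X, y, n_iter=50):
    """Logistic regression by Firth's penalized maximum likelihood,
    via Newton iterations on the Jeffreys-penalized score."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        XtWX = X.T @ (W[:, None] * X)
        # Diagonal of the weighted hat matrix: h_i = w_i x_i' (X'WX)^-1 x_i.
        H = (W[:, None] * X) @ np.linalg.solve(XtWX, X.T)
        h = np.diag(H)
        score = X.T @ (y - p + h * (0.5 - p))   # Firth-adjusted score
        beta = beta + np.linalg.solve(XtWX, score)
    return beta
```

On perfectly separated data (all controls below, all cases above a cut-point), a Wald test based on the diverging MLE is meaningless, whereas the penalized estimate, and hence its p-value, remains usable.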
Directory of Open Access Journals (Sweden)
Marc Lamarine
2018-05-01
Aim of study: The use of weighed food diaries in nutritional studies provides a powerful method to quantify food and nutrient intakes. Yet, mapping these records onto food composition tables (FCTs) is a challenging, time-consuming, and error-prone process. Experts make this effort manually, and no automation has been previously proposed. Our study aimed to assess automated approaches to map food items onto FCTs. Methods: We used food diaries (~170,000 records pertaining to 4,200 unique food items) from the DiOGenes randomized clinical trial. We attempted to map these items onto six FCTs available from the EuroFIR resource. Two approaches were tested: the first was based solely on food name similarity (fuzzy matching); the second used a machine learning approach (C5.0 classifier) combining both fuzzy matching and food energy. We tested mapping food items using their original names and also an English translation. Top matching pairs were reviewed manually to derive performance metrics: precision (the percentage of correctly mapped items) and recall (the percentage of mapped items). Results: The simpler approach, fuzzy matching, provided very good performance. Under a relaxed threshold (score > 50%), this approach enabled 99.49% of the items to be mapped, with a precision of 88.75%. With a slightly more stringent threshold (score > 63%), the precision could be significantly improved to 96.81% while keeping a recall rate > 95% (i.e., only 5% of the queried items would not be mapped). The machine learning approach did not lead to any improvements compared to fuzzy matching. However, it could substantially increase the recall rate for food items without any clear equivalent in the FCTs (+7% and +20% when mapping items using their original or English-translated names). Our approaches have been implemented as R packages and are freely available from GitHub. Conclusion: This study is the first to provide automated approaches for large-scale food item mapping onto FCTs.
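A minimal fuzzy-matching sketch, using Python's difflib as a stand-in for the paper's string-similarity score (the food names are illustrative, though the 0.63 threshold echoes the stricter cut-off above):

```python
from difflib import SequenceMatcher

def best_match(item, fct_names, threshold=0.63):
    """Map a food-diary item onto a food composition table (FCT) by name
    similarity; returns (best matching name or None, similarity score)."""
    scored = [(SequenceMatcher(None, item.lower(), name.lower()).ratio(), name)
              for name in fct_names]
    score, name = max(scored)
    return (name if score >= threshold else None, score)

# Hypothetical FCT entries:
fct = ["Whole milk", "Semi-skimmed milk", "Wheat bread, white", "Butter, salted"]
match, score = best_match("wheat bread white", fct)   # word order differs
miss, _ = best_match("chocolate cake", fct)           # no equivalent -> None
```

Items falling below the threshold are exactly the "no clear equivalent" cases where the study found the classifier-based approach recovers extra recall.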
Directory of Open Access Journals (Sweden)
Jing-min CHENG
2011-05-01
Objective: To explore the operative method and therapeutic efficacy of surgical resection of large invasive pituitary adenomas with an individualized approach under neuronavigator guidance. Methods: Seventeen patients (10 males and 7 females, aged from 22 to 78 years with a mean of 39.2±9.2 years) suffering from large invasive pituitary adenoma of higher than Hardy grade IV, hospitalized from 2004 to 2009, were involved in the present study. All procedures were performed with the assistance of a neuronavigator via an individualized pterional approach, subfrontal extradural approach, trans-sphenoidal approach, or combined approach. The dispersedly invasive pituitary adenomas were resected under the guidance of the neuronavigator by fully utilizing the natural anatomical cleavages. All the patients received follow-up CT scanning 3 days after operation and MRI scanning 1 to 3 months after operation, and clinical follow-up ranged from 6 to 72 months. The resection extent and outcome were assessed by imaging examination and clinical results. Results: Total tumor removal was achieved in 15 cases, subtotal removal in 1 case, and extensive partial removal in 1 case. The visual impairment and headache were ameliorated in most cases, but worsened in 1 patient. Transient diabetes insipidus occurred in 8 cases, electrolyte disturbances were observed in 2 cases, leakage of cerebrospinal fluid appeared in 2 cases, hyposmia in 2 cases, aggravated visual impairment in 1 case, oculomotor and abducens nerve paralysis on the operative side in 1 case, and epidural hematoma in the occipital and parietal regions in 1 case. No patient died during the follow-up period. Conclusions: An individualized surgical approach designed according to the growth direction of the tumor under neuronavigator guidance helps the operators to identify the vessels and nerves in the operative field distinctly during the operation, thus improving the total removal rate and the safety of the operation to remove large invasive pituitary adenomas.
Tan, F.; Wang, G.; Chen, C.; Ge, Z.
2016-12-01
Back-projection of teleseismic P waves [Ishii et al., 2005] has been widely used to image the rupture of earthquakes. Besides the conventional narrowband beamforming in the time domain, approaches in the frequency domain, such as MUSIC back projection (Meng, 2011) and compressive sensing (Yao et al., 2011), have been proposed to improve the resolution. Each method has its advantages and disadvantages and should be properly used in different cases. Therefore, a thorough study to compare and test these methods is needed. We have written a GUI program which puts the three methods together so that users can conveniently apply different methods to the same data and compare the results. We then use all the methods to process several earthquake datasets, including the 2008 Wenchuan Mw7.9 earthquake and the 2011 Tohoku-Oki Mw9.0 earthquake, as well as theoretical seismograms of both simple sources and complex ruptures. Our results show differences in efficiency, accuracy, and stability among the methods. Quantitative and qualitative analyses are applied to measure their dependence on data and parameters, such as station number, station distribution, grid size, calculation window length, and so on. In general, back projection makes it possible to get a good result in a very short time using fewer than 20 lines of high-quality data with a proper station distribution, but the swimming artifact can be significant. Some measures, for instance combining global seismic data, could help ameliorate this method. MUSIC back projection needs relatively more data to obtain a better and more stable result, which means it needs much more time, since its runtime grows noticeably faster than back projection's with increasing station number. Compressive sensing deals more effectively with multiple sources in the same time window but costs the longest time due to repeatedly solving matrix equations. The resolution of all the methods is complicated and depends on many factors. An important one is the grid size, which in turn influences
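The conventional time-domain variant is easy to sketch: for each candidate grid point, shift each station's record by the predicted travel time, stack, and score the beam power. This is a generic illustration of the technique, not the GUI program's code:

```python
import numpy as np

def back_project(waveforms, travel_times, dt, window):
    """Time-domain beamforming back-projection.

    waveforms:    (n_sta, n_samp) array of station records
    travel_times: (n_grid, n_sta) predicted delays in seconds
    Returns the stacked beam power for each candidate grid point.
    """
    n_grid, n_sta = travel_times.shape
    n_win = int(window / dt)
    power = np.zeros(n_grid)
    for g in range(n_grid):
        shifts = np.round(travel_times[g] / dt).astype(int)
        stack = np.zeros(n_win)
        for s in range(n_sta):
            seg = waveforms[s, shifts[s]:shifts[s] + n_win]
            stack[:len(seg)] += seg          # align and stack
        power[g] = np.sum(stack ** 2)        # coherent energy of the beam
    return power
```

At the true source location the shifted records add coherently (power grows like the square of the station count), while elsewhere they add incoherently, which is what produces the back-projection image.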
Perception of midline deviations in smile esthetics by laypersons.
Ferreira, Jamille Barros; Silva, Licínio Esmeraldo da; Caetano, Márcia Tereza de Oliveira; Motta, Andrea Fonseca Jardim da; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson
2016-01-01
To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. An album with 12 randomly distributed frontal-view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal-view smiling photograph was modified to create deviations from 1 mm to 5 mm in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). The Wilcoxon test, Student's t-test, and Mann-Whitney test were applied, adopting a 5% level of significance. Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p < 0.05) when the deviation was 1 mm. Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and deviations of 2 mm and above when the lips only were included. The visualization of structures adjacent to the smile influenced the perception of midline deviation.
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
Moderate deviations principles for the kernel estimator of ...
African Journals Online (AJOL)
Abstract. The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of confidence regions for the regression function.
Generation of deviation parameters for amino acid singlets, doublets ...
Indian Academy of Sciences (India)
We present a new method, secondary structure prediction by deviation parameter (SSPDP) for predicting the secondary structure of proteins from amino acid sequence. Deviation parameters (DP) for amino acid singlets, doublets and triplets were computed with respect to secondary structural elements of proteins based on ...
38 CFR 36.4304 - Deviations; changes of identity.
2010-07-01
... identity. 36.4304 Section 36.4304 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS... Deviations; changes of identity. A deviation of more than 5 percent between the estimates upon which a... change in the identity of the property upon which the original appraisal was based, will invalidate the...
The deviation matrix of a continuous-time Markov chain
Coolen-Schrijner, P.; van Doorn, E.A.
2001-01-01
The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\\Pi$ is the matrix $D \\equiv \\int_0^{\\infty} (P(t)-\\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
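For a finite ergodic chain the defining integral has a closed form, $D = (\Pi - Q)^{-1} - \Pi$, where $Q$ is the generator. A numerical sketch (the generator below is our own small example) verifying the characteristic identities $QD = DQ = \Pi - I$ and $D\Pi = \Pi D = 0$:

```python
import numpy as np

# Generator of a small ergodic continuous-time Markov chain (rows sum to 0).
Q = np.array([[-1.0,  1.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  4.0, -4.0]])

# Stationary distribution: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
Pi = np.tile(pi, (3, 1))          # ergodic matrix: every row equals pi

# Closed form of D = integral_0^inf (P(t) - Pi) dt for the finite case.
D = np.linalg.inv(Pi - Q) - Pi
```

The identities checked below pin $D$ down uniquely, which is why the deviation matrix appears in perturbation formulas and in the analysis of additive functionals of the chain.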
The deviation matrix of a continuous-time Markov chain
Coolen-Schrijner, Pauline; van Doorn, Erik A.
2002-01-01
The deviation matrix of an ergodic, continuous-time Markov chain with transition probability matrix $P(.)$ and ergodic matrix $\Pi$ is the matrix $D \equiv \int_0^{\infty} (P(t)-\Pi)dt$. We give conditions for $D$ to exist and discuss properties and a representation of $D$. The deviation matrix of a
Sensitivity Analysis of Deviation Source for Fast Assembly Precision Optimization
Directory of Open Access Journals (Sweden)
Jianjun Tang
2014-01-01
Assembly precision optimization of complex products has a huge benefit in improving product quality. Due to the coupling of a variety of deviation sources, the goal of assembly precision optimization is difficult to determine accurately. In order to optimize assembly precision accurately and rapidly, sensitivity analysis of deviation sources is proposed. First, deviation source sensitivity is defined as the ratio of assembly dimension variation to deviation source dimension variation. Second, according to assembly constraint relations, assembly sequences, and locating, deviation transmission paths are established by locating the joints between adjacent parts and establishing each part's datum reference frame. Third, assembly multidimensional vector loops are created using the deviation transmission paths, and the corresponding scalar equations of each dimension are established. Then, assembly deviation source sensitivity is calculated by using a first-order Taylor expansion and a matrix transformation method. Finally, taking assembly precision optimization of a wing flap rocker as an example, the effectiveness and efficiency of the deviation source sensitivity analysis method are verified.
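The first-order sensitivity computation can be sketched numerically. The two-link vector loop below and its variable names are our own hypothetical example, with finite differences standing in for the paper's analytical Taylor expansion of the loop equations:

```python
import numpy as np

def deviation_sensitivities(assembly_dim, nominal, eps=1e-6):
    """Sensitivity of an assembly dimension to each deviation source,
    S_i = d(assembly_dim)/d(x_i), via first-order finite differences."""
    nominal = np.asarray(nominal, dtype=float)
    base = assembly_dim(nominal)
    sens = np.empty(len(nominal))
    for i in range(len(nominal)):
        x = nominal.copy()
        x[i] += eps
        sens[i] = (assembly_dim(x) - base) / eps
    return sens

# Illustrative two-link vector loop: gap = L1*cos(a) + L2*cos(a+b), with
# deviation sources (L1, L2, a, b). Names and values are hypothetical.
def gap(x):
    L1, L2, a, b = x
    return L1 * np.cos(a) + L2 * np.cos(a + b)

S = deviation_sensitivities(gap, [100.0, 80.0, 0.0, np.pi / 2])
```

Large-magnitude entries of S identify the deviation sources worth tightening first; here the angular sources dominate the length sources by nearly two orders of magnitude.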
45 CFR 63.19 - Budget revisions and minor deviations.
2010-10-01
... 45 Public Welfare 1 2010-10-01 2010-10-01 false Budget revisions and minor deviations. 63.19 Section 63.19 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION GRANT PROGRAMS... Budget revisions and minor deviations. Pursuant to § 74.102(d) of this title, paragraphs (b)(3) and (b)(4...
Refraction in Terms of the Deviation of the Light.
Goldberg, Fred M.
1985-01-01
Discusses refraction in terms of the deviation of light. Points out that in physics courses where very little mathematics is used, it might be more suitable to describe refraction entirely in terms of the deviation, rather than by introducing Snell's law. (DH)
DEFF Research Database (Denmark)
Davidsson, Eva; Sørensen, Helene
Large scale studies play an increasing role in educational politics, and results from surveys such as TIMSS and PISA are extensively used in media debates about students' knowledge in science and mathematics. Although this debate does not usually shed light on the more extensive quantitative analyses, there is a lack of investigations which aim at exploring what it is possible to conclude or not to conclude from these analyses. There is also a need for more detailed discussions about what trends can be discerned concerning students' knowledge in science and mathematics. The aim of this symposium is therefore to highlight and discuss different approaches to how data from large scale studies could be used for additional analyses in order to increase our understanding of students' knowledge in science and mathematics, but also to explore possible longitudinal trends hidden in the data material.
Dutra, Robson Azevedo; Perez-Bóscollo, Adriana Cartafina; Ribeiro, Fernanda Cristina Silva Alves; Vietez, Nádia Bicego
2008-03-01
A 10-year-old premenarchal girl was admitted to our hospital with moderate abdominal pain, although presenting no vomiting or abdominal rebound tenderness. A large abdominal mass was visible and palpable in the periumbilical and epigastric regions. Results of physical examination revealed that the general health status was satisfactory. Computed tomographic scan revealed a large, thin-walled cyst occupying nearly the entire peritoneal cavity. The other viscera were of normal aspect. A laparoscopic approach revealed a left ovarian cystic tumor that was twisted 360 degrees in conjunction with the uterine corpus with hemorrhagic infarction. A partial hysterectomy and a left salpingo-oophorectomy were carried out. The tumor was classified as mature cystic teratoma of the ovary accompanied by hemorrhagic necrosis, not only of the cyst but also of the left uterine tube and the uterine corpus.
Energy Technology Data Exchange (ETDEWEB)
Zhou, Ye [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thornber, Ben [The Univ. of Sydney, Sydney, NSW (Australia)
2016-04-12
Here, implicit large-eddy simulation (ILES) has been utilized as an effective approach for calculating many complex flows at high Reynolds numbers. Richtmyer–Meshkov instability (RMI) induced flow can be viewed as homogeneous decaying turbulence (HDT) after the passage of the shock. In this article, a critical evaluation of three methods for estimating the effective Reynolds number and the effective kinematic viscosity is undertaken utilizing high-resolution ILES data. Effective Reynolds numbers based on the vorticity and dissipation rate, or on the integral and inner-viscous length scales, are found to be the most self-consistent when compared to the expected phenomenology and wind tunnel experiments.
Kalid, Naser; Zaidan, A A; Zaidan, B B; Salman, Omar H; Hashim, M; Albahri, O S; Albahri, A S
2018-03-02
This paper presents a new approach to prioritize "Large-scale Data" of patients with chronic heart diseases by using body sensors and communication technology during disasters and peak seasons. An evaluation matrix is used for emergency evaluation and large-scale data scoring of patients with chronic heart diseases in a telemedicine environment. However, one major problem in the emergency evaluation of these patients is establishing a reasonable threshold for patients with the most and least critical conditions. This threshold can be used to detect the highest and lowest priority levels when all the scores of patients are identical during disasters and peak seasons. A practical study was performed on 500 patients with chronic heart diseases and different symptoms, and their emergency levels were evaluated based on four main measurements: electrocardiogram, oxygen saturation sensor, blood pressure monitoring, and a non-sensory measurement tool, namely, the text frame. Data alignment was conducted for the raw data and decision-making matrix by converting each extracted feature into an integer. This integer represents the patient's state in the triage level based on medical guidelines, so that features from different sources can be combined in one platform. The patients were then scored based on a decision matrix by using multi-criteria decision-making techniques, namely, integrated multi-layer analytic hierarchy process (MLAHP) and the technique for order performance by similarity to ideal solution (TOPSIS). For subjective validation, cardiologists were consulted to confirm the ranking results. For objective validation, mean ± standard deviation was computed to check the accuracy of the systematic ranking. This study provides scenarios and checklist benchmarking to evaluate the proposed and existing prioritization methods. Experimental results revealed the following. (1) The integration of TOPSIS and MLAHP effectively and systematically solved the patient settings on triage and
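As a sketch of how the TOPSIS scoring step described above can work, the following is a minimal standard TOPSIS implementation (vector normalization, weighted distances to the ideal best and worst alternatives). The patient rows, criterion weights, and the all-benefit encoding (higher integer = more urgent) are invented for illustration; they are not the paper's actual data or its MLAHP-derived weights.

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: rows = patients, cols = criteria (triage-encoded integers)
    ncols = len(weights)
    # vector-normalize each column, then apply criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) or 1.0 for j in range(ncols)]
    v = [[row[j] / norms[j] * weights[j] for j in range(ncols)] for row in matrix]
    # ideal best/worst per criterion (benefit=True means higher is better)
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((row[j] - best[j]) ** 2 for j in range(ncols)))
        d_worst = math.sqrt(sum((row[j] - worst[j]) ** 2 for j in range(ncols)))
        scores.append(d_worst / (d_best + d_worst or 1.0))  # closeness to ideal
    return scores

# hypothetical features per patient: [ECG, SpO2, BP, text frame], higher = more urgent
patients = [[3, 2, 2, 3], [1, 1, 1, 1], [2, 3, 3, 2]]
weights = [0.4, 0.3, 0.2, 0.1]  # e.g. as might come from an AHP pairwise comparison
scores = topsis(patients, weights, benefit=[True] * 4)
ranking = sorted(range(len(patients)), key=lambda i: -scores[i])  # most urgent first
```

Higher closeness scores indicate patients nearer the "ideal worst-case" urgency profile and hence higher triage priority.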
Directory of Open Access Journals (Sweden)
Edson Rocha Constantino
2016-05-01
ABSTRACT Objective In this study, we investigate our institutional experience with patients who underwent an endoscopic endonasal transsphenoidal approach for treatment of large and giant pituitary adenomas, emphasizing the surgical results and approach-related complications. Method The authors reviewed 28 consecutive patients who underwent surgery between March 2010 and March 2014. Results The mean preoperative tumor diameter was 4.6 cm. Gross-total resection was achieved in 14.3%, near-total in 10.7%, subtotal in 39.3%, and partial in 35.7%. Nine patients experienced improvement in visual acuity, while one patient worsened. The most common complications were transient diabetes insipidus (53%), new pituitary deficit (35.7%), endonasal adhesions (21.4%), and cerebrospinal fluid leak (17.8%). Surgical mortality was 7.1%. Conclusions Endoscopic endonasal transsphenoidal surgery is a valuable treatment option for large or giant pituitary adenomas, resulting in high rates of surgical decompression of cerebrovascular structures.
Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.
2015-05-01
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential
Vertical dispersion generated by correlated closed orbit deviations
International Nuclear Information System (INIS)
Kewisch, J.; Limberg, T.; Rossbach, J.; Willeke, F.
1986-02-01
Vertical displacement of quadrupole magnets is one of the main causes of vertical dispersion in a flat storage ring and thus a major contributor to the height of an electron beam. Computer simulations of the beam height in the HERA electron ring give a value of the ratio ε_z/ε_x of more than 10 percent. This large value occurs even for an rms value of the quadrupole vertical displacements Δz as small as 0.01 mm. Such a vertical emittance is much larger than one expects on the basis of a theoretical estimate, and it is clearly necessary to investigate the origin of the disagreement, especially since the beam height has such an important influence on the machine performance. The key to understanding this discrepancy lies in the correlations of the closed orbit deviations at different positions in the machine. This is investigated in the next section, and in the section which follows we derive the expression for the rms value of the dispersion and the vertical emittance. Finally, the theoretical results are compared with computer simulations. (orig.)
A method for searching the possible deviations from exponential decay law
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Tran Vien Ha
1993-01-01
A continuous kinetic function approach is proposed for analyzing experimental decay curves. In the case of purely exponential behaviour, the values of the kinetic function are the same at different ages of the investigated radionuclide. A deviation from the main decay curve can be found by comparing experimental kinetic function values with those obtained in the purely exponential case. (author). 12 refs
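The abstract does not give the exact form of the kinetic function, but the constancy idea can be illustrated with one common choice: for a pure exponential N(t) = N0·e^(−λt), the quantity K(t) = −ln(N(t)/N0)/t equals λ at every age, so any age-dependence of K signals a deviation. The decay constant and counts below are hypothetical illustration values, not data from the paper.

```python
import math

def kinetic_function(t, n, n0):
    # K(t) = -ln(n/n0)/t; constant (equal to lambda) for pure exponential decay
    return [-math.log(ni / n0) / ti for ti, ni in zip(t, n)]

# synthetic, purely exponential decay curve (lambda and N0 are arbitrary)
lam, n0 = 0.18, 1.0e6
t = [1.0, 2.0, 5.0, 10.0, 20.0]
n = [n0 * math.exp(-lam * ti) for ti in t]
k = kinetic_function(t, n, n0)
# any spread of k beyond counting-statistics noise would indicate a deviation
```

With real data, K(t) computed from measured counts would be compared against this flat exponential baseline at each age.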
Clements, Hayley S; Tambling, Craig J; Hayward, Matt W; Kerley, Graham I H
2014-01-01
Broad-scale models describing predator prey preferences serve as useful departure points for understanding predator-prey interactions at finer scales. Previous analyses used a subjective approach to identify prey weight preferences of the five large African carnivores, hence their accuracy is questionable. This study uses a segmented model of prey weight versus prey preference to objectively quantify the prey weight preferences of the five large African carnivores. Based on simulations of known predator prey preference, for prey species sample sizes above 32 the segmented model approach detects up to four known changes in prey weight preference (represented by model break-points) with high rates of detection (75% to 100% of simulations, depending on number of break-points) and accuracy (within 1.3±4.0 to 2.7±4.4 of known break-point). When applied to the five large African carnivores, using carnivore diet information from across Africa, the model detected weight ranges of prey that are preferred, killed relative to their abundance, and avoided by each carnivore. Prey in the weight ranges preferred and killed relative to their abundance are together termed "accessible prey". Accessible prey weight ranges were found to be 14-135 kg for cheetah Acinonyx jubatus, 1-45 kg for leopard Panthera pardus, 32-632 kg for lion Panthera leo, 15-1600 kg for spotted hyaena Crocuta crocuta and 10-289 kg for wild dog Lycaon pictus. An assessment of carnivore diets throughout Africa found these accessible prey weight ranges include 88±2% (cheetah), 82±3% (leopard), 81±2% (lion), 97±2% (spotted hyaena) and 96±2% (wild dog) of kills. These descriptions of prey weight preferences therefore contribute to our understanding of the diet spectrum of the five large African carnivores. Where datasets meet the minimum sample size requirements, the segmented model approach provides a means of determining, and comparing, the prey weight range preferences of any carnivore species.
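The break-point detection above rests on segmented regression. As a minimal sketch of the idea (not the authors' actual model, which handles up to four break-points), the following fits a single break-point by exhaustively splitting sorted data and minimizing the combined least-squares error of two line segments; the V-shaped data are synthetic.

```python
def sse(xs, ys):
    # residual sum of squares of the ordinary least-squares line through (xs, ys)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def one_breakpoint(xs, ys, min_pts=3):
    # exhaustive search: split the sorted data at each candidate index,
    # fit a line to each side, keep the split minimizing the total SSE
    pairs = sorted(zip(xs, ys))
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    best = min(range(min_pts, len(xs) - min_pts + 1),
               key=lambda i: sse(xs[:i], ys[:i]) + sse(xs[i:], ys[i:]))
    return xs[best]  # estimated break-point location (e.g. prey weight)

# V-shaped synthetic preference curve with a known change at weight 10
xs = list(range(1, 20))
ys = [x if x < 10 else 20 - x for x in xs]
bp = one_breakpoint(xs, ys)
```

The sample-size requirement quoted in the abstract (n > 32) reflects exactly this kind of search: too few points per candidate segment makes the fitted slopes, and hence the detected break-points, unstable.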
A numerical approach for the study of large sodium spray fires and its application for SPX1
International Nuclear Information System (INIS)
Varet, T.; Leroy, B.; Barthez, M.; Malet, J.C.
1996-01-01
For the original design of SUPER-PHENIX, only pool fires were analysed for secondary sodium because these were thought to be the most likely. However, after the sodium spray fire at the solar plant of ALMERIA, an analysis of the consequences of secondary spray fire was undertaken. According to the French Safety Authority, the most penalizing cases of sodium leak and fire must be taken into account for each type of consequences, up to the complete rupture of a main secondary pipe. The experimental data available were mainly based on sodium flowrates in the range of ten kilograms per second, which are far below the leak flowrates obtained in case of a complete rupture of a main secondary pipe, i.e. several tons of sodium per second during a short time interval; moreover, it was obviously not possible to perform sodium tests with such high flowrate conditions. Consequently a complete methodology for the prediction of the behaviour of large sodium spray fires has been developed: the two-dimensional code PULSAR, which solves the two phase flow Navier-Stokes equations with source terms of mass and energy, is first used to evaluate the physical behaviour of a spray of sodium droplets in a cell in diverse conditions and thus to determine the burning rate. This last value is then used as data in the FEUMIX code in which other phenomena such as the dynamic response of pressure relief systems are described, in order to determine the pressure transient in the cell. This approach has been successfully tested using the experimental data available from past and recent tests, particularly the high flowrates tests IGNA 3602 and IGNA 3604. This numerical approach has been applied to the analysis of the consequences of postulated large sodium leaks in SUPER-PHENIX and allowed us to justify the hypotheses used to design the protective measures implemented on the plant, and thus the demonstration of safety with regard to large sodium leaks. (author)
Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu
2018-01-01
Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
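The paper develops its own graph-kernel framework, which is not reproduced here; as a generic illustration of comparing graphs by topological features, the following is a toy kernel in the Weisfeiler-Lehman subtree family: node labels are iteratively compressed with neighbor labels, and two graphs are compared by the inner product of their label-count vectors.

```python
from collections import Counter

def wl_features(adj, labels, iters=2):
    # adj: {node: [neighbors]}; labels: {node: initial label}
    feats = Counter(labels.values())
    for _ in range(iters):
        # refine each label with the multiset of its neighbors' labels
        labels = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
                  for v in adj}
        feats.update(labels.values())
    return feats

def wl_kernel(g1, g2):
    # inner product of WL label counts; larger = more topologically similar
    f1, f2 = wl_features(*g1), wl_features(*g2)
    return sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())

# a triangle is more similar to itself than to a 3-node path
triangle = ({0: [1, 2], 1: [0, 2], 2: [0, 1]}, {v: "x" for v in range(3)})
path = ({0: [1], 1: [0, 2], 2: [1]}, {v: "x" for v in range(3)})
self_sim = wl_kernel(triangle, triangle)
cross_sim = wl_kernel(triangle, path)
```

In the spirit of the paper, such similarity scores let a system estimate layout quality for a new graph from precomputed results on topologically similar graphs, instead of computing every layout from scratch.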
Liu, Jiakai; Tan, Chin Hon; Badrick, Tony; Loh, Tze Ping
2018-02-01
An increase in analytical imprecision (expressed as CVa) can introduce additional variability (i.e. noise) into patient results, which poses a challenge to the optimal management of patients. Relatively little work has been done to address the need for continuous monitoring of analytical imprecision. Through numerical simulations, we describe the use of the moving standard deviation (movSD) and a recently described moving sum of outliers (movSO) of patient results as means for detecting increased analytical imprecision, and compare their performance against internal quality control (QC) and average of normals (AoN) approaches. The power to detect an increase in CVa is suboptimal under routine internal QC procedures. The AoN technique almost always had the highest average number of patient results affected before error detection (ANPed), indicating that it had generally the worst capability for detecting an increased CVa. On the other hand, the movSD and movSO approaches were able to detect an increased CVa at significantly lower ANPed, particularly for measurands that displayed a relatively small ratio of biological variation to CVa. CONCLUSION: The movSD and movSO approaches are effective in detecting an increase in CVa for high-risk measurands with small biological variation. Their performance is relatively poor when the biological variation is large. However, the clinical risk of an increase in analytical imprecision is attenuated for these measurands, as the increased imprecision adds only marginally to the total variation and is less likely to impact clinical care.
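A minimal sketch of the movSD idea: track a rolling standard deviation of the patient-result stream and alarm when it exceeds a control limit derived from a stable baseline period. The window size, the 1.5× limit multiplier, and the simulated doubling of imprecision are all illustrative choices, not the paper's simulation parameters.

```python
import math
import random
from collections import deque

def moving_sd(stream, window=200):
    # rolling sample standard deviation over the most recent `window` results
    buf, out = deque(maxlen=window), []
    for x in stream:
        buf.append(x)
        if len(buf) == window:
            m = sum(buf) / window
            out.append(math.sqrt(sum((v - m) ** 2 for v in buf) / (window - 1)))
    return out

# simulate a stable analyser (SD 5) whose imprecision suddenly doubles (SD 10)
random.seed(1)
results = [random.gauss(100, 5) for _ in range(1000)] + \
          [random.gauss(100, 10) for _ in range(1000)]
movsd = moving_sd(results, window=200)
# control limit from the stable period (the 1.5 multiplier is arbitrary here)
limit = 1.5 * sum(movsd[:300]) / 300
first_alarm = next(i for i, v in enumerate(movsd) if v > limit)
```

As the abstract notes, this works best when analytical imprecision dominates: large biological variation inflates the baseline SD and masks the shift.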
Deviations outside the acceptance limits in the IAEA/WHO TLD audits for radiotherapy hospitals
International Nuclear Information System (INIS)
Vatnitsky, S.; Izewska, J.
2002-01-01
The main purpose of the IAEA/WHO TLD postal dose audit programme for dosimetry in radiotherapy is to provide an independent verification of the dose delivered by treatment machines in radiotherapy hospitals. The results of the TLD audit are considered acceptable if the relative deviation between the participant's stated dose and the TLD-determined dose is within ±5%. The goal of this note is to draw the attention of participants in the TLD programme to some of the common reasons for deviations outside the acceptance limits. Armed with this knowledge, other participants may avoid similar problems in the future. The analysis of deviations presented here is based on the results of TLD audits of the calibration of approximately 1000 Co-60 beams and 600 high-energy X-ray beams performed in the period 1996-2001. A total of 259 deviations outside the ±5% limits have been detected, including 204 deviations for Co-60 beams (20% of all Co-60 beams checked) and 55 for high-energy X-ray beams (10% of all X-ray beams checked). It is worth mentioning that the percentage of large deviations (beyond 10%) is also higher for Co-60 beams than for high-energy X-ray beams. Some problems may be caused by obsolete dosimetry equipment or poor treatment machine conditions. Other problems may be due to insufficient training of staff working in radiotherapy. The clinical relevance of severe TLD deviations detected in the audit programme was confirmed in many cases but, fortunately, not all poor dosimetric results reflect deficiencies in the calibration of clinical beams or machine faults. Sometimes it happens that the TLDs are irradiated with an incorrect dose due to misunderstanding of the instructions on how to perform the TLD irradiation. Such dosimetry errors would have no direct impact on the actual dose delivered to a patient.
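The ±5% acceptance criterion above is easy to encode. One plausible convention (assumed here, not taken from the note) is to express the deviation as the stated dose relative to the TLD-measured dose:

```python
def tld_deviation(stated_dose, tld_dose):
    # relative deviation (%) of the participant's stated dose from the
    # TLD-measured dose; the sign convention is an assumption
    return 100.0 * (stated_dose - tld_dose) / tld_dose

def audit_result(stated_dose, tld_dose, limit=5.0):
    # the IAEA/WHO audit accepts results within +/- limit percent
    d = tld_deviation(stated_dose, tld_dose)
    return "acceptable" if abs(d) <= limit else "outside limits"

# hypothetical doses in gray: a 3.6% deviation passes, an 11.1% deviation fails
ok = audit_result(2.0, 1.93)
bad = audit_result(2.0, 1.80)
```

Deviations beyond 10% would additionally fall into the "large deviation" category discussed in the note.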
A new proposed approach for future large-scale de-carbonization coal-fired power plants
International Nuclear Information System (INIS)
Xu, Gang; Liang, Feifei; Wu, Ying; Yang, Yongping; Zhang, Kai; Liu, Wenyi
2015-01-01
The post-combustion CO2 capture technology provides a feasible and promising method for large-scale CO2 capture in coal-fired power plants. However, large-scale CO2 capture in conventionally designed coal-fired power plants is confronted with various problems, such as the selection of the steam extraction point and steam parameter mismatch. To resolve these problems, an improved design idea for the future coal-fired power plant with large-scale de-carbonization is proposed. A main characteristic of the proposed design is the adoption of a back-pressure steam turbine, which extracts suitable steam for CO2 capture and ensures the stability of the integrated system. A new let-down steam turbine generator is introduced to retrieve the surplus energy from the exhaust steam of the back-pressure steam turbine when CO2 capture is cut off. Results show that the net plant efficiency of the improved design is 2.56 percentage points higher than that of the conventional one when the CO2 capture ratio reaches 80%. Meanwhile, the net plant efficiency of the improved design remains at the same level as that of the conventional design when CO2 capture is cut off. Finally, the match between the extracted steam and the heat demand of the reboiler is significantly improved, which solves the steam parameter mismatch problem. The techno-economic analysis indicates that the proposed design is a cost-effective approach for large-scale CO2 capture in coal-fired power plants. - Highlights: • Problems caused by CO2 capture in the power plant are deeply analyzed. • An improved design idea for coal-fired power plants with CO2 capture is proposed. • Thermodynamic, exergy and techno-economic analyses are quantitatively conducted. • Energy-saving effects are found in the proposed coal-fired power plant design idea.
Management of Contract Waivers and Deviations for Defense Systems
National Research Council Canada - National Science Library
1998-01-01
This report is the fourth and final in a series of reports resulting from our audit of management of contract waivers and deviations for Defense systems and summarizes our overall evaluation. Report...
New g-2 measurement deviates further from Standard Model
2004-01-01
"The latest result from an international collaboration of scientists investigating how the spin of a muon is affected as this type of subatomic particle moves through a magnetic field deviates further than previous measurements from theoretical predictions" (1 page).
A Note on Standard Deviation and Standard Error
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
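The distinction in the note above is compactly shown in code: the standard deviation describes the spread of individual observations, while the standard error of the mean (SD/√n) describes the uncertainty of the mean itself and shrinks as the sample grows. The data values are arbitrary.

```python
import math

def sd(xs):
    # sample standard deviation: spread of the individual observations
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def sem(xs):
    # standard error of the mean: uncertainty of the estimated mean
    return sd(xs) / math.sqrt(len(xs))

data = [4, 8, 6, 5, 7]
spread = sd(data)        # ~1.581: typical distance of a value from the mean
uncertainty = sem(data)  # ~0.707: typical error of the mean of 5 values
```

Rule of thumb: report SD when describing the data, SEM (or a confidence interval) when describing how precisely the mean is known.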
Prosthodontic management of mandibular deviation using palatal ramp appliance
Directory of Open Access Journals (Sweden)
Prince Kumar
2012-08-01
Segmental resection of the mandible generally results in deviation of the mandible to the defective side. This loss of continuity of the mandible destroys the balance of the lower face and leads to decreased mandibular function, with deviation of the residual segment toward the surgical site. Prosthetic methods advocated to reduce or eliminate mandibular deviation include intermaxillary fixation, a removable mandibular guide flange, a palatal ramp, implant-supported prostheses and palatal guidance restorations, which may be useful in reducing mandibular deviation and improving masticatory performance and efficiency. These methods and restorations should be combined with a well-organized mandibular exercise regimen. This clinical report describes the rehabilitation following segmental mandibulectomy using a palatal ramp prosthesis.
Directory of Open Access Journals (Sweden)
Beitz Eric
2006-06-01
Abstract Background Recognition of relevant sequence deviations can be valuable for elucidating functional differences between protein subfamilies. Interesting residues at highly conserved positions can then be mutated and experimentally analyzed. However, identification of such sites is tedious because automated approaches are scarce. Results Subfamily logos visualize subfamily-specific sequence deviations. The display is similar to classical sequence logos but extends into the negative range: positive, upright characters correspond to residues which are characteristic for the subfamily; negative, upside-down characters to residues typical for the remaining sequences. The symbol height is adjusted to the information content of the alignment position. Residues which are conserved throughout do not appear. Conclusion Subfamily logos provide an intuitive display of relevant sequence deviations. The method has proven valid on a set of 135 aligned aquaporin sequences, in which established subfamily-specific positions were readily identified by the algorithm.
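The symbol-height scaling mentioned above builds on the classical sequence-logo measure of per-column information content. The sketch below computes that classical quantity only; the subfamily-specific signed heights of the paper's method are not reproduced here.

```python
import math
from collections import Counter

def column_information(column, alphabet_size=20):
    # Shannon information content (bits) of one alignment column, as used to
    # scale symbol heights in classical sequence logos (20-letter amino-acid
    # alphabet; small-sample corrections are omitted)
    counts = Counter(column)
    n = len(column)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return math.log2(alphabet_size) - entropy

# a fully conserved column carries maximal information (log2 20 ~ 4.32 bits);
# a column uniform over all 20 residues carries none
conserved = column_information("AAAAAAAA")
uniform = column_information("ACDEFGHIKLMNPQRSTVWY")
```

In a subfamily logo, this information content sets the overall column height, while the split between upright and upside-down characters reflects the subfamily versus the remaining sequences.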
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son
1995-01-01
The present work aims to formulate an experimental approach for searching for the proposed nonexponential deviations from the decay curve, and describes an attempt to test them in the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified form. A continuous kinetic function (CKF) method is described for the analysis of experimental data, and the CKF for the purely exponential case is taken as a standard for comparison between theoretical and experimental data. The degree of agreement is quantified by a goodness factor. Typical oscillatory deviations of the 52V decay were observed over a wide range of time. The proposed deviation, related to interaction between decay products and the environment, is examined. A complex type of decay is discussed. (authors). 10 refs., 4 figs., 2 tabs
On the Horizontal Deviation of a Spinning Projectile Penetrating into Granular Systems
Directory of Open Access Journals (Sweden)
Waseem Ghazi Alshanti
2017-01-01
The absence of a general theory describing the dynamical behavior of particulate materials makes numerical simulation the most powerful current tool for tackling many mechanical problems relevant to granular materials. In this paper, based on a two-dimensional soft-particle discrete element method (DEM), a numerical approach is developed to investigate the consequences of the orthogonal impact into various granular beds of a projectile rotating in either the clockwise (CW) or counterclockwise (CCW) direction. Our results reveal that, depending on the rotation direction, there is a significant deviation of the x-coordinate of the final stopping point of a spinning projectile from that of its original impact point. For CW rotations, a deviation to the right occurs, while a left deviation was recorded for the CCW rotation case.
Non-linear neutron star oscillations viewed as deviations from an equilibrium state
International Nuclear Information System (INIS)
Sperhake, U
2002-01-01
A numerical technique is presented which facilitates the evolution of non-linear neutron star oscillations with a high accuracy essentially independent of the oscillation amplitude. We apply this technique to radial neutron star oscillations in a Lagrangian formulation and demonstrate the superior performance of the new scheme compared with 'conventional' techniques. The key feature of our approach is to describe the evolution in terms of deviations from an equilibrium configuration. In contrast to standard perturbation analysis we keep all higher order terms in the evolution equations and thus obtain a fully non-linear description. The advantage of our scheme lies in the elimination of background terms from the equations and the associated numerical errors. The improvements thus achieved will be particularly significant in the study of mildly non-linear effects where the amplitude of the dynamic signal is small compared with the equilibrium values but large enough to warrant non-linear effects. We apply the new technique to the study of non-linear coupling of eigenmodes and non-linear effects in the oscillations of marginally stable neutron stars. We find non-linear effects in low amplitude oscillations to be particularly pronounced in the range of modes with vanishing frequency which typically mark the onset of instability. (author)
The Analysis of a Deviation of Investment and Corporate Governance
Shoichi Hisa
2008-01-01
Investment of firms is affected not only by fundamental factors but also by liquidity constraints and ownership or corporate structure. The information structure between manager and owner is a significant factor in determining the level of investment and the deviation of investment from the optimal condition. The reputation model between manager and owner suggests that the separation of ownership and management may induce a deviation of investment, and indicates that governance structure is important in reducing it. In th...
Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R
2017-01-01
Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW = NB-BIGRAM = SVM had very high performance (0.93 overall sensitivity/positive predictive value) and high accuracy (i.e. high sensitivity and positive predictive values) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as
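A toy version of the prediction-strength filtering described above: train a single-word Naïve Bayes classifier, then route the least-confident fraction of narratives to manual review and keep machine codes for the rest. The narratives, labels, and review fraction are invented for illustration and are far smaller than the study's workers compensation data.

```python
import math
from collections import Counter

class NaiveBayes:
    # single-word multinomial Naive Bayes with Laplace smoothing
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc.lower().split())
        self.vocab = set(w for c in self.classes for w in self.counts[c])
        return self

    def predict_proba(self, doc):
        logp = {}
        for c in self.classes:
            total = sum(self.counts[c].values()) + len(self.vocab)
            logp[c] = self.prior[c] + sum(
                math.log((self.counts[c][w] + 1) / total)
                for w in doc.lower().split() if w in self.vocab)
        mx = max(logp.values())
        z = sum(math.exp(v - mx) for v in logp.values())
        return {c: math.exp(v - mx) / z for c, v in logp.items()}

def triage(model, narratives, review_fraction=0.3):
    # weakest `review_fraction` of predictions go to manual review;
    # the rest keep their machine-assigned codes
    scored = [(max(model.predict_proba(d).values()), d) for d in narratives]
    scored.sort()
    cut = int(len(scored) * review_fraction)
    return [d for _, d in scored[:cut]], [d for _, d in scored[cut:]]

docs = ["fell from ladder", "slipped on wet floor", "struck by falling box",
        "fell off roof", "slip and fall on ice", "hit by moving forklift"]
labels = ["fall", "fall", "struck", "fall", "fall", "struck"]
nb = NaiveBayes().fit(docs, labels)
manual, machine = triage(nb, ["fell down stairs", "worker struck by pallet",
                              "unclear incident report"], review_fraction=0.34)
```

Here the narrative with no informative words falls back to the class priors, gets the weakest prediction, and is the one routed for manual review, mirroring the strategic-filtering idea of the study.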
Linear maps preserving maximal deviation and the Jordan structure of quantum systems
International Nuclear Information System (INIS)
Hamhalter, Jan
2012-01-01
In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnár.
Yu, Huai-Zhong; Yin, Xiang-Chu; Zhu, Qing-Yong; Yan, Yu-Ding
2006-12-01
The concept of a state vector stems from statistical physics, where it is usually used to describe activity patterns of a physical field in a coarse-grained manner. In this paper, we propose an approach in which the state vector is applied to describe quantitatively the damage evolution of brittle heterogeneous systems, and some interesting results are presented: prior to the macro-fracture of rock specimens and the occurrence of a strong earthquake, the evolutions of the four relevant scalar time series derived from the state vectors changed anomalously. As retrospective studies, some prominent large earthquakes that occurred in the Chinese Mainland (e.g., the M 7.4 Haicheng earthquake on February 4, 1975, and the M 7.8 Tangshan earthquake on July 28, 1976) were investigated. Results show considerable promise that the time-dependent state vectors could serve as a kind of precursor for predicting earthquakes.
Takayasu, Misako; Takayasu, Hideki; Econophysics Approaches to Large-Scale Business Data and Financial Crisis
2010-01-01
The new science of econophysics has arisen out of the information age. As large-scale economic data are increasingly generated by industries and enterprises worldwide, researchers from fields such as physics, mathematics, and the information sciences are becoming involved. The vast number of transactions taking place, both in the financial markets and in the retail sector, has traditionally been studied by economists and management scientists, and is now studied by econophysicists as well. Using cutting-edge tools of computational analysis while searching for regularities and “laws” such as those found in the natural sciences, econophysicists have come up with intriguing results. The ultimate aim is to establish fundamental data collection and analysis techniques that embrace the expertise of a variety of academic disciplines. This book comprises selected papers from the international conference on novel analytical approaches to economic data held in Tokyo in March 2009. The papers include detailed reports on the market behavior during the finan...
International Nuclear Information System (INIS)
Bedregal, P.S.; Mendoza, A.; Montoya, E.H.; Cohen, I.M.; Universidad Tecnologica Nacional, Buenos Aires; Oscar Baltuano
2012-01-01
A new approach to the analysis of entire potsherds of archaeological interest by INAA, using the conventional relative method, is described. The analytical method proposed involves, primarily, the preparation of replicates of the original archaeological pottery with well-known chemical composition (standards), to be irradiated simultaneously with the original object (sample) in a well-thermalized external neutron beam of the RP-10 reactor. The basic advantage of this proposal is that it avoids the complicated corrections for effects that arise when dealing with large samples, namely neutron self-shielding, neutron self-thermalization and gamma-ray attenuation. In addition, and in contrast with other methods, its main advantages are the possibility of evaluating the uncertainty of the results and, fundamentally, of validating the overall methodology. (author)
Outcomes of minimally invasive strabismus surgery for horizontal deviation.
Merino, P; Blanco Domínguez, I; Gómez de Liaño, P
2016-02-01
To study the outcomes of minimally invasive strabismus surgery (MISS) for treating horizontal deviation. A case series of the first 26 consecutive patients operated on using the MISS technique in our hospital from February 2010 to March 2014. A total of 40 eyes were included: 26 patients (mean age: 7.7 ± 4.9 years; 34.61% male). A total of 43 muscles were operated on: 20 medial and 23 lateral recti; 28 recessions (range: 3-7.5 mm), 6 resections (6-7 mm), and 9 plications (6.5-7.5 mm) were performed. No significant difference was found (P>0.05) in visual acuity between postoperative day 1 and 6 months after surgery. Hyperaemia was mild in 29.27% of cases, moderate in 48.78%, and severe in 21.95% at postoperative day 1, and in 63.41%, 31.70% and 4.87%, respectively, at 4 days after surgery. The complications observed were 4 intraoperative conjunctival haemorrhages, 1 scleral perforation, and 2 Tenon's prolapses. A conversion from MISS to a fornix approach was necessary in 1 patient because of poor visualization. The operating time decreased from 30 to 15 minutes. The MISS technique achieved good results in horizontal strabismus surgery. Conjunctival inflammation was mild in most cases at postoperative day 4. Visual acuity was stable during follow-up, and operating time decreased after a 4-year learning curve. Copyright © 2015 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.
Petrothermal heat extraction using a single deviated well (Horstberg, revisited)
Ghergut, Julia; Behrens, Horst; Vogt, Esther; Bartetzko, Anne; Sauter, Martin
2013-04-01
The single-well tracer test conducted (Behrens et al. 2006) in conjunction with waterfrac experiments at Horstberg is re-examined with a view to four basic issues: why single-well? why fracturing? why tracers? does this only work at Horstberg, or can it work almost anywhere else in the North German sedimentary basin? Heat and tracer transport within a composite reservoir (impermeable matrix + waterfrac + permeable layer), as accessed by a single deviated well, turns out to fit a surprisingly simple description: the plain (arithmetic) sum of certain petrothermal-type and aquifer-type contributions, whose weighting relative to each other can vary from site to site, depending upon stratigraphy and wellbore geometry. At Horstberg, within the particular formations tested ('Volpriehausen', 'Detfurth', 'Solling', comprising mainly claystone and sandstone layers), the thermal lifetime turns out to be petrothermally dominated, while tracer residence times prove to be 'aquifer'-dominated. Despite this disparity, the reservoir's thermal lifetime can reliably be predicted from tracer test results. What cannot be determined from waterfrac flow-path tracing is the aperture of the waterfrac itself. Aperture uncertainty, however, does not impair the predictability of thermal lifetime. The results of the semi-analytical approach are confirmed by numerical simulations using a FE model that includes more details of the hydrogeological heterogeneity at the Horstberg site. They are complemented by a parameter sensitivity analysis. ACKNOWLEDGEMENT: This study is funded by MWK Niedersachsen (Lower Saxony's Science and Culture Ministry) and by Baker Hughes (Celle) within task unit G6 of the Collaborative Research Project 'gebo' ('Geothermal Energy and High-Performance Drilling').
Deetjen, Ulrike; Powell, John A
2016-05-01
This research examines the extent to which informational and emotional elements are employed in online support forums for 14 purposively sampled chronic medical conditions, and the factors that influence whether posts are of a more informational or emotional nature. Large-scale qualitative data were obtained from Dailystrength.org. Based on a hand-coded training dataset, all posts were classified as informational or emotional using a Bayesian classification algorithm to generalize the findings. Posts that could not be classified with a probability of at least 75% were excluded. The overall tendency toward emotional posts differs by condition: forums for mental health conditions (depression, schizophrenia) and Alzheimer's disease contain more emotional posts, while informational posts relate more to nonterminal physical conditions (irritable bowel syndrome, diabetes, asthma). There is no gender difference across conditions, although prostate cancer forums are oriented toward informational support, whereas breast cancer forums feature more emotional support. Across diseases, the best predictors of emotional content are lower age and a higher number of overall posts by the support group member. The results are in line with previous empirical research and unify empirical findings from single- and two-condition research. Limitations include the analytical restriction to predefined categories (informational, emotional) through the chosen machine-learning approach. Our findings provide an empirical foundation for building theory on informational versus emotional support across conditions, give practitioners insights to better understand the role of online support groups for different patients, and show the usefulness of machine-learning approaches for analyzing large-scale qualitative health data from online settings. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.
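The classification step described above can be illustrated with a minimal multinomial Naive Bayes sketch. The toy posts, labels, and function names below are invented for illustration; only the 75% posterior cutoff mirrors the abstract, and the study's actual features and training data are not reproduced.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label) pairs; returns class counts,
    per-class word counts, and the vocabulary."""
    labels = Counter(lbl for _, lbl in docs)
    words = defaultdict(Counter)
    for tokens, lbl in docs:
        words[lbl].update(tokens)
    vocab = {w for c in words.values() for w in c}
    return labels, words, vocab

def classify(tokens, labels, words, vocab, cutoff=0.75):
    """Return the most probable label if its posterior reaches `cutoff`,
    else None (mirroring the exclusion of low-confidence posts)."""
    total = sum(labels.values())
    logp = {}
    for lbl, n in labels.items():
        lp = math.log(n / total)
        denom = sum(words[lbl].values()) + len(vocab)
        for w in tokens:
            lp += math.log((words[lbl][w] + 1) / denom)  # Laplace smoothing
        logp[lbl] = lp
    # convert log-likelihoods to normalized posteriors
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    post = {lbl: math.exp(v - m) / z for lbl, v in logp.items()}
    best = max(post, key=post.get)
    return best if post[best] >= cutoff else None

# toy hand-labelled training posts (entirely made up)
TOY_DOCS = [
    ("sad alone scared".split(), "emotional"),
    ("worried sad cry".split(), "emotional"),
    ("insulin dose pump".split(), "informational"),
    ("dose schedule insulin".split(), "informational"),
]
LABELS, WORDS, VOCAB = train_nb(TOY_DOCS)
```

A post mixing vocabularies (e.g. "pump sad") falls below the cutoff and is excluded, just as ambiguous posts were excluded from the study's analysis.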
Lee, Chin Yik; Cant, Stewart
2017-07-01
A premixed propane-air flame stabilised on a triangular bluff body in a model jet-engine afterburner configuration is investigated using large-eddy simulation (LES). The reaction rate source term for turbulent premixed combustion is closed using the transported flame surface density (TFSD) model. In this approach, there is no need to assume local equilibrium between the generation and destruction of subgrid FSD, as is commonly done in simple algebraic closure models. Instead, the key processes that create and destroy FSD are accounted for explicitly. This allows the model to capture large-scale unsteady flame propagation in the presence of combustion instabilities, or in situations where the flame encounters progressive wrinkling with time. In this study, a comprehensive validation of the numerical method is carried out. For the non-reacting flow, good agreement for both the time-averaged and root-mean-square velocity fields is obtained, and the Kármán-type vortex shedding behaviour seen in the experiment is well represented. For the reacting flow, two mesh configurations are used to investigate the sensitivity of the LES results to the numerical resolution. Profiles of the velocity and temperature fields exhibit good agreement with the experimental data for both the coarse and the dense mesh. This demonstrates the capability of LES coupled with the TFSD approach in representing the highly unsteady premixed combustion observed in this configuration. The instantaneous flow pattern and turbulent flame behaviour are discussed, and the differences between the non-reacting and reacting flow are described through visualisation of vortical structures and their interaction with the flame. Lastly, the generation and destruction of FSD are evaluated by examining the individual terms in the FSD transport equation. Localised regions where straining, curvature and propagation are each dominant are observed, highlighting the importance of non-equilibrium effects of FSD generation and
Directory of Open Access Journals (Sweden)
Xiaohan Liu
2015-08-01
Full Text Available Aquatic vegetation serves many important ecological and socioeconomic functions in lake ecosystems. The presence of floating algae poses difficulties for accurately estimating the distribution of aquatic vegetation in eutrophic lakes. We present an approach to map the distribution of aquatic vegetation in Lake Taihu (a large, shallow eutrophic lake in China) and reduce the influence of floating algae on aquatic vegetation mapping. Our approach involved a frequency analysis over a 2003–2013 time series of the floating algal index (FAI) based on moderate-resolution imaging spectroradiometer (MODIS) data. Three phenological periods were defined based on the vegetation presence frequency (VPF) and the growth of algae and aquatic vegetation: December and January composed the period of wintering aquatic vegetation; February and March composed the period of prolonged coexistence of algal blooms and wintering aquatic vegetation; and June to October was the peak period of the coexistence of algal blooms and aquatic vegetation. By comparing and analyzing the satellite-derived aquatic vegetation distribution and 244 in situ measurements made in 2013, we established a FAI threshold of −0.025 and VPF thresholds of 0.55, 0.45 and 0.85 for the three phenological periods. We validated the accuracy of our approach by comparing the results between the satellite-derived maps and the in situ results obtained from 2008–2012. The overall classification accuracy was 87%, 81%, 77%, 88% and 73% in the five years from 2008–2012, respectively. We then applied the approach to the MODIS images from 2003–2013 and obtained the total area of the aquatic vegetation, which varied from 265.94 km2 in 2007 to 503.38 km2 in 2008, with an average area of 359.62 ± 69.20 km2 over the 11 years. Our findings suggest that (1) the proposed approach can be used to map the distribution of aquatic vegetation in eutrophic algae-rich waters and (2) dramatic changes occurred in the
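The thresholding logic of this frequency-based approach can be sketched as follows. The FAI threshold of −0.025 and the per-period VPF thresholds of 0.55, 0.45 and 0.85 come from the abstract; the tiny image stack, the period names, and the function names are invented, and the sketch omits the full workflow that separates transient algal blooms from vegetation.

```python
import numpy as np

FAI_THRESHOLD = -0.025  # from the study: separates vegetation/algae signal from water
VPF_THRESHOLDS = {"winter": 0.55, "coexist": 0.45, "peak": 0.85}  # per phenological period

def vegetation_presence_frequency(fai_stack):
    """fai_stack: (n_dates, ny, nx) FAI images for one phenological period.
    VPF = fraction of dates on which a pixel exceeds the FAI threshold."""
    return (fai_stack > FAI_THRESHOLD).mean(axis=0)

def map_vegetation(fai_stack, period):
    """Boolean vegetation map: pixels whose presence frequency passes
    the threshold of the given phenological period."""
    return vegetation_presence_frequency(fai_stack) >= VPF_THRESHOLDS[period]

# toy 4-date stack with two pixels: one persistently vegetated, one transient
stack = np.array([
    [[0.10, 0.10]],
    [[0.20, -0.10]],
    [[0.00, -0.30]],
    [[0.05, 0.20]],
])
```

The frequency step is what makes the method robust to floating algae: a transient bloom raises the FAI on a few dates only, so its presence frequency stays below the period threshold.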
International Nuclear Information System (INIS)
Wissmann, F; Reginatto, M; Moeller, T
2010-01-01
The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
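The idea of fitting a low-order expansion and attaching an uncertainty to each coefficient can be sketched with ordinary least squares on synthetic data. This is a simplification of the paper's Bayesian/Monte Carlo model comparison, and every number below, including the exponential rigidity factor, is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# synthetic flight "measurements" (illustrative values, not the actual dataset)
h  = rng.uniform(8.0, 12.0, n)       # barometric altitude / km
rc = rng.uniform(0.0, 15.0, n)       # vertical cut-off rigidity / GV
s  = rng.uniform(0.0, 1.0, n)        # normalised phase of the solar cycle
true_c = np.array([2.0, 0.5, -0.4])  # made-up Taylor coefficients

shield = np.exp(-rc / 10.0)          # assumed known exponential rigidity dependence
y = shield * (true_c[0] + true_c[1] * (h - 10.0) + true_c[2] * s)
y += rng.normal(0.0, 0.05, n)        # measurement noise

# first-order Taylor expansion in altitude and solar activity:
#   y / shield ≈ c0 + c1*(h - 10) + c2*s
X = np.column_stack([np.ones(n), h - 10.0, s])
z = y / shield
c, *_ = np.linalg.lstsq(X, z, rcond=None)

# Gaussian error propagation gives an uncertainty for each expansion coefficient
resid = z - X @ c
sigma2 = resid @ resid / (n - 3)
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))
```

The standard errors `se` play the role of the coefficient probability distributions in the paper: a coefficient whose interval includes zero can be dropped from the expansion, which is how the relevance of each term can be judged.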
Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach
Energy Technology Data Exchange (ETDEWEB)
Messer, Bronson [ORNL; Sewell, Christopher [Los Alamos National Laboratory (LANL); Heitmann, Katrin [ORNL; Finkel, Dr. Hal J [Argonne National Laboratory (ANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Zagaris, George [Lawrence Livermore National Laboratory (LLNL); Pope, Adrian [Los Alamos National Laboratory (LANL); Habib, Salman [ORNL; Parete-Koon, Suzanne T [ORNL
2015-01-01
Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.
Fuller, Nathaniel J.; Licata, Nicholas A.
2018-05-01
Obtaining a detailed understanding of the physical interactions between a cell and its environment often requires information about the flow of fluid surrounding the cell. Cells must be able to effectively absorb and discard material in order to survive. Strategies for nutrient acquisition and toxin disposal, which have been evolutionarily selected for their efficacy, should reflect knowledge of the physics underlying this mass transport problem. Motivated by these considerations, in this paper we discuss the results from an undergraduate research project on the advection-diffusion equation at small Reynolds number and large Péclet number. In particular, we consider the problem of mass transport for a Stokesian spherical swimmer. We approach the problem numerically and analytically through a rescaling of the concentration boundary layer. A biophysically motivated first-passage problem for the absorption of material by the swimming cell demonstrates quantitative agreement between the numerical and analytical approaches. We conclude by discussing the connections between our results and the design of smart toxin disposal systems.
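The dimensionless regime discussed here, small Reynolds number with potentially large Péclet number, is easy to check with order-of-magnitude values for a micron-scale swimmer. The specific numbers below are illustrative and not taken from the paper.

```python
# illustrative parameters for a micron-scale swimmer in water
U = 30e-6        # swimming speed, m/s
a = 1e-6         # cell radius, m
nu = 1e-6        # kinematic viscosity of water, m^2/s
D_small = 1e-9   # diffusivity of a small nutrient molecule, m^2/s
D_macro = 1e-12  # assumed diffusivity of a large macromolecule, m^2/s

Re = U * a / nu             # ~3e-5: inertia is negligible (Stokes flow)
Pe_small = U * a / D_small  # ~0.03: small molecules reach the cell by diffusion
Pe_macro = U * a / D_macro  # ~30: advection matters for slowly diffusing cargo
```

For Pe much greater than one, the concentration field collapses into a thin boundary layer around the cell, which is the rescaling the paper exploits.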
Directory of Open Access Journals (Sweden)
Ahmadreza Vajdi
2018-05-01
Full Text Available We study the problem of employing a mobile sink in a large-scale Event-Driven Wireless Sensor Network (EWSN) for the purpose of data harvesting from sensor nodes. Generally, this employment mitigates the main weakness of WSNs, namely energy consumption in battery-driven sensor nodes. The main motivation of our work is to address challenges related to a network's topology by adopting a mobile sink that moves along a predefined trajectory in the environment. Since, in this fashion, it is not possible to gather data from sensor nodes individually, we adopt the approach of designating some of the sensor nodes as Rendezvous Points (RPs) in the network. We argue that RP planning in this case is a tradeoff between minimizing the number of RPs and decreasing the number of hops for a sensor node that needs to forward its data to the related RP, which leads to minimizing the average energy consumption in the network. We address the problem by formulating the challenges and expectations as a Mixed Integer Linear Program (MILP). Then, having proved the NP-hardness of the problem, we propose three effective and distributed heuristics for RP planning, identifying sojourn locations, and constructing routing trees. Finally, experimental results prove the effectiveness of our approach.
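The flavour of the RP-planning tradeoff can be sketched with a greedy set-cover-style heuristic. This is not one of the paper's three heuristics: the geometry, the hop-count proxy (Euclidean distance divided by the communication range), and the function name are all invented for illustration.

```python
import math

def greedy_rp_planning(nodes, trajectory_y, comm_range, hop_bound):
    """Greedy RP selection sketch. nodes: list of (x, y) positions; the sink
    moves along the horizontal line y = trajectory_y. Candidate RPs are nodes
    the sink can reach directly; each RP 'covers' the nodes within
    hop_bound * comm_range (a crude proxy for a hop-count limit)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    candidates = [p for p in nodes if abs(p[1] - trajectory_y) <= comm_range]
    uncovered = set(range(len(nodes)))
    rps = []
    while uncovered and candidates:
        # pick the candidate covering the most still-uncovered nodes
        best = max(candidates,
                   key=lambda c: sum(1 for i in uncovered
                                     if dist(nodes[i], c) <= hop_bound * comm_range))
        covered = {i for i in uncovered
                   if dist(nodes[i], best) <= hop_bound * comm_range}
        if not covered:
            break  # remaining nodes unreachable within the hop bound
        rps.append(best)
        uncovered -= covered
        candidates.remove(best)
    return rps, uncovered
```

Tightening `hop_bound` reduces per-node forwarding cost but forces more RPs, which is exactly the tradeoff the MILP formalizes.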
Bozzeda, Fabio; Zangrilli, Maria Paola; Defeo, Omar
2016-06-01
A Fuzzy Naïve Bayes (FNB) classifier was developed to assess large-scale variations in abundance, species richness and diversity of the macrofauna inhabiting fifteen Uruguayan sandy beaches affected by beach morphodynamics and the estuarine gradient generated by the Rio de la Plata. Information from six beaches was used to estimate the FNB parameters, while abiotic data from the remaining nine beaches were used to forecast abundance, species richness and diversity. FNB simulations reproduced the general increasing trend of the target variables from inner estuarine reflective beaches to marine dissipative ones. The FNB model also identified a threshold value of salinity range beyond which diversity markedly increased towards marine beaches. Salinity range is suggested as an ecological master factor governing distributional patterns in sandy beach macrofauna. However, the model: 1) underestimated abundance and species richness at the innermost estuarine beach, with the lowest salinity, and 2) overestimated species richness at marine beaches with a reflective morphodynamic state, which is strongly linked to low abundance, species richness and diversity. Therefore, future modeling efforts should be refined by assigning dissimilar weights to the gradients defined by estuarine (estuarine beaches) and morphodynamic (marine beaches) variables, which could improve predictions of the target variables. Our modeling approach could be applied to a wide spectrum of issues, ranging from basic ecology to social-ecological systems. This approach seems relevant, given the current challenge of developing predictive methodologies to assess the simultaneous and nonlinear effects of anthropogenic and natural impacts on coastal ecosystems.
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have recently been developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Moreover, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can easily be computed from unphased and unpolarized SNP data. Our approach provides accurate estimates of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
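The core ABC mechanism behind such methods can be sketched with a toy rejection sampler. Everything below is invented: a single constant population size replaces the paper's size trajectories, and a single made-up diversity summary stands in for the allele frequency spectrum and linkage disequilibrium statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(N, n_loci=200):
    """Toy stand-in for the genomic summaries: expected per-locus diversity
    grows (and saturates) with population size N; the made-up curve and the
    binomial sampling noise are purely illustrative."""
    theta = N / (N + 5000.0)
    return rng.binomial(n_loci, theta) / n_loci

def abc_rejection(observed, n_draws=20000, tol=0.02):
    """Rejection ABC: draw N from the prior, simulate the summary, and keep
    the draws whose simulated summary lands within `tol` of the data."""
    accepted = []
    for _ in range(n_draws):
        N = rng.uniform(100, 50000)  # flat prior on population size
        if abs(simulate_summary(N) - observed) <= tol:
            accepted.append(N)
    return np.array(accepted)

posterior = abc_rejection(observed=simulate_summary(10000))
```

The accepted draws approximate the posterior of N given the observed summary; PopSizeABC applies the same logic to a vector of summaries and a piecewise population size history.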
Wu, Bin; Zheng, Yi; Wu, Xin; Tian, Yong; Han, Feng; Liu, Jie; Zheng, Chunmiao
2015-04-01
Integrated surface water-groundwater modeling can provide a comprehensive and coherent understanding of the basin-scale water cycle, but its high computational cost has impeded its application in real-world management. This study developed a new surrogate-based approach, SOIM (Surrogate-based Optimization for Integrated surface water-groundwater Modeling), to incorporate integrated modeling into water management optimization. Its applicability and advantages were evaluated and validated through an optimization study on the conjunctive use of surface water (SW) and groundwater (GW) for irrigation in a semiarid region in northwest China. GSFLOW, an integrated SW-GW model developed by the USGS, was employed. The study results show that, owing to the strong and complicated SW-GW interactions, basin-scale water saving could be achieved by spatially optimizing the ratios of groundwater use in different irrigation districts. The water-saving potential essentially stems from the reduction of nonbeneficial evapotranspiration from the aqueduct system and shallow groundwater, and its magnitude largely depends on both water management schemes and hydrological conditions. Important implications for water resources management in general include: first, environmental flow regulation needs to take into account the interannual variation of hydrological conditions, as well as the spatial complexity of SW-GW interactions; and second, to resolve water use conflicts between upstream and downstream, a system approach is highly desirable to reflect ecological, economic, and social concerns in water management decisions. Overall, this study highlights that surrogate-based approaches like SOIM represent a promising solution for filling the gap between complex environmental modeling and real-world management decision-making.
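The surrogate idea can be sketched with a minimal loop: evaluate the expensive model at a few points, fit a cheap approximation, optimize the approximation, and spend the next expensive evaluation at its optimum. The objective, the quadratic surrogate, and all parameter values below are invented; SOIM's actual surrogate and search strategy are not reproduced here.

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a costly integrated SW-GW simulation run: non-beneficial
    water loss as a function of a groundwater-use ratio x (made-up shape)."""
    return (x - 0.63) ** 2 + 0.05 * np.sin(25.0 * x)

def surrogate_optimize(f, n_init=6, n_iter=10):
    """Minimal surrogate loop: fit a cheap quadratic to all evaluated points,
    jump to its minimiser, evaluate the true model there, refit, repeat."""
    xs = list(np.linspace(0.0, 1.0, n_init))
    ys = [float(f(x)) for x in xs]
    for _ in range(n_iter):
        a, b, _c = np.polyfit(xs, ys, 2)
        # vertex of the surrogate, clipped to the feasible ratio range [0, 1]
        x_new = float(np.clip(-b / (2.0 * a), 0.0, 1.0)) if a > 0 else 0.5
        xs.append(x_new)
        ys.append(float(f(x_new)))
    i = int(np.argmin(ys))
    return xs[i], ys[i]

x_best, y_best = surrogate_optimize(expensive_model)
```

The payoff is the evaluation budget: the loop above calls the "simulation" only n_init + n_iter times, whereas optimizing the true model directly would need far more runs.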
Effect of nasal deviation on quality of life.
de Lima Ramos, Sueli; Hochman, Bernardo; Gomes, Heitor Carvalho; Abla, Luiz Eduardo Felipe; Veiga, Daniela Francescato; Juliano, Yara; Dini, Gal Moreira; Ferreira, Lydia Masako
2011-07-01
Nasal deviation is a common complaint in otorhinolaryngology and plastic surgery. This condition not only causes impairment of nasal function but also affects quality of life, leading to psychological distress. The subjective assessment of quality of life, as an important aspect of outcomes research, has received increasing attention in recent decades. Quality of life is measured using standardized questionnaires that have been tested for reliability, validity, and sensitivity. The aim of this study was to evaluate health-related quality of life, self-esteem, and depression in patients with nasal deviation. Sixty patients were selected for the study. Patients with nasal deviation (n = 32) were assigned to the study group, and patients without nasal deviation (n = 28) were assigned to the control group. The diagnosis of nasal deviation was made by digital photogrammetry. Quality of life was assessed using the Medical Outcomes Study 36-Item Short Form Health Survey questionnaire; the Rosenberg Self-Esteem/Federal University of São Paulo, Escola Paulista de Medicina Scale; and the 20-item Self-Report Questionnaire. There were significant differences between groups in the physical functioning and general health subscales of the Medical Outcomes Study 36-Item Short Form Health Survey (p < 0.05). Depression was detected in 11 patients (34.4 percent) in the study group and in two patients in the control group, a significant difference between groups (p < 0.05). Nasal deviation is thus an aspect of rhinoplasty of which the surgeon should be aware, so that a proper psychological diagnosis can be made and suitable treatment planned: patients with nasal deviation have a significantly worse quality of life and are more prone to depression. Risk, II.
Performance of Phonatory Deviation Diagrams in Synthesized Voice Analysis.
Lopes, Leonardo Wanderley; da Silva, Karoline Evangelista; da Silva Evangelista, Deyverson; Almeida, Anna Alice; Silva, Priscila Oliveira Costa; Lucero, Jorge; Behlau, Mara
2018-05-02
To analyze the performance of a phonatory deviation diagram (PDD) in discriminating the presence and severity of voice deviation and the predominant voice quality of synthesized voices. A speech-language pathologist performed the auditory-perceptual analysis of the synthesized voices (n = 871). The PDD distribution of voice signals was analyzed according to area, quadrant, shape, and density. Differences in signal distribution with regard to PDD area and quadrant were detected when differentiating signals with and without voice deviation and with different predominant voice qualities. Differences in signal distribution were found in all PDD parameters as a function of the severity of the voice disorder. The PDD area and quadrant can differentiate normal voices from deviant synthesized voices. There are differences in signal distribution in PDD area and quadrant as a function of the severity of the voice disorder and the predominant voice quality. However, the PDD area and quadrant do not differentiate the signals as a function of severity of the voice disorder, and differentiated only the breathy and rough voices from the normal and strained voices. PDD density is able to differentiate only signals with moderate and severe deviation. PDD shape shows differences between signals with different severities of voice deviation. © 2018 S. Karger AG, Basel.
Directory of Open Access Journals (Sweden)
Tripputi Mark
2006-10-01
Full Text Available Abstract. Background: Many of the most popular pre-processing methods for Affymetrix expression arrays, such as RMA, gcRMA, and PLIER, simultaneously analyze data across a set of predetermined arrays to improve the precision of the final measures of expression. One problem associated with these algorithms is that expression measurements for a particular sample are highly dependent on the set of samples used for normalization, and results obtained by normalization with a different set may not be comparable. A related problem is that an organization producing and/or storing large amounts of data in a sequential fashion will need either to re-run the pre-processing algorithm every time an array is added, or to store arrays in batches that are pre-processed together. Furthermore, pre-processing of large numbers of arrays requires loading all the feature-level data into memory, which is a difficult task even with modern computers. We utilize a scheme that produces all the information necessary for pre-processing from a very large training set, which can then be used for summarization of samples outside the training set. All subsequent pre-processing tasks can be done on an individual-array basis. We demonstrate the utility of this approach by defining a new version of the Robust Multi-chip Averaging (RMA) algorithm, which we refer to as refRMA. Results: We assess performance based on multiple sets of samples processed over HG U133A Affymetrix GeneChip® arrays. We show that the refRMA workflow, when used in conjunction with a large, biologically diverse training set, results in the same general characteristics as that of RMA in its classic form when comparing overall data structure, sample-to-sample correlation, and variation. Further, we demonstrate that the refRMA workflow and reference set can be robustly applied to naïve organ types and to benchmark data, where it performs respectably. Conclusion: Our results indicate that a biologically diverse
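The core idea, freezing a reference distribution from a large training set so that each new array can be normalized on its own, can be sketched as reference quantile normalization. This is a simplification of the full refRMA workflow, which also includes background correction and probe-level summarization; the function names and toy arrays are invented.

```python
import numpy as np

def build_reference(training_arrays):
    """Mean quantile distribution of the training set: the frozen target that
    makes later normalization independent of which arrays are batched together."""
    return np.sort(np.asarray(training_arrays, float), axis=1).mean(axis=0)

def normalize_single(array, reference):
    """Quantile-normalize one new array against the stored reference:
    each value is replaced by the reference value of the same rank."""
    ranks = np.argsort(np.argsort(array))
    return reference[ranks]

# toy training set of two 3-probe arrays; the frozen reference is [1.5, 3.0, 4.5]
reference = build_reference([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
```

Because `normalize_single` touches only one array and the stored reference, arrays can be processed as they arrive, without re-running the pre-processing over the whole collection.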
Mean-deviation analysis in the theory of choice.
Grechuk, Bogdan; Molyboha, Anton; Zabarankin, Michael
2012-08-01
Mean-deviation analysis, along with the existing theories of coherent risk measures and dual utility, is examined in the context of the theory of choice under uncertainty, which studies rational preference relations for random outcomes based on different sets of axioms such as transitivity, monotonicity, and continuity. An axiomatic foundation of the theory of coherent risk measures is obtained as a relaxation of the axioms of dual utility theory, and a further relaxation of the axioms is shown to lead to mean-deviation analysis. Paradoxes arising from the sets of axioms corresponding to these theories and their possible resolutions are discussed, and the application of mean-deviation analysis to optimal risk sharing and portfolio selection in the context of rational choice is considered. © 2012 Society for Risk Analysis.
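A mean-deviation functional ranks a random outcome X by E[X] − λ·D(X), where D is a deviation measure such as the standard deviation or the mean absolute deviation. The sketch below, with invented payoffs, shows the basic mechanics; it is an illustration of the general form, not of the paper's axiomatic development.

```python
import numpy as np

def mean_deviation_score(outcomes, probs, lam=1.0, deviation="std"):
    """Score a discrete random outcome X by E[X] - lam * D(X): a mean-deviation
    functional in which lam >= 0 trades expected gain against variability."""
    outcomes = np.asarray(outcomes, float)
    probs = np.asarray(probs, float)
    mean = probs @ outcomes
    if deviation == "std":
        d = np.sqrt(probs @ (outcomes - mean) ** 2)
    else:  # mean absolute deviation
        d = probs @ np.abs(outcomes - mean)
    return float(mean - lam * d)
```

For a sure payoff of 1.0 versus a 50/50 gamble on 0 or 2.2, the gamble has mean 1.1 and standard deviation 1.1: with λ = 1 it scores 0.0 and the sure payoff is preferred, while with λ = 0 the ranking reverses, illustrating how λ encodes the attitude toward variability.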
Minimizing Hexapod Robot Foot Deviations Using Multilayer Perceptron
Directory of Open Access Journals (Sweden)
Vytautas Valaitis
2015-12-01
Rough-terrain traversability is one of the most valuable characteristics of walking robots. Despite their slower speeds and more complex control algorithms, walking robots have far wider usability than wheeled or tracked robots. However, efficient movement over irregular surfaces can only be achieved by eliminating all possible difficulties, which in many cases are caused by a high number of degrees of freedom, foot slippage, friction and inertia between different robot parts, or even badly developed inverse kinematics (IK). In this paper we address the hexapod robot-foot deviation problem. We compare the foot-positioning accuracy of unconfigured inverse kinematics and Multilayer Perceptron (MLP)-based methods via theory, computer modelling and experiments on a physical robot. Using MLP-based methods, we were able to significantly decrease deviations while reaching desired positions with the hexapod's foot. Furthermore, this method is able to compensate for deviations of the robot arising from any cause.
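A minimal sketch of the underlying idea, training an MLP to predict the systematic foot-position deviation so it can be compensated, might look as follows. The deviation model, network size and training data are invented for the example; the paper's actual robot model and training procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the uncorrected IK places the foot with a smooth,
# position-dependent deviation; the MLP learns to predict that deviation.
targets = rng.uniform(-1.0, 1.0, size=(200, 2))           # desired (x, y)
deviation = 0.1 * np.sin(3.0 * targets) + 0.05 * targets  # systematic error

# One hidden layer with tanh activation, plain gradient descent on the
# squared prediction error.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

losses = []
lr = 0.05
for _ in range(2000):
    h, pred = forward(targets)
    err = pred - deviation
    losses.append(float((err ** 2).mean()))
    # Backpropagation through the two layers.
    g2 = 2 * err / len(targets)
    gW2 = h.T @ g2; gb2 = g2.sum(0)
    gh = g2 @ W2.T * (1 - h ** 2)
    gW1 = targets.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])   # training error drops substantially
```

Once trained, the predicted deviation would be subtracted from the commanded foot position before the IK solver is invoked.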
Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.
2015-08-01
Outdoor large-scale cultural sites are highly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, in Cultural Heritage research quite different actors are involved (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatio-temporal aggregation of 3D digital models that incorporates a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change-history maps are created, indicating the spatial probability that regions need further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity
Directory of Open Access Journals (Sweden)
C.M. Moll
2012-01-01
Most South African organisations were historically part of a closed competitive system with little global competition and a relatively stable economy (Manning: 18; Sunter: 32). Since the political transformation, the globalisation of the world economy, the decline of world economic fundamentals and specific challenges in the South African scenario such as GEAR and employment equity, the whole playing field has changed. With these changes, new challenges appear. A significant challenge for organisations within this scenario is to think, plan and manage strategically. In order to do so, the organisation must understand its relationship with its environment and establish innovative new strategies to manipulate, interact with, and ultimately survive in the environment. The legacy of the past has, in many organisations, implanted an operational short-term focus because the planning horizon was stable. It was sufficient to construct annual plans rather than strategies. These plans were typically internally focused rather than driven by the external environment. Strategic planning in this environment tended to be a form of team building through which the various members of the organisation's management team discussed and documented the problems of the day. A case study is presented of the development of a strategic management process for a large South African mining company. The authors believe that the approach is a new and different way of addressing a problem that exists in many organisations: the establishment of a process of strategic thinking, whilst at the same time ensuring that a formal process of strategic planning is followed in order to prompt the management of the organisation into strategic action. The lessons drawn from this process are applicable to a larger audience due to the homogeneous nature of the management style of a large number of South African organisations.
Effect of density deviations of concrete on its attenuation efficiency
International Nuclear Information System (INIS)
Szymendera, L.; Wincel, K.; Blociszewski, S.; Kordyasz, D.; Sobolewska, I.
In this work, the influence of concrete density deviations on shield thickness and on the total dose rate outside the reactor shield has been considered on the basis of numerical analysis. The possibility of introducing flexible corrections to the design thickness of the shield, without additional shielding calculations, has been noted. It has also been found that in common cases of shield design, where there is no necessity to minimize the shield thickness, the tendency to minimize the value of this deviation is hardly justifiable.
Deviation from Covered Interest Rate Parity in Korea
Directory of Open Access Journals (Sweden)
Seungho Lee
2003-06-01
This paper tested the factors which cause deviation from covered interest rate parity (CIRP) in Korea, using regression and VAR models. The empirical evidence indicates that a difference between the swap rate and the interest rate differential exists and is greatly affected by variables which represent the currency liquidity situation of foreign exchange banks. In other words, deviation from CIRP can easily occur due to banks' lack of foreign exchange liquidity in a thin market, despite few capital constraints, small transaction costs, and trivial default risk in Korea.
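The quantity being tested, the gap between the observed forward (swap) rate and the parity-implied forward, follows directly from the CIRP relation F/S = (1 + i_dom)/(1 + i_for). The numbers below are made up purely to exercise the arithmetic; they are not Korean market data from the paper.

```python
# Covered interest rate parity: forward_implied = spot * (1 + i_dom) / (1 + i_for).
# The CIRP deviation is the gap between the observed forward and this value.
spot = 1200.0            # hypothetical KRW per USD
i_krw = 0.04             # domestic (won) interest rate for the horizon
i_usd = 0.02             # foreign (dollar) interest rate for the horizon
forward_observed = 1230.0

forward_implied = spot * (1 + i_krw) / (1 + i_usd)
deviation = forward_observed - forward_implied
print(round(forward_implied, 2), round(deviation, 2))  # 1223.53 6.47
```

Under frictionless arbitrage the deviation would be zero; a persistent nonzero value is what the paper attributes to the banks' foreign-currency liquidity situation.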
International Nuclear Information System (INIS)
2001-10-01
In pursuance of the objectives of the Council Resolutions of 1975 and 1992 on the technological issues of nuclear safety, the European Commission (EC) is seeking to promote a sustained joint in-depth study of possible significant future nuclear power reactor safety cases. To that end, the EC decided to support financially a study by the grouping of the European Union Technical Safety Organisations (TSOG). The general objective of the study programme was to promote, through a collaboration of European Union Technical Safety Organisations (TSOs), common views on technical safety issues related to large evolutionary PWRs in Europe, which could be ready for operation during the next decade. AVN (Belgium) (technical project leader), AEA Technology (United Kingdom), ANPA (Italy), CIEMAT (Spain), GRS (Germany) and IPSN (France) were the TSOs participating in the study, which was co-ordinated by RISKAUDIT. The study focused notably on the EPR project initiated by the French and German utilities and vendors. It also considered relevant projects, even of plants of different size, developed outside the European Union, in order to provide elements important for the safety characterisation which could contribute to the credibility of and confidence in the EPR. It is expected that this study will constitute a significant step towards the development of a common safety approach in EU countries. The study constitutes an important step forward in the development of a common approach of the TSOs to the safety of advanced evolutionary pressurised water reactors. This goal was mainly achieved by an in-depth analysis of the key safety issues, taking into account new developments in the national technical safety objectives and in the EPR design. For this reason, the Commission has decided to publish at least the present summary report containing the main outcomes of the TSO study. Confidentiality considerations unfortunately prevent the open publication of the full series of reports. (author)
MUSiC - A general search for deviations from Monte Carlo predictions in CMS
Energy Technology Data Exchange (ETDEWEB)
Biallass, Philipp A, E-mail: biallass@cern.c [Physics Institute IIIA, RWTH Aachen, Physikzentrum, 52056 Aachen (Germany)
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
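The scan described above can be caricatured in a few lines: compare the observed count in each event class with its Monte Carlo expectation and rank classes by a Poisson tail probability. This toy omits the systematic uncertainties and region-scanning that the real MUSiC algorithm treats rigorously; the class names and counts are invented.

```python
from math import exp, factorial

def poisson_p_value(observed, expected):
    # Tail probability of a count at least as extreme as `observed`
    # given a Poisson expectation (no systematic uncertainties here).
    if observed >= expected:
        return 1.0 - sum(exp(-expected) * expected ** k / factorial(k)
                         for k in range(observed))
    return sum(exp(-expected) * expected ** k / factorial(k)
               for k in range(observed + 1))

# Hypothetical event classes: (particle content, data count, MC expectation).
classes = [("1mu 2jet", 98, 100.0),
           ("2e MET", 40, 22.0),
           ("1gamma 1jet", 5, 6.5)]

scored = sorted(classes, key=lambda c: poisson_p_value(c[1], c[2]))
print(scored[0][0])   # the class deviating most from the MC expectation
```

In the real analysis such per-class significances are corrected for the look-elsewhere effect before any class is flagged as deviating.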
MUSIC -- An Automated Scan for Deviations between Data and Monte Carlo Simulation
CMS Collaboration
2008-01-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Due to the minimal theoretical bias this approach is sensitive to a variety of models, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
MUSiC A General Search for Deviations from Monte Carlo Predictions in CMS
Biallass, Philipp
2009-01-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
MUSiC - A general search for deviations from Monte Carlo predictions in CMS
International Nuclear Information System (INIS)
Biallass, Philipp A
2009-01-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
Directory of Open Access Journals (Sweden)
Meha Jain
2017-06-01
Fine-scale agricultural statistics are an important tool for understanding trends in food production and their associated drivers, yet these data are rarely collected in smallholder systems. These statistics are particularly important for smallholder systems given the large amount of fine-scale heterogeneity in production that occurs in these regions. To overcome the lack of ground data, satellite data are often used to map fine-scale agricultural statistics. However, doing so is challenging for smallholder systems because of (1) complex sub-pixel heterogeneity; (2) little to no available calibration data; and (3) high amounts of cloud cover, as most smallholder systems occur in the tropics. We develop an automated method, termed the MODIS Scaling Approach (MSA), to map smallholder cropped area across large spatial and temporal scales using MODIS Enhanced Vegetation Index (EVI) satellite data. We use this method to map winter cropped area, a key measure of cropping intensity, across the Indian subcontinent annually from 2000–2001 to 2015–2016. The MSA defines a pixel as cropped based on winter growing season phenology and scales the percent of cropped area within a single MODIS pixel based on observed EVI values at peak phenology. We validated the result with eleven high-resolution scenes (spatial scale of 5 × 5 m² or finer) that we classified into cropped versus non-cropped maps using training data collected by visual inspection of the high-resolution imagery. The MSA had moderate to high accuracies when validated using these eleven scenes across India (R² ranging between 0.19 and 0.89, with an overall R² of 0.71 across all sites). This method requires no calibration data, making it easy to implement across large spatial and temporal scales, with 100% spatial coverage due to the compositing of EVI to generate cloud-free data sets. The accuracies found in this study are similar to those of other studies that map crop production using automated methods
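The scaling step, mapping peak-season EVI to a sub-pixel cropped fraction, can be sketched as a clipped linear ramp between two reference EVI values. The threshold values and pixel EVIs below are assumptions for illustration, not the calibration used by the MSA, which also applies a phenology test before any scaling.

```python
import numpy as np

# Illustrative peak winter-season EVI per MODIS pixel (made-up values).
peak_evi = np.array([0.18, 0.35, 0.62, 0.80])

# Assumed reference values: EVI of a fully non-cropped and a fully
# cropped pixel at peak phenology.
crop_min, crop_max = 0.30, 0.75

# Sub-pixel cropped fraction grows linearly with peak EVI between the
# two references, clipped to [0, 1].
frac = np.clip((peak_evi - crop_min) / (crop_max - crop_min), 0.0, 1.0)
print(frac)
```

Summing `frac` times the pixel area over a district would then give the kind of fine-scale cropped-area statistic the abstract describes.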
Directory of Open Access Journals (Sweden)
Francisco Palacios-Quiñonero
2014-01-01
We present a new design strategy that makes it possible to synthesize decentralized output-feedback controllers by solving two successive optimization problems with linear matrix inequality (LMI) constraints. In the initial LMI optimization problem, two auxiliary elements are computed: a standard state-feedback controller, which can be taken as a reference in the performance assessment, and a matrix that facilitates a proper definition of the main LMI optimization problem. Next, by solving the second optimization problem, the output-feedback controller is obtained. The proposed strategy extends recent results in static output-feedback control and can be applied to design complex passive-damping systems for the vibrational control of large structures. More precisely, by taking advantage of the existing link between fully decentralized velocity-feedback controllers and passive linear dampers, advanced active feedback control strategies can be used to design complex passive-damping systems, which combine the simplicity and robustness of passive control systems with the efficiency of active feedback control. To demonstrate the effectiveness of the proposed approach, a passive-damping system for the seismic protection of a five-story building is designed, with excellent results.
Directory of Open Access Journals (Sweden)
Liu Jiping
2017-12-01
Surface area estimation is a widely used tool for resource evaluation in the physical world. When processing large-scale spatial data, input/output (I/O) can easily become the bottleneck in parallelizing the algorithm, due to limited physical memory resources and the very slow disk transfer rate. In this paper, we propose a stream tiling approach to surface area estimation that first decomposes a spatial data set into tiles with topological expansions. With these tiles, the one-to-one mapping relationship between the input and the computing process is broken. Then, we realize a streaming framework for the scheduling of the I/O processes and computing units. Herein, each computing unit encapsulates an identical copy of the estimation algorithm, and multiple asynchronous computing units can work individually in parallel. Finally, experiments demonstrate that our stream tiling estimation can efficiently alleviate the heavy pressure from I/O-bound work, and the measured speedup after optimization greatly outperforms that of the directly parallel versions in shared memory systems with multi-core processors.
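The structural idea, stream tiles one at a time so the whole raster never has to fit in memory, and let independent computing units consume them, can be sketched with a generator. The tile shape and the trivial per-tile area estimator are assumptions; the paper's topological expansions and scheduler are not modeled here.

```python
def tiles(grid, tile_rows):
    # I/O stage: yield row-band tiles one at a time, so only one tile
    # is ever resident in memory.
    for r in range(0, len(grid), tile_rows):
        yield grid[r:r + tile_rows]

def tile_area(tile, cell_area=1.0):
    # Computing unit: estimate the surface area of one tile. A real
    # estimator would use cell heights; here each nonzero cell simply
    # contributes one cell's area.
    return sum(cell_area for row in tile for cell in row if cell)

grid = [[1, 1, 0],
        [1, 0, 0],
        [1, 1, 1],
        [0, 1, 0]]

# Per-tile results are independent, so the computing units could run in
# parallel; the generator keeps the I/O sequential and bounded.
total = sum(tile_area(t) for t in tiles(grid, tile_rows=2))
print(total)   # 7.0
```

Replacing the generator comprehension with a pool of workers draining a bounded queue would give the asynchronous parallel version the abstract describes.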
Directory of Open Access Journals (Sweden)
Ackchai Sirikijpanichkul
2015-01-01
For agricultural-based countries, the requirements on transportation infrastructure should not be limited to accommodating general traffic, but should also cover the transportation of crops and agricultural products during the harvest seasons. Most past research focuses on the development of truck trip estimation techniques for urban, statewide, or nationwide freight movement, but neglects the importance of rural freight movement, which contributes to pavement deterioration on rural roads, especially during harvest seasons. Recently, the Thai Government initiated a plan to construct a network of reservoirs within the northeastern region, aiming at improving the existing irrigation system, particularly in areas where a more effective irrigation system is needed. It is expected to bring new opportunities for expanding the cultivation areas, increasing economies of scale, and enlarging the market extent of the area. As a consequence, its effects on truck trip generation need to be investigated to assure the service quality of the related transportation infrastructure. This paper proposes a combinatory input-output commodity-based approach to estimate truck trips on the rural highway infrastructure network. The large-scale irrigation project for the northeastern region of Thailand is demonstrated as a case study.
48 CFR 552.252-6 - Authorized Deviations in Clauses.
2010-10-01
... published in the General Services Administration Acquisition Regulation (48 CFR chapter 5). (2) This... published in the General Services Administration Acquisition Regulation by the addition of “(DEVIATION (FAR... ADMINISTRATION CLAUSES AND FORMS SOLICITATION PROVISIONS AND CONTRACT CLAUSES Text of Provisions and Clauses 552...
A Positional Deviation Sensor for Training of Robots
Directory of Open Access Journals (Sweden)
Fredrik Dessen
1988-04-01
A device for physically guiding a robot manipulator through its task is described. It consists of inductive, contact-free positional deviation sensors. The sensor will be used in high-performance sensory control systems. The paper describes problems concerning multi-dimensional, non-linear measurement functions and the design of the servo control system.
Oscillations in deviating difference equations using an iterative technique
Directory of Open Access Journals (Sweden)
George E Chatzarakis
2017-07-01
The paper deals with the oscillation of first-order linear difference equations with deviating argument and nonnegative coefficients. New sufficient oscillation conditions, involving lim sup and based on an iterative technique, are given, which essentially improve all known results. We illustrate the results and the improvement over other known oscillation criteria by examples, numerically solved in Matlab.
Patterns of deviation in Niyi Osundare's poetry | Dick | Mgbakoigba ...
African Journals Online (AJOL)
A critical stylistic study of the poetry of Niyi Osundare from Nigeria reveals that he has made an exemplary ... deviate from norms and conventions of language thereby creating aesthetics ...
International asset pricing under segmentation and PPP deviations
Chaieb, I.; Errunza, V.
2007-01-01
We analyze the impact of both purchasing power parity (PPP) deviations and market segmentation on asset pricing and investors' portfolio holdings. The freely traded securities command a world market risk premium and an inflation risk premium. The securities that can be held by only a subset of
9 CFR 381.308 - Deviations in processing.
2010-01-01
...) must be handled according to: (1)(i) A HACCP plan for canned product that addresses hazards associated... (d) of this section. (c) [Reserved] (d) Procedures for handling process deviations where the HACCP... accordance with the following procedures: (a) Emergency stops. (1) When retort jams or breakdowns occur...
Semiparametric Bernstein–von Mises for the error standard deviation
Jonge, de R.; Zanten, van J.H.
2013-01-01
We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a
Semiparametric Bernstein-von Mises for the error standard deviation
de Jonge, R.; van Zanten, H.
2013-01-01
We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein-von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a
Robust Confidence Interval for a Ratio of Standard Deviations
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
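One distribution-robust way to interval-estimate a ratio of standard deviations, shown here only as an illustration and not as the specific interval the abstract proposes, is a percentile bootstrap. The two score samples are invented for the example.

```python
import random
from statistics import stdev

def ratio_of_sds(a, b):
    return stdev(a) / stdev(b)

def bootstrap_ci(a, b, n_boot=2000, alpha=0.05, seed=1):
    # Percentile-bootstrap interval for sd(a)/sd(b), resampling each
    # group independently; degenerate resamples (zero denominator SD)
    # are skipped.
    rng = random.Random(seed)
    stats = []
    while len(stats) < n_boot:
        sa = stdev([rng.choice(a) for _ in a])
        sb = stdev([rng.choice(b) for _ in b])
        if sb > 0:
            stats.append(sa / sb)
    stats.sort()
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2))]

# Hypothetical scores on two alternate test forms.
form_a = [12, 15, 11, 18, 14, 16, 13, 17, 15, 14]
form_b = [14, 14, 15, 13, 14, 15, 14, 13, 16, 14]

point = ratio_of_sds(form_a, form_b)
lo, hi = bootstrap_ci(form_a, form_b)
print(round(point, 2), (round(lo, 2), round(hi, 2)))
```

An interval that excludes 1 would indicate a genuine difference in score variability between the two forms; the classic normal-theory interval would instead use F-distribution quantiles.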
Analysis of form deviation in non-isothermal glass molding
Kreilkamp, H.; Grunwald, T.; Dambon, O.; Klocke, F.
2018-02-01
Especially in the markets for sensors, LED lighting and medical technologies, there is a growing demand for precise yet low-cost glass optics. This demand poses a major challenge for glass manufacturers, who are confronted with the trend towards ever-higher levels of precision combined with immense pressure on market prices. Since current manufacturing technologies, especially grinding and polishing as well as Precision Glass Molding (PGM), are not able to achieve the desired production costs, glass manufacturers are looking for alternative technologies. Non-isothermal Glass Molding (NGM) has been shown to have great potential for low-cost mass manufacturing of complex glass optics. However, the biggest drawback of this technology at the moment is the limited accuracy of the manufactured glass optics. This research addresses the specific challenges of non-isothermal glass molding with respect to the form deviation of molded glass optics. Based on empirical models, the influencing factors on form deviation, in particular form accuracy, waviness and surface roughness, are discussed. A comparison with the traditional isothermal glass molding process (PGM) points out the specific challenges of non-isothermal process conditions. Furthermore, the underlying physical principles leading to the formation of form deviations are analyzed in detail with the help of numerical simulation. In this way, this research contributes to a better understanding of form deviations in non-isothermal glass molding and is an important step towards new applications demanding precise yet low-cost glass optics.
Process Measurement Deviation Analysis for Flow Rate due to Miscalibration
Energy Technology Data Exchange (ETDEWEB)
Oh, Eunsuk; Kim, Byung Rae; Jeong, Seog Hwan; Choi, Ji Hye; Shin, Yong Chul; Yun, Jae Hee [KEPCO Engineering and Construction Co., Deajeon (Korea, Republic of)
2016-10-15
An analysis was initiated to identify the root cause, and the omission of the high static line pressure correction for differential pressure (DP) transmitters was identified as one of the major deviation factors. The miscalibrated DP transmitter range was identified as another major deviation factor. This paper presents considerations to be incorporated in the calibration of process flow measurement instrumentation. The analysis identified that the DP flow transmitter electrical output decreased by 3%, after which the flow rate indication decreased by 1.9%, resulting from the omitted high static line pressure correction and the measurement range miscalibration. After re-calibration, the flow rate indication increased by 1.9%, which is consistent with the analysis result. This paper presents the brief calibration procedure for the Rosemount DP flow transmitter and analyzes three possible cases of measurement deviation, including their errors and causes. Generally, a DP transmitter is required to be calibrated with a precise process input range according to the calibration procedure provided for the specific DP transmitter. In particular, for a DP transmitter installed in a high static line pressure application, it is important to correct for the high static line pressure effect to avoid the inherent systematic error of the Rosemount DP transmitter. Otherwise, failure to apply the correction may lead to indication deviating from the actual value.
Linear Estimation of Standard Deviation of Logistic Distribution ...
African Journals Online (AJOL)
The paper presents a theoretical method based on order statistics, and a FORTRAN program, for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramer-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...
The one-shot deviation principle for sequential rationality
DEFF Research Database (Denmark)
Hendon, Ebbe; Whitta-Jacobsen, Hans Jørgen; Sloth, Birgitte
1996-01-01
We present a decentralization result which is useful for practical and theoretical work with sequential equilibrium, perfect Bayesian equilibrium, and related equilibrium concepts for extensive form games. A weak consistency condition is sufficient to obtain an analogy to the well-known One-Stage-Deviation Principle for subgame perfect equilibrium.
International Nuclear Information System (INIS)
Crabol, B.
1985-04-01
An original concept concerning the different behaviour of high-frequency (small-scale) and low-frequency (large-scale) atmospheric turbulence relative to the mean wind speed is introduced. Through a dimensional analysis based on Taylor's formulation, it is shown that the governing parameter of the atmospheric dispersion standard deviations is the travel distance near the source and the travel time far from the source. Using hypotheses on the energy spectrum in the atmosphere, a numerical application makes it possible to quantify the evolution of the horizontal standard deviation for different mean wind speeds between 0.2 and 10 m/s. The areas of validity of each parameter (travel distance or travel time) are clearly shown. The first is confined to the near field and becomes smaller as the wind speed decreases. For t > 5000 s, the dependence on wind speed of the horizontal standard deviation expressed as a function of travel time becomes insignificant: the horizontal standard deviation is a function of travel time only. The results are compared with experimental data obtained in the atmosphere. The similar evolution of the calculated and experimental curves confirms the validity of the hypotheses and input data of the calculation. This study can be applied to the transport of radioactive effluents in the atmosphere.
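The two regimes follow from Taylor's (1921) dispersion formula, which the sketch below evaluates numerically. The velocity scale and Lagrangian time scale are illustrative values, not the parameters used in the report.

```python
from math import exp, sqrt

def sigma_taylor(t, sigma_v, T_L):
    # Taylor's formula for the spread of a dispersing plume:
    # sigma^2 = 2 sigma_v^2 T_L * (t - T_L * (1 - exp(-t / T_L))).
    return sqrt(2 * sigma_v**2 * T_L * (t - T_L * (1 - exp(-t / T_L))))

sigma_v, T_L = 0.5, 1000.0   # assumed turbulence velocity scale (m/s)
                             # and Lagrangian time scale (s)

# Near field (t << T_L): sigma ~ sigma_v * t, i.e. proportional to the
# travel distance x = u * t for a given turbulence intensity sigma_v / u.
t_near = 10.0
print(sigma_taylor(t_near, sigma_v, T_L), sigma_v * t_near)

# Far field (t >> T_L): sigma ~ sigma_v * sqrt(2 * T_L * t), a function
# of travel time alone.
t_far = 1e6
print(sigma_taylor(t_far, sigma_v, T_L), sigma_v * sqrt(2 * T_L * t_far))
```

The crossover between the two asymptotes is what makes travel distance the natural parameter close to the source and travel time the natural parameter far from it.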
On the influence of airfoil deviations on the aerodynamic performance of wind turbine rotors
International Nuclear Information System (INIS)
Winstroth, J; Seume, J R
2016-01-01
The manufacture of large wind turbine rotor blades is a difficult task that still involves a certain degree of manual labor. Due to this complexity, airfoil deviations between the design airfoils and the manufactured blade are certain to arise. Presently, the understanding of the impact of manufacturing uncertainties on aerodynamic performance is still incomplete. The present work analyzes the influence of a series of airfoil deviations likely to occur during manufacturing by means of Computational Fluid Dynamics and the aeroelastic code FAST. The average power production of the NREL 5MW wind turbine is used to evaluate the different airfoil deviations. Analyzed deviations include: mold tilt towards the leading and trailing edge, thick bond lines, thick bond lines with cantilever correction, backward-facing steps, and airfoil waviness. The most severe influences are observed for mold tilt towards the leading edge and for thick bond lines. By applying the cantilever correction, the influence of thick bond lines is almost compensated. The effect of airfoil waviness is very dependent on the amplitude and on the location along the surface of the airfoil. Increased influence is observed for backward-facing steps once they are high enough to trigger boundary layer transition close to the leading edge. (paper)
Directory of Open Access Journals (Sweden)
Keeeun Lee
2016-01-01
Enhanced R&D cooperation between large firms and small and medium-sized enterprises (SMEs) has been emphasized as a way to perform innovation projects and succeed in deploying profitable businesses. In order to promote such win-win alliances, it is necessary to consider the capabilities of large firms and SMEs, respectively. Thus, this paper proposes a new approach to partner selection in which a large firm assesses SMEs as potential candidates for R&D collaboration. The first step of the suggested approach is to define the technology necessary for a firm by referring to a structured technology roadmap, which is a useful technique in partner selection from the perspective of a large firm. Second, a list of appropriate SME candidates is generated from patent information. Finally, a Bayesian network model is formulated to select an SME as an R&D collaboration partner which fits the industry and the large firm, utilizing bibliographic data from United States patents. This paper applies the proposed approach to the semiconductor industry and selects potential R&D partners for a large firm. The paper also explains how to use the model as a systematic and analytic approach for creating effective partnerships between large firms and SMEs.
Antoine Jacquier; Martin Keller-Ressel; Aleksandar Mijatovic
2011-01-01
Let $\sigma_t(x)$ denote the implied volatility at maturity $t$ for a strike $K=S_0 e^{xt}$, where $x\in\mathbb{R}$ and $S_0$ is the current value of the underlying. We show that $\sigma_t(x)$ has a uniform (in $x$) limit as maturity $t$ tends to infinity, given by the formula $\sigma_\infty(x)=\sqrt{2}\bigl(h^*(x)^{1/2}+(h^*(x)-x)^{1/2}\bigr)$, for $x$ in some compact neighbourhood of zero in the class of affine stochastic volatility models. The function $h^*$ is the convex dual of the limiting cumulant gen...
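The limiting smile formula can be evaluated numerically once the convex dual $h^*$ is known. A minimal sketch (the Black-Scholes dual below is our illustrative choice, not taken from the abstract):

```python
import math

def sigma_infinity(x, h_star):
    """Limiting implied volatility:
    sigma_inf(x) = sqrt(2) * (sqrt(h*(x)) + sqrt(h*(x) - x))."""
    h = h_star(x)
    return math.sqrt(2.0) * (math.sqrt(h) + math.sqrt(h - x))

# Sanity check in the Black-Scholes model with volatility sigma: the limiting
# cumulant generating function is Lambda(u) = sigma^2*(u^2 - u)/2, whose
# convex dual is h*(x) = (x + sigma^2/2)^2 / (2*sigma^2).  For |x| < sigma^2/2
# the formula then collapses to sigma_inf(x) == sigma, i.e. a flat smile.
def bs_dual(x, sigma=0.2):
    return (x + sigma**2 / 2) ** 2 / (2 * sigma**2)
```

For instance, `sigma_infinity(0.0, bs_dual)` recovers the input volatility 0.2, as the flat Black-Scholes smile requires.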
Exact asymptotics of probabilities of large deviations for Markov chains: the Laplace method
Energy Technology Data Exchange (ETDEWEB)
Fatalov, Vadim R [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)
2011-08-31
We prove results on exact asymptotics as $n\to\infty$ for the expectations $\mathrm{E}_a \exp\{-\theta\sum_{k=0}^{n-1} g(X_k)\}$ and probabilities $\mathrm{P}_a\{(1/n)\sum_{k=0}^{n-1} g(X_k)$
Development of fusion fuel cycles: Large deviations from US defense program systems
Energy Technology Data Exchange (ETDEWEB)
Klein, James Edward, E-mail: james.klein@srnl.doe.gov; Poore, Anita Sue; Babineau, David W.
2015-10-15
Highlights: • All tritium fuel cycles start with a “Tritium Process.” All have similar tritium processing steps. • Fusion tritium fuel cycles minimize process tritium inventories for various reasons. • US defense program facility designs did not minimize in-process inventories. • Reduced inventory tritium facilities will lower public risk. - Abstract: Fusion energy research is dominated by plasma physics and materials technology development needs with smaller levels of effort and funding dedicated to tritium fuel cycle development. The fuel cycle is necessary to supply and recycle tritium at the required throughput rate; additionally, tritium confinement throughout the facility is needed to meet regulatory and environmental release limits. Small fuel cycle development efforts are sometimes rationalized by stating that tritium processing technology has already been developed by nuclear weapons programs and these existing processes only need rescaling or engineering design to meet the needs of fusion fuel cycles. This paper compares and contrasts features of tritium fusion fuel cycles to United States Cold War era defense program tritium systems. It is concluded that further tritium fuel cycle development activities are needed to provide technology development beneficial to both fusion and defense programs tritium systems.
Large deviation tail estimates and related limit laws for stochastic fixed point equations
DEFF Research Database (Denmark)
Collamore, Jeffrey F.; Vidyashankar, Anand N.
2013-01-01
We study the forward and backward recursions generated by a stochastic fixed point equation (SFPE) of the form $V \stackrel{d}{=} A\max\{V, D\}+B$, where $(A, B, D) \in (0, \infty)\times \mathbb{R}^2$, for both the stationary and explosive cases. In the stationary case (when ${\bf E}[\log A]...... explosive case (when ${\bf E}[\log A] > 0$), we establish a central limit theorem for the forward recursion generated by the SFPE, namely the process $V_n = A_n \max\{V_{n-1...
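The forward recursion in the abstract is straightforward to simulate. A minimal sketch (the sampler names are ours), assuming i.i.d. draws of $(A_n, B_n, D_n)$:

```python
def simulate_sfpe(n, a_sampler, b_sampler, d_sampler, v0=0.0):
    """Forward recursion V_n = A_n * max(V_{n-1}, D_n) + B_n of the SFPE.
    In the stationary case (E[log A] < 0) the law of V_n converges; in the
    explosive case (E[log A] > 0) the recursion blows up and, per the
    abstract, a central limit theorem applies."""
    v = v0
    path = []
    for _ in range(n):
        v = a_sampler() * max(v, d_sampler()) + b_sampler()
        path.append(v)
    return path
```

As a degenerate stationary example, constant A = 0.5, B = 1, D = 0 gives E[log A] = log 0.5 < 0 and the recursion contracts to the fixed point V = 2.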
Truijens, Sophie E M; Meems, Margreet; Kuppens, Simone M I; Broeren, Maarten A C; Nabbe, Karin C A M; Wijnen, Hennie A; Oei, S Guid; van Son, Maarten J M; Pop, Victor J M
2014-09-08
The HAPPY study is a large prospective longitudinal cohort study in which pregnant women (N ≈ 2,500) are followed during the entire pregnancy and the whole first year postpartum. The study collects a substantial amount of psychological and physiological data investigating all kinds of determinants that might interfere with general well-being during pregnancy and postpartum, with special attention to the effect of maternal mood, pregnancy-related somatic symptoms (including nausea and vomiting (NVP) and carpal tunnel syndrome (CTS) symptoms), thyroid function, and human chorionic gonadotropin (HCG) on pregnancy outcome of mother and foetus. During pregnancy, participants receive questionnaires at 12, 22 and 32 weeks of gestation. Apart from a previous obstetric history, demographic features, distress symptoms, and pregnancy-related somatic symptoms are assessed. Furthermore, obstetrical data of the obstetric record form and ultrasound data are collected during pregnancy. At 12 and 30 weeks, thyroid function is assessed by blood analysis of thyroid stimulating hormone (TSH), free thyroxine (FT4) and thyroid peroxidase antibodies (TPO-Ab), as well as HCG. Also, depression is assessed with special focus on the two key symptoms: depressed mood and anhedonia. After childbirth, cord blood, neonatal heel screening results and all obstetrical data with regard to start of labour, mode of delivery and complications are collected. Moreover, mothers receive questionnaires at one week, six weeks, four, eight, and twelve months postpartum, to investigate recovery after pregnancy and delivery, including postpartum mood changes, emotional distress, feeding and development of the newborn. The key strength of this large prospective cohort study is the holistic (multifactorial) approach on perinatal well-being combined with a longitudinal design with measurements during all trimesters of pregnancy and the whole first year postpartum, taking into account two physiological possible
Okada, T.; McAneney, K. J.; Chen, K.
2011-12-01
Flooding on the Tone River, which drains the largest catchment area in Japan and is now home to 12 million people, poses significant risk to the Greater Tokyo Area. In April 2010, an expert panel in Japan, the Central Disaster Prevention Council, examined the potential for large-scale flooding and outlined possible mitigation measures in the Greater Tokyo Area. One of the scenarios considered closely mimics the pattern of flooding that occurred with the passage of Typhoon Kathleen in 1947 and would potentially flood some 680 000 households above floor level. Building upon that report, this study presents a Geographical Information System (GIS)-based data integration approach to estimate the insurance losses for residential buildings and contents as just one component of the potential financial cost. Using a range of publicly available data - census information, location reference data, insurance market information and flood water elevation data - this analysis finds that insurance losses for residential property alone could reach approximately 1 trillion JPY (US$12.5 billion). Total insurance losses, including commercial and industrial lines of business, are likely to be at least double this figure, with total economic costs being much greater again. The results are sensitive to the flood scenario assumed, the position of levee failures, local flood depths and extents, population and building heights. The Average Recurrence Interval (ARI) of the rainfall following Typhoon Kathleen has been estimated to be on the order of 200 yr; however, at this juncture it is not possible to put an ARI on the modelled loss, since we cannot know the relative or joint probability of the different flooding scenarios. It is possible that more than one of these scenarios could occur simultaneously, or that levee failure at one point might lower water levels downstream and avoid a failure at all other points. In addition to insurance applications, spatial analyses like that presented here have
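At its core, such a GIS-based loss estimate aggregates, exposure by exposure, the insured value times a depth-damage ratio at the local flood depth. A schematic sketch (the piecewise-linear damage curve is a placeholder assumption, not the study's actual vulnerability function):

```python
def damage_ratio(depth_m, full_damage_depth=4.0):
    """Placeholder depth-damage curve: damage rises linearly with flood
    depth and saturates at total loss (assumed here at 4 m)."""
    return min(max(depth_m, 0.0) / full_damage_depth, 1.0)

def portfolio_loss(exposures):
    """Aggregate insured loss over (insured_value, flood_depth_m) pairs,
    as a GIS overlay of exposure and flood-elevation data would produce."""
    return sum(value * damage_ratio(depth) for value, depth in exposures)
```

Real catastrophe-loss models replace the toy curve with empirically calibrated vulnerability functions per building type and apply policy terms (deductibles, limits) before summing.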
Solar radiation pressure and deviations from Keplerian orbits
Energy Technology Data Exchange (ETDEWEB)
Kezerashvili, Roman Ya. [Physics Department, New York City College of Technology, the City University of New York, Brooklyn, NY 11201 (United States); Vazquez-Poritz, Justin F. [Physics Department, New York City College of Technology, City University of New York, Brooklyn, NY 11201 (United States)], E-mail: jporitz@gmail.com
2009-05-04
Newtonian gravity and general relativity give exactly the same expression for the period of an object in circular orbit around a static central mass. However, when the effects of the curvature of spacetime and solar radiation pressure are considered simultaneously for a solar sail propelled satellite, there is a deviation from Kepler's third law. It is shown that solar radiation pressure affects the period of this satellite in two ways: by effectively decreasing the solar mass, thereby increasing the period, and by enhancing the effects of other phenomena, potentially rendering some of them detectable. In particular, we consider deviations from Keplerian orbits due to spacetime curvature, frame dragging from the rotation of the sun, the oblateness of the sun, a possible net electric charge of the sun, and a very small positive cosmological constant.
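The first effect, the effective decrease of the solar mass, can be sketched directly: with the sail's lightness number beta defined as the ratio of radiation-pressure force to gravitational force (both fall off as 1/r^2, so beta is constant), the gravitational parameter is rescaled as GM -> GM(1 - beta). A short illustration (variable names are ours):

```python
import math

GM_SUN = 1.32712440018e20  # solar gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

def sail_period(r, beta):
    """Circular-orbit period of a solar sail with lightness number beta.
    Radiation pressure effectively rescales GM -> GM*(1 - beta), so the
    period T = 2*pi*sqrt(r^3 / (GM*(1 - beta))) exceeds the Keplerian one."""
    return 2.0 * math.pi * math.sqrt(r**3 / (GM_SUN * (1.0 - beta)))
```

With beta = 0 this reduces to Kepler's third law (about 365.25 days at 1 AU); beta = 0.1 lengthens the period by the factor (1 - 0.1)^(-1/2), roughly 5.4%.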
Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules
DEFF Research Database (Denmark)
Gao, Junling; Chen, Min
2013-01-01
Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated by the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give...... different estimations on the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds out...... that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy....
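In the linear (constant-parameter) model underlying that estimate, the module behaves as an EMF in series with an internal resistance, so the matched-load maximum power is P_max = V_oc * I_sc / 4. A minimal sketch, assuming that linear model (which, as the abstract reports, the two switch modes can perturb by about 10%):

```python
def max_power_linear(v_oc, i_sc):
    """Linear-model estimate of maximum power from the open-circuit voltage
    and short-circuit current: the internal resistance is R = V_oc / I_sc,
    and the power delivered to a matched load R is
    (V_oc / 2)**2 / R = V_oc * I_sc / 4."""
    return 0.25 * v_oc * i_sc
```

The nonlinear effects analyzed in the paper (current-dependent electromotive potential) make the true maximum deviate from this value, which is why the proposed calibration is needed.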
Higher-order geodesic deviations applied to the Kerr metric
Colistete, R J; Kerner, R
2002-01-01
Starting with an exact and simple geodesic, we generate approximate geodesics by summing up higher-order geodesic deviations within a general relativistic setting, without using Newtonian or post-Newtonian approximations. We apply this method to the problem of closed orbital motion of test particles in the Kerr metric spacetime. With a simple circular orbit in the equatorial plane taken as the initial geodesic, we obtain finite-eccentricity orbits in the form of Taylor series, with the eccentricity playing the role of a small parameter. The explicit expressions of these higher-order geodesic deviations are derived using successive systems of linear equations with constant coefficients, whose solutions are of harmonic oscillator type. This scheme gives the best results when applied to orbits with low eccentricities, but with arbitrary possible values of $(GM/Rc^2)$.
Deviation from the superparamagnetic behaviour of fine-particle systems
Malaescu, I
2000-01-01
Studies concerning the superparamagnetic behaviour of fine magnetic particle systems were performed using static and radiofrequency measurements in the range 1-60 MHz. The samples were: a ferrofluid with magnetite particles dispersed in kerosene (sample A), magnetite powder (sample B) and the same magnetite powder dispersed in a polymer (sample C). Radiofrequency measurements indicated a maximum in the imaginary part of the complex magnetic susceptibility for each of the samples, at frequencies of the order of tens of MHz, the origin of which was assigned to Neel-type relaxation processes. The static measurements showed a Langevin-type dependence of the magnetisation M and of the susceptibility χ on the magnetic field for sample A. For samples B and C, deviations from this type of dependence were found. These deviations were analysed qualitatively and explained in terms of interparticle interactions, the influence of the dispersion medium and surface effects.
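The Langevin-type field dependence seen for sample A can be written M(H) = M_s L(mH / k_B T) with L(x) = coth(x) - 1/x. A small sketch (parameter names are ours), useful as the non-interacting baseline against which samples B and C deviate:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def langevin(x):
    """L(x) = coth(x) - 1/x; a series expansion near zero avoids the
    catastrophic cancellation of the two nearly equal terms."""
    if abs(x) < 1e-4:
        return x / 3.0 - x**3 / 45.0
    return 1.0 / math.tanh(x) - 1.0 / x

def magnetisation(H, moment, M_s, T):
    """Superparamagnetic (non-interacting) magnetisation curve M(H)
    for particles of magnetic moment `moment` (J/T) at temperature T."""
    return M_s * langevin(moment * H / (K_B * T))
```

Departures of a measured M(H) from this curve then quantify the interparticle interactions, dispersion-medium influence and surface effects invoked in the abstract.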
OBSERVABLE DEVIATIONS FROM HOMOGENEITY IN AN INHOMOGENEOUS UNIVERSE
Energy Technology Data Exchange (ETDEWEB)
Giblin, John T. Jr. [Department of Physics, Kenyon College, 201 N College Road Gambier, OH 43022 (United States); Mertens, James B.; Starkman, Glenn D. [CERCA/ISO, Department of Physics, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106 (United States)
2016-12-20
How does inhomogeneity affect our interpretation of cosmological observations? It has long been wondered to what extent the observable properties of an inhomogeneous universe differ from those of a corresponding Friedmann–Lemaître–Robertson–Walker (FLRW) model, and how the inhomogeneities affect that correspondence. Here, we use numerical relativity to study the behavior of light beams traversing an inhomogeneous universe, and construct the resulting Hubble diagrams. The universe that emerges exhibits an average FLRW behavior, but inhomogeneous structures contribute to deviations in observables across the observer’s sky. We also investigate the relationship between angular diameter distance and the angular extent of a source, finding deviations that grow with source redshift. These departures from FLRW are important path-dependent effects, with implications for using real observables in an inhomogeneous universe such as our own.
OSMOSIS: A CAUSE OF APPARENT DEVIATIONS FROM DARCY'S LAW.
Olsen, Harold W.
1985-01-01
This review of the existing evidence shows that osmosis causes intercepts in flow rate versus hydraulic gradient relationships that are consistent with the observed deviations from Darcy's law at very low gradients. Moreover, it is suggested that a natural cause of osmosis in laboratory samples could be chemical reactions such as those involved in aging effects. This hypothesis is analogous to the previously proposed occurrence of electroosmosis in nature generated by geochemical weathering reactions. Refs.
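The signature described is a nonzero intercept in the flow rate versus hydraulic gradient line. A toy sketch of that apparent deviation (the linear form and the threshold gradient i0 are illustrative, not values from the review):

```python
def apparent_flow_rate(i, k, i0):
    """Darcy-like flow with an osmotic offset: q = k*(i - i0) for i > i0,
    else 0.  The intercept i0 mimics the low-gradient deviation from
    Darcy's law (q = k*i) that osmosis can produce."""
    return k * (i - i0) if i > i0 else 0.0
```

Extrapolating the high-gradient line back to q = 0 recovers the intercept, which is the quantity the review attributes to osmotic driving forces.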
Semiparametric Bernstein–von Mises for the error standard deviation
de Jonge, R.; van Zanten, J.H.
2013-01-01
We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure, using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...
[Deviation in psychosexual development in the pre-puberty children].
Liavshina, G Kh
2002-01-01
Psychosexual health of 308 children, aged 2-11 years, as well as that of their families, was studied. Deviations in psychosexual development were found in 34.6% of the children examined. The following types were detected: difficulties in the formation of gender-determined behavior features (64.4%), precocious psychosexual development (13.7%), delayed psychosexual development (12.3%), and obsessive masturbation (9.6%). Risk factors for deviant psychosexual development were identified.
Direct training of robots using a positional deviation sensor
Dessen, Fredrik
1988-01-01
A device and system for physically guiding a manipulator through its task is described. The device consists of inductive, contact-free positional deviation sensors, enabling the robot to track a motion marker. Factors limiting the tracking performance are the kinematics of the sensor device and the bandwidth of the servo system. Means for improving it include the use of optimal motion coordination and force and velocity feedback. This enables real-time manual training o...
Wavelength selection method with standard deviation: application to pulse oximetry.
Vazquez-Jaccaud, Camille; Paez, Gonzalo; Strojnik, Marija
2011-07-01
Near-infrared spectroscopy provides useful biological information after the radiation has penetrated the tissue within the therapeutic window. One of the significant shortcomings of current applications of spectroscopic techniques to a live subject is that the subject may be uncooperative and the sample undergoes significant temporal variations due to his or her health status, which, from a radiometric point of view, introduce measurement noise. We describe a novel wavelength selection method for monitoring, based on a standard deviation map, that offers low sensitivity to noise. It may be used with spectral transillumination, transmission, or reflection signals, including those corrupted by noise and unavoidable temporal effects. We apply it to the selection of two wavelengths for the case of pulse oximetry. Using spectroscopic data, we generate a map of standard deviation that we propose as a figure of merit in the presence of the noise introduced by the living subject. Even in the presence of diverse sources of noise, we identify four wavelength domains with low standard deviation, minimally sensitive to temporal noise, and two wavelength domains with low sensitivity to temporal noise.
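The standard-deviation map reduces, for discretely sampled spectra, to a per-wavelength spread across repeated acquisitions, from which the least noise-sensitive wavelengths are picked. A minimal sketch (function names are ours; the paper works with a continuous map over the spectral band):

```python
import statistics

def stddev_map(spectra):
    """Per-wavelength standard deviation across repeated acquisitions
    (rows = acquisitions, columns = wavelength samples)."""
    return [statistics.pstdev(column) for column in zip(*spectra)]

def pick_stable_wavelengths(spectra, k=2):
    """Indices of the k wavelength samples least sensitive to temporal
    noise, i.e. with the smallest standard deviation across acquisitions."""
    sd = stddev_map(spectra)
    return sorted(range(len(sd)), key=sd.__getitem__)[:k]
```

For pulse oximetry, k = 2 corresponds to the two working wavelengths the method is meant to select.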
Test of nonexponential deviations from decay curve of 52V using continuous kinetic function method
International Nuclear Information System (INIS)
Tran Dai Nghiep; Vu Hoang Lam; Vo Tuong Hanh; Do Nguyet Minh; Nguyen Ngoc Son
1993-01-01
The present work aims to formulate an experimental approach to testing proposed descriptions of nonexponential decay, applied here to the case of 52V. Some theoretical descriptions of decay processes are formulated in clarified form. The continuous kinetic function (CKF) method is used for the analysis of experimental data, and the CKF for the purely exponential case is taken as the standard for comparison between theoretical and experimental data. The degree of agreement is defined by a factor of goodness. Typical oscillatory deviations of the 52V decay were observed over a wide range of time. The proposed deviation, related to interaction between the decay products and the environment, is investigated. A complex type of decay is discussed. (author). 10 refs, 2 tabs, 5 figs
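The CKF idea can be illustrated with a discrete estimate: for counts N(t), the kinetic function lambda(t) = -d(ln N)/dt is constant for a purely exponential decay, and oscillations of the estimate around that constant are precisely the kind of deviation reported. A hedged sketch (the discretisation below is our own, not the authors' exact construction):

```python
import math

def kinetic_function(times, counts):
    """Discrete estimate of the kinetic function lambda(t) = -d(ln N)/dt.
    For N(t) = N0 * exp(-lam * t) every entry equals lam, so systematic
    oscillations around a constant value signal nonexponential decay."""
    return [-(math.log(counts[i + 1]) - math.log(counts[i]))
            / (times[i + 1] - times[i])
            for i in range(len(times) - 1)]
```

Comparing the estimated kinetic function against the constant of the purely exponential case is the comparison that the factor of goodness quantifies.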