WorldWideScience

Sample records for random average process

  1. Matrix product approach for the asymmetric random average process

    International Nuclear Information System (INIS)

    Zielen, F; Schadschneider, A

    2003-01-01

    We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly
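
    These dynamics are easy to simulate directly. Below is a minimal sketch, assuming parallel updates and a Beta-distributed retained fraction; the Beta parameters, lattice size and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def arap_step(x, rng, a=1.0, b=1.0):
    """One parallel update of an asymmetric random average process.

    Each site i draws an independent fraction r_i ~ Beta(a, b), keeps
    r_i * x_i, and transfers (1 - r_i) * x_i to its right neighbour
    (periodic boundary conditions). Total mass is conserved exactly.
    """
    r = rng.beta(a, b, size=x.size)
    kept = r * x
    shipped = (1.0 - r) * x
    return kept + np.roll(shipped, 1)   # roll sends mass to the right

rng = np.random.default_rng(0)
x = rng.random(1000)           # random initial mass configuration
x *= x.size / x.sum()          # normalise mean mass per site to 1

for _ in range(10000):
    x = arap_step(x, rng)

print("mean mass:", x.mean())            # conserved (= 1 by construction)
print("second moment:", (x**2).mean())   # probes the stationary single-site density
```

    Varying a and b probes which local interactions lead to factorized stationary states, which is the question the paper answers exactly.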

  2. A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process

    NARCIS (Netherlands)

    C.M. Hafner (Christian); M.J. McAleer (Michael)

    2014-01-01

One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of

  3. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  4. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  5. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  6. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  7. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource. This is because the average coherence of random mixed states is bounded uniformly, whereas the average coherence of random pure states increases with increasing dimension. As an important application, we establish the typicality of relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.

  8. Effect of random edge failure on the average path length

    Energy Technology Data Exchange (ETDEWEB)

    Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)

    2011-10-14

We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks suffering random edge removal is derived first. Then, the formula is confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
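
    The effect is easy to probe numerically. The sketch below (not the authors' derivation) estimates the average path length of an Erdős–Rényi graph before and after random edge removal, restricting the computation to the largest connected component; the graph size and removal probability are arbitrary illustrative values.

```python
import random
import networkx as nx

def apl_after_edge_removal(G, p_remove, seed=0):
    """Average shortest-path length of the largest component of G
    after each edge is independently removed with probability p_remove."""
    rng = random.Random(seed)
    H = G.copy()
    H.remove_edges_from([e for e in G.edges if rng.random() < p_remove])
    giant = H.subgraph(max(nx.connected_components(H), key=len))
    return nx.average_shortest_path_length(giant)

G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=1)
giant0 = G.subgraph(max(nx.connected_components(G), key=len))
print("APL intact :", nx.average_shortest_path_length(giant0))
print("APL damaged:", apl_after_edge_removal(G, p_remove=0.3))
```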

  9. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood; for example, the convergence behavior and the accuracy of truncated perturbation series. Furthermore, the calculation of the high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact, sufficiently general and universal forms of averaged equations? If the answer is positive, there arises the problem of constructing these equations and analyzing them. There exist many publications related to these problems and oriented towards different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of the conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: (1) steady-state flow with sources in porous media with random conductivity; (2) transient flow with sources in compressible media with random conductivity and porosity; (3) non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversal isotropic, orthotropic), and we analyze the hypothesis about the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  10. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN(K)⟩ for each knot type K can be described by a function of the form ⟨ACN(K)⟩ = a(n − n₀)ln(n − n₀) + b(n − n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN(K)⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN(K)⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length nₑ(K) at which a statistical ensemble of configurations with given knot type K (upon cutting, equilibration and reclosure to a new knot type K′) does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.

  11. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well-known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...

  12. Average size of random polygons with fixed knot topology.

    Science.gov (United States)

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

We have evaluated by numerical simulation the average size R_K of random polygons of fixed knot topology K = ∅, 3₁, 3₁#4₁, and we have confirmed the scaling law R²_K ≈ N^(2ν_K) for the number N of polygonal nodes in a wide range: N = 100–2200. The best fit gives 2ν_K ≈ 1.11–1.16 with good fitting curves in the whole range of N. The estimate of 2ν_K is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν_K ≈ 1.01–1.07, which is close to the exponent of random polygons.

  13. A Campbell random process

    International Nuclear Information System (INIS)

    Reuss, J.D.; Misguich, J.H.

    1993-02-01

The Campbell process is a stationary random process which can have various correlation functions, according to the choice of an elementary response function. The statistical properties of this process are presented. A numerical algorithm and a subroutine for generating such a process are built up and tested for the physically interesting case of a Campbell process with Gaussian correlations. The (non-Gaussian) probability distribution appears to be similar to the Gamma distribution.
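
    A Campbell (shot-noise) process can be generated directly from its definition: a superposition of identical response functions centred at Poisson-distributed times. The sketch below is a minimal illustration with a Gaussian elementary response, which gives the Gaussian-correlation case tested in the report; the rate, amplitude and width are arbitrary choices, and this is not the report's subroutine.

```python
import numpy as np

def campbell_process(t, rate, response, rng, t_pad=10.0):
    """Sample a Campbell process psi(t) = sum_k g(t - t_k) on the grid t,
    where the event times t_k form a Poisson process of the given rate.
    t_pad extends the event window so edge effects are negligible."""
    t0, t1 = t[0] - t_pad, t[-1] + t_pad
    n_events = rng.poisson(rate * (t1 - t0))
    t_k = rng.uniform(t0, t1, size=n_events)
    return sum(response(t - tk) for tk in t_k)   # broadcast over the grid

tau = 1.0
gauss = lambda s: np.exp(-0.5 * (s / tau) ** 2)  # Gaussian elementary response

rng = np.random.default_rng(42)
t = np.linspace(0.0, 200.0, 4001)
psi = campbell_process(t, rate=2.0, response=gauss, rng=rng)

# Campbell's theorem: mean = rate * integral of g = rate * tau * sqrt(2*pi)
print("sample mean:", psi.mean(), " theory:", 2.0 * tau * np.sqrt(2 * np.pi))
```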

  14. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    Random geographical networks are realistic models for wireless sensor ... work are cheap, unreliable, with limited computational power and limited .... signal xj from node j, j does not need to transmit its degree to i in order to let i compute.

  15. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  16. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e⁺e⁻ annihilation at high energies tends to unity.

  17. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  18. The average inter-crossing number of equilateral random walks and polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Stasiak, A

    2005-01-01

In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ⟨ICN⟩ between two equilateral random walks of the same length n is approximately linear in terms of n, and we were able to determine the prefactor of the linear term, which is a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ⟨ICN⟩ is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ⟨ICN⟩ for large values of ρ. Our simulation result shows that the model in fact works very well for the entire range of ρ. We also study the mean ⟨ICN⟩ between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ⟨ICN⟩ between the two random walks (polygons) would still approach infinity if the length of the other random walk (polygon) approached infinity. The data provided by our simulations match our theoretical predictions very well.

  19. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

One of the more promising theoretical frames to analyse two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences in dealing with the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental
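
    For reference, the two averages contrasted above can be written out. A standard formulation over N realizations (notation assumed here, not quoted from the abstract), with c the local particle concentration and f a generic field:

```latex
\[
\overline{f} \;=\; \frac{1}{N}\sum_{k=1}^{N} f^{(k)}
\quad\text{(phasic average)},
\qquad
\widetilde{f} \;=\; \frac{\sum_{k=1}^{N} c^{(k)} f^{(k)}}{\sum_{k=1}^{N} c^{(k)}}
\quad\text{(mass-weighted average)}.
\]
```

    The two coincide whenever c is identical in every realization, which is exactly the single-realization case described above.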

  20. Random processes in nuclear reactors

    CERN Document Server

    Williams, M M R

    1974-01-01

Random Processes in Nuclear Reactors describes the problems that a nuclear engineer may meet which involve random fluctuations, and sets out in detail how they may be interpreted in terms of various models of the reactor system. Chapters discuss the origins of random processes and sources; the general technique applied to zero-power problems, bringing out the basic effect of fission, and of fluctuations in the lifetime of neutrons, on the measured response; the interpretation of power reactor noise; and associated problems connected with mechanical, hydraulic and thermal noise sources.

  1. Entanglement in random pure states: spectral density and average von Neumann entropy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)

    2011-11-04

    Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
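
    For context, the best-known special case of such results is Page's formula for the unitary-invariant class: for a random bipartite pure state with subsystem dimensions m ≤ n, the average von Neumann entropy of the reduced state is

```latex
\[
\langle S \rangle_{m,n} \;=\; \sum_{k=n+1}^{mn} \frac{1}{k} \;-\; \frac{m-1}{2n},
\qquad m \le n ,
\]
```

    which tends to ln m − m/(2n) for 1 ≪ m ≤ n, i.e. near-maximal average entanglement. The paper's closed forms cover all three invariant classes, with the symplectic class maximizing the average.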

  2. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18, 27, and 36mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations

  3. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  4. Disability Reconsideration Average Processing Time (in Days) (Excludes technical denials)

    Data.gov (United States)

    Social Security Administration — A presentation of the overall cumulative number of elapsed days (including processing time for transit, medical determinations, and SSA quality review) from the date...

  5. Average thermal stress in the Al+SiC composite due to its manufacturing process

    International Nuclear Information System (INIS)

    Miranda, Carlos A.J.; Libardi, Rosani M.P.; Marcelino, Sergio; Boari, Zoroastro M.

    2013-01-01

The numerical analysis framework used to obtain the average thermal stress in the Al+SiC composite due to its manufacturing process is presented along with the obtained results. The mixing of aluminum and SiC powders is done at elevated temperature, while the usage is at room temperature. A thermal stress state arises in the composite due to the different thermal expansion coefficients of the materials. Due to the particle size and the randomness of the SiC distribution, several sets of models were analyzed and a statistical procedure was used to evaluate the average stress state in the composite. In each model the particle positions, forms and sizes are randomly generated considering a volumetric ratio (VR) between 20% and 25%, close to an actual composite. The obtained stress field is represented by a certain number of iso-stress curves, each one weighted by the area it represents. The influence of the following was systematically investigated: (a) the material behavior: linear vs. non-linear; (b) the carbide particle form: circular vs. quadrilateral; (c) the number of iso-stress curves considered in each analysis; and (d) the model size (the number of particles). Each analyzed condition produced conclusions to guide the next step. Considering a confidence level of 95%, the average thermal stress value in the studied composite (20% ≤ VR ≤ 25%) is 175 MPa with a standard deviation of 10 MPa. Depending on its usage, this value should be taken into account when evaluating the material strength. (author)

  6. Multiple-scale stochastic processes: Decimation, averaging and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Bo, Stefano, E-mail: stefano.bo@nordita.org [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Celani, Antonio [Quantitative Life Sciences, The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, I-34151 - Trieste (Italy)

    2017-02-07

Recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For efficient modeling one usually focuses on the slower degrees of freedom, and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires performing two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories, such as residence times, counting statistics, fluxes, entropy production, etc., which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we will pedagogically present them here, as natural extensions of the ones employed for the trajectories. We will also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.
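
    As a schematic of the averaging operation for trajectories (generic textbook notation, not the paper's): with a slow variable x and a fast variable y,

```latex
\[
\dot{x} = f(x,y), \qquad \dot{y} = \tfrac{1}{\epsilon}\, g(x,y), \qquad \epsilon \ll 1,
\]
% the fast variable is eliminated by averaging f over the stationary
% density rho_x(y) of the fast dynamics at frozen x:
\[
\dot{\bar{x}} \;=\; F(\bar{x}) \;=\; \int f(\bar{x},y)\,\rho_{\bar{x}}(y)\,\mathrm{d}y .
\]
```

    The paper's point is that for trajectory functionals (currents, entropy production, etc.) this naive replacement can fail, and extra terms must be retained.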

  7. Dynamic Average Consensus and Consensusability of General Linear Multiagent Systems with Random Packet Dropout

    Directory of Open Access Journals (Sweden)

    Wen-Min Zhou

    2013-01-01

This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout that happens during information exchange between agents. The packet dropout phenomenon is characterized as a Bernoulli random process. A distributed consensus protocol with weighted graph is proposed to address the packet dropout phenomenon. Through introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on control theory, the perturbation argument, and matrix theory, the necessary and sufficient condition for MASs to reach mean-square consensus is derived in terms of the stability of an array of low-dimensional matrices. Moreover, mean-square consensusable conditions with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
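
    A minimal numerical sketch of the setting is given below, with single-integrator agents rather than the paper's general linear dynamics, and an assumed symmetric link-failure model; the topology, weight and dropout probability are illustrative.

```python
import numpy as np

def consensus_with_dropout(x0, edges, weight, p_drop, steps, seed=0):
    """Distributed averaging where each undirected link fails
    independently per step with probability p_drop (Bernoulli process).
    Dropping a link symmetrically preserves the state average."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x_new = x.copy()
        for (i, j) in edges:
            if rng.random() >= p_drop:      # packet got through
                d = weight * (x[j] - x[i])
                x_new[i] += d               # symmetric update keeps
                x_new[j] -= d               # sum(x) invariant
        x = x_new
    return x

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # ring of 5 agents
x0 = [1.0, 3.0, 5.0, 7.0, 9.0]                     # average is 5.0
x = consensus_with_dropout(x0, edges, weight=0.2, p_drop=0.3, steps=500)
print(x)   # all entries close to 5.0 despite random packet loss
```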

  8. A signal theoretic introduction to random processes

    CERN Document Server

    Howard, Roy M

    2015-01-01

A fresh introduction to random processes utilizing signal theory. By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features: a coherent account of the mathematical fundamentals and signal theory that underpin the presented material; unique, in-depth coverage of

  9. Probability, random variables, and random processes theory and signal processing applications

    CERN Document Server

    Shynk, John J

    2012-01-01

    Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily of random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features: Several app

  10. Pseudo random signal processing theory and application

    CERN Document Server

    Zepernick, Hans-Jurgen

    2013-01-01

    In recent years, pseudo random signal processing has proven to be a critical enabler of modern communication, information, security and measurement systems. The signal's pseudo random, noise-like properties make it vitally important as a tool for protecting against interference, alleviating multipath propagation and allowing the potential of sharing bandwidth with other users. Taking a practical approach to the topic, this text provides a comprehensive and systematic guide to understanding and using pseudo random signals. Covering theoretical principles, design methodologies and applications

  11. Elements of random walk and diffusion processes

    CERN Document Server

    Ibe, Oliver C

    2013-01-01

Presents an important and unique introduction to random walk theory. Random walk is a stochastic process that has proven to be a useful model in understanding discrete-state discrete-time processes across a wide spectrum of scientific disciplines. Elements of Random Walk and Diffusion Processes provides an interdisciplinary approach by including numerous practical examples and exercises with real-world applications in operations research, economics, engineering, and physics. Featuring an introduction to powerful and general techniques that are used in the application of physical and dynamic

  12. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

In this work, we study the behaviour of time averages for stationary (non-ageing), but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as the mean square displacement and density, and analyse the behaviour of the time-averaged characteristic function, which gives insight into the rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.

  13. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    Science.gov (United States)

    Huang, Lei

    2015-01-01

    To solve the problem in which the conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using a robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of observation noise are used to achieve the estimated mean and variance of the observation noise. Using the robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of a rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
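
    To illustrate the idea of treating model parameters as Kalman state, here is a stripped-down sketch for an AR(1) noise model rather than a full robust ARMA filter; the paper's adaptive estimation of the observation-noise statistics is replaced by a fixed R for brevity, so this is only the skeleton of the approach.

```python
import numpy as np

# Simulate AR(1) gyro-like noise: y[t] = a * y[t-1] + e[t]
rng = np.random.default_rng(7)
a_true, n = 0.8, 5000
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true * y[t - 1] + rng.normal(scale=0.1)

# Kalman filter with the AR coefficient as the (quasi-static) state:
#   state:        a[t] = a[t-1]               (F = 1, tiny process noise Q)
#   observation:  y[t] = y[t-1] * a[t] + e[t] (H_t = y[t-1])
a_hat, P, Q, R = 0.0, 1.0, 1e-8, 0.1**2
for t in range(1, n):
    P += Q                              # predict
    H = y[t - 1]
    S = H * P * H + R                   # innovation variance
    K = P * H / S                       # Kalman gain
    a_hat += K * (y[t] - H * a_hat)     # update with innovation
    P *= (1.0 - K * H)

print(f"estimated a = {a_hat:.3f} (true {a_true})")
```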

  14. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    Science.gov (United States)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

In this paper, convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of the back items to the front items of the convergent remainder are independent of the network link number for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression of the average degree. Finally, simulations are presented to verify our theoretical results.

  15. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors' experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  16. Fundamentals of applied probability and random processes

    CERN Document Server

    Ibe, Oliver

    2014-01-01

The long-awaited revision of Fundamentals of Applied Probability and Random Processes expands on the central components that made the first edition a classic. The title is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics.

  17. A random matrix approach to VARMA processes

    International Nuclear Information System (INIS)

    Burda, Zdzislaw; Jarosz, Andrzej; Nowak, Maciej A; Snarska, Malgorzata

    2010-01-01

We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q₁, q₂) processes. In particular, we consider a limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime, the underlying random matrices are asymptotically equivalent to free random variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1, 1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q₁ > 1 and q₂ > 1.
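
    The Monte Carlo side of such a comparison is simple to reproduce in spirit: simulate an N-dimensional VAR(1) process for T steps with N/T fixed and collect the eigenvalues of the sample covariance. The sketch below is illustrative only; it does not reproduce the paper's analytical FRV density.

```python
import numpy as np

def var1_sample_covariance_spectrum(N=200, T=800, a=0.3, seed=0):
    """Eigenvalues of the sample covariance of a VAR(1) process
    x_t = a * x_{t-1} + eps_t with i.i.d. standard normal innovations."""
    rng = np.random.default_rng(seed)
    x = np.zeros((T, N))
    for t in range(1, T):
        x[t] = a * x[t - 1] + rng.standard_normal(N)
    C = x.T @ x / T                    # sample covariance (zero-mean process)
    return np.linalg.eigvalsh(C)

ev = var1_sample_covariance_spectrum()
print("eigenvalue range:", ev.min(), ev.max())
# For a = 0 this reduces to the Marchenko-Pastur law with ratio N/T = 0.25.
```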

  18. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.

  19. Scaling behaviour of randomly alternating surface growth processes

    International Nuclear Information System (INIS)

    Raychaudhuri, Subhadip; Shapir, Yonathan

    2002-01-01

    The scaling properties of the roughness of surfaces grown by two different processes randomly alternating in time are addressed. The duration of each application of the two primary processes is assumed to be independently drawn from given distribution functions. We analytically address processes in which the two primary processes are linear and extend the conclusions to nonlinear processes as well. The growth scaling exponent of the average roughness with the number of applications is found to be determined by the long time tail of the distribution functions. For processes in which both mean application times are finite, the scaling behaviour follows that of the corresponding cyclical process in which the uniform application time of each primary process is given by its mean. If the distribution functions decay with a small enough power law for the mean application times to diverge, the growth exponent is found to depend continuously on this power-law exponent. In contrast, the roughness exponent does not depend on the timing of the applications. The analytical results are supported by numerical simulations of various pairs of primary processes and with different distribution functions. Self-affine surfaces grown by two randomly alternating processes are common in nature (e.g., due to randomly changing weather conditions) and in man-made devices such as rechargeable batteries
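
    A toy numerical version of such an experiment, with one roughening and one smoothing primary process alternating for random durations (all model and parameter choices here are illustrative, not the paper's):

```python
import numpy as np

def deposit(h, rng, steps):
    """Random deposition: roughening primary process."""
    for _ in range(steps):
        h[rng.integers(h.size)] += 1.0

def deposit_relaxed(h, rng, steps):
    """Random deposition with surface relaxation (Family model): smoothing.
    A particle lands at i and settles on the lowest of {i-1, i, i+1}."""
    for _ in range(steps):
        i = rng.integers(h.size)
        nbrs = [i, (i - 1) % h.size, (i + 1) % h.size]
        h[min(nbrs, key=lambda j: h[j])] += 1.0

rng = np.random.default_rng(3)
h = np.zeros(256)
for _ in range(200):                   # randomly alternating applications
    for process in (deposit, deposit_relaxed):
        duration = rng.geometric(0.01)  # random application time, mean 100
        process(h, rng, duration)
print("roughness W =", h.std())
```

    Tracking W after each application, for application-time distributions with and without finite means, is the kind of experiment the scaling predictions above refer to.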

  20. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-04-01

This work concerns the problem associated with the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  1. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-06-01

This work concerns the problem associated with the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and the limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  2. Scaling behaviour of randomly alternating surface growth processes

    CERN Document Server

    Raychaudhuri, S

    2002-01-01

    The scaling properties of the roughness of surfaces grown by two different processes randomly alternating in time are addressed. The duration of each application of the two primary processes is assumed to be independently drawn from given distribution functions. We analytically address processes in which the two primary processes are linear and extend the conclusions to nonlinear processes as well. The growth scaling exponent of the average roughness with the number of applications is found to be determined by the long time tail of the distribution functions. For processes in which both mean application times are finite, the scaling behaviour follows that of the corresponding cyclical process in which the uniform application time of each primary process is given by its mean. If the distribution functions decay with a small enough power law for the mean application times to diverge, the growth exponent is found to depend continuously on this power-law exponent. In contrast, the roughness exponent does not depe...

  3. A new mathematical process for the calculation of average forms of teeth.

    Science.gov (United States)

    Mehl, A; Blanz, V; Hickel, R

    2005-12-01

Qualitative visual inspections and linear metric measurements have been the predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then, the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms from both computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between one reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than ±30 μm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth, even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.

  4. Effects of stratospheric aerosol surface processes on the LLNL two-dimensional zonally averaged model

    International Nuclear Information System (INIS)

    Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.

    1992-01-01

We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl- and N-families in the lower stratosphere, compared to a model including only gas-phase photochemical reactions.

  5. Extracting gravitational waves induced by plasma turbulence in the early Universe through an averaging process

    International Nuclear Information System (INIS)

    Garrison, David; Ramirez, Christopher

    2017-01-01

    This work is a follow-up to the paper, ‘Numerical relativity as a tool for studying the early Universe’. In this article, we determine if cosmological gravitational waves can be accurately extracted from a dynamical spacetime using an averaging process as opposed to conventional methods of gravitational wave extraction using a complex Weyl scalar. We calculate the normalized energy density, strain and degree of polarization of gravitational waves produced by a simulated turbulent plasma similar to what was believed to have existed shortly after the electroweak scale. This calculation is completed using two numerical codes, one which utilizes full general relativity calculations based on modified BSSN equations while the other utilizes a linearized approximation of general relativity. Our results show that the spectrum of gravitational waves calculated from the nonlinear code using an averaging process is nearly indistinguishable from those calculated from the linear code. This result validates the use of the averaging process for gravitational wave extraction of cosmological systems. (paper)

  6. Provable quantum advantage in randomness processing

    OpenAIRE

    Dale, H; Jennings, D; Rudolph, T

    2015-01-01

    Quantum advantage is notoriously hard to find and even harder to prove. For example the class of functions computable with classical physics actually exactly coincides with the class computable quantum-mechanically. It is strongly believed, but not proven, that quantum computing provides exponential speed-up for a range of problems, such as factoring. Here we address a computational scenario of "randomness processing" in which quantum theory provably yields, not only resource reduction over c...

  7. Random-sign observables nonvanishing upon averaging: Enhancement of weak perturbations and parity nonconservation in compound nuclei

    International Nuclear Information System (INIS)

    Flambaum, V.V.; Gribakin, G.F.

    1994-01-01

Weak perturbations can be strongly enhanced in many-body systems that have dense spectra of excited states (compound nuclei, rare-earth atoms, molecules, clusters, quantum dots, etc.). Statistical consideration shows that in the case of zero-width states the probability distribution for the effect of the perturbation has an infinite variance and does not obey the standard central limit theorem, i.e., the probability density for the average effect X = (1/n)∑ᵢ₌₁ⁿ xᵢ does not tend to a Gaussian (normal) distribution with variance σₙ = σ₁/√n, where n is the ''number of measurements.'' Since for probability densities of this form [f(x) ≈ a/x² at large x] the limiting distribution is Fₙ(X) = a/(X² + π²a²) for n ≫ 1, the breadth of the distribution does not decrease with the increase of n. This means the following. (1) In spite of the random signs of observable effects for different compound states, the probability of finding a large average effect for n levels is the same as that for a single-resonance measurement. (2) In some cases one does not need to resolve individual compound resonances, and the enhanced value of the effect can be observed in the integral spectrum. This substantially increases the chances to observe statistical enhancement of weak perturbations in different reactions and systems. (3) The average value of parity- and time-nonconserving effects in low-energy nucleon scattering cannot be described by a smooth weak optical potential. This ''potential'' would randomly fluctuate as a function of energy, with typical magnitudes much larger than the nucleon-nucleus weak potential. The effect of finite compound-state widths is considered.
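
    The failure of the standard central limit theorem described here is easy to demonstrate numerically: for Cauchy-like variables (density ~ a/x² at large x) the spread of the sample mean does not shrink as n grows. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_of_mean(n, trials=5000):
    """Interquartile range of the average of n standard Cauchy variables."""
    means = rng.standard_cauchy((trials, n)).mean(axis=1)
    q25, q75 = np.percentile(means, [25, 75])
    return q75 - q25

for n in (1, 10, 100, 1000):
    print(f"n = {n:5d}  IQR of mean = {spread_of_mean(n):.3f}")
# The IQR stays near 2.0 for every n: averaging does not narrow the
# distribution, unlike the Gaussian sigma_1/sqrt(n) behaviour.
```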

  8. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods, and we also discuss various other types of models for which our methods apply.

  9. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods, and we also discuss various other types of models for which our methods apply.

  10. Fundamentals of applied probability and random processes

    CERN Document Server

    Ibe, Oliver

    2005-01-01

This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study. Features: a good and solid introduction to probability theory and stochastic processes; logically organized, with writing presented in a clear manner; a choice of topics that is comprehensive within the area of probability; ample homework problems organized into chapter sections.

  11. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
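
    In the equivalent-noise framework assumed here, direction-discrimination variance is commonly modelled as (a standard formulation, not quoted from the paper):

```latex
\[
\sigma_{\mathrm{obs}}^{2} \;=\; \frac{\sigma_{\mathrm{int}}^{2} + \sigma_{\mathrm{ext}}^{2}}{n_{\mathrm{samp}}},
\]
% sigma_int: internal noise, sigma_ext: externally added direction noise,
% n_samp: effective number of local estimates pooled (averaging capacity).
```

    Fitting observed thresholds at several external-noise levels separates the internal-noise term from the pooling term, which is how the two developmental limits above are disentangled.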

  12. Macrotransport processes: Brownian tracers as stochastic averagers in effective medium theories of heterogeneous media

    International Nuclear Information System (INIS)

    Brenner, H.

    1991-01-01

Macrotransport processes (generalized Taylor dispersion phenomena) constitute coarse-grained descriptions of comparable convective diffusive-reactive microtransport processes, the latter supposed governed by microscale linear constitutive equations and boundary conditions, but characterized by spatially nonuniform phenomenological coefficients. Following a brief review of existing applications of the theory, the author focuses, by way of background information, upon the original (and now classical) Taylor–Aris dispersion problem, involving the combined convective and molecular diffusive transport of a point-size Brownian solute molecule (tracer) suspended in a Poiseuille solvent flow within a circular tube. A series of elementary generalizations of this prototype problem to chromatographic-like solute transport processes in tubes is used to illustrate some novel statistical-physical features. These examples emphasize the fact that a solute molecule may, on average, move axially down the tube at a different mean velocity (either larger or smaller) than that of a solvent molecule. Moreover, this solute molecule may suffer axial dispersion about its mean velocity at a rate greatly exceeding that attributable to its axial molecular diffusion alone. Such chromatographic anomalies represent novel macroscale non-linearities originating from physicochemical interactions between spatially inhomogeneous convective-diffusive-reactive microtransport processes.
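
    For the passive point tracer of the original tube problem (no wall interactions), the coarse-grained result is the classical Taylor–Aris effective dispersivity: the tracer moves at the mean solvent speed but disperses axially with

```latex
\[
D_{\mathrm{eff}} \;=\; D_m \;+\; \frac{a^{2}\,\bar{U}^{2}}{48\,D_m},
\]
% a: tube radius, D_m: molecular diffusivity, \bar{U}: mean Poiseuille speed.
```

    The chromatographic generalizations discussed above modify both the mean speed and this dispersion coefficient.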

  13. Asymptotic theory of weakly dependent random processes

    CERN Document Server

    Rio, Emmanuel

    2017-01-01

    Presenting tools to aid understanding of asymptotic theory and weakly dependent processes, this book is devoted to inequalities and limit theorems for sequences of random variables that are strongly mixing in the sense of Rosenblatt, or absolutely regular. The first chapter introduces covariance inequalities under strong mixing or absolute regularity. These covariance inequalities are applied in Chapters 2, 3 and 4 to moment inequalities, rates of convergence in the strong law, and central limit theorems. Chapter 5 concerns coupling. In Chapter 6 new deviation inequalities and new moment inequalities for partial sums via the coupling lemmas of Chapter 5 are derived and applied to the bounded law of the iterated logarithm. Chapters 7 and 8 deal with the theory of empirical processes under weak dependence. Lastly, Chapter 9 describes links between ergodicity, return times and rates of mixing in the case of irreducible Markov chains. Each chapter ends with a set of exercises. The book is an updated and extended ...

  14. Probability, random processes, and ergodic properties

    CERN Document Server

    Gray, Robert M

    1988-01-01

    This book has been written for several reasons, not all of which are academic. This material was for many years the first half of a book in progress on information and ergodic theory. The intent was and is to provide a reasonably self-contained advanced treatment of measure theory, probability theory, and the theory of discrete time random processes with an emphasis on general alphabets and on ergodic and stationary properties of random processes that might be neither ergodic nor stationary. The intended audience was mathematically inclined engineering graduate students and visiting scholars who had not had formal courses in measure theoretic probability. Much of the material is familiar stuff for mathematicians, but many of the topics and results have not previously appeared in books. The original project grew too large and the first part contained much that would likely bore mathematicians and discourage them from the second part. Hence I finally followed the suggestion to separate the material and split...

  15. Ra and the average effective strain of surface asperities deformed in metal-working processes

    DEFF Research Database (Denmark)

    Bay, Niels; Wanheim, Tarras; Petersen, A. S

    1975-01-01

    Based upon a slip-line analysis of the plastic deformation of surface asperities, a theory is developed determining the Ra-value (c.l.a.) and the average effective strain in the surface layer when deforming asperities in metal-working processes. The ratio between Ra and Ra0, the Ra-value after ... and before deformation, is a function of the nominal normal pressure and the initial slope γ0 of the surface asperities. The last parameter does not influence Ra significantly. The average effective strain ε̄e in the deformed surface layer is a function of the nominal normal pressure ... and γ0. ε̄e is highly dependent on γ0, ε̄e increasing with increasing γ0. It is shown that the Ra-value and the strain are hardly affected by the normal pressure until interacting deformation of the asperities begins, that is until the limit of Amonton's law...

  16. Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate

    International Nuclear Information System (INIS)

    Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun; Pan Tinsu

    2008-01-01

    Dose calculation for thoracic radiotherapy is commonly performed on a free-breathing helical CT despite artifacts caused by respiratory motion. Four-dimensional computed tomography (4D-CT) is one method to incorporate motion information into the treatment planning process. Some centers now use the respiration-averaged CT (RACT), the pixel-by-pixel average of the ten phases of 4D-CT, for dose calculation. This method, while sparing the tedious task of 4D dose calculation, still requires 4D-CT technology. The authors have recently developed a means to reconstruct RACT directly from the unsorted cine CT data from which 4D-CT is formed, bypassing the need for a respiratory surrogate. Using RACT from cine CT for dose calculation may thus be a means to incorporate motion information into dose calculation without performing 4D-CT. The purpose of this study was to determine whether RACT from cine CT can be substituted for RACT from 4D-CT for the purposes of dose calculation, and whether increasing the cine duration can decrease differences between the dose distributions. Cine CT data and corresponding 4D-CT simulations for 23 patients with at least two breathing cycles per cine duration were retrieved. RACT was generated four ways: first, from the ten phases of 4D-CT; second, from 1 breathing cycle of images; third, from 1.5 breathing cycles of images; and fourth, from 2 breathing cycles of images. The clinical treatment plan was transferred to each RACT and the dose was recalculated. Dose planes were exported at orthogonal planes through the isocenter (coronal, sagittal, and transverse orientations). The resulting dose distributions were compared using the gamma (γ) index within the planning target volume (PTV). Failure criteria were set to 2%/1 mm. A follow-up study with 50 additional lung cancer patients was performed to increase the sample size. The same dose recalculation and analysis was performed. In the primary patient group, 22 of 23 patients had 100% of points within the PTV pass γ criteria
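
    For readers unfamiliar with the γ index used above, the sketch below shows a simplified one-dimensional global gamma computation with the 2%/1 mm criteria; clinical implementations operate on 2-D or 3-D dose planes with optimized searches, so this only illustrates the principle:

      import numpy as np

      def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.02, dta=1.0):
          # dd: dose-difference criterion (fraction of max reference dose)
          # dta: distance-to-agreement criterion [mm]; x in mm
          d_norm = dd * dose_ref.max()
          gammas = []
          for xi, di in zip(x, dose_ref):
              g2 = ((x - xi) / dta) ** 2 + ((dose_eval - di) / d_norm) ** 2
              gammas.append(np.sqrt(g2.min()))
          return (np.array(gammas) <= 1.0).mean()

      x = np.linspace(0, 50, 501)                  # positions [mm]
      ref = np.exp(-((x - 25) / 10) ** 2)          # reference dose profile
      ev = np.exp(-((x - 25.3) / 10) ** 2) * 1.01  # shifted/scaled profile
      print(f"gamma pass rate: {100 * gamma_pass_rate(x, ref, ev):.1f}%")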

  17. Traffic and random processes an introduction

    CERN Document Server

    Mauro, Raffaele

    2015-01-01

    This book deals in a basic and systematic manner with the fundamentals of random function theory and looks at some aspects related to arrival, vehicle headway and operational speed processes at the same time. The work serves as a useful practical and educational tool and aims at providing stimulus and motivation to investigate issues of such a strong applicative interest. It has a clearly discursive and concise structure, in which numerical examples are given to clarify the applications of the suggested theoretical model. Some statistical characterizations are fully developed in order to illustrate the peculiarities of specific modeling approaches; finally, there is a useful bibliography for in-depth thematic analysis.

  18. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.
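
    As a rough illustration only (neither Walker's nor Whittle's estimator is reproduced here), exact Gaussian maximum likelihood for a moving average process is available in standard software; the sketch below simulates an MA(1) series and recovers its parameter by MLE using statsmodels:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(0)
      e = rng.standard_normal(2000)
      y = e[1:] + 0.6 * e[:-1]       # Gaussian MA(1) with theta = 0.6

      res = ARIMA(y, order=(0, 0, 1)).fit()
      print(res.params)              # const, MA(1) coefficient, sigma^2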

  19. UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS

    Data.gov (United States)

    National Aeronautics and Space Administration — UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS AMY MCGOVERN, TIMOTHY SUPINIE, DAVID JOHN GAGNE II, NATHANIEL TROUTMAN,...

  20. The Initial Regression Statistical Characteristics of Intervals Between Zeros of Random Processes

    Directory of Open Access Journals (Sweden)

    V. K. Hohlov

    2014-01-01

    The article substantiates the initial regression statistical characteristics of intervals between zeros of realizations of random processes and studies the properties that allow the use of these features in autonomous information systems (AIS) of near location (NL). Coefficients of the initial regression (CIR) that minimize the residual sum of squares of multiple initial regression views are justified on the basis of vector representations associated with a random-vector notion of the analyzed signal parameters. It is shown that even with no covariance-based private CIR it is possible to predict one random variable through another with respect to the deterministic components. The paper studies the dependence of the CIR of interval sizes between zeros of a narrowband wide-sense stationary random process on its energy spectrum. Particular CIR for random processes with Gaussian and rectangular energy spectra are obtained. It is shown that the considered CIRs do not depend on the average frequency of the spectra, are determined by the relative bandwidth of the energy spectra, and depend weakly on the type of spectrum. The CIR properties enable its use as an informative parameter when implementing temporary regression methods of signal processing, invariant to the average rate and variance of the input implementations. We consider estimates of the average energy spectrum frequency of a stationary random process obtained by calculating the length of the time interval corresponding to a specified number of intervals between zeros. It is shown that the relative variance in estimating the average energy spectrum frequency of a stationary random process with increasing relative bandwidth ceases to depend on the particular process realization when more than ten intervals between zeros are processed. The obtained results can be used in the AIS NL to solve the tasks of detection and signal recognition, when a decision is made in conditions of unknown mathematical expectations on a limited observation
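
    The last idea above, estimating the average energy spectrum frequency from the intervals between zeros, can be illustrated with a short simulation: for a narrowband process, the mean frequency is approximately the number of zeros divided by twice the observation time (all parameter values below are illustrative):

      import numpy as np

      rng = np.random.default_rng(1)
      fs, T, f0, bw = 1000.0, 20.0, 50.0, 5.0   # Hz, s, center freq, bandwidth
      n = int(fs * T)
      freqs = np.fft.rfftfreq(n, 1 / fs)
      spec = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
      spec[np.abs(freqs - f0) > bw / 2] = 0.0   # rectangular energy spectrum
      x = np.fft.irfft(spec, n)                 # narrowband Gaussian process

      zeros = np.count_nonzero(x[:-1] * x[1:] < 0)   # sign changes
      print(f"estimated mean frequency: {zeros / (2 * T):.2f} Hz (true {f0} Hz)")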

  1. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
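
    A minimal sketch of the AR-to-MA pipeline described above, using statsmodels rather than the paper's FORTRAN algorithm: fit an AR model to a synthetic series by Yule-Walker, then expand it into its equivalent moving average (pulse) representation:

      import numpy as np
      from statsmodels.regression.linear_model import yule_walker
      from statsmodels.tsa.arima_process import arma2ma

      rng = np.random.default_rng(2)
      x = np.zeros(5000)
      for t in range(2, 5000):                      # synthetic AR(2) series
          x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()

      rho, sigma = yule_walker(x, order=2)          # AR coefficient estimates
      ma = arma2ma(np.r_[1, -rho], [1.0], lags=10)  # MA (pulse) weights
      print(ma)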

  2. A Computerized Approach to Trickle-Process, Random Assignment.

    Science.gov (United States)

    Braucht, G. Nicholas; Reichardt, Charles S.

    1993-01-01

    Procedures for implementing random assignment with trickle processing and ways they can be corrupted are described. A computerized method for implementing random assignment with trickle processing is presented as a desirable alternative in many situations and a way of protecting against threats to assignment validity. (SLD)
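
    The cited procedure itself is not reproduced here, but a common way to implement random assignment under trickle processing is a pre-generated permuted-block sequence, which keeps arms balanced and leaves nothing to ad hoc decisions at enrollment time (a hedged sketch):

      import random

      def permuted_block_assigner(arms=("A", "B"), block_size=4, seed=42):
          # yields assignments one at a time as participants trickle in
          rng = random.Random(seed)
          while True:
              block = list(arms) * (block_size // len(arms))
              rng.shuffle(block)
              yield from block

      assigner = permuted_block_assigner()
      for participant in ["p1", "p2", "p3", "p4", "p5"]:
          print(participant, "->", next(assigner))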

  3. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges of large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues. First, the quality might be reduced due to thermal damage, and second, the high process energy needed for sublimation of the carbon fibers requires laser sources with high average power for productive processing. To keep the thermal damage of the CFRP below 10 μm, intensities above 10^8 W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems is up to now in the range of several tens to a few hundred watts. To sublimate the carbon fibers, a large volume-specific enthalpy of 85 J/mm³ is necessary. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at the industry-typical 100 mm/sec requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) allowed for the first time to process CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of CFRP with a thickness of 2 mm at an effective average cutting speed of 150 mm/sec with a thermal damage below 10 μm was demonstrated.
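
    The kilowatt estimate quoted above follows directly from the volume-specific enthalpy; a quick check of the arithmetic:

      # Power needed to sublimate a 2 mm thick, 0.2 mm wide kerf at 100 mm/s
      thickness = 2.0    # mm
      kerf = 0.2         # mm
      speed = 100.0      # mm/s
      enthalpy = 85.0    # J/mm^3

      removal_rate = thickness * kerf * speed   # mm^3/s
      power = enthalpy * removal_rate           # W
      print(f"{power / 1000:.1f} kW of absorbed average power")  # -> 3.4 kW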

  4. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We found that the ARMA analysis is well correlated with spatial structures of the flow, and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that the Υ is highest in regions where shear layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features where the ARMA technique is efficient in discriminating different stability branches of the system
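
    A minimal sketch of the kind of ARMA analysis described above, using statsmodels; the paper's Υ index is not reproduced here, so the distance of the best ARMA fit from a simple AR(1) (Obukhov-like) model is proxied by comparing information criteria:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)
      u = np.zeros(4000)
      e = rng.standard_normal(4000)
      for t in range(1, 4000):                  # synthetic velocity signal
          u[t] = 0.95 * u[t - 1] + e[t] + 0.4 * e[t - 1]

      bic_ar1 = ARIMA(u, order=(1, 0, 0)).fit().bic
      bic_arma = ARIMA(u, order=(1, 0, 1)).fit().bic
      print(f"BIC AR(1): {bic_ar1:.1f}  BIC ARMA(1,1): {bic_arma:.1f}")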

  5. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  6. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Herouvet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs when the actual slope of a given bed element is higher than the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. The volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
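
    A hedged one-dimensional sketch of the slope-check step described above (the actual algorithm operates on 2-D mesh elements and rotates them around an appropriate axis): if the slope between adjacent bed elements exceeds the internal friction angle, material is exchanged symmetrically so that the slope relaxes while mass is conserved:

      import numpy as np

      def relax_banks(z, dx=1.0, phi_deg=30.0, n_sweeps=50):
          # z: bed elevations; phi_deg: internal friction angle
          tan_phi = np.tan(np.radians(phi_deg))
          z = z.copy()
          for _ in range(n_sweeps):
              for i in range(len(z) - 1):
                  slope = (z[i] - z[i + 1]) / dx
                  if abs(slope) > tan_phi:      # unstable element
                      excess = (abs(slope) - tan_phi) * dx
                      move = np.sign(slope) * excess / 2
                      z[i] -= move              # symmetric exchange
                      z[i + 1] += move          # conserves total mass
          return z

      bed = np.array([5.0, 4.9, 4.8, 1.0, 0.9, 0.8])  # steep bank in middle
      print(relax_banks(bed))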

  7. Discrete random signal processing and filtering primer with Matlab

    CERN Document Server

    Poularikas, Alexander D

    2013-01-01

    Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering. Numerous Useful Examples, Problems, and Solutions - An Extensive and Powerful Review. Written for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills. The author, a leading expert in the field of electrical and computer engineering, offe...

  8. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  9. The concept of the average stress in the fracture process zone for the search of the crack path

    Directory of Open Access Journals (Sweden)

    Yu.G. Matvienko

    2015-10-01

    The concept of the average stress has been employed to propose the maximum average tangential stress (MATS) criterion for predicting the direction of the fracture angle. This criterion states that a crack grows when the maximum average tangential stress in the fracture process zone ahead of the crack tip reaches its critical value, and that the crack growth direction coincides with the direction of the maximum average tangential stress along a constant radius around the crack tip. The tangential stress is described by the singular and nonsingular (T-stress) terms in the Williams series solution. To demonstrate the validity of the proposed MATS criterion, this criterion is directly applied to experiments reported in the literature for the mixed mode I/II crack growth behavior of Guiting limestone. The predicted directions of the fracture angle are consistent with the experimental data. The concept of the average stress has also been employed to predict the surface crack path under rolling-sliding contact loading. The proposed model considers the size and orientation of the initial crack, normal and tangential loading due to rolling-sliding contact, as well as the influence of fluid trapped inside the crack by a hydraulic pressure mechanism. The MATS criterion is directly applied to an equivalent contact model for surface crack growth on a gear tooth flank.
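
    A sketch of the MATS computation implied above, under the usual Williams expansion with T-stress, σθθ(r,θ) = (2πr)^(-1/2) cos(θ/2)[K_I cos²(θ/2) - (3/2)K_II sin θ] + T sin²θ: averaging the singular term over a process zone of length d ahead of the tip and maximizing over θ gives the predicted fracture angle (the K_I, K_II, T and d values below are illustrative, not from the paper):

      import numpy as np

      def avg_tangential_stress(theta, KI, KII, T, d):
          # singular term averaged over r in (0, d):
          # (1/d) * integral_0^d (2*pi*r)**-0.5 dr = sqrt(2 / (pi * d))
          sing = np.sqrt(2 / (np.pi * d)) * np.cos(theta / 2) * (
              KI * np.cos(theta / 2) ** 2 - 1.5 * KII * np.sin(theta))
          return sing + T * np.sin(theta) ** 2

      theta = np.linspace(-np.pi * 0.99, np.pi * 0.99, 2001)
      s = avg_tangential_stress(theta, KI=1.0, KII=0.5, T=-0.2, d=0.01)
      print(f"predicted fracture angle: {np.degrees(theta[s.argmax()]):.1f} deg")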

  10. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    A spatial point process X, whose Papangelou conditional intensity is bounded by a function β, is superposed with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking...

  11. Advanced pulse oximeter signal processing technology compared to simple averaging. I. Effect on frequency of alarms in the operating room.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticareaverage3s) and a similar unit with the signal averaging time set at 21 seconds (Criticareaverage21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticareaverage21s monitor (5 sec). The incidence of false alarms was higher with Criticareaverage3s: in eight patients, Criticareaverage3s produced 20 false alarms, significantly more than either the Nellcor with Oxismart signal processing or the Criticare monitor with the longer averaging time of 21 seconds.

  12. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  13. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    International Nuclear Information System (INIS)

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-01-01

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data
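
    The processing chain PySpline automates can be caricatured in a few lines: fit and subtract a smooth polynomial background, k³-weight the EXAFS oscillations, and Fourier transform to R-space. The sketch below uses synthetic data and is not PySpline's actual implementation:

      import numpy as np

      rng = np.random.default_rng(4)
      k = np.linspace(0.5, 12, 600)                     # photoelectron k [1/A]
      chi = np.sin(2 * 2.2 * k) / k**2 + 0.002 * rng.standard_normal(k.size)

      background = np.polyval(np.polyfit(k, chi, 3), k) # smooth polynomial trend
      chi_k3 = (chi - background) * k**3                # k^3-weighted EXAFS

      f = np.fft.rfftfreq(4096, d=k[1] - k[0])          # cycles per unit k
      r = np.pi * f                                     # since chi ~ sin(2kR)
      ft = np.abs(np.fft.rfft(chi_k3, n=4096))
      print(f"dominant scattering distance ~ {r[ft.argmax()]:.2f} A")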

  14. Scattering analysis of point processes and random measures

    International Nuclear Information System (INIS)

    Hanisch, K.H.

    1984-01-01

    In the present paper scattering analysis of point processes and random measures is studied. Known formulae which connect the scattering intensity with the pair distribution function of the studied structures are proved in a rigorous manner with tools of the theory of point processes and random measures. For some special fibre processes the scattering intensity is computed. For a class of random measures, namely for 'grain-germ-models', a new formula is proved which yields the pair distribution function of the 'grain-germ-model' in terms of the pair distribution function of the underlying point process (the 'germs') and of the mean structure factor and the mean squared structure factor of the particles (the 'grains'). (author)

  15. Renewal theory for perturbed random walks and similar processes

    CERN Document Server

    Iksanov, Alexander

    2016-01-01

    This book offers a detailed review of perturbed random walks, perpetuities, and random processes with immigration. Being of major importance in modern probability theory, both theoretical and applied, these objects have been used to model various phenomena in the natural sciences as well as in insurance and finance. The book also presents the many significant results and efficient techniques and methods that have been worked out in the last decade. The first chapter is devoted to perturbed random walks and discusses their asymptotic behavior and various functionals pertaining to them, including supremum and first-passage time. The second chapter examines perpetuities, presenting results on continuity of their distributions and the existence of moments, as well as weak convergence of divergent perpetuities. Focusing on random processes with immigration, the third chapter investigates the existence of moments, describes long-time behavior and discusses limit theorems, both with and without scaling. Chapters fou...

  16. On the speed towards the mean for continuous time autoregressive moving average processes with applications to energy markets

    International Nuclear Information System (INIS)

    Benth, Fred Espen; Taib, Che Mohd Imran Che

    2013-01-01

    We extend the concept of half life of an Ornstein–Uhlenbeck process to Lévy-driven continuous-time autoregressive moving average processes with stochastic volatility. The half life becomes state dependent, and we analyze its properties in terms of the characteristics of the process. An empirical example based on daily temperatures observed in Petaling Jaya, Malaysia, is presented, where the proposed model is estimated and the distribution of the half life is simulated. The stationarity of the dynamics yields futures prices which asymptotically tend to a constant at an exponential rate when time to maturity goes to infinity. The rate is characterized by the eigenvalues of the dynamics. An alternative description of this convergence can be given in terms of our concept of half life. - Highlights: • The concept of half life is extended to Lévy-driven continuous-time autoregressive moving average processes. • The dynamics of Malaysian temperatures are modeled using a continuous-time autoregressive model with stochastic volatility. • Forward prices on temperature become constant when time to maturity tends to infinity. • Convergence in time to maturity is at an exponential rate given by the eigenvalues of the temperature model.
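
    For the basic Ornstein-Uhlenbeck case dX = -λX dt + σ dW underlying the generalization above, the half life is ln 2/λ, the time for the expected deviation from the mean to halve; with daily data it can be read off an AR(1) coefficient (the value below is illustrative):

      import numpy as np

      phi = 0.82              # illustrative daily AR(1) coefficient
      lam = -np.log(phi)      # mean-reversion speed per day
      half_life = np.log(2) / lam
      print(f"half life ~ {half_life:.1f} days")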

  17. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    Science.gov (United States)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t>0, we have that N_α(N_β(t)) equals in distribution the random sum Σ_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν∈(0,1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form Θ(N(t)), t>0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
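
    The first identity above is easy to check by simulation: composing two independent Poisson processes coincides in distribution with a random sum of iid Poisson variables (rates and sample size below are arbitrary):

      import numpy as np

      rng = np.random.default_rng(5)
      alpha, beta, t, n = 2.0, 3.0, 1.0, 20_000

      inner = rng.poisson(beta * t, size=n)        # N_beta(t)
      composed = rng.poisson(alpha * inner)        # N_alpha(N_beta(t))
      random_sum = np.array([rng.poisson(alpha, k).sum() for k in inner])

      print(composed.mean(), random_sum.mean())    # both ~ alpha*beta*t = 6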

  18. Money creation process in a random redistribution model

    Science.gov (United States)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
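
    A toy version of the random-exchange-with-debt dynamics described above; every agent starts with neither money nor debt, and by construction the total money created equals the total debt issued (parameters arbitrary):

      import numpy as np

      rng = np.random.default_rng(6)
      money = np.zeros(1000)                 # all agents start at zero
      for _ in range(200_000):
          i, j = rng.integers(0, money.size, size=2)
          if i != j:
              money[i] -= 1.0                # payer may go into debt
              money[j] += 1.0

      # money created equals total debt, since the sum stays zero
      print(money[money > 0].sum(), -money[money < 0].sum())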

  19. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    Science.gov (United States)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  20. Melnikov processes and chaos in randomly perturbed dynamical systems

    Science.gov (United States)

    Yagasaki, Kazuyuki

    2018-07-01

    We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegenerate condition, no matter how small the random forcing terms are. This result is very contrasting to the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory for the Duffing oscillator subjected to the Ornstein–Uhlenbeck process parametrically.

  1. Continuous state branching processes in random environment: The Brownian case

    OpenAIRE

    Palau, Sandra; Pardo, Juan Carlos

    2015-01-01

    We consider continuous state branching processes that are perturbed by a Brownian motion. These processes are constructed as the unique strong solution of a stochastic differential equation. The long-term extinction and explosion behaviours are studied. In the stable case, the extinction and explosion probabilities are given explicitly. We find three regimes for the asymptotic behaviour of the explosion probability and, as in the case of branching processes in random environment, we find five...

  2. Designing neural networks that process mean values of random variables

    International Nuclear Information System (INIS)

    Barber, Michael J.; Clark, John W.

    2014-01-01

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence

  4. On the joint statistics of stable random processes

    International Nuclear Information System (INIS)

    Hopcraft, K I; Jakeman, E

    2011-01-01

    A utilitarian continuous bi-variate random process whose first-order probability density function is a stable random variable is constructed. Results paralleling some of those familiar from the theory of Gaussian noise are derived. In addition to the joint-probability density for the process, these include fractional moments and structure functions. Although the correlation functions for stable processes other than Gaussian do not exist, we show that there is coherence between values adopted by the process at different times, which identifies a characteristic evolution with time. The distribution of the derivative of the process, and the joint-density function of the value of the process and its derivative measured at the same time are evaluated. These enable properties to be calculated analytically such as level crossing statistics and those related to the random telegraph wave. When the stable process is fractal, the proportion of time it spends at zero is finite and some properties of this quantity are evaluated, an optical interpretation for which is provided. (paper)

  5. Generation and monitoring of a discrete stable random process

    CERN Document Server

    Hopcraft, K I; Matthews, J O

    2002-01-01

    A discrete stochastic process with stationary power law distribution is obtained from a death-multiple immigration population model. Emigrations from the population form a random series of events which are monitored by a counting process with finite-dynamic range and response time. It is shown that the power law behaviour of the population is manifested in the intermittent behaviour of the series of events. (letter to the editor)

  6. Spatial birth-and-death processes in random environment

    OpenAIRE

    Fernandez, Roberto; Ferrari, Pablo A.; Guerberoff, Gustavo R.

    2004-01-01

    We consider birth-and-death processes of objects (animals) defined in Z^d having unit death rates and random birth rates. For animals with uniformly bounded diameter we establish conditions on the rate distribution under which the following holds for almost all realizations of the birth rates: (i) the process is ergodic with at worst power-law time mixing; (ii) the unique invariant measure has exponential decay of (spatial) correlations; (iii) there exists a perfect-simulation algorit...

  7. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier Transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and ¹⁵N-edited NOESY-HSQC spectra of a ¹³C,¹⁵N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time.
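
    The key idea above, evaluating the 2D FT directly at pairs of frequencies from randomly sampled evolution times, amounts to a nonuniform Fourier sum; a toy sketch with a single synthetic resonance:

      import numpy as np

      rng = np.random.default_rng(7)
      t1 = rng.uniform(0, 0.05, 400)            # random evolution times [s]
      t2 = rng.uniform(0, 0.05, 400)
      f1_true, f2_true = 180.0, 120.0           # resonance frequencies [Hz]
      signal = np.exp(2j * np.pi * (f1_true * t1 + f2_true * t2))

      def spectrum(f1, f2):
          # direct Fourier sum evaluated at one frequency pair
          return abs(np.sum(signal * np.exp(-2j * np.pi * (f1 * t1 + f2 * t2))))

      print(spectrum(180.0, 120.0), spectrum(150.0, 90.0))  # peak vs off-peak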

  8. Random Matrices for Information Processing – A Democratic Vision

    DEFF Research Database (Denmark)

    Cakmak, Burak

    The thesis studies three important applications of random matrices to information processing. Our main contribution is that we consider probabilistic systems involving more general random matrix ensembles than the classical ensembles with iid entries, i.e. models that account for statistical...... dependence between the entries. Specifically, the involved matrices are invariant or fulfill a certain asymptotic freeness condition as their dimensions grow to infinity. Informally speaking, all latent variables contribute to the system model in a democratic fashion – there are no preferred latent variables...

  9. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    Science.gov (United States)

    Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient, alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed for efficiency and fairness, on the average, and has been recommended for implementation in PicOS. Simulations were carried out and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results prove that the randomized algorithm is the best and most attractive for implementation in PicOS, since it is most fair and has the least AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
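
    A toy, non-preemptive version of the simulation study described above: processes all arrive at time zero, the next process is drawn at random from the ready queue, and AWT/ATT are measured (this is only a sketch, not the paper's simulator):

      import random

      random.seed(8)
      bursts = [random.randint(1, 10) for _ in range(100)]  # CPU bursts,
                                                            # all arrive at t=0
      order = list(range(len(bursts)))
      random.shuffle(order)                                 # randomized policy

      t, waits, turnarounds = 0, [], []
      for pid in order:
          waits.append(t)               # time spent waiting before service
          t += bursts[pid]
          turnarounds.append(t)         # completion time since arrival
      print(f"AWT={sum(waits)/len(waits):.1f}  "
            f"ATT={sum(turnarounds)/len(turnarounds):.1f}")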

  10. Analytical explicit formulas of average run length for long memory process with ARFIMA model on CUSUM control chart

    Directory of Open Access Journals (Sweden)

    Wilasinee Peerajit

    2017-12-01

    This paper proposes explicit formulas for the exact Average Run Length (ARL), derived via an integral equation, for a CUSUM control chart when the observations are long memory processes with exponential white noise. To verify the accuracy of the ARLs, the authors compared the efficiency, in terms of the percentage of absolute difference, between the values obtained by the explicit formulas and by the numerical integral equation (NIE) method. The explicit formulas are based on the Banach fixed point theorem, which is used to guarantee the existence and uniqueness of the solution for ARFIMA(p,d,q). Results showed that the two methods are in good agreement, with a percentage of absolute difference of less than 0.23%. Therefore, the explicit formulas are an efficient alternative for implementation in real applications, because the computational CPU time for the ARLs from the explicit formulas is about 1 second, preferable over the NIE method.
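
    The ARL quantity in question can always be cross-checked by Monte Carlo; the sketch below estimates the in-control ARL of a one-sided CUSUM with iid exponential observations (the paper's long-memory ARFIMA case is harder and is exactly what the explicit formulas address; the h, k and rate values below are illustrative):

      import numpy as np

      rng = np.random.default_rng(9)

      def run_length(h=4.0, k=1.5, mean=1.0, max_n=100_000):
          # one-sided CUSUM statistic c_n = max(0, c_{n-1} + x_n - k)
          c = 0.0
          for n in range(1, max_n + 1):
              c = max(0.0, c + rng.exponential(mean) - k)
              if c > h:                 # chart signals
                  return n
          return max_n

      arl = np.mean([run_length() for _ in range(2000)])
      print(f"estimated in-control ARL ~ {arl:.0f}")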

  11. Solution-Processed Carbon Nanotube True Random Number Generator.

    Science.gov (United States)

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.

  12. Advanced pulse oximeter signal processing technology compared to simple averaging. II. Effect on frequency of alarms in the postanesthesia care unit.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new pulse oximeter (Nellcor Symphony N-3000, Pleasanton, CA) with signal processing technique (Oxismart) on the incidence of false alarms in the postanesthesia care unit (PACU). Prospective study. Nonuniversity hospital. 603 consecutive ASA physical status I, II, and III patients recovering from general or regional anesthesia in the PACU. We compared the number of alarms produced by a recently developed "third"-generation pulse oximeter (Nellcor Symphony N-3000) with Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504, Waukesha, WI). Patients were randomly assigned to either a Nellcor pulse oximeter or a Criticare with the signal averaging time set at either 12 or 21 seconds. For each patient the number of false (artifact) alarms was counted. The Nellcor generated one false alarm in 199 patients and 36 (in 31 patients) "loss of pulse" alarms. The conventional pulse oximeter with the averaging time set at 12 seconds generated a total of 32 false alarms in 17 of 197 patients [compared with the Nellcor, relative risk (RR) 0.06, confidence interval (CI) 0.01 to 0.25] and a total of 172 "loss of pulse" alarms in 79 patients (RR 0.39, CI 0.28 to 0.55). The conventional pulse oximeter with the averaging time set at 21 seconds generated 12 false alarms in 11 of 207 patients (compared with the Nellcor, RR 0.09, CI 0.02 to 0.48) and a total of 204 "loss of pulse" alarms in 81 patients (RR 0.40, CI 0.28 to 0.56). The lower incidence of false alarms of the conventional pulse oximeter with the longest averaging time compared with the shorter averaging time did not reach statistical significance (false alarms RR 0.62, CI 0.3 to 1.27; "loss of pulse" alarms RR 0.98, CI 0.77 to 1.3). To date, this is the first report of a pulse oximeter that produced almost no false alarms in the PACU.

  13. Optimal redundant systems for works with random processing time

    International Nuclear Information System (INIS)

    Chen, M.; Nakagawa, T.

    2013-01-01

    This paper studies the optimal redundant policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems

  14. Multifractal detrended fluctuation analysis of analog random multiplicative processes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, L.B.M.; Vermelho, M.V.D. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil); Lyra, M.L. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)], E-mail: marcelo@if.ufal.br; Viswanathan, G.M. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)

    2009-09-15

    We investigate non-Gaussian statistical properties of stationary stochastic signals generated by an analog circuit that simulates a random multiplicative process with weak additive noise. The random noises are originated by thermal shot noise and avalanche processes, while the multiplicative process is generated by a fully analog circuit. The resulting signal describes stochastic time series of current interest in several areas such as turbulence, finance, biology and environment, which exhibit power-law distributions. Specifically, we study the correlation properties of the signal by employing a detrended fluctuation analysis and explore its multifractal nature. The singularity spectrum is obtained and analyzed as a function of the control circuit parameter that tunes the asymptotic power-law form of the probability distribution function.

  15. Random migration processes between two stochastic epidemic centers.

    Science.gov (United States)

    Sazonov, Igor; Kelbert, Mark; Gravenor, Michael B

    2016-04-01

    We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Apparent scale correlations in a random multifractal process

    DEFF Research Database (Denmark)

    Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin

    2008-01-01

    We discuss various properties of a homogeneous random multifractal process, which are related to the issue of scale correlations. By design, the process has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several puzzling empirical details...

  17. Network formation determined by the diffusion process of random walkers

    International Nuclear Information System (INIS)

    Ikeda, Nobutoshi

    2008-01-01

    We studied the diffusion process of random walkers in networks formed by their traces. This model considers the rise and fall of links determined by the frequency of transports of random walkers. In order to examine the relation between the formed network and the diffusion process, a situation in which multiple random walkers start from the same vertex is investigated. The difference in diffusion rate of random walkers according to the difference in dimension of the initial lattice is very important for determining the time evolution of the networks. For example, complete subgraphs can be formed on a one-dimensional lattice while a graph with a power-law vertex degree distribution is formed on a two-dimensional lattice. We derived some formulae for predicting network changes for the 1D case, such as the time evolution of the size of nearly complete subgraphs and conditions for their collapse. The networks formed on the 2D lattice are characterized by the existence of clusters of highly connected vertices and their life time. As the life time of such clusters tends to be small, the exponent of the power-law distribution changes from γ ≅ 1-2 to γ ≅ 3
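
    A toy version of the trace-network mechanism described above: each transit reinforces the link used, while all links slowly decay and disappear when their weight falls below a threshold, so the surviving network is shaped by the diffusion itself (parameters arbitrary, a 1-D ring for simplicity):

      import random
      from collections import defaultdict

      random.seed(10)
      size = 20                                    # 1-D ring lattice
      weights = defaultdict(float)
      pos = 0
      for _ in range(5000):
          nxt = (pos + random.choice((-1, 1))) % size
          weights[frozenset((pos, nxt))] += 1.0    # reinforce traversed link
          for e in list(weights):
              weights[e] *= 0.999                  # slow decay of all links
              if weights[e] < 0.01:
                  del weights[e]                   # link falls away
          pos = nxt

      print(f"{len(weights)} surviving links out of {size}")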

  18. Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: a simulation study

    Directory of Open Access Journals (Sweden)

    Ma Jinhui

    2013-01-01

    Background: The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e., generalized estimating equations (GEE)) and cluster-specific (i.e., random-effects logistic regression (RELR)) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods: In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI) and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results: GEE performs well on all four measures, provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately, under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF 50. RELR performs well only when a small amount of data was missing and complete case analysis was applied. Conclusion: GEE performs well as long as appropriate missing data strategies are adopted based on the design of the CRTs and the percentage of missing data. In contrast, RELR does not perform well when either the standard or the within-cluster MI strategy is applied prior to the analysis.

  19. Order out of Randomness: Self-Organization Processes in Astrophysics

    Science.gov (United States)

    Aschwanden, Markus J.; Scholkmann, Felix; Béthune, William; Schmutz, Werner; Abramenko, Valentina; Cheung, Mark C. M.; Müller, Daniel; Benz, Arnold; Chernov, Guennadi; Kritsuk, Alexei G.; Scargle, Jeffrey D.; Melatos, Andrew; Wagoner, Robert V.; Trimble, Virginia; Green, William H.

    2018-03-01

    Self-organization is a property of dissipative nonlinear processes that are governed by a global driving force and a local positive feedback mechanism, which creates regular geometric and/or temporal patterns, and decreases the entropy locally, in contrast to random processes. Here we investigate for the first time a comprehensive number of (17) self-organization processes that operate in planetary physics, solar physics, stellar physics, galactic physics, and cosmology. Self-organizing systems create spontaneous " order out of randomness", during the evolution from an initially disordered system to an ordered quasi-stationary system, mostly by quasi-periodic limit-cycle dynamics, but also by harmonic (mechanical or gyromagnetic) resonances. The global driving force can be due to gravity, electromagnetic forces, mechanical forces (e.g., rotation or differential rotation), thermal pressure, or acceleration of nonthermal particles, while the positive feedback mechanism is often an instability, such as the magneto-rotational (Balbus-Hawley) instability, the convective (Rayleigh-Bénard) instability, turbulence, vortex attraction, magnetic reconnection, plasma condensation, or a loss-cone instability. Physical models of astrophysical self-organization processes require hydrodynamic, magneto-hydrodynamic (MHD), plasma, or N-body simulations. Analytical formulations of self-organizing systems generally involve coupled differential equations with limit-cycle solutions of the Lotka-Volterra or Hopf-bifurcation type.

  20. Average bioequivalence of single 500 mg doses of two oral formulations of levofloxacin: a randomized, open-label, two-period crossover study in healthy adult Brazilian volunteers

    Directory of Open Access Journals (Sweden)

    Eunice Kazue Kano

    2015-03-01

    Average bioequivalence of two 500 mg levofloxacin formulations available in Brazil, Tavanic® (Sanofi-Aventis Farmacêutica Ltda, Brazil, reference product) and Levaquin® (Janssen-Cilag Farmacêutica Ltda, Brazil, test product), was evaluated by means of a randomized, open-label, 2-way crossover study performed in 26 healthy Brazilian volunteers under fasting conditions. A single dose of 500 mg levofloxacin tablets was orally administered, and blood samples were collected over a period of 48 hours. Levofloxacin plasma concentrations were determined using a validated HPLC method. The pharmacokinetic parameters Cmax, Tmax, Kel, T1/2el, AUC0-t and AUC0-inf were calculated using noncompartmental analysis. Bioequivalence was determined by calculating 90% confidence intervals (90% CI) for the ratio of Cmax, AUC0-t and AUC0-inf values for the test and reference products, using logarithmically transformed data. Tolerability was assessed by monitoring vital signs and laboratory analysis results, by subject interviews and by spontaneous reports of adverse events. The 90% CIs for Cmax, AUC0-t and AUC0-inf were 92.1% - 108.2%, 90.7% - 98.0%, and 94.8% - 100.0%, respectively. The observed adverse events were nausea and headache. It was concluded that Tavanic® and Levaquin® are bioequivalent, since the 90% CIs are within the 80% - 125% interval proposed by regulatory agencies.

  1. Random Process Theory Approach to Geometric Heterogeneous Surfaces: Effective Fluid-Solid Interaction

    Science.gov (United States)

    Khlyupin, Aleksey; Aslyamov, Timur

    2017-06-01

    Realistic fluid-solid interaction potentials are essential in the description of confined fluids, especially in the case of geometrically heterogeneous surfaces. A correlated random field is considered as a model of a random surface with high geometric roughness. We provide a general theory of the effective coarse-grained fluid-solid potential, obtained by proper averaging of the free energy of fluid molecules which interact with the solid medium. This procedure is largely based on the theory of random processes. We apply the first-passage time probability problem and assume local Markov properties of the random surfaces. A general expression for the effective fluid-solid potential is obtained. In the case of small surface irregularities, an analytical approximation for the effective potential is proposed. Both amorphous materials with large surface roughness and crystalline solids with several types of fcc lattices are considered. It is shown that the wider the lattice spacing in terms of the molecular diameter of the fluid, the more the obtained potentials differ from the classical ones. A comparison with published Monte Carlo simulations is discussed. The work provides a promising approach to explore how random geometric heterogeneity affects the thermodynamic properties of fluids.

  2. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures for model construction, computational methods, and numerical experiments. A numerically stable FORTRAN algorithm for time series analysis has been developed. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
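
    To make the two time-domain models named above concrete, the following sketch (not from Scargle's paper; all parameter values are arbitrary) simulates an AR(1) and an MA(1) process and checks their lag-1 autocorrelations against the textbook values φ and θ/(1 + θ²).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
eps = rng.standard_normal(n)          # white-noise innovations

# AR(1): x[t] = phi * x[t-1] + eps[t]
phi = 0.8
x_ar = np.zeros(n)
for t in range(1, n):
    x_ar[t] = phi * x_ar[t - 1] + eps[t]

# MA(1): y[t] = eps[t] + theta * eps[t-1]
theta = 0.5
y_ma = eps.copy()
y_ma[1:] += theta * eps[:-1]

def acf1(z):
    """Lag-1 autocorrelation."""
    z = z - z.mean()
    return np.dot(z[:-1], z[1:]) / np.dot(z, z)

# expected: ~0.8 for the AR(1), ~0.5 / (1 + 0.25) = 0.4 for the MA(1)
print(acf1(x_ar), acf1(y_ma))
```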

  3. Accumulated damage evaluation for a piping system by the response factor on non-stationary random process, 2

    International Nuclear Information System (INIS)

    Shintani, Masanori

    1988-01-01

    This paper shows that the average and variance of the accumulated damage caused by earthquakes on a piping system attached to a building are related to the seismic response factor λ. The earthquakes referred to in this paper are of a non-stationary random process kind. The average is proportional to λ² and the variance to λ⁴. The analytical values of the average and variance for a single-degree-of-freedom system are compared with those obtained from computer simulations, where the model of the building is also a single-degree-of-freedom system. Both averages of the accumulated damage are approximately equal, but the variance obtained from the analysis does not coincide with that from the simulations. The reason is considered to be the forced vibration by sinusoidal waves included in the random waves. Taking account of the amplitude magnification factor, the values of the variance approach those obtained from the simulations. (author)
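
    The reported scalings are easy to check numerically. The sketch below is a purely illustrative Monte Carlo (not the paper's piping model): it assumes the response amplitude scales linearly with the response factor λ and that the damage per event grows with the squared amplitude, which makes the mean of the accumulated damage scale as λ² and its variance as λ⁴.

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulated_damage(lam, n_events=200):
    # response amplitude scales linearly with lam; damage per event ~ amplitude**2
    amplitudes = lam * rng.rayleigh(size=n_events)
    return np.sum(amplitudes**2)

for lam in (1.0, 2.0, 4.0):
    d = np.array([accumulated_damage(lam) for _ in range(2000)])
    # mean grows as lam**2, variance as lam**4
    print(lam, d.mean(), d.var())
```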

  4. Polymers and Random graphs: Asymptotic equivalence to branching processes

    International Nuclear Information System (INIS)

    Spouge, J.L.

    1985-01-01

    In 1974, Falk and Thomas did a computer simulation of Flory's equireactive RA_f polymer model, rings forbidden and rings allowed. Asymptotically, the Rings Forbidden model tended to Stockmayer's RA_f distribution (in which the sol distribution "sticks" after gelation), while the Rings Allowed model tended to the Flory version of the RA_f distribution. In 1965, Whittle introduced the Tree and Pseudomultigraph models. We show that these random graphs generalize the Falk and Thomas models by incorporating first-shell substitution effects. Moreover, asymptotically the Tree model displays postgelation "sticking." Hence this phenomenon results from the absence of rings and occurs independently of equireactivity. We also show that the Pseudomultigraph model is asymptotically identical to the Branching Process model introduced by Gordon in 1962. This provides a possible basis for the Branching Process model in standard statistical mechanics.

  5. Nonstationary random acoustic and electromagnetic fields as wave diffusion processes

    International Nuclear Information System (INIS)

    Arnaut, L R

    2007-01-01

    We investigate the effects of relatively rapid variations of the boundaries of an overmoded cavity on the stochastic properties of its interior acoustic or electromagnetic field. For quasi-static variations, this field can be represented as an ideal incoherent and statistically homogeneous isotropic random scalar or vector field, respectively. A physical model is constructed showing that the field dynamics can be characterized as a generalized diffusion process. The Langevin-Itô and Fokker-Planck equations are derived and their associated statistics and distributions for the complex analytic field, its magnitude and energy density are computed. The energy diffusion parameter is found to be proportional to the square of the ratio of the standard deviation of the source field to the characteristic time constant of the dynamic process, but is independent of the initial energy density, to first order. The energy drift vanishes in the asymptotic limit. The time-energy probability distribution is in general not separable, as a result of nonstationarity. A general solution of the Fokker-Planck equation is obtained in integral form, together with explicit closed-form solutions for several asymptotic cases. The findings extend known results on statistics and distributions of quasi-stationary ideal random fields (pure diffusions), which are retrieved as special cases.

  6. 5th Seminar on Stochastic Processes, Random Fields and Applications

    CERN Document Server

    Russo, Francesco; Dozzi, Marco

    2008-01-01

    This volume contains twenty-eight refereed research or review papers presented at the 5th Seminar on Stochastic Processes, Random Fields and Applications, which took place at the Centro Stefano Franscini (Monte Verità) in Ascona, Switzerland, from May 30 to June 3, 2005. The seminar focused mainly on stochastic partial differential equations, random dynamical systems, infinite-dimensional analysis, approximation problems, and financial engineering. The book will be a valuable resource for researchers in stochastic analysis and professionals interested in stochastic methods in finance. Contributors: Y. Asai, J.-P. Aubin, C. Becker, M. Benaïm, H. Bessaih, S. Biagini, S. Bonaccorsi, N. Bouleau, N. Champagnat, G. Da Prato, R. Ferrière, F. Flandoli, P. Guasoni, V.B. Hallulli, D. Khoshnevisan, T. Komorowski, R. Léandre, P. Lescot, H. Lisei, J.A. López-Mimbela, V. Mandrekar, S. Méléard, A. Millet, H. Nagai, A.D. Neate, V. Orlovius, M. Pratelli, N. Privault, O. Raimond, M. Röckner, B. Rüdiger, W.J. Runggaldi...

  7. British Standard method for determination of ISO speed and average gradient of direct-exposure medical and dental radiographic film/process combinations

    International Nuclear Information System (INIS)

    1983-01-01

    Under the direction of the Cinematography and Photography Standards Committee, a British Standard method has been prepared for determining ISO speed and average gradient of direct-exposure medical and dental radiographic film/film-process combinations. The method determines the speed and gradient, i.e. contrast, of the X-ray films processed according to their manufacturer's recommendations. (U.K.)

  8. How Does the Supply Requisitioning Process Affect Average Customer Wait Time Onboard U.S. Navy Destroyers?

    Science.gov (United States)

    2013-06-01

    Applying a Six Sigma define, measure, analyze, improve and control (DMAIC) process approach, this report describes current procedures from initial demand to issue of repair parts, with the aim of completing repairs faster and increasing readiness levels across the fleet.

  9. Random number generation as an index of controlled processing.

    Science.gov (United States)

    Jahanshahi, Marjan; Saleem, T; Ho, Aileen K; Dirnberger, Georg; Fuller, R

    2006-07-01

    Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses this issue by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of the paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effects of participants' exposure were manipulated in Experiment 2 by providing increasing amounts of practice on the task. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process sensitive to additional demands on central resources (Experiment 1) and is unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use. (© 2006 APA, all rights reserved).

  10. Probability on graphs random processes on graphs and lattices

    CERN Document Server

    Grimmett, Geoffrey

    2018-01-01

    This introduction to some of the principal models in the theory of disordered systems leads the reader through the basics, to the very edge of contemporary research, with the minimum of technical fuss. Topics covered include random walk, percolation, self-avoiding walk, interacting particle systems, uniform spanning tree, random graphs, as well as the Ising, Potts, and random-cluster models for ferromagnetism, and the Lorentz model for motion in a random medium. This new edition features accounts of major recent progress, including the exact value of the connective constant of the hexagonal lattice, and the critical point of the random-cluster model on the square lattice. The choice of topics is strongly motivated by modern applications, and focuses on areas that merit further research. Accessible to a wide audience of mathematicians and physicists, this book can be used as a graduate course text. Each chapter ends with a range of exercises.

  11. An empirical test of pseudo random number generators by means of an exponential decaying process

    International Nuclear Information System (INIS)

    Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A.; Mora F, L.E.

    2007-01-01

    Empirical tests of pseudo random number generators based on physical processes or models have been used successfully and are considered complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo random number generators commonly used in physics. (Author)
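
    A minimal sketch of this kind of test (the record does not give the exact statistic, so a Kolmogorov-Smirnov comparison is assumed here): decay times are produced by inverse-transform sampling from the generator under test and compared with the target exponential law.

```python
import numpy as np
from scipy import stats

lam = 0.5                                  # assumed decay constant
rng = np.random.default_rng(42)            # generator under test
u = rng.random(100_000)
t = -np.log(1.0 - u) / lam                 # inverse-transform sampled decay times

# Kolmogorov-Smirnov test against the target exponential distribution;
# a very small p-value would flag a defective generator
stat, p = stats.kstest(t, "expon", args=(0, 1 / lam))
print(stat, p)
```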

  12. A randomized controlled trial of an electronic informed consent process.

    Science.gov (United States)

    Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R

    2014-12-01

    A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Participants in the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding of some aspects of a research study. © The Author(s) 2014.

  13. Random skew plane partitions and the Pearcey process

    DEFF Research Database (Denmark)

    Reshetikhin, Nicolai; Okounkov, Andrei

    2007-01-01

    We study random skew 3D partitions weighted by q^vol and, specifically, the q → 1 asymptotics of local correlations near various points of the limit shape. We obtain sine-kernel asymptotics for correlations in the bulk of the disordered region, Airy kernel asymptotics near a general point of the ...

  14. A Randomization Procedure for "Trickle-Process" Evaluations

    Science.gov (United States)

    Goldman, Jerry

    1977-01-01

    This note suggests a solution to the problem of achieving randomization in experimental settings where units deemed eligible for treatment "trickle in," that is, appear at any time. The solution permits replication of the experiment in order to test for time-dependent effects. (Author/CTM)

  15. Fluid hydration to prevent post-ERCP pancreatitis in average- to high-risk patients receiving prophylactic rectal NSAIDs (FLUYT trial): study protocol for a randomized controlled trial.

    Science.gov (United States)

    Smeets, Xavier J N M; da Costa, David W; Fockens, Paul; Mulder, Chris J J; Timmer, Robin; Kievit, Wietske; Zegers, Marieke; Bruno, Marco J; Besselink, Marc G H; Vleggaar, Frank P; van der Hulst, Rene W M; Poen, Alexander C; Heine, Gerbrand D N; Venneman, Niels G; Kolkman, Jeroen J; Baak, Lubbertus C; Römkens, Tessa E H; van Dijk, Sven M; Hallensleben, Nora D L; van de Vrie, Wim; Seerden, Tom C J; Tan, Adriaan C I T L; Voorburg, Annet M C J; Poley, Jan-Werner; Witteman, Ben J; Bhalla, Abha; Hadithi, Muhammed; Thijs, Willem J; Schwartz, Matthijs P; Vrolijk, Jan Maarten; Verdonk, Robert C; van Delft, Foke; Keulemans, Yolande; van Goor, Harry; Drenth, Joost P H; van Geenen, Erwin J M

    2018-04-02

    Post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) is the most common complication of ERCP and may run a severe course. Evidence suggests that vigorous periprocedural hydration can prevent PEP, but studies to date have significant methodological drawbacks. Importantly, evidence for its added value in patients already receiving prophylactic rectal non-steroidal anti-inflammatory drugs (NSAIDs) is lacking and the cost-effectiveness of the approach has not been investigated. We hypothesize that combination therapy of rectal NSAIDs and periprocedural hydration would significantly lower the incidence of post-ERCP pancreatitis compared to rectal NSAIDs alone in moderate- to high-risk patients undergoing ERCP. The FLUYT trial is a multicenter, parallel group, open label, superiority randomized controlled trial. A total of 826 moderate- to high-risk patients undergoing ERCP that receive prophylactic rectal NSAIDs will be randomized to a control group (no fluids or normal saline with a maximum of 1.5 mL/kg/h and 3 L/24 h) or intervention group (lactated Ringer's solution with 20 mL/kg over 60 min at start of ERCP, followed by 3 mL/kg/h for 8 h thereafter). The primary endpoint is the incidence of post-ERCP pancreatitis. Secondary endpoints include PEP severity, hydration-related complications, and cost-effectiveness. The FLUYT trial design, including hydration schedule, fluid type, and sample size, maximizes its power to identify a potential difference in post-ERCP pancreatitis incidence in patients receiving prophylactic rectal NSAIDs. EudraCT: 2015-000829-37, registered on 18 February 2015; 13659155, registered on 18 May 2015.

  16. Do MENA stock market returns follow a random walk process?

    Directory of Open Access Journals (Sweden)

    Salim Lahmiri

    2013-01-01

    Full Text Available In this research, three variance ratio tests: the standard variance ratio test, the wild bootstrap multiple variance ratio test, and the non-parametric rank scores test are adopted to test the random walk hypothesis (RWH) of stock markets in the Middle East and North Africa (MENA) region using the most recent data from January 2010 to September 2012. The empirical results obtained by all three econometric tests show that the RWH is strongly rejected for Kuwait, Tunisia, and Morocco. However, the standard variance ratio test and the wild bootstrap multiple variance ratio test reject the null hypothesis of a random walk in Jordan and KSA, while the non-parametric rank scores test does not. We may conclude that the Jordan and KSA stock markets are weak-form efficient. In sum, the empirical results suggest that return series in Kuwait, Tunisia, and Morocco are predictable. In other words, predictable patterns that can be exploited in these markets still exist. Therefore, investors may make profits in such less efficient markets.
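
    For reference, a minimal sketch of the standard variance ratio statistic used by such tests (without the overlapping-sample bias corrections of the full test): under the random walk hypothesis VR(q) should be close to 1 for all aggregation levels q.

```python
import numpy as np

def variance_ratio(prices, q):
    r1 = np.diff(np.log(prices))                       # 1-period log returns
    rq = np.log(prices[q:]) - np.log(prices[:-q])      # overlapping q-period returns
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

# a pure random walk for comparison: VR(q) ~ 1
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(2000)))
print([round(variance_ratio(prices, q), 3) for q in (2, 4, 8, 16)])
```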

  17. Low to Moderate Average Alcohol Consumption and Binge Drinking in Early Pregnancy: Effects on Choice Reaction Time and Information Processing Time in Five-Year-Old Children.

    Directory of Open Access Journals (Sweden)

    Tina R Kilburn

    Full Text Available Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60-64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1-4. This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring.

  18. Use of Play Therapy in Nursing Process: A Prospective Randomized Controlled Study.

    Science.gov (United States)

    Sezici, Emel; Ocakci, Ayse Ferda; Kadioglu, Hasibe

    2017-03-01

    Play therapy is a nursing intervention employed in multidisciplinary approaches to develop the social, emotional, and behavioral skills of children. In this study, we aim to determine the effects of play therapy on the social, emotional, and behavioral skills of pre-school children through the nursing process. A single-blind, prospective, randomized controlled study was undertaken. The design, conduct, and reporting of this study adhere to the Consolidated Standards of Reporting Trials (CONSORT) guidelines. The participants included 4- to 5-year-old kindergarten children with no oral or aural disabilities and parents who agreed to participate in the study. The Pre-school Child and Family Identification Form and the Social Competence and Behavior Evaluation Scale were used to gather data. Games from the play therapy literature addressing the nursing diagnoses (fear, social disturbance, impaired social interactions, ineffective coping, anxiety) determined after the preliminary test constituted the intervention of the study. There was no difference between the experimental and control groups in the children's average Anger-Aggression (AA), Social Competence (SC), and Anxiety-Withdrawal (AW) scores beforehand (t = 0.015, p = .988; t = 0.084, p = .933; t = 0.214, p = .831, respectively). The differences between the groups' average AA and SC scores were statistically significant in the post-test (t = 2.041, p = .045; t = 2.692, p = .009, respectively) and in the retests (t = 4.538, p = .000; t = 4.693, p = .000, respectively). In the AW average scores, no statistical difference was found in the post-test (t = 0.700, p = .486), whereas in the retest a significant difference was identified (t = 5.839, p = .000). Play therapy helped pre-school children to improve their social, emotional, and behavioral skills. It also provided benefits for the children to decrease their fear and anxiety levels, to improve

  19. Strategies for processing diffraction data from randomly oriented particles

    International Nuclear Information System (INIS)

    Elser, Veit

    2011-01-01

    The high intensity of free-electron X-ray light sources may enable structure determinations of viruses or even individual proteins without the encumbrance of first forming crystals. This note compares two schemes of non-crystalline diffraction data collection that have been proposed: serial single-shot data from individual particles, and averaged cross-correlation data from particle ensembles. The information content of these schemes is easily compared and we show that the single-shot approach, although experimentally more challenging, is always superior in this respect. In fact, for 3D structure determination a constraint counting argument shows that the cross-correlation scheme suffers from data deficiency. Research highlights: we compare two data collection schemes for imaging single particles with X-rays; cross-correlation data suffers an information deficit relative to single-shot data; we recognize John Spence for his many contributions to single particle imaging.

  20. Investigation of Random Switching Driven by a Poisson Point Process

    DEFF Research Database (Denmark)

    Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef

    2015-01-01

    This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed, together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly.

  1. Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial

    Science.gov (United States)

    Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.

    2016-01-01

    This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual…

  2. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    Science.gov (United States)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  3. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  4. Interspinous process device versus standard conventional surgical decompression for lumbar spinal stenosis: Randomized controlled trial

    NARCIS (Netherlands)

    W.A. Moojen (Wouter); M.P. Arts (Mark); W.C.H. Jacobs (Wilco); E.W. van Zwet (Erik); M.E. van den Akker-van Marle (Elske); B.W. Koes (Bart); C.L.A.M. Vleggeert-Lankamp (Carmen); W.C. Peul (Wilco)

    2013-01-01

    Abstract Objective: To assess whether interspinous process device implantation is more effective in the short term than conventional surgical decompression for patients with intermittent neurogenic claudication due to lumbar spinal stenosis. Design: Randomized controlled trial.

  5. Directed motion emerging from two coupled random processes

    DEFF Research Database (Denmark)

    Ambjörnsson, T.; Lomholt, Michael Andersen; Metzler, R.

    2005-01-01

    detail, we develop a dynamical description of the process in terms of a (2+1)-variable master equation for the probability of having m monomers on the target side of the membrane with n bound chaperones at time t. Emphasis is put on the calculation of the mean first passage time as a function of total [...] dynamics [...], we perform the adiabatic elimination of the fast variable n, and find that for a very long polymer [...], but with a smaller prefactor than for ratchet-like dynamics. We solve the general case numerically as a function of the dimensionless parameters λ, κ and γ, and compare to the three...

  6. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    Science.gov (United States)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.

  7. Random covering of the circle: the configuration-space of the free deposition process

    Energy Technology Data Exchange (ETDEWEB)

    Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)

    2003-12-12

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than the rod length s (the packing gas), those (parking configurations) for which the hard-rod and packing constraints are both fulfilled, and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results on spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
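
    The configuration statistics are straightforward to explore by simulation. The sketch below (not from the paper; density and rod length are arbitrary) drops n arcs on the unit circle and reads off the cluster count and the covering indicator from the spacings between consecutive arc starts.

```python
import numpy as np

def arc_statistics(n, s, rng):
    """Drop n clockwise arcs of length s on a circle of circumference 1;
    return (number of clusters, whether the circle is fully covered)."""
    starts = np.sort(rng.random(n))
    # spacings between consecutive arc starts, wrapping around the circle
    gaps = np.diff(np.append(starts, starts[0] + 1.0))
    breaks = int((gaps > s).sum())   # a gap longer than s interrupts the chain of arcs
    return (breaks if breaks > 0 else 1), breaks == 0

rng = np.random.default_rng(0)
rho, s = 5.0, 0.02                   # finite density rho = n*s
n = int(rho / s)
trials = [arc_statistics(n, s, rng) for _ in range(1000)]
print(np.mean([c for c, _ in trials]),        # mean number of clusters
      np.mean([cov for _, cov in trials]))    # empirical covering probability
```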

  8. Image-processing of time-averaged interface distributions representing CCFL characteristics in a large scale model of a PWR hot-leg pipe geometry

    International Nuclear Information System (INIS)

    Al Issa, Suleiman; Macián-Juan, Rafael

    2017-01-01

    Highlights: • CCFL characteristics are investigated in a PWR large-scale hot-leg pipe geometry. • Image processing of the air-water interface produced time-averaged interface distributions. • Time-averages provide a comparative method for CCFL characteristics among different studies. • CCFL correlations depend upon the range of investigated water delivery for Dh ≫ 50 mm. • 1D codes are incapable of investigating CCFL because they lack the interface distribution. - Abstract: Countercurrent flow limitation (CCFL) was experimentally investigated in the 1/3.9 downscaled COLLIDER facility with a 190 mm pipe diameter, using air/water at atmospheric pressure. Previous investigations provided knowledge of the onset of CCFL mechanisms. In the current article, CCFL characteristics of the COLLIDER facility are measured and discussed along with time-averaged distributions of the air/water interface for a selected matrix of liquid/gas velocities. The article demonstrates the time-averaged interface as a useful method to identify CCFL characteristics at quasi-stationary flow conditions, eliminating the variations that appear in single images and showing essential comparative flow features such as the degree of restriction at the bend, the extension and intensity of the two-phase mixing zones, and the average water level within the horizontal part and the steam generator. Consequently, it becomes possible to compare interface distributions obtained in different investigations. The distributions are also beneficial for CFD validation of CCFL, as the instantaneous chaotic gas/liquid interface is impossible to reproduce in CFD simulations. The current study shows that the final CCFL characteristics curve (and the corresponding CCFL correlation) depends upon the covered measuring range of water delivery. It also shows that the hydraulic diameter should be sufficiently larger than 50 mm in order to obtain CCFL characteristics comparable to the 1:1 scale data (namely the UPTF data). Finally

  9. Method for sampling and analysis of volatile biomarkers in process gas from aerobic digestion of poultry carcasses using time-weighted average SPME and GC-MS.

    Science.gov (United States)

    Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J

    2017-10-01

    A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
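
    A minimal sketch of the Fick's-first-law relation commonly used for retracted TWA-SPME samplers, C = nZ/(DAt); all numerical values below are assumed for illustration and are not taken from the paper.

```python
# time-weighted average concentration from a retracted SPME sampler,
# C = n * Z / (D * A * t); all values are illustrative assumptions
n_mass = 2.0e-9    # mass extracted by the fiber coating, g
Z = 0.5            # diffusion path length (fiber retraction depth), cm
D = 0.07           # gas-phase diffusion coefficient of the analyte, cm^2/s
A = 8.0e-4         # cross-sectional area of the needle opening, cm^2
t = 30 * 60        # sampling time, s (the 30 min found optimal above)

C_twa = n_mass * Z / (D * A * t)   # g/cm^3
print(f"{C_twa:.3e} g/cm^3")
```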

  10. Random Fields

    Science.gov (United States)

    Vanmarcke, Erik

    1983-03-01

    Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.

  11. Timing of the Crab pulsar III. The slowing down and the nature of the random process

    International Nuclear Information System (INIS)

    Groth, E.J.

    1975-01-01

    The Crab pulsar arrival times are analyzed. The data are found to be consistent with a smooth slowing down with a braking index of 2.515 ± 0.005. Superposed on the smooth slowdown is a random process which has the same second moments as a random walk in the frequency. The strength of the random process is R⟨Δν²⟩ = 0.53 (+0.24, −0.12) × 10⁻²² Hz² s⁻¹, where R is the mean rate of steps and ⟨Δν²⟩ is the second moment of the step amplitude distribution. Neither the braking index nor the strength of the random process shows evidence of statistically significant time variations, although small fluctuations in the braking index and rather large fluctuations in the noise strength cannot be ruled out. There is a possibility that the random process contains a small component with the same second moments as a random walk in the phase. If so, a time scale of 3.5 days is indicated.

  12. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single photon detection have been restricted due to the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free, ready to use, and their randomness is verified by using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)
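
    The record does not spell out the comparison rule, so the sketch below assumes a von Neumann-style pairing: a bit is emitted only when one of two consecutive pulses produces a detection and the other does not, which yields unbiased bits even when the per-pulse click probability differs from 1/2.

```python
import numpy as np

rng = np.random.default_rng(7)
p_click = 0.3                                   # assumed per-pulse detection probability
pulses = rng.random((2, 100_000)) < p_click     # one pixel's response to two consecutive pulses

a, b = pulses
keep = a != b                       # discard (0,0) and (1,1) outcomes
bits = a[keep].astype(np.uint8)     # (1,0) -> 1, (0,1) -> 0: equally likely by symmetry
print(bits.mean(), keep.mean())     # bit bias ~0.5, and the fraction of pulse pairs kept
```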

  13. Efficient tests for equivalence of hidden Markov processes and quantum random walks

    NARCIS (Netherlands)

    U. Faigle; A. Schönhuth (Alexander)

    2011-01-01

    While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine equivalence of two HMP parametrizations

  14. High-Performance Pseudo-Random Number Generation on Graphics Processing Units

    OpenAIRE

    Nandapalan, Nimalan; Brent, Richard P.; Murray, Lawrence M.; Rendell, Alistair

    2011-01-01

    This work considers the deployment of pseudo-random number generators (PRNGs) on graphics processing units (GPUs), developing an approach based on the xorgens generator to rapidly produce pseudo-random numbers of high statistical quality. The chosen algorithm has configurable state size and period, making it ideal for tuning to the GPU architecture. We present a comparison of both speed and statistical quality with other common parallel, GPU-based PRNGs, demonstrating favourable performance o...

  15. A framework about flow measurements by LDA–PDA as a spatio-temporal average: application to data post-processing

    International Nuclear Information System (INIS)

    Calvo, Esteban; García, Juan A; García, Ignacio; Aísa, Luis; Santolaya, José Luis

    2012-01-01

    method and the cross-section integral calibration method. Finally, a physical interpretation of the statistical reconstruction process is provided: it is a spatio-temporal averaging of the detected particle data, and some of the algorithms used are related to the Eulerian–Eulerian mathematical description of multiphase flows. (paper)

  16. A framework about flow measurements by LDA-PDA as a spatio-temporal average: application to data post-processing

    Science.gov (United States)

    Calvo, Esteban; García, Juan A.; Santolaya, José Luis; García, Ignacio; Aísa, Luis

    2012-05-01

    method and the cross-section integral calibration method. Finally, a physical interpretation of the statistical reconstruction process is provided: it is a spatio-temporal averaging of the detected particle data, and some of the algorithms used are related to the Eulerian-Eulerian mathematical description of multiphase flows.

  17. Investigation of the thermal and optical performance of a spatial light modulator with high average power picosecond laser exposure for materials processing applications

    Science.gov (United States)

    Zhu, G.; Whitehead, D.; Perrie, W.; Allegre, O. J.; Olle, V.; Li, Q.; Tang, Y.; Dawson, K.; Jin, Y.; Edwardson, S. P.; Li, L.; Dearden, G.

    2018-03-01

    Spatial light modulators (SLMs) addressed with computer generated holograms (CGHs) can create structured light fields on demand when an incident laser beam is diffracted by a phase CGH. The power handling limitations of these devices, based on a liquid crystal layer, have always been of some concern. With careful engineering of chip thermal management, we report the detailed optical phase and temperature response of a liquid cooled SLM exposed to picosecond laser powers up to 〈P〉 = 220 W at 1064 nm. This information is critical for determining device performance at high laser powers. The SLM chip temperature rose linearly with incident laser exposure, increasing by only 5 °C at 〈P〉 = 220 W incident power, measured with a thermal imaging camera. The thermal response time with continuous exposure was 1-2 s. The optical phase response with incident power approaches 2π radians with average power up to 〈P〉 = 130 W, hence the operational limit, while above this power liquid crystal thickness variations limit the phase response to just over π radians. Modelling of the thermal and phase response with exposure is also presented, supporting the experimental observations well. These remarkable performance characteristics show that liquid crystal based SLM technology is highly robust when efficiently cooled. High speed, multi-beam plasmonic surface micro-structuring at a rate R = 8 cm² s⁻¹ is achieved on polished metal surfaces at 〈P〉 = 25 W exposure, while diffractive, multi-beam surface ablation with average power 〈P〉 = 100 W on stainless steel is demonstrated with an ablation rate of ~4 mm³ min⁻¹. However, above 130 W, first order diffraction efficiency drops significantly, in accord with the observed operational limit. Continuous exposure for a period of 45 min at a laser power of 〈P〉 = 160 W did not result in any detectable drop in diffraction efficiency, confirmed afterwards by the efficient

  18. Do Self-Regulated Processes such as Study Strategies and Satisfaction Predict Grade Point Averages for First and Second Generation College Students?

    Science.gov (United States)

    DiBenedetto, Maria K.

    2010-01-01

    The current investigation sought to determine whether the self-regulatory variables "study strategies" and "self-satisfaction" correlate with first and second generation college students' grade point averages, and to determine whether these two variables would improve the prediction of their averages if used along with high school grades and SAT scores.…

  19. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  20. Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses

    Directory of Open Access Journals (Sweden)

    Colin Bruno

    2015-01-01

    Full Text Available In the field of military land vehicles, the random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes of a stationary and Gaussian nature. The non-stationarity of the processes, induced by the variability of the vehicle speed, does not pose a major difficulty, because the designer can control the vehicle speed well by characterising the histogram of the instantaneous speed of the vehicle during an operational situation. Beyond this non-stationarity problem, the main difficulty lies in the fact that the random processes are not Gaussian: they are generated mainly by the non-linear behaviour of the undercarriage and the frequent occurrence of shocks produced by the roughness of the terrain. This non-Gaussian nature is expressed in particular by very high flattening (kurtosis) levels, which can affect the design of structures under the extreme stresses conventionally derived by spectral approaches, inherent to Gaussian processes and based essentially on the spectral moments of the stress processes. Due to these technical considerations, the techniques for characterising the random excitation processes generated by this type of carrier need to be changed, by proposing innovative characterisation methods based on time domain approaches, as described in the body of the text, rather than spectral domain approaches.
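
    The "flattening" this record refers to is the kurtosis (fourth standardized moment), which equals 3 for a Gaussian process. A minimal check on synthetic data (a Gaussian background with sparse superimposed shocks, purely illustrative):

```python
import numpy as np

def flatness(x):
    """Kurtosis (fourth standardized moment); 3 for a Gaussian signal."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

rng = np.random.default_rng(0)
gaussian = rng.standard_normal(100_000)
# Gaussian background plus sparse large shocks, mimicking rough-terrain vibration
shocks = gaussian + 10 * rng.standard_normal(100_000) * (rng.random(100_000) < 0.01)
print(flatness(gaussian), flatness(shocks))   # ~3 versus a much larger value
```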

  1. Multifractal properties of diffusion-limited aggregates and random multiplicative processes

    International Nuclear Information System (INIS)

    Canessa, E.

    1991-04-01

    We consider the multifractal properties of irreversible diffusion-limited aggregation (DLA) from the point of view of the self-similarity of fluctuations in random multiplicative processes. In particular we analyse the breakdown of multifractal behaviour and phase transition associated with the negative moments of the growth probabilities in DLA. (author). 20 refs, 5 figs

  2. Eliciting and Developing Teachers' Conceptions of Random Processes in a Probability and Statistics Course

    Science.gov (United States)

    Smith, Toni M.; Hjalmarson, Margret A.

    2013-01-01

    The purpose of this study is to examine prospective mathematics specialists' engagement in an instructional sequence designed to elicit and develop their understandings of random processes. The study was conducted with two different sections of a probability and statistics course for K-8 teachers. Thirty-two teachers participated. Video analyses…

  3. Setting up a randomized clinical trial in the UK: approvals and process.

    Science.gov (United States)

    Greene, Louise Eleanor; Bearn, David R

    2013-06-01

    Randomized clinical trials are considered the 'gold standard' in primary research for healthcare interventions. However, they can be expensive and time-consuming to set up and require many approvals to be in place before they can begin. This paper outlines how to determine what approvals are required for a trial, the background of each approval and the process for obtaining them.

  4. Human norovirus inactivation in oysters by high hydrostatic pressure processing: A randomized double-blinded study

    Science.gov (United States)

    This randomized, double-blinded, clinical trial assessed the effect of high hydrostatic pressure processing (HPP) on genogroup I.1 human norovirus (HuNoV) inactivation in virus-seeded oysters when ingested by subjects. The safety and efficacy of HPP treatments were assessed in three study phases wi...

  5. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.

  6. Time at which the maximum of a random acceleration process is reached

    International Nuclear Information System (INIS)

    Majumdar, Satya N; Rosso, Alberto; Zoia, Andrea

    2010-01-01

    We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density p(t_m|T) of the time t_m at which the process reaches its maximum, within a fixed time interval [0, T]. We study two different boundary conditions, which correspond to the process representing respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.
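
    The free-Brownian-motion case (ii) is easy to simulate: integrate white noise twice and record where each path attains its maximum. The sketch below (step counts and path counts are arbitrary) builds an empirical estimate of p(t_m|T).

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 1000, 2000
dt = T / n_steps

# random acceleration x''(t) = xi(t) with x(0) = x'(0) = 0:
# integrate white noise once for the velocity, once more for the position
xi = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
v = np.cumsum(xi, axis=1)
x = np.cumsum(v, axis=1) * dt

t_max = (np.argmax(x, axis=1) + 1) * dt   # time at which each path is maximal
hist, _ = np.histogram(t_max, bins=20, range=(0.0, T), density=True)
print(np.round(hist, 2))                  # empirical density of t_m on [0, T]
```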

  7. Quantitative Model of Price Diffusion and Market Friction Based on Trading as a Mechanistic Random Process

    Science.gov (United States)

    Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-01

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.

  8. Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units

    Science.gov (United States)

    Demchik, Vadim

    2011-03-01

    Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The obtained speed-up factor is in the hundreds in comparison with the CPU. The RANLUX generator is found to be the most appropriate for use on the GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.

  9. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.

  10. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    International Nuclear Information System (INIS)

    Vamos, Calin; Suciu, Nicolae; Vereecken, Harry

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles from a grid node are simultaneously scattered using the Bernoulli repartition. This procedure saves memory and computing time and no restrictions are imposed for the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for large enough number of particles. As an example, simulations of diffusion in random velocity field are performed and the main features of the stochastic mathematical model are numerically tested
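
    A minimal sketch of the simultaneous-scattering idea described in this record (the jump probabilities and boundary handling are assumed here, not taken from the paper): all particles on a node are scattered at once with binomial draws, so the cost per step depends on the grid size rather than on the number of particles.

```python
import numpy as np

rng = np.random.default_rng(0)

def grw_step(counts, p_move=0.5):
    """Scatter all particles on every node at once: Binomial(n, p_move) of the
    n particles on a node jump, and the jumpers split left/right by a second
    Bernoulli repartition (particles leaving the ends are absorbed)."""
    movers = rng.binomial(counts, p_move)
    left = rng.binomial(movers, 0.5)
    right = movers - left
    new = counts - movers            # the stayers
    new[:-1] += left[1:]             # jumps to the left neighbour
    new[1:] += right[:-1]            # jumps to the right neighbour
    return new

counts = np.zeros(101, dtype=np.int64)
counts[50] = 1_000_000               # all particles start at the centre node
for _ in range(200):
    counts = grw_step(counts)
# for many particles the profile approaches the finite-difference solution
print(counts[45:56])
```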

  11. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    Science.gov (United States)

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    A chaotic external-cavity semiconductor laser (ECL) is a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post processing. Here, we propose using physical broadband white chaos generated by optical heterodyning of two ECLs as an entropy source to construct high-speed random bit generation (RBG) with minimal post processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectral efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
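
    Extracting LSBs from multi-bit samples is simple to express in code. The sketch below (the Gaussian stand-in for the chaos waveform is an assumption) keeps the 4 least significant bits of each 8-bit sample; at the reported 80-GHz sampling rate this corresponds to 4 × 80 = 320 Gbps.

```python
import numpy as np

rng = np.random.default_rng(5)
# stand-in for 8-bit ADC samples of the heterodyne chaos (symmetric amplitudes)
samples = np.clip(np.round(rng.normal(128, 40, 100_000)), 0, 255).astype(np.uint8)

m = 4                                        # keep the 4 least significant bits
lsbs = samples & ((1 << m) - 1)
bits = ((lsbs[:, None] >> np.arange(m - 1, -1, -1)) & 1).ravel()
print(bits.mean())                           # close to 0.5 for an unbiased stream
```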

  12. To be and not to be: scale correlations in random multifractal processes

    DEFF Research Database (Denmark)

    Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin

    We discuss various properties of a random multifractal process, which are related to the issue of scale correlations. By design, the process is homogeneous, non-conservative and has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several...

  13. Gaussian random-matrix process and universal parametric correlations in complex systems

    International Nuclear Information System (INIS)

    Attias, H.; Alhassid, Y.

    1995-01-01

    We introduce the framework of the Gaussian random-matrix process as an extension of Dyson's Gaussian ensembles and use it to discuss the statistical properties of complex quantum systems that depend on an external parameter. We classify the Gaussian processes according to the short-distance diffusive behavior of their energy levels and demonstrate that all parametric correlation functions become universal upon the appropriate scaling of the parameter. The class of differentiable Gaussian processes is identified as the relevant one for most physical systems. We reproduce the known spectral correlators and compute eigenfunction correlators in their universal form. Numerical evidence from both a chaotic model and weakly disordered model confirms our predictions

  14. Auditory detection of an increment in the rate of a random process

    International Nuclear Information System (INIS)

    Brown, W.S.; Emmerich, D.S.

    1994-01-01

    Recent experiments have presented listeners with complex tonal stimuli consisting of components with values (i.e., intensities or frequencies) randomly sampled from probability distributions [e.g., R. A. Lutfi, J. Acoust. Soc. Am. 86, 934--944 (1989)]. In the present experiment, brief tones were presented at intervals corresponding to the intensity of a random process. Specifically, the intervals between tones were randomly selected from exponential probability functions. Listeners were asked to decide whether tones presented during a defined observation interval represented a ''noise'' process alone or the ''noise'' with a ''signal'' process added to it. The number of tones occurring in any observation interval is a Poisson variable; receiver operating characteristics (ROCs) arising from Poisson processes have been considered by Egan [Signal Detection Theory and ROC Analysis (Academic, New York, 1975)]. Several sets of noise and signal intensities and observation interval durations were selected which were expected to yield equivalent performance. Rating ROCs were generated based on subjects' responses in a single-interval, yes--no task. The performance levels achieved by listeners and the effects of intensity and duration are compared to those predicted for an ideal observer
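
    The ideal observer in this task counts tones and compares the count with a criterion, so the ROC follows directly from two Poisson distributions (cf. Egan). A minimal sketch with assumed rates:

```python
import numpy as np
from scipy import stats

lam_noise, lam_signal, T = 8.0, 4.0, 1.0   # assumed tone rates (Hz) and interval (s)
ks = np.arange(0, 30)                      # criterion: respond "signal" if count >= k

p_fa = 1 - stats.poisson.cdf(ks - 1, lam_noise * T)                  # false alarms
p_hit = 1 - stats.poisson.cdf(ks - 1, (lam_noise + lam_signal) * T)  # hits
for k in (8, 10, 12, 14):
    print(k, round(float(p_fa[k]), 3), round(float(p_hit[k]), 3))
```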

  15. Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial

    Science.gov (United States)

    Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.

    2018-01-01

    This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual art therapy. PTSD Checklist–Military Version and Beck Depression Inventory–II scores improved with treatment in both groups with no significant difference in improvement between the experimental and control groups. Art therapy in conjunction with CPT was found to improve trauma processing and veterans considered it to be an important part of their treatment as it provided healthy distancing, enhanced trauma recall, and increased access to emotions. PMID:29332989

  16. An Artificial Bee Colony Algorithm for the Job Shop Scheduling Problem with Random Processing Times

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2011-09-01

    Full Text Available Due to the influence of unpredictable random events, the processing time of each operation should be treated as random variables if we aim at a robust production schedule. However, compared with the extensive research on the deterministic model, the stochastic job shop scheduling problem (SJSSP has not received sufficient attention. In this paper, we propose an artificial bee colony (ABC algorithm for SJSSP with the objective of minimizing the maximum lateness (which is an index of service quality. First, we propose a performance estimate for preliminary screening of the candidate solutions. Then, the K-armed bandit model is utilized for reducing the computational burden in the exact evaluation (through Monte Carlo simulation process. Finally, the computational results on different-scale test problems validate the effectiveness and efficiency of the proposed approach.
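
    The bandit idea in the evaluation step can be illustrated generically: treat each candidate schedule as an arm whose Monte Carlo evaluations are noisy samples of its expected maximum lateness, and let an upper-confidence-bound rule concentrate simulation effort on promising schedules. A sketch under these assumptions (the lateness distributions are synthetic stand-ins; this is not the paper's exact K-armed bandit model):

```python
import math
import random

random.seed(1)

# Synthetic stand-ins for candidate schedules: each "arm" returns a noisy
# sample of maximum lateness (lower is better). Means are hypothetical.
true_means = [12.0, 10.5, 11.2, 9.8, 13.4]

def simulate(arm):
    # One Monte Carlo evaluation of a schedule's maximum lateness.
    return random.gauss(true_means[arm], 2.0)

counts = [1] * len(true_means)
sums = [simulate(a) for a in range(len(true_means))]

for t in range(1, 500):
    total = sum(counts)
    # UCB for minimization: pick the arm with the lowest lower bound.
    scores = [sums[a] / counts[a] - math.sqrt(2 * math.log(total) / counts[a])
              for a in range(len(true_means))]
    arm = scores.index(min(scores))
    sums[arm] += simulate(arm)
    counts[arm] += 1

best = min(range(len(true_means)), key=lambda a: sums[a] / counts[a])
print("evaluations per schedule:", counts)
print("selected schedule:", best)
```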

  17. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  18. The McMillan Theorem for Colored Branching Processes and Dimensions of Random Fractals

    Directory of Open Access Journals (Sweden)

    Victor Bakhtin

    2014-12-01

    Full Text Available For the simplest colored branching process, we prove an analog to the McMillan theorem and calculate the Hausdorff dimensions of random fractals defined in terms of the limit behavior of empirical measures generated by finite genetic lines. In this setting, the role of Shannon’s entropy is played by the Kullback–Leibler divergence, and the Hausdorff dimensions are computed by means of the so-called Billingsley–Kullback entropy, defined in the paper.

  19. Distributed Random Process for a Large-Scale Peer-to-Peer Lottery

    OpenAIRE

    Grumbach, Stéphane; Riemann, Robert

    2017-01-01

    International audience; Most online lotteries today fail to ensure the verifiability of the random process and rely on a trusted third party. This issue has received little attention since the emergence of distributed protocols like Bitcoin that demonstrated the potential of protocols with no trusted third party. We argue that the security requirements of online lotteries are similar to those of online voting, and propose a novel distributed online lottery protocol that applies techniques dev...

  20. New Nordic Diet versus Average Danish Diet: A Randomized Controlled Trial Revealed Healthy Long-Term Effects of the New Nordic Diet by GC-MS Blood Plasma Metabolomics.

    Science.gov (United States)

    Khakimov, Bekzod; Poulsen, Sanne Kellebjerg; Savorani, Francesco; Acar, Evrim; Gürdeniz, Gözde; Larsen, Thomas M; Astrup, Arne; Dragsted, Lars O; Engelsen, Søren Balling

    2016-06-03

    A previous study has shown effects of the New Nordic Diet (NND) to stimulate weight loss and lower systolic and diastolic blood pressure in obese Danish women and men in a randomized, controlled dietary intervention study. This work demonstrates long-term metabolic effects of the NND as compared with an Average Danish Diet (ADD) in blood plasma and reveals associations between metabolic changes and health beneficial effects of the NND including weight loss. A total of 145 individuals completed the intervention and blood samples were taken along with clinical examinations before the intervention started (week 0) and after 12 and 26 weeks. The plasma metabolome was measured using GC-MS, and the final metabolite table contained 144 variables. Significant and novel metabolic effects of the diet, resulting weight loss, gender, and intervention study season were revealed using PLS-DA and ASCA. Several metabolites reflecting specific differences in the diets, especially intake of plant foods and seafood, and in energy metabolism related to ketone bodies and gluconeogenesis formed the predominant metabolite pattern discriminating the intervention groups. Among NND subjects, higher levels of vaccenic acid and 3-hydroxybutanoic acid were related to a higher weight loss, while higher concentrations of salicylic, lactic, and N-aspartic acids and 1,5-anhydro-d-sorbitol were related to a lower weight loss. Specific gender and seasonal differences were also observed. The study strongly indicates that healthy diets high in fish, vegetables, fruit, and whole grain facilitated weight loss and improved insulin sensitivity by increasing ketosis and gluconeogenesis in the fasting state.

  1. MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.

  2. Is neutron evaporation from highly excited nuclei a Poisson random process?

    International Nuclear Information System (INIS)

    Simbel, M.H.

    1982-01-01

    It is suggested that neutron emission from highly excited nuclei follows a Poisson random process. The continuous variable of the process is the excitation energy excess over the binding energy of the emitted neutrons and the discrete variable is the number of emitted neutrons. Cross sections for (HI,xn) reactions are analyzed using a formula containing a Poisson distribution function. The post- and pre-equilibrium components of the cross section are treated separately. The agreement between the predictions of this formula and the experimental results is very good. (orig.)
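
    The suggested Poisson form is simple to evaluate: if λ denotes the mean number of evaporated neutrons at a given excitation energy excess, the relative (HI,xn) yields follow P(x) = e^(−λ) λ^x / x!. A minimal numeric illustration with a hypothetical λ (the value below is not taken from the record):

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # Poisson probability of emitting exactly x neutrons.
    return exp(-lam) * lam**x / factorial(x)

# Hypothetical mean number of evaporated neutrons at one excitation energy.
lam = 3.2
for x in range(7):
    print(f"P(x={x} neutrons) = {poisson_pmf(x, lam):.3f}")
```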

  3. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong [...] approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion
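
    The barycenter approach the abstract refers to can be sketched in a few lines: sign-align the unit quaternions (q and −q encode the same rotation), average them arithmetically, and renormalize. This is the approximation being compared with the Riemannian mean, not the Riemannian mean itself; the perturbation level below is illustrative.

```python
import numpy as np

def quaternion_barycenter(quats):
    """Approximate rotation average: mean of sign-aligned unit quaternions.

    quats: (n, 4) array of unit quaternions. Each quaternion is first
    aligned in sign with the first one; the normalized arithmetic mean
    is the barycenter estimate of the mean rotation.
    """
    q = np.asarray(quats, dtype=float)
    signs = np.where(q @ q[0] < 0, -1.0, 1.0)
    mean = (q * signs[:, None]).mean(axis=0)
    return mean / np.linalg.norm(mean)

# Small perturbations of the identity rotation (w, x, y, z) = (1, 0, 0, 0).
rng = np.random.default_rng(0)
noisy = np.hstack([np.ones((20, 1)), 0.05 * rng.standard_normal((20, 3))])
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(quaternion_barycenter(noisy))
```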

  4. A prospective randomized trial of content expertise versus process expertise in small group teaching.

    Science.gov (United States)

    Peets, Adam D; Cooke, Lara; Wright, Bruce; Coderre, Sylvain; McLaughlin, Kevin

    2010-10-14

    Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise to those with content expertise on medical students' learning outcomes in a structured small group environment. One hundred and fifty-one first year medical students were randomized to 11 groups for the small group component of the Cardiovascular-Respiratory course at the University of Calgary. Each group was then block randomized to one of three streams for the entire course: tutoring exclusively by physicians with content expertise (n = 5), tutoring exclusively by physicians with process expertise (n = 3), and tutoring by content experts for 11 sessions and process experts for 10 sessions (n = 3). After each of the 21 small group sessions, students evaluated their preceptors' teaching with a standardized instrument. Students' knowledge acquisition was assessed by an end-of-course multiple choice (EOC-MCQ) examination. Students rated the process experts significantly higher on each of the instrument's 15 items, including the overall rating. Students' mean score (±SD) on the EOC-MCQ exam was 76.1% (8.1) for groups taught by content experts, 78.2% (7.8) for the combination group and 79.5% (9.2) for process expert groups (p = 0.11). By linear regression student performance was higher if they had been taught by process experts (regression coefficient 2.7 [0.1, 5.4], p [...]). It is possible for process experts to teach first year medical students within a structured small group environment; preceptors with process expertise result in at least equivalent, if not superior, student outcomes in this setting.

  5. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  6. Determining average yarding distance.

    Science.gov (United States)

    Roger H. Twito; Charles N. Mann

    1979-01-01

    Emphasis on environmental and esthetic quality in timber harvesting has brought about increased use of complex boundaries of cutting units and a consequent need for a rapid and accurate method of determining the average yarding distance and area of these units. These values, needed for evaluation of road and landing locations in planning timber harvests, are easily and...

  7. Average Revisited in Context

    Science.gov (United States)

    Watson, Jane; Chick, Helen

    2012-01-01

    This paper analyses the responses of 247 middle school students to items requiring the concept of average in three different contexts: a city's weather reported in maximum daily temperature, the number of children in a family, and the price of houses. The mixed but overall disappointing performance on the six items in the three contexts indicates…

  8. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of Positive Definite Matrices. • Diffusion Tensor Imaging: 3 × 3 pd matrices model water flow at each voxel of brain scan. • Elasticity: 6 × 6 pd matrices model stress tensors. • Machine Learning: n × n pd matrices occur as kernel matrices. Tanvi Jain. Averaging operations on matrices ...

  9. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Full Text Available Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  10. Generation and monitoring of discrete stable random processes using multiple immigration population models

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, J O; Hopcraft, K I; Jakeman, E [Applied Mathematics Division, School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD (United Kingdom)

    2003-11-21

    Some properties of classical population processes that comprise births, deaths and multiple immigrations are investigated. The rates at which the immigrants arrive can be tailored to produce a population whose steady state fluctuations are described by a pre-selected distribution. Attention is focused on the class of distributions with a discrete stable law, which have power-law tails and whose moments and autocorrelation function do not exist. The separate problem of monitoring and characterizing the fluctuations is studied, analysing the statistics of individuals that leave the population. The fluctuations in the size of the population are transferred to the times between emigrants that form an intermittent time series of events. The emigrants are counted with a detector of finite dynamic range and response time. This is modelled through clipping the time series or saturating it at an arbitrary but finite level, whereupon its moments and correlation properties become finite. Distributions for the time to the first counted event and for the time between events exhibit power-law regimes that are characteristic of the fluctuations in population size. The processes provide analytical models with which properties of complex discrete random phenomena can be explored, and in addition provide generic means by which random time series encompassing a wide range of intermittent and other discrete random behaviour may be generated.
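
    A stripped-down illustration of such a population process: deaths at a per-capita rate plus immigration in heavy-tailed batches (births omitted for brevity), simulated with the Gillespie algorithm. The batch-size law here is a generic Zipf (power-law) distribution chosen for illustration; the paper's specific rate tailoring, which yields an exactly discrete-stable steady state, is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(t_max=2000.0, death=1.0, immigration=0.5, alpha=2.5):
    """Death + multiple-immigration population process (Gillespie).

    Deaths occur at rate death*N; immigration events occur at rate
    `immigration` and bring in a batch whose size follows a Zipf law
    with exponent alpha, producing heavy-tailed fluctuations.
    """
    n, t, samples = 0, 0.0, []
    while t < t_max:
        rate = death * n + immigration
        t += rng.exponential(1.0 / rate)
        if rng.random() < death * n / rate:
            n -= 1                      # one individual dies/emigrates
        else:
            n += rng.zipf(alpha)        # batch of immigrants arrives
        samples.append(n)
    return np.array(samples)

pop = simulate()
print("mean population:", pop.mean(), " max:", pop.max())
```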

  11. Generation and monitoring of discrete stable random processes using multiple immigration population models

    International Nuclear Information System (INIS)

    Matthews, J O; Hopcraft, K I; Jakeman, E

    2003-01-01

    Some properties of classical population processes that comprise births, deaths and multiple immigrations are investigated. The rates at which the immigrants arrive can be tailored to produce a population whose steady state fluctuations are described by a pre-selected distribution. Attention is focused on the class of distributions with a discrete stable law, which have power-law tails and whose moments and autocorrelation function do not exist. The separate problem of monitoring and characterizing the fluctuations is studied, analysing the statistics of individuals that leave the population. The fluctuations in the size of the population are transferred to the times between emigrants that form an intermittent time series of events. The emigrants are counted with a detector of finite dynamic range and response time. This is modelled through clipping the time series or saturating it at an arbitrary but finite level, whereupon its moments and correlation properties become finite. Distributions for the time to the first counted event and for the time between events exhibit power-law regimes that are characteristic of the fluctuations in population size. The processes provide analytical models with which properties of complex discrete random phenomena can be explored, and in addition provide generic means by which random time series encompassing a wide range of intermittent and other discrete random behaviour may be generated

  12. Simulation study on characteristics of long-range interaction in randomly asymmetric exclusion process

    Science.gov (United States)

    Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying

    2015-04-01

    In this paper we investigate, via Monte Carlo simulations, the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update. Particles in the model first try to hop over successive unoccupied sites with a probability q, which differs from previous exclusion process models. The probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases in the system: the low density (LD) phase, high density (HD) phase, and maximal current (MC) phase, respectively. Interestingly, bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
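
    One hedged reading of the hopping rule can be simulated directly: under a random-sequential update, a chosen particle hops to the next site and then keeps advancing across consecutive empty sites with probability q per additional site, with injection rate α and extraction rate β at the open boundaries. A Monte Carlo sketch under these assumptions (all parameter values are illustrative, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

def step(lattice, alpha, beta, q):
    """One random-sequential update of the open-boundary chain."""
    L = lattice.size
    i = rng.integers(-1, L)              # -1 plays the role of the reservoir
    if i == -1:                          # injection at the left boundary
        if lattice[0] == 0 and rng.random() < alpha:
            lattice[0] = 1
    elif lattice[i] == 1:
        if i == L - 1:                   # extraction at the right boundary
            if rng.random() < beta:
                lattice[i] = 0
        elif lattice[i + 1] == 0:
            j = i + 1                    # hop, then keep advancing over
            while j + 1 < L and lattice[j + 1] == 0 and rng.random() < q:
                j += 1                   # each further empty site w.p. q
            lattice[i], lattice[j] = 0, 1

L, alpha, beta, q = 200, 0.3, 0.8, 0.5
lat = np.zeros(L, dtype=int)
dens = []
for sweep in range(5000):
    for _ in range(L + 1):
        step(lat, alpha, beta, q)
    if sweep >= 1000:                    # discard transient sweeps
        dens.append(lat.mean())
print("bulk density estimate:", np.mean(dens))
```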

  13. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Full Text Available Light emitting diode (LED) lamp has attracted increasing interest in the field of lighting systems due to its low energy and long lifetime. For different functions (i.e., illumination and color), it may have two or more performance characteristics. When the multiple performance characteristics are dependent, it creates a challenging problem to accurately analyze the system reliability. In this paper, we assume that the system has two performance characteristics, and each performance characteristic is governed by a random effects Gamma process where the random effects can capture the unit-to-unit differences. The dependency of performance characteristics is described by a Frank copula function. Via the copula function, the reliability assessment model is proposed. Considering the model is so complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example about actual LED lamps data is given to demonstrate the usefulness and validity of the proposed model and method.
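
    The dependence structure can be made concrete: draw a pair from a Frank copula by conditional inversion, then map the uniforms through gamma quantile functions to obtain dependent degradation increments for the two performance characteristics. All parameter values below are hypothetical, and the random-effects layer of the paper's model is omitted for brevity.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)

def frank_sample(n, theta):
    """Sample (u, v) from a Frank copula by conditional inversion.

    Solves dC(u, v)/du = t for v, with u and t independent uniforms.
    """
    u = rng.random(n)
    t = rng.random(n)
    a = np.exp(-theta * u)
    v = -np.log1p(t * (np.exp(-theta) - 1.0) / (t + a * (1.0 - t))) / theta
    return u, v

# Dependent increments of two gamma degradation processes over one
# inspection interval (shape/scale values are purely illustrative).
u, v = frank_sample(10000, theta=5.0)
lumen_loss = gamma.ppf(u, a=2.0, scale=0.5)   # performance characteristic 1
color_shift = gamma.ppf(v, a=1.5, scale=0.3)  # performance characteristic 2
print("rank correlation ~", np.corrcoef(u, v)[0, 1].round(3))
```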

  14. [The third lumbar transverse process syndrome treated with acupuncture at zygapophyseal joint and transverse process: a randomized controlled trial].

    Science.gov (United States)

    Li, Fangling; Bi, Dingyan

    2017-08-12

    To explore the differences in effect for the third lumbar transverse process syndrome between acupuncture mainly at zygapophyseal joint and transverse process and conventional acupuncture. Eighty cases were randomly assigned into an observation group and a control group, 40 cases in each one. In the observation group, patients were treated with acupuncture at zygapophyseal joint, transverse process, the superior gluteus nerve into the hip point and Weizhong (BL 40), and those in the control group were treated with acupuncture at Qihaishu (BL 24), Jiaji (EX-B 2) of L2-L4, the superior gluteus nerve into the hip point and Weizhong (BL 40). The treatment was given once a day, 6 times a week, for 2 weeks. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) low back pain score and simplified Chinese Oswestry disability index (SC-ODI) were observed before and after treatment as well as 6 months after treatment, and the clinical effects were evaluated. The total effective rate in the observation group was 95.0% (38/40), which was significantly higher than 82.5% (33/40) in the control group (P < 0.05). Acupuncture mainly at zygapophyseal joint and transverse process for the third lumbar transverse process syndrome achieves good effect, which is better than that of conventional acupuncture on relieving pain, improving lumbar function and life quality.

  15. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong [...] approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.

  16. Finding Order in Randomness: Single-Molecule Studies Reveal Stochastic RNA Processing | Center for Cancer Research

    Science.gov (United States)

    Producing a functional eukaryotic messenger RNA (mRNA) requires the coordinated activity of several large protein complexes to initiate transcription, elongate nascent transcripts, splice together exons, and cleave and polyadenylate the 3’ end. Kinetic competition between these various processes has been proposed to regulate mRNA maturation, but this model could lead to multiple, randomly determined, or stochastic, pathways or outcomes. Regulatory checkpoints have been suggested as a means of ensuring quality control. However, current methods have been unable to tease apart the contributions of these processes at a single gene or on a time scale that could provide mechanistic insight. To begin to investigate the kinetic relationship between transcription and splicing, Daniel Larson, Ph.D., of CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleagues employed a single-molecule RNA imaging approach to monitor production and processing of a human β-globin reporter gene in living cells.

  17. A Correlated Random Effects Model for Non-homogeneous Markov Processes with Nonignorable Missingness.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2013-05-01

    Life history data arising in clusters with prespecified assessment time points for patients often feature incomplete data, since patients may choose to visit the clinic based on their needs. Markov process models provide a useful tool for describing disease progression for life history data. The literature mainly focuses on time-homogeneous processes. In this paper we develop methods to deal with non-homogeneous Markov processes with incomplete clustered life history data. A correlated random effects model is developed to deal with the nonignorable missingness, and a time transformation is employed to address the non-homogeneity in the transition model. Maximum likelihood estimation based on the Monte Carlo EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well in many situations. We also apply this method to an Alzheimer's disease study.

  18. Quasi-steady-state analysis of two-dimensional random intermittent search processes

    KAUST Repository

    Bressloff, Paul C.

    2011-06-01

    We use perturbation methods to analyze a two-dimensional random intermittent search process, in which a searcher alternates between a diffusive search phase and a ballistic movement phase whose velocity direction is random. A hidden target is introduced within a rectangular domain with reflecting boundaries. If the searcher moves within range of the target and is in the search phase, it has a chance of detecting the target. A quasi-steady-state analysis is applied to the corresponding Chapman-Kolmogorov equation. This generates a reduced Fokker-Planck description of the search process involving a nonzero drift term and an anisotropic diffusion tensor. In the case of a uniform direction distribution, for which there is zero drift, and isotropic diffusion, we use the method of matched asymptotics to compute the mean first passage time (MFPT) to the target, under the assumption that the detection range of the target is much smaller than the size of the domain. We show that an optimal search strategy exists, consistent with previous studies of intermittent search in a radially symmetric domain that were based on a decoupling or moment closure approximation. We also show how the decoupling approximation can break down in the case of biased search processes. Finally, we analyze the MFPT in the case of anisotropic diffusion and find that anisotropy can be useful when the searcher starts from a fixed location. © 2011 American Physical Society.
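
    The MFPT analyzed above can also be estimated by brute-force Monte Carlo, which is a useful sanity check on the matched-asymptotics result. A sketch under simplified assumptions: uniform direction distribution, detection at a finite rate inside the target radius, reflecting box. All parameter names and values (D, v, k_on, k_off, lam, rho) are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def mfpt(trials=100, D=1.0, v=5.0, k_on=1.0, k_off=2.0, lam=50.0,
         box=(5.0, 5.0), target=(2.5, 2.5), rho=0.5, dt=1e-3):
    """Mean first passage time of a 2D intermittent search (Monte Carlo).

    Search phase: diffusion with coefficient D; detection at rate lam
    within radius rho of the target; switch to ballistic at rate k_on.
    Ballistic phase: speed v, uniformly random direction; switch back
    at rate k_off. The box boundaries are reflecting.
    """
    times = []
    for _ in range(trials):
        pos = rng.random(2) * box
        searching, theta, t = True, 0.0, 0.0
        while True:
            t += dt
            if searching:
                pos += np.sqrt(2 * D * dt) * rng.standard_normal(2)
                if np.hypot(*(pos - target)) < rho and rng.random() < lam * dt:
                    break                        # target detected
                if rng.random() < k_on * dt:
                    searching = False
                    theta = rng.uniform(0, 2 * np.pi)
            else:
                pos += v * dt * np.array([np.cos(theta), np.sin(theta)])
                if rng.random() < k_off * dt:
                    searching = True
            pos = np.abs(pos)                            # reflect at 0
            pos = np.minimum(pos, 2 * np.array(box) - pos)  # reflect at walls
        times.append(t)
    return np.mean(times)

print("estimated MFPT:", mfpt())
```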

  19. Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process

    Science.gov (United States)

    Salvi, Michele; Simenhaus, François

    2018-03-01

    We consider a random walk in dimension d≥1 in a dynamic random environment evolving as an interchange process with rate γ>0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ=+∞ where the environment is refreshed between each step of the walker. We extend in three ways part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d≥1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but that it is also close to it.

  20. Characteristics of the probability function for three random-walk models of reaction--diffusion processes

    International Nuclear Information System (INIS)

    Musho, M.K.; Kozak, J.J.

    1984-01-01

    A method is presented for calculating exactly the relative width (σ²)^(1/2)/⟨n⟩, the skewness γ1, and the kurtosis γ2 characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step-length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P. A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)] is based on the use of group theoretic arguments within the framework of the theory of finite Markov processes.

  1. Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process

    Science.gov (United States)

    Salvi, Michele; Simenhaus, François

    2018-05-01

    We consider a random walk in dimension d≥1 in a dynamic random environment evolving as an interchange process with rate γ>0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ=+∞ where the environment is refreshed between each step of the walker. We extend in three ways part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d≥1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but that it is also close to it.

  2. Quasi-steady-state analysis of two-dimensional random intermittent search processes

    KAUST Repository

    Bressloff, Paul C.; Newby, Jay M.

    2011-01-01

    We use perturbation methods to analyze a two-dimensional random intermittent search process, in which a searcher alternates between a diffusive search phase and a ballistic movement phase whose velocity direction is random. A hidden target is introduced within a rectangular domain with reflecting boundaries. If the searcher moves within range of the target and is in the search phase, it has a chance of detecting the target. A quasi-steady-state analysis is applied to the corresponding Chapman-Kolmogorov equation. This generates a reduced Fokker-Planck description of the search process involving a nonzero drift term and an anisotropic diffusion tensor. In the case of a uniform direction distribution, for which there is zero drift, and isotropic diffusion, we use the method of matched asymptotics to compute the mean first passage time (MFPT) to the target, under the assumption that the detection range of the target is much smaller than the size of the domain. We show that an optimal search strategy exists, consistent with previous studies of intermittent search in a radially symmetric domain that were based on a decoupling or moment closure approximation. We also show how the decoupling approximation can break down in the case of biased search processes. Finally, we analyze the MFPT in the case of anisotropic diffusion and find that anisotropy can be useful when the searcher starts from a fixed location. © 2011 American Physical Society.

  3. Spherical particle Brownian motion in viscous medium as non-Markovian random process

    International Nuclear Information System (INIS)

    Morozov, Andrey N.; Skripkin, Alexey V.

    2011-01-01

    The Brownian motion of a spherical particle in an infinite medium is described by the conventional methods and integral transforms considering the entrainment of surrounding particles of the medium by the Brownian particle. It is demonstrated that fluctuations of the Brownian particle velocity represent a non-Markovian random process. The features of Brownian motion in short time intervals and in small displacements are considered. Highlights: (i) A description of Brownian motion considering the entrainment of the medium is developed. (ii) We find the equations for statistical characteristics of impulse fluctuations. (iii) Brownian motion at small time intervals is considered. (iv) Theoretical results and experimental data are compared.

  4. Increased certification of semi-device independent random numbers using many inputs and more post-processing

    International Nuclear Information System (INIS)

    Mironowicz, Piotr; Tavakoli, Armin; Hameedi, Alley; Marques, Breno; Bourennane, Mohamed; Pawłowski, Marcin

    2016-01-01

    Quantum communication with systems of dimension larger than two provides advantages in information processing tasks. Examples include higher rates of key distribution and random number generation. The main disadvantage of using such multi-dimensional quantum systems is the increased complexity of the experimental setup. Here, we analyze a not-so-obvious problem: the relation between randomness certification and computational requirements of the post-processing of experimental data. In particular, we consider semi-device independent randomness certification from an experiment using a four dimensional quantum system to violate the classical bound of a random access code. Using state-of-the-art techniques, a smaller quantum violation requires more computational power to demonstrate randomness, which at some point becomes impossible with today’s computers although the randomness is (probably) still there. We show that by dedicating more input settings of the experiment to randomness certification, then by more computational postprocessing of the experimental data which corresponds to a quantum violation, one may increase the amount of certified randomness. Furthermore, we introduce a method that significantly lowers the computational complexity of randomness certification. Our results show how more randomness can be generated without altering the hardware and indicate a path for future semi-device independent protocols to follow. (paper)

  5. Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2007-01-01

    A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent...

  6. Source Reconstruction of Brain Potentials Using Bayesian Model Averaging to Analyze Face Intra-Domain vs. Face-Occupation Cross-Domain Processing.

    Science.gov (United States)

    Olivares, Ela I; Lage-Castellanos, Agustín; Bobes, María A; Iglesias, Jaime

    2018-01-01

    We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called "Fusiform Face Area", "FFA" and "Occipital Face Area", "OFA", respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands.

  7. Source Reconstruction of Brain Potentials Using Bayesian Model Averaging to Analyze Face Intra-Domain vs. Face-Occupation Cross-Domain Processing

    Directory of Open Access Journals (Sweden)

    Ela I. Olivares

    2018-03-01

    Full Text Available We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called "Fusiform Face Area", "FFA" and "Occipital Face Area", "OFA", respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands.

  8. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails - one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shape statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super large values; and (iv) "Average is Over" indeed.

  9. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    Science.gov (United States)

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  10. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
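
    The flavor of an ARMA graph-filter recursion can be shown in a few lines: iterating y ← ψSy + φx on a graph-shift operator S converges, for |ψ|·‖S‖ < 1, to the rational graph frequency response φ/(1 − ψλ) applied to the input signal. A minimal sketch on a path graph, using a first-order member of the ARMA family (notation follows the usual convention and is not necessarily the paper's):

```python
import numpy as np

def arma1_filter(S, x, psi, phi, iters=100):
    """Distributed ARMA(1) graph filter: y <- psi * S @ y + phi * x.

    For |psi| * ||S|| < 1 the iteration converges to the rational graph
    frequency response phi / (1 - psi * lambda), with lambda ranging
    over the eigenvalues of the shift operator S.
    """
    y = np.zeros_like(x)
    for _ in range(iters):
        y = psi * (S @ y) + phi * x
    return y

# Tiny example: path graph on 5 nodes, shifted Laplacian as shift operator.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
L = np.diag(A.sum(axis=1)) - A
lmax = np.linalg.eigvalsh(L).max()
S = L - (lmax / 2) * np.eye(5)          # spectrum centered around zero
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])

psi, phi = 0.4 / lmax, 1.0
y = arma1_filter(S, x, psi, phi)
# Check against the closed-form response on the eigenbasis of S.
w, V = np.linalg.eigh(S)
y_exact = V @ ((phi / (1 - psi * w)) * (V.T @ x))
print(np.allclose(y, y_exact, atol=1e-8))
```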

  11. The emergence of typical entanglement in two-party random processes

    International Nuclear Information System (INIS)

    Dahlsten, O C O; Oliveira, R; Plenio, M B

    2007-01-01

    We investigate the entanglement within a system undergoing a random, local process. We find that there is initially a phase of very fast generation and spread of entanglement. At the end of this phase the entanglement is typically maximal. In Oliveira et al (2007 Phys. Rev. Lett. 98 130502) we proved that the maximal entanglement is reached to a fixed arbitrary accuracy within O(N³) steps, where N is the total number of qubits. Here we provide a detailed and more pedagogical proof. We demonstrate that one can use the so-called stabilizer gates to simulate this process efficiently on a classical computer. Furthermore, we discuss three ways of identifying the transition from the phase of rapid spread of entanglement to the stationary phase: (i) the time when saturation of the maximal entanglement is achieved, (ii) the cutoff moment, when the entanglement probability distribution is practically stationary, and (iii) the moment block entanglement exhibits volume scaling. We furthermore investigate the mixed state and multipartite setting. Numerically, we find that the mutual information appears to behave similarly to the quantum correlations and that there is a well-behaved phase-space flow of entanglement properties towards an equilibrium. We describe how the emergence of typical entanglement can be used to create a much simpler tripartite entanglement description. The results form a bridge between certain abstract results concerning typical (also known as generic) entanglement relative to an unbiased distribution on pure states and the more physical picture of distributions emerging from random local interactions.

  12. Average nuclear surface properties

    International Nuclear Information System (INIS)

    Groote, H. von.

    1979-01-01

    The definition of the nuclear surface energy is discussed for semi-infinite matter. This definition is extended also for the case that there is a neutron gas instead of vacuum on the one side of the plane surface. The calculations were performed with the Thomas-Fermi Model of Syler and Blanchard. The parameters of the interaction of this model were determined by a least squares fit to experimental masses. The quality of this fit is discussed with respect to nuclear masses and density distributions. The average surface properties were calculated for different particle asymmetry of the nucleon-matter ranging from symmetry beyond the neutron-drip line until the system no longer can maintain the surface boundary and becomes homogeneous. The results of the calculations are incorporated in the nuclear Droplet Model which then was fitted to experimental masses. (orig.)

  13. Americans' Average Radiation Exposure

    International Nuclear Information System (INIS)

    2000-01-01

    We live with radiation every day. We receive radiation exposures from cosmic rays, from outer space, from radon gas, and from other naturally radioactive elements in the earth. This is called natural background radiation. It includes the radiation we get from plants, animals, and from our own bodies. We also are exposed to man-made sources of radiation, including medical and dental treatments, television sets and emission from coal-fired power plants. Generally, radiation exposures from man-made sources are only a fraction of those received from natural sources. One exception is high exposures used by doctors to treat cancer patients. Each year in the United States, the average dose to people from natural and man-made radiation sources is about 360 millirem. A millirem is an extremely tiny amount of energy absorbed by tissues in the body

  14. Rosuvastatin for Primary Prevention in Older Persons With Elevated C-Reactive Protein and Low to Average Low-Density Lipoprotein Cholesterol Levels: Exploratory Analysis of a Randomized Trial

    DEFF Research Database (Denmark)

    Glynn, R.J.; Koenig, W.; Nordestgaard, B.G.

    2010-01-01

    Background: Randomized data on statins for primary prevention in older persons are limited, and the relative hazard of cardiovascular disease associated with an elevated cholesterol level weakens with advancing age. Objective: To assess the efficacy and safety of rosuvastatin in persons 70 years or older. Design: Secondary analysis of JUPITER (Justification for the Use of statins in Prevention: an Intervention Trial Evaluating Rosuvastatin), a randomized, double-blind, placebo-controlled trial. Setting: 1315 sites in 26 countries randomly assigned participants in JUPITER. Participants: Among ... assigned in a 1:1 ratio to receive 20 mg of rosuvastatin daily or placebo. Measurements: The primary end point was the occurrence of a first cardiovascular event (myocardial infarction, stroke, arterial revascularization, hospitalization for unstable angina, or death from cardiovascular causes). Results: ...

  15. Random mutagenesis of aspergillus niger and process optimization for enhanced production of glucose oxidase

    International Nuclear Information System (INIS)

    Haq, I.; Nawaz, A.; Mukhtar, A.N.H.; Mansoor, H.M.Z.; Ameer, S.M.

    2014-01-01

    The study deals with the improvement of the wild strain Aspergillus niger IIB-31 through random mutagenesis using chemical mutagens. The main aim of the work was to enhance the glucose oxidase (GOX) yield of the wild strain (24.57 ± 0.01 U/g of cell mass) through random mutagenesis and process optimization. The wild strain of Aspergillus niger IIB-31 was treated with chemical mutagens such as ethyl methane sulphonate (EMS) and nitrous acid for this purpose. Of the mutagen-treated variants, 98 showing positive results were picked and screened for glucose oxidase production using submerged fermentation. The EMS-treated mutant strain E45 gave the highest glucose oxidase production (69.47 ± 0.01 U/g of cell mass), approximately threefold greater than the wild strain IIB-31. The preliminary cultural conditions for the production of glucose oxidase using submerged fermentation from strain E45 were also optimized. The highest yield of GOX was obtained using 8% glucose as carbon and 0.3% peptone as nitrogen source at a medium pH of 7.0 after an incubation period of 72 h at 30 °C. (author)

  16. Topological characterization of antireflective and hydrophobic rough surfaces: are random process theory and fractal modeling applicable?

    Science.gov (United States)

    Borri, Claudia; Paggi, Marco

    2015-02-01

    The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides a deep critical comparison on this matter, providing some insight into the capabilities and limitations in applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out and a comprehensive software package for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD by changing resolution as compared to what was expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of joint PDFs of textured surfaces in case of special surface treatments, not accounted for by fractal modeling.

  17. Randomized clinical trials as reflexive-interpretative process in patients with rheumatoid arthritis: a qualitative study.

    Science.gov (United States)

    de Jorge, Mercedes; Parra, Sonia; de la Torre-Aboki, Jenny; Herrero-Beaumont, Gabriel

    2015-08-01

    Patients in randomized clinical trials have to adapt themselves to a restricted language to capture the necessary information to determine the safety and efficacy of a new treatment. The aim of this study was to explore the experience of patients with rheumatoid arthritis after completing their participation in a biologic therapy randomized clinical trial lasting 3 years. A qualitative approach was used. The information was collected using 15 semi-structured interviews of patients with rheumatoid arthritis. Data collection was guided by the emergent analysis until no more relevant variations in the categories were found. The data were analysed using the grounded theory method. The objective of the patients when entering the study was to improve their quality of life by initiating the treatment. However, the experience changed the significance of the illness as they acquired skills and practical knowledge related to the management of their disease. The category "Interactional Empowerment" emerged as the core category, as it represented the participative experience in a clinical trial. The process integrates the following categories: "weight of systematisation", "working together", and the significance of the experience: "the duties". These categories evolved simultaneously. The clinical trial monitoring activities enabled patients to engage in a reflexive-interpretative mechanism that transformed the emotional and symbolic significance of their disease and improved the empowerment of the patient. A better communicative strategy with the health professionals, the relatives of the patients, and the community was also achieved.

  18. Cognitive processing therapy versus supportive counseling for acute stress disorder following assault: a randomized pilot trial.

    Science.gov (United States)

    Nixon, Reginald D V

    2012-12-01

    The study tested the efficacy and tolerability of cognitive processing therapy (CPT) for survivors of assault with acute stress disorder. Participants (N=30) were randomly allocated to CPT or supportive counseling. Therapy comprised six individual weekly sessions of 90-min duration. Independent diagnostic assessment for PTSD was conducted at posttreatment. Participants completed self-report measures of posttraumatic stress, depression, and negative trauma-related beliefs at pre-, posttreatment, and 6-month follow-up. Results indicated that both interventions were successful in reducing symptoms at posttreatment with no statistical difference between the two; within and between-group effect sizes and the proportion of participants not meeting PTSD criteria was greater in CPT. Treatment gains were maintained for both groups at 6-month follow-up. Copyright © 2012. Published by Elsevier Ltd.

  19. Hierarchical random cellular neural networks for system-level brain-like signal processing.

    Science.gov (United States)

    Kozma, Robert; Puljic, Marko

    2013-09-01

    Sensory information processing and cognition in brains are modeled using dynamic systems theory. The brain's dynamic state is described by a trajectory evolving in a high-dimensional state space. We introduce a hierarchy of random cellular automata as the mathematical tools to describe the spatio-temporal dynamics of the cortex. The corresponding brain model is called neuropercolation which has distinct advantages compared to traditional models using differential equations, especially in describing spatio-temporal discontinuities in the form of phase transitions. Phase transitions demarcate singularities in brain operations at critical conditions, which are viewed as hallmarks of higher cognition and awareness experience. The introduced Monte-Carlo simulations obtained by parallel computing point to the importance of computer implementations using very large-scale integration (VLSI) and analog platforms. Copyright © 2013 Elsevier Ltd. All rights reserved.
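
    A toy version of a random cellular automaton in this spirit is a noisy majority-vote rule: each cell adopts the majority state of its neighborhood but errs with probability ε, and sweeping ε moves the lattice between ordered and disordered phases. This sketch only illustrates the phase-transition mechanism; neuropercolation proper adds hierarchy and non-local connections.

```python
import numpy as np

rng = np.random.default_rng(7)

def update(grid, eps):
    """Noisy majority rule: each cell follows the majority of its von
    Neumann neighborhood (plus itself), flipping with probability eps."""
    n = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
         np.roll(grid, 1, 1) + np.roll(grid, -1, 1) + grid)
    majority = (n >= 3).astype(int)
    noise = rng.random(grid.shape) < eps
    return np.where(noise, 1 - majority, majority)

grid = rng.integers(0, 2, size=(64, 64))
for eps in (0.05, 0.20):
    g = grid.copy()
    for _ in range(500):
        g = update(g, eps)
    # Mean activation: near 0 or 1 for small eps (ordered phase), near
    # 0.5 for large eps (disordered phase) -- a percolation-style change.
    print(f"eps={eps:.2f}: mean activation {g.mean():.2f}")
```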

  20. Application of random-point processes to the detection of radiation sources

    International Nuclear Information System (INIS)

    Woods, J.W.

    1978-01-01

    In this report the mathematical theory of random-point processes is reviewed and it is shown how use of the theory can yield optimal solutions to the problem of detecting radiation sources. As noted, the theory also applies to image processing in low-light-level or low-count-rate situations. Paralleling Snyder's work, the theory is extended to the multichannel case of a continuous, two-dimensional (2-D), energy-time space. This extension essentially involves showing that the data are doubly stochastic Poisson (DSP) point processes in energy as well as time. Further, a new 2-D recursive formulation is presented for the radiation-detection problem with large computational savings over nonrecursive techniques when the number of channels is large (greater than or equal to 30). Finally, some adaptive strategies for on-line "learning" of unknown, time-varying signal and background-intensity parameters and statistics are presented and discussed. These adaptive procedures apply when a complete statistical description is not available a priori.

  1. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  2. Random Gap Detection Test (RGDT) performance of individuals with central auditory processing disorders from 5 to 25 years of age.

    Science.gov (United States)

    Dias, Karin Ziliotto; Jutras, Benoît; Acrani, Isabela Olszanski; Pereira, Liliane Desgualdo

    2012-02-01

    The aim of the present study was to assess the auditory temporal resolution ability in individuals with central auditory processing disorders, to examine the maturation effect, and to investigate the relationship between performance on a temporal resolution test and performance on other central auditory tests. Participants were divided into two groups: 131 with Central Auditory Processing Disorder and 94 with normal auditory processing. They had pure-tone air-conduction thresholds no poorer than 15 dB HL bilaterally, normal admittance measures and presence of acoustic reflexes. Also, they were assessed with a central auditory test battery. Participants who failed one or more tests were included in the Central Auditory Processing Disorder group and those in the control group obtained normal performance on all tests. Following the auditory processing assessment, the Random Gap Detection Test was administered to the participants. A three-way ANOVA was performed. Correlation analyses were also done between the four Random Gap Detection Test subtests data as well as between Random Gap Detection Test data and the other auditory processing test results. There was a significant difference between the age-group performances in children with and without Central Auditory Processing Disorder. Also, 48% of children with Central Auditory Processing Disorder failed the Random Gap Detection Test and the percentage decreased as a function of age. The highest percentage (86%) was found in the 5-6 year-old children. Furthermore, results revealed a strong significant correlation between the four Random Gap Detection Test subtests. There was a modest correlation between the Random Gap Detection Test results and the dichotic listening tests. No significant correlation was observed between the Random Gap Detection Test data and the results of the other tests in the battery. Random Gap Detection Test should not be administered to children younger than 7 years old because...

  3. Brain training game improves executive functions and processing speed in the elderly: a randomized controlled trial.

    Science.gov (United States)

    Nouchi, Rui; Taki, Yasuyuki; Takeuchi, Hikaru; Hashizume, Hiroshi; Akitsuki, Yuko; Shigemune, Yayoi; Sekiguchi, Atsushi; Kotozaki, Yuka; Tsukiura, Takashi; Yomogida, Yukihito; Kawashima, Ryuta

    2012-01-01

    The beneficial effects of brain training games are expected to transfer to other cognitive functions, but these beneficial effects are poorly understood. Here we investigate the impact of the brain training game (Brain Age) on cognitive functions in the elderly. Thirty-two elderly volunteers were recruited through an advertisement in the local newspaper and randomly assigned to one of two game groups (Brain Age, Tetris). This study was completed by 14 of the 16 members in the Brain Age group and 14 of the 16 members in the Tetris group. To maximize the benefit of the interventions, all participants were non-gamers who reported playing less than one hour of video games per week over the past 2 years. Participants in both the Brain Age and the Tetris groups played their game for about 15 minutes per day, at least 5 days per week, for 4 weeks. Each group played for a total of about 20 days. Measures of cognitive function were conducted before and after training. The measures fell into four categories (global cognitive status, executive functions, attention, and processing speed). Results showed that the effects of the brain training game transferred to executive functions and to processing speed. However, the brain training game showed no transfer effect on global cognitive status or attention. Our results showed that playing Brain Age for 4 weeks could improve cognitive functions (executive functions and processing speed) in the elderly. This result indicates that the elderly may be able to improve executive functions and processing speed with short-term training. The results need replication in large samples. Long-term effects and relevance for everyday functioning remain uncertain as yet. UMIN Clinical Trial Registry 000002825.

  4. Brain training game improves executive functions and processing speed in the elderly: a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Rui Nouchi

    Full Text Available The beneficial effects of brain training games are expected to transfer to other cognitive functions, but these beneficial effects are poorly understood. Here we investigate the impact of the brain training game (Brain Age) on cognitive functions in the elderly. Thirty-two elderly volunteers were recruited through an advertisement in the local newspaper and randomly assigned to one of two game groups (Brain Age, Tetris). This study was completed by 14 of the 16 members in the Brain Age group and 14 of the 16 members in the Tetris group. To maximize the benefit of the interventions, all participants were non-gamers who reported playing less than one hour of video games per week over the past 2 years. Participants in both the Brain Age and the Tetris groups played their game for about 15 minutes per day, at least 5 days per week, for 4 weeks. Each group played for a total of about 20 days. Measures of cognitive function were conducted before and after training. The measures fell into four categories (global cognitive status, executive functions, attention, and processing speed). Results showed that the effects of the brain training game transferred to executive functions and to processing speed. However, the brain training game showed no transfer effect on global cognitive status or attention. Our results showed that playing Brain Age for 4 weeks could improve cognitive functions (executive functions and processing speed) in the elderly. This result indicates that the elderly may be able to improve executive functions and processing speed with short-term training. The results need replication in large samples. Long-term effects and relevance for everyday functioning remain uncertain as yet. UMIN Clinical Trial Registry 000002825.

  5. Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations

    Directory of Open Access Journals (Sweden)

    Marco Lanuzza

    2011-04-01

    Full Text Available Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed considering the particular case of the design of energy aware adder circuits. Five well known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide semiconductor (CMOS) standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (which are conventionally identified as low-energy slow architectures) operating at higher power supply voltages can achieve a timing yield significantly better than more complex faster adders when used in low-power design with supply voltages lower than nominal.

  6. Lattice Boltzmann simulation of the gas-solid adsorption process in reconstructed random porous media

    Science.gov (United States)

    Zhou, L.; Qu, Z. G.; Ding, T.; Miao, J. Y.

    2016-04-01

    The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to earlier adsorption equilibrium because external resistance is the controlling factor. External and internal resistances are dominant at small and large particle sizes, respectively. The particle size at which the total resistance is minimal ranges from 3 to 7 μm for the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.

  7. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach makes it possible to implement dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-induced structure have been conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
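
    For context, the classical spectral representation that the paper starts from can be written (in assumed but standard notation for a zero-mean stationary scalar process) as

        X(t) = \sum_{k=1}^{N} \sqrt{2\, S_X(\omega_k)\, \Delta\omega}\,
               \bigl[ X_k \cos(\omega_k t) + Y_k \sin(\omega_k t) \bigr],

    where S_X is the target power spectral density and {X_k, Y_k} are 2N uncorrelated standard random variables. The random-function idea described in the abstract constrains all of the {X_k, Y_k} to be deterministic functions of just two elementary random variables, which is what makes a small representative point set sufficient for the PDEM.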

  8. Multi-fidelity Gaussian process regression for prediction of random fields

    Energy Technology Data Exchange (ETDEWEB)

    Parussini, L. [Department of Engineering and Architecture, University of Trieste (Italy); Venturi, D., E-mail: venturi@ucsc.edu [Department of Applied Mathematics and Statistics, University of California Santa Cruz (United States); Perdikaris, P. [Department of Mechanical Engineering, Massachusetts Institute of Technology (United States); Karniadakis, G.E. [Division of Applied Mathematics, Brown University (United States)

    2017-05-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
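
    As a toy illustration of the recursive idea (regressing the high-fidelity output on the input augmented with the low-fidelity posterior mean), a two-level sketch in scikit-learn might look as follows; the model functions and kernels are assumptions, and the paper's treatment of vector-valued fields and non-separable covariances is not reproduced here.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def f_lo(x):                      # cheap, biased surrogate (assumed)
            return np.sin(8.0 * x)

        def f_hi(x):                      # expensive "truth" (assumed)
            return np.sin(8.0 * x) + 0.3 * x

        X_lo = np.linspace(0.0, 1.0, 25)[:, None]   # many cheap observations
        X_hi = np.linspace(0.0, 1.0, 6)[:, None]    # few expensive observations

        gp_lo = GaussianProcessRegressor(RBF(0.2)).fit(X_lo, f_lo(X_lo).ravel())

        # Recursive step: condition the high-fidelity GP on [x, mean of gp_lo(x)].
        Z_hi = np.hstack([X_hi, gp_lo.predict(X_hi)[:, None]])
        gp_hi = GaussianProcessRegressor(RBF([0.2, 1.0])).fit(Z_hi, f_hi(X_hi).ravel())

        X_new = np.linspace(0.0, 1.0, 101)[:, None]
        Z_new = np.hstack([X_new, gp_lo.predict(X_new)[:, None]])
        mean, std = gp_hi.predict(Z_new, return_std=True)  # multi-fidelity prediction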

  9. A method of signal transmission path analysis for multivariate random processes

    International Nuclear Information System (INIS)

    Oguma, Ritsuo

    1984-04-01

    A method for noise analysis called ''STP (signal transmission path) analysis'' is presented as a tool to identify noise sources and their propagation paths in multivariate random processes. The basic idea of the analysis is to identify, via time series analysis, the effective network for signal power transmission among variables in the system and to make use of this information in the noise analysis. In the present paper, we accomplish this through two steps of signal processing: first, we estimate, using noise power contribution analysis, the variables which have a large contribution to the power spectrum of interest, and then evaluate the STPs for each pair of variables to identify the STPs which play a significant role in transmitting the generated noise to the variable under evaluation. The latter part of the analysis is executed through comparison of the partial coherence function and the newly introduced partial noise power contribution function. This paper presents the procedure of the STP analysis and demonstrates, using simulation data as well as Borssele PWR noise data, its effectiveness for the investigation of noise generation and propagation mechanisms. (author)

  10. Multi-fidelity Gaussian process regression for prediction of random fields

    International Nuclear Information System (INIS)

    Parussini, L.; Venturi, D.; Perdikaris, P.; Karniadakis, G.E.

    2017-01-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.

  11. Leaving Distress Behind: A Randomized Controlled Study on Change in Emotional Processing in Borderline Personality Disorder.

    Science.gov (United States)

    Berthoud, Laurent; Pascual-Leone, Antonio; Caspar, Franz; Tissot, Hervé; Keller, Sabine; Rohde, Kristina B; de Roten, Yves; Despland, Jean-Nicolas; Kramer, Ueli

    2017-01-01

    The marked impulsivity and instability of clients suffering from borderline personality disorder (BPD) greatly challenge therapists' understanding and responsiveness. This may hinder the development of a constructive therapeutic relationship despite it being of particular importance in their treatment. Recent studies have shown that using the motive-oriented therapeutic relationship (MOTR), a possible operationalization of appropriate therapist responsiveness, can enhance treatment outcome for BPD. The overall objective of this study is to examine change in emotional processing in BPD clients following the therapist's use of MOTR. The present paper focuses on N = 50 cases, n = 25 taken from each of two conditions of a randomized controlled add-on effectiveness design. Clients were allocated either to a manual-based psychiatric-psychodynamic 10-session version of general psychiatric management (GPM), a borderline-specific treatment, or to a 10-session version of GPM augmented with MOTR. Emotional states were assessed using the Classification of Affective-Meaning States (Pascual-Leone & Greenberg, 2005) at intake, midtreatment, and in the penultimate session. Across treatment, early expressions of distress, especially the emotion state of global distress, were shown to decrease significantly (p = .00), and adaptive emotions were found to emerge, with greater emotional variability and stronger outcome predictors in the MOTR condition. The findings indicate initial emotional change in BPD clients in a relatively short time frame and suggest that the addition of MOTR to psychotherapeutic treatments is promising. Clinical implications are discussed.

  12. MMRW-BOOKS, Legacy books on slowing down, thermalization, particle transport theory, random processes in reactors

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2007-01-01

    Description: Prof. M.M.R. Williams has now released three of his legacy books for free distribution: 1 - M.M.R. Williams: The Slowing Down and Thermalization of Neutrons, North-Holland Publishing Company - Amsterdam, 582 pages, 1966. Content: Part I - The Thermal Energy Region: 1. Introduction and Historical Review, 2. The Scattering Kernel, 3. Neutron Thermalization in an Infinite Homogeneous Medium, 4. Neutron Thermalization in Finite Media, 5. The Spatial Dependence of the Energy Spectrum, 6. Reactor Cell Calculations, 7. Synthetic Scattering Kernels. Part II - The Slowing Down Region: 8. Scattering Kernels in the Slowing Down Region, 9. Neutron Slowing Down in an Infinite Homogeneous Medium, 10. Neutron Slowing Down and Diffusion. 2 - M.M.R. Williams: Mathematical Methods in Particle Transport Theory, Butterworths, London, 430 pages, 1971. Content: 1 The General Problem of Particle Transport, 2 The Boltzmann Equation for Gas Atoms and Neutrons, 3 Boundary Conditions, 4 Scattering Kernels, 5 Some Basic Problems in Neutron Transport and Rarefied Gas Dynamics, 6 The Integral Form of the Transport Equation in Plane, Spherical and Cylindrical Geometries, 7 Exact Solutions of Model Problems, 8 Eigenvalue Problems in Transport Theory, 9 Collision Probability Methods, 10 Variational Methods, 11 Polynomial Approximations. 3 - M.M.R. Williams: Random Processes in Nuclear Reactors, Pergamon Press Oxford New York Toronto Sydney, 243 pages, 1974. Content: 1. Historical Survey and General Discussion, 2. Introductory Mathematical Treatment, 3. Applications of the General Theory, 4. Practical Applications of the Probability Distribution, 5. The Langevin Technique, 6. Point Model Power Reactor Noise, 7. The Spatial Variation of Reactor Noise, 8. Random Phenomena in Heterogeneous Reactor Systems, 9. Associated Fluctuation Problems, Appendix: Noise Equivalent Sources. Note to the user: Prof. M.M.R. Williams owns the copyright of these books and he authorises the OECD/NEA Data Bank

  13. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  14. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  15. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  16. Modelling estimation and analysis of dynamic processes from image sequences using temporal random closed sets and point processes with application to the cell exocytosis and endocytosis

    OpenAIRE

    Díaz Fernández, Ester

    2010-01-01

    In this thesis, new models and methodologies are introduced for the analysis of dynamic processes characterized by image sequences with spatial temporal overlapping. The spatial temporal overlapping exists in many natural phenomena and should be addressed properly in several Science disciplines such as Microscopy, Material Sciences, Biology, Geostatistics or Communication Networks. This work is related to the Point Process and Random Closed Set theories, within Stochastic Ge...

  17. Efficient rare-event simulation for multiple jump events in regularly varying random walks and compound Poisson processes

    NARCIS (Netherlands)

    B. Chen (Bohan); J. Blanchet; C.H. Rhee (Chang-Han); A.P. Zwart (Bert)

    2017-01-01

    textabstractWe propose a class of strongly efficient rare event simulation estimators for random walks and compound Poisson processes with a regularly varying increment/jump-size distribution in a general large deviations regime. Our estimator is based on an importance sampling strategy that hinges

  18. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
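
    The diffusion entropy analysis mentioned at the end reduces to one recursion: build diffusion trajectories from the noise, histogram the displacements at each lag, and fit the Shannon entropy against the logarithm of the lag. A sketch with assumed Cauchy (Lévy) noise, for which the expected exponent is δ = 1:

        import numpy as np

        rng = np.random.default_rng(1)
        xi = rng.standard_cauchy(2**16)       # heavy-tailed noise (illustrative)

        def diffusion_entropy(xi, lag, bins=200):
            x = np.cumsum(xi)                 # diffusion trajectory
            disp = x[lag:] - x[:-lag]         # displacements over the given lag
            p, edges = np.histogram(disp, bins=bins, density=True)
            dx = edges[1] - edges[0]
            p = p[p > 0]
            return -np.sum(p * np.log(p)) * dx   # Shannon entropy of the PDF

        lags = np.unique(np.logspace(1, 3, 12).astype(int))
        S = [diffusion_entropy(xi, lag) for lag in lags]
        delta = np.polyfit(np.log(lags), S, 1)[0]  # S(t) ~ A + delta * ln t
        print(f"scaling exponent delta ~ {delta:.2f}")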

  19. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  20. A process evaluation of the Supermarket Healthy Eating for Life (SHELf) randomized controlled trial.

    Science.gov (United States)

    Olstad, Dana Lee; Ball, Kylie; Abbott, Gavin; McNaughton, Sarah A; Le, Ha N D; Ni Mhurchu, Cliona; Pollard, Christina; Crawford, David A

    2016-02-24

    Supermarket Healthy Eating for Life (SHELf) was a randomized controlled trial that operationalized a socioecological approach to population-level dietary behaviour change in a real-world supermarket setting. SHELf tested the impact of individual (skill-building), environmental (20% price reductions), and combined (skill-building + 20% price reductions) interventions on women's purchasing and consumption of fruits, vegetables, low-calorie carbonated beverages and water. This process evaluation investigated the reach, effectiveness, implementation, and maintenance of the SHELf interventions. RE-AIM provided a conceptual framework to examine the processes underlying the impact of the interventions using data from participant surveys and objective sales data collected at baseline, post-intervention (3 months) and 6 months post-intervention. Fisher's exact, χ² and t-tests assessed differences in quantitative survey responses among groups. Adjusted linear regression examined the impact of self-reported intervention dose on food purchasing and consumption outcomes. Thematic analysis identified key themes within qualitative survey responses. Reach of the SHELf interventions to disadvantaged groups, and beyond study participants themselves, was moderate. Just over one-third of intervention participants indicated that the interventions were effective in changing the way they bought, cooked or consumed food (p < 0.001 compared to control), with no differences among intervention groups. Improvements in purchasing and consumption outcomes were greatest among those who received a higher intervention dose. Most notably, participants who said they accessed price reductions on fruits and vegetables purchased (519 g/week) and consumed (0.5 servings/day) more vegetables. The majority of participants said they accessed (82%) and appreciated discounts on fruits and vegetables, while there was limited use (40%) and appreciation of discounts on low-calorie carbonated

  1. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
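
    Stated symbolically (our notation, following the abstract's verbal description): if \bar{x}_i denotes the average of a variable x taken with weighting function w_i, then

        \bar{x}_2 - \bar{x}_1
            = \frac{\operatorname{Cov}_1\!\left( x,\; w_2 / w_1 \right)}
                   {\overline{\left( w_2 / w_1 \right)}_1},

    where both the covariance and the mean of the weight ratio on the right-hand side are computed with weight w_1.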

  2. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter: it cannot extract specific harmonics, such as those caused by faults like gear eccentricity. Meanwhile, TDA always suffers from period-cutting error (PCE) to some extent. Several improved TDA methods have been proposed; however, they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal by adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency-domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculation efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of the gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
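
    For contrast with FTDA, conventional TDA (synchronous averaging over an integer number of known periods) takes only a few lines; the sketch below uses a synthetic signal with an assumed period and noise level.

        import numpy as np

        def time_domain_average(signal, period):
            """Classic TDA: average whole periods; noise shrinks as 1/sqrt(periods)."""
            n = (len(signal) // period) * period
            return signal[:n].reshape(-1, period).mean(axis=0)

        rng = np.random.default_rng(2)
        t = np.arange(10_000)
        clean = np.sin(2 * np.pi * t / 200) + 0.4 * np.sin(2 * np.pi * 3 * t / 200)
        noisy = clean + rng.normal(0.0, 1.0, t.size)
        avg = time_domain_average(noisy, period=200)  # one averaged period

    This comb-filter behaviour is exactly what breaks down when the assumed period does not match the true one, which is the PCE the paper addresses.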

  3. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...
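
    For reference, the TAMSD of a single trajectory x(t) observed over total time T is conventionally defined, for lag Δ, as

        \overline{\delta^2}(\Delta)
            = \frac{1}{T - \Delta} \int_0^{T - \Delta}
              \bigl[ x(t + \Delta) - x(t) \bigr]^2 \, \mathrm{d}t ,

    and its randomness across individual trajectories is precisely what the quoted distributional results describe.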

  4. An innovative scintillation process for correcting, cooling, and reducing the randomness of waveforms

    International Nuclear Information System (INIS)

    Shen, J.

    1991-01-01

    Research activities were concentrated on an innovative scintillation technique for high-energy collider detection. Heretofore, scintillation waveform data of high-energy physics events have been problematically random. This randomness represents a bottleneck of data flow for the next generation of detectors for proton colliders like the SSC or LHC. Prevailing problems to resolve were: additional time walk and jitter resulting from the random hitting positions of particles; increased walk and jitter caused by scintillation photon propagation dispersions; and quantum fluctuations of luminescence. However, these were manageable once the different aspects of randomness had been clarified in greater detail. For this purpose, the three were defined as pseudorandomness, quasi-randomness, and real randomness, respectively. A unique scintillation counter incorporating long scintillators with light guides, a drift chamber, and fast discriminators plus integrators was employed to resolve the first problem, correcting time walk and reducing the additional jitter by establishing an analytical waveform description V(t,z) for a measured z. The second problem was resolved by reducing jitter through compression of V(t,z) with a nonlinear medium, called cooling scintillation. A resolution of the third problem was proposed by orienting and polarizing the scintillating molecules using intense magnetic technology, called stabilizing the waveform.

  5. Random practice - one of the factors of the motor learning process

    Directory of Open Access Journals (Sweden)

    Petr Valach

    2012-01-01

    Full Text Available BACKGROUND: An important concept in the acquisition of motor skills is random practice (contextual interference, CI). The explanation of the contextual interference effect is that memory has to work more intensively, and therefore random practice yields better retention of motor skills than blocked practice. Only active recall of a motor skill gives it practical value for appropriate use in the future. OBJECTIVE: The aim of this research was to determine the difference in how motor skills in sport gymnastics are acquired and retained using two different teaching methods, blocked and random practice. METHODS: Blocked and random practice of three selected gymnastics tasks were applied in two groups of physical education students (blocked practice: group BP; random practice: group RP) during two months, in one session a week (80 trials in total). At the end of the experiment and 6 months after (retention tests), the groups were tested on the selected gymnastics skills. RESULTS: No significant differences in the level of gymnastics skills were found between the BP and RP groups at the end of the experiment. However, the retention tests showed a significantly higher level of gymnastics skills in the RP group in comparison with the BP group. CONCLUSION: The results confirmed that retention of gymnastics skills taught using random practice was significantly higher than with blocked practice.

  6. Processing speed and working memory training in multiple sclerosis: a double-blind randomized controlled pilot study.

    Science.gov (United States)

    Hancock, Laura M; Bruce, Jared M; Bruce, Amanda S; Lynch, Sharon G

    2015-01-01

    Between 40% and 65% of multiple sclerosis patients experience cognitive deficits, with processing speed and working memory most commonly affected. This pilot study investigated the effect of computerized cognitive training focused on improving processing speed and working memory. Participants were randomized into either an active or a sham training group and engaged in six weeks of training. The active training group improved on a measure of processing speed and attention following cognitive training, and data trended toward significance on measures of other domains. Results provide preliminary evidence that cognitive training with multiple sclerosis patients may produce moderate improvement in select areas of cognitive functioning.

  7. Randomized benchmarking of single- and multi-qubit control in liquid-state NMR quantum information processing

    International Nuclear Information System (INIS)

    Ryan, C A; Laforest, M; Laflamme, R

    2009-01-01

    Being able to quantify the level of coherent control in a proposed device implementing a quantum information processor (QIP) is an important task for both comparing different devices and assessing a device's prospects with regards to achieving fault-tolerant quantum control. We implement in a liquid-state nuclear magnetic resonance QIP the randomized benchmarking protocol presented by Knill et al (2008 Phys. Rev. A 77 012307). We report an error per randomized π/2 pulse of 1.3 ± 0.1 × 10⁻⁴ with a single-qubit QIP and show an experimentally relevant error model where the randomized benchmarking gives a signature fidelity decay which is not possible to interpret as a single error per gate. We explore and experimentally investigate multi-qubit extensions of this protocol and report an average error rate for one- and two-qubit gates of 4.7 ± 0.3 × 10⁻³ for a three-qubit QIP. We estimate that these error rates are still not decoherence limited and thus can be improved with modifications to the control hardware and software.
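
    The quoted errors per gate are conventionally extracted from the benchmarking decay F(m) = A p^m + B, with r = (1 - p)(d - 1)/d; a minimal fitting sketch with made-up fidelities (not the paper's data):

        import numpy as np
        from scipy.optimize import curve_fit

        def rb_decay(m, A, B, p):
            return A * p**m + B          # randomized-benchmarking fidelity decay

        m = np.array([2, 4, 8, 16, 32, 64, 128])                   # sequence lengths
        F = np.array([0.99, 0.985, 0.97, 0.95, 0.91, 0.84, 0.72])  # illustrative

        (A, B, p), _ = curve_fit(rb_decay, m, F, p0=[0.5, 0.5, 0.99])
        d = 2                            # Hilbert-space dimension for one qubit
        r = (1 - p) * (d - 1) / d        # average error per randomized gate
        print(f"p = {p:.4f}, error per gate r = {r:.2e}")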

  8. Process convergence of self-normalized sums of i.i.d. random ...

    Indian Academy of Sciences (India)

    The study of the asymptotics of self-normalized sums is also interesting. Logan ... if the constituent random variables are from the domain of attraction of a normal distribution ... index of stability α which equals 2 (for definition, see §2).

  9. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  10. Parameters, test criteria and fault assessment in random sampling of waste barrels from non-qualified processes

    International Nuclear Information System (INIS)

    Martens, B.R.

    1989-01-01

    In the context of random sampling tests, parameters are checked on the waste barrels and criteria are given on which these tests are based. Also, it is shown how faulty data on the properties of the waste or faulty waste barrels should be treated. To decide the extent of testing, the properties of the waste relevant to final storage are determined based on the conditioning process used. (DG) [de]

  11. Choosing between Higher Moment Maximum Entropy Models and Its Application to Homogeneous Point Processes with Random Effects

    Directory of Open Access Journals (Sweden)

    Lotfi Khribi

    2017-12-01

    Full Text Available In the Bayesian framework, the usual choice of prior in the prediction of homogeneous Poisson processes with random effects is the gamma one. Here, we propose the use of higher order maximum entropy priors. Their advantage is illustrated in a simulation study and the choice of the best order is established by two goodness-of-fit criteria: Kullback–Leibler divergence and a discrepancy measure. This procedure is illustrated on a warranty data set from the automobile industry.

  12. Longest interval between zeros of the tied-down random walk, the Brownian bridge and related renewal processes

    Science.gov (United States)

    Godrèche, Claude

    2017-05-01

    The probability distribution of the longest interval between two zeros of a simple random walk starting and ending at the origin, and of its continuum limit, the Brownian bridge, was analysed in the past by Rosén and Wendel, then extended by the latter to stable processes. We recover and extend these results using simple concepts of renewal theory, which allows to revisit past and recent works of the physics literature.

  13. Longest interval between zeros of the tied-down random walk, the Brownian bridge and related renewal processes

    International Nuclear Information System (INIS)

    Godrèche, Claude

    2017-01-01

    The probability distribution of the longest interval between two zeros of a simple random walk starting and ending at the origin, and of its continuum limit, the Brownian bridge, was analysed in the past by Rosén and Wendel, then extended by the latter to stable processes. We recover and extend these results using simple concepts of renewal theory, which allows to revisit past and recent works of the physics literature. (paper)

  14. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Full Text Available Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging often is superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of the one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly from the fact that in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found on which kind of averaging is superior, and instead of suggesting simple recipes we cannot do much more than create awareness of the traps related to averaging mixing ratios obtained from logarithmic retrievals.
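
    The underlying bias is easy to reproduce: for lognormally distributed abundances, the back-transformed mean of the logarithms (a geometric mean) sits systematically below the linear mean. A toy sketch that ignores retrieval noise and a priori effects:

        import numpy as np

        rng = np.random.default_rng(3)
        # Lognormal "abundances" with large natural variability (sigma of the log).
        x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

        linear_mean = x.mean()                  # average of the abundances
        log_mean = np.exp(np.log(x).mean())     # back-transformed log average

        print(f"linear averaging:      {linear_mean:.3f}")  # ~ exp(0.5) = 1.65
        print(f"logarithmic averaging: {log_mean:.3f}")     # ~ 1.00, biased low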

  15. Random-walk simulation of diffusion-controlled processes among static traps

    International Nuclear Information System (INIS)

    Lee, S.B.; Kim, I.C.; Miller, C.A.; Torquato, S.; Department of Mechanical and Aerospace Engineering and Department of Chemical Engineering, North Carolina State University, Raleigh, North Carolina 27695-7910)

    1989-01-01

    We present computer-simulation results for the trapping rate (rate constant) k associated with diffusion-controlled reactions among identical, static spherical traps distributed with an arbitrary degree of impenetrability using a Pearson random-walk algorithm. We specifically consider the penetrable-concentric-shell model in which each trap of diameter σ is composed of a mutually impenetrable core of diameter λσ, encompassed by a perfectly penetrable shell of thickness (1-λ)σ/2: λ=0 corresponding to randomly centered or ''fully penetrable'' traps and λ=1 corresponding to totally impenetrable traps. Trapping rates are calculated accurately from the random-walk algorithm at the extreme limits of λ (λ=0 and 1) and at an intermediate value (λ=0.8), for a wide range of trap densities. Our simulation procedure has a relatively fast execution time. It is found that k increases with increasing impenetrability at fixed trap concentration. These ''exact'' data are compared with previous theories for the trapping rate. Although a good approximate theory exists for the fully-penetrable-trap case, there are no currently available theories that can provide good estimates of the trapping rate for a moderate to high density of traps with nonzero hard cores (λ>0)
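
    A stripped-down two-dimensional analogue of such a simulation (fully penetrable traps only, i.e. the λ = 0 limit; box size, trap density and step length are arbitrary assumptions) conveys the idea:

        import numpy as np

        rng = np.random.default_rng(4)

        L, ntraps, sigma, step = 20.0, 60, 1.0, 0.1   # periodic box, trap diameter
        traps = rng.uniform(0.0, L, (ntraps, 2))      # randomly centred traps

        def steps_until_trapped(max_steps=100_000):
            pos = rng.uniform(0.0, L, 2)
            for n in range(max_steps):
                theta = rng.uniform(0.0, 2.0 * np.pi)  # Pearson walk: isotropic step
                pos = (pos + step * np.array([np.cos(theta), np.sin(theta)])) % L
                d = traps - pos
                d -= L * np.round(d / L)               # minimum-image convention
                if (np.einsum('ij,ij->i', d, d) < (sigma / 2.0) ** 2).any():
                    return n
            return max_steps

        times = [steps_until_trapped() for _ in range(100)]
        print("mean steps to trapping:", np.mean(times))  # inversely related to k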

  16. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  17. Levy flights and random searches

    Energy Technology Data Exchange (ETDEWEB)

    Raposo, E P [Laboratorio de Fisica Teorica e Computacional, Departamento de Fisica, Universidade Federal de Pernambuco, Recife-PE, 50670-901 (Brazil); Buldyrev, S V [Department of Physics, Yeshiva University, New York, 10033 (United States); Da Luz, M G E [Departamento de Fisica, Universidade Federal do Parana, Curitiba-PR, 81531-990 (Brazil); Viswanathan, G M [Instituto de Fisica, Universidade Federal de Alagoas, Maceio-AL, 57072-970 (Brazil); Stanley, H E [Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215 (United States)

    2009-10-30

    In this work we discuss some recent contributions to the random search problem. Our analysis includes superdiffusive Levy processes and correlated random walks in several regimes of target site density, mobility and revisitability. We present results in the context of mean-field-like and closed-form average calculations, as well as numerical simulations. We then consider random searches performed in regular lattices and lattices with defects, and we discuss a necessary criterion for distinguishing true superdiffusion from correlated random walk processes. We invoke energy considerations in relation to critical survival states on the edge of extinction, and we analyze the emergence of Levy behavior in deterministic search walks. Finally, we comment on the random search problem in the context of biological foraging.
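
    The power-law step lengths behind such search models are commonly generated by inverse-transform sampling of p(l) ∝ l^(-μ) for l ≥ l0; the parameter values below are illustrative.

        import numpy as np

        def levy_steps(n, mu, ell0=1.0, rng=None):
            # Inverse transform for p(l) ~ l**(-mu) on [ell0, inf); requires mu > 1.
            # Searches are superdiffusive for 1 < mu < 3.
            rng = rng or np.random.default_rng()
            u = rng.uniform(size=n)
            return ell0 * u ** (-1.0 / (mu - 1.0))

        steps = levy_steps(10_000, mu=2.0)      # mu near 2 is often reported as optimal
        print(steps.mean(), np.median(steps))   # heavy tail: mean far above median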

  18. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  19. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  20. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated through simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.
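
    The flavour of such artifact-suppression iterations can be conveyed by a CLEAN-style loop: transform, pick the strongest peak, subtract its mask-sampled contribution, repeat. This is a generic reconstruction sketch, not the paper's algorithm, which additionally relies on statistical peak recognition.

        import numpy as np

        def clean_sparse_ft(data, mask, n_iter=100, gain=0.5):
            """data: signal on the full grid, zero where not sampled; mask: 0/1."""
            n = len(data)
            residual = data.astype(complex).copy()
            clean = np.zeros(n, dtype=complex)
            t = np.arange(n)
            for _ in range(n_iter):
                spec = np.fft.fft(residual)
                k = np.argmax(np.abs(spec))
                a = gain * spec[k] / mask.sum()  # amplitude of the picked component
                clean[k] += a * n                # what a fully sampled FFT would show
                residual -= a * np.exp(2j * np.pi * k * t / n) * mask
            return clean + np.fft.fft(residual)  # cleaned peaks + reduced artifacts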

  1. Direct observation of asperity deformation of specimens with random rough surfaces in upsetting and indentation processes

    DEFF Research Database (Denmark)

    Azushima, A.; Kuba, S.; Tani, S.

    2006-01-01

    The trapping behavior of liquid lubricant and contact behavior of asperities at the workpiece-tool interface during upsetting and indentation are observed directly using a compression subpress which consists of a transparent die made of sapphire, a microscope with a CCD camera and a video system....... The experiments are carried out without lubricant and with lubricant. Specimens used are commercially pure A1100 aluminum with a random rough surface. From these observations, the change in the fraction of real contact area is measured by an image processor. The real contact area ratios in upsetting experiments...

  2. Direct Observation of Asperity Deformation of Specimen with Random Rough Surface in Upsetting Process

    DEFF Research Database (Denmark)

    Azushima, A.; Kuba, S.; Tani, S.

    2004-01-01

    The trapping behavior of liquid lubricant and contact behavior of asperities at the workpiece-tool interface during upsetting and indentation are observed directly using a compression subpress which consists of a transparent die made of sapphire, a microscope with a CCD camera and a video system....... The experiments are carried out without lubricant and with lubricant. Specimens used are commercially pure A1100 Aluminum with a random rough surface. From this observation, the change in the fraction of real contact area is measured by an image processor. The real contact area ratios in upsetting experiment...

  3. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
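
    As a baseline for comparison, standard randomized gossip (the scheme the authors improve upon) takes only a few lines; the ring topology and sensor readings below are made up.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100
        x = rng.normal(50.0, 10.0, n)        # initial sensor readings
        target = x.mean()

        for _ in range(20_000):
            i = rng.integers(n)
            j = (i + 1) % n                  # a neighbour on the ring
            x[i] = x[j] = (x[i] + x[j]) / 2  # pairwise averaging preserves the sum
        print(abs(x - target).max())         # worst-case deviation from the mean

    Pairwise averaging conserves the sum, so every node contracts toward the true average; on a ring this mixes slowly, which is exactly the inefficiency that geographic routing is meant to overcome.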

  4. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  5. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  6. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.

  7. The impact of randomness on the distribution of wealth: Some economic aspects of the Wright-Fisher diffusion process

    Science.gov (United States)

    Bouleau, Nicolas; Chorro, Christophe

    2017-08-01

    In this paper we consider some elementary and fair zero-sum games of chance in order to study the impact of random effects on the wealth distribution of N interacting players. Even if an exhaustive analytical study of such games between many players may be tricky, numerical experiments highlight interesting asymptotic properties. In particular, we emphasize that randomness plays a key role in concentrating wealth in the extreme, in the hands of a single player. From a mathematical perspective, we adopt diffusion limits for small, high-frequency transactions that are otherwise extensively used in population genetics. Finally, the impact of small tax rates on the preceding dynamics is discussed for several regulation mechanisms. We show that taxation of income is not sufficient to overcome this extreme concentration process, in contrast to the uniform taxation of capital, which stabilizes the economy and prevents agents from being ruined.
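
    The concentration effect is simple to reproduce in a toy version of such games; the stake rule and all parameters below are arbitrary assumptions, not the authors' exact protocol.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100
        w = np.ones(n)                        # equal initial wealth

        for _ in range(200_000):
            i, j = rng.choice(n, 2, replace=False)
            stake = 0.1 * min(w[i], w[j])     # fair, zero-sum coin-flip bet
            if rng.random() < 0.5:
                w[i] += stake; w[j] -= stake
            else:
                w[i] -= stake; w[j] += stake

        w.sort()
        print("richest player's share:", w[-1] / w.sum())  # tends toward 1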

  8. Scaling in Rate-Changeable Birth and Death Processes with Random Removals

    International Nuclear Information System (INIS)

    Ke Jianhong; Lin Zhenquan; Chen Xiaoshuang

    2009-01-01

    We propose a monomer birth-death model with random removals, in which an aggregate of size k can produce a new monomer at a time-dependent rate I(t)k or lose one monomer at a rate J(t)k, and with a probability P(t) an aggregate of any size is randomly removed. We then analytically investigate the kinetic evolution of the model by means of the rate equation. The results show that the scaling behavior of the aggregate size distribution depends crucially on the net birth rate I(t) - J(t) as well as the birth rate I(t). The aggregate size distribution can approach a standard or modified scaling form in some cases, but it may take a scale-free form in other cases. Moreover, the species can survive finally only if either I(t) - J(t) ≥ P(t) or [J(t) + P(t) - I(t)]t ≅ 0 at t >> 1; otherwise, it will become extinct.

  9. Order acceptance in food processing systems with random raw material requirements

    NARCIS (Netherlands)

    Kilic, Onur A.; van Donk, Dirk Pieter; Wijngaard, Jacob; Tarim, S. Armagan

    This study considers a food production system that processes a single perishable raw material into several products having stochastic demands. In order to process an order, the amount of raw material delivery from storage needs to meet the raw material requirement of the order. However, the amount

  10. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, to detect a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both distributions of level widths and positions is discussed extensively with an example of 168Er data. 19 figures, 2 tables

  11. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).
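
    The recursion behind exponential averaging is compact enough to sketch directly; the following Python fragment (ours, with illustrative segment count and time constant) applies S ← (1 − α)S + αI_k to the periodograms of successive segments of a white-noise record:

    ```python
    import numpy as np

    def psd_exponential_average(x, nseg, alpha):
        """Exponentially average the periodograms of successive segments of x:
        S <- (1 - alpha)*S + alpha*I_k, with alpha the inverse time constant."""
        seglen = len(x) // nseg
        S = None
        for k in range(nseg):
            seg = x[k * seglen:(k + 1) * seglen]
            I_k = np.abs(np.fft.rfft(seg))**2 / seglen   # raw periodogram
            S = I_k if S is None else (1 - alpha) * S + alpha * I_k
        return S

    rng = np.random.default_rng(1)
    x = rng.normal(size=64 * 256)                 # white noise has a flat PSD
    S = psd_exponential_average(x, nseg=64, alpha=0.1)
    # the estimate scatters around the true flat level; its variance shrinks
    # roughly by alpha/(2 - alpha) relative to a single periodogram
    print("mean:", S.mean(), "relative std:", S.std() / S.mean())
    ```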

  12. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)

  13. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  14. [Working memory and executive control: inhibitory processes in updating and random generation tasks].

    Science.gov (United States)

    Macizo, Pedro; Bajo, Teresa; Soriano, Maria Felipa

    2006-02-01

    Working Memory (WM) span predicts subjects' performance in executive control tasks and, in addition, it has been related to the capacity to inhibit irrelevant information. In this paper we investigate the role of WM span in two executive tasks, focusing our attention on the inhibitory components of both tasks. High and low span participants recalled target words while rejecting irrelevant items at the same time (Experiment 1), and they generated random numbers (Experiment 2). Results showed a clear relation between WM span and performance in both tasks. In addition, analyses of intrusion errors (Experiment 1) and stereotyped responses (Experiment 2) indicated that high span individuals were able to efficiently use the inhibitory component implied in both tasks. The pattern of data provides support to the relation between WM span and executive control tasks through an inhibitory mechanism.

  15. β-decay rates of r-process nuclei in the relativistic quasiparticle random phase approximation

    International Nuclear Information System (INIS)

    Niksic, T.; Marketin, T.; Vretenar, D.; Paar, N.; Ring, P.

    2004-01-01

    The fully consistent relativistic proton-neutron quasiparticle random phase approximation (PN-RQRPA) is employed in the calculation of β-decay half-lives of neutron-rich nuclei in the N∼50 and N∼82 regions. A new density-dependent effective interaction, with an enhanced value of the nucleon effective mass, is used in relativistic Hartree-Bogolyubov calculation of nuclear ground states and in the particle-hole channel of the PN-RQRPA. The finite range Gogny D1S interaction is employed in the T=1 pairing channel, and the model also includes a proton-neutron particle-particle interaction. The theoretical half-lives reproduce the experimental data for the Fe, Zn, Cd, and Te isotopic chains, but overestimate the lifetimes of Ni isotopes and predict a stable 132Sn. (orig.)

  16. β-decay rates of r-process nuclei in the relativistic quasiparticle random phase approximation

    International Nuclear Information System (INIS)

    Niksic, T.; Marketin, T.; Vretenar, D.; Paar, N.; Ring, P.

    2005-01-01

    The fully consistent relativistic proton-neutron quasiparticle random phase approximation (PN-RQRPA) is employed in the calculation of β-decay half-lives of neutron-rich nuclei in the N≅50 and N≅82 regions. A new density-dependent effective interaction, with an enhanced value of the nucleon effective mass, is used in relativistic Hartree-Bogoliubov calculation of nuclear ground states and in the particle-hole channel of the PN-RQRPA. The finite range Gogny D1S interaction is employed in the T=1 pairing channel, and the model also includes a proton-neutron particle-particle interaction. The theoretical half-lives reproduce the experimental data for the Fe, Zn, Cd, and Te isotopic chains but overestimate the lifetimes of Ni isotopes and predict a stable 132Sn

  17. β-decay rates of r-process nuclei in the relativistic quasiparticle random phase approximation

    Energy Technology Data Exchange (ETDEWEB)

    Niksic, T.; Marketin, T.; Vretenar, D. [Zagreb Univ. (Croatia). Faculty of Science, Physics Dept.; Paar, N. [Technische Univ. Darmstadt (Germany). Inst. fuer Kernphysik; Ring, P. [Technische Univ. Muenchen, Garching (Germany). Physik-Department

    2004-12-08

    The fully consistent relativistic proton-neutron quasiparticle random phase approximation (PN-RQRPA) is employed in the calculation of β-decay half-lives of neutron-rich nuclei in the N∼50 and N∼82 regions. A new density-dependent effective interaction, with an enhanced value of the nucleon effective mass, is used in relativistic Hartree-Bogolyubov calculation of nuclear ground states and in the particle-hole channel of the PN-RQRPA. The finite range Gogny D1S interaction is employed in the T=1 pairing channel, and the model also includes a proton-neutron particle-particle interaction. The theoretical half-lives reproduce the experimental data for the Fe, Zn, Cd, and Te isotopic chains, but overestimate the lifetimes of Ni isotopes and predict a stable 132Sn. (orig.)

  18. Solution-processed flexible NiO resistive random access memory device

    Science.gov (United States)

    Kim, Soo-Jung; Lee, Heon; Hong, Sung-Hoon

    2018-04-01

    Non-volatile memories (NVMs) using nanocrystals (NCs) as active materials can be applied to soft electronic devices requiring a low-temperature process because NCs do not require a heat treatment process for crystallization. In addition, memory devices can be implemented simply with a patterning technique based on a solution process. In this study, a flexible NiO ReRAM device was fabricated using a simple NC patterning method that controls the capillary force and dewetting of a NiO NC solution at low temperature. The switching behavior of a NiO NC based memory was clearly observed by conductive atomic force microscopy (c-AFM).

  19. Simultaneous Range-Velocity Processing and SNR Analysis of AFIT’s Random Noise Radar

    Science.gov (United States)

    2012-03-22

    reducing the overall processing time. Two computers, equipped with NVIDIA® GPUs, were used to process the collected data. The specifications for each... gather the results back to the CPU. Another company, AccelerEyes®, has developed a product called Jacket® that claims to be better than the parallel... Machine specifications: 4 and 8 processing cores, processor speeds of 3.33 GHz and 3.07 GHz, 48 GB installed memory each, and NVIDIA Tesla 1060 and Tesla C2070 GPUs.

  20. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    Science.gov (United States)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = Σ_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independent, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ∼ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in two-point statistics such as ⟨u_z^4(x)⟩^{1/2}, ⟨u_z^6(x)⟩^{1/3}, etc. can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found from the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum-transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the
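
    The additive structure of the HRAP is easy to probe numerically; the sketch below (ours; Gaussian additives are an illustrative choice, not the paper's) sums N_z ≈ ln(δ/z) i.i.d. additives and shows that both ⟨u²⟩ and ⟨u⁴⟩^{1/2} vary linearly with ln(δ/z), which is the content of the logarithmic laws:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    delta = 1.0                                   # boundary layer thickness
    for z in (0.01, 0.05, 0.2):                   # wall-normal distances
        n_z = max(1, round(np.log(delta / z)))    # number of attached-eddy levels
        # u_z^+ as a sum of n_z i.i.d. additives (standard Gaussians here)
        u = rng.normal(size=(100_000, n_z)).sum(axis=1)
        print(f"z={z:5.2f}  n_z={n_z}  <u^2>={np.mean(u**2):5.2f}  "
              f"<u^4>^(1/2)={np.mean(u**4)**0.5:5.2f}")
    ```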

  1. A teachable moment communication process for smoking cessation talk: description of a group randomized clinician-focused intervention

    Directory of Open Access Journals (Sweden)

    Flocke Susan A

    2012-05-01

    Abstract Background Effective clinician-patient communication about health behavior change is one of the most important and most overlooked strategies to promote health and prevent disease. Existing guidelines for specific health behavior counseling have been created and promulgated, but not successfully adopted in primary care practice. Building on work focused on creating effective clinician strategies for prompting health behavior change in the primary care setting, we developed an intervention intended to enhance clinician communication skills to create and act on teachable moments for smoking cessation. In this manuscript, we describe the development and implementation of the Teachable Moment Communication Process (TMCP) intervention and the baseline characteristics of a group randomized trial designed to evaluate its effectiveness. Methods/Design This group randomized trial includes thirty-one community-based primary care clinicians practicing in Northeast Ohio and 840 of their adult patients. Clinicians were randomly assigned to receive either the Teachable Moment Communication Process (TMCP) intervention for smoking cessation, or the delayed intervention. The TMCP intervention consisted of two 3-hour educational training sessions including didactic presentation, skill demonstration through video examples, skills practices with standardized patients, and feedback from peers and the trainers. For each clinician enrolled, 12 patients were recruited for two time points. Pre- and post-intervention data from the clinicians, patients and audio-recorded clinician‒patient interactions were collected. At baseline, the two groups of clinicians and their patients were similar with regard to all demographic and practice characteristics examined. Both physician and patient recruitment goals were met, and retention was 96% and 94% respectively. Discussion Findings support the feasibility of training clinicians to use the Teachable Moments

  2. High average power supercontinuum sources

    Indian Academy of Sciences (India)

    The physical mechanisms and basic experimental techniques for the creation of high average spectral power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium.

  3. Rapid Processing of Net-Shape Thermoplastic Planar-Random Composite Preforms

    Science.gov (United States)

    Jespersen, S. T.; Baudry, F.; Schmäh, D.; Wakeman, M. D.; Michaud, V.; Blanchard, P.; Norris, R. E.; Månson, J.-A. E.

    2009-02-01

    A novel thermoplastic composite preforming and moulding process is investigated to target cost issues in textile composite processing associated with trim waste, and the limited mechanical properties of current bulk flow-moulding composites. The thermoplastic programmable powdered preforming process (TP-P4) uses commingled glass and polypropylene yarns, which are cut to length before air assisted deposition onto a vacuum screen, enabling local preform areal weight tailoring. The as-placed fibres are heat-set for improved handling before an optional preconsolidation stage. The preforms are then preheated and press formed to obtain the final part. The process stages are examined to optimize part quality and throughput versus processing parameters. A viable processing route is proposed with typical cycle times below 40 s (for a plate 0.5 × 0.5 m2, weighing 2 kg), enabling high production capacity from one line. The mechanical performance is shown to surpass that of 40 wt.% GMT and has properties equivalent to those of 40 wt.% GMTex at both 20°C and 80°C.

  4. Process and effects of a community intervention on malaria in rural Burkina Faso: randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Gustafsson Lars

    2008-03-01

    Abstract Background In the rural areas of sub-Saharan Africa, the majority of young children affected by malaria have no access to formal health services. Home treatment through mothers of febrile children, supported by mother groups and local health workers, has the potential to reduce malaria morbidity and mortality. Methods A cluster-randomized controlled effectiveness trial was implemented from 2002–2004 in a malaria endemic area of rural Burkina Faso. Six and seven villages were randomly assigned to the intervention and control arms respectively. Febrile children from intervention villages were treated with chloroquine (CQ) by their mothers, supported by local women group leaders. CQ was regularly supplied through a revolving fund from local health centres. The trial was evaluated through two cross-sectional surveys at baseline and after two years of intervention. The primary endpoint of the study was the proportion of moderate to severe anaemia in children aged 6–59 months. For assessment of the development of drug efficacy over time, an in vivo CQ efficacy study was nested into the trial. The study is registered under http://www.controlled-trials.com (ISRCTN 34104704). Results The intervention was shown to be feasible under program conditions, and a total of 1076 children and 999 children were evaluated at the baseline and follow-up time points respectively. Self-reported CQ treatment of fever episodes at home as well as referrals to health centres increased over the study period. At follow-up, CQ was detected in the blood of high proportions of intervention and control children. Compared to baseline findings, the prevalence of anaemia (29% vs 16%) as well as of P. falciparum parasitaemia, fever and palpable spleens was lower at follow-up, but there were no differences between the intervention and control group. CQ efficacy decreased over the study period, but this was not associated with the intervention. Discussion The decreasing prevalence of malaria

  5. Likelihood updating of random process load and resistance parameters by monitoring

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2003-01-01

    Spectral parameters for a stationary Gaussian process are most often estimated by Fourier transformation of a realization followed by some smoothing procedure. This smoothing is often a weighted least square fitting of some prespecified parametric form of the spectrum. In this paper it is shown that maximum likelihood estimation is a rational alternative to an arbitrary weighting for least square fitting. The derived likelihood function gets singularities if the spectrum is prescribed with zero values at some frequencies. This is often the case for models of technically relevant processes. The likelihood function, even though it is of complicated mathematical form, allows an approximate Bayesian updating and control of the time development of the parameters. Some of these parameters can be structural parameters that by too much change reveal progressing damage or other malfunctioning. Thus current process...

  6. Impact of Cocoa Consumption on Inflammation Processes-A Critical Review of Randomized Controlled Trials.

    Science.gov (United States)

    Ellinger, Sabine; Stehle, Peter

    2016-05-26

    Cocoa flavanols have strong anti-inflammatory properties in vitro. If these also occur in vivo, cocoa consumption may contribute to the prevention or treatment of diseases mediated by chronic inflammation. This critical review judged the evidence for such effects occurring after cocoa consumption. A literature search in Medline was performed for randomized controlled trials (RCTs) that investigated the effects of cocoa consumption on inflammatory biomarkers. Thirty-three RCTs were included, comprising 9 bolus and 24 regular consumption studies. Acute cocoa consumption decreased adhesion molecules and 4-series leukotrienes in serum, nuclear factor κB activation in leukocytes, and the expression of CD62P and CD11b on monocytes and neutrophils. In healthy subjects and in patients with cardiovascular diseases, most regular consumption trials did not find any changes except for a decreased number of endothelial microparticles, but several cellular and humoral inflammation markers decreased in patients suffering from type 2 diabetes and impaired fasting glucose. Little evidence exists that consumption of cocoa-rich food may reduce inflammation, probably by lowering the activation of monocytes and neutrophils. The efficacy seems to depend on the extent of the basal inflammatory burden. Further well-designed RCTs with inflammation as the primary outcome are needed, focusing on specific markers of leukocyte activation and considering endothelial microparticles as marker of vascular inflammation.

  7. Voter dynamics on an adaptive network with finite average connectivity

    Science.gov (United States)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

    We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity (``degree'') of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  8. Aerobic Exercise Training in Post-Polio Syndrome: Process Evaluation of a Randomized Controlled Trial

    NARCIS (Netherlands)

    Voorn, Eric L.; Koopman, Fieke S.; Brehm, Merel A.; Beelen, Anita; de Haan, Arnold; Gerrits, Karin H. L.; Nollet, Frans

    2016-01-01

    To explore reasons for the lack of efficacy of a high intensity aerobic exercise program in post-polio syndrome (PPS) on cardiorespiratory fitness by evaluating adherence to the training program and effects on muscle function. A process evaluation using data from an RCT. Forty-four severely fatigued

  9. When good = better than average

    Directory of Open Access Journals (Sweden)

    Don A. Moore

    2007-10-01

    People report themselves to be above average on simple tasks and below average on difficult tasks. This paper proposes an explanation for this effect that is simpler than prior explanations. The new explanation is that people conflate relative with absolute evaluation, especially on subjective measures. The paper then presents a series of four studies that test this conflation explanation. These tests distinguish conflation from other explanations, such as differential weighting and selecting the wrong referent. The results suggest that conflation occurs at the response stage during which people attempt to disambiguate subjective response scales in order to choose an answer. This is because conflation has little effect on objective measures, which would be equally affected if the conflation occurred at encoding.

  10. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  11. Averaging Robertson-Walker cosmologies

    International Nuclear Information System (INIS)

    Brown, Iain A.; Robbers, Georg; Behrend, Juliane

    2009-01-01

    The cosmological backreaction arises when one directly averages the Einstein equations to recover an effective Robertson-Walker cosmology, rather than assuming a background a priori. While usually discussed in the context of dark energy, strictly speaking any cosmological model should be recovered from such a procedure. We apply the scalar spatial averaging formalism for the first time to linear Robertson-Walker universes containing matter, radiation and dark energy. The formalism employed is general and incorporates systems of multiple fluids with ease, allowing us to consider quantitatively the universe from deep radiation domination up to the present day in a natural, unified manner. Employing modified Boltzmann codes we evaluate numerically the discrepancies between the assumed and the averaged behaviour arising from the quadratic terms, finding the largest deviations for an Einstein-de Sitter universe, increasing rapidly with Hubble rate to a 0.01% effect for h = 0.701. For the ΛCDM concordance model, the backreaction is of the order of Ω_eff ≈ 4 × 10⁻⁶, with those for dark energy models being within a factor of two or three. The impacts at recombination are of the order of 10⁻⁸ and those in deep radiation domination asymptote to a constant value. While the effective equations of state of the backreactions in Einstein-de Sitter, concordance and quintessence models are generally dust-like, a backreaction with an equation of state w_eff < −1/3 can be found for strongly phantom models

  12. Does analgesia affect the diagnostic process in acute abdomen? a randomized clinical trial

    Directory of Open Access Journals (Sweden)

    Khashayar P.

    2008-03-01

    Background: About one-fourth of the patients admitted to the emergency department complain of acute abdominal pain. According to surgical records, most surgeons believe that pain relief for these patients may interfere with the clinical examinations and the final diagnoses. As a result, analgesics are withheld in patients with acute abdominal pain until the determination of a definite diagnosis and suitable management plan. The purpose of this study was to evaluate the effect of analgesics on the evaluation course and treatment in acute abdomen. Methods: Two hundred patients at a surgical emergency department with acute abdominal pain were enrolled in this prospective study and randomly divided into two groups at the time of admission. The case group consisted of 98 patients who received intravenous analgesia immediately after admission. The other 102 patients in the control group did not receive analgesia until a definite diagnosis was made. Diagnostic and therapeutic procedures were similar between the two groups. The primary and final diagnoses, and the time intervals between the admission and definite diagnosis, and that between admission and surgery were gathered and analyzed. Results: The mean time to definitive diagnosis was 1.7 and 2.04 hours in the case and control groups, respectively. There was no statistically significant relationship between analgesic use and gender, age, time to definite diagnosis, or accuracy of the diagnosis. In fact, the time required to achieve a definite diagnosis and the time between admission and surgery were less in the group that had received analgesics. Conclusions: In spite of the fact that analgesics remove the very symptoms that bring patients to the emergency room, appropriate use of analgesics does not reduce diagnostic efficiency for patients with acute abdominal pain.

  13. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    International Nuclear Information System (INIS)

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates the effect of incorrectly modeling the failure process of minimally repaired systems that operate under random environmental conditions on the costs of a periodic replacement maintenance. The motivation for this paper is a recently published paper in which a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the first failure time. This produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions of the expected cost and of the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences of the incorrect modeling of the failure process can be.

  14. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    Science.gov (United States)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.

  15. Three-Dimensional Random Voronoi Tessellations: From Cubic Crystal Lattices to Poisson Point Processes

    Science.gov (United States)

    Lucarini, Valerio

    2009-01-01

    We perturb the simple cubic (SC), body-centered cubic (BCC), and face-centered cubic (FCC) structures with a spatial Gaussian noise whose adimensional strength is controlled by the parameter α and analyze the statistical properties of the cells of the resulting Voronoi tessellations using an ensemble approach. We concentrate on topological properties of the cells, such as the number of faces, and on metric properties of the cells, such as the area, volume and the isoperimetric quotient. The topological properties of the Voronoi tessellations of the SC and FCC crystals are unstable with respect to the introduction of noise, because the corresponding polyhedra are geometrically degenerate, whereas the tessellation of the BCC crystal is topologically stable even against noise of small but finite intensity. Whereas the average volume of the cells is the intensity parameter of the system and does not depend on the noise, the average area of the cells has a rather interesting behavior with respect to noise intensity. For weak noise, the mean area of the Voronoi tessellations corresponding to perturbed BCC and FCC structures increases quadratically with the noise intensity. In the case of perturbed SC crystals, there is an optimal amount of noise that minimizes the mean area of the cells. Already for a moderate amount of noise (α > 0.5), the statistical properties of the three perturbed tessellations are indistinguishable, and for intense noise (α > 2), results converge to those of the Poisson-Voronoi tessellation. Notably, 2-parameter gamma distributions constitute an excellent model for the empirical pdf of all considered topological and metric properties. By analyzing jointly the statistical properties of the area and of the volume of the cells, we discover that also the cells' shape, measured by the isoperimetric quotient, fluctuates. The Voronoi tessellations of the BCC and of the FCC structures turn out to be local maxima for the isoperimetric quotient among space

  16. Process Convergence of Self-Normalized Sums of i.i.d. Random ...

    Indian Academy of Sciences (India)

    ... either of tightness or finite dimensional convergence to a non-degenerate limiting distribution does not hold. This work is an extension of the work by Csörgő et al., who showed Donsker's theorem for Y_{n,2}(·), i.e., for p = 2, holds iff α = 2, and identified the limiting process as a standard Brownian motion in sup norm.

  17. On the regularity of the extinction probability of a branching process in varying and random environments

    International Nuclear Information System (INIS)

    Alili, Smail; Rugh, Hans Henrik

    2008-01-01

    We consider a supercritical branching process in time-dependent environment ξ. We assume that the offspring distributions depend regularly (C^k or real-analytically) on real parameters λ. We show that the extinction probability q_λ(ξ), given the environment ξ, 'inherits' this regularity whenever the offspring distributions satisfy a condition of contraction-type. Our proof makes use of the Poincaré metric on the complex unit disc and a real-analytic implicit function theorem

  18. Statistical properties of a filtered Poisson process with additive random noise: distributions, correlations and moment estimation

    International Nuclear Information System (INIS)

    Theodorsen, A; Garcia, O E; Rypdal, M

    2017-01-01

    Filtered Poisson processes are often used as reference models for intermittent fluctuations in physical systems. Such a process is here extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The lowest order moments, probability density function, auto-correlation function and power spectral density are derived and used to identify and compare the effects of the two different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of model parameter estimation and to identify methods for distinguishing the noise types. It is shown that the probability density function and the three lowest order moments provide accurate estimations of the model parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of identifying the noise type. The number of times the signal crosses a prescribed threshold level in the positive direction also promises to be able to differentiate the noise type. (paper)
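
    A minimal simulation makes the construction concrete; the sketch below (ours; the parameter values and the discrete one-sided exponential filter are illustrative choices) generates a filtered Poisson process with exponentially distributed pulse amplitudes, adds a purely additive Gaussian noise term, and checks the lowest order moment against the shot-noise prediction ν τ ⟨A⟩:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    dt, n, nu, tau, noise_sd = 0.01, 100_000, 0.5, 1.0, 0.1

    # filtered Poisson process: Poisson arrivals with exponential amplitudes
    pulses = rng.exponential(1.0, n) * (rng.random(n) < nu * dt)
    signal = np.zeros(n)
    for i in range(1, n):
        # one-sided exponential filter with decay time tau, driven by the pulses
        signal[i] = signal[i - 1] * (1 - dt / tau) + pulses[i]
    signal += rng.normal(0.0, noise_sd, n)        # purely additive noise term

    print("sample mean:", signal.mean(), "theory nu*tau*<A> =", nu * tau * 1.0)
    ```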

  19. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.

  20. Brain training game boosts executive functions, working memory and processing speed in the young adults: a randomized controlled trial.

    Science.gov (United States)

    Nouchi, Rui; Taki, Yasuyuki; Takeuchi, Hikaru; Hashizume, Hiroshi; Nozawa, Takayuki; Kambara, Toshimune; Sekiguchi, Atsushi; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Nouchi, Haruka; Kawashima, Ryuta

    2013-01-01

    Do brain training games work? The beneficial effects of brain training games are expected to transfer to other cognitive functions. Yet in all honesty, beneficial transfer effects of the commercial brain training games in young adults have little scientific basis. Here we investigated the impact of the brain training game (Brain Age) on a wide range of cognitive functions in young adults. We conducted a double-blind (de facto masking) randomized controlled trial using a popular brain training game (Brain Age) and a popular puzzle game (Tetris). Thirty-two volunteers were recruited through an advertisement in the local newspaper and randomly assigned to either of two game groups (Brain Age, Tetris). Participants in both the Brain Age and the Tetris groups played their game for about 15 minutes per day, at least 5 days per week, for 4 weeks. Measures of the cognitive functions were conducted before and after training. Measures of the cognitive functions fell into eight categories (fluid intelligence, executive function, working memory, short-term memory, attention, processing speed, visual ability, and reading ability). Our results showed that the commercial brain training game improves executive functions, working memory, and processing speed in young adults. Moreover, the popular puzzle game can engender improvement in attention and visuo-spatial ability compared to playing the brain training game. The present study provided scientific evidence that the brain training game has beneficial effects on cognitive functions (executive functions, working memory and processing speed) in healthy young adults. Our results do not indicate that everyone should play brain training games. However, the commercial brain training game might be a simple and convenient means to improve some cognitive functions. We believe that our findings are highly relevant to applications in educational and clinical fields. UMIN Clinical Trial Registry 000005618.

  1. Brain training game boosts executive functions, working memory and processing speed in the young adults: a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Rui Nouchi

    BACKGROUND: Do brain training games work? The beneficial effects of brain training games are expected to transfer to other cognitive functions. Yet in all honesty, beneficial transfer effects of the commercial brain training games in young adults have little scientific basis. Here we investigated the impact of the brain training game (Brain Age) on a wide range of cognitive functions in young adults. METHODS: We conducted a double-blind (de facto masking) randomized controlled trial using a popular brain training game (Brain Age) and a popular puzzle game (Tetris). Thirty-two volunteers were recruited through an advertisement in the local newspaper and randomly assigned to either of two game groups (Brain Age, Tetris). Participants in both the Brain Age and the Tetris groups played their game for about 15 minutes per day, at least 5 days per week, for 4 weeks. Measures of the cognitive functions were conducted before and after training. Measures of the cognitive functions fell into eight categories (fluid intelligence, executive function, working memory, short-term memory, attention, processing speed, visual ability, and reading ability). RESULTS AND DISCUSSION: Our results showed that the commercial brain training game improves executive functions, working memory, and processing speed in young adults. Moreover, the popular puzzle game can engender improvement in attention and visuo-spatial ability compared to playing the brain training game. The present study provided scientific evidence that the brain training game has beneficial effects on cognitive functions (executive functions, working memory and processing speed) in healthy young adults. CONCLUSIONS: Our results do not indicate that everyone should play brain training games. However, the commercial brain training game might be a simple and convenient means to improve some cognitive functions. We believe that our findings are highly relevant to applications in educational and clinical fields

  2. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process (in a stochastic setting {X(t), t ≥ 0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.
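
    The equality discussed above is easy to visualize on a single sample path; the toy sketch below (ours; the path and function are arbitrary illustrative choices) compares the time average of f(X(t)) with the expectation of f under the path's empirical frequency distribution:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    x = rng.standard_normal(200_000).cumsum() % 10   # an arbitrary bounded path
    f = np.cos                                        # a measurable function

    time_avg = f(x).mean()                            # long-run time average
    hist, edges = np.histogram(x, bins=200, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    freq_avg = np.sum(f(mids) * hist * np.diff(edges))  # mean under frequency dist.
    print(time_avg, freq_avg)                           # agree up to binning error
    ```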

  3. Randomized random walk on a random walk

    International Nuclear Information System (INIS)

    Lee, P.A.

    1983-06-01

    This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)

  4. The random walk model of intrafraction movement

    International Nuclear Information System (INIS)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-01-01

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)
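
    The stylized facts mentioned above are easy to reproduce; a short sketch (ours; step size and counts are arbitrary illustrative values) simulates one three-dimensional random walk per fraction and checks that the displacement standard deviation grows with the square root of time:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    steps, fractions, sigma = 600, 2_000, 0.05    # illustrative values

    # one 3D random walk per fraction; displacement std should grow like sqrt(t)
    walks = np.cumsum(rng.normal(0.0, sigma, (fractions, steps, 3)), axis=1)
    for t in (50, 200, 600):
        print(f"t={t:3d}  std={walks[:, t - 1, 0].std():.3f}  "
              f"sigma*sqrt(t)={sigma * np.sqrt(t):.3f}")

    # fraction-average population density along one axis: pool over the fraction
    print("fraction-average std:", walks[..., 0].ravel().std())
    ```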

  5. The random walk model of intrafraction movement.

    Science.gov (United States)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-04-07

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model.

  6. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  7. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood-bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated from the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models
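
    The convergence of the average concentration to an exponential can be seen in a few lines; the sketch below (ours; the uniform retention factor is an illustrative choice) simulates a one-compartment model with random behavior and uses the identity E[c_n] = (E[ξ])ⁿ = e^(−λn):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, paths = 50, 100_000
    # random retention: a fraction xi of the material survives each time step
    xi = rng.uniform(0.8, 1.0, (paths, n))
    conc = np.cumprod(xi, axis=1)                 # c_n = xi_1 * xi_2 * ... * xi_n

    avg = conc.mean(axis=0)                       # average over realizations
    lam = -np.log(xi.mean())                      # E[c_n] = (E[xi])**n = exp(-lam*n)
    print(np.allclose(avg, np.exp(-lam * np.arange(1, n + 1)), rtol=0.02))  # True
    ```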

  8. Fuzzy control with random delays using invariant cones and its application to control of energy processes in microelectromechanical motion devices

    Energy Technology Data Exchange (ETDEWEB)

    Sinha, A.S.C. [Purdue Univ., Indianapolis, IN (United States). Dept. of Electrical Engineering; Lyshevski, S. [Rochester Inst. of Technology, NY (United States)

    2005-05-01

    In this paper, a class of microelectromechanical systems described by nonlinear differential equations with random delays is examined. Robust fuzzy controllers are designed to control the energy conversion processes with the ultimate objective to guarantee optimal achievable performance. The fuzzy rule base used consists of a collection of r fuzzy IF-THEN rules defined as a function of the conditional variable. The method of the theory of cones and Lyapunov functionals is used to design a class of local fuzzy control laws. A verifiably sufficient condition for stochastic stability of fuzzy stochastic microelectromechanical systems is given. As an example, we have considered the design of a fuzzy control law for an electrostatic micromotor. (author)

  9. Fuzzy control with random delays using invariant cones and its application to control of energy processes in microelectromechanical motion devices

    International Nuclear Information System (INIS)

    Sinha, A.S.C.; Lyshevski, S.

    2005-01-01

    In this paper, a class of microelectromechanical systems described by nonlinear differential equations with random delays is examined. Robust fuzzy controllers are designed to control the energy conversion processes with the ultimate objective to guarantee optimal achievable performance. The fuzzy rule base used consists of a collection of r fuzzy IF-THEN rules defined as a function of the conditional variable. The method of the theory of cones and Lyapunov functionals is used to design a class of local fuzzy control laws. A verifiably sufficient condition for stochastic stability of fuzzy stochastic microelectromechanical systems is given. As an example, we have considered the design of a fuzzy control law for an electrostatic micromotor

  10. Narrative exposure therapy for PTSD increases top-down processing of aversive stimuli - evidence from a randomized controlled treatment trial

    Directory of Open Access Journals (Sweden)

    Adenauer Hannah

    2011-12-01

    Abstract Background Little is known about the neurobiological foundations of psychotherapy for Posttraumatic Stress Disorder (PTSD). Prior studies have shown that PTSD is associated with altered processing of threatening and aversive stimuli. It remains unclear whether this functional abnormality can be changed by psychotherapy. This is the first randomized controlled treatment trial that examines whether narrative exposure therapy (NET) causes changes in affective stimulus processing in patients with chronic PTSD. Methods 34 refugees with PTSD were randomly assigned to a NET group or to a waitlist control (WLC) group. At pre-test and at four-months follow-up, the diagnostics included the assessment of clinical variables and measurements of neuromagnetic oscillatory brain activity (steady-state visual evoked fields, ssVEF) resulting from exposure to aversive pictures compared to neutral pictures. Results PTSD as well as depressive symptom severity scores declined in the NET group, whereas symptoms persisted in the WLC group. Only in the NET group did parietal and occipital activity towards threatening pictures increase significantly after therapy. Conclusions Our results indicate that NET causes an increase of activity associated with cortical top-down regulation of attention towards aversive pictures. The increase of attention allocation to potential threat cues might allow treated patients to re-appraise the actual danger of the current situation and, thereby, reduce PTSD symptoms. Registration of the clinical trial Number: NCT00563888 Name: "Change of Neural Network Indicators Through Narrative Treatment of PTSD in Torture Victims" URL: http://www.clinicaltrials.gov/ct2/show/NCT00563888

  11. Effectiveness of manual therapy versus surgery in pain processing due to carpal tunnel syndrome: A randomized clinical trial.

    Science.gov (United States)

    Fernández-de-Las-Peñas, C; Cleland, J; Palacios-Ceña, M; Fuensalida-Novo, S; Alonso-Blanco, C; Pareja, J A; Alburquerque-Sendín, F

    2017-08-01

    People with carpal tunnel syndrome (CTS) exhibit widespread pressure pain and thermal pain hypersensitivity as a manifestation of central sensitization. The aim of our study was to compare the effectiveness of manual therapy versus surgery for improving pain and nociceptive gain processing in people with CTS. The trial was conducted at a local regional hospital in Madrid, Spain from August 2014 to February 2015. In this randomized parallel-group, blinded, clinical trial, 100 women with CTS were randomly allocated to either a manual therapy group (n = 50), who received three sessions (once/week) of manual therapies including desensitization manoeuvres of the central nervous system, or a surgical intervention group (n = 50). Outcomes, including pressure pain thresholds (PPT), thermal pain thresholds (HPT or CPT) and pain intensity, were assessed at baseline and 3, 6, 9 and 12 months after the intervention by an assessor unaware of group assignment. Analysis was by intention to treat with mixed ANCOVAs adjusted for baseline scores. At 12 months, 95 women completed the follow-up. Patients receiving manual therapy exhibited higher increases in PPT over the carpal tunnel at 3, 6 and 9 months (all, p < 0.01) and a higher decrease of pain intensity at the 3 month follow-up (p < 0.001) than those receiving surgery. No significant differences were observed between groups for the remaining outcomes. Manual therapy and surgery have similar effects on decreasing widespread pressure pain sensitivity and pain intensity in women with CTS. Neither manual therapy nor surgery resulted in changes in thermal pain sensitivity. In sum, this study, which investigated changes in nociceptive gain processing after treatment of carpal tunnel syndrome, found that manual therapy and surgery had similar effects on decreasing widespread pressure pain sensitivity and pain intensity at medium- and long-term follow-ups. © 2017 European Pain Federation - EFIC®.

  12. Markov counting and reward processes for analysing the performance of a complex system subject to random inspections

    International Nuclear Information System (INIS)

    Ruiz-Castro, Juan Eloy

    2016-01-01

    In this paper, a discrete complex reliability system subject to internal failures and external shocks is modelled algorithmically. Two types of internal failure are considered: repairable and non-repairable. When a repairable failure occurs, the unit goes to corrective repair. In addition, the unit is subject to external shocks that may produce an aggravation of the internal degradation level, cumulative damage or extreme failure. When a damage threshold is reached, the unit must be removed. When a non-repairable failure occurs, the device is replaced by a new, identical one. The internal performance and the external damage are partitioned in performance levels. Random inspections are carried out. When an inspection takes place, the internal performance of the system and the damage caused by external shocks are observed and, if necessary, the unit is sent to preventive maintenance. If the inspection observes a minor state for the internal performance and/or external damage, then these states remain in memory when the unit goes to corrective or preventive maintenance. Transient and stationary analyses are performed. Markov counting and reward processes are developed in computational form to analyse the performance and profitability of the system with and without preventive maintenance. These aspects are implemented computationally with Matlab. - Highlights: • A multi-state device is modelled in an algorithmic and computational form. • The performance is partitioned in multi-states and degradation levels. • Several types of failures with repair times according to degradation levels. • Preventive maintenance as a response to random inspection is introduced. • The performance and profitability are analysed through Markov counting and reward processes.

  13. The average Indian female nose.

    Science.gov (United States)

    Patil, Surendra B; Kale, Satish M; Jaiswal, Sumeet; Khare, Nishant; Math, Mahantesh

    2011-12-01

    This study aimed to delineate the anthropometric measurements of the noses of young women of an Indian population and to compare them with the published ideals and average measurements for white women. This anthropometric survey included a volunteer sample of 100 young Indian women ages 18 to 35 years with Indian parents and no history of previous surgery or trauma to the nose. Standardized frontal, lateral, oblique, and basal photographs of the subjects' noses were taken, and 12 standard anthropometric measurements of the nose were determined. The results were compared with published standards for North American white women. In addition, nine nasal indices were calculated and compared with the standards for North American white women. The nose of Indian women differs significantly from the white nose. All the nasal measurements for the Indian women were found to be significantly different from those for North American white women. Seven of the nine nasal indices also differed significantly. Anthropometric analysis suggests differences between the Indian female nose and the North American white nose. Thus, a single aesthetic ideal is inadequate. Noses of Indian women are smaller and wider, with a less projected and rounded tip than the noses of white women. This study established the nasal anthropometric norms for nasal parameters, which will serve as a guide for cosmetic and reconstructive surgery in Indian women.

  14. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used, with dependence measured by Kendall’s tau. The results show that the Normal copula can be used for almost all shifts.
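
    The ARL computation itself can be sketched in a few lines. The Monte Carlo fragment below estimates the run length of a one-sided EWMA chart on exponential observations; for brevity it draws independent observations, whereas the paper couples consecutive observations through the four bivariate copulas, and the smoothing weight and control limit used here are illustrative values only.

        import numpy as np

        rng = np.random.default_rng(1)

        def ewma_run_length(lam=0.1, scale=1.0, h=1.35, max_n=100_000):
            """Run length of a one-sided EWMA chart on exponential data.
            lam: smoothing weight; h: upper control limit (illustrative);
            scale = 1.0 is the in-control exponential mean."""
            z = 1.0  # start the EWMA statistic at the in-control mean
            for n in range(1, max_n + 1):
                x = rng.exponential(scale=scale)
                z = lam * x + (1 - lam) * z
                if z > h:
                    return n
            return max_n

        # Average Run Length estimated over many simulated charts.
        arl = np.mean([ewma_run_length() for _ in range(2000)])
        print(f"estimated in-control ARL: {arl:.1f}")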

  15. Modeling reactive transport processes in fractured rock using the time domain random walk approach within a dual-porosity framework

    Science.gov (United States)

    Roubinet, D.; Russian, A.; Dentz, M.; Gouze, P.

    2017-12-01

    Characterizing and modeling hydrodynamic reactive transport in fractured rock are critical challenges for various research fields and applications including environmental remediation, geological storage, and energy production. To this end, we consider a recently developed time domain random walk (TDRW) approach, which is adapted to reproduce anomalous transport behaviors and capture heterogeneous structural and physical properties. This method is also very well suited to optimize numerical simulations by memory-shared massive parallelization and provide numerical results at various scales. So far, the TDRW approach has been applied for modeling advective-diffusive transport with mass transfer between mobile and immobile regions and simple (theoretical) reactions in heterogeneous porous media represented as single continuum domains. We extend this approach to dual-continuum representations considering a highly permeable fracture network embedded into a poorly permeable rock matrix with heterogeneous geochemical reactions occurring in both geological structures. The resulting numerical model enables us to extend the range of the modeled heterogeneity scales with an accurate representation of solute transport processes and no assumption on the Fickianity of these processes. The proposed model is compared to existing particle-based methods that are usually used to model reactive transport in fractured rocks assuming a homogeneous surrounding matrix, and is used to evaluate the impact of the matrix heterogeneity on the apparent reaction rates for different 2D and 3D simple-to-complex fracture network configurations.
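
    A stripped-down version of the TDRW idea fits in a few lines: particles make fixed spatial transitions while the clock advances by random transition times, with occasional exponentially distributed immobile (matrix) waiting times standing in for mobile-immobile mass transfer. All parameter values below are invented for illustration, and the sketch ignores the fracture-network geometry and the geochemical reactions treated in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def tdrw_arrival_times(n_particles=5000, n_steps=200, dx=1.0, v=1.0,
                               p_trap=0.1, trap_scale=5.0):
            """1-D time domain random walk: fixed-length spatial jumps, random
            transition times, and occasional trapping in an immobile matrix."""
            times = np.zeros(n_particles)
            for _ in range(n_steps):
                # mobile transition time, exponential with mean dx / v
                times += rng.exponential(dx / v, n_particles)
                # with probability p_trap, add an immobile (matrix) waiting time
                trapped = rng.random(n_particles) < p_trap
                times[trapped] += rng.exponential(trap_scale, trapped.sum())
            return times  # arrival times at x = n_steps * dx

        t = tdrw_arrival_times()
        print("mean and 95th-percentile arrival times:", t.mean(), np.percentile(t, 95))

    Replacing the exponential matrix times with a broad (for example, power-law) distribution is what produces the anomalous late-time tails that this class of methods is designed to capture.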

  16. Effect of baking process on postprandial metabolic consequences: randomized trials in normal and type 2 diabetic subjects.

    Science.gov (United States)

    Rizkalla, S W; Laromiguiere, M; Champ, M; Bruzzo, F; Boillot, J; Slama, G

    2007-02-01

    To determine the impact of the form, fibre content, baking and processing on the glycaemic, insulinaemic and lipidaemic responses to different French breads. First study: Nine healthy subjects were randomized to consume, in a crossover design, one of six kinds of French bread (each containing 50 g available carbohydrate): classic baguette, traditional baguette, loaf of wholemeal bread (WM-B), loaf of bread fermented with yeast or with leaven, a sandwich, and a glucose challenge as reference. The glycaemic index (GI) values ranged from 57 ± 9% (mean ± s.e.m.) for the traditional baguette to 85 ± 27% for the WM-B. No significant difference was found among the different tested breads. The insulinaemic index (II), however, of the traditional baguette and of the bread fermented with leaven were lower than those of the other breads (analysis of variance: P < 0.05). Some varieties of French bread (the traditional baguette) have a lower II, in healthy subjects, and a lower GI, in type 2 diabetic subjects, than the other varieties. These results might be due to differences in bread processing rather than fibre content. Supported by grants from the National French Milling Association.

  17. Feasibility of a randomized controlled trial to evaluate the impact of decision boxes on shared decision-making processes.

    Science.gov (United States)

    Giguere, Anik Mc; Labrecque, Michel; Légaré, France; Grad, Roland; Cauchon, Michel; Greenway, Matthew; Haynes, R Brian; Pluye, Pierre; Syed, Iqra; Banerjee, Debi; Carmichael, Pierre-Hugues; Martin, Mélanie

    2015-02-25

    Decision boxes (DBoxes) are two-page evidence summaries to prepare clinicians for shared decision making (SDM). We sought to assess the feasibility of a clustered randomized controlled trial (RCT) to evaluate their impact. A convenience sample of clinicians (nurses, physicians and residents) from six primary healthcare clinics received eight DBoxes and rated their interest in the topics and their satisfaction. After consultations, their patients rated their involvement in decision-making processes (SDM-Q-9 instrument). We measured clinic and clinician recruitment rates, questionnaire completion rates, and patient eligibility rates, and estimated the sample size needed for the RCT. Among the 20 family medicine clinics invited to participate in this study, four agreed, giving an overall recruitment rate of 20%. Of the 148 clinicians invited to the study, 93 participated (63%). Clinicians rated their interest in the topics between 6.4 and 8.2 out of 10 (10 highest) and their satisfaction with the DBoxes at 4 or 5 out of 5 (5 highest) for 81% of the DBoxes. For the future RCT, we estimated that a sample size of 320 patients would allow detecting a 9% mean difference in SDM-Q-9 ratings between the two arms (0.02 ICC; 0.05 significance level; 80% power). Clinicians' recruitment and questionnaire completion rates support the feasibility of the planned RCT. The level of interest of participants in the DBox topics and their level of satisfaction with the DBoxes demonstrate the acceptability of the intervention. Processes to recruit clinics and patients should be optimized.
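
    The reported sample-size estimate rests on the standard cluster-trial calculation in which an individually randomized sample size is inflated by the design effect 1 + (m - 1) × ICC. The sketch below reproduces that logic; the outcome standard deviation and cluster size are assumptions chosen only to make the example run, not values reported by the study.

        from scipy.stats import norm

        def cluster_rct_n_per_arm(delta, sd, icc, m, alpha=0.05, power=0.80):
            """Per-arm sample size: two-sample formula times the design effect."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            n_individual = 2 * (z * sd / delta) ** 2     # ignoring clustering
            return n_individual * (1 + (m - 1) * icc)    # inflate for clustering

        # Illustrative inputs: 9-point difference on the 0-100 SDM-Q-9 scale,
        # an assumed SD of 23, ICC = 0.02, and an assumed 25 patients per clinic.
        print(round(cluster_rct_n_per_arm(delta=9, sd=23, icc=0.02, m=25)))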

  18. Apatite fission track analysis: geological thermal history analysis based on a three-dimensional random process of linear radiation damage

    International Nuclear Information System (INIS)

    Galbraith, R.F.; Laslett, G.M.; Green, P.F.; Duddy, I.R.

    1990-01-01

    Spontaneous fission of uranium atoms over geological time creates a random process of linearly shaped features (fission tracks) inside an apatite crystal. The theoretical distributions associated with this process are governed by the elapsed time and temperature history, but other factors are also reflected in empirical measurements as consequences of sampling by plane section and chemical etching. These include geometrical biases leading to over-representation of long tracks, the shape and orientation of host features when sampling totally confined tracks, and 'gaps' in heavily annealed tracks. We study the estimation of geological parameters in the presence of these factors using measurements on both confined tracks and projected semi-tracks. Of particular interest is a history of sedimentation, uplift and erosion giving rise to a two-component mixture of tracks in which the parameters reflect the current temperature, the maximum temperature and the timing of uplift. A full likelihood analysis based on all measured densities, lengths and orientations is feasible, but because some geometrical biases and measurement limitations are only partly understood it seems preferable to use conditional likelihoods given numbers and orientations of confined tracks. (author)

  19. Randomized, double-blinded clinical trial for human norovirus inactivation in oysters by high hydrostatic pressure processing.

    Science.gov (United States)

    Leon, Juan S; Kingsley, David H; Montes, Julia S; Richards, Gary P; Lyon, G Marshall; Abdulhafid, Gwen M; Seitz, Scot R; Fernandez, Marina L; Teunis, Peter F; Flick, George J; Moe, Christine L

    2011-08-01

    Contamination of oysters with human noroviruses (HuNoV) constitutes a human health risk and may lead to severe economic losses in the shellfish industry. There is a need to identify a technology that can inactivate HuNoV in oysters. In this study, we conducted a randomized, double-blinded clinical trial to assess the effect of high hydrostatic pressure processing (HPP) on Norwalk virus (HuNoV genogroup I.1) inactivation in virus-seeded oysters ingested by subjects. Forty-four healthy, positive-secretor adults were divided into three study phases. Subjects in each phase were randomized into control and intervention groups. Subjects received Norwalk virus (8FIIb, 1.0 × 10^4 genomic equivalent copies) in artificially seeded oysters with or without HPP treatment (400 MPa at 25°C, 600 MPa at 6°C, or 400 MPa at 6°C for 5 min). HPP at 600 MPa, but not 400 MPa (at 6°C or 25°C), completely inactivated HuNoV in seeded oysters and resulted in no HuNoV infection among these subjects, as determined by reverse transcription-PCR detection of HuNoV RNA in subjects' stool or vomitus samples. Interestingly, a white blood cell (granulocyte) shift was identified in 92% of the infected subjects and was significantly associated with infection (P = 0.0014). In summary, these data suggest that HPP is effective at inactivating HuNoV in contaminated whole oysters and suggest a potential intervention to inactivate infectious HuNoV in oysters for the commercial shellfish industry.

  20. Process Evaluation of the Type 2 Diabetes Mellitus PULSE Program Randomized Controlled Trial: Recruitment, Engagement, and Overall Satisfaction.

    Science.gov (United States)

    Aguiar, Elroy J; Morgan, Philip J; Collins, Clare E; Plotnikoff, Ronald C; Young, Myles D; Callister, Robin

    2017-07-01

    Men are underrepresented in weight loss and type 2 diabetes mellitus (T2DM) prevention studies. To determine the effectiveness of recruitment and the acceptability of the T2DM Prevention Using LifeStyle Education (PULSE) Program, a gender-targeted, self-administered intervention for men. Men (18-65 years, at high risk for T2DM) were randomized to intervention (n = 53) or wait-list control (n = 48) groups. The 6-month PULSE Program intervention focused on weight loss, diet, and exercise for T2DM prevention. A process evaluation questionnaire was administered at 6 months to examine recruitment and selection processes, and acceptability of the intervention's delivery and content. Associations between self-monitoring and selected outcomes were assessed using Spearman's rank correlation. A pragmatic recruitment and online screening process was effective in identifying men at high risk of T2DM (prediabetes prevalence 70%). Men reported the trial was appealing because it targeted weight loss, T2DM prevention, and getting fit, and because it was perceived as "doable" and tailored for men. The intervention was considered acceptable, with men reporting high overall satisfaction (83%) and engagement with the various components. Adherence to self-monitoring was poor, with only 13% meeting requisite criteria. However, significant associations were observed between weekly self-monitoring of weight and change in weight (rs = -.47, p = .004) and waist circumference (rs = -.38, p = .026). Men reported they would have preferred more intervention contact, for example, by phone or email. Gender-targeted, self-administered lifestyle interventions are feasible, appealing, and satisfying for men. Future studies should explore the effects of additional non-face-to-face contact on motivation, accountability, self-monitoring adherence, and program efficacy.

  1. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain ageing behaviors. Based on the conception of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  2. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.
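
    The core trade-off behind BDA can be sketched numerically: because the fringe rate grows with baseline length, short baselines tolerate much longer averaging times for the same decorrelation loss. The sketch below uses the small-loss rule of thumb loss ≈ Δφ²/24 for a boxcar average over a fringe-phase change Δφ; the paper derives more careful closed-form loss expressions, and all numbers here are illustrative.

        import numpy as np

        OMEGA_E = 7.292e-5   # Earth rotation rate [rad/s]

        def max_avg_time(baseline_m, fov_rad, wavelength_m, max_loss=0.01):
            """Longest averaging time [s] keeping the first-order time-smearing
            loss below max_loss at the edge of the field of view."""
            dphi = np.sqrt(24.0 * max_loss)   # tolerated fringe-phase change [rad]
            fringe_rate = OMEGA_E * baseline_m * fov_rad / wavelength_m  # [rad/s]
            return dphi / fringe_rate

        for b in (100.0, 1e3, 1e4, 6.5e4):    # baseline lengths [m]
            t = max_avg_time(b, fov_rad=np.radians(1.0), wavelength_m=0.21)
            print(f"B = {b:8.0f} m -> average up to ~{t:7.1f} s")

    Averaging each baseline up to its own limit, instead of averaging everything at the shortest-baseline rate, is what yields the quoted reduction in visibility data volume.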

  3. On a randomly imperfect spherical cap pressurized by a random ...

    African Journals Online (AJOL)

    On a randomly imperfect spherical cap pressurized by a random dynamic load. ... In this paper, we investigate a dynamical system in a random setting of dual ... characterization of the random process for determining the dynamic buckling load ...

  4. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.

  5. Global industrial impact coefficient based on random walk process and inter-country input-output table

    Science.gov (United States)

    Xing, Lizhi; Dong, Xianlei; Guan, Jun

    2017-04-01

    The input-output table describes a national economic system comprehensively and in detail, capturing supply and demand information among industrial sectors. Complex network theory, a framework for measuring the structure of complex systems, can characterize the internal structure of the research object through structural indicators of the social and economic system, revealing the relationship between the inner hierarchy and the external economic function. This paper builds GIVCN-WIOT models based on the World Input-Output Database in order to depict the topological structure of the Global Value Chain (GVC), and assumes that the competitive advantage of a nation is equal to the overall impact of its domestic sectors on the GVC. From the perspective of econophysics, the Global Industrial Impact Coefficient (GIIC) is proposed to measure national competitiveness in gaining information superiority and intermediate interests. Analysis of the GIVCN-WIOT models yields several insights, including the following: (1) sectors with higher Random Walk Centrality contribute more to transmitting value streams within the global economic system; (2) the Half-Value Ratio can be used to measure the robustness of open-economy macroeconomics in the process of globalization; (3) the positive correlation between GIIC and GDP indicates that one country's global industrial impact could reveal its international competitive advantage.

  6. A randomized controlled trial of cognitive training using a visual speed of processing intervention in middle aged and older adults.

    Directory of Open Access Journals (Sweden)

    Fredric D Wolinsky

    Age-related cognitive decline is common and may lead to substantial difficulties and disabilities in everyday life. We hypothesized that 10 hours of visual speed of processing training would prevent age-related declines and potentially improve cognitive processing speed. Within two age bands (50-64 and ≥65), 681 patients were randomized to (a) one of three computerized visual speed of processing training arms (10 hours on-site, 14 hours on-site, or 10 hours at-home) or (b) an on-site attention control group using computerized crossword puzzles for 10 hours. The primary outcome was the Useful Field of View (UFOV) test, and the secondary outcomes were the Trail Making (Trails A and B) Tests, Symbol Digit Modalities Test (SDMT), Stroop Color and Word Tests, Controlled Oral Word Association Test (COWAT), and the Digit Vigilance Test (DVT), which were assessed at baseline and at one year. 620 participants (91%) completed the study and were included in the analyses. Linear mixed models were used with Blom rank transformations within age bands. All intervention groups had (p < 0.05) small to medium standardized effect size improvements on UFOV (Cohen's d = -0.322 to -0.579, depending on intervention arm), Trails A (d = -0.204 to -0.265), Trails B (d = -0.225 to -0.320), SDMT (d = 0.263 to 0.351), and Stroop Word (d = 0.240 to 0.271). Converted to years of protection against age-related cognitive declines, these effects reflect 3.0 to 4.1 years on UFOV, 2.2 to 3.5 years on Trails A, 1.5 to 2.0 years on Trails B, 5.4 to 6.6 years on SDMT, and 2.3 to 2.7 years on Stroop Word. Visual speed of processing training delivered on-site or at-home to middle-aged or older adults using standard home computers resulted in stabilization or improvement in several cognitive function tests. Widespread implementation of this intervention is feasible. ClinicalTrials.gov NCT-01165463.

  7. An empirical test of pseudo random number generators by means of an exponential decaying process; Una prueba empirica de generadores de numeros pseudoaleatorios mediante un proceso de decaimiento exponencial

    Energy Technology Data Exchange (ETDEWEB)

    Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A. [Facultad de Fisica e Inteligencia Artificial, Universidad Veracruzana, A.P. 475, Xalapa, Veracruz (Mexico); Mora F, L.E. [CIMAT, A.P. 402, 36000 Guanajuato (Mexico)]. e-mail: hcoronel@uv.mx

    2007-07-01

    Empirical tests for pseudo random number generators based on the use of processes or physical models have been successfully used and are considered as complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo random number generators commonly used in physics. (Author)
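
    The flavour of such a test is easy to reproduce: feed the generator's uniforms through the inverse CDF of the exponential law and compare the simulated decay times against theory. The decay constant and the Kolmogorov-Smirnov criterion below are illustrative choices, not necessarily those of the paper.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)   # pseudo random number generator under test
        lam = 0.5                         # hypothetical decay constant [1/s]

        # Inverse-CDF sampling: map U(0,1) draws to exponential decay times.
        u = rng.random(100_000)
        t = -np.log1p(-u) / lam           # -log(1 - u) / lam, safe for u = 0

        # Compare the empirical distribution with the theoretical exponential law.
        stat, p_value = stats.kstest(t, "expon", args=(0, 1 / lam))
        print(f"KS statistic = {stat:.4f}, p = {p_value:.3f}")  # small p flags a bad PRNG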

  8. Improving understanding in the research informed consent process: a systematic review of 54 interventions tested in randomized control trials.

    Science.gov (United States)

    Nishimura, Adam; Carey, Jantey; Erwin, Patricia J; Tilburt, Jon C; Murad, M Hassan; McCormick, Jennifer B

    2013-07-23

    Obtaining informed consent is a cornerstone of biomedical research, yet participants' comprehension of the presented information is often low. The most effective interventions to improve understanding rates have not been identified. Our aim was to systematically analyze randomized controlled trials testing interventions to improve the research informed consent process. The primary outcome of interest was the quantitative rate of participant understanding; secondary outcomes were rates of information retention, satisfaction, and accrual. Intervention categories included multimedia, enhanced consent documents, extended discussions, test/feedback quizzes, and miscellaneous methods. The search spanned from database inception through September 2010. It was run on Ovid MEDLINE, Ovid EMBASE, Ovid CINAHL, Ovid PsycInfo and Cochrane CENTRAL, ISI Web of Science and Scopus. Five reviewers working independently and in duplicate screened full abstract text to determine eligibility. We included only RCTs. 39 out of 1523 articles fulfilled the review criteria (2.6%), with a total of 54 interventions. A data extraction form was created in Distiller, an online reference management system, through an iterative process. One author collected data on study design, population, demographics, intervention, and analytical technique. Meta-analysis was possible for 22 interventions in the multimedia, enhanced form, and extended discussion categories; all 54 interventions were assessed by review. In the meta-analysis, multimedia approaches were associated with a non-significant increase in understanding scores (SMD 0.30, 95% CI, -0.23 to 0.84); enhanced consent forms with a significant increase (SMD 1.73, 95% CI, 0.99 to 2.47); and extended discussion with a significant increase (SMD 0.53, 95% CI, 0.21 to 0.84). By review, 31% of multimedia interventions showed significant improvement in understanding; 41% for enhanced consent form; 50% for extended discussion; 33% for test/feedback; and 29% for miscellaneous. Multiple sources of variation

  9. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  10. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., the trap placed on a central node and the trap uniformly distributed over all the nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
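
    The ATT in the uniform-trap case is straightforward to estimate by direct simulation on any finite graph. The toy sketch below uses a 6-node cycle rather than a Husimi cactus, purely to keep the adjacency list short; the exact ATT for this graph is 7, which the Monte Carlo estimate should approach.

        import numpy as np

        rng = np.random.default_rng(11)
        N = 6
        neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # cycle graph

        def trapping_time(start, trap=0):
            """Steps until a simple random walk first hits the trap node."""
            node, steps = start, 0
            while node != trap:
                node = rng.choice(neighbors[node])  # jump to a uniform random neighbour
                steps += 1
            return steps

        # Trap fixed at node 0; the walker starts uniformly over the other nodes.
        times = [trapping_time(rng.integers(1, N)) for _ in range(20_000)]
        print("estimated ATT:", np.mean(times))  # exact value is 7 for this 6-cycle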

  11. Specialized rheumatology nurse substitutes for rheumatologists in the diagnostic process of fibromyalgia: a cost-consequence analysis and a randomized controlled trial

    NARCIS (Netherlands)

    Kroese, Mariëlle E.; Severens, Johan L.; Schulpen, Guy J.; Bessems, Monique C.; Nijhuis, Frans J.; Landewé, Robert B.

    2011-01-01

    To perform a cost-consequence analysis of the substitution of specialized rheumatology nurses (SRN) for rheumatologists (RMT) in the diagnostic process of fibromyalgia (FM), using both a healthcare and societal perspective and a 9-month period. Alongside a randomized controlled trial, we measured

  12. Extubation process in bed-ridden elderly intensive care patients receiving inspiratory muscle training: a randomized clinical trial.

    Science.gov (United States)

    Cader, Samária Ali; de Souza Vale, Rodrigo Gomes; Zamora, Victor Emmanuel; Costa, Claudia Henrique; Dantas, Estélio Henrique Martin

    2012-01-01

    The purpose of this study was to evaluate the extubation process in bed-ridden elderly intensive care patients receiving inspiratory muscle training (IMT) and identify predictors of successful weaning. Twenty-eight elderly intubated patients in an intensive care unit were randomly assigned to an experimental group (n = 14) that received conventional physiotherapy plus IMT with a Threshold IMT® device or to a control group (n = 14) that received only conventional physiotherapy. The experimental protocol for muscle training consisted of an initial load of 30% maximum inspiratory pressure, which was increased by 10% daily. The training was administered for 5 minutes, twice daily, 7 days a week, with supplemental oxygen from the beginning of weaning until extubation. Successful extubation was defined by the ventilation time measurement with noninvasive positive pressure. A vacuum manometer was used for measurement of maximum inspiratory pressure, and the patients' Tobin index values were measured using a ventilometer. The maximum inspiratory pressure increased significantly (by 7 cm H2O, 95% confidence interval [CI] 4-10), and the Tobin index decreased significantly (by 16 breaths/min/L, 95% CI -26 to 6) in the experimental group compared with the control group. The chi-squared distribution did not indicate a significant difference in weaning success between the groups (χ2 = 1.47; P = 0.20). However, a comparison of noninvasive positive pressure time dependence indicated a significantly lower value for the experimental group (P = 0.0001; 95% CI 13.08-18.06). The receiver-operating characteristic curve showed an area beneath the curve of 0.877 ± 0.06 for the Tobin index and 0.845 ± 0.07 for maximum inspiratory pressure. The IMT intervention significantly increased maximum inspiratory pressure and significantly reduced the Tobin index; both measures are considered to be good extubation indices. IMT was associated with a reduction in noninvasive positive

  13. Processing/structure/property Relationships of Barium Strontium Titanate Thin Films for Dynamic Random Access Memory Application.

    Science.gov (United States)

    Peng, Cheng-Jien

    The purpose of this study is to assess the feasibility of applying barium strontium titanate (BST) thin films to ultra-large-scale integration (ULSI) dynamic random access memory (DRAM) capacitors through an understanding of the relationships among processing, structure and electrical properties. Thin films of BST were deposited by the multi-ion-beam reactive sputtering (MIBERS) technique and the metallo-organic decomposition (MOD) method. Processing parameters such as Ba/Sr ratio, substrate temperature, annealing temperature and time, film thickness and doping concentration were correlated with the structure and electrical properties of the films. Some effects of secondary low-energy oxygen ion bombardment were also examined. The microstructures of BST thin films could be classified into two types: (a) Type I structures, with multiple grains through the film thickness, for amorphous as-grown films after high-temperature annealing, and (b) columnar structures (Type II), which remained even after high-temperature annealing, for well-crystallized films deposited at high substrate temperatures. Type I films showed Curie-von Schweidler response, while Type II films showed Debye-type behavior. Type I behavior may be attributed to the presence of a high density of disordered grain boundaries. Two types of current-voltage characteristics could be seen in non-bombarded films depending on the chemistry of the films (doped or undoped) and the substrate temperature during deposition. Only the MIBERS films doped with a high donor concentration and deposited at high substrate temperature showed space-charge-limited conduction (SCLC) with discrete shallow traps embedded in a trap-distributed background at high electric field. All other non-bombarded films, including MOD films, showed trap-distributed SCLC behavior with a slope of ~7.5-10 due to the presence of grain boundaries through the film thickness or traps induced by unavoidable acceptor impurities in the films. Donor-doping could

  14. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  15. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts; such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost per photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  16. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime bound under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...

  17. A new type of exact arbitrarily inhomogeneous cosmology: evolution of deceleration in the flat homogeneous-on-average case

    Energy Technology Data Exchange (ETDEWEB)

    Hellaby, Charles, E-mail: Charles.Hellaby@uct.ac.za [Dept. of Maths. and Applied Maths, University of Cape Town, Rondebosch, 7701 (South Africa)

    2012-01-01

    A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.

  18. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
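
    The two useful rules are simple to implement. The sketch below flags a shift when the longest run of points on one side of the median exceeds roughly log2(n) + 3, and flags non-random variation when the number of median crossings falls below the 5th percentile of the binomial(n - 1, 0.5) distribution; these thresholds follow the commonly cited approximations rather than the exact tables of the paper.

        import numpy as np
        from scipy import stats

        def run_chart_signals(y):
            """Shift and crossings tests for a run chart (points vs. the median)."""
            y = np.asarray(y, dtype=float)
            side = np.sign(y - np.median(y))
            side = side[side != 0]          # points exactly on the median are ignored
            n = len(side)
            longest, run = 1, 1
            for a, b in zip(side[:-1], side[1:]):
                run = run + 1 if a == b else 1
                longest = max(longest, run)
            crossings = int(np.sum(side[1:] != side[:-1]))
            return {
                "shift": longest > round(np.log2(n)) + 3,
                "too_few_crossings": crossings < stats.binom.ppf(0.05, n - 1, 0.5),
            }

        rng = np.random.default_rng(7)
        print(run_chart_signals(rng.normal(size=24)))                         # stable process
        print(run_chart_signals(rng.normal(size=24) + 0.15 * np.arange(24)))  # drifting process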

  19. Extubation process in bed-ridden elderly intensive care patients receiving inspiratory muscle training: a randomized clinical trial

    Directory of Open Access Journals (Sweden)

    Cader SA

    2012-10-01

    Samária Ali Cader,1 Rodrigo Gomes de Souza Vale,1 Victor Emmanuel Zamora,2 Claudia Henrique Costa,2 Estélio Henrique Martin Dantas1 (1Laboratory of Human Kinetics Bioscience, Federal University of Rio de Janeiro State; 2Pedro Ernesto University Hospital, School of Medicine, State University of Rio de Janeiro, Rio de Janeiro, Brazil). Background: The purpose of this study was to evaluate the extubation process in bed-ridden elderly intensive care patients receiving inspiratory muscle training (IMT) and identify predictors of successful weaning. Methods: Twenty-eight elderly intubated patients in an intensive care unit were randomly assigned to an experimental group (n = 14) that received conventional physiotherapy plus IMT with a Threshold IMT® device or to a control group (n = 14) that received only conventional physiotherapy. The experimental protocol for muscle training consisted of an initial load of 30% maximum inspiratory pressure, which was increased by 10% daily. The training was administered for 5 minutes, twice daily, 7 days a week, with supplemental oxygen from the beginning of weaning until extubation. Successful extubation was defined by the ventilation time measurement with noninvasive positive pressure. A vacuum manometer was used for measurement of maximum inspiratory pressure, and the patients' Tobin index values were measured using a ventilometer. Results: The maximum inspiratory pressure increased significantly (by 7 cm H2O, 95% confidence interval [CI] 4–10), and the Tobin index decreased significantly (by 16 breaths/min/L, 95% CI −26 to 6) in the experimental group compared with the control group. The chi-squared distribution did not indicate a significant difference in weaning success between the groups (χ2 = 1.47; P = 0.20). However, a comparison of noninvasive positive pressure time dependence indicated a significantly lower value for the experimental group (P = 0.0001; 95% CI 13.08–18.06). The receiver

  20. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  1. Will Mobile Diabetes Education Teams (MDETs) in primary care improve patient care processes and health outcomes? Study protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Gucciardi Enza

    2012-09-01

    Background: There is evidence to suggest that delivery of diabetes self-management support by diabetes educators in primary care may improve patient care processes and patient clinical outcomes; however, the evaluation of such a model in primary care is nonexistent in Canada. This article describes the design for the evaluation of the implementation of Mobile Diabetes Education Teams (MDETs) in primary care settings in Canada. Methods/design: This study will use a non-blinded, cluster-randomized controlled trial stepped wedge design to evaluate the Mobile Diabetes Education Teams' intervention in improving patient clinical and care process outcomes. A total of 1,200 patient charts at participating primary care sites will be reviewed for data extraction. Eligible patients will be those aged ≥18 who have type 2 diabetes and a hemoglobin A1c (HbA1c) of ≥8%. Clusters (that is, primary care sites) will be randomized to the intervention and control group using a block randomization procedure with practice size as the blocking factor. A stepped wedge design will be used to sequentially roll out the intervention so that all clusters eventually receive it. The time at which each cluster begins the intervention is randomized to one of the four roll-out periods (0, 6, 12, and 18 months). Clusters that are randomized into the intervention later will act as the control for those receiving the intervention earlier. The primary outcome measure will be the difference in the proportion of patients who achieve the recommended HbA1c target of ≤7% between intervention and control groups. Qualitative work (in-depth interviews with primary care physicians, MDET educators and patients; and MDET educators' field notes and debriefing sessions) will be undertaken to assess the implementation process and effectiveness of the MDET intervention. Trial registration: ClinicalTrials.gov NCT01553266

  2. Cosmological measure with volume averaging and the vacuum energy problem

    Science.gov (United States)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero.

  3. Cosmological measure with volume averaging and the vacuum energy problem

    International Nuclear Information System (INIS)

    Astashenok, Artyom V; Del Popolo, Antonino

    2012-01-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative, volume averaging measure, instead of volume weighting can explain why the cosmological constant is non-zero. (paper)

  4. Omega-3 and -6 fatty acid supplementation and sensory processing in toddlers with ASD symptomology born preterm: A randomized controlled trial.

    Science.gov (United States)

    Boone, Kelly M; Gracious, Barbara; Klebanoff, Mark A; Rogers, Lynette K; Rausch, Joseph; Coury, Daniel L; Keim, Sarah A

    2017-12-01

    Despite advances in the health and long-term survival of infants born preterm, they continue to face developmental challenges, including higher risk for autism spectrum disorder (ASD) and atypical sensory processing patterns. This secondary analysis aimed to describe sensory profiles and explore effects of combined dietary docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), and gamma-linolenic acid (GLA) supplementation on parent-reported sensory processing in toddlers born preterm who were exhibiting ASD symptoms. 90-day randomized, double-blinded, placebo-controlled trial. 31 children aged 18-38 months who were born at ≤29 weeks' gestation. Mixed effects regression analyses followed intent to treat and explored effects on parent-reported sensory processing measured by the Infant/Toddler Sensory Profile (ITSP). Baseline ITSP scores reflected atypical sensory processing, with the majority of atypical scores falling below the mean. Sensory processing sections: auditory (above = 0%, below = 65%), vestibular (above = 13%, below = 48%), tactile (above = 3%, below = 35%), oral sensory (above = 10%, below = 26%), visual (above = 10%, below = 16%); sensory processing quadrants: low registration (above = 3%, below = 71%), sensation avoiding (above = 3%, below = 39%), sensory sensitivity (above = 3%, below = 35%), and sensation seeking (above = 10%, below = 19%). Twenty-eight of 31 children randomized had complete outcome data. Although not statistically significant (p = 0.13), the magnitude of the effect for reduction in behaviors associated with sensory sensitivity was medium to large (effect size = 0.57). No other scales reflected a similar magnitude of effect size (range: 0.10 to 0.32). The findings provide support for larger randomized trials of omega fatty acid supplementation for children at risk of sensory processing difficulties, especially those born preterm. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Vacuum instability in a random electric field

    International Nuclear Information System (INIS)

    Krive, I.V.; Pastur, L.A.

    1984-01-01

    The reaction of the vacuum on an intense spatially homogeneous random electric field is investigated. It is shown that a stochastic electric field always causes a breakdown of the boson vacuum, and the number of pairs of particles which are created by the electric field increases exponentially in time. For the choice of potential field in the form of a dichotomic random process we find in explicit form the dependence of the average number of pairs of particles on the time of the action of the source of the stochastic field. For the fermion vacuum the average number of pairs of particles which are created by the field in the lowest order of perturbation theory in the amplitude of the random field is independent of time

  6. Efficient Numerical Methods for Analysis of Square Ratio of κ-μ and η-μ Random Processes with Their Applications in Telecommunications

    Directory of Open Access Journals (Sweden)

    Gradimir V. Milovanović

    2018-01-01

    We provide a statistical analysis of the square ratio of κ-μ and η-μ random processes and its application in the signal-to-interference ratio (SIR)-based performance analysis of wireless transmission subjected to multipath fading, modelled by the κ-μ fading model, and to the undesired occurrence of co-channel interference (CCI), distributed as an η-μ random process. The first contribution of the paper is the derivation of exact closed-form expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the square ratio of κ-μ and η-μ random processes. Further, the accuracy of these PDF and CDF expressions is verified by comparison with the corresponding approximations obtained by high-precision quadrature formulas of Gaussian type with respect to weight functions on (0, +∞). The computational procedure for such quadrature rules is provided by the constructive theory of orthogonal polynomials and the MATHEMATICA package OrthogonalPolynomials created by Cvetković and Milovanović (2004). Capitalizing on the obtained expressions, an important wireless performance criterion, namely the outage probability (OP), is obtained as a function of transmission parameters. Possible performance improvement through selection combining (SC) reception is also examined, based on the obtained expressions.
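
    For readers unfamiliar with Gaussian quadrature on (0, +∞), the classical special case is Gauss-Laguerre, which integrates f(x)·e^(-x) exactly for polynomial f. The toy below uses it in place of the paper's problem-specific weight functions, estimating E[g(X)] for X ~ Exp(1) with g(x) = 1/(1 + x); the exact value is e·E1(1) ≈ 0.596347.

        import numpy as np

        # 32-point Gauss-Laguerre rule: nodes and weights for (0, +inf).
        nodes, weights = np.polynomial.laguerre.laggauss(32)

        def expectation(g):
            """Approximate the integral of g(x) * exp(-x) over (0, +inf)."""
            return np.sum(weights * g(nodes))

        approx = expectation(lambda x: 1.0 / (1.0 + x))
        print(f"quadrature estimate: {approx:.6f}")   # compare with ~0.596347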

  7. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  8. DSCOVR Magnetometer Level 2 One Minute Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-minute average of Level 1 data

  9. DSCOVR Magnetometer Level 2 One Second Averages

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Interplanetary magnetic field observations collected from magnetometer on DSCOVR satellite - 1-second average of Level 1 data
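
    The relationship between these two products (1-second and 1-minute averages of the same Level 1 field data) is the standard resampling operation sketched below; the column names are illustrative placeholders, not the official DSCOVR variable names.

        import numpy as np
        import pandas as pd

        # One hour of fake 1 Hz magnetometer samples (bx, by, bz are placeholders).
        idx = pd.date_range("2024-01-01", periods=3600, freq="s")
        rng = np.random.default_rng(0)
        b1s = pd.DataFrame(rng.normal(5.0, 0.5, size=(3600, 3)),
                           index=idx, columns=["bx", "by", "bz"])

        b1m = b1s.resample("1min").mean()   # 1-minute averages of the 1-second data
        print(b1m.head())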

  10. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  11. NOAA Average Annual Salinity (3-Zone)

    Data.gov (United States)

    California Natural Resource Agency — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...

  12. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
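
    The core claim of signal averaging, that residual noise falls as 1/√N when N repetitions of the same waveform in independent noise are averaged, takes only a few lines to verify; the sinusoidal "evoked" waveform and noise level below are arbitrary choices for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 1.0, 500)
        signal = np.sin(2 * np.pi * 5 * t)   # the repeating "evoked" waveform

        for n_sweeps in (1, 16, 256):
            sweeps = signal + rng.normal(0.0, 2.0, size=(n_sweeps, t.size))
            avg = sweeps.mean(axis=0)        # average over repetitions
            print(f"N = {n_sweeps:4d}: residual noise RMS = {np.std(avg - signal):.3f}")
        # The RMS drops by about 4x for each 16x increase in N, i.e. as 1/sqrt(N).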

  13. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary prediction [2], which

  14. An application of reactor noise techniques to neutron transport problems in a random medium

    International Nuclear Information System (INIS)

    Sahni, D.C.

    1989-01-01

    Neutron transport problems in a random medium are considered by defining a joint Markov process describing the fluctuations of the neutron population and the random changes in the medium. Backward Chapman-Kolmogorov equations are derived which yield an adjoint transport equation for the average neutron density. It is shown that this average density also satisfies the direct transport equation as given by the phenomenological model. (author)

  15. Identification and estimation of survivor average causal effects.

    Science.gov (United States)

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  16. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment (2010-07-01): Environmental Protection Agency (continued), Air Programs (continued), Acid Rain Nitrogen Oxides Emission Reduction Program, § 76.11 Emissions averaging. (a) General...

  17. Determinants of College Grade Point Averages

    Science.gov (United States)

    Bailey, Paul Dean

    2012-01-01

    Chapter 2: The Role of Class Difficulty in College Grade Point Averages. Grade Point Averages (GPAs) are widely used as a measure of college students' ability. Low GPAs can remove a student from eligibility for scholarships, and even from continued enrollment at a university. However, GPAs are determined not only by student ability but also by the…

  18. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations at other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate among the various processes in studies of plume dispersion
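
    For reference, the Turner-style power-law adjustment mentioned above has the form C2 = C1 · (t1/t2)^p; the exponent value used below (p = 0.2) is a commonly quoted figure, chosen here only for illustration.

        def adjust_concentration(c1, t1_min, t2_min, p=0.2):
            """Scale a concentration from averaging time t1 to t2 (power law)."""
            return c1 * (t1_min / t2_min) ** p

        # Example: convert a 15-minute average concentration to a 60-minute average.
        print(adjust_concentration(100.0, 15.0, 60.0))   # ~75.8 with p = 0.2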

  19. Process evaluation of the Enabling Mothers to Prevent Pediatric Obesity Through Web-Based Learning and Reciprocal Determinism (EMPOWER) randomized control trial.

    Science.gov (United States)

    Knowlden, Adam P; Sharma, Manoj

    2014-09-01

    Family-and-home-based interventions are an important vehicle for preventing childhood obesity. Systematic process evaluations have not been routinely conducted in assessment of these interventions. The purpose of this study was to plan and conduct a process evaluation of the Enabling Mothers to Prevent Pediatric Obesity Through Web-Based Learning and Reciprocal Determinism (EMPOWER) randomized control trial. The trial was composed of two web-based, mother-centered interventions for prevention of obesity in children between 4 and 6 years of age. Process evaluation used the components of program fidelity, dose delivered, dose received, context, reach, and recruitment. Categorical process evaluation data (program fidelity, dose delivered, dose exposure, and context) were assessed using Program Implementation Index (PII) values. Continuous process evaluation variables (dose satisfaction and recruitment) were assessed using ANOVA tests to evaluate mean differences between groups (experimental and control) and sessions (sessions 1 through 5). Process evaluation results found that both groups (experimental and control) were equivalent, and interventions were administered as planned. Analysis of web-based intervention process objectives requires tailoring of process evaluation models for online delivery. Dissemination of process evaluation results can advance best practices for implementing effective online health promotion programs. © 2014 Society for Public Health Education.

  20. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
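
    The origin of the averaging bias is that the IPDA retrieval is logarithmic in the averaged signals, so the order of averaging and log-transforming matters for noisy pulse energies. The toy below demonstrates the effect with invented gamma-distributed pulse energies, which stand in for, but do not model, MERLIN's actual signal statistics.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 50_000
        p_off = rng.gamma(shape=20.0, scale=1.0, size=n)  # noisy off-line pulse energies
        p_on = rng.gamma(shape=5.0, scale=2.4, size=n)    # noisier on-line pulse energies

        # Average the signals first, then take the log (as in 50 km averaging).
        dod_avg_first = np.log(p_off.mean() / p_on.mean())
        # Take the log pulse pair by pulse pair, then average.
        dod_log_first = np.mean(np.log(p_off / p_on))
        print(dod_avg_first, dod_log_first)   # the gap between the two is the bias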

  1. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging biases issues and suggests correction algorithms tested on realistic simulated scenes.
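
    The averaging bias discussed in both records arises generically whenever noisy data are averaged before a non-linear retrieval step. The toy sketch below uses synthetic numbers and a plain logarithm as a stand-in for the IPDA lidar equation; it illustrates the effect only and reflects nothing of MERLIN's actual processing chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic, strictly positive, noisy signal ratios.
    x = rng.lognormal(mean=0.0, sigma=0.3, size=50_000)

    # A log stands in for the non-linear retrieval (IPDA-like) step.
    mean_then_log = np.log(np.mean(x))   # average the signals, then retrieve
    log_then_mean = np.mean(np.log(x))   # retrieve shot by shot, then average

    # The two disagree; the gap is the averaging bias a correction must remove.
    print(mean_then_log, log_then_mean)
    ```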

  2. EPQ model for imperfect production processes with rework and random preventive machine time for deteriorating items and trended demand

    Directory of Open Access Journals (Sweden)

    Shah Nita H.

    2015-01-01

    Full Text Available Economic production quantity (EPQ) model has been analyzed for trended demand, and units in inventory are subject to deterioration at a constant rate. The system allows rework of imperfect units, and preventive maintenance time is random. A search method is used to study the model. The proposed methodology is validated by a numerical example. The sensitivity analysis is carried out to determine the critical model parameters. It is observed that the rate of change of demand and the deterioration rate have a significant impact on the decision variables and the total cost of an inventory system. The model is highly sensitive to the production and demand rate.

  3. Making working memory work: the effects of extended practice on focus capacity and the processes of updating, forward access, and random access.

    Science.gov (United States)

    Price, John M; Colflesh, Gregory J H; Cerella, John; Verhaeghen, Paul

    2014-05-01

    We investigated the effects of 10 h of practice on variations of the N-Back task to examine the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed) whereas the processes of forward access (i.e., retrieval in order, as in a standard N-Back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity as measured by the size of the focus of attention for the forward-access task, but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. A comparison of random walks in dependent random environments

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.; Kroese, Dirk

    We provide exact computations for the drift of random walks in dependent random environments, including $k$-dependent and moving average environments. We show how the drift can be characterized and evaluated using Perron–Frobenius theory. Comparing random walks in various dependent environments, we

  5. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended

  6. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  7. Rotational averaging of multiphoton absorption cross sections

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)

    2014-11-28

    Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.

  8. Sea Surface Temperature Average_SST_Master

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...

  9. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-01-01

    to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic

  10. Should the average tax rate be marginalized?

    Czech Academy of Sciences Publication Activity Database

    Feldman, N. E.; Katuščák, Peter

    -, No. 304 (2006), pp. 1-65. ISSN 1211-3298. Institutional research plan: CEZ:MSM0021620846. Keywords: tax * labor supply * average tax. Subject RIV: AH - Economics. http://www.cerge-ei.cz/pdf/wp/Wp304.pdf

  11. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  12. MN Temperature Average (1961-1990) - Line

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  13. MN Temperature Average (1961-1990) - Polygon

    Data.gov (United States)

    Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...

  14. Average Bandwidth Allocation Model of WFQ

    Directory of Open Access Journals (Sweden)

    Tomáš Balogh

    2012-01-01

    Full Text Available We present a new iterative method for the calculation of average bandwidth assignment to traffic flows using a WFQ scheduler in IP based NGN networks. The bandwidth assignment calculation is based on the link speed, assigned weights, arrival rate, and average packet length or input rate of the traffic flows. We validate the model outcome with examples and simulation results using the NS2 simulator.
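
    To illustrate the kind of iterative weighted-share computation the record describes, here is a minimal weighted max-min style allocation in which each flow receives capacity in proportion to its weight but never more than its offered load. The paper's actual iteration and inputs may differ.

    ```python
    def wfq_average_bandwidth(capacity, weights, demands):
        """Split link capacity among flows in proportion to their weights,
        capping each flow at its demand and redistributing the surplus."""
        n = len(weights)
        alloc = [0.0] * n
        active = set(range(n))
        remaining = capacity
        while active and remaining > 1e-12:
            total_w = sum(weights[i] for i in active)
            share = {i: remaining * weights[i] / total_w for i in active}
            capped = {i for i in active if demands[i] - alloc[i] <= share[i]}
            if not capped:                      # everyone can use their full share
                for i in active:
                    alloc[i] += share[i]
                break
            for i in capped:                    # satisfy capped flows entirely
                remaining -= demands[i] - alloc[i]
                alloc[i] = demands[i]
            active -= capped
        return alloc

    print(wfq_average_bandwidth(100.0, [1, 2, 2], [10.0, 80.0, 80.0]))
    # -> [10.0, 45.0, 45.0]: the first flow is capped at its demand,
    #    the other two split the leftover capacity by weight.
    ```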

  15. Nonequilibrium statistical averages and thermo field dynamics

    International Nuclear Information System (INIS)

    Marinaro, A.; Scarpetta, Q.

    1984-01-01

    An extension of thermo field dynamics is proposed, which permits the computation of nonequilibrium statistical averages. The Brownian motion of a quantum oscillator is treated as an example. In conclusion it is pointed out that the procedure proposed for the computation of time-dependent statistical averages gives the correct two-point Green function for the damped oscillator. A simple extension can be used to compute two-point Green functions of free particles.

  16. Random tensors

    CERN Document Server

    Gurau, Razvan

    2017-01-01

    Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents in detail the theory and its applications to physics. The recent detections of the Higgs boson at the LHC and gravitational waves at LIGO mark new milestones in Physics confirming long standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce today the need to find an underlying common framework of the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....

  17. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal at all times during the averaging process on a CRT. It has a maximum sampling rate of 2.5 μs and a memory capacity of 256 x 12 bit words. The number of sweeps is selectable through a front panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
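
    The quoted 36 dB maximum is consistent with the usual √N signal-to-noise gain of coherent averaging at the instrument's maximum of 2^12 sweeps; the √N law is the standard textbook assumption here, not a claim from the paper.

    ```python
    import math

    n_sweeps = 2 ** 12                  # maximum selectable number of sweeps
    snr_gain = math.sqrt(n_sweeps)      # coherent averaging: S/N grows as sqrt(N)
    print(20 * math.log10(snr_gain))    # ~36.1 dB, matching the quoted figure
    ```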

  18. Random walk in dynamically disordered chains: Poisson white noise disorder

    International Nuclear Information System (INIS)

    Hernandez-Garcia, E.; Pesquera, L.; Rodriguez, M.A.; San Miguel, M.

    1989-01-01

    Exact solutions are given for a variety of models of random walks in a chain with time-dependent disorder. Dynamic disorder is modeled by white Poisson noise. Models with site-independent (global) and site-dependent (local) disorder are considered. Results are described in terms of an effective random walk in a nondisordered medium. In the cases of global disorder the effective random walk contains multistep transitions, so that the continuous limit is not a diffusion process. In the cases of local disorder the effective process is equivalent to usual random walk in the absence of disorder but with slower diffusion. Difficulties associated with the continuous-limit representation of random walk in a disordered chain are discussed. In particular, the authors consider explicit cases in which taking the continuous limit and averaging over disorder sources do not commute.

  19. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log ε^{-1}) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
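
    For contrast, the baseline that multi-scale schemes improve upon is plain randomized pairwise gossip, sketched below; this toy version illustrates gossip averaging only and is not the Multi-scale Gossip algorithm.

    ```python
    import random

    def pairwise_gossip(values, edges, rounds=10_000, seed=0):
        """Repeatedly pick a random edge and replace both endpoint values
        by their mean; on a connected graph every value converges to the
        global average (the sum is conserved at each step)."""
        rng = random.Random(seed)
        x = list(values)
        for _ in range(rounds):
            i, j = rng.choice(edges)
            x[i] = x[j] = (x[i] + x[j]) / 2.0
        return x

    vals = [0.0, 4.0, 8.0, 12.0]
    ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
    print(pairwise_gossip(vals, ring))  # all entries close to the true average 6.0
    ```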

  20. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  1. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Statistical Tests That Do Not Require Random Sampling Randomization Tests Numerical Examples Randomization Tests and Nonrandom Samples The Prevalence of Nonrandom Samples in Experiments The Irrelevance of Random Samples for the Typical Experiment Generalizing from Nonrandom Samples Intelligibility Respect for the Validity of Randomization Tests Versatility Practicality Precursors of Randomization Tests Other Applications of Permutation Tests Questions and Exercises Notes References Randomized Experiments Unique Benefits of Experiments Experimentation without Mani

  2. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
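
    The Maxwell constraint counting (MCC) bound mentioned in the record reduces to a single arithmetic check. A minimal sketch, assuming the standard six degrees of freedom per rigid body in a body-bar network and one constraint removed per bar:

    ```python
    def maxwell_dof_lower_bound(n_bodies, n_bars):
        """Maxwell counting for a body-bar network: 6 DOF per free body,
        at most one DOF removed per bar, 6 trivial rigid-body motions.
        Returns a lower bound on the internal degrees of freedom."""
        return max(6 * n_bodies - 6 - n_bars, 0)

    print(maxwell_dof_lower_bound(n_bodies=10, n_bars=40))  # 14: under-constrained
    print(maxwell_dof_lower_bound(n_bodies=10, n_bars=80))  # 0: possibly over-constrained
    ```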

  3. Small Acute Benefits of 4 Weeks Processing Speed Training Games on Processing Speed and Inhibition Performance and Depressive Mood in the Healthy Elderly People: Evidence from a Randomized Control Trial.

    Science.gov (United States)

    Nouchi, Rui; Saito, Toshiki; Nouchi, Haruka; Kawashima, Ryuta

    2016-01-01

    Background: Processing speed training using a 1-year intervention period improves cognitive functions and emotional states of elderly people. Nevertheless, it remains unclear whether short-term processing speed training such as 4 weeks can benefit elderly people. This study was designed to investigate effects of 4 weeks of processing speed training on cognitive functions and emotional states of elderly people. Methods: We used a single-blinded randomized control trial (RCT). Seventy-two older adults were assigned randomly to two groups: a processing speed training game (PSTG) group and knowledge quiz training game (KQTG) group, an active control group. In PSTG, participants were asked to play PSTG (12 processing speed games) for 15 min, during five sessions per week, for 4 weeks. In the KQTG group, participants were asked to play KQTG (four knowledge quizzes) for 15 min, during five sessions per week, for 4 weeks. We measured several cognitive functions and emotional states before and after the 4 week intervention period. Results: Our results revealed that PSTG improved performances in processing speed and inhibition compared to KQTG, but did not improve performance in reasoning, shifting, short term/working memory, and episodic memory. Moreover, PSTG reduced the depressive mood score as measured by the Profile of Mood State compared to KQTG during the 4 week intervention period, but did not change other emotional measures. Discussion: This RCT first provided scientific evidence related to small acute benefits of 4 week PSTG on processing speed, inhibition, and depressive mood in healthy elderly people. We discuss possible mechanisms for improvements in processing speed and inhibition and reduction of the depressive mood. Trial registration: This trial was registered in The University Hospital Medical Information Network Clinical Trials Registry (UMIN000022250).

  4. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random-weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
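
    One common smooth-weight alternative to the 0-1 random weights of a PMSE is Akaike weighting. The sketch below shows that scheme purely as an illustration of frequentist model averaging; it is not the specific connection proposed in the paper.

    ```python
    import math

    def akaike_weights(aics):
        """Akaike weights: w_i proportional to exp(-(AIC_i - AIC_min) / 2)."""
        best = min(aics)
        raw = [math.exp(-(a - best) / 2.0) for a in aics]
        s = sum(raw)
        return [r / s for r in raw]

    aics = [100.0, 101.2, 105.7]      # AIC of three candidate models
    estimates = [2.1, 2.4, 3.0]       # the same quantity estimated under each
    w = akaike_weights(aics)
    print(sum(wi * est for wi, est in zip(w, estimates)))  # model-averaged estimate
    ```

    Model selection corresponds to pushing the largest weight to 1 and the rest to 0, which is the 0-1 special case the record mentions.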

  5. Liquid-borne nano particles impact on the random yield during critical processes in IC’s production

    NARCIS (Netherlands)

    Wali, F.; Knotter, D. Martin; Kuper, F.G.

    2008-01-01

    Semiconductor industry faces a continuous challenge to decrease the transistor size as well as to increase the yield by eliminating defect sources. One of the sources of particle defects is ultra pure water used in different production tools at different stages of processing. In this paper, particle

  6. Return to work and occupational physicians' management of common mental health problems--process evaluation of a randomized controlled trial

    NARCIS (Netherlands)

    Rebergen, David S.; Bruinvels, David J.; Bos, Chris M.; van der Beek, Allard J.; van Mechelen, Willem

    2010-01-01

    The aim of this study was to examine the adherence of occupational physicians (OP) to the Dutch guideline on the management of common mental health problems and its effect on return to work as part of the process evaluation of a trial comparing adherence to the guideline to care as usual. The first

  7. Asynchronous Gossip for Averaging and Spectral Ranking

    Science.gov (United States)

    Borkar, Vivek S.; Makhijani, Rahul; Sundaresan, Rajesh

    2014-08-01

    We consider two variants of the classical gossip algorithm. The first variant is a version of asynchronous stochastic approximation. We highlight a fundamental difficulty associated with the classical asynchronous gossip scheme, viz., that it may not converge to a desired average, and suggest an alternative scheme based on reinforcement learning that has guaranteed convergence to the desired average. We then discuss a potential application to a wireless network setting with simultaneous link activation constraints. The second variant is a gossip algorithm for distributed computation of the Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant draws upon a reinforcement learning algorithm for an average cost controlled Markov decision problem, the second variant draws upon a reinforcement learning algorithm for risk-sensitive control. We then discuss potential applications of the second variant to ranking schemes, reputation networks, and principal component analysis.
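
    A centralized stand-in for the second variant is plain power iteration for the Perron-Frobenius eigenvector; the gossip algorithm distributes updates of this kind across nodes, which the sketch below does not attempt.

    ```python
    import numpy as np

    def perron_vector(A, iters=200):
        """Power iteration for the Perron-Frobenius eigenvector of a
        nonnegative (primitive) matrix, normalized to sum to one."""
        v = np.ones(A.shape[0])
        for _ in range(iters):
            v = A @ v
            v /= v.sum()
        return v

    A = np.array([[0.0, 2.0, 1.0],
                  [1.0, 0.0, 3.0],
                  [2.0, 1.0, 0.0]])
    print(perron_vector(A))   # scores usable for ranking, as in the record
    ```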

  8. Benchmarking statistical averaging of spectra with HULLAC

    Science.gov (United States)

    Klapisch, Marcel; Busquet, Michel

    2008-11-01

    Knowledge of radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of details/averaging. We will take advantage of this feature to check the effect of averaging with comparison with experiment. [1] A Bar-Shalom, J Oreg, and M Klapisch, J. Quant. Spectros. Rad. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002-4 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).

  9. An approach to averaging digitized plantagram curves.

    Science.gov (United States)

    Hawes, M R; Heinemeyer, R; Sovak, D; Tory, B

    1994-07-01

    The averaging of outline shapes of the human foot for the purposes of determining information concerning foot shape and dimension within the context of comfort of fit of sport shoes is approached as a mathematical problem. An outline of the human footprint is obtained by standard procedures and the curvature is traced with a Hewlett Packard Digitizer. The paper describes the determination of an alignment axis, the identification of two ray centres and the division of the total curve into two overlapping arcs. Each arc is divided by equiangular rays which intersect chords between digitized points describing the arc. The radial distance of each ray is averaged within groups of foot lengths which vary by +/- 2.25 mm (approximately equal to 1/2 shoe size). The method has been used to determine average plantar curves in a study of 1197 North American males (Hawes and Sovak 1993).

  10. Books average previous decade of economic misery.

    Science.gov (United States)

    Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.

  11. Books Average Previous Decade of Economic Misery

    Science.gov (United States)

    Bentley, R. Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios

    2014-01-01

    For the 20th century since the Depression, we find a strong correlation between a ‘literary misery index’ derived from English language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German language and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade. PMID:24416159
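
    The alignment used in these two records pairs each year's literary index with a trailing moving average of the economic index over the preceding decade. A sketch of that computation on synthetic data (the series are fabricated for illustration; nothing here reproduces the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    misery = rng.normal(size=120)            # stand-in annual economic misery index

    window = 11                              # the window with peak goodness of fit
    # moving_avg[k] = mean of misery over the `window` years ending at year k.
    moving_avg = np.convolve(misery, np.ones(window) / window, mode="valid")

    # Synthetic 'literary misery' tracking the previous decade, plus noise.
    literary = moving_avg[:-1] + rng.normal(scale=0.1, size=moving_avg.size - 1)
    print(np.corrcoef(literary, moving_avg[:-1])[0, 1])  # high by construction
    ```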

  12. Exploiting scale dependence in cosmological averaging

    International Nuclear Information System (INIS)

    Mattsson, Teppo; Ronkainen, Maria

    2008-01-01

    We study the role of scale dependence in the Buchert averaging method, using the flat Lemaitre–Tolman–Bondi model as a testing ground. Within this model, a single averaging scale gives predictions that are too coarse, but by replacing it with the distance of the objects R(z) for each redshift z, we find an O(1%) precision at z<2 in the averaged luminosity and angular diameter distances compared to their exact expressions. At low redshifts, we show the improvement for generic inhomogeneity profiles, and our numerical computations further verify it up to redshifts z∼2. At higher redshifts, the method breaks down due to its inability to capture the time evolution of the inhomogeneities. We also demonstrate that the running smoothing scale R(z) can mimic acceleration, suggesting that it could be at least as important as the backreaction in explaining dark energy as an inhomogeneity induced illusion

  13. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering  and analysis of bacterial  convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  14. Aperture averaging in strong oceanic turbulence

    Science.gov (United States)

    Gökçe, Muhsin Caner; Baykal, Yahya

    2018-04-01

    Receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence, thus to improve the system performance. The irradiance flux variance is a measure of the intensity fluctuations on a lens of the receiver aperture. Using the modified Rytov theory, which uses the small-scale and large-scale spatial filters, and our previously presented expression that shows the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. Also, the effect of the receiver aperture diameter on the aperture averaging factor is presented in strong oceanic turbulence.

  15. Regional averaging and scaling in relativistic cosmology

    International Nuclear Information System (INIS)

    Buchert, Thomas; Carfora, Mauro

    2002-01-01

    Averaged inhomogeneous cosmologies lie at the forefront of interest, since cosmological parameters such as the rate of expansion or the mass density are to be considered as volume-averaged quantities and only these can be compared with observations. For this reason the relevant parameters are intrinsically scale-dependent and one wishes to control this dependence without restricting the cosmological model by unphysical assumptions. In the latter respect we contrast our way to approach the averaging problem in relativistic cosmology with shortcomings of averaged Newtonian models. Explicitly, we investigate the scale-dependence of Eulerian volume averages of scalar functions on Riemannian three-manifolds. We propose a complementary view of a Lagrangian smoothing of (tensorial) variables as opposed to their Eulerian averaging on spatial domains. This programme is realized with the help of a global Ricci deformation flow for the metric. We explain rigorously the origin of the Ricci flow which, on heuristic grounds, has already been suggested as a possible candidate for smoothing the initial dataset for cosmological spacetimes. The smoothing of geometry implies a renormalization of averaged spatial variables. We discuss the results in terms of effective cosmological parameters that would be assigned to the smoothed cosmological spacetime. In particular, we find that cosmological parameters evaluated on the smoothed spatial domain $\bar{B}$ obey $\bar{\Omega}^{\bar{B}}_m + \bar{\Omega}^{\bar{B}}_R + \bar{\Omega}^{\bar{B}}_\Lambda + \bar{\Omega}^{\bar{B}}_Q = 1$, where $\bar{\Omega}^{\bar{B}}_m$, $\bar{\Omega}^{\bar{B}}_R$ and $\bar{\Omega}^{\bar{B}}_\Lambda$ correspond to the standard Friedmannian parameters, while $\bar{\Omega}^{\bar{B}}_Q$ is a remnant of cosmic variance of expansion and shear fluctuations on the averaging domain. All these parameters are 'dressed' after smoothing out the geometrical fluctuations, and we give the relations of the 'dressed' to the 'bare' parameters. While the former provide the framework of interpreting observations with a 'Friedmannian bias

  16. Average: the juxtaposition of procedure and context

    Science.gov (United States)

    Watson, Jane; Chick, Helen; Callingham, Rosemary

    2014-09-01

    This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.

  17. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
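
    The element-wise trimmed average underlying the TGA can be illustrated on its own; the sketch below shows only that trimming step on synthetic pixel data, not the Grassmann averaging machinery.

    ```python
    import numpy as np
    from scipy.stats import trim_mean

    rng = np.random.default_rng(2)
    # 100 'images' of 5 pixels each, with gross outliers in every 17th image.
    data = rng.normal(loc=1.0, scale=0.1, size=(100, 5))
    data[::17] += 50.0

    print(np.mean(data, axis=0))         # dragged far from 1.0 by the outliers
    print(trim_mean(data, 0.1, axis=0))  # per-pixel trimmed mean stays near 1.0
    ```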

  18. On Using the Volatile Mem-Capacitive Effect of TiO2 Resistive Random Access Memory to Mimic the Synaptic Forgetting Process

    Science.gov (United States)

    Sarkar, Biplab; Mills, Steven; Lee, Bongmook; Pitts, W. Shepherd; Misra, Veena; Franzon, Paul D.

    2018-02-01

    In this work, we report on mimicking the synaptic forgetting process using the volatile mem-capacitive effect of a resistive random access memory (RRAM). TiO2 dielectric, which is known to show volatile memory operations due to migration of inherent oxygen vacancies, was used to achieve the volatile mem-capacitive effect. By placing the volatile RRAM candidate along with SiO2 at the gate of a MOS capacitor, a volatile capacitance change resembling the forgetting nature of a human brain is demonstrated. Furthermore, the memory operation in the MOS capacitor does not require a current flow through the gate dielectric indicating the feasibility of obtaining low power memory operations. Thus, the mem-capacitive effect of volatile RRAM candidates can be attractive to the future neuromorphic systems for implementing the forgetting process of a human brain.

  19. Random walks on generalized Koch networks

    International Nuclear Information System (INIS)

    Sun, Weigang

    2013-01-01

    For deterministically growing networks, it is a theoretical challenge to determine the topological properties and dynamical processes. In this paper, we study random walks on generalized Koch networks whose initial state is a globally connected network of r nodes. In each step, every existing node produces m complete graphs. We then obtain the analytical expressions for first passage time (FPT), average return time (ART), i.e. the average of FPTs for random walks from node i to return to the starting point i for the first time, and average sending time (AST), defined as the average of FPTs from a hub node to all other nodes, excluding the hub itself, with regard to network parameters m and r. For this family of Koch networks, the ART of the new emerging nodes is identical and increases with the parameters m or r. In addition, the AST of our networks grows with network size N as N ln N and also increases with parameter m. The results obtained in this paper are the generalizations of random walks for the original Koch network. (paper)
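
    ART-type quantities can be sanity-checked with a generic random-walk identity: on a connected undirected graph, the mean return time to node i equals 2|E|/deg(i) (Kac's formula). The Monte Carlo sketch below uses an arbitrary small graph, not a Koch network:

    ```python
    import random

    def mean_return_time(adj, start, trials=20_000, seed=3):
        """Monte Carlo estimate of the average return time of a simple
        random walk to `start` on an undirected graph."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            node = start
            steps = 0
            while True:
                node = rng.choice(adj[node])
                steps += 1
                if node == start:
                    break
            total += steps
        return total / trials

    # A 4-cycle with one chord: |E| = 5, deg(0) = 3.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
    m = sum(len(v) for v in adj.values()) // 2
    print(mean_return_time(adj, 0), 2 * m / len(adj[0]))  # both ~ 10/3
    ```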

  20. A model for Intelligent Random Access Memory architecture (IRAM) cellular automata algorithms on the Associative String Processing machine (ASTRA)

    CERN Document Server

    Rohrbach, F; Vesztergombi, G

    1997-01-01

    In the near future, the computer performance will be completely determined by how long it takes to access memory. There are bottle-necks in memory latency and memory-to-processor interface bandwidth. The IRAM initiative could be the answer by putting Processor-In-Memory (PIM). Starting from the massively parallel processing concept, one reached a similar conclusion. The MPPC (Massively Parallel Processing Collaboration) project and the 8K processor ASTRA machine (Associative String Test bench for Research & Applications) developed at CERN [kuala] can be regarded as a forerunner of the IRAM concept. The computing power of the ASTRA machine, regarded as an IRAM with 64 one-bit processors on a 64×64 bit-matrix memory chip machine, has been demonstrated by running statistical physics algorithms: one-dimensional stochastic cellular automata, as a simple model for dynamical phase transitions. As a relevant result for physics, the damage spreading of this model has been investigated.

  1. Dual N-Back Working Memory Training in Healthy Adults: A Randomized Comparison to Processing Speed Training

    Science.gov (United States)

    Lawlor-Savage, Linette; Goghari, Vina M.

    2016-01-01

    Enhancing cognitive ability is an attractive concept, particularly for middle-aged adults interested in maintaining cognitive functioning and preventing age-related declines. Computerized working memory training has been investigated as a safe method of cognitive enhancement in younger and older adults, although few studies have considered the potential impact of working memory training on middle-aged adults. This study investigated dual n-back working memory training in healthy adults aged 30–60. Fifty-seven adults completed measures of working memory, processing speed, and fluid intelligence before and after a 5-week web-based dual n-back or active control (processing speed) training program. Results: Repeated measures multivariate analysis of variance failed to identify improvements across the three cognitive composites, working memory, processing speed, and fluid intelligence, after training. Follow-up Bayesian analyses supported null findings for training effects for each individual composite. Findings suggest that dual n-back working memory training may not benefit working memory or fluid intelligence in healthy adults. Further investigation is necessary to clarify if other forms of working memory training may be beneficial, and what factors impact training-related benefits, should they occur, in this population. PMID:27043141

  2. Dual N-Back Working Memory Training in Healthy Adults: A Randomized Comparison to Processing Speed Training.

    Directory of Open Access Journals (Sweden)

    Linette Lawlor-Savage

    Full Text Available Enhancing cognitive ability is an attractive concept, particularly for middle-aged adults interested in maintaining cognitive functioning and preventing age-related declines. Computerized working memory training has been investigated as a safe method of cognitive enhancement in younger and older adults, although few studies have considered the potential impact of working memory training on middle-aged adults. This study investigated dual n-back working memory training in healthy adults aged 30-60. Fifty-seven adults completed measures of working memory, processing speed, and fluid intelligence before and after a 5-week web-based dual n-back or active control (processing speed) training program. Repeated measures multivariate analysis of variance failed to identify improvements across the three cognitive composites, working memory, processing speed, and fluid intelligence, after training. Follow-up Bayesian analyses supported null findings for training effects for each individual composite. Findings suggest that dual n-back working memory training may not benefit working memory or fluid intelligence in healthy adults. Further investigation is necessary to clarify if other forms of working memory training may be beneficial, and what factors impact training-related benefits, should they occur, in this population.

  3. Balancing Opposing Forces—A Nested Process Evaluation Study Protocol for a Stepped Wedge Designed Cluster Randomized Controlled Trial of an Experience Based Codesign Intervention

    Directory of Open Access Journals (Sweden)

    Victoria Jane Palmer

    2016-10-01

    Full Text Available Background: Process evaluations are essential to understand the contextual, relational, and organizational and system factors of complex interventions. The guidance for developing process evaluations for randomized controlled trials (RCTs) has until recently, however, been fairly limited. Method/Design: A nested process evaluation (NPE) was designed and embedded across all stages of a stepped wedge cluster RCT called the CORE study. The aim of the CORE study is to test the effectiveness of an experience-based codesign methodology for improving psychosocial recovery outcomes for people living with severe mental illness (service users). Process evaluation data collection combines qualitative and quantitative methods with four aims: (1) to describe organizational characteristics, service models, policy contexts, and government reforms and examine the interaction of these with the intervention; (2) to understand how the codesign intervention works, the cluster variability in implementation, and if the intervention is or is not sustained in different settings; (3) to assist in the interpretation of the primary and secondary outcomes and determine if the causal assumptions underpinning the codesign interventions are accurate; and (4) to determine the impact of a purposefully designed engagement model on the broader study retention and knowledge transfer in the trial. Discussion: Process evaluations require prespecified study protocols but finding a balance between their iterative nature and the structure offered by protocol development is an important step forward. Taking this step will advance the role of qualitative research within trials research and enable more focused data collection to occur at strategic points within studies.

  4. Model averaging, optimal inference and habit formation

    Directory of Open Access Journals (Sweden)

    Thomas H B FitzGerald

    2014-06-01

    Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.

  5. Generalized Jackknife Estimators of Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic...

  6. Average beta measurement in EXTRAP T1

    International Nuclear Information System (INIS)

    Hedin, E.R.

    1988-12-01

    Beginning with the ideal MHD pressure balance equation, an expression for the average poloidal beta, β_Θ, is derived. A method for unobtrusively measuring the quantities used to evaluate β_Θ in Extrap T1 is described. The results of a series of measurements yielding β_Θ as a function of externally applied toroidal field are presented. (author)

  7. HIGH AVERAGE POWER OPTICAL FEL AMPLIFIERS

    International Nuclear Information System (INIS)

    2005-01-01

    Historically, the first demonstration of the optical FEL was in an amplifier configuration at Stanford University [1]. There were other notable instances of amplifying a seed laser, such as the LLNL PALADIN amplifier [2] and the BNL ATF High-Gain Harmonic Generation FEL [3]. However, for the most part FELs are operated as oscillators or self amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can be used as an amplifier. One promising application is for very high average power generation, for instance FELs with average power of 100 kW or more. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting Energy Recovery Linacs (ERL) combine well with the high-gain FEL amplifier to produce unprecedented average power FELs. This combination has a number of advantages. In particular, we show that for a given FEL power, an FEL amplifier can introduce lower energy spread in the beam as compared to a traditional oscillator. This property gives the ERL based FEL amplifier a great wall-plug to optical power efficiency advantage. The optics for an amplifier is simple and compact. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier being designed to operate on the 0.5 ampere Energy Recovery Linac which is under construction at Brookhaven National Laboratory's Collider-Accelerator Department.

  8. Bayesian Averaging is Well-Temperated

    DEFF Research Database (Denmark)

    Hansen, Lars Kai

    2000-01-01

    Bayesian predictions are stochastic just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation is l

  9. Gibbs equilibrium averages and Bogolyubov measure

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    2011-01-01

    Application of the functional integration methods in equilibrium statistical mechanics of quantum Bose-systems is considered. We show that Gibbs equilibrium averages of Bose-operators can be represented as path integrals over a special Gauss measure defined in the corresponding space of continuous functions. We consider some problems related to integration with respect to this measure

  10. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.; Turner, W.C.; Watson, J.A.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper the authors report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  11. Function reconstruction from noisy local averages

    International Nuclear Information System (INIS)

    Chen Yu; Huang Jianguo; Han Weimin

    2008-01-01

    A regularization method is proposed for the function reconstruction from noisy local averages in any dimension. Error bounds for the approximate solution in L 2 -norm are derived. A number of numerical examples are provided to show computational performance of the method, with the regularization parameters selected by different strategies

  12. A singularity theorem based on spatial averages

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, July 2007, pp. 31–47. A singularity theorem based on spatial ... In this paper I would like to present a result which confirms – at least partially – ... A detailed analysis of how the model fits in with the ... Further, the statement that the spatial average ... Financial support under grants FIS2004-01626 and no.

  13. Multiphase averaging of periodic soliton equations

    International Nuclear Information System (INIS)

    Forest, M.G.

    1979-01-01

    The multiphase averaging of periodic soliton equations is considered. Particular attention is given to the periodic sine-Gordon and Korteweg-deVries (KdV) equations. The periodic sine-Gordon equation and its associated inverse spectral theory are analyzed, including a discussion of the spectral representations of exact, N-phase sine-Gordon solutions. The emphasis is on physical characteristics of the periodic waves, with a motivation from the well-known whole-line solitons. A canonical Hamiltonian approach for the modulational theory of N-phase waves is prescribed. A concrete illustration of this averaging method is provided with the periodic sine-Gordon equation; explicit averaging results are given only for the N = 1 case, laying a foundation for a more thorough treatment of the general N-phase problem. For the KdV equation, very general results are given for multiphase averaging of the N-phase waves. The single-phase results of Whitham are extended to general N phases, and more importantly, an invariant representation in terms of Abelian differentials on a Riemann surface is provided. Several consequences of this invariant representation are deduced, including strong evidence for the Hamiltonian structure of N-phase modulational equations

  14. A dynamic analysis of moving average rules

    NARCIS (Netherlands)

    Chiarella, C.; He, X.Z.; Hommes, C.H.

    2006-01-01

    The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type

  15. Essays on model averaging and political economics

    NARCIS (Netherlands)

    Wang, W.

    2013-01-01

    This thesis first investigates various issues related to model averaging, and then evaluates two policies, i.e. West Development Drive in China and fiscal decentralization in the U.S., using econometric tools. Chapter 2 proposes a hierarchical weighted least squares (HWALS) method to address multiple

  16. 7 CFR 1209.12 - On average.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 10 2010-01-01 2010-01-01 false On average. 1209.12 Section 1209.12 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... CONSUMER INFORMATION ORDER Mushroom Promotion, Research, and Consumer Information Order Definitions § 1209...

  17. High average-power induction linacs

    International Nuclear Information System (INIS)

    Prono, D.S.; Barrett, D.; Bowles, E.

    1989-01-01

    Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of ∼ 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs

  18. Average Costs versus Net Present Value

    NARCIS (Netherlands)

    E.A. van der Laan (Erwin); R.H. Teunter (Ruud)

    2000-01-01

    While the net present value (NPV) approach is widely accepted as the right framework for studying production and inventory control systems, average cost (AC) models are more widely used. For the well known EOQ model it can be verified that (under certain conditions) the AC approach gives
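
    For context, the average-cost treatment of the classical EOQ model reduces to the textbook formulas below; this is standard background only, not the paper's NPV comparison.

    ```python
    import math

    def eoq(demand_rate, order_cost, holding_cost):
        """Classical EOQ: batch size minimizing average cost per unit time,
        with the resulting average cost."""
        q = math.sqrt(2.0 * order_cost * demand_rate / holding_cost)
        avg_cost = order_cost * demand_rate / q + holding_cost * q / 2.0
        return q, avg_cost

    q_star, ac = eoq(demand_rate=1200.0, order_cost=50.0, holding_cost=2.0)
    print(q_star, ac)   # Q* ~ 244.9 units, average cost ~ 489.9 per unit time
    ```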

  19. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  20. Achieving involvement: process outcomes from a cluster randomized trial of shared decision making skill development and use of risk communication aids in general practice.

    Science.gov (United States)

    Elwyn, G; Edwards, A; Hood, K; Robling, M; Atwell, C; Russell, I; Wensing, M; Grol, R

    2004-08-01

    A consulting method known as 'shared decision making' (SDM) has been described and operationalized in terms of several 'competences'. One of these competences concerns the discussion of the risks and benefits of treatment or care options: 'risk communication'. Few data exist on clinicians' ability to acquire skills and implement the competences of SDM or risk communication in consultations with patients. The aims of this study were to evaluate the effects of skill development workshops for SDM and the use of risk communication aids on the process of consultations. A cluster randomized trial with crossover was carried out with the participation of 20 recently qualified GPs in urban and rural general practices in Gwent, South Wales. A total of 747 patients with known atrial fibrillation, prostatism, menorrhagia or menopausal symptoms were invited to a consultation to review their condition or treatments. Half the consultations were randomly selected for audio-taping, of which 352 patients attended and were audio-taped successfully. After baseline, participating doctors were randomized to receive training in (i) SDM skills or (ii) the use of simple risk communication aids, using simulated patients. The alternative training was then provided for the final study phase. Patients were allocated randomly to a consultation during baseline or intervention 1 (SDM or risk communication aids) or intervention 2 phases. A randomly selected half of the consultations were audio-taped from each phase. Raters (independent, trained and blinded to study phase) assessed the audio-tapes using a validated scale to assess levels of patient involvement (OPTION: observing patient involvement), and to analyse the nature of risk information discussed. Clinicians completed questionnaires after each consultation, assessing perceived clinician-patient agreement and level of patient involvement in decisions. Multilevel modelling was carried out with the OPTION score as the dependent variable, and

  1. Tendon surveillance requirements - average tendon force

    International Nuclear Information System (INIS)

    Fulton, J.F.

    1982-01-01

    Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)

  2. Consensus in averager-copier-voter networks of moving dynamical agents

    Science.gov (United States)

    Shang, Yilun

    2017-02-01

    This paper deals with a hybrid opinion dynamics comprising averager, copier, and voter agents, which ramble as random walkers on a spatial network. Agents exchange information following some deterministic and stochastic protocols if they reside at the same site at the same time. Based on stochastic stability of Markov chains, sufficient conditions guaranteeing consensus in the sense of almost sure convergence have been obtained. The ultimate consensus state is identified in the form of an ergodicity result. Simulation studies are performed to validate the effectiveness and applicability of our theoretical results. The existence/non-existence of voters and the proportion of them are unveiled to play key roles during the consensus-reaching process.
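
    To make the setup concrete, here is a toy simulation in the spirit of the model; the ring of sites, the meeting rule, and the copier/voter update below are illustrative stand-ins, not the paper's exact protocols or its stochastic-stability analysis.

        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, n_sites, n_steps = 30, 12, 3000
        types = rng.choice(["averager", "copier", "voter"], size=n_agents)
        opinions = rng.uniform(0.0, 1.0, n_agents)
        sites = rng.integers(0, n_sites, n_agents)

        for _ in range(n_steps):
            # agents ramble as random walkers on a ring of sites
            sites = (sites + rng.choice([-1, 1], n_agents)) % n_sites
            for s in range(n_sites):
                here = np.flatnonzero(sites == s)
                if here.size < 2:
                    continue
                mean_here = opinions[here].mean()
                for i in here:
                    if types[i] == "averager":
                        opinions[i] = mean_here              # deterministic averaging
                    else:
                        # copier/voter: imitate a random co-located agent (stochastic)
                        opinions[i] = opinions[rng.choice(here)]

        print("opinion spread after mixing:", np.ptp(opinions))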

  3. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits as compared to simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
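
    The effect is easy to probe numerically. The sketch below measures cycle lengths of rounded (finite-precision) orbits, with a chaotic tent-map signal switching between two stand-in maps; the paper's Robust Chaos family differs from the logistic and tent maps used here, and all parameters are illustrative.

        import numpy as np

        def logistic(x):
            return 4.0 * x * (1.0 - x)

        def tent(x):
            return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

        def cycle_length(x0, s0, switching, precision=4, max_iter=10**6):
            """Length of the cycle a finite-precision orbit falls into: states are
            rounded to `precision` digits, so a revisited state closes a cycle."""
            seen = {}
            x, s = round(x0, precision), round(s0, precision)
            for n in range(max_iter):
                key = (x, s) if switching else x
                if key in seen:
                    return n - seen[key]
                seen[key] = n
                if switching:
                    f = logistic if s < 0.5 else tent   # chaotic signal picks the map
                    s = round(tent(s), precision)
                else:
                    f = logistic
                x = round(f(x), precision)
            return max_iter

        rng = np.random.default_rng(1)
        starts = rng.uniform(0.01, 0.99, (40, 2))
        for sw in (False, True):
            mean_T = np.mean([cycle_length(x, s, sw) for x, s in starts])
            print(f"switching={sw}: average cycle length ~ {mean_T:.0f}")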

  4. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified: higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
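
    A minimal sketch of a migration velocity-adaptive moving average: the averaging window grows with migration time, matching the lower-frequency peaks of slower analytes. The window-growth law and its constants are hypothetical tuning choices, not those of the published algorithm.

        import numpy as np

        def adaptive_moving_average(signal, times, w0=3.0, k=0.5):
            """Moving average whose window (in samples) grows with migration time,
            window ~ w0 + k*t; w0 and k are hypothetical tuning constants."""
            out = np.empty(signal.size)
            for i, t in enumerate(times):
                half = int(w0 + k * t) // 2
                lo, hi = max(0, i - half), min(signal.size, i + half + 1)
                out[i] = signal[lo:hi].mean()
            return out

        # toy electropherogram: a sharp early peak and a broad late peak, plus noise
        t = np.linspace(0.0, 60.0, 1500)                       # ~25 Hz sampling
        clean = np.exp(-((t - 10) / 0.3) ** 2) + 0.6 * np.exp(-((t - 45) / 2.0) ** 2)
        noisy = clean + np.random.default_rng(2).normal(0.0, 0.05, t.size)
        smoothed = adaptive_moving_average(noisy, t)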

  5. Use of a multimedia module to aid the informed consent process in patients undergoing gynecologic laparoscopy for pelvic pain: randomized controlled trial.

    Science.gov (United States)

    Ellett, Lenore; Villegas, Rocio; Beischer, Andrew; Ong, Nicole; Maher, Peter

    2014-01-01

    To determine whether providing additional information to the standard consent process, in the form of a multimedia module (MM), improves patient knowledge about operative laparoscopy without increasing anxiety. Randomized controlled trial (Canadian Task Force classification I). Two outpatient gynecologic clinics, one in a private hospital and the other in a public teaching hospital. Forty-one women aged 19 to 51 years (median, 35.6 years) requiring operative laparoscopy for investigation and treatment of pelvic pain. Following the standard informed consent process, patients were randomized to watch the MM (intervention group, n = 21) or not (control group, n = 20). The surgeon was blinded to the group assignments. All patients completed a knowledge questionnaire and the Spielberger short-form State-Trait Anxiety Inventory. Six weeks after recruitment, patients completed the knowledge questionnaire and the State-Trait Anxiety Inventory a second time to assess knowledge retention and anxiety scores. Patient knowledge of operative laparoscopy, anxiety level, and acceptance of the MM were recorded. The MM intervention group demonstrated superior knowledge scores. Mean (SE) score in the MM group was 11.3 (0.49), and in the control group was 7.9 (0.50) (p <.001) (maximum score, 14). This did not translate into improved knowledge scores 6 weeks later; the score in the MM group was 8.4 (0.53) vs. 7.8 (0.50) in the control group (p = .44). There was no difference in anxiety levels between the groups at intervention or after 6 weeks. Overall, patients found the MM acceptable, and 18 women (86%) in the intervention group and 12 (60%) in the control group stated they would prefer this style of informed consent in the future. Use of an MM enhances the informed consent process by improving patient knowledge, in the short term, without increasing anxiety. Copyright © 2014 AAGL. Published by Elsevier Inc. All rights reserved.

  6. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity by the factors affecting it is carried out by means of the u-substitution method.

  7. Can a combination of average of normals and "real time" External Quality Assurance replace Internal Quality Control?

    Science.gov (United States)

    Badrick, Tony; Graham, Peter

    2018-03-28

    Internal Quality Control (IQC) and External Quality Assurance (EQA) are separate but related processes that have developed independently in laboratory medicine over many years. They have different sample frequencies, statistical interpretations and immediacy. Both processes have evolved, absorbing new understandings of the concept of laboratory error, sample material matrix and assay capability. However, we do not believe, at the coalface, that either process has led to much improvement in patient outcomes recently. It is the increasing reliability and automation of analytical platforms, along with improved stability of reagents, that has reduced systematic and random error, which in turn has minimised the risk of running less frequent IQC. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker responses to and identification of out-of-control situations.
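
    The core of the proposed model is easy to prototype. Below is a minimal, illustrative Average of Normals monitor in Python; the block size, the 3-standard-error limit, the fixed target and the Gaussian toy data are hypothetical choices, not the authors' protocol.

        import numpy as np

        def average_of_normals(results, ref_low, ref_high, target, sd, block=20, k=3.0):
            """Flag analytical drift from consecutive blocks of patient results:
            values outside the reference interval are excluded, and each block mean
            is compared with k standard errors around the expected mean `target`."""
            normals = results[(results >= ref_low) & (results <= ref_high)]
            se = sd / np.sqrt(block)
            return [(i, round(normals[i:i + block].mean(), 2))
                    for i in range(0, normals.size - block + 1, block)
                    if abs(normals[i:i + block].mean() - target) > k * se]

        rng = np.random.default_rng(3)
        sodium = rng.normal(140.0, 2.5, 600)      # toy patient sodium results, mmol/L
        sodium[400:] += 2.0                       # simulated assay drift
        print(average_of_normals(sodium, 133.0, 147.0, target=140.0, sd=2.5))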

  8. Weighted estimates for the averaging integral operator

    Czech Academy of Sciences Publication Activity Database

    Opic, Bohumír; Rákosník, Jiří

    2010-01-01

    Vol. 61, No. 3 (2010), pp. 253-262, ISSN 0010-0757 R&D Projects: GA ČR GA201/05/2033; GA ČR GA201/08/0383 Institutional research plan: CEZ:AV0Z10190503 Keywords: averaging integral operator * weighted Lebesgue spaces * weights Subject RIV: BA - General Mathematics Impact factor: 0.474, year: 2010 http://link.springer.com/article/10.1007%2FBF03191231

  9. Average Transverse Momentum Quantities Approaching the Lightfront

    OpenAIRE

    Boer, Daniel

    2015-01-01

    In this contribution to Light Cone 2014, three average transverse momentum quantities are discussed: the Sivers shift, the dijet imbalance, and the $p_T$ broadening. The definitions of these quantities involve integrals over all transverse momenta that are overly sensitive to the region of large transverse momenta, which conveys little information about the transverse momentum distributions of quarks and gluons inside hadrons. TMD factorization naturally suggests alternative definitions of su...

  10. Average configuration of the geomagnetic tail

    International Nuclear Information System (INIS)

    Fairfield, D.H.

    1979-01-01

    Over 3000 hours of Imp 6 magnetic field data obtained between 20 and 33 R_E in the geomagnetic tail have been used in a statistical study of the tail configuration. A distribution of 2.5-min averages of B_z as a function of position across the tail reveals that more flux crosses the equatorial plane near the dawn and dusk flanks (average B_z = 3.γ) than near midnight (average B_z = 1.8γ). The tail field projected in the solar magnetospheric equatorial plane deviates from the x axis due to flaring and solar wind aberration by an angle α = -0.9 Y_SM - 2.7, where Y_SM is in earth radii and α is in degrees. After removing these effects, the B_y component of the tail field is found to depend on interplanetary sector structure. During an 'away' sector the B_y component of the tail field is on average 0.5γ greater than that during a 'toward' sector, a result that is true in both tail lobes and is independent of location across the tail. This effect means the average field reversal between northern and southern lobes of the tail is more often 178° rather than the 180° that is generally supposed.

  11. Unscrambling The "Average User" Of Habbo Hotel

    Directory of Open Access Journals (Sweden)

    Mikael Johnson

    2007-01-01

    Full Text Available The “user” is an ambiguous concept in human-computer interaction and information systems. Analyses of users as social actors, participants, or configured users delineate approaches to studying design-use relationships. Here, a developer’s reference to a figure of speech, termed the “average user,” is contrasted with design guidelines. The aim is to create an understanding about categorization practices in design through a case study about the virtual community, Habbo Hotel. A qualitative analysis highlighted not only the meaning of the “average user,” but also the work that both the developer and the category contribute to this meaning. The average user (a) represents the unknown, (b) influences the boundaries of the target user groups, (c) legitimizes the designer to disregard marginal user feedback, and (d) keeps the design space open, thus allowing for creativity. The analysis shows how design and use are intertwined and highlights the developers’ role in governing different users’ interests.

  12. Changing mortality and average cohort life expectancy

    Directory of Open Access Journals (Sweden)

    Robert Schoen

    2005-10-01

    Full Text Available Period life expectancy varies with changes in mortality, and should not be confused with the life expectancy of those alive during that period. Given past and likely future mortality changes, a recent debate has arisen on the usefulness of the period life expectancy as the leading measure of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here, we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate measures of mortality are calculated for England and Wales, Norway, and Switzerland for the years 1880 to 2000. CAL is found to be sensitive to past and present changes in death rates. ACLE requires the most data, but gives the best representation of the survivorship of cohorts present at a given time.

  13. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    Science.gov (United States)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
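
    For intuition, the classical real-valued baseline that surplus-based algorithms generalize is pairwise randomized gossip on an undirected graph, where each meeting replaces two states by their mean and so preserves the state sum. A minimal sketch follows (the ring topology and step count are arbitrary choices; the paper's integer-valued, directed-graph algorithm with surplus variables is substantially more involved).

        import numpy as np

        rng = np.random.default_rng(4)
        n = 8
        x = rng.uniform(0.0, 100.0, n)                    # initial node states
        edges = [(i, (i + 1) % n) for i in range(n)]      # undirected ring graph
        target = x.mean()

        for _ in range(5000):
            i, j = edges[rng.integers(len(edges))]        # a random edge "gossips"
            x[i] = x[j] = (x[i] + x[j]) / 2.0             # pairwise mean preserves the sum

        print("max deviation from the average:", np.abs(x - target).max())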

  14. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least square fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
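
    The key point, that the fit can keep the simple weighted-least-squares estimator while the error bar is propagated through the full covariance matrix, can be sketched in a few lines. Below is an illustrative fit of ensemble-averaged squared displacements of simulated Brownian trajectories to MSD(t) = 2Dt; this is a toy reconstruction of the idea, not the published WLS-ICE software.

        import numpy as np

        rng = np.random.default_rng(5)
        M, N, dt, D_true = 200, 50, 0.1, 1.0
        steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (M, N))
        sq = np.cumsum(steps, axis=1) ** 2        # squared displacement, M trajectories
        t = dt * np.arange(1, N + 1)
        y = sq.mean(axis=0)                       # ensemble-averaged MSD
        C = np.cov(sq, rowvar=False) / M          # full covariance of the mean: correlated in time

        A = t[:, None]                            # design matrix for MSD(t) = theta * t, theta = 2D
        W = np.diag(1.0 / np.diag(C))             # ordinary WLS weights ignore the off-diagonals
        H = np.linalg.inv(A.T @ W @ A) @ A.T @ W  # the usual WLS estimator matrix
        theta = (H @ y).item()
        se_naive = np.sqrt(np.linalg.inv(A.T @ W @ A).item())   # textbook WLS error
        se_full = np.sqrt((H @ C @ H.T).item())   # error propagated through the full C

        print(f"D = {theta / 2:.3f}, naive SE = {se_naive / 2:.4f}, full-C SE = {se_full / 2:.4f}")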

  15. On Random Numbers and Design

    Science.gov (United States)

    Ben-Ari, Morechai

    2004-01-01

    The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

  16. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.
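
    The per-pixel conversion step can be sketched as follows; the attenuation coefficient, column thickness, and the neglect of beam-hardening and scattering corrections are simplifying assumptions made for illustration only.

        import numpy as np

        def water_content_map(I, I_dry, mu_w=0.35, d=1.0):
            """Per-pixel volumetric water content from transmission images via
            Beer-Lambert's law, I = I_dry * exp(-mu_w * theta * d). mu_w (effective
            attenuation, 1/cm) and d (column thickness, cm) are placeholder values."""
            with np.errstate(divide="ignore", invalid="ignore"):
                theta = -np.log(I / I_dry) / (mu_w * d)
            return np.clip(theta, 0.0, 1.0)

        I_dry = np.full((4, 4), 1000.0)
        I_wet = I_dry * np.exp(-0.35 * 0.25 * 1.0)     # a uniform theta = 0.25 phantom
        print(water_content_map(I_wet, I_dry).mean())  # ~0.25

        # relative saturation, as in the abstract: normalize by the saturated image
        # rel_sat = water_content_map(I, I_dry) / water_content_map(I_sat, I_dry)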

  17. Performance analysis of spectral-phase-encoded optical code-division multiple-access system regarding the incorrectly decoded signal as a nonstationary random process

    Science.gov (United States)

    Yan, Meng; Yao, Minyu; Zhang, Hongming

    2005-11-01

    The performance of a spectral-phase-encoded (SPE) optical code-division multiple-access (OCDMA) system is analyzed. Regarding the incorrectly decoded signal (IDS) as a nonstationary random process, we derive a novel probability distribution for it. The probability distribution of the IDS is considered a chi-squared distribution with degrees of freedom r=1, which is more reasonable and accurate than in previous work. The bit error rate (BER) of an SPE OCDMA system under multiple-access interference is evaluated. Numerical results show that the system can sustain very low BER even when there are multiple simultaneous users, and as the code length becomes longer or the initial pulse becomes shorter, the system performs better.

  18. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)
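
    A heavily simplified sketch of the pipeline: a calibration curve (standing in for the paper's neural-network fit) converts pixel values to glandular rates, whose breast average then selects a dose-conversion coefficient. All numbers below are placeholders, not clinical calibration data.

        import numpy as np

        # Hypothetical calibration: mean pixel values of breast-equivalent phantoms
        # with known glandular rates.
        cal_pixels = np.array([120.0, 150.0, 180.0, 210.0, 240.0])
        cal_gland = np.array([0.0, 0.25, 0.50, 0.75, 1.00])

        # Placeholder dose-conversion table: glandular fraction -> mGy per mGy air kerma.
        dgn_gland = np.array([0.0, 0.5, 1.0])
        dgn_coeff = np.array([0.28, 0.22, 0.18])

        def average_glandular_dose(pixels, entrance_air_kerma):
            g = np.interp(pixels, cal_pixels, cal_gland).mean()  # breast-average glandular rate
            return entrance_air_kerma * np.interp(g, dgn_gland, dgn_coeff)

        breast_pixels = np.random.default_rng(6).uniform(140.0, 220.0, (256, 256))
        print(f"AGD ~ {average_glandular_dose(breast_pixels, 8.0):.2f} mGy")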

  19. Load-Dependent Interference of Deep Brain Stimulation of the Subthalamic Nucleus with Switching from Automatic to Controlled Processing During Random Number Generation in Parkinson's Disease.

    Science.gov (United States)

    Williams, Isobel Anne; Wilkinson, Leonora; Limousin, Patricia; Jahanshahi, Marjan

    2015-01-01

    Deep brain stimulation of the subthalamic nucleus (STN DBS) ameliorates the motor symptoms of Parkinson's disease (PD). However, some aspects of executive control are impaired with STN DBS. We tested the prediction that (i) STN DBS interferes with switching from automatic to controlled processing during fast-paced random number generation (RNG) (ii) STN DBS-induced cognitive control changes are load-dependent. Fifteen PD patients with bilateral STN DBS performed paced-RNG, under three levels of cognitive load synchronised with a pacing stimulus presented at 1, 0.5 and 0.33 Hz (faster rates require greater cognitive control), with DBS on or off. Measures of output randomness were calculated. Countscore 1 (CS1) indicates habitual counting in steps of one. Countscore 2 (CS2) indicates a more controlled strategy of counting in twos. The fastest rate was associated with an increased CS1 score with STN DBS on compared to off. At the slowest rate, patients had higher CS2 scores with DBS off than on, such that the differences between CS1 and CS2 scores disappeared. We provide evidence for a load-dependent effect of STN DBS on paced RNG in PD. Patients could switch to more controlled RNG strategies during conditions of low cognitive load at slower rates only when the STN stimulators were off, but when STN stimulation was on, they engaged in more automatic habitual counting under increased cognitive load. These findings are consistent with the proposal that the STN implements a switch signal from the medial frontal cortex which enables a shift from automatic to controlled processing.
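
    One plausible way to compute such count scores from a response sequence is the fraction of adjacent responses that step by one or by two; the exact formula used in the RNG literature may differ.

        import numpy as np

        def count_scores(seq):
            """CS1/CS2 as the fraction of adjacent responses that differ by one or
            by two; a plausible reading of the count-score indices."""
            d = np.abs(np.diff(np.asarray(seq)))
            return float(np.mean(d == 1)), float(np.mean(d == 2))

        cs1, cs2 = count_scores([3, 4, 5, 7, 9, 2, 4, 6, 5, 4])
        print(f"CS1 = {cs1:.2f}, CS2 = {cs2:.2f}")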

  20. Meaning in meaninglessness: The propensity to perceive meaningful patterns in coincident events and randomly arranged stimuli is linked to enhanced attention in early sensory processing.

    Science.gov (United States)

    Rominger, Christian; Schulter, Günter; Fink, Andreas; Weiss, Elisabeth M; Papousek, Ilona

    2018-05-01

    Perception of objectively independent events or stimuli as being significantly connected and the associated proneness to perceive meaningful patterns constitute part of the positive symptoms of schizophrenia, which are associated with altered attentional processes in lateralized speech perception. Since perceiving meaningful patterns is to some extent already prevalent in the general population, the aim of the study was to investigate whether the propensity to experience meaningful patterns in co-occurring events and random stimuli may be associated with similar altered attentional processes in lateralized speech perception. Self-reported and behavioral indicators of the perception of meaningful patterns were assessed in non-clinical individuals, along with EEG auditory evoked potentials during the performance of an attention related lateralized speech perception task (Dichotic Listening Test). A greater propensity to perceive meaningful patterns was associated with higher N1 amplitudes of the evoked potentials to the onset of the dichotically presented consonant-vowel syllables, indicating enhanced automatic attention in early sensory processing. The study suggests that more basic mechanisms in how people associate events may play a greater role in the cognitive biases that are manifest in personality expressions such as positive schizotypy, rather than that positive schizotypy moderates these cognitive biases directly. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Almond Consumption and Processing Affects the Composition of the Gastrointestinal Microbiota of Healthy Adult Men and Women: A Randomized Controlled Trial

    Directory of Open Access Journals (Sweden)

    Hannah D. Holscher

    2018-01-01

    Full Text Available Background: Almond processing has been shown to differentially impact metabolizable energy; however, the effect of food form on the gastrointestinal microbiota is under-investigated. Objective: We aimed to assess the interrelationship of almond consumption and processing on the gastrointestinal microbiota. Design: A controlled-feeding, randomized, five-period, crossover study with washouts between diet periods was conducted in healthy adults (n = 18). Treatments included: (1) zero servings/day of almonds (control); (2) 1.5 servings (42 g)/day of whole almonds; (3) 1.5 servings/day of whole, roasted almonds; (4) 1.5 servings/day of roasted, chopped almonds; and (5) 1.5 servings/day of almond butter. Fecal samples were collected at the end of each three-week diet period. Results: Almond consumption increased the relative abundances of Lachnospira, Roseburia, and Dialister (p ≤ 0.05). Comparisons between control and the four almond treatments revealed that chopped almonds increased Lachnospira, Roseburia, and Oscillospira compared to control (p < 0.05), while whole almonds increased Dialister compared to control (p = 0.007). There were no differences between almond butter and control. Conclusions: These results reveal that almond consumption induced changes in the microbial community composition of the human gastrointestinal microbiota. Furthermore, the degree of almond processing (e.g., roasting, chopping, and grinding into butter) differentially impacted the relative abundances of bacterial genera.

  2. Unpredictable visual changes cause temporal memory averaging.

    Science.gov (United States)

    Ohyama, Junji; Watanabe, Katsumi

    2007-09-01

    Various factors influence the perceived timing of visual events. Yet, little is known about the ways in which transient visual stimuli affect the estimation of the timing of other visual events. In the present study, we examined how a sudden color change of an object would influence the remembered timing of another transient event. In each trial, subjects saw a green or red disk travel in circular motion. A visual flash (white frame) occurred at random times during the motion sequence. The color of the disk changed either at random times (unpredictable condition), at a fixed time relative to the motion sequence (predictable condition), or it did not change (no-change condition). The subjects' temporal memory of the visual flash in the predictable condition was as veridical as that in the no-change condition. In the unpredictable condition, however, the flash was reported to occur closer to the timing of the color change than its actual timing. Thus, an unpredictable visual change distorts the temporal memory of another visual event such that the remembered moment of the event is closer to the timing of the unpredictable visual change.

  3. Operator product expansion and its thermal average

    Energy Technology Data Exchange (ETDEWEB)

    Mallik, S [Saha Inst. of Nuclear Physics, Calcutta (India)

    1998-05-01

    QCD sum rules at finite temperature, like the ones at zero temperature, require the coefficients of local operators, which arise in the short distance expansion of the thermal average of two-point functions of currents. We extend the configuration space method, applied earlier at zero temperature, to the case at finite temperature. We find that, up to dimension four, two new operators arise, in addition to the two appearing already in the vacuum correlation functions. It is argued that the new operators would contribute substantially to the sum rules, when the temperature is not too low. (orig.) 7 refs.

  4. Phase-averaged transport for quasiperiodic Hamiltonians

    CERN Document Server

    Bellissard, J; Schulz-Baldes, H

    2002-01-01

    For a class of discrete quasi-periodic Schroedinger operators defined by covariant representations of the rotation algebra, a lower bound on phase-averaged transport in terms of the multifractal dimensions of the density of states is proven. This result is established under a Diophantine condition on the incommensuration parameter. The relevant class of operators is distinguished by invariance with respect to symmetry automorphisms of the rotation algebra. It includes the critical Harper (almost-Mathieu) operator. As a by-product, a new solution of the frame problem associated with Weyl-Heisenberg-Gabor lattices of coherent states is given.

  5. Multistage parallel-serial time averaging filters

    International Nuclear Information System (INIS)

    Theodosiou, G.E.

    1980-01-01

    Here, a new time averaging circuit design, the 'parallel filter', is presented, which can reduce the time jitter introduced in time measurements using counters of large dimensions. This parallel filter could be considered as a single-stage unit circuit which can be repeated an arbitrary number of times in series, thus providing a parallel-serial filter type as a result. The main advantages of such a filter over a serial one are much less electronic gate jitter and time delay for the same amount of total time uncertainty reduction. (orig.)

  6. Time-dependent angularly averaged inverse transport

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jollivet, Alexandre

    2009-01-01

    This paper concerns the reconstruction of the absorption and scattering parameters in a time-dependent linear transport equation from knowledge of angularly averaged measurements performed at the boundary of a domain of interest. Such measurement settings find applications in medical and geophysical imaging. We show that the absorption coefficient and the spatial component of the scattering coefficient are uniquely determined by such measurements. We obtain stability results on the reconstruction of the absorption and scattering parameters with respect to the measured albedo operator. The stability results are obtained by a precise decomposition of the measurements into components with different singular behavior in the time domain

  7. Independence, Odd Girth, and Average Degree

    DEFF Research Database (Denmark)

    Löwenstein, Christian; Pedersen, Anders Sune; Rautenbach, Dieter

    2011-01-01

    We prove several tight lower bounds in terms of the order and the average degree for the independence number of graphs that are connected and/or satisfy some odd girth condition. Our main result is the extension of a lower bound for the independence number of triangle-free graphs of maximum degree at most three due to Heckman and Thomas [Discrete Math 233 (2001), 233–237] to arbitrary triangle-free graphs. For connected triangle-free graphs of order n and size m, our result implies the existence of an independent set of order at least (4n−m−1)/7.

  8. Bootstrapping Density-Weighted Average Derivatives

    DEFF Research Database (Denmark)

    Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael

    Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...... (1989). In many cases validity of bootstrap-based inference procedures is found to depend crucially on whether the bandwidth sequence satisfies a particular (asymptotic linearity) condition. An exception to this rule occurs for inference procedures involving a studentized estimator employing a "robust...

  9. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

    The rough properties of nuclei were investigated by a statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the a_s surface-energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta-stability lines and mass excess, the surface symmetry energy was established. (M.C.K.)

  10. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator…

  11. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  12. Impact of acute administration of escitalopram on the processing of emotional and neutral images: a randomized crossover fMRI study of healthy women.

    Science.gov (United States)

    Outhred, Tim; Das, Pritha; Felmingham, Kim L; Bryant, Richard A; Nathan, Pradeep J; Malhi, Gin S; Kemp, Andrew H

    2014-07-01

    Acute neural effects of antidepressant medication on emotion processing biases may provide the foundation on which clinical outcomes are based. Along with effects on positive and negative stimuli, acute effects on neutral stimuli may also relate to antidepressant efficacy, yet these effects are still to be investigated. The present study therefore examined the impact of a single dose of the selective serotonin reuptake inhibitor escitalopram (20 mg) on positive, negative and neutral stimuli using pharmaco-fMRI. Within a double-blind, randomized, placebo-controlled crossover design, healthy women completed 2 sessions of treatment administration and fMRI scanning separated by a 1-week washout period. We enrolled 36 women in our study. When participants were administered escitalopram relative to placebo, left amygdala activity was increased and right inferior frontal gyrus (IFG) activity was decreased during presentation of positive pictures (potentiation of positive emotion processing). In contrast, escitalopram was associated with decreased left amygdala and increased right IFG activity during presentation of negative pictures (attenuation of negative emotion processing). In addition, escitalopram decreased right IFG activity during the processing of neutral stimuli, akin to the effects on positive stimuli (decrease in negative appraisal). Although we used a women-only sample to reduce heterogeneity, our results may not generalize to men. Potential unblinding, which was related to the subjective occurrence of side effects, occurred in the study; however, manipulation check analyses demonstrated that results were not impacted. These novel findings demonstrate that a single dose of the commonly prescribed escitalopram facilitates a positive information processing bias. These findings provide an important lead for better understanding effects of antidepressant medication.

  13. Randomly transitional phenomena in the system governed by Duffing's equation

    International Nuclear Information System (INIS)

    Ueda, Yoshisuke.

    1978-06-01

    This paper deals with turbulent or chaotic phenomena which occur in the system governed by Duffing's equation, a special type of 2-dimensional periodic system. By using analog and digital computers, experiments are undertaken with special reference to the changes of attractors and of average power spectra of the random processes under variation of the system parameters. On the basis of the experimental results, an outline of the random process is made clear. The results obtained in this paper will be applied to phenomena of the same kind which occur in 3-dimensional autonomous systems. (author)
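
    The numerical experiments are straightforward to reproduce in outline. The sketch below integrates Ueda's form of Duffing's equation, x'' + k x' + x^3 = B cos t, at a commonly cited chaotic parameter choice and estimates an averaged power spectrum; a broadband, continuous averaged spectrum is the signature of the chaotic ("randomly transitional") regime.

        import numpy as np
        from scipy.integrate import solve_ivp

        k, B = 0.1, 12.0                      # a commonly cited chaotic regime

        def duffing(t, y):
            x, v = y
            return [v, -k * v - x**3 + B * np.cos(t)]

        fs, t_end = 8.0, 2000.0               # sample after discarding transients
        t_eval = np.arange(1000.0, t_end, 1.0 / fs)
        sol = solve_ivp(duffing, (0.0, t_end), [1.0, 0.0], t_eval=t_eval,
                        rtol=1e-8, atol=1e-8)

        x = sol.y[0]
        segs = x[: (x.size // 1024) * 1024].reshape(-1, 1024) * np.hanning(1024)
        psd = np.mean(np.abs(np.fft.rfft(segs, axis=1)) ** 2, axis=0)
        freqs = np.fft.rfftfreq(1024, d=1.0 / fs)
        # a broadband, continuous averaged spectrum indicates the chaotic regime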

  14. Effects of a Community-Based, Post-Rehabilitation Exercise Program in COPD: Protocol for a Randomized Controlled Trial With Embedded Process Evaluation.

    Science.gov (United States)

    Desveaux, Laura; Beauchamp, Marla K; Lee, Annemarie; Ivers, Noah; Goldstein, Roger; Brooks, Dina

    2016-05-11

    This manuscript (1) outlines the intervention, (2) describes how its effectiveness is being evaluated in a pragmatic randomized controlled trial, and (3) summarizes the embedded process evaluation aiming to understand key barriers and facilitators for implementation in new environments. Participating centers refer eligible individuals with COPD following discharge from their local PR program. Consenting patients are assigned to a year-long community exercise program or usual care using block randomization and stratifying for supplemental oxygen use. Patients in the intervention arm are asked to attend an exercise session at least twice per week at their local community facility where their progress is supervised by a case manager. Each exercise session includes a component of aerobic exercise, and activities designed to optimize balance, flexibility, and strength. All study participants will have access to routine follow-up appointments with their respiratory physician, and additional health care providers as part of their usual care. Assessments will be completed at baseline (post-PR), 6, and 12 months, and include measures of functional exercise capacity, quality of life, self-efficacy, and health care usage. Intervention effectiveness will be assessed by comparing functional exercise capacity between intervention and control groups. A mixed-methods process evaluation will be conducted to better understand intervention implementation, guided by Normalization Process Theory and the Consolidated Framework for Implementation Research. Based on results from our pilot work, we anticipate a maintenance of exercise capacity and improved health-related quality of life in the intervention group, compared with a decline in exercise capacity in the usual care group. Findings from this study will improve our understanding of the effectiveness of community-based exercise programs for maintaining benefits following PR in patients with COPD and provide information on how best…

  15. Trajectory averaging for stochastic approximation MCMC algorithms

    KAUST Repository

    Liang, Faming

    2010-10-01

    The subject of stochastic approximation was founded by Robbins and Monro [Ann. Math. Statist. 22 (1951) 400-407]. After five decades of continual development, it has developed into an important area in systems control and optimization, and it has also served as a prototype for the development of adaptive algorithms for on-line estimation and control of stochastic systems. Recently, it has been used in statistics with Markov chain Monte Carlo for solving maximum likelihood estimation problems and for general simulation and optimizations. In this paper, we first show that the trajectory averaging estimator is asymptotically efficient for the stochastic approximation MCMC (SAMCMC) algorithm under mild conditions, and then apply this result to the stochastic approximation Monte Carlo algorithm [Liang, Liu and Carroll J. Amer. Statist. Assoc. 102 (2007) 305-320]. The application of the trajectory averaging estimator to other stochastic approximation MCMC algorithms, for example, a stochastic approximation MLE algorithm for missing data problems, is also considered in the paper. © Institute of Mathematical Statistics, 2010.
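
    The idea of trajectory averaging is easiest to see on a plain Robbins-Monro iteration (not the SAMCMC setting of the paper): with a slowly decaying gain, averaging the iterates along the trajectory gives a far less noisy estimate than the last iterate alone. A minimal sketch with an illustrative root-finding problem:

        import numpy as np

        rng = np.random.default_rng(7)
        mu = 2.5                                  # the root we seek: E[X] with X ~ N(mu, 1)
        theta, total, burn_in, n_iter = 0.0, 0.0, 1000, 20000

        for n in range(1, n_iter + 1):
            gamma = n ** -0.6                     # slowly decaying gain, as averaging requires
            theta -= gamma * (theta - rng.normal(mu, 1.0))   # Robbins-Monro step
            if n > burn_in:
                total += theta                    # accumulate the trajectory

        theta_bar = total / (n_iter - burn_in)
        print(f"last iterate: {theta:.4f}, trajectory average: {theta_bar:.4f}")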

  16. Averaged null energy condition from causality

    Science.gov (United States)

    Hartman, Thomas; Kundu, Sandipan; Tajdini, Amirhossein

    2017-07-01

    Unitary, Lorentz-invariant quantum field theories in flat spacetime obey microcausality: commutators vanish at spacelike separation. For interacting theories in more than two dimensions, we show that this implies that the averaged null energy, ∫ du T_uu, must be non-negative. This non-local operator appears in the operator product expansion of local operators in the lightcone limit, and therefore contributes to n-point functions. We derive a sum rule that isolates this contribution and is manifestly positive. The argument also applies to certain higher spin operators other than the stress tensor, generating an infinite family of new constraints of the form ∫ du X_uuu···u ≥ 0. These lead to new inequalities for the coupling constants of spinning operators in conformal field theory, which include as special cases (but are generally stronger than) the existing constraints from the lightcone bootstrap, deep inelastic scattering, conformal collider methods, and relative entropy. We also comment on the relation to the recent derivation of the averaged null energy condition from relative entropy, and suggest a more general connection between causality and information-theoretic inequalities in QFT.

  17. Beta-energy averaging and beta spectra

    International Nuclear Information System (INIS)

    Stamatelatos, M.G.; England, T.R.

    1976-07-01

    A simple yet highly accurate method for approximately calculating spectrum-averaged beta energies and beta spectra for radioactive nuclei is presented. This method should prove useful for users who wish to obtain accurate answers without complicated calculations of Fermi functions, complex gamma functions, and time-consuming numerical integrations as required by the more exact theoretical expressions. Therefore, this method should be a good time-saving alternative for investigators who need to make calculations involving large numbers of nuclei (e.g., fission products) as well as for occasional users interested in a restricted number of nuclides. The average beta-energy values calculated by this method differ from those calculated by 'exact' methods by no more than 1 percent for nuclides with atomic numbers in the 20 to 100 range and which emit betas of energies up to approximately 8 MeV. These include all fission products and the actinides. The beta-energy spectra calculated by the present method are also of the same quality.

  18. Chaotic Universe, Friedmannian on the average 2

    Energy Technology Data Exchange (ETDEWEB)

    Marochnik, L S [AN SSSR, Moscow. Inst. Kosmicheskikh Issledovanij

    1980-11-01

    The cosmological solutions are found for the equations for correlators, describing a statistically chaotic Universe, Friedmannian on the average, in which delta-correlated fluctuations with amplitudes h ≫ 1 are excited. For the equation of state of matter p = nε, the kind of solutions depends on the position of the maximum of the spectrum of the metric disturbances. The expansion of the Universe, in which long-wave potential and vortical motions and gravitational waves (modes diverging at t → 0) had been excited, tends asymptotically to the Friedmannian one as t → ∞ and depends critically on n: at n < 0.26, the solution for the scale factor is situated higher than the Friedmannian one, and lower at n > 0.26. The influence of long-wave fluctuation modes finite at t → 0 leads to an averaged quasi-isotropic solution. The contribution of quantum fluctuations and of short-wave parts of the spectrum of classical fluctuations to the expansion law is considered. Their influence is equivalent to the contribution from an ultrarelativistic gas with corresponding energy density and pressure. Restrictions are obtained for the degree of chaos (the spectrum characteristics) compatible with the observed helium abundance, which could have been retained by a completely chaotic Universe during its expansion up to the nucleosynthesis epoch.

  19. Averaging in the presence of sliding errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1991-08-01

    In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms
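
    A minimal sketch of the kind of procedure proposed: rather than weighting each measurement by the error quoted at its own value, the errors are re-evaluated at the current average and the average recomputed until it converges. The relative-error model used in the demo is an illustrative assumption.

        import numpy as np

        def average_with_sliding_errors(y, sigma_at, tol=1e-10, max_iter=100):
            """Weighted average when quoted errors depend on the measured value:
            re-evaluate every experiment's error at the current estimate of the
            true value (not at its own result) and iterate to convergence."""
            mu = y.mean()
            for _ in range(max_iter):
                w = 1.0 / sigma_at(mu) ** 2       # same central value for everyone
                mu_new = np.sum(w * y) / np.sum(w)
                if abs(mu_new - mu) < tol:
                    break
                mu = mu_new
            return mu, 1.0 / np.sqrt(np.sum(w))

        rng = np.random.default_rng(8)
        y = rng.normal(10.0, 1.0, 25)             # true value 10, 10% relative errors
        naive = np.sum(y / (0.1 * y) ** 2) / np.sum(1.0 / (0.1 * y) ** 2)
        mu, err = average_with_sliding_errors(y, lambda m: 0.1 * m * np.ones(y.size))
        print(f"naive: {naive:.3f}, iterated: {mu:.3f} +/- {err:.3f}")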

  20. Random walk on random walks

    NARCIS (Netherlands)

    Hilário, M.; Hollander, den W.Th.F.; Sidoravicius, V.; Soares dos Santos, R.; Teixeira, A.

    2014-01-01

    In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density ρ ∈ (0, ∞). At each step the random walk performs a nearest-neighbour jump, moving to…

  1. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness) for mild vs moderate pain, and the highest probability was 76.4% (on cold/heat) for mild vs severe pain. The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner. Pain…

  2. Application of NMR circuit for superconducting magnet using signal averaging

    International Nuclear Information System (INIS)

    Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.

    1977-01-01

    An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented
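
    The underlying principle, averaging repeated sweeps so that the coherent signal adds linearly while noise adds only as the square root of the number of sweeps, can be sketched in a few lines of Python; the decay and noise parameters are arbitrary toy values, not NMR instrument data.

        import numpy as np

        rng = np.random.default_rng(9)
        t = np.linspace(0.0, 1.0, 2000)
        fid = np.exp(-3.0 * t) * np.cos(2 * np.pi * 60.0 * t)   # toy free-induction decay
        sweeps = fid + rng.normal(0.0, 0.5, (256, t.size))      # repeated noisy acquisitions

        for n in (1, 16, 256):
            avg = sweeps[:n].mean(axis=0)
            snr = np.abs(avg).max() / (avg - fid).std()
            print(f"n = {n:3d} sweeps: S/N ~ {snr:5.1f}")       # grows roughly as sqrt(n)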

  3. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropy-like measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H = ln W, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) the algorithmic information content, algorithmic randomness, present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can 'decide' on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite.

  4. A Note on Functional Averages over Gaussian Ensembles

    Directory of Open Access Journals (Sweden)

    Gabriel H. Tucci

    2013-01-01

    Full Text Available We find a new formula for matrix averages over the Gaussian ensemble. Let H be an n×n Gaussian random matrix with complex, independent, and identically distributed entries of zero mean and unit variance. Given an n×n positive definite matrix A and a continuous function f: ℝ⁺ → ℝ such that ∫₀^∞ e^(−αt)|f(t)|² dt < ∞ for every α > 0, we find a new formula for the expectation E[Tr(f(HAH*))]. Taking f(x) = log(1+x) gives another formula for the capacity of the MIMO communication channel, and taking f(x) = (1+x)^(−1) gives the MMSE achieved by a linear receiver.

  5. High average power linear induction accelerator development

    International Nuclear Information System (INIS)

    Bayless, J.R.; Adler, R.J.

    1987-07-01

    There is increasing interest in linear induction accelerators (LIAs) for applications including free electron lasers, high power microwave generators and other types of radiation sources. Lawrence Livermore National Laboratory has developed LIA technology in combination with magnetic pulse compression techniques to achieve very impressive performance levels. In this paper we will briefly discuss the LIA concept and describe our development program. Our goals are to improve the reliability and reduce the cost of LIA systems. An accelerator is presently under construction to demonstrate these improvements at an energy of 1.6 MeV in 2 kA, 65 ns beam pulses at an average beam power of approximately 30 kW. The unique features of this system are a low cost accelerator design and an SCR-switched, magnetically compressed, pulse power system. 4 refs., 7 figs

  6. FEL system with homogeneous average output

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, David R.; Legg, Robert; Whitney, R. Roy; Neil, George; Powers, Thomas Joseph

    2018-01-16

    A method of varying the output of a free electron laser (FEL) on very short time scales to produce a slightly broader, but smooth, time-averaged wavelength spectrum. The method includes injecting into an accelerator a sequence of bunch trains at phase offsets from crest, then accelerating the particles to full energy to result in distinct and independently controlled (by the choice of phase offset) phase-energy correlations or chirps on each bunch train. The earlier trains will be more strongly chirped, the later trains less chirped. For an energy recovered linac (ERL), the beam may be recirculated using a transport system with linear and nonlinear momentum compactions M_56, which are selected to compress all three bunch trains at the FEL, with higher order terms managed.

  7. Quetelet, the average man and medical knowledge.

    Science.gov (United States)

    Caponi, Sandra

    2013-01-01

    Using two books by Adolphe Quetelet, I analyze his theory of the 'average man', which associates biological and social normality with the frequency with which certain characteristics appear in a population. The books are Sur l'homme et le développement de ses facultés and Du systeme social et des lois qui le régissent. Both reveal that Quetelet's ideas are permeated by explanatory strategies drawn from physics and astronomy, and also by discursive strategies drawn from theology and religion. The stability of the mean as opposed to the dispersion of individual characteristics and events provided the basis for the use of statistics in social sciences and medicine.

  9. Asymmetric network connectivity using weighted harmonic averages

    Science.gov (United States)

    Morrison, Greg; Mahadevan, L.

    2011-02-01

    We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph using a simple weighted harmonic average of connectivity, that is, a real-valued Generalized Erdős Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and we use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdős numbers in mathematical coauthorships. We also show the utility of our approach by devising a ratings scheme that we apply to the data from the Netflix Prize, finding a significant improvement using our method over a baseline.
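
    The record does not reproduce the GEN formula itself. As a generic illustration of the building block it names, the sketch below computes a weighted harmonic average of the connection weights seen from one node; the function name and toy data are assumptions, not the paper's definition:

    ```python
    def weighted_harmonic_average(weights, values):
        """Weighted harmonic mean: sum(w_i) / sum(w_i / x_i)."""
        return sum(weights) / sum(w / x for w, x in zip(weights, values))

    # Toy example: a "closeness" score built from three weighted connections
    print(weighted_harmonic_average([1.0, 2.0, 1.0], [1.0, 0.5, 2.0]))
    ```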

  10. Angle-averaged Compton cross sections

    International Nuclear Information System (INIS)

    Nickel, G.H.

    1983-01-01

    The scattering of a photon by an individual free electron is characterized by six quantities: α = initial photon energy in units of m₀c²; α_s = scattered photon energy in units of m₀c²; β = initial electron velocity in units of c; φ = angle between photon direction and electron direction in the laboratory frame (LF); θ = polar angle change due to Compton scattering, measured in the electron rest frame (ERF); and τ = azimuthal angle change in the ERF. We present an analytic expression for the average of the Compton cross section over φ, θ, and τ. The lowest order approximation to this equation is reasonably accurate for photons and electrons with energies of many keV.
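
    The averaged expression itself is not reproduced in this record. For orientation only, the standard single-scatter Compton relation for an electron initially at rest (β = 0), in the units defined above, is:

    ```latex
    % Compton energy shift for an electron at rest, photon energies in m_0 c^2:
    \alpha_s \;=\; \frac{\alpha}{1 + \alpha\,(1 - \cos\theta)}
    ```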

  11. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition.
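
    The accumulation step described above maps to a few lines of NumPy. A minimal sketch, assuming equally sized silhouette frames; all names here are invented for illustration:

    ```python
    import numpy as np

    def average_gait_differential_image(frames):
        """AGDI: average the absolute differences of adjacent silhouette frames."""
        frames = np.asarray(frames, dtype=float)
        diffs = np.abs(np.diff(frames, axis=0))   # |frame[t+1] - frame[t]|
        return diffs.mean(axis=0)                 # accumulate over the sequence

    # Toy usage: a walking sequence of 10 binary 64x64 "silhouettes"
    agdi = average_gait_differential_image(np.random.rand(10, 64, 64) > 0.5)
    print(agdi.shape)                             # (64, 64) feature image
    ```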

  12. Reynolds averaged simulation of unsteady separated flow

    International Nuclear Information System (INIS)

    Iaccarino, G.; Ooi, A.; Durbin, P.A.; Behnia, M.

    2003-01-01

    The accuracy of Reynolds averaged Navier-Stokes (RANS) turbulence models in predicting complex flows with separation is examined. The unsteady flow around a square cylinder and over a wall-mounted cube is simulated and compared with experimental data. For the cube case, none of the previously published numerical predictions obtained by steady-state RANS produced a good match with experimental data. However, evidence exists that coherent vortex shedding occurs in this flow. Its presence demands unsteady RANS computation because the flow is not statistically stationary. The present study demonstrates that unsteady RANS does indeed predict periodic shedding, and leads to much better agreement with available experimental data than has been achieved with steady computation.

  14. Random Intercept and Random Slope 2-Level Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2012-11-01

    Random intercept models and random intercept & random slope models with two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teacher influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy into the model. The fanning of the fitted lines demonstrates the variation of student grades across teachers.
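
    A two-level model of the kind described (GPA on satisfaction, students nested within teachers) can be sketched with statsmodels; the column names and toy data are assumptions:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed data: one row per student; teacher is the level-2 grouping factor
    df = pd.DataFrame({
        "gpa":          [3.1, 2.8, 3.6, 3.9, 2.5, 3.3, 3.0, 3.7],
        "satisfaction": [4.0, 3.0, 5.0, 5.0, 2.0, 4.0, 3.0, 5.0],
        "teacher":      ["a", "a", "a", "b", "b", "c", "c", "c"],
    })

    # Random intercept plus a random slope for satisfaction, varying by teacher
    model = smf.mixedlm("gpa ~ satisfaction", df, groups="teacher",
                        re_formula="~satisfaction")
    print(model.fit().summary())   # data this small may trigger convergence warnings
    ```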

  15. Tri-state resistive switching characteristics of MnO/Ta2O5 resistive random access memory device by a controllable reset process

    Science.gov (United States)

    Lee, N. J.; Kang, T. S.; Hu, Q.; Lee, T. S.; Yoon, T.-S.; Lee, H. H.; Yoo, E. J.; Choi, Y. J.; Kang, C. J.

    2018-06-01

    Tri-state resistive switching characteristics of bilayer resistive random access memory devices based on manganese oxide (MnO)/tantalum oxide (Ta2O5) have been studied. The current–voltage (I–V) characteristics of the Ag/MnO/Ta2O5/Pt device show tri-state resistive switching (RS) behavior with a high resistance state (HRS), intermediate resistance state (IRS), and low resistance state (LRS), which are controlled by the reset process. The MnO/Ta2O5 film shows bipolar RS behavior through the formation and rupture of conducting filaments without the forming process. The device shows reproducible and stable RS both from the HRS to the LRS and from the IRS to the LRS. In order to elucidate the tri-state RS mechanism in the Ag/MnO/Ta2O5/Pt device, transmission electron microscope (TEM) images are acquired in the LRS, IRS and HRS. Dendrite-like white lines are observed in the Ta2O5 film in both the LRS and the IRS. Poole–Frenkel conduction, space charge limited conduction, and Ohmic conduction are proposed as the dominant conduction mechanisms for the Ag/MnO/Ta2O5/Pt device based on the obtained I–V characteristics and TEM images.

  16. Material insights of HfO2-based integrated 1-transistor-1-resistor resistive random access memory devices processed by batch atomic layer deposition.

    Science.gov (United States)

    Niu, Gang; Kim, Hee-Dong; Roelofs, Robin; Perez, Eduardo; Schubert, Markus Andreas; Zaumseil, Peter; Costina, Ioan; Wenger, Christian

    2016-06-17

    With the continuous scaling of resistive random access memory (RRAM) devices, an in-depth understanding of the physical mechanism and the material issues, particularly gained by directly studying integrated cells, becomes more and more important for further improving device performance. In this work, HfO2-based integrated 1-transistor-1-resistor (1T1R) RRAM devices were processed in a standard 0.25 μm complementary-metal-oxide-semiconductor (CMOS) process line, using a batch atomic layer deposition (ALD) tool particularly designed for mass production. We present a systematic study of TiN/Ti/HfO2/TiN/Si RRAM devices correlating key material factors (nano-crystallites and carbon impurities) with the filament-type resistive switching (RS) behaviour. Increasing the density of nano-crystallites in the film increases the forming voltage of devices and its variation. Carbon residues in HfO2 films turn out to be an even more significant factor, strongly impacting the RS behaviour. A relatively high deposition temperature of 300 °C dramatically reduces the residual carbon concentration, thus leading to enhanced RS performance, including lower power consumption, better endurance and higher reliability. Such a thorough understanding of the physical mechanism of RS and of the correlation between material and device performance will facilitate the realization of high density and reliable embedded RRAM devices with low power consumption.

  17. On the role of heat and mass transfer into laser processability during selective laser melting AlSi12 alloy based on a randomly packed powder-bed

    Science.gov (United States)

    Wang, Lianfeng; Yan, Biao; Guo, Lijie; Gu, Dongdong

    2018-04-01

    A new transient mesoscopic model with a randomly packed powder bed is proposed to investigate heat and mass transfer and laser processability between neighboring tracks during selective laser melting (SLM) of AlSi12 alloy by the finite volume method (FVM), considering the solid/liquid phase transition, variable temperature-dependent properties and interfacial forces. The results revealed that both the operating temperature and the resultant cooling rate were elevated by increasing the laser power. Accordingly, the viscosity of the liquid was significantly reduced at large laser power and was characterized by a large velocity, which tended to produce a more intensive convection within the pool. In this case, sufficient heat and mass transfer occurred at the interface between the previously fabricated tracks and the track currently being built, producing strong spreading between neighboring tracks and a resultant high-quality surface without obvious porosity. By contrast, the surface quality of SLM-processed components built with a relatively low laser power was notably degraded due to the limited and insufficient heat and mass transfer at the interface of neighboring tracks. Furthermore, the experimental surface morphologies of the top surface were acquired and were in full accordance with the results calculated via simulation.

  18. Comparing Acceptance and Commitment Group Therapy and 12-Steps Narcotics Anonymous in Addict’s Rehabilitation Process: A Randomized Controlled Trial

    Directory of Open Access Journals (Sweden)

    Manoochehr Azkhosh

    2016-12-01

    Objective: Substance abuse is a socio-psychological disorder. The aim of this study was to compare the effectiveness of acceptance and commitment therapy with the 12-steps Narcotics Anonymous program on the psychological well-being of opiate dependent individuals in addiction treatment centers in Shiraz, Iran. Method: This was a randomized controlled trial. Data were collected at entry into the study and at post-test and follow-up visits. The participants were selected from opiate addicted individuals who referred to addiction treatment centers in Shiraz. Sixty individuals were evaluated according to inclusion/exclusion criteria and were randomly divided into three equal groups (20 participants per group). One group received acceptance and commitment group therapy (twelve 90-minute sessions), the second group was provided with the 12-steps Narcotics Anonymous program, and the control group received the usual methadone maintenance treatment. During the treatment process, seven participants dropped out. Data were collected using the psychological well-being questionnaire and the AAQ questionnaire in the three groups at pre-test, post-test and follow-up visits. Data were analyzed using repeated measures analysis of variance. Results: Repeated measures analysis of variance revealed that the mean difference between the three groups was significant (P<0.05) and that the acceptance and commitment therapy group showed improvement relative to the NA and control groups on psychological well-being and psychological flexibility. Conclusion: The results of this study revealed that acceptance and commitment therapy can be helpful in enhancing positive emotions and increasing the psychological well-being of addicts who seek treatment.

  19. Shamba Maisha: Pilot agricultural intervention for food security and HIV health outcomes in Kenya: design, methods, baseline results and process evaluation of a cluster-randomized controlled trial.

    Science.gov (United States)

    Cohen, Craig R; Steinfeld, Rachel L; Weke, Elly; Bukusi, Elizabeth A; Hatcher, Abigail M; Shiboski, Stephen; Rheingans, Richard; Scow, Kate M; Butler, Lisa M; Otieno, Phelgona; Dworkin, Shari L; Weiser, Sheri D

    2015-01-01

    Despite advances in treatment of people living with HIV, morbidity and mortality remain unacceptably high in sub-Saharan Africa, largely due to parallel epidemics of poverty and food insecurity. We conducted a pilot cluster randomized controlled trial (RCT) of a multisectoral agricultural and microfinance intervention (entitled Shamba Maisha) designed to improve food security, household wealth, HIV clinical outcomes and women's empowerment. The intervention was carried out at two HIV clinics in Kenya, one randomized to the intervention arm and one to the control arm. HIV-infected patients >18 years, on antiretroviral therapy, with moderate/severe food insecurity and/or low body mass index (BMI) received: 1) a loan (~$150) to purchase the farming commodities, 2) a micro-irrigation pump, seeds, and fertilizer, and 3) trainings in sustainable agricultural practices and financial literacy. Enrollment of 140 participants took four months, and the screening-to-enrollment ratio was similar between arms. We followed participants for 12 months and conducted structured questionnaires. We also conducted a process evaluation with participants and stakeholders 3-5 months after study start and at study end. Baseline results revealed that participants at the two sites were similar in age, gender and marital status. A greater proportion of participants at the intervention site had a low BMI in comparison to participants at the control site (18% vs. 7%, p = 0.054). While median CD4 count was similar between arms, a greater proportion of participants enrolled at the intervention arm had a detectable HIV viral load compared with control participants (49% vs. 28%, respectively). The process evaluation revealed challenges with loans, agricultural challenges due to weather patterns, and a challenging partnership with the microfinance institution. We expect the results from this pilot study to provide useful data on the impacts of livelihood interventions and will help in the design of a definitive cluster RCT. This trial is registered at ClinicalTrials.gov.

  20. Nonlinear transformations of random processes

    CERN Document Server

    Deutsch, Ralph

    2017-01-01

    This concise treatment of nonlinear noise techniques encountered in system applications is suitable for advanced undergraduates and graduate students. It is also a valuable reference for systems analysts and communication engineers. 1962 edition.

  1. Toddlers' bias to look at average versus obese figures relates to maternal anti-fat prejudice.

    Science.gov (United States)

    Ruffman, Ted; O'Brien, Kerry S; Taumoepeau, Mele; Latner, Janet D; Hunter, John A

    2016-02-01

    Anti-fat prejudice (weight bias, obesity stigma) is strong, prevalent, and increasing in adults and is associated with negative outcomes for those with obesity. However, it is unknown how early in life this prejudice forms and the reasons for its development. We examined whether infants and toddlers might display an anti-fat bias and, if so, whether it was influenced by maternal anti-fat attitudes through a process of social learning. Mother-child dyads (N = 70), split into four age groups, participated in a preferential looking paradigm whereby children were presented with 10 pairs of average and obese human figures in random order, and their viewing times (preferential looking) for the figures were measured. Mothers' anti-fat prejudice and education were measured along with mothers' and fathers' body mass index (BMI) and children's television viewing time. We found that older infants (M = 11 months) had a bias for looking at the obese figures, whereas older toddlers (M = 32 months) instead preferred looking at the average-sized figures. Furthermore, older toddlers' preferential looking was correlated significantly with maternal anti-fat attitudes. Parental BMI, education, and children's television viewing time were unrelated to preferential looking. Looking times might signal a precursor to explicit fat prejudice socialized via maternal anti-fat attitudes.

  2. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  4. Groupies in multitype random graphs.

    Science.gov (United States)

    Shang, Yilun

    2016-01-01

    A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.
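
    The definition is easy to check empirically. A short sketch counting groupies in an Erdős–Rényi graph (networkx assumed available; parameters are arbitrary):

    ```python
    import networkx as nx

    G = nx.erdos_renyi_graph(n=2000, p=0.01, seed=1)

    def is_groupie(G, v):
        nbrs = list(G.neighbors(v))
        if not nbrs:                  # isolated vertex: treated as non-groupie here
            return False
        avg_nbr_deg = sum(G.degree(u) for u in nbrs) / len(nbrs)
        return G.degree(v) >= avg_nbr_deg

    prop = sum(is_groupie(G, v) for v in G) / G.number_of_nodes()
    print(f"proportion of groupies: {prop:.3f}")   # expected to be close to 1/2
    ```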

  5. Random magnetism

    International Nuclear Information System (INIS)

    Tahir-Kheli, R.A.

    1975-01-01

    A few simple problems relating to random magnetic systems are presented. Translational symmetry is assumed for these systems only on the macroscopic scale. On the microscopic scale, a random set of parameters for the various regions of these systems is assumed, obeying given probability distributions. Knowledge of the form of these probability distributions is assumed in all cases.

  6. Calculating Free Energies Using Average Force

    Science.gov (United States)

    Darve, Eric; Pohorille, Andrew; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    A new, general formula that connects the derivatives of the free energy along the selected, generalized coordinates of the system with the instantaneous force acting on these coordinates is derived. The instantaneous force is defined as the force acting on the coordinate of interest so that when it is subtracted from the equations of motion the acceleration along this coordinate is zero. The formula applies to simulations in which the selected coordinates are either unconstrained or constrained to fixed values. It is shown that in the latter case the formula reduces to the expression previously derived by den Otter and Briels. If simulations are carried out without constraining the coordinates of interest, the formula leads to a new method for calculating the free energy changes along these coordinates. This method is tested in two examples - rotation around the C-C bond of 1,2-dichloroethane immersed in water and transfer of fluoromethane across the water-hexane interface. The calculated free energies are compared with those obtained by two commonly used methods. One of them relies on determining the probability density function of finding the system at different values of the selected coordinate and the other requires calculating the average force at discrete locations along this coordinate in a series of constrained simulations. The free energies calculated by these three methods are in excellent agreement. The relative advantages of each method are discussed.
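
    The central relation the abstract describes can be stated compactly. A sketch, with ξ the selected generalized coordinate, F_ξ the instantaneous force defined above, and the sign convention an assumption made here:

    ```latex
    % Free energy derivative along \xi as a conditional average of the
    % instantaneous force, and the resulting thermodynamic-integration form:
    \frac{dA}{d\xi} \;=\; -\,\bigl\langle F_\xi \bigr\rangle_\xi ,
    \qquad
    \Delta A \;=\; -\int_{\xi_0}^{\xi_1} \bigl\langle F_\xi \bigr\rangle_{\xi'}\, d\xi' .
    ```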

  7. High-average-power solid state lasers

    International Nuclear Information System (INIS)

    Summers, M.A.

    1989-01-01

    In 1987, a broad-based, aggressive R&D program was begun, aimed at developing the technologies necessary to make possible the use of solid state lasers capable of delivering medium- to high-average power in new and demanding applications. Efforts were focused along the following major lines: development of laser and nonlinear optical materials, and of coatings for parasitic suppression and evanescent wave control; development of computational design tools; verification of computational models on thoroughly instrumented test beds; and applications of selected aspects of this technology to specific missions. In the laser materials area, efforts were directed towards producing strong, low-loss laser glasses and large, high quality garnet crystals. The crystal program consisted of computational and experimental efforts aimed at understanding the physics, thermodynamics, and chemistry of large garnet crystal growth. The laser experimental efforts were directed at understanding thermally induced wave front aberrations in zig-zag slabs, understanding fluid mechanics, heat transfer, and optical interactions in gas-cooled slabs, and conducting critical test-bed experiments with various electro-optic switch geometries. 113 refs., 99 figs., 18 tabs

  8. The concept of average LET values determination

    International Nuclear Information System (INIS)

    Makarewicz, M.

    1981-01-01

    The concept of average LET (linear energy transfer) values determination, i.e. ordinary moments of LET in absorbed dose distribution vs. LET of ionizing radiation of any kind and any spectrum (even the unknown ones) has been presented. The method is based on measurement of ionization current with several values of voltage supplying an ionization chamber operating in conditions of columnar recombination of ions or ion recombination in clusters while the chamber is placed in the radiation field at the point of interest. By fitting a suitable algebraic expression to the measured current values one can obtain coefficients of the expression which can be interpreted as values of LET moments. One of the advantages of the method is its experimental and computational simplicity. It has been shown that for numerical estimation of certain effects dependent on LET of radiation it is not necessary to know the dose distribution but only a number of parameters of the distribution, i.e. the LET moments. (author)

  9. On spectral averages in nuclear spectroscopy

    International Nuclear Information System (INIS)

    Verbaarschot, J.J.M.

    1982-01-01

    In nuclear spectroscopy one tries to obtain a description of systems of bound nucleons. By means of theoretical models one attempts to reproduce the eigenenergies and the corresponding wave functions, which then enable the computation of, for example, the electromagnetic moments and the transition amplitudes. Statistical spectroscopy can be used for studying nuclear systems in large model spaces. In this thesis, methods are developed and applied which enable the determination of quantities in a finite part of the Hilbert space, which is defined by specific quantum values. In the case of averages in a space defined by a partition of the nucleons over the single-particle orbits, the propagation coefficients reduce to Legendre interpolation polynomials. In chapter 1 these polynomials are derived with the help of a generating function and a generalization of Wick's theorem. One can then deduce the centroid and the variance of the eigenvalue distribution in a straightforward way. The results are used to calculate the systematic energy difference between states of even and odd parity for nuclei in the mass region A=10-40. In chapter 2 an efficient method is developed for transforming fixed angular momentum projection traces into fixed angular momentum traces for the configuration space. In chapter 3 it is shown that the secular behaviour can be represented by a Gaussian function of the energies. (Auth.)

  10. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013-June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018+/-0.002; validation r = 0.8899+/-0.005; testing r = 0.8940+/-0.006). We were able to successfully predict trauma and emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10), and a mean ISS score of 15.99 (actual: 13.12). This may prove useful for predicting trauma needs across the system and hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
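
    The network described (two-layer feed-forward, 10 sigmoid hidden neurons, k-fold cross validation) is easy to approximate with standard tooling. scikit-learn does not provide Levenberg-Marquardt, so the sketch below substitutes L-BFGS, and the features and targets are synthetic stand-ins:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Assumed features: hour, day of week, daily high temp, active precipitation
    X = np.column_stack([
        rng.integers(0, 24, 1000),
        rng.integers(0, 7, 1000),
        rng.normal(20.0, 8.0, 1000),
        rng.integers(0, 2, 1000),
    ])
    y = rng.poisson(10, 1000).astype(float)   # stand-in for daily trauma volume

    # 10 sigmoid ('logistic') hidden neurons; L-BFGS replaces Levenberg-Marquardt
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                       solver="lbfgs", max_iter=2000, random_state=0)
    # k-fold cross validation; the score is meaningless on random data,
    # shown only to illustrate the workflow
    print(cross_val_score(net, X, y, cv=10).mean())
    ```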

  11. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  12. Dynamical replica analysis of processes on finitely connected random graphs: II. Dynamics in the Griffiths phase of the diluted Ising ferromagnet

    International Nuclear Information System (INIS)

    Mozeika, A; Coolen, A C C

    2009-01-01

    We study the Glauber dynamics of Ising spin models with random bonds, on finitely connected random graphs. We generalize a recent dynamical replica theory with which to predict the evolution of the joint spin-field distribution, to include random graphs with arbitrary degree distributions. The theory is applied to Ising ferromagnets on randomly diluted Bethe lattices, where we study the evolution of the magnetization and the internal energy. It predicts a prominent slowing down of the flow in the Griffiths phase, it suggests a further dynamical transition at lower temperatures within the Griffiths phase, and it is verified quantitatively by the results of Monte Carlo simulations

  13. A Randomized Controlled Clinical Trial of Dialogical Exposure Therapy versus Cognitive Processing Therapy for Adult Outpatients Suffering from PTSD after Type I Trauma in Adulthood.

    Science.gov (United States)

    Butollo, Willi; Karl, Regina; König, Julia; Rosner, Rita

    2016-01-01

    Although there are effective treatments for posttraumatic stress disorder (PTSD), there is little research on treatments with non-cognitive-behavioural backgrounds, such as gestalt therapy. We tested an integrative gestalt-derived intervention, dialogical exposure therapy (DET), against an established cognitive-behavioural treatment (cognitive processing therapy, CPT) for possible differential effects in terms of symptomatic outcome and drop-out rates. We randomized 141 treatment-seeking individuals with a diagnosis of PTSD to receive either DET or CPT. Therapy length in both treatments was flexible with a maximum duration of 24 sessions. Dropout rates were 12.2% in DET and 14.9% in CPT. Patients in both conditions achieved significant and large reductions in PTSD symptoms (Impact of Event Scale - Revised; Hedges' g = 1.14 for DET and d = 1.57 for CPT) which were largely stable at the 6-month follow-up. At the posttreatment assessment, CPT performed statistically better than DET on symptom and cognition measures. For several outcome measures, younger patients profited better from CPT than older ones, while there was no age effect for DET. Our results indicate that DET merits further research and may be an alternative to established treatments for PTSD. It remains to be seen whether DET confers advantages in areas of functioning beyond PTSD symptoms.

  14. Qualitative insights into implementation, processes, and outcomes of a randomized trial on peer support and HIV care engagement in Rakai, Uganda.

    Science.gov (United States)

    Monroe, April; Nakigozi, Gertrude; Ddaaki, William; Bazaale, Jeremiah Mulamba; Gray, Ronald H; Wawer, Maria J; Reynolds, Steven J; Kennedy, Caitlin E; Chang, Larry W

    2017-01-10

    People living with human immunodeficiency virus (HIV) who have not yet initiated antiretroviral therapy (ART) can benefit from being engaged in care and utilizing preventive interventions. Community-based peer support may be an effective approach to promote these important HIV services. After conducting a randomized trial of the impact of peer support on pre-ART outcomes, we conducted a qualitative evaluation to better understand trial implementation, processes, and results. Overall, 75 participants, including trial participants (clients), peer supporters, and clinic staff, participated in 41 in-depth interviews and 6 focus group discussions. A situated Information, Motivation, and Behavioral skills model of behavior change was used to develop semi-structured interview and focus group guides. Transcripts were coded and thematically synthesized. We found that participant narratives were generally consistent with the theoretical model, indicating that peer support improved information, motivation, and behavioral skills, leading to increased engagement in pre-ART care. Clients described how peer supporters reinforced health messages and helped them better understand complicated health information. Peer supporters also helped clients navigate the health system, develop support networks, and identify strategies for remembering medication and clinic appointments. Some peer supporters adopted roles beyond visiting patients, serving as a bridge between the client and his or her family, community, and health system. Qualitative results demonstrated plausible processes by which peer support improved client engagement in care, cotrimoxazole use, and safe water vessel use. Challenges identified included insufficient messaging surrounding ART initiation, lack of care continuity after ART initiation, rare breaches in confidentiality, and structural challenges. The evaluation found largely positive perceptions of the peer intervention across stakeholders and provided valuable

  15. What Randomized Benchmarking Actually Measures

    International Nuclear Information System (INIS)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-01-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
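
    The decay model at the heart of RB is easy to illustrate numerically. A sketch fitting p(m) = A·f^m + B to synthetic survival probabilities; the conversion r = (1 − f)/2 uses the usual single-qubit convention and is an assumption made here:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def rb_decay(m, A, B, f):
        """Survival probability vs. circuit length m."""
        return A * f**m + B

    rng = np.random.default_rng(0)
    lengths = np.arange(1, 201, 10)
    survival = rb_decay(lengths, 0.5, 0.5, 0.995) + rng.normal(0, 0.005, lengths.size)

    popt, _ = curve_fit(rb_decay, lengths, survival, p0=(0.5, 0.5, 0.99))
    A, B, f = popt
    r = (1 - f) / 2   # single-qubit convention: r = (1 - f)(d - 1)/d with d = 2
    print(f"fitted decay f = {f:.5f}, RB error rate r = {r:.2e}")
    ```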

  16. A note on asymptotic expansions for sums over a weakly dependent random field with application to the Poisson and Strauss processes

    DEFF Research Database (Denmark)

    Jensen, J.L.

    1993-01-01

    Previous results on Edgeworth expansions for sums over a random field are extended to the case where the strong mixing coefficient depends not only on the distance between two sets of random variables, but also on the size of the two sets. The results are applied to the Poisson and the Strauss processes.

  17. The Prevention Program for Externalizing Problem Behavior (PEP) Improves Child Behavior by Reducing Negative Parenting: Analysis of Mediating Processes in a Randomized Controlled Trial

    Science.gov (United States)

    Hanisch, Charlotte; Hautmann, Christopher; Plück, Julia; Eichelberger, Ilka; Döpfner, Manfred

    2014-01-01

    Background: Our indicated Prevention program for preschool children with Externalizing Problem behavior (PEP) demonstrated improved parenting and child problem behavior in a randomized controlled efficacy trial and in a study with an effectiveness design. The aim of the present analysis of data from the randomized controlled trial was to identify…

  18. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    The article "An average salary: approaches to the index determination" is devoted to studying various methods of calculating this index, both those used by official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, as well as to make certain additions that would help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section "Socio-economic indexes: living standards of the population", as well as scientific papers describing different approaches to the average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. In the process of conducting the research, the following methods were used: analytical, statistical, computational-mathematical and graphical. The main result of the research is a proposed supplement to the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by means of introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly those engaged in internal secondary jobs. The need for introducing this correction factor comes from the current reality of working conditions in a wide range of organizations, where an employee is forced, in addition to the main position, to fulfill additional job duties. As a result, it is common for the average salary at an enterprise to be difficult to assess objectively, because it is built up from multiple rates per staff member. In other words, the average salary of

  19. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA outperforms the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  20. Recruiting primary care practices for practice-based research: a case study of a group-randomized study (TRANSLATE CKD) recruitment process.

    Science.gov (United States)

    Loskutova, Natalia Y; Smail, Craig; Ajayi, Kemi; Pace, Wilson D; Fox, Chester H

    2018-01-16

    We assessed the challenging process of recruiting primary care practices in a practice-based research study. In this descriptive case study of recruitment data collected for a large practice-based study (TRANSLATE CKD), 48 single- or multiple-site health care organizations in the USA with a total of 114 practices were invited to participate. We collected quantitative and qualitative measures of the recruitment process and outcomes for the first 25 practices recruited. Information about 13 additional practices is not provided due to staff transitions and limited data collection resources. Initial outreach was made to 114 practices (from 48 organizations, 41% small); 52 (45%) practices responded with interest. Practices enrolled in the study (n = 25) represented 22% of the total outreach number, or 48% of those initially interested. Average time to enroll was 71 calendar days (range 11-107). There was no difference in the number of days practices remained under recruitment, based on enrolled versus not enrolled (44.8 ± 30.4 versus 46.8 ± 25.4 days, P = 0.86) or by organization size, i.e. large versus small (defined by having ≤4 distinct practices; 52 ± 23.6 versus 43.6 ± 27.8 days; P = 0.46). The most common recruitment barriers were administrative, e.g. lack of perceived direct organizational benefit, and were more prominent among large organizations. Despite the general belief that the research topic, invitation method, and interest in research may facilitate practice recruitment, our results suggest that most of the recruitment challenges represent managerial challenges. Future research projects may need to consider relevant methodologies from the business administration and marketing fields.

  1. Supporting health care professionals to improve the processes of shared decision making and self-management in a web-based intervention: randomized controlled trial.

    Science.gov (United States)

    Sassen, Barbara; Kok, Gerjo; Schepers, Jan; Vanhees, Luc

    2014-10-21

    Research to assess the effect of interventions to improve the processes of shared decision making and self-management directed at health care professionals is limited. Using the protocol of Intervention Mapping, a Web-based intervention directed at health care professionals was developed to complement and optimize health services in patient-centered care. The objective of the Web-based intervention was to increase health care professionals' intention and encouraging behavior toward patient self-management, following cardiovascular risk management guidelines. A randomized controlled trial was used to assess the effect of a theory-based intervention, using a pre-test and post-test design. The intervention website consisted of a module to help improve professionals' behavior, a module to increase patients' intention and risk-reduction behavior toward cardiovascular risk, and a parallel module with a support system for the health care professionals. Health care professionals (n=69) were recruited online and randomly allocated to the intervention group (n=26) or (waiting list) control group (n=43), and invited their patients to participate. The outcome was improved professional behavior toward health education, and was self-assessed through questionnaires based on the Theory of Planned Behavior. Social-cognitive determinants, intention and behavior were measured pre-intervention and at 1-year follow-up. The module to improve professionals' behavior was used by 45% (19/42) of the health care professionals in the intervention group. The module to support the health professional in encouraging behavior toward patients was used by 48% (20/42). The module to improve patients' risk-reduction behavior was provided to 44% (24/54) of patients. In 1 of every 5 patients, the guideline for cardiovascular risk management was used. The Web-based intervention was poorly used. In the intervention group, no differences in social-cognitive determinants, intention and behavior were found

  2. Gradient networks on uncorrelated random scale-free networks

    International Nuclear Information System (INIS)

    Pan Guijun; Yan Xiaoqing; Huang Zhongbing; Ma Weichuan

    2011-01-01

    Uncorrelated random scale-free (URSF) networks are useful null models for checking the effects of scale-free topology on network-based dynamical processes. Here, we present a comparative study of the jamming level of gradient networks based on URSF networks and Erdős–Rényi (ER) random networks. We find that the URSF networks are less congested than ER random networks for average degree ⟨k⟩ > k_c (where k_c ∼ 2 denotes a critical connectivity). In addition, by investigating the topological properties of the two kinds of gradient networks, we discuss the relations between the topological structure and the transport efficiency of the gradient networks. These findings show that the uncorrelated scale-free structure might allow more efficient transport than the random structure.

  3. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of an autoregressive (AR) model to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in the nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and then a method for its solution is given. Numerical results illustrate the application of the proposed method. (author)
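
    A minimal ARMA fit of the kind the article advocates can be sketched with statsmodels; the synthetic series and the order (2, 1) are assumptions:

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    e = rng.normal(size=2000)
    x = np.zeros(2000)
    for t in range(2, 2000):        # synthetic stand-in for a noise record
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t] + 0.4 * e[t - 1]

    # ARMA(p, q) is ARIMA(p, 0, q); here two AR lags and one MA lag (assumed)
    fit = ARIMA(x, order=(2, 0, 1)).fit()
    print(fit.params)               # estimated AR and MA coefficients
    ```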

  4. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise in an image is presented, applying the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k is used to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimates of the noisy pixels are obtained by local averaging. The essential...
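
    The procedure described maps directly onto a small NumPy routine. A sketch assuming a grayscale image, a k×k window, and the common 1.5×IQR outlier fences; the exact fence used in the paper is not stated in the record:

    ```python
    import numpy as np

    def iqr_filter(img, k=3):
        """Replace IQR outliers in each k x k window with the local average."""
        pad = k // 2
        padded = np.pad(img.astype(float), pad, mode="reflect")
        out = img.astype(float).copy()
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                win = padded[i:i + k, j:j + k]
                q1, q3 = np.percentile(win, [25, 75])
                lo, hi = q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1)
                if not (lo <= img[i, j] <= hi):          # treated as noisy pixel
                    inliers = win[(win >= lo) & (win <= hi)]
                    out[i, j] = inliers.mean() if inliers.size else win.mean()
        return out
    ```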

  5. Updated precision measurement of the average lifetime of B hadrons

    CERN Document Server

    Abreu, P; Adye, T; Agasi, E; Ajinenko, I; Aleksan, Roy; Alekseev, G D; Alemany, R; Allport, P P; Almehed, S; Amaldi, Ugo; Amato, S; Andreazza, A; Andrieux, M L; Antilogus, P; Apel, W D; Arnoud, Y; Åsman, B; Augustin, J E; Augustinus, A; Baillon, Paul; Bambade, P; Barate, R; Barbi, M S; Barbiellini, Guido; Bardin, Dimitri Yuri; Baroncelli, A; Bärring, O; Barrio, J A; Bartl, Walter; Bates, M J; Battaglia, Marco; Baubillier, M; Baudot, J; Becks, K H; Begalli, M; Beillière, P; Belokopytov, Yu A; Benvenuti, Alberto C; Berggren, M; Bertrand, D; Bianchi, F; Bigi, M; Bilenky, S M; Billoir, P; Bloch, D; Blume, M; Blyth, S; Bolognese, T; Bonesini, M; Bonivento, W; Booth, P S L; Borisov, G; Bosio, C; Bosworth, S; Botner, O; Boudinov, E; Bouquet, B; Bourdarios, C; Bowcock, T J V; Bozzo, M; Branchini, P; Brand, K D; Brenke, T; Brenner, R A; Bricman, C; Brillault, L; Brown, R C A; Brückman, P; Brunet, J M; Bugge, L; Buran, T; Burgsmüller, T; Buschmann, P; Buys, A; Cabrera, S; Caccia, M; Calvi, M; Camacho-Rozas, A J; Camporesi, T; Canale, V; Canepa, M; Cankocak, K; Cao, F; Carena, F; Carroll, L; Caso, Carlo; Castillo-Gimenez, M V; Cattai, A; Cavallo, F R; Cerrito, L; Chabaud, V; Charpentier, P; Chaussard, L; Chauveau, J; Checchia, P; Chelkov, G A; Chen, M; Chierici, R; Chliapnikov, P V; Chochula, P; Chorowicz, V; Chudoba, J; Cindro, V; Collins, P; Contreras, J L; Contri, R; Cortina, E; Cosme, G; Cossutti, F; Crawley, H B; Crennell, D J; Crosetti, G; Cuevas-Maestro, J; Czellar, S; Dahl-Jensen, Erik; Dahm, J; D'Almagne, B; Dam, M; Damgaard, G; Dauncey, P D; Davenport, Martyn; Da Silva, W; Defoix, C; Deghorain, A; Della Ricca, G; Delpierre, P A; Demaria, N; De Angelis, A; de Boer, Wim; De Brabandere, S; De Clercq, C; La Vaissière, C de; De Lotto, B; De Min, A; De Paula, L S; De Saint-Jean, C; Dijkstra, H; Di Ciaccio, Lucia; Djama, F; Dolbeau, J; Dönszelmann, M; Doroba, K; Dracos, M; Drees, J; Drees, K A; Dris, M; Dufour, Y; Edsall, D M; Ehret, R; Eigen, G; Ekelöf, T J C; Ekspong, Gösta; Elsing, M; Engel, J P; Ershaidat, N; Erzen, B; Espirito-Santo, M C; Falk, E; Fassouliotis, D; Feindt, Michael; Fenyuk, A; Ferrer, A; Filippas-Tassos, A; Firestone, A; Fischer, P A; Föth, H; Fokitis, E; Fontanelli, F; Formenti, F; Franek, B J; Frenkiel, P; Fries, D E C; Frodesen, A G; Frühwirth, R; Fulda-Quenzer, F; Fuster, J A; Galloni, A; Gamba, D; Gandelman, M; García, C; García, J; Gaspar, C; Gasparini, U; Gavillet, P; Gazis, E N; Gelé, D; Gerber, J P; Gibbs, M; Gokieli, R; Golob, B; Gopal, Gian P; Gorn, L; Górski, M; Guz, Yu; Gracco, Valerio; Graziani, E; Grosdidier, G; Grzelak, K; Gumenyuk, S A; Gunnarsson, P; Günther, M; Guy, J; Hahn, F; Hahn, S; Hajduk, Z; Hallgren, A; Hamacher, K; Hao, W; Harris, F J; Hedberg, V; Henriques, R P; Hernández, J J; Herquet, P; Herr, H; Hessing, T L; Higón, E; Hilke, Hans Jürgen; Hill, T S; Holmgren, S O; Holt, P J; Holthuizen, D J; Hoorelbeke, S; Houlden, M A; Hrubec, Josef; Huet, K; Hultqvist, K; Jackson, J N; Jacobsson, R; Jalocha, P; Janik, R; Jarlskog, C; Jarlskog, G; Jarry, P; Jean-Marie, B; Johansson, E K; Jönsson, L B; Jönsson, P E; Joram, Christian; Juillot, P; Kaiser, M; Kapusta, F; Karafasoulis, K; Karlsson, M; Karvelas, E; Katsanevas, S; Katsoufis, E C; Keränen, R; Khokhlov, Yu A; Khomenko, B A; Khovanskii, N N; King, B J; Kjaer, N J; Klein, H; Klovning, A; Kluit, P M; Köne, B; Kokkinias, P; Koratzinos, M; Korcyl, K; Kourkoumelis, C; Kuznetsov, O; Kramer, P H; Krammer, Manfred; Kreuter, C; Kronkvist, I J; Krumshtein, Z; Krupinski, W; Kubinec, P; Kucewicz, W; Kurvinen, K L; 
Lacasta, C; Laktineh, I; Lamblot, S; Lamsa, J; Lanceri, L; Lane, D W; Langefeld, P; Last, I; Laugier, J P; Lauhakangas, R; Leder, Gerhard; Ledroit, F; Lefébure, V; Legan, C K; Leitner, R; Lemoigne, Y; Lemonne, J; Lenzen, Georg; Lepeltier, V; Lesiak, T; Liko, D; Lindner, R; Lipniacka, A; Lippi, I; Lörstad, B; Loken, J G; López, J M; Loukas, D; Lutz, P; Lyons, L; MacNaughton, J N; Maehlum, G; Maio, A; Malychev, V; Mandl, F; Marco, J; Marco, R P; Maréchal, B; Margoni, M; Marin, J C; Mariotti, C; Markou, A; Maron, T; Martínez-Rivero, C; Martínez-Vidal, F; Martí i García, S; Masik, J; Matorras, F; Matteuzzi, C; Matthiae, Giorgio; Mazzucato, M; McCubbin, M L; McKay, R; McNulty, R; Medbo, J; Merk, M; Meroni, C; Meyer, S; Meyer, W T; Michelotto, M; Migliore, E; Mirabito, L; Mitaroff, Winfried A; Mjörnmark, U; Moa, T; Møller, R; Mönig, K; Monge, M R; Morettini, P; Müller, H; Mundim, L M; Murray, W J; Muryn, B; Myatt, Gerald; Naraghi, F; Navarria, Francesco Luigi; Navas, S; Nawrocki, K; Negri, P; Neumann, W; Nicolaidou, R; Nielsen, B S; Nieuwenhuizen, M; Nikolaenko, V; Niss, P; Nomerotski, A; Normand, Ainsley; Novák, M; Oberschulte-Beckmann, W; Obraztsov, V F; Olshevskii, A G; Onofre, A; Orava, Risto; Österberg, K; Ouraou, A; Paganini, P; Paganoni, M; Pagès, P; Palka, H; Papadopoulou, T D; Papageorgiou, K; Pape, L; Parkes, C; Parodi, F; Passeri, A; Pegoraro, M; Peralta, L; Pernegger, H; Pernicka, Manfred; Perrotta, A; Petridou, C; Petrolini, A; Petrovykh, M; Phillips, H T; Piana, G; Pierre, F; Pimenta, M; Pindo, M; Plaszczynski, S; Podobrin, O; Pol, M E; Polok, G; Poropat, P; Pozdnyakov, V; Prest, M; Privitera, P; Pukhaeva, N; Pullia, Antonio; Radojicic, D; Ragazzi, S; Rahmani, H; Ratoff, P N; Read, A L; Reale, M; Rebecchi, P; Redaelli, N G; Regler, Meinhard; Reid, D; Renton, P B; Resvanis, L K; Richard, F; Richardson, J; Rídky, J; Rinaudo, G; Ripp, I; Romero, A; Roncagliolo, I; Ronchese, P; Ronjin, V M; Roos, L; Rosenberg, E I; Rosso, E; Roudeau, Patrick; Rovelli, T; Rückstuhl, W; Ruhlmann-Kleider, V; Ruiz, A; Rybicki, K; Saarikko, H; Sacquin, Yu; Sadovskii, A; Sajot, G; Salt, J; Sánchez, J; Sannino, M; Schimmelpfennig, M; Schneider, H; Schwickerath, U; Schyns, M A E; Sciolla, G; Scuri, F; Seager, P; Sedykh, Yu; Segar, A M; Seitz, A; Sekulin, R L; Shellard, R C; Siccama, I; Siegrist, P; Simonetti, S; Simonetto, F; Sissakian, A N; Sitár, B; Skaali, T B; Smadja, G; Smirnov, N; Smirnova, O G; Smith, G R; Solovyanov, O; Sosnowski, R; Souza-Santos, D; Spassoff, Tz; Spiriti, E; Sponholz, P; Squarcia, S; Stanescu, C; Stapnes, Steinar; Stavitski, I; Stichelbaut, F; Stocchi, A; Strauss, J; Strub, R; Stugu, B; Szczekowski, M; Szeptycka, M; Tabarelli de Fatis, T; Tavernet, J P; Chikilev, O G; Tilquin, A; Timmermans, J; Tkatchev, L G; Todorov, T; Toet, D Z; Tomaradze, A G; Tomé, B; Tonazzo, A; Tortora, L; Tranströmer, G; Treille, D; Trischuk, W; Tristram, G; Trombini, A; Troncon, C; Tsirou, A L; Turluer, M L; Tyapkin, I A; Tyndel, M; Tzamarias, S; Überschär, B; Ullaland, O; Uvarov, V; Valenti, G; Vallazza, E; Van der Velde, C; van Apeldoorn, G W; van Dam, P; Van Doninck, W K; Van Eldik, J; Vassilopoulos, N; Vegni, G; Ventura, L; Venus, W A; Verbeure, F; Verlato, M; Vertogradov, L S; Vilanova, D; Vincent, P; Vitale, L; Vlasov, E; Vodopyanov, A S; Vrba, V; Wahlen, H; Walck, C; Weierstall, M; Weilhammer, Peter; Weiser, C; Wetherell, Alan M; Wicke, D; Wickens, J H; Wielers, M; Wilkinson, G R; Williams, W S C; Winter, M; Witek, M; Woschnagg, K; Yip, K; Yushchenko, O P; Zach, F; Zaitsev, A; Zalewska-Bak, A; 
Zalewski, Piotr; Zavrtanik, D; Zevgolatakos, E; Zimin, N I; Zito, M; Zontar, D; Zuberi, R; Zucchelli, G C; Zumerle, G; Belokopytov, Yu; Charpentier, Ph; Gavillet, Ph; Gouz, Yu; Jarlskog, Ch; Khokhlov, Yu; Papadopoulou, Th D

    1996-01-01

    The measurement of the average lifetime of B hadrons using inclusively reconstructed secondary vertices has been updated using both an improved processing of previous data and additional statistics from new data. This has reduced the statistical and systematic uncertainties and gives τ_B = 1.582 ± 0.011 (stat.) ± 0.027 (syst.) ps. Combining this result with the previous result based on charged particle impact parameter distributions yields τ_B = 1.575 ± 0.010 (stat.) ± 0.026 (syst.) ps.

  6. Distribution functions for fluids in random media

    International Nuclear Information System (INIS)

    Madden, W.G.; Glandt, E.D.

    1988-01-01

    A random medium is considered, composed of identifiable interactive sites or obstacles equilibrated at a high temperature and then quenched rapidly to form a rigid structure, statistically homogeneous on all but molecular length scales. The equilibrium statistical mechanics of a fluid contained inside this quenched medium is discussed. Various particle-particle and particle-obstacle correlation functions, which differ from the corresponding functions for a fully equilibrated binary mixture, are defined through an averaging process over the static ensemble of obstacle configurations and applications of topological reduction techniques. The Ornstein-Zernike equations also differ from their equilibrium counterparts.

  7. A robust combination approach for short-term wind speed forecasting and analysis – Combination of the ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM) forecasts using a GPR (Gaussian Process Regression) model

    International Nuclear Information System (INIS)

    Wang, Jianzhou; Hu, Jianming

    2015-01-01

    With the increasing importance of wind power as a component of power systems, the problems induced by the stochastic and intermittent nature of wind speed have compelled system operators and researchers to search for more reliable techniques to forecast wind speed. This paper proposes a combination model for probabilistic short-term wind speed forecasting. In this proposed hybrid approach, EWT (Empirical Wavelet Transform) is employed to extract meaningful information from a wind speed series by designing an appropriate wavelet filter bank. The GPR (Gaussian Process Regression) model is utilized to combine independent forecasts generated by various forecasting engines (ARIMA (Autoregressive Integrated Moving Average), ELM (Extreme Learning Machine), SVM (Support Vector Machine) and LSSVM (Least Square SVM)) in a nonlinear way rather than the commonly used linear way. Besides improving the forecasting accuracy of single-value predictions, the proposed approach provides probabilistic information for wind speed predictions. The effectiveness of the proposed approach is demonstrated with wind speed data from two wind farms in China. The results indicate that the individual forecasting engines do not consistently forecast short-term wind speed for the two sites, and that the proposed combination method can generate a more reliable and accurate forecast. - Highlights: • The proposed approach enables probabilistic modeling of wind speed series. • The proposed approach adapts to the time-varying characteristic of the wind speed. • The hybrid approach can extract the meaningful components from the wind speed series. • The proposed method can generate adaptive, reliable and more accurate forecasting results. • The proposed model combines four independent forecasting engines in a nonlinear way.
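
    The combination step lends itself to a brief illustration. The sketch below is our construction under stated assumptions, not the authors' code: scikit-learn's GaussianProcessRegressor serves as the meta-model, and a toy series with four noisy stand-in engines replaces the paper's wind speed data and forecasting engines.

```python
# Minimal sketch: a GPR meta-model that combines point forecasts from
# several engines in a nonlinear way. Toy data only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 6, 200)) + 0.1 * rng.standard_normal(200)

# Stand-ins for out-of-sample forecasts from ARIMA, ELM, SVM, LSSVM.
engines = np.column_stack([y + s * rng.standard_normal(200)
                           for s in (0.20, 0.30, 0.25, 0.15)])

train, test = slice(0, 150), slice(150, 200)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                               normalize_y=True)
gpr.fit(engines[train], y[train])

# Nonlinear combination; return_std yields the probabilistic output
# (a predictive standard deviation) that the abstract emphasizes.
mean, std = gpr.predict(engines[test], return_std=True)
```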

  8. A web-based tool to support shared decision making for people with a psychotic disorder: randomized controlled trial and process evaluation.

    Science.gov (United States)

    van der Krieke, Lian; Emerencia, Ando C; Boonstra, Nynke; Wunderink, Lex; de Jonge, Peter; Sytema, Sjoerd

    2013-10-07

    Mental health policy makers encourage the development of electronic decision aids to increase patient participation in medical decision making. Evidence is needed to determine whether these decision aids are helpful in clinical practice and whether they lead to increased patient involvement and better outcomes. This study reports the outcome of a randomized controlled trial and process evaluation of a Web-based intervention to facilitate shared decision making for people with psychotic disorders. The study was carried out in a Dutch mental health institution. Patients were recruited from 2 outpatient teams for patients with psychosis (N=250). Patients in the intervention condition (n=124) were provided an account to access a Web-based information and decision tool intended to support patients in acquiring an overview of their needs and the appropriate treatment options provided by their mental health care organization. Patients were given the opportunity to use the Web-based tool either on their own (at their home computer or at a computer of the service) or with the support of an assistant. Patients in the control group received care as usual (n=126). Half of the patients in the sample were experiencing a first episode of psychosis; the other half had a chronic psychosis. The primary outcome was patient-perceived involvement in medical decision making, measured with the Combined Outcome Measure for Risk Communication and Treatment Decision-making Effectiveness (COMRADE). The process evaluation consisted of questionnaire-based surveys, open interviews, and researcher observation. In all, 73 patients completed the follow-up measurement and were included in the final analysis (response rate 29.2%). More than one-third (48/124, 38.7%) of the patients who were provided access to the Web-based decision aid used it, and most used its full functionality. No differences were found between the intervention and control conditions on perceived involvement in medical decision making.

  9. High Average Power, High Energy Short Pulse Fiber Laser System

    Energy Technology Data Exchange (ETDEWEB)

    Messerly, M J

    2007-11-13

    Recently, continuous-wave fiber laser systems with output powers in excess of 500 W and good beam quality have been demonstrated [1]. High-energy, ultrafast, chirped-pulse fiber laser systems have achieved record output energies of 1 mJ [2]. However, these high-energy systems have not been scaled beyond a few watts of average output power. Fiber laser systems are attractive for many applications because they offer the promise of efficient, compact, robust, turnkey systems. Applications such as cutting, drilling, and materials processing; front-end systems for high-energy pulsed lasers (such as petawatt systems); and laser-based sources of high-spatial-coherence, high-flux x-rays all require high-energy short pulses, and two of these three applications also require high average power. The challenge in creating a high-energy chirped-pulse fiber laser system is to find a way to scale the output energy while avoiding nonlinear effects and maintaining good beam quality in the amplifier fiber. To this end, our 3-year LDRD program sought to demonstrate a high-energy, high-average-power fiber laser system. This work included exploring designs of large-mode-area optical fiber amplifiers for high-energy systems as well as understanding the issues associated with chirped-pulse amplification in optical fiber amplifier systems.

  10. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
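
    The subtraction argument can be made concrete with elementary variance algebra (our notation, not the paper's): if \bar{c}_1 and \bar{c}_2 are the transversely averaged concentrations measured at the two ends of the detector, the noise in their difference is

    \mathrm{Var}(\bar{c}_1 - \bar{c}_2) = \mathrm{Var}(\bar{c}_1) + \mathrm{Var}(\bar{c}_2) - 2\,\mathrm{Cov}(\bar{c}_1, \bar{c}_2).

    Transverse averaging shrinks the two variance terms, which helps, but it also shrinks the covariance term, whose negative sign was suppressing the noise; per the abstract, for the LEGI model the covariance loss dominates and precision falls.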

  11. Time-dependence and averaging techniques in atomic photoionization calculations

    International Nuclear Information System (INIS)

    Scheibner, K.F.

    1984-01-01

    Two distinct problems in the development and application of averaging techniques to photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. Effects of the inclusion of the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable-state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. The second part of the thesis develops techniques for calculating averaged cross sections directly, without first calculating a detailed cross section as an intermediate step. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium.

  12. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  13. Certified randomness in quantum physics.

    Science.gov (United States)

    Acín, Antonio; Masanes, Lluis

    2016-12-07

    The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.

  14. Occurrence and average behavior of pulsating aurora

    Science.gov (United States)

    Partamies, N.; Whiter, D.; Kadokura, A.; Kauristie, K.; Nesse Tyssøy, H.; Massetti, S.; Stauning, P.; Raita, T.

    2017-05-01

    Motivated by recent event studies and modeling efforts on pulsating aurora, which conclude that the precipitation energy during these events is high enough to cause significant chemical changes in the mesosphere, this study looks for the bulk behavior of auroral pulsations. Based on about 400 pulsating aurora events, we outline the typical duration, geomagnetic conditions, and change in the peak emission height for the events. We show that the auroral peak emission height for both green and blue emission decreases by about 8 km at the start of the pulsating aurora interval. This brings the hardest 10% of the electrons down to about 90 km altitude. The median duration of pulsating aurora is about 1.4 h. This value is a conservative estimate, since in many cases the end of an event is limited by the end of auroral imaging for the night or by the aurora drifting out of the camera field of view. The longest durations of auroral pulsations are observed during events which start within substorm recovery phases. As a result, geomagnetic indices are not able to describe pulsating aurora. Simultaneous Antarctic auroral images were found for 10 pulsating aurora events. In eight cases auroral pulsations were seen in the southern hemispheric data as well, suggesting an equatorial precipitation source and frequent interhemispheric occurrence. The long lifetimes of pulsating aurora, their interhemispheric occurrence, and the relatively high precipitation energies make this type of aurora an effective energy deposition process that is easy to identify from ground-based image data.

  15. To quantum averages through asymptotic expansion of classical averages on infinite-dimensional space

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2007-01-01

    We study asymptotic expansions of Gaussian integrals of analytic functionals on infinite-dimensional spaces (Hilbert and nuclear Fréchet). We obtain an asymptotic equality coupling the Gaussian integral and the trace of the composition of a scaling of the covariation operator of a Gaussian measure and the second (Fréchet) derivative of a functional. In this way we couple the classical average (given by an infinite-dimensional Gaussian integral) and the quantum average (given by the von Neumann trace formula). We can interpret this mathematical construction as a procedure of 'dequantization' of quantum mechanics. We represent quantum mechanics as an asymptotic projection of classical statistical mechanics with infinite-dimensional phase space. This space can be represented as the space of classical fields, so quantum mechanics is represented as a projection of 'prequantum classical statistical field theory'.
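
    One plausible transcription of the asymptotic equality described above (the evaluation point and scaling are our assumptions): for a mean-zero Gaussian measure \mu with covariance operator B and an analytic functional f,

    \int f(\phi)\, \mathrm{d}\mu(\phi) = f(0) + \tfrac{1}{2}\, \mathrm{Tr}\left[ B\, f''(0) \right] + \text{higher-order terms},

    so the leading fluctuation term already has the von Neumann form \mathrm{Tr}[\rho \hat{A}] once \rho is identified with a normalized scaling of B and \hat{A} with the second Fréchet derivative.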

  16. Quantum random number generator

    Science.gov (United States)

    Soubusta, Jan; Haderka, Ondrej; Hendrych, Martin

    2001-03-01

    Since the reflection or transmission of a quantum particle at a beamsplitter is an inherently random quantum process, a device built on this principle suffers from the drawbacks of neither pseudo-random computer generators nor classical noise sources. Nevertheless, a number of physical conditions must be satisfied for high-quality random number generation. Luckily, in a quantum optics realization they can be well controlled. We present a simple random number generator based on the division of weak light pulses at a beamsplitter. The randomness of the generated bit stream is supported by passing the data through a series of 15 statistical tests. The device generates random numbers at a rate of 109.7 kbit/s.
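
    A toy simulation of the principle (our sketch; the pulse statistics and the 50:50 split are assumptions, not the authors' exact setup):

```python
# Weak coherent pulses carry a Poisson-distributed photon number; empty
# pulses produce no detector click and are discarded. Each detected
# pulse exits the 50:50 beamsplitter toward one of two detectors, and
# the detector identity is the raw random bit.
import numpy as np

rng = np.random.default_rng(7)        # classical stand-in for quantum noise
n_pulses = 1000
photons = rng.poisson(0.1, n_pulses)  # mean photon number << 1
detected = photons > 0                # pulses that actually click
bits = rng.integers(0, 2, size=int(detected.sum()))
print(f"{int(detected.sum())} bits from {n_pulses} pulses:",
      "".join(map(str, bits[:32])))
```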

  17. The impact of Cognitive Processing Therapy on stigma among survivors of sexual violence in eastern Democratic Republic of Congo: results from a cluster randomized controlled trial.

    Science.gov (United States)

    Murray, S M; Augustinavicius, J; Kaysen, D; Rao, D; Murray, L K; Wachter, K; Annan, J; Falb, K; Bolton, P; Bass, J K

    2018-01-01

    Sexual violence is associated with a multitude of poor physical, emotional, and social outcomes. Despite reports of stigma by sexual violence survivors, limited evidence exists on effective strategies to reduce stigma, particularly in conflict-affected settings. We sought to assess the effect of group Cognitive Processing Therapy (CPT) on stigma and the extent to which stigma might moderate the effectiveness of CPT in treating mental health problems among survivors of sexual violence in the Democratic Republic of Congo. Data were drawn from 405 adult female survivors of sexual violence reporting mental distress and poor functioning in North and South Kivu. Women were recruited through organizations providing psychosocial support and then cluster randomized to group CPT or individual support. Women were assessed at baseline, at the end of treatment, and again six months later. Assessors were masked to women's treatment assignment. Linear mixed-effects regression models were used to estimate (1) the effect of CPT on feelings of perceived and internalized (felt) stigma, and (2) whether felt stigma and discrimination (enacted stigma) moderated the effects of CPT on combined depression and anxiety symptoms, posttraumatic stress, and functional impairment. Participants receiving CPT experienced moderate reductions in felt stigma relative to those in individual support (Cohen's d = 0.44, p = 0.02) following the end of treatment, though this difference was no longer significant six months later (Cohen's d = 0.45, p = 0.12). Neither felt nor enacted stigma significantly moderated the effect of CPT on mental health symptoms or functional impairment. Group cognitive-behavioral therapies may be an effective stigma-reduction tool for survivors of sexual violence. Experiences and perceptions of stigma did not hinder the therapeutic effects of group psychotherapy on survivors' mental health. ClinicalTrials.gov NCT01385163.

  18. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
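
    A minimal sketch of the procedure (our illustration; the arm labels, block sizes, and seeding are assumptions):

```python
# Blocked randomization with randomly selected block sizes: each block
# is balanced between the two arms, and the varying block length keeps
# the next assignment unpredictable even for an unblinded investigator.
import random

def blocked_randomization(n_participants, block_sizes=(2, 4, 6), seed=42):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        size = rng.choice(block_sizes)              # random block size
        block = ["T"] * (size // 2) + ["C"] * (size // 2)
        rng.shuffle(block)                          # balance within block
        allocation.extend(block)
    return allocation[:n_participants]

print(blocked_randomization(20))
```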

  19. Random vibrations theory and practice

    CERN Document Server

    Wirsching, Paul H; Ortiz, Keith

    1995-01-01

    Random Vibrations: Theory and Practice covers the theory and analysis of mechanical and structural systems undergoing random oscillations due to any number of phenomena— from engine noise, turbulent flow, and acoustic noise to wind, ocean waves, earthquakes, and rough pavement. For systems operating in such environments, a random vibration analysis is essential to the safety and reliability of the system. By far the most comprehensive text available on random vibrations, Random Vibrations: Theory and Practice is designed for readers who are new to the subject as well as those who are familiar with the fundamentals and wish to study a particular topic or use the text as an authoritative reference. It is divided into three major sections: fundamental background, random vibration development and applications to design, and random signal analysis. Introductory chapters cover topics in probability, statistics, and random processes that prepare the reader for the development of the theory of random vibrations a...

  20. An Exponentially Weighted Moving Average Control Chart for Bernoulli Data

    DEFF Research Database (Denmark)

    Spliid, Henrik

    2010-01-01

    We consider a production process in which units are produced in a sequential manner. The units can, for example, be manufactured items or services provided to clients. Each unit produced can be a failure with probability p or a success (non-failure) with probability (1-p). A novel exponentially weighted moving average (EWMA) control chart intended for surveillance of the probability of failure, p, is described. The chart is based on counting the number of non-failures produced between failures in combination with a variance-stabilizing transformation. The distribution function of the transformation is given and its limit for small values of p is derived. Control of high-yield processes is discussed, and the chart is shown to perform very well in comparison with both the most common alternative EWMA chart and the CUSUM chart. The construction and the use of the proposed EWMA chart…
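
    A minimal sketch of the chart's core recursion (our construction: geometric gap counts and a square-root transform stand in for the paper's data model and for its variance-stabilizing transformation, which is not specified here):

```python
# EWMA over transformed "gap" counts, i.e. the number of non-failures
# observed between consecutive failures. A falling statistic would
# signal that the failure probability p has increased.
import numpy as np

def ewma(values, lam=0.1):
    """z_t = lam * x_t + (1 - lam) * z_{t-1}, seeded at the first value."""
    z, path = values[0], []
    for x in values:
        z = lam * x + (1 - lam) * z
        path.append(z)
    return np.array(path)

rng = np.random.default_rng(1)
gaps = rng.geometric(p=0.01, size=200)   # non-failures between failures
z = ewma(np.sqrt(gaps))                  # transformed, smoothed statistic
```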