WorldWideScience

Sample records for random average process

  1. Matrix product approach for the asymmetric random average process

    International Nuclear Information System (INIS)

    Zielen, F; Schadschneider, A

    2003-01-01

    We consider the asymmetric random average process, which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to the continuous state variables, we obtain a matrix algebra in the form of a functional equation which can be solved exactly.
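
    A minimal simulation sketch of such a process (a hedged illustration, not the paper's construction: it assumes the common parallel-update variant in which every site donates a beta-distributed fraction of its continuous mass to its right neighbour; lattice size, sweep count and beta parameters are arbitrary):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    L, T = 200, 5000                   # lattice sites and update sweeps (illustrative)
    a, b = 2.0, 2.0                    # parameters of the beta-distributed fraction
    m = rng.exponential(1.0, size=L)   # continuous, unbounded state variables

    for _ in range(T):
        # parallel update: site i sends a beta-distributed fraction of its
        # mass to its right neighbour i+1 (periodic boundary conditions)
        r = rng.beta(a, b, size=L)
        out = r * m
        m = m - out + np.roll(out, 1)

    print("total mass (conserved):", m.sum())
    print("single-site variance:  ", m.var())
    ```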

  2. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive average β-beating is expected from random errors, scaling quadratically with the source strengths or, equivalently, with the rms β-beating. Random errors, however, have no systematic effect on the tune.

  3. Average subentropy, coherence and entanglement of random mixed quantum states

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lin, E-mail: godyalin@163.com [Institute of Mathematics, Hangzhou Dianzi University, Hangzhou 310018 (China); Singh, Uttam, E-mail: uttamsingh@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India); Pati, Arun K., E-mail: akpati@hri.res.in [Harish-Chandra Research Institute, Allahabad, 211019 (India)

    2017-02-15

    Compact expressions for the average subentropy and coherence are obtained for random mixed states that are generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the induced measure via partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states as a resource of quantum coherence in higher dimensions: the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and the distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby hugely reducing the complexity of computing these entanglement measures for this specific class of mixed states.
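
    A small Monte Carlo sketch of the quantity discussed above: it samples mixed states from the induced measure (partial trace of random bipartite pure states) and estimates the average relative entropy of coherence C_r(ρ) = S(ρ_diag) − S(ρ); the dimensions and sample counts are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def random_mixed_state(d, d_env):
        """Induced measure: partial-trace a random bipartite pure state."""
        G = rng.normal(size=(d, d_env)) + 1j * rng.normal(size=(d, d_env))
        psi = G / np.linalg.norm(G)      # pure state on C^d (x) C^d_env
        return psi @ psi.conj().T        # reduced d x d density matrix

    def entropy(p):
        p = p[p > 1e-15]
        return float(-(p * np.log(p)).sum())

    def coherence(rho):
        # relative entropy of coherence: S(diag(rho)) - S(rho)
        return entropy(np.real(np.diag(rho))) - entropy(np.linalg.eigvalsh(rho))

    for d in (4, 16, 64):
        c = np.mean([coherence(random_mixed_state(d, d)) for _ in range(200)])
        print("d =", d, " average coherence ~", round(c, 3))
    ```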

  4. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu, Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  5. Exactly averaged equations for flow and transport in random media

    International Nuclear Information System (INIS)

    Shvidler, Mark; Karasaki, Kenzi

    2001-01-01

    It is well known that exact averaging of the equations of flow and transport in random porous media can be realized only for a small number of special, occasionally exotic, fields. On the other hand, the properties of approximate averaging methods are not yet fully understood -- for example, the convergence behaviour and accuracy of truncated perturbation series, while the calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do exact, general and sufficiently universal forms of averaged equations exist? If the answer is positive, there arises the problem of constructing these equations and analyzing them. There exist many publications related to these problems and oriented to different applications: hydrodynamics, flow and transport in porous media, theory of elasticity, acoustic and electromagnetic waves in random fields, etc. We present a method of finding the general form of exactly averaged equations for flow and transport in random fields by using (1) an assumption of the existence of Green's functions for the appropriate stochastic problems, (2) some general properties of the Green's functions, and (3) some basic information about the random fields of conductivity, porosity and flow velocity. We present a general form of the exactly averaged non-local equations for the following cases: (1) steady-state flow with sources in porous media with random conductivity; (2) transient flow with sources in compressible media with random conductivity and porosity; (3) non-reactive solute transport in random porous media. We discuss the problem of uniqueness and the properties of the non-local averaged equations for cases with some types of symmetry (isotropic, transversally isotropic, orthotropic), and we analyze a hypothesis about the structure of the non-local equations in the general case of stochastically homogeneous fields. (author)

  6. The average inter-crossing number of equilateral random walks and polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Stasiak, A

    2005-01-01

    In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is a = 3 ln 2/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation result shows that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) still approaches infinity as the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.

  7. The average crossing number of equilateral random polygons

    International Nuclear Information System (INIS)

    Diao, Y; Dobay, A; Kusner, R B; Millett, K; Stasiak, A

    2003-01-01

    In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number ⟨ACN⟩ of all equilateral random walks of length n is of the form (3/16)n ln n + O(n). A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the ⟨ACN⟩ for each knot type K can be described by a function of the form ⟨ACN⟩ = a(n−n₀)ln(n−n₀) + b(n−n₀) + c, where a, b and c are constants depending on K and n₀ is the minimal number of segments required to form K. The ⟨ACN⟩ profiles diverge from each other, with more complex knots showing higher ⟨ACN⟩ than less complex knots. Moreover, the profiles intersect with the ⟨ACN⟩ profile of all closed walks. These points of intersection define the equilibrium length of K, i.e., the chain length n_e(K) at which a statistical ensemble of configurations with given knot type K -- upon cutting, equilibration and reclosure to a new knot type K' -- does not show a tendency to increase or decrease ⟨ACN⟩. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration ⟨Rg⟩.
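
    For illustration, the functional form quoted above can be fitted with a standard least-squares routine; the data below are synthetic stand-ins for simulated ⟨ACN⟩ values, and n₀ = 6 is a hypothetical minimal segment number, not a value from the paper:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    n0 = 6  # assumed minimal number of segments required to form the knot

    def acn_model(n, a, b, c):
        x = n - n0
        return a * x * np.log(x) + b * x + c

    # synthetic stand-in for simulated <ACN> data of one knot type
    rng = np.random.default_rng(2)
    n = np.arange(50, 1050, 50, dtype=float)
    y = acn_model(n, 3 / 16, 0.2, 1.0) + rng.normal(0, 0.5, n.size)

    (a, b, c), _ = curve_fit(acn_model, n, y, p0=(0.2, 0.1, 0.0))
    print("a = %.4f   b = %.3f   c = %.3f" % (a, b, c))
    ```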

  8. Effect of random edge failure on the average path length

    Energy Technology Data Exchange (ETDEWEB)

    Guo Dongchao; Liang Mangui; Li Dandan; Jiang Zhongyuan, E-mail: mgliang58@gmail.com, E-mail: 08112070@bjtu.edu.cn [Institute of Information Science, Beijing Jiaotong University, 100044, Beijing (China)

    2011-10-14

    We study the effect of random removal of edges on the average path length (APL) in a large class of uncorrelated random networks in which vertices are characterized by hidden variables controlling the attachment of edges between pairs of vertices. A formula for approximating the APL of networks subject to random edge removal is derived first. The formula is then confirmed by simulations for classical ER (Erdős and Rényi) random graphs, BA (Barabási and Albert) networks, networks with exponential degree distributions, as well as random networks with asymptotic power-law degree distributions with exponent α > 2. (paper)
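
    A quick numerical check of the setting described above (a sketch, not the paper's analytical formula): it measures the average path length of the giant component of an ER graph as a growing fraction of randomly chosen edges is removed; the graph size and density are illustrative:

    ```python
    import random
    import networkx as nx

    random.seed(3)
    G = nx.erdos_renyi_graph(n=500, p=0.02, seed=3)

    def apl_after_removal(G, f):
        """APL of the giant component after removing a fraction f of edges."""
        H = G.copy()
        edges = list(H.edges())
        H.remove_edges_from(random.sample(edges, int(f * len(edges))))
        giant = H.subgraph(max(nx.connected_components(H), key=len))
        return nx.average_shortest_path_length(giant)

    for f in (0.0, 0.2, 0.4, 0.6):
        print("removed %.0f%%  APL = %.3f" % (100 * f, apl_after_removal(G, f)))
    ```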

  9. Free Energy Self-Averaging in Protein-Sized Random Heteropolymers

    International Nuclear Information System (INIS)

    Chuang, Jeffrey; Grosberg, Alexander Yu.; Kardar, Mehran

    2001-01-01

    Current theories of heteropolymers are inherently macroscopic, but are applied to mesoscopic proteins. To compute the free energy over sequences, one assumes self-averaging -- a property established only in the macroscopic limit. By enumerating the states and energies of compact 18-, 27-, and 36-mers on a lattice with an ensemble of random sequences, we test the self-averaging approximation. We find that fluctuations in the free energy between sequences are weak, and that self-averaging is valid at the scale of real proteins. The results validate sequence design methods which exponentially speed up computational design and simplify experimental realizations.

  10. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.

  11. The Initial Regression Statistical Characteristics of Intervals Between Zeros of Random Processes

    Directory of Open Access Journals (Sweden)

    V. K. Hohlov

    2014-01-01

    The article substantiates initial regression statistical characteristics of the intervals between zeros of realized random processes and studies the properties that allow these features to be used in autonomous information systems (AIS) of near location (NL). Coefficients of initial regression (CIR) that minimize the residual sum of squares of multiple initial regression forms are justified on the basis of vector representations, treating the analysed signal parameters as a random vector. It is shown that even in the absence of covariance, particular CIR make it possible to predict one random variable through another with respect to the deterministic components. The paper studies how the CIR of interval sizes between zeros of a narrowband, wide-sense stationary random process depend on its energy spectrum. Particular CIR for random processes with Gaussian and rectangular energy spectra are obtained. It is shown that the considered CIR do not depend on the average frequency of the spectra, are determined by the relative bandwidth of the energy spectra, and depend only weakly on the type of spectrum. These properties enable the use of CIR as an informative parameter when implementing temporal regression methods of signal processing that are invariant to the average rate and variance of the input realizations. We consider estimates of the average energy-spectrum frequency of a stationary random process obtained by calculating the length of the time interval corresponding to a specified number of intervals between zeros. It is shown that, as the relative bandwidth increases, the relative variance of this estimate ceases to depend on the particular process realization when more than ten intervals between zeros are processed. The obtained results can be used in AIS NL to solve the tasks of detection and signal recognition when a decision is made under unknown mathematical expectations on a limited observation interval.
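
    A small sketch of the last idea above: estimating the average energy-spectrum frequency of a narrowband process from the number of its zero crossings (a narrowband process crosses zero roughly 2·f₀ times per second); the synthetic filter and all parameters are illustrative, not taken from the article:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs, T = 2000.0, 20.0                  # sample rate [Hz] and duration [s]
    t = np.arange(0, T, 1 / fs)

    # narrowband stationary Gaussian process: white noise through a band filter
    f0, bw = 100.0, 10.0                  # centre frequency and bandwidth [Hz]
    spec = np.fft.rfft(rng.normal(size=t.size))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    spec[(freqs < f0 - bw / 2) | (freqs > f0 + bw / 2)] = 0.0
    x = np.fft.irfft(spec, n=t.size)

    # the average spectral frequency from the zero-crossing count
    crossings = np.count_nonzero(np.diff(np.signbit(x)))
    print("estimated f0:", crossings / (2 * T), "Hz (true value 100 Hz)")
    ```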

  12. Average size of random polygons with fixed knot topology.

    Science.gov (United States)

    Matsuda, Hiroshi; Yao, Akihisa; Tsukahara, Hiroshi; Deguchi, Tetsuo; Furuta, Ko; Inami, Takeo

    2003-07-01

    We have evaluated by numerical simulation the average size R(K) of random polygons of fixed knot topology K = 0₁, 3₁, 3₁#4₁, and we have confirmed the scaling law R²(K) ∼ N^(2ν(K)) for the number N of polygonal nodes in a wide range, N = 100–2200. The best fit gives 2ν(K) ≈ 1.11–1.16, with good fitting curves in the whole range of N. The estimate of 2ν(K) is consistent with the exponent of self-avoiding polygons. In a limited range of N (N ≳ 600), however, we have another fit with 2ν(K) ≈ 1.01–1.07, which is close to the exponent of random polygons.

  13. A note on moving average models for Gaussian random fields

    DEFF Research Database (Denmark)

    Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.

    The class of moving average models offers a flexible modeling framework for Gaussian random fields, with many well-known models, such as the Matérn covariance family and the Gaussian covariance, falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy basis, a general modeling framework which includes several types of non-Gaussian models. We propose a new one-parameter spatial correlation model which arises from a power kernel and show that the associated Hausdorff dimension of the sample paths can take any value between 2 and 3. As a result...
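
    A minimal sketch of the moving-average construction described above: a Gaussian random field obtained by kernel smoothing of white noise (the Gaussian special case of a Lévy basis), here with a power kernel; the kernel parameters and the crude discretisation are illustrative choices, not the paper's model:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(5)

    # white-noise basis on a 2-D grid
    noise = rng.normal(size=(256, 256))

    # power kernel k(r) = (1 + (r/s)^2)^(-beta); s and beta are illustrative
    s, beta = 5.0, 1.5
    y, x = np.mgrid[-32:33, -32:33]
    k = (1.0 + (x**2 + y**2) / s**2) ** (-beta)
    k /= np.sqrt((k**2).sum())               # normalise to unit field variance

    Z = fftconvolve(noise, k, mode="same")   # moving-average Gaussian field
    print("field std (approx. 1):", Z.std())
    ```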

  14. Probability, random variables, and random processes: theory and signal processing applications

    CERN Document Server

    Shynk, John J

    2012-01-01

    Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily of random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features: Several app

  15. Ergodic averages via dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2006-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain.

  16. Scaling behaviour of randomly alternating surface growth processes

    International Nuclear Information System (INIS)

    Raychaudhuri, Subhadip; Shapir, Yonathan

    2002-01-01

    The scaling properties of the roughness of surfaces grown by two different processes randomly alternating in time are addressed. The duration of each application of the two primary processes is assumed to be independently drawn from given distribution functions. We analytically address processes in which the two primary processes are linear and extend the conclusions to nonlinear processes as well. The growth scaling exponent of the average roughness with the number of applications is found to be determined by the long time tail of the distribution functions. For processes in which both mean application times are finite, the scaling behaviour follows that of the corresponding cyclical process in which the uniform application time of each primary process is given by its mean. If the distribution functions decay with a small enough power law for the mean application times to diverge, the growth exponent is found to depend continuously on this power-law exponent. In contrast, the roughness exponent does not depend on the timing of the applications. The analytical results are supported by numerical simulations of various pairs of primary processes and with different distribution functions. Self-affine surfaces grown by two randomly alternating processes are common in nature (e.g., due to randomly changing weather conditions) and in man-made devices such as rechargeable batteries

  17. Entanglement in random pure states: spectral density and average von Neumann entropy

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)

    2011-11-04

    Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
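
    A quick simulation in the same spirit, restricted to the unitary invariant class (the paper treats all three invariant classes): it samples random bipartite pure states, computes the Schmidt eigenvalues of the reduced density matrix, and compares the average von Neumann entropy with Page's asymptotic estimate ln m − m/(2n), used here as a convenient reference:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def schmidt_entropy(m, n):
        """von Neumann entropy of a random pure state on C^m (x) C^n, m <= n."""
        G = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
        psi = G / np.linalg.norm(G)
        lam = np.linalg.eigvalsh(psi @ psi.conj().T)   # Schmidt eigenvalues
        lam = lam[lam > 1e-15]
        return float(-(lam * np.log(lam)).sum())

    m, n, trials = 8, 8, 2000
    avg = np.mean([schmidt_entropy(m, n) for _ in range(trials)])
    print("simulated average entropy: %.4f" % avg)
    print("Page asymptotic estimate:  %.4f" % (np.log(m) - m / (2 * n)))
    ```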

  18. Studies in astronomical time series analysis: Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1979-01-01

    Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
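
    A compact sketch of the AR-to-MA workflow described above (Yule-Walker estimation of the AR coefficients, then inversion to the moving-average impulse response); this is a generic implementation, not the paper's FORTRAN algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # synthetic data from a known AR(2) process
    a_true = (0.75, -0.5)
    x = np.zeros(5000)
    for t in range(2, x.size):
        x[t] = a_true[0] * x[t-1] + a_true[1] * x[t-2] + rng.normal()

    # Yule-Walker estimate of the AR(p) coefficients
    p = 2
    r = np.array([x[:x.size - k] @ x[k:] / x.size for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a_hat = np.linalg.solve(R, r[1:])

    # AR -> MA: the MA coefficients are the impulse response of the AR filter
    h = np.zeros(20)
    h[0] = 1.0
    for t in range(1, h.size):
        h[t] = sum(a_hat[k] * h[t - 1 - k] for k in range(min(p, t)))

    print("AR estimate:", a_hat.round(3))
    print("first MA coefficients:", h[:6].round(3))
    ```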

  19. An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System

    Science.gov (United States)

    Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed

    PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a tiny, low-footprint, commercial real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed for efficiency and fairness on the average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is fairest and has the least AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
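
    A toy harness in the spirit of the comparison above (not the PicOS implementation): it measures AWT and ATT for first-come first-served, shortest-job-first and randomized selection under simultaneous arrivals; the burst-time distribution and job count are illustrative assumptions:

    ```python
    import random

    random.seed(8)

    def simulate(jobs, pick):
        """Non-preemptive scheduling of jobs that all arrive at t = 0.
        Returns (average waiting time, average turnaround time)."""
        queue, t, waits, turns = list(jobs), 0.0, [], []
        while queue:
            burst = queue.pop(pick(len(queue)))
            waits.append(t)      # waiting time = start time
            t += burst
            turns.append(t)      # turnaround time = completion time
        return sum(waits) / len(jobs), sum(turns) / len(jobs)

    jobs = [random.expovariate(1 / 10.0) for _ in range(1000)]  # burst times

    for name, res in [
        ("FCFS", simulate(jobs, lambda n: 0)),
        ("SJF ", simulate(sorted(jobs), lambda n: 0)),
        ("RAND", simulate(jobs, lambda n: random.randrange(n))),
    ]:
        print("%s  AWT = %8.1f   ATT = %8.1f" % ((name,) + res))
    ```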

  1. Average multiplications in deep inelastic processes and their interpretation

    International Nuclear Information System (INIS)

    Kiselev, A.V.; Petrov, V.A.

    1983-01-01

    Inclusive production of hadrons in deep inelastic processes is considered. It is shown that at high energies the jet evolution in deep inelastic processes is mainly of nonperturbative character. With the increase of the final hadron state energy, the leading contribution to the average multiplicity comes from a parton subprocess due to the production of massive quark and gluon jets and their further fragmentation, as the diquark contribution becomes less and less essential. The ratio of the total average multiplicity in deep inelastic processes to the average multiplicity in e⁺e⁻ annihilation tends to unity at high energies.

  2. Average thermal stress in the Al+SiC composite due to its manufacturing process

    International Nuclear Information System (INIS)

    Miranda, Carlos A.J.; Libardi, Rosani M.P.; Marcelino, Sergio; Boari, Zoroastro M.

    2013-01-01

    The numerical analysis framework used to obtain the average thermal stress in the Al+SiC composite due to its manufacturing process is presented along with the obtained results. The mixing of aluminium and SiC powders is done at elevated temperature, while the composite is used at room temperature; a thermal stress state arises in the composite due to the different thermal expansion coefficients of the two materials. Due to the particle size and the randomness of the SiC distribution, several sets of models were analyzed and a statistical procedure was used to evaluate the average stress state in the composite. In each model the particles' positions, forms and sizes are randomly generated considering a volumetric ratio (VR) between 20% and 25%, close to an actual composite. The obtained stress field is represented by a certain number of iso-stress curves, each one weighted by the area it represents. The influence of the following factors was investigated systematically: (a) the material behaviour, linear versus non-linear; (b) the carbide particle form, circular versus quadrilateral; (c) the number of iso-stress curves considered in each analysis; and (d) the model size (the number of particles). Each analyzed condition produced conclusions that guided the next step. Considering a confidence level of 95%, the average thermal stress value in the studied composite (20% ≤ VR ≤ 25%) is 175 MPa with a standard deviation of 10 MPa. Depending on its usage, this value should be taken into account when evaluating the material strength. (author)

  3. Asymptotic behaviour of time averages for non-ergodic Gaussian processes

    Science.gov (United States)

    Ślęzak, Jakub

    2017-08-01

    In this work, we study the behaviour of time averages for stationary (non-ageing) but ergodicity-breaking Gaussian processes using their representation in Fourier space. We provide explicit formulae for various time-averaged quantities, such as the mean square displacement and density, and analyse the behaviour of the time-averaged characteristic function, which gives insight into the rich memory structure of the studied processes. Moreover, we show applications of the ergodic criteria in Fourier space, determining the ergodicity of the generalised Langevin equation's solutions.

  4. Random processes in nuclear reactors

    CERN Document Server

    Williams, M M R

    1974-01-01

    Random Processes in Nuclear Reactors describes the problems that a nuclear engineer may meet which involve random fluctuations, and sets out in detail how they may be interpreted in terms of various models of the reactor system. Chapters discuss the origins of random processes and sources; the general technique for zero-power problems, bringing out the basic effects of fission and of fluctuations in the lifetime of neutrons on the measured response; the interpretation of power reactor noise; and associated problems connected with mechanical, hydraulic and thermal noise sources.

  5. A signal theoretic introduction to random processes

    CERN Document Server

    Howard, Roy M

    2015-01-01

    A fresh introduction to random processes utilizing signal theory By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features:  A coherent account of the mathematical fundamentals and signal theory that underpin the presented material Unique, in-depth coverage of

  6. A Campbell random process

    International Nuclear Information System (INIS)

    Reuss, J.D.; Misguich, J.H.

    1993-02-01

    The Campbell process is a stationary random process which can have various correlation functions, according to the choice of an elementary response function. The statistical properties of this process are presented. A numerical algorithm and a subroutine for generating such a process are built up and tested for the physically interesting case of a Campbell process with Gaussian correlations. The (non-Gaussian) probability distribution appears to be similar to the Gamma distribution.

  7. Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes

    Science.gov (United States)

    Orsingher, Enzo; Polito, Federico

    2012-08-01

    In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes N_α(t), N_β(t), t > 0, we have that N_α(N_β(t)) is equal in distribution to Σ_{j=1}^{N_β(t)} X_j, where the X_j are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form N_α(τ_k^ν), ν ∈ (0, 1], where τ_k^ν is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore, we study compositions of the form Θ(N(t)), t > 0, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.

  8. Elements of random walk and diffusion processes

    CERN Document Server

    Ibe, Oliver C

    2013-01-01

    Presents an important and unique introduction to random walk theory Random walk is a stochastic process that has proven to be a useful model in understanding discrete-state discrete-time processes across a wide spectrum of scientific disciplines. Elements of Random Walk and Diffusion Processes provides an interdisciplinary approach by including numerous practical examples and exercises with real-world applications in operations research, economics, engineering, and physics. Featuring an introduction to powerful and general techniques that are used in the application of physical and dynamic

  9. Accumulated damage evaluation for a piping system by the response factor on non-stationary random process, 2

    International Nuclear Information System (INIS)

    Shintani, Masanori

    1988-01-01

    This paper shows that the average and variance of the accumulated damage caused by earthquakes on a piping system attached to a building are related to the seismic response factor λ. The earthquakes referred to in this paper are of a non-stationary random process kind. The average is proportional to λ² and the variance to λ⁴. The analytical values of the average and variance for a single-degree-of-freedom system are compared with those obtained from computer simulations; here the model of the building is a single-degree-of-freedom system. Both averages of accumulated damage are approximately equal. The variance obtained from the analysis does not coincide with that from the simulations. The reason is considered to be the forced vibration caused by the sinusoidal components included in the random waves. Taking account of the amplitude magnification factor, the values of the variance approach those obtained from the simulations. (author)

  10. Averaging Principle for the Higher Order Nonlinear Schrödinger Equation with a Random Fast Oscillation

    Science.gov (United States)

    Gao, Peng

    2018-04-01

    This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a multiscale stochastic partial differential equation. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions we prove that there is a limit process in which the fast varying process is averaged out, and this limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.

  11. Studies in astronomical time series analysis. IV - Modeling chaotic and random processes with linear filters

    Science.gov (United States)

    Scargle, Jeffrey D.

    1990-01-01

    While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.

  12. Averaging processes in granular flows driven by gravity

    Science.gov (United States)

    Rossi, Giulia; Armanini, Aronne

    2016-04-01

    One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: the collisions among molecules are compared to the collisions among grains at a macroscopic scale [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) doesn't change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, for more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (which usually is the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists of the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental

  13. Convergence and approximate calculation of average degree under different network sizes for decreasing random birth-and-death networks

    Science.gov (United States)

    Long, Yin; Zhang, Xiao-Jun; Wang, Kui

    2018-05-01

    In this paper, the convergence and approximate calculation of the average degree under different network sizes for decreasing random birth-and-death networks (RBDNs) are studied. First, we find and demonstrate that the average degree converges in the form of a power law. Meanwhile, we discover that the ratios of later to earlier terms of the convergent remainder are independent of the network link number for large network sizes, and we theoretically prove that the limit of this ratio is a constant. Moreover, since it is difficult to calculate the analytical solution of the average degree for large network sizes, we adopt a numerical method to obtain an approximate expression of the average degree that approximates its analytical solution. Finally, simulations are presented to verify our theoretical results.

  14. Random Process Theory Approach to Geometric Heterogeneous Surfaces: Effective Fluid-Solid Interaction

    Science.gov (United States)

    Khlyupin, Aleksey; Aslyamov, Timur

    2017-06-01

    Realistic fluid-solid interaction potentials are essential in the description of confined fluids, especially in the case of geometrically heterogeneous surfaces. A correlated random field is considered as a model of a random surface with high geometric roughness. We provide a general theory of the effective coarse-grained fluid-solid potential, obtained by properly averaging the free energy of fluid molecules interacting with the solid medium. This procedure is largely based on the theory of random processes: we apply the first-passage-time probability problem and assume local Markov properties of the random surfaces. A general expression for the effective fluid-solid potential is obtained. In the case of small surface irregularities, an analytical approximation for the effective potential is proposed. Both amorphous materials with large surface roughness and crystalline solids with several types of fcc lattices are considered. It is shown that the wider the lattice spacing in terms of the molecular diameter of the fluid, the more the obtained potentials differ from the classical ones. A comparison with published Monte Carlo simulations is discussed. The work provides a promising approach to exploring how random geometric heterogeneity affects the thermodynamic properties of fluids.

  15. Modeling methane emission via the infinite moving average process

    Czech Academy of Sciences Publication Activity Database

    Jordanova, D.; Dušek, Jiří; Stehlík, M.

    2013-01-01

    Roč. 122, - (2013), s. 40-49 ISSN 0169-7439 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0073; GA ČR(CZ) GAP504/11/1151 Institutional support: RVO:67179843 Keywords : Environmental chemistry * Pareto tails * t-Hill estimator * Weak consistency * Moving average process * Methane emission model Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013

  16. Scattering analysis of point processes and random measures

    International Nuclear Information System (INIS)

    Hanisch, K.H.

    1984-01-01

    In the present paper scattering analysis of point processes and random measures is studied. Known formulae which connect the scattering intensity with the pair distribution function of the studied structures are proved in a rigorous manner with tools of the theory of point processes and random measures. For some special fibre processes the scattering intensity is computed. For a class of random measures, namely for 'grain-germ-models', a new formula is proved which yields the pair distribution function of the 'grain-germ-model' in terms of the pair distribution function of the underlying point process (the 'germs') and of the mean structure factor and the mean squared structure factor of the particles (the 'grains'). (author)

  17. A new mathematical process for the calculation of average forms of teeth.

    Science.gov (United States)

    Mehl, A; Blanz, V; Hickel, R

    2005-12-01

    Qualitative visual inspections and linear metric measurements have been predominant methods for describing the morphology of teeth. No quantitative formulation exists for the description of dental features. The aim of this study was to determine and validate a mathematical process for calculation of the average form of first maxillary molars, including the general occlusal features. Stone replicas of 174 caries-free first maxillary molar crowns from young patients ranging from 6 to 9 years of age were measured 3-dimensionally with a laser scanning system at a resolution of approximately 100,000 points. Then, the average tooth was computed, which captured the common features of the molar's surface quantitatively. This new method adapts algorithms both from computer science and neuroscience to detect and associate the same features and same surface points (correspondences) between 1 reference tooth and all other teeth. In this study, the method was tested for 7 different reference teeth. The algorithm does not involve any prior knowledge about teeth and their features. Irrespective of the reference tooth used, the procedure yielded average teeth that showed nearly no differences (less than ±30 µm). This approach provides a valid quantitative process for calculating 3-dimensional (3D) averages of occlusal surfaces of teeth even in the event of a high number of digitized surface points. Additionally, because this process detects and assigns point-wise feature correspondences between all library teeth, it may also serve as a basis for a more substantiated principal component analysis evaluating the main natural shape deviations from the 3D average.

  18. A Computerized Approach to Trickle-Process, Random Assignment.

    Science.gov (United States)

    Braucht, G. Nicholas; Reichardt, Charles S.

    1993-01-01

    Procedures for implementing random assignment with trickle processing and ways they can be corrupted are described. A computerized method for implementing random assignment with trickle processing is presented as a desirable alternative in many situations and a way of protecting against threats to assignment validity. (SLD)

  1. Fundamentals of applied probability and random processes

    CERN Document Server

    Ibe, Oliver

    2014-01-01

    The long-awaited revision of Fundamentals of Applied Probability and Random Processes expands on the central components that made the first edition a classic. The title is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability t

  2. A random matrix approach to VARMA processes

    International Nuclear Information System (INIS)

    Burda, Zdzislaw; Jarosz, Andrzej; Nowak, Maciej A; Snarska, Malgorzata

    2010-01-01

    We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q₁, q₂) processes. In particular, we consider a limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime, the underlying random matrices are asymptotically equivalent to free random variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1, 1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q₁ > 1 and q₂ > 1.
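
    A minimal numerical companion to the setting above: it draws an N×T sample from a simple VAR(1) process (independent AR(1) rows, a simplifying assumption) with the ratio N/T fixed, and computes the eigenvalues of the sample covariance matrix whose density the FRV calculus predicts:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    N, T, a = 200, 1000, 0.5      # N variables, T measurements, AR(1) coefficient
    X = np.zeros((N, T))
    eps = rng.normal(size=(N, T))
    for t in range(1, T):
        X[:, t] = a * X[:, t-1] + eps[:, t]

    C = X @ X.T / T               # sample covariance matrix
    ev = np.linalg.eigvalsh(C)
    print("N/T = %.2f   eigenvalues in [%.3f, %.3f]" % (N / T, ev.min(), ev.max()))
    ```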

  3. On the joint statistics of stable random processes

    International Nuclear Information System (INIS)

    Hopcraft, K I; Jakeman, E

    2011-01-01

    A utilitarian continuous bi-variate random process whose first-order probability density function is a stable random variable is constructed. Results paralleling some of those familiar from the theory of Gaussian noise are derived. In addition to the joint-probability density for the process, these include fractional moments and structure functions. Although the correlation functions for stable processes other than Gaussian do not exist, we show that there is coherence between values adopted by the process at different times, which identifies a characteristic evolution with time. The distribution of the derivative of the process, and the joint-density function of the value of the process and its derivative measured at the same time are evaluated. These enable properties to be calculated analytically such as level crossing statistics and those related to the random telegraph wave. When the stable process is fractal, the proportion of time it spends at zero is finite and some properties of this quantity are evaluated, an optical interpretation for which is provided. (paper)

  4. Random-sign observables nonvanishing upon averaging: Enhancement of weak perturbations and parity nonconservation in compound nuclei

    International Nuclear Information System (INIS)

    Flambaum, V.V.; Gribakin, G.F.

    1994-01-01

    Weak perturbations can be strongly enhanced in many-body systems that have dense spectra of excited states (compound nuclei, rare-earth atoms, molecules, clusters, quantum dots, etc.). Statistical consideration shows that in the case of zero-width states the probability distribution for the effect of the perturbation has an infinite variance and does not obey the standard central limit theorem, i.e., the probability density for the average effect X = (1/n) Σ_{i=1}^{n} x_i does not tend to a Gaussian (normal) distribution with variance σ_n = σ_1/√n, where n is the ''number of measurements''. Since for probability densities of this form [f(x) ≅ a/x² at large x] the limiting distribution of the average is F_n(X) = a/(X² + π²a²) at n ≫ 1, the breadth of the distribution does not decrease as n increases. This means the following. (1) In spite of the random signs of the observable effects for different compound states, the probability of finding a large average effect for n levels is the same as that for a single-resonance measurement. (2) In some cases one does not need to resolve individual compound resonances, and the enhanced value of the effect can be observed in the integral spectrum. This substantially increases the chances of observing statistical enhancement of weak perturbations in different reactions and systems. (3) The average value of parity- and time-nonconserving effects in low-energy nucleon scattering cannot be described by a smooth weak optical potential. This ''potential'' would randomly fluctuate as a function of energy, with typical magnitudes much larger than the nucleon-nucleus weak potential. The effect of finite compound-state widths is considered.
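
    The non-shrinking average is easy to reproduce numerically: standard Cauchy variables have exactly the a/x² tail discussed above, and the width of the n-sample average does not decrease with n (sample sizes are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # standard Cauchy samples: density ~ 1/(pi x^2) in the tails, infinite variance
    samples = rng.standard_cauchy((100_000, 100))

    one = samples[:, 0]            # single measurements
    avg = samples.mean(axis=1)     # averages X = (1/n) sum x_i with n = 100

    # interquartile range as a robust width measure (the variance does not exist)
    iqr = lambda z: np.subtract(*np.percentile(z, [75, 25]))
    print("width of a single measurement: %.3f" % iqr(one))
    print("width of the 100-term average: %.3f" % iqr(avg))   # the same width
    ```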

  5. Discrete random signal processing and filtering primer with Matlab

    CERN Document Server

    Poularikas, Alexander D

    2013-01-01

    Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering.Numerous Useful Examples, Problems, and Solutions - An Extensive and Powerful ReviewWritten for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills. The author, a leading expert in the field of electrical and computer engineering, offe

  6. Pseudo random signal processing theory and application

    CERN Document Server

    Zepernick, Hans-Jurgen

    2013-01-01

    In recent years, pseudo random signal processing has proven to be a critical enabler of modern communication, information, security and measurement systems. The signal's pseudo random, noise-like properties make it vitally important as a tool for protecting against interference, alleviating multipath propagation and allowing the potential of sharing bandwidth with other users. Taking a practical approach to the topic, this text provides a comprehensive and systematic guide to understanding and using pseudo random signals. Covering theoretical principles, design methodologies and applications

  7. Original article Functioning of memory and attention processes in children with intelligence below average

    Directory of Open Access Journals (Sweden)

    Aneta Rita Borkowska

    2014-05-01

    BACKGROUND The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors' experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average in terms of concentration, work pace, efficiency and perception.

  8. Level sets and extrema of random processes and fields

    CERN Document Server

    Azais, Jean-Marc

    2009-01-01

    A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...

  9. Solution-Processed Carbon Nanotube True Random Number Generator.

    Science.gov (United States)

    Gaviria Rojas, William A; McMorrow, Julian J; Geier, Michael L; Tang, Qianying; Kim, Chris H; Marks, Tobin J; Hersam, Mark C

    2017-08-09

    With the growing adoption of interconnected electronic devices in consumer and industrial applications, there is an increasing demand for robust security protocols when transmitting and receiving sensitive data. Toward this end, hardware true random number generators (TRNGs), commonly used to create encryption keys, offer significant advantages over software pseudorandom number generators. However, the vast network of devices and sensors envisioned for the "Internet of Things" will require small, low-cost, and mechanically flexible TRNGs with low computational complexity. These rigorous constraints position solution-processed semiconducting single-walled carbon nanotubes (SWCNTs) as leading candidates for next-generation security devices. Here, we demonstrate the first TRNG using static random access memory (SRAM) cells based on solution-processed SWCNTs that digitize thermal noise to generate random bits. This bit generation strategy can be readily implemented in hardware with minimal transistor and computational overhead, resulting in an output stream that passes standardized statistical tests for randomness. By using solution-processed semiconducting SWCNTs in a low-power, complementary architecture to achieve TRNG, we demonstrate a promising approach for improving the security of printable and flexible electronics.
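
    A toy model of the bit-generation strategy described above (simulated in software, not the paper's SWCNT SRAM): cells with random offsets digitize thermal noise, and a von Neumann corrector, one standard debiasing choice assumed here, removes the per-cell bias from pairs of successive reads:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    cells = 4096
    offsets = rng.normal(0.0, 0.2, size=cells)   # per-cell mismatch -> biased cells

    # 1000 "power-ups": a cell resolves to 1 when its thermal noise exceeds its offset
    reads = (rng.normal(size=(1000, cells)) > offsets).astype(np.uint8)

    # von Neumann corrector on successive reads of the same cell:
    # 01 -> 0, 10 -> 1, 00/11 discarded; removes each cell's constant bias
    b0, b1 = reads[0::2], reads[1::2]
    bits = b0[b0 != b1]

    per_cell = reads.mean(axis=0)
    print("most biased cells read 1 with p = %.2f and %.2f" % (per_cell.min(), per_cell.max()))
    print("debiased stream bias: %.4f (%d bits kept)" % (bits.mean(), bits.size))
    ```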

  10. Levy flights and random searches

    Energy Technology Data Exchange (ETDEWEB)

    Raposo, E P [Laboratorio de Fisica Teorica e Computacional, Departamento de Fisica, Universidade Federal de Pernambuco, Recife-PE, 50670-901 (Brazil); Buldyrev, S V [Department of Physics, Yeshiva University, New York, 10033 (United States); Da Luz, M G E [Departamento de Fisica, Universidade Federal do Parana, Curitiba-PR, 81531-990 (Brazil); Viswanathan, G M [Instituto de Fisica, Universidade Federal de Alagoas, Maceio-AL, 57072-970 (Brazil); Stanley, H E [Center for Polymer Studies and Department of Physics, Boston University, Boston, MA 02215 (United States)

    2009-10-30

    In this work we discuss some recent contributions to the random search problem. Our analysis includes superdiffusive Lévy processes and correlated random walks in several regimes of target site density, mobility and revisitability. We present results in the context of mean-field-like and closed-form average calculations, as well as numerical simulations. We then consider random searches performed in regular lattices and lattices with defects, and we discuss a necessary criterion for distinguishing true superdiffusion from correlated random walk processes. We invoke energy considerations in relation to critical survival states on the edge of extinction, and we analyze the emergence of Lévy behavior in deterministic search walks. Finally, we comment on the random search problem in the context of biological foraging.
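
    A small simulation sketch of the random search problem discussed above: a two-dimensional searcher on a torus draws step lengths from a power law p(l) ~ l^(−μ) and we record the mean number of steps to the first target; detection only at step endpoints and every parameter value are simplifying assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    def steps_to_find(mu, n_targets=1000, size=1000.0, r=1.0, max_steps=5000):
        """Levy search on a torus; step lengths from p(l) ~ l^(-mu), l >= r."""
        targets = rng.uniform(0, size, (n_targets, 2))
        pos = np.array([size / 2, size / 2])
        for step in range(1, max_steps + 1):
            l = r * (1 - rng.random()) ** (-1 / (mu - 1))   # Pareto step length
            ang = rng.uniform(0, 2 * np.pi)
            pos = (pos + l * np.array([np.cos(ang), np.sin(ang)])) % size
            if np.min(np.hypot(*(targets - pos).T)) < r:    # found a target?
                return step
        return max_steps

    for mu in (1.5, 2.0, 2.5, 3.5):
        print("mu = %.1f   mean steps = %.0f" %
              (mu, np.mean([steps_to_find(mu) for _ in range(10)])))
    ```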

  11. Random Fields

    Science.gov (United States)

    Vanmarcke, Erik

    1983-03-01

    Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.

  12. Asymptotically optimum multialternative sequential procedures for discernment of processes minimizing average length of observations

    Science.gov (United States)

    Fishman, M. M.

    1985-01-01

    The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
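    The procedure sketched above accumulates likelihood-ratio statistics and stops at the first threshold crossing. For intuition, here is the classical two-hypothesis special case, Wald's sequential probability ratio test, in Python; this is a simplified analogue under Gaussian assumptions, not Fishman's multialternative procedure.

```python
import math
import random

def sprt_gaussian(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Wald's SPRT for the mean of Gaussian data with known sigma:
    accumulate the log-likelihood ratio and stop at the first crossing
    of the Wald bounds.  Returns (decision, number of observations)."""
    lower = math.log(beta / (1.0 - alpha))
    upper = math.log((1.0 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log[p(x | H1) / p(x | H0)] for Gaussians with common sigma
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

data = (random.gauss(1.0, 1.0) for _ in range(10000))
print(sprt_gaussian(data))
```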

  14. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

    to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3 span bridge. By applying both methods...... that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation...

  16. Ergodic averages for monotone functions using upper and lower dominating processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Mengersen, Kerrie

    2007-01-01

    We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary...... Markov chain and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our...... methods are studied in detail for three models using Markov chain Monte Carlo methods and we also discuss various types of other models for which our methods apply....

  17. Probability, random processes, and ergodic properties

    CERN Document Server

    Gray, Robert M

    1988-01-01

    This book has been written for several reasons, not all of which are academic. This material was for many years the first half of a book in progress on information and ergodic theory. The intent was and is to provide a reasonably self-contained advanced treatment of measure theory, probability theory, and the theory of discrete time random processes with an emphasis on general alphabets and on ergodic and stationary properties of random processes that might be neither ergodic nor stationary. The intended audience was mathematically inclined engineering graduate students and visiting scholars who had not had formal courses in measure theoretic probability. Much of the material is familiar stuff for mathematicians, but many of the topics and results have not previously appeared in books. The original project grew too large and the first part contained much that would likely bore mathematicians and discourage them from the second part. Hence I finally followed the suggestion to separate the material and split...

  18. Dynamic Average Consensus and Consensusability of General Linear Multiagent Systems with Random Packet Dropout

    Directory of Open Access Journals (Sweden)

    Wen-Min Zhou

    2013-01-01

    This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout that happens during information exchange between agents. The packet dropout phenomenon is characterized as being a Bernoulli random process. A distributed consensus protocol with weighted graph is proposed to address the packet dropout phenomenon. Through introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on the control theory, the perturbation argument, and the matrix theory, the necessary and sufficient condition for MASs to reach mean-square consensus is derived in terms of stability of an array of low-dimensional matrices. Moreover, mean-square consensusable conditions with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
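    A toy version of this setting is easy to simulate: a standard discrete-time average-consensus iteration in which every directed link independently drops its packet as a Bernoulli process. The sketch below is illustrative and is not the paper's weighted protocol; note that with asymmetric drops the final agreement value can drift slightly from the exact initial average.

```python
import random

def consensus_with_dropout(x0, neighbors, p_drop=0.3, eps=0.2, n_rounds=300):
    """Synchronous consensus update x_i <- x_i + eps * sum_j (x_j - x_i),
    where each directed link (j -> i) independently loses its packet with
    probability p_drop, so agent i only uses the values it received."""
    x = list(x0)
    for _ in range(n_rounds):
        new_x = x[:]
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                if random.random() > p_drop:      # packet got through
                    new_x[i] += eps * (x[j] - x[i])
        x = new_x
    return x

# Ring of 5 agents with random initial states
neighbors = [[(i - 1) % 5, (i + 1) % 5] for i in range(5)]
x0 = [random.uniform(0.0, 10.0) for _ in range(5)]
xf = consensus_with_dropout(x0, neighbors)
print("initial average:", round(sum(x0) / 5, 3))
print("final states:   ", [round(v, 3) for v in xf])
```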

  19. Analytical expressions for conditional averages: A numerical test

    DEFF Research Database (Denmark)

    Pécseli, H.L.; Trulsen, J.

    1991-01-01

    Conditionally averaged random potential fluctuations are an important quantity for analyzing turbulent electrostatic plasma fluctuations. Experimentally, this averaging can be readily performed by sampling the fluctuations only when a certain condition is fulfilled at a reference position...

  20. Extracting gravitational waves induced by plasma turbulence in the early Universe through an averaging process

    International Nuclear Information System (INIS)

    Garrison, David; Ramirez, Christopher

    2017-01-01

    This work is a follow-up to the paper, ‘Numerical relativity as a tool for studying the early Universe’. In this article, we determine if cosmological gravitational waves can be accurately extracted from a dynamical spacetime using an averaging process as opposed to conventional methods of gravitational wave extraction using a complex Weyl scalar. We calculate the normalized energy density, strain and degree of polarization of gravitational waves produced by a simulated turbulent plasma similar to what was believed to have existed shortly after the electroweak scale. This calculation is completed using two numerical codes, one which utilizes full general relativity calculations based on modified BSSN equations while the other utilizes a linearized approximation of general relativity. Our results show that the spectrum of gravitational waves calculated from the nonlinear code using an averaging process is nearly indistinguishable from those calculated from the linear code. This result validates the use of the averaging process for gravitational wave extraction of cosmological systems. (paper)

  1. Efficient processing of CFRP with a picosecond laser with up to 1.4 kW average power

    Science.gov (United States)

    Onuseit, V.; Freitag, C.; Wiedenmann, M.; Weber, R.; Negel, J.-P.; Löscher, A.; Abdou Ahmed, M.; Graf, T.

    2015-03-01

    Laser processing of carbon fiber reinforced plastic (CFRP) is a very promising method for solving many of the challenges of large-volume production of lightweight constructions in the automotive and airplane industries. However, the laser process is currently limited by two main issues. First, quality may be reduced by thermal damage; second, the high process energy needed to sublimate the carbon fibers requires laser sources with high average power for productive processing. To keep the thermal damage of the CFRP below 10 μm, intensities above 10^8 W/cm² are needed. To reach these high intensities in the processing area, ultra-short pulse laser systems are favored. Unfortunately, the average power of commercially available laser systems has so far been in the range of several tens to a few hundred watts. Sublimating the carbon fibers requires a large volume-specific enthalpy of 85 J/mm³. This means, for example, that cutting 2 mm thick material with a kerf width of 0.2 mm at the industry-typical 100 mm/sec requires several kilowatts of average power. At the IFSW, a thin-disk multipass amplifier yielding a maximum average output power of 1100 W (300 kHz, 8 ps, 3.7 mJ) made it possible for the first time to process CFRP at this average power and pulse energy level with picosecond pulse duration. With this unique laser system, cutting of 2 mm thick CFRP at an effective average cutting speed of 150 mm/sec with thermal damage below 10 μm was demonstrated.
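    The "several kilowatts" figure follows directly from the quoted volume-specific enthalpy. As a rough sanity check using only the abstract's numbers (writing H_V for the volume-specific enthalpy, d for the material thickness, w for the kerf width, and v for the cutting speed):

```latex
P \approx H_V \, d \, w \, v
  = 85~\mathrm{J/mm^3} \times 2~\mathrm{mm} \times 0.2~\mathrm{mm} \times 100~\mathrm{mm/s}
  = 3.4~\mathrm{kW}
```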

  2. Renewal theory for perturbed random walks and similar processes

    CERN Document Server

    Iksanov, Alexander

    2016-01-01

    This book offers a detailed review of perturbed random walks, perpetuities, and random processes with immigration. Being of major importance in modern probability theory, both theoretical and applied, these objects have been used to model various phenomena in the natural sciences as well as in insurance and finance. The book also presents the many significant results and efficient techniques and methods that have been worked out in the last decade. The first chapter is devoted to perturbed random walks and discusses their asymptotic behavior and various functionals pertaining to them, including supremum and first-passage time. The second chapter examines perpetuities, presenting results on continuity of their distributions and the existence of moments, as well as weak convergence of divergent perpetuities. Focusing on random processes with immigration, the third chapter investigates the existence of moments, describes long-time behavior and discusses limit theorems, both with and without scaling. Chapters fou...

  3. A simple consensus algorithm for distributed averaging in random ...

    Indian Academy of Sciences (India)

    Random geographical networks are realistic models for wireless sensor ... work are cheap, unreliable, with limited computational power and limited .... signal xj from node j, j does not need to transmit its degree to i in order to let i compute.

  4. Melnikov processes and chaos in randomly perturbed dynamical systems

    Science.gov (United States)

    Yagasaki, Kazuyuki

    2018-07-01

    We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegenerate condition, no matter how small the random forcing terms are. This result is very contrasting to the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory for the Duffing oscillator subjected to the Ornstein–Uhlenbeck process parametrically.

  5. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    Science.gov (United States)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
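    The two time-domain model classes named above are easy to make concrete. The sketch below simulates an ARMA(1,1) process, combining an autoregressive term (memory of past values) with a moving-average term (memory of past shocks); the coefficients are arbitrary illustrative values, and this is not Scargle's FORTRAN algorithm.

```python
import random

def simulate_arma(n, phi=0.8, theta=0.5, sigma=1.0, seed=42):
    """Simulate x_t = phi * x_{t-1} + e_t + theta * e_{t-1} with
    Gaussian shocks e_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    x_prev = e_prev = 0.0
    series = []
    for _ in range(n):
        e = rng.gauss(0.0, sigma)
        x = phi * x_prev + e + theta * e_prev
        series.append(x)
        x_prev, e_prev = x, e
    return series

xs = simulate_arma(5000)
mean = sum(xs) / len(xs)
# Lag-1 autocorrelation as a quick check on the persistence of the process
num = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(len(xs) - 1))
den = sum((v - mean) ** 2 for v in xs)
print(f"lag-1 autocorrelation: {num / den:.3f}")
```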

  6. Advanced pulse oximeter signal processing technology compared to simple averaging. II. Effect on frequency of alarms in the postanesthesia care unit.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new pulse oximeter (Nellcor Symphony N-3000, Pleasanton, CA) with signal processing technique (Oxismart) on the incidence of false alarms in the postanesthesia care unit (PACU). Prospective study. Nonuniversity hospital. 603 consecutive ASA physical status I, II, and III patients recovering from general or regional anesthesia in the PACU. We compared the number of alarms produced by a recently developed "third"-generation pulse oximeter (Nellcor Symphony N-3000) with Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504, Waukesha, WI). Patients were randomly assigned to either a Nellcor pulse oximeter or a Criticare with the signal averaging time set at either 12 or 21 seconds. For each patient the number of false (artifact) alarms was counted. The Nellcor generated one false alarm in 199 patients and 36 (in 31 patients) "loss of pulse" alarms. The conventional pulse oximeter with the averaging time set at 12 seconds generated a total of 32 false alarms in 17 of 197 patients [compared with the Nellcor, relative risk (RR) 0.06, confidence interval (CI) 0.01 to 0.25] and a total of 172 "loss of pulse" alarms in 79 patients (RR 0.39, CI 0.28 to 0.55). The conventional pulse oximeter with the averaging time set at 21 seconds generated 12 false alarms in 11 of 207 patients (compared with the Nellcor, RR 0.09, CI 0.02 to 0.48) and a total of 204 "loss of pulse" alarms in 81 patients (RR 0.40, CI 0.28 to 0.56). The lower incidence of false alarms of the conventional pulse oximeter with the longest averaging time compared with the shorter averaging time did not reach statistical significance (false alarms RR 0.62, CI 0.3 to 1.27; "loss of pulse" alarms RR 0.98, CI 0.77 to 1.3). To date, this is the first report of a pulse oximeter that produced almost no false alarms in the PACU.

  7. Averaging, not internal noise, limits the development of coherent motion processing

    Directory of Open Access Journals (Sweden)

    Catherine Manning

    2014-10-01

    The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).

  8. Average stress in a Stokes suspension of disks

    NARCIS (Netherlands)

    Prosperetti, Andrea

    2004-01-01

    The ensemble-average velocity and pressure in an unbounded quasi-random suspension of disks (or aligned cylinders) are calculated in terms of average multipoles allowing for the possibility of spatial nonuniformities in the system. An expression for the stress due to the suspended particles is

  9. Money creation process in a random redistribution model

    Science.gov (United States)

    Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan

    2014-01-01

    In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.

  10. The random walk model of intrafraction movement

    International Nuclear Information System (INIS)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-01-01

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)

  11. The random walk model of intrafraction movement.

    Science.gov (United States)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-04-07

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model.
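    The square-root-of-time growth of the displacement standard deviation quoted as a stylized fact above is straightforward to reproduce numerically. A minimal sketch with independent Gaussian steps (illustrative parameters, not clinical values):

```python
import math
import random

def displacement_scaling(n_walks=2000, n_steps=100, step_sigma=0.1):
    """Simulate many 1D random walks and report the standard deviation of
    the displacement at several times; std/sqrt(t) should stay constant."""
    checkpoints = (25, 50, 100)
    samples = {t: [] for t in checkpoints}
    for _ in range(n_walks):
        pos = 0.0
        for t in range(1, n_steps + 1):
            pos += random.gauss(0.0, step_sigma)
            if t in samples:
                samples[t].append(pos)
    for t in checkpoints:
        std = math.sqrt(sum(p * p for p in samples[t]) / len(samples[t]))
        print(f"t={t:3d}  std={std:.3f}  std/sqrt(t)={std / math.sqrt(t):.4f}")

displacement_scaling()
```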

  12. Random walks on generalized Koch networks

    International Nuclear Information System (INIS)

    Sun, Weigang

    2013-01-01

    For deterministically growing networks, it is a theoretical challenge to determine the topological properties and dynamical processes. In this paper, we study random walks on generalized Koch networks with features that include an initial state that is a globally connected network of r nodes. In each step, every existing node produces m complete graphs. We then obtain the analytical expressions for first passage time (FPT), average return time (ART), i.e. the average of FPTs for random walks from node i to return to the starting point i for the first time, and average sending time (AST), defined as the average of FPTs from a hub node to all other nodes, excluding the hub itself with regard to network parameters m and r. For this family of Koch networks, the ART of the new emerging nodes is identical and increases with the parameters m or r. In addition, the AST of our networks grows with network size N as N ln N and also increases with parameter m. The results obtained in this paper are the generalizations of random walks for the original Koch network. (paper)

  13. Vacuum instability in a random electric field

    International Nuclear Information System (INIS)

    Krive, I.V.; Pastur, L.A.

    1984-01-01

    The reaction of the vacuum on an intense spatially homogeneous random electric field is investigated. It is shown that a stochastic electric field always causes a breakdown of the boson vacuum, and the number of pairs of particles which are created by the electric field increases exponentially in time. For the choice of potential field in the form of a dichotomic random process we find in explicit form the dependence of the average number of pairs of particles on the time of the action of the source of the stochastic field. For the fermion vacuum the average number of pairs of particles which are created by the field in the lowest order of perturbation theory in the amplitude of the random field is independent of time

  14. Multifractal detrended fluctuation analysis of analog random multiplicative processes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, L.B.M.; Vermelho, M.V.D. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil); Lyra, M.L. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)], E-mail: marcelo@if.ufal.br; Viswanathan, G.M. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)

    2009-09-15

    We investigate non-Gaussian statistical properties of stationary stochastic signals generated by an analog circuit that simulates a random multiplicative process with weak additive noise. The random noises are originated by thermal shot noise and avalanche processes, while the multiplicative process is generated by a fully analog circuit. The resulting signal describes stochastic time series of current interest in several areas such as turbulence, finance, biology and environment, which exhibit power-law distributions. Specifically, we study the correlation properties of the signal by employing a detrended fluctuation analysis and explore its multifractal nature. The singularity spectrum is obtained and analyzed as a function of the control circuit parameter that tunes the asymptotic power-law form of the probability distribution function.
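    For reference, the detrend-and-fluctuate procedure behind the analysis above can be stated compactly. The following is a bare-bones multifractal DFA sketch (first-order detrending, two illustrative q values), not the authors' analysis code; for uncorrelated Gaussian noise the fitted exponents h(q) should come out near 0.5.

```python
import numpy as np

def mfdfa(signal, scales, qs=(-2, 2), order=1):
    """Multifractal detrended fluctuation analysis, minimal version:
    integrate the signal, split into windows of each scale, remove a
    polynomial trend per window, and form q-th order fluctuation
    functions F_q(s); slopes of log F_q versus log s estimate h(q)."""
    profile = np.cumsum(signal - np.mean(signal))
    fq = {q: [] for q in qs}
    for s in scales:
        n_win = len(profile) // s
        t = np.arange(s)
        f2 = []                             # mean squared residual per window
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            resid = seg - np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean(resid ** 2))
        f2 = np.asarray(f2)
        for q in qs:                        # q = 0 would need a separate limit
            fq[q].append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
    return fq

x = np.random.default_rng(1).standard_normal(4096)
scales = [16, 32, 64, 128, 256]
for q, fs in mfdfa(x, scales).items():
    h = np.polyfit(np.log(scales), np.log(fs), 1)[0]
    print(f"q={q:+d}  h(q) ~ {h:.2f}")
```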

  15. Structure and Randomness of Continuous-Time, Discrete-Event Processes

    Science.gov (United States)

    Marzen, Sarah E.; Crutchfield, James P.

    2017-10-01

    Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.

  16. Provable quantum advantage in randomness processing

    OpenAIRE

    Dale, H; Jennings, D; Rudolph, T

    2015-01-01

    Quantum advantage is notoriously hard to find and even harder to prove. For example the class of functions computable with classical physics actually exactly coincides with the class computable quantum-mechanically. It is strongly believed, but not proven, that quantum computing provides exponential speed-up for a range of problems, such as factoring. Here we address a computational scenario of "randomness processing" in which quantum theory provably yields, not only resource reduction over c...

  17. A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process

    NARCIS (Netherlands)

    C.M. Hafner (Christian); M.J. McAleer (Michael)

    2014-01-01

    One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of

  18. UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS

    Data.gov (United States)

    National Aeronautics and Space Administration — UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS AMY MCGOVERN, TIMOTHY SUPINIE, DAVID JOHN GAGNE II, NATHANIEL TROUTMAN,...

  19. Optimal redundant systems for works with random processing time

    International Nuclear Information System (INIS)

    Chen, M.; Nakagawa, T.

    2013-01-01

    This paper studies the optimal redundant policies for a manufacturing system processing jobs with random working times. The redundant units of the parallel systems and standby systems are subject to stochastic failures during the continuous production process. First, a job consisting of only one work is considered for both redundant systems and the expected cost functions are obtained. Next, each redundant system with a random number of units is assumed for a single work. The expected cost functions and the optimal expected numbers of units are derived for redundant systems. Subsequently, the production processes of N tandem works are introduced for parallel and standby systems, and the expected cost functions are also summarized. Finally, the number of works is estimated by a Poisson distribution for the parallel and standby systems. Numerical examples are given to demonstrate the optimization problems of redundant systems

  20. Apparent scale correlations in a random multifractal process

    DEFF Research Database (Denmark)

    Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin

    2008-01-01

    We discuss various properties of a homogeneous random multifractal process, which are related to the issue of scale correlations. By design, the process has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based on a coarse......-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several puzzling empirical details...

  1. An application of reactor noise techniques to neutron transport problems in a random medium

    International Nuclear Information System (INIS)

    Sahni, D.C.

    1989-01-01

    Neutron transport problems in a random medium are considered by defining a joint Markov process describing the fluctuations of one neutron population and the random changes in the medium. Backward Chapman-Kolmogorov equations are derived which yield an adjoint transport equation for the average neutron density. It is shown that this average density also satisfies the direct transport equation as given by the phenomenological model. (author)

  2. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood-bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated from the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models

  3. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  4. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  5. A comparison of random walks in dependent random environments

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.; Kroese, Dirk

    We provide exact computations for the drift of random walks in dependent random environments, including k-dependent and moving average environments. We show how the drift can be characterized and evaluated using Perron–Frobenius theory. Comparing random walks in various dependent environments, we

  6. Spatial birth-and-death processes in random environment

    OpenAIRE

    Fernandez, Roberto; Ferrari, Pablo A.; Guerberoff, Gustavo R.

    2004-01-01

    We consider birth-and-death processes of objects (animals) defined in Z^d having unit death rates and random birth rates. For animals with uniformly bounded diameter we establish conditions on the rate distribution under which the following holds for almost all realizations of the birth rates: (i) the process is ergodic with at worst power-law time mixing; (ii) the unique invariant measure has exponential decay of (spatial) correlations; (iii) there exists a perfect-simulation algorit...

  7. Network formation determined by the diffusion process of random walkers

    International Nuclear Information System (INIS)

    Ikeda, Nobutoshi

    2008-01-01

    We studied the diffusion process of random walkers in networks formed by their traces. This model considers the rise and fall of links determined by the frequency of transports of random walkers. In order to examine the relation between the formed network and the diffusion process, a situation in which multiple random walkers start from the same vertex is investigated. The difference in the diffusion rate of random walkers, which depends on the dimension of the initial lattice, is very important for determining the time evolution of the networks. For example, complete subgraphs can be formed on a one-dimensional lattice while a graph with a power-law vertex degree distribution is formed on a two-dimensional lattice. We derived some formulae for predicting network changes for the 1D case, such as the time evolution of the size of nearly complete subgraphs and conditions for their collapse. The networks formed on the 2D lattice are characterized by the existence of clusters of highly connected vertices and their life time. As the life time of such clusters tends to be small, the exponent of the power-law distribution changes from γ ≅ 1-2 to γ ≅ 3

  8. Random covering of the circle: the configuration-space of the free deposition process

    Energy Technology Data Exchange (ETDEWEB)

    Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)

    2003-12-12

    Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard rod and packing constraints are both fulfilled and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting in selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
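    Because both coverage and overlap are determined entirely by the spacings between consecutive points, the configuration probabilities discussed above are easy to explore by Monte Carlo. A small illustrative sketch (the values of n and s are arbitrary; for the hard-rod case the exact probability (1 - ns)**(n - 1) gives a check):

```python
import random

def circle_config_stats(n, s, trials=20000):
    """Throw n uniform points on a circle of circumference 1 and append a
    clockwise arc of length s to each.  The circle is covered iff every
    gap between consecutive points is <= s; the rods avoid overlap (hard
    rods) iff every gap is >= s."""
    covered = hard_rod = 0
    for _ in range(trials):
        pts = sorted(random.random() for _ in range(n))
        gaps = [b - a for a, b in zip(pts, pts[1:])]
        gaps.append(1.0 - pts[-1] + pts[0])   # wrap-around gap
        covered += max(gaps) <= s
        hard_rod += min(gaps) >= s
    return covered / trials, hard_rod / trials

# Dilute, short rods: hard-rod probability near (1 - 5*0.1)**4 = 0.0625
print("n=5,  s=0.10:", circle_config_stats(5, 0.10))
# Dense, long rods: full coverage becomes the likely configuration instead
print("n=40, s=0.15:", circle_config_stats(40, 0.15))
```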

  9. Random Matrices for Information Processing – A Democratic Vision

    DEFF Research Database (Denmark)

    Cakmak, Burak

    The thesis studies three important applications of random matrices to information processing. Our main contribution is that we consider probabilistic systems involving more general random matrix ensembles than the classical ensembles with iid entries, i.e. models that account for statistical...... dependence between the entries. Specifically, the involved matrices are invariant or fulfill a certain asymptotic freeness condition as their dimensions grow to infinity. Informally speaking, all latent variables contribute to the system model in a democratic fashion – there are no preferred latent variables...

  10. Use of Play Therapy in Nursing Process: A Prospective Randomized Controlled Study.

    Science.gov (United States)

    Sezici, Emel; Ocakci, Ayse Ferda; Kadioglu, Hasibe

    2017-03-01

    Play therapy is a nursing intervention employed in multidisciplinary approaches to develop the social, emotional, and behavioral skills of children. In this study, we aim to determine the effects of play therapy on the social, emotional, and behavioral skills of pre-school children through the nursing process. A single-blind, prospective, randomized controlled study was undertaken. The design, conduct, and reporting of this study adhere to the Consolidated Standards of Reporting Trials (CONSORT) guidelines. The participants included 4- to 5-year-old kindergarten children with no oral or aural disabilities and parents who agreed to participate in the study. The Pre-school Child and Family Identification Form and the Social Competence and Behavior Evaluation Scale were used to gather data. Games from the play therapy literature addressing the nursing diagnoses determined after the preliminary test (fear, social disturbance, impaired social interactions, ineffective coping, anxiety) constituted the intervention of the study. There was no difference between the experimental and control groups in the children's average Anger-Aggression (AA), Social Competence (SC), and Anxiety-Withdrawal (AW) scores beforehand (t = 0.015, p = .988; t = 0.084, p = .933; t = 0.214, p = .831, respectively). The differences between the groups' average AA and SC scores were statistically significant in the post-test (t = 2.041, p = .045; t = 2.692, p = .009, respectively) and in the retests (t = 4.538, p = .000; t = 4.693, p = .000, respectively). For average AW scores, no statistical difference was found in the post-test (t = 0.700, p = .486), whereas in the retest a significant difference was identified (t = 5.839, p = .000). Play therapy helped pre-school children to improve their social, emotional, and behavioral skills. It also provided benefits for the children to decrease their fear and anxiety levels, to improve

  11. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove a bound on the expected runtime under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce...

  12. Fundamentals of applied probability and random processes

    CERN Document Server

    Ibe, Oliver

    2005-01-01

    This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study. * Good and solid introduction to probability theory and stochastic processes * Logically organized; writing is presented in a clear manner * Choice of topics is comprehensive within the area of probability * Ample homework problems are organized into chapter sections

  13. Increasing average period lengths by switching of robust chaos maps in finite precision

    Science.gov (United States)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits than simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
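    In the spirit of the experiments described above, one can check in a toy setting that random switching between chaotic maps in finite precision tends to postpone recurrence relative to iterating a single quantized map. The logistic and tent maps below are illustrative stand-ins, not the paper's Robust Chaos maps; with random switching the detected revisit is a recurrence time rather than a true period, and results vary with the grid.

```python
import random

PRECISION = 1e-5

def quantize(x):
    """Round the state onto a finite grid, emulating finite precision."""
    return round(x / PRECISION) * PRECISION

def steps_to_revisit(maps, switch, x0, max_iter=10**6):
    """Iterate quantized map(s) from x0 until some state is revisited and
    return the gap between the two visits (the cycle length when a single
    deterministic map is used)."""
    seen = {}
    x = quantize(x0)
    for t in range(max_iter):
        if x in seen:
            return t - seen[x]
        seen[x] = t
        f = random.choice(maps) if switch else maps[0]
        x = quantize(f(x))
    return max_iter

logistic = lambda x: 4.0 * x * (1.0 - x)
tent = lambda x: 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

for label, switch in (("single map      ", False), ("random switching", True)):
    runs = [steps_to_revisit([logistic, tent], switch, random.random())
            for _ in range(20)]
    print(f"{label} mean revisit gap ~ {sum(runs) / len(runs):.0f}")
```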

  14. Advances in randomized parallel computing

    CERN Document Server

    Rajasekaran, Sanguthevar

    1999-01-01

    The technique of randomization has been employed to solve numerous problems of computing both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: In the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at t...

  15. High-Performance Pseudo-Random Number Generation on Graphics Processing Units

    OpenAIRE

    Nandapalan, Nimalan; Brent, Richard P.; Murray, Lawrence M.; Rendell, Alistair

    2011-01-01

    This work considers the deployment of pseudo-random number generators (PRNGs) on graphics processing units (GPUs), developing an approach based on the xorgens generator to rapidly produce pseudo-random numbers of high statistical quality. The chosen algorithm has configurable state size and period, making it ideal for tuning to the GPU architecture. We present a comparison of both speed and statistical quality with other common parallel, GPU-based PRNGs, demonstrating favourable performance o...

  16. An empirical test of pseudo random number generators by means of an exponential decaying process

    International Nuclear Information System (INIS)

    Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A.; Mora F, L.E.

    2007-01-01

    Empirical tests of pseudo random number generators based on processes or physical models have been used successfully and are considered complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo random number generators commonly used in physics. (Author)
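    A minimal version of such a test, assuming nothing beyond a uniform generator: simulate per-step decays with probability p and compare the fitted decay rate against the exponential law N(t) = N0 (1 - p)**t. This sketch is illustrative and is not the statistical methodology of the paper.

```python
import math
import random

def simulate_decay(n0=100000, p=0.05, n_steps=60):
    """Each time step, every surviving nucleus decays independently with
    probability p; a good PRNG should reproduce N(t) ~ N0 * (1 - p)**t."""
    n, counts = n0, [n0]
    for _ in range(n_steps):
        n -= sum(1 for _ in range(n) if random.random() < p)
        counts.append(n)
    return counts

counts = simulate_decay()
ts = range(len(counts))
logs = [math.log(c) for c in counts]
m = len(counts)
# Least-squares slope of log N(t) against t gives the empirical decay rate
slope = (m * sum(t * l for t, l in zip(ts, logs)) - sum(ts) * sum(logs)) / \
        (m * sum(t * t for t in ts) - sum(ts) ** 2)
print(f"fitted decay rate : {-slope:.5f}")
print(f"expected -ln(1-p) : {-math.log(1 - 0.05):.5f}")
```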

  17. Generation and monitoring of a discrete stable random process

    CERN Document Server

    Hopcraft, K I; Matthews, J O

    2002-01-01

    A discrete stochastic process with stationary power law distribution is obtained from a death-multiple immigration population model. Emigrations from the population form a random series of events which are monitored by a counting process with finite-dynamic range and response time. It is shown that the power law behaviour of the population is manifested in the intermittent behaviour of the series of events. (letter to the editor)

  18. Order out of Randomness: Self-Organization Processes in Astrophysics

    Science.gov (United States)

    Aschwanden, Markus J.; Scholkmann, Felix; Béthune, William; Schmutz, Werner; Abramenko, Valentina; Cheung, Mark C. M.; Müller, Daniel; Benz, Arnold; Chernov, Guennadi; Kritsuk, Alexei G.; Scargle, Jeffrey D.; Melatos, Andrew; Wagoner, Robert V.; Trimble, Virginia; Green, William H.

    2018-03-01

    Self-organization is a property of dissipative nonlinear processes that are governed by a global driving force and a local positive feedback mechanism, which creates regular geometric and/or temporal patterns, and decreases the entropy locally, in contrast to random processes. Here we investigate for the first time a comprehensive number of (17) self-organization processes that operate in planetary physics, solar physics, stellar physics, galactic physics, and cosmology. Self-organizing systems create spontaneous "order out of randomness" during the evolution from an initially disordered system to an ordered quasi-stationary system, mostly by quasi-periodic limit-cycle dynamics, but also by harmonic (mechanical or gyromagnetic) resonances. The global driving force can be due to gravity, electromagnetic forces, mechanical forces (e.g., rotation or differential rotation), thermal pressure, or acceleration of nonthermal particles, while the positive feedback mechanism is often an instability, such as the magneto-rotational (Balbus-Hawley) instability, the convective (Rayleigh-Bénard) instability, turbulence, vortex attraction, magnetic reconnection, plasma condensation, or a loss-cone instability. Physical models of astrophysical self-organization processes require hydrodynamic, magneto-hydrodynamic (MHD), plasma, or N-body simulations. Analytical formulations of self-organizing systems generally involve coupled differential equations with limit-cycle solutions of the Lotka-Volterra or Hopf-bifurcation type.

  19. Neutron Transport in Finite Random Media with Pure-Triplet Scattering

    International Nuclear Information System (INIS)

    Sallaha, M.; Hendi, A.A.

    2008-01-01

    The solution of the one-speed neutron transport equation in a finite slab random medium with pure-triplet anisotropic scattering is studied. The stochastic medium is assumed to consist of two randomly mixed immiscible fluids. The cross section and the scattering kernel are treated as discrete random variables, which obey the same statistics as Markovian processes with exponential chord length statistics. The medium boundaries are considered to have specular reflectivities, with an angular-dependent externally incident flux. The deterministic solution is obtained by using the Pomraning-Eddington approximation. Numerical results are calculated for the average reflectivity and average transmissivity for different values of the single scattering albedo, varying the parameters which characterize the random medium. Compared with the results obtained by Adams et al. for isotropic scattering, which are based on the Monte Carlo technique, our results show good agreement.

  20. Random migration processes between two stochastic epidemic centers.

    Science.gov (United States)

    Sazonov, Igor; Kelbert, Mark; Gravenor, Michael B

    2016-04-01

    We consider the epidemic dynamics in stochastic interacting population centers coupled by random migration. Both the epidemic and the migration processes are modeled by Markov chains. We derive explicit formulae for the probability distribution of the migration process, and explore the dependence of outbreak patterns on initial parameters, population sizes and coupling parameters, using analytical and numerical methods. We show the importance of considering the movement of resident and visitor individuals separately. The mean field approximation for a general migration process is derived and an approximate method that allows the computation of statistical moments for networks with highly populated centers is proposed and tested numerically.

  1. Random walk in dynamically disordered chains: Poisson white noise disorder

    International Nuclear Information System (INIS)

    Hernandez-Garcia, E.; Pesquera, L.; Rodriguez, M.A.; San Miguel, M.

    1989-01-01

    Exact solutions are given for a variety of models of random walks in a chain with time-dependent disorder. Dynamic disorder is modeled by white Poisson noise. Models with site-independent (global) and site-dependent (local) disorder are considered. Results are described in terms of an effective random walk in a nondisordered medium. In the cases of global disorder the effective random walk contains multistep transitions, so that the continuous limit is not a diffusion process. In the cases of local disorder the effective process is equivalent to usual random walk in the absence of disorder but with slower diffusion. Difficulties associated with the continuous-limit representation of random walk in a disordered chain are discussed. In particular, the authors consider explicit cases in which taking the continuous limit and averaging over disorder sources do not commute

  2. Designing neural networks that process mean values of random variables

    International Nuclear Information System (INIS)

    Barber, Michael J.; Clark, John W.

    2014-01-01

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence

  3. Designing neural networks that process mean values of random variables

    Energy Technology Data Exchange (ETDEWEB)

    Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)

    2014-06-13

    We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.

  4. Consensus in averager-copier-voter networks of moving dynamical agents

    Science.gov (United States)

    Shang, Yilun

    2017-02-01

    This paper deals with a hybrid opinion dynamics comprising averager, copier, and voter agents, which ramble as random walkers on a spatial network. Agents exchange information following some deterministic and stochastic protocols if they reside at the same site at the same time. Based on stochastic stability of Markov chains, sufficient conditions guaranteeing consensus in the sense of almost sure convergence have been obtained. The ultimate consensus state is identified in the form of an ergodicity result. Simulation studies are performed to validate the effectiveness and availability of our theoretical results. The existence/non-existence of voters and the proportion of them are unveiled to play key roles during the consensus-reaching process.

  5. Timing of the Crab pulsar III. The slowing down and the nature of the random process

    International Nuclear Information System (INIS)

    Groth, E.J.

    1975-01-01

    The Crab pulsar arrival times are analyzed. The data are found to be consistent with a smooth slowing down with a braking index of 2.515 ± 0.005. Superposed on the smooth slowdown is a random process which has the same second moments as a random walk in the frequency. The strength of the random process is R⟨ε²⟩ = 0.53 (+0.24, −0.12) × 10⁻²² Hz² s⁻¹, where R is the mean rate of steps and ⟨ε²⟩ is the second moment of the step amplitude distribution. Neither the braking index nor the strength of the random process shows evidence of statistically significant time variations, although small fluctuations in the braking index and rather large fluctuations in the noise strength cannot be ruled out. There is a possibility that the random process contains a small component with the same second moments as a random walk in the phase. If so, a time scale of 3.5 days is indicated

  6. Continuous state branching processes in random environment: The Brownian case

    OpenAIRE

    Palau, Sandra; Pardo, Juan Carlos

    2015-01-01

    We consider continuous state branching processes that are perturbed by a Brownian motion. These processes are constructed as the unique strong solution of a stochastic differential equation. The long-term extinction and explosion behaviours are studied. In the stable case, the extinction and explosion probabilities are given explicitly. We find three regimes for the asymptotic behaviour of the explosion probability and, as in the case of branching processes in random environment, we find five...

  7. Auditory detection of an increment in the rate of a random process

    International Nuclear Information System (INIS)

    Brown, W.S.; Emmerich, D.S.

    1994-01-01

    Recent experiments have presented listeners with complex tonal stimuli consisting of components with values (i.e., intensities or frequencies) randomly sampled from probability distributions [e.g., R. A. Lutfi, J. Acoust. Soc. Am. 86, 934-944 (1989)]. In the present experiment, brief tones were presented at intervals corresponding to the intensity of a random process. Specifically, the intervals between tones were randomly selected from exponential probability functions. Listeners were asked to decide whether tones presented during a defined observation interval represented a "noise" process alone or the "noise" with a "signal" process added to it. The number of tones occurring in any observation interval is a Poisson variable; receiver operating characteristics (ROCs) arising from Poisson processes have been considered by Egan [Signal Detection Theory and ROC Analysis (Academic, New York, 1975)]. Several sets of noise and signal intensities and observation interval durations were selected which were expected to yield equivalent performance. Rating ROCs were generated based on subjects' responses in a single-interval, yes-no task. The performance levels achieved by listeners and the effects of intensity and duration are compared to those predicted for an ideal observer.
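
    The Poisson counting observer behind such ROCs is straightforward to emulate; the sketch below (with made-up rates, not the study's stimulus parameters) sweeps a count criterion to trace out hit and false-alarm rates.

        import numpy as np

        rng = np.random.default_rng(2)
        noise_rate, signal_rate, T, trials = 20.0, 8.0, 1.0, 100_000

        n0 = rng.poisson(noise_rate * T, trials)                  # "noise alone" counts
        n1 = rng.poisson((noise_rate + signal_rate) * T, trials)  # "noise + signal" counts

        # ROC for a counting observer: respond "signal" when the count >= criterion k.
        for k in range(15, 40, 5):
            hit = (n1 >= k).mean()
            fa = (n0 >= k).mean()
            print(f"criterion {k}: hit {hit:.3f}, false alarm {fa:.3f}")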

  8. Effects of stratospheric aerosol surface processes on the LLNL two-dimensional zonally averaged model

    International Nuclear Information System (INIS)

    Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.

    1992-01-01

    We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl- and N-families in the lower stratosphere, compared to a model including only gas phase photochemical reactions.

  9. Time-averaged MSD of Brownian motion

    OpenAIRE

    Andreanov, Alexei; Grebenkov, Denis

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we de...

  10. Traffic and random processes an introduction

    CERN Document Server

    Mauro, Raffaele

    2015-01-01

    This book deals in a basic and systematic manner with the fundamentals of random function theory and looks at some aspects related to arrival, vehicle headway and operational speed processes at the same time. The work serves as a useful practical and educational tool and aims at providing stimulus and motivation to investigate issues of such a strong applicative interest. It has a clearly discursive and concise structure, in which numerical examples are given to clarify the applications of the suggested theoretical model. Some statistical characterizations are fully developed in order to illustrate the peculiarities of specific modeling approaches; finally, there is a useful bibliography for in-depth thematic analysis.

  11. Historical Data for Average Processing Time Until Hearing Held

    Data.gov (United States)

    Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...

  12. An Invariance Property for the Maximum Likelihood Estimator of the Parameters of a Gaussian Moving Average Process

    OpenAIRE

    Godolphin, E. J.

    1980-01-01

    It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.

  13. Minimal-post-processing 320-Gbps true random bit generation using physical white chaos.

    Science.gov (United States)

    Wang, Anbang; Wang, Longsheng; Li, Pu; Wang, Yuncai

    2017-02-20

    Chaotic external-cavity semiconductor lasers (ECLs) are a promising entropy source for the generation of high-speed physical random bits or digital keys. The rate and randomness are unfortunately limited by laser relaxation oscillation and external-cavity resonance, and are usually improved by complicated post-processing. Here, we propose using physical broadband white chaos generated by optical heterodyning of two ECLs as the entropy source to construct high-speed random bit generation (RBG) with minimal post-processing. The optical heterodyne chaos not only has a white spectrum without signatures of relaxation oscillation and external-cavity resonance but also has a symmetric amplitude distribution. Thus, after quantization with a multi-bit analog-to-digital converter (ADC), random bits can be obtained by extracting several least significant bits (LSBs) without any other processing. In experiments, a white chaos with a 3-dB bandwidth of 16.7 GHz is generated. Its entropy rate is estimated as 16 Gbps by single-bit quantization, which corresponds to a spectral efficiency of 96%. With quantization using an 8-bit ADC, 320-Gbps physical RBG is achieved by directly extracting 4 LSBs at an 80-GHz sampling rate.
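
    The final extraction step amounts to keeping the m least significant bits of each ADC sample; a minimal sketch, with random stand-in samples in place of digitized chaos, might look like this.

        import numpy as np

        rng = np.random.default_rng(3)
        samples = rng.integers(0, 256, size=10, dtype=np.uint8)  # stand-in for 8-bit ADC output

        m = 4                                    # keep the 4 least significant bits
        lsbs = samples & ((1 << m) - 1)          # mask off the upper bits
        bits = ((lsbs[:, None] >> np.arange(m - 1, -1, -1)) & 1).ravel()
        print(bits)                              # 4 bits/sample at 80 GS/s -> 320 Gbps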

  14. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single-photon detection have been restricted due to the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)

  15. Average inactivity time model, associated orderings and reliability properties

    Science.gov (United States)

    Kayid, M.; Izadkhah, S.; Abouammoh, A. M.

    2018-02-01

    In this paper, we introduce and study a new model called the 'average inactivity time model'. This new model is specifically applicable to handling the heterogeneity of the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses some aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are presented.

  16. Gradient networks on uncorrelated random scale-free networks

    International Nuclear Information System (INIS)

    Pan Guijun; Yan Xiaoqing; Huang Zhongbing; Ma Weichuan

    2011-01-01

    Uncorrelated random scale-free (URSF) networks are useful null models for checking the effects of scale-free topology on network-based dynamical processes. Here, we present a comparative study of the jamming level of gradient networks based on URSF networks and Erdős-Rényi (ER) random networks. We find that the URSF networks are less congested than ER random networks for average degree ⟨k⟩ > k_c (where k_c ≈ 2 denotes a critical connectivity). In addition, by investigating the topological properties of the two kinds of gradient networks, we discuss the relations between the topological structure and the transport efficiency of the gradient networks. These findings show that the uncorrelated scale-free structure might allow more efficient transport than the random structure.

  17. Randomly transitional phenomena in the system governed by Duffing's equation

    International Nuclear Information System (INIS)

    Ueda, Yoshisuke.

    1978-06-01

    This paper deals with turbulent or chaotic phenomena which occur in the system governed by Duffing's equation, a special type of two-dimensional periodic system. By using analog and digital computers, experiments are undertaken with special reference to the changes of attractors and of average power spectra of the random processes under variation of the system parameters. On the basis of the experimental results, an outline of the random process is made clear. The results obtained in this paper will be applied to phenomena of the same kind which occur in three-dimensional autonomous systems. (author)

  18. Ra and the average effective strain of surface asperities deformed in metal-working processes

    DEFF Research Database (Denmark)

    Bay, Niels; Wanheim, Tarras; Petersen, A. S

    1975-01-01

    Based upon a slip-line analysis of the plastic deformation of surface asperities, a theory is developed determining the Ra-value (c.l.a.) and the average effective strain in the surface layer when deforming asperities in metal-working processes. The ratio between Ra and Ra0, the Ra-values after and before deformation, is a function of the nominal normal pressure and the initial slope γ0 of the surface asperities; the latter parameter does not influence Ra significantly. The average effective strain in the deformed surface layer is a function of the nominal normal pressure and γ0, on which it depends strongly, increasing with increasing γ0. It is shown that the Ra-value and the strain are hardly affected by the normal pressure until interacting deformation of the asperities begins, that is, until the limit of Amontons' law...

  19. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time.

  20. Efficient tests for equivalence of hidden Markov processes and quantum random walks

    NARCIS (Netherlands)

    U. Faigle; A. Schönhuth (Alexander)

    2011-01-01

    While two hidden Markov process (HMP) or quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine the equivalence of two HMP parametrizations.

  1. Increased certification of semi-device independent random numbers using many inputs and more post-processing

    International Nuclear Information System (INIS)

    Mironowicz, Piotr; Tavakoli, Armin; Hameedi, Alley; Marques, Breno; Bourennane, Mohamed; Pawłowski, Marcin

    2016-01-01

    Quantum communication with systems of dimension larger than two provides advantages in information processing tasks. Examples include higher rates of key distribution and random number generation. The main disadvantage of using such multi-dimensional quantum systems is the increased complexity of the experimental setup. Here, we analyze a not-so-obvious problem: the relation between randomness certification and the computational requirements of post-processing the experimental data. In particular, we consider semi-device-independent randomness certification from an experiment using a four-dimensional quantum system to violate the classical bound of a random access code. Using state-of-the-art techniques, a smaller quantum violation requires more computational power to demonstrate randomness, which at some point becomes impossible with today's computers although the randomness is (probably) still there. We show that by dedicating more input settings of the experiment to randomness certification, and by more computational post-processing of the experimental data corresponding to a quantum violation, one may increase the amount of certified randomness. Furthermore, we introduce a method that significantly lowers the computational complexity of randomness certification. Our results show how more randomness can be generated without altering the hardware and indicate a path for future semi-device-independent protocols to follow. (paper)

  2. Determining average path length and average trapping time on generalized dual dendrimer

    Science.gov (United States)

    Li, Ling; Guan, Jihong

    2015-03-01

    Dendrimers have a wide range of important applications in various fields. In some cases, during a transport or diffusion process, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how it varies with the network size. Besides, we also discuss the influence of the coordination number on the trapping efficiency.
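
    For a concrete sense of the APL computation, here is a small breadth-first-search sketch over an explicit adjacency list; the toy star graph is a stand-in, not a generalized dual dendrimer.

        from collections import deque
        from itertools import combinations

        def average_path_length(adj):
            """Mean shortest-path length over all node pairs of a connected graph."""
            def bfs(src):
                dist = {src: 0}
                q = deque([src])
                while q:
                    u = q.popleft()
                    for v in adj[u]:
                        if v not in dist:
                            dist[v] = dist[u] + 1
                            q.append(v)
                return dist

            nodes = list(adj)
            total = sum(bfs(u)[v] for u, v in combinations(nodes, 2))
            return total / (len(nodes) * (len(nodes) - 1) / 2)

        # Toy graph standing in for one generation of a dendrimer-like tree:
        adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
        print(average_path_length(adj))   # 1.5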

  3. Geographic Gossip: Efficient Averaging for Sensor Networks

    Science.gov (United States)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
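
    For comparison, the standard pairwise gossip baseline that the geographic variant improves upon can be written in a few lines; the ring topology and iteration budget below are illustrative assumptions, and the slow convergence on the ring is exactly the inefficiency the abstract describes.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 20
        x = rng.random(n)                 # sensor readings; goal: every node learns x.mean()
        true_mean = x.mean()

        # Standard pairwise randomized gossip on a ring (not the geographic variant):
        for t in range(100_000):
            i = rng.integers(n)
            j = (i + 1) % n               # a ring neighbour
            x[i] = x[j] = 0.5 * (x[i] + x[j])

        print(abs(x - true_mean).max())   # max deviation from the true average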

  4. Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses

    Directory of Open Access Journals (Sweden)

    Colin Bruno

    2015-01-01

    In the field of military land vehicles, random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes with a stationary and Gaussian nature. Non-stationarity of processes induced by the variability of the vehicle speed does not form a major difficulty because the designer can have good control over the vehicle speed by characterising the histogram of instantaneous speed of the vehicle during an operational situation. Beyond this non-stationarity problem, the hard point clearly lies in the fact that the random processes are not Gaussian and are generated mainly by the non-linear behaviour of the undercarriage and the strong occurrence of shocks generated by roughness of the terrain. This non-Gaussian nature is expressed particularly by very high flatness (kurtosis) levels that can affect the design of structures under extreme stresses conventionally acquired by spectral approaches, inherent to Gaussian processes and based essentially on spectral moments of stress processes. Due to these technical considerations, techniques for characterisation of random excitation processes generated by this type of carrier need to be changed, by proposing innovative characterisation methods based on time domain approaches as described in the body of the text rather than spectral domain approaches.

  5. On the speed towards the mean for continuous time autoregressive moving average processes with applications to energy markets

    International Nuclear Information System (INIS)

    Benth, Fred Espen; Taib, Che Mohd Imran Che

    2013-01-01

    We extend the concept of the half life of an Ornstein–Uhlenbeck process to Lévy-driven continuous-time autoregressive moving average processes with stochastic volatility. The half life becomes state dependent, and we analyze its properties in terms of the characteristics of the process. An empirical example based on daily temperatures observed in Petaling Jaya, Malaysia, is presented, where the proposed model is estimated and the distribution of the half life is simulated. The stationarity of the dynamics yields futures prices which asymptotically tend to a constant at an exponential rate when time to maturity goes to infinity. The rate is characterized by the eigenvalues of the dynamics. An alternative description of this convergence can be given in terms of our concept of half life. - Highlights: • The concept of half life is extended to Lévy-driven continuous-time autoregressive moving average processes. • The dynamics of Malaysian temperatures are modeled using a continuous-time autoregressive model with stochastic volatility. • Forward prices on temperature become constant when time to maturity tends to infinity. • Convergence in time to maturity is at an exponential rate given by the eigenvalues of the temperature model.

  6. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
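
    The contrast between 0-1 selection weights and smooth averaging weights can be made concrete with AIC-based Akaike weights; the sketch below uses polynomial regressions on synthetic data, not the paper's simulation design, and the AIC formula is the usual Gaussian one up to an additive constant.

        import numpy as np

        rng = np.random.default_rng(5)
        x = np.linspace(0, 1, 40)
        y = 1.0 + 2.0 * x + rng.normal(0, 0.3, x.size)

        def fit_aic(deg):
            coef = np.polyfit(x, y, deg)
            rss = ((y - np.polyval(coef, x)) ** 2).sum()
            k = deg + 2                        # coefficients + error variance
            aic = x.size * np.log(rss / x.size) + 2 * k
            return coef, aic

        fits = [fit_aic(d) for d in (1, 2, 3)]
        aics = np.array([a for _, a in fits])
        w = np.exp(-0.5 * (aics - aics.min()))
        w /= w.sum()                           # smooth Akaike weights

        x0 = 0.5
        preds = np.array([np.polyval(c, x0) for c, _ in fits])
        print("selected-model prediction (0-1 weights):", preds[aics.argmin()])
        print("model-averaged prediction:", (w * preds).sum())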

  7. Macrotransport processes: Brownian tracers as stochastic averagers in effective medium theories of heterogeneous media

    International Nuclear Information System (INIS)

    Brenner, H.

    1991-01-01

    Macrotransport processes (generalized Taylor dispersion phenomena) constitute coarse-grained descriptions of comparable convective diffusive-reactive microtransport processes, the latter supposed governed by microscale linear constitutive equations and boundary conditions, but characterized by spatially nonuniform phenomenological coefficients. Following a brief review of existing applications of the theory, the author focuses, by way of background information, upon the original (and now classical) Taylor-Aris dispersion problem, involving the combined convective and molecular diffusive transport of a point-size Brownian solute molecule (tracer) suspended in a Poiseuille solvent flow within a circular tube. A series of elementary generalizations of this prototype problem to chromatographic-like solute transport processes in tubes is used to illustrate some novel statistical-physical features. These examples emphasize the fact that a solute molecule may, on average, move axially down the tube at a different mean velocity (either larger or smaller) than that of a solvent molecule. Moreover, this solute molecule may suffer axial dispersion about its mean velocity at a rate greatly exceeding that attributable to its axial molecular diffusion alone. Such chromatographic anomalies represent novel macroscale non-linearities originating from physicochemical interactions between spatially inhomogeneous convective-diffusive-reactive microtransport processes.

  8. Asymptotic theory of weakly dependent random processes

    CERN Document Server

    Rio, Emmanuel

    2017-01-01

    Presenting tools to aid understanding of asymptotic theory and weakly dependent processes, this book is devoted to inequalities and limit theorems for sequences of random variables that are strongly mixing in the sense of Rosenblatt, or absolutely regular. The first chapter introduces covariance inequalities under strong mixing or absolute regularity. These covariance inequalities are applied in Chapters 2, 3 and 4 to moment inequalities, rates of convergence in the strong law, and central limit theorems. Chapter 5 concerns coupling. In Chapter 6 new deviation inequalities and new moment inequalities for partial sums via the coupling lemmas of Chapter 5 are derived and applied to the bounded law of the iterated logarithm. Chapters 7 and 8 deal with the theory of empirical processes under weak dependence. Lastly, Chapter 9 describes links between ergodicity, return times and rates of mixing in the case of irreducible Markov chains. Each chapter ends with a set of exercises. The book is an updated and extended ...

  9. Statistics on exponential averaging of periodograms

    Energy Technology Data Exchange (ETDEWEB)

    Peeters, T.T.J.M. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Ciftcioglu, Oe. [Istanbul Technical Univ. (Turkey). Dept. of Electrical Engineering

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.).

  10. Statistics on exponential averaging of periodograms

    International Nuclear Information System (INIS)

    Peeters, T.T.J.M.; Ciftcioglu, Oe.

    1994-11-01

    The algorithm of exponential averaging applied to subsequent periodograms of a stochastic process is used to estimate the power spectral density (PSD). For an independent process, assuming the periodogram estimates to be distributed according to a χ² distribution with 2 degrees of freedom, the probability density function (PDF) of the PSD estimate is derived. A closed expression is obtained for the moments of the distribution. Surprisingly, the proof of this expression features some new insights into the partitions and Euler's infinite product. For large values of the time constant of the averaging process, examination of the cumulant generating function shows that the PDF approximates the Gaussian distribution. Although restrictions for the statistics are seemingly tight, simulation of a real process indicates a wider applicability of the theory. (orig.)
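
    The recursion itself is one line; the sketch below applies exponential averaging to raw periodograms of white noise, whose true PSD is flat, with an illustrative smoothing constant (roughly the inverse of the averager's time constant in segments).

        import numpy as np

        rng = np.random.default_rng(6)
        n_seg, seg_len, alpha = 200, 256, 0.05

        S = np.zeros(seg_len // 2 + 1)
        for _ in range(n_seg):
            seg = rng.normal(0.0, 1.0, seg_len)             # white noise: flat true PSD
            P = np.abs(np.fft.rfft(seg)) ** 2 / seg_len     # raw periodogram, chi^2-like scatter
            S = (1.0 - alpha) * S + alpha * P               # exponential averaging

        print(S.mean(), S.std())    # scatter of the estimate shrinks as alpha decreases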

  11. Random Intercept and Random Slope 2-Level Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2012-11-01

    The random intercept model and the random intercept & random slope model, carrying two levels of hierarchy in the population, are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teachers' influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy into the model. The fanning movement of the fitted lines demonstrates the variation of student grades across teachers.
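
    A model of this shape can be fitted, for example, with the mixed linear model API in statsmodels; the column names and the data-generating process below are hypothetical stand-ins for the paper's data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n_teachers, n_students = 20, 15
        teacher = np.repeat(np.arange(n_teachers), n_students)
        satisfaction = rng.random(teacher.size)
        # Teacher-specific intercepts and slopes (the second level of hierarchy):
        a = 2.0 + 0.2 * rng.standard_normal(n_teachers)
        b = 0.8 + 0.3 * rng.standard_normal(n_teachers)
        gpa = a[teacher] + b[teacher] * satisfaction + 0.2 * rng.standard_normal(teacher.size)
        df = pd.DataFrame({"teacher": teacher, "satisfaction": satisfaction, "gpa": gpa})

        # Random intercept only:
        m1 = smf.mixedlm("gpa ~ satisfaction", df, groups=df["teacher"]).fit()
        # Random intercept and random slope:
        m2 = smf.mixedlm("gpa ~ satisfaction", df, groups=df["teacher"],
                         re_formula="~satisfaction").fit()
        print(m1.params)
        print(m2.params)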

  12. Bivariate copulas on the exponentially weighted moving average control chart

    Directory of Open Access Journals (Sweden)

    Sasigarn Kuvattana

    2016-10-01

    This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used and the dependence is measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
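
    Monte Carlo ARL estimation for an EWMA chart on exponential data can be sketched as follows; the chart constants are illustrative, and the observations here are independent rather than copula-dependent as in the paper.

        import numpy as np

        rng = np.random.default_rng(8)
        lam, L, mu0 = 0.1, 2.7, 1.0     # smoothing weight, limit width, in-control mean

        def run_length(shift=0.0, max_n=100_000):
            z = mu0                     # EWMA statistic, started at the in-control mean
            for n in range(1, max_n + 1):
                x = rng.exponential(mu0 + shift)     # exponential observations
                z = lam * x + (1 - lam) * z
                # Exact EWMA variance; for exponential data sigma^2 = mu0^2 in control:
                var = lam / (2 - lam) * (1 - (1 - lam) ** (2 * n)) * mu0**2
                if abs(z - mu0) > L * np.sqrt(var):
                    return n            # signal: run length reached
            return max_n

        arl = np.mean([run_length(shift=0.5) for _ in range(2000)])
        print("estimated ARL at shift 0.5:", arl)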

  13. A new type of exact arbitrarily inhomogeneous cosmology: evolution of deceleration in the flat homogeneous-on-average case

    Energy Technology Data Exchange (ETDEWEB)

    Hellaby, Charles, E-mail: Charles.Hellaby@uct.ac.za [Dept. of Maths. and Applied Maths, University of Cape Town, Rondebosch, 7701 (South Africa)

    2012-01-01

    A new method for constructing exact inhomogeneous universes is presented, which allows variation in three dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via Swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster-expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.

  14. Multifractal properties of diffusion-limited aggregates and random multiplicative processes

    International Nuclear Information System (INIS)

    Canessa, E.

    1991-04-01

    We consider the multifractal properties of irreversible diffusion-limited aggregation (DLA) from the point of view of the self-similarity of fluctuations in random multiplicative processes. In particular, we analyse the breakdown of multifractal behaviour and the phase transition associated with the negative moments of the growth probabilities in DLA. (author). 20 refs, 5 figs

  15. Advanced pulse oximeter signal processing technology compared to simple averaging. I. Effect on frequency of alarms in the operating room.

    Science.gov (United States)

    Rheineck-Leyssius, A T; Kalkman, C J

    1999-05-01

    To determine the effect of a new signal processing technique (Oxismart, Nellcor, Inc., Pleasanton, CA) on the incidence of false pulse oximeter alarms in the operating room (OR). Prospective observational study. Nonuniversity hospital. 53 ASA physical status I, II, and III consecutive patients undergoing general anesthesia with tracheal intubation. In the OR we compared the number of alarms produced by a recently developed third-generation pulse oximeter (Nellcor Symphony N-3000) with the Oxismart signal processing technique and a conventional pulse oximeter (Criticare 504). Three pulse oximeters were used simultaneously in each patient: a Nellcor pulse oximeter, a Criticare with the signal averaging time set at 3 seconds (Criticareaverage3s) and a similar unit with the signal averaging time set at 21 seconds (Criticareaverage21s). For each pulse oximeter, the number of false (artifact) alarms was counted. One false alarm was produced by the Nellcor (duration 55 sec) and one false alarm by the Criticareaverage21s monitor (5 sec). The incidence of false alarms was higher with Criticareaverage3s: in eight patients it produced 20 false alarms, significantly more than the other two monitors. The monitor with Oxismart signal processing thus performed at least as well as the Criticare monitor with the longer averaging time of 21 seconds.

  16. Experimental demonstration of squeezed-state quantum averaging

    DEFF Research Database (Denmark)

    Lassen, Mikael Østergaard; Madsen, Lars Skovgaard; Sabuncu, Metin

    2010-01-01

    We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The averaged variances are prepared probabilistically by means of linear optical interference and measurement-induced conditioning. We verify that the implemented...

  17. Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial

    Science.gov (United States)

    Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.

    2016-01-01

    This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual…

  18. Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process

    Science.gov (United States)

    Salvi, Michele; Simenhaus, François

    2018-03-01

    We consider a random walk in dimension d ≥ 1 in a dynamic random environment evolving as an interchange process with rate γ > 0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ = +∞ where the environment is refreshed between each step of the walker. We extend in three ways part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d ≥ 1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but that it is also close to it.

  19. Random Walk on a Perturbation of the Infinitely-Fast Mixing Interchange Process

    Science.gov (United States)

    Salvi, Michele; Simenhaus, François

    2018-05-01

    We consider a random walk in dimension d ≥ 1 in a dynamic random environment evolving as an interchange process with rate γ > 0. We prove that, if we choose γ large enough, almost surely the empirical velocity of the walker X_t/t eventually lies in an arbitrarily small ball around the annealed drift. This statement is thus a perturbation of the case γ = +∞ where the environment is refreshed between each step of the walker. We extend in three ways part of the results of Huveneers and Simenhaus (Electron J Probab 20(105):42, 2015), where the environment was given by the 1-dimensional exclusion process: (i) we deal with any dimension d ≥ 1; (ii) we treat the much more general interchange process, where each particle carries a transition vector chosen according to an arbitrary law μ; (iii) we show that X_t/t is not only in the same direction as the annealed drift, but that it is also close to it.

  20. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    Science.gov (United States)

    Löwe, H.; Helbig, N.

    2012-10-01

    We provide a new quasi-analytical method to compute the subgrid topographic influences on the shortwave radiation fluxes and the effective albedo in complex terrain as required for large-scale meteorological, land surface, or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain-averaged fluxes of direct, diffuse, and terrain radiation and the sky view factor. Domain-averaged quantities can be related to a type of level-crossing probability of the random field, which is approximated by long-standing results developed for acoustic scattering at ocean boundaries. This allows us to express all nonlocal horizon effects in terms of a local terrain parameter, namely, the mean-square slope. Emerging integrals are computed numerically, and fit formulas are given for practical purposes. As an implication of our approach, we provide an expression for the effective albedo of complex terrain in terms of the Sun elevation angle, mean-square slope, the area-averaged surface albedo, and the ratio of atmospheric direct beam to diffuse radiation. For demonstration we compute the decrease of the effective albedo relative to the area-averaged albedo in Switzerland for idealized snow-covered and clear-sky conditions at noon in winter. We find an average decrease of 5.8% and spatial patterns which originate from characteristics of the underlying relief. Limitations and possible generalizations of the method are discussed.

  1. Randomized random walk on a random walk

    International Nuclear Information System (INIS)

    Lee, P.A.

    1983-06-01

    This paper discusses generalizations of the model introduced by Kehr and Kutner of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square-root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kutner. (author)

  2. Time-averaged MSD of Brownian motion

    International Nuclear Information System (INIS)

    Andreanov, Alexei; Grebenkov, Denis S

    2012-01-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution

  3. Time-averaged MSD of Brownian motion

    Science.gov (United States)

    Andreanov, Alexei; Grebenkov, Denis S.

    2012-07-01

    We study the statistical properties of the time-averaged mean-square displacements (TAMSD). This is a standard non-local quadratic functional for inferring the diffusion coefficient from an individual random trajectory of a diffusing tracer in single-particle tracking experiments. For Brownian motion, we derive an exact formula for the Laplace transform of the probability density of the TAMSD by mapping the original problem onto chains of coupled harmonic oscillators. From this formula, we deduce the first four cumulant moments of the TAMSD, the asymptotic behavior of the probability density and its accurate approximation by a generalized Gamma distribution.
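
    Computing the TAMSD from a single simulated trajectory takes only a few lines; the sketch below compares it with the ensemble value 2Dt for Brownian motion, with illustrative discretization parameters.

        import numpy as np

        rng = np.random.default_rng(9)
        N, dt, D = 10_000, 1e-3, 1.0
        x = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), N))   # 1D Brownian trajectory

        def tamsd(traj, lag):
            d = traj[lag:] - traj[:-lag]
            return (d * d).mean()       # time average over the single trajectory

        for lag in (1, 10, 100):
            print(lag, tamsd(x, lag), 2 * D * lag * dt)   # TAMSD vs ensemble 2*D*t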

  4. Stable non-Gaussian self-similar processes with stationary increments

    CERN Document Server

    Pipiras, Vladas

    2017-01-01

    This book provides a self-contained presentation on the structure of a large class of stable processes, known as self-similar mixed moving averages. The authors present a way to describe and classify these processes by relating them to so-called deterministic flows. The first sections in the book review random variables, stochastic processes, and integrals, moving on to rigidity and flows, and finally ending with mixed moving averages and self-similarity. In-depth appendices are also included. This book is aimed at graduate students and researchers working in probability theory and statistics.

  5. Voter dynamics on an adaptive network with finite average connectivity

    Science.gov (United States)

    Mukhopadhyay, Abhishek; Schmittmann, Beate

    2009-03-01

    We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of the interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity ("degree") of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.

  6. The concept of the average stress in the fracture process zone for the search of the crack path

    Directory of Open Access Journals (Sweden)

    Yu.G. Matvienko

    2015-10-01

    The concept of the average stress has been employed to propose the maximum average tangential stress (MATS) criterion for predicting the direction of the fracture angle. This criterion states that a crack grows when the maximum average tangential stress in the fracture process zone ahead of the crack tip reaches its critical value and the crack growth direction coincides with the direction of the maximum average tangential stress along a constant radius around the crack tip. The tangential stress is described by the singular and nonsingular (T-stress) terms in the Williams series solution. To demonstrate the validity of the proposed MATS criterion, this criterion is directly applied to experiments reported in the literature for the mixed mode I/II crack growth behavior of Guiting limestone. The predicted directions of fracture angle are consistent with the experimental data. The concept of the average stress has also been employed to predict the surface crack path under rolling-sliding contact loading. The proposed model considers the size and orientation of the initial crack, normal and tangential loading due to rolling-sliding contact as well as the influence of fluid trapped inside the crack by a hydraulic pressure mechanism. The MATS criterion is directly applied to an equivalent contact model for surface crack growth on a gear tooth flank.

  7. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies, the choosing seed node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. And thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some salient characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
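
    A no-retracing walk of the kind named here only has to remember its previous node; the following sketch (on a toy graph, not the paper's test networks) illustrates the idea, falling back to retracing only at dead ends.

        import numpy as np

        rng = np.random.default_rng(10)

        def nr_walk(adj, start, steps):
            """Random walk that never immediately retraces the edge it just used."""
            visited, prev, cur = {start}, None, start
            for _ in range(steps):
                nbrs = [v for v in adj[cur] if v != prev] or adj[cur]  # dead-end fallback
                prev, cur = cur, nbrs[rng.integers(len(nbrs))]
                visited.add(cur)
            return visited

        # Toy graph: a 6-cycle with a chord.
        adj = {0: [1, 5, 3], 1: [0, 2], 2: [1, 3], 3: [2, 4, 0], 4: [3, 5], 5: [4, 0]}
        print(sorted(nr_walk(adj, 0, 20)))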

  8. Asymptotic Time Averages and Frequency Distributions

    Directory of Open Access Journals (Sweden)

    Muhammad El-Taha

    2016-01-01

    Consider an arbitrary nonnegative deterministic process {X(t), t ≥ 0} (in a stochastic setting, a fixed realization, i.e., sample path, of the underlying stochastic process) with state space S = (−∞, ∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, will also be discussed. Our approach is applied to regenerative processes and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results will give them the choice to work with the time average of a process or its frequency distribution function and go back and forth between the two under a mild condition.

  9. Randomized benchmarking of single- and multi-qubit control in liquid-state NMR quantum information processing

    International Nuclear Information System (INIS)

    Ryan, C A; Laforest, M; Laflamme, R

    2009-01-01

    Being able to quantify the level of coherent control in a proposed device implementing a quantum information processor (QIP) is an important task for both comparing different devices and assessing a device's prospects with regards to achieving fault-tolerant quantum control. We implement in a liquid-state nuclear magnetic resonance QIP the randomized benchmarking protocol presented by Knill et al (2008 Phys. Rev. A 77 012307). We report an error per randomized π/2 pulse of (1.3 ± 0.1) × 10⁻⁴ with a single-qubit QIP and show an experimentally relevant error model where the randomized benchmarking gives a signature fidelity decay which is not possible to interpret as a single error per gate. We explore and experimentally investigate multi-qubit extensions of this protocol and report an average error rate for one- and two-qubit gates of (4.7 ± 0.3) × 10⁻³ for a three-qubit QIP. We estimate that these error rates are still not decoherence limited and thus can be improved with modifications to the control hardware and software.

  10. Time at which the maximum of a random acceleration process is reached

    International Nuclear Information System (INIS)

    Majumdar, Satya N; Rosso, Alberto; Zoia, Andrea

    2010-01-01

    We study the random acceleration model, which is perhaps one of the simplest, yet nontrivial, non-Markov stochastic processes, and is key to many applications. For this non-Markov process, we present exact analytical results for the probability density p(t_m|T) of the time t_m at which the process reaches its maximum, within a fixed time interval [0, T]. We study two different boundary conditions, which correspond to the process representing respectively (i) the integral of a Brownian bridge and (ii) the integral of a free Brownian motion. Our analytical results are also verified by numerical simulations.
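
    The density p(t_m|T) for the free-Brownian-motion-integral case can be estimated by direct simulation from zero initial position and velocity; the discretization parameters below are arbitrary, and the histogram merely approximates the exact result.

        import numpy as np

        rng = np.random.default_rng(11)
        T, n, paths = 1.0, 1000, 5000
        dt = T / n

        t_max = np.empty(paths)
        for p in range(paths):
            v = np.cumsum(rng.normal(0, np.sqrt(dt), n))   # velocity: Brownian motion
            x = np.cumsum(v) * dt                          # position: integrated BM
            t_max[p] = x.argmax() * dt                     # time of the maximum

        hist, edges = np.histogram(t_max, bins=10, range=(0, T), density=True)
        print(np.round(hist, 2))    # empirical density of t_m across [0, T]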

  11. Gaussian random-matrix process and universal parametric correlations in complex systems

    International Nuclear Information System (INIS)

    Attias, H.; Alhassid, Y.

    1995-01-01

    We introduce the framework of the Gaussian random-matrix process as an extension of Dyson's Gaussian ensembles and use it to discuss the statistical properties of complex quantum systems that depend on an external parameter. We classify the Gaussian processes according to the short-distance diffusive behavior of their energy levels and demonstrate that all parametric correlation functions become universal upon the appropriate scaling of the parameter. The class of differentiable Gaussian processes is identified as the relevant one for most physical systems. We reproduce the known spectral correlators and compute eigenfunction correlators in their universal form. Numerical evidence from both a chaotic model and weakly disordered model confirms our predictions

  12. Dimer coverings on random multiple chains of planar honeycomb lattices

    International Nuclear Information System (INIS)

    Ren, Haizhen; Zhang, Fuji; Qian, Jianguo

    2012-01-01

    We study dimer coverings on random multiple chains. A multiple chain is a planar honeycomb lattice constructed by successively fusing copies of a ‘straight’ condensed hexagonal chain at the bottom of the previous one in two possible ways. A random multiple chain is then generated by admitting the Bernoulli distribution on the two types of fusing, which describes a zeroth-order Markov process. We determine the expectation of the number of the pure dimer coverings (perfect matchings) over the ensemble of random multiple chains by the transfer matrix approach. Our result shows that, with only two exceptions, the average of the logarithm of this expectation (i.e., the annealed entropy per dimer) is asymptotically nonzero when the fusing process goes to infinity and the length of the hexagonal chain is fixed, though it is zero when the fusing process and the length of the hexagonal chain go to infinity simultaneously. Some numerical results are provided to support our conclusion, from which we can see that the asymptotic behavior fits well to the theoretical results. We also apply the transfer matrix approach to the quenched entropy and reveal that the quenched entropy of random multiple chains has a close connection with the well-known Lyapunov exponent of random matrices. Using the theory of Lyapunov exponents we show that, for some random multiple chains, the quenched entropy per dimer is strictly smaller than the annealed one when the fusing process goes to infinity. Finally, we determine the expectation of the free energy per dimer over the ensemble of the random multiple chains in which the three types of dimers in different orientations are distinguished, and specify a series of non-random multiple chains whose free energy per dimer is asymptotically equal to this expectation. (paper)

  13. Lateral dispersion coefficients as functions of averaging time

    International Nuclear Information System (INIS)

    Sheih, C.M.

    1980-01-01

    Plume dispersion coefficients are discussed in terms of single-particle and relative diffusion, and are investigated as functions of averaging time. To demonstrate the effects of averaging time on the relative importance of various dispersion processes, an observed lateral wind velocity spectrum is used to compute the lateral dispersion coefficients of total, single-particle and relative diffusion for various averaging times and plume travel times. The results indicate that for a 1 h averaging time the dispersion coefficient of a plume can be approximated by single-particle diffusion alone for travel times <250 s and by relative diffusion for longer travel times. Furthermore, it is shown that the power-law formula suggested by Turner for relating pollutant concentrations for other averaging times to the corresponding 15 min average is applicable to the present example only when the averaging time is less than 200 s and the travel time smaller than about 300 s. Since the turbulence spectrum used in the analysis is an observed one, it is hoped that the results could represent many conditions encountered in the atmosphere. However, as the results depend on the form of the turbulence spectrum, the calculations are not for deriving a set of specific criteria but for demonstrating the need to discriminate between various processes in studies of plume dispersion.

  14. Nonstationary random acoustic and electromagnetic fields as wave diffusion processes

    International Nuclear Information System (INIS)

    Arnaut, L R

    2007-01-01

    We investigate the effects of relatively rapid variations of the boundaries of an overmoded cavity on the stochastic properties of its interior acoustic or electromagnetic field. For quasi-static variations, this field can be represented as an ideal incoherent and statistically homogeneous isotropic random scalar or vector field, respectively. A physical model is constructed showing that the field dynamics can be characterized as a generalized diffusion process. The Langevin-Itô and Fokker-Planck equations are derived and their associated statistics and distributions for the complex analytic field, its magnitude and energy density are computed. The energy diffusion parameter is found to be proportional to the square of the ratio of the standard deviation of the source field to the characteristic time constant of the dynamic process, but is independent of the initial energy density, to first order. The energy drift vanishes in the asymptotic limit. The time-energy probability distribution is in general not separable, as a result of nonstationarity. A general solution of the Fokker-Planck equation is obtained in integral form, together with explicit closed-form solutions for several asymptotic cases. The findings extend known results on statistics and distributions of quasi-stationary ideal random fields (pure diffusions), which are retrieved as special cases.

  15. To be and not to be: scale correlations in random multifractal processes

    DEFF Research Database (Denmark)

    Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin

    We discuss various properties of a random multifractal process, which are related to the issue of scale correlations. By design, the process is homogeneous, non-conservative and has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several...

  16. Is neutron evaporation from highly excited nuclei a Poisson random process?

    International Nuclear Information System (INIS)

    Simbel, M.H.

    1982-01-01

    It is suggested that neutron emission from highly excited nuclei follows a Poisson random process. The continuous variable of the process is the excitation energy excess over the binding energy of the emitted neutrons and the discrete variable is the number of emitted neutrons. Cross sections for (HI,xn) reactions are analyzed using a formula containing a Poisson distribution function. The post- and pre-equilibrium components of the cross section are treated separately. The agreement between the predictions of this formula and the experimental results is very good. (orig.)

  17. Random and externally controlled occurrences of Dansgaard–Oeschger events

    Directory of Open Access Journals (Sweden)

    J. Lohmann

    2018-05-01

    Dansgaard–Oeschger (DO) events constitute the most pronounced mode of centennial to millennial climate variability of the last glacial period. Since their discovery, many decades of research have been devoted to understanding the origin and nature of these rapid climate shifts. In recent years, a number of studies have appeared that report emergence of DO-type variability in fully coupled general circulation models via different mechanisms. These mechanisms result in the occurrence of DO events at varying degrees of regularity, ranging from periodic to random. When examining the full sequence of DO events as captured in the North Greenland Ice Core Project (NGRIP) ice core record, one can observe high irregularity in the timing of individual events at any stage within the last glacial period. In addition to the prevailing irregularity, certain properties of the DO event sequence, such as the average event frequency or the relative distribution of cold versus warm periods, appear to be changing throughout the glacial. By using statistical hypothesis tests on simple event models, we investigate whether the observed event sequence may have been generated by stationary random processes or rather was strongly modulated by external factors. We find that the sequence of DO warming events is consistent with a stationary random process, whereas dividing the event sequence into warming and cooling events leads to inconsistency with two independent event processes. As we include external forcing, we find a particularly good fit to the observed DO sequence in a model where the average residence times in warm periods are controlled by global ice volume and those in cold periods by boreal summer insolation.

  18. Random and externally controlled occurrences of Dansgaard-Oeschger events

    Science.gov (United States)

    Lohmann, Johannes; Ditlevsen, Peter D.

    2018-05-01

    Dansgaard-Oeschger (DO) events constitute the most pronounced mode of centennial to millennial climate variability of the last glacial period. Since their discovery, many decades of research have been devoted to understanding the origin and nature of these rapid climate shifts. In recent years, a number of studies have appeared that report emergence of DO-type variability in fully coupled general circulation models via different mechanisms. These mechanisms result in the occurrence of DO events at varying degrees of regularity, ranging from periodic to random. When examining the full sequence of DO events as captured in the North Greenland Ice Core Project (NGRIP) ice core record, one can observe high irregularity in the timing of individual events at any stage within the last glacial period. In addition to the prevailing irregularity, certain properties of the DO event sequence, such as the average event frequency or the relative distribution of cold versus warm periods, appear to be changing throughout the glacial. By using statistical hypothesis tests on simple event models, we investigate whether the observed event sequence may have been generated by stationary random processes or rather was strongly modulated by external factors. We find that the sequence of DO warming events is consistent with a stationary random process, whereas dividing the event sequence into warming and cooling events leads to inconsistency with two independent event processes. As we include external forcing, we find a particularly good fit to the observed DO sequence in a model where the average residence time in warm periods is controlled by global ice volume and in cold periods by boreal summer insolation.

  19. Fluctuation theory for radiative transfer in random media

    International Nuclear Information System (INIS)

    Bal, Guillaume; Jing Wenjia

    2011-01-01

    We consider the effect of small scale random fluctuations of the constitutive coefficients on boundary measurements of solutions to radiative transfer equations. As the correlation length of the random oscillations tends to zero, the transport solution is well approximated by a deterministic, averaged, solution. In this paper, we analyze the random fluctuations to the averaged solution, which may be interpreted as a central limit correction to homogenization. With the inverse transport problem in mind, we characterize the random structure of the singular components of the transport measurement operator. In regimes of moderate scattering, such components provide stable reconstructions of the constitutive parameters in the transport equation. We show that the random fluctuations strongly depend on the decorrelation properties of the random medium.

  20. Random Gap Detection Test (RGDT) performance of individuals with central auditory processing disorders from 5 to 25 years of age.

    Science.gov (United States)

    Dias, Karin Ziliotto; Jutras, Benoît; Acrani, Isabela Olszanski; Pereira, Liliane Desgualdo

    2012-02-01

    The aim of the present study was to assess the auditory temporal resolution ability in individuals with central auditory processing disorders, to examine the maturation effect and to investigate the relationship between the performance on a temporal resolution test and the performance on other central auditory tests. Participants were divided into two groups: 131 with Central Auditory Processing Disorder and 94 with normal auditory processing. They had pure-tone air-conduction thresholds no poorer than 15 dB HL bilaterally, normal admittance measures and presence of acoustic reflexes. Also, they were assessed with a central auditory test battery. Participants who failed at least one test were included in the Central Auditory Processing Disorder group, and those in the control group obtained normal performance on all tests. Following the auditory processing assessment, the Random Gap Detection Test was administered to the participants. A three-way ANOVA was performed. Correlation analyses were also performed among the four Random Gap Detection Test subtests, as well as between Random Gap Detection Test data and the other auditory processing test results. There was a significant difference between the age-group performances in children with and without Central Auditory Processing Disorder. Also, 48% of children with Central Auditory Processing Disorder failed the Random Gap Detection Test, and the percentage decreased as a function of age. The highest percentage (86%) was found in the 5-6 year-old children. Furthermore, results revealed a strong significant correlation between the four Random Gap Detection Test subtests. There was a modest correlation between the Random Gap Detection Test results and the dichotic listening tests. No significant correlation was observed between the Random Gap Detection Test data and the results of the other tests in the battery. Random Gap Detection Test should not be administered to children younger than 7 years old because

  1. A stochastic model for stationary dynamics of prices in real estate markets. A case of random intensity for Poisson moments of prices changes

    Science.gov (United States)

    Rusakov, Oleg; Laskin, Michael

    2017-06-01

    We consider a stochastic model of price changes in real estate markets. We suppose that the changes in a book of prices happen at the jump points of a Poisson process with a random intensity, i.e. the moments of change follow a random process of the Cox process type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case that the process of random intensity is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept hypotheses of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.

  2. Statistical properties of random clique networks

    Science.gov (United States)

    Ding, Yi-Min; Meng, Jun; Fan, Jing-Fang; Ye, Fang-Fu; Chen, Xiao-Song

    2017-10-01

    In this paper, a random clique network model to mimic the large clustering coefficient and the modular structure that exist in many real complex networks, such as social networks, artificial networks, and protein interaction networks, is introduced by combining the random selection rule of the Erdős and Rényi (ER) model and the concept of cliques. We find that random clique networks having a small average degree differ from the ER network in that they have a large clustering coefficient and a power law clustering spectrum, while networks having a high average degree have similar properties as the ER model. In addition, we find that the relation between the clustering coefficient and the average degree shows a non-monotonic behavior and that the degree distributions can be fit by multiple Poisson curves; we explain the origin of such novel behaviors and degree distributions.

  3. Calculations of the properties of superconducting alloys via the average T-matrix approximation

    International Nuclear Information System (INIS)

    Chatterjee, P.

    1980-01-01

    The theoretical formula of McMillan, modified via the multiple-scattering theory by Gomersall and Gyorffy, has been very successful in computing the electron-phonon coupling constant (λ) and the transition temperature (T_c) of many superconducting elements and compounds. For disordered solids, such as substitutional alloys, however, this theory fails because of the breakdown of the translational symmetry used in the multiple-scattering theory. Under these conditions the problem can still be solved if the t-matrix is averaged in the random phase approximation (average T-matrix approximation). Gomersall and Gyorffy's expression for λ is reformulated in the random phase approximation. This theory is applied to calculate λ and T_c of the binary substitutional NbMo alloy system at different concentrations. The results appear to be in fair agreement with experiments. (author)

  4. Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial

    Science.gov (United States)

    Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.

    2018-01-01

    This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual art therapy. PTSD Checklist–Military Version and Beck Depression Inventory–II scores improved with treatment in both groups with no significant difference in improvement between the experimental and control groups. Art therapy in conjunction with CPT was found to improve trauma processing and veterans considered it to be an important part of their treatment as it provided healthy distancing, enhanced trauma recall, and increased access to emotions. PMID:29332989

  5. Polymers and Random graphs: Asymptotic equivalence to branching processes

    International Nuclear Information System (INIS)

    Spouge, J.L.

    1985-01-01

    In 1974, Falk and Thomas did a computer simulation of Flory's Equireactive RA_f Polymer model, rings forbidden and rings allowed. Asymptotically, the Rings Forbidden model tended to Stockmayer's RA_f distribution (in which the sol distribution "sticks" after gelation), while the Rings Allowed model tended to the Flory version of the RA_f distribution. In 1965, Whittle introduced the Tree and Pseudomultigraph models. We show that these random graphs generalize the Falk and Thomas models by incorporating first-shell substitution effects. Moreover, asymptotically the Tree model displays postgelation "sticking." Hence this phenomenon results from the absence of rings and occurs independently of equireactivity. We also show that the Pseudomultigraph model is asymptotically identical to the Branching Process model introduced by Gordon in 1962. This provides a possible basis for the Branching Process model in standard statistical mechanics.

  6. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    CERN Document Server

    Vamos, C; Vereecken, H

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles at a grid node are simultaneously scattered using the Bernoulli distribution. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.
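
    The node-wise scattering rule lends itself to a compact implementation. Below is a minimal sketch (our own illustration, not the authors' code) for simple unbiased 1D diffusion: all particles at a node are split with one vectorized binomial draw per node, so the cost per step does not grow with the number of particles.

```python
import numpy as np

rng = np.random.default_rng(0)

def grw_step(counts):
    """One time step of the global random walk: at every grid node, all
    resident particles are scattered at once with a single binomial draw
    (number of right-movers ~ Binomial(n, 1/2)) instead of n coin flips."""
    right = rng.binomial(counts, 0.5)      # vectorized over all nodes
    left = counts - right
    new = np.zeros_like(counts)
    new[1:] += right[:-1]                  # right-movers
    new[:-1] += left[1:]                   # left-movers
    return new

# 10^6 particles released at the centre of a 1D grid
counts = np.zeros(401, dtype=np.int64)
counts[200] = 1_000_000
steps = 500
for _ in range(steps):
    counts = grw_step(counts)

# For simple diffusion the displacement variance grows linearly in time
x = np.arange(401) - 200
var = np.average(x.astype(float) ** 2, weights=counts)
print(f"empirical variance {var:.1f} vs expected {steps}")
```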

  7. Generalized random walk algorithm for the numerical modeling of complex diffusion processes

    International Nuclear Information System (INIS)

    Vamos, Calin; Suciu, Nicolae; Vereecken, Harry

    2003-01-01

    A generalized form of the random walk algorithm to simulate diffusion processes is introduced. Unlike the usual approach, at a given time all the particles at a grid node are simultaneously scattered using the Bernoulli distribution. This procedure saves memory and computing time, and no restrictions are imposed on the maximum number of particles to be used in simulations. We prove that for simple diffusion the method generalizes the finite difference scheme and gives the same precision for a large enough number of particles. As an example, simulations of diffusion in a random velocity field are performed and the main features of the stochastic mathematical model are numerically tested.

  8. British Standard method for determination of ISO speed and average gradient of direct-exposure medical and dental radiographic film/process combinations

    International Nuclear Information System (INIS)

    1983-01-01

    Under the direction of the Cinematography and Photography Standards Committee, a British Standard method has been prepared for determining ISO speed and average gradient of direct-exposure medical and dental radiographic film/film-process combinations. The method determines the speed and gradient, i.e. contrast, of the X-ray films processed according to their manufacturer's recommendations. (U.K.)

  9. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    Science.gov (United States)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
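
    As a rough illustration of the class-weighting idea (not the study's actual 11-variable dataset or code; the features and labels below are synthetic stand-ins), scikit-learn's random forest can be biased toward the rare sinkhole class via its class_weight option:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

# Stand-in for the 11 geometric/contextual depression attributes
# (e.g. depth, area, circularity, distance to roads); the real feature
# set comes from manually verified LiDAR-derived depressions.
X = rng.normal(size=(5000, 11))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=5000) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight='balanced' reweights trees toward the minority (sinkhole)
# class -- one simple realization of a "weighted random forest"
clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                             random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```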

  10. Fast Decentralized Averaging via Multi-scale Gossip

    Science.gov (United States)

    Tsianos, Konstantinos I.; Rabbat, Michael G.

    We are interested in the problem of computing the average consensus in a distributed fashion on random geometric graphs. We describe a new algorithm called Multi-scale Gossip which employs a hierarchical decomposition of the graph to partition the computation into tractable sub-problems. Using only pairwise messages of fixed size that travel at most O(n^{1/3}) hops, our algorithm is robust and has communication cost of O(n log log n log(1/ε)) transmissions, which is order-optimal up to the logarithmic factor in n. Simulated experiments verify the good expected performance on graphs of many thousands of nodes.
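
    For orientation, the baseline that Multi-scale Gossip improves upon is plain randomized pairwise gossip, sketched below on a random geometric graph (an assumed toy setup, not the paper's hierarchical algorithm). Each pairwise exchange preserves the sum of the states, so on a connected graph the iteration converges to the average.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random geometric graph: n nodes in the unit square, edges within radius r
# (with these parameters the graph is connected with high probability)
n, r = 200, 0.15
pos = rng.random((n, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
adj = [np.flatnonzero((dist[i] < r) & (np.arange(n) != i)) for i in range(n)]

x = rng.random(n)            # initial node values
target = x.mean()            # the consensus value

for _ in range(20000):
    i = int(rng.integers(n))
    if adj[i].size == 0:
        continue
    j = rng.choice(adj[i])
    x[i] = x[j] = (x[i] + x[j]) / 2   # pairwise average preserves the sum

print(f"max deviation from the average: {np.abs(x - target).max():.2e}")
```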

  11. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes to them an important mathematical role in theoretical formulations in personalized medicine, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  12. 5th Seminar on Stochastic Processes, Random Fields and Applications

    CERN Document Server

    Russo, Francesco; Dozzi, Marco

    2008-01-01

    This volume contains twenty-eight refereed research or review papers presented at the 5th Seminar on Stochastic Processes, Random Fields and Applications, which took place at the Centro Stefano Franscini (Monte Verità) in Ascona, Switzerland, from May 30 to June 3, 2005. The seminar focused mainly on stochastic partial differential equations, random dynamical systems, infinite-dimensional analysis, approximation problems, and financial engineering. The book will be a valuable resource for researchers in stochastic analysis and professionals interested in stochastic methods in finance. Contributors: Y. Asai, J.-P. Aubin, C. Becker, M. Benaïm, H. Bessaih, S. Biagini, S. Bonaccorsi, N. Bouleau, N. Champagnat, G. Da Prato, R. Ferrière, F. Flandoli, P. Guasoni, V.B. Hallulli, D. Khoshnevisan, T. Komorowski, R. Léandre, P. Lescot, H. Lisei, J.A. López-Mimbela, V. Mandrekar, S. Méléard, A. Millet, H. Nagai, A.D. Neate, V. Orlovius, M. Pratelli, N. Privault, O. Raimond, M. Röckner, B. Rüdiger, W.J. Runggaldi...

  13. Statistics of peaks of Gaussian random fields

    International Nuclear Information System (INIS)

    Bardeen, J.M.; Bond, J.R.; Kaiser, N.; Szalay, A.S.; Stanford Univ., CA; California Univ., Berkeley; Cambridge Univ., England; Fermi National Accelerator Lab., Batavia, IL)

    1986-01-01

    A set of new mathematical results on the theory of Gaussian random fields is presented, and the application of such calculations in cosmology to treat questions of structure formation from small-amplitude initial density fluctuations is addressed. The point process equation is discussed, giving the general formula for the average number density of peaks. The problem of the proper conditional probability constraints appropriate to maxima are examined using a one-dimensional illustration. The average density of maxima of a general three-dimensional Gaussian field is calculated as a function of heights of the maxima, and the average density of upcrossing points on density contour surfaces is computed. The number density of peaks subject to the constraint that the large-scale density field be fixed is determined and used to discuss the segregation of high peaks from the underlying mass distribution. The machinery to calculate n-point peak-peak correlation functions is determined, as are the shapes of the profiles about maxima. 67 references
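
    As standard background for the one-dimensional illustration mentioned above (Rice's classical result, not the paper's three-dimensional derivation), the mean density of local maxima of a stationary, smooth, zero-mean Gaussian process F(x) can be written through the spectral moments of its power spectrum:

```latex
% Mean number of local maxima per unit length, with spectral moments
% \lambda_j = \int k^j P(k)\,dk of the power spectrum P(k):
\[
  \langle n_{\max}\rangle \;=\; \frac{1}{2\pi}\sqrt{\frac{\lambda_4}{\lambda_2}},
  \qquad \lambda_2 = \langle F'^2\rangle,\quad \lambda_4 = \langle F''^2\rangle .
\]
```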

  14. Groupies in multitype random graphs

    OpenAIRE

    Shang, Yilun

    2016-01-01

    A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős–Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.

  15. Random walk of passive tracers among randomly moving obstacles.

    Science.gov (United States)

    Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco

    2016-04-14

    This study is mainly motivated by the need to understand how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, and hence whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique, the topic of random walks in random environments is considered here for a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity worked out is the diffusion coefficient of the passive tracer, computed as a function of the average inter-obstacle distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.

  16. Curvature of random walks and random polygons in confinement

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Montemayor, A; Ziegler, U

    2013-01-01

    The purpose of this paper is to study the curvature of equilateral random walks and polygons that are confined in a sphere. Curvature is one of several basic geometric properties that can be used to describe random walks and polygons. We show that confinement affects curvature quite strongly, and in the limit case where the confinement diameter equals the edge length the unconfined expected curvature value doubles from π/2 to π. To study curvature, a simple model of an equilateral random walk in spherical confinement in dimensions 2 and 3 is introduced. For this simple model we derive explicit integral expressions for the expected value of the total curvature in both dimensions. These expressions are functions that depend only on the radius R of the confinement sphere. We then show that the values obtained by numeric integration of these expressions agree with numerical average curvature estimates obtained from simulations of random walks. Finally, we compare the confinement effect on curvature of random walks with random polygons. (paper)

  17. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
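
    As a sketch of the simpler of the two ideas (minus-sampling, i.e. border correction; Hanisch's Horvitz-Thompson-type estimator additionally weights points by boundary distance and intensity), the following assumed toy example estimates D for a Poisson pattern in the unit square:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stationary Poisson process on the unit square (intensity lambda = n)
n = 500
pts = rng.random((n, 2))

def D_minus_sampling(pts, r):
    """Border (minus-sampling) estimator of the nearest-neighbour distance
    distribution D(r): use as reference points only those whose distance to
    the window boundary is at least r, so their true nearest neighbour
    cannot lie outside the observation window."""
    d_boundary = np.minimum(pts, 1 - pts).min(axis=1)
    inside = d_boundary >= r
    if not inside.any():
        return np.nan
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nn = dist.min(axis=1)                 # nearest-neighbour distances
    return (nn[inside] <= r).mean()

for r in (0.01, 0.02, 0.05):
    # For a Poisson process, D(r) = 1 - exp(-lambda * pi * r^2)
    theory = 1 - np.exp(-n * np.pi * r**2)
    print(f"r={r}: estimate {D_minus_sampling(pts, r):.3f}, theory {theory:.3f}")
```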

  18. Quasi-steady-state analysis of two-dimensional random intermittent search processes

    KAUST Repository

    Bressloff, Paul C.

    2011-06-01

    We use perturbation methods to analyze a two-dimensional random intermittent search process, in which a searcher alternates between a diffusive search phase and a ballistic movement phase whose velocity direction is random. A hidden target is introduced within a rectangular domain with reflecting boundaries. If the searcher moves within range of the target and is in the search phase, it has a chance of detecting the target. A quasi-steady-state analysis is applied to the corresponding Chapman-Kolmogorov equation. This generates a reduced Fokker-Planck description of the search process involving a nonzero drift term and an anisotropic diffusion tensor. In the case of a uniform direction distribution, for which there is zero drift, and isotropic diffusion, we use the method of matched asymptotics to compute the mean first passage time (MFPT) to the target, under the assumption that the detection range of the target is much smaller than the size of the domain. We show that an optimal search strategy exists, consistent with previous studies of intermittent search in a radially symmetric domain that were based on a decoupling or moment closure approximation. We also show how the decoupling approximation can break down in the case of biased search processes. Finally, we analyze the MFPT in the case of anisotropic diffusion and find that anisotropy can be useful when the searcher starts from a fixed location. © 2011 American Physical Society.

  19. Quasi-steady-state analysis of two-dimensional random intermittent search processes

    KAUST Repository

    Bressloff, Paul C.; Newby, Jay M.

    2011-01-01

    We use perturbation methods to analyze a two-dimensional random intermittent search process, in which a searcher alternates between a diffusive search phase and a ballistic movement phase whose velocity direction is random. A hidden target is introduced within a rectangular domain with reflecting boundaries. If the searcher moves within range of the target and is in the search phase, it has a chance of detecting the target. A quasi-steady-state analysis is applied to the corresponding Chapman-Kolmogorov equation. This generates a reduced Fokker-Planck description of the search process involving a nonzero drift term and an anisotropic diffusion tensor. In the case of a uniform direction distribution, for which there is zero drift, and isotropic diffusion, we use the method of matched asymptotics to compute the mean first passage time (MFPT) to the target, under the assumption that the detection range of the target is much smaller than the size of the domain. We show that an optimal search strategy exists, consistent with previous studies of intermittent search in a radially symmetric domain that were based on a decoupling or moment closure approximation. We also show how the decoupling approximation can break down in the case of biased search processes. Finally, we analyze the MFPT in the case of anisotropic diffusion and find that anisotropy can be useful when the searcher starts from a fixed location. © 2011 American Physical Society.

  20. Cosmological measure with volume averaging and the vacuum energy problem

    Science.gov (United States)

    Astashenok, Artyom V.; del Popolo, Antonino

    2012-04-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero.

  1. Cosmological measure with volume averaging and the vacuum energy problem

    International Nuclear Information System (INIS)

    Astashenok, Artyom V; Del Popolo, Antonino

    2012-01-01

    In this paper, we give a possible solution to the cosmological constant problem. It is shown that the traditional approach, based on volume weighting of probabilities, leads to an incoherent conclusion: the probability that a randomly chosen observer measures Λ = 0 is exactly equal to 1. Using an alternative volume-averaging measure instead of volume weighting can explain why the cosmological constant is non-zero. (paper)

  2. Simulation study on characteristics of long-range interaction in randomly asymmetric exclusion process

    Science.gov (United States)

    Zhao, Shi-Bo; Liu, Ming-Zhe; Yang, Lan-Ying

    2015-04-01

    In this paper we investigate, via Monte Carlo simulations, the dynamics of an asymmetric exclusion process on a one-dimensional lattice with long-range hopping and random update. Particles in the model first try to hop over a run of successive unoccupied sites with a probability q, which differs from previous exclusion process models. The probability q may represent the random access of particles. Numerical simulations for stationary particle currents, density profiles, and phase diagrams are obtained. There are three possible stationary phases: the low density (LD) phase, the high density (HD) phase, and the maximal current (MC) phase. Interestingly, the bulk density in the LD phase tends to zero, while the MC phase is governed by α, β, and q. The HD phase is nearly the same as in the normal TASEP, determined by the exit rate β. Theoretical analysis is in good agreement with simulation results. The proposed model may provide a better understanding of random interaction dynamics in complex systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 41274109 and 11104022), the Fund for Sichuan Youth Science and Technology Innovation Research Team (Grant No. 2011JTD0013), and the Creative Team Program of Chengdu University of Technology.
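
    Because the record does not fully specify the update rule, the sketch below makes an explicit assumption: with probability q the chosen particle traverses the whole run of empty sites ahead of it, and otherwise performs the ordinary single-site hop; entry and exit rates α and β act at open boundaries under random sequential updating.

```python
import numpy as np

rng = np.random.default_rng(4)

L, alpha, beta, q = 200, 0.7, 0.7, 0.3
lattice = np.zeros(L, dtype=int)

for _ in range(1_000_000):                  # random sequential updates
    i = int(rng.integers(-1, L))            # i = -1 selects the entry move
    if i == -1:
        if lattice[0] == 0 and rng.random() < alpha:
            lattice[0] = 1                  # particle enters at the left
    elif lattice[i] == 1:
        if i == L - 1:
            if rng.random() < beta:
                lattice[i] = 0              # particle exits at the right
        elif lattice[i + 1] == 0:
            ahead = np.flatnonzero(lattice[i + 1:])
            end = i + ahead[0] if ahead.size else L - 1  # last empty site of the run
            if end > i + 1 and rng.random() < q:
                lattice[i], lattice[end] = 0, 1    # long-range hop over the run
            else:
                lattice[i], lattice[i + 1] = 0, 1  # ordinary TASEP hop

print("bulk density:", lattice[L // 4 : 3 * L // 4].mean())
```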

  3. PySpline: A Modern, Cross-Platform Program for the Processing of Raw Averaged XAS Edge and EXAFS Data

    International Nuclear Information System (INIS)

    Tenderholt, Adam; Hedman, Britt; Hodgson, Keith O.

    2007-01-01

    PySpline is a modern computer program for processing raw averaged XAS and EXAFS data using an intuitive approach which allows the user to see the immediate effect of various processing parameters on the resulting k- and R-space data. The Python scripting language and Qt and Qwt widget libraries were chosen to meet the design requirement that it be cross-platform (i.e. versions for Windows, Mac OS X, and Linux). PySpline supports polynomial pre- and post-edge background subtraction, splining of the EXAFS region with a multi-segment polynomial spline, and Fast Fourier Transform (FFT) of the resulting k³-weighted EXAFS data.

  4. Can a combination of average of normals and "real time" External Quality Assurance replace Internal Quality Control?

    Science.gov (United States)

    Badrick, Tony; Graham, Peter

    2018-03-28

    Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They have different sample frequencies, statistical interpretations and immediacy. Both processes have evolved, absorbing new understandings of the concept of laboratory error, sample material matrix and assay capability. However, we do not believe at the coalface that either process has led to much improvement in patient outcomes recently. It is the increasing reliability and automation of analytical platforms, along with the improved stability of reagents, that have reduced systematic and random error, which in turn has minimised the risk of running IQC less frequently. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker responses to and identification of out-of-control situations.

  5. Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units

    Science.gov (United States)

    Demchik, Vadim

    2011-03-01

    Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The speed-up factor obtained is in the hundreds compared with the CPU. The RANLUX generator is found to be the most appropriate for use on GPU in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
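
    For reference, the XOR128 member of the benchmarked family is small enough to state in full. Below is a pure-Python rendering of Marsaglia's xorshift recurrence (an illustrative port, not the paper's GPU kernels, which keep one such state per thread):

```python
MASK = 0xFFFFFFFF   # emulate 32-bit unsigned arithmetic in Python

def xor128(seed=(123456789, 362436069, 521288629, 88675123)):
    """Marsaglia's XOR-shift generator with 128-bit state (period 2**128 - 1)."""
    x, y, z, w = seed
    while True:
        t = (x ^ (x << 11)) & MASK
        x, y, z = y, z, w
        w = ((w ^ (w >> 19)) ^ (t ^ (t >> 8))) & MASK
        yield w / 2**32                      # uniform float in [0, 1)

gen = xor128()
print([round(next(gen), 6) for _ in range(3)])
```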

  6. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  7. Groupies in multitype random graphs.

    Science.gov (United States)

    Shang, Yilun

    2016-01-01

    A groupie in a graph is a vertex whose degree is not less than the average degree of its neighbors. Under some mild conditions, we show that the proportion of groupies is very close to 1/2 in multitype random graphs (such as stochastic block models), which include Erdős-Rényi random graphs, random bipartite, and multipartite graphs as special examples. Numerical examples are provided to illustrate the theoretical results.

  8. Dose calculation with respiration-averaged CT processed from cine CT without a respiratory surrogate

    International Nuclear Information System (INIS)

    Riegel, Adam C.; Ahmad, Moiz; Sun Xiaojun; Pan Tinsu

    2008-01-01

    Dose calculation for thoracic radiotherapy is commonly performed on a free-breathing helical CT despite artifacts caused by respiratory motion. Four-dimensional computed tomography (4D-CT) is one method to incorporate motion information into the treatment planning process. Some centers now use the respiration-averaged CT (RACT), the pixel-by-pixel average of the ten phases of 4D-CT, for dose calculation. This method, while sparing the tedious task of 4D dose calculation, still requires 4D-CT technology. The authors have recently developed a means to reconstruct RACT directly from unsorted cine CT data from which 4D-CT is formed, bypassing the need for a respiratory surrogate. Using RACT from cine CT for dose calculation may be a means to incorporate motion information into dose calculation without performing 4D-CT. The purpose of this study was to determine if RACT from cine CT can be substituted for RACT from 4D-CT for the purposes of dose calculation, and if increasing the cine duration can decrease differences between the dose distributions. Cine CT data and corresponding 4D-CT simulations for 23 patients with at least two breathing cycles per cine duration were retrieved. RACT was generated four ways: first, from the ten phases of 4D-CT; second, from 1 breathing cycle of images; third, from 1.5 breathing cycles of images; and fourth, from 2 breathing cycles of images. The clinical treatment plan was transferred to each RACT and dose was recalculated. Dose planes were exported at orthogonal planes through the isocenter (coronal, sagittal, and transverse orientations). The resulting dose distributions were compared using the gamma (γ) index within the planning target volume (PTV). Failure criteria were set to 2%/1 mm. A follow-up study with 50 additional lung cancer patients was performed to increase sample size. The same dose recalculation and analysis was performed. In the primary patient group, 22 of 23 patients had 100% of points within the PTV pass γ criteria.
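
    The averaging itself is a one-line operation; the sketch below (synthetic arrays standing in for DICOM volumes) contrasts the two routes: averaging the ten sorted 4D-CT phases versus directly averaging the unsorted cine images acquired at each couch position.

```python
import numpy as np

# Ten 3D CT phase volumes from 4D-CT sorting, shape (10, nz, ny, nx), in HU.
# Random stand-in data for illustration; clinical volumes come from DICOM.
phases = np.random.default_rng(5).normal(-300, 50, size=(10, 60, 128, 128))

ract_4dct = phases.mean(axis=0)   # respiration-averaged CT from 4D-CT

# The cine-based variant averages all unsorted cine images acquired at each
# couch position (here: stacks of varying temporal length per position).
cine = [np.random.default_rng(i).normal(-300, 50, size=(m, 128, 128))
        for i, m in enumerate([18, 20, 19])]
ract_cine = np.stack([c.mean(axis=0) for c in cine])  # one averaged slice per position
print(ract_4dct.shape, ract_cine.shape)
```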

  9. A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, Manfred

    2003-01-01

    We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...

  10. An approximate analytical approach to resampling averages

    DEFF Research Database (Denmark)

    Malzahn, Dorthe; Opper, M.

    2004-01-01

    Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach for approximate Bayesian inference. We demonstrate our approach on regression with Gaussian processes. A comparison with averages obtained by Monte-Carlo sampling shows that our method achieves good accuracy.

  11. Quantized Average Consensus on Gossip Digraphs with Reduced Computation

    Science.gov (United States)

    Cai, Kai; Ishii, Hideaki

    The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.

  12. Fluctuations of wavefunctions about their classical average

    International Nuclear Information System (INIS)

    Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H

    2003-01-01

    Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics

  13. Distributed Random Process for a Large-Scale Peer-to-Peer Lottery

    OpenAIRE

    Grumbach, Stéphane; Riemann, Robert

    2017-01-01

    Most online lotteries today fail to ensure the verifiability of the random process and rely on a trusted third party. This issue has received little attention since the emergence of distributed protocols like Bitcoin that demonstrated the potential of protocols with no trusted third party. We argue that the security requirements of online lotteries are similar to those of online voting, and propose a novel distributed online lottery protocol that applies techniques dev...

  14. On a randomly imperfect spherical cap pressurized by a random ...

    African Journals Online (AJOL)

    On a randomly imperfect spherical cap pressurized by a random dynamic load. ... In this paper, we investigate a dynamical system in a random setting of dual ... characterization of the random process for determining the dynamic buckling load ...

  15. Size and emotion averaging: costs of dividing attention after all.

    Science.gov (United States)

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  16. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
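
    A minimal sketch of the diffusion entropy analysis step (our own toy version with Gaussian walks, for which the expected exponent is δ = 0.5; the paper applies the method to Lévy-stable fluctuations): estimate the displacement PDF at each timescale, compute its Shannon entropy, and read δ from the slope of S(t) against ln t.

```python
import numpy as np

rng = np.random.default_rng(6)

# Ensemble of ordinary random walks; heavy-tailed dynamics would use
# Levy-stable increments instead of Gaussian ones.
walks = rng.normal(size=(4000, 1024)).cumsum(axis=1)

ts = np.unique(np.logspace(0.5, 3.0, 12).astype(int))
S = []
for t in ts:
    disp = walks[:, t - 1]                         # displacements at timescale t
    p, edges = np.histogram(disp, bins=60, density=True)
    w = np.diff(edges)
    m = p > 0
    S.append(-np.sum(p[m] * w[m] * np.log(p[m])))  # Shannon entropy of the PDF

delta = np.polyfit(np.log(ts), S, 1)[0]            # S(t) ~ A + delta * ln t
print(f"estimated scaling exponent delta = {delta:.2f} (expected 0.5)")
```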

  17. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  18. Setting up a randomized clinical trial in the UK: approvals and process.

    Science.gov (United States)

    Greene, Louise Eleanor; Bearn, David R

    2013-06-01

    Randomized clinical trials are considered the 'gold standard' in primary research for healthcare interventions. However, they can be expensive and time-consuming to set up and require many approvals to be in place before they can begin. This paper outlines how to determine what approvals are required for a trial, the background of each approval and the process for obtaining them.

  19. Generation and monitoring of discrete stable random processes using multiple immigration population models

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, J O; Hopcraft, K I; Jakeman, E [Applied Mathematics Division, School of Mathematical Sciences, University of Nottingham, Nottingham, NG7 2RD (United Kingdom)

    2003-11-21

    Some properties of classical population processes that comprise births, deaths and multiple immigrations are investigated. The rates at which the immigrants arrive can be tailored to produce a population whose steady state fluctuations are described by a pre-selected distribution. Attention is focused on the class of distributions with a discrete stable law, which have power-law tails and whose moments and autocorrelation function do not exist. The separate problem of monitoring and characterizing the fluctuations is studied, analysing the statistics of individuals that leave the population. The fluctuations in the size of the population are transferred to the times between emigrants that form an intermittent time series of events. The emigrants are counted with a detector of finite dynamic range and response time. This is modelled through clipping the time series or saturating it at an arbitrary but finite level, whereupon its moments and correlation properties become finite. Distributions for the time to the first counted event and for the time between events exhibit power-law regimes that are characteristic of the fluctuations in population size. The processes provide analytical models with which properties of complex discrete random phenomena can be explored, and in addition provide generic means by which random time series encompassing a wide range of intermittent and other discrete random behaviour may be generated.

  20. Generation and monitoring of discrete stable random processes using multiple immigration population models

    International Nuclear Information System (INIS)

    Matthews, J O; Hopcraft, K I; Jakeman, E

    2003-01-01

    Some properties of classical population processes that comprise births, deaths and multiple immigrations are investigated. The rates at which the immigrants arrive can be tailored to produce a population whose steady state fluctuations are described by a pre-selected distribution. Attention is focused on the class of distributions with a discrete stable law, which have power-law tails and whose moments and autocorrelation function do not exist. The separate problem of monitoring and characterizing the fluctuations is studied, analysing the statistics of individuals that leave the population. The fluctuations in the size of the population are transferred to the times between emigrants that form an intermittent time series of events. The emigrants are counted with a detector of finite dynamic range and response time. This is modelled through clipping the time series or saturating it at an arbitrary but finite level, whereupon its moments and correlation properties become finite. Distributions for the time to the first counted event and for the time between events exhibit power-law regimes that are characteristic of the fluctuations in population size. The processes provide analytical models with which properties of complex discrete random phenomena can be explored, and in addition provide generic means by which random time series encompassing a wide range of intermittent and other discrete random behaviour may be generated.

  1. System for evaluation of the true average input-pulse rate

    International Nuclear Information System (INIS)

    Eichenlaub, D.P.; Garrett, P.

    1977-01-01

    A digital radiation monitoring system is described that uses current digital circuits and a microprocessor to rapidly process pulse data coming from remote radiation controllers. The system analyses the pulse rates to determine whether a new datum is statistically the same as those previously received, and hence determines the best possible averaging time for itself. As long as the true average pulse rate stays constant, the averaging time can increase until the statistical error falls below the desired level, e.g. 1%. When the digital processing of the pulse data indicates a change in the true average pulse rate, the averaging time is reduced so as to improve the response time of the system at the cost of statistical error. This concept embodies a fixed compromise between statistical error and response time. [fr]
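
    The adaptive-window logic can be sketched as follows (an assumed reconstruction of the described behaviour, not the instrument's firmware): the averaging window keeps growing while incoming counts stay statistically consistent with the running mean, and resets when a Poisson test flags a rate change.

```python
import numpy as np

rng = np.random.default_rng(7)

def adaptive_rate(counts, dwell=1.0, nsigma=3.0):
    """Running count-rate estimate with a self-adjusting averaging window.

    Each entry of `counts` is the number of pulses in one dwell period.
    The window grows (shrinking the statistical error) while new data agree
    with the running average, and resets on a > nsigma Poisson discrepancy,
    improving the response time to true rate changes."""
    total, periods, rates = 0, 0, []
    for c in counts:
        if periods:
            mean = total / periods                     # expected counts/period
            if abs(c - mean) > nsigma * np.sqrt(max(mean, 1.0)):
                total, periods = 0, 0                  # rate changed: restart
        total += c
        periods += 1
        rates.append(total / periods / dwell)
    return np.array(rates)

# Step change in the true rate at t = 300
true = np.r_[np.full(300, 50.0), np.full(300, 120.0)]
est = adaptive_rate(rng.poisson(true))
print(est[295:305].round(1))   # estimate snaps to the new rate after the step
```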

  2. Irreversible stochastic processes on lattices

    International Nuclear Information System (INIS)

    Nord, R.S.

    1986-01-01

    Models for irreversible random or cooperative filling of lattices are required to describe many processes in chemistry and physics. Since the filling is assumed to be irreversible, even the stationary, saturation state is not in equilibrium. The kinetics and statistics of these processes are described by recasting the master equations in infinite hierarchical form. Solutions can be obtained by implementing various techniques; refinements in these solution techniques are presented. Programs considered include random dimer, trimer, and tetramer filling of 2D lattices, random dimer filling of a cubic lattice, competitive filling of two or more species, and the effect of a random distribution of inactive sites on the filling. Monomer filling of a linear lattice with nearest-neighbor cooperative effects is also considered, and the exact cluster-size distribution is solved for cluster sizes up to the asymptotic regime. Additionally, a technique is developed to directly determine the asymptotic properties of the cluster-size distribution. Finally, cluster growth is considered via irreversible aggregation involving random walkers. In particular, explicit results are provided for the large-lattice-size asymptotic behavior of trapping probabilities and average walk lengths for a single walker on a lattice with multiple traps. Procedures for exact calculation of these quantities on finite lattices are also developed.
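
    The simplest member of this family, irreversible random dimer filling of a linear lattice, makes a compact worked example (a generic random sequential adsorption sketch, not the thesis code). Attempting every position exactly once in random order reaches the jammed state, whose coverage approaches Flory's 1 − e⁻² ≈ 0.8647:

```python
import numpy as np

rng = np.random.default_rng(8)

def dimer_jamming(L):
    """Randomly and irreversibly adsorb dimers on a 1D lattice until jammed.

    Since occupancy only grows, trying each left end once in random order
    is equivalent to running RSA to saturation: any pair left empty would
    have been filled when its position was attempted."""
    occ = np.zeros(L, dtype=bool)
    for i in rng.permutation(L - 1):       # candidate left ends, random order
        if not occ[i] and not occ[i + 1]:
            occ[i] = occ[i + 1] = True
    return occ.mean()

covs = [dimer_jamming(100_000) for _ in range(10)]
print(f"jamming coverage {np.mean(covs):.4f} "
      f"vs Flory 1 - exp(-2) = {1 - np.exp(-2):.4f}")
```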

  3. P1-25: Filling-in the Blind Spot with the Average Direction

    Directory of Open Access Journals (Sweden)

    Sang-Ah Yoo

    2012-10-01

    Full Text Available Previous studies have shown that the visual system integrates local motions and perceives the average direction (Watamaniuk & Duchon, 1992 Vision Research 32 931–941). We investigated whether the surface of the blind spot is filled in with the average direction of the surrounding local motions. To test this, we varied the direction of a random-dot kinematogram (RDK) both in adaptation and test. Motion aftereffects (MAE) were defined as the difference in motion coherence thresholds with and without adaptation. The participants were initially adapted to an annular RDK surrounding the blind spot for 30 s in their dominant eyes. The direction of each dot in this RDK was selected equally and randomly from either a normal distribution with a mean of 15° clockwise from vertical, 15° counterclockwise from vertical, or from the mixture of them. Immediately after the adaptation, a disk-shaped test RDK was presented for 1 s at the corresponding blind-spot location in the opposite eye. This RDK moved either 15° clockwise, 15° counterclockwise, or vertically (the average of the two directions). The participants' task was to discriminate the direction of the test RDK across different coherence levels. We found significant MAE when the test RDK had the same directions as the adaptor. More importantly, equally strong MAE was observed even when the direction of the test RDK was vertical, which was not physically present during adaptation. The result demonstrates that the visual system uses the average direction of the local surrounding motions to fill in the blind spot.

  4. What Randomized Benchmarking Actually Measures

    International Nuclear Information System (INIS)

    Proctor, Timothy; Rudinger, Kenneth; Young, Kevin; Sarovar, Mohan; Blume-Kohout, Robin

    2017-01-01

    Randomized benchmarking (RB) is widely used to measure an error rate of a set of quantum gates, by performing random circuits that would do nothing if the gates were perfect. In the limit of no finite-sampling error, the exponential decay rate of the observable survival probabilities, versus circuit length, yields a single error metric r. For Clifford gates with arbitrary small errors described by process matrices, r was believed to reliably correspond to the mean, over all Clifford gates, of the average gate infidelity between the imperfect gates and their ideal counterparts. We show that this quantity is not a well-defined property of a physical gate set. It depends on the representations used for the imperfect and ideal gates, and the variant typically computed in the literature can differ from r by orders of magnitude. We present new theories of the RB decay that are accurate for all small errors describable by process matrices, and show that the RB decay curve is a simple exponential for all such errors. Here, these theories allow explicit computation of the error rate that RB measures (r), but as far as we can tell it does not correspond to the infidelity of a physically allowed (completely positive) representation of the imperfect gates.
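
    For context, the standard RB analysis that produces the number r is a three-parameter exponential fit (a generic sketch with synthetic single-qubit data; the paper's point concerns the interpretation of r, not the fitting procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

def decay(m, A, p, B):
    """RB model: average survival probability vs circuit length m."""
    return A * p**m + B

# Synthetic single-qubit RB data with true p = 0.98 plus sampling noise
m = np.arange(1, 200, 10)
y = decay(m, 0.5, 0.98, 0.5) + rng.normal(0, 0.005, m.size)

(A, p, B), _ = curve_fit(decay, m, y, p0=(0.5, 0.9, 0.5))
d = 2                                   # Hilbert-space dimension (one qubit)
r = (d - 1) / d * (1 - p)               # the RB error rate
print(f"p = {p:.4f}, r = {r:.2e}")
```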

  5. Comparison of population-averaged and cluster-specific models for the analysis of cluster randomized trials with missing binary outcomes: a simulation study

    Directory of Open Access Journals (Sweden)

    Ma Jinhui

    2013-01-01

    Full Text Available Abstract Background The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e., generalized estimating equations, GEE) and cluster-specific (i.e., random-effects logistic regression, RELR) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. Methods In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI) and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. Results GEE performs well on all four measures — provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately — under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF 50. RELR performs well only when a small amount of data was missing, and complete case analysis was applied. Conclusion GEE performs well as long as appropriate missing data strategies are adopted based on the design of CRTs and the percentage of missing data. In contrast, RELR does not perform well when either standard or within-cluster MI strategy is applied prior to the analysis.
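
    A population-averaged fit of a CRT-like binary outcome can be sketched with statsmodels' GEE under an exchangeable working correlation (illustrative synthetic data, not the study's simulation code; the cluster-specific RELR counterpart would use mixed-effects logistic software such as R's glmer):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)

# Synthetic CRT: 20 clusters per arm, 30 subjects per cluster; a cluster
# random effect induces intra-cluster correlation in the binary outcome
n_clu, n_sub = 40, 30
cluster = np.repeat(np.arange(n_clu), n_sub)
arm = (cluster < 20).astype(float)
u = rng.normal(0.0, 0.5, n_clu)[cluster]          # cluster effects
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.4 * arm + u))))

X = sm.add_constant(arm)                          # intercept + treatment arm
gee = sm.GEE(y, X, groups=cluster, family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```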

  6. Characteristics of the probability function for three random-walk models of reaction--diffusion processes

    International Nuclear Information System (INIS)

    Musho, M.K.; Kozak, J.J.

    1984-01-01

    A method is presented for calculating exactly the relative width (σ²)^{1/2}/⟨n⟩, the skewness γ₁, and the kurtosis γ₂ characterizing the probability distribution function for three random-walk models of diffusion-controlled processes. For processes in which a diffusing coreactant A reacts irreversibly with a target molecule B situated at a reaction center, three models are considered. The first is the traditional one of an unbiased, nearest-neighbor random walk on a d-dimensional periodic/confining lattice with traps; the second involves the consideration of unbiased, non-nearest-neighbor (i.e., variable-step-length) walks on the same d-dimensional lattice; and the third deals with the case of a biased, nearest-neighbor walk on a d-dimensional lattice (wherein a walker experiences a potential centered at the deep trap site of the lattice). Our method, which has been described in detail elsewhere [P.A. Politowicz and J. J. Kozak, Phys. Rev. B 28, 5549 (1983)] is based on the use of group theoretic arguments within the framework of the theory of finite Markov processes
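
    The moment calculation itself can be illustrated numerically: for a walker on a small periodic lattice with one deep trap, the distribution of the walk length n to trapping, and hence its relative width, skewness and kurtosis, follows from iterating the substochastic transition matrix over the non-trap sites. A minimal Python sketch (1D ring, unbiased nearest-neighbor steps; not the group-theoretic method of the paper):

        import numpy as np

        L = 21                        # sites on a 1D periodic lattice, trap at site 0
        Q = np.zeros((L - 1, L - 1))  # transitions among the L-1 non-trap sites (1..L-1)
        for i in range(1, L):
            for j in (i - 1, i + 1):
                j %= L
                if j != 0:
                    Q[i - 1, j - 1] += 0.5   # unbiased nearest-neighbor steps

        v = np.full(L - 1, 1.0 / (L - 1))    # walker starts uniformly on non-trap sites
        moments = np.zeros(5)                # accumulates n^k * P(T = n), k = 0..4
        surv_prev, n = 1.0, 0
        while surv_prev > 1e-12:
            n += 1
            v = v @ Q
            surv = v.sum()
            p_n = surv_prev - surv           # probability of trapping exactly at step n
            moments += p_n * n ** np.arange(5)
            surv_prev = surv

        mean = moments[1]
        var = moments[2] - mean**2
        mu3 = moments[3] - 3 * mean * moments[2] + 2 * mean**3
        mu4 = moments[4] - 4 * mean * moments[3] + 6 * mean**2 * moments[2] - 3 * mean**4
        print(f"<n> = {mean:.2f}, relative width = {np.sqrt(var)/mean:.3f}, "
              f"skewness = {mu3/var**1.5:.3f}, excess kurtosis = {mu4/var**2 - 3:.3f}")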

  7. Random lasing in human tissues

    International Nuclear Information System (INIS)

    Polson, Randal C.; Vardeny, Z. Valy

    2004-01-01

    A random collection of scatterers in a gain medium can produce coherent laser emission lines dubbed 'random lasing'. We show that biological tissues, including human tissues, can support coherent random lasing when infiltrated with a concentrated laser dye solution. To extract a typical random resonator size within the tissue we average the power Fourier transform of random laser spectra collected from many excitation locations in the tissue; we verified this procedure by a computer simulation. Surprisingly, we found that malignant tissues show many more laser lines compared to healthy tissues taken from the same organ. Consequently, the obtained typical random resonator was found to be different for healthy and cancerous tissues, and this may lead to a technique for separating malignant from healthy tissues for diagnostic imaging
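
    The averaging step lends itself to a short sketch: compute the power Fourier transform of each spectrum and average over excitation locations, so that the mode spacing set by the typical resonator survives while location-specific features wash out. The Python snippet below uses synthetic comb-like spectra as a stand-in for measured random-laser spectra (all parameters are illustrative):

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic stand-in for measured spectra: equally spaced mode combs (period
        # set by a hypothetical resonator size) plus noise, with a random offset and
        # random mode amplitudes per excitation location.
        n_points, n_spectra, period = 1024, 50, 37
        pft_sum = np.zeros(n_points // 2 + 1)
        for _ in range(n_spectra):
            offset = rng.integers(period)
            spectrum = np.zeros(n_points)
            modes = spectrum[offset::period]
            spectrum[offset::period] = rng.random(len(modes))
            spectrum += 0.2 * rng.random(n_points)            # background + noise
            pft_sum += np.abs(np.fft.rfft(spectrum - spectrum.mean()))**2

        pft_avg = pft_sum / n_spectra                         # averaged power Fourier transform
        peak = np.argmax(pft_avg[1:]) + 1                     # skip the zero-frequency bin
        print(f"dominant spacing ≈ {n_points / peak:.1f} bins (true period {period})")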

  8. Value of the future: Discounting in random environments

    Science.gov (United States)

    Farmer, J. Doyne; Geanakoplos, John; Masoliver, Jaume; Montero, Miquel; Perelló, Josep

    2015-05-01

    We analyze how to value future costs and benefits when they must be discounted relative to the present. We introduce the subject for the nonspecialist and take into account the randomness of the economic evolution by studying the discount function of three widely used processes for the dynamics of interest rates: Ornstein-Uhlenbeck, Feller, and log-normal. Besides obtaining exact expressions for the discount function and simple asymptotic approximations, we show that historical average interest rates overestimate long-run discount rates and that this effect can be large. In other words, long-run discount rates should be substantially less than the average rate observed in the past, otherwise any cost-benefit calculation would be biased in favor of the present and against interventions that may protect the future.
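
    The key effect, that the effective long-run discount rate lies below the historical average rate, can be reproduced with a few lines of Monte Carlo. The sketch below simulates an Ornstein-Uhlenbeck short rate and compares the rate implied by the averaged discount factor E[exp(-∫r ds)] with the mean rate (parameter values are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(3)

        # Ornstein-Uhlenbeck short rate: dr = -alpha*(r - m)*dt + k*dW (illustrative values)
        alpha, m, k = 0.1, 0.04, 0.02
        dt, horizon, n_paths = 0.1, 200.0, 20000
        n_steps = int(horizon / dt)

        r = np.full(n_paths, m)
        integral = np.zeros(n_paths)
        for _ in range(n_steps):
            integral += r * dt
            r += -alpha * (r - m) * dt + k * np.sqrt(dt) * rng.standard_normal(n_paths)

        D = np.exp(-integral).mean()            # Monte Carlo discount function D(t)
        eff_rate = -np.log(D) / horizon         # effective long-run discount rate
        print(f"effective rate {eff_rate:.4f} vs average rate {m:.4f}")

    Because averaging happens over exp(-∫r ds) rather than over r itself, paths with low rates dominate the expectation, so the effective rate falls below the mean rate, which is the bias discussed above.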

  9. The emergence of typical entanglement in two-party random processes

    International Nuclear Information System (INIS)

    Dahlsten, O C O; Oliveira, R; Plenio, M B

    2007-01-01

    We investigate the entanglement within a system undergoing a random, local process. We find that there is initially a phase of very fast generation and spread of entanglement. At the end of this phase the entanglement is typically maximal. In Oliveira et al (2007 Phys. Rev. Lett. 98 130502) we proved that the maximal entanglement is reached to a fixed arbitrary accuracy within O(N^3) steps, where N is the total number of qubits. Here we provide a detailed and more pedagogical proof. We demonstrate that one can use the so-called stabilizer gates to simulate this process efficiently on a classical computer. Furthermore, we discuss three ways of identifying the transition from the phase of rapid spread of entanglement to the stationary phase: (i) the time when saturation of the maximal entanglement is achieved, (ii) the cutoff moment, when the entanglement probability distribution is practically stationary, and (iii) the moment block entanglement exhibits volume scaling. We furthermore investigate the mixed state and multipartite setting. Numerically, we find that the mutual information appears to behave similarly to the quantum correlations and that there is a well-behaved phase-space flow of entanglement properties towards an equilibrium. We describe how the emergence of typical entanglement can be used to create a much simpler tripartite entanglement description. The results form a bridge between certain abstract results concerning typical (also known as generic) entanglement relative to an unbiased distribution on pure states and the more physical picture of distributions emerging from random local interactions

  10. Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns

    DEFF Research Database (Denmark)

    Gonçalves, Sílvia; Hounyo, Ulrich; Meddahi, Nour

    The main contribution of this paper is to propose bootstrap methods for realized volatility-like estimators defined on pre-averaged returns. In particular, we focus on the pre-averaged realized volatility estimator proposed by Podolskij and Vetter (2009). This statistic can be written (up to a bias correction) in terms of non-overlapping pre-averaged returns. The non-overlapping nature of the pre-averaged returns implies that these are asymptotically independent, but possibly heteroskedastic. This motivates the application of the wild bootstrap in this context. We provide a proof of the first order asymptotic validity of this method for percentile and percentile-t intervals. Our Monte Carlo simulations show that the wild bootstrap can improve the finite sample properties of the existing first order asymptotic theory provided we choose the external random variable appropriately. We use empirical work to illustrate its use in practice.
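
    The mechanics of the wild bootstrap on non-overlapping pre-averaged returns can be sketched as follows; the statistic here is a simplified realized-volatility-like sum of squared pre-averaged returns (scaling constants and the bias correction of the actual estimator are omitted), and the Gaussian external random variable is one common choice rather than the paper's recommendation:

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic high-frequency log-returns: Brownian motion plus i.i.d. market
        # microstructure noise (illustrative).
        n, sigma, noise = 23400, 0.01, 5e-4
        prices = np.cumsum(sigma / np.sqrt(n) * rng.standard_normal(n))
        returns = np.diff(prices + noise * rng.standard_normal(n))

        # Pre-average over non-overlapping blocks of length kn with constant weights.
        kn = 30
        m = len(returns) // kn
        bar_r = returns[:m * kn].reshape(m, kn).mean(axis=1)   # pre-averaged returns

        stat = np.sum(bar_r**2)            # realized-volatility-like statistic (unscaled)

        # Wild bootstrap: multiply each pre-averaged return by an external random
        # variable with unit second moment and recompute the statistic.
        boot = np.empty(999)
        for b in range(boot.size):
            eta = rng.standard_normal(m)
            boot[b] = np.sum((eta * bar_r)**2)
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"statistic {stat:.3e}, 95% percentile interval [{lo:.3e}, {hi:.3e}]")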

  11. State Averages

    Data.gov (United States)

    U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...

  12. Improved averaging for non-null interferometry

    Science.gov (United States)

    Fleig, Jon F.; Murphy, Paul E.

    2013-09-01

    Arithmetic averaging of interferometric phase measurements is a well-established method for reducing the effects of time varying disturbances, such as air turbulence and vibration. Calculating a map of the standard deviation for each pixel in the average map can provide a useful estimate of its variability. However, phase maps of complex and/or high density fringe fields frequently contain defects that severely impair the effectiveness of simple phase averaging and bias the variability estimate. These defects include large or small-area phase unwrapping artifacts, large alignment components, and voids that change in number, location, or size. Inclusion of a single phase map with a large area defect into the average is usually sufficient to spoil the entire result. Small-area phase unwrapping and void defects may not render the average map metrologically useless, but they pessimistically bias the variance estimate for the overwhelming majority of the data. We present an algorithm that obtains phase average and variance estimates that are robust against both large and small-area phase defects. It identifies and rejects phase maps containing large area voids or unwrapping artifacts. It also identifies and prunes the unreliable areas of otherwise useful phase maps, and removes the effect of alignment drift from the variance estimate. The algorithm has several run-time adjustable parameters to adjust the rejection criteria for bad data. However, a single nominal setting has been effective over a wide range of conditions. This enhanced averaging algorithm can be efficiently integrated with the phase map acquisition process to minimize the number of phase samples required to approach the practical noise floor of the metrology environment.
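
    A heavily simplified stand-in for this kind of robust averaging is sketched below: whole maps with a large fraction of strongly deviating pixels are rejected, and surviving per-pixel outliers are pruned before the mean and variability maps are recomputed. The thresholds and criteria are illustrative assumptions, not the authors' algorithm:

        import numpy as np

        def robust_phase_average(maps, map_reject_frac=0.05, pixel_sigma=3.0):
            """Average a stack of phase maps (n_maps, H, W), rejecting (i) maps whose
            deviation from the provisional mean is large over many pixels (e.g.,
            large-area unwrapping artifacts) and (ii) individual outlier pixels."""
            maps = np.asarray(maps, dtype=float)
            mean0 = np.nanmean(maps, axis=0)
            sd0 = np.nanstd(maps, axis=0) + 1e-12

            bad_pix = np.abs(maps - mean0) > pixel_sigma * sd0
            keep = bad_pix.mean(axis=(1, 2)) < map_reject_frac
            maps = maps[keep]

            # Prune remaining per-pixel outliers, then recompute the statistics.
            resid = np.abs(maps - np.nanmean(maps, axis=0))
            maps = np.where(resid > pixel_sigma * sd0, np.nan, maps)
            return np.nanmean(maps, axis=0), np.nanstd(maps, axis=0), keep

        rng = np.random.default_rng(5)
        stack = rng.normal(0.0, 0.01, size=(20, 64, 64))
        stack[3] += 2.0                   # simulate one map ruined by a large defect
        avg, sd, keep = robust_phase_average(stack)
        print(f"kept {keep.sum()} of {len(keep)} maps, residual sd ≈ {sd.mean():.4f}")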

  13. On the time-averaging of ultrafine particle number size spectra in vehicular plumes

    Directory of Open Access Journals (Sweden)

    X. H. Yao

    2006-01-01

    Full Text Available Ultrafine vehicular particle (<100 nm) number size distributions presented in the literature are mostly averages of long scan-time (~30 s or more) spectra, mainly due to the non-availability of commercial instruments that can measure particle distributions in the <10 nm to 100 nm range faster than 30 s, even though individual researchers have built faster (1–2.5 s) scanning instruments. With the introduction of the Engine Exhaust Particle Sizer (EEPS) in 2004, high time-resolution (1 full 32-channel spectrum per second) particle size distribution data became possible, allowing atmospheric researchers to study the characteristics of ultrafine vehicular particles in rapidly and perhaps randomly varying high-concentration environments such as roadside, on-road and tunnel. In this study, particle size distributions in these environments were frequently found to vary on a time scale as short as one second. This poses the question of the generality of using averages of long scan-time spectra for dynamic and/or mechanistic studies in such environments. One-second EEPS data taken at roadside, on roads and in tunnels by a mobile platform are time-averaged to yield 5, 10, 30 and 120 s distributions to answer this question.
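
    The time-averaging operation itself is a one-liner over non-overlapping windows; the sketch below applies it to a synthetic stand-in for 1-s, 32-channel EEPS data and reports how the variability of total counts shrinks as the averaging window grows:

        import numpy as np

        def time_average_spectra(spectra, window_s):
            """Average a (n_seconds, n_channels) array of 1-s size spectra over
            non-overlapping windows of `window_s` seconds."""
            n = (spectra.shape[0] // window_s) * window_s
            return spectra[:n].reshape(-1, window_s, spectra.shape[1]).mean(axis=1)

        rng = np.random.default_rng(6)
        one_second = rng.lognormal(mean=8.0, sigma=1.0, size=(600, 32))  # synthetic 10 min
        for w in (5, 10, 30, 120):
            avg = time_average_spectra(one_second, w)
            totals = avg.sum(axis=1)
            print(f"{w:4d} s averaging: {avg.shape[0]:3d} spectra, "
                  f"total-count CV = {totals.std() / totals.mean():.3f}")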

  14. Random SU(2) invariant tensors

    Science.gov (United States)

    Li, Youning; Han, Muxin; Ruan, Dong; Zeng, Bei

    2018-04-01

    SU(2) invariant tensors are states in the (local) SU(2) tensor product representation but invariant under the global group action. They are of importance in the study of loop quantum gravity. A random tensor is an ensemble of tensor states. An average over the ensemble is carried out when computing any physical quantities. The random tensor exhibits a phenomenon known as ‘concentration of measure’, which states that for any bipartition the average value of entanglement entropy of its reduced density matrix is asymptotically the maximal possible as the local dimensions go to infinity. We show that this phenomenon is also true when the average is over the SU(2) invariant subspace instead of the entire space for rank-n tensors in general. It is shown in our earlier work Li et al (2017 New J. Phys. 19 063029) that the subleading correction of the entanglement entropy has a mild logarithmic divergence when n = 4. In this paper, we show that for n > 4 the subleading correction is not divergent but a finite number. In some special situations, the number can even be smaller than 1/2, which is the subleading correction of a random state over the entire Hilbert space of tensors.

  15. Characterizing individual painDETECT symptoms by average pain severity

    Directory of Open Access Journals (Sweden)

    Sadosky A

    2016-07-01

    Full Text Available Alesia Sadosky,1 Vijaya Koduru,2 E Jay Bienen,3 Joseph C Cappelleri4 1Pfizer Inc, New York, NY, 2Eliassen Group, New London, CT, 3Outcomes Research Consultant, New York, NY, 4Pfizer Inc, Groton, CT, USA Background: painDETECT is a screening measure for neuropathic pain. The nine-item version consists of seven sensory items (burning, tingling/prickling, light touching, sudden pain attacks/electric shock-type pain, cold/heat, numbness, and slight pressure), a pain course pattern item, and a pain radiation item. The seven-item version consists only of the sensory items. Total scores of both versions discriminate average pain-severity levels (mild, moderate, and severe), but their ability to discriminate individual item severity has not been evaluated. Methods: Data were from a cross-sectional, observational study of six neuropathic pain conditions (N=624). Average pain severity was evaluated using the Brief Pain Inventory-Short Form, with severity levels defined using established cut points for distinguishing mild, moderate, and severe pain. The Wilcoxon rank sum test was followed by ridit analysis to represent the probability that a randomly selected subject from one average pain-severity level had a more favorable outcome on the specific painDETECT item relative to a randomly selected subject from a comparator severity level. Results: A probability >50% for a better outcome (less severe pain) was significantly observed for each pain symptom item. The lowest probability was 56.3% (on numbness for mild vs moderate pain) and the highest probability was 76.4% (on cold/heat for mild vs severe pain). The pain radiation item was significant (P<0.05) and consistent with pain symptoms, as well as with total scores for both painDETECT versions; only the pain course item did not differ. Conclusion: painDETECT differentiates severity such that the ability to discriminate average pain also distinguishes individual pain item severity in an interpretable manner.
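
    The ridit-style quantity reported here, the probability that a randomly selected subject from one severity level has a more favorable item score than one from another level, is the Mann-Whitney U statistic divided by the product of the group sizes. A short sketch with hypothetical item scores:

        import numpy as np

        def prob_better(group_a, group_b):
            """Probability that a random subject from group_a has a lower (more
            favorable) item score than one from group_b, counting ties as 1/2;
            equivalent to the Mann-Whitney U statistic divided by n_a * n_b."""
            a, b = np.asarray(group_a), np.asarray(group_b)
            diff = a[:, None] - b[None, :]
            return ((diff < 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

        # Hypothetical 0-5 item scores for two average-pain-severity groups.
        rng = np.random.default_rng(7)
        mild = rng.integers(0, 4, size=120)
        moderate = rng.integers(1, 6, size=150)
        print(f"P(better outcome | mild vs moderate) = {prob_better(mild, moderate):.3f}")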

  16. MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    Minimum entropy deconvolution is considered one of the methods for decomposing non-Gaussian linear processes. The concept of the peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of minimum entropy deconvolution is established. The problem of minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is investigated for the first time, and the corresponding theory is given. In addition, the relation between minimum entropy deconvolution and the parameter method is discussed.
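
    A minimal numerical sketch of minimum entropy deconvolution in the one-dimensional case (a Wiggins-style iteration on the stationary condition of the peakedness/varimax norm; filter length, iteration count and the synthetic data are assumptions):

        import numpy as np

        def med_filter(x, filt_len=32, n_iter=30):
            """Iterate the stationary condition of the varimax (peakedness) norm:
            solve R f = X^T y^3 with R the data autocorrelation matrix, renormalize,
            and repeat until the filter output is maximally spiky."""
            N = len(x)
            # Lagged data matrix X[n, l] = x[n - l], so that y = X @ f.
            X = np.column_stack([np.concatenate([np.zeros(l), x[:N - l]])
                                 for l in range(filt_len)])
            R = X.T @ X
            f = np.zeros(filt_len)
            f[filt_len // 2] = 1.0          # start from a delayed spike
            for _ in range(n_iter):
                y = X @ f
                b = X.T @ y**3              # cross term from d(varimax)/df = 0
                f = np.linalg.solve(R + 1e-8 * np.eye(filt_len), b)
                f /= np.linalg.norm(f)
            return f, X @ f

        # Synthetic non-Gaussian linear process: sparse spikes blurred by a wavelet.
        rng = np.random.default_rng(8)
        spikes = rng.standard_normal(2000) * (rng.random(2000) < 0.02)
        wavelet = np.exp(-0.5 * ((np.arange(25) - 12) / 4.0)**2)
        x = np.convolve(spikes, wavelet, mode="same")
        f, y = med_filter(x)
        peakedness = lambda s: np.mean(s**4) / np.mean(s**2)**2  # normalized 4th moment
        print(f"peakedness before {peakedness(x):.1f}, after {peakedness(y):.1f}")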

  17. Distribution functions for fluids in random media

    International Nuclear Information System (INIS)

    Madden, W.G.; Glandt, E.D.

    1988-01-01

    A random medium is considered, composed of identifiable interactive sites or obstacles equilibrated at a high temperature and then quenched rapidly to form a rigid structure, statistically homogeneous on all but molecular length scales. The equilibrium statistical mechanics of a fluid contained inside this quenched medium is discussed. Various particle-particle and particle-obstacle correlation functions, which differ from the corresponding functions for a fully equilibrated binary mixture, are defined through an averaging process over the static ensemble of obstacle configurations and applications of topological reduction techniques. The Ornstein-Zernike equations also differ from their equilibrium counterparts

  18. Reducing Noise by Repetition: Introduction to Signal Averaging

    Science.gov (United States)

    Hassan, Umer; Anwar, Muhammad Sabieh

    2010-01-01

    This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
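
    The core point, that averaging N repetitions reduces uncorrelated noise power by a factor of N, so each fourfold increase in sweeps buys about 6 dB of SNR, can be demonstrated in a few lines (synthetic evoked response, illustrative values):

        import numpy as np

        rng = np.random.default_rng(9)

        t = np.linspace(0, 1, 500)
        signal = np.exp(-((t - 0.3) / 0.05)**2)                    # synthetic evoked response
        sweeps = signal + rng.normal(0, 1.0, size=(1024, t.size))  # SNR << 1 per sweep

        for n in (1, 16, 256, 1024):
            avg = sweeps[:n].mean(axis=0)
            noise = avg - signal
            snr_db = 10 * np.log10(signal.var() / noise.var())
            print(f"N = {n:5d} sweeps: SNR = {snr_db:5.1f} dB")   # improves ~10*log10(N)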

  19. Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers

    Science.gov (United States)

    Hu, Jun; Xu, Hebing; Li, Chao

    2018-03-01

    Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method with many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers which are generated using an improved carbon fiber placement method. It was found that CFRP has good laser absorption due to multiple reflections of the light rays in the material’s microstructure. The randomly distributed carbon fibers make the absorptivity of the light rays change randomly within the laser spot, and the average absorptivity fluctuates noticeably as the laser moves. The experimental measurements agree well with the values predicted by the ray tracing model.

  20. An Artificial Bee Colony Algorithm for the Job Shop Scheduling Problem with Random Processing Times

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2011-09-01

    Full Text Available Due to the influence of unpredictable random events, the processing time of each operation should be treated as a random variable if we aim at a robust production schedule. However, compared with the extensive research on the deterministic model, the stochastic job shop scheduling problem (SJSSP) has not received sufficient attention. In this paper, we propose an artificial bee colony (ABC) algorithm for SJSSP with the objective of minimizing the maximum lateness (an index of service quality). First, we propose a performance estimate for preliminary screening of the candidate solutions. Then, the K-armed bandit model is utilized to reduce the computational burden in the exact evaluation (through Monte Carlo simulation) process. Finally, the computational results on test problems of different scales validate the effectiveness and efficiency of the proposed approach.

  1. Stochastic Optimal Prediction with Application to Averaged Euler Equations

    Energy Technology Data Exchange (ETDEWEB)

    Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chorin, Alexandre J. [Univ. of California, Berkeley, CA (United States); Crutchfield, William [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-24

    Optimal prediction (OP) methods compensate for a lack of resolution in the numerical solution of complex problems through the use of an invariant measure as a prior measure in the Bayesian sense. In first-order OP, unresolved information is approximated by its conditional expectation with respect to the invariant measure. In higher-order OP, unresolved information is approximated by a stochastic estimator, leading to a system of random or stochastic differential equations. We explain the ideas through a simple example, and then apply them to the solution of Averaged Euler equations in two space dimensions.

  2. Interspinous process device versus standard conventional surgical decompression for lumbar spinal stenosis: Randomized controlled trial

    NARCIS (Netherlands)

    W.A. Moojen (Wouter); M.P. Arts (Mark); W.C.H. Jacobs (Wilco); E.W. van Zwet (Erik); M.E. van den Akker-van Marle (Elske); B.W. Koes (Bart); C.L.A.M. Vleggeert-Lankamp (Carmen); W.C. Peul (Wilco)

    2013-01-01

    Abstract Objective To assess whether interspinous process device implantation is more effective in the short term than conventional surgical decompression for patients with intermittent neurogenic claudication due to lumbar spinal stenosis. Design Randomized controlled trial.

  3. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    Science.gov (United States)

    Huang, Lei

    2015-01-01

    To solve the problem in which conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state variables. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
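
    The idea of treating model coefficients as the state vector of a Kalman filter can be sketched for the simpler AR-only case (the paper's method covers full ARMA models with robust, adaptive noise estimates; here the observation-noise variance is assumed known):

        import numpy as np

        rng = np.random.default_rng(10)

        # Synthetic "gyro noise": an AR(2) process with known coefficients.
        a_true = np.array([1.2, -0.5])
        y = np.zeros(3000)
        for n in range(2, y.size):
            y[n] = a_true @ y[n - 2:n][::-1] + 0.1 * rng.standard_normal()

        theta = np.zeros(2)          # state estimate: the AR coefficients
        P = np.eye(2)                # state covariance
        r = 0.1**2                   # observation-noise variance (assumed known here)
        for n in range(2, y.size):
            h = y[n - 2:n][::-1]     # regressor of past outputs
            k = P @ h / (h @ P @ h + r)            # Kalman gain
            theta = theta + k * (y[n] - h @ theta) # measurement update
            P = P - np.outer(k, h) @ P

        print(f"estimated AR coefficients {theta.round(3)} (true {a_true})")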

  4. Probability on graphs random processes on graphs and lattices

    CERN Document Server

    Grimmett, Geoffrey

    2018-01-01

    This introduction to some of the principal models in the theory of disordered systems leads the reader through the basics, to the very edge of contemporary research, with the minimum of technical fuss. Topics covered include random walk, percolation, self-avoiding walk, interacting particle systems, uniform spanning tree, random graphs, as well as the Ising, Potts, and random-cluster models for ferromagnetism, and the Lorentz model for motion in a random medium. This new edition features accounts of major recent progress, including the exact value of the connective constant of the hexagonal lattice, and the critical point of the random-cluster model on the square lattice. The choice of topics is strongly motivated by modern applications, and focuses on areas that merit further research. Accessible to a wide audience of mathematicians and physicists, this book can be used as a graduate course text. Each chapter ends with a range of exercises.

  5. Approximations for transport parameters and self-averaging properties for point-like injections in heterogeneous media

    International Nuclear Information System (INIS)

    Eberhard, Jens

    2004-01-01

    We focus on transport parameters in heterogeneous media with a flow modelled by an ensemble of periodic and Gaussian random fields. The parameters are determined by ensemble averages. We study to what extent these averages represent the behaviour in a single realization. We calculate the centre-of-mass velocity and the dispersion coefficient using approximations based on a perturbative expansion for the transport equation, and on the iterative solution of the Langevin equation. Compared with simulations, the perturbation theory reproduces the numerical results only poorly, whereas the iterative solution yields good results. Using these approximations, we investigate the self-averaging properties. The ensemble average of the velocity characterizes the behaviour of a realization for large times in both ensembles. The dispersion coefficient is not self-averaging in the ensemble of periodic fields. For the Gaussian ensemble the asymptotic dispersion coefficient is self-averaging. For finite times, however, the fluctuations are so large that the average does not represent the behaviour in a single realization

  6. Identification of System Parameters by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Kirkegaard, Poul Henning; Rytter, Anders

    1991-01-01

    The aim of this paper is to investigate and illustrate the possibilities of using correlation functions estimated by the Random Decrement Technique as a basis for parameter identification. A two-stage system identification approach is used: first, the correlation functions are estimated by the Random Decrement Technique, and then the system parameters are identified from the correlation function estimates. Three different techniques are used in the parameter identification process: a simple non-parametric method, estimation of an Auto Regressive (AR) model by solving an overdetermined set of Yule-Walker equations and, finally, least-square fitting of the theoretical correlation function. The results are compared to the results of fitting an Auto Regressive Moving Average (ARMA) model directly to the system output from a single-degree-of-freedom system loaded by white noise.
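
    The first stage, the Random Decrement estimate itself, reduces to averaging response segments that start at trigger events such as level upcrossings. A minimal sketch on a simulated single-degree-of-freedom system (integration scheme and parameter values are illustrative):

        import numpy as np

        def random_decrement(x, trigger, seg_len):
            """Average segments starting at upcrossings of the trigger level; the
            averaged signature is proportional to a (scaled) correlation function."""
            idx = np.flatnonzero((x[:-1] < trigger) & (x[1:] >= trigger))
            idx = idx[idx < len(x) - seg_len]
            return np.mean([x[i:i + seg_len] for i in idx], axis=0), len(idx)

        # Simulated SDOF system driven by white noise (semi-implicit Euler).
        rng = np.random.default_rng(11)
        dt, f0, zeta = 0.01, 1.0, 0.02
        w = 2 * np.pi * f0
        x = np.zeros(200_000)
        v = 0.0
        for n in range(1, x.size):
            a = -2 * zeta * w * v - w**2 * x[n - 1] + rng.standard_normal() / dt**0.5
            v += a * dt
            x[n] = x[n - 1] + v * dt

        sig, n_seg = random_decrement(x, trigger=x.std(), seg_len=1000)
        print(f"averaged {n_seg} triggered segments of length {len(sig)}")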

  7. Endogenous Information, Risk Characterization, and the Predictability of Average Stock Returns

    Directory of Open Access Journals (Sweden)

    Pradosh Simlai

    2012-09-01

    Full Text Available In this paper we provide a new type of risk characterization of the predictability of two widely known abnormal patterns in average stock returns: momentum and reversal. The purpose is to illustrate the relative importance of common risk factors and endogenous information. Our results demonstrate that in the presence of zero-investment factors, spreads in average momentum and reversal returns correspond to spreads in the slopes of the endogenous information. The empirical findings support the view that various classes of firms react differently to volatility risk, and that endogenous information harbors important sources of potential risk loadings. Taken together, our results suggest that returns are influenced by random endogenous information flow, which is asymmetric in nature and can be used as a performance attribution factor. If one fails to incorporate the asymmetric endogenous information hidden in historical behavior, any attempt to explore average stock return predictability will be subject to an unquantified specification bias.

  8. Application of random-point processes to the detection of radiation sources

    International Nuclear Information System (INIS)

    Woods, J.W.

    1978-01-01

    In this report the mathematical theory of random-point processes is reviewed and it is shown how the theory can be used to obtain optimal solutions to the problem of detecting radiation sources. As noted, the theory also applies to image processing in low-light-level or low-count-rate situations. Paralleling Snyder's work, the theory is extended to the multichannel case of a continuous, two-dimensional (2-D), energy-time space. This extension essentially involves showing that the data are doubly stochastic Poisson (DSP) point processes in energy as well as time. Further, a new 2-D recursive formulation is presented for the radiation-detection problem, with large computational savings over nonrecursive techniques when the number of channels is large (greater than or equal to 30). Finally, some adaptive strategies for on-line 'learning' of unknown, time-varying signal and background-intensity parameters and statistics are presented and discussed. These adaptive procedures apply when a complete statistical description is not available a priori

  9. A prospective randomized trial of content expertise versus process expertise in small group teaching.

    Science.gov (United States)

    Peets, Adam D; Cooke, Lara; Wright, Bruce; Coderre, Sylvain; McLaughlin, Kevin

    2010-10-14

    Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise to those with content expertise on medical students' learning outcomes in a structured small group environment. One hundred and fifty-one first year medical students were randomized to 11 groups for the small group component of the Cardiovascular-Respiratory course at the University of Calgary. Each group was then block randomized to one of three streams for the entire course: tutoring exclusively by physicians with content expertise (n = 5), tutoring exclusively by physicians with process expertise (n = 3), and tutoring by content experts for 11 sessions and process experts for 10 sessions (n = 3). After each of the 21 small group sessions, students evaluated their preceptors' teaching with a standardized instrument. Students' knowledge acquisition was assessed by an end-of-course multiple choice (EOC-MCQ) examination. Students rated the process experts significantly higher on each of the instrument's 15 items, including the overall rating. Students' mean score (±SD) on the EOC-MCQ exam was 76.1% (8.1) for groups taught by content experts, 78.2% (7.8) for the combination group and 79.5% (9.2) for process expert groups (p = 0.11). By linear regression, student performance was higher if they had been taught by process experts (regression coefficient 2.7 [0.1, 5.4], p < 0.05). Physicians with process expertise can thus teach first year medical students within a structured small group environment; preceptors with process expertise result in at least equivalent, if not superior, student outcomes in this setting.

  10. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications

  11. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  12. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.

  13. Convergence to equilibrium under a random Hamiltonian

    Science.gov (United States)

    Brandão, Fernando G. S. L.; Ćwikliński, Piotr; Horodecki, Michał; Horodecki, Paweł; Korbicz, Jarosław K.; Mozrzymas, Marek

    2012-09-01

    We analyze equilibration times of subsystems of a larger system under a random total Hamiltonian, in which the basis of the Hamiltonian is drawn from the Haar measure. We obtain that the time of equilibration is of the order of the inverse of the arithmetic average of the Bohr frequencies. To compute the average over a random basis, we compute the inverse of a matrix of overlaps of operators which permute four systems. We first obtain results on such a matrix for a representation of an arbitrary finite group and then apply it to the particular representation of the permutation group under consideration.

  14. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    Full Text Available When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
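
    A minimal sketch of the scheme described above, blocks of randomly chosen size, each internally balanced and shuffled, might look as follows (arm labels and block sizes are illustrative):

        import random

        def blocked_randomization(n_participants, block_sizes=(4, 6, 8),
                                  arms=("T", "C"), seed=42):
            """Allocation list for two arms: blocks of randomly chosen size, each
            block containing an equal number of each arm in random order, so the
            allocation stays balanced while block boundaries stay unpredictable."""
            rng = random.Random(seed)
            allocation = []
            while len(allocation) < n_participants:
                size = rng.choice(block_sizes)      # random block size
                block = list(arms) * (size // len(arms))
                rng.shuffle(block)                  # random order within the block
                allocation.extend(block)
            return allocation[:n_participants]

        print("".join(blocked_randomization(24)))

    Block sizes are chosen as multiples of the number of arms so every completed block is exactly balanced.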

  15. Quantitative characterisation of an engineering write-up using random walk analysis

    Directory of Open Access Journals (Sweden)

    Sunday A. Oke

    2008-02-01

    Full Text Available This contribution reports on the investigation of correlation properties in an English scientific text (an engineering write-up) by means of a random walk. Though the idea of using a random walk to characterise correlations is not new (it was used e.g. in genome analysis and in the analysis of texts), a random walk approach to the analysis of an English scientific text is still far from being exploited in its full strength, as demonstrated in this paper. A method of high-dimensional embedding is proposed. Case examples were drawn arbitrarily from four engineering write-ups (Ph.D. synopses) of three engineering departments in the Faculty of Technology, University of Ibadan, Nigeria. Thirteen additional analyses of non-engineering English texts were made and the results compared to the engineering English texts. Thus, a total of seventeen write-ups of eight Faculties and sixteen Departments of the University of Ibadan were considered. The characterising exponents, which relate the average distance of random walkers away from a known starting position to the elapsed time steps, were estimated for the seventeen cases according to the power law and in three different dimensional spaces. The average characteristic exponent obtained for the seventeen cases over the three different dimensional spaces studied was 1.42 to 2 decimals, with a minimum and a maximum coefficient of determination (R²) of 0.9495 and 0.9994, respectively. This is 284% of the characterising exponent value (0.5) supported by the literature for random walkers based on the pseudo-random number generator. The average characteristic exponent obtained for the four engineering-based cases over the three different dimensional spaces studied was 1.41 to 2 decimals (within 0.7% of 1.42), with a minimum and a maximum coefficient of determination (R²) of 0.9507 and 0.9974, respectively. This is 282% of the characterising exponent value (0.5) supported by the literature for random walkers based on the pseudo-random number generator.

  16. A high speed digital signal averager for pulsed NMR

    International Nuclear Information System (INIS)

    Srinivasan, R.; Ramakrishna, J.; Rajagopalan, S.R.

    1978-01-01

    A 256-channel digital signal averager suitable for pulsed nuclear magnetic resonance spectroscopy is described. It implements a 'stable averaging' algorithm and hence provides a calibrated display of the average signal on a CRT at all times during the averaging process. It has a minimum sampling interval of 2.5 μs and a memory capacity of 256 x 12-bit words. The number of sweeps is selectable through a front-panel control in binary steps from 2^3 to 2^12. The enhanced signal can be displayed either on a CRT or by a 3.5-digit LED display. The maximum S/N improvement that can be achieved with this instrument is 36 dB. (auth.)
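
    The 'stable averaging' algorithm can be expressed as a running mean that is calibrated after every sweep, A_k = A_{k-1} + (x_k - A_{k-1})/k, which is what lets the display be read at any time during accumulation. A small Python sketch (synthetic data; a real instrument would do this per channel in fixed-point arithmetic):

        import random

        def stable_average(samples):
            """Running mean that is the calibrated average of all sweeps so far:
            A_k = A_{k-1} + (x_k - A_{k-1}) / k."""
            avg = 0.0
            for k, x in enumerate(samples, start=1):
                avg += (x - avg) / k
                yield avg

        random.seed(0)
        sweeps = (0.5 + random.gauss(0.0, 0.2) for _ in range(2**12))
        for k, a in enumerate(stable_average(sweeps), start=1):
            if k in (2**0, 2**4, 2**8, 2**12):
                print(f"after {k:4d} sweeps: running average = {a:.4f}")

    With the full 2^12 = 4096 sweeps, uncorrelated noise amplitude drops by a factor of 64, i.e. about 36 dB, consistent with the instrument's quoted maximum S/N improvement.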

  17. Performance Analysis of 5G Transmission over Fading Channels with Random IG Distributed LOS Components

    Directory of Open Access Journals (Sweden)

    Dejan Jaksic

    2017-01-01

    Full Text Available Mathematical modelling of the behavior of radio propagation at mmWave bands is crucial to the development of transmission and reception algorithms of new 5G systems. In this study we model 5G propagation in nondeterministic line-of-sight (LOS) conditions, where the random nature of the LOS component ratio is modelled as an Inverse Gamma (IG) distributed process. Closed-form expressions are presented for the probability density function (PDF) and cumulative distribution function (CDF) of such a random process. Further, closed-form expressions are provided for important performance measures such as the level crossing rate (LCR) and average fade duration (AFD). Capitalizing on the proposed expressions, LCR and AFD are discussed as functions of the transmission parameters.

  18. Fractional averaging of repetitive waveforms induced by self-imaging effects

    Science.gov (United States)

    Romero Cortés, Luis; Maram, Reza; Azaña, José

    2015-10-01

    We report the theoretical prediction and experimental observation of averaging of stochastic events with an equivalent result of calculating the arithmetic mean (or sum) of a rational number of realizations of the process under test, not necessarily limited to an integer record of realizations, as discrete statistical theory dictates. This concept is enabled by a passive amplification process, induced by self-imaging (Talbot) effects. In the specific implementation reported here, a combined spectral-temporal Talbot operation is shown to achieve undistorted, lossless repetition-rate division of a periodic train of noisy waveforms by a rational factor, leading to local amplification, and the associated averaging process, by the fractional rate-division factor.

  19. Eliciting and Developing Teachers' Conceptions of Random Processes in a Probability and Statistics Course

    Science.gov (United States)

    Smith, Toni M.; Hjalmarson, Margret A.

    2013-01-01

    The purpose of this study is to examine prospective mathematics specialists' engagement in an instructional sequence designed to elicit and develop their understandings of random processes. The study was conducted with two different sections of a probability and statistics course for K-8 teachers. Thirty-two teachers participated. Video analyses…

  20. Random number generation as an index of controlled processing.

    Science.gov (United States)

    Jahanshahi, Marjan; Saleem, T; Ho, Aileen K; Dirnberger, Georg; Fuller, R

    2006-07-01

    Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses this issue by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of the paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effects of participants' exposure were manipulated in Experiment 2 by providing increasing amounts of practice on the task. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process sensitive to additional demands on central resources (Experiment 1) and is unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use. ((c) 2006 APA, all rights reserved).

  1. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs is the graph filter, a direct analogue of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design philosophy...
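
    A first-order member of such an ARMA family can be sketched directly: iterating y <- psi*M*y + phi*x on a shifted graph operator M converges to y = phi*(I - psi*M)^{-1}*x, a rational graph-frequency response. The snippet below checks the iteration against the closed form on a small cycle graph (the operator shift and coefficients are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(12)

        # Small cycle graph: Laplacian L, shifted so its spectrum lies in [-1, 1].
        n = 20
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
        Lap = np.diag(A.sum(1)) - A
        M = np.eye(n) - Lap / 2          # eigenvalues of the cycle Laplacian are in [0, 4]

        # ARMA-1 graph filter: the fixed point of y <- psi*M*y + phi*x realizes a
        # rational frequency response usable e.g. for denoising or interpolation.
        psi, phi = 0.6, 0.4              # |psi| < 1 guarantees convergence here
        x = rng.standard_normal(n)       # graph signal
        y = np.zeros(n)
        for _ in range(60):
            y = psi * (M @ y) + phi * x

        y_exact = phi * np.linalg.solve(np.eye(n) - psi * M, x)
        print(f"max deviation from closed form: {np.abs(y - y_exact).max():.2e}")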

  2. Mixed random walks with a trap in scale-free networks including nearest-neighbor and next-nearest-neighbor jumps

    Science.gov (United States)

    Zhang, Zhongzhi; Dong, Yuze; Sheng, Yibin

    2015-10-01

    Random walks including non-nearest-neighbor jumps appear in many real situations, such as the diffusion of adatoms, and have found numerous applications including the PageRank search algorithm; however, theoretical results for this dynamical process remain scarce. In this paper, we present a study of mixed random walks in a family of fractal scale-free networks, where both nearest-neighbor and next-nearest-neighbor jumps are included. We focus on the trapping problem in the network family, which is a particular case of random walks with a perfect trap fixed at the central high-degree node. We derive analytical expressions for the average trapping time (ATT), a quantitative indicator measuring the efficiency of the trapping process, by using two different methods, the results of which are consistent with each other. Furthermore, we analytically determine all the eigenvalues and their multiplicities for the fundamental matrix characterizing the dynamical process. Our results show that although next-nearest-neighbor jumps have no effect on the leading scaling of the trapping efficiency, they can strongly affect the prefactor of the ATT, providing insight into better understanding of random-walk processes in complex systems.
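
    The ATT computation has a compact linear-algebra form: with the walk restricted to the non-trap nodes collected in a substochastic matrix Q, the vector of mean trapping times solves (I - Q)t = 1. The sketch below applies this to a small connected test graph with one plausible reading of the nearest/next-nearest mixing rule (the graph family and mixing details of the paper are not reproduced):

        import numpy as np

        rng = np.random.default_rng(13)

        # Ring plus random chords: a simple connected test graph.
        n = 60
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
        for i, j in rng.integers(0, n, size=(40, 2)):
            if i != j:
                A[i, j] = A[j, i] = 1

        # Mixed walk: nearest-neighbor step with probability q, otherwise a uniform
        # next-nearest-neighbor jump (an assumed form of the mixing rule).
        q = 0.7
        A2 = ((A @ A > 0) & (A == 0)).astype(float)
        np.fill_diagonal(A2, 0.0)
        P = q * A / A.sum(1, keepdims=True)
        nn2 = A2.sum(1, keepdims=True)
        P += (1 - q) * np.divide(A2, nn2, out=np.zeros_like(A2), where=nn2 > 0)
        P /= P.sum(1, keepdims=True)

        # Average trapping time: solve (I - Q) t = 1 over the non-trap nodes.
        trap = 0
        keep = np.arange(n) != trap
        Q = P[np.ix_(keep, keep)]
        t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        print(f"average trapping time: {t.mean():.1f} steps")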

  3. The groupies of random multipartite graphs

    OpenAIRE

    Portmann, Marius; Wang, Hongyun

    2012-01-01

    If a vertex $v$ in a graph $G$ has degree larger than the average of the degrees of its neighbors, we call it a groupie in $G$. In the current work, we study the behavior of groupie in random multipartite graphs with the link probability between sets of nodes fixed. Our results extend the previous ones on random (bipartite) graphs.

  4. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general purpose function fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
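
    The flavor of such a sandwich-type error estimate, propagating the full covariance of the averaged data through a weighted least squares fit instead of assuming independent errors, can be sketched as follows (an illustration of the idea, not the authors' WLS-ICE code; the example fits MSD(t) = 2Dt to ensemble-averaged Brownian data):

        import numpy as np

        rng = np.random.default_rng(14)

        # Ensemble of 1D Brownian trajectories; the averaged squared displacements
        # at different times are strongly correlated by construction.
        n_traj, n_steps, D_true, dt = 200, 100, 1.0, 1.0
        steps = rng.normal(0, np.sqrt(2 * D_true * dt), size=(n_traj, n_steps))
        pos = np.cumsum(steps, axis=1)
        sq = pos**2                                  # squared displacement per trajectory
        msd = sq.mean(axis=0)
        C = np.cov(sq, rowvar=False) / n_traj        # full covariance of the averaged MSD

        t = dt * np.arange(1, n_steps + 1)
        X = t[:, None]                               # one-parameter linear model, slope = 2D
        W = np.diag(1.0 / np.diag(C))                # weights from the variances only

        A = np.linalg.inv(X.T @ W @ X)
        beta = A @ X.T @ W @ msd                     # WLS estimate of the slope
        # Sandwich covariance: rigorously propagates C through the WLS estimator.
        cov_beta = A @ X.T @ W @ C @ W @ X @ A
        D_hat, D_err = beta[0] / 2, np.sqrt(cov_beta[0, 0]) / 2
        print(f"D = {D_hat:.3f} ± {D_err:.3f} (true {D_true})")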

  5. Linearization effect in multifractal analysis: Insights from the Random Energy Model

    Science.gov (United States)

    Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice

    2011-08-01

    The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q, for q>q∗. Tailoring the analysis conducted for the Random Energy Model to that of compound Poisson motion, we provide explanatory and quantitative predictions for the values of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, support and extend these analyses.

  6. THE EFFECT MODEL INQUIRY TRAINING MEDIA AND LOGICAL THINKING ABILITY TO STUDENT’S SCIENCE PROCESS SKILL

    Directory of Open Access Journals (Sweden)

    Dahrim Pohan

    2017-06-01

    Full Text Available The aim of this research is to analyze whether: students' science process skills under the inquiry training learning model are better than under conventional learning; students with logical thinking ability above average attain better science process skills than those below average; and there is an interaction between the inquiry training model and logical thinking ability in increasing students' science process skills. The experiment was conducted in SMP 6 Medan, with classes VII-K and VII-J chosen as the sample through cluster random sampling. Science process skills were measured with an essay test and logical thinking ability with a multiple choice test. The data were analyzed using two-way ANOVA. Results show that: students' science process skills under the inquiry training learning model are better than under conventional learning; students with logical thinking ability above average attain better science process skills than those below average; and there is an interaction between the inquiry training learning model and logical thinking ability in increasing students' science process skills.

  7. Identification of System Parameters by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Kirkegaard, Poul Henning; Rytter, Anders

    The aim of this paper is to investigate and illustrate the possibilities of using correlation functions estimated by the Random Decrement Technique as a basis for parameter identification. A two-stage system identification method is used: first the correlation functions are estimated by the Random Decrement technique and then the system parameters are identified from the correlation function estimates. Three different techniques are used in the parameter identification process: a simple non-parametric method, estimation of an Auto Regressive (AR) model by solving an overdetermined set of Yule-Walker equations and finally least square fitting of the theoretical correlation function. The results are compared to the results of fitting an Auto Regressive Moving Average (ARMA) model directly to the system output. All investigations are performed on the simulated output from a single-degree-of-freedom system

  8. A Note on Functional Averages over Gaussian Ensembles

    Directory of Open Access Journals (Sweden)

    Gabriel H. Tucci

    2013-01-01

    Full Text Available We find a new formula for matrix averages over the Gaussian ensemble. Let H be an n×n Gaussian random matrix with complex, independent, and identically distributed entries of zero mean and unit variance. Given an n×n positive definite matrix A and a continuous function f:ℝ⁺→ℝ such that ∫₀^∞ e^{-αt}|f(t)|² dt < ∞ for some α > 0, we find a new formula for the expectation E[Tr(f(HAH*))]. Taking f(x) = log(1+x) gives another formula for the capacity of the MIMO communication channel, and taking f(x) = (1+x)^{-1} gives the MMSE achieved by a linear receiver.
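
    The expectation in question is easy to check by Monte Carlo for small n; the snippet below estimates E[Tr log(I + HAH*)], the MIMO-capacity case f(x) = log(1+x), for an assumed diagonal A (illustrative values):

        import numpy as np

        rng = np.random.default_rng(15)

        n, n_samples = 4, 20000
        A = np.diag([0.5, 1.0, 1.5, 2.0])   # fixed positive definite matrix

        # Complex Gaussian entries of zero mean and unit variance:
        # (real + i*imag) with each part N(0, 1/2).
        acc = 0.0
        for _ in range(n_samples):
            H = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
            eig = np.linalg.eigvalsh(H @ A @ H.conj().T)
            acc += np.log1p(eig).sum()      # Tr log(I + H A H*) via the eigenvalues
        print(f"E[Tr log(I + H A H*)] ≈ {acc / n_samples:.3f}")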

  9. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By the use of multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.

  10. Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function

    Directory of Open Access Journals (Sweden)

    Christofer Toumazou

    2013-07-01

    Full Text Available A novel noise filtering algorithm based on the averaging Intrinsic Mode Function (aIMF), which is derived from Empirical Mode Decomposition (EMD), is proposed to remove white Gaussian noise from foreign currency exchange rates, which are nonlinear nonstationary time series signals. Noise patterns with different amplitudes and frequencies were randomly mixed into the five exchange rates. A number of filters, namely the Extended Kalman Filter (EKF), Wavelet Transform (WT), Particle Filter (PF) and the averaging Intrinsic Mode Function (aIMF) algorithm, were used to compare filtering and smoothing performance. The aIMF algorithm demonstrated the highest noise reduction among these filters.

  11. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    Science.gov (United States)

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.

  12. Algorithmic randomness and physical entropy

    International Nuclear Information System (INIS)

    Zurek, W.H.

    1989-01-01

    Algorithmic randomness provides a rigorous, entropylike measure of disorder of an individual, microscopic, definite state of a physical system. It is defined by the size (in binary digits) of the shortest message specifying the microstate uniquely up to the assumed resolution. Equivalently, algorithmic randomness can be expressed as the number of bits in the smallest program for a universal computer that can reproduce the state in question (for instance, by plotting it with the assumed accuracy). In contrast to the traditional definitions of entropy, algorithmic randomness can be used to measure disorder without any recourse to probabilities. Algorithmic randomness is typically very difficult to calculate exactly but relatively easy to estimate. In large systems, probabilistic ensemble definitions of entropy (e.g., coarse-grained entropy of Gibbs and Boltzmann's entropy H=lnW, as well as Shannon's information-theoretic entropy) provide accurate estimates of the algorithmic entropy of an individual system or its average value for an ensemble. One is thus able to rederive much of thermodynamics and statistical mechanics in a setting very different from the usual. Physical entropy, I suggest, is a sum of (i) the missing information measured by Shannon's formula and (ii) of the algorithmic information content, algorithmic randomness, present in the available data about the system. This definition of entropy is essential in describing the operation of thermodynamic engines from the viewpoint of information gathering and using systems. These Maxwell demon-type entities are capable of acquiring and processing information and therefore can 'decide' on the basis of the results of their measurements and computations the best strategy for extracting energy from their surroundings. From their internal point of view the outcome of each measurement is definite

  13. Decentralized formation of random regular graphs for robust multi-agent networks

    KAUST Repository

    Yazicioglu, A. Yasin

    2014-12-15

    Multi-agent networks are often modeled via interaction graphs, where the nodes represent the agents and the edges denote direct interactions between the corresponding agents. Interaction graphs have significant impact on the robustness of networked systems. One family of robust graphs is the random regular graphs. In this paper, we present a locally applicable reconfiguration scheme to build random regular graphs through self-organization. For any connected initial graph, the proposed scheme maintains connectivity and the average degree while minimizing the degree differences and randomizing the links. As such, if the average degree of the initial graph is an integer, then connected regular graphs are realized uniformly at random as time goes to infinity.
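
    In the same spirit, degree-preserving link randomization via local double edge swaps is a standard building block; the sketch below uses networkx's connectivity-preserving variant on an arbitrary connected initial graph (this illustrates the randomization step only, not the paper's degree-equalizing scheme):

        import networkx as nx

        # Connected initial graph; repeated double edge swaps
        # ((u,v),(x,y) -> (u,x),(v,y)) randomize the links while preserving every
        # node's degree, and the "connected" variant also preserves connectivity.
        G = nx.connected_watts_strogatz_graph(50, 6, 0.3, seed=1)
        degrees_before = sorted(d for _, d in G.degree())

        nx.connected_double_edge_swap(G, nswap=500, seed=2)

        assert sorted(d for _, d in G.degree()) == degrees_before  # degrees preserved
        print("still connected:", nx.is_connected(G))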

  14. Human norovirus inactivation in oysters by high hydrostatic pressure processing: A randomized double-blinded study

    Science.gov (United States)

    This randomized, double-blinded, clinical trial assessed the effect of high hydrostatic pressure processing (HPP) on genogroup I.1 human norovirus (HuNoV) inactivation in virus-seeded oysters when ingested by subjects. The safety and efficacy of HPP treatments were assessed in three study phases wi...

  15. Quantum random flip-flop and its applications in random frequency synthesis and true random number generation

    Energy Technology Data Exchange (ETDEWEB)

    Stipčević, Mario, E-mail: mario.stipcevic@irb.hr [Photonics and Quantum Optics Research Unit, Center of Excellence for Advanced Materials and Sensing Devices, Ruđer Bošković Institute, Bijenička 54, 10000 Zagreb (Croatia)

    2016-03-15

    In this work, a new type of elementary logic circuit, named the random flip-flop (RFF), is proposed, experimentally realized, and studied. Unlike conventional Boolean logic circuits, whose action is deterministic and highly reproducible, the action of an RFF is intentionally made maximally unpredictable and, in the proposed realization, derived from a fundamentally random process of emission and detection of light quanta. We demonstrate novel applications of the RFF in randomness-preserving frequency division, random frequency synthesis, and random number generation. Possible uses of these applications in information and communication technology, cryptographic hardware, and testing equipment are discussed.
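
    As a rough behavioural model of the logic (my assumption of the mechanism, not the photonic hardware): an RFF toggles on each clock pulse with probability 1/2, so a chain of k stages passes on average one pulse in 2^k while the output timing is random.

        import random

        class RandomFlipFlop:
            """Toggle on a clock pulse with probability 1/2; report 1 when toggled."""
            def __init__(self, rng):
                self.state, self.rng = 0, rng

            def clock(self):
                if self.rng.random() < 0.5:   # in hardware this comes from photon detection
                    self.state ^= 1
                    return 1
                return 0

        rng = random.Random(42)
        chain = [RandomFlipFlop(rng) for _ in range(3)]   # random divide-by-8 on average
        count = 0
        for _ in range(80000):                            # 80000 input pulses
            pulse = 1
            for ff in chain:
                if pulse:
                    pulse = ff.clock()
            count += pulse
        print(count)   # close to 80000 / 8 = 10000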

  16. A Correlated Random Effects Model for Non-homogeneous Markov Processes with Nonignorable Missingness.

    Science.gov (United States)

    Chen, Baojiang; Zhou, Xiao-Hua

    2013-05-01

    Life history data arising in clusters with prespecified assessment time points for patients often feature incomplete data, since patients may choose to visit the clinic based on their needs. Markov process models provide a useful tool for describing disease progression in life history data. The literature mainly focuses on time-homogeneous processes. In this paper we develop methods to deal with non-homogeneous Markov processes with incomplete clustered life history data. A correlated random effects model is developed to deal with the nonignorable missingness, and a time transformation is employed to address the non-homogeneity in the transition model. Maximum likelihood estimation based on the Monte Carlo EM algorithm is advocated for parameter estimation. Simulation studies demonstrate that the proposed method works well in many situations. We also apply this method to an Alzheimer's disease study.

  17. Random function representation of stationary stochastic vector processes for probability density evolution analysis of wind-induced structures

    Science.gov (United States)

    Liu, Zhangjun; Liu, Zenghui

    2018-06-01

    This paper develops a hybrid approach of spectral representation and random functions for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula are effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. On this basis, satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured with just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach enables dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies of the probability density evolution analysis of the wind-induced structure are conducted to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
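
    For orientation, a sketch of a plain spectral-representation (OSR) simulator of the kind the paper starts from, with one independent random phase per frequency band, i.e. the high-dimensional set of random variables the authors reduce to two; the target spectrum here is an arbitrary stand-in.

        import numpy as np

        def osr_sample(spectrum, w_max, n_freq, t, rng):
            # One-sided power spectrum discretized into n_freq bands of width dw.
            dw = w_max / n_freq
            w = (np.arange(n_freq) + 0.5) * dw
            amp = np.sqrt(2.0 * spectrum(w) * dw)            # amplitude per band
            phi = rng.uniform(0.0, 2.0 * np.pi, n_freq)      # independent random phases
            return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 100.0, 2001)
        x = osr_sample(lambda w: 1.0 / (1.0 + w ** 4), 5.0, 512, t, rng)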

  18. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test whether a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure with an effective medium in which distance constraints are globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG estimates the ensemble average PG results well. The VPG runs about 20% faster than a single PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
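
    The MCC baseline mentioned above is plain arithmetic; under one common body-bar convention (six degrees of freedom per rigid body, one removed per independent bar, six trivial global motions) it reads:

        def maxwell_internal_dof(n_bodies, n_bars):
            # Mean-field lower bound on internal DOF of a body-bar network.
            return max(0, 6 * n_bodies - 6 - n_bars)

        print(maxwell_internal_dof(10, 40))   # 14: globally under-constrained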

  19. Oppugning the assumptions of spatial averaging of segment and joint orientations.

    Science.gov (United States)

    Pierrynowski, Michael Raymond; Ball, Kevin Arthur

    2009-02-09

    Movement scientists frequently calculate "arithmetic averages" when examining body segment or joint orientations. Such calculations appear routinely, yet are fundamentally flawed. Three-dimensional orientation data are computed as matrices, yet three ordered Euler/Cardan/Bryant angle parameters are frequently used for interpretation. These parameters are not geometrically independent; thus, the conventional process of averaging each parameter is incorrect. The process of arithmetic averaging also assumes that the distances between data are linear (Euclidean); however, for orientation data these distances are geodesically curved (Riemannian). Therefore we question (oppugn) whether the conventional averaging approach is an appropriate statistic. Fortunately, exact methods of averaging orientation data have been developed which both circumvent the parameterization issue and explicitly acknowledge the Euclidean or Riemannian distance measures. The details of these matrix-based averaging methods are presented and their theoretical advantages discussed. The Euclidean and Riemannian approaches offer appealing advantages over the conventional technique. With respect to practical biomechanical relevance, examinations of simulated data suggest that for sets of orientation data possessing low dispersion, an isotropic distribution, and second and third angle parameters below 30 degrees, discrepancies with the conventional approach are less than 1.1 degrees. However, beyond these limits, arithmetic averaging can have substantive non-linear inaccuracies in all three parameterized angles. The biomechanics community is encouraged to recognize that limitations exist with the use of the conventional method of averaging orientations. Investigations requiring more robust spatial averaging over a broader range of orientations may benefit from the use of matrix-based Euclidean or Riemannian calculations.
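
    A short sketch of the Euclidean (chordal) matrix-based average referred to above, as opposed to per-angle arithmetic averaging: take the arithmetic mean of the rotation matrices and project it back onto SO(3) with an SVD. (The Riemannian mean would instead iterate with matrix logarithms in the tangent space.)

        import numpy as np

        def chordal_mean(rotations):
            """Nearest rotation (Frobenius norm) to the arithmetic mean of 3x3 rotations."""
            M = np.mean(rotations, axis=0)        # generally not itself a rotation
            U, _, Vt = np.linalg.svd(M)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # enforce det = +1
            return U @ D @ Vt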

  20. A comparison of random walks in dependent random environments

    NARCIS (Netherlands)

    Scheinhardt, Willem R.W.; Kroese, Dirk

    2015-01-01

    Although the theoretical behavior of one-dimensional random walks in random environments is well understood, the actual evaluation of various characteristics of such processes has received relatively little attention. This paper develops new methodology for the exact computation of the drift in such

  1. Solitons in a random force field

    International Nuclear Information System (INIS)

    Bass, F.G.; Konotop, V.V.; Sinitsyn, Y.A.

    1985-01-01

    We study the dynamics of a soliton of the sine-Gordon equation in a random force field in the adiabatic approximation. We obtain an Einstein-Fokker equation and find the distribution function for the soliton parameters, which we use to evaluate its statistical characteristics. We derive an equation for the averaged functions of the soliton parameters. We determine the limits of applicability of the approximation of a random field that is delta-correlated in time.

  2. Three-dimensional direct laser written graphitic electrical contacts to randomly distributed components

    Science.gov (United States)

    Dorin, Bryce; Parkinson, Patrick; Scully, Patricia

    2018-04-01

    The development of cost-effective electrical packaging for randomly distributed micro/nano-scale devices is a widely recognized challenge for fabrication technologies. Three-dimensional direct laser writing (DLW) has been proposed as a solution to this challenge, and has enabled the creation of rapid and low resistance graphitic wires within commercial polyimide substrates. In this work, we utilize the DLW technique to electrically contact three fully encapsulated and randomly positioned light-emitting diodes (LEDs) in a one-step process. The resolution of the contacts is on the order of 20 μm, with an average circuit resistance of 29 ± 18 kΩ per LED contacted. The speed and simplicity of this technique are promising for the needs of future microelectronics and device packaging.

  3. Neutron resonance averaging

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1986-10-01

    The principles of resonance averaging as applied to neutron capture reactions are described. Several illustrations of resonance averaging to problems of nuclear structure and the distribution of radiative strength in nuclei are provided. 30 refs., 12 figs

  4. Application of Vector Triggering Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Ibrahim, S. R.; Brincker, Rune

    1997-01-01

    This paper deals with applications of the vector triggering Random Decrement technique. This technique is new and developed with the aim of minimizing estimation time and identification errors. The theory behind the technique is discussed in an accompanying paper. The results presented in this paper should be regarded as a further documentation of the technique. The key point in Random Decrement estimation is the formulation of a triggering condition. If the triggering condition is fulfilled, a time segment from each measurement is picked out and averaged with previous time segments. The final result is a Random Decrement function from each measurement. In traditional Random Decrement estimation the triggering condition is a scalar condition, which should only be fulfilled in a single measurement. In vector triggering Random Decrement the triggering condition is a vector condition, which should be fulfilled in several measurements simultaneously.

  6. Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.

    Science.gov (United States)

    Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M

    2014-02-10

    Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009) which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^(-2) photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly-oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.

  7. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Science.gov (United States)

    Tellier, Yoann; Pierangelo, Clémence; Wirth, Martin; Gibert, Fabien

    2018-04-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.
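
    The mechanism behind such a bias is easy to reproduce (illustrative numbers, not MERLIN values): applying a logarithmic retrieval to averaged noisy signals shifts the result by roughly -sigma^2/(2 n mu^2), the second-order Taylor term that correction algorithms of this kind must account for.

        import numpy as np

        rng = np.random.default_rng(1)
        mu, sigma, n = 1.0, 0.3, 50                 # mean signal, noise, shots per average
        shots = rng.normal(mu, sigma, size=(100000, n))

        retrieved = np.log(shots.mean(axis=1))      # non-linear retrieval of averaged data
        bias = retrieved.mean() - np.log(mu)        # systematic offset from averaging
        taylor = -sigma ** 2 / (2 * n * mu ** 2)    # analytic second-order estimate
        print(f"empirical bias {bias:+.5f} vs Taylor estimate {taylor:+.5f}")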

  8. Robust iterative learning control for multi-phase batch processes: an average dwell-time method with 2D convergence indexes

    Science.gov (United States)

    Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong

    2018-01-01

    In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme combining iterative learning control with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method together with the related switching conditions to give sufficient conditions ensuring stable running of the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving linear matrix inequalities. The result is a compound 2D controller with robust performance, which includes a robust extended feedback control ensuring that the steady-state tracking error converges rapidly. An application to an injection molding process demonstrates the effectiveness and superiority of the proposed strategy.

  9. The McMillan Theorem for Colored Branching Processes and Dimensions of Random Fractals

    Directory of Open Access Journals (Sweden)

    Victor Bakhtin

    2014-12-01

    For the simplest colored branching process, we prove an analog to the McMillan theorem and calculate the Hausdorff dimensions of random fractals defined in terms of the limit behavior of empirical measures generated by finite genetic lines. In this setting, the role of Shannon’s entropy is played by the Kullback–Leibler divergence, and the Hausdorff dimensions are computed by means of the so-called Billingsley–Kullback entropy, defined in the paper.

  10. Extinction transition in stochastic population dynamics in a random, convective environment

    International Nuclear Information System (INIS)

    Juhász, Róbert

    2013-01-01

    Motivated by modeling the dynamics of a population living in a flowing medium where the environmental factors are random in space, we have studied an asymmetric variant of the one-dimensional contact process, where the quenched random reproduction rates are systematically greater in one direction than in the opposite one. The spatial disorder turns out to be a relevant perturbation but, according to results of Monte Carlo simulations, the behavior of the model at the extinction transition is different from the (infinite-randomness) critical behavior of the disordered symmetric contact process. Depending on the strength a of the asymmetry, the critical population drifts either with a finite velocity or with an asymptotically vanishing velocity as x(t) ∼ t^μ(a), where μ(a) < 1. Dynamical quantities are non-self-averaging at the extinction transition; the survival probability, for instance, shows multiscaling, i.e. it is characterized by a broad spectrum of effective exponents. For a sufficiently weak asymmetry, a Griffiths phase appears below the extinction transition, where the survival probability decays as a non-universal power of the time while, above the transition, another extended phase emerges, where the front of the population advances anomalously with a diffusion exponent continuously varying with the control parameter. (paper)

  11. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    Science.gov (United States)

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of the "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorts the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. The contactless conductivity detection was used as a model for the development of a signal processing method and the demonstration of its impact on the sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on a certain migration time of the analyte. Because of the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for sampling frequency of 4.6 Hz and up to 22 times for sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
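
    A minimal sketch of the idea as described (the smoothing window widens with migration time, so slow, low-frequency peaks receive more smoothing than fast, high-frequency ones); the proportionality constant k is a made-up tuning parameter, not a value from the paper.

        import numpy as np

        def adaptive_moving_average(signal, sampling_hz, k=0.02):
            signal = np.asarray(signal, dtype=float)
            out = np.empty_like(signal)
            for i in range(signal.size):
                t = i / sampling_hz                            # migration time at this sample
                half = max(1, int(0.5 * k * t * sampling_hz))  # half-window grows with t
                lo, hi = max(0, i - half), min(signal.size, i + half + 1)
                out[i] = signal[lo:hi].mean()
            return out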

  12. Generating functionals for quantum field theories with random potentials

    International Nuclear Information System (INIS)

    Jain, Mudit; Vanchurin, Vitaly

    2016-01-01

    We consider generating functionals for computing correlators in quantum field theories with random potentials. Examples of such theories include cosmological systems in the context of the string theory landscape (e.g. cosmic inflation) or condensed matter systems with quenched disorder (e.g. spin glass). We use the so-called replica trick to define two different generating functionals for calculating correlators of the quantum fields averaged over a given distribution of random potentials. The first generating functional is appropriate for calculating averaged (in-out) amplitudes and involves a single replica of fields, but the replica limit is taken to an (unphysical) value of negative one for the number of fields outside of the path integral. When the number of replicas is doubled the generating functional can also be used for calculating averaged probabilities (squared amplitudes) using the in-in construction. The second generating functional involves an infinite number of replicas, but can be used for calculating both in-out and in-in correlators, with the replica limits taken only to a zero number of fields. We discuss the formalism in detail for a single real scalar field, but the generalization to more fields or to different types of fields is straightforward. We work out three examples: one where the mass of the scalar field is treated as a random variable and two where the functional form of interactions is random, one described by a Gaussian random field and the other by a Euclidean action in the field configuration space.

  13. Influence of dispatching rules on average production lead time for multi-stage production systems.

    Science.gov (United States)

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

    In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant under the applied dispatching rule, whereas it can be used as a dispatching-rule-independent indicator for single-stage production systems.
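
    A toy single-stage queue simulation makes the rule dependence concrete: on the same job stream, shortest-processing-time (SPT) dispatching gives a smaller plain average lead time than first-in-first-out (FIFO). Arrival and service parameters below are arbitrary.

        import random

        def mean_lead_time(rule, n_jobs=5000, seed=1):
            rng = random.Random(seed)
            jobs = [(i * 1.0, rng.expovariate(1.0 / 0.95)) for i in range(n_jobs)]
            queue, t, lead_times, i = [], 0.0, [], 0
            while len(lead_times) < n_jobs:
                while i < n_jobs and jobs[i][0] <= t:
                    queue.append(jobs[i]); i += 1
                if not queue:
                    t = jobs[i][0]          # machine idles until the next arrival
                    continue
                key = (lambda j: j[1]) if rule == "SPT" else (lambda j: j[0])
                queue.sort(key=key)
                arrival, proc = queue.pop(0)
                t += proc                   # machine busy until the job completes
                lead_times.append(t - arrival)
            return sum(lead_times) / n_jobs

        print(mean_lead_time("FIFO"), mean_lead_time("SPT"))   # SPT is smaller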

  14. Organic random lasers in the weak-scattering regime

    CERN Document Server

    Polson, R. C.; DOI: 10.1103/PhysRevB.71.045205

    2005-01-01

    We used the ensemble-averaged power Fourier transform (PFT) of random laser emission spectra over the illuminated area to study random lasers with coherent feedback in four different disordered organic gain media in the weak scattering regime, where the light mean free path l* is much larger than the emission wavelength. The disordered gain media include a π-conjugated polymer film, an opal photonic crystal infiltrated with a laser dye (rhodamine 6G; R6G) having optical gain in the visible spectral range, a suspension of titania balls in R6G solution, and biological tissues such as chicken breast infiltrated with R6G. We show the existence of universality among the random resonators in each gain medium that we tested, in which at the same excitation intensity a dominant random cavity is excited in different parts of the sample. We show a second universality when scaling the average PFT of the four different media by l*; we found that the dominant cavity in each disordered gain medium scales with l*. The e...

  15. Analysis of litter size and average litter weight in pigs using a recursive model

    DEFF Research Database (Denmark)

    Varona, Luis; Sorensen, Daniel; Thompson, Robin

    2007-01-01

    An analysis of litter size and average piglet weight at birth in Landrace and Yorkshire using a standard two-trait mixed model (SMM) and a recursive mixed model (RMM) is presented. The RMM establishes a one-way link from litter size to average piglet weight. It is shown that there is a one-to-one correspondence between the parameters of SMM and RMM and that they generate equivalent likelihoods. As parameterized in this work, the RMM tests for the presence of a recursive relationship between additive genetic values, permanent environmental effects, and specific environmental effects of litter size, on average piglet weight. The equivalent standard mixed model tests whether or not the covariance matrices of the random effects have a diagonal structure. In Landrace, posterior predictive model checking supports a model without any form of recursion or, alternatively, a SMM with diagonal covariance

  16. Averaging Bias Correction for Future IPDA Lidar Mission MERLIN

    Directory of Open Access Journals (Sweden)

    Tellier Yoann

    2018-01-01

    The CNES/DLR MERLIN satellite mission aims at measuring the methane dry-air mixing ratio column (XCH4) and thus improving surface flux estimates. In order to get a 1% precision on XCH4 measurements, MERLIN signal processing assumes an averaging of data over 50 km. The induced biases due to the non-linear IPDA lidar equation are not compliant with accuracy requirements. This paper analyzes averaging bias issues and suggests correction algorithms tested on realistic simulated scenes.

  17. Statistical mechanics and stability of random lattice field theory

    International Nuclear Information System (INIS)

    Baskaran, G.

    1984-01-01

    The averaging procedure in the random lattice field theory is studied by viewing it as a statistical mechanics of a system of classical particles. The corresponding thermodynamic phase is shown to determine the random lattice configuration which contributes dominantly to the generating function. The non-abelian gauge theory in four (space plus time) dimensions in the annealed and quenched averaging versions is shown to exist as an ideal classical gas, implying that macroscopically homogeneous configurations dominate the configurational averaging. For the free massless scalar field theory with O(n) global symmetry, in the annealed average, the pressure becomes negative for dimensions greater than two when n exceeds a critical number. This implies that macroscopically inhomogeneous collapsed configurations contribute dominantly. In the quenched averaging, the collapse of the massless scalar field theory is prevented and the system becomes an ideal gas which is at infinite temperature. Our results are obtained using exact scaling analysis. We also show approximately that SU(N) gauge theory collapses for dimensions greater than four in the annealed average. Within the same approximation, the collapse is prevented in the quenched average. We also obtain exact scaling differential equations satisfied by the generating function and physical quantities. (orig.)

  18. A Randomization Procedure for "Trickle-Process" Evaluations

    Science.gov (United States)

    Goldman, Jerry

    1977-01-01

    This note suggests a solution to the problem of achieving randomization in experimental settings where units deemed eligible for treatment "trickle in," that is, appear at any time. The solution permits replication of the experiment in order to test for time-dependent effects. (Author/CTM)

  19. Parametric interaction of waves in the plasma with random large-scale inhomogeneities

    International Nuclear Information System (INIS)

    Abramovich, B.S.; Tamojkin, V.V.

    1980-01-01

    Parametric processes of the decay and fusion of three waves are considered in a weakly turbulent plasma with random inhomogeneities whose size is large compared with the wavelengths. Within the range of applicability of the diffusion approximation, closed equations are obtained which determine the behaviour of all the intensity moments of parametrically coupled waves. It is shown that, when the characteristic length of multiple scattering is much smaller than the nonlinear interaction length, the effective growth increment of the average intensity and its moments in decay processes is small compared with the homogeneous plasma case. In fusion processes the same increment (decrement) determines the distance at which all intensity moments reach the saturation regime.

  20. Privacy preserving randomized gossip algorithms

    KAUST Repository

    Hanzely, Filip; Konečný , Jakub; Loizou, Nicolas; Richtarik, Peter; Grishchenko, Dmitry

    2017-01-01

    In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes. We give iteration complexity bounds for all methods, and perform extensive numerical experiments.
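
    The baseline these methods build on is plain randomized pairwise gossip, where both endpoints of a randomly activated edge replace their values with the pair average; the privacy-protecting variants then perturb what is exchanged, which is not reproduced in this sketch.

        import random

        def gossip_average(values, edges, iters=20000, seed=0):
            rng = random.Random(seed)
            x = list(values)
            for _ in range(iters):
                i, j = rng.choice(edges)           # activate one random edge
                x[i] = x[j] = (x[i] + x[j]) / 2.0  # both nodes keep the pair average
            return x                               # converges toward the global mean

        ring = [(k, (k + 1) % 10) for k in range(10)]
        print(gossip_average(range(10), ring))     # all entries near 4.5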

  2. Coupled continuous time-random walks in quenched random environment

    Science.gov (United States)

    Magdziarz, M.; Szczotka, W.

    2018-02-01

    We introduce a coupled continuous-time random walk with the coupling characteristic of Lévy walks. Additionally we assume that the walker moves in a quenched random environment, i.e. the site disorder at each lattice point is fixed in time. We analyze the scaling limit of such a random walk. We show that for large times the behaviour of the analyzed process is exactly the same as for the uncoupled quenched trap model for Lévy flights.

  3. Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data

    Science.gov (United States)

    Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti

    2018-03-01

    In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, control charts have been developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. It has been noted that such charts are not suitable if the control limits derived for independent observations are retained. For this reason, it is necessary to apply a time series model when building the control chart. A classical control chart for independent variables is usually applied to the residual process, which is permissible provided that the residuals are independent. In 1978, a Shewhart modification for autoregressive processes was introduced, using the distance between the sample mean and the target value relative to the standard deviation of the autocorrelated process. In this paper we examine the EWMA mean for autocorrelated processes derived from Montgomery and Patel. Performance is investigated by examining the average run length (ARL) based on the Markov chain method.
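
    A sketch of the residual-based charting this line of work builds on: fit an AR(1) model, then chart the EWMA of the (approximately independent) residuals with exact time-varying limits. The smoothing constant lam and limit width L below are textbook defaults, not values from the paper.

        import numpy as np

        def ar1_residuals(x):
            x = np.asarray(x, dtype=float) - np.mean(x)
            phi = x[1:] @ x[:-1] / (x[:-1] @ x[:-1])   # lag-1 estimate of the AR(1) coefficient
            return x[1:] - phi * x[:-1]

        def ewma_signals(resid, lam=0.2, L=3.0):
            sigma, z, flags = resid.std(ddof=1), 0.0, []
            for t, r in enumerate(resid, start=1):
                z = lam * r + (1.0 - lam) * z
                # exact (time-varying) standard error of the EWMA statistic
                se = sigma * np.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
                flags.append(abs(z) > L * se)           # out-of-control signal
            return flags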

  4. Quantitative Model of Price Diffusion and Market Friction Based on Trading as a Mechanistic Random Process

    Science.gov (United States)

    Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-01

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.

  5. Identification and estimation of survivor average causal effects.

    Science.gov (United States)

    Tchetgen Tchetgen, Eric J

    2014-09-20

    In longitudinal studies, outcomes ascertained at follow-up are typically undefined for individuals who die prior to the follow-up visit. In such settings, outcomes are said to be truncated by death and inference about the effects of a point treatment or exposure, restricted to individuals alive at the follow-up visit, could be biased even if as in experimental studies, treatment assignment were randomized. To account for truncation by death, the survivor average causal effect (SACE) defines the effect of treatment on the outcome for the subset of individuals who would have survived regardless of exposure status. In this paper, the author nonparametrically identifies SACE by leveraging post-exposure longitudinal correlates of survival and outcome that may also mediate the exposure effects on survival and outcome. Nonparametric identification is achieved by supposing that the longitudinal data arise from a certain nonparametric structural equations model and by making the monotonicity assumption that the effect of exposure on survival agrees in its direction across individuals. A novel weighted analysis involving a consistent estimate of the survival process is shown to produce consistent estimates of SACE. A data illustration is given, and the methods are extended to the context of time-varying exposures. We discuss a sensitivity analysis framework that relaxes assumptions about independent errors in the nonparametric structural equations model and may be used to assess the extent to which inference may be altered by a violation of key identifying assumptions. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.

  6. Processing speed and working memory training in multiple sclerosis: a double-blind randomized controlled pilot study.

    Science.gov (United States)

    Hancock, Laura M; Bruce, Jared M; Bruce, Amanda S; Lynch, Sharon G

    2015-01-01

    Between 40-65% of multiple sclerosis patients experience cognitive deficits, with processing speed and working memory most commonly affected. This pilot study investigated the effect of computerized cognitive training focused on improving processing speed and working memory. Participants were randomized into either an active or a sham training group and engaged in six weeks of training. The active training group improved on a measure of processing speed and attention following cognitive training, and data trended toward significance on measures of other domains. Results provide preliminary evidence that cognitive training with multiple sclerosis patients may produce moderate improvement in select areas of cognitive functioning.

  7. The balanced survivor average causal effect.

    Science.gov (United States)

    Greene, Tom; Joffe, Marshall; Hu, Bo; Li, Liang; Boucher, Ken

    2013-05-07

    Statistical analysis of longitudinal outcomes is often complicated by the absence of observable values in patients who die prior to their scheduled measurement. In such cases, the longitudinal data are said to be "truncated by death" to emphasize that the longitudinal measurements are not simply missing, but are undefined after death. Recently, the truncation by death problem has been investigated using the framework of principal stratification to define the target estimand as the survivor average causal effect (SACE), which in the context of a two-group randomized clinical trial is the mean difference in the longitudinal outcome between the treatment and control groups for the principal stratum of always-survivors. The SACE is not identified without untestable assumptions. These assumptions have often been formulated in terms of a monotonicity constraint requiring that the treatment does not reduce survival in any patient, in conjunction with assumed values for mean differences in the longitudinal outcome between certain principal strata. In this paper, we introduce an alternative estimand, the balanced-SACE, which is defined as the average causal effect on the longitudinal outcome in a particular subset of the always-survivors that is balanced with respect to the potential survival times under the treatment and control. We propose a simple estimator of the balanced-SACE that compares the longitudinal outcomes between equivalent fractions of the longest surviving patients between the treatment and control groups and does not require a monotonicity assumption. We provide expressions for the large sample bias of the estimator, along with sensitivity analyses and strategies to minimize this bias. We consider statistical inference under a bootstrap resampling procedure.

  8. Criticality and entanglement in random quantum systems

    International Nuclear Information System (INIS)

    Refael, G; Moore, J E

    2009-01-01

    We review studies of entanglement entropy in systems with quenched randomness, concentrating on universal behavior at strongly random quantum critical points. The disorder-averaged entanglement entropy provides insight into the quantum criticality of these systems and an understanding of their relationship to non-random ('pure') quantum criticality. The entanglement near many such critical points in one dimension shows a logarithmic divergence in subsystem size, similar to that in the pure case but with a different universal coefficient. Such universal coefficients are examples of universal critical amplitudes in a random system. Possible measurements are reviewed along with the one-particle entanglement scaling at certain Anderson localization transitions. We also comment briefly on higher dimensions and challenges for the future.

  9. Lattice Boltzmann simulation of flow and heat transfer in random porous media constructed by simulated annealing algorithm

    International Nuclear Information System (INIS)

    Liu, Minghua; Shi, Yong; Yan, Jiashu; Yan, Yuying

    2017-01-01

    Highlights: • A numerical capability combining the lattice Boltzmann method with simulated annealing algorithm is developed. • Digitized representations of random porous media are constructed using limited but meaningful statistical descriptors. • Pore-scale flow and heat transfer information in random porous media is obtained by the lattice Boltzmann simulation. • The effective properties at the representative elementary volume scale are well specified using appropriate upscale averaging. - Abstract: In this article, the lattice Boltzmann (LB) method for transport phenomena is combined with the simulated annealing (SA) algorithm for digitized porous-medium construction to study flow and heat transfer in random porous media. Importantly, in contrast to previous studies which simplify porous media as arrays of regularly shaped objects or effective pore networks, the LB + SA method in this article can model statistically meaningful random porous structures in irregular morphology, and simulate pore-scale transport processes inside them. Pore-scale isothermal flow and heat conduction in a set of constructed random porous media characterized by statistical descriptors were then simulated through use of the LB + SA method. The corresponding averages over the computational volumes and the related effective transport properties were also computed based on these pore scale numerical results. Good agreement between the numerical results and theoretical predictions or experimental data on the representative elementary volume scale was found. The numerical simulations in this article demonstrate combination of the LB method with the SA algorithm is a viable and powerful numerical strategy for simulating transport phenomena in random porous media in complex geometries.

  10. Toddlers' bias to look at average versus obese figures relates to maternal anti-fat prejudice.

    Science.gov (United States)

    Ruffman, Ted; O'Brien, Kerry S; Taumoepeau, Mele; Latner, Janet D; Hunter, John A

    2016-02-01

    Anti-fat prejudice (weight bias, obesity stigma) is strong, prevalent, and increasing in adults and is associated with negative outcomes for those with obesity. However, it is unknown how early in life this prejudice forms and the reasons for its development. We examined whether infants and toddlers might display an anti-fat bias and, if so, whether it was influenced by maternal anti-fat attitudes through a process of social learning. Mother-child dyads (N=70) split into four age groups participated in a preferential looking paradigm whereby children were presented with 10 pairs of average and obese human figures in random order, and their viewing times (preferential looking) for the figures were measured. Mothers' anti-fat prejudice and education were measured along with mothers' and fathers' body mass index (BMI) and children's television viewing time. We found that older infants (M=11months) had a bias for looking at the obese figures, whereas older toddlers (M=32months) instead preferred looking at the average-sized figures. Furthermore, older toddlers' preferential looking was correlated significantly with maternal anti-fat attitudes. Parental BMI, education, and children's television viewing time were unrelated to preferential looking. Looking times might signal a precursor to explicit fat prejudice socialized via maternal anti-fat attitudes. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    1999-01-01

    In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong to a non-linear manifold. It is shown that the two common approaches can be derived from approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion

  12. A METHOD FOR DETERMINING THE RADIALLY-AVERAGED EFFECTIVE IMPACT AREA FOR AN AIRCRAFT CRASH INTO A STRUCTURE

    Energy Technology Data Exchange (ETDEWEB)

    Walker, William C. [ORNL

    2018-02-01

    This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
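
    Numerically, the radial average is just the mean of the direction-dependent effective area over all headings. The area model below (plan area plus a skid shadow behind the projected building width) is a simplified stand-in for the report's equations, with made-up dimensions.

        import numpy as np

        def effective_area(theta, length=50.0, width=20.0, skid=60.0):
            projected = length * np.abs(np.sin(theta)) + width * np.abs(np.cos(theta))
            return length * width + skid * projected     # per-heading effective area

        theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
        areas = effective_area(theta)
        print(areas.max(), areas.mean())   # the radial average is below the maximum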

  13. Average-energy games

    Directory of Open Access Journals (Sweden)

    Patricia Bouyer

    2015-09-01

    Two-player quantitative zero-sum games provide a natural framework to synthesize controllers with performance guarantees for reactive systems within an uncontrollable environment. Classical settings include mean-payoff games, where the objective is to optimize the long-run average gain per action, and energy games, where the system has to avoid running out of energy. We study average-energy games, where the goal is to optimize the long-run average of the accumulated energy. We show that this objective arises naturally in several applications, and that it yields interesting connections with previous concepts in the literature. We prove that deciding the winner in such games is in NP ∩ coNP and at least as hard as solving mean-payoff games, and we establish that memoryless strategies suffice to win. We also consider the case where the system has to minimize the average-energy while maintaining the accumulated energy within predefined bounds at all times: this corresponds to operating with a finite-capacity storage for energy. We give results for one-player and two-player games, and establish complexity bounds and memory requirements.

  14. [The third lumbar transverse process syndrome treated with acupuncture at zygapophyseal joint and transverse process:a randomized controlled trial].

    Science.gov (United States)

    Li, Fangling; Bi, Dingyan

    2017-08-12

    To compare the effects on the third lumbar transverse process syndrome of acupuncture mainly at the zygapophyseal joint and transverse process versus conventional acupuncture. Eighty cases were randomly assigned to an observation group and a control group, 40 cases in each one. In the observation group, patients were treated with acupuncture at the zygapophyseal joint, the transverse process, the point where the superior gluteus nerve enters the hip and Weizhong (BL 40); those in the control group were treated with acupuncture at Qihaishu (BL 24), Jiaji (EX-B 2) of L2-L4, the point where the superior gluteus nerve enters the hip and Weizhong (BL 40). The treatment was given once a day, 6 times a week for 2 weeks. The visual analogue scale (VAS), Japanese Orthopaedic Association (JOA) low back pain score and simplified Chinese Oswestry disability index (SC-ODI) were observed before and after treatment as well as 6 months after treatment, and the clinical effects were evaluated. The total effective rate in the observation group was 95.0% (38/40), significantly higher than the 82.5% (33/40) in the control group. Acupuncture mainly at the zygapophyseal joint and transverse process achieves a good effect for the third lumbar transverse process syndrome, better than conventional acupuncture at relieving pain and improving lumbar function and quality of life.

  15. Modelling estimation and analysis of dynamic processes from image sequences using temporal random closed sets and point processes with application to the cell exocytosis and endocytosis

    OpenAIRE

    Díaz Fernández, Ester

    2010-01-01

    In this thesis, new models and methodologies are introduced for the analysis of dynamic processes characterized by image sequences with spatio-temporal overlapping. Spatio-temporal overlapping exists in many natural phenomena and should be addressed properly in several scientific disciplines, such as microscopy, materials science, biology, geostatistics or communication networks. This work is related to point process and random closed set theories, within Stochastic Geometry.

  16. Spherical particle Brownian motion in viscous medium as non-Markovian random process

    International Nuclear Information System (INIS)

    Morozov, Andrey N.; Skripkin, Alexey V.

    2011-01-01

    The Brownian motion of a spherical particle in an infinite medium is described by conventional methods and integral transforms, taking into account the entrainment of the surrounding particles of the medium by the Brownian particle. It is demonstrated that fluctuations of the Brownian particle velocity represent a non-Markovian random process. The features of Brownian motion over short time intervals and small displacements are considered. -- Highlights: → A description of Brownian motion accounting for the entrainment of the medium is developed. → We find the equations for statistical characteristics of impulse fluctuations. → Brownian motion at small time intervals is considered. → Theoretical results and experimental data are compared.

  17. Efficient rare-event simulation for multiple jump events in regularly varying random walks and compound Poisson processes

    NARCIS (Netherlands)

    B. Chen (Bohan); J. Blanchet; C.H. Rhee (Chang-Han); A.P. Zwart (Bert)

    2017-01-01

    textabstractWe propose a class of strongly efficient rare event simulation estimators for random walks and compound Poisson processes with a regularly varying increment/jump-size distribution in a general large deviations regime. Our estimator is based on an importance sampling strategy that hinges

  18. Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations

    Directory of Open Access Journals (Sweden)

    Marco Lanuzza

    2011-04-01

    Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed considering the particular case of the design of energy aware adder circuits. Five well known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide semiconductor (CMOS) standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (which are conventionally identified as low-energy slow architectures) operating at higher power supply voltages can achieve a timing yield significantly better than more complex faster adders when used in low-power design with supply voltages lower than nominal.

  19. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
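
    A sketch of the two useful rules; the cut-offs (a longest-run limit of about log2(n) + 3 and a minimum-crossings bound from the binomial distribution) follow common run-chart practice, and the paper should be consulted for the exact values it evaluates. scipy is assumed available for the binomial quantile.

        import math
        from scipy.stats import binom

        def run_chart_signals(data):
            s = sorted(data)
            median = (s[(len(s) - 1) // 2] + s[len(s) // 2]) / 2.0
            signs = [1 if x > median else -1 for x in data if x != median]
            n = len(signs)
            longest = run = 1
            for a, b in zip(signs, signs[1:]):
                run = run + 1 if a == b else 1
                longest = max(longest, run)
            crossings = sum(a != b for a, b in zip(signs, signs[1:]))
            return {
                "shift": longest > round(math.log2(n) + 3),           # unusually long run
                "crossings": crossings < binom.ppf(0.05, n - 1, 0.5), # unusually few crossings
            }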

  20. Averaging of nonlinearity-managed pulses

    International Nuclear Information System (INIS)

    Zharnitsky, Vadim; Pelinovsky, Dmitry

    2005-01-01

    We consider the nonlinear Schroedinger equation with the nonlinearity management which describes Bose-Einstein condensates under Feshbach resonance. By using an averaging theory, we derive the Hamiltonian averaged equation and compare it with other averaging methods developed for this problem. The averaged equation is used for analytical approximations of nonlinearity-managed solitons

  1. On the Coupling Time of the Heat-Bath Process for the Fortuin-Kasteleyn Random-Cluster Model

    Science.gov (United States)

    Collevecchio, Andrea; Elçi, Eren Metin; Garoni, Timothy M.; Weigel, Martin

    2018-01-01

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.

  2. Electromagnetic Wave Propagation in Random Media

    DEFF Research Database (Denmark)

    Pécseli, Hans

    1984-01-01

    The propagation of a narrow frequency band beam of electromagnetic waves in a medium with randomly varying index of refraction is considered. A novel formulation of the governing equation is proposed. An equation for the average Green function (or transition probability) can then be derived...

  3. Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.

    Science.gov (United States)

    Brezis, Noam; Bronfman, Zohar Z; Usher, Marius

    2015-06-04

    We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.

  4. Long-range epidemic spreading in a random environment.

    Science.gov (United States)

    Juhász, Róbert; Kovács, István A; Iglói, Ferenc

    2015-03-01

    Modeling long-range epidemic spreading in a random environment, we consider a quenched, disordered, d-dimensional contact process with infection rates decaying with distance as 1/r^(d+σ). We study the dynamical behavior of the model at and below the epidemic threshold by a variant of the strong-disorder renormalization-group method and by Monte Carlo simulations in one and two spatial dimensions. Starting from a single infected site, the average survival probability is found to decay as P(t) ∼ t^(-d/z) up to multiplicative logarithmic corrections. Below the epidemic threshold, a Griffiths phase emerges, where the dynamical exponent z varies continuously with the control parameter and tends to z_c = d + σ as the threshold is approached. At the threshold, the spatial extension of the infected cluster (in surviving trials) is found to grow as R(t) ∼ t^(1/z_c) with a multiplicative logarithmic correction, and the average number of infected sites in surviving trials is found to increase as N_s(t) ∼ (ln t)^χ with χ = 2 in one dimension.
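
    For concreteness, a bare-bones Monte Carlo estimate of the survival probability P(t) from a single seed, here for a plain nearest-neighbour contact process with quenched random rates rather than the paper's long-range kernel; rates, sizes and times are arbitrary.

        import numpy as np

        def survival_curve(L=201, T=100.0, trials=300, seed=0):
            rng = np.random.default_rng(seed)
            lam = rng.uniform(1.0, 3.0, size=L)          # quenched random infection rates
            checkpoints = np.linspace(0.0, T, 21)
            alive = np.zeros(checkpoints.size)
            for _ in range(trials):
                infected, t, k = {L // 2}, 0.0, 0        # single infected seed
                while infected and t < T:
                    sites = list(infected)
                    w = np.array([1.0 + 2.0 * lam[i] for i in sites])
                    t += rng.exponential(1.0 / w.sum())  # Gillespie waiting time
                    while k < checkpoints.size and checkpoints[k] <= t:
                        alive[k] += 1.0                  # cluster still alive here
                        k += 1
                    i = sites[rng.choice(w.size, p=w / w.sum())]
                    if rng.random() < 1.0 / (1.0 + 2.0 * lam[i]):
                        infected.discard(i)              # recovery at unit rate
                    else:
                        j = i + (1 if rng.random() < 0.5 else -1)
                        if 0 <= j < L:
                            infected.add(j)              # nearest-neighbour infection
            return checkpoints, alive / trials           # estimate of P(t)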

  5. A randomized controlled trial of an electronic informed consent process.

    Science.gov (United States)

    Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R

    2014-12-01

    A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Participants in the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and whom to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding of some aspects of a research study. © The Author(s) 2014.

  6. On Random Numbers and Design

    Science.gov (United States)

    Ben-Ari, Morechai

    2004-01-01

    The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

  7. A generalization of the preset count moving average algorithm for digital rate meters

    International Nuclear Information System (INIS)

    Arandjelovic, Vojislav; Koturovic, Aleksandar; Vukanovic, Radomir

    2002-01-01

    A generalized definition of the preset count moving average algorithm for digital rate meters has been introduced. The algorithm is based on the knowledge of time intervals between successive pulses in random-pulse sequences. The steady state and transient regimes of the algorithm have been characterized. A measure for statistical fluctuations of the successive measurement results has been introduced. The versatility of the generalized algorithm makes it suitable for application in the design of the software of modern measuring/control digital systems
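
    As a rough illustration of the underlying idea (not the authors' generalized algorithm; all names here are our own), a preset count moving average estimates the rate from the last N inter-pulse intervals and updates the estimate at every incoming pulse:

        from collections import deque
        import random

        def preset_count_rate(pulse_times, n_preset=16):
            """Moving-average rate meter: rate = n_preset / (sum of the last
            n_preset inter-pulse intervals), updated at each incoming pulse."""
            intervals = deque(maxlen=n_preset)
            rates, last = [], None
            for t in pulse_times:
                if last is not None:
                    intervals.append(t - last)
                    if len(intervals) == n_preset:
                        rates.append(n_preset / sum(intervals))
                last = t
            return rates

        # Example: Poisson pulse train at 100 counts/s
        random.seed(1)
        t, times = 0.0, []
        for _ in range(2000):
            t += random.expovariate(100.0)
            times.append(t)
        print(preset_count_rate(times)[:3])  # estimates fluctuate around 100 /s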

  8. The difference between alternative averages

    Directory of Open Access Journals (Sweden)

    James Vaupel

    2012-09-01

    Full Text Available BACKGROUND Demographers have long been interested in how compositional change, e.g., change in age structure, affects population averages. OBJECTIVE We want to deepen understanding of how compositional change affects population averages. RESULTS The difference between two averages of a variable, calculated using alternative weighting functions, equals the covariance between the variable and the ratio of the weighting functions, divided by the average of the ratio. We compare weighted and unweighted averages and also provide examples of use of the relationship in analyses of fertility and mortality. COMMENTS Other uses of covariances in formal demography are worth exploring.
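
    The stated result can be checked numerically. Writing the two averages as A_w = Σ w_i v_i / Σ w_i and A_u = Σ u_i v_i / Σ u_i and setting r_i = w_i / u_i, the identity reads A_w − A_u = Cov_u(v, r) / E_u[r], with covariance and mean taken under the u-weights. A small check of our own, taking u uniform so that r is proportional to w:

        import numpy as np

        rng = np.random.default_rng(0)
        v = rng.normal(size=1000)          # the variable (e.g., an age-specific rate)
        w = rng.uniform(0.1, 2.0, 1000)    # alternative weighting function

        diff = np.average(v, weights=w) - v.mean()
        identity = np.mean((v - v.mean()) * (w - w.mean())) / w.mean()
        print(diff, identity)  # the two numbers agree to rounding error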

  9. Cancerous tissue mapping from random lasing emission spectra

    International Nuclear Information System (INIS)

    Polson, R C; Vardeny, Z V

    2010-01-01

    Random lasing emission spectra have been collected from both healthy and cancerous tissues. The two types of tissue with optical gain have different light scattering properties as obtained from an average power Fourier transform of their random lasing emission spectra. The difference in the power Fourier transform leads to a contrast between cancerous and benign tissues, which is utilized for tissue mapping of healthy and cancerous regions of patients

  10. Multi-fidelity Gaussian process regression for prediction of random fields

    International Nuclear Information System (INIS)

    Parussini, L.; Venturi, D.; Perdikaris, P.; Karniadakis, G.E.

    2017-01-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.

  11. Multi-fidelity Gaussian process regression for prediction of random fields

    Energy Technology Data Exchange (ETDEWEB)

    Parussini, L. [Department of Engineering and Architecture, University of Trieste (Italy); Venturi, D., E-mail: venturi@ucsc.edu [Department of Applied Mathematics and Statistics, University of California Santa Cruz (United States); Perdikaris, P. [Department of Mechanical Engineering, Massachusetts Institute of Technology (United States); Karniadakis, G.E. [Division of Applied Mathematics, Brown University (United States)

    2017-05-01

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
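
    A minimal two-fidelity sketch in the spirit of the recursive construction (our own illustration with scikit-learn and toy functions, not the authors' implementation): a GP is trained on plentiful low-fidelity data, and a second GP regresses the scarce high-fidelity data on the input augmented with the first GP's prediction.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        # Toy low- and high-fidelity models (assumed, for illustration only)
        f_lo = lambda x: np.sin(8 * np.pi * x)
        f_hi = lambda x: (x - np.sqrt(2)) * f_lo(x) ** 2

        X_lo = np.linspace(0, 1, 50)[:, None]   # cheap surrogate data
        X_hi = np.linspace(0, 1, 8)[:, None]    # scarce high-fidelity data

        # Level 1: GP on the low-fidelity observations
        gp1 = GaussianProcessRegressor(RBF(0.1)).fit(X_lo, f_lo(X_lo).ravel())

        # Level 2: GP on high-fidelity data, input augmented with gp1's prediction
        aug = np.hstack([X_hi, gp1.predict(X_hi)[:, None]])
        gp2 = GaussianProcessRegressor(RBF([0.1, 1.0])).fit(aug, f_hi(X_hi).ravel())

        Xt = np.linspace(0, 1, 200)[:, None]
        mean, sd = gp2.predict(np.hstack([Xt, gp1.predict(Xt)[:, None]]),
                               return_std=True)
        print(mean[:3], sd[:3])  # multi-fidelity predictor and its uncertainty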

  12. Response-only modal identification using random decrement algorithm with time-varying threshold level

    International Nuclear Information System (INIS)

    Lin, Chang Sheng; Tseng, Tse Chuan

    2014-01-01

    Modal Identification from response data only is studied for structural systems under nonstationary ambient vibration. The topic of this paper is the estimation of modal parameters from nonstationary ambient vibration data by applying the random decrement algorithm with time-varying threshold level. In the conventional random decrement algorithm, the threshold level for evaluating random dec signatures is defined as the standard deviation value of response data of the reference channel. The distortion of random dec signatures may be, however, induced by the error involved in noise from the original response data in practice. To improve the accuracy of identification, a modification of the sampling procedure in random decrement algorithm is proposed for modal-parameter identification from the nonstationary ambient response data. The time-varying threshold level is presented for the acquisition of available sample time history to perform averaging analysis, and defined as the temporal root-mean-square function of structural response, which can appropriately describe a wide variety of nonstationary behaviors in reality, such as the time-varying amplitude (variance) of a nonstationary process in a seismic record. Numerical simulations confirm the validity and robustness of the proposed modal-identification method from nonstationary ambient response data under noisy conditions.
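
    A schematic of the modified sampling step (our reading of the procedure, with an assumed window length for the running RMS): trigger points are taken where the response up-crosses a time-varying threshold proportional to the local root-mean-square, and the triggered segments are averaged into the random decrement signature.

        import numpy as np

        def random_decrement(x, seg_len, rms_win=256, level=1.0):
            """Average segments of x starting where x up-crosses a time-varying
            threshold equal to `level` times the local (running) RMS."""
            x = np.asarray(x, float)
            rms = np.sqrt(np.convolve(x**2, np.ones(rms_win) / rms_win, mode="same"))
            thresh = level * rms
            segs, i = [], 1
            while i < len(x) - seg_len:
                if x[i - 1] < thresh[i - 1] <= x[i]:   # up-crossing trigger
                    segs.append(x[i:i + seg_len])
                    i += seg_len                        # skip ahead (simplification)
                else:
                    i += 1
            return np.mean(segs, axis=0)

        # Example: oscillation with time-varying amplitude plus noise
        t = np.linspace(0, 60, 60000)
        x = (1 + 0.5 * np.sin(0.1 * t)) * np.sin(2 * np.pi * 2 * t) \
            + 0.3 * np.random.default_rng(0).standard_normal(t.size)
        sig = random_decrement(x, seg_len=1000)  # free-decay-like signature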

  13. Limit theorems for stationary increments Lévy driven moving averages

    DEFF Research Database (Denmark)

    Basse-O'Connor, Andreas; Lachièze-Rey, Raphaël; Podolskij, Mark

    of the kernel function g at 0. First order asymptotic theory essentially comprise three cases: stable convergence towards a certain infinitely divisible distribution, an ergodic type limit theorem and convergence in probability towards an integrated random process. We also prove the second order limit theorem...

  14. METHODS OF CONTROLLING THE AVERAGE DIAMETER OF THE THREAD WITH ASYMMETRICAL PROFILE

    Directory of Open Access Journals (Sweden)

    L. M. Aliomarov

    2015-01-01

    Full Text Available To machine threaded holes in hard materials used in marine machinery operating at high temperatures, under heavy loads, and in aggressive environments, the authors have developed a combined core drill-tap tool with a special cutting scheme and an asymmetric thread profile on the tap section. To control the average thread diameter of the tap section of the combined tool, the three-wire method was used, which allows continuous measurement of the average diameter along the entire profile. Deviation of the average diameter from the sample is registered by an inductive sensor and recorded by a recorder. Control schemes for the average diameter of threads with symmetrical and asymmetrical profiles are developed and presented in this work. On the basis of these schemes, formulas are derived for calculating the theoretical setting of the wires in the thread profile when measuring the average diameter. Comprehensive research on, and the introduction of, the combined core drill-tap tool in the production of marine engineering, shipbuilding, and ship-repair power plant products made of hard materials has shown the high efficiency of the proposed technology for machining high-quality small-diameter threaded holes that meet modern requirements.

  15. Pervasive randomness in physics: an introduction to its modelling and spectral characterisation

    Science.gov (United States)

    Howard, Roy

    2017-10-01

    An introduction to the modelling and spectral characterisation of random phenomena is detailed at a level consistent with a first exposure to the subject at an undergraduate level. A signal framework for defining a random process is provided and this underpins an introduction to common random processes including the Poisson point process, the random walk, the random telegraph signal, shot noise, information signalling random processes, jittered pulse trains, birth-death random processes and Markov chains. An introduction to the spectral characterisation of signals and random processes, via either an energy spectral density or a power spectral density, is detailed. The important case of defining a white noise random process concludes the paper.
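
    As a concrete example of one of the listed processes and its spectral characterisation (a sketch of our own, using numpy/scipy): a random telegraph signal switches between ±1 at Poisson times, and its power spectral density, estimated with Welch's method, approaches the expected Lorentzian shape.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(0)
        fs, T, lam = 1000.0, 200.0, 5.0      # sample rate, duration, switching rate
        n = int(fs * T)
        flips = rng.random(n) < lam / fs     # Poisson switching, per-sample flips
        x = np.where(np.cumsum(flips) % 2 == 0, 1.0, -1.0)   # telegraph signal

        f, Pxx = welch(x, fs=fs, nperseg=4096)
        # Theory: two-sided PSD 4*lam/((2*lam)^2 + (2*pi*f)^2); Welch returns a
        # one-sided density, i.e. twice that value at positive frequencies.
        theory = 2 * 4 * lam / ((2 * lam) ** 2 + (2 * np.pi * f[1:]) ** 2)
        print(Pxx[1], theory[0])   # estimate vs. Lorentzian theory at f[1]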

  16. Average of delta: a new quality control tool for clinical laboratories.

    Science.gov (United States)

    Jones, Graham R D

    2016-01-01

    Average of normals is a tool used to control assay performance using the average of a series of results from patients' samples. Delta checking is a process of identifying errors in individual patient results by reviewing the difference from previous results of the same patient. This paper introduces a novel alternate approach, average of delta, which combines these concepts to use the average of a number of sequential delta values to identify changes in assay performance. Models for average of delta and average of normals were developed in a spreadsheet application. The model assessed the expected scatter of average of delta and average of normals functions and the effect of assay bias for different values of analytical imprecision and within- and between-subject biological variation and the number of samples included in the calculations. The final assessment was the number of patients' samples required to identify an added bias with 90% certainty. The model demonstrated that with larger numbers of delta values, the average of delta function was tighter (lower coefficient of variation). The optimal number of samples for bias detection with average of delta was likely to be between 5 and 20 for most settings and that average of delta outperformed average of normals when the within-subject biological variation was small relative to the between-subject variation. Average of delta provides a possible additional assay quality control tool which theoretical modelling predicts may be more valuable than average of normals for analytes where the group biological variation is wide compared with within-subject variation and where there is a high rate of repeat testing in the laboratory patient population. © The Author(s) 2015.
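
    A minimal sketch of the core computation (our illustration, not the paper's spreadsheet model): deltas are differences from each patient's previous result, and the quality-control statistic is the moving average of the last k deltas, which swings away from zero when an assay bias is introduced.

        import numpy as np

        def average_of_delta(results, k=10):
            """results: iterable of (patient_id, value) in time order. Returns
            the moving average over the last k delta values (current minus
            previous result of the same patient)."""
            last, deltas, aod = {}, [], []
            for pid, val in results:
                if pid in last:
                    deltas.append(val - last[pid])
                    aod.append(np.mean(deltas[-k:]))
                last[pid] = val
            return np.array(aod)

        rng = np.random.default_rng(0)
        stream = []
        for t in range(600):                    # repeat-tested population
            pid = int(rng.integers(40))
            bias = 5.0 if t >= 300 else 0.0     # assay bias appears halfway
            stream.append((pid, rng.normal(100, 5) + bias))
        aod = average_of_delta(stream)
        # The statistic hovers near zero, then rises just after the bias onset,
        # while patients' previous results still predate the bias.
        print(aod[:5].round(2), aod.max().round(2))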

  17. Random tensors

    CERN Document Server

    Gurau, Razvan

    2017-01-01

    Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents in detail the theory and its applications to physics. The recent detections of the Higgs boson at the LHC and gravitational waves at LIGO mark new milestones in Physics confirming long standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce today the need to find an underlying common framework of the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....

  18. Groupies in random bipartite graphs

    OpenAIRE

    Yilun Shang

    2010-01-01

    A vertex $v$ of a graph $G$ is called a groupie if its degree is not less than the average of the degrees of its neighbors. In this paper we study the influence of the bipartition $(B_1,B_2)$ on groupies in random bipartite graphs $G(B_1,B_2,p)$, with both fixed $p$ and $p$ tending to zero.

  19. An empirical test of pseudo random number generators by means of an exponential decaying process; Una prueba empirica de generadores de numeros pseudoaleatorios mediante un proceso de decaimiento exponencial

    Energy Technology Data Exchange (ETDEWEB)

    Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A. [Facultad de Fisica e Inteligencia Artificial, Universidad Veracruzana, A.P. 475, Xalapa, Veracruz (Mexico); Mora F, L.E. [CIMAT, A.P. 402, 36000 Guanajuato (Mexico)]. e-mail: hcoronel@uv.mx

    2007-07-01

    Empirical tests of pseudo random number generators based on processes or physical models have been used successfully and are considered complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudo random number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudo random number generators commonly used in physics. (Author)
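
    A sketch of the kind of test described (our own construction, not the authors' methodology): use the generator under test to simulate decay times by inverse transform, then compare the empirical histogram with the exponential law via a chi-square statistic.

        import numpy as np
        from scipy.stats import chisquare

        def decay_times(u):
            """Inverse transform: map uniforms u in (0,1) to exponential
            decay times with unit mean lifetime."""
            return -np.log(u)

        rng = np.random.default_rng(123)        # generator under test
        t = decay_times(rng.random(100_000))

        edges = np.linspace(0, 5, 11)           # 10 bins on [0, 5]
        obs, _ = np.histogram(t, bins=edges)
        p_bins = np.diff(1 - np.exp(-edges))    # exponential bin probabilities
        exp_counts = obs.sum() * p_bins / p_bins.sum()  # condition on t < 5
        stat, p = chisquare(obs, f_exp=exp_counts)
        print(stat, p)   # a sound generator gives p-values spread uniformly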

  20. Quantum randomness and unpredictability

    Energy Technology Data Exchange (ETDEWEB)

    Jaeger, Gregg [Quantum Communication and Measurement Laboratory, Department of Electrical and Computer Engineering and Division of Natural Science and Mathematics, Boston University, Boston, MA (United States)

    2017-06-15

    Quantum mechanics is a physical theory supplying probabilities corresponding to expectation values for measurement outcomes. Indeed, its formalism can be constructed with measurement as a fundamental process, as was done by Schwinger, provided that individual measurement outcomes occur in a random way. The randomness appearing in quantum mechanics, as with other forms of randomness, has often been considered equivalent to a form of indeterminism. Here, it is argued that quantum randomness should instead be understood as a form of unpredictability because, amongst other things, indeterminism is not a necessary condition for randomness. For concreteness, an explication of the randomness of quantum mechanics as the unpredictability of quantum measurement outcomes is provided. Finally, it is shown how this view can be combined with the recently introduced view that the very appearance of individual quantum measurement outcomes can be grounded in the Plenitude principle of Leibniz, a principle variants of which have been utilized in physics by Dirac and Gell-Mann in relation to fundamental processes. This move provides further support to Schwinger's "symbolic" derivation of quantum mechanics from measurement. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  1. MARD—A moving average rose diagram application for the geosciences

    Science.gov (United States)

    Munro, Mark A.; Blenkinsop, Thomas G.

    2012-12-01

    MARD 1.0 is a computer program for generating smoothed rose diagrams by using a moving average, which is designed for use across the wide range of disciplines encompassed within the Earth Sciences. Available in MATLAB®, Microsoft® Excel and GNU Octave formats, the program is fully compatible with both Microsoft® Windows and Macintosh operating systems. Each version has been implemented in a user-friendly way that requires no prior experience in programming with the software. MARD conducts a moving average smoothing, a form of signal processing low-pass filter, upon the raw circular data according to a set of pre-defined conditions selected by the user. This form of signal processing filter smoothes the angular dataset, emphasising significant circular trends whilst reducing background noise. Customisable parameters include whether the data is uni- or bi-directional, the angular range (or aperture) over which the data is averaged, and whether an unweighted or weighted moving average is to be applied. In addition to the uni- and bi-directional options, the MATLAB® and Octave versions also possess a function for plotting 2-dimensional dips/pitches in a single, lower, hemisphere. The rose diagrams from each version are exportable as one of a selection of common graphical formats. Frequently employed statistical measures that determine the vector mean, mean resultant (or length), circular standard deviation and circular variance are also included. MARD's scope is demonstrated via its application to a variety of datasets within the Earth Sciences.
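
    The core smoothing step is easy to reproduce (a sketch under our own conventions, not MARD itself): bin the directional data into an angular histogram and average each bin with its neighbours over a chosen aperture, wrapping around 360°; triangular weights give the weighted variant.

        import numpy as np

        def smoothed_rose(angles_deg, bin_width=10, aperture=30, weighted=True):
            """Moving-average smoothing of a circular histogram (uni-directional).
            `aperture` is the full angular window over which bins are averaged."""
            nbins = 360 // bin_width
            counts, _ = np.histogram(np.mod(angles_deg, 360),
                                     bins=nbins, range=(0, 360))
            half = (aperture // bin_width) // 2
            offsets = np.arange(-half, half + 1)
            # triangular weights emphasise the central bin; uniform otherwise
            w = (half + 1 - np.abs(offsets)) if weighted else np.ones_like(offsets)
            w = w / w.sum()
            idx = (np.arange(nbins)[:, None] + offsets) % nbins  # wrap past 360
            return counts[idx] @ w

        rng = np.random.default_rng(0)
        data = np.concatenate([rng.normal(45, 15, 300), rng.uniform(0, 360, 100)])
        print(smoothed_rose(data).round(1))  # peak near 45 deg, noise smoothed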

  2. Random walks and polygons in tight confinement

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Ziegler, U

    2014-01-01

    We discuss the effect of confinement on the topology and geometry of tightly confined random walks and polygons. Here the walks and polygons are confined in a sphere of radius R ≥ 1/2 and the polygons are equilateral with n edges of unit length. We illustrate numerically that for a fixed length of random polygons the knotting probability increases to one as the radius decreases to 1/2. We also demonstrate that for random polygons (walks) the curvature increases to πn (π(n – 1)) as the radius approaches 1/2 and that the torsion decreases to ≈ πn/3 (≈ π(n – 1)/3). In addition we show the effect of length and confinement on the average crossing number of a random polygon

  3. Computer generation of random deviates

    International Nuclear Information System (INIS)

    Cormack, John

    1991-01-01

    The need for random deviates arises in many scientific applications. In medical physics, Monte Carlo simulations have been used in radiology, radiation therapy and nuclear medicine. Specific instances include the modelling of x-ray scattering processes and the addition of random noise to images or curves in order to assess the effects of various processing procedures. Reliable sources of random deviates with statistical properties indistinguishable from true random deviates are a fundamental necessity for such tasks. This paper provides a review of computer algorithms which can be used to generate uniform random deviates and other distributions of interest to medical physicists, along with a few caveats relating to various problems and pitfalls which can occur. Source code listings for the generators discussed (in FORTRAN, Turbo-PASCAL and Data General ASSEMBLER) are available on request from the authors. 27 refs., 3 tabs., 5 figs
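
    Two of the classical recipes such reviews cover can be sketched in a few lines (illustrative only; the paper's own listings are in FORTRAN, Turbo-PASCAL, and assembler): inverse-transform sampling for exponential deviates and the Box-Muller transform for Gaussian deviates, both driven by a uniform generator.

        import math, random

        random.seed(42)          # uniform generator driving both transforms

        def exponential_deviate(mean=1.0):
            # Inverse transform: if U ~ Uniform(0,1), -mean*ln(U) ~ Exp(mean)
            return -mean * math.log(random.random() or 1e-300)

        def normal_deviates():
            # Box-Muller: two uniforms -> two independent standard normals
            u1 = random.random() or 1e-300
            u2 = random.random()
            r = math.sqrt(-2.0 * math.log(u1))
            return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

        print(exponential_deviate(2.0), normal_deviates())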

  4. Average Soil Water Retention Curves Measured by Neutron Radiography

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Chu-Lin [ORNL; Perfect, Edmund [University of Tennessee, Knoxville (UTK); Kang, Misun [ORNL; Voisin, Sophie [ORNL; Bilheux, Hassina Z [ORNL; Horita, Juske [Texas Tech University (TTU); Hussey, Dan [NIST Center for Neutron Research (NCRN), Gaithersburg, MD

    2011-01-01

    Water retention curves are essential for understanding the hydrologic behavior of partially-saturated porous media and modeling flow transport processes within the vadose zone. In this paper we report direct measurements of the main drying and wetting branches of the average water retention function obtained using 2-dimensional neutron radiography. Flint sand columns were saturated with water and then drained under quasi-equilibrium conditions using a hanging water column setup. Digital images (2048 x 2048 pixels) of the transmitted flux of neutrons were acquired at each imposed matric potential (~10-15 matric potential values per experiment) at the NCNR BT-2 neutron imaging beam line. Volumetric water contents were calculated on a pixel by pixel basis using Beer-Lambert's law after taking into account beam hardening and geometric corrections. To remove scattering effects at high water contents the volumetric water contents were normalized (to give relative saturations) by dividing the drying and wetting sequences of images by the images obtained at saturation and satiation, respectively. The resulting pixel values were then averaged and combined with information on the imposed basal matric potentials to give average water retention curves. The average relative saturations obtained by neutron radiography showed an approximate one-to-one relationship with the average values measured volumetrically using the hanging water column setup. There were no significant differences (at p < 0.05) between the parameters of the van Genuchten equation fitted to the average neutron radiography data and those estimated from replicated hanging water column data. Our results indicate that neutron imaging is a very effective tool for quantifying the average water retention curve.

  5. Calculation of thermodynamic properties using the random-phase approximation: alpha-N2

    NARCIS (Netherlands)

    Jansen, A.P.J.; Schoorl, R.

    1988-01-01

    The random-phase approximation (RPA) for molecular crystals is extended in order to calculate thermodynamic properties. A recursion formula for thermodynamic averages of products of mean-field excitation and deexcitation operators is derived. With this formula the thermodynamic average of any

  6. Random numbers from vacuum fluctuations

    International Nuclear Information System (INIS)

    Shi, Yicheng; Kurtsiefer, Christian; Chng, Brenda

    2016-01-01

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.

  7. Random numbers from vacuum fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com [Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543 (Singapore); Chng, Brenda [Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543 (Singapore)

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
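
    A toy version of the post-processing idea (our own sketch; the paper's actual extractor and register taps are not specified here): raw, possibly biased bits are XORed with the output of a maximal-length 16-bit linear feedback shift register, a simple spectral-whitening step. Note this balances the statistics but adds no entropy; real extractors are more careful.

        import random

        def lfsr_whiten(bits, seed=0xACE1):
            """XOR raw bits with a maximal-length 16-bit Fibonacci LFSR stream
            (taps 16, 14, 13, 11)."""
            state, out = seed, []
            for b in bits:
                fb = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (fb << 15)
                out.append(b ^ fb)
            return out

        random.seed(7)
        raw = [1 if random.random() < 0.55 else 0 for _ in range(10000)]  # biased
        white = lfsr_whiten(raw)
        print(sum(raw) / len(raw), sum(white) / len(white))  # ~0.55 vs ~0.50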

  8. Baseline-dependent averaging in radio interferometry

    Science.gov (United States)

    Wijnholds, S. J.; Willis, A. G.; Salvini, S.

    2018-05-01

    This paper presents a detailed analysis of the applicability and benefits of baseline-dependent averaging (BDA) in modern radio interferometers and in particular the Square Kilometre Array. We demonstrate that BDA does not affect the information content of the data other than a well-defined decorrelation loss for which closed form expressions are readily available. We verify these theoretical findings using simulations. We therefore conclude that BDA can be used reliably in modern radio interferometry allowing a reduction of visibility data volume (and hence processing costs for handling visibility data) by more than 80 per cent.

  9. Choosing the best index for the average score intraclass correlation coefficient.

    Science.gov (United States)

    Shieh, Gwowen

    2016-09-01

    The intraclass correlation coefficient ICC(2) from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implication and computational ease.
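
    For orientation, the average-score ICC from a one-way random effects model is commonly computed from the ANOVA mean squares as (MSB − MSW)/MSB (ICC(1,k) in Shrout-Fleiss notation); a small sketch of our own on synthetic ratings:

        import numpy as np

        def icc_average_score(x):
            """x: (n_targets, k_ratings) array. Average-score ICC from a
            one-way random effects ANOVA: (MSB - MSW) / MSB."""
            n, k = x.shape
            grand = x.mean()
            msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
            return (msb - msw) / msb

        rng = np.random.default_rng(0)
        true = rng.normal(0, 1.0, size=(30, 1))            # target effects
        ratings = true + rng.normal(0, 0.8, size=(30, 4))  # 4 ratings per target
        print(icc_average_score(ratings))  # averaging 4 ratings is reliable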

  10. Disability Reconsideration Average Processing Time (in Days) (Excludes technical denials)

    Data.gov (United States)

    Social Security Administration — A presentation of the overall cumulative number of elapsed days (including processing time for transit, medical determinations, and SSA quality review) from the date...

  11. Align and random electrospun mat of PEDOT:PSS and PEDOT:PSS/RGO

    Science.gov (United States)

    Sarabi, Ghazale Asghari; Latifi, Masoud; Bagherzadeh, Roohollah

    2018-01-01

    In this research work we fabricated two ultrafine conductive nanofibrous layers and investigated their composition and properties for supercapacitor applications. The first layer combined a polymer with a conductive polymer; the second was a composite of polymer, conductive polymer, and a carbon-based material. In both cases, aligned and random mats of conductive nanofibers were produced with an electrospinning setup. Conductive poly(3,4-ethylenedioxythiophene)/polystyrene sulfonate (PEDOT:PSS) nanofibers were electrospun by dissolving the fiber-forming polymer, polyvinyl alcohol (PVA), in an aqueous dispersion of PEDOT:PSS, and the effect of adding reduced graphene oxide (RGO) was examined for the nanocomposite layer. A fixed collector and a rotating drum were used for random and aligned nanofiber production, respectively. The resulting fibers were characterized by SEM, FTIR, and a two-point probe conductivity test. The average fiber diameter, measured with the ImageJ software, was about 100 nm for the first layer and about 85 nm for the nanocomposite layer. The presence of PEDOT:PSS and RGO in the nanofibers was confirmed by FT-IR spectroscopy, and the conductivity of the aligned and random layers was characterized; the conductivity of the PEDOT:PSS nanofibers was markedly enhanced by the addition of RGO to the aqueous dispersion. The results show that fiber alignment can serve as an engineering tool for tuning the conductivity of fibrous materials for applications such as supercapacitors and conductive, transparent materials.

  12. Nonergodicity, fluctuations, and criticality in heterogeneous diffusion processes.

    Science.gov (United States)

    Cherstvy, A G; Metzler, R

    2014-07-01

    We study the stochastic behavior of heterogeneous diffusion processes with the power-law dependence D(x) ∼ |x|^α of the generalized diffusion coefficient encompassing sub- and superdiffusive anomalous diffusion. Based on statistical measures such as the amplitude scatter of the time-averaged mean-squared displacement of individual realizations, the ergodicity breaking and non-Gaussianity parameters, as well as the probability density function P(x,t), we analyze the weakly nonergodic character of the heterogeneous diffusion process and, particularly, the degree of irreproducibility of individual realizations. As we show, the fluctuations between individual realizations increase with growing modulus |α| of the scaling exponent. The fluctuations appear to diverge when the critical value α = 2 is approached, while for even larger α the fluctuations decrease again. At criticality, the power-law behavior of the mean-squared displacement changes to an exponentially fast growth, and the fluctuations of the time-averaged mean-squared displacement do not converge for increasing number of realizations. From a systematic comparison we observe some striking similarities of the heterogeneous diffusion process with the familiar subdiffusive continuous time random walk process with power-law waiting time distribution and diverging characteristic waiting time.
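
    A minimal Euler-Maruyama sketch of such a process (our own, with a small regularizing offset at the origin; not the authors' code) that also computes the time-averaged mean-squared displacement of a single trajectory:

        import numpy as np

        def hdp_trajectory(alpha, D0=1.0, dt=1e-3, n=100_000, seed=0):
            """Heterogeneous diffusion dx = sqrt(2*D(x)) dW, D(x) = D0*|x|^alpha.
            A small offset keeps D positive at the origin (regularization)."""
            rng = np.random.default_rng(seed)
            x = np.empty(n); x[0] = 0.1
            dW = rng.normal(0, np.sqrt(dt), n - 1)
            for i in range(n - 1):
                D = D0 * (abs(x[i]) + 1e-6) ** alpha
                x[i + 1] = x[i] + np.sqrt(2 * D) * dW[i]
            return x

        def tamsd(x, lag):
            """Time-averaged MSD of one trajectory at a given lag (in steps)."""
            return np.mean((x[lag:] - x[:-lag]) ** 2)

        x = hdp_trajectory(alpha=0.5)
        # TA-MSD grows roughly linearly in the lag, unlike the ensemble
        # average -- the weakly nonergodic behavior discussed above.
        print([tamsd(x, L) for L in (10, 100, 1000)])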

  13. Parameterization of Time-Averaged Suspended Sediment Concentration in the Nearshore

    Directory of Open Access Journals (Sweden)

    Hyun-Doug Yoon

    2015-11-01

    Full Text Available To quantify the effect of wave breaking turbulence on sediment transport in the nearshore, the vertical distribution of time-averaged suspended sediment concentration (SSC in the surf zone was parameterized in terms of the turbulent kinetic energy (TKE at different cross-shore locations, including the bar crest, bar trough, and inner surf zone. Using data from a large-scale laboratory experiment, a simple relationship was developed between the time-averaged SSC and the time-averaged TKE. The vertical variation of the time-averaged SSC was fitted to an equation analogous to the turbulent dissipation rate term. At the bar crest, the proposed equation was slightly modified to incorporate the effect of near-bed sediment processes and yielded reasonable agreement. This parameterization yielded the best agreement at the bar trough, with a coefficient of determination R^2 ≥ 0.72 above the bottom boundary layer. The time-averaged SSC in the inner surf zone showed good agreement near the bed but poor agreement near the water surface, suggesting that there is a different sedimentation mechanism that controls the SSC in the inner surf zone.

  14. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  15. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.

  16. Bayesian model averaging and weighted average least squares : Equivariance, stability, and numerical issues

    NARCIS (Netherlands)

    De Luca, G.; Magnus, J.R.

    2011-01-01

    In this article, we describe the estimation of linear regression models with uncertainty about the choice of the explanatory variables. We introduce the Stata commands bma and wals, which implement, respectively, the exact Bayesian model-averaging estimator and the weighted-average least-squares estimator.

  17. A teachable moment communication process for smoking cessation talk: description of a group randomized clinician-focused intervention

    Directory of Open Access Journals (Sweden)

    Flocke Susan A

    2012-05-01

    Full Text Available Abstract Background Effective clinician-patient communication about health behavior change is one of the most important and most overlooked strategies to promote health and prevent disease. Existing guidelines for specific health behavior counseling have been created and promulgated, but not successfully adopted in primary care practice. Building on work focused on creating effective clinician strategies for prompting health behavior change in the primary care setting, we developed an intervention intended to enhance clinician communication skills to create and act on teachable moments for smoking cessation. In this manuscript, we describe the development and implementation of the Teachable Moment Communication Process (TMCP intervention and the baseline characteristics of a group randomized trial designed to evaluate its effectiveness. Methods/Design This group randomized trial includes thirty-one community-based primary care clinicians practicing in Northeast Ohio and 840 of their adult patients. Clinicians were randomly assigned to receive either the Teachable Moments Communication Process (TMCP intervention for smoking cessation, or the delayed intervention. The TMCP intervention consisted of two, 3-hour educational training sessions including didactic presentation, skill demonstration through video examples, skills practices with standardized patients, and feedback from peers and the trainers. For each clinician enrolled, 12 patients were recruited for two time points. Pre- and post-intervention data from the clinicians, patients and audio-recorded clinician‒patient interactions were collected. At baseline, the two groups of clinicians and their patients were similar with regard to all demographic and practice characteristics examined. Both physician and patient recruitment goals were met, and retention was 96% and 94% respectively. Discussion Findings support the feasibility of training clinicians to use the Teachable Moments

  18. Quantum random number generator

    Science.gov (United States)

    Soubusta, Jan; Haderka, Ondrej; Hendrych, Martin

    2001-03-01

    Since reflection or transmission of a quantum particle on a beamsplitter is an inherently random quantum process, a device built on this principle does not suffer from the drawbacks of either pseudo-random computer generators or classical noise sources. Nevertheless, a number of physical conditions necessary for high-quality random number generation must be satisfied. Luckily, in a quantum optics realization they can be well controlled. We present an easy random number generator based on the division of weak light pulses on a beamsplitter. The randomness of the generated bit stream is supported by passing the data through a series of 15 statistical tests. The device generates at a rate of 109.7 kbit/s.

  19. Superparamagnetic perpendicular magnetic tunnel junctions for true random number generators

    Science.gov (United States)

    Parks, Bradley; Bapna, Mukund; Igbokwe, Julianne; Almasi, Hamid; Wang, Weigang; Majetich, Sara A.

    2018-05-01

    Superparamagnetic perpendicular magnetic tunnel junctions are fabricated and analyzed for use in random number generators. Time-resolved resistance measurements are used as streams of bits in statistical tests for randomness. Voltage control of the thermal stability enables tuning the average speed of random bit generation up to 70 kHz in a 60 nm diameter device. In its most efficient operating mode, the device generates random bits at an energy cost of 600 fJ/bit. A narrow range of magnetic field tunes the probability of a given state from 0 to 1, offering a means of probabilistic computing.

  20. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates the signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose an algorithm which generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.

  1. Adaptive Mean Queue Size and Its Rate of Change: Queue Management with Random Dropping

    OpenAIRE

    Karmeshu; Patel, Sanjeev; Bhatnagar, Shalabh

    2016-01-01

    The Random early detection (RED) active queue management (AQM) scheme uses the average queue size to calculate the dropping probability in terms of minimum and maximum thresholds. The effect of heavy load enhances the frequency of crossing the maximum threshold value, resulting in frequent dropping of packets. An adaptive queue management with random dropping (AQMRD) algorithm is proposed which incorporates information not just about the average queue size but also about the rate of change of the average queue size.

  2. Random skew plane partitions and the Pearcey process

    DEFF Research Database (Denmark)

    Reshetikhin, Nicolai; Okounkov, Andrei

    2007-01-01

    We study random skew 3D partitions weighted by q^vol and, specifically, the q → 1 asymptotics of local correlations near various points of the limit shape. We obtain sine-kernel asymptotics for correlations in the bulk of the disordered region, Airy kernel asymptotics near a general point of the ...

  3. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

    We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
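
    A rough simulation of the mechanism described (our own sketch, with illustrative parameter choices): each stage keeps the incoming events and lets every event spawn a Poisson number of delayed offspring, the collection over all events forming the next stage's doubly stochastic contribution.

        import numpy as np

        def branching_stage(times, mean_offspring, mean_delay, horizon, rng):
            """Carry events forward and add, for each event, a Poisson number
            of offspring at exponentially distributed delays after the parent."""
            kids = [t + rng.exponential(mean_delay, rng.poisson(mean_offspring))
                    for t in times]
            allt = np.concatenate([times] + kids) if kids else times
            return np.sort(allt[allt < horizon])

        rng = np.random.default_rng(1)
        events = np.sort(rng.uniform(0, 10, rng.poisson(20)))  # initial HPP
        for _ in range(5):                                     # five stages
            events = branching_stage(events, 0.3, 0.5, 10.0, rng)
        print(len(events))  # the cascade grows the initial Poisson population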

  4. LED Lighting System Reliability Modeling and Inference via Random Effects Gamma Process and Copula Function

    Directory of Open Access Journals (Sweden)

    Huibing Hao

    2015-01-01

    Full Text Available Light emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. For different functions (i.e., illumination and color), a lamp may have two or more performance characteristics. When the multiple performance characteristics are dependent, accurately analyzing the system reliability becomes challenging. In this paper, we assume that the system has two performance characteristics, each governed by a random effects Gamma process, where the random effects capture the unit-to-unit differences. The dependency of the performance characteristics is described by a Frank copula function. Via the copula function, a reliability assessment model is proposed. Since the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data is given to demonstrate the usefulness and validity of the proposed model and method.

  5. The growth of the mean average crossing number of equilateral polygons in confinement

    International Nuclear Information System (INIS)

    Arsuaga, J; Borgo, B; Scharein, R; Diao, Y

    2009-01-01

    The physical and biological properties of collapsed long polymer chains as well as of highly condensed biopolymers (such as DNA in all organisms) are known to be determined, at least in part, by their topological and geometrical properties. With the purpose of characterizing the topological properties of such condensed systems, equilateral random polygons restricted to confined volumes are often used. However, very few analytical results are known. In this paper, we investigate the effect of volume confinement on the mean average crossing number (ACN) of equilateral random polygons. The mean ACN of knots and links under confinement provides a simple alternative measurement for the topological complexity of knots and links in the statistical sense. For an equilateral random polygon of n segments without any volume confinement constraint, it is known that its mean ACN ⟨ACN⟩ is of the order (3/16) n log n + O(n). Here we model the confining volume as a simple sphere of radius R. We provide an analytical argument which shows that ⟨ACN⟩ of an equilateral random polygon of n segments under extreme confinement (meaning R close to 1/2) grows as O(n^2). We propose to model the growth of ⟨ACN⟩ as a(R)n^2 + b(R)n ln(n) under a less-extreme confinement condition, where a(R) and b(R) are functions of R, with R being the radius of the confining sphere. Computer simulations performed show a fairly good fit using this model.

  6. Omega-3 and -6 fatty acid supplementation and sensory processing in toddlers with ASD symptomology born preterm: A randomized controlled trial.

    Science.gov (United States)

    Boone, Kelly M; Gracious, Barbara; Klebanoff, Mark A; Rogers, Lynette K; Rausch, Joseph; Coury, Daniel L; Keim, Sarah A

    2017-12-01

    Despite advances in the health and long-term survival of infants born preterm, they continue to face developmental challenges including higher risk for autism spectrum disorder (ASD) and atypical sensory processing patterns. This secondary analysis aimed to describe sensory profiles and explore effects of combined dietary docosahexaenoic acid (DHA), eicosapentaenoic acid (EPA), and gamma-linolenic acid (GLA) supplementation on parent-reported sensory processing in toddlers born preterm who were exhibiting ASD symptoms. 90-day randomized, double blinded, placebo-controlled trial. 31 children aged 18-38 months who were born at ≤29 weeks' gestation. Mixed effects regression analyses followed intent to treat and explored effects on parent-reported sensory processing measured by the Infant/Toddler Sensory Profile (ITSP). Baseline ITSP scores reflected atypical sensory processing, with the majority of atypical scores falling below the mean. Sensory processing sections: auditory (above=0%, below=65%), vestibular (above=13%, below=48%), tactile (above=3%, below=35%), oral sensory (above=10%; below=26%), visual (above=10%, below=16%); sensory processing quadrants: low registration (above=3%; below=71%), sensation avoiding (above=3%; below=39%), sensory sensitivity (above=3%; below=35%), and sensation seeking (above=10%; below=19%). Twenty-eight of 31 children randomized had complete outcome data. Although not statistically significant (p=0.13), the magnitude of the effect for reduction in behaviors associated with sensory sensitivity was medium to large (effect size=0.57). No other scales reflected a similar magnitude of effect size (range: 0.10 to 0.32). The findings provide support for larger randomized trials of omega fatty acid supplementation for children at risk of sensory processing difficulties, especially those born preterm. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Correlated continuous time random walk and option pricing

    Science.gov (United States)

    Lv, Longjin; Xiao, Jianbin; Fan, Liangzhong; Ren, Fuyao

    2016-04-01

    In this paper, we study a correlated continuous time random walk (CCTRW) with averaged waiting time, whose probability density function (PDF) is proved to follow a stretched Gaussian distribution. Then, we apply this process to the option pricing problem. Supposing the price of the underlying is driven by this CCTRW, we find this model captures the subdiffusive characteristic of financial markets. By using the mean self-financing hedging strategy, we obtain the closed-form pricing formulas for a European option with and without transaction costs, respectively. At last, comparing the obtained model with the classical Black-Scholes model, we find the price obtained in this paper is higher than that obtained from the Black-Scholes model. An empirical analysis is also introduced to confirm that the obtained results fit the real data well.

  8. A random matrix approach to credit risk.

    Directory of Open Access Journals (Sweden)

    Michael C Münnix

    Full Text Available We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  9. A random matrix approach to credit risk.

    Science.gov (United States)

    Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas

    2014-01-01

    We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.

  10. Parameters, test criteria and fault assessment in random sampling of waste barrels from non-qualified processes

    International Nuclear Information System (INIS)

    Martens, B.R.

    1989-01-01

    In the context of random sampling tests, parameters are checked on the waste barrels and criteria are given on which these tests are based. Also, it is shown how faulty data on the properties of the waste or faulty waste barrels should be treated. To decide the extent of testing, the properties of the waste relevant to final storage are determined based on the conditioning process used. (DG) [de

  11. Markov process of muscle motors

    International Nuclear Information System (INIS)

    Kondratiev, Yu; Pechersky, E; Pirogov, S

    2008-01-01

    We study a Markov random process describing muscle molecular motor behaviour. Every motor is either bound up with a thin filament or unbound. In the bound state the motor creates a force proportional to its displacement from the neutral position. In both states the motor spends an exponential time depending on the state. The thin filament moves at a velocity proportional to the average of all displacements of all motors. We assume that the time which a motor stays in the bound state does not depend on its displacement. Then one can find an exact solution of a nonlinear equation appearing in the limit of an infinite number of motors
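
    A crude discrete-time caricature of the limiting dynamics (our own construction, with made-up rates; the power-stroke attachment offset is our addition to break the symmetry): motors bind and unbind at exponential rates, bound motors are dragged by the filament, and the filament velocity is proportional to the mean displacement of all motors.

        import numpy as np

        rng = np.random.default_rng(0)
        N, dt, steps = 1000, 1e-3, 20000
        k_on, k_off, gamma = 5.0, 2.0, 1.0     # illustrative rates and coupling
        bound = rng.random(N) < 0.5
        disp = np.where(bound, rng.normal(0.5, 0.2, N), 0.0)
        v_hist = []
        for _ in range(steps):
            v = -gamma * disp.mean()           # velocity ~ mean displacement
            disp[bound] += v * dt              # bound motors track the filament
            unbind = bound & (rng.random(N) < k_off * dt)  # exponential sojourns
            bind = (~bound) & (rng.random(N) < k_on * dt)
            disp[unbind] = 0.0                 # detached motors relax to neutral
            disp[bind] = rng.normal(0.5, 0.2, int(bind.sum()))  # attach stretched
            bound = (bound & ~unbind) | bind
            v_hist.append(v)
        print(np.mean(v_hist[-5000:]))         # velocity settles to a steady value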

  12. Averaging in spherically symmetric cosmology

    International Nuclear Information System (INIS)

    Coley, A. A.; Pelavas, N.

    2007-01-01

    The averaging problem in cosmology is of fundamental importance. When applied to study cosmological evolution, the theory of macroscopic gravity (MG) can be regarded as a long-distance modification of general relativity. In the MG approach to the averaging problem in cosmology, the Einstein field equations on cosmological scales are modified by appropriate gravitational correlation terms. We study the averaging problem within the class of spherically symmetric cosmological models. That is, we shall take the microscopic equations and effect the averaging procedure to determine the precise form of the correlation tensor in this case. In particular, by working in volume-preserving coordinates, we calculate the form of the correlation tensor under some reasonable assumptions on the form for the inhomogeneous gravitational field and matter distribution. We find that the correlation tensor in a Friedmann-Lemaitre-Robertson-Walker (FLRW) background must be of the form of a spatial curvature. Inhomogeneities and spatial averaging, through this spatial curvature correction term, can have a very significant dynamical effect on the dynamics of the Universe and cosmological observations; in particular, we discuss whether spatial averaging might lead to a more conservative explanation of the observed acceleration of the Universe (without the introduction of exotic dark matter fields). We also find that the correlation tensor for a non-FLRW background can be interpreted as the sum of a spatial curvature and an anisotropic fluid. This may lead to interesting effects of averaging on astrophysical scales. We also discuss the results of averaging an inhomogeneous Lemaitre-Tolman-Bondi solution as well as calculations of linear perturbations (that is, the backreaction) in an FLRW background, which support the main conclusions of the analysis

  13. Efficient Numerical Methods for Analysis of Square Ratio of κ-μ and η-μ Random Processes with Their Applications in Telecommunications

    Directory of Open Access Journals (Sweden)

    Gradimir V. Milovanović

    2018-01-01

    Full Text Available We provide a statistical analysis of the square ratio of κ-μ and η-μ random processes and its application to the signal-to-interference ratio (SIR) based performance analysis of wireless transmission subject to multipath fading, modelled by the κ-μ fading model, and the undesired occurrence of co-channel interference (CCI), distributed as an η-μ random process. The first contribution of the paper is the derivation of exact closed-form expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the square ratio of κ-μ and η-μ random processes. Further, the accuracy of these PDF and CDF expressions is verified by comparison with the corresponding approximations obtained by high-precision quadrature formulas of Gaussian type with respect to weight functions on (0,+∞). The computational procedure for such quadrature rules is provided by the constructive theory of orthogonal polynomials and the MATHEMATICA package OrthogonalPolynomials created by Cvetković and Milovanović (2004). Capitalizing on the obtained expressions, an important wireless performance criterion, namely the outage probability (OP), is obtained as a function of the transmission parameters. Also, possible performance improvement through selection combining (SC) reception is examined, based on the obtained expressions.
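
    The quadrature ingredient is easy to illustrate (a generic sketch with scipy, not the paper's OrthogonalPolynomials package): an integral over (0, +∞) against an exponential-type weight is approximated by a Gaussian rule with only a handful of nodes.

        import numpy as np
        from scipy.special import roots_laguerre

        # Approximate I = integral_0^inf f(x) e^{-x} dx by Gauss-Laguerre rules
        f = lambda x: np.sqrt(x) / (1.0 + x)       # sample integrand
        for n in (5, 10, 20):
            x, w = roots_laguerre(n)
            print(n, np.dot(w, f(x)))
        # The estimates stabilize quickly; Gaussian rules converge fast for
        # smooth integrands against the weight e^{-x} on (0, +inf).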

  14. Estimating average glandular dose by measuring glandular rate in mammograms

    International Nuclear Information System (INIS)

    Goto, Sachiko; Azuma, Yoshiharu; Sumimoto, Tetsuhiro; Eiho, Shigeru

    2003-01-01

    The glandular rate of the breast was objectively measured in order to calculate individual patient exposure dose (average glandular dose) in mammography. By employing image processing techniques and breast-equivalent phantoms with various glandular rate values, a conversion curve for pixel value to glandular rate can be determined by a neural network. Accordingly, the pixel values in clinical mammograms can be converted to the glandular rate value for each pixel. The individual average glandular dose can therefore be calculated using the individual glandular rates on the basis of the dosimetry method employed for quality control in mammography. In the present study, a data set of 100 craniocaudal mammograms from 50 patients was used to evaluate our method. The average glandular rate and average glandular dose of the data set were 41.2% and 1.79 mGy, respectively. The error in calculating the individual glandular rate can be estimated to be less than ±3%. When the calculation error of the glandular rate is taken into consideration, the error in the individual average glandular dose can be estimated to be 13% or less. We feel that our method for determining the glandular rate from mammograms is useful for minimizing subjectivity in the evaluation of patient breast composition. (author)

  15. Rapid, easy, and cheap randomization: prospective evaluation in a study cohort

    Directory of Open Access Journals (Sweden)

    Parker Melissa J

    2012-06-01

    Full Text Available Abstract Background When planning a randomized controlled trial (RCT, investigators must select randomization and allocation procedures based upon a variety of factors. While third party randomization is cited as being among the most desirable randomization processes, many third party randomization procedures are neither feasible nor cost-effective for small RCTs, including pilot RCTs. In this study we present our experience with a third party randomization and allocation procedure that utilizes current technology to achieve randomization in a rapid, reliable, and cost-effective manner. Methods This method was developed by the investigators for use in a small 48-participant parallel group RCT with four study arms. As a nested study, the reliability of this randomization procedure was prospectively evaluated in this cohort. The primary outcome of this nested study was the proportion of subjects for whom allocation information was obtained by the Research Assistant within 15 min of the initial participant randomization request. A secondary outcome was the average time for communicating participant group assignment back to the Research Assistant. Descriptive information regarding any failed attempts at participant randomization as well as costs attributable to use of this method were also recorded. Statistical analyses included the calculation of simple proportions and descriptive statistics. Results Forty-eight participants were successfully randomized and group allocation instruction was received for 46 (96% within 15 min of the Research Assistant placing the initial randomization request. Time elapsed in minutes until receipt of participant allocation instruction was Mean (SD 3.1 +/− 3.6; Median (IQR 2 (2,3; Range (1–20 for the entire cohort of 48. For the two participants for whom group allocation information was not received by the Research Assistant within the 15-min pass threshold, this information was obtained following a second

  16. Low to Moderate Average Alcohol Consumption and Binge Drinking in Early Pregnancy: Effects on Choice Reaction Time and Information Processing Time in Five-Year-Old Children.

    Directory of Open Access Journals (Sweden)

    Tina R Kilburn

    Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60-64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1-4. This study observed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT as assessed by the Sternberg paradigm. However, there were some indications of CRT being associated with binge drinking during very early pregnancy. Further large-scale studies are needed to investigate effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring.

  17. How to average logarithmic retrievals?

    Directory of Open Access Journals (Sweden)

    B. Funke

    2012-04-01

    Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case, because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because in logarithmic retrievals the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
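
    The geometric-versus-arithmetic part of this bias is easy to reproduce numerically. The following sketch (a noise-free caricature of the retrieval problem, not the paper's system simulator) draws lognormal abundances and compares averaging the values with exponentiating the averaged logarithms; the gap grows with the natural variability sigma.

      import numpy as np

      rng = np.random.default_rng(1)
      true_mean = 1.0
      for sigma in (0.1, 0.5, 1.0):              # natural variability (log-space std)
          # Lognormal abundances constructed to have arithmetic mean true_mean.
          x = rng.lognormal(np.log(true_mean) - 0.5 * sigma**2, sigma, size=100_000)
          linear_avg = x.mean()                  # average of the abundances
          log_avg = np.exp(np.log(x).mean())     # exponentiated average of the logs
          print(f"sigma={sigma}: linear {linear_avg:.3f}, logarithmic {log_avg:.3f} "
                f"(bias {100 * (log_avg / true_mean - 1):+.1f}%)")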

  18. An average salary: approaches to the index determination

    Directory of Open Access Journals (Sweden)

    T. M. Pozdnyakova

    2017-01-01

    The article “An average salary: approaches to the index determination” is devoted to studying various methods of calculating this index, both those used by the official state statistics of the Russian Federation and those offered by modern researchers. The purpose of this research is to analyze the existing approaches to calculating the average salary of employees of enterprises and organizations, and to make certain additions that help to clarify this index. The information base of the research comprises laws and regulations of the Russian Federation Government, statistical and analytical materials of the Federal State Statistics Service of Russia for the section «Socio-economic indexes: living standards of the population», and materials of scientific papers describing different approaches to average salary calculation. The data on the average salary of employees of educational institutions of the Khabarovsk region served as the experimental base of the research. The following methods were used in the research: analytical, statistical, computational-mathematical and graphical. The main result of the research is a proposal to supplement the method of calculating the average salary index within enterprises or organizations, used by Goskomstat of Russia, by introducing a correction factor. Its essence consists in the specific formation of material indexes for different categories of employees in enterprises or organizations, mainly those engaged in internal secondary jobs. The need for this correction factor arises from the current working conditions in a wide range of organizations, where an employee is often forced to fulfill additional job duties in addition to the main position. As a result, the average salary at the enterprise is frequently difficult to assess objectively because it involves calculating multiple rates per staff member. In other words, the average salary of

  19. Certified randomness in quantum physics.

    Science.gov (United States)

    Acín, Antonio; Masanes, Lluis

    2016-12-07

    The concept of randomness plays an important part in many disciplines. On the one hand, the question of whether random processes exist is fundamental for our understanding of nature. On the other, randomness is a resource for cryptography, algorithms and simulations. Standard methods for generating randomness rely on assumptions about the devices that are often not valid in practice. However, quantum technologies enable new methods for generating certified randomness, based on the violation of Bell inequalities. These methods are referred to as device-independent because they do not rely on any modelling of the devices. Here we review efforts to design device-independent randomness generators and the associated challenges.

  20. An Analysis of the Marine Corps Individual Ready Reserve Screening Process

    Science.gov (United States)

    2015-03-01

    … reduces the number of contacts possible in a fiscal year. STRENGTHS: The current screening process has many aspects that work well or may be … average, 80 percent in a fiscal year. Currently, the RSP Marine works randomly down a contact list in their assigned region; the staff's main focus is … remaining of obligated service, a request for sanctuary, extreme hardship, an Active Status Listing, enrollment in theology or divinity school, and being …

  1. Hierarchical random cellular neural networks for system-level brain-like signal processing.

    Science.gov (United States)

    Kozma, Robert; Puljic, Marko

    2013-09-01

    Sensory information processing and cognition in brains are modeled using dynamic systems theory. The brain's dynamic state is described by a trajectory evolving in a high-dimensional state space. We introduce a hierarchy of random cellular automata as the mathematical tools to describe the spatio-temporal dynamics of the cortex. The corresponding brain model is called neuropercolation, which has distinct advantages compared to traditional models using differential equations, especially in describing spatio-temporal discontinuities in the form of phase transitions. Phase transitions demarcate singularities in brain operations at critical conditions, which are viewed as hallmarks of higher cognition and awareness experience. The introduced Monte Carlo simulations obtained by parallel computing point to the importance of computer implementations using very large-scale integration (VLSI) and analog platforms.

  3. Ambient Modal Testing of the Vestvej Bridge using Random Decrement

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune; Rytter, A.

    1998-01-01

    This paper presents an ambient vibration study of the Vestvej Bridge. The bridge is a typical Danish two-span concrete bridge which crosses a highway. The purpose of the study is to perform a pre-investigation of the dynamic behavior to obtain information for the design of a demonstration project … concerning application of vibration-based inspection of bridges. The data analysis process of ambient vibration testing of bridges has traditionally been based on auto and cross spectral densities estimated using an FFT algorithm. In the pre-analysis state the spectral densities are all averaged to obtain … measurements might have a low signal-to-noise ratio. Thus, it might be difficult to clearly identify physical modes from the spectral densities. The Random Decrement (RD) technique is another method to perform the data analysis process in the time domain only. It is basically a very simple and very easily …
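
    A minimal sketch of the Random Decrement idea mentioned above: segments of the response that start at a level-crossing trigger are averaged, so the zero-mean random part tends to cancel and a free-decay-like signature remains. The signal below is a synthetic single-degree-of-freedom response, and the triggering convention is one simple choice among several used in practice.

      import numpy as np

      def random_decrement(y, trigger_level, seg_len):
          """Average all segments of y starting where y crosses trigger_level upward."""
          idx = np.where((y[:-1] < trigger_level) & (y[1:] >= trigger_level))[0]
          idx = idx[idx + seg_len < len(y)]
          segments = np.stack([y[i:i + seg_len] for i in idx])
          return segments.mean(axis=0), len(idx)

      # Synthetic ambient response: lightly damped oscillator driven by noise.
      rng = np.random.default_rng(2)
      dt, n = 0.01, 200_000
      f0, zeta = 2.0, 0.02                  # natural frequency (Hz), damping ratio
      w0 = 2 * np.pi * f0
      y = np.zeros(n)
      v = 0.0
      for k in range(1, n):                 # crude Euler integration of the SDOF system
          a = -2 * zeta * w0 * v - w0**2 * y[k - 1] + rng.normal()
          v += a * dt
          y[k] = y[k - 1] + v * dt

      signature, n_trig = random_decrement(y, trigger_level=y.std(), seg_len=500)
      print(f"averaged {n_trig} segments; the RD signature decays at roughly {f0} Hz")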

  4. Analytical explicit formulas of average run length for long memory process with ARFIMA model on CUSUM control chart

    Directory of Open Access Journals (Sweden)

    Wilasinee Peerajit

    2017-12-01

    This paper derives explicit formulas for the exact Average Run Length (ARL) of a CUSUM control chart, obtained via an integral equation, when observations follow a long-memory process with exponential white noise. To verify accuracy, the authors compared the ARL values obtained from the explicit formulas with those of the numerical integral equation (NIE) method in terms of the percentage of absolute difference. The explicit formulas rest on the Banach fixed point theorem, which guarantees the existence and uniqueness of the solution for the ARFIMA(p, d, q) process. Results showed that the two methods are in good agreement, with a percentage of absolute difference of less than 0.23%. The explicit formulas are therefore an efficient alternative for real applications, since their ARL computations take about 1 second of CPU time, which makes them preferable to the NIE method.
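
    For orientation, an ARL of this kind can also be estimated by brute-force Monte Carlo; the sketch below does so for a one-sided CUSUM with i.i.d. exponential observations, a simplification of the paper's long-memory ARFIMA setting, and the reference value k and threshold h are arbitrary choices.

      import numpy as np

      def cusum_run_length(rng, k, h, mean=1.0):
          """Run a one-sided CUSUM S_t = max(0, S_{t-1} + X_t - k) until S_t > h."""
          s, t = 0.0, 0
          while s <= h:
              s = max(0.0, s + rng.exponential(mean) - k)
              t += 1
              if t > 1_000_000:             # safety cap for very long runs
                  break
          return t

      rng = np.random.default_rng(3)
      runs = [cusum_run_length(rng, k=1.5, h=4.0) for _ in range(5_000)]
      err = np.std(runs) / len(runs) ** 0.5
      print(f"estimated in-control ARL: {np.mean(runs):.1f} +/- {err:.1f}")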

  5. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
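
    A sketch of the random-intercept formulation in (b) and (c), using statsmodels on made-up data; the kinship-based variance-covariance structure used in the paper (GRAMMAR) is simplified here to a plain per-subject random intercept, and all variable names and effect sizes are invented.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Made-up longitudinal data: repeated blood-pressure measures per subject.
      rng = np.random.default_rng(4)
      n_subj, n_visits = 200, 3
      subj = np.repeat(np.arange(n_subj), n_visits)
      snp = np.repeat(rng.integers(0, 3, n_subj), n_visits)   # genotype coded 0/1/2
      visit = np.tile(np.arange(n_visits), n_subj)
      bp = (120 + 2.0 * snp + 1.5 * visit
            + np.repeat(rng.normal(0, 5, n_subj), n_visits)   # subject random effect
            + rng.normal(0, 3, n_subj * n_visits))            # measurement noise
      df = pd.DataFrame({"bp": bp, "snp": snp, "visit": visit, "subj": subj})

      # Fixed effects for genotype and visit, random intercept per subject
      # to absorb the within-person correlation of the repeated measures.
      result = smf.mixedlm("bp ~ snp + visit", df, groups=df["subj"]).fit()
      print(result.summary())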

  6. Large-signal analysis of DC motor drive system using state-space averaging technique

    International Nuclear Information System (INIS)

    Bekir Yildiz, Ali

    2008-01-01

    The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodic time-variant because of their switching operation, into unified and time-independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes for the DC motor can be easily realized by using the unified averaged model, which is valid during the whole period. Some large-signal variations such as speed and current relating to the DC motor, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model.
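
    The averaging step itself is compact enough to show numerically. The sketch below applies it to an ideal boost converter rather than the paper's converter-motor system: the two switch-state models are weighted by their dwell fractions, giving one time-invariant model whose DC solution recovers the familiar Vin/(1-d) conversion ratio. Component values are arbitrary.

      import numpy as np

      # Ideal boost converter: states x = [inductor current, output voltage].
      L, C, Rload, Vin, d = 1e-3, 100e-6, 10.0, 12.0, 0.5    # d = duty cycle

      # Switch closed (on-state): inductor charges, load is fed by the capacitor.
      A_on = np.array([[0.0, 0.0],
                       [0.0, -1.0 / (Rload * C)]])
      # Switch open (off-state): inductor current flows to the output.
      A_off = np.array([[0.0, -1.0 / L],
                        [1.0 / C, -1.0 / (Rload * C)]])
      B = np.array([Vin / L, 0.0])            # source term, same in both states

      # State-space averaging: weight each switch-state model by its dwell
      # fraction, turning the switched system into one time-invariant model.
      A_avg = d * A_on + (1.0 - d) * A_off

      # DC operating point of the averaged model: 0 = A_avg x + B.
      x_ss = np.linalg.solve(A_avg, -B)
      print(f"I_L = {x_ss[0]:.2f} A, V_out = {x_ss[1]:.2f} V "
            f"(ideal boost: Vin/(1-d) = {Vin / (1 - d):.1f} V)")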

  7. Random practice - one of the factors of the motor learning process

    Directory of Open Access Journals (Sweden)

    Petr Valach

    2012-01-01

    BACKGROUND: An important concept in acquiring motor skills is random practice (contextual interference, CI). The explanation of the contextual interference effect is that memory has to work more intensively, and therefore random practice yields higher retention of motor skills than blocked practice. Only active remembering of a motor skill gives it practical value for appropriate use in the future. OBJECTIVE: The aim of this research was to determine how motor skills in sport gymnastics are acquired and retained under two different teaching methods: blocked and random practice. METHODS: Blocked and random practice on three selected gymnastics tasks were applied in two groups of physical education students (blocked practice: group BP; random practice: group RP) over two months, in one session a week (80 trials in total). At the end of the experiment and 6 months later (retention tests), the groups were tested on the selected gymnastics skills. RESULTS: No significant differences in the level of gymnastics skills were found between the BP and RP groups at the end of the experiment. However, the retention tests showed a significantly higher level of gymnastics skills in the RP group than in the BP group. CONCLUSION: The results confirmed that retention of gymnastics skills was significantly higher with random practice than with blocked practice.

  8. Nonlinear correlations in the hydrophobicity and average flexibility along the glycolytic enzymes sequences

    Energy Technology Data Exchange (ETDEWEB)

    Ciorsac, Alecu, E-mail: aleciorsac@yahoo.co [Politehnica University of Timisoara, Department of Physical Education and Sport, 2 P-ta Victoriei, 300006, Timisoara (Romania); Craciun, Dana, E-mail: craciundana@gmail.co [Teacher Training Department, West University of Timisoara, 4 Boulevard V. Pirvan, Timisoara, 300223 (Romania); Ostafe, Vasile, E-mail: vostafe@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania); Isvoran, Adriana, E-mail: aisvoran@cbg.uvt.r [Department of Chemistry, West University of Timisoara, 16 Pestallozi, 300115, Timisoara (Romania); Laboratory of Advanced Researches in Environmental Protection, Nicholas Georgescu-Roegen Interdisciplinary Research and Formation Platform, 4 Oituz, Timisoara, 300086 (Romania)

    2011-04-15

    Research highlights: We focus our study on the glycolytic enzymes. We reveal correlation of hydrophobicity and flexibility along their chains. We also reveal fractal aspects of the glycolytic enzymes' structures and surfaces. The glycolytic enzyme sequences are not random. Creation of fractal structures requires the operation of nonlinear dynamics. - Abstract: Nonlinear methods widely used for time series analysis were applied to glycolytic enzyme sequences to derive information concerning the correlation of hydrophobicity and average flexibility along their chains. The 20 sequences of different types of the 10 human glycolytic enzymes were considered as spatial series and were analyzed by spectral analysis, detrended fluctuation analysis and Hurst coefficient calculation. The results agree that there are both short-range and long-range correlations of hydrophobicity and average flexibility within the investigated sequences, the short-range correlations being stronger and indicating that local interactions are the most important for protein folding. This correlation is also reflected by the fractal nature of the structures of the investigated proteins.
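
    Of the three methods named above, detrended fluctuation analysis is the easiest to sketch. The toy below applies DFA to an uncorrelated synthetic series standing in for a hydrophobicity profile (window sizes and the fitting range are illustrative); an exponent near 0.5 indicates no correlation, while values above 0.5 indicate persistent long-range correlation.

      import numpy as np

      def dfa_exponent(x, scales):
          """Detrended fluctuation analysis: slope of log F(n) versus log n."""
          y = np.cumsum(x - x.mean())                    # integrated profile
          F = []
          for n in scales:
              n_seg = len(y) // n
              sq = []
              for s in range(n_seg):
                  seg = y[s * n:(s + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)  # local detrending
                  sq.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(sq)))
          slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
          return slope

      rng = np.random.default_rng(5)
      series = rng.normal(size=4096)          # uncorrelated stand-in profile
      scales = np.array([8, 16, 32, 64, 128, 256])
      print(f"DFA exponent ~ {dfa_exponent(series, scales):.2f} (0.5 = uncorrelated)")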

  9. Influence of random setup error on dose distribution

    International Nuclear Information System (INIS)

    Zhai Zhenyu

    2008-01-01

    Objective: To investigate the influence of random setup error on dose distribution in radiotherapy and determine the margin from ITV to PTV. Methods: A random sample approach was used to simulate the field positions in the target coordinate system. The cumulative effect of random setup error was the sum of the dose distributions of all individual treatment fractions. Studying 100 cumulative effects gave the shift sizes of the 90% dose point position. Margins from ITV to PTV caused by random setup error were chosen at 95% probability. Spearman's correlation was used to analyze the influence of each factor. Results: The average shift size of the 90% dose point position was 0.62, 1.84, 3.13, 4.78, 6.34 and 8.03 mm for random setup errors of 1, 2, 3, 4, 5 and 6 mm, respectively. Univariate analysis showed that the size of the margin was associated only with the size of the random setup error. Conclusions: The margin from ITV to PTV is 1.2 times the random setup error for head-and-neck cancer and 1.5 times for thoracic and abdominal cancer. Field size, energy and target depth, unlike random setup error, have no relation to the size of the margin. (authors)
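
    The simulation scheme described in the Methods can be sketched as follows: draw one random setup shift per fraction, accumulate the shifted dose profiles over a course, and read off how far the 90% dose point has moved. The 1-D sigmoid field edge, fraction count, and penumbra width below are illustrative stand-ins for the paper's full dose distributions.

      import numpy as np

      def dose_point_shift(sigma_setup, n_fractions, rng, edge=50.0, penumbra=3.0):
          """Shift (mm) of the 90% dose point after summing randomly shifted fractions."""
          x = np.linspace(0.0, 100.0, 2001)
          ref = 1.0 / (1.0 + np.exp(-(x - edge) / penumbra))   # unshifted field edge
          total = np.zeros_like(x)
          for _ in range(n_fractions):
              shift = rng.normal(0.0, sigma_setup)             # setup error (mm)
              total += 1.0 / (1.0 + np.exp(-(x - edge - shift) / penumbra))
          total /= n_fractions
          x90 = x[np.argmin(np.abs(total - 0.9))]              # 90% dose point
          x90_ref = x[np.argmin(np.abs(ref - 0.9))]
          return abs(x90 - x90_ref)

      rng = np.random.default_rng(6)
      for sigma in (1.0, 3.0, 6.0):
          shifts = [dose_point_shift(sigma, 30, rng) for _ in range(100)]
          print(f"sigma = {sigma} mm: 95th-percentile 90%-point shift "
                f"{np.percentile(shifts, 95):.2f} mm")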

  10. Assimilation of time-averaged observations in a quasi-geostrophic atmospheric jet model

    Energy Technology Data Exchange (ETDEWEB)

    Huntley, Helga S. [University of Washington, Department of Applied Mathematics, Seattle, WA (United States); University of Delaware, School of Marine Science and Policy, Newark, DE (United States); Hakim, Gregory J. [University of Washington, Department of Atmospheric Sciences, Seattle, WA (United States)

    2010-11-15

    The problem of reconstructing past climates from a sparse network of noisy time-averaged observations is considered with a novel ensemble Kalman filter approach. Results for a sparse network of 100 idealized observations for a quasi-geostrophic model of a jet interacting with a mountain reveal that, for a wide range of observation averaging times, analysis errors are reduced by about 50% relative to the control case without assimilation. Results are robust to changes to observational error, the number of observations, and an imperfect model. Specifically, analysis errors are reduced relative to the control case for observations having errors up to three times the climatological variance for a fixed 100-station network, and for networks consisting of ten or more stations when observational errors are fixed at one-third the climatological variance. In the limit of small numbers of observations, station location becomes critically important, motivating an optimally determined network. A network of fifteen optimally determined observations reduces analysis errors by 30% relative to the control, as compared to 50% for a randomly chosen network of 100 observations. (orig.)

  11. Analog model for quantum gravity effects: phonons in random fluids.

    Science.gov (United States)

    Krein, G; Menezes, G; Svaiter, N F

    2010-09-24

    We describe an analog model for quantum gravity effects in condensed matter physics. The situation discussed is that of phonons propagating in a fluid with a random velocity wave equation. We consider that there are random fluctuations in the reciprocal of the bulk modulus of the system and study free phonons in the presence of Gaussian colored noise with zero mean. We show that, in this model, after performing the random averages over the noise function a free conventional scalar quantum field theory describing free phonons becomes a self-interacting model.

  12. Analysis of Known Linear Distributed Average Consensus Algorithms on Cycles and Paths

    Directory of Open Access Journals (Sweden)

    Jesús Gutiérrez-Gutiérrez

    2018-03-01

    In this paper, we compare six known linear distributed average consensus algorithms on a sensor network in terms of convergence time (and therefore, in terms of the number of transmissions required). The selected network topologies for the analysis (comparison) are the cycle and the path. Specifically, in the present paper, we compute closed-form expressions for the convergence time of four known deterministic algorithms and closed-form bounds for the convergence time of two known randomized algorithms on cycles and paths. Moreover, we also compute a closed-form expression for the convergence time of the fastest deterministic algorithm considered on grids.
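
    As a concrete member of this family, the sketch below runs synchronous distributed averaging on a cycle: each node repeatedly replaces its value with the average of itself and its two ring neighbours. The uniform weight 1/3 is a common convention for a cycle, not necessarily the weighting whose convergence time the paper derives in closed form.

      import numpy as np

      def cycle_consensus(x0, eps=1e-6, max_iters=100_000):
          """Synchronous averaging on a cycle until all values agree within eps."""
          x = np.asarray(x0, dtype=float)
          for t in range(max_iters):
              x = (x + np.roll(x, 1) + np.roll(x, -1)) / 3.0   # mix with neighbours
              if x.max() - x.min() < eps:
                  return x, t + 1
          return x, max_iters

      rng = np.random.default_rng(7)
      x0 = rng.uniform(0, 10, size=20)
      x, iters = cycle_consensus(x0)
      print(f"converged to {x[0]:.6f} (true mean {x0.mean():.6f}) in {iters} iterations")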

  13. On Averaging Rotations

    DEFF Research Database (Denmark)

    Gramkow, Claus

    2001-01-01

    In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong … approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation.
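
    The two approaches being compared are easy to place side by side; a sketch using SciPy's Rotation class on randomly scattered rotations follows. The barycenter simply averages (sign-aligned) quaternion components and re-normalizes, while the intrinsic mean iterates averaging in the tangent space via the log and exp maps; for tightly clustered rotations the two nearly coincide.

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def quaternion_barycenter(rots):
          """Naive mean: average the quaternion components, re-normalize."""
          q = rots.as_quat()
          q *= np.sign(q @ q[0])[:, None]     # put all quaternions on one hemisphere
          m = q.mean(axis=0)
          return R.from_quat(m / np.linalg.norm(m))

      def geodesic_mean(rots, iters=20):
          """Intrinsic (Riemannian) mean: iterate averaging in the tangent space."""
          mu = rots[0]
          for _ in range(iters):
              v = (mu.inv() * rots).as_rotvec().mean(axis=0)   # log map, average
              mu = mu * R.from_rotvec(v)                       # exp map back
          return mu

      rng = np.random.default_rng(8)
      rots = R.from_rotvec(rng.normal(0, 0.3, size=(50, 3)))   # scattered rotations
      b, g = quaternion_barycenter(rots), geodesic_mean(rots)
      print("angle between the two mean estimates (deg):",
            np.degrees((b.inv() * g).magnitude()))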

  14. Making working memory work: the effects of extended practice on focus capacity and the processes of updating, forward access, and random access.

    Science.gov (United States)

    Price, John M; Colflesh, Gregory J H; Cerella, John; Verhaeghen, Paul

    2014-05-01

    We investigated the effects of 10 h of practice on variations of the N-Back task to examine the processes underlying possible expansion of the focus of attention within working memory. Using subtractive logic, we showed that random access (i.e., Sternberg-like search) yielded a modest effect (a 50% increase in speed), whereas the processes of forward access (i.e., retrieval in order, as in a standard N-Back task) and updating (i.e., changing the contents of working memory) were executed about 5 times faster after extended practice. We additionally found that extended practice increased working memory capacity as measured by the size of the focus of attention for the forward-access task, but not for variations where probing was in random order. This suggests that working memory capacity may depend on the type of search process engaged, and that certain working-memory-related cognitive processes are more amenable to practice than others.

  15. Chaos, complexity, and random matrices

    Science.gov (United States)

    Cotler, Jordan; Hunter-Jones, Nicholas; Liu, Junyu; Yoshida, Beni

    2017-11-01

    Chaos and complexity entail an entropic and computational obstruction to describing a system, and thus are intrinsically difficult to characterize. In this paper, we consider time evolution by Gaussian Unitary Ensemble (GUE) Hamiltonians and analytically compute out-of-time-ordered correlation functions (OTOCs) and frame potentials to quantify scrambling, Haar-randomness, and circuit complexity. While our random matrix analysis gives a qualitatively correct prediction of the late-time behavior of chaotic systems, we find unphysical behavior at early times including an O(1) scrambling time and the apparent breakdown of spatial and temporal locality. The salient feature of GUE Hamiltonians which gives us computational traction is the Haar-invariance of the ensemble, meaning that the ensemble-averaged dynamics look the same in any basis. Motivated by this property of the GUE, we introduce k-invariance as a precise definition of what it means for the dynamics of a quantum system to be described by random matrix theory. We envision that the dynamical onset of approximate k-invariance will be a useful tool for capturing the transition from early-time chaos, as seen by OTOCs, to late-time chaos, as seen by random matrix theory.

  16. Role of spatial averaging in multicellular gradient sensing.

    Science.gov (United States)

    Smith, Tyler; Fancher, Sean; Levchenko, Andre; Nemenman, Ilya; Mugler, Andrew

    2016-05-20

    Gradient sensing underlies important biological processes including morphogenesis, polarization, and cell migration. The precision of gradient sensing increases with the length of a detector (a cell or group of cells) in the gradient direction, since a longer detector spans a larger range of concentration values. Intuition from studies of concentration sensing suggests that precision should also increase with detector length in the direction transverse to the gradient, since then spatial averaging should reduce the noise. However, here we show that, unlike for concentration sensing, the precision of gradient sensing decreases with transverse length for the simplest gradient sensing model, local excitation-global inhibition. The reason is that gradient sensing ultimately relies on a subtraction of measured concentration values. While spatial averaging indeed reduces the noise in these measurements, which increases precision, it also reduces the covariance between the measurements, which results in the net decrease in precision. We demonstrate how a recently introduced gradient sensing mechanism, regional excitation-global inhibition (REGI), overcomes this effect and recovers the benefit of transverse averaging. Using a REGI-based model, we compute the optimal two- and three-dimensional detector shapes, and argue that they are consistent with the shapes of naturally occurring gradient-sensing cell populations.
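
    The subtraction argument can be written out directly; the following is a simplified variance bookkeeping, not the paper's full LEGI/REGI calculation. Writing the gradient estimate as the difference of two noisy concentration readings, \hat{g} \propto c_1 - c_2, its noise obeys

      \operatorname{Var}(c_1 - c_2) = \operatorname{Var}(c_1) + \operatorname{Var}(c_2) - 2\,\operatorname{Cov}(c_1, c_2).

    Transverse spatial averaging lowers the two variance terms, but it also lowers the covariance term; when the loss of covariance dominates, the variance of the difference grows and gradient-sensing precision drops, which is the effect described above.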

  17. Finding Order in Randomness: Single-Molecule Studies Reveal Stochastic RNA Processing | Center for Cancer Research

    Science.gov (United States)

    Producing a functional eukaryotic messenger RNA (mRNA) requires the coordinated activity of several large protein complexes to initiate transcription, elongate nascent transcripts, splice together exons, and cleave and polyadenylate the 3’ end. Kinetic competition between these various processes has been proposed to regulate mRNA maturation, but this model could lead to multiple, randomly determined, or stochastic, pathways or outcomes. Regulatory checkpoints have been suggested as a means of ensuring quality control. However, current methods have been unable to tease apart the contributions of these processes at a single gene or on a time scale that could provide mechanistic insight. To begin to investigate the kinetic relationship between transcription and splicing, Daniel Larson, Ph.D., of CCR’s Laboratory of Receptor Biology and Gene Expression, and his colleagues employed a single-molecule RNA imaging approach to monitor production and processing of a human β-globin reporter gene in living cells.

  18. Random mutagenesis of aspergillus niger and process optimization for enhanced production of glucose oxidase

    International Nuclear Information System (INIS)

    Haq, I.; Nawaz, A.; Mukhtar, A.N.H.; Mansoor, H.M.Z.; Ameer, S.M.

    2014-01-01

    The study deals with the improvement of the wild strain Aspergillus niger IIB-31 through random mutagenesis using chemical mutagens. The main aim of the work was to enhance the glucose oxidase (GOX) yield of the wild strain (24.57 ± 0.01 U/g of cell mass) through random mutagenesis and process optimization. The wild strain of Aspergillus niger IIB-31 was treated with chemical mutagens such as ethyl methane sulphonate (EMS) and nitrous acid for this purpose. Ninety-eight mutagen-treated variants showing positive results were picked and screened for glucose oxidase production using submerged fermentation. The EMS-treated mutant strain E45 gave the highest glucose oxidase production (69.47 ± 0.01 U/g of cell mass), approximately 3-fold greater than the wild strain IIB-31. The preliminary cultural conditions for the production of glucose oxidase by submerged fermentation of strain E45 were also optimized. The highest yield of GOX was obtained using 8% glucose as the carbon source and 0.3% peptone as the nitrogen source at a medium pH of 7.0 after an incubation period of 72 h at 30 °C. (author)

  19. Random Finite Set Based Bayesian Filtering with OpenCL in a Heterogeneous Platform

    Directory of Open Access Journals (Sweden)

    Biao Hu

    2017-04-01

    While most filtering approaches based on random finite sets have focused on improving performance, in this paper we argue that computation times are very important in order to enable real-time applications such as pedestrian detection. Towards this goal, this paper investigates the use of OpenCL to accelerate the computation of random finite set-based Bayesian filtering on a heterogeneous system. In detail, we developed an efficient and fully-functional pedestrian-tracking system implementation, which can run under real-time constraints while offering decent tracking accuracy. An extensive evaluation analysis was carried out to ensure the fulfillment of sufficient accuracy requirements. This was followed by extensive profiling analysis to spot potential bottlenecks in terms of execution performance, which were then targeted to come up with an OpenCL-accelerated application. Video-throughput improvements from roughly 15 fps to 100 fps (6×) were observed on average while processing typical MOT benchmark videos. Moreover, the worst-case frame processing yielded an 18× advantage, from nearly 2 fps to 36 fps, thereby comfortably meeting the real-time constraints. Our implementation is released as open-source code.

  20. Transforming spatial point processes into Poisson processes using random superposition

    DEFF Research Database (Denmark)

    Møller, Jesper; Berthelsen, Kasper Klitgaaard

    … with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt, Yt) which converges towards the distribution of (X, Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. … In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson … process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well-known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking …
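
    The Poisson yardstick used for the check is easy to probe empirically in the homogeneous case. The sketch below superposes two independent homogeneous Poisson counts (a simpler situation than the paper's dependent (X, Y) construction) and verifies the Poisson signature that the count variance matches the count mean; beta and the region are arbitrary.

      import numpy as np

      rng = np.random.default_rng(9)
      beta, area, n_reps = 100.0, 1.0, 2000   # target intensity on a unit square

      counts = []
      for _ in range(n_reps):
          nx = rng.poisson(beta * area / 2)   # points of X
          ny = rng.poisson(beta * area / 2)   # points of the complementary process Y
          counts.append(nx + ny)              # superposition X u Y

      counts = np.array(counts)
      # For a Poisson process the count variance equals the count mean.
      print(f"mean {counts.mean():.1f}, variance {counts.var():.1f} "
            f"(both should be close to {beta * area:.0f})")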

  1. Statistical trajectory of an approximate EM algorithm for probabilistic image processing

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Titterington, D M

    2007-01-01

    We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result in the present paper is to obtain the statistical average of the trajectory in the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with true values of hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from some analytical calculations

  2. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED – HROBY BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

    The study of average performances in a population is of great importance because, within a population, the average phenotypic value is equal to the average genotypic value. Thus, studies of the average values of characters give an idea of the genetic level of the population. The biological material is represented by 177 Hucul horses from the Hroby bloodline, divided into 6 stallion families (tab. 1), analyzed at 18, 30 and 42 months of age and owned by the Lucina Hucul stud farm. The average performances for withers height are presented in tab. 2. We can observe that the average performances of the character lie within the characteristic limits of the breed. Both sexes show a low degree of variability, with a decreasing tendency with ageing. The growth process shows a normal evolution in time, with significant differences only at the age of 42 months. Under these conditions we can say that the average performances for withers height take different values, influenced by age, with a decreasing tendency.

  3. STUDY OF WITHERS HEIGHT AVERAGE PERFORMANCES IN HUCUL HORSE BREED – GORAL BLOODLINE

    Directory of Open Access Journals (Sweden)

    M. MAFTEI

    2008-10-01

    The study of average performances in a population is of great importance because, within a population, the average phenotypic value is equal to the average genotypic value. Thus, studies of the average values of characters give an idea of the genetic level of the population. The biological material is represented by 87 Hucul horses from the Goral bloodline, divided into 5 stallion families (tab. 1), analyzed at 18, 30 and 42 months of age and owned by the Lucina Hucul stud farm. The average performances for withers height are presented in tab. 2. We can observe that the average performances of the character lie within the characteristic limits of the breed. Both sexes show a low degree of variability, with a decreasing tendency with ageing. The growth process shows a normal evolution in time, with significant differences only at the age of 42 months. Under these conditions we can say that the average performances for withers height take different values, influenced by age, with a decreasing tendency.

  4. Lagrangian averaging with geodesic mean.

    Science.gov (United States)

    Oliver, Marcel

    2017-11-01

    This paper revisits the derivation of the Lagrangian averaged Euler (LAE), or Euler-α, equations in the light of an intrinsic definition of the averaged flow map as the geodesic mean on the volume-preserving diffeomorphism group. Under the additional assumption that first-order fluctuations are statistically isotropic and transported by the mean flow as a vector field, averaging of the kinetic energy Lagrangian of an ideal fluid yields the LAE Lagrangian. The derivation presented here assumes a Euclidean spatial domain without boundaries.

  5. Random walk on a population of random walkers

    International Nuclear Information System (INIS)

    Agliari, E; Burioni, R; Cassi, D; Neri, F M

    2008-01-01

    We consider a population of N labelled random walkers moving on a substrate, and an excitation jumping among the walkers upon contact. The label X(t) of the walker carrying the excitation at time t can be viewed as a stochastic process, where the transition probabilities are a stochastic process themselves. Upon mapping onto two simpler processes, the quantities characterizing X(t) can be calculated in the limit of long times and low walkers density. The results are compared with numerical simulations. Several different topologies for the substrate underlying diffusion are considered

  6. A framework about flow measurements by LDA–PDA as a spatio-temporal average: application to data post-processing

    International Nuclear Information System (INIS)

    Calvo, Esteban; García, Juan A; García, Ignacio; Aísa, Luis; Santolaya, José Luis

    2012-01-01

    … method and the cross-section integral calibration method. Finally, a physical interpretation of the statistical reconstruction process is provided: it is a spatio-temporal averaging of the detected particle data, and some of the algorithms used are related to the Eulerian–Eulerian mathematical description of multiphase flows. (paper)

  7. A framework about flow measurements by LDA-PDA as a spatio-temporal average: application to data post-processing

    Science.gov (United States)

    Calvo, Esteban; García, Juan A.; Santolaya, José Luis; García, Ignacio; Aísa, Luis

    2012-05-01

    … method and the cross-section integral calibration method. Finally, a physical interpretation of the statistical reconstruction process is provided: it is a spatio-temporal averaging of the detected particle data, and some of the algorithms used are related to the Eulerian-Eulerian mathematical description of multiphase flows.

  8. Industrial Applications of High Average Power FELS

    CERN Document Server

    Shinn, Michelle D

    2005-01-01

    The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1B (US). Large-scale (many m²) processing of materials requires the economical production of laser powers in the tens of kilowatts, and such processes are therefore not yet commercial, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scalable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost per photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users, such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...

  9. Improved performance of high average power semiconductor arrays for applications in diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Beach, R.; Emanuel, M.; Benett, W.; Freitas, B.; Ciarlo, D.; Carlson, N.; Sutton, S.; Skidmore, J.; Solarz, R.

    1994-01-01

    The average power performance capability of semiconductor diode laser arrays has improved dramatically over the past several years. These performance improvements, combined with cost reductions pursued by LLNL and others in the fabrication and packaging of diode lasers, have continued to reduce the price per average watt of laser diode radiation. Presently, we are at the point where the manufacturers of commercial high average power solid state laser systems used in material processing applications can now seriously consider the replacement of their flashlamp pumps with laser diode pump sources. Additionally, a low cost technique developed and demonstrated at LLNL for optically conditioning the output radiation of diode laser arrays has enabled a new and scalable average power diode-end-pumping architecture that can be simply implemented in diode pumped solid state laser systems (DPSSL's). This development allows the high average power DPSSL designer to look beyond the Nd ion for the first time. Along with high average power DPSSL's which are appropriate for material processing applications, low and intermediate average power DPSSL's are now realizable at low enough costs to be attractive for use in many medical, electronic, and lithographic applications

  10. Random number generation

    International Nuclear Information System (INIS)

    Coveyou, R.R.

    1974-01-01

    The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)

  11. On the Periods of the {ranshi} Random Number Generator

    Science.gov (United States)

    Gutbrod, F.

    The stochastic properties of the pseudo-random number generator {ranshi} are discussed, with emphasis on the average period. Within a factor of 2, this turns out to be the square root of the maximally possible period. The actual set of periods depends on minor details of the algorithm, and the system settles down in one of only a few different cycles. These features are in perfect agreement with absolute random motion in phase space, to the extent allowed by deterministic dynamics.

  12. Deblurring of class-averaged images in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Park, Wooram; Chirikjian, Gregory S; Madden, Dean R; Rockmore, Daniel N

    2010-01-01

    This paper proposes a method for the deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid-body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre–Fourier expansions, and Hermite expansion and Laguerre–Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method

  13. Digital servo control of random sound test excitation. [in reverberant acoustic chamber

    Science.gov (United States)

    Nakich, R. B. (Inventor)

    1974-01-01

    A digital servocontrol system for random noise excitation of a test object in a reverberant acoustic chamber employs a plurality of sensors spaced in the sound field to produce signals in separate channels which are decorrelated and averaged. The average signal is divided into a plurality of adjacent frequency bands cyclically sampled by a time division multiplex system, converted into digital form, and compared to a predetermined spectrum value stored in digital form. The results of the comparisons are used to control a time-shared up-down counter to develop gain control signals for the respective frequency bands in the spectrum of random sound energy picked up by the microphones.

  14. Digital signal processing for the Johnson noise thermometry: a time series analysis of the Johnson noise

    International Nuclear Information System (INIS)

    Moon, Byung Soo; Hwang, In Koo; Chung, Chong Eun; Kwon, Kee Choon; David, E. H.; Kisner, R.A.

    2004-06-01

    In this report, we first proved that a random signal obtained by taking the sum of a set of single-frequency signals generates a continuous Markov process. We used this random signal to simulate the Johnson noise and verified that Johnson noise thermometry can be used to improve measurements of the reactor coolant temperature to an accuracy better than 0.14%. Secondly, using this random signal, we determined the optimal sampling rate when the frequency band of the Johnson noise signal is given. We also describe the results of our examination of how good the linearity of the Johnson noise is and how large the relative error of the temperature becomes as the temperature increases. Thirdly, the results of our analysis of a set of Johnson noise signal blocks taken from a simple electric circuit are described. We showed that the properties of the continuous Markov process are satisfied even when some channel noise is present. Finally, we describe the algorithm we devised to handle the problem of time lag in the long-term or moving average in a transient state. The algorithm is based on the Haar wavelet and estimates the transient temperature with a much smaller time delay. We have shown that the algorithm can track the transient temperature successfully.

  15. Generating random walks and polygons with stiffness in confinement

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Saarinen, S; Ziegler, U

    2015-01-01

    The purpose of this paper is to explore ways to generate random walks and polygons in confinement with a bias toward stiffness. Here the stiffness refers to the curvature angle between two consecutive edges along the random walk or polygon. The stiffer the walk (polygon), the smaller this angle on average. Thus random walks and polygons with an elevated stiffness have lower than expected curvatures. The authors introduced and studied several generation algorithms with a stiffness parameter s>0 that regulates the expected curvature angle at a given vertex in which the random walks and polygons are generated one edge at a time using conditional probability density functions. Our generating algorithms also allow the generation of unconfined random walks and polygons with any desired mean curvature angle. In the case of random walks and polygons confined in a sphere of fixed radius, we observe that, as expected, stiff random walks or polygons are more likely to be close to the confinement boundary. The methods developed here require that the random walks and random polygons be rooted at the center of the confinement sphere. (paper)
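
    A sketch of edge-at-a-time generation with a stiffness bias, in the spirit of the algorithms described (reduced to 2-D for brevity): turning angles are drawn from a von Mises density whose concentration plays the role of the stiffness parameter s, and steps that would leave the confinement disc are redrawn. The von Mises choice and the rejection step are stand-ins for the paper's exact conditional densities.

      import numpy as np

      def stiff_confined_walk(n_steps, s, radius, rng, max_tries=1000):
          """2-D unit-step walk inside a disc; turning angles ~ von Mises(0, s),
          so larger stiffness s gives smaller curvature angles on average."""
          pts = [np.zeros(2)]
          heading = rng.uniform(0, 2 * np.pi)
          for _ in range(n_steps):
              for _ in range(max_tries):
                  turn = rng.vonmises(0.0, s)        # stiffness-biased curvature
                  cand_heading = heading + turn
                  step = np.array([np.cos(cand_heading), np.sin(cand_heading)])
                  cand = pts[-1] + step
                  if np.linalg.norm(cand) <= radius: # stay inside the confinement
                      heading = cand_heading
                      pts.append(cand)
                      break
              else:
                  raise RuntimeError("no admissible step found")
          return np.array(pts)

      rng = np.random.default_rng(10)
      for s in (0.5, 4.0):
          walk = stiff_confined_walk(500, s, radius=5.0, rng=rng)
          headings = np.arctan2(*np.diff(walk, axis=0).T[::-1])
          turns = (np.diff(headings) + np.pi) % (2 * np.pi) - np.pi
          print(f"s = {s}: mean |curvature angle| = "
                f"{np.degrees(np.abs(turns).mean()):.1f} deg")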

  16. Random sets and random fuzzy sets as ill-perceived random variables an introduction for Ph.D. students and practitioners

    CERN Document Server

    Couso, Inés; Sánchez, Luciano

    2014-01-01

    This short book provides a unified view of the history and theory of random sets and fuzzy random variables, with special emphasis on their use for representing higher-order non-statistical uncertainty about statistical experiments. The authors lay bare the existence of two streams of works using the same mathematical ground, but differing in their use of sets, according to whether they represent objects of interest naturally taking the form of sets, or imprecise knowledge about such objects. Random (fuzzy) sets can be used in many fields ranging from mathematical morphology, economics, artificial intelligence, information processing and statistics per se, especially in areas where the outcomes of random experiments cannot be observed with full precision. This book also emphasizes the link between random sets and fuzzy sets with some techniques related to the theory of imprecise probabilities. This small book is intended for graduate and doctoral students in mathematics or engineering, but also provides an i...

  17. Statistical theory for estimating sampling errors of regional radiation averages based on satellite measurements

    Science.gov (United States)

    Smith, G. L.; Bess, T. D.; Minnis, P.

    1983-01-01

    The processes which determine the weather and climate are driven by the radiation received by the earth and the radiation subsequently emitted. A knowledge of the absorbed and emitted components of radiation is thus fundamental for the study of these processes. In connection with the desire to improve the quality of long-range forecasting, NASA is developing the Earth Radiation Budget Experiment (ERBE), consisting of a three-channel scanning radiometer and a package of nonscanning radiometers. A set of these instruments is to be flown on both the NOAA-F and NOAA-G spacecraft, in sun-synchronous orbits, and on an Earth Radiation Budget Satellite. The purpose of the scanning radiometer is to obtain measurements from which the average reflected solar radiant exitance and the average earth-emitted radiant exitance at a reference level can be established. The estimate of regional average exitance obtained will not exactly equal the true value of the regional average exitance, but will differ due to spatial sampling. A method is presented for evaluating this spatial sampling error.

  18. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    Science.gov (United States)

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and probability of a ship rolling under the random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by the numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gauss stationary random processes, and the CARMA (2, 1) model was used to fit the spectral density function of parametric and forced excitations. The stochastic energy envelope averaging method was used to solve the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and the heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in the oblique seas.

  19. Random vibrations theory and practice

    CERN Document Server

    Wirsching, Paul H; Ortiz, Keith

    1995-01-01

    Random Vibrations: Theory and Practice covers the theory and analysis of mechanical and structural systems undergoing random oscillations due to any number of phenomena— from engine noise, turbulent flow, and acoustic noise to wind, ocean waves, earthquakes, and rough pavement. For systems operating in such environments, a random vibration analysis is essential to the safety and reliability of the system. By far the most comprehensive text available on random vibrations, Random Vibrations: Theory and Practice is designed for readers who are new to the subject as well as those who are familiar with the fundamentals and wish to study a particular topic or use the text as an authoritative reference. It is divided into three major sections: fundamental background, random vibration development and applications to design, and random signal analysis. Introductory chapters cover topics in probability, statistics, and random processes that prepare the reader for the development of the theory of random vibrations a...

  20. Multiple-scale stochastic processes: Decimation, averaging and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Bo, Stefano, E-mail: stefano.bo@nordita.org [Nordita, KTH Royal Institute of Technology and Stockholm University, Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden); Celani, Antonio [Quantitative Life Sciences, The Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, I-34151 - Trieste (Italy)

    2017-02-07

    Recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For efficient modeling one usually focuses on the slower degrees of freedom, and it is of great importance to accurately eliminate the fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories, such as residence times, counting statistics, fluxes, entropy production, etc., which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature and we present them here pedagogically, as natural extensions of the ones employed for the trajectories. We also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.

  1. Multipass comminution process to produce precision wood particles of uniform size and shape with disrupted grain structure from wood chips

    Science.gov (United States)

    Dooley, James H; Lanning, David N

    2014-05-27

    A process of comminution of wood chips (C) having a grain direction to produce a mixture of wood particles (P), wherein the wood chips are characterized by an average length dimension (L.sub.C) as measured substantially parallel to the grain, an average width dimension (W.sub.C) as measured normal to L.sub.C and aligned cross grain, and an average height dimension (H.sub.C) as measured normal to W.sub.C and L.sub.C, and wherein the comminution process comprises the step of feeding the wood chips in a direction of travel substantially randomly to the grain direction one or more times through a counter rotating pair of intermeshing arrays of cutting discs (D) arrayed axially perpendicular to the direction of wood chip travel.

  2. Beauty is in the ease of the beholding: A neurophysiological test of the averageness theory of facial attractiveness

    Science.gov (United States)

    Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.

    2014-01-01

    Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as “human” faster than low attractive faces. The posterior N170 (150 – 225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces. PMID:24326966

  3. Predicting disease risks from highly imbalanced data using random forest

    Directory of Open Access Journals (Sweden)

    Chakraborty Sounak

    2011-07-01

    Full Text Available Abstract Background We present a method utilizing the Healthcare Cost and Utilization Project (HCUP) dataset for predicting disease risk of individuals based on their medical diagnosis history. The presented methodology may be incorporated in a variety of applications such as risk management, tailored health communication and decision support systems in healthcare. Methods We employed the National Inpatient Sample (NIS) data, which is publicly available through the Healthcare Cost and Utilization Project (HCUP), to train random forest classifiers for disease prediction. Since the HCUP data is highly imbalanced, we employed an ensemble learning approach based on repeated random sub-sampling. This technique divides the training data into multiple sub-samples, while ensuring that each sub-sample is fully balanced. We compared the performance of support vector machine (SVM), bagging, boosting and random forest (RF) to predict the risk of eight chronic diseases. Results We predicted eight disease categories. Overall, the RF ensemble learning method outperformed SVM, bagging and boosting in terms of the area under the receiver operating characteristic (ROC) curve (AUC). In addition, RF has the advantage of computing the importance of each variable in the classification process. Conclusions In combining repeated random sub-sampling with RF, we were able to overcome the class imbalance problem and achieve promising results. Using the national HCUP data set, we predicted eight disease categories with an average AUC of 88.79%.
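
    A minimal sketch of the repeated random sub-sampling idea (the HCUP/NIS data are not reproduced here, so a synthetic imbalanced dataset and arbitrary hyperparameters stand in):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced stand-in for the clinical data (hypothetical: ~5% positives)
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.95], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

rng = np.random.default_rng(0)
pos = np.flatnonzero(ytr == 1)
neg = np.flatnonzero(ytr == 0)

# Ensemble of RFs, each trained on a fully balanced random sub-sample
scores = np.zeros(len(yte))
n_members = 10
for _ in range(n_members):
    sub_neg = rng.choice(neg, size=len(pos), replace=False)   # undersample majority
    idx = np.concatenate([pos, sub_neg])
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr[idx], ytr[idx])
    scores += rf.predict_proba(Xte)[:, 1]
scores /= n_members                                           # average the ensemble votes

print("AUC:", roc_auc_score(yte, scores))
```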

  4. Spacetime averaging of exotic singularity universes

    International Nuclear Information System (INIS)

    Dabrowski, Mariusz P.

    2011-01-01

    Taking a spacetime average as a measure of the strength of singularities we show that big-rips (type I) are stronger than big-bangs. The former have infinite spacetime averages while the latter have them equal to zero. The sudden future singularities (type II) and w-singularities (type V) have finite spacetime averages. The finite scale factor (type III) singularities for some values of the parameters may have an infinite average and in that sense they may be considered stronger than big-bangs.

  5. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
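
    A small sketch of the importance-sampling idea mentioned above, for a one-dimensional integral whose exact value is known (the integrand and proposal are chosen for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Target: I = integral of x^2 e^{-x} over [0, inf) = 2
# Naive MC: uniform samples on [0, 20] (the truncated tail is negligible)
x = rng.uniform(0, 20, n)
naive = 20 * np.mean(x**2 * np.exp(-x))

# Importance sampling: draw from the exponential density p(x) = e^{-x},
# which concentrates samples where the integrand is large
x = rng.exponential(1.0, n)
importance = np.mean(x**2)       # weight (x^2 e^{-x}) / e^{-x} = x^2

print(f"naive: {naive:.4f}  importance: {importance:.4f}  exact: 2")
```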

  6. Randomized Trial of Thymectomy in Myasthenia Gravis.

    Science.gov (United States)

    Wolfe, Gil I; Kaminski, Henry J; Aban, Inmaculada B; Minisman, Greg; Kuo, Hui-Chien; Marx, Alexander; Ströbel, Philipp; Mazia, Claudio; Oger, Joel; Cea, J Gabriel; Heckmann, Jeannine M; Evoli, Amelia; Nix, Wilfred; Ciafaloni, Emma; Antonini, Giovanni; Witoonpanich, Rawiphan; King, John O; Beydoun, Said R; Chalk, Colin H; Barboi, Alexandru C; Amato, Anthony A; Shaibani, Aziz I; Katirji, Bashar; Lecky, Bryan R F; Buckley, Camilla; Vincent, Angela; Dias-Tosta, Elza; Yoshikawa, Hiroaki; Waddington-Cruz, Márcia; Pulley, Michael T; Rivner, Michael H; Kostera-Pruszczyk, Anna; Pascuzzi, Robert M; Jackson, Carlayne E; Garcia Ramos, Guillermo S; Verschuuren, Jan J G M; Massey, Janice M; Kissel, John T; Werneck, Lineu C; Benatar, Michael; Barohn, Richard J; Tandan, Rup; Mozaffar, Tahseen; Conwit, Robin; Odenkirchen, Joanne; Sonett, Joshua R; Jaretzki, Alfred; Newsom-Davis, John; Cutter, Gary R

    2016-08-11

    Thymectomy has been a mainstay in the treatment of myasthenia gravis, but there is no conclusive evidence of its benefit. We conducted a multicenter, randomized trial comparing thymectomy plus prednisone with prednisone alone. We compared extended transsternal thymectomy plus alternate-day prednisone with alternate-day prednisone alone. Patients 18 to 65 years of age who had generalized nonthymomatous myasthenia gravis with a disease duration of less than 5 years were included if they had Myasthenia Gravis Foundation of America clinical class II to IV disease (on a scale from I to V, with higher classes indicating more severe disease) and elevated circulating concentrations of acetylcholine-receptor antibody. The primary outcomes were the time-weighted average Quantitative Myasthenia Gravis score (on a scale from 0 to 39, with higher scores indicating more severe disease) over a 3-year period, as assessed by means of blinded rating, and the time-weighted average required dose of prednisone over a 3-year period. A total of 126 patients underwent randomization between 2006 and 2012 at 36 sites. Patients who underwent thymectomy had a lower time-weighted average Quantitative Myasthenia Gravis score over a 3-year period than those who received prednisone alone (6.15 vs. 8.99, P<0.001). Thymectomy improved clinical outcomes over a 3-year period in patients with nonthymomatous myasthenia gravis. (Funded by the National Institute of Neurological Disorders and Stroke and others; MGTX ClinicalTrials.gov number, NCT00294658.)

  7. Time-dependence and averaging techniques in atomic photoionization calculations

    International Nuclear Information System (INIS)

    Scheibner, K.F.

    1984-01-01

    Two distinct problems in the development and application of averaging techniques to photoionization calculations are considered. The first part of the thesis is concerned with the specific problem of near-resonant three-photon ionization in hydrogen, a process for which no cross section exists. Effects of the inclusion of the laser pulse characteristics (both temporal and spatial) on the dynamics of the ionization probability and of the metastable-state probability are examined. It is found, for example, that the ionization probability can decrease with increasing field intensity. The temporal profile of the laser pulse is found to affect the dynamics very little, whereas the spatial character of the pulse can affect the results drastically. In the second part of the thesis, techniques are developed for calculating averaged cross sections directly, so that the detailed cross section never has to be calculated as an intermediate step. A variation of the moment technique and a new method based on the stabilization technique are applied successfully to atomic hydrogen and helium

  8. Strong Shock Propagating Over A Random Bed of Spherical Particles

    Science.gov (United States)

    Mehta, Yash; Salari, Kambiz; Jackson, Thomas L.; Balachandar, S.; Thakur, Siddharth

    2017-11-01

    The study of shock interaction with particles has been largely motivated by its wide-ranging applications. The complex interaction between compressible flow features, such as shock waves and expansion fans, and the dispersed phase makes this multi-phase flow very difficult to predict and control. In this talk we will present results on fully resolved inviscid simulations of a shock interacting with a random bed of particles. One of the fascinating observations from these simulations is the flow-field fluctuations due to the presence of randomly distributed particles. Rigorous averaging (Favre averaging) of the governing equations results in a Reynolds-stress-like term, which can be classified as pseudo-turbulence in this case. We have computed this ``Reynolds stress'' term along with the individual fluctuations and the turbulent kinetic energy. The average pressure was also computed to characterize the strength of the transmitted and reflected waves. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program.
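
    For reference, the Favre (density-weighted) average mentioned above is ũ = ⟨ρu⟩/⟨ρ⟩, and the pseudo-turbulence "Reynolds stress" follows from the fluctuation u'' = u − ũ. A toy numerical sketch on synthetic point data (all values are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ensemble of flow snapshots: density rho and velocity u at one point
rho = 1.0 + 0.3 * rng.random(10_000)          # fluctuating density
u = 2.0 + 0.5 * rng.standard_normal(10_000)   # fluctuating streamwise velocity

u_favre = np.mean(rho * u) / np.mean(rho)     # Favre (density-weighted) average
u_pp = u - u_favre                            # Favre fluctuation u''

# Pseudo-turbulence "Reynolds stress" per unit mean density: <rho u'' u''>/<rho>
R_xx = np.mean(rho * u_pp * u_pp) / np.mean(rho)
print("Favre mean:", u_favre, " R_xx:", R_xx)
```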

  9. Magnetic field line random walk in non-axisymmetric turbulence

    International Nuclear Information System (INIS)

    Tautz, R.C.; Lerche, I.

    2011-01-01

    Including a random component of a magnetic field parallel to an ambient field introduces a mean perpendicular motion to the average field line. This effect is normally not discussed because one customarily chooses at the outset to ignore such a field component in discussing random walk and diffusion of field lines. A discussion of the basic effect is given, indicating that any random magnetic field with a non-zero helicity will lead to such a non-zero perpendicular mean motion. Several exact analytic illustrations are given of the effect as well as a simple numerical illustration. -- Highlights: → For magnetic field line random walk all magnetic field components are important. → Non-vanishing magnetic helicity leads to mean perpendicular motion. → Analytically exact stream functions illustrate that the novel transverse effect exists.

  10. Will Mobile Diabetes Education Teams (MDETs in primary care improve patient care processes and health outcomes? Study protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Gucciardi Enza

    2012-09-01

    Full Text Available Abstract Background There is evidence to suggest that delivery of diabetes self-management support by diabetes educators in primary care may improve patient care processes and patient clinical outcomes; however, the evaluation of such a model in primary care is nonexistent in Canada. This article describes the design for the evaluation of the implementation of Mobile Diabetes Education Teams (MDETs) in primary care settings in Canada. Methods/design This study will use a non-blinded, cluster-randomized controlled trial stepped wedge design to evaluate the Mobile Diabetes Education Teams' intervention in improving patient clinical and care process outcomes. A total of 1,200 patient charts at participating primary care sites will be reviewed for data extraction. Eligible patients will be those aged ≥18, who have type 2 diabetes and a hemoglobin A1c (HbA1c) of ≥8%. Clusters (that is, primary care sites) will be randomized to the intervention and control group using a block randomization procedure within practice size as the blocking factor. A stepped wedge design will be used to sequentially roll out the intervention so that all clusters eventually receive the intervention. The time at which each cluster begins the intervention is randomized to one of the four roll-out periods (0, 6, 12, and 18 months). Clusters that are randomized into the intervention later will act as the control for those receiving the intervention earlier. The primary outcome measure will be the difference in the proportion of patients who achieve the recommended HbA1c target of ≤7% between intervention and control groups. Qualitative work (in-depth interviews with primary care physicians, MDET educators and patients; and MDET educators' field notes and debriefing sessions) will be undertaken to assess the implementation process and effectiveness of the MDET intervention. Trial registration ClinicalTrials.gov NCT01553266

  11. Coherence-generating power of quantum dephasing processes

    Science.gov (United States)

    Styliaris, Georgios; Campos Venuti, Lorenzo; Zanardi, Paolo

    2018-03-01

    We provide a quantification of the capability of various quantum dephasing processes to generate coherence out of incoherent states. The measures defined, admitting computable expressions for any finite Hilbert-space dimension, are based on probabilistic averages and arise naturally from the viewpoint of coherence as a resource. We investigate how the capability of a dephasing process (e.g., a nonselective orthogonal measurement) to generate coherence depends on the relevant bases of the Hilbert space over which coherence is quantified and the dephasing process occurs, respectively. We extend our analysis to include those Lindblad time evolutions which, in the infinite-time limit, dephase the system under consideration and calculate their coherence-generating power as a function of time. We further identify specific families of such time evolutions that, although dephasing, have optimal (over all quantum processes) coherence-generating power for some intermediate time. Finally, we investigate the coherence-generating capability of random dephasing channels.

  12. Artificial Intelligence Can Predict Daily Trauma Volume and Average Acuity.

    Science.gov (United States)

    Stonko, David P; Dennis, Bradley M; Betzold, Richard D; Peetz, Allan B; Gunter, Oliver L; Guillamondegui, Oscar D

    2018-04-19

    The goal of this study was to integrate temporal and weather data in order to create an artificial neural network (ANN) to predict trauma volume, the number of emergent operative cases, and average daily acuity at a level 1 trauma center. Trauma admission data from TRACS and weather data from the National Oceanic and Atmospheric Administration (NOAA) were collected for all adult trauma patients from July 2013 to June 2016. The ANN was constructed using temporal (time, day of week) and weather factors (daily high, active precipitation) to predict four points of daily trauma activity: number of traumas, number of penetrating traumas, average ISS, and number of immediate OR cases per day. We trained a two-layer feed-forward network with 10 sigmoid hidden neurons via the Levenberg-Marquardt backpropagation algorithm, and performed k-fold cross validation and accuracy calculations on 100 randomly generated partitions. 10,612 patients over 1,096 days were identified. The ANN accurately predicted the daily trauma distribution in terms of number of traumas, number of penetrating traumas, number of OR cases, and average daily ISS (combined training correlation coefficient r = 0.9018 ± 0.002; validation r = 0.8899 ± 0.005; testing r = 0.8940 ± 0.006). We were able to successfully predict trauma and emergent operative volume, and acuity using an ANN by integrating local weather and trauma admission data from a level 1 center. As an example, for June 30, 2016, it predicted 9.93 traumas (actual: 10), and a mean ISS score of 15.99 (actual: 13.12); see figure 3. This may prove useful for predicting trauma needs across the system and for hospital administration when allocating limited resources. Level III STUDY TYPE: Prognostic/Epidemiological.
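
    A rough sketch of such a network (synthetic stand-in data; scikit-learn offers no Levenberg-Marquardt solver, so LBFGS is substituted here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical stand-in features: [hour, day-of-week, daily high temp, precipitation]
n_days = 1096
X = np.column_stack([
    rng.integers(0, 24, n_days),
    rng.integers(0, 7, n_days),
    rng.normal(20, 8, n_days),
    rng.random(n_days) < 0.3,
]).astype(float)
# Hypothetical target: daily trauma count with a toy weather dependence
y = rng.poisson(5 + 0.1 * X[:, 2].clip(0), n_days).astype(float)

# Two-layer feed-forward net with 10 sigmoid hidden neurons, as in the abstract
model = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                     solver="lbfgs", max_iter=2000, random_state=0)

# k-fold cross-validation of the correlation coefficient r
rs = []
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model.fit(X[tr], y[tr])
    rs.append(np.corrcoef(model.predict(X[te]), y[te])[0, 1])
print("validation r per fold:", np.round(rs, 3))
```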

  13. Digital servo control of random sound fields

    Science.gov (United States)

    Nakich, R. B.

    1973-01-01

    It is necessary to place a number of sensors at different positions in the sound field to determine the actual sound intensities to which the test object is subjected, and thus whether the specification is adequately met or exceeded. Since the excitation is random in nature, the signals are essentially incoherent and it is impossible to obtain a true average.

  14. Random-walk simulation of diffusion-controlled processes among static traps

    International Nuclear Information System (INIS)

    Lee, S.B.; Kim, I.C.; Miller, C.A.; Torquato, S.; Department of Mechanical and Aerospace Engineering and Department of Chemical Engineering, North Carolina State University, Raleigh, North Carolina 27695-7910)

    1989-01-01

    We present computer-simulation results for the trapping rate (rate constant) k associated with diffusion-controlled reactions among identical, static spherical traps distributed with an arbitrary degree of impenetrability using a Pearson random-walk algorithm. We specifically consider the penetrable-concentric-shell model in which each trap of diameter σ is composed of a mutually impenetrable core of diameter λσ, encompassed by a perfectly penetrable shell of thickness (1-λ)σ/2: λ=0 corresponding to randomly centered or ''fully penetrable'' traps and λ=1 corresponding to totally impenetrable traps. Trapping rates are calculated accurately from the random-walk algorithm at the extreme limits of λ (λ=0 and 1) and at an intermediate value (λ=0.8), for a wide range of trap densities. Our simulation procedure has a relatively fast execution time. It is found that k increases with increasing impenetrability at fixed trap concentration. These ''exact'' data are compared with previous theories for the trapping rate. Although a good approximate theory exists for the fully-penetrable-trap case, there are no currently available theories that can provide good estimates of the trapping rate for a moderate to high density of traps with nonzero hard cores (λ>0)
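
    A stripped-down sketch of a Pearson-walk trapping simulation for the fully penetrable case (λ = 0), checked against the dilute-limit Smoluchowski rate 4πDac; the box size, trap count and step length are arbitrary choices, and refinements such as first-passage step correction or excluding start points inside traps are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

L = 10.0                # periodic box edge
a = 0.5                 # trap radius (diameter sigma = 1); lambda = 0: traps overlap freely
n_traps = 50
traps = rng.random((n_traps, 3)) * L
c = n_traps / L**3      # trap number density

ell = 0.1               # Pearson walk: fixed step length, isotropic random direction
D = ell**2 / 6          # diffusivity of the walk (one time unit per step)

def survival_time(max_steps=500_000):
    x = rng.random(3) * L
    for s in range(max_steps):
        v = rng.standard_normal(3)
        x = (x + ell * v / np.linalg.norm(v)) % L
        d = np.abs(traps - x)
        d = np.minimum(d, L - d)                   # minimum-image distance
        if (np.sum(d**2, axis=1) < a**2).any():    # absorbed inside a trap
            return s + 1
    return max_steps

tau = np.mean([survival_time() for _ in range(100)])  # mean survival time
print("simulated rate 1/tau  :", 1 / tau)
print("Smoluchowski 4*pi*D*a*c:", 4 * np.pi * D * a * c)
```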

  15. A method of signal transmission path analysis for multivariate random processes

    International Nuclear Information System (INIS)

    Oguma, Ritsuo

    1984-04-01

    A method for noise analysis called ''STP (signal transmission path) analysis'' is presented as a tool to identify noise sources and their propagation paths in multivariate random processes. The basic idea of the analysis is to identify, via time series analysis, the effective network for signal power transmission among the variables in the system, and to make use of this information in the noise analysis. In the present paper, we accomplish this through two steps of signal processing: first, we estimate, using noise power contribution analysis, the variables which contribute strongly to the power spectrum of interest, and then we evaluate the STPs for each pair of variables to identify the STPs which play a significant role in transmitting the generated noise to the variable under evaluation. The latter part of the analysis is executed through a comparison of the partial coherence function and the newly introduced partial noise power contribution function. This paper presents the procedure of the STP analysis and demonstrates, using simulation data as well as Borssele PWR noise data, its effectiveness for the investigation of noise generation and propagation mechanisms. (author)

  16. Random phenomena fundamentals of probability and statistics for engineers

    CERN Document Server

    Ogunnaike, Babatunde A

    2009-01-01

    Prelude: Approach Philosophy; Four Basic Principles. I Foundations: Two Motivating Examples; Yield Improvement in a Chemical Process; Quality Assurance in a Glass Sheet Manufacturing Process; Outline of a Systematic Approach; Random Phenomena, Variability, and Uncertainty; Two Extreme Idealizations of Natural Phenomena; Random Mass Phenomena; Introducing Probability; The Probabilistic Framework. II Probability: Fundamentals of Probability Theory; Building Blocks; Operations; Probability; Conditional Probability; Independence; Random Variables and Distributions; Distributions; Mathematical Expectation; Characterizing Distributions; Special Derived Probability Functions; Multidimensional Random Variables; Distributions of Several Random Variables; Distributional Characteristics of Jointly Distributed Random Variables; Random Variable Transformations; Single Variable Transformations; Bivariate Transformations; General Multivariate Transformations; Application Case Studies I: Probability; Mendel and Heredity; World War II Warship Tactical Response Under Attack. III Distributions: Ide...

  17. Scale dependence of the average potential around the maximum in Φ⁴ theories

    International Nuclear Information System (INIS)

    Tetradis, N.; Wetterich, C.

    1992-04-01

    The average potential describes the physics at a length scale k⁻¹ by averaging out the degrees of freedom with characteristic momenta larger than k. The dependence on k can be described by differential evolution equations. We solve these equations for the nonconvex part of the potential around the origin in φ⁴ theories, in the phase with spontaneous symmetry breaking. The average potential is real and approaches the convex effective potential in the limit k → 0. Our calculation is relevant for processes for which the shape of the potential at a given scale is important, such as tunneling phenomena or inflation. (orig.)

  18. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    Science.gov (United States)

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or to assume that treatment has a separate effect on every transition. An alternative is to fit a series of models, each of which assumes that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain, and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging, and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma, using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models are reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundred. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models.
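
    A minimal sketch of EVPI computation for a model-averaged decision (the two structural models, their probabilities and net-benefit distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Two candidate structural models of the treatment effect (hypothetical),
# with posterior model probabilities standing in for goodness-of-fit weights
p_model = np.array([0.7, 0.3])
m = rng.choice(2, size=n, p=p_model)            # model-indicator draws

# Posterior draws of incremental net benefit of treatment B vs A under each model
inb = np.where(m == 0, rng.normal(500, 800, n), rng.normal(-200, 600, n))

nb = np.column_stack([np.zeros(n), inb])        # net benefit of A (baseline) and B

# EVPI for the model-averaged decision:
# E_theta[max_d NB(d, theta)] - max_d E_theta[NB(d, theta)]
evpi = np.mean(nb.max(axis=1)) - nb.mean(axis=0).max()
print("model-averaged EVPI per patient:", round(evpi, 1))
```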

  19. On the Distribution of Random Geometric Graphs

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Coon, Justin P.

    2018-01-01

    Random geometric graphs (RGGs) are commonly used to model networked systems that depend on the underlying spatial embedding. We concern ourselves with the probability distribution of an RGG, which is crucial for studying its random topology, properties (e.g., connectedness), or Shannon entropy as a measure of the graph's topological uncertainty (or information content). Moreover, the distribution is also relevant for determining average network performance or designing protocols. However, a major impediment in deducing the graph distribution is that it requires the joint probability distribution of the n(n − 1)/2 distances between n nodes randomly distributed in a bounded domain. As no such result exists in the literature, we make progress by obtaining the joint distribution of the distances between three nodes confined in a disk in R². This enables the calculation of the probability distribution...

  20. Longest interval between zeros of the tied-down random walk, the Brownian bridge and related renewal processes

    Science.gov (United States)

    Godrèche, Claude

    2017-05-01

    The probability distribution of the longest interval between two zeros of a simple random walk starting and ending at the origin, and of its continuum limit, the Brownian bridge, was analysed in the past by Rosén and Wendel, then extended by the latter to stable processes. We recover and extend these results using simple concepts of renewal theory, which allows us to revisit past and recent works of the physics literature.

  2. Nonlinear correlations in the hydrophobicity and average flexibility along the glycolytic enzymes sequences

    International Nuclear Information System (INIS)

    Ciorsac, Alecu; Craciun, Dana; Ostafe, Vasile; Isvoran, Adriana

    2011-01-01

    Research highlights: → We focus our study on the glycolytic enzymes. → We reveal correlations of hydrophobicity and flexibility along their chains. → We also reveal fractal aspects of the glycolytic enzymes' structures and surfaces. → The glycolytic enzyme sequences are not random. → Creation of fractal structures requires the operation of nonlinear dynamics. - Abstract: Nonlinear methods widely used for time series analysis were applied to glycolytic enzyme sequences to derive information concerning the correlation of hydrophobicity and average flexibility along their chains. The 20 sequences of different types of the 10 human glycolytic enzymes were treated as spatial series and were analyzed by spectral analysis, detrended fluctuation analysis and Hurst coefficient calculation. The three approaches agree that there are both short-range and long-range correlations of hydrophobicity and average flexibility within the investigated sequences, the short-range correlations being stronger and indicating that local interactions are the most important for protein folding. This correlation is also reflected by the fractal nature of the structures of the investigated proteins.
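
    For concreteness, here is a compact implementation of detrended fluctuation analysis, one of the three methods named above; applied to white noise it should return an exponent near 0.5, with deviations signalling correlations (the sequence below is a synthetic stand-in for a hydrophobicity profile):

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(s) vs log s."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        segs = y[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                       # detrend each window linearly
            c = np.polyfit(t, seg, 1)
            rms.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(5)
seq = rng.standard_normal(2000)                # uncorrelated stand-in sequence
print("DFA exponent:", round(dfa_exponent(seq, [8, 16, 32, 64, 128]), 2))
```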

  3. Random walk and the heat equation

    CERN Document Server

    Lawler, Gregory F

    2010-01-01

    The heat equation can be derived by averaging over a very large number of particles. Traditionally, the resulting PDE is studied as a deterministic equation, an approach that has brought many significant results and a deep understanding of the equation and its solutions. By studying the heat equation by considering the individual random particles, however, one gains further intuition into the problem. While this is now standard for many researchers, this approach is generally not presented at the undergraduate level. In this book, Lawler introduces the heat equation and the closely related notion of harmonic functions from a probabilistic perspective. The theme of the first two chapters of the book is the relationship between random walks and the heat equation. The first chapter discusses the discrete case, random walk and the heat equation on the integer lattice; and the second chapter discusses the continuous case, Brownian motion and the usual heat equation. Relationships are shown between the two. For exa...

  4. Flexible time domain averaging technique

    Science.gov (United States)

    Zhao, Ming; Lin, Jing; Lei, Yaguo; Wang, Xiufeng

    2013-09-01

    Time domain averaging (TDA) is essentially a comb filter; it cannot extract specified harmonics which may be caused by certain faults, such as gear eccentricity. Meanwhile, TDA always suffers from period-cutting error (PCE) to some extent. Several improved TDA methods have been proposed, but they cannot completely eliminate the waveform reconstruction error caused by PCE. In order to overcome the shortcomings of conventional methods, a flexible time domain averaging (FTDA) technique is established, which adapts to the analyzed signal through adjusting each harmonic of the comb filter. In this technique, the explicit form of the FTDA is first constructed by frequency domain sampling. Subsequently, the chirp Z-transform (CZT) is employed in the FTDA algorithm, which improves the calculating efficiency significantly. Since the signal is reconstructed in the continuous time domain, there is no PCE in the FTDA. To validate the effectiveness of the FTDA in signal de-noising, interpolation and harmonic reconstruction, a simulated multi-component periodic signal corrupted by noise is processed by the FTDA. The simulation results show that the FTDA is capable of recovering the periodic components from the background noise effectively. Moreover, it can improve the signal-to-noise ratio by 7.9 dB compared with conventional methods. Experiments are also carried out on gearbox test rigs with a chipped tooth and an eccentric gear, respectively. It is shown that the FTDA can identify the direction and severity of gear eccentricity, and further enhances the amplitudes of impulses by 35%. The proposed technique not only solves the problem of PCE, but also provides a useful tool for fault symptom extraction in rotating machinery.
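
    For contrast with the FTDA, a sketch of classical TDA on a synthetic gear-like signal; it works cleanly here only because the period is an exact integer number of samples, which is precisely the assumption that breaks down and causes PCE in practice:

```python
import numpy as np

rng = np.random.default_rng(6)

fs, f0 = 1000, 20                 # sample rate (Hz), shaft frequency (Hz)
period = fs // f0                 # samples per revolution (exactly 50 here)
n_rev = 200
t = np.arange(n_rev * period) / fs

# Periodic gear-mesh-like signal buried in noise
clean = np.sin(2 * np.pi * f0 * t) + 0.5 * np.sin(2 * np.pi * 3 * f0 * t)
noisy = clean + 2.0 * rng.standard_normal(t.size)

# Classical TDA: cut the record into revolutions and average them coherently
avg = noisy.reshape(n_rev, period).mean(axis=0)

residual = np.std(avg - clean[:period])
print(f"residual rms after TDA: {residual:.3f} (expected ~ noise/sqrt(N) = 0.141)")
```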

  5. Hierarchical random additive process and logarithmic scaling of generalized high order, two-point correlations in turbulent boundary layer flow

    Science.gov (United States)

    Yang, X. I. A.; Marusic, I.; Meneveau, C.

    2016-06-01

    Townsend [Townsend, The Structure of Turbulent Shear Flow (Cambridge University Press, Cambridge, UK, 1976)] hypothesized that the logarithmic region in high-Reynolds-number wall-bounded flows consists of space-filling, self-similar attached eddies. Invoking this hypothesis, we express streamwise velocity fluctuations in the inertial layer in high-Reynolds-number wall-bounded flows as a hierarchical random additive process (HRAP): u_z^+ = ∑_{i=1}^{N_z} a_i. Here u is the streamwise velocity fluctuation, + indicates normalization in wall units, z is the wall-normal distance, and the a_i are independently, identically distributed random additives, each of which is associated with an attached eddy in the wall-attached hierarchy. The number of random additives is N_z ~ ln(δ/z), where δ is the boundary layer thickness and ln is the natural logarithm. Due to its simplified structure, such a process leads to predictions of the scaling behaviors for various turbulence statistics in the logarithmic layer. Besides reproducing known logarithmic scaling of moments, structure functions, and the two-point correlation function ⟨u_z(x) u_z(x+r)⟩, new logarithmic laws in generalized high-order two-point statistics such as ⟨u_z²(x) u_z²(x+r)⟩^{1/2} and ⟨u_z³(x) u_z³(x+r)⟩^{1/3} can be derived using the HRAP formalism. Supporting empirical evidence for the logarithmic scaling in such statistics is found in the Melbourne High Reynolds Number Boundary Layer Wind Tunnel measurements. We also show that, at high Reynolds numbers, the above-mentioned new logarithmic laws can be derived by assuming the arrival of an attached eddy at a generic point in the flow field to be a Poisson process [Woodcock and Marusic, Phys. Fluids 27, 015104 (2015), 10.1063/1.4905301]. Taken together, the results provide new evidence supporting the essential ingredients of the attached eddy hypothesis to describe streamwise velocity fluctuations of large, momentum-transporting eddies in wall-bounded turbulence, while observed deviations suggest the need for further extensions of the
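
    The HRAP scaling is easy to reproduce numerically: summing N_z ~ ln(δ/z) i.i.d. additives makes the variance of u grow linearly in ln(δ/z), which is the log law for the streamwise variance (unit-variance Gaussian additives are assumed here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

delta = 1.0                         # boundary layer thickness
zs = np.logspace(-3, -0.5, 8)       # wall-normal positions z/delta
n_samples = 20_000

for z in zs:
    Nz = max(int(np.log(delta / z)), 1)   # number of additives ~ ln(delta/z)
    # u as a sum of i.i.d. attached-eddy additives a_i
    u = rng.standard_normal((n_samples, Nz)).sum(axis=1)
    print(f"z/delta={z:.4f}  Nz={Nz:2d}  <u^2>={u.var():.2f}")
# <u^2> grows linearly with Nz ~ ln(delta/z), i.e., logarithmically in z
```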

  6. Incorrect modeling of the failure process of minimally repaired systems under random conditions: The effect on the maintenance costs

    International Nuclear Information System (INIS)

    Pulcini, Gianpaolo

    2015-01-01

    This note investigates the effect of incorrectly modeling the failure process of minimally repaired systems that operate under random environmental conditions on the costs of periodic replacement maintenance. The motivation for this note is a recently published paper in which a wrong formulation of the expected cost per unit time under a periodic replacement policy is obtained. This wrong formulation is due to the incorrect assumption that the intensity function of minimally repaired systems that operate under random conditions has the same functional form as the failure rate of the first failure time, which produced an incorrect optimization of the replacement maintenance. Thus, in this note the conceptual differences between the intensity function and the failure rate of the first failure time are first highlighted. Then, the correct expressions for the expected cost and the optimal replacement period are provided. Finally, a real application is used to measure how severe the economic consequences of incorrectly modeling the failure process can be.

  7. IMAGE SEGMENTATION BASED ON MARKOV RANDOM FIELD AND WATERSHED TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    This paper presents a method that incorporates Markov random field (MRF), watershed segmentation and merging techniques for performing image segmentation and edge detection tasks. The MRF is used to obtain an initial estimate of the regions in the image under processing, where in the MRF model the gray level x at pixel location i in an image X depends on the gray levels of neighboring pixels. The process needs an initial segmented result, which is obtained by K-means clustering with the minimum-distance rule; the region process is then modeled by the MRF to obtain an image containing regions of distinct intensity. Starting from this image, the gradient values are calculated and a watershed technique is employed. The MRF step yields an image that has distinct intensity regions and carries all the edge and region information, and the watershed algorithm then improves the segmentation by superimposing a closed and accurate boundary on each region. After all pixels of the segmented regions have been processed, a map of primitive regions with edges is generated. Finally, a merge process based on averaged mean values is employed. The final segmentation and edge detection result is one closed boundary per actual region in the image.
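
    A rough sketch of the K-means-plus-watershed portion of this pipeline using scikit-image (the MRF relaxation step is omitted, and the marker choice is a simplification of the paper's procedure):

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.cluster import KMeans
from skimage import data, filters, segmentation

img = data.camera() / 255.0                      # example grayscale image

# Step 1: initial segmentation by K-means on gray levels (minimum-distance rule)
k = 3
labels0 = (KMeans(n_clusters=k, n_init=10, random_state=0)
           .fit_predict(img.reshape(-1, 1)).reshape(img.shape))
# An MRF relaxation would refine labels0 here by also penalizing label
# disagreement between neighboring pixels; that step is omitted in this sketch.

# Step 2: gradient of the region image, then marker-based watershed
gradient = filters.sobel(labels0.astype(float))
markers, _ = ndi.label(gradient == 0)            # flat region interiors as markers
regions = segmentation.watershed(gradient, markers)

# Step 3: a merge pass would now fuse adjacent regions with close mean gray
# levels; here we only report per-region statistics.
means = ndi.mean(img, labels=regions, index=np.arange(1, regions.max() + 1))
print(regions.max(), "regions; first mean gray levels:", np.round(means[:5], 2))
```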

  8. FPGA based computation of average neutron flux and e-folding period for start-up range of reactors

    International Nuclear Information System (INIS)

    Ram, Rajit; Borkar, S.P.; Dixit, M.Y.; Das, Debashis

    2013-01-01

    Pulse processing instrumentation channels used for reactor applications play a vital role in ensuring nuclear safety in the start-up range of reactor operation, and also during fuel loading and the first approach to criticality. These channels are intended for continuous run-time computation of the equivalent reactor core neutron flux and the e-folding period. This paper focuses only on the computational part of these instrumentation channels, which is implemented in a single FPGA using a 32-bit floating point arithmetic engine. The computations of the average count rate, the log of the average count rate, the log rate and the reactor period are done in VHDL using a digital circuit realization approach. The computation of the average count rate uses a fully adaptive window size moving average method, while a Taylor series expansion for logarithms is implemented in the FPGA to compute the log of the count rate, the log rate and the reactor e-folding period. This paper describes the block diagrams of the digital logic realization in the FPGA and the advantage of the fully adaptive window size moving average technique over the conventional fixed size moving average technique for pulse processing in reactor instrumentation. (author)
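
    A software analogue of the adaptive-window rate and e-folding period computation (the window target, lag and source transient are arbitrary choices; the FPGA implements the logarithm with a Taylor series rather than a library call):

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated start-up transient: count rate e-folding with a 25 s period
dt, T_true = 0.1, 25.0
t = np.arange(0.0, 60.0, dt)
counts = rng.poisson(50.0 * np.exp(t / T_true) * dt)

# Fully adaptive moving average: the window grows until it holds ~200 counts,
# keeping the relative statistical error roughly constant across decades of flux
rates = np.empty_like(t)
for i in range(len(counts)):
    w = 1
    while w <= i and counts[i - w + 1:i + 1].sum() < 200:
        w += 1
    rates[i] = counts[i - w + 1:i + 1].sum() / (w * dt)

# e-folding period T = 1 / (d ln r / dt); the log-rate slope is taken over a
# 5 s lag to tame the counting noise
lag = int(5.0 / dt)
rates = np.maximum(rates, 1e-12)        # guard against log(0) in empty windows
dlnr = (np.log(rates[lag:]) - np.log(rates[:-lag])) / (lag * dt)
print("median period estimate [s]:", round(float(np.median(1.0 / dlnr[-200:])), 1))
```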

  9. Non-classical radiation transport in random media with fluctuating densities

    International Nuclear Information System (INIS)

    Dyuldya, S.V.; Bratchenko, M.I.

    2012-01-01

    The ensemble-averaged propagation kernels of non-classical radiation transport are studied by means of the proposed application of stochastic differential equation random medium generators. It is shown that non-classical transport is favored in long-correlated, weakly fluctuating media. The developed kernel models have been implemented in GEANT4 and validated against the 'double Monte Carlo' modeling of absorption curves of disperse neutron absorbers and γ-albedos from a scatterer/absorber random mix

  10. Random graph states, maximal flow and Fuss-Catalan distributions

    International Nuclear Information System (INIS)

    Collins, Benoît; Nechita, Ion; Zyczkowski, Karol

    2010-01-01

    For any graph consisting of k vertices and m edges we construct an ensemble of random pure quantum states which describe a system composed of 2m subsystems. Each edge of the graph represents a bipartite, maximally entangled state. Each vertex represents a random unitary matrix generated according to the Haar measure, which describes the coupling between subsystems. Dividing all subsystems into two parts, one may study entanglement with respect to this partition. A general technique to derive an expression for the average entanglement entropy of random pure states associated with a given graph is presented. Our technique relies on Weingarten calculus and flow problems. We analyze the statistical properties of spectra of such random density matrices and show for which cases they are described by the free Poissonian (Marchenko-Pastur) distribution. We derive a discrete family of generalized, Fuss-Catalan distributions and explicitly construct graphs which lead to ensembles of random states characterized by these novel distributions of eigenvalues.

  11. Unraveling spurious properties of interaction networks with tailored random networks.

    Directory of Open Access Journals (Sweden)

    Stephan Bialonski

    Full Text Available We investigate interaction networks that we derive from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series with finite length and varying frequency content but from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. With the clustering coefficient and average shortest path length, we observe unweighted interaction networks, derived via thresholding the values of interdependence, to possess non-trivial topologies as compared to Erdös-Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures, known for their complex spatial and temporal dynamics, we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.

  12. The Process of Clinical Reasoning among Medical Students

    Directory of Open Access Journals (Sweden)

    Djon Machado Lopes

    Full Text Available ABSTRACT Introduction: Research in the field of medical reasoning has shed light on the reasoning process used by medical students. The strategies in this process are related to the analytical [hypothetical-deductive (HD)] and nonanalytic [scheme-inductive (SI)] systems, and pattern recognition (PR). Objective: To explore the clinical reasoning process of students from the fifth year of medical school at the end of the clinical cycle of medical internship, and to identify the strategies used in preparing diagnostic hypotheses, knowledge organization and content. Method: Qualitative research conducted in 2014 at a Brazilian public university with medical interns. Following Stamm's method, a case in internal medicine (IM) was built based on the theory of prototypes (Group 1 = 47 interns), in which the interns listed, according to their own perceptions, the signs, symptoms, syndromes, and diseases typical of internal medicine. This case was used for evaluating the clinical reasoning process of Group 2 (30 students = simple random sample), obtained with the "think aloud" process. The verbalizations were transcribed and evaluated by Bardin's thematic analysis. The content analyses were approved by two experts at the beginning and at the end of the analysis process. Results: The interns developed 164 primary and secondary hypotheses when solving the case. The SI strategy prevailed with 48.8%, followed by PR (35.4%), HD (12.2%), and mixed (1.8% each: SI + HD and HD + PR). The students built 146 distinct semantic axes, resulting in an average of 4.8 per participant. During the analysis, 438 interpretation processes were executed (average of 14.6 per participant), along with 124 combination processes (average of 4.1 per participant). Conclusions: The nonanalytic strategies prevailed, with PR being the most used in the development of primary hypotheses (46.8%) and SI in secondary hypotheses (93%). The interns showed a strong semantic network and did three and a half times more

  13. Average Throughput Performance of Myopic Policy in Energy Harvesting Wireless Sensor Networks.

    Science.gov (United States)

    Gul, Omer Melih; Demirekler, Mubeccel

    2017-09-26

    This paper considers a single-hop wireless sensor network where a fusion center collects data from M energy harvesting wireless sensors. The harvested energy is stored losslessly in an infinite-capacity battery at each sensor. In each time slot, the fusion center schedules K sensors for data transmission over K orthogonal channels. The fusion center does not have direct knowledge of the battery states of the sensors, or of the statistics of their energy harvesting processes; it only has information on the outcomes of previous transmission attempts. It is assumed that the sensors are data-backlogged, there is no battery leakage and the communication is error-free. An energy harvesting sensor can transmit data to the fusion center whenever it is scheduled, provided it has enough energy for data transmission. We investigate the average throughput of a Round-Robin-type myopic policy both analytically and numerically under an average reward (throughput) criterion. We show that the Round-Robin-type myopic policy achieves optimality for a certain class of energy harvesting processes, although it is suboptimal for a broader class of energy harvesting processes.
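
    A toy simulation of the Round-Robin-type policy under the stated assumptions (infinite lossless batteries, error-free channels; the i.i.d. energy arrival process is an invented example):

```python
import numpy as np

rng = np.random.default_rng(9)

M, K, T = 6, 2, 50_000         # sensors, channels per slot, time slots
E_TX = 1.0                     # energy needed for one transmission

battery = np.zeros(M)
throughput = 0
order = list(range(M))         # Round-Robin queue of sensor indices

for _ in range(T):
    battery += rng.poisson(0.25, M) * E_TX               # i.i.d. energy arrivals
    scheduled, order = order[:K], order[K:] + order[:K]  # next K sensors in the cycle
    for s in scheduled:
        if battery[s] >= E_TX:                           # transmit only with enough energy
            battery[s] -= E_TX
            throughput += 1

print("average throughput per slot:", throughput / T)
# With mean harvest 0.25 * M = 1.5 energy packets per slot and K = 2 channels,
# the long-run throughput is harvest-limited at about 1.5 packets/slot.
```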

  14. Micro-Texture Synthesis by Phase Randomization

    Directory of Open Access Journals (Sweden)

    Bruno Galerne

    2011-09-01

    Full Text Available This contribution is concerned with texture synthesis by example, the process of generating new texture images from a given sample. The Random Phase Noise algorithm presented here synthesizes a texture from an original image by simply randomizing its Fourier phase. It is able to reproduce textures which are characterized by their Fourier modulus, namely the random phase textures (or micro-textures).
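
    The idea is short enough to sketch in full (this is a paraphrase of the idea, not the authors' reference implementation; the phase-symmetrization detail is handled implicitly by taking the real part):

```python
import numpy as np

def random_phase_noise(img, rng=None):
    """Synthesize a micro-texture by randomizing the Fourier phase of `img`.

    The Fourier modulus, which characterizes a micro-texture, is kept;
    a fresh uniform random phase replaces the original one. Taking the
    real part after the inverse FFT enforces a real-valued image, which
    implicitly symmetrizes the random phase field.
    """
    rng = rng or np.random.default_rng(0)
    mean = img.mean()
    F = np.fft.fft2(img - mean)                      # spectrum of the centered image
    phase = np.exp(2j * np.pi * rng.random(img.shape))
    return np.fft.ifft2(np.abs(F) * phase).real + mean

# usage: new_texture = random_phase_noise(old_texture_as_2d_float_array)
```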

  15. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  16. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

    Full Text Available Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC) task, in which they viewed pairs of faces (both individual faces, or one individual face and one average face); viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more the individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  17. Evaluations of average level spacings

    International Nuclear Information System (INIS)

    Liou, H.I.

    1980-01-01

    The average level spacing for highly excited nuclei is a key parameter in cross section formulas based on statistical nuclear models, and also plays an important role in determining many physics quantities. Various methods to evaluate average level spacings are reviewed. Because of the finite experimental resolution, detecting a complete sequence of levels without mixing other parities is extremely difficult, if not totally impossible. Most methods derive the average level spacings by applying a fit, with different degrees of generality, to the truncated Porter-Thomas distribution for reduced neutron widths. A method that tests both the distributions of level widths and positions is discussed extensively with an example of ¹⁶⁸Er data. 19 figures, 2 tables

  18. Bayesian model averaging in vector autoregressive processes with an investigation of stability of the US great ratios and risk of a liquidity trap in the USA, UK and Japan

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2007-01-01

    A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent

  19. Spectra of random networks in the weak clustering regime

    Science.gov (United States)

    Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen; Rodrigues, Francisco A.

    2018-03-01

    The asymptotic behavior of dynamical processes in networks can be expressed as a function of spectral properties of the corresponding adjacency and Laplacian matrices. Although many theoretical results are known for the spectra of traditional configuration models, networks generated through these models fail to describe many topological features of real-world networks, in particular non-null values of the clustering coefficient. Here we study the effects of cycles of order three (triangles) on network spectra. Using recent advances in random matrix theory, we determine the spectral distribution of the network adjacency matrix as a function of the average number of triangles attached to each node, for networks without modular structure and degree-degree correlations. Implications for network dynamics are discussed. Our findings can shed light on the study of how particular kinds of subgraphs influence network dynamics.
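
    The cubic trace identity behind this effect is easy to check numerically: since tr(A³) = Σ λᵢ³ = 6 × (number of triangles), triangles skew the adjacency spectrum. A rough comparison follows (the clustered model below also has heterogeneous degrees, so this is only indicative, not the paper's construction):

```python
import numpy as np
import networkx as nx

n, k = 2000, 10   # nodes, target average degree

for name, G in [
    ("no clustering", nx.gnp_random_graph(n, k / n, seed=0)),
    # Holme-Kim model: preferential attachment plus a triangle-closing step
    ("with triangles", nx.powerlaw_cluster_graph(n, k // 2, p=0.9, seed=0)),
]:
    A = nx.to_numpy_array(G)
    eig = np.linalg.eigvalsh(A)
    tri = sum(nx.triangles(G).values()) / (3 * n)   # average triangles per node
    skew = ((eig - eig.mean()) ** 3).mean() / eig.std() ** 3
    print(f"{name:15s} triangles/node={tri:.2f}  eigenvalue skewness={skew:+.2f}")
```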

  20. Thermal effects in high average power optical parametric amplifiers.

    Science.gov (United States)

    Rothhardt, Jan; Demmler, Stefan; Hädrich, Steffen; Peschel, Thomas; Limpert, Jens; Tünnermann, Andreas

    2013-03-01

    Optical parametric amplifiers (OPAs) have the reputation of being average power scalable due to the instantaneous nature of the parametric process (zero quantum defect). This Letter reveals serious challenges originating from thermal load in the nonlinear crystal caused by absorption. We investigate these thermal effects in high average power OPAs based on beta barium borate. Absorption of both pump and idler waves is identified to contribute significantly to heating of the nonlinear crystal. A temperature increase of up to 148 K with respect to the environment is observed and mechanical tensile stress up to 40 MPa is found, indicating a high risk of crystal fracture under such conditions. By restricting the idler to a wavelength range far from absorption bands and removing the crystal coating we reduce the peak temperature and the resulting temperature gradient significantly. Guidelines for further power scaling of OPAs and other nonlinear devices are given.

  1. Inter simple sequence repeats (ISSR) and random amplified ...

    African Journals Online (AJOL)

    21 of 30 random amplified polymorphic DNA (RAPD) primers produced 220 reproducible bands with an average of 10.47 bands per primer and 80.12% polymorphism. The OPR02 primer showed the highest number of effective alleles (Ne), Shannon index (I) and genetic diversity (H). Some of the cultivars had specific bands, ...

  2. Reduced fractal model for quantitative analysis of averaged micromotions in mesoscale: Characterization of blow-like signals

    International Nuclear Information System (INIS)

    Nigmatullin, Raoul R.; Toboev, Vyacheslav A.; Lino, Paolo; Maione, Guido

    2015-01-01

    Highlights: •A new approach describes fractal-branched systems with long-range fluctuations. •A reduced fractal model is proposed. •The approach is used to characterize blow-like signals. •The approach is tested on data from different fields. -- Abstract: It has been shown that many micromotions in the mesoscale region are averaged in accordance with their self-similar (geometrical/dynamical) structure. This distinctive feature helps to reduce a wide set of different micromotions describing relaxation/exchange processes to an averaged collective motion, expressed mathematically in a rather general form. This reduction opens new perspectives in description of different blow-like signals (BLS) in many complex systems. The main characteristic of these signals is a finite duration also when the generalized reduced function is used for their quantitative fitting. As an example, we describe quantitatively available signals that are generated by bronchial asthmatic people, songs by queen bees, and car engine valves operating in the idling regime. We develop a special treatment procedure based on the eigen-coordinates (ECs) method that allows to justify the generalized reduced fractal model (RFM) for description of BLS that can propagate in different complex systems. The obtained describing function is based on the self-similar properties of the different considered micromotions. This kind of cooperative model is proposed here for the first time. In spite of the fact that the nature of the dynamic processes that take place in fractal structure on a mesoscale level is not well understood, the parameters of the RFM fitting function can be used for construction of calibration curves, affected by various external/random factors. Then, the calculated set of the fitting parameters of these calibration curves can characterize BLS of different complex systems affected by those factors. Though the method to construct and analyze the calibration curves goes beyond the scope

  3. Strengthened glass for high average power laser applications

    International Nuclear Information System (INIS)

    Cerqua, K.A.; Lindquist, A.; Jacobs, S.D.; Lambropoulos, J.

    1987-01-01

    Recent advancements in high repetition rate and high average power laser systems have put increasing demands on the development of improved solid state laser materials with high thermal loading capabilities. The authors have developed a process for strengthening a commercially available Nd-doped phosphate glass utilizing an ion-exchange process. Results of thermal loading fracture tests on moderate-size (160 x 15 x 8 mm) glass slabs have shown a 6-fold improvement in power loading capability for strengthened samples over unstrengthened slabs. Fractographic analysis of post-fracture samples has given insight into the mechanism of fracture in both unstrengthened and strengthened samples. Additional stress analysis calculations have supported these findings. In addition to processing the glass surface during strengthening in a manner which preserves its post-treatment optical quality, the authors have developed an in-house optical fabrication technique utilizing acid polishing to minimize subsurface damage in samples prior to exchange treatment. Finally, extension of the strengthening process to alternate geometries of laser glass has produced encouraging results, which may expand the potential of strengthened glass in laser systems, making it an exciting prospect for many applications

  4. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

    Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar (Experiment 1) and famous voices (Experiment 2)) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several "speaker averages," created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by familiar speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averaging in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.

  5. Averaged RMHD equations

    International Nuclear Information System (INIS)

    Ichiguchi, Katsuji

    1998-01-01

    A new reduced set of resistive MHD equations is derived by averaging the full MHD equations on specified flux coordinates, which is consistent with 3D equilibria. It is confirmed that the total energy is conserved and the linearized equations for ideal modes are self-adjoint. (author)

  6. Specialized rheumatology nurse substitutes for rheumatologists in the diagnostic process of fibromyalgia: a cost-consequence analysis and a randomized controlled trial

    NARCIS (Netherlands)

    Kroese, Mariëlle E.; Severens, Johan L.; Schulpen, Guy J.; Bessems, Monique C.; Nijhuis, Frans J.; Landewé, Robert B.

    2011-01-01

    To perform a cost-consequence analysis of the substitution of specialized rheumatology nurses (SRN) for rheumatologists (RMT) in the diagnostic process of fibromyalgia (FM), using both a healthcare and societal perspective and a 9-month period. Alongside a randomized controlled trial, we measured

  7. Description of two-process surface topography

    International Nuclear Information System (INIS)

    Grabon, W; Pawlus, P

    2014-01-01

    After two machining processes, a large number of surface topography measurements were made using Talyscan 150 stylus measuring equipment. The measured samples were divided into two groups. The first group contained two-process surfaces of a random nature, while the second group contained random-deterministic textures, with random plateau parts and deterministic valley portions. For comparison, one-process surfaces were also analysed. Correlation and regression analysis was used to study the dependencies among surface texture parameters in 2D and 3D systems. As the result of this study, sets of parameters describing multi-process surface topography were obtained for two-process surfaces of random and of random-deterministic types. (papers)

  8. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for making conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially prohibitive for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
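
    The core of the ERF construction is just an arithmetic mean of the driving fields across models, taken before (rather than after) the RCM run. A minimal numpy sketch of this step, with synthetic data standing in for the GCM boundary-condition fields (the array shape and variable are illustrative assumptions):

        import numpy as np

        # Stand-in for one boundary-condition variable (e.g., temperature)
        # from M = 6 GCMs, with shape (model, time, lev, lat, lon).
        ibc_stack = np.random.rand(6, 8, 18, 90, 180)

        # One ensemble-averaged IBC set: this single field drives ONE RCM
        # run, instead of running the RCM six times and averaging outputs.
        erf_ibc = ibc_stack.mean(axis=0)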

  9. Averaging for solitons with nonlinearity management

    International Nuclear Information System (INIS)

    Pelinovsky, D.E.; Kevrekidis, P.G.; Frantzeskakis, D.J.

    2003-01-01

    We develop an averaging method for solitons of the nonlinear Schroedinger equation with a periodically varying nonlinearity coefficient, which is used to effectively describe solitons in Bose-Einstein condensates, in the context of the recently proposed technique of Feshbach resonance management. Using the derived local averaged equation, we study matter-wave bright and dark solitons and demonstrate a very good agreement between solutions of the averaged and full equations

  10. Subjective randomness as statistical inference.

    Science.gov (United States)

    Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B

    2018-06-01

    Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. An innovative scintillation process for correcting, cooling, and reducing the randomness of waveforms

    International Nuclear Information System (INIS)

    Shen, J.

    1991-01-01

    Research activities were concentrated on an innovative scintillation technique for high-energy collider detection. Heretofore, scintillation waveform data of high-energy physics events have been problematically random. This presents a bottleneck in data flow for the next generation of detectors for proton colliders like the SSC or LHC. The prevailing problems to resolve were: additional time walk and jitter resulting from the random hitting positions of particles; increased walk and jitter caused by scintillation-photon propagation dispersion; and quantum fluctuations of luminescence. However, these became manageable once the different aspects of randomness had been clarified in increased detail. For this purpose, the three were defined as pseudorandomness, quasi-randomness, and real randomness, respectively. A unique scintillation counter incorporating long scintillators with light guides, a drift chamber, and fast discriminators plus integrators was employed to resolve the first problem, correcting time walk and reducing the additional jitter by establishing an analytical waveform description V(t,z) for a measured position z. The second problem was resolved by reducing jitter through compressing V(t,z) with a nonlinear medium, called cooling the scintillation. A resolution of the third problem was proposed by orienting molecular and polarizing scintillation through the use of intense magnetic technology, called stabilizing the waveform

  12. Scaling Argument of Anisotropic Random Walk

    International Nuclear Information System (INIS)

    Xu Bingzhen; Jin Guojun; Wang Feifeng

    2005-01-01

    In this paper, we analytically discuss the scaling properties of the average square end-to-end distance ⟨R²⟩ for an anisotropic random walk in D-dimensional space (D ≥ 2), and the returning probability P_n(r_0) for the walker into a certain neighborhood of the origin. We not only give the calculating formulas for ⟨R²⟩ and P_n(r_0), but also point out that if there is a symmetric axis for the distribution of the probability density of a single step displacement, we always obtain ⟨R_{⊥,n}²⟩ ∼ n, where ⊥ refers to the projections of the displacement perpendicular to each symmetric axis of the walk; in D-dimensional space with D symmetric axes perpendicular to each other, we always have ⟨R_n²⟩ ∼ n and the random walk will be like a purely random motion; if the number of mutually perpendicular symmetric axes is smaller than the dimension of the space, we must have ⟨R_n²⟩ ∼ n² for very large n and the walk will be like a ballistic motion. It is worthwhile to point out that unlike the isotropic random walk in one and two dimensions, which is certain to return to the neighborhood of the origin, generally there is only a nonzero probability for the anisotropic random walker in two dimensions to return to the neighborhood.

  13. Stochastic Averaging Principle for Spatial Birth-and-Death Evolutions in the Continuum

    Science.gov (United States)

    Friesen, Martin; Kondratiev, Yuri

    2018-06-01

    We study a spatial birth-and-death process on the phase space of locally finite configurations Γ^+ × Γ^- over R^d. Dynamics is described by a non-equilibrium evolution of states obtained from the Fokker-Planck equation and associated with the Markov operator L^+(γ^-) + (1/ɛ)L^-, ɛ > 0. Here L^- describes the environment process on Γ^- and L^+(γ^-) describes the system process on Γ^+, where γ^- indicates that the corresponding birth-and-death rates depend on another locally finite configuration γ^- ∈ Γ^-. We prove that, for a certain class of birth-and-death rates, the corresponding Fokker-Planck equation is well-posed, i.e. there exists a unique evolution of states μ_t^ɛ on Γ^+ × Γ^-. Moreover, we give a sufficient condition such that the environment is ergodic with exponential rate. Let μ_inv be the invariant measure for the environment process on Γ^-. In the main part of this work we establish the stochastic averaging principle, i.e. we prove that the marginal of μ_t^ɛ onto Γ^+ converges weakly to an evolution of states on Γ^+ associated with the averaged Markov birth-and-death operator \overline{L} = ∫_{Γ^-} L^+(γ^-) dμ_inv(γ^-).

  14. Using Bayes Model Averaging for Wind Power Forecasts

    Science.gov (United States)

    Preede Revheim, Pål; Beyer, Hans Georg

    2014-05-01

    For operational purposes, predictions of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is, however, well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might nevertheless be the case that some sites tend to reflect the total output of the region more accurately, either in general or for certain wind directions. It will then be of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds. However, when the procedure was applied to wind power it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are met through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data
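
    To make the mixture structure concrete, the following minimal sketch evaluates a BMA predictive PDF as a weighted average of member PDFs. Gaussian components, fixed weights, and the numbers used are illustrative assumptions; the cited work actually fits component distributions and weights by maximum likelihood over a training period:

        import numpy as np
        from scipy.stats import norm

        def bma_pdf(x, member_forecasts, weights, sigma):
            """BMA predictive PDF: a weighted average of the ensemble
            members' PDFs, each centred on that member's forecast."""
            components = [w * norm.pdf(x, loc=f, scale=sigma)
                          for w, f in zip(weights, member_forecasts)]
            return np.sum(components, axis=0)

        x = np.linspace(0, 20, 201)      # wind-speed grid (m/s)
        forecasts = [7.2, 8.5, 6.9]      # three ensemble members
        weights = [0.5, 0.3, 0.2]        # posterior model weights (sum to 1)
        pdf = bma_pdf(x, forecasts, weights, sigma=1.5)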

  15. Choosing between Higher Moment Maximum Entropy Models and Its Application to Homogeneous Point Processes with Random Effects

    Directory of Open Access Journals (Sweden)

    Lotfi Khribi

    2017-12-01

    In the Bayesian framework, the usual choice of prior in the prediction of homogeneous Poisson processes with random effects is the gamma one. Here, we propose the use of higher order maximum entropy priors. Their advantage is illustrated in a simulation study and the choice of the best order is established by two goodness-of-fit criteria: Kullback–Leibler divergence and a discrepancy measure. This procedure is illustrated on a warranty data set from the automobile industry.

  16. Path probabilities of continuous time random walks

    International Nuclear Information System (INIS)

    Eule, Stephan; Friedrich, Rudolf

    2014-01-01

    Employing the path integral formulation of a broad class of anomalous diffusion processes, we derive exact relations for the path probability densities of these processes. In particular, we obtain a closed analytical solution for the path probability distribution of a continuous time random walk (CTRW) process. This solution is given in terms of the waiting time distribution and the short-time propagator of the corresponding random walk, as the solution of a Dyson equation. Applying our analytical solution we derive generalized Feynman–Kac formulae. (paper)
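
    For readers who want to experiment, a CTRW sample path is straightforward to generate: draw a waiting time, then a jump, and repeat. The sketch below uses Pareto-distributed waiting times and Gaussian jumps purely as illustrative choices; the results above hold for general waiting time distributions:

        import numpy as np

        rng = np.random.default_rng(0)

        def ctrw_path(t_max, alpha=0.7):
            """Sample one CTRW trajectory up to time t_max.
            Heavy-tailed (Pareto) waiting times and Gaussian jumps
            are both illustrative assumptions."""
            t, x = 0.0, 0.0
            times, positions = [0.0], [0.0]
            while t < t_max:
                t += 1.0 + rng.pareto(alpha)   # waiting time >= 1
                x += rng.normal()              # jump length
                times.append(t)
                positions.append(x)
            return np.array(times), np.array(positions)

        times, positions = ctrw_path(t_max=1e3)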

  17. An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2001-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^(-143) for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....

  18. Pure-Triplet Scattering for Radiative Transfer in Semi-infinite Random Media with Refractive-Index Dependent Boundary

    International Nuclear Information System (INIS)

    Sallah, M.; Degheidy, A.R.

    2013-01-01

    A radiative transfer problem for pure-triplet scattering in a participating half-space random medium is proposed. The medium is assumed to be random with binary Markovian mixtures (e.g., radiative transfer in astrophysical contexts, where clouds and clear sky play the role of a two-phase medium) described by Markovian statistics. The specular reflectivity of the boundary is angular-dependent, described by the Fresnel reflection probability function. The problem is solved first in the deterministic case, and then the solution is averaged using the formalism developed by Levermore and Pomraning to treat particle transport problems in statistical mixtures. Some physical quantities of interest, such as the reflectivity of the boundary, the average radiant energy, and the average net flux, are computed for various values of the refractive index of the boundary

  19. 40 CFR 600.510-12 - Calculation of average fuel economy and average carbon-related exhaust emissions.

    Science.gov (United States)

    2010-07-01

    ... and average carbon-related exhaust emissions. 600.510-12 Section 600.510-12 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF... Transportation. (iv) [Reserved] (2) Average carbon-related exhaust emissions will be calculated to the nearest...

  1. Random walks on the braid group B3 and magnetic translations in hyperbolic geometry

    International Nuclear Information System (INIS)

    Voituriez, Raphaeel

    2002-01-01

    We study random walks on the three-strand braid group B_3, and in particular compute the drift, or average topological complexity, of a random braid, as well as the probability of trivial entanglement. These results involve the study of magnetic random walks on hyperbolic graphs (the hyperbolic Harper-Hofstadter problem), which enables us to build a faithful representation of B_3 as generalized magnetic translation operators for the problem of a quantum particle on the hyperbolic plane

  2. Small Acute Benefits of 4 Weeks Processing Speed Training Games on Processing Speed and Inhibition Performance and Depressive Mood in the Healthy Elderly People: Evidence from a Randomized Control Trial.

    Science.gov (United States)

    Nouchi, Rui; Saito, Toshiki; Nouchi, Haruka; Kawashima, Ryuta

    2016-01-01

    Background: Processing speed training using a 1-year intervention period improves cognitive functions and emotional states of elderly people. Nevertheless, it remains unclear whether short-term processing speed training, such as 4 weeks, can benefit elderly people. This study was designed to investigate the effects of 4 weeks of processing speed training on cognitive functions and emotional states of elderly people. Methods: We used a single-blinded randomized control trial (RCT). Seventy-two older adults were randomly assigned to two groups: a processing speed training game (PSTG) group and a knowledge quiz training game (KQTG) group, an active control group. In the PSTG group, participants were asked to play PSTG (12 processing speed games) for 15 min, in five sessions per week, for 4 weeks. In the KQTG group, participants were asked to play KQTG (four knowledge quizzes) for 15 min, in five sessions per week, for 4 weeks. We measured several cognitive functions and emotional states before and after the 4-week intervention period. Results: Our results revealed that PSTG improved performance in processing speed and inhibition compared to KQTG, but did not improve performance in reasoning, shifting, short-term/working memory, and episodic memory. Moreover, PSTG reduced the depressive mood score, as measured by the Profile of Mood States, compared to KQTG during the 4-week intervention period, but did not change other emotional measures. Discussion: This RCT provides the first scientific evidence of small acute benefits of 4-week PSTG on processing speed, inhibition, and depressive mood in healthy elderly people. We discuss possible mechanisms for the improvements in processing speed and inhibition and the reduction of depressive mood. Trial registration: This trial was registered in the University Hospital Medical Information Network Clinical Trials Registry (UMIN000022250).

  3. Average bioequivalence of single 500 mg doses of two oral formulations of levofloxacin: a randomized, open-label, two-period crossover study in healthy adult Brazilian volunteers

    Directory of Open Access Journals (Sweden)

    Eunice Kazue Kano

    2015-03-01

    Average bioequivalence of two 500 mg levofloxacin formulations available in Brazil, Tavanic® (Sanofi-Aventis Farmacêutica Ltda, Brazil, reference product) and Levaquin® (Janssen-Cilag Farmacêutica Ltda, Brazil, test product), was evaluated by means of a randomized, open-label, 2-way crossover study performed in 26 healthy Brazilian volunteers under fasting conditions. A single dose of 500 mg levofloxacin tablets was orally administered, and blood samples were collected over a period of 48 hours. Levofloxacin plasma concentrations were determined using a validated HPLC method. Pharmacokinetic parameters Cmax, Tmax, Kel, T1/2el, AUC0-t and AUC0-inf were calculated using noncompartmental analysis. Bioequivalence was determined by calculating 90% confidence intervals (90% CI) for the ratio of Cmax, AUC0-t and AUC0-inf values for the test and reference products, using logarithmically transformed data. Tolerability was assessed by monitoring vital signs and laboratory analysis results, by subject interviews and by spontaneous reports of adverse events. The 90% CIs for Cmax, AUC0-t and AUC0-inf were 92.1%-108.2%, 90.7%-98.0%, and 94.8%-100.0%, respectively. Observed adverse events were nausea and headache. It was concluded that Tavanic® and Levaquin® are bioequivalent, since the 90% CIs lie within the 80%-125% interval proposed by regulatory agencies.
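
    For context, the average-bioequivalence decision above comes down to a confidence interval on log-transformed data. The sketch below computes a 90% CI for the test/reference geometric mean ratio from paired subject data; it ignores period and sequence effects for brevity (the regulatory analysis uses a crossover ANOVA), and the numbers are invented:

        import numpy as np
        from scipy import stats

        def be_90ci(test, ref):
            """90% CI (in %) for the test/reference geometric mean ratio."""
            d = np.log(test) - np.log(ref)             # within-subject log ratios
            n = len(d)
            se = d.std(ddof=1) / np.sqrt(n)
            t90 = stats.t.ppf(0.95, df=n - 1)          # two-sided 90% interval
            lo, hi = d.mean() - t90 * se, d.mean() + t90 * se
            return np.exp(lo) * 100, np.exp(hi) * 100  # back-transform

        test = np.array([18.1, 22.4, 19.8, 25.0, 21.2, 20.3])  # e.g., AUC0-t
        ref  = np.array([19.0, 21.0, 20.5, 24.1, 22.0, 19.7])
        print(be_90ci(test, ref))   # bioequivalent if inside (80, 125)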

  4. General theory for calculating disorder-averaged Green's function correlators within the coherent potential approximation

    Science.gov (United States)

    Zhou, Chenyi; Guo, Hong

    2017-01-01

    We report a diagrammatic method to solve the general problem of calculating configurationally averaged Green's function correlators that appear in quantum transport theory for nanostructures containing disorder. The theory treats both equilibrium and nonequilibrium quantum statistics on an equal footing. Since random impurity scattering is a problem that cannot be solved exactly in a perturbative approach, we combine our diagrammatic method with the coherent potential approximation (CPA) so that a reliable closed-form solution can be obtained. Our theory not only ensures the internal consistency of the diagrams derived at different levels of the correlators but also satisfies a set of Ward-like identities that corroborate the conserving consistency of transport calculations within the formalism. The theory is applied to calculate the quantum transport properties such as average ac conductance and transmission moments of a disordered tight-binding model, and results are numerically verified to high precision by comparing to the exact solutions obtained from enumerating all possible disorder configurations. Our formalism can be employed to predict transport properties of a wide variety of physical systems where disorder scattering is important.

  5. Decay of random correlation functions for unimodal maps

    Science.gov (United States)

    Baladi, Viviane; Benedicks, Michael; Maume-Deschamps, Véronique

    2000-10-01

    Since the pioneering results of Jakobson and subsequent work by Benedicks-Carleson and others, it is known that quadratic maps f_a(x) = a − x² admit a unique absolutely continuous invariant measure for a positive measure set of parameters a. For topologically mixing f_a, Young and Keller-Nowicki independently proved exponential decay of correlation functions for this a.c.i.m. and smooth observables. We consider random compositions of small perturbations f + ω_t, with f = f_a or another unimodal map satisfying certain nonuniform hyperbolicity axioms, and ω_t chosen independently and identically in [−ɛ, ɛ]. Baladi-Viana showed exponential mixing of the associated Markov chain, i.e., averaging over all random itineraries. We obtain stretched exponential bounds for the random correlation functions of Lipschitz observables for the sample measure μ_ω of almost every itinerary.

  6. A time-averaged cosmic ray propagation theory

    International Nuclear Information System (INIS)

    Klimas, A.J.

    1975-01-01

    An argument is presented, which casts doubt on our ability to choose an appropriate magnetic field ensemble for computing the average behavior of cosmic ray particles. An alternate procedure, using time-averages rather than ensemble-averages, is presented. (orig.)

  7. Application of NMR circuit for superconducting magnet using signal averaging

    International Nuclear Information System (INIS)

    Yamada, R.; Ishimoto, H.; Shea, M.F.; Schmidt, E.E.; Borer, K.

    1977-01-01

    An NMR circuit was used to measure the absolute field values of Fermilab Energy Doubler magnets up to 44 kG. A signal averaging method to improve the S/N ratio was implemented by means of a Tektronix Digital Processing Oscilloscope, followed by the development of an inexpensive microprocessor-based system contained in a NIM module. Some of the data obtained from measuring two superconducting dipole magnets are presented
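
    The benefit of the averaging step is easy to see numerically: coherently averaging N repeated records of the same waveform leaves the signal intact while uncorrelated noise shrinks roughly as 1/√N. A self-contained sketch with a synthetic tone (all parameters are illustrative, not the Doubler measurement settings):

        import numpy as np

        rng = np.random.default_rng(1)
        t = np.linspace(0, 1e-3, 1000)            # 1 ms record
        signal = np.sin(2 * np.pi * 5e3 * t)      # synthetic 5 kHz tone

        def averaged_acquisition(n_sweeps, noise_rms=2.0):
            """Average n_sweeps noisy records of the same signal."""
            sweeps = signal + rng.normal(0, noise_rms, (n_sweeps, t.size))
            return sweeps.mean(axis=0)

        for n in (1, 16, 256):
            resid = averaged_acquisition(n) - signal
            print(n, resid.std())   # residual noise ~ noise_rms / sqrt(n)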

  8. Improving consensus structure by eliminating averaging artifacts

    Directory of Open Access Journals (Sweden)

    KC Dukka B

    2009-03-01

    Background: Common structural biology methods (i.e., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of the 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results: Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo-energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in lower RMSD to the native protein versus the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with much fewer clashes. Conclusion: The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which
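
    The refinement loop described above is a standard Metropolis scheme with a harmonic pull toward the averaged coordinates plus a geometry penalty. The sketch below is our reading of that idea, not the authors' implementation; the penalty weights, move size and temperature are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(2)

        def mc_refine(start, target, bond_len=3.8, k=1.0,
                      steps=20000, temp=0.1):
            """Drive `start` (N x 3 C-alpha coords) toward `target`
            while penalising unphysical consecutive-residue distances."""
            def energy(x):
                harm = k * np.sum((x - target) ** 2)          # pull to average
                bonds = np.linalg.norm(np.diff(x, axis=0), axis=1)
                geom = 100.0 * np.sum((bonds - bond_len) ** 2)
                return harm + geom
            x, e = start.copy(), energy(start)
            for _ in range(steps):
                trial = x.copy()
                trial[rng.integers(len(x))] += rng.normal(0, 0.1, 3)
                e_trial = energy(trial)
                if e_trial < e or rng.random() < np.exp((e - e_trial) / temp):
                    x, e = trial, e_trial                     # Metropolis accept
            return x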

  9. Random isotropic one-dimensional XY-model

    Science.gov (United States)

    Gonçalves, L. L.; Vieira, A. P.

    1998-01-01

    The 1D isotropic s = 1/2 XY-model (N sites), with random exchange interaction in a transverse random field is considered. The random variables satisfy bimodal quenched distributions. The solution is obtained by using the Jordan-Wigner fermionization and a canonical transformation, reducing the problem to diagonalizing an N × N matrix, corresponding to a system of N noninteracting fermions. The calculations are performed numerically for N = 1000, and the field-induced magnetization at T = 0 is obtained by averaging the results for the different samples. For the dilute case, in the uniform field limit, the magnetization exhibits various discontinuities, which are the consequence of the existence of disconnected finite clusters distributed along the chain. Also in this limit, for finite exchange constants J_A and J_B, as the probability of J_A varies from one to zero, the saturation field is seen to vary from Γ_A to Γ_B, where Γ_A (Γ_B) is the value of the saturation field for the pure case with exchange constant equal to J_A (J_B).

  10. Weak convergence to isotropic complex symmetric α-stable random measure.

    Science.gov (United States)

    Wang, Jun; Li, Yunmeng; Sang, Liheng

    2017-01-01

    In this paper, we prove that an isotropic complex symmetric α-stable random measure can be approximated by a complex process constructed by integrals based on the Poisson process with random intensity.

  11. Random matrix theory with an external source

    CERN Document Server

    Brézin, Edouard

    2016-01-01

    This is the first book to show that the theory of the Gaussian random matrix is essential to understanding universal correlations with random fluctuations, and to demonstrate that it is useful for evaluating topological universal quantities. We consider Gaussian random matrix models in the presence of a deterministic matrix source. In such models the correlation functions are known exactly for an arbitrary source and for any size of the matrices. The freedom given by the external source allows for various tunings to different classes of universality. The main interest is to use this freedom to compute various topological invariants for surfaces, such as the intersection numbers for curves drawn on a surface of given genus with marked points, Euler characteristics, and the Gromov–Witten invariants. A remarkable duality for the average of characteristic polynomials is essential for obtaining such topological invariants. The analysis is extended to nonorientable surfaces and to surfaces with boundaries.

  12. The impact of randomness on the distribution of wealth: Some economic aspects of the Wright-Fisher diffusion process

    Science.gov (United States)

    Bouleau, Nicolas; Chorro, Christophe

    2017-08-01

    In this paper we consider some elementary and fair zero-sum games of chance in order to study the impact of random effects on the wealth distribution of N interacting players. Even if an exhaustive analytical study of such games between many players may be tricky, numerical experiments highlight interesting asymptotic properties. In particular, we emphasize that randomness plays a key role in concentrating wealth in the extreme, in the hands of a single player. From a mathematical perspective, we adopt diffusion limits for small and high-frequency transactions, which are otherwise extensively used in population genetics. Finally, the impact of small tax rates on the preceding dynamics is discussed for several regulation mechanisms. We show that taxation of income is not sufficient to overcome this extreme concentration process, in contrast to uniform taxation of capital, which stabilizes the economy and prevents agents from being ruined.
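
    A toy version of such a game is easy to simulate and already shows the drift toward concentration: every bet is fair, yet the wealth spread grows steadily and, run long enough, ends with the money in very few hands. The rules and parameters below are our own illustrative choices, not the authors' exact games:

        import numpy as np

        rng = np.random.default_rng(3)

        def play(n_players=100, wealth0=100.0, rounds=200_000, stake=1.0):
            """Repeated fair coin-flip bets between random solvent pairs."""
            w = np.full(n_players, wealth0)
            for _ in range(rounds):
                solvent = np.flatnonzero(w >= stake)
                if solvent.size < 2:
                    break                    # one player holds everything
                i, j = rng.choice(solvent, 2, replace=False)
                if rng.random() < 0.5:       # fair: zero expected gain for both
                    w[i] += stake; w[j] -= stake
                else:
                    w[i] -= stake; w[j] += stake
            return w

        w = play()
        print(np.sort(w)[-5:], (w < 1).mean())  # top-5 wealth, ruined fraction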

  13. Method of model reduction and multifidelity models for solute transport in random layered porous media

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    2017-09-01

    This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent the (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with the correlation length λ, reaches a maximum, and decreases to zero at infinity. The maximum enhancement is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to the flow.

  14. A Solution Method for Linear and Geometrically Nonlinear MDOF Systems with Random Properties subject to Random Excitation

    DEFF Research Database (Denmark)

    Micaletti, R. C.; Cakmak, A. S.; Nielsen, Søren R. K.

    A method for computing the lower-order moments of randomly-excited multi-degree-of-freedom (MDOF) systems with random structural properties is proposed. The method is grounded in the techniques of stochastic calculus, utilizing a Markov diffusion process to model the structural system with random structural properties. The resulting state-space formulation is a system of ordinary stochastic differential equations with random coefficients and deterministic initial conditions, which are subsequently transformed into ordinary stochastic differential equations with deterministic coefficients and random initial conditions. This transformation facilitates the derivation of differential equations which govern the evolution of the unconditional statistical moments of response. Primary consideration is given to linear systems and systems with odd polynomial nonlinearities, for in these cases...

  15. Random walks and diffusion on networks

    Science.gov (United States)

    Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud

    2017-11-01

    Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
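
    As a concrete instance of the node-ranking application mentioned above, the sketch below computes PageRank as the stationary distribution of a discrete-time random walk with uniform teleportation, via power iteration. The toy adjacency matrix and damping value are illustrative:

        import numpy as np

        def pagerank(A, d=0.85, tol=1e-10):
            """Stationary distribution of a walk that follows an out-edge
            with probability d and teleports uniformly with 1 - d."""
            n = A.shape[0]
            out = A.sum(axis=1, keepdims=True)
            safe_out = np.where(out > 0, out, 1.0)
            P = np.where(out > 0, A / safe_out, 1.0 / n)  # dangling -> uniform
            r = np.full(n, 1.0 / n)
            while True:
                r_next = (1 - d) / n + d * (r @ P)
                if np.abs(r_next - r).sum() < tol:
                    return r_next
                r = r_next

        A = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [0, 1, 0]], dtype=float)   # toy directed graph
        print(pagerank(A))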

  16. A Method of Erasing Data Using Random Number Generators

    OpenAIRE

    井上,正人

    2012-01-01

    Erasing data is an indispensable step in the disposal of computers or external storage media. Apart from physical destruction, erasing data means writing random information over entire disk drives or media. We propose a method which erases data safely using random number generators. These random number generators create true random numbers based on quantum processes.
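
    In spirit, the overwrite step looks like the sketch below, which makes one pass of operating-system random bytes over a file's contents (os.urandom stands in here for the quantum true-random generators the record refers to; secure erasure of real devices involves more than a single pass and device-level commands):

        import os

        def overwrite_with_random(path, chunk=1 << 20):
            """One pass of random bytes over an existing file's contents."""
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                written = 0
                while written < size:
                    n = min(chunk, size - written)
                    f.write(os.urandom(n))   # RNG output replaces old data
                    written += n
                f.flush()
                os.fsync(f.fileno())         # push to stable storage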

  17. Effect of texture randomization on the slip and interfacial robustness in turbulent flows over superhydrophobic surfaces

    Science.gov (United States)

    Seo, Jongmin; Mani, Ali

    2018-04-01

    Superhydrophobic surfaces demonstrate promising potential for skin-friction reduction in naval and hydrodynamic applications. Recent developments of superhydrophobic surfaces aiming for scalable applications use random distributions of roughness, such as spray-coated and etched surfaces. However, most previous analyses of the interaction between flows and superhydrophobic surfaces studied periodic geometries that are economically feasible only in laboratory-scale experiments. In order to assess the drag-reduction effectiveness as well as the interfacial robustness of superhydrophobic surfaces with randomly distributed textures, we conduct direct numerical simulations of turbulent flows over randomly patterned interfaces considering a range of texture widths w+ ≈ 4-26 and solid fractions φs = 11%-25%. Slip and no-slip boundary conditions are implemented in a pattern, modeling the presence of gas-liquid interfaces and solid elements. Our results indicate that the slip of randomly distributed textures under turbulent flows is about 30% less than that of surfaces with aligned features of the same size. In the small-texture-size limit w+ ≈ 4, the slip length of the randomly distributed textures in turbulent flows is well described by a previously introduced Stokes-flow solution for randomly distributed shear-free holes. By comparing DNS results for patterned slip and no-slip boundaries against the corresponding homogenized slip-length boundary conditions, we show that turbulent flows over randomly distributed posts can be represented by an isotropic slip length in the streamwise and spanwise directions. The average pressure fluctuation on a gas pocket is similar to that of aligned features with the same texture size and gas fraction, but the maximum interface deformation at the leading edge of the roughness element is about twice as large when the textures are randomly distributed. The presented analyses provide insights on implications of texture randomness on drag

  18. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Demosthenous, Milton; Manos, George C.

    1994-01-01

    The aim of this paper is to investigate the possibility of estimating an average damping parameter for a rocking system due to impact, the so-called coefficient of restitution, from the random response, i.e. when the loads are random and unknown, and the response is measured. The objective is to obtain an estimate of the free rocking response from the measured random response using the Random Decrement (RDD) technique, and then to estimate the coefficient of restitution from this free response estimate. In the paper this approach is investigated by simulating the response of a single degree
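
    The RDD step itself is compact: segments of the measured response that begin whenever the signal up-crosses a trigger level are averaged, so the random-input contribution cancels and an estimate of the free response remains. A minimal sketch (the trigger choice and segment length are illustrative):

        import numpy as np

        def random_decrement(y, level, seg_len):
            """Average response segments starting at up-crossings of `level`."""
            idx = np.flatnonzero((y[:-1] < level) & (y[1:] >= level))
            idx = idx[idx + seg_len < len(y)]
            segs = np.stack([y[i:i + seg_len] for i in idx])
            return segs.mean(axis=0)   # random part averages out

        # usage (y is the measured response, sampled uniformly):
        # free_decay = random_decrement(y, level=y.std(), seg_len=512)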

  19. Image Denoising Using Interquartile Range Filter with Local Averaging

    OpenAIRE

    Jassim, Firas Ajil

    2013-01-01

    Image denoising is one of the fundamental problems in image processing. In this paper, a novel approach to suppressing noise in an image is presented, based on the interquartile range (IQR), one of the statistical methods used to detect outliers in a dataset. A window of size k×k was implemented to support the IQR filter. Each pixel outside the IQR range of the k×k window is treated as a noisy pixel. The estimates of the noisy pixels were obtained by local averaging. The essential...
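
    A direct implementation of that recipe might look as follows; the window size and the 1.5 fence factor are the usual Tukey defaults, assumed here rather than taken from the paper (the loop is written for clarity, not speed):

        import numpy as np

        def iqr_denoise(img, k=3, fence=1.5):
            """Replace pixels outside the local IQR fences by the mean
            of the in-range pixels of their k x k neighbourhood."""
            pad = k // 2
            padded = np.pad(img.astype(float), pad, mode="reflect")
            out = img.astype(float).copy()
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    win = padded[i:i + k, j:j + k]
                    q1, q3 = np.percentile(win, [25, 75])
                    lo = q1 - fence * (q3 - q1)
                    hi = q3 + fence * (q3 - q1)
                    if not (lo <= img[i, j] <= hi):     # noisy pixel
                        inliers = win[(win >= lo) & (win <= hi)]
                        if inliers.size:
                            out[i, j] = inliers.mean()  # local averaging
            return out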

  20. Random walk generated by random permutations of {1, 2, 3, ..., n + 1}

    International Nuclear Information System (INIS)

    Oshanin, G; Voituriez, R

    2004-01-01

    We study properties of a non-Markovian random walk X_l^(n), l = 0, 1, 2, ..., n, evolving in discrete time l on a one-dimensional lattice of integers, whose moves to the right or to the left are prescribed by the rise-and-descent sequences characterizing random permutations π of [n + 1] = {1, 2, 3, ..., n + 1}. We determine exactly the probability of finding the end-point X_n = X_n^(n) of the trajectory of such a permutation-generated random walk (PGRW) at site X, and show that in the limit n → ∞ it converges to a normal distribution with a smaller diffusion coefficient than that of the conventional Polya random walk. We formulate, as well, an auxiliary stochastic process whose distribution is identical to the distribution of the intermediate points X_l^(n), l < n, which enables us to obtain the probability measure of different excursions and to define the asymptotic distribution of the number of 'turns' of the PGRW trajectories
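
    Generating the walk is simple: given a random permutation π of {1, ..., n + 1}, step l goes right if π_{l+1} > π_l (a rise) and left otherwise (a descent). The sketch below estimates the end-point variance, which should come out below the Polya-walk value of n (the sample sizes are arbitrary):

        import numpy as np

        rng = np.random.default_rng(4)

        def pgrw_endpoint(n):
            """End-point of an n-step permutation-generated random walk."""
            pi = rng.permutation(n + 1)
            steps = np.where(np.diff(pi) > 0, 1, -1)   # rise -> +1, descent -> -1
            return steps.sum()

        n, trials = 100, 20_000
        ends = np.array([pgrw_endpoint(n) for _ in range(trials)])
        print(ends.var(), "vs Polya variance", n)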

  1. Risk Stratification and Shared Decision Making for Colorectal Cancer Screening: A Randomized Controlled Trial.

    Science.gov (United States)

    Schroy, Paul C; Duhovic, Emir; Chen, Clara A; Heeren, Timothy C; Lopez, William; Apodaca, Danielle L; Wong, John B

    2016-05-01

    Eliciting patient preferences within the context of shared decision making has been advocated for colorectal cancer (CRC) screening, yet providers often fail to comply with patient preferences that differ from their own. To determine whether risk stratification for advanced colorectal neoplasia (ACN) influences provider willingness to comply with patient preferences when selecting a desired CRC screening option. Randomized controlled trial. Asymptomatic, average-risk patients due for CRC screening in an urban safety-net health care setting. Patients were randomized 1:1 to a decision aid alone (n = 168) or decision aid plus risk assessment (n = 173) arm between September 2012 and September 2014. The primary outcome was concordance between patient preference and test ordered; secondary outcomes included patient satisfaction with the decision-making process, screening intentions, test completion rates, and provider satisfaction. Although providers perceived risk stratification to be useful in selecting an appropriate screening test for their average-risk patients, no significant differences in concordance were observed between the decision aid alone and decision aid plus risk assessment groups (88.1% v. 85.0%, P = 0.40) or high- and low-risk groups (84.5% v. 87.1%, P = 0.51). Concordance was highest for colonoscopy and relatively low for tests other than colonoscopy, regardless of study arm or risk group. Failure to comply with patient preferences was negatively associated with satisfaction with the decision-making process, screening intentions, and test completion rates. Limitations include the single-institution setting and the lack of provider education about the utility of risk stratification in their decision making. Providers perceived risk stratification to be useful in their decision making but often failed to comply with patient preferences for tests other than colonoscopy, even among those deemed to be at low risk of ACN. © The Author(s) 2016.

  2. Image-processing of time-averaged interface distributions representing CCFL characteristics in a large scale model of a PWR hot-leg pipe geometry

    International Nuclear Information System (INIS)

    Al Issa, Suleiman; Macián-Juan, Rafael

    2017-01-01

    Highlights: • CCFL characteristics are investigated in a PWR large-scale hot-leg pipe geometry. • Image processing of the air-water interface produced time-averaged interface distributions. • Time-averages provide a comparative method for CCFL characteristics among different studies. • CCFL correlations depend upon the range of investigated water delivery for Dh ≫ 50 mm. • 1D codes are incapable of investigating CCFL because of the lack of interface distribution. - Abstract: Countercurrent Flow Limitation (CCFL) was experimentally investigated in the 1/3.9-downscaled COLLIDER facility, with a pipe diameter of 190 mm, using air/water at atmospheric pressure. Previous investigations provided knowledge of the onset of CCFL mechanisms. In the current article, CCFL characteristics at the COLLIDER facility are measured and discussed along with time-averaged distributions of the air/water interface for a selected matrix of liquid/gas velocities. The article demonstrates the time-averaged interface as a useful method to identify CCFL characteristics at quasi-stationary flow conditions, eliminating the variations that appear in single images and showing essential comparative flow features such as: the degree of restriction at the bend, the extension and intensity of the two-phase mixing zones, and the average water level within the horizontal part and the steam generator. Consequently, it makes it possible to compare interface distributions obtained in different investigations. The distributions are also beneficial for CFD validation of CCFL, as the instantaneous chaotic gas/liquid interface is impossible to reproduce in CFD simulations. The current study shows that the final CCFL characteristics curve (and the corresponding CCFL correlation) depends upon the covered measuring range of water delivery. It also shows that the hydraulic diameter should be sufficiently larger than 50 mm in order to obtain CCFL characteristics comparable to the 1:1-scale data (namely the UPTF data). Finally

  3. The application of moving average control charts for evaluating magnetic field quality on an individual magnet basis

    International Nuclear Information System (INIS)

    Pollock, D.A.; Gunst, R.F.; Schucany, W.R.

    1994-01-01

    SSC Collider Dipole Magnet field quality specifications define limits of variation for the population mean (Systematic) and standard deviation (RMS deviation) of allowed and unallowed multipole coefficients generated by the full collection of dipole magnets throughout the Collider operating cycle. A fundamental Quality Control issue is how to determine the acceptability of individual magnets during production, in other words taken one at a time and compared to the population parameters. Provided that the normal distribution assumptions hold, the random variation of multipoles for individual magnets may be evaluated by comparing the measured results to ± 3 x RMS tolerance, centered on the design nominal. To evaluate the local and cumulative systematic variation of the magnets against the distribution tolerance, individual magnet results need to be combined with others that come before it. This paper demonstrates a Statistical Quality Control method (the Unweighted Moving Average control chart) to evaluate individual magnet performance and process stability against population tolerances. The DESY/HERA Dipole cold skew quadrupole measurements for magnets in production order are used to evaluate non-stationarity of the mean over time for the cumulative set of magnets, as well as for a moving sample
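
    The unweighted moving average chart used here amounts to a few lines of code: average the last w magnets' measurements in production order and flag excursions beyond limits tightened by √w. The window size and limits below are illustrative, not the SSC specification:

        import numpy as np

        def moving_average_chart(x, w=5, mu0=0.0, sigma0=1.0, nsig=3.0):
            """Flag points where the w-point moving average leaves the
            mu0 +/- nsig*sigma0/sqrt(w) control limits."""
            ma = np.convolve(x, np.ones(w) / w, mode="valid")
            limit = nsig * sigma0 / np.sqrt(w)
            flags = np.abs(ma - mu0) > limit
            return ma, flags

        # x would be, e.g., the cold skew quadrupole coefficients of
        # magnets in production order; mu0, sigma0 the population tolerances.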

  4. 40 CFR 76.11 - Emissions averaging.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Emissions averaging. 76.11 Section 76.11 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.11 Emissions averaging. (a) General...

  5. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    Science.gov (United States)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization Shift Keying (POLSK), and coherent optical wireless communication (coherent OWC) systems under different turbulence regimes. Further, to enhance the ASE, we have incorporated aperture-averaging effects along with the above adaptive schemes. The results indicate that the ORA scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA excels over the other modulation schemes and can achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture-averaging effect we can achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with coherent OWC modulation a favorable candidate for improving the ASE of FSO communication systems.

  6. A practical guide to averaging functions

    CERN Document Server

    Beliakov, Gleb; Calvo Sánchez, Tomasa

    2016-01-01

    This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...

  7. 7 CFR 51.2561 - Average moisture content.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average moisture content. 51.2561 Section 51.2561... STANDARDS) United States Standards for Grades of Shelled Pistachio Nuts § 51.2561 Average moisture content. (a) Determining average moisture content of the lot is not a requirement of the grades, except when...

  8. High-average-power laser medium based on silica glass

    Science.gov (United States)

    Fujimoto, Yasushi; Nakatsuka, Masahiro

    2000-01-01

    Silica glass is one of the most attractive materials for a high-average-power laser. We have developed a new laser material based on silica glass with the zeolite method, which is effective for the uniform dispersion of rare-earth ions in silica glass. A high-quality medium, which is bubble-free and has quite low refractive index distortion, is required for the realization of laser action. As the main cause of bubbling is hydroxy species remaining in the gelation stage, we carefully chose the colloidal silica particles, the pH value of the hydrochloric acid for hydrolysis of tetraethylorthosilicate in the sol-gel process, and the temperature and atmosphere control during the sintering process, and thereby obtained a bubble-free transparent rare-earth-doped silica glass. The refractive index distortion of the sample is also discussed.

  9. Scaling Limit of Symmetric Random Walk in High-Contrast Periodic Environment

    Science.gov (United States)

    Piatnitski, A.; Zhizhina, E.

    2017-11-01

    The paper deals with the asymptotic properties of a symmetric random walk in a high-contrast periodic medium in Z^d, d ≥ 1. From the existing homogenization results it follows that under diffusive scaling the limit behaviour of this random walk need not be Markovian. The goal of this work is to show that if, in addition to the coordinate of the random walk in Z^d, we introduce an extra variable that characterizes the position of the random walk inside the period, then the limit dynamics of this two-component process is Markov. We describe the limit process and observe that the components of the limit process are coupled. We also prove convergence in the path space for the said random walk.

  10. Do MENA stock market returns follow a random walk process?

    Directory of Open Access Journals (Sweden)

    Salim Lahmiri

    2013-01-01

    In this research, three variance ratio tests (the standard variance ratio test, the wild bootstrap multiple variance ratio test, and the non-parametric rank scores test) are adopted to test the random walk hypothesis (RWH) of stock markets in the Middle East and North Africa (MENA) region, using the most recent data from January 2010 to September 2012. The empirical results obtained by all three econometric tests show that the RWH is strongly rejected for Kuwait, Tunisia, and Morocco. However, the standard variance ratio test and the wild bootstrap multiple variance ratio test reject the null hypothesis of a random walk in Jordan and KSA, while the non-parametric rank scores test does not. We may conclude that the Jordan and KSA stock markets are weakly efficient. In sum, the empirical results suggest that return series in Kuwait, Tunisia, and Morocco are predictable. In other words, predictable patterns that can be exploited in these markets still exist. Therefore, investors may make profits in such less efficient markets.
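
    For reference, the standard (Lo-MacKinlay) variance ratio statistic behind such tests compares the variance of q-period returns with q times the one-period variance; VR(q) ≈ 1 under a random walk. A simplified homoskedastic version, without the small-sample bias corrections, robust standard errors, or bootstrapping used in the cited tests:

        import numpy as np

        def variance_ratio(returns, q):
            """Lo-MacKinlay VR(q) and its homoskedastic z-statistic."""
            r = np.asarray(returns, dtype=float)
            n = len(r)
            mu = r.mean()
            var1 = np.sum((r - mu) ** 2) / n
            rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-sums
            varq = np.sum((rq - q * mu) ** 2) / (n * q)
            vr = varq / var1
            z = (vr - 1) / np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q * n))
            return vr, z

        # usage: vr, z = variance_ratio(np.diff(np.log(prices)), q=4)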

  11. Sedimentological time-averaging and 14C dating of marine shells

    International Nuclear Information System (INIS)

    Fujiwara, Osamu; Kamataki, Takanobu; Masuda, Fujio

    2004-01-01

    The radiocarbon dating of sediments using marine shells involves uncertainties due to the mixed ages of the shells, mainly attributed to depositional processes, also known as 'sedimentological time-averaging'. This stratigraphic disorder can be removed by selecting the well-preserved indigenous shells based on ecological and taphonomic criteria. These criteria for sample selection are recommended for accurate estimation of the depositional age of geologic strata from 14C dating of marine shells

  12. Comminution process to produce precision wood particles of uniform size and shape with disrupted grain structure from wood chips

    Science.gov (United States)

    Dooley, James H; Lanning, David N

    2013-08-13

    A process of comminution of wood chips (C) having a grain direction to produce a mixture of wood particles (P), wherein the wood chips are characterized by an average length dimension (L.sub.C) as measured substantially parallel to the grain, an average width dimension (W.sub.C) as measured normal to L.sub.C and aligned cross grain, and an average height dimension (H.sub.C) as measured normal to W.sub.C and L.sub.C, and wherein the comminution process comprises the step of feeding the wood chips in a direction of travel substantially randomly to the grain direction through a counter rotating pair of intermeshing arrays of cutting discs (D) arrayed axially perpendicular to the direction of wood chip travel, wherein the cutting discs have a uniform thickness (T.sub.D), and wherein at least one of L.sub.C, W.sub.C, and H.sub.C is greater than T.sub.D.

  13. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  14. Computation of the bounce-average code

    International Nuclear Information System (INIS)

    Cutler, T.A.; Pearlstein, L.D.; Rensink, M.E.

    1977-01-01

    The bounce-average computer code simulates the two-dimensional velocity transport of ions in a mirror machine. The code evaluates and bounce-averages the collision operator and sources along the field line. A self-consistent equilibrium magnetic field is also computed using the long-thin approximation. Optionally included are terms that maintain μ, J invariance as the magnetic field changes in time. The assumptions and analysis that form the foundation of the bounce-average code are described. When references can be cited, the required results are merely stated and explained briefly. A listing of the code is appended
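    The abstract does not reproduce the code itself; as a hedged illustration of its central operation, bounce averaging weights a quantity A(s) along the field line by the time the ion spends at each point, dl / v_parallel, between its turning points. The parabolic field profile, parameter values, and function names below are invented for the sketch.

```python
import numpy as np

s = np.linspace(-1.0, 1.0, 2001)         # arc length along the field line
B = 1.0 + 2.0 * s**2                     # model mirror field (illustrative)
mu, E = 1.0, 2.0                         # magnetic moment and kinetic energy (m = 1)
vpar2 = 2.0 * (E - mu * B)               # parallel velocity squared
trapped = vpar2 > 0                      # region between the turning points
w = 1.0 / np.sqrt(vpar2[trapped])        # time weighting dl / v_parallel

def bounce_average(A):
    """Bounce average <A> = integral(A dl/v_par) / integral(dl/v_par)."""
    return np.trapz(A[trapped] * w, s[trapped]) / np.trapz(w, s[trapped])

print("bounce-averaged B:", bounce_average(B))
```

    The actual code applies this average to the collision operator and source terms rather than to B, and computes the field self-consistently, but the weighting by dl / v_parallel is the same idea.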

  15. Modified parity space averaging approaches for online cross-calibration of redundant sensors in nuclear reactors

    Directory of Open Access Journals (Sweden)

    Moath Kassim

    2018-05-01

    To maintain the safety and reliability of reactors, redundant sensors are usually used to measure critical variables and estimate their averaged time-dependency. Unhealthy sensors can adversely affect the estimation of the process variable. Since online condition monitoring was introduced, the online cross-calibration method has been widely used to detect anomalies in sensor readings among the redundant group. The cross-calibration method has four main averaging techniques: simple averaging, band averaging, weighted averaging, and parity space averaging (PSA). PSA weights redundant signals based on their error bounds and their band consistency. Using the consistency weighting factor (C), PSA assigns more weight to consistent signals that have shared bands, based on how many bands they share, and gives inconsistent signals very low weight. In this article, three approaches are introduced for improving the PSA technique: the first is to add another consistency factor, so-called trend consistency (TC), to preserve any characteristic edge that reflects the behavior of the equipment/component measured by the process parameter; the second approach proposes replacing the error-bound/accuracy-based weighting factor (Wa) with a weighting factor based on the Euclidean distance (Wd); and the third approach proposes applying Wd, TC, and C all together. Cold neutron source data sets of four redundant hydrogen pressure transmitters from a research reactor were used to perform the validation and verification. Results showed that the second and third modified approaches lead to reasonable improvement of the PSA technique. All approaches implemented in this study were similar in that they have the capability to (1) identify and isolate a drifted sensor that should undergo calibration, (2) identify faulty sensors affected by long, continuous ranges of missing data, and (3) identify a healthy sensor. Keywords: Nuclear Reactors
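    A minimal sketch of the parity-space-style weighting described above, under simplifying assumptions: here the consistency factor C_i simply counts how many other signals' error bands overlap that of signal i, and the accuracy factor weights by the inverse squared error bound (standing in for the article's Wa; its Euclidean-distance variant Wd would replace it). The article's exact weighting formulas may differ.

```python
import numpy as np

def psa_estimate(x, a):
    """Parity-space-style average of redundant readings x with error
    bounds a, for a single time step (simplified sketch)."""
    x, a = np.asarray(x, float), np.asarray(a, float)
    # band consistency: signals i and j agree if their error bands overlap
    agree = np.abs(x[:, None] - x[None, :]) <= (a[:, None] + a[None, :])
    C = agree.sum(axis=1) - 1            # shared bands, excluding self
    W = 1.0 / a**2                       # accuracy (error-bound) weighting
    w = W * C
    if w.sum() == 0:                     # no consistent pair: fall back to W
        w = W
    return np.sum(w * x) / np.sum(w)

# four redundant pressure readings, the last one drifted
print(psa_estimate([1.00, 1.02, 0.99, 1.35], [0.05, 0.05, 0.05, 0.05]))
```

    With these inputs the drifted fourth reading shares no bands, receives zero weight, and the estimate is the average of the three consistent signals.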

  16. Detection of random alterations to time-varying musical instrument spectra.

    Science.gov (United States)

    Horner, Andrew; Beauchamp, James; So, Richard

    2004-09-01

    The time-varying spectra of eight musical instrument sounds were randomly altered by a time-invariant process to determine how detection of spectral alteration varies with degree of alteration, instrument, musical experience, and spectral variation. Sounds were resynthesized with centroids equalized to the original sounds, with frequencies harmonically flattened, and with average spectral error levels of 8%, 16%, 24%, 32%, and 48%. Listeners were asked to discriminate the randomly altered sounds from reference sounds resynthesized from the original data. For all eight instruments, discrimination was very good for the 32% and 48% error levels, moderate for the 16% and 24% error levels, and poor for the 8% error level. When the error levels were 16%, 24%, and 32%, the scores of musically experienced listeners were found to be significantly better than the scores of listeners with no musical experience. Also, in this same error-level range, discrimination was significantly affected by the instrument tested. For error levels of 16% and 24%, discrimination scores were significantly but negatively correlated with measures of spectral incoherence and normalized centroid deviation on unaltered instrument spectra, suggesting that the presence of dynamic spectral variations tends to increase the difficulty of detecting spectral alterations. Correlation between discrimination and a measure of spectral irregularity was comparatively low.
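    To make the alteration procedure concrete, the sketch below applies a time-invariant random gain to each harmonic of a toy time-varying spectrum and measures the resulting average spectral error. The error metric used here (mean per-frame relative rms amplitude difference) is an assumption standing in for the paper's exact definition, and centroid equalization is omitted.

```python
import numpy as np

def random_alteration(A, target_err, rng):
    """Scale each harmonic by a fixed random factor (a time-invariant
    alteration) and report the resulting average spectral error.
    A has shape (frames, harmonics)."""
    g = 1.0 + target_err * rng.choice([-1.0, 1.0], size=A.shape[1])
    A_alt = A * g                        # same multiplier in every frame
    err = np.mean(np.sqrt(np.sum((A - A_alt) ** 2, axis=1) /
                          np.sum(A ** 2, axis=1)))
    return A_alt, err

# toy spectrum: 20 harmonics, 100 frames, 1/k rolloff with slow variation
frames, harmonics = 100, 20
k = np.arange(1, harmonics + 1)
A = (1.0 / k) * (1.0 + 0.1 * np.sin(np.linspace(0, 3, frames))[:, None])
A_alt, err = random_alteration(A, 0.16, np.random.default_rng(1))
print(f"average spectral error: {err:.1%}")   # 16.0% by construction
```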

  17. Average is Over

    Science.gov (United States)

    Eliazar, Iddo

    2018-02-01

    The popular perception of statistical distributions is depicted by the iconic bell curve, which comprises a massive bulk of 'middle-class' values and two thin tails: one of small left-wing values, and one of large right-wing values. The shape of the bell curve is unimodal, and its peak represents both the mode and the mean. Thomas Friedman, the famous New York Times columnist, recently asserted that we have entered a human era in which "Average is Over". In this paper we present mathematical models for the phenomenon that Friedman highlighted. While the models are derived via different modeling approaches, they share a common foundation. Inherent tipping points cause the models to phase-shift from a 'normal' bell-shaped statistical behavior to an 'anomalous' statistical behavior: the unimodal shape changes to an unbounded monotone shape, the mode vanishes, and the mean diverges. Hence: (i) there is an explosion of small values; (ii) large values become super-large; (iii) 'middle-class' values are wiped out, leaving an infinite rift between the small and the super-large values; and (iv) "Average is Over" indeed.

  18. A random walk on water (Henry Darcy Medal Lecture)

    Science.gov (United States)

    Koutsoyiannis, D.

    2009-04-01

    Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naive statistical prediction (the average of past data), which fully neglects the deterministic dynamics, is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) resemble neither a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible but also powerful, as it provides good predictions for both short and long horizons and helps to decide when the deterministic dynamics should be considered or neglected. Obviously, a natural system is far more complex than this simple toy model, and hence unpredictability is naturally even more prominent in the former. In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure to test it against evidence from data renders the hypothesised dynamics worthless.
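    The lecture's toy model is not specified in this excerpt; as a stand-in that reproduces points (1)-(3), the sketch below uses a chaotic logistic map and compares a deterministic forecast from a slightly mismeasured initial state against the naive statistical forecast (the average of past data).

```python
import numpy as np

def step(x):                             # stand-in chaotic dynamics
    return 4.0 * x * (1.0 - x)

rng = np.random.default_rng(2)
record = [rng.uniform(0.1, 0.9)]         # "past data" from the dynamics
for _ in range(10000):
    record.append(step(record[-1]))
climatology = np.mean(record)            # naive statistical prediction

for h in (1, 2, 5, 10, 20, 40):          # forecast horizons
    det_err, clim_err = [], []
    for _ in range(500):
        xt = rng.uniform(0.1, 0.9)       # true initial state
        xd = xt + 1e-6                   # imperfectly measured initial state
        for _ in range(h):
            xt, xd = step(xt), step(xd)
        det_err.append((xd - xt) ** 2)
        clim_err.append((climatology - xt) ** 2)
    print(f"h={h:2d}  rms deterministic={np.mean(det_err) ** 0.5:.3f}  "
          f"rms climatology={np.mean(clim_err) ** 0.5:.3f}")
```

    The tiny initial error roughly doubles each step, so the deterministic forecast is excellent for a few steps and useless beyond about twenty, at which point the constant climatological forecast has the smaller rms error.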

  19. Random Number Simulations Reveal How Random Noise Affects the Measurements and Graphical Portrayals of Self-Assessed Competency

    Directory of Open Access Journals (Sweden)

    Edward Nuhfer

    2016-01-01

    Self-assessment measures of competency are blends of an authentic self-assessment signal that researchers seek to measure and random disorder or "noise" that accompanies that signal. In this study, we use random number simulations to explore how random noise affects critical aspects of self-assessment investigations: reliability, correlation, critical sample size, and the graphical representations of self-assessment data. We show that graphical conventions common in the self-assessment literature introduce artifacts that invite misinterpretation. Troublesome conventions include: (y minus x) vs. (x) scatterplots; (y minus x) vs. (x) column graphs aggregated as quantiles; line charts that display data aggregated as quantiles; and some histograms. Graphical conventions that generate minimal artifacts include scatterplots with a best-fit line that depict (y) vs. (x) measures (self-assessed competence vs. measured competence) plotted by individual participant scores, and (y) vs. (x) scatterplots of collective average measures of all participants plotted item-by-item. This last graphical convention attenuates noise and improves the definition of the signal. To provide relevant comparisons across the varied graphical conventions, we use a single dataset derived from paired measures of 1154 participants' self-assessed competence and demonstrated competence in science literacy. Our results show that the different numerical approaches employed in investigating and describing self-assessment accuracy are not equally valid. By modeling this dataset with random numbers, we show how recognizing the varied expressions of randomness in self-assessment data can improve the validity of numeracy-based descriptions of self-assessment.
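    The core artifact is easy to reproduce: even when self-assessed competence y and measured competence x are pure independent noise, the difference (y minus x) is automatically correlated with x, so (y minus x) vs. (x) plots manufacture an apparent pattern out of nothing. The sketch below borrows the study's sample size of 1154; everything else is simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 100, 1154)            # "measured competence": pure noise
y = rng.uniform(0, 100, 1154)            # "self-assessed competence": pure noise

print(np.corrcoef(y, x)[0, 1])           # ~ 0: no real signal
print(np.corrcoef(y - x, x)[0, 1])       # ~ -0.71: artifact of the (y - x) convention
```

    For iid x and y of equal variance, corr(y - x, x) = -1/sqrt(2), roughly -0.707, regardless of sample size, which is why difference-vs-score plots of pure noise look like systematic overestimation by low scorers.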

  20. Adaptive and self-averaging Thouless-Anderson-Palmer mean-field theory for probabilistic modeling

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

    We develop a generalization of the Thouless-Anderson-Palmer (TAP) mean-field approach of disorder physics, which makes the method applicable to the computation of approximate averages in probabilistic models for real data. In contrast to the conventional TAP approach, where knowledge of the distribution of couplings between the random variables is required, our method adapts to the concrete set of couplings. We show the significance of the approach in two ways: our approach reproduces replica-symmetric results for a wide class of toy models (assuming a nonglassy phase) with given disorder distributions in the thermodynamic limit. On the other hand, simulations on a real data model demonstrate that the method achieves more accurate predictions than conventional TAP approaches.
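    For orientation, the conventional TAP equations that the paper generalizes read, for an Ising-type model, m_i = tanh(h_i + sum_j J_ij m_j - m_i sum_j J_ij^2 (1 - m_j^2)), where the last (Onsager reaction) term is the piece tied to an assumed coupling distribution; the adaptive approach instead infers the reaction term from the concrete couplings at hand. The sketch below iterates only the conventional equations, with invented parameters.

```python
import numpy as np

def tap_magnetizations(J, h, iters=200, damping=0.5):
    """Damped fixed-point iteration of the conventional TAP equations
    for an Ising-type model with couplings J and fields h."""
    m = np.zeros(len(h))
    for _ in range(iters):
        onsager = (J ** 2) @ (1.0 - m ** 2) * m      # reaction term
        m_new = np.tanh(h + J @ m - onsager)
        m = damping * m + (1 - damping) * m_new      # damping aids convergence
    return m

rng = np.random.default_rng(4)
n = 20
J = rng.normal(scale=1 / np.sqrt(n), size=(n, n))    # SK-style scaling
J = (J + J.T) / 2                                    # symmetric couplings
np.fill_diagonal(J, 0.0)
m = tap_magnetizations(J, rng.normal(scale=0.1, size=n))
print(m[:5])                                         # approximate mean values
```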