Matrix product approach for the asymmetric random average process
Zielen, F.; Schadschneider, A.
2003-04-01
We consider the asymmetric random average process which is a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly.
Functional limit theorem for moving average processes generated by dependent random variables
[No author listed]
2006-01-01
Let {X_t, t ≥ 1} be a moving average process defined by X_t = ∑_{j=0}^{∞} b_j ξ_{t−j}, where {b_j, j ≥ 0} is a sequence of real numbers and {ξ_t, −∞ < t < ∞} is a doubly infinite sequence of strictly stationary φ-mixing random variables. Under conditions on {b_j, j ≥ 0} which entail that {X_t, t ≥ 1} is either a long memory process or a linear process, we study the asymptotics of S_n(s) = ∑_{t=1}^{[ns]} X_t (properly normalized). When {X_t, t ≥ 1} is a long memory process, we establish a functional limit theorem. When {X_t, t ≥ 1} is a linear process, we not only obtain multi-dimensional weak convergence for {X_t, t ≥ 1}, but also weaken the moment condition on {ξ_t, −∞ < t < ∞} and the restriction on {b_j, j ≥ 0}. Finally, we give some applications of our results.
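A rough simulation sketch of the process defined in this abstract, with i.i.d. Gaussian innovations standing in for the φ-mixing sequence and the infinite coefficient tail truncated to finitely many terms (the coefficient choice b_j ~ j^{-0.8} is an illustrative long-memory-type example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def moving_average_process(n, b):
    """Simulate X_t = sum_{j=0}^{m-1} b_j * xi_{t-j}, the moving average
    process above with the tail truncated to m = len(b) coefficients."""
    m = len(b)
    xi = rng.standard_normal(n + m - 1)        # truncated innovation window
    return np.convolve(xi, b, mode="valid")    # length exactly n

# Slowly decaying (long-memory-type) coefficients, truncated at 500 terms
b = np.arange(1.0, 501.0) ** -0.8
X = moving_average_process(10_000, b)

# Partial-sum process S_n(s) = sum_{t <= ns} X_t on a grid of s values,
# crudely normalized by sqrt(n)
s_grid = np.linspace(0.0, 1.0, 11)
S = np.array([X[: int(len(X) * s)].sum() for s in s_grid]) / np.sqrt(len(X))
```

For a true long memory process the correct normalization grows faster than sqrt(n); the sketch only illustrates how the partial-sum path is formed.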
A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process
C.M. Hafner (Christian); M.J. McAleer (Michael)
2014-01-01
One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of asym…
Average Transmission Probability of a Random Stack
Lu, Yin; Miniatura, Christian; Englert, Berthold-Georg
2010-01-01
The transmission through a stack of identical slabs that are separated by gaps with random widths is usually treated by calculating the average of the logarithm of the transmission probability. We show how to calculate the average of the transmission probability itself with the aid of a recurrence relation and derive analytical upper and lower…
Zlatanov, Nikola; Karagiannidis, George K. (DOI: 10.1109/LCOMM.2008.081058)
2009-01-01
We present novel exact expressions and accurate closed-form approximations for the level crossing rate (LCR) and the average fade duration (AFD) of the double Nakagami-m random process. These results are then used to study the second order statistics of multiple input multiple output (MIMO) keyhole fading channels with space-time block coding. Numerical and computer simulation examples validate the accuracy of the presented mathematical analysis and show the tightness of the proposed approximations.
Ergodic averages via dominating processes
Møller, Jesper; Mengersen, Kerrie
2006-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated, using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary ...
Generalized Sampling Series Approximation of Random Signals from Local Averages
SONG Zhanjie; HE Gaiyun; YE Peixin; YANG Deyun
2007-01-01
Signals are often of random character: since they cannot bear any information if they are predictable for all times t, they are usually modelled as stationary random processes. On the other hand, because of the inertia of the measurement apparatus, sampled values obtained in practice may not be the precise values of the signal X(t) at times t_k (k ∈ Z), but only local averages of X(t) near t_k. In this paper, it is shown that a wide-sense (or weakly) stationary stochastic process can be approximated by generalized sampling series with local average samples.
Average fidelity between random quantum states
Zyczkowski, Karol; Sommers, Hans-Jurgen
2003-01-01
We analyze mean fidelity between random density matrices of size N, generated with respect to various probability measures in the space of mixed quantum states: Hilbert-Schmidt measure, Bures (statistical) measure, the measures induced by partial trace and the natural measure on the space of pure states. In certain cases explicit probability distributions for fidelity are derived.
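A Monte Carlo sketch of the mean fidelity for one of the measures discussed, the Hilbert-Schmidt measure (states built from complex Ginibre matrices); the Uhlmann fidelity is evaluated with Hermitian eigendecompositions only, and the dimension and sample count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hs_state(N):
    """Random density matrix from the Hilbert-Schmidt measure:
    rho = G G^dagger / Tr(G G^dagger) with G a complex Ginibre matrix."""
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def sqrtm_psd(rho):
    """Matrix square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm_psd(rho)
    w = np.linalg.eigvalsh(s @ sigma @ s)
    return np.sum(np.sqrt(np.clip(w, 0.0, None))) ** 2

N = 4
F = np.mean([fidelity(random_hs_state(N), random_hs_state(N))
             for _ in range(200)])     # mean fidelity between independent states
```

Swapping `random_hs_state` for a sampler of another measure (Bures, induced measures) gives the other averages discussed in the abstract.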
Order-Optimal Consensus through Randomized Path Averaging
Benezit, F; Thiran, P; Vetterli, M
2008-01-01
Gossip algorithms have recently received significant attention, mainly because they constitute simple and robust message-passing schemes for distributed information processing over networks. However for many topologies that are realistic for wireless ad-hoc and sensor networks (like grids and random geometric graphs), the standard nearest-neighbor gossip converges as slowly as flooding ($O(n^2)$ messages). A recently proposed algorithm called geographic gossip improves gossip efficiency by a $\sqrt{n}$ factor by exploiting geographic information to enable multi-hop long-distance communications. In this paper we prove that a variation of geographic gossip that averages along routed paths improves efficiency by an additional $\sqrt{n}$ factor and is order optimal ($O(n)$ messages) for grids and random geometric graphs. We develop a general technique (the travel agency method) based on Markov chain mixing time inequalities, which can give bounds on the performance of randomized message-passing algorithms operating…
Appeals Council Requests - Average Processing Time
Social Security Administration — This dataset provides annual data from 1989 through 2015 for the average processing time (elapsed time in days) for dispositions by the Appeals Council (AC) (both...
Average subentropy, coherence and entanglement of random mixed quantum states
Zhang, Lin; Singh, Uttam; Pati, Arun K.
2017-02-01
Compact expressions for the average subentropy and coherence are obtained for random mixed states generated via various probability measures. Surprisingly, our results show that the average subentropy of random mixed states approaches the maximum value of the subentropy, which is attained for the maximally mixed state, as we increase the dimension. In the special case of random mixed states sampled from the measure induced by partial tracing of random bipartite pure states, we establish the typicality of the relative entropy of coherence for random mixed states by invoking the concentration of measure phenomenon. Our results also indicate that mixed quantum states are less useful than pure quantum states in higher dimensions when we extract quantum coherence as a resource: the average coherence of random mixed states is uniformly bounded, whereas the average coherence of random pure states increases with the dimension. As an important application, we establish the typicality of the relative entropy of entanglement and distillable entanglement for a specific class of random bipartite mixed states. In particular, most of the random states in this specific class have relative entropy of entanglement and distillable entanglement equal to some fixed number (to within an arbitrarily small error), thereby greatly reducing the complexity of computing these entanglement measures for this class of mixed states.
A note on moving average models for Gaussian random fields
Hansen, Linda Vadgård; Thorarinsdottir, Thordis L.
The class of moving average models offers a flexible modeling framework for Gaussian random fields with many well known models such as the Matérn covariance family and the Gaussian covariance falling under this framework. Moving average models may also be viewed as a kernel smoothing of a Lévy...
Averaging of random sets based on their distance functions
Baddeley, A.J.; Molchanov, I.S.
1995-01-01
A new notion of expectation (or distance average) of random closed sets based on their distance function representation is introduced. A general concept of the distance function is exploited to define the expectation, which is the set whose distance function is closest to the expected distance function.
STRONG APPROXIMATION FOR MOVING AVERAGE PROCESSES UNDER DEPENDENCE ASSUMPTIONS
[No author listed]
2008-01-01
Let {X_t, t ≥ 1} be a moving average process defined by X_t = ∑_{k=0}^{∞} a_k ξ_{t−k}, where {a_k, k ≥ 0} is a sequence of real numbers and {ξ_t, −∞ < t < ∞} is a doubly infinite sequence of strictly stationary dependent random variables. Under conditions on {a_k, k ≥ 0} which entail that {X_t, t ≥ 1} is either a long memory process or a linear process, the strong approximation of {X_t, t ≥ 1} by a Gaussian process is studied. Finally, the results are applied to obtain the strong approximation of a long memory process by a fractional Brownian motion and the laws of the iterated logarithm for moving average processes.
The average crossing number of equilateral random polygons
Diao, Y.; Dobay, A.; Kusner, R. B.; Millett, K.; Stasiak, A.
2003-11-01
In this paper, we study the average crossing number of equilateral random walks and polygons. We show that the mean average crossing number $\langle ACN\rangle$ of all equilateral random walks of length n is of the form $\frac{3}{16} n \ln n + O(n)$. A similar result holds for equilateral random polygons. These results are confirmed by our numerical studies. Furthermore, our numerical studies indicate that when random polygons of length n are divided into individual knot types, the $\langle ACN({\cal K})\rangle$ for each knot type ${\cal K}$ can be described by a function of the form $\langle ACN({\cal K})\rangle = a(n-n_0)\ln(n-n_0) + b(n-n_0) + c$, where a, b and c are constants depending on ${\cal K}$ and $n_0$ is the minimal number of segments required to form ${\cal K}$. The $\langle ACN({\cal K})\rangle$ profiles diverge from each other, with more complex knots showing higher $\langle ACN({\cal K})\rangle$ than less complex knots. Moreover, the $\langle ACN({\cal K})\rangle$ profiles intersect with the $\langle ACN\rangle$ profile of all closed walks. These points of intersection define the equilibrium length of ${\cal K}$, i.e., the chain length $n_e({\cal K})$ at which a statistical ensemble of configurations with given knot type ${\cal K}$ (upon cutting, equilibration and reclosure to a new knot type ${\cal K}'$) does not show a tendency to increase or decrease $\langle ACN({\cal K}')\rangle$. This concept of equilibrium length seems to be universal, and applies also to other length-dependent observables for random knots, such as the mean radius of gyration $\langle R_g\rangle$.
Random Sequences and Pointwise Convergence of Multiple Ergodic Averages
Frantzikinakis, Nikos; Wierdl, Mate
2010-01-01
We prove pointwise convergence, as $N\to\infty$, for the multiple ergodic averages $\frac{1}{N}\sum_{n=1}^N f(T^nx)\cdot g(S^{a_n}x)$, where $T$ and $S$ are commuting measure preserving transformations, and $a_n$ is a random version of the sequence $[n^c]$ for some appropriate $c>1$. We also prove similar mean convergence results for averages of the form $\frac{1}{N}\sum_{n=1}^N f(T^{a_n}x)\cdot g(S^{a_n}x)$, as well as pointwise results when $T$ and $S$ are powers of the same transformation. The deterministic versions of these results, where one replaces $a_n$ with $[n^c]$, remain open, and we hope that our method will indicate a fruitful way to approach these problems as well.
A numerical study of self-averaging in adsorption of random copolymers and random surfaces
Moghaddam, M S
2002-01-01
Numerical studies involving random copolymers and random surfaces assume self-averaging of thermodynamic and metric properties of the systems in order to calculate different properties. For the problem of adsorption of a random copolymer, rigorous proofs of self-averaging of some properties, such as the free energy in the thermodynamic limit (n → ∞), exist. This says little about the extent of self-averaging for the finite-size systems used in numerical studies. For the problem of adsorption of a homopolymer on a random surface, no analytical proofs of self-averaging exist. In this work, assumptions of self-averaging of thermodynamic and metric properties of a self-avoiding walk model of random copolymer adsorption are tested via the multiple Markov chain Monte Carlo method. Numerical evidence is provided in support of self-averaging of the energy, heat capacity and z-component of the self-avoiding walk in different temperature intervals. Self-averaging in energy of a homopolymer interacting with a random surfac…
Schvidler, M.; Karasaki, K.
2011-06-15
In previous papers (Shvidler and Karasaki, 1999, 2001, 2005, and 2008) we presented and analyzed an approach for finding the general forms of exactly averaged equations of flow and transport in porous media. We studied systems of basic equations for steady flow with sources in unbounded domains with stochastically homogeneous conductivity fields. A brief analysis of exactly averaged equations of nonsteady flow and nonreactive solute transport was also presented. At the core of this approach is the existence of appropriate random Green's functions. For example, we showed that in the case of a 3-dimensional unbounded domain the existence of appropriate random Green's functions is sufficient for finding the exact nonlocal averaged equations for flow velocity using the operator with a unique kernel-vector. Examination of random fields with global symmetry (isotropy, transversal isotropy and orthotropy) makes it possible to describe significantly different types of averaged equations with nonlocal unique operators. It is evident that the existence of random Green's functions for physical linear processes is equivalent to assuming the existence of some linear random operators for appropriate stochastic equations. If we restrict ourselves to this assumption only, as we have done in this paper, we can study the processes in bounded or unbounded domains of any dimension and, in addition, cases in which the random fields of conductivity and porosity are stochastically nonhomogeneous, not globally symmetric, etc. It is clear that examining more general cases involves significant difficulty and restricts the analysis of structural types for the processes being studied. Nevertheless, we show that we obtain the essential information regarding averaged equations for steady and transient flow, as well as for solute transport.
Average Number of Coherent Modes for Pulse Random Fields
Lazaruk, Alexander M.; Karelin, Nikolay V.
1997-01-01
Some consequences of spatio-temporal symmetry for the deterministic decomposition of complex light fields into factorized components are considered. This makes it possible to reveal interrelations between the spatial and temporal coherence properties of the wave. An estimate of the average number of decomposition terms is obtained for a statistical ensemble of light pulses.
The average inter-crossing number of equilateral random walks and polygons
Diao, Y.; Dobay, A.; Stasiak, A.
2005-09-01
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is $a = \frac{3\ln 2}{8} \approx 0.2599$. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance ρ apart and ρ is small compared to n. We propose a fitting model that would capture the theoretical asymptotic behaviour of the mean average ICN for large values of ρ. Our simulation result shows that the model in fact works very well for the entire range of ρ. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) would still approach infinity if the length of the other random walk (polygon) approached infinity. The data provided by our simulations match our theoretical predictions very well.
Random processes in nuclear reactors
Williams, M M R
1974-01-01
Random Processes in Nuclear Reactors describes the problems that a nuclear engineer may meet which involve random fluctuations, and sets out in detail how they may be interpreted in terms of various models of the reactor system. Chapters discuss the origins of random processes and sources; the general technique applied to zero-power problems, bringing out the basic effect of fission, and of fluctuations in the lifetime of neutrons, on the measured response; the interpretation of power reactor noise; and associated problems connected with mechanical, hydraulic and thermal noise sources.
A simple consensus algorithm for distributed averaging in random geographical networks
Mahdi Jalili
2012-09-01
Random geographical networks are realistic models for wireless sensor networks, which are used in many applications. Achieving average consensus is very important in sensor networks: the faster the consensus, the more durable the sensors' life and thus the better the performance of the network. In this paper we compare the performance of a number of linear consensus algorithms with application to distributed averaging in random geographical networks. Interestingly, the simplest algorithm, in which only the degree of the receiving node is needed for the averaging, had the best performance in terms of consensus time. Furthermore, we prove that the network has guaranteed convergence with this simple algorithm.
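One plausible reading of the degree-based scheme described above can be sketched as follows: each node replaces its value by the mean of itself and its neighbours, so only the receiving node's own degree enters the update. This is an illustrative sketch, not necessarily the paper's exact protocol; note that with this weighting the consensus value is a degree-weighted average rather than the exact mean unless the graph is regular:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_geometric_graph(n, r):
    """Adjacency matrix of n nodes placed uniformly in the unit square,
    with an edge between every pair of nodes closer than radius r."""
    pts = rng.random((n, 2))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = (dist < r) & ~np.eye(n, dtype=bool)
    return A.astype(float)

n, r = 60, 0.35            # dense enough that the graph is almost surely connected
A = random_geometric_graph(n, r)
deg = A.sum(axis=1)
x = rng.random(n)          # initial sensor readings
spread0 = x.max() - x.min()

# Receiver-degree update: row-stochastic weight matrix W, each row
# averaging a node with its neighbours using only that node's degree
W = (A + np.eye(n)) / (deg + 1)[:, None]
for _ in range(2000):
    x = W @ x

spread = x.max() - x.min() # shrinks towards 0 as consensus is reached
```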
Averaging processes in granular flows driven by gravity
Rossi, Giulia; Armanini, Aronne
2016-04-01
One of the more promising theoretical frames for analysing two-phase granular flows is offered by the similarity of their rheology with the kinetic theory of gases [1]. Granular flows can be considered a macroscopic equivalent of the molecular case: collisions among grains at a macroscopic scale are compared to collisions among molecules [2,3]. However, there are important statistical differences between the two applications. In two-phase fluid mechanics, there are two main types of average: the phasic average and the mass-weighted average [4]. The kinetic theories assume that the size of atoms is so small that the number of molecules in a control volume is infinite. With this assumption, the concentration (number of particles n) does not change during the averaging process and the two definitions of average coincide. This hypothesis is no longer true in granular flows: contrary to gases, the dimension of a single particle becomes comparable to that of the control volume. For this reason, in a single realization the number of grains is constant and the two averages coincide; on the contrary, over more than one realization, n is no longer constant and the two types of average lead to different results. Therefore, the ensemble average used in the standard kinetic theory (usually the phasic average) is suitable for a single realization, but not for several realizations, as already pointed out in [5,6]. In the literature, three main length scales have been identified [7]: the smallest is the particle size, the intermediate consists in the local averaging (in order to describe some instability phenomena or secondary circulation) and the largest arises from phenomena such as large eddies in turbulence. Our aim is to resolve the intermediate scale by applying the mass-weighted average when dealing with more than one realization. This statistical approach leads to additional diffusive terms in the continuity equation: starting from experimental…
Hearing Office Average Processing Time Ranking Report, February 2016
Social Security Administration — A ranking of ODAR hearing offices by the average number of hearings dispositions per ALJ per day. The average shown will be a combined average for all ALJs working...
Entanglement in random pure states: spectral density and average von Neumann entropy
Kumar, Santosh; Pandey, Akhilesh, E-mail: skumar.physics@gmail.com, E-mail: ap0700@mail.jnu.ac.in [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110 067 (India)
2011-11-04
Quantum entanglement plays a crucial role in quantum information, quantum teleportation and quantum computation. The information about the entanglement content between subsystems of the composite system is encoded in the Schmidt eigenvalues. We derive here closed expressions for the spectral density of Schmidt eigenvalues for all three invariant classes of random matrix ensembles. We also obtain exact results for average von Neumann entropy. We find that maximum average entanglement is achieved if the system belongs to the symplectic invariant class. (paper)
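The Schmidt-eigenvalue route described here can be sketched numerically for the unitary-invariant (Haar/Ginibre) class; the comparison value below is Page's well-known asymptotic formula for the average entropy, used only as a sanity check, and the dimensions and sample count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def schmidt_eigenvalues(m, n):
    """Schmidt eigenvalues of a Haar-random pure state on C^m (x) C^n:
    eigenvalues of the reduced density matrix rho_A = Tr_B |psi><psi|."""
    G = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    psi = G / np.linalg.norm(G)          # state amplitudes as an m x n matrix
    rho_A = psi @ psi.conj().T           # partial trace over the n-dim factor
    return np.linalg.eigvalsh(rho_A)

def von_neumann_entropy(p):
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

m, n = 4, 16
S = np.mean([von_neumann_entropy(schmidt_eigenvalues(m, n))
             for _ in range(500)])
page = np.log(m) - m / (2 * n)           # Page's asymptotic average entropy
```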
RANDOM SINGULAR INTEGRAL OF RANDOM PROCESS WITH SECOND ORDER MOMENT
Wang Chuanrong
2005-01-01
This paper discusses the random singular integral of a random process with second order moment, establishes the concepts of the random singular integral and proves that it is a bounded linear operator on the space H_α(L)(m, s). Then the Plemelj formula and some other properties of the random singular integral are proved.
Control of random Boolean networks via average sensitivity of Boolean functions
Chen Shi-Jian; Hong Yi-Guang
2011-01-01
In this paper, we discuss how to transform the disordered phase into an ordered phase in random Boolean networks. To increase the effectiveness, a control scheme is proposed, which periodically freezes a fraction of the network based on the average sensitivity of Boolean functions of the nodes. Theoretical analysis is carried out to estimate the expected critical value of the fraction, and shows that the critical value is reduced using this scheme compared to that of randomly freezing a fraction of the nodes. Finally, the simulation is given for illustrating the effectiveness of the proposed method.
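Average sensitivity, the quantity driving the control scheme above, can be computed exactly for small k by brute force over all inputs; the random function below, with output bias p, is a hypothetical example rather than a network from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def average_sensitivity(truth_table, k):
    """Average, over all 2^k inputs x, of the number of single-bit flips
    of x that change the Boolean function's output."""
    total = 0
    for x in range(2 ** k):
        for i in range(k):
            total += truth_table[x] != truth_table[x ^ (1 << i)]
    return total / 2 ** k

k, p = 3, 0.5
# Random Boolean function of k inputs: each output is 1 with probability p
table = (rng.random(2 ** k) < p).astype(int)
lam = average_sensitivity(table, k)
# Annealed estimate for a random function: 2*k*p*(1-p), here 1.5
```

Parity is maximally sensitive (every flip changes the output, so its average sensitivity equals k), which makes a convenient correctness check.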
Historical Data for Average Processing Time Until Hearing Held
Social Security Administration — This dataset provides historical data for average wait time (in days) from the hearing request date until a hearing was held. This dataset includes data from fiscal...
Berezhkovskii, Alexander M.; Weiss, George H.
1996-07-01
In order to extend the greatly simplified Smoluchowski model for chemical reaction rates it is necessary to incorporate many-body effects. A generalization with this feature is the so-called trapping model in which random walkers move among a uniformly distributed set of traps. The solution of this model requires consideration of the distinct number of sites visited by a single n-step random walk. A recent analysis [H. Larralde et al., Phys. Rev. A 45, 1728 (1992)] has considered a generalized version of this problem by calculating the average number of distinct sites visited by N n-step random walks. A related continuum analysis is given in [A. M. Berezhkovskii, J. Stat. Phys. 76, 1089 (1994)]. We consider a slightly different version of the general problem by calculating the average volume of the Wiener sausage generated by Brownian particles generated randomly in time. The analysis shows that two types of behavior are possible: one in which there is strong overlap between the Wiener sausages of the particles, and the second in which the particles are mainly independent of one another. Either one or both of these regimes occur, depending on the dimension.
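A minimal sketch of the discrete counting problem underlying the trapping model: the number of distinct sites visited by N independent n-step simple random walks on Z². For simplicity all walkers start at the origin at time 0 (rather than being generated randomly in time, as in the continuum Wiener-sausage analysis), so the strong-overlap regime is visible for small N:

```python
import numpy as np

rng = np.random.default_rng(5)

def distinct_sites(n_steps, n_walkers):
    """Number of distinct lattice sites visited by n_walkers independent
    n_steps-step simple random walks on Z^2, all started at the origin."""
    visited = {(0, 0)}
    moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    for _ in range(n_walkers):
        steps = moves[rng.integers(0, 4, n_steps)]
        path = np.cumsum(steps, axis=0)      # successive walker positions
        visited.update(map(tuple, path))
    return len(visited)

# Growth with the number of walkers is sublinear while their ranges overlap
counts = [distinct_sites(500, N) for N in (1, 4, 16)]
```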
Modification of averaging process in GR: Case study flat LTB
Khosravi, Shahram; Mansouri, Reza
2007-01-01
We study the volume averaging of inhomogeneous metrics within GR and discuss its shortcomings, such as gauge dependence, singular behavior as a result of caustics, and causality violations. To remedy these shortcomings, we suggest some modifications to this method. As a case study we focus on the inhomogeneous model of structured FRW based on a flat LTB metric. The effect of averaging is then studied in terms of an effective backreaction fluid. This backreaction fluid turns out to behave like a dark matter component, instead of dark energy as claimed in the literature.
Model uncertainty and Bayesian model averaging in vector autoregressive processes
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2006-01-01
Economic forecasts and policy decisions are often informed by empirical analysis based on econometric models. However, inference based upon a single model, when several viable models exist, limits its usefulness. Taking account of model uncertainty, a Bayesian model averaging procedure…
Gruber, Joshua S; Arnold, Benjamin F; Reygadas, Fermin; Hubbard, Alan E; Colford, John M
2014-05-01
Complier average causal effects (CACE) estimate the impact of an intervention among treatment compliers in randomized trials. Methods used to estimate CACE have been outlined for parallel-arm trials (e.g., using an instrumental variables (IV) estimator) but not for other randomized study designs. Here, we propose a method for estimating CACE in randomized stepped wedge trials, where experimental units cross over from control conditions to intervention conditions in a randomized sequence. We illustrate the approach with a cluster-randomized drinking water trial conducted in rural Mexico from 2009 to 2011. Additionally, we evaluated the plausibility of assumptions required to estimate CACE using the IV approach, which are testable in stepped wedge trials but not in parallel-arm trials. We observed small increases in the magnitude of CACE risk differences compared with intention-to-treat estimates for drinking water contamination (risk difference (RD) = -22% (95% confidence interval (CI): -33, -11) vs. RD = -19% (95% CI: -26, -12)) and diarrhea (RD = -0.8% (95% CI: -2.1, 0.4) vs. RD = -0.1% (95% CI: -1.1, 0.9)). Assumptions required for IV analysis were probably violated. Stepped wedge trials allow investigators to estimate CACE with an approach that avoids the stronger assumptions required for CACE estimation in parallel-arm trials. Inclusion of CACE estimates in stepped wedge trials with imperfect compliance could enhance reporting and interpretation of the results of such trials.
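In the simplest one-sided-noncompliance setting, the IV estimator mentioned above reduces to the Bloom estimator: the intention-to-treat effect scaled by the share of compliers. The numbers below are hypothetical, chosen only to show how a modest ITT effect inflates into a larger complier effect; they are not taken from the trial:

```python
def bloom_cace(itt_effect, compliance_rate):
    """Instrumental-variables (Bloom) estimator of the complier average
    causal effect: ITT effect divided by the complier share, valid under
    one-sided noncompliance and the usual IV assumptions."""
    if not 0.0 < compliance_rate <= 1.0:
        raise ValueError("compliance_rate must be in (0, 1]")
    return itt_effect / compliance_rate

# Hypothetical inputs: ITT risk difference of -19 percentage points and
# 86% compliance give a CACE of about -22 percentage points.
cace = bloom_cace(-0.19, 0.86)
```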
Disability Reconsideration Average Processing Time (in Days) (Excludes technical denials)
Social Security Administration — A presentation of the overall cumulative number of elapsed days (including processing time for transit, medical determinations, and SSA quality review) from the date...
Müller, Sebastian
2011-01-01
Separating different propositional proof systems, that is, demonstrating that one proof system cannot efficiently simulate another, is one of the main goals of proof complexity. Nevertheless, all known separation results between non-abstract proof systems are for specific families of hard tautologies: for all we know, in the average case all (non-abstract) propositional proof systems are no stronger than resolution. In this paper we show that this is not the case by demonstrating polynomial-size propositional refutations whose lines are $TC^0$ formulas (i.e., $TC^0$-Frege proofs) for random 3CNF formulas with $n$ variables and $\Omega(n^{1.4})$ clauses. By known lower bounds on resolution refutations, this implies an exponential separation of $TC^0$-Frege from resolution in the average case. The idea is based on demonstrating efficient propositional correctness proofs of the random 3CNF unsatisfiability witnesses given by Feige, Kim and Ofek [FOCS'06]. Since the soundness of these witnesse…
Multiple-scale stochastic processes: Decimation, averaging and beyond
Bo, Stefano; Celani, Antonio
2017-02-01
Recent experimental progress in handling microscopic systems has made it possible to probe them at levels where fluctuations are prominent, calling for stochastic modeling in a large number of physical, chemical and biological phenomena. This has provided fruitful applications for established stochastic methods and motivated further developments. These systems often involve processes taking place on widely separated time scales. For efficient modeling one usually focuses on the slower degrees of freedom, and it is of great importance to eliminate the fast variables accurately and in a controlled fashion, carefully accounting for their net effect on the slower dynamics. This procedure in general requires two different operations: decimation and coarse-graining. We introduce the asymptotic methods that form the basis of this procedure and discuss their application to a series of physical, biological and chemical examples. We then turn our attention to functionals of the stochastic trajectories, such as residence times, counting statistics, fluxes and entropy production, which have been increasingly studied in recent years. For such functionals, the elimination of the fast degrees of freedom can present additional difficulties, and naive procedures can lead to blatantly inconsistent results. Homogenization techniques for functionals are less covered in the literature, and we pedagogically present them here as natural extensions of the ones employed for the trajectories. We also discuss recent applications of these techniques to the thermodynamics of small systems and their interpretation in terms of information-theoretic concepts.
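A minimal numerical illustration of eliminating a fast variable (an assumed toy model, not an example from the review): a slow coordinate x driven by a fast Ornstein-Uhlenbeck variable y. For observation times much longer than the relaxation time τ, averaging out y leaves an effective Brownian motion for x with diffusion constant D_eff = σ²τ, which an Euler-Maruyama simulation reproduces:

```python
import numpy as np

rng = np.random.default_rng(6)

# Fast-slow system:
#   dx = y dt
#   dy = -(y / tau) dt + sigma * sqrt(2 / tau) dW     (stationary Var y = sigma^2)
# Homogenized prediction for t >> tau:  Var x(t) ~ 2 * sigma^2 * tau * t
tau, sigma = 0.1, 1.0
dt, T, n_traj = 0.005, 10.0, 2000
n_steps = int(T / dt)

y = sigma * rng.standard_normal(n_traj)   # start y in its stationary law
x = np.zeros(n_traj)
for _ in range(n_steps):
    x += y * dt
    y += -(y / tau) * dt + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(n_traj)

var_x = x.var()                            # empirical Var x(T) over trajectories
var_pred = 2 * sigma**2 * tau * T          # homogenized prediction
```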
Intensity correlations in metal films with periodic-on-average random nanohole arrays
Kumar, Randhir; Mujumdar, Sushil
2016-12-01
We report detailed numerical studies based on three-dimensional finite-difference time domain computations of the intensity-intensity correlations in deliberately randomized, periodic-on-average systems. Correlation analyses are carried out in plasmonic thin films with nanohole arrays as a function of strength of disorder. We find that the intensity at certain uncharacteristic wavelengths remains strongly correlated with that in the periodic system, and these wavelengths do not match the global maxima of the periodic transmission spectrum. The study indicates that the strength of correlations is related to the pinning of the intensity to the holes. Since the intensity pinning is special characteristic of metals, the effect is only applicable in plasmonic systems.
Wen-Min Zhou
2013-01-01
This paper is concerned with the consensus problem of general linear discrete-time multiagent systems (MASs) with random packet dropout that happens during information exchange between agents. The packet dropout phenomenon is characterized as a Bernoulli random process. A distributed consensus protocol with weighted graph is proposed to address the packet dropout phenomenon. Through introducing a new disagreement vector, a new framework is established to solve the consensus problem. Based on control theory, the perturbation argument, and matrix theory, the necessary and sufficient condition for MASs to reach mean-square consensus is derived in terms of the stability of an array of low-dimensional matrices. Moreover, mean-square consensusable conditions with regard to network topology and agent dynamic structure are also provided. Finally, the effectiveness of the theoretical results is demonstrated through an illustrative example.
A signal theoretic introduction to random processes
Howard, Roy M
2015-01-01
A fresh introduction to random processes utilizing signal theory By incorporating a signal theory basis, A Signal Theoretic Introduction to Random Processes presents a unique introduction to random processes with an emphasis on the important random phenomena encountered in the electronic and communications engineering field. The strong mathematical and signal theory basis provides clarity and precision in the statement of results. The book also features: A coherent account of the mathematical fundamentals and signal theory that underpin the presented material Unique, in-depth coverage of
Signals and processing for random signal radars
Moore, G. S.
1980-06-01
Signals and associated processing techniques are developed which improve the performance, simplify the implementation, and are more amenable to adaptive operation for radars using the random signal concept. These goals are accomplished through the use of a signal set that is composed of a deterministic spreading function, a binary random or pseudo-random noise source, and a possibly random or pseudo-random pulsing sequence. Techniques are developed for determining the parameters of the spreading function that result in signals with desirable ambiguity functions and high effective power. These techniques are based on the use of window functions for sidelobe control and the theory of chirp waveforms for effective power enhancement.
Precise Asymptotics in the Law of the Iterated Logarithm of Moving-Average Processes
Yun Xia LI; Li Xin ZHANG
2006-01-01
In this paper, we discuss the moving-average process $X_k = \\sum_{i=-\\infty}^{\\infty} a_{i+k}\\varepsilon_i$, where $\\{\\varepsilon_i; -\\infty < i < \\infty\\}$ is a doubly infinite sequence of identically distributed $\\psi$-mixing or negatively associated random variables with zero means and finite variances, and $\\{a_i; -\\infty < i < \\infty\\}$ is an absolutely summable sequence of real numbers. Set $S_n = \\sum_{k=1}^{n} X_k$, $n \\ge 1$. Suppose that $\\sigma^2 = E\\varepsilon_1^2 + 2\\sum_{k=2}^{\\infty} E\\varepsilon_1\\varepsilon_k > 0$. We prove precise asymptotics in the law of the iterated logarithm: for any $\\delta \\ge 0$, under the moment conditions $E[\\varepsilon_1^2(\\log\\log|\\varepsilon_1|)^{\\delta-1}] < \\infty$ and $E[\\varepsilon_1^2(\\log|\\varepsilon_1|)^{\\delta-1}] < \\infty$, respectively, the limiting constants are expressed through a moment of the standard normal distribution.
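A minimal simulation may help fix the notation. The sketch below is an assumption-laden illustration, not the paper's construction: it uses i.i.d. Gaussian innovations (a special case of the mixing assumption) and a one-sided, finitely supported coefficient sequence, for which the long-run variance of the partial sums reduces to the squared coefficient sum.

```python
import math
import random

def simulate_ma(n, coeffs, rng):
    """Simulate X_k = sum_{d} a_d * eps_{k-d}, a one-sided special case of
    the doubly infinite moving average in which only finitely many
    coefficients a_0..a_{m-1} are nonzero."""
    m = len(coeffs)
    # one standard-normal innovation per needed index; the innovation
    # index is shifted by m-1 so that every required eps exists
    eps = [rng.gauss(0.0, 1.0) for _ in range(n + m - 1)]
    return [sum(coeffs[d] * eps[k + m - 1 - d] for d in range(m))
            for k in range(n)]

rng = random.Random(42)
coeffs = [1.0, 0.5, 0.25]              # absolutely summable by construction
X = simulate_ma(10_000, coeffs, rng)
S_n = sum(X)
sigma2 = sum(coeffs) ** 2              # long-run variance for i.i.d. N(0,1) innovations
z = S_n / math.sqrt(sigma2 * len(X))   # approximately standard normal for large n
```

The normalized sum `z` is the quantity whose fluctuations the law of the iterated logarithm controls.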
Probability, random variables, and random processes theory and signal processing applications
Shynk, John J
2012-01-01
Probability, Random Variables, and Random Processes is a comprehensive textbook on probability theory for engineers that provides a more rigorous mathematical framework than is usually encountered in undergraduate courses. It is intended for first-year graduate students who have some familiarity with probability and random variables, though not necessarily of random processes and systems that operate on random signals. It is also appropriate for advanced undergraduate students who have a strong mathematical background. The book has the following features: Several app
Randomized Consensus Processing over Random Graphs: Independence and Convergence
Shi, Guodong
2011-01-01
Various consensus algorithms over random networks have been investigated in the literature. In this paper, we focus on the role that randomized individual decision-making plays in consensus seeking under stochastic communications. At each time step, each node independently chooses to follow the consensus algorithm, or to stick to its current state, by a simple Bernoulli trial with time-dependent success probabilities. This node decision strategy characterizes random node failures in a communication network, or biased opinion selection in belief evolution over social networks. Connectivity-independent and arc-independent graphs are defined, respectively, to capture the fundamental nature of random network processes with regard to the convergence of the consensus algorithms. A series of sufficient and/or necessary conditions are given on the success probability sequence for the network to reach a global consensus with probability one under different stochastic connectivity assumptions, by which a comp...
Pseudo random signal processing theory and application
Zepernick, Hans-Jurgen
2013-01-01
In recent years, pseudo random signal processing has proven to be a critical enabler of modern communication, information, security and measurement systems. The signal's pseudo random, noise-like properties make it vitally important as a tool for protecting against interference, alleviating multipath propagation and allowing the potential of sharing bandwidth with other users. Taking a practical approach to the topic, this text provides a comprehensive and systematic guide to understanding and using pseudo random signals. Covering theoretical principles, design methodologies and applications
Elements of random walk and diffusion processes
Ibe, Oliver C
2013-01-01
Presents an important and unique introduction to random walk theory Random walk is a stochastic process that has proven to be a useful model in understanding discrete-state discrete-time processes across a wide spectrum of scientific disciplines. Elements of Random Walk and Diffusion Processes provides an interdisciplinary approach by including numerous practical examples and exercises with real-world applications in operations research, economics, engineering, and physics. Featuring an introduction to powerful and general techniques that are used in the application of physical and dynamic
Social Security Administration Public Inquiry Data - Average Processing Time (in days)
Social Security Administration — This dataset shows the average processing time for completed inquiries. The data source is the Electronic Management of Assignments and Correspondence system (EMAC)....
Angular processes related to Cauchy random walks
Cammarota, Valentina
2011-01-01
We study the angular process related to random walks in the Euclidean and in the non-Euclidean space where steps are Cauchy distributed. This leads to different types of non-linear transformations of Cauchy random variables which preserve the Cauchy density. We give the explicit form of these distributions for all combinations of the scale and the location parameters. Continued fractions involving Cauchy random variables are analyzed. It is shown that the $n$-stage random variables are still Cauchy distributed with parameters related to Fibonacci numbers. This permits us to show the convergence in distribution of the sequence to the golden ratio.
On the local time of random processes in random scenery
Castell, Fabienne; Pène, Françoise; Schapira, Bruno
2012-01-01
Random walks in random scenery are processes defined by $Z_n:=\\sum_{k=1}^n\\xi_{X_1+...+X_k}$, where basically $(X_k,k\\ge 1)$ and $(\\xi_y,y\\in\\mathbb Z)$ are two independent sequences of i.i.d. random variables. We assume here that $X_1$ is $\\mathbb Z$-valued, centered and with finite moments of all orders. We also assume that $\\xi_0$ is $\\mathbb Z$-valued, centered and square integrable. In this case H. Kesten and F. Spitzer proved that $(n^{-3/4}Z_{[nt]},t\\ge 0)$ converges in distribution as $n\\to \\infty$ toward some self-similar process $(\\Delta_t,t\\ge 0)$ called Brownian motion in random scenery. In a previous paper, we established that ${\\mathbb P}(Z_n=0)$ behaves asymptotically like a constant times $n^{-3/4}$ as $n\\to \\infty$. Here we extend this local limit theorem: we give a precise asymptotic result for the probability for $Z$ to return to zero simultaneously at several times. As a byproduct of our computations, we show that $\\Delta$ admits a bi-continuous version of its local time process which is locally Hölder continuous...
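The process $Z_n$ defined above is easy to simulate, which makes the $n^{-3/4}$ normalization concrete. The sketch below is illustrative only: it takes a simple symmetric walk and a $\pm 1$ scenery (both choices are our assumptions, consistent with the centered, square-integrable hypotheses), drawing scenery values lazily on first visit.

```python
import random

def rwrs(n, rng):
    """Z_n = sum_{k=1}^n xi_{S_k}: a simple symmetric random walk S_k on Z
    observing an i.i.d. +/-1 scenery xi, drawn lazily on first visit."""
    scenery = {}   # site y -> scenery value xi_y
    pos = 0
    Z = 0.0
    for _ in range(n):
        pos += rng.choice((-1, 1))          # one step of the walk
        if pos not in scenery:
            scenery[pos] = rng.choice((-1.0, 1.0))
        Z += scenery[pos]                   # accumulate the observed scenery
    return Z

rng = random.Random(1)
n = 20_000
Z = rwrs(n, rng)
scaled = Z / n ** 0.75   # the Kesten-Spitzer n^{-3/4} normalization
```

The scaled value fluctuates on an $O(1)$ scale, as the convergence toward Brownian motion in random scenery predicts.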
Fundamentals of applied probability and random processes
Ibe, Oliver
2014-01-01
The long-awaited revision of Fundamentals of Applied Probability and Random Processes expands on the central components that made the first edition a classic. The title is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics.
Long Strange Segments, Ruin Probabilities and the Effect of Memory on Moving Average Processes
Ghosh, Souvik
2010-01-01
We obtain the rate of growth of long strange segments and the rate of decay of infinite horizon ruin probabilities for a class of infinite moving average processes with exponentially light tails. The rates are computed explicitly. We show that the rates are very similar to those of an i.i.d. process as long as moving average coefficients decay fast enough. If they do not, then the rates are significantly different. This demonstrates the change in the length of memory in a moving average process associated with certain changes in the rate of decay of the coefficients.
Signal Processing of Random Physiological Signals
Lessard, Charles
2006-01-01
Signal Processing of Random Physiological Signals presents the most widely used techniques in signal and system analysis. Specifically, the book is concerned with methods of characterizing signals and systems. Author Charles Lessard provides students and researchers an understanding of the time and frequency domain processes which may be used to evaluate random physiological signals such as brainwave, sleep, respiratory sounds, heart valve sounds, electromyograms, and electro-oculograms.Another aim of the book is to have the students evaluate actual mammalian data without spending most or all
Huang, Lei
2015-09-30
To address the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required.
Jaskiewicz, Anna; Nowak, Andrzej S.
2006-04-01
We consider Markov control processes with Borel state space and Feller transition probabilities, satisfying some generalized geometric ergodicity conditions. We provide a new theorem on the existence of a solution to the average cost optimality equation.
Robust randomized benchmarking of quantum processes
Magesan, Easwar; Emerson, Joseph
2010-01-01
We describe a simple randomized benchmarking protocol for quantum information processors and obtain a sequence of models for the observable fidelity decay as a function of a perturbative expansion of the errors. We are able to prove that the protocol provides an efficient and reliable estimate of an average error rate for a set of operations (gates) under a general noise model that allows for both time- and gate-dependent errors. We determine the conditions under which this estimate remains valid and illustrate the protocol through numerical examples.
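The fidelity-decay fit at the heart of such protocols can be sketched compactly. This is a hedged illustration, not the paper's derivation: we assume the standard zeroth-order decay model $F(m) = A p^m + B$ with a known depolarizing baseline $B = 1/d$, fit $p$ by linear regression on $\log(F - B)$, and convert to an average error rate $(1-p)(d-1)/d$; the sequence lengths and fidelities below are synthetic.

```python
import math

def rb_error_rate(seq_lengths, fidelities, d=2, B=None):
    """Estimate the average error rate from a zeroth-order RB decay
    F(m) = A * p**m + B via linear regression on log(F - B).
    B defaults to the depolarizing baseline 1/d."""
    if B is None:
        B = 1.0 / d
    xs = seq_lengths
    ys = [math.log(F - B) for F in fidelities]   # log(A) + m * log(p)
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    p = math.exp(slope)                           # recovered decay parameter
    return (1 - p) * (d - 1) / d                  # average error rate, dimension d

# synthetic noiseless decay with p = 0.99, A = B = 0.5 (illustrative numbers)
ms = [1, 5, 10, 20, 50]
fids = [0.5 + 0.5 * 0.99 ** m for m in ms]
r = rb_error_rate(ms, fids)   # recovers (1 - 0.99)/2 = 0.005
```

With exact model data the regression recovers the decay parameter, and hence the error rate, essentially exactly; with experimental data the same fit yields the estimate whose validity conditions the paper establishes.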
Yilmaz, Atilla
2009-01-01
We consider the quenched and averaged (or annealed) large deviation rate functions $I_q$ and $I_a$ for space-time and (the usual) space-only RWRE on $\\mathbb{Z}^d$. By Jensen's inequality, $I_a\\leq I_q$. In the space-time case, when $d\\geq3+1$, $I_q$ and $I_a$ are known to be equal on an open set containing the typical velocity $\\xi_o$. When $d=1+1$, we prove that $I_q$ and $I_a$ are equal only at $\\xi_o$. Similarly, when $d=2+1$, we show that $I_a
Ultra low voltage and low power Static Random Access Memory design using average 6.5T technique
Nagalingam RAJESWARAN
2015-12-01
Power-stringent Static Random Access Memory (SRAM) design is essential in embedded systems such as biomedical implants, automotive electronics and energy-harvesting devices, in which battery life, input power and execution delay are the main concerns. With reduced supply voltage, SRAM cell design goes through severe stability issues. In this paper, we present a highly stable average 6.5T SRAM cell for ultra-low power in 125 nm technology. The distinct difference between the proposed technique and other conventional methods is the data-independent leakage in the read bit line, which is achieved by newly introduced block mask transistors. An average 6.5T SRAM and an average 8T SRAM are designed and compared with 6T, 8T, 9T, 10T and 14T SRAM cells. The results indicate an appreciable decrease in power consumption and delay.
Aneta Rita Borkowska
2014-05-01
BACKGROUND: The aim of the research was to assess memorization and recall of logically connected and unconnected material, coded graphically and linguistically, and the ability to focus attention, in a group of children with intelligence below average, compared to children with average intelligence. PARTICIPANTS AND PROCEDURE: The study group included 27 children with intelligence below average. The control group consisted of 29 individuals. All of them were examined using the authors' experimental trials and the TUS test (Attention and Perceptiveness Test). RESULTS: Children with intelligence below average memorized significantly less information contained in the logical material, demonstrated lower ability to memorize the visual material, memorized significantly fewer words in the verbal material learning task, achieved lower results in such indicators of the visual attention process pace as the number of omissions and mistakes, and had a lower pace of perceptual work, compared to children with average intelligence. CONCLUSIONS: The results confirm that children with intelligence below average have difficulties with memorizing new material, both logically connected and unconnected. The significantly lower capacity of direct memory is independent of modality. The results of the study on the memory process confirm the hypothesis about lower abilities of children with intelligence below average, in terms of concentration, work pace, efficiency and perception.
Studies in the theory of random processes
Skhorokhod, A V
1982-01-01
This text is devoted to the development of certain probabilistic methods in the specific field of stochastic differential equations and limit theorems for Markov processes. Specialists, researchers, and students in the field of probability will find it a source of important theorems as well as a remarkable amount of advanced material in compact form.The treatment begins by introducing the basic facts of the theory of random processes and constructing the auxiliary apparatus of stochastic integrals. All proofs are presented in full. Succeeding chapters explore the theory of stochastic different
Experimental Quantum Randomness Processing Using Superconducting Qubits
Yuan, Xiao; Liu, Ke; Xu, Yuan; Wang, Weiting; Ma, Yuwei; Zhang, Fang; Yan, Zhaopeng; Vijay, R.; Sun, Luyan; Ma, Xiongfeng
2016-07-01
Coherently manipulating multipartite quantum correlations leads to remarkable advantages in quantum information processing. A fundamental question is whether such quantum advantages persist only by exploiting multipartite correlations, such as entanglement. Recently, Dale, Jennings, and Rudolph answered the question in the negative by showing that a randomness-processing task, the quantum Bernoulli factory, is strictly more powerful when using quantum coherence than under classical mechanics. In this Letter, focusing on the same scenario, we propose a theoretical protocol that is classically impossible but can be implemented solely using quantum coherence without entanglement. We demonstrate the protocol by exploiting the high-fidelity quantum state preparation and measurement with a superconducting qubit in the circuit quantum electrodynamics architecture and a nearly quantum-limited parametric amplifier. Our experiment shows the advantage of using the quantum coherence of a single qubit for information processing even when multipartite correlation is not present.
Fundamentals of applied probability and random processes
Ibe, Oliver
2005-01-01
This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study. * Good and solid introduction to probability theory and stochastic processes * Logically organized; writing is presented in a clear manner * Choice of topics is comprehensive within the area of probability * Ample homework problems are organized into chapter sections
Ra and the average effective strain of surface asperities deformed in metal-working processes
Bay, Niels; Wanheim, Tarras; Petersen, A. S
1975-01-01
Based upon a slip-line analysis of the plastic deformation of surface asperities, a theory is developed for determining the Ra-value (c.l.a.) and the average effective strain in the surface layer when deforming asperities in metal-working processes. The ratio between Ra and Ra0, the Ra-values after and before deformation, is a function of the nominal normal pressure and the initial slope γ0 of the surface asperities; the last parameter does not influence Ra significantly. The average effective strain in the deformed surface layer is a function of the nominal normal pressure...
Representation Theorems for Fuzzy Random Sets and Fuzzy Stochastic Processes
Anonymous
1999-01-01
Fuzzy static and dynamic random phenomena in an abstract separable Banach space are discussed in this paper. Representation theorems for fuzzy set-valued random sets, fuzzy random elements and fuzzy set-valued stochastic processes are obtained.
A comparison of control charts for the average of autocorrelated processes
Fabiane R. S. Yassukawa
2008-04-01
Control charts are extensively used for monitoring process parameters. In general, these charts are based on the assumptions of normality and independence of the sample observations. However, there are situations where independence does not hold, such as in chemical processes or on-line sampling. In this paper we compare control charts based on geostatistics and time-series methodologies with the well-known Shewhart, CUSUM and EWMA charts when used to monitor the average of autocorrelated processes. The comparison was performed using Monte Carlo simulation implemented in the software R for Windows.
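Of the charts compared above, the EWMA chart is the easiest to sketch. The following is a hedged illustration, not the paper's simulation study: it implements the textbook EWMA recursion and time-varying control limits, and applies them to synthetic data with an invented 1.5-sigma mean shift after 50 in-control points.

```python
import random

def ewma_chart(data, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """EWMA chart: z_t = lam*x_t + (1-lam)*z_{t-1}; signal when z_t leaves
    mu0 +/- L*sigma*sqrt( lam/(2-lam) * (1 - (1-lam)**(2t)) )."""
    z = mu0
    signals = []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        half_width = L * sigma * ((lam / (2 - lam))
                                  * (1 - (1 - lam) ** (2 * t))) ** 0.5
        signals.append(abs(z - mu0) > half_width)
    return signals

rng = random.Random(7)
# 50 in-control points, then a sustained 1.5-sigma mean shift (illustrative data)
data = ([rng.gauss(0.0, 1.0) for _ in range(50)]
        + [rng.gauss(1.5, 1.0) for _ in range(30)])
signals = ewma_chart(data)   # the shifted segment triggers alarms
```

Note that these limits assume independent observations; the point of the paper is precisely that autocorrelation breaks this assumption, motivating the geostatistics and time-series alternatives.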
Asymptotic theory of weakly dependent random processes
Rio, Emmanuel
2017-01-01
Presenting tools to aid understanding of asymptotic theory and weakly dependent processes, this book is devoted to inequalities and limit theorems for sequences of random variables that are strongly mixing in the sense of Rosenblatt, or absolutely regular. The first chapter introduces covariance inequalities under strong mixing or absolute regularity. These covariance inequalities are applied in Chapters 2, 3 and 4 to moment inequalities, rates of convergence in the strong law, and central limit theorems. Chapter 5 concerns coupling. In Chapter 6 new deviation inequalities and new moment inequalities for partial sums via the coupling lemmas of Chapter 5 are derived and applied to the bounded law of the iterated logarithm. Chapters 7 and 8 deal with the theory of empirical processes under weak dependence. Lastly, Chapter 9 describes links between ergodicity, return times and rates of mixing in the case of irreducible Markov chains. Each chapter ends with a set of exercises. The book is an updated and extended ...
Probability, random processes, and ergodic properties
Gray, Robert M
1988-01-01
This book has been written for several reasons, not all of which are academic. This material was for many years the first half of a book in progress on information and ergodic theory. The intent was and is to provide a reasonably self-contained advanced treatment of measure theory, probability theory, and the theory of discrete time random processes with an emphasis on general alphabets and on ergodic and stationary properties of random processes that might be neither ergodic nor stationary. The intended audience was mathematically inclined engineering graduate students and visiting scholars who had not had formal courses in measure theoretic probability. Much of the material is familiar stuff for mathematicians, but many of the topics and results have not previously appeared in books. The original project grew too large and the first part contained much that would likely bore mathematicians and discourage them from the second part. Hence I finally followed the suggestion to separate the material and split...
Youngstedt, Shawn D; Jean-Louis, Girardin; Bootzin, Richard R; Kripke, Daniel F; Cooper, Jonnifer; Dean, Lauren R; Catao, Fabio; James, Shelli; Vining, Caitlin; Williams, Natasha J; Irwin, Michael R
2013-09-01
Epidemiologic studies have consistently shown associations between sleep duration and mortality that may be consistent with results from experimental sleep deprivation studies. However, there has been little study of chronic moderate sleep restriction, and little evaluation of older adults, who might be more vulnerable to negative effects of sleep restriction given their age-related morbidities. Moreover, the risks of long sleep have scarcely been examined experimentally. Moderate sleep restriction might benefit older long sleepers, who often spend excessive time in bed (TIB), in contrast to older adults with average sleep patterns. Our aims are: (1) to examine the ability of older long sleepers and older average sleepers to adhere to 60 min TIB restriction; and (2) to contrast effects of chronic TIB restriction in older long vs. average sleepers. Older adults (n = 100; 60-80 years) who sleep 8-9 h per night and 100 older adults who sleep 6-7.25 h per night will be examined at 4 sites over 5 years. Following a 2-week baseline, participants will be randomized to one of two 12-week treatments: (1) a sleep restriction involving a fixed sleep-wake schedule, in which TIB is reduced 60 min below each participant's baseline TIB; and (2) a control treatment involving no sleep restriction, but a fixed sleep schedule. Sleep will be assessed with actigraphy and a diary. Measures will include glucose tolerance, sleepiness, depressive symptoms, quality of life, cognitive performance, incidence of illness or accident, and inflammation.
Statistical early-warning indicators based on Auto-Regressive Moving-Average processes
Faranda, Davide; Dubrulle, Bérengère
2014-01-01
We address the problem of defining early-warning indicators of critical transitions. To this purpose, we fit the relevant time series with a class of linear models known as Auto-Regressive Moving-Average (ARMA(p,q)) models. We define two indicators representing the total order and the total persistence of the process, linked, respectively, to the shape and to the characteristic decay time of the autocorrelation function of the process. We successfully test the method by detecting transitions in a Langevin model and a 2D Ising model with nearest-neighbour interaction. We then apply the method to complex systems, namely dynamo thresholds and financial crisis detection.
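The notion of a characteristic decay time of the autocorrelation function can be made concrete with a toy computation. The sketch below is a simplified stand-in, not the paper's indicators (which are built from full ARMA(p,q) fits): it estimates the lag-1 autocorrelation of a series and converts it to the decay time of an AR(1) proxy, which would rise as persistence grows near a transition.

```python
import math
import random

def ar1_persistence(x):
    """Lag-1 autocorrelation phi of a series and the corresponding
    characteristic decay time tau = -1/log(|phi|) of an AR(1) fit."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    cov1 = sum((x[t] - mu) * (x[t + 1] - mu) for t in range(n - 1)) / n
    phi = cov1 / var
    tau = -1.0 / math.log(abs(phi))
    return phi, tau

rng = random.Random(3)
# synthetic AR(1) series with phi = 0.9: a rising tau would flag persistence
x, v = [], 0.0
for _ in range(20_000):
    v = 0.9 * v + rng.gauss(0.0, 1.0)
    x.append(v)
phi, tau = ar1_persistence(x)
```

Tracking `tau` over a sliding window is the simplest version of a persistence-based early-warning indicator; the ARMA(p,q) indicators of the paper refine this by accounting for the full model order.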
Traffic and random processes an introduction
Mauro, Raffaele
2015-01-01
This book deals in a basic and systematic manner with the fundamentals of random function theory, and at the same time looks at aspects related to arrival, vehicle headway and operational speed processes. The work serves as a useful practical and educational tool and aims at providing stimulus and motivation to investigate issues of such a strong applicative interest. It has a clearly discursive and concise structure, in which numerical examples are given to clarify the applications of the suggested theoretical model. Some statistical characterizations are fully developed in order to illustrate the peculiarities of specific modeling approaches; finally, there is a useful bibliography for in-depth thematic analysis.
interaction randomly perturbed by Wiener processes
Anatoli V. Skorokhod
2003-01-01
Infinite systems of stochastic differential equations for randomly perturbed particle systems in $R^d$ with pairwise interaction are considered. For gradient systems these equations are of the form $dx_k(t) = F_k(t)\\,dt + \\sigma\\,dw_k(t)$, and for Hamiltonian systems these equations are of the form $d\\dot{x}_k(t) = F_k(t)\\,dt + \\sigma\\,dw_k(t)$. Here $x_k(t)$ is the position of the $k$th particle, $\\dot{x}_k(t)$ is its velocity, and $F_k = -\\sum_{j \\ne k} U_x(x_k(t) - x_j(t))$, where the function $U: R^d \\to R$ is the potential of the system, $\\sigma > 0$ is a constant, and $\\{w_k(t), k = 1, 2, \\ldots\\}$ is a sequence of independent standard Wiener processes.
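A finite truncation of the gradient system above can be integrated numerically with the Euler-Maruyama scheme. The sketch below is a hedged illustration under invented parameters: three particles in one dimension with the quadratic pair potential $U(r) = r^2/2$ (so the pair force is simply $-r$), rather than the infinite system the paper analyzes.

```python
import math
import random

def simulate_gradient_system(x0, sigma, dt, steps, rng, grad_U):
    """Euler-Maruyama for dx_k = F_k dt + sigma dw_k with pairwise force
    F_k = -sum_{j != k} grad_U(x_k - x_j)."""
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        F = [-sum(grad_U(x[k] - x[j]) for j in range(n) if j != k)
             for k in range(n)]
        # dw_k increments are independent N(0, dt) variables
        x = [x[k] + F[k] * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for k in range(n)]
    return x

# quadratic pair potential U(r) = r**2 / 2, so grad_U(r) = r (illustrative choice)
rng = random.Random(5)
x = simulate_gradient_system([0.0, 1.0, 2.0], sigma=0.1, dt=0.01,
                             steps=2000, rng=rng, grad_U=lambda r: r)
spread = max(x) - min(x)   # attractive potential keeps the particles clustered
```

With this attractive potential the relative coordinates are Ornstein-Uhlenbeck-like and the particle cloud stays tight, while the common center diffuses, which matches the qualitative picture of a randomly perturbed interacting system.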
Ergodic averages for monotone functions using upper and lower dominating processes
Møller, Jesper; Mengersen, Kerrie
2007-01-01
We show how the mean of a monotone function (defined on a state space equipped with a partial ordering) can be estimated using ergodic averages calculated from upper and lower dominating processes of a stationary irreducible Markov chain. In particular, we do not need to simulate the stationary Markov chain, and we eliminate the problem of whether an appropriate burn-in is determined or not. Moreover, when a central limit theorem applies, we show how confidence intervals for the mean can be estimated by bounding the asymptotic variance of the ergodic average based on the equilibrium chain. Our methods are studied in detail for three models using Markov chain Monte Carlo methods, and we also discuss various other types of models for which our methods apply.
Many-body Systems Interacting via a Two-body Random Ensemble: average energy of each angular momentum
Zhao, Y M; Yoshinaga, N
2002-01-01
In this paper, we discuss the regularities of the energy of each angular momentum $I$ averaged over all the states for a fixed angular momentum (denoted as $\\bar{E}_I$'s) in many-body systems interacting via a two-body random ensemble. It is found that $\\bar{E}_I$'s with $I \\sim I_{min}$ (minimum of $I$) or $I_{max}$ have large probabilities (denoted as ${\\cal P}(I)$) of being the lowest, and that ${\\cal P}(I)$ is close to zero elsewhere. A simple argument based on the randomness of the two-particle cfp's is given. A compact trajectory of the energy $\\bar{E}_I$ vs. $I(I+1)$ is found to be robust. Regular fluctuations of the $P(I)$ (the probability of finding $I$ to be the ground state) and ${\\cal P}(I)$ of even fermions in a single-$j$ shell and boson systems are found to be reversed, and are explained by the dimension fluctuation of the model space. Other regularities, such as why there are 2 or 3 sizable ${\\cal P}(I)$'s with $I\\sim I_{min}$ and ${\\cal P}(I) \\ll {\\cal P}(I_{max})$'s with $I\\sim I_{max}$, and why the coefficien...
A self-similar process arising from a random walk with random environment in random scenery
Franke, Brice [doi:10.3150/09-BEJ234]
2011-01-01
In this article, we merge celebrated results of Kesten and Spitzer [Z. Wahrsch. Verw. Gebiete 50 (1979) 5-25] and Kawazu and Kesten [J. Stat. Phys. 37 (1984) 561-575]. A random walk performs a motion in an i.i.d. environment and observes an i.i.d. scenery along its path. We assume that the scenery is in the domain of attraction of a stable distribution and prove that the resulting observations satisfy a limit theorem. The resulting limit process is a self-similar stochastic process with non-trivial dependencies.
Multiple change-points estimation of moving-average processes under dependence assumptions
ZHANG Lixin; LI Yunxia
2004-01-01
In this paper, some convergence results for a least-squares estimator in the problem of multiple change-point estimation are presented, and moving-average processes of ρ-mixing sequences with mean shifts are discussed. When the number of change points is known, the consistency of the change-point estimator is derived. When the number of changes is unknown, the consistency of the estimated number of change points and of the change-point estimator obtained by the penalized least-squares method is established. The results are also true for φ-mixing, α-mixing, associated and negatively associated sequences under suitable conditions.
Considerations of the Error Variances of Time-Averaged Estimators for Correlated Processes
1992-12-01
[Scanned-report residue removed; the only recoverable content is a figure caption: "Time-averaged autocorrelation function and its variance for an AR(1) process".]
Change-point Estimation of a Mean Shift in Moving-average Processes Under Dependence Assumptions
Yun-xia Li
2006-01-01
In this paper we discuss the least-squares estimator of the unknown change point in a mean shift for moving-average processes of ALNQD sequences. The consistency and the rate of convergence for the estimated change point are established. The asymptotic distribution of the change-point estimator is obtained. The results are also true for ρ-mixing, ψ-mixing and α-mixing sequences under suitable conditions. These results extend those of Bai [1], who studied the mean shift point of a linear process of i.i.d. variables, and the condition $\sum_{j=0}^{\infty} j|a_j| < \infty$ in Bai is weakened to $\sum_{j=0}^{\infty} |a_j| < \infty$.
Averaging, not internal noise, limits the development of coherent motion processing
Catherine Manning
2014-10-01
The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or by global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and to adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
Mathematical modelling to predict the roughness average in micro milling process
Burlacu, C.; Iordan, O.
2016-08-01
Surface roughness plays a very important role in the micro milling process, and in any machining process, because it indicates the state of the machined surface. Many surface roughness parameters can be used to analyse a surface, but the most common is the average roughness (Ra). This paper presents the experimental results obtained at micro milling of C45W steel and the ways to determine the Ra parameter with respect to the working conditions. The chemical characteristics of the material were determined from a spectral analysis; the chemical composition was measured at one and two points and reported graphically and in tabular form. A Surtronic 3+ profilometer was used to examine the surface roughness profiles; the effect of the independent parameters can be investigated to obtain a proper relationship between the Ra parameter and the process variables. The mathematical model was developed using the multiple regression method with four independent variables D, v, ap, fz; the analysis was done using the statistical software SPSS. ANOVA analysis of variance and the F-test were used to justify the accuracy of the mathematical model. The multiple regression method was used to determine the correlation between a criterion variable and the predictor variables. The prediction model can be used for micro milling process optimization.
Pseudo-random unitary operators for quantum information processing.
Emerson, Joseph; Weinstein, Yaakov S; Saraceno, Marcos; Lloyd, Seth; Cory, David G
2003-12-19
In close analogy to the fundamental role of random numbers in classical information theory, random operators are a basic component of quantum information theory. Unfortunately, the implementation of random unitary operators on a quantum processor is exponentially hard. Here we introduce a method for generating pseudo-random unitary operators that can reproduce those statistical properties of random unitary operators most relevant to quantum information tasks. This method requires exponentially fewer resources, and hence enables the practical application of random unitary operators in quantum communication and information processing protocols. Using a nuclear magnetic resonance quantum processor, we were able to realize pseudorandom unitary operators that reproduce the expected random distribution of matrix elements.
Random Designs for Estimating Integrals of Stochastic Processes
Schoenfelder, Carol; Cambanis, Stamatis
1982-01-01
The integral of a second-order stochastic process $Z$ over a $d$-dimensional domain is estimated by a weighted linear combination of observations of $Z$ in a random design. The design sample points are possibly dependent random variables and are independent of the process $Z$, which may be nonstationary. Necessary and sufficient conditions are obtained for the mean squared error of a random design estimator to converge to zero as the sample size increases towards infinity. Simple random, stra...
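For a fixed realization z(·) of the process observed at the points of a simple random design, the equal-weight estimator of the integral over [0, 1] can be sketched as below; the function name and the test integrand are illustrative assumptions, not the paper's notation.

```python
import random

random.seed(7)

def random_design_estimate(z, n):
    """Equal-weight estimator of the integral of z over [0, 1] based on a
    simple random design: n i.i.d. uniform sample points, independent of z."""
    return sum(z(random.random()) for _ in range(n)) / n

# Fixed "realization" z(u) = u^2, whose integral over [0, 1] is 1/3.
est = random_design_estimate(lambda u: u * u, 200000)
print(est)  # close to 1/3
```

As the sample size grows, the mean squared error of this estimator converges to zero, which is the kind of convergence the paper characterizes for general (possibly dependent, stratified) designs.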
Average Sample-path Optimality for Continuous-time Markov Decision Processes in Polish Spaces
Quan-xin ZHU
2011-01-01
In this paper we study the average sample-path cost (ASPC) problem for continuous-time Markov decision processes in Polish spaces. To the best of our knowledge, this paper is a first attempt to study the ASPC criterion on continuous-time MDPs with Polish state and action spaces. The corresponding transition rates are allowed to be unbounded, and the cost rates may have neither upper nor lower bounds. Under some mild hypotheses, we prove the existence of ε (ε ≥ 0)-ASPC optimal stationary policies based on two different approaches: one is the "optimality equation" approach and the other is the "two optimality inequalities" approach.
Averaging for a Fully-Coupled Piecewise Deterministic Markov Process in Infinite Dimension
Genadot, Alexandre
2011-01-01
In this paper, we consider the generalized Hodgkin-Huxley model introduced by Austin in \cite{Austin}. This model describes the propagation of an action potential along the axon of a neuron at the scale of ion channels. Mathematically, this model is a fully-coupled Piecewise Deterministic Markov Process (PDMP) in infinite dimension. We introduce two time scales into this model by considering that some ion channels open and close at faster jump rates than others. We perform a slow-fast analysis of this model and prove that asymptotically this two-time-scale model reduces to the so-called averaged model, which is still a PDMP in infinite dimension, for which we provide effective evolution equations and jump rates.
Ahmed K. Hassan
2008-01-01
One of the serious problems in any wireless communication system using a multi-carrier modulation technique like Orthogonal Frequency Division Multiplexing (OFDM) is its Peak to Average Power Ratio (PAPR). It limits the transmission power due to the limited dynamic range of the Analog to Digital and Digital to Analog Converters (ADC/DAC) and the power amplifiers at the transmitter, which in turn sets the limit on the maximum achievable rate. This issue is especially important for mobile terminals to sustain longer battery life. Therefore reducing PAPR can be regarded as an important issue in realizing efficient and affordable mobile communication services. This paper presents an efficient PAPR reduction method for OFDM signals. The method is based on clipping and iterative processing. Iterative processing is performed to limit PAPR in the time domain, but the subtraction of the peaks exceeding the PAPR threshold from the original signal is done in the frequency domain, not in the time domain as in the usual clipping technique. The results show that this method is capable of reducing the PAPR significantly with minimal bit error rate (BER) degradation.
Jacob, Chinthaka; Anderson, William
2016-06-01
Aeolian erosion of flat, arid landscapes is induced (and sustained) by the aerodynamic surface stress imposed by flow in the atmospheric surface layer. Conceptual models typically indicate that sediment mass flux, Q (via saltation or drift), scales with imposed aerodynamic stress raised to some exponent, n, where n > 1 . This scaling demonstrates the importance of turbulent fluctuations in driving aeolian processes. In order to illustrate the importance of surface-stress intermittency in aeolian processes, and to elucidate the role of turbulence, conditional averaging predicated on aerodynamic surface stress has been used within large-eddy simulation of atmospheric boundary-layer flow over an arid, flat landscape. The conditional-sampling thresholds are defined based on probability distribution functions of surface stress. The simulations have been performed for a computational domain with ≈ 25 H streamwise extent, where H is the prescribed depth of the neutrally-stratified boundary layer. Thus, the full hierarchy of spatial scales are captured, from surface-layer turbulence to large- and very-large-scale outer-layer coherent motions. Spectrograms are used to support this argument, and also to illustrate how turbulent energy is distributed across wavelengths with elevation. Conditional averaging provides an ensemble-mean visualization of flow structures responsible for erosion `events'. Results indicate that surface-stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. Fluid in the interfacial shear layers between these adjacent quasi-uniform momentum regions exhibits high streamwise and vertical vorticity.
Funamizu, Hideki; Shimoma, Shohei; Yuasa, Tomonori; Aizu, Yoshihisa
2014-10-20
We present the effects of spatiotemporal averaging processes on an estimation of spectral reflectance in color digital holography using speckle illuminations. In this technique, speckle fields emitted from a multimode fiber are used as both a reference wave and a wavefront illuminating an object. The interference patterns of two coherent waves for three wavelengths are recorded as digital holograms on a CCD camera. Speckle fields are changed by vibrating the multimode fiber using a vibrator, and a number of holograms are acquired to average reconstructed images. After performing an averaging process, which we refer to as a temporal averaging process in this study, using images reconstructed from multiple holograms, a spatial averaging process is applied using a smoothing window function. For the estimation of spectral reflectance in reconstructed images, we use the Wiener estimation method. The effects of the averaging processes on color reproducibility are evaluated by a chromaticity diagram, the root-mean-square error, and color differences.
Modelling population processes with random initial conditions.
Pollett, P K; Dooley, A H; Ross, J V
2010-02-01
Population dynamics are almost inevitably associated with two predominant sources of variation: the first, demographic variability, a consequence of chance in progenitive and deleterious events; the second, initial state uncertainty, a consequence of partial observability and reporting delays and errors. Here we outline a general method for incorporating random initial conditions in population models where a deterministic model is sufficient to describe the dynamics of the population. Additionally, we show that for a large class of stochastic models the overall variation is the sum of variation due to random initial conditions and variation due to random dynamics, and thus we are able to quantify the variation not accounted for when random dynamics are ignored. Our results are illustrated with reference to both simulated and real data.
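The variance decomposition stated above (overall variation = variation from random initial conditions + variation from random dynamics) can be illustrated with a toy additive model; the model and all parameter values here are my own illustration, not the paper's.

```python
import random

random.seed(1)

def final_state_variance(n_paths=20000, T=10, var0=4.0, var_step=1.0):
    """Toy additive model: X_T = X_0 + sum of T i.i.d. noise increments.
    By the law of total variance, Var(X_T) = var0 + T * var_step, i.e. the
    initial-condition and dynamics contributions simply add."""
    finals = []
    for _ in range(n_paths):
        x = random.gauss(0.0, var0 ** 0.5)           # random initial condition
        for _ in range(T):
            x += random.gauss(0.0, var_step ** 0.5)  # random dynamics
        finals.append(x)
    mean = sum(finals) / n_paths
    return sum((v - mean) ** 2 for v in finals) / n_paths

empirical = final_state_variance()
print(empirical)  # close to 4.0 + 10 * 1.0 = 14.0
```

Setting var0 = 0 recovers the variance attributable to dynamics alone, which quantifies what is lost when initial-state uncertainty is ignored.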
Random self-similar trees and a hierarchical branching process
Kovchegov, Yevgeniy
2016-01-01
We study self-similarity in random binary rooted trees. In a well-understood case of Galton-Watson trees, a distribution is called self-similar if it is invariant with respect to the operation of pruning, which cuts the tree leaves. This only happens in the critical case (a constant process progeny), which also exhibits other special symmetries. We extend the prune-invariance set-up to a non-Markov situation and trees with edge lengths. In this general case the class of self-similar processes becomes much richer and covers a variety of practically important situations. The main result is construction of the hierarchical branching processes that satisfy various self-similarity constraints (distributional, mean, in edge-lengths) depending on the process parameters. Taking the limit of averaged stochastic dynamics, as the number of trajectories increases, we obtain a deterministic system of differential equations that describes the process evolution. This system is used to establish a phase transition that separ...
UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS
National Aeronautics and Space Administration — UNDERSTANDING SEVERE WEATHER PROCESSES THROUGH SPATIOTEMPORAL RELATIONAL RANDOM FORESTS AMY MCGOVERN, TIMOTHY SUPINIE, DAVID JOHN GAGNE II, NATHANIEL TROUTMAN,...
Multiplier phenomenology in random multiplicative cascade processes
Jouault, B; Greiner, M; Jouault, Bruno; Lipa, Peter; Greiner, Martin
1999-01-01
We demonstrate that the correlations observed in conditioned multiplier distributions of the energy dissipation in fully developed turbulence can be understood as an unavoidable artefact of the observation procedure. Taking the latter into account, all reported properties of both unconditioned and conditioned multiplier distributions can be reproduced by cascade models with uncorrelated random weights if their bivariate splitting function is non-energy conserving. For the alpha-model we show that the simulated multiplier distributions converge to a limiting form, which is very close to the experimentally observed one. If random translations of the observation window are accounted for, also the subtle effects found in conditioned multiplier distributions are precisely reproduced.
Transmissibility Matrix in Harmonic and Random Processes
M. Fontul
2004-01-01
The transmissibility concept may be generalized to multi-degree-of-freedom systems with multiple random excitations. This generalization involves the definition of a transmissibility matrix relating two sets of responses when the structure is subjected to excitation at a given set of coordinates. Applying the concept to an experimental example is the easiest way to validate the method.
Lu, Chao-Chin; Leng, Jianwei; Cannon, Grant W; Zhou, Xi; Egger, Marlene; South, Brett; Burningham, Zach; Zeng, Qing; Sauer, Brian C
2016-12-01
Medications with non-standard dosing and unstandardized units of measurement make it difficult to estimate the prescribed dose from pharmacy dispensing data. A natural language processing tool named the SIG extractor was developed to identify and extract elements from narrative medication instructions to compute average weekly doses (AWDs) for disease-modifying antirheumatic drugs. The goal of this paper is to evaluate the performance of the SIG extractor. This agreement study utilized Veterans Health Affairs pharmacy data from 2008 to 2012. The SIG extractor was designed to extract key elements from narrative medication schedules (SIGs) for 17 select medications to calculate AWD, and these medications were categorized by generic name and route of administration. The SIG extractor was evaluated against an annotator-derived reference standard for accuracy, defined as the fraction of AWDs accurately computed. The overall accuracy was 89% [95% confidence interval (CI) 88%, 90%]. The accuracy was ≥85% for all medication and route combinations, except for cyclophosphamide (oral) and cyclosporine (oral), which were 79% (95% CI 72%, 85%) and 66% (95% CI 58%, 73%), respectively. The SIG extractor performed well on the majority of medications, indicating that AWD calculated by the SIG extractor can be used to improve estimation of AWD when dispensed quantity or days' supply is questionable or improbable. The working model for annotating SIGs and the SIG extractor are generalized and can easily be applied to other medications. Copyright © 2016 John Wiley & Sons, Ltd.
Shlakhter, Oleksandr
2008-01-01
One of the most widely used methods for solving average-cost MDP problems is the value iteration method. This method, however, is often computationally impractical and restricted in the size of solvable MDP problems. We propose acceleration operators that improve the performance of value iteration for average-reward MDP models. These operators are based on two important properties of the Markovian operator: contraction mapping and monotonicity. It is well known that the classical relative value iteration methods for average-cost MDPs do not involve the max-norm contraction or the monotonicity property. To overcome this difficulty we propose to combine acceleration operators with variants of value iteration for the stochastic shortest path problems associated with average-reward problems.
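For concreteness, the relative value iteration scheme that the abstract refers to can be sketched on a tiny average-reward MDP; the toy MDP and all names below are my own illustration, and the paper's acceleration operators would be applied on top of such a scheme.

```python
# Minimal relative value iteration (RVI) sketch for an average-reward MDP.

def relative_value_iteration(P, R, ref=0, tol=1e-10, max_iter=100000):
    """P[s][a][t]: transition probabilities, R[s][a]: rewards.
    Returns the optimal average reward (gain). The relative value h[ref]
    is pinned at 0, so the gain estimate is the Bellman update at ref."""
    n = len(P)
    h = [0.0] * n
    g = 0.0
    for _ in range(max_iter):
        Th = [max(R[s][a] + sum(p * h[t] for t, p in enumerate(P[s][a]))
                  for a in range(len(P[s])))
              for s in range(n)]
        g = Th[ref]
        h_new = [v - g for v in Th]
        if max(abs(a - b) for a, b in zip(h_new, h)) < tol:
            break
        h = h_new
    return g

# State 0: action 0 stays (reward 1); action 1 moves to state 1 (reward 0).
# State 1: single action, reward 3, returns to state 0 with probability 0.5.
P = [[[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.5]]]
R = [[1.0, 0.0], [3.0]]
print(relative_value_iteration(P, R))  # optimal gain: cycling through 1 yields 2.0
```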
Schille, Joerg; Schneider, Lutz; Loeschner, Udo
2015-09-01
In this paper, laser processing of technical-grade stainless steel and copper using high-average-power ultrashort pulse lasers is studied in order to gain deeper insight into material removal for microfabrication. High-pulse-repetition-frequency picosecond and femtosecond lasers are used in conjunction with high-performance galvanometer scanners and an in-house developed two-axis polygon scanner system. By varying processing parameters such as wavelength, pulse length, fluence and repetition rate, cavities of standardized geometry are fabricated and analyzed. From the depths of the cavities produced, the ablation rate and removal efficiency are estimated. In addition, the quality of the cavities is evaluated by means of scanning electron microscope micrographs and surface roughness measurements. From the results obtained, the influence of the machining parameters on material removal and machining quality is discussed. It is shown that both material removal rate and quality increase when using femtosecond rather than picosecond laser pulses. On stainless steel, a maximum throughput of 6.81 mm³/min is achieved with 32 W of femtosecond laser power, and 15.04 mm³/min with 187 W of picosecond laser power. On copper, the maximum throughputs are 6.1 mm³/min and 21.4 mm³/min, obtained with 32 W femtosecond and 187 W picosecond laser power, respectively. The findings indicate that ultrashort pulses in the mid-fluence regime yield the most efficient material removal. In conclusion, a range of optimum processing parameters is derived that can enhance machining efficiency, throughput and quality in high-rate micromachining. The work carried out here clearly opens the way to significant industrial applications.
Statistical properties of several models of fractional random point processes
Bendjaballah, C.
2011-08-01
Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
Comparison of Image Processing Techniques using Random Noise Radar
2014-03-27
(Abstract not available; only scanned-thesis residue survives. Recoverable fragments: "The noise waveforms employed by the radar systems are generally white and Gaussian pseudo-random noise"; references [5] Hardin, Joshua A., "Information Encoding on a Pseudo Random Noise Radar Waveform", 2013, and [6] Jackson, Julie A., "EENG 668/714 Advanced Radar..."; title page: Comparison of Image Processing Techniques Using Random Noise Radar, thesis, Jesse Robert B. Cruz, Capt, USAF, AFIT-ENG-14-M-22.)
Discrete random signal processing and filtering primer with Matlab
Poularikas, Alexander D
2013-01-01
Engineers in all fields will appreciate a practical guide that combines several new effective MATLAB® problem-solving approaches and the very latest in discrete random signal processing and filtering. Numerous useful examples, problems, and solutions offer an extensive and powerful review. Written for practicing engineers seeking to strengthen their practical grasp of random signal processing, Discrete Random Signal Processing and Filtering Primer with MATLAB provides the opportunity to doubly enhance their skills. The author, a leading expert in the field of electrical and computer engineering, offe...
Randomized Primitives for Big Data Processing
Stöckel, Morten
of such data intersection computations, such as approximating the set intersection size and multiplying two matrices. The improvements over the current state of the art methods are either in the form of less space required or less time needed to process the data to compute the answer to the query....
Problem-Solving Processes of High and Average Performers in Physics.
Coleman, Elaine B.; Shore, Bruce
1991-01-01
This study examined the problem-solving protocols of 21 students in a grade 11 enriched physics course as well as 3 adult "experts" in physics. Experts and high performing students made more correct metastatements and more references to prior knowledge than did average performing students. (DB)
Signal analysis and processing for random binary phase coded pulse radar
孙光民; 刘国岁; 顾红
2004-01-01
The application of the random binary phase coded signal in CW radar systems has been limited by the difficulty of isolating the transmission and reception signals. In order to make use of the random binary phase coded signal, the random binary phase coded pulse radar (RBPC-PR) system has been studied. First, the average ambiguity function (AAF) of the RBPC-PR signal is analyzed. Then, a statistical method of reducing the range sidelobe (RSL) is presented. Finally, a signal processing scheme for the RBPC-PR is developed. The simulation results show that by using this scheme, the jamming immunity of the system and the resolution and accuracy of distance and velocity are improved, and the distance and velocity ambiguity caused by periodicity can also be removed. The RSL can be reduced by over 30 dB by the statistical averaging method, so the probability of ambiguity caused by random noise can be avoided.
Mixed exponentially weighted moving average-cumulative sum charts for process monitoring
Abbas, N.; Riaz, M.; Does, R.J.M.M.
2013-01-01
The control chart is a very popular tool of statistical process control. It is used to detect the existence of special-cause variation so that it can be removed and the process brought back into statistical control. Shewhart-type control charts are sensitive to large disturbances in the process, whereas
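A minimal sketch of the EWMA-type chart this line of work builds on, in its textbook form with exact time-varying control limits; the smoothing constant and shift size below are illustrative choices, not the paper's.

```python
def ewma_chart(data, target, sigma, lam=0.2, L=3.0):
    """Run an EWMA control chart. Returns the EWMA statistics and the
    1-based index of the first out-of-control signal (or None)."""
    z, stats, signal = target, [], None
    for i, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        # Exact EWMA variance at step i (tends to the asymptotic limit).
        var = sigma ** 2 * (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i))
        half_width = L * var ** 0.5
        stats.append(z)
        if signal is None and abs(z - target) > half_width:
            signal = i
    return stats, signal

# A sustained 2-sigma upward shift is flagged within a few observations.
stats, signal = ewma_chart([2.0] * 10, target=0.0, sigma=1.0)
print(signal)  # -> 3
```

The small smoothing constant is what makes the EWMA chart sensitive to small sustained shifts, complementing the Shewhart chart's sensitivity to large ones.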
Analysis of random factors of the self-education process
A. A. Solodov
2016-01-01
The aim of the study is the statistical description of the random factors of the self-education process, namely the stage of continuous education in which there is no direct influence of an educational organization on the student, and the development of algorithms for estimating these factors. It is assumed that the motivations for self-education are intrinsic factors, which characterize the individual learner, and external factors, associated with the changing environment and emerging challenges. The phenomena available for analysis of a self-learning process (the observed data) are events relevant to this process, which are modeled by points on the time axis whose number and position are assumed to be random. Each point can be associated with an unknown and unobserved random or non-random factor (parameter) which affects the intensity with which points appear. The purpose is to describe the observable and unobservable data and to develop algorithms for optimal estimation. Such estimates can then be used for the individual characterization of the process of self-study or for comparison of different students. For the analysis of the statistical characteristics of the process of self-education, the mathematical apparatus of the theory of point random processes is applied, which allows the key statistical characteristics of the unknown random factors of the process of self-education to be determined. The work constitutes a logically complete model including the following components: a basic statistical model of the appearance of points in the process of self-education in the form of a Poisson process, whose only characteristic is the intensity of occurrence of events; methods of testing the hypothesis of a Poisson distribution of the observed events; and a generalization of the basic model to the case where the intensity function depends on time and on an unknown factor (which can be either random or non-random). Such factors are interpreted as
Level sets and extrema of random processes and fields
Azais, Jean-Marc
2009-01-01
A timely and comprehensive treatment of random field theory with applications across diverse areas of study Level Sets and Extrema of Random Processes and Fields discusses how to understand the properties of the level sets of paths as well as how to compute the probability distribution of its extremal values, which are two general classes of problems that arise in the study of random processes and fields and in related applications. This book provides a unified and accessible approach to these two topics and their relationship to classical theory and Gaussian processes and fields, and the most modern research findings are also discussed. The authors begin with an introduction to the basic concepts of stochastic processes, including a modern review of Gaussian fields and their classical inequalities. Subsequent chapters are devoted to Rice formulas, regularity properties, and recent results on the tails of the distribution of the maximum. Finally, applications of random fields to various areas of mathematics a...
Bisexual Galton-Watson Branching Processes in Random Environments
Shi-xia Ma
2006-01-01
In this paper, we consider a bisexual Galton-Watson branching process whose offspring probability distribution is controlled by a random environment process. Some results for the probability generating functions associated with the process are obtained and sufficient conditions for certain extinction and for non-certain extinction are established.
Transforming spatial point processes into Poisson processes using random superposition
Møller, Jesper; Berthelsen, Kasper Klitgaaard
A spatial point process X is superposed with a complementary spatial point process Y to obtain a Poisson process X∪Y with intensity function β. Underlying this is a bivariate spatial birth-death process (Xt,Yt) which converges towards the distribution of (X,Y). We study the joint distribution of X and Y, and their marginal and conditional distributions. In particular, we introduce a fast and easy simulation procedure for Y conditional on X. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function β if and only if the true Papangelou intensity is used. Whether the superposition is actually such a Poisson process can easily be examined using well-known results and fast simulation procedures for Poisson processes. We illustrate this approach to model checking.
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
Orsingher, Enzo; Polito, Federico
2012-08-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\alpha(t)$, $N_\beta(t)$, $t>0$, we have that $N_\alpha(N_\beta(t)) \stackrel{d}{=} \sum_{j=1}^{N_\beta(t)} X_j$, where the $X_j$ are Poisson random variables. We present a series of similar cases, where the outer process is Poisson with different inner processes. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\alpha(\tau_k^\nu)$, $\nu \in (0,1]$, where $\tau_k^\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
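The distributional identity for the composed Poisson processes can be checked empirically: both sides have mean $\alpha\beta t$. The sketch below simulates both representations under illustrative parameter choices (not the paper's).

```python
import math
import random

random.seed(0)

def poisson(rate):
    """Knuth's method for a Poisson(rate) variate (fine for small rates)."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

alpha, beta, t, trials = 2.0, 1.5, 1.0, 20000

# Composition: the unit-rate process N_alpha evaluated at the random time
# N_beta(t), i.e. a Poisson variate with random rate alpha * N_beta(t).
composed = [poisson(alpha * poisson(beta * t)) for _ in range(trials)]

# Random sum: N_beta(t) i.i.d. Poisson(alpha) summands.
summed = [sum(poisson(alpha) for _ in range(poisson(beta * t)))
          for _ in range(trials)]

mean_c = sum(composed) / trials
mean_s = sum(summed) / trials
print(mean_c, mean_s)  # both close to alpha * beta * t = 3.0
```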
Renewal theory for perturbed random walks and similar processes
Iksanov, Alexander
2016-01-01
This book offers a detailed review of perturbed random walks, perpetuities, and random processes with immigration. Being of major importance in modern probability theory, both theoretical and applied, these objects have been used to model various phenomena in the natural sciences as well as in insurance and finance. The book also presents the many significant results and efficient techniques and methods that have been worked out in the last decade. The first chapter is devoted to perturbed random walks and discusses their asymptotic behavior and various functionals pertaining to them, including supremum and first-passage time. The second chapter examines perpetuities, presenting results on continuity of their distributions and the existence of moments, as well as weak convergence of divergent perpetuities. Focusing on random processes with immigration, the third chapter investigates the existence of moments, describes long-time behavior and discusses limit theorems, both with and without scaling. Chapters fou...
Money creation process in a random redistribution model
Chen, Siyan; Wang, Yougui; Li, Keqiang; Wu, Jinshan
2014-01-01
In this paper, the dynamical process of money creation in a random exchange model with debt is investigated. The money creation kinetics are analyzed by both the money-transfer matrix method and the diffusion method. From both approaches, we attain the same conclusion: the source of money creation in the case of random exchange is the agents with neither money nor debt. These analytical results are demonstrated by computer simulations.
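A minimal sketch of a random-exchange economy with debt helps make the setting concrete. The agent count, step count, and debt limit below are illustrative choices, not the authors' parameters: at each step a random payer gives one unit to a random receiver, and payers may go into debt down to -d.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_steps, debt_limit = 100, 10_000, 5
money = np.ones(n_agents)          # everyone starts with one unit
total = money.sum()

for _ in range(n_steps):
    payer, receiver = rng.choice(n_agents, size=2, replace=False)
    if money[payer] - 1 >= -debt_limit:   # debt constraint
        money[payer] -= 1
        money[receiver] += 1

# Transfers are zero-sum, so net money is conserved, while the positive
# money stock (the part backed by debt) can exceed the initial total.
positive_stock = money[money > 0].sum()
print(total, money.sum(), positive_stock)
```

The conserved net sum together with a growing positive stock is exactly the money-creation effect the abstract describes.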
Zaman, B.; Riaz, M.; Abbas, N.; Does, R.J.M.M.
2015-01-01
Shewhart, exponentially weighted moving average (EWMA), and cumulative sum (CUSUM) charts are well-known statistical tools used to handle special causes and to bring the process back into statistical control. Shewhart charts are useful to detect large shifts, whereas EWMA and CUSUM are more sensitive for small...
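For readers unfamiliar with the EWMA statistic mentioned above, a minimal sketch follows; the smoothing constant λ = 0.2 and width L = 3 are common textbook defaults, not values prescribed by this abstract:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma=1.0):
    """EWMA statistic z_i = lam*x_i + (1-lam)*z_{i-1} with the standard
    time-varying control limits around the target mean mu0."""
    z = np.empty(len(x))
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
    k = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * k)))
    return z, mu0 - width, mu0 + width

# On an in-control series at the target mean the statistic stays at mu0.
z, lcl, ucl = ewma_chart(np.zeros(10))
```

The limits widen with time toward their asymptote, which is why EWMA charts react quickly to small sustained shifts.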
Continuous state branching processes in random environment: The Brownian case
Palau, Sandra; Pardo, Juan Carlos
2015-01-01
We consider continuous state branching processes that are perturbed by a Brownian motion. These processes are constructed as the unique strong solution of a stochastic differential equation. The long-term extinction and explosion behaviours are studied. In the stable case, the extinction and explosion probabilities are given explicitly. We find three regimes for the asymptotic behaviour of the explosion probability and, as in the case of branching processes in random environment, we find five...
Pritychenko, B.
2010-07-19
The present contribution represents a significant improvement of our previous calculation of Maxwellian-averaged cross sections and astrophysical reaction rates. The addition of newly evaluated neutron reaction libraries, such as ROSFOND and the Low-Fidelity Covariance Project, and improvements in data processing techniques allowed us to extend the calculation to the entire range of s-process nuclei, to calculate Maxwellian-averaged cross section uncertainties for the first time, and to provide additional insights on all currently available neutron-induced reaction data. Nuclear reaction calculations using ENDF libraries and current Java technologies will be discussed and new results will be presented.
Designing neural networks that process mean values of random variables
Barber, Michael J. [AIT Austrian Institute of Technology, Innovation Systems Department, 1220 Vienna (Austria); Clark, John W. [Department of Physics and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Centro de Ciências Matemáticas, Universidade de Madeira, 9000-390 Funchal (Portugal)
2014-06-13
We develop a class of neural networks derived from probabilistic models posed in the form of Bayesian networks. Making biologically and technically plausible assumptions about the nature of the probabilistic models to be represented in the networks, we derive neural networks exhibiting standard dynamics that require no training to determine the synaptic weights, that perform accurate calculation of the mean values of the relevant random variables, that can pool multiple sources of evidence, and that deal appropriately with ambivalent, inconsistent, or contradictory evidence. - Highlights: • High-level neural computations are specified by Bayesian belief networks of random variables. • Probability densities of random variables are encoded in activities of populations of neurons. • Top-down algorithm generates specific neural network implementation of given computation. • Resulting “neural belief networks” process mean values of random variables. • Such networks pool multiple sources of evidence and deal properly with inconsistent evidence.
Convex minorants of random walks and Lévy processes
Abramson, Josh; Ross, Nathan; Bravo, Gerónimo Uribe
2011-01-01
This article provides an overview of recent work on descriptions and properties of the convex minorant of random walks and Lévy processes which summarize and extend the literature on these subjects. The results surveyed include point process descriptions of the convex minorant of random walks and Lévy processes on a fixed finite interval, up to an independent exponential time, and in the infinite horizon case. These descriptions follow from the invariance of these processes under an adequate path transformation. In the case of Brownian motion, we note how further special properties of this process, including time-inversion, imply a sequential description for the convex minorant of the Brownian meander.
Compositions, Random Sums and Continued Random Fractions of Poisson and Fractional Poisson Processes
Orsingher, Enzo
2011-01-01
In this paper we consider the relation between random sums and compositions of different processes. In particular, for independent Poisson processes $N_\\alpha(t)$, $N_\\beta(t)$, $t>0$, we show that $N_\\alpha(N_\\beta(t)) \\overset{\\text{d}}{=} \\sum_{j=1}^{N_\\beta(t)} X_j$, where the $X_j$s are Poisson random variables. We present a series of similar cases, the most general of which is the one in which the outer process is Poisson and the inner one is a nonlinear fractional birth process. We highlight generalisations of these results where the external process is infinitely divisible. A section of the paper concerns compositions of the form $N_\\alpha(\\tau_k^\\nu)$, $\\nu\\in(0,1]$, where $\\tau_k^\\nu$ is the inverse of the fractional Poisson process, and we show how these compositions can be represented as random sums. Furthermore we study compositions of the form $\\Theta(N(t))$, $t>0$, which can be represented as random products. The last section is devoted to studying continued fractions of Cauchy random variables with a Poisson number of levels. We evaluate the exact distribution and derive the scale parameter in terms of ratios of Fibonacci numbers.
THE CONSTRUCTION OF DENUMERABLE q-PROCESSES IN RANDOM ENVIRONMENTS-THE EXISTENCE AND UNIQUENESS
Hu Dihe; Hu Xiaoyu
2008-01-01
The concepts of Markov process in random environment, q-matrix in random environment, and q-process in random environment are introduced. The minimal q-process in random environment is constructed and the necessary and sufficient conditions for the uniqueness of q-process in random environment are given.
The concept of the average stress in the fracture process zone for the search of the crack path
Yu.G. Matvienko
2015-10-01
The concept of the average stress has been employed to propose the maximum average tangential stress (MATS) criterion for predicting the direction of the fracture angle. This criterion states that a crack grows when the maximum average tangential stress in the fracture process zone ahead of the crack tip reaches its critical value, and that the crack growth direction coincides with the direction of the maximum average tangential stress along a constant radius around the crack tip. The tangential stress is described by the singular and nonsingular (T-stress) terms in the Williams series solution. To demonstrate the validity of the proposed MATS criterion, the criterion is directly applied to experiments reported in the literature for the mixed mode I/II crack growth behavior of Guiting limestone. The predicted directions of the fracture angle are consistent with the experimental data. The concept of the average stress has also been employed to predict the surface crack path under rolling-sliding contact loading. The proposed model considers the size and orientation of the initial crack, normal and tangential loading due to rolling-sliding contact, as well as the influence of fluid trapped inside the crack by a hydraulic pressure mechanism. The MATS criterion is directly applied to an equivalent contact model for surface crack growth on a gear tooth flank.
On the joint statistics of stable random processes
Hopcraft, K I [School of Mathematical Sciences, University of Nottingham, NG7 2RD (United Kingdom); Jakeman, E, E-mail: keith.hopcraft@nottingham.ac.uk [School of Electrical and Electronic Engineering, University of Nottingham, NG7 2RD (United Kingdom)
2011-10-28
A utilitarian continuous bi-variate random process whose first-order probability density function is a stable random variable is constructed. Results paralleling some of those familiar from the theory of Gaussian noise are derived. In addition to the joint-probability density for the process, these include fractional moments and structure functions. Although the correlation functions for stable processes other than Gaussian do not exist, we show that there is coherence between values adopted by the process at different times, which identifies a characteristic evolution with time. The distribution of the derivative of the process, and the joint-density function of the value of the process and its derivative measured at the same time are evaluated. These enable properties to be calculated analytically such as level crossing statistics and those related to the random telegraph wave. When the stable process is fractal, the proportion of time it spends at zero is finite and some properties of this quantity are evaluated, an optical interpretation for which is provided. (paper)
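Stable random variables of the kind underlying this process can be sampled with the standard Chambers-Mallows-Stuck construction; the sketch below is a generic generator for the symmetric case, not code from the paper, and the sample size is an illustrative choice.

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck generator for standard symmetric
    alpha-stable random variables, 0 < alpha <= 2."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(u)                      # Cauchy special case
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(1)
x = symmetric_stable(1.0, 100_000, rng)       # Cauchy: heavy tails, median 0
```

For α < 2 the second moment diverges, which is why the abstract works with fractional moments and structure functions instead of correlation functions.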
Generation and monitoring of a discrete stable random process
Hopcraft, K I; Matthews, J O
2002-01-01
A discrete stochastic process with stationary power law distribution is obtained from a death-multiple immigration population model. Emigrations from the population form a random series of events which are monitored by a counting process with finite-dynamic range and response time. It is shown that the power law behaviour of the population is manifested in the intermittent behaviour of the series of events.
Spatial birth-and-death processes in random environment
Fernandez, Roberto; Ferrari, Pablo A.; Guerberoff, Gustavo R.
2004-01-01
We consider birth-and-death processes of objects (animals) defined in ${\\bf Z}^d$ having unit death rates and random birth rates. For animals with uniformly bounded diameter we establish conditions on the rate distribution under which the following holds for almost all realizations of the birth rates: (i) the process is ergodic with at worst power-law time mixing; (ii) the unique invariant measure has exponential decay of (spatial) correlations; (iii) there exists a perfect-simulation algorit...
Imhan Khalil Ibraheem
2017-01-01
Laser tube bending is a new technique of laser material forming used to produce complex and accurate shapes owing to its flexibility and high controllability. Moreover, defects of conventional tube forming such as thinning, wrinkling, springback and ovalization can be avoided in the laser tube bending process because no external force is used. In this paper an analytical investigation has been conducted to analyse the effects of average laser power and laser scanning speed on the laser tube bending process, and the analytical results have been verified experimentally. The model used in this study follows the same trend as the experiment. The results show that the bending angle increases with increasing average laser power and decreases with increasing angular scanning speed.
May random processes explain mating success in leks?
Focardi, S; Tinelli, A
1996-06-01
The object of this paper is to verify whether in specific cases the variance of mating success among lekking males may be due exclusively to a random mechanism, as opposed to the adaptive mechanisms of mate choice which are usually postulated in the literature in the framework of sexual selection theory. In fact, some studies attempted to compare observed distributions of male mating success with a Poisson 'null' distribution based on the conjecture of random mating; the conjecture is usually rejected. In this paper we construct a plausible model (the 'null' hypothesis) for a strictly random non-adaptive pattern of social behaviour of lekking males and females and we perform several simulations for reasonable choices of parameter values. It should be observed that some of the simulations based on our random model lead to a distribution of male mating success which is Poisson-like. However, contrary to predictions, in several simulations a random process of mate choice led to non-Poissonian distributions. Accordingly, the fact that, when performing a statistical test on several sets of field data, we find both cases which are in agreement with a Poisson distribution, or a normal one, and cases which are not, does not allow us to reject the assumption of random male reproductive success. Thus it is legitimate to conjecture that in many cases the inter-individual variability of male mating success might indeed be determined by random processes. If this conjecture were to be confirmed by further studies, the actual significance of sexual selection in the evolution of lekking species should be reassessed, and a novel approach in the analysis of field data would be called for.
Concave Majorants of Random Walks and Related Poisson Processes
Abramson, Josh
2010-01-01
We offer a unified approach to the theory of concave majorants of random walks by providing a path transformation for a walk of finite length that leaves the law of the walk unchanged whilst providing complete information about the concave majorant. This leads to a description of a walk of random geometric length as a Poisson point process of excursions away from its concave majorant, which is then used to find a complete description of the concave majorant for a walk of infinite length. In the case where subsets of increments may have the same arithmetic mean, we investigate three nested compositions that naturally arise from our construction of the concave majorant.
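The concave majorant of a finite walk is itself easy to compute directly: it is the upper convex hull of the points (i, S_i). The sketch below uses the monotone-chain algorithm on an illustrative simple random walk; it is a small numerical illustration, not the paper's path-transformation construction.

```python
import numpy as np

def concave_majorant(s):
    """Least concave majorant of the points (0, s[0]), (1, s[1]), ...,
    computed as the upper convex hull (monotone chain)."""
    hull = []
    for p in enumerate(s):
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop while the turn o->a->p is not a strict clockwise turn
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    # linear interpolation between hull vertices gives the majorant
    return np.interp(np.arange(len(s)), hx, hy)

rng = np.random.default_rng(7)
walk = np.cumsum(rng.choice([-1.0, 1.0], size=200))
maj = concave_majorant(walk)
```

The excursions of the walk below this piecewise-linear majorant are the objects whose lengths and increments the Poisson point process description organises.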
Random Matrices for Information Processing – A Democratic Vision
Cakmak, Burak
The thesis studies three important applications of random matrices to information processing. Our main contribution is that we consider probabilistic systems involving more general random matrix ensembles than the classical ensembles with iid entries, i.e. models that account for statistical dependence between the entries. Specifically, the involved matrices are invariant or fulfill a certain asymptotic freeness condition as their dimensions grow to infinity. Informally speaking, all latent variables contribute to the system model in a democratic fashion – there are no preferred latent variables...
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment E[R^m] is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique and to average 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
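The central idea, that only the extrema of the stress process matter for E[R^m], can be sketched in a few lines. The snippet below simulates a simple AR(1) stress process with illustrative coefficients (not the thesis's calibrated model), extracts the extrema as turning points, and estimates the stress range moment from successive extrema.

```python
import numpy as np

rng = np.random.default_rng(3)
n, phi = 50_000, 0.9
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.standard_normal()   # AR(1) stress history

d = np.diff(x)
turning = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1  # local extrema
extrema = x[turning]
ranges = np.abs(np.diff(extrema))   # ranges between successive extrema
m = 3
E_Rm = np.mean(ranges ** m)         # estimate of the stress range moment
```

A method that synthesises the extrema sequence directly skips the dense samples between turning points, which is where the reported speedups come from.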
Karin KANDANANOND
2010-12-01
The objective of this research is to select the appropriate control charts for detecting a shift in autocorrelated observations. The autocorrelated processes were characterized using AR(1) and IMA(1,1) models for stationary and non-stationary processes respectively. A process model was simulated to obtain the response, the average run length (ARL). An empirical analysis was conducted to quantify the impacts of critical factors, e.g., the AR coefficient (φ), the MA coefficient (θ), the type of chart and the shift size, on the ARL. The results showed that the exponentially weighted moving average (EWMA) chart was the most appropriate control chart to monitor AR(1) and IMA(1,1) processes because of its sensitivity. For the non-stationary case, the ARL at positive θ was significantly higher than the one at negative θ when the shift size was small. If the performance of statistical process control under stationary and non-stationary disturbances is correctly characterized, practitioners will have guidelines for achieving the highest possible performance potential when deploying SPC.
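The ARL response used in this study can be estimated by straightforward Monte Carlo: run the chart on simulated data until it signals and average the run lengths. The sketch below does this for an EWMA chart on a mean-shifted AR(1) process; all parameter values are illustrative and do not reproduce the paper's designed-experiment settings.

```python
import numpy as np

rng = np.random.default_rng(11)
phi, lam, limit, shift = 0.5, 0.2, 0.5, 2.0

def run_length():
    """Steps until the EWMA statistic exceeds the control limit."""
    x, z = 0.0, 0.0
    for t in range(1, 10_000):
        # AR(1) observations with a sustained mean shift
        x = phi * x + rng.standard_normal() + shift * (1 - phi)
        z = lam * x + (1 - lam) * z          # EWMA statistic
        if abs(z) > limit:
            return t
    return 10_000

arl = np.mean([run_length() for _ in range(500)])
```

Repeating this over a grid of φ, θ, chart types and shift sizes is exactly the kind of empirical analysis the abstract describes.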
Multifractal detrended fluctuation analysis of analog random multiplicative processes
Silva, L.B.M.; Vermelho, M.V.D. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil); Lyra, M.L. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)], E-mail: marcelo@if.ufal.br; Viswanathan, G.M. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)
2009-09-15
We investigate non-Gaussian statistical properties of stationary stochastic signals generated by an analog circuit that simulates a random multiplicative process with weak additive noise. The random noises are originated by thermal shot noise and avalanche processes, while the multiplicative process is generated by a fully analog circuit. The resulting signal describes stochastic time series of current interest in several areas such as turbulence, finance, biology and environment, which exhibit power-law distributions. Specifically, we study the correlation properties of the signal by employing a detrended fluctuation analysis and explore its multifractal nature. The singularity spectrum is obtained and analyzed as a function of the control circuit parameter that tunes the asymptotic power-law form of the probability distribution function.
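As background for the method named above, here is a compact sketch of plain (first-order) detrended fluctuation analysis applied to white noise; it is a monofractal DFA illustration under assumed scale choices, not the authors' multifractal (MF-DFA) code.

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order DFA: fluctuation function F(s) from linearly
    detrended segments of the profile, and its log-log slope."""
    y = np.cumsum(x - np.mean(x))            # the profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        msq = []
        for k in range(n_seg):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)     # local linear detrending
            msq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(msq)))
    # scaling exponent: slope of log F(s) versus log s
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(5)
alpha = dfa_exponent(rng.standard_normal(8192), [16, 32, 64, 128, 256])
```

For uncorrelated noise the exponent is close to 0.5; the multifractal variant generalises F(s) to q-th order moments to obtain the singularity spectrum studied in the paper.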
Age-dependent branching processes in random environments
Li, YingQiu; Liu, QuanSheng
2008-01-01
We consider an age-dependent branching process in random environments. The environments are represented by a stationary and ergodic sequence ξ = (ξ0, ξ1, ...) of random variables. Given an environment ξ, the process is a non-homogeneous Galton-Watson process, whose particles in the n-th generation have a life length distribution G(ξn) on R+, and reproduce independently new particles according to a probability law p(ξn) on N. Let Z(t) be the number of particles alive at time t. We first find a characterization of the conditional probability generating function of Z(t) (given the environment ξ) via a functional equation, and obtain a criterion for almost certain extinction of the process by comparing it with an embedded Galton-Watson process. We then get expressions of the conditional mean EξZ(t) and the global mean EZ(t), and show their exponential growth rates by studying a renewal equation in random environments.
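A minimal discrete-time sketch of branching in a random environment may help fix ideas: the environment sequence here is iid over two states (a special case of stationary and ergodic), and given the environment each particle reproduces with a Poisson offspring law whose mean depends on the state. The two offspring means are illustrative, chosen subcritical so extinction occurs almost surely.

```python
import numpy as np

rng = np.random.default_rng(2024)
means = {0: 0.5, 1: 0.8}            # offspring mean in each environment state

z = 1                                # Z_0 = 1
history = [z]
for n in range(60):
    xi = rng.integers(0, 2)          # environment of generation n
    # each of the z particles reproduces independently given xi
    z = rng.poisson(means[xi], size=z).sum() if z > 0 else 0
    history.append(int(z))
```

The age structure of the continuous-time process above refines this skeleton: particles also carry environment-dependent life length distributions G(ξn).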
Efficient biased random bit generation for parallel processing
Slone, D.M.
1994-09-28
A lattice gas automaton was implemented on a massively parallel machine (the BBN TC2000) and a vector supercomputer (the CRAY C90). The automaton models the Burgers equation ρ_t + ρρ_x = νρ_xx in 1 dimension. The lattice gas evolves by advecting and colliding pseudo-particles on a 1-dimensional, periodic grid. The specific rules for colliding particles are stochastic in nature and require the generation of many billions of random numbers to create the random bits necessary for the lattice gas. The goal of the thesis was to speed up the process of generating the random bits and thereby lessen the computational bottleneck of the automaton.
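For orientation, bulk biased-bit generation of the kind the stochastic collision rules require can be sketched as a single vectorised threshold on uniform draws; the bias p and batch size below are illustrative, and the actual TC2000/C90 implementation details are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(9)
p, n = 0.3, 1_000_000

# One vectorised pass produces a large batch of bits with P(bit = 1) = p.
bits = (rng.random(n) < p).astype(np.uint8)
print(bits.mean())   # should be close to p
```

Producing bits in large batches like this, rather than one comparison at a time, is the generic route to removing a random-number bottleneck on vector and parallel hardware.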
U.S. Department of Health & Human Services — A list of a variety of averages for each state or territory as well as the national average, including each quality measure, staffing, fine amount and number of...
Apparent scale correlations in a random multifractal process
Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin
2008-01-01
We discuss various properties of a homogeneous random multifractal process, which are related to the issue of scale correlations. By design, the process has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several puzzling empirical details...
Probability and Random Processes With Applications to Signal Processing and Communications
Miller, Scott
2012-01-01
Miller and Childers have focused on creating a clear presentation of foundational concepts with specific applications to signal processing and communications, clearly the two areas of most interest to students and instructors in this course. It is aimed at graduate students as well as practicing engineers, and includes unique chapters on narrowband random processes and simulation techniques. The appendices provide a refresher in such areas as linear algebra, set theory, random variables, and more. Probability and Random Processes also includes applications in digital communications, informati
T. Wang; B. Pustal; M. Abondano; T. Grimmig; A. Bührig-Polaczek; M. Wu; A. Ludwig
2005-01-01
The cooling channel process is a rheocasting method by which a pre-material with globular microstructure can be produced to fit the thixocasting process. A three-phase model based on a volume averaging approach is proposed to simulate the cooling channel process of the A356 aluminum alloy. The three phases are liquid, solid and air respectively, and are treated as separate and interacting continua sharing a single pressure field. The mass, momentum and enthalpy transport equations for each phase are solved. The developed model can predict the evolution of the liquid, solid and air fractions as well as the distribution of grain density and grain size. The effect of pouring temperature on the grain density, grain size and solid fraction is analyzed in detail.
Asymptotic results for bifurcating random coefficient autoregressive processes
Blandin, Vassili
2012-01-01
The purpose of this paper is to study the asymptotic behavior of the weighted least square estimators of the unknown parameters of random coefficient bifurcating autoregressive processes. Under suitable assumptions on the immigration and the inheritance, we establish the almost sure convergence of our estimators, as well as a quadratic strong law and central limit theorems. Our study mostly relies on limit theorems for vector-valued martingales.
The construction of Markov processes in random environments and the equivalence theorems
无
2004-01-01
In sec. 1, we introduce several basic concepts such as the random transition function, the p-m process and the Markov process in random environment, and give some examples to construct a random transition function from a non-homogeneous density function. In sec. 2, we construct the Markov process in random environment and the skew product Markov process by the p-m process, and investigate the relations between the Markov process in random environment, the original process, the environment process and the skew product process. In sec. 3, we give several equivalence theorems on Markov processes in random environment.
Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.
1987-01-01
A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.
The decimation process in random k-SAT
Coja-Oghlan, Amin
2011-01-01
Let F be a uniformly distributed random k-SAT formula with n variables and m clauses. Non-rigorous statistical mechanics ideas have inspired a message passing algorithm called Belief Propagation Guided Decimation for finding satisfying assignments of F. This algorithm can be viewed as an attempt at implementing a certain thought experiment that we call the Decimation Process. In this paper we identify a variety of phase transitions in the decimation process and link these phase transitions to the performance of the algorithm.
The existence and uniqueness of q-process in random environment
HU; Dihe
2004-01-01
We introduce some basic concepts such as the random (sub-)transition function, the q-function in random environment and the q-process in random environment, together with some basic lemmas. For any continuous q-function in random environment, we prove that a q-process in random environment always exists, that any q-process in random environment satisfies the random Kolmogorov backward equation, and that the minimal q-process in random environment always exists. When q is a continuous and conservative q-function in random environment, the necessary and sufficient conditions for the uniqueness of the q-process in random environment are given. Finally the special cases, homogeneous random transition functions and homogeneous q-processes in random environments, are considered.
Fisher, B.; O'Dell, C.; Mandrake, L.
2013-12-01
The Atmospheric CO2 Observations from Space (ACOS) group has been producing and distributing total column CO2 (XCO2) products using JAXA/NIES/MOE Greenhouse Gases Observing SATellite (GOSAT) spectra and has accumulated almost 4 years of data with version 3.3. While the ACOS team strives to only process soundings that the retrieval algorithm can handle well, we are conservative in what we reject from processing. Consequently, some soundings get processed which do not yield reliable results. We have developed post-processing filters based on comparisons to a few truth proxies (model means, TCCON, and the southern hemisphere approximation) to flag the less reliable soundings. Here we compare regionally (using TRANSCOM spatial bins) and monthly averaged XCO2 that have been filtered by our normal method (described in the ACOS Level 2 Data User's Guide) and a newer method, which we have named warn levels. Mean XCO2 differences are quantified spatially and temporally to inform possible biases in carbon cycle studies that could potentially be introduced by the application of differing post-processing screening methodologies to the ACOS products.
Break-even Analyses for Random Production and Demand Processes
Marcus Schweitzer
2002-01-01
Break-even analyses are often used as controlling instruments. Typically, they are applied to support decision processes or to gain information for the control of profits and sales. Firstly, the study gives an overview of the basic accounting systems. Secondly, the study shows possible ways of performing break-even analyses for a single-stage, make-to-order production in the case of random production and demand structures. To model these structures, queueing systems are employed. As a general result, we see that break-even analyses must always be performed taking into account an existing planning system. Under practical aspects, GI/G/1 systems turn out to map complex real situations realistically. From the examples given it can be concluded that one achieves different results compared with using a deterministic model even in the case of a simple, random effects approach. In particular, it is shown that stochastic modelling in general is helpful in avoiding incorrect decisions.
Some Minorants and Majorants of Random Walks and Levy Processes
Abramson, Joshua Simon
This thesis consists of four chapters, all relating to some sort of minorant or majorant of random walks or Levy processes. In Chapter 1 we provide an overview of recent work on descriptions and properties of the convex minorant of random walks and Levy processes as detailed in Chapter 2, [72] and [73]. This work rejuvenated the field of minorants, and led to the work in all the subsequent chapters. The results surveyed include point process descriptions of the convex minorant of random walks and Levy processes on a fixed finite interval, up to an independent exponential time, and in the infinite horizon case. These descriptions follow from the invariance of these processes under an adequate path transformation. In the case of Brownian motion, we note how further special properties of this process, including time-inversion, imply a sequential description for the convex minorant of the Brownian meander. This chapter is based on [3], which was co-written with Jim Pitman, Nathan Ross and Geronimo Uribe Bravo. Chapter 1 serves as a long introduction to Chapter 2, in which we offer a unified approach to the theory of concave majorants of random walks. The reasons for the switch from convex minorants to concave majorants are discussed in Section 1.1, but the results are all equivalent. This unified theory is arrived at by providing a path transformation for a walk of finite length that leaves the law of the walk unchanged whilst providing complete information about the concave majorant - the path transformation is different from the one discussed in Chapter 1, but this is necessary to deal with a more general case than the standard one as done in Section 2.6. The path transformation of Chapter 1, which is discussed in detail in Section 2.8, is more relevant to the limiting results for Levy processes that are of interest in Chapter 1. Our results lead to a description of a walk of random geometric length as a Poisson point process of excursions away from its concave
Periodically correlated random processes: Application in early diagnostics of mechanical systems
Javorskyj, I.; Kravets, I.; Matsko, I.; Yuzefovych, R.
2017-01-01
The covariance and spectral characteristics of periodically correlated random processes (PCRP) are used to describe the state of rotary mechanical systems and in their fault detection. Methods for estimating the mean function, covariance function, instantaneous spectral density and their Fourier coefficients for this class of non-stationary random processes from experimental data are considered, namely the synchronous averaging, component, least squares and linear filtration methods. The first- and second-order periodicity detection methods are used for vibration signal analysis. A method for mechanical system fault identification and classification based on a harmonic series representation is developed. Examples of fault detection in rolling/sliding bearings and gearboxes are given.
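Synchronous averaging, the first estimation method listed, can be sketched in a few lines: average a vibration-like signal over its known period so that the deterministic periodic component survives while zero-mean noise cancels. The sinusoidal signal model and period below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
period, n_periods = 64, 400
t = np.arange(period)
template = np.sin(2 * np.pi * t / period)     # true periodic mean function
signal = np.tile(template, n_periods) + rng.standard_normal(period * n_periods)

# fold the record into periods and average across them
sync_avg = signal.reshape(n_periods, period).mean(axis=0)
err = np.max(np.abs(sync_avg - template))
```

The noise in the averaged cycle shrinks like 1/sqrt(n_periods), which is what makes the method effective for early fault diagnostics.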
Bundesmann, C., E-mail: carsten.bundesmann@iom-leipzig.de; Feder, R.; Gerlach, J.W.; Neumann, H.
2014-01-31
Ion beam sputter deposition is used to grow several sets of Ag films under systematic variation of ion beam parameters, such as ion species and ion energy, and geometrical parameters, such as ion incidence angle and polar emission angle. The films are characterized concerning their thickness by profilometry, their electrical properties by 4-point-probe-measurements, their optical properties by spectroscopic ellipsometry, and their average grain sizes by X-ray diffraction. Systematic influences of the growth parameters on film properties are revealed. The film thicknesses show a cosine-like angular distribution. The electrical resistivity increases for all sets with increasing emission angle and is found to be considerably smaller for Ag films grown by sputtering with Xe ions than for the Ag films grown by sputtering with Ar ions. Increasing the ion energy or the ion incidence angle also increases the electrical resistivity. The optical properties, which are the result of free charge carrier absorption, follow the same trends. The observed trends can be partly assigned to changes in the average grain size, which are tentatively attributed to different energetic and angular distributions of the sputtered and back-scattered particles. - Highlights: • Ion beam sputter deposition under systematic variation of process parameters. • Film characterization: thickness, electrical, optical and structural properties. • Electrical resistivity changes considerably with ion species and polar emission angle. • Electrical and optical data reveal a strong correlation with grain sizes. • Change of film properties related to changing properties of film-forming particles.
Boroushaki, Soheil; Malczewski, Jacek
2008-04-01
This paper focuses on the integration of GIS and an extension of the analytical hierarchy process (AHP) using quantifier-guided ordered weighted averaging (OWA) procedure. AHP_OWA is a multicriteria combination operator. The nature of the AHP_OWA depends on some parameters, which are expressed by means of fuzzy linguistic quantifiers. By changing the linguistic terms, AHP_OWA can generate a wide range of decision strategies. We propose a GIS-multicriteria evaluation (MCE) system through implementation of AHP_OWA within ArcGIS, capable of integrating linguistic labels within conventional AHP for spatial decision making. We suggest that the proposed GIS-MCE would simplify the definition of decision strategies and facilitate an exploratory analysis of multiple criteria by incorporating qualitative information within the analysis.
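The quantifier-guided OWA combination can be illustrated with Yager's regular increasing monotone (RIM) quantifier Q(r) = r^alpha, from which the order weights are obtained as differences Q(i/n) - Q((i-1)/n). The snippet below is a generic sketch of this standard construction (the scores and alpha values are invented), not the AHP_OWA implementation of the paper.

```python
import numpy as np

def rim_quantifier(r, alpha):
    # Regular increasing monotone (RIM) quantifier Q(r) = r**alpha;
    # alpha < 1 leans "or-like" (at least one), alpha > 1 leans "and-like" (all).
    return r ** alpha

def owa_weights(n, alpha):
    # Order weights w_i = Q(i/n) - Q((i-1)/n); they always sum to 1.
    i = np.arange(1, n + 1)
    return rim_quantifier(i / n, alpha) - rim_quantifier((i - 1) / n, alpha)

def owa(scores, alpha):
    # Ordered weighted averaging: sort criterion scores descending,
    # then take the weighted sum with the quantifier-derived weights.
    s = np.sort(np.asarray(scores, dtype=float))[::-1]
    w = owa_weights(len(s), alpha)
    return float(np.dot(w, s))

scores = [0.9, 0.5, 0.2]
mean_like = owa(scores, alpha=1.0)   # alpha = 1 reduces to the arithmetic mean
and_like = owa(scores, alpha=10.0)   # large alpha approaches the minimum
```

Varying alpha thus sweeps the decision strategy from fully disjunctive to fully conjunctive, which is the range of behaviours the linguistic quantifiers control.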
THE CONSTRUCTION OF DENUMERABLE q-PROCESSES IN RANDOM ENVIRONMENTS SATISFYING (F) OR (B)
Hu Dihe; Hu Xiaoyu
2008-01-01
This article is a continuation of [9]. Based on the discussion of random Kolmogorov forward (backward) equations, for any given q-matrix in random environment, Q(θ) = (q(θ; x, y), x, y ∈ X), an infinite class of q-processes in random environments satisfying the random Kolmogorov forward (backward) equation is constructed. Moreover, under some conditions, all the q-processes in random environments satisfying the random Kolmogorov forward (backward) equation are constructed.
Signal processing technique for randomly discontinuous spectra HF radar waveforms
张东坡; 刘兴钊
2004-01-01
A major problem with all high frequency (HF) radars is the relatively poor range resolution available, due to many interference sources. To avoid interference in the frequency domain while operating with wide bandwidth, the randomly discontinuous spectra (RDS) signal is employed. However, matched filtering of the reflected echo then produces high range sidelobes, which makes target detection much more difficult. A new signal processing technique, radically different from the conventional one, is introduced to lower the range sidelobes. The method suppresses the self-clutter of the radar range ambiguity function (AF) by mismatched filtering, and an effective algorithm is adopted to solve for the filter coefficients. Simulation results show that the peak sidelobe level can be reduced to -30 dB and the achievable system bandwidth is about 400 kHz. The technique is adaptable to practical radar systems and applicable to other real-time signal processing.
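Least-squares mismatched filtering of the kind described can be sketched as follows: choose filter coefficients minimizing the squared error between the filter output (the convolution with the code) and an ideal delta-like response, trading a small mainlobe loss for much lower sidelobes. The toy code, filter length and seed below are assumptions for illustration; the paper's algorithm and RDS waveform are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=32)   # toy binary code standing in for the waveform

def convolution_matrix(c, m):
    # A @ h is the full convolution of the code with an m-tap filter h.
    n = len(c)
    A = np.zeros((n + m - 1, m))
    for j in range(m):
        A[j:j + n, j] = c
    return A

m = 3 * len(code)                 # a longer-than-matched filter gives extra freedom
A = convolution_matrix(code, m)
d = np.zeros(A.shape[0])
d[A.shape[0] // 2] = 1.0          # desired response: unit mainlobe, zero sidelobes
h, *_ = np.linalg.lstsq(A, d, rcond=None)

matched = np.convolve(code, code[::-1])   # matched-filter (autocorrelation) response
mismatched = A @ h                        # least-squares mismatched-filter response

def peak_sidelobe_db(r):
    r = np.abs(r)
    side = np.delete(r, r.argmax()).max()
    return 20 * np.log10(side / r.max())

psl_matched = peak_sidelobe_db(matched)
psl_mismatched = peak_sidelobe_db(mismatched)
```

The least-squares solution pushes the peak sidelobe well below that of the matched filter; practical designs add constraints (e.g. fixed mainlobe gain) on top of this basic formulation.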
Nonstationary random acoustic and electromagnetic fields as wave diffusion processes
Arnaut, L R
2007-01-01
We investigate the effects of relatively rapid variations of the boundaries of an overmoded cavity on the stochastic properties of its interior acoustic or electromagnetic field. For quasi-static variations, this field can be represented as an ideal incoherent and statistically homogeneous isotropic random scalar or vector field, respectively. A physical model is constructed showing that the field dynamics can be characterized as a generalized diffusion process. The Langevin-Itô and Fokker-Planck equations are derived and their associated statistics and distributions for the complex analytic field, its magnitude and energy density are computed. The energy diffusion parameter is found to be proportional to the square of the ratio of the standard deviation of the source field to the characteristic time constant of the dynamic process, but is independent of the initial energy density, to first order. The energy drift vanishes in the asymptotic limit. The time-energy probability distribution is in general n...
Online games: a novel approach to explore how partial information influences random search processes
Martinez-Garcia, Ricardo; Lopez, Cristobal
2016-01-01
Many natural processes rely on optimizing the success ratio of an underlying search process. We investigate how fluxes of information between individuals and their environment modify the statistical properties of human search strategies. Using an online game, searchers have to find a hidden target whose location is hinted at by a surrounding neighborhood. Searches are optimal for intermediate neighborhood sizes; smaller areas are harder to locate, while larger ones obscure the location of the target within them. Although the neighborhood size that minimizes average search times depends on neighborhood geometry, we develop a theoretical framework to predict this value in a general setup. Furthermore, a priori access to information about the landscape turns search strategies into self-adaptive processes in which the trajectory on the board evolves to show a well-defined characteristic jumping length. A family of random-walk models is developed to investigate the non-Markovian nature of the process.
Ma, Jinhui; Raina, Parminder; Beyene, Joseph; Thabane, Lehana
2013-01-23
The objective of this simulation study is to compare the accuracy and efficiency of population-averaged (i.e. generalized estimating equations (GEE)) and cluster-specific (i.e. random-effects logistic regression (RELR)) models for analyzing data from cluster randomized trials (CRTs) with missing binary responses. In this simulation study, clustered responses were generated from a beta-binomial distribution. The number of clusters per trial arm, the number of subjects per cluster, the intra-cluster correlation coefficient, and the percentage of missing data were allowed to vary. Under the assumption of covariate-dependent missingness, missing outcomes were handled by complete case analysis, standard multiple imputation (MI) and within-cluster MI strategies. Data were analyzed using GEE and RELR. Performance of the methods was assessed using standardized bias, empirical standard error, root mean squared error (RMSE), and coverage probability. GEE performs well on all four measures, provided the downward bias of the standard error (when the number of clusters per arm is small) is adjusted appropriately, under the following scenarios: complete case analysis for CRTs with a small amount of missing data; standard MI for CRTs with variance inflation factor (VIF) < 3; within-cluster MI for CRTs with VIF ≥ 3 and cluster size > 50. RELR performs well only when a small amount of data was missing and complete case analysis was applied. GEE performs well as long as appropriate missing data strategies are adopted based on the design of CRTs and the percentage of missing data. In contrast, RELR does not perform well when either the standard or the within-cluster MI strategy is applied prior to the analysis.
Random spatial processes and geostatistical models for soil variables
Lark, R. M.
2009-04-01
Geostatistical models of soil variation have been used to considerable effect to facilitate efficient and powerful prediction of soil properties at unsampled sites or over partially sampled regions. Geostatistical models can also be used to investigate the scaling behaviour of soil process models, to design sampling strategies and to account for spatial dependence in the random effects of linear mixed models for spatial variables. However, most geostatistical models (variograms) are selected for reasons of mathematical convenience (in particular, to ensure positive definiteness of the corresponding variables). They assume some underlying spatial mathematical operator which may give a good description of observed variation of the soil, but which may not relate in any clear way to the processes that we know give rise to that observed variation in the real world. In this paper I shall argue that soil scientists should pay closer attention to the underlying operators in geostatistical models, with a view to identifying, wherever possible, operators that reflect our knowledge of processes in the soil. I shall illustrate how this can be done in the case of two problems. The first exemplar problem is the definition of operators to represent statistically processes in which the soil landscape is divided into discrete domains. This may occur at disparate scales, from the landscape (outcrops, catchments, fields with different land use) to the soil core (aggregates, rhizospheres). The operators that underlie standard geostatistical models of soil variation typically describe continuous variation, and so do not offer any way to incorporate information on processes which occur in discrete domains. I shall present the Poisson Voronoi tessellation as an alternative spatial operator, examine its corresponding variogram, and apply these to some real data. The second exemplar problem arises from different operators that are equifinal with respect to the variograms of the
Eunice Kazue Kano
2015-03-01
Average bioequivalence of two 500 mg levofloxacin formulations available in Brazil, Tavanic® (Sanofi-Aventis Farmacêutica Ltda, Brazil; reference product) and Levaquin® (Janssen-Cilag Farmacêutica Ltda, Brazil; test product), was evaluated by means of a randomized, open-label, 2-way crossover study performed in 26 healthy Brazilian volunteers under fasting conditions. A single dose of 500 mg levofloxacin tablets was orally administered, and blood samples were collected over a period of 48 hours. Levofloxacin plasma concentrations were determined using a validated HPLC method. The pharmacokinetic parameters Cmax, Tmax, Kel, T1/2el, AUC0-t and AUC0-inf were calculated using noncompartmental analysis. Bioequivalence was determined by calculating 90% confidence intervals (90% CI) for the ratio of Cmax, AUC0-t and AUC0-inf values for test and reference products, using logarithmically transformed data. Tolerability was assessed by monitoring vital signs and laboratory results, by subject interviews and by spontaneous reporting of adverse events. The 90% CIs for Cmax, AUC0-t and AUC0-inf were 92.1%-108.2%, 90.7%-98.0%, and 94.8%-100.0%, respectively. Observed adverse events were nausea and headache. It was concluded that Tavanic® and Levaquin® are bioequivalent, since the 90% CIs lie within the 80%-125% interval proposed by regulatory agencies.
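The average-bioequivalence decision rule used here (90% CI of the geometric mean ratio within 80-125%) can be sketched on hypothetical data. The example below uses simulated paired log-Cmax values and a large-sample normal quantile in place of the t quantile and the full crossover ANOVA, so it is only an illustration of the rule, not of the study's actual analysis.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)

# Hypothetical paired log-Cmax values for 26 subjects (test vs. reference);
# the true geometric mean ratio is set to 1.0 here.
n = 26
log_ref = rng.normal(np.log(5.0), 0.25, size=n)
log_test = log_ref + rng.normal(0.0, 0.15, size=n)   # within-subject variation

d = log_test - log_ref
se = d.std(ddof=1) / np.sqrt(n)
z = NormalDist().inv_cdf(0.95)       # large-sample stand-in for the t quantile
lo = float(np.exp(d.mean() - z * se))
hi = float(np.exp(d.mean() + z * se))

# Average bioequivalence: the 90% CI for the geometric mean ratio
# must lie entirely within 0.80-1.25.
bioequivalent = (lo > 0.80) and (hi < 1.25)
```

Working on the log scale and back-transforming is what turns the CI for a mean difference into a CI for a ratio, which is why the regulatory limits are the asymmetric pair 0.80 and 1.25.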
5th Seminar on Stochastic Processes, Random Fields and Applications
Russo, Francesco; Dozzi, Marco
2008-01-01
This volume contains twenty-eight refereed research or review papers presented at the 5th Seminar on Stochastic Processes, Random Fields and Applications, which took place at the Centro Stefano Franscini (Monte Verità) in Ascona, Switzerland, from May 30 to June 3, 2005. The seminar focused mainly on stochastic partial differential equations, random dynamical systems, infinite-dimensional analysis, approximation problems, and financial engineering. The book will be a valuable resource for researchers in stochastic analysis and professionals interested in stochastic methods in finance. Contributors: Y. Asai, J.-P. Aubin, C. Becker, M. Benaïm, H. Bessaih, S. Biagini, S. Bonaccorsi, N. Bouleau, N. Champagnat, G. Da Prato, R. Ferrière, F. Flandoli, P. Guasoni, V.B. Hallulli, D. Khoshnevisan, T. Komorowski, R. Léandre, P. Lescot, H. Lisei, J.A. López-Mimbela, V. Mandrekar, S. Méléard, A. Millet, H. Nagai, A.D. Neate, V. Orlovius, M. Pratelli, N. Privault, O. Raimond, M. Röckner, B. Rüdiger, W.J. Runggaldi...
Meadows, Caroline C; Gable, Philip A; Lohse, Keith R; Miller, Matthew W
2016-07-01
From a neurobiological and motivational perspective, the feedback-related negativity (FRN) and reward positivity (RewP) event-related potential (ERP) components should increase with reward magnitude (reward associated with valence (success/failure) feedback). To test this hypothesis, we recorded participants' electroencephalograms while presenting them with potential monetary rewards ($0.00-$4.96) pre-trial for each trial of a reaction time task and presenting them with valence feedback post-trial. Averaged ERPs time-locked to valence feedback were extracted, and results revealed a valence by magnitude interaction for neural activity in the FRN/RewP time window. This interaction was driven by magnitude affecting RewP, but not FRN, amplitude. Moreover, single trial ERP analyses revealed a reliable correlation between magnitude and RewP, but not FRN, amplitude. Finally, P3b and late positive potential (LPP) amplitudes were affected by magnitude. Results partly support the neurobiological (dopamine) account of the FRN/RewP and suggest motivation affects feedback processing, as indicated by multiple ERP components.
Emergence of typical entanglement in two-party random processes
Dahlsten, O C O; Plenio, M B
2007-01-01
We investigate the entanglement within a system undergoing a random, local process. We find that there is initially a phase of very fast generation and spread of entanglement. At the end of this phase the entanglement is typically maximal. In previous work we proved that the maximal entanglement is reached to a fixed arbitrary accuracy within $O(N^3)$ steps, where $N$ is the total number of qubits. Here we provide a detailed and more pedagogical proof. We demonstrate that one can use the so-called stabilizer gates to simulate this process efficiently on a classical computer. Furthermore, we discuss three ways of identifying the transition from the phase of rapid spread of entanglement to the stationary phase: (i) the time when saturation of the maximal entanglement is achieved, (ii) the cut-off moment, when the entanglement probability distribution is practically stationary, and (iii) the moment when block entanglement exhibits volume scaling. We furthermore investigate the mixed state and multipartite sett...
A limit process for partial match queries in random quadtrees
Broutin, Nicolas; Sulzbach, Henning
2012-01-01
We consider the problem of recovering items matching a partially specified pattern in multidimensional trees (quad trees and k-d trees). We assume the classical model where the data consist of independent and uniform points in the unit square. For this model, in a structure on $n$ points, it is known that the complexity, measured as the number of nodes $C_n(\xi)$ to visit in order to report the items matching a random query $\xi$, independent and uniformly distributed on $[0,1]$, satisfies $E[C_n(\xi)] \sim \kappa n^{\beta}$, where $\kappa$ and $\beta$ are explicit constants. We develop an approach based on the analysis of the cost $C_n(s)$ of any fixed query $s \in [0,1]$, and give precise estimates for the variance and limit distribution. Moreover, a functional limit law for a rescaled version of the process $(C_n(s))_{0\le s\le 1}$ is derived in the space of càdlàg functions with the Skorokhod topology. For the worst case complexity $\max_{s\in [0,1]} C_n(s)$ the order of the expectation as well as a...
PARTIAL DIFFERENTIAL EQUATIONS FOR DENSITIES OF RANDOM PROCESSES,
Descriptors: partial differential equations; stochastic processes; statistical functions; statistical processes; probability; numerical methods and procedures.
Process-level quenched large deviations for random walk in random environment
Rassoul-Agha, Firas
2009-01-01
We consider a bounded step size random walk in an ergodic random environment with some ellipticity, on an integer lattice of arbitrary dimension. We prove a level 3 large deviation principle, under almost every environment, with rate function related to a relative entropy.
On a zero-one law for the norm process of transient random walk
Matsumoto, Ayako
2009-01-01
A zero-one law of Engelbert--Schmidt type is proven for the norm process of a transient random walk. An invariance principle for random walk local times and a limit version of Jeulin's lemma play key roles.
A Note on Multitype Branching Process with Bounded Immigration in Random Environment
Hua Ming WANG
2013-01-01
In this paper, we study the total number of progeny, W, before regeneration of a multitype branching process with immigration in random environment. We show that the tail probability of |W| is of order t^{-κ} as t → ∞, with κ some constant. As an application, we prove a stable law for (L-1) random walk in random environment, generalizing the stable law for the nearest-neighbour random walk in random environment (see "Kesten, Kozlov, Spitzer: A limit law for random walk in a random environment. Compositio Math., 30, 145-168 (1975)").
Igos, Elorri; Benetto, Enrico; Venditti, Silvia; Köhler, Christian; Cornelissen, Alex
2013-01-01
Pharmaceuticals are normally only barely removed by conventional wastewater treatment. Advanced technologies, applied as a post-treatment, could prevent these pollutants from reaching the environment and could be included in a centralized treatment plant or, alternatively, at the primary point source, e.g. hospitals. In this study, the environmental impacts of different options, as a function of several advanced treatments as well as the centralized/decentralized implementation options, have been evaluated using the Life Cycle Assessment (LCA) methodology. As shown in previous publications, the characterization of the toxicity of pharmaceuticals within LCA suffers from high uncertainties. In our study, LCA was therefore only used to quantify the generated impacts (electricity, chemicals, etc.) of the different treatment scenarios. These impacts are then weighted by the average removal rate of pharmaceuticals using a new eco-efficiency indicator, EFI. This new way of comparing the scenarios shows significant advantages of upgrading a centralized plant with ozonation as the post-treatment. The decentralized treatment option reveals no significant improvement in the avoided environmental impact, due to the comparatively small pollutant load coming from the hospital and the uncertainties in the average removal of the decentralized scenarios. When comparing the post-treatment technologies, UV radiation has a lower performance than both ozonation and activated carbon adsorption.
Advances in Disordered Systems, Random Processes and Some Applications
Contucci, Pierluigi; Giardinà, Cristian
2016-12-01
Preface; 1. Topological field theory of data: mining data beyond complex networks? Mario Rasetti and Emanuela Merelli; 2. A random walk in diffusion phenomena and statistical mechanics Elena Agliari; 3. Legendre structures in statistical mechanics for ordered and disordered systems Francesco Guerra; 4. Extrema of log-correlated random variables: principles and examples Louis-Pierre Arguin; 5. Scaling limits, Brownian loops and conformal fields Federico Camia; 6. The Brownian web, the Brownian net, and their universality Emmanuel Schertzer, Rongfeng Sun and Jan M. Swart; Index.
Siegel, Irving H.
The arithmetic processes of aggregation and averaging are basic to quantitative investigations of employment, unemployment, and related concepts. In explaining these concepts, this report stresses need for accuracy and consistency in measurements, and describes tools for analyzing alternative measures. (BH)
Variational Hidden Conditional Random Fields with Coupled Dirichlet Process Mixtures
Bousmalis, K.; Zafeiriou, S.; Morency, L.P.; Pantic, Maja; Ghahramani, Z.
2013-01-01
Hidden Conditional Random Fields (HCRFs) are discriminative latent variable models which have been shown to successfully learn the hidden structure of a given classification problem. An infinite HCRF is an HCRF with a countably infinite number of hidden states, which rids us not only of the necessit
On the Exceedance Random Measures for Stationary Processes.
1987-11-01
Ivan D. Lobanov
2016-06-01
In this article, the problem of the number of spikes (level crossings) of a stationary narrowband Gaussian process is considered. The process is specified by an exponential-cosine autocorrelation function. The problem had been solved earlier by Rice in terms of the joint probability density of the process and its time derivative, but in this article we obtain the solution using the probability density functional (obtained by Amiantov) together with an expansion of the canonical stochastic process. The optimal canonical expansion of a narrowband stochastic process, based on the work of Filimonov and Denisov, is also considered to solve the problem. Together these tools allow an exact analytical solution of the problem on spikes of a stationary narrowband Gaussian process. The obtained formulae could be used to solve, for example, problems about the residual lifetime of radio-technical products, about breaking sea waves, and others.
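Rice's classical result referenced above gives the expected upcrossing rate of a level u for a smooth, zero-mean, unit-variance stationary Gaussian process as nu(u) = (1/(2*pi)) * sqrt(-rho''(0)) * exp(-u**2/2). The sketch below checks this by simulation, using a Gaussian autocorrelation rather than the exponential-cosine one of the article, since the latter is not mean-square differentiable and has no finite crossing rate; all numerical choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Zero-mean, unit-variance stationary Gaussian process with the smooth
# autocorrelation rho(tau) = exp(-tau**2 / 2), built by smoothing white noise
# with a Gaussian kernel.
dt = 0.01
kernel_t = np.arange(-5.0, 5.0, dt)
g = np.exp(-kernel_t ** 2)
w = rng.normal(size=300_000)
x = np.convolve(w, g, mode="valid") * np.sqrt(dt)
x /= (np.pi / 2) ** 0.25          # normalise so that Var(x) = 1

u = 1.0
T = (len(x) - 1) * dt
upcrossings = int(np.count_nonzero((x[:-1] < u) & (x[1:] >= u)))

# Rice's formula with rho''(0) = -1: nu(u) = exp(-u**2/2) / (2*pi).
rice_rate = float(np.exp(-u ** 2 / 2) / (2 * np.pi))
empirical_rate = upcrossings / T
```

The empirical rate should agree with Rice's formula up to Monte Carlo error; repeating the experiment with a non-differentiable autocorrelation instead shows the count growing as the sampling step shrinks, which is the subtlety the article's exact treatment addresses.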
Jausovec, Norbert
2000-01-01
Studied differences in cognitive processes related to creativity and intelligence using EEG coherence and power measures in the lower and upper alpha bands. Results of 2 experiments involving 49 and 48 right-handed student teachers suggest that creativity and intelligence are different abilities that also differ in the neurological activity…
Dodonov, V V
1998-01-01
We consider the relaxation of a single mode of the quantized field in the presence of one- and two-photon absorption and emission processes. Exact stationary solutions of the master equation for the diagonal elements of the density matrix in the Fock basis are found in the case of completely saturated two-photon emission. If two-photon processes dominate over single-photon ones, the stationary state is a mixture of phase-averaged even and odd coherent states.
Physical Theories with Average Symmetry
Alamino, Roberto C.
2013-01-01
This Letter probes the existence of physical laws invariant only on average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise, and average symmetry is introduced by considering functions which are invariant only on average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's theorem with dissipative currents. The relation of this with possible violat...
David G. Gadian
2011-10-01
A common feature of many magnetic resonance image (MRI) data processing methods is the voxel-by-voxel (a voxel is a volume element) manner in which the processing is performed. In general, however, MRI data are expected to exhibit some level of spatial correlation, rendering an independent-voxels treatment inefficient in its use of the data. Bayesian random effect models are expected to be more efficient owing to their information-borrowing behaviour. To illustrate the Bayesian random effects approach, this paper outlines a Markov chain Monte Carlo (MCMC) analysis of a perfusion MRI dataset, implemented in R using the BRugs package. BRugs provides an interface to WinBUGS and its GeoBUGS add-on. WinBUGS is a widely used programme for performing MCMC analyses, with a focus on Bayesian random effect models. A simultaneous modeling of both voxels (restricted to a region of interest) and multiple subjects is demonstrated. Despite the low signal-to-noise ratio in the magnetic resonance signal intensity data, useful model signal intensity profiles are obtained. The merits of random effects modeling are discussed in comparison with the alternative approaches based on region-of-interest averaging and repeated independent-voxels analysis. This paper focuses on perfusion MRI for the purpose of illustration, the main proposition being that random effects modeling is expected to be beneficial in many other MRI applications in which the signal-to-noise ratio is a limiting factor.
Law of large numbers for a transient random walk driven by a symmetric exclusion process
Avena, Luca; Völlering, Florian
2011-01-01
We consider a one-dimensional simple symmetric exclusion process in equilibrium, constituting a dynamic random environment for a nearest-neighbor random walk that on occupied/vacant sites has two different local drifts to the right. We prove that the random walk has an a.s. positive constant global speed by using a regeneration-time argument. This result is part of an ongoing project aiming to analyze the behavior of random walks in slowly mixing dynamic random environments. A brief discussion on this topic is presented.
Do MENA stock market returns follow a random walk process?
Salim Lahmiri
2013-01-01
In this research, three variance ratio tests: the standard variance ratio test, the wild bootstrap multiple variance ratio test, and the non-parametric rank scores test, are adopted to test the random walk hypothesis (RWH) of stock markets in the Middle East and North Africa (MENA) region using the most recent data from January 2010 to September 2012. The empirical results obtained by all three econometric tests show that the RWH is strongly rejected for Kuwait, Tunisia, and Morocco. However, the standard variance ratio test and the wild bootstrap multiple variance ratio test reject the null hypothesis of a random walk in Jordan and KSA, while the non-parametric rank scores test does not. We may conclude that the Jordan and KSA stock markets are weak-form efficient. In sum, the empirical results suggest that return series in Kuwait, Tunisia, and Morocco are predictable. In other words, predictable patterns that can be exploited in these markets still exist. Therefore, investors may make profits in such less efficient markets.
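A minimal homoskedastic Lo-MacKinlay variance ratio test, the first of the three tests named above, can be sketched as follows: under the random walk hypothesis, the variance of q-period returns divided by q times the variance of 1-period returns is close to one. The implementation below is deliberately simplified (no bias correction, no wild bootstrap, simulated rather than MENA data).

```python
import numpy as np
from statistics import NormalDist

def variance_ratio_test(returns, q):
    # Lo-MacKinlay variance ratio test (homoskedastic form) of the
    # random walk hypothesis for a return series.
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.mean((r - mu) ** 2)
    rq = np.convolve(r, np.ones(q), mode="valid")   # overlapping q-period returns
    varq = np.mean((rq - q * mu) ** 2) / q
    vr = varq / var1
    # Asymptotic standard normal statistic under the null.
    z = (vr - 1.0) * np.sqrt(n) / np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return vr, z, p

rng = np.random.default_rng(5)
iid_returns = rng.normal(0, 0.01, size=2000)    # random-walk increments
ar_returns = np.empty(2000)                      # positively autocorrelated returns
ar_returns[0] = 0.0
for t in range(1, 2000):
    ar_returns[t] = 0.4 * ar_returns[t - 1] + rng.normal(0, 0.01)

vr_rw, z_rw, p_rw = variance_ratio_test(iid_returns, q=4)
vr_ar, z_ar, p_ar = variance_ratio_test(ar_returns, q=4)
```

For the random-walk series the ratio hovers near one, while positive serial correlation inflates it well above one and rejects the null, which is the pattern the paper reports for the less efficient MENA markets.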
Li Sun
2016-01-01
In most of the Wiener-process-based literature it is assumed that the drift parameter depends on the acceleration variables while the diffusion coefficient remains the same across the whole accelerated degradation test (ADT). However, with increasing stress the variation of the diffusion coefficient can also become obvious in some applications. Addressing this phenomenon, the paper concludes that both the drift parameter and the diffusion parameter depend on the stress variables, based on the invariance principle of the failure mechanism and the Nelson assumption. Accordingly, the constant-stress accelerated degradation process (CSADP) and the step-stress accelerated degradation process (SSADP) with random effects are modeled. The unknown parameters in the established model are estimated from the properties of degradation and degradation increments, separately for CSADT and SSADT, by the maximum likelihood estimation approach with measurement error. In addition, the simulation steps for accelerated degradation data are provided, and simulated step-stress accelerated degradation data are designed to validate the proposed model against other models. Finally, a case study of CSADT is conducted to demonstrate the benefits of our model in practical engineering.
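The effect of stress-dependent drift and diffusion can be illustrated with a simulated Wiener degradation process: failure occurs when the path first crosses a threshold, and with positive drift the first-passage time follows an inverse Gaussian law with mean threshold/drift. The acceleration functions, threshold and seed below are invented for illustration and are not the paper's model or estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical log-linear acceleration of BOTH drift and diffusion in the
# normalised stress s (the point of the abstract), with invented constants.
def drift(s):
    return 0.5 * np.exp(1.2 * s)

def diffusion(s):
    return 0.2 * np.exp(0.6 * s)

def first_passage_time(s, threshold, dt=0.01, n_steps=10_000):
    # Wiener degradation path X(t) = drift*t + diffusion*B(t); failure is
    # the first grid time at which X crosses the threshold.
    inc = drift(s) * dt + diffusion(s) * np.sqrt(dt) * rng.normal(size=n_steps)
    x = np.cumsum(inc)
    idx = int(np.argmax(x >= threshold))
    return (idx + 1) * dt if x[idx] >= threshold else np.inf

threshold = 10.0
times_low = [first_passage_time(0.0, threshold) for _ in range(200)]
times_high = [first_passage_time(1.0, threshold) for _ in range(200)]

# Mean first-passage time should be near threshold/drift at each stress,
# so the higher stress fails markedly faster.
mean_low = float(np.mean(times_low))
mean_high = float(np.mean(times_high))
```

In an actual CSADT analysis these paths would be observed at a few stress levels and the acceleration constants recovered by maximum likelihood, as the abstract describes.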
Topology,randomness and noise in process calculus
YING Mingsheng
2007-01-01
Formal models of communicating and concurrent systems are one of the most important topics in formal methods, and process calculus is one of the most successful formal models of communicating and concurrent systems. In previous works, the author systematically studied topology in process calculus, probabilistic process calculus and pi-calculus with noisy channels, in order to describe approximate behaviors of communicating and concurrent systems as well as randomness and noise in them. This article is a brief survey of these works.
A Class of Limit Theorems of Moving Averages for END Random Variables
胡松; 汪忠志
2013-01-01
Using the condition E(exp{t|X_1|^{1/p}}) < ∞ (p > 1), limit theorems for moving averages of END random sequences are proved. Upper and lower bounds are given for moving averages of the form (log n)^{-p} ∑_{k=n+1}^{n+(log n)^p} X_k, and the classical strong law of large numbers is obtained.
周勇; 孙六全; Paul S.F. YIP
1999-01-01
The local behavior of the oscillation modulus of the product-limit (PL) process and the cumulative hazard process is investigated when the data are subject to random censoring. Laws of the iterated logarithm of the local oscillation modulus for the PL process and the cumulative hazard process are established. Some of these results are applied to obtain the almost sure best rates of convergence for various types of density estimators as well as the Bahadur-Kiefer type process.
Ruin Probabilities in the Risk Process with Random Income
Zhen-hua Bao; Zhong-xing Ye
2008-01-01
We extend the classical risk model to the case in which the premium income process, modelled as a Poisson process, is no longer a linear function. We derive an analog of the Beekman convolution formula for the ultimate ruin probability when the inter-claim times are exponentially distributed. A defective renewal equation satisfied by the ultimate ruin probability is then given. For the general inter-claim times with zero-truncated geometrically distributed claim sizes, the explicit expression for the ultimate ruin probability is derived.
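As a rough illustration of the model class (not the paper's analytical formulas), the sketch below estimates a finite-horizon ruin probability by Monte Carlo for the variant described above: unit-size premiums arriving as a Poisson process and claims arriving after exponential inter-claim times. Exponential claim sizes are our added assumption for concreteness.

```python
import random

def ruin_probability(u, horizon, prem_rate, claim_rate, claim_mean,
                     n_paths=4000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability when
    unit-size premiums arrive as a Poisson process (rate prem_rate) and
    claims with exponential sizes (mean claim_mean) arrive after
    exponential inter-claim times (rate claim_rate)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, capital = 0.0, u
        while True:
            # competing exponential clocks: next premium vs next claim
            t_prem = rng.expovariate(prem_rate)
            t_claim = rng.expovariate(claim_rate)
            if t + min(t_prem, t_claim) > horizon:
                break
            if t_prem < t_claim:
                t += t_prem
                capital += 1.0                               # unit premium income
            else:
                t += t_claim
                capital -= rng.expovariate(1.0 / claim_mean)  # claim payout
                if capital < 0:
                    ruined += 1
                    break
    return ruined / n_paths
```

With a positive safety loading (premium income rate above the expected claim outflow), the estimate decreases quickly in the initial capital u.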
Random Error in Judgment: The Contribution of Encoding and Retrieval Processes
Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.
2009-01-01
Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…
Art Therapy and Cognitive Processing Therapy for Combat-Related PTSD: A Randomized Controlled Trial
Campbell, Melissa; Decker, Kathleen P.; Kruk, Kerry; Deaver, Sarah P.
2016-01-01
This randomized controlled trial was designed to determine if art therapy in conjunction with Cognitive Processing Therapy (CPT) was more effective for reducing symptoms of combat posttraumatic stress disorder (PTSD) than CPT alone. Veterans (N = 11) were randomized to receive either individual CPT, or individual CPT in conjunction with individual…
Physics of Stochastic Processes How Randomness Acts in Time
Mahnke, Reinhard; Lubashevsky, Ihor
2008-01-01
Based on lectures given by one of the authors with many years of experience in teaching stochastic processes, this textbook is unique in combining basic mathematical and physical theory with numerous simple and sophisticated examples as well as detailed calculations. In addition, applications from different fields are included so as to strengthen the background learned in the first part of the book. With its exercises at the end of each chapter (and a solutions manual available only to lecturers), this book will benefit students and researchers at different educational levels.
Random Gaussian process effect upon selective system of spectra heterodyne analyzer
N. F. Vollerner
1967-12-01
A formula is obtained that describes the change of the mean power at the selective system output with the tuning speed of the spectrum heterodyne analyzer when searching for random stationary processes.
Equilibrium fluctuations for gradient exclusion processes with conductances in random environments
Farfan, Jonathan; Valentim, Fabio J
2009-01-01
We study the equilibrium fluctuations for a gradient exclusion process with conductances in random environments, which can be viewed as a central limit theorem for the empirical distribution of particles when the system starts from an equilibrium measure.
Convergence of clock processes in random environments and ageing in the p-spin SK model
Bovier, Anton
2010-01-01
We derive a general criterion for the convergence of clock processes in random dynamics in random environments that is applicable in cases when correlations are not negligible, extending recent results by Gayrard [15,16]; it is based on a general criterion for the convergence of sums of dependent random variables due to Durrett and Resnick [13]. We demonstrate the power of this criterion by applying it to the case of random hopping time dynamics of the p-spin SK model. We prove that on a wide range of time scales, the clock process converges to a stable subordinator almost surely with respect to the environment. We also show that a time-time correlation function converges to the arcsine law for this subordinator, almost surely. This improves recent results of Ben Arous et al. [1] that obtained a similar convergence result in law with respect to the random environment.
Reference Information Extraction and Processing Using Conditional Random Fields
Tudor Groza
2012-06-01
Fostering both the creation and the linking of data with the scope of supporting the growth of the Linked Data Web requires us to improve the acquisition and extraction mechanisms of the underlying semantic metadata. This is particularly important for the scientific publishing domain, where currently most of the datasets are being created in an author-driven, manual manner. In addition, such datasets capture only fragments of the complete metadata, usually omitting important elements such as the references, although these represent valuable information. In this paper we present an approach that deals with this aspect of extraction and processing of reference information. The experimental evaluation shows that, currently, our solution handles diverse types of reference formats very well, thus making it usable for, or adaptable to, any area of scientific publishing.
De Moerloose, Barbara; Suciu, Stefan; Bertrand, Yves; Mazingue, Françoise; Robert, Alain; Uyttebroeck, Anne; Yakouben, Karima; Ferster, Alice; Margueritte, Geneviève; Lutz, Patrick; Munzer, Martine; Sirvent, Nicolas; Norton, Lucilia; Boutard, Patrick; Plantaz, Dominique; Millot, Frederic; Philippet, Pierre; Baila, Liliana; Benoit, Yves; Otten, Jacques
2010-07-08
The European Organisation for Research and Treatment of Cancer 58951 trial for children with acute lymphoblastic leukemia (ALL) or non-Hodgkin lymphoma (NHL) addressed 3 randomized questions, including the evaluation of dexamethasone (DEX) versus prednisolone (PRED) in induction and, for average-risk patients, the evaluation of vincristine and corticosteroid pulses during continuation therapy. The corticosteroid used in the pulses was that assigned at induction. Overall, 411 patients were randomly assigned: 202 initially randomly assigned to PRED (60 mg/m²/d), 201 to DEX (6 mg/m²/d), and 8 nonrandomly assigned to PRED. At a median follow-up of 6.3 years, there were 19 versus 34 events for pulses versus no pulses; 6-year disease-free survival (DFS) rate was 90.6% (standard error [SE], 2.1%) and 82.8% (SE, 2.8%), respectively (hazard ratio [HR] = 0.54; 95% confidence interval, 0.31-0.94; P = .027). The effect of pulses was similar in the PRED (HR = 0.56) and DEX groups (HR = 0.59) but more pronounced in girls (HR = 0.24) than in boys (HR = 0.71). Grade 3 to 4 hepatic toxicity was 30% versus 40% in pulses versus no pulses group and grade 2 to 3 osteonecrosis was 4.4% versus 2%. For average-risk patients treated according to Berlin-Frankfurt-Muenster-based protocols, pulses should become a standard component of therapy. This trial was registered at www.clinicaltrials.gov as #NCT00003728.
Tina R Kilburn
Deficits in information processing may be a core deficit after fetal alcohol exposure. This study was designed to investigate the possible effects of weekly low to moderate maternal alcohol consumption and binge drinking episodes in early pregnancy on choice reaction time (CRT) and information processing time (IPT) in young children. Participants were sampled based on maternal alcohol consumption during pregnancy. At the age of 60-64 months, 1,333 children were administered a modified version of the Sternberg paradigm to assess CRT and IPT. In addition, a test of general intelligence (WPPSI-R) was administered. Adjusted for a wide range of potential confounders, this study showed no significant effects of average weekly maternal alcohol consumption during pregnancy on CRT or IPT. There was, however, an indication of slower CRT associated with binge drinking episodes in gestational weeks 1-4. Further large-scale studies are needed to investigate the effects of different patterns of maternal alcohol consumption on basic cognitive processes in offspring.
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view attributes to these models an important mathematical role in theoretical formulations in personalized medicine, because they not only have parameters that represent average patients, but also parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
Processing of X-ray snapshots from crystals in random orientations
Kabsch, Wolfgang, E-mail: kabsch@mpimf-heidelberg.mpg.de [Max-Planck-Institut für medizinische Forschung, Jahnstrasse 29, D-69120 Heidelberg (Germany)
2014-08-01
A new method for the treatment of partial reflections from X-ray snapshots is implemented in the program package nXDS, which yields intensity data of almost the same quality as those obtained by the classical rotation method. A functional expression is introduced that relates scattered X-ray intensities from a still or a rotation snapshot to the corresponding structure-factor amplitudes. The new approach was implemented in the program nXDS for processing monochromatic diffraction images recorded by a multi-segment detector where each exposure could come from a different crystal. For images containing indexable spots, the intensities of the expected reflections and their variances are obtained by profile fitting after mapping the contributing pixel contents to the Ewald sphere. The varying intensity decline owing to the angular distance of the reflection from the surface of the Ewald sphere is estimated using a Gaussian rocking curve. This decline is dubbed ‘Ewald offset correction’, which is well defined even for still images. Together with an image-scaling factor and other corrections, an explicit expression is defined that predicts each recorded intensity from its corresponding structure-factor amplitude. All diffraction parameters, scaling and correction factors are improved by post-refinement. The ambiguous case of a lower point group than the lattice symmetry is resolved by a method reminiscent of the technique of ‘selective breeding’. It selects the indexing alternative for each image that yields, on average, the highest correlation with intensities from all other images. Processing a test set of rotation images by XDS and treating the same images by nXDS as snapshots of crystals in random orientations yields data of comparable quality, clearly indicating an anomalous signal from Se atoms.
THE ERGODICITY FOR BI-IMMIGRATION BIRTH AND DEATH PROCESSES IN RANDOM ENVIRONMENT
[no author listed]
2008-01-01
The concepts of a bi-immigration birth and death density matrix in random environment and a bi-immigration birth and death process in random environment are introduced. For any bi-immigration birth and death matrix in random environment Q(θ) with birth rate λ < death rate μ, the following results are proved: (1) there is a unique q-process in random environment, P̄(θ*(0); t) = (p̄(θ*(0); t, i, j), i, j ≥ 0), which is ergodic, that is, lim_{t→∞} p̄(θ*(0); t, i, j) = π̄(θ*(0); j) ≥ 0 does not depend on i ≥ 0 and Σ_{j≥0} π̄(θ*(0); j) = 1; (2) there is a bi-immigration birth and death process in random environment (X* = {X_t, t ≥ 0}, ξ* = {ξ_t, t ∈ (-∞, ∞)}) with random transition matrix P̄(θ*(0); t) such that X* is a strictly stationary process.
Physical Theories with Average Symmetry
Alamino, Roberto C
2013-01-01
This Letter probes the existence of physical laws that are invariant only on average when subjected to some transformation. The concept of a symmetry transformation is broadened to include corruption by random noise, and average symmetry is introduced by considering functions which are invariant only on average under these transformations. It is then shown that actions with average symmetry obey a modified version of Noether's theorem with dissipative currents. The relation of this to possible violations of physical symmetries, for instance Lorentz invariance in some quantum gravity theories, is briefly commented on.
Gaussian point processes and two-by-two random matrix theory.
Nieminen, John M
2007-10-01
The statistics of the multidimensional Gaussian point process are discussed in connection with the spacing statistics of eigenvalues of 2×2 random matrices. We consider the three-dimensional Gaussian point process when two of the coordinates of a point are randomly chosen from a Gaussian distribution having a mean of zero and a variance of σ² = 1, but the third coordinate is chosen from a Gaussian distribution whose variance lies in the range 0 < σ² ≤ 1. The probability of a random point being at a distance r from the origin is shown to be closely related to the nearest-neighbor spacing distribution of eigenvalues coming from an ensemble of 2×2 matrices defined by the French-Kota-Pandey-Mehta two-matrix model of random matrix theory. An elementary explanation of this result is given.
Simulation of the diffusion process in composite porous media by random walks
ZHANG Yong
2005-01-01
A new random-walk interpolation scheme was developed to simulate solute transport through composite porous media with different porosities as well as different diffusivities. The significant influences of abrupt variations of porosity and diffusivity on solute transport were simulated by tracking random walkers through a linear interpolation domain across the heterogeneity interface. The displacements of the random walkers within the interpolation region were obtained explicitly by establishing the equivalence between the Fokker-Planck equation and the advection-dispersion equation. Applications indicate that the random-walk interpolation method can simulate one- and two-dimensional, 2nd-order diffusion processes in composite media without local mass conservation errors. In addition, both the theoretical derivations and the numerical simulations show that the drift and dispersion of particles depend on the type of Markov process selected to reflect the dynamics of the random walkers. If the nonlinear Langevin equation is used, the gradient of porosity and the gradient of diffusivity strongly affect the drift displacement of particles. Therefore, random-walking particles driven by the gradient of porosity, the gradient of diffusivity, and random diffusion can imitate the transport of solute under pure diffusion in composite porous media containing abrupt variations of porosity and diffusivity.
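The role of the drift term can be illustrated with a minimal sketch (our own construction, not the paper's scheme): diffusivity varies linearly across a thin interface region, and the Itô drift dD/dx is added so the walk reproduces the 2nd-order diffusion equation, whose stationary density on a reflecting domain is uniform.

```python
import numpy as np

# Diffusivities on either side of the interface and a thin interpolation
# region across which D(x) varies linearly (mimicking the interpolation idea).
D1, D2 = 0.1, 0.4
a, b = 0.45, 0.55

def D(x):
    return np.where(x < a, D1, np.where(x > b, D2,
                    D1 + (D2 - D1) * (x - a) / (b - a)))

def dD(x):
    # derivative of D(x); this Ito drift keeps the scheme consistent with
    # d_t p = d_x (D d_x p) -- without it, walkers pile up where D is small
    return np.where((x >= a) & (x <= b), (D2 - D1) / (b - a), 0.0)

def simulate(n=2000, steps=20000, dt=2e-4, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(n, 0.5)                      # all walkers start at the interface
    for _ in range(steps):
        x = x + dD(x) * dt + np.sqrt(2 * D(x) * dt) * rng.standard_normal(n)
        x = np.abs(x)                        # reflect at 0
        x = 1 - np.abs(1 - x)                # reflect at 1
    return x
```

Since the stationary density of d_t p = d_x(D d_x p) with reflecting boundaries is uniform, roughly half the walkers should end up on each side of the interface; dropping the dD/dx drift breaks this check.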
Suciu, Stefan; Bertrand, Yves; Mazingue, Françoise; Robert, Alain; Uyttebroeck, Anne; Yakouben, Karima; Ferster, Alice; Margueritte, Geneviève; Lutz, Patrick; Munzer, Martine; Sirvent, Nicolas; Norton, Lucilia; Boutard, Patrick; Plantaz, Dominique; Millot, Frederic; Philippet, Pierre; Baila, Liliana; Benoit, Yves; Otten, Jacques
2010-01-01
The European Organisation for Research and Treatment of Cancer 58951 trial for children with acute lymphoblastic leukemia (ALL) or non-Hodgkin lymphoma (NHL) addressed 3 randomized questions, including the evaluation of dexamethasone (DEX) versus prednisolone (PRED) in induction and, for average-risk patients, the evaluation of vincristine and corticosteroid pulses during continuation therapy. The corticosteroid used in the pulses was that assigned at induction. Overall, 411 patients were randomly assigned: 202 initially randomly assigned to PRED (60 mg/m2/d), 201 to DEX (6 mg/m2/d), and 8 nonrandomly assigned to PRED. At a median follow-up of 6.3 years, there were 19 versus 34 events for pulses versus no pulses; 6-year disease-free survival (DFS) rate was 90.6% (standard error [SE], 2.1%) and 82.8% (SE, 2.8%), respectively (hazard ratio [HR] = 0.54; 95% confidence interval, 0.31-0.94; P = .027). The effect of pulses was similar in the PRED (HR = 0.56) and DEX groups (HR = 0.59) but more pronounced in girls (HR = 0.24) than in boys (HR = 0.71). Grade 3 to 4 hepatic toxicity was 30% versus 40% in pulses versus no pulses group and grade 2 to 3 osteonecrosis was 4.4% versus 2%. For average-risk patients treated according to Berlin-Frankfurt-Muenster–based protocols, pulses should become a standard component of therapy. This trial was registered at www.clinicaltrials.gov as #NCT00003728. PMID:20407035
Average quantum dynamics of closed systems over stochastic Hamiltonians
Yu, Li
2011-01-01
We develop a master equation formalism to describe the evolution of the average density matrix of a closed quantum system driven by a stochastic Hamiltonian. The average over random processes generally results in decoherence effects in closed system dynamics, in addition to the usual unitary evolution. We then show that, for an important class of problems in which the Hamiltonian is proportional to a Gaussian random process, the 2nd-order master equation yields exact dynamics. The general formalism is applied to study the examples of a two-level system, two atoms in a stochastic magnetic field and the heating of a trapped ion.
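A minimal numerical illustration of this mechanism (our sketch, not the paper's formalism): for H(t) = ξ(t)σ_z/2 with Gaussian white noise ξ, each realization evolves unitarily, yet the ensemble-averaged off-diagonal density-matrix element decays as exp(-γt/2), as a second-order master equation predicts.

```python
import numpy as np

def average_coherence(gamma=1.0, t_final=1.0, steps=200,
                      trajectories=20000, seed=0):
    """Each trajectory evolves unitarily under H(t) = xi(t) sigma_z / 2 with
    Gaussian white noise xi of strength gamma; only the accumulated random
    phase phi = integral xi dt matters for the off-diagonal element."""
    rng = np.random.default_rng(seed)
    dt = t_final / steps
    phi = np.sqrt(gamma * dt) * rng.standard_normal(
        (trajectories, steps)).sum(axis=1)
    # ensemble-averaged coherence |<rho_01(t)>| relative to its initial value
    return float(np.abs(np.exp(1j * phi).mean()))

coh = average_coherence()
exact = np.exp(-0.5 * 1.0 * 1.0)   # master-equation prediction exp(-gamma t / 2)
```

Each individual trajectory has |e^{iφ}| = 1; the decay appears only after averaging, which is the point made in the abstract.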
Maslennikova, Yu. S.; Nugmanov, I. S.
2016-08-01
The problem of estimating the probability density function of a random process is one of the most common in practice. There are several methods to solve this problem. The presented laboratory work uses methods of mathematical statistics to detect patterns in realizations of a random process. On the basis of ergodic theory, we construct an algorithm for estimating the univariate probability density function of a random process. Correlation analysis of realizations is applied to estimate the necessary sample size and observation time. Hypothesis testing for two probability distributions (normal and Cauchy) is applied to the experimental data using the χ2 criterion. To facilitate understanding and clarity of the problem solved, we use the ELVIS II platform and the LabVIEW software package, which allow us to make the necessary calculations, display the results of the experiment and, most importantly, control the experiment. At the same time students are introduced to the LabVIEW software package and its capabilities.
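The core step can be sketched without LabVIEW (this stdlib-only Python version is our stand-in for the lab setup): estimate the univariate density from one long realization by a normalized histogram and compare it with the analytic normal density.

```python
import math
import random

def histogram_density(samples, lo, hi, bins):
    """Empirical probability density estimated from samples by a histogram,
    normalized so that the bars integrate to (approximately) 1."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for s in samples:
        if lo <= s < hi:
            counts[int((s - lo) / width)] += 1
    n = len(samples)
    return [c / (n * width) for c in counts]

rng = random.Random(42)
xs = [rng.gauss(0.0, 1.0) for _ in range(200_000)]   # one long "realization"
dens = histogram_density(xs, -4.0, 4.0, 40)
center = dens[20]                                     # bin covering [0, 0.2)
phi_mid = math.exp(-0.1 ** 2 / 2) / math.sqrt(2 * math.pi)  # N(0,1) pdf at 0.1
```

The bin-by-bin differences between `dens` and the analytic density are exactly what a χ2 goodness-of-fit test, as used in the lab, would aggregate into one statistic.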
α-TRANSIENCE AND α-RECURRENCE FOR RANDOM WALKS AND LÉVY PROCESSES
ZHANG HUIZENG; ZHAO MINZHI; YING JIANGANG
2005-01-01
The authors investigate the α-transience and α-recurrence for random walks and Lévy processes by means of the associated moment generating function, give a dichotomy theorem for processes that are not one-sided, and prove that the process X is quasi-symmetric if and only if X is not α-recurrent for all α < 0, which gives a probabilistic explanation of quasi-symmetry, a concept originated from C. J. Stone.
Note: A 10 Gbps real-time post-processing free physical random number generator chip
Qian, Yi; Liang, Futian; Wang, Xinzhe; Li, Feng; Chen, Lian; Jin, Ge
2017-09-01
A random number generator with high data rate, small size, and low power consumption is essential for a certain quantum key distribution (QKD) system. We designed a 10 Gbps random number generator ASIC, TRNG2016, for the QKD system. With a 6 mm × 6 mm QFN48 package, TRNG2016 has 10 independent physical random number generation channels, and each channel can work at a fixed frequency up to 1 Gbps. The random number generated by TRNG2016 can pass the NIST statistical tests without any post-processing. With 3.3 V IO power supply and 1.2 V core power supply, the typical power consumption of TRNG2016 is 773 mW with 10 channels on and running at 1 Gbps data rate.
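For illustration, here is the simplest of the NIST statistical tests mentioned above, the SP 800-22 frequency (monobit) test, applied to Python's PRNG as a stand-in for the hardware bit stream:

```python
import math
import random

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test: p-value for the hypothesis
    that zeros and ones are equiprobable in the bit stream."""
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

rng = random.Random(7)
bits = [rng.getrandbits(1) for _ in range(100_000)]
# a healthy source yields a p-value above NIST's 0.01 level most of the time,
# while a heavily biased stream fails by many orders of magnitude
p = monobit_p_value(bits)
```

The full suite applies many such tests; passing all of them without post-processing, as claimed for TRNG2016, is a fairly strong statistical property.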
Average Optimality in Markov Decision Processes with Unbounded Rewards
胡奇英
2002-01-01
This paper studies average optimality in Markov decision processes with countable state space, nonempty action sets and unbounded reward function. New conditions are discussed under which there exists an ε-optimal stationary policy, and under which the average-criterion optimality inequality holds when the summation in it is well defined.
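The average criterion itself can be illustrated on a finite toy problem (the paper treats countable state spaces and unbounded rewards; this sketch is ours): relative value iteration on a two-state deterministic MDP whose optimal average reward is clearly 2.

```python
# A two-state deterministic MDP: in state 1 the action 'stay' earns 2 per
# step, so the optimal long-run average reward (the gain) is 2.
rewards = {0: {'stay': 1.0, 'move': 0.0},
           1: {'stay': 2.0, 'move': 0.0}}
next_state = {0: {'stay': 0, 'move': 1},
              1: {'stay': 1, 'move': 0}}

def relative_value_iteration(tol=1e-10, max_iter=10_000):
    """Relative value iteration for the average-reward criterion: iterate the
    Bellman operator and renormalize against reference state 0, so that the
    per-iteration shift converges to the optimal gain."""
    h = {0: 0.0, 1: 0.0}
    gain = 0.0
    for _ in range(max_iter):
        new = {s: max(rewards[s][a] + h[next_state[s][a]]
                      for a in ('stay', 'move')) for s in (0, 1)}
        gain = new[0] - h[0]                      # shift at the reference state
        shifted = {s: new[s] - new[0] for s in (0, 1)}
        if max(abs(shifted[s] - h[s]) for s in (0, 1)) < tol:
            return gain, shifted
        h = shifted
    return gain, h
```

At the fixed point the pair (gain, h) satisfies the average-criterion optimality equation h(s) + gain = max_a [r(s, a) + h(s')], the finite analogue of the optimality inequality discussed in the abstract.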
Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses
Colin Bruno
2015-01-01
In the field of military land vehicles, random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes with a stationary and Gaussian nature. Non-stationarity of the processes, induced by the variability of the vehicle speed, is not a major difficulty because the designer can keep good control over the vehicle speed by characterising the histogram of the instantaneous speed of the vehicle during an operational situation. Beyond this non-stationarity problem, the hard point clearly lies in the fact that the random processes are not Gaussian, being generated mainly by the non-linear behaviour of the undercarriage and the frequent occurrence of shocks caused by the roughness of the terrain. This non-Gaussian nature is expressed particularly by very high flattening (kurtosis) levels, which can affect the design of structures under extreme stresses conventionally obtained by spectral approaches, inherent to Gaussian processes and based essentially on the spectral moments of the stress processes. Due to these technical considerations, techniques for characterising the random excitation processes generated by this type of carrier need to change, and innovative characterisation methods based on time-domain approaches, as described in the body of the text, are proposed rather than spectral-domain approaches.
胡细; 王汉兴; 赵飞
2007-01-01
The flooding distance is an important parameter in the design and evaluation of a routing protocol; it is related not only to the delay in route discovery, but also to the stability and reliability of the route. In this paper, the average flooding distance (AFD) for a mobile ad hoc network (MANET) in a random graph model is derived based on the dynamic source routing (DSR) protocol. The influence of spatial reuse on the AFD is also studied. Compared with the model without spatial reuse, the model with spatial reuse gives a much smaller AFD when the connectivity probability between nodes in the network is small and the number of reuses is large. This means that route discovery with spatial reuse is much more effective.
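A simplified sketch of the quantity involved (ignoring the DSR protocol details and spatial reuse, and using an Erdős–Rényi graph as the random-graph model of the MANET): the average flooding distance as the mean BFS hop count from a source node.

```python
import random
from collections import deque

def average_flooding_distance(n=60, p=0.15, source=0, seed=3):
    """Mean BFS hop count from `source` to every node it can reach in an
    Erdos-Renyi random graph G(n, p) standing in for the MANET topology."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:          # each link present independently
                adj[i].add(j)
                adj[j].add(i)
    # breadth-first search models one flooding wave of route-request packets
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    hops = [d for v, d in dist.items() if v != source]
    return sum(hops) / len(hops) if hops else float('inf')
```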
Chittleborough, Catherine R.; Nicholson, Alexandra L.; Basker, Elaine; Bell, Sarah; Campbell, Rona
2012-01-01
This article explores factors that may influence hand washing behaviour among pupils and staff in primary schools. A qualitative process evaluation within a cluster randomized controlled trial included pupil focus groups (n = 16, aged 6-11 years), semi-structured interviews (n = 16 teachers) and observations of hand washing facilities (n = 57).…
Random processes and geographic species richness patterns: why so few species in the north?
Bokma, F; Bokma, J; Monkkonen, M
2001-01-01
In response to the suggestion that the latitudinal gradient in species richness is the result of stochastic processes of species distributions, we created a computer simulation program that enabled us to study random species distributions over irregularly shaped areas. Our model could not explain…
Smith, Toni M.; Hjalmarson, Margret A.
2013-01-01
The purpose of this study is to examine prospective mathematics specialists' engagement in an instructional sequence designed to elicit and develop their understandings of random processes. The study was conducted with two different sections of a probability and statistics course for K-8 teachers. Thirty-two teachers participated. Video analyses…
Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J
2017-10-01
A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p > 0.05) to those for active sorbent-tube-based sampling. The sampling time of 30 min and fiber retraction of 5 mm were found to be optimal for the tissue digestion process.
A NOTE ON OSCILLATION MODULUS OF PL-PROCESS AND ITS APPLICATIONS UNDER RANDOM CENSORSHIP
周勇
2003-01-01
The strong limit results for the oscillation modulus of the PL-process are established in this paper when the density function is not continuous for censored data. The rates of convergence of the oscillation modulus of the PL-process are sharp under weak conditions. These results can be used to derive laws of the iterated logarithm for random-bandwidth kernel estimators and nearest-neighborhood estimators of the density, without assuming continuity of the density function.
Li, Quan-Lin; Lui, John C. S.
2010-01-01
In this paper, we provide a novel matrix-analytic approach for studying doubly exponential solutions of randomized load balancing models (also known as supermarket models) with Markovian arrival processes (MAPs) and phase-type (PH) service times. We describe the supermarket model as a system of differential vector equations by means of density dependent jump Markov processes, and obtain a closed-form solution with a doubly exponential structure to the fixed point of the system of differential...
ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS
Dietrich Stoyan
2011-05-01
This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
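A minimal sketch of the minus-sampling idea (applied here to D rather than Hq, with our own parameter choices): among points farther than r from the window boundary, count the fraction whose nearest neighbour lies within r, and compare with the exact Poisson-process value D(r) = 1 - exp(-λπr²).

```python
import math
import numpy as np

def minus_sampling_D(points, r):
    """Minus-sampling (border method) estimator of the nearest-neighbour
    distance distribution D(r) in the unit square: among points farther
    than r from the boundary, the fraction whose nearest neighbour lies
    within distance r."""
    inner = [p for p in points if r <= p[0] <= 1 - r and r <= p[1] <= 1 - r]
    hits = 0
    for p in inner:
        nn = min(math.dist(p, q) for q in points if q is not p)
        if nn <= r:
            hits += 1
    return hits / len(inner) if inner else float('nan')

rng = np.random.default_rng(11)
lam, r, reps = 200.0, 0.05, 20
estimates = []
for _ in range(reps):
    n = rng.poisson(lam)                        # Poisson number of points
    pts = [tuple(xy) for xy in rng.random((n, 2))]
    estimates.append(minus_sampling_D(pts, r))
estimate = float(np.mean(estimates))
theory = 1.0 - math.exp(-lam * math.pi * r ** 2)  # exact D(r), Poisson process
```

Restricting to the eroded window is what removes the edge bias: a point near the boundary may have its true nearest neighbour outside the observation window.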
ON DESIGN METHOD OF THE PRECISION CAM PROFILE WITH RANDOM PROCESSING ERRORS
[no author listed]
2003-01-01
Based on probability and statistics, a design method for a precision cam profile accounting for the influence of random processing errors is advanced. By combining the design with the process, it can be predicted at the design stage whether cam profiles will be successfully processed, so the design of the cam can balance economy and reliability. In addition, a fuzzy deduction method based on Bayes' formula is advanced to estimate whether processing of the designed precision cam profile is reasonable, and it requires only few samples.
The Limit Theorems for Maxima of Stationary Gaussian Processes with Random Index
Zhong Quan TAN
2014-01-01
Let {X(t), t ≥ 0} be a standard (zero-mean, unit-variance) stationary Gaussian process with correlation function r(·) and continuous sample paths. In this paper, we consider the maximum M(T) = max{X(t), t ∈ [0, T]} with random index T_T, where T_T/T converges to a non-degenerate distribution or to a positive random variable in probability, and show that the limit distribution of M(T_T) exists under some additional conditions related to the correlation function r(·).
On a random walk with memory and its relation with Markovian processes
Turban, Loic, E-mail: turban@lpm.u-nancy.f [Groupe de Physique Statistique, Departement Physique de la Matiere et des Materiaux, Institut Jean Lamour (Laboratoire associe au CNRS UMR 7198), CNRS-Nancy Universite-UPV Metz, BP 70239, F-54506 Vandoeuvre les Nancy Cedex (France)
2010-07-16
We study a one-dimensional random walk with memory in which the step lengths to the left and to the right evolve at each step in order to reduce the wandering of the walker. The feedback is quite efficient and leads to a non-diffusive walk. The time evolution of the displacement is given by an equivalent Markovian dynamical process. The probability density for the position of the walker is the same at any time as for a random walk with shrinking steps, although the two-time correlation functions are quite different.
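The comparison object mentioned at the end, a random walk with shrinking steps, is easy to sketch (our illustration, not the paper's model of the memory walk itself): with step lengths λⁿ the displacement variance saturates at Σ λ^{2n} = 1/(1 - λ²) instead of growing linearly in time.

```python
import numpy as np

def shrinking_step_walk(lam=0.5, steps=60, walkers=50_000, seed=5):
    """Random walk whose n-th step is +/- lam**n with equal probability;
    the displacement variance saturates at sum(lam**(2n)) = 1/(1 - lam**2)
    instead of growing linearly in time as for ordinary diffusion."""
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=(walkers, steps))
    lengths = lam ** np.arange(steps)       # geometrically shrinking steps
    return (signs * lengths).sum(axis=1)    # final positions of all walkers

x = shrinking_step_walk()
var = float(x.var())
limit = 1.0 / (1.0 - 0.5 ** 2)              # = 4/3 for lam = 0.5
```

This saturating, non-diffusive variance is the single-time behaviour the abstract says the memory walk shares, even though the two-time correlation functions differ.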
Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric
2003-03-01
We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
Conditional limit theorems for intermediately subcritical branching processes in random environment
Afanasyev, Valeriy; Kersting, Götz; Vatutin, Vladimir
2011-01-01
For a branching process in random environment it is assumed that the offspring distribution of the individuals varies in a random fashion, independently from one generation to the other. In the subcritical regime a kind of phase transition appears. In this paper we study the intermediately subcritical case, which constitutes the borderline within this phase transition. We study the asymptotic behavior of the survival probability. Next, the size of the population and the shape of the random environment conditioned on non-extinction are examined. Finally, we show that, conditioned on non-extinction, periods of small and large population sizes alternate. This kind of 'bottleneck' behavior appears under the annealed approach only in the intermediately subcritical case.
Plern Saipara
2017-03-01
In this paper, we suggest a modified random S-iterative process and prove common random fixed point theorems for a finite family of random uniformly quasi-Lipschitzian operators in a generalized convex metric space. Our results improve and extend various results in the literature.
Gaussian moving averages and semimartingales
Basse-O'Connor, Andreas
2008-01-01
In the present paper we study moving averages (also known as stochastic convolutions) driven by a Wiener process and with a deterministic kernel. Necessary and sufficient conditions on the kernel are provided for the moving average to be a semimartingale in its natural filtration. Our results are constructive, meaning that they provide a simple method to obtain kernels for which the moving average is a semimartingale or a Wiener process. Several examples are considered. In the last part of the paper we study general Gaussian processes with stationary increments. We provide necessary and sufficient...
To be and not to be: scale correlations in random multifractal processes
Cleve, Jochen; Schmiegel, Jürgen; Greiner, Martin
We discuss various properties of a random multifractal process, which are related to the issue of scale correlations. By design, the process is homogeneous, non-conservative and has no built-in scale correlations. However, when it comes to observables like breakdown coefficients, which are based...... on a coarse-graining of the multifractal field, scale correlations do appear. In the log-normal limit of the model process, the conditional distributions and moments of breakdown coefficients reproduce the observations made in fully developed small-scale turbulence. These findings help to understand several...
Schmidt, Deena R; Thomas, Peter J
2014-04-17
Mathematical models of cellular physiological mechanisms often involve random walks on graphs representing transitions within networks of functional states. Schmandt and Galán recently introduced a novel stochastic shielding approximation as a fast, accurate method for generating approximate sample paths from a finite state Markov process in which only a subset of states are observable. For example, in ion-channel models, such as the Hodgkin-Huxley or other conductance-based neural models, a nerve cell has a population of ion channels whose states comprise the nodes of a graph, only some of which allow a transmembrane current to pass. The stochastic shielding approximation consists of neglecting fluctuations in the dynamics associated with edges in the graph not directly affecting the observable states. We consider the problem of finding the optimal complexity reducing mapping from a stochastic process on a graph to an approximate process on a smaller sample space, as determined by the choice of a particular linear measurement functional on the graph. The partitioning of ion-channel states into conducting versus nonconducting states provides a case in point. In addition to establishing that Schmandt and Galán's approximation is in fact optimal in a specific sense, we use recent results from random matrix theory to provide heuristic error estimates for the accuracy of the stochastic shielding approximation for an ensemble of random graphs. Moreover, we provide a novel quantitative measure of the contribution of individual transitions within the reaction graph to the accuracy of the approximate process.
Pretorius Albertus
2003-03-01
In the case of the mixed linear model the random effects are usually assumed to be normally distributed in both the Bayesian and classical frameworks. In this paper, the Dirichlet process prior was used to provide nonparametric Bayesian estimates for correlated random effects. This goal was achieved by providing a Gibbs sampler algorithm that allows these correlated random effects to have a nonparametric prior distribution. A sampling-based method is illustrated. This method, which works by transforming the genetic covariance matrix to an identity matrix so that the random effects are uncorrelated, is an extension of the theory and results of previous researchers. Also, by using Gibbs sampling and data augmentation, a simulation procedure was derived for estimating the precision parameter M associated with the Dirichlet process prior. All needed conditional posterior distributions are given. To illustrate the application, data from the Elsenburg Dormer sheep stud were analysed. A total of 3325 weaning weight records from the progeny of 101 sires were used.
Explosive Percolation in Erdős–Rényi-Like Random Graph Processes
Panagiotou, Konstantinos; Steger, Angelika; Thomas, Henning
2011-01-01
The evolution of the largest component has been studied intensely in a variety of random graph processes, starting in 1960 with the Erdős–Rényi process. It is well known that this process undergoes a phase transition at n/2 edges when, asymptotically almost surely, a linear-sized component appears. Moreover, this phase transition is continuous, i.e., in the limit the function f(c) denoting the fraction of vertices in the largest component in the process after cn edge insertions is continuous. A variation of the Erdős–Rényi process is given by the so-called Achlioptas processes, in which in every step a random pair of edges is drawn, and a fixed edge-selection rule selects one of them to be included in the graph while the other is put back. Recently, Achlioptas, D'Souza and Spencer (2009) gave strong numerical evidence that a variety of edge-selection rules exhibit a discontinuous phase transition. However, Riordan and Warnke (2011) very recently showed that all Achlioptas processes have a continuous phase tran...
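The delayed emergence of the giant component under an Achlioptas rule can be seen in a few lines with a union-find structure. A minimal sketch, with assumed parameters (n = 20,000 vertices, the product rule of Achlioptas, D'Souza and Spencer, and 0.7n inserted edges, which is above the Erdős–Rényi threshold of 0.5n but below the product-rule one near 0.888n):

```python
import random

class UnionFind:
    """Disjoint sets with union by size; tracks the largest component."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.max_size = 1
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.max_size = max(self.max_size, self.size[ra])

def largest_component_fraction(n, m, rule, seed=0):
    rng = random.Random(seed)
    uf = UnionFind(n)
    for _ in range(m):
        e1 = (rng.randrange(n), rng.randrange(n))
        if rule == "er":
            a, b = e1
        else:  # product rule: keep the edge minimizing the product of component sizes
            e2 = (rng.randrange(n), rng.randrange(n))
            p1 = uf.size[uf.find(e1[0])] * uf.size[uf.find(e1[1])]
            p2 = uf.size[uf.find(e2[0])] * uf.size[uf.find(e2[1])]
            a, b = e1 if p1 <= p2 else e2
        uf.union(a, b)
    return uf.max_size / n

n = 20000
m = int(0.7 * n)
f_er = largest_component_fraction(n, m, "er")       # supercritical: giant component
f_pr = largest_component_fraction(n, m, "product")  # still subcritical: no giant yet
```

At the same edge density the Erdős–Rényi graph already has a linear-sized component while the product rule keeps all components small, which is exactly the transition delay the numerical studies cited above examined.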
DiBenedetto, Maria K.
2010-01-01
The current investigation sought to determine whether the self-regulatory variables "study strategies" and "self-satisfaction" correlate with first- and second-generation college students' grade point averages, and to determine whether these two variables would improve the prediction of their averages if used along with high school grades and SAT scores.
Liu, Chengyin; Xu, Chunchuan; Teng, Jun
2016-09-01
The Random Decrement Technique (RDT), based on decentralized computing approaches implemented in wireless sensor networks (WSNs), has shown advantages for modal parameter identification and data aggregation. However, previous studies of RDT-based approaches using ambient vibration data rest on the assumption of a broad-band stochastic process input excitation, normally modeled as white or filtered white noise. In addition, the choice of the triggering condition in RDT is closely related to data communication. In this project, nonstationary white noise excitations are studied as the input in order to verify the random decrement technique. A local-extremum triggering condition is chosen and implemented to minimize data communication in an RDT-based distributed computing strategy. Numerical simulation results show that the proposed technique is capable of minimizing the amount of data transmitted over the network while maintaining accuracy in modal parameter identification.
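The core of RDT is simple to sketch: average all signal segments that start at a common triggering condition, so the random forced response cancels and an estimate of the free decay remains. A toy sketch under assumed parameters (a 2 Hz single-degree-of-freedom oscillator with 5% damping, a level up-crossing trigger rather than the local-extremum condition used in the study):

```python
import math, random

def simulate_sdof(f_n=2.0, zeta=0.05, dt=0.005, n_steps=120000, seed=3):
    """Semi-implicit Euler simulation of a damped oscillator under white noise."""
    rng = random.Random(seed)
    w = 2.0 * math.pi * f_n
    sigma = 50.0  # assumed force intensity (arbitrary units)
    x, v, xs = 0.0, 0.0, []
    for _ in range(n_steps):
        v += (-2.0 * zeta * w * v - w * w * x) * dt \
             + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += v * dt
        xs.append(x)
    return xs

def random_decrement(xs, seg_len, trigger):
    """Average the segments that start wherever the signal up-crosses `trigger`."""
    sig, count, i = [0.0] * seg_len, 0, 1
    while i < len(xs) - seg_len:
        if xs[i - 1] < trigger <= xs[i]:
            for j in range(seg_len):
                sig[j] += xs[i + j]
            count += 1
            i += seg_len  # non-overlapping segments, for simplicity
        else:
            i += 1
    return [s / count for s in sig]

dt = 0.005
xs = simulate_sdof(dt=dt)
rms = (sum(x * x for x in xs) / len(xs)) ** 0.5
sig = random_decrement(xs, seg_len=400, trigger=rms)  # 2 s decrement signature
# The averaged signature approximates the free decay; counting its zero
# crossings gives a crude estimate of the (damped) natural frequency.
zc = sum(1 for k in range(1, len(sig)) if sig[k - 1] * sig[k] < 0)
f_est = zc / (2.0 * len(sig) * dt)
```

With these assumed parameters `f_est` lands near the true 2 Hz; in the WSN setting only the trigger times and short segments need to be communicated, which is what motivates the trigger-condition discussion above.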
An Artificial Bee Colony Algorithm for the Job Shop Scheduling Problem with Random Processing Times
Rui Zhang
2011-09-01
Due to the influence of unpredictable random events, the processing time of each operation should be treated as a random variable if we aim at a robust production schedule. However, compared with the extensive research on the deterministic model, the stochastic job shop scheduling problem (SJSSP) has not received sufficient attention. In this paper, we propose an artificial bee colony (ABC) algorithm for the SJSSP with the objective of minimizing the maximum lateness (an index of service quality). First, we propose a performance estimate for preliminary screening of the candidate solutions. Then, the K-armed bandit model is utilized for reducing the computational burden in the exact evaluation (through Monte Carlo simulation) process. Finally, the computational results on different-scale test problems validate the effectiveness and efficiency of the proposed approach.
2016-01-01
Camera traps are used to estimate densities or abundances using capture-recapture and, more recently, random encounter models (REMs). We deploy REMs to describe an invasive-native species replacement process, and to demonstrate their wider application beyond abundance estimation. The Irish hare Lepus timidus hibernicus is a high priority endemic of conservation concern. It is threatened by an expanding population of non-native, European hares L. europaeus, an invasive species of global import...
The McMillan Theorem for Colored Branching Processes and Dimensions of Random Fractals
Victor Bakhtin
2014-12-01
For the simplest colored branching process, we prove an analog to the McMillan theorem and calculate the Hausdorff dimensions of random fractals defined in terms of the limit behavior of empirical measures generated by finite genetic lines. In this setting, the role of Shannon's entropy is played by the Kullback–Leibler divergence, and the Hausdorff dimensions are computed by means of the so-called Billingsley–Kullback entropy, defined in the paper.
Yan Xia REN
2008-01-01
The global supports of super-Poisson processes and super-random walks with a branching mechanism ψ(z) = z² and constant branching rate are known to be noncompact. It turns out that, for any spatially dependent branching rate, this property remains true. However, the asymptotic extinction property for these two kinds of superprocesses depends on the decay rate of the branching-rate function at infinity.
Schwinger-Dyson equations in large-N quantum field theories and nonlinear random processes
Buividovich, P V
2010-01-01
We study stochastic methods for solving Schwinger-Dyson equations in large-N quantum field theories. Expectation values of single-trace operators are sampled by stationary probability distributions of so-called nonlinear random processes. The set of all histories of such processes corresponds to the set of all planar diagrams in the perturbative expansion of the theory. We describe stochastic algorithms for summation of planar diagrams in matrix-valued scalar field theory and in the Weingarten model of random planar surfaces on the lattice. For compact field variables, the method does not converge in the physically most interesting weak-coupling limit. In this case one can absorb the divergences into a self-consistent redefinition of expansion parameters. Stochastic solution of the self-consistency conditions can be implemented as a random process with memory. We illustrate this idea with the example of the two-dimensional O(N) sigma-model. Extension to non-Abelian lattice gauge theories is discussed.
Kula, Witold; Wolfman, Jerome; Ounadjela, Kamel; Chen, Eugene; Koutny, William
2003-05-01
We report on the development and process control of magnetic tunnel junctions (MTJs) for magnetic random access memory (MRAM) devices. It is demonstrated that MTJs with high magnetoresistance (~40% at 300 mV), resistance-area product (RA) of ~1-3 kΩ μm², low intrinsic interlayer coupling (H_in ~2-3 Oe), and excellent bit switching characteristics can be developed and fully integrated with complementary metal-oxide-semiconductor circuitry into MRAM devices. MTJ uniformity and repeatability at a level suitable for mass production has been demonstrated with the advanced processing and monitoring techniques.
A prospective randomized trial of content expertise versus process expertise in small group teaching
Wright Bruce
2010-10-01
Background: Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise to those with content expertise on medical students' learning outcomes in a structured small group environment. Methods: One hundred and fifty-one first year medical students were randomized to 11 groups for the small group component of the Cardiovascular-Respiratory course at the University of Calgary. Each group was then block randomized to one of three streams for the entire course: tutoring exclusively by physicians with content expertise (n = 5), tutoring exclusively by physicians with process expertise (n = 3), and tutoring by content experts for 11 sessions and process experts for 10 sessions (n = 3). After each of the 21 small group sessions, students evaluated their preceptors' teaching with a standardized instrument. Students' knowledge acquisition was assessed by an end-of-course multiple choice (EOC-MCQ) examination. Results: Students rated the process experts significantly higher on each of the instrument's 15 items, including the overall rating. Students' mean score (±SD) on the EOC-MCQ exam was 76.1% (8.1) for groups taught by content experts, 78.2% (7.8) for the combination group and 79.5% (9.2) for process expert groups (p = 0.11). By linear regression, student performance was higher if they had been taught by process experts (regression coefficient 2.7 [0.1, 5.4], p ...). Conclusions: When preceptors are physicians, content expertise is not a prerequisite to teach first year medical students within a structured small group environment; preceptors with process expertise result in at least equivalent, if not
Measuring Complexity through Average Symmetry
Alamino, Roberto C.
2015-01-01
This work introduces a complexity measure which addresses some conflicting issues between existing ones by using a new principle - measuring the average amount of symmetry broken by an object. It attributes low (although different) complexity to either deterministic or random homogeneous densities and higher complexity to the intermediate cases. This new measure is easily computable, breaks the coarse graining paradigm and can be straightforwardly generalised, including to continuous cases an...
Huibing Hao
2015-01-01
Light emitting diode (LED) lamps have attracted increasing interest in the field of lighting systems due to their low energy consumption and long lifetime. For different functions (i.e., illumination and color), a lamp may have two or more performance characteristics. When the multiple performance characteristics are dependent, accurately analyzing the system reliability becomes a challenging problem. In this paper, we assume that the system has two performance characteristics, each governed by a random-effects Gamma process, where the random effects capture unit-to-unit differences. The dependency of the performance characteristics is described by a Frank copula function, via which a reliability assessment model is proposed. Since the model is complicated and analytically intractable, the Markov chain Monte Carlo (MCMC) method is used to estimate the unknown parameters. A numerical example based on actual LED lamp data is given to demonstrate the usefulness and validity of the proposed model and method.
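The Frank-copula coupling of two degradation characteristics can be sketched via conditional inversion, which has a closed form. Assumptions not taken from the paper: θ = 8 for the copula parameter, and unit-rate exponential marginals (a shape-1 Gamma) standing in for the fitted Gamma marginals; a real analysis would plug in the MCMC parameter estimates.

```python
import math, random

def frank_pair(theta, rng):
    """Sample (u1, u2) from a Frank copula by inverting the conditional CDF."""
    u1, p = rng.random(), rng.random()
    num = p * (1.0 - math.exp(-theta))
    den = p * (math.exp(-theta * u1) - 1.0) - math.exp(-theta * u1)
    return u1, -math.log(1.0 + num / den) / theta

rng = random.Random(42)
theta = 8.0                        # assumed dependence parameter (theta > 0)
x1_lim = x2_lim = math.log(2.0)    # failure thresholds at the marginal medians
hits_1 = hits_2 = hits_joint = 0
n = 20000
for _ in range(n):
    u1, u2 = frank_pair(theta, rng)
    # map uniforms to unit-rate exponential (shape-1 Gamma) degradation levels
    x1, x2 = -math.log(1.0 - u1), -math.log(1.0 - u2)
    hits_1 += x1 < x1_lim
    hits_2 += x2 < x2_lim
    hits_joint += (x1 < x1_lim) and (x2 < x2_lim)
p1, p2, pj = hits_1 / n, hits_2 / n, hits_joint / n
# With positive dependence the joint reliability P(X1 < L1, X2 < L2) exceeds
# the independence approximation P(X1 < L1) * P(X2 < L2) -- the effect the
# copula-based model is designed to capture.
```

Here `pj` comes out well above `p1 * p2` (about 0.42 versus 0.25), which is why treating dependent characteristics as independent misstates the system reliability.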
Critical value for the contact process with random recovery rates and edge weights on regular tree
Xue, Xiaofeng
2016-11-01
In this paper we are concerned with contact processes with random recovery rates and edge weights on rooted regular trees T_N. Let ρ and ξ be two nonnegative random variables such that P(ɛ ≤ ξ 0. For each vertex x on T_N, ξ(x) is an independent copy of ξ, while for each edge e on T_N, ρ(e) is an independent copy of ρ. An infected vertex x becomes healthy at rate ξ(x), while an infected vertex y infects a healthy neighbor z at rate proportional to ρ(y, z). For this model, we prove that the critical value under the annealed measure approximately equals (N E[ρ] E[1/ξ])⁻¹ as N grows to infinity. Furthermore, we show that the critical value under the quenched measure equals that under the annealed measure when the cluster containing the root formed with edges with positive weights is infinite.
Anantharam, Venkat
2010-01-01
This paper studies the Shannon regime for the random displacement of stationary point processes. Let each point of some initial stationary point process in ℝⁿ give rise to one daughter point, the location of which is obtained by adding a random vector to the coordinates of the mother point, with all displacement vectors independently and identically distributed for all points. The decoding problem is then the following one: the whole mother point process is known as well as the coordinates of some daughter point; the displacements are only known through their law; can one find the mother of this daughter point? The Shannon regime is that where the dimension n tends to infinity and where the logarithm of the intensity of the point process is proportional to n. We show that this problem exhibits a sharp threshold: if the sum of the proportionality factor and of the differential entropy rate of the noise is positive, then the probability of finding the right mother point tends to 0 with n for all point...
Musho, M.K.; Kozak, J.J.
1984-10-01
A method is presented for calculating exactly the relative width (σ²)^{1/2}/...
Quasi-steady-state analysis of two-dimensional random intermittent search processes
Bressloff, Paul C.
2011-06-01
We use perturbation methods to analyze a two-dimensional random intermittent search process, in which a searcher alternates between a diffusive search phase and a ballistic movement phase whose velocity direction is random. A hidden target is introduced within a rectangular domain with reflecting boundaries. If the searcher moves within range of the target and is in the search phase, it has a chance of detecting the target. A quasi-steady-state analysis is applied to the corresponding Chapman-Kolmogorov equation. This generates a reduced Fokker-Planck description of the search process involving a nonzero drift term and an anisotropic diffusion tensor. In the case of a uniform direction distribution, for which there is zero drift, and isotropic diffusion, we use the method of matched asymptotics to compute the mean first passage time (MFPT) to the target, under the assumption that the detection range of the target is much smaller than the size of the domain. We show that an optimal search strategy exists, consistent with previous studies of intermittent search in a radially symmetric domain that were based on a decoupling or moment closure approximation. We also show how the decoupling approximation can break down in the case of biased search processes. Finally, we analyze the MFPT in the case of anisotropic diffusion and find that anisotropy can be useful when the searcher starts from a fixed location. © 2011 American Physical Society.
Contact processes on random graphs with power law degree distributions have critical value 0
Chatterjee, Shirshendu; 10.1214/09-AOP471
2009-01-01
If we consider the contact process with infection rate λ on a random graph on n vertices with power law degree distributions, mean field calculations suggest that the critical value λ_c of the infection rate is positive if the power α > 3. Physicists seem to regard this as an established fact, since the result has recently been generalized to bipartite graphs by Gómez-Gardeñes et al. [Proc. Natl. Acad. Sci. USA 105 (2008) 1399-1404]. Here, we show that the critical value λ_c is zero for any value of α > 3, and the contact process starting from all vertices infected, with a probability tending to 1 as n → ∞, maintains a positive density of infected sites for time at least exp(n^{1-δ}) for any δ > 0. Using the last result, together with the contact process duality, we can establish the existence of a quasi-stationary distribution in which a randomly chosen vertex is occupied with probability ρ(λ). It is expected that ρ(λ) ∼ ...
Generating Functionals of Random Packing Point Processes: From Hard-Core to Carrier Sensing
Viet, Nguyen Tien
2012-01-01
In this paper we study the generating functionals of several models of random packing processes: the classical Matérn hard-core model; its extensions, the k-Matérn models and the ∞-Matérn model, which is an example of a random sequential packing process. The main new results are: 1) a sufficient condition for the ∞-Matérn model to be well-defined (unlike the other two, the ∞-Matérn model may not be well-defined on unbounded spaces); 2) the generating functional of the resulting point process, which is given for each of the three models as the solution of a differential equation; 3) series representations and bounds on the generating functional of the packing models; 4) moment measures and other useful properties of the considered packing models which are derived from their generating functionals. These results are applied to various stochastic geometry problems and in particular to the modeling and analysis of a wireless Carrier Sensing Multiple Access network.
Coevolution of Information Processing and Topology in Hierarchical Adaptive Random Boolean Networks
Gorski, Piotr J; Holyst, Janusz A
2015-01-01
Random Boolean networks (RBNs) are frequently employed for modelling complex systems driven by information processing, e.g. for gene regulatory networks (GRNs). Here we propose a hierarchical adaptive RBN (HARBN) as a system consisting of distinct adaptive RBNs - subnetworks - connected by a set of permanent interlinks. Information measures and internal subnetworks topology of HARBN coevolve and reach steady-states that are specific for a given network structure. We investigate mean node information, mean edge information as well as a mean node degree as functions of model parameters and demonstrate HARBN's ability to describe complex hierarchical systems.
Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units
Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark
2012-02-01
We study the implementation of classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPUs) with hundreds of cores and high memory bandwidth. The key to optimizing the performance of GPU computing is the proper handling of the data structure. Utilizing multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In typical simulations, we find a speed-up of over two thousand times compared to the single-threaded CPU implementation.
ON THE STRUCTURES OF RANDOM MEASURE AND POINT PROCESS CONVOLUTION SEMIGROUPS
He Yuanjiang
1996-01-01
Let D be a convolution semigroup of random measures or point processes on a locally compact second countable T2 space. There is a topological isomorphism from D into a subsemigroup of the product topological semigroup (ℝ₊, +)^ℕ. D is a sequentially stable and D-separable ZH-semigroup, as well as a metrizable, stable and normable Hun semigroup, so it has the corresponding properties. In particular, the author gives a new and simple proof, by the ZH-semigroup approach or the Hun semigroup approach, that D has property ILID (an infinitesimal array limit is infinitely divisible), and determines the Baire types to which some subsets of D belong.
A new laterally conductive bridge random access memory by fully CMOS logic compatible process
Hsieh, Min-Che; Chin, Yung-Wen; Lin, Yu-Cheng; Chih, Yu-Der; Tsai, Kan-Hsueh; Tsai, Ming-Jinn; King, Ya-Chin; Lin, Chrong Jung
2014-01-01
This paper proposes a novel laterally conductive bridge random access memory (L-CBRAM) module using a fully CMOS logic compatible process. A contact buffer layer between the poly-Si and contact plug enables the lateral Ti-based atomic layer to provide on/off resistance ratio via bipolar operations. The proposed device reached more than 100 pulse cycles with an on/off ratio over 10 and very stable data retention under high temperature operations. These results make this Ti-based L-CBRAM cell a promising solution for advanced embedded multi-time programmable (MTP) memory applications.
Mironowicz, Piotr; Tavakoli, Armin; Hameedi, Alley; Marques, Breno; Pawłowski, Marcin; Bourennane, Mohamed
2016-06-01
Quantum communication with systems of dimension larger than two provides advantages in information processing tasks. Examples include higher rates of key distribution and random number generation. The main disadvantage of using such multi-dimensional quantum systems is the increased complexity of the experimental setup. Here, we analyze a not-so-obvious problem: the relation between randomness certification and the computational requirements of post-processing the experimental data. In particular, we consider semi-device independent randomness certification from an experiment using a four-dimensional quantum system to violate the classical bound of a random access code. Using state-of-the-art techniques, a smaller quantum violation requires more computational power to demonstrate randomness, which at some point becomes impossible with today's computers although the randomness is (probably) still there. We show that by dedicating more input settings of the experiment to randomness certification, and then applying more computational post-processing to the experimental data corresponding to a quantum violation, one may increase the amount of certified randomness. Furthermore, we introduce a method that significantly lowers the computational complexity of randomness certification. Our results show how more randomness can be generated without altering the hardware and indicate a path for future semi-device independent protocols to follow.
Tsai, C.; Hung, R. J.
2015-12-01
This study attempts to apply queueing theory to develop a stochastic framework that can account for the random-sized batch arrivals of incoming sediment particles into receiving waters. Sediment particles, the control volume, and the mechanics of sediment transport (suspension, deposition and resuspension) are treated as the customers, the service facility and the server, respectively, in queueing theory. In the framework, the stochastic diffusion particle tracking model (SD-PTM) and resuspension of particles are included to simulate the random transport trajectories of suspended particles. The most distinguishing characteristic of queueing theory is that customers arrive at the service facility in a random manner. In analogy to sediment transport, this characteristic is adopted to model the random-sized batch arrival process of sediment particles, including the random occurrences and random magnitudes of incoming sediment particles. The random occurrences of arrivals are simulated by a Poisson process, while the number of sediment particles in each arrival can be simulated by a binomial distribution. Simulations of random arrivals and random magnitudes are carried out individually for comparison with the random-sized batch arrival simulations. Simulation results give a probabilistic description of discrete sediment transport through ensemble statistics (ensemble means and variances) of sediment concentrations and transport rates. Results reveal that the different mechanisms of incoming particles result in different ensemble variances of concentrations and transport rates under the same mean incoming rate of sediment particles.
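The batch-arrival mechanism described above is a compound Poisson process whose ensemble moments can be checked directly by simulation. The parameter values below (5 expected arrivals per period, Binomial(10, 0.3) batch sizes) are illustrative assumptions, not values from the study:

```python
import math, random

def poisson(lam, rng):
    """Knuth's multiplicative method for a Poisson variate (fine for small lam)."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def batch_arrival_total(lam, n, p, rng):
    """Total particles in one period: Poisson number of arrivals, Binomial batches."""
    return sum(
        sum(rng.random() < p for _ in range(n))  # one Binomial(n, p) batch
        for _ in range(poisson(lam, rng))
    )

rng = random.Random(7)
lam, n, p, reps = 5.0, 10, 0.3, 20000
totals = [batch_arrival_total(lam, n, p, rng) for _ in range(reps)]
mean = sum(totals) / reps
var = sum((t - mean) ** 2 for t in totals) / reps
# Compound Poisson moments: E[S] = lam * n * p = 15 and
# Var[S] = lam * E[B^2] = lam * (n*p*(1-p) + (n*p)**2) = 55.5.
```

Note that the ensemble variance (55.5) is much larger than the Poisson-only value (15): the batch sizes inflate the variance while leaving the mean rate fixed, which is the distinction between incoming-particle mechanisms the abstract highlights.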
Le Bihan, Nicolas; Margerin, Ludovic
2009-07-01
In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
Dong HAN; Xin Sheng ZHANG; Wei An ZHENG
2008-01-01
We consider the asymptotic probability distribution of the size of a reversible random coagulation-fragmentation process in the thermodynamic limit. We prove that the distributions of small, medium and the largest clusters converge to Gaussian, Poisson and 0-1 distributions in the supercritical stage (post-gelation), respectively. We show also that the mutually dependent distributions of clusters become independent after the occurrence of a gelation transition. Furthermore, it is proved that all the number distributions of clusters are mutually independent at the critical stage (gelation), but the distributions of medium and the largest clusters are mutually dependent with positive correlation coefficient in the supercritical stage. When the fragmentation strength goes to zero, there will exist only two types of clusters in the process: one type consists of the smallest clusters, the other is the largest one, which has a size nearly equal to the volume (total number of units).
ZHANG Yafang; TANG Chun'an; LIU Hao
2006-01-01
Based on the essential assumption of meso-heterogeneity of the material, the macroscopic characteristics of a composite reinforced with particles, together with the crack initiation, propagation and failure process in the composite, were studied by using a numerical code. The composite is subjected to uniaxial tension, and stiff or soft particles are distributed in a random manner but without overlapping or contacting. The effect of reinforcement particle properties on the fracture process and mechanism of a composite with brittle matrix is examined; furthermore, the influence of the particle volumetric fraction is also investigated. Numerical results present the different failure modes and reproduce the crack initiation, propagation and coalescence in the brittle and heterogeneous matrix. The mechanism of such failure is also elucidated.
Coevolution of information processing and topology in hierarchical adaptive random Boolean networks
Górski, Piotr J.; Czaplicka, Agnieszka; Hołyst, Janusz A.
2016-02-01
Random Boolean Networks (RBNs) are frequently used for modeling complex systems driven by information processing, e.g. for gene regulatory networks (GRNs). Here we propose a hierarchical adaptive random Boolean Network (HARBN) as a system consisting of distinct adaptive RBNs (ARBNs) - subnetworks - connected by a set of permanent interlinks. We investigate mean node information, mean edge information as well as mean node degree. Information measures and internal subnetworks topology of HARBN coevolve and reach steady-states that are specific for a given network structure. The main natural feature of ARBNs, i.e. their adaptability, is preserved in HARBNs and they evolve towards critical configurations which is documented by power law distributions of network attractor lengths. The mean information processed by a single node or a single link increases with the number of interlinks added to the system. The mean length of network attractors and the mean steady-state connectivity possess minima for certain specific values of the quotient between the density of interlinks and the density of all links in networks. It means that the modular network displays extremal values of its observables when subnetworks are connected with a density a few times lower than a mean density of all links.
Hot-spot model for accretion disc variability as random process
Pechacek, T; Czerny, B
2008-01-01
Theory of random processes provides an attractive mathematical tool to describe the fluctuating signal from accreting sources, such as active galactic nuclei and Galactic black holes observed in X-rays. These objects exhibit featureless variability on different timescales, probably originating from an accretion disc. We study the basic features of the power spectra in terms of a general framework, which permits semi-analytical determination of the power spectral density (PSD) of the resulting light curve. We consider the expected signal generated by an ensemble of spots randomly created on the accretion disc surface. Spot generation is governed by Poisson or by Hawkes processes. We include general relativity effects shaping the signal on its propagation to a distant observer. We analyse the PSD of a spotted disc light curve and show the accuracy of our semi-analytical approach by comparing the obtained PSD with the results of Monte Carlo simulations. The asymptotic slopes of PSD are 0 at low frequencies and t...
Language-based social feedback processing with randomized "senders": An ERP study.
Schindler, Sebastian; Kissler, Johanna
2017-02-06
Recently, several event-related potential (ERP) studies investigated the impact of sender attributions on language-based social feedback processing. Results showed very early responses to the social context, while interactions or effects of emotional content started later. However, in these studies, sender attribution was varied across blocks, possibly inducing unspecific, anticipatory effects. Here, who was giving feedback was disclosed simultaneously with the decision itself. Participants' ERPs differentiated between attributed senders starting with the early posterior negativity. P3 and late positive potential (LPP) amplitudes were also enlarged for the "human sender". Emotion effects occurred in the P3 and LPP time windows. Further, we found an interaction on the P3: "Human" emotional feedback was selectively amplified. Source analysis localized enhanced processing of "human"-generated feedback in visual areas from around 300 ms after feedback onset and from 400 ms also in temporal regions. Enhancement of "human" emotional feedback resulted from increased activations in the left visual word form area. These findings highlight that decoding who is giving feedback precedes content processing, both in blocked and in randomly alternating situations. Further, in quasi-realistic social contexts, processing of emotional content is selectively amplified. Finally, involvement of semantic language processing structures indicates reintegration of words in a salient context.
Gramkow, Claus
1999-01-01
In this article two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...... approximations to the Riemannian metric, and that the subsequent corrections are inherent in the least squares estimation. Keywords: averaging rotations, Riemannian metric, matrix, quaternion...
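The barycenter approach discussed in the abstract can be illustrated with a small sketch (a hypothetical example, not the authors' code): unit quaternions are sign-aligned, averaged componentwise, and projected back onto the unit sphere. For tightly clustered rotations this closely approximates the Riemannian mean.

```python
import math

def quat_z(theta):
    """Unit quaternion (w, x, y, z) for a rotation of theta radians about the z-axis."""
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

def barycenter_mean(quats):
    """Naive mean: Euclidean barycenter of the quaternions, renormalized.
    Signs must be aligned first, since q and -q encode the same rotation."""
    ref = quats[0]
    acc = [0.0, 0.0, 0.0, 0.0]
    for q in quats:
        if sum(a * b for a, b in zip(q, ref)) < 0:
            q = tuple(-c for c in q)       # flip to the same hemisphere as ref
        acc = [a + c for a, c in zip(acc, q)]
    n = math.sqrt(sum(c * c for c in acc))
    return tuple(c / n for c in acc)       # project back onto the unit sphere

angles = [0.1, 0.2, 0.3, 0.4]              # rotations clustered about the z-axis
mean_q = barycenter_mean([quat_z(a) for a in angles])
mean_angle = 2 * math.atan2(mean_q[3], mean_q[0])
```

For this tightly clustered example the recovered angle is very close to the arithmetic mean 0.25 rad, which is also the Riemannian mean here; the approximation degrades as the rotations spread out, which is the point the abstract makes.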
Solar cycles or random processes? Evaluating solar variability in Holocene climate records.
Turner, T Edward; Swindles, Graeme T; Charman, Dan J; Langdon, Peter G; Morris, Paul J; Booth, Robert K; Parry, Lauren E; Nichols, Jonathan E
2016-04-05
Many studies have reported evidence for solar-forcing of Holocene climate change across a range of archives. These studies have compared proxy-climate data with records of solar variability (e.g. (14)C or (10)Be), or have used time series analysis to test for the presence of solar-type cycles. This has led to some climate sceptics misrepresenting this literature to argue strongly that solar variability drove the rapid global temperature increase of the twentieth century. As proxy records underpin our understanding of the long-term processes governing climate, they need to be evaluated thoroughly. The peatland archive has become a prominent line of evidence for solar forcing of climate. Here we examine high-resolution peatland proxy climate data to determine whether solar signals are present. We find a wide range of significant periodicities similar to those in records of solar variability: periods between 40-100 years, and 120-140 years are particularly common. However, periodicities similar to those in the data are commonly found in random-walk simulations. Our results demonstrate that solar-type signals can be the product of random variations alone, and that a more critical approach is required for their robust interpretation.
Michaľčonok, German; Kalinová, Michaela Horalová; Németh, Martin
2014-12-01
The aim of this paper is to present the possibilities of applying data mining techniques to the analysis of structural relationships in systems of stationary random processes. We introduce the area of random processes, describe the process of structural analysis, and select suitable data mining methods applicable to structural analysis. We then propose a methodology for structural analysis in systems of stationary stochastic processes using data mining methods with an active experimental approach, built on this theoretical basis.
Li, Quan-Lin
2010-01-01
In this paper, we provide a novel matrix-analytic approach for studying the doubly exponential solution of randomized load balancing models (also known as supermarket models) with Markovian arrival processes (MAPs) and PH service times. We describe the supermarket model as a system of differential vector equations and obtain a closed-form solution, of doubly exponential structure, for the fixed point of this system. Based on this, we show that the fixed point decomposes into two groups of information under a product form, the arrival information and the service information, and indicate that the doubly exponential solution of the fixed point is not always unique for more general supermarket models. Furthermore, we analyze the exponential convergence of the current location of the supermarket model to its fixed point, and study the Lipschitz condition in the Kurtz Theorem under MAP arrivals and PH service times. This paper gains a new understanding of how the workload probing can...
Hierarchical random cellular neural networks for system-level brain-like signal processing.
Kozma, Robert; Puljic, Marko
2013-09-01
Sensory information processing and cognition in brains are modeled using dynamic systems theory. The brain's dynamic state is described by a trajectory evolving in a high-dimensional state space. We introduce a hierarchy of random cellular automata as the mathematical tools to describe the spatio-temporal dynamics of the cortex. The corresponding brain model is called neuropercolation which has distinct advantages compared to traditional models using differential equations, especially in describing spatio-temporal discontinuities in the form of phase transitions. Phase transitions demarcate singularities in brain operations at critical conditions, which are viewed as hallmarks of higher cognition and awareness experience. The introduced Monte-Carlo simulations obtained by parallel computing point to the importance of computer implementations using very large-scale integration (VLSI) and analog platforms.
Zone inhomogeneity with the random asymmetric simple exclusion process in a one-lane system
Xiao Song; Cai Jiu-Ju; Liu Fei
2009-01-01
In this paper we use theoretical analysis and extensive simulations to study zone inhomogeneity with the random asymmetric simple exclusion process (ASEP). In the inhomogeneous zone, the hopping probability is less than 1. Two typical lattice geometries are investigated here. In case A, the lattice includes two equal segments. The hopping probability in the left segment is equal to 1, and in the right segment it is equal to p, which is less than 1. In case B, there are three equal segments in the system; the hopping probabilities in the left and right segments are equal to 1, and in the middle segment it is equal to p, which is less than 1. Through theoretical analysis, we can discover the effect on these systems when p is changed.
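The case-A geometry lends itself to a quick illustrative simulation (a sketch under assumed boundary rates α and β and an assumed p, not the authors' setup): a random-update TASEP with open boundaries in which sites in the right half hop with probability p < 1.

```python
import random

def simulate_asep(L=200, p_slow=0.5, alpha=0.3, beta=0.3, steps=200000, seed=1):
    """Random-update TASEP, open boundaries; right half is the slow zone (case A)."""
    rng = random.Random(seed)
    tau = [0] * L                          # occupation numbers, 0 or 1
    exits = 0
    for _ in range(steps):
        i = rng.randrange(-1, L)           # -1 selects the left boundary
        if i == -1:                        # injection attempt with probability alpha
            if tau[0] == 0 and rng.random() < alpha:
                tau[0] = 1
        elif i == L - 1:                   # extraction attempt with probability beta
            if tau[i] == 1 and rng.random() < beta:
                tau[i] = 0
                exits += 1
        else:                              # bulk hop i -> i+1
            p = 1.0 if i < L // 2 else p_slow
            if tau[i] == 1 and tau[i + 1] == 0 and rng.random() < p:
                tau[i], tau[i + 1] = 0, 1
    return sum(tau) / L, exits / steps     # mean density, boundary current per update

rho, current = simulate_asep()
```

Varying `p_slow` in such a sketch shows the qualitative effect studied in the paper: the slow zone acts as a bottleneck that reshapes the density profile and caps the current.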
The emergence of typical entanglement in two-party random processes
Dahlsten, O C O [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, South Kensington London, SW7 2PG (United Kingdom); Oliveira, R [Instituto Nacional de Matematica Pura e Aplicada-IMPA Estrada Dona Castorina, 110 Jardim Botanico 22460-320, Rio de Janeiro, RJ (Brazil); Plenio, M B [Institute for Mathematical Sciences, Imperial College London, 53 Prince's Gate, South Kensington London, SW7 2PG (United Kingdom)
2007-07-13
We investigate the entanglement within a system undergoing a random, local process. We find that there is initially a phase of very fast generation and spread of entanglement. At the end of this phase the entanglement is typically maximal. In Oliveira et al (2007 Phys. Rev. Lett. 98 130502) we proved that the maximal entanglement is reached to a fixed arbitrary accuracy within O(N^3) steps, where N is the total number of qubits. Here we provide a detailed and more pedagogical proof. We demonstrate that one can use the so-called stabilizer gates to simulate this process efficiently on a classical computer. Furthermore, we discuss three ways of identifying the transition from the phase of rapid spread of entanglement to the stationary phase: (i) the time when saturation of the maximal entanglement is achieved, (ii) the cutoff moment, when the entanglement probability distribution is practically stationary, and (iii) the moment block entanglement exhibits volume scaling. We furthermore investigate the mixed state and multipartite setting. Numerically, we find that the mutual information appears to behave similarly to the quantum correlations and that there is a well-behaved phase-space flow of entanglement properties towards an equilibrium. We describe how the emergence of typical entanglement can be used to create a much simpler tripartite entanglement description. The results form a bridge between certain abstract results concerning typical (also known as generic) entanglement relative to an unbiased distribution on pure states and the more physical picture of distributions emerging from random local interactions.
Bessel processes and hyperbolic Brownian motions stopped at different random times
D'Ovidio, Mirko
2010-01-01
Iterated Bessel processes R^\gamma(t), t>0, \gamma>0 and their counterparts on hyperbolic spaces, i.e. hyperbolic Brownian motions B^{hp}(t), t>0, are examined and their probability laws derived. The higher-order partial differential equations governing the distributions of I_R(t) = R_1^\gamma(R_2^\gamma(t)), t>0 and J_R(t) = R_1^\gamma(|R_2^\gamma(t)|^2), t>0 are obtained and discussed. Processes of the form R^\gamma(T_t), t>0, B^{hp}(T_t), t>0, where T_t = inf{s: B(s)=t}, are examined and numerous probability laws derived, including the Student law, the arcsine laws (also their asymmetric versions), the Lamperti distribution of the ratio of independent positively skewed stable random variables, and others. For the process R^\gamma(T^\mu_t), t>0 (where T^\mu_t = inf{s: B^\mu(s)=t} and B^\mu is a Brownian motion with drift \mu) the explicit probability law and the governing equation are obtained. For the hyperbolic Brownian motions on the Poincaré half-spaces H^+_2, H^+_3 we study B^{hp}(T_t), t>0 and the corresp...
Young, Vershawn Ashanti
2004-01-01
"Your Average Nigga" contends that just as exaggerating the differences between black and white language leaves some black speakers, especially those from the ghetto, at an impasse, so exaggerating and reifying the differences between the races leaves blacks in the impossible position of either having to try to be white or forever struggling to…
Gramkow, Claus
2001-01-01
In this paper two common approaches to averaging rotations are compared to a more advanced approach based on a Riemannian metric. Very often the barycenter of the quaternions or matrices that represent the rotations is used as an estimate of the mean. These methods neglect that rotations belong...
Dias, Karin Ziliotto; Jutras, Benoît; Acrani, Isabela Olszanski; Pereira, Liliane Desgualdo
2012-02-01
The aim of the present study was to assess the auditory temporal resolution ability in individuals with central auditory processing disorders, to examine the maturation effect and to investigate the relationship between the performance on a temporal resolution test and the performance on other central auditory tests. Participants were divided into two groups: 131 with Central Auditory Processing Disorder and 94 with normal auditory processing. They had pure-tone air-conduction thresholds no poorer than 15 dB HL bilaterally, normal admittance measures and presence of acoustic reflexes. Also, they were assessed with a central auditory test battery. Participants who failed at least one test were included in the Central Auditory Processing Disorder group, and those in the control group obtained normal performance on all tests. Following the auditory processing assessment, the Random Gap Detection Test was administered to the participants. A three-way ANOVA was performed. Correlation analyses were also done among the four Random Gap Detection Test subtest scores, as well as between the Random Gap Detection Test data and the other auditory processing test results. There was a significant difference between the age-group performances in children with and without Central Auditory Processing Disorder. Also, 48% of children with Central Auditory Processing Disorder failed the Random Gap Detection Test, and the percentage decreased as a function of age. The highest percentage (86%) was found in the 5-6 year-old children. Furthermore, results revealed a strong significant correlation among the four Random Gap Detection Test subtests. There was a modest correlation between the Random Gap Detection Test results and the dichotic listening tests. No significant correlation was observed between the Random Gap Detection Test data and the results of the other tests in the battery. Random Gap Detection Test should not be administered to children younger than 7 years old because
Rui Nouchi
BACKGROUND: The beneficial effects of brain training games are expected to transfer to other cognitive functions, but these beneficial effects are poorly understood. Here we investigate the impact of the brain training game (Brain Age) on cognitive functions in the elderly. METHODS AND RESULTS: Thirty-two elderly volunteers were recruited through an advertisement in the local newspaper and randomly assigned to either of two game groups (Brain Age, Tetris). This study was completed by 14 of the 16 members in the Brain Age group and 14 of the 16 members in the Tetris group. To maximize the benefit of the interventions, all participants were non-gamers who reported playing less than one hour of video games per week over the past 2 years. Participants in both the Brain Age and the Tetris groups played their game for about 15 minutes per day, at least 5 days per week, for 4 weeks. Each group played for a total of about 20 days. Measures of the cognitive functions were conducted before and after training. Measures of the cognitive functions fell into four categories (global cognitive status, executive functions, attention, and processing speed). Results showed that the effects of the brain training game transferred to executive functions and to processing speed. However, the brain training game showed no transfer effect on global cognitive status or attention. CONCLUSIONS: Our results showed that playing Brain Age for 4 weeks can improve cognitive functions (executive functions and processing speed) in the elderly. This result indicates the possibility that the elderly can improve executive functions and processing speed with short-term training. The results need replication in large samples. Long-term effects and relevance for every-day functioning remain uncertain as yet. TRIAL REGISTRATION: UMIN Clinical Trial Registry 000002825.
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2007-01-01
A Bayesian model averaging procedure is presented within the class of vector autoregressive (VAR) processes and applied to two empirical issues. First, stability of the "Great Ratios" in U.S. macro-economic time series is investigated, together with the presence and effects of permanent s
Covariant approximation averaging
Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2014-01-01
We present a new class of statistical error reduction techniques for Monte-Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in $N_f=2+1$ lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte-Carlo calculations over conventional methods for the same cost.
Zhou, L.; Qu, Z. G.; Ding, T.; Miao, J. Y.
2016-04-01
The gas-solid adsorption process in reconstructed random porous media is numerically studied with the lattice Boltzmann (LB) method at the pore scale with consideration of interparticle, interfacial, and intraparticle mass transfer performances. Adsorbent structures are reconstructed in two dimensions by employing the quartet structure generation set approach. To implement boundary conditions accurately, all the porous interfacial nodes are recognized and classified into 14 types using a proposed universal program called the boundary recognition and classification program. The multiple-relaxation-time LB model and single-relaxation-time LB model are adopted to simulate flow and mass transport, respectively. The interparticle, interfacial, and intraparticle mass transfer capacities are evaluated with the permeability factor and interparticle transfer coefficient, Langmuir adsorption kinetics, and the solid diffusion model, respectively. Adsorption processes are performed in two groups of adsorbent media with different porosities and particle sizes. External and internal mass transfer resistances govern the adsorption system. A large porosity leads to an early time for adsorption equilibrium because of the controlling factor of external resistance. External and internal resistances are dominant at small and large particle sizes, respectively. Particle size, under which the total resistance is minimum, ranges from 3 to 7 μm with the preset parameters. Pore-scale simulation clearly explains the effect of both external and internal mass transfer resistances. The present paper provides both theoretical and practical guidance for the design and optimization of adsorption systems.
Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations
Marco Lanuzza
2011-04-01
Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed considering the particular case of the design of energy aware adder circuits. Five well known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide semiconductor (CMOS) standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (which are conventionally identified as low-energy slow architectures) operating at higher power supply voltages can achieve a timing yield significantly better than more complex faster adders when used in low-power design with supply voltages lower than nominal.
Mohammad Fathi
2012-01-01
Bank erosion in populated areas can cause fatalities and property damage if banks collapse abruptly, compromising the integrity of residential buildings and civil facilities. Bank erosion study is in general a very complex problem because it involves multiple processes such as bank surface erosion, bank toe erosion and bank material mechanical failure. Each of these processes is related to several parameters: sediment size distribution, bank material cohesion, slope, homogeneity, consolidation, soil moisture and ground water level, as well as bank height. The bank erosion rate is also related to the strength of the flow in the river, indicated by the flow shear stress, water depth, and channel curvature. In this study, the numerical model CCHE2D has been applied to study real-world bank erosion cases in a mountain river, the Khoske Rud Farsan River, Iran, which is a braided river with high sediment loads and channel mobility; the bank erosion of this river is dominated by floods during rainy seasons.
Extended power-law scaling of heavy-tailed random fields or processes
A. Guadagnini
2012-06-01
We analyze the scaling behaviors of two log permeability data sets showing heavy-tailed frequency distributions in three and two spatial dimensions, respectively. One set consists of 1-m scale pneumatic packer test data from six vertical and inclined boreholes spanning a decameters scale block of unsaturated fractured tuffs near Superior, Arizona; the other of pneumatic minipermeameter data measured at a spacing of 15 cm along two horizontal transects on a 21 m long outcrop of lower-shoreface bioturbated sandstone near Escalante, Utah. Order q sample structure functions of each data set scale as a power ξ(q) of separation scale or lag, s, over limited ranges of s. A procedure known as Extended Self-Similarity (ESS) extends this range to all lags and yields a nonlinear (concave) functional relationship between ξ(q) and q. Whereas the literature tends to associate extended and nonlinear power-law scaling with multifractals or fractional Laplace motions, we have shown elsewhere that (a) ESS of data having a normal frequency distribution is theoretically consistent with (Gaussian) truncated (additive, self-affine, monofractal) fractional Brownian motion (tfBm), the latter being unique in predicting a breakdown in power-law scaling at small and large lags, and (b) nonlinear power-law scaling of data having either normal or heavy-tailed frequency distributions is consistent with samples from sub-Gaussian random fields or processes subordinated to tfBm, stemming from lack of ergodicity which causes sample moments to scale differently than do their ensemble counterparts. Here we (i) demonstrate that the above two data sets are consistent with sub-Gaussian random fields subordinated to tfBm and (ii) provide maximum likelihood estimates of parameters characterizing the corresponding Lévy stable subordinators and tfBm functions.
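The order-q structure functions underlying this kind of analysis can be sketched as follows. The example uses an ordinary Brownian path (an assumption for illustration, not the paper's data), for which the monofractal baseline ξ(q) = q/2, linear in q, should be recovered; concavity of the estimated ξ(q) is the nonlinearity the abstract discusses.

```python
import math
import random

def structure_function(x, q, s):
    """Order-q sample structure function S_q(s) = <|x(t+s) - x(t)|^q>."""
    incs = [abs(x[i + s] - x[i]) ** q for i in range(len(x) - s)]
    return sum(incs) / len(incs)

random.seed(3)
walk = [0.0]
for _ in range(20000):                     # discrete Brownian path
    walk.append(walk[-1] + random.gauss(0.0, 1.0))

lags = [4, 8, 16, 32, 64]
xi = {}
for q in (1, 2):
    logs = [math.log(structure_function(walk, q, s)) for s in lags]
    # power-law exponent = slope of log S_q versus log s over the fitted range
    xi[q] = (logs[-1] - logs[0]) / (math.log(lags[-1]) - math.log(lags[0]))
```

For heavy-tailed or sub-Gaussian data the same computation yields a concave ξ(q), and ESS amounts to regressing log S_q(s) against log S_3(s) (or another reference order) instead of log s.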
Negative Average Preference Utilitarianism
Roger Chao
2012-03-01
For many philosophers working in the area of Population Ethics, it seems that either they have to confront the Repugnant Conclusion (where they are forced to the conclusion of creating massive numbers of lives barely worth living), or they have to confront the Non-Identity Problem (where no one is seemingly harmed, as their existence is dependent on the "harmful" event that took place). To them it seems there is no escape: they either have to face one problem or the other. However, there is a way around this, allowing us to escape the Repugnant Conclusion, by using what I will call Negative Average Preference Utilitarianism (NAPU), which, though similar to anti-frustrationism, has some important differences in practice. Current "positive" forms of utilitarianism have struggled to deal with the Repugnant Conclusion, as their theory actually entails this conclusion; however, a form of Negative Average Preference Utilitarianism (NAPU) easily escapes this dilemma (it never even arises within it).
Multi-fidelity Gaussian process regression for prediction of random fields
Parussini, L. [Department of Engineering and Architecture, University of Trieste (Italy); Venturi, D., E-mail: venturi@ucsc.edu [Department of Applied Mathematics and Statistics, University of California Santa Cruz (United States); Perdikaris, P. [Department of Mechanical Engineering, Massachusetts Institute of Technology (United States); Karniadakis, G.E. [Division of Applied Mathematics, Brown University (United States)
2017-05-01
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
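A heavily simplified sketch of the recursive idea (illustrative only: the kernels, the fixed scaling factor rho, and the two test functions are assumptions, and the actual method handles vector-valued fields and estimates all hyperparameters): fit a GP to dense cheap-model data, then a second GP to the sparse residuals of the expensive model.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_mean(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of zero-mean GP regression with a small jitter term."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# hypothetical low- and high-fidelity models (assumptions for illustration)
f_lo = lambda x: np.sin(8.0 * x)
f_hi = lambda x: np.sin(8.0 * x) + 0.3 * x

x_lo = np.linspace(0.0, 1.0, 25)      # dense, cheap observations
x_hi = np.linspace(0.0, 1.0, 5)       # sparse, expensive observations
x_test = np.linspace(0.0, 1.0, 50)

# level 1: GP on the cheap model; level 2: GP on the discrepancy at the
# expensive points (rho fixed at 1 here; in co-kriging it is estimated)
rho = 1.0
mu_lo = gp_mean(x_lo, f_lo(x_lo), x_test)
resid = f_hi(x_hi) - rho * gp_mean(x_lo, f_lo(x_lo), x_hi)
mu_multi = rho * mu_lo + gp_mean(x_hi, resid, x_test)

err = float(np.max(np.abs(mu_multi - f_hi(x_test))))
```

The point of the construction: five expensive samples alone cannot resolve sin(8x), but they easily correct the smooth discrepancy between the two fidelity levels, so the combined predictor tracks the expensive model closely.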
The Ghirlanda-Guerra identities without averaging
Chatterjee, Sourav
2009-01-01
The Ghirlanda-Guerra identities are one of the most mysterious features of spin glasses. We prove the GG identities in a large class of models that includes the Edwards-Anderson model, the random field Ising model, and the Sherrington-Kirkpatrick model in the presence of a random external field. Previously, the GG identities were rigorously proved only `on average' over a range of temperatures or under small perturbations.
李云霞; 张立新
2005-01-01
In this paper, the least squares estimator in the problem of multiple change-point estimation is studied. Here, moving-average processes of ALNQD sequences with mean shifts are discussed. When the number of change points is known, the rate of convergence of the change-point estimators is derived. The result is also true for ρ-mixing, φ-mixing, α-mixing, associated and negatively associated sequences under suitable conditions.
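For a single mean shift, the least squares estimator discussed above reduces to minimizing the residual sum of squares over all candidate split points (a minimal sketch, not the paper's multiple-change-point procedure):

```python
def change_point(y):
    """Least squares estimate of a single mean-shift location:
    the split k minimizing the within-segment residual sum of squares."""
    n = len(y)
    best_k, best_rss = None, float("inf")
    for k in range(1, n):
        m1 = sum(y[:k]) / k
        m2 = sum(y[k:]) / (n - k)
        rss = (sum((v - m1) ** 2 for v in y[:k])
               + sum((v - m2) ** 2 for v in y[k:]))
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k

y = [0.0] * 50 + [2.0] * 50   # noiseless mean shift at index 50
k_hat = change_point(y)
```

With noise or with dependent errors such as the moving averages studied in the paper, the same criterion is used; the dependence structure affects the rate at which the estimate concentrates around the true change point, which is what the convergence-rate result quantifies.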
Sim, Aaron; Liepe, Juliane; Stumpf, Michael P. H.
2015-04-01
The Goldstein-Kac telegraph process describes the one-dimensional motion of particles with constant speed undergoing random changes in direction. Despite its resemblance to numerous real-world phenomena, the singular nature of the resultant spatial distribution of each particle precludes the possibility of any a posteriori empirical validation of this random-walk model from data. Here we show that by simply allowing for random speeds, the ballistic terms are regularized and that the diffusion component can be well-approximated via the unscented transform. The result is a computationally efficient yet robust evaluation of the full particle path probabilities and, hence, the parameter likelihoods of this generalized telegraph process. We demonstrate how a population diffusing under such a model can lead to non-Gaussian asymptotic spatial distributions, thereby mimicking the behavior of an ensemble of Lévy walkers.
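A minimal sketch of the generalized process (the reversal rate and the speed distribution are assumptions for illustration): a particle moves at its current speed until a Poisson-timed reversal, at which point a fresh random speed is drawn.

```python
import random

def telegraph_path(t_end=1.0, rate=5.0, seed=0):
    """One sample path of a telegraph process with random speeds:
    direction reverses at Poisson(rate) times; a new |N(1, 0.2)| speed
    is drawn at every reversal (the speed law is an assumption here)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    direction = rng.choice([-1, 1])
    v = abs(rng.gauss(1.0, 0.2))
    while True:
        dt = rng.expovariate(rate)         # waiting time to the next reversal
        if t + dt >= t_end:
            x += direction * v * (t_end - t)
            return x
        x += direction * v * dt
        t += dt
        direction *= -1                    # reversal ...
        v = abs(rng.gauss(1.0, 0.2))       # ... with a freshly drawn speed

sample = [telegraph_path(seed=s) for s in range(1000)]
mean = sum(sample) / len(sample)
```

Randomizing the speed smears out the singular ballistic fronts of the classical Goldstein-Kac process, which is exactly what makes the path probabilities tractable for likelihood evaluation.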
Statistical mechanical analysis of a hierarchical random code ensemble in signal processing
Obuchi, Tomoyuki [Department of Earth and Space Science, Faculty of Science, Osaka University, Toyonaka 560-0043 (Japan); Takahashi, Kazutaka [Department of Physics, Tokyo Institute of Technology, Tokyo 152-8551 (Japan); Takeda, Koujin, E-mail: takeda@sp.dis.titech.ac.jp [Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Yokohama 226-8502 (Japan)
2011-02-25
We study a random code ensemble with a hierarchical structure, which is closely related to the generalized random energy model with discrete energy values. Based on this correspondence, we analyze the hierarchical random code ensemble by using the replica method in two situations: lossy data compression and channel coding. For both situations, the exponents of the large deviation analysis characterizing the performance of the ensemble, namely the distortion rate of lossy data compression and the error exponent of channel coding in Gallager's formalism, are accessible via a generating function of the generalized random energy model. We argue that the transitions of these exponents observed in preceding work can be interpreted as phase transitions with respect to the replica number. We also show that replica symmetry breaking plays an essential role in these transitions.
Growth of Preferential Attachment Random Graphs Via Continuous-Time Branching Processes
Krishna B Athreya; Arka P Ghosh; Sunder Sethuraman
2008-08-01
Some growth asymptotics of a version of `preferential attachment’ random graphs are studied through an embedding into a continuous-time branching scheme. These results complement and extend previous work in the literature.
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
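Stable noise of the kind driving such fractional diffusion equations can be generated with the Chambers-Mallows-Stuck transform; the sketch below (with an assumed stability index of 1.2, symmetric case) simply illustrates the heavy tails that distinguish it from Gaussian noise.

```python
import math
import random

def levy_stable(alpha, rng):
    """Chambers-Mallows-Stuck sampler for a symmetric alpha-stable variate."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)   # uniform angle
    w = rng.expovariate(1.0)                     # standard exponential
    return (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))

rng = random.Random(7)
alpha = 1.2                                      # assumed stability index
sample = [levy_stable(alpha, rng) for _ in range(20000)]

# heavy tails: a noticeable fraction of draws exceeds |x| > 10,
# whereas a standard Gaussian essentially never does
heavy = sum(abs(x) > 10 for x in sample) / len(sample)
```

Feeding such increments into a discretized growth or diffusion equation is the standard way to probe how the exponents β and α respond to the tail index.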
Glynn, R.J.; Koenig, W.; Nordestgaard, B.G.
2010-01-01
Background: Randomized data on statins for primary prevention in older persons are limited, and the relative hazard of cardiovascular disease associated with an elevated cholesterol level weakens with advancing age. Objective: To assess the efficacy and safety of rosuvastatin in persons 70 years......: The 32% of trial participants 70 years or older accrued 49% (n = 194) of the 393 confirmed primary end points. The rates of the primary end point in this age group were 1.22 and 1.99 per 100 person-years of follow-up in the rosuvastatin and placebo groups, respectively ( hazard ratio, 0.61 [95% CI, 0...... greater in older persons. The relative rate of any serious adverse event among older persons in the rosuvastatin versus placebo group was 1.05 ( CI, 0.93 to 1.17). Limitation: Effect estimates from this exploratory analysis with age cut-point chosen after trial completion should be viewed in the context...
Random practice - one of the factors of the motor learning process
Petr Valach
2012-01-01
Full Text Available BACKGROUND: An important concept in acquiring motor skills is random practice (contextual interference, CI). The explanation of the contextual interference effect is that the memory has to work more intensively, and therefore random practice yields higher retention of motor skills than blocked practice. Only active remembering of a motor skill gives it practical value for appropriate use in the future. OBJECTIVE: The aim of this research was to determine the difference in how motor skills in sport gymnastics are acquired and retained under two different teaching methods, blocked and random practice. METHODS: Blocked and random practice on three selected gymnastics tasks were applied in two groups of physical education students (blocked practice: group BP; random practice: group RP) over two months, in one session a week (80 trials in total). At the end of the experiment and 6 months later (retention tests) the groups were tested on the selected gymnastics skills. RESULTS: No significant differences in the level of the gymnastics skills were found between group BP and group RP at the end of the experiment. However, the retention tests showed a significantly higher level of the gymnastics skills in the RP group than in the BP group. CONCLUSION: The results confirmed that retention of the gymnastics skills taught by random practice was significantly higher than with blocked practice.
Liu, Q.; Liu, F.; Turner, I.; Anh, V.
2007-03-01
In this paper we present a random walk model for approximating a Lévy-Feller advection-dispersion process, governed by the Lévy-Feller advection-dispersion differential equation (LFADE). We show that the random walk model converges to LFADE by use of a properly scaled transition to vanishing space and time steps. We propose an explicit finite difference approximation (EFDA) for LFADE, resulting from the Grünwald-Letnikov discretization of fractional derivatives. As a result of the interpretation of the random walk model, the stability and convergence of EFDA for LFADE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
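The Grünwald-Letnikov discretization referenced above replaces an α-th order derivative by a weighted backward sum whose weights w_k = (-1)^k·C(α, k) obey a simple recurrence. A minimal sketch follows; the grid and test function are illustrative assumptions, not the paper's LFADE scheme.

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    computed by the stable recurrence w_k = w_{k-1}*(k-1-alpha)/k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return w

def gl_derivative(f_vals, alpha, h):
    """Approximate the alpha-th derivative at the last point of
    f_vals (uniform grid spacing h) by the Grunwald-Letnikov sum."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_vals[n - k] for k in range(n + 1)) / h ** alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, ...), recovering the ordinary backward difference, which is a convenient sanity check.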
Carter, Ashley J. R.
2002-01-01
Presents a hands-on activity on the phenomenon of genetic drift in populations that reinforces the random nature of drift and demonstrates the effect of the population size on the mean frequency of an allele over a few generations. Includes materials for the demonstration, procedures, and discussion topics. (KHR)
Andersen, Allan T.; Nielsen, Bo Friis
2000-01-01
The implications for the correlation structure when shuffling an exactly second-order self-similar process are examined. We apply the Markovian arrival process (MAP) as a tool to investigate whether general conclusions can be made with regard to the statistical implications of the shuffling experiments...
Hermann, Philipp; Mrkvička, Tomáš; Mattfeldt, Torsten; Minárová, Mária; Helisová, Kateřina; Nicolis, Orietta; Wartner, Fabian; Stehlík, Milan
2015-08-15
Fractals are models of natural processes with many applications in medicine. Recent studies in medicine show that fractals can be applied to cancer detection and to the description of the pathological architecture of tumors. This is not surprising since, due to their irregular structure, cancerous cells can be interpreted as fractals. Inspired by the Sierpinski carpet, we introduce a flexible parametric model of random carpets, with randomization introduced through binomial random variables. We provide an algorithm for estimating the parameters of the model and illustrate theoretical and practical issues in the generation of Sierpinski gaskets and in Hausdorff measure calculations. Stochastic geometry models can also serve as models for binary cancer images. Recently, a Boolean model was applied to 200 images of mammary cancer tissue and 200 images of mastopathic tissue. Here, we describe the Quermass-interaction process, which can handle much more variation in the cancer data, and we apply it to the images. It was found that mastopathic tissue deviates significantly more strongly from the Quermass-interaction process, which describes interactions among particles, than mammary cancer tissue does. The Quermass-interaction process serves as a model for tissue whose structure is broken down to a certain level, whereas the random fractal model fits mastopathic tissue well. We provide a novel method for discriminating between mastopathic and mammary cancer tissue on the basis of a complex wavelet-based self-similarity measure, with classification rates above 80%. This similarity measure relates to the Hurst exponent and fractional Brownian motions. The R package FractalParameterEstimation is developed and introduced in the paper.
R. Dhanasekaran
2014-01-01
Full Text Available A large number of low-power, tiny radio jammers constituting a Distributed Jammer Network (DJN) can nowadays be used to mount a Denial of Service (DoS) attack on a Distributed Wireless Network (DWN). Using nanotechnology, it is possible to build such tiny jammers in the millions, if not more. Previous studies of DoS attacks on a DWN by a DJN have treated each jammer as a separate Poisson random process. In this study, we advocate the more natural Birth-Death Random Process (BDRP) route to study the impact of a DJN on the connectivity of a DWN. We show that the DJN can cause a phase transition in the performance of the target network, and we use the BDRP route to evaluate the impact of the DJN on the connectivity and global percolation of the target network. This study confirms that global percolation of the DWN is assured when the DJN is not too significant.
Driver, Vickie R; Yao, Min; Kantarci, Alpdogan; Gu, Guosheng; Park, Nanjin; Hasturk, Hatice
2013-11-01
Hypoxia is a major factor in delayed wound healing. The aim of this prospective, randomized, clinical trial was to compare outcomes of treatment in persons with chronic diabetic foot ulcers (DFUs) randomly assigned to transdermal continuous oxygen therapy (TCOT) for 4 weeks as an adjunct to standard care (debridement, offloading, and moisture). Nine patients (age 58.6±7.1, range 38-73 years) received TCOT (treatment group) and eight patients (age 59.9±12.6, range 35-76 years) received standard care alone (control group). Most patients (12) were male, and all had a Wagner I or II foot ulcer for an average of 14 (control group) or 20 months (treatment group). Weekly wound measurements and wound tissue biopsies were obtained and wound fluid collected. Levels of pro-inflammatory cytokines and proteases in wound fluid samples were analyzed using Luminex-based multiplex assays. Tissue-resident macrophages were quantified by immunohistochemistry. At week 4, average wound size reduction was 87% (range 55.7% to 100%) in the treatment group compared to 46% (15% to 99%) in the control group (P <0.05). Changes in cytokine levels (IL-6, IL-8) and proteinases (MMP-1,-2,-9, TIMP-1) at weeks 2 to 4 in wound fluid correlated with clinical findings. CD68+ macrophage counts showed statistically significant reduction in response to TCOT compared to the control group (P <0.01). The results of this study show that TCOT may facilitate healing of DFUs by reversing the inflammatory process through reduction in pro-inflammatory cytokines and tissue-degrading proteases. Additional research to elucidate the effects of this treatment on complete healing and increase understanding about the role of wound fluid analysis is needed.
Average weighted receiving time in recursive weighted Koch networks
DAI MEIFENG; YE DANDAN; LI XINGYI; HOU JIE
2016-06-01
Motivated by empirical observations of airport networks and metabolic networks, we introduce a model of recursive weighted Koch networks created by a recursive division method. As a fundamental dynamical process, random walks have received considerable interest in the scientific community. We therefore study random walks on the recursive weighted Koch networks, in which the walker, at each step, moves uniformly from its current node to any of its neighbours. To study the model more conveniently, we use the recursive division method again to calculate the sum of the mean weighted first-passage times for all nodes to absorption at the trap located at the merging node. It is shown that, in a large network, the average weighted receiving time grows sublinearly with the network order.
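The central quantity here, the mean first-passage time to a trap, can be illustrated on any small graph: with T fixed at 0 on the trap, the times satisfy T_i = 1 + (1/deg i) Σ_j T_j over neighbours j. The sketch below solves this unweighted case by Gauss-Seidel iteration; it does not reproduce the recursive weighted Koch construction itself.

```python
def mfpt_to_trap(adj, trap, iters=2000):
    """Mean first-passage times to an absorbing trap for an unbiased
    random walk that moves uniformly to a neighbour at each step,
    solved by Gauss-Seidel iteration of T_i = 1 + mean of T over the
    neighbours of i, with T[trap] fixed at 0."""
    T = {v: 0.0 for v in adj}
    for _ in range(iters):
        for v in adj:
            if v != trap:
                T[v] = 1.0 + sum(T[u] for u in adj[v]) / len(adj[v])
    return T
```

On the path 0-1-2 with the trap at node 0, this gives T[1] = 3 and T[2] = 4, matching the hand calculation.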
Kamon, Mattan; Akbulut, Mustafa; Yan, Yiguang; Faken, Daniel; Pap, Andras; Allampalli, Vasanth; Greiner, Ken; Fried, David
2016-07-01
For directed self-assembly (DSA) to be deployed in advanced semiconductor technologies, it must reliably integrate into a full process flow. We present a methodology for using virtual fabrication software, including predictive DSA process models, to develop and analyze the replacement of self-aligned quadruple patterning with Liu-Nealey chemoepitaxy on a 14-nm dynamic random access memory (DRAM) process. To quantify the impact of this module replacement, we investigated a key process yield metric for DRAM, interface area between the capacitor contacts and transistor source/drain. Additionally, we demonstrate virtual fabrication of the DRAM cell's hexagonally packed capacitors patterned with an array of diblock copolymer cylinders in place of fourfold litho-etch (LE4) patterning.
FREQUENTIST MODEL AVERAGING ESTIMATION: A REVIEW
Haiying WANG; Xinyu ZHANG; Guohua ZOU
2009-01-01
In applications, the traditional estimation procedure generally begins with model selection. Once a specific model is selected, subsequent estimation is conducted under the selected model without accounting for the uncertainty introduced by the selection process. This often leads to underreporting of variability and overly optimistic confidence sets. Model averaging estimation is an alternative to this procedure, incorporating model uncertainty into the estimation process. In recent years, there has been rising interest in model averaging from the frequentist perspective, and important progress has been made. In this paper, the theory and methods of frequentist model averaging estimation are surveyed. Some future research topics are also discussed.
Random functions and turbulence
Panchev, S
1971-01-01
International Series of Monographs in Natural Philosophy, Volume 32: Random Functions and Turbulence focuses on the use of random functions as mathematical methods. The manuscript first offers information on the elements of the theory of random functions. Topics include determination of statistical moments by characteristic functions; functional transformations of random variables; multidimensional random variables with spherical symmetry; and random variables and distribution functions. The book then discusses random processes and random fields, including stationarity and ergodicity of random
A prospective randomized trial of content expertise versus process expertise in small group teaching
2010-01-01
Abstract Background Effective teaching requires an understanding of both what (content knowledge) and how (process knowledge) to teach. While previous studies involving medical students have compared preceptors with greater or lesser content knowledge, it is unclear whether process expertise can compensate for deficient content expertise. Therefore, the objective of our study was to compare the effect of preceptors with process expertise to those with content expertise on medical students' le...
Scaling in Rate-Changeable Birth and Death Processes with Random Removals
KE Jian-Hong; LIN Zhen-Quan; CHEN Xiao-Shuang
2009-01-01
We propose a monomer birth-death model with random removals, in which an aggregate of size k can produce a new monomer at a time-dependent rate I(t)k or lose one monomer at a rate J(t)k, and with a probability P(t) an aggregate of any size is randomly removed. We then analytically investigate the kinetic evolution of the model by means of the rate equation. The results show that the scaling behavior of the aggregate size distribution depends crucially on the net birth rate I(t) − J(t) as well as on the birth rate I(t). The aggregate size distribution can approach a standard or modified scaling form in some cases, but it may take a scale-free form in other cases. Moreover, the species can survive only if either I(t) − J(t) ≥ P(t) or [J(t) + P(t) − I(t)]t ≃ 0 at t ≫ 1; otherwise, it will become extinct.
Average Convexity in Communication Situations
Slikker, M.
1998-01-01
In this paper we study inheritance properties of average convexity in communication situations. We show that the underlying graph ensures that the graph-restricted game originating from an average convex game is average convex if and only if every subgraph associated with a component of the underlying graph
Sampling Based Average Classifier Fusion
Jian Hou
2014-01-01
Although many classifier fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison, and little has been done to explore the potential of average fusion or to propose a better baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. We find that, by proper sampling of soft labels and classifiers, the performance of average fusion can be clearly improved. This result establishes sampling-based average fusion as a better baseline: a newly proposed classifier fusion algorithm should at least perform better than this baseline in order to demonstrate its effectiveness.
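For concreteness, plain average fusion of soft labels looks like the following. The subset-sampling variant below is only an illustrative stand-in for the kind of sampling the paper investigates, not its actual algorithm, and the function names are assumptions.

```python
import random

def average_fusion(soft_labels):
    """Average fusion: the fused score of each class is the arithmetic
    mean of the classifiers' soft labels (per-class posterior estimates)."""
    n_clf = len(soft_labels)
    n_cls = len(soft_labels[0])
    return [sum(sl[c] for sl in soft_labels) / n_clf for c in range(n_cls)]

def sampled_average_fusion(soft_labels, k, trials, rng):
    """Fuse random subsets of k classifiers and average the fused
    scores over all trials (a toy subset-sampling variant)."""
    acc = None
    for _ in range(trials):
        subset = rng.sample(soft_labels, k)
        fused = average_fusion(subset)
        acc = fused if acc is None else [a + f for a, f in zip(acc, fused)]
    return [a / trials for a in acc]
```

With k equal to the number of classifiers, every sampled subset is the full ensemble, so the sampled variant reduces exactly to plain average fusion.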
Geographic Gossip: Efficient Averaging for Sensor Networks
Dimakis, Alexandros G; Wainwright, Martin J
2007-01-01
Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log ...
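The "standard gossip" baseline referred to above is easy to state: at each step a random node averages its value with a random neighbour, and both keep the mean. A minimal sketch on a ring follows; the paper's geographic routing and resampling are not reproduced here.

```python
import random

def gossip_average(values, rounds, seed=0):
    """Pairwise gossip on a ring: a random node and a random ring
    neighbour both replace their values with the pair's mean. The
    global average is preserved and all values converge to it."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n  # left or right neighbour
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x
```

The slow mixing on ring and grid topologies mentioned in the abstract shows up here as the large number of rounds needed before the spread of values collapses.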
β-decay rates of r-process nuclei in the relativistic quasiparticle random phase approximation
Niksic, T.; Marketin, T.; Vretenar, D. [Zagreb Univ. (Croatia). Faculty of Science, Physics Dept.; Paar, N. [Technische Univ. Darmstadt (Germany). Inst. fuer Kernphysik; Ring, P. [Technische Univ. Muenchen, Garching (Germany). Physik-Department
2004-12-08
The fully consistent relativistic proton-neutron quasiparticle random phase approximation (PN-RQRPA) is employed in the calculation of β-decay half-lives of neutron-rich nuclei in the N≈50 and N≈82 regions. A new density-dependent effective interaction, with an enhanced value of the nucleon effective mass, is used in the relativistic Hartree-Bogolyubov calculation of nuclear ground states and in the particle-hole channel of the PN-RQRPA. The finite-range Gogny D1S interaction is employed in the T=1 pairing channel, and the model also includes a proton-neutron particle-particle interaction. The theoretical half-lives reproduce the experimental data for the Fe, Zn, Cd, and Te isotopic chains, but overestimate the lifetimes of Ni isotopes and predict a stable ¹³²Sn.
RANDOM TIME TRANSFORMATIONS OF PROCESSES WITH MARKOV SKELETON
刘万荣; 刘再明; 侯振挺
2000-01-01
In this paper, random time transformations of processes with Markov skeleton are discussed. A class of random time transformations that transform a process with Markov skeleton into a process with Markov skeleton is given.
Flocke Susan A
2012-05-01
Full Text Available Abstract Background Effective clinician-patient communication about health behavior change is one of the most important and most overlooked strategies to promote health and prevent disease. Existing guidelines for specific health behavior counseling have been created and promulgated, but not successfully adopted in primary care practice. Building on work focused on creating effective clinician strategies for prompting health behavior change in the primary care setting, we developed an intervention intended to enhance clinician communication skills to create and act on teachable moments for smoking cessation. In this manuscript, we describe the development and implementation of the Teachable Moment Communication Process (TMCP) intervention and the baseline characteristics of a group randomized trial designed to evaluate its effectiveness. Methods/Design This group randomized trial includes thirty-one community-based primary care clinicians practicing in Northeast Ohio and 840 of their adult patients. Clinicians were randomly assigned to receive either the Teachable Moment Communication Process (TMCP) intervention for smoking cessation, or the delayed intervention. The TMCP intervention consisted of two 3-hour educational training sessions including didactic presentation, skill demonstration through video examples, skills practice with standardized patients, and feedback from peers and the trainers. For each clinician enrolled, 12 patients were recruited for two time points. Pre- and post-intervention data from the clinicians, patients and audio-recorded clinician-patient interactions were collected. At baseline, the two groups of clinicians and their patients were similar with regard to all demographic and practice characteristics examined. Both physician and patient recruitment goals were met, and retention was 96% and 94% respectively. Discussion Findings support the feasibility of training clinicians to use the Teachable Moments
Gustafsson Lars
2008-03-01
Full Text Available Abstract Background In the rural areas of sub-Saharan Africa, the majority of young children affected by malaria have no access to formal health services. Home treatment by mothers of febrile children, supported by mother groups and local health workers, has the potential to reduce malaria morbidity and mortality. Methods A cluster-randomized controlled effectiveness trial was implemented from 2002 to 2004 in a malaria-endemic area of rural Burkina Faso. Six and seven villages were randomly assigned to the intervention and control arms respectively. Febrile children from intervention villages were treated with chloroquine (CQ) by their mothers, supported by local women group leaders. CQ was regularly supplied through a revolving fund from local health centres. The trial was evaluated through two cross-sectional surveys at baseline and after two years of intervention. The primary endpoint of the study was the proportion of moderate to severe anaemia in children aged 6–59 months. For assessment of the development of drug efficacy over time, an in vivo CQ efficacy study was nested into the trial. The study is registered under http://www.controlled-trials.com (ISRCTN 34104704). Results The intervention was shown to be feasible under program conditions, and a total of 1,076 children and 999 children were evaluated at baseline and follow-up time points respectively. Self-reported CQ treatment of fever episodes at home as well as referrals to health centres increased over the study period. At follow-up, CQ was detected in the blood of high proportions of intervention and control children. Compared to baseline findings, the prevalence of anaemia (29% vs 16%) as well as of P. falciparum parasitaemia, fever and palpable spleens was lower at follow-up, but there were no differences between the intervention and control groups. CQ efficacy decreased over the study period, but this was not associated with the intervention. Discussion The decreasing prevalence of malaria
Peakedness of Weighted Averages of Jointly Distributed Random Variables.
1985-10-01
Alessandro Ambrosi
Full Text Available Retroviral vectors are widely used in gene therapy to introduce therapeutic genes into patients' cells, since, once delivered to the nucleus, the genes of interest are stably inserted (integrated) into the target cell genome. There is now compelling evidence that integration of retroviral vectors follows non-random patterns in the mammalian genome, with a preference for active genes and regulatory regions. In particular, Moloney Leukemia Virus (MLV)-derived vectors show a tendency to integrate in the proximity of the transcription start site (TSS) of genes, occasionally resulting in the deregulation of gene expression and, where proto-oncogenes are targeted, in tumor initiation. This has drawn the attention of the scientific community to the molecular determinants of the retroviral integration process as well as to statistical methods to evaluate the genome-wide distribution of integration sites. In recent approaches, the observed distribution of MLV integration distances (IDs) from the TSS of the nearest gene is assumed to be non-random by empirical comparison with a random distribution generated by computational simulation procedures. To provide a statistical procedure to test the randomness of the retroviral insertion pattern, we propose a probability model (Beta distribution) based on IDs between two consecutive genes. We apply the procedure to a set of 595 unique MLV insertion sites retrieved from human hematopoietic stem/progenitor cells. The statistical goodness-of-fit test shows the suitability of this distribution to the observed data. Our statistical analysis confirms the preference of MLV-based vectors to integrate in promoter-proximal regions.
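A Beta model for normalized distances can be fitted in several ways. The sketch below uses simple method-of-moments estimates from the standard library; this is an assumption of convenience, and the paper's own estimation and goodness-of-fit procedure may differ.

```python
from statistics import fmean, pvariance

def beta_method_of_moments(xs):
    """Method-of-moments estimates (alpha, beta) for a Beta
    distribution fitted to data on (0, 1): with sample mean m and
    population variance v, set c = m*(1-m)/v - 1, then
    alpha = m*c and beta = (1-m)*c."""
    m = fmean(xs)
    v = pvariance(xs)
    c = m * (1.0 - m) / v - 1.0
    return m * c, (1.0 - m) * c
```

For data symmetric about 0.5 the two estimates coincide; e.g. the three points 0.25, 0.5, 0.75 give alpha = beta = 2.5.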
Fissaha Adafre, S.; de Rijke, M.
2005-01-01
We present the results of feature engineering and post-processing experiments conducted on a temporal expression recognition task. The former explores the use of different kinds of tagging schemes and of exploiting a list of core temporal expressions during training. The latter is concerned with the
Foulkes, Stephen B.; Booth, David M.
1997-07-01
Object segmentation is the process by which a mask is generated which identifies the area of an image which is occupied by an object. Many object recognition techniques depend on the quality of such masks for shape and underlying brightness information, however, segmentation remains notoriously unreliable. This paper considers how the image restoration technique of Geman and Geman can be applied to the improvement of object segmentations generated by a locally adaptive background subtraction technique. Also presented is how an artificial neural network hybrid, consisting of a single layer Kohonen network with each of its nodes connected to a different multi-layer perceptron, can be used to approximate the image restoration process. It is shown that the restoration techniques are very well suited for parallel processing and in particular the artificial neural network hybrid has the potential for near real time image processing. Results are presented for the detection of ships in SPOT panchromatic imagery and the detection of vehicles in infrared linescan images, these being a fair representation of the wider class of problem.
Experimental Demonstration of Squeezed State Quantum Averaging
Lassen, Mikael; Sabuncu, Metin; Filip, Radim; Andersen, Ulrik L
2010-01-01
We propose and experimentally demonstrate a universal quantum averaging process implementing the harmonic mean of quadrature variances. The harmonic mean protocol can be used to efficiently stabilize a set of fragile squeezed light sources with statistically fluctuating noise levels. The averaged variances are prepared probabilistically by means of linear optical interference and measurement induced conditioning. We verify that the implemented harmonic mean outperforms the standard arithmetic mean strategy. The effect of quantum averaging is experimentally tested both for uncorrelated and partially correlated noise sources with sub-Poissonian shot noise or super-Poissonian shot noise characteristics.
Dynamic Multiscale Averaging (DMA) of Turbulent Flow
Richard W. Johnson
2012-09-01
A new approach called dynamic multiscale averaging (DMA) for computing the effects of turbulent flow is described. The new method encompasses multiple applications of temporal and spatial averaging, that is, multiscale operations. Initially, a direct numerical simulation (DNS) is performed for a relatively short time; it is envisioned that this short time should be long enough to capture several fluctuating time periods of the smallest scales. The flow field variables are subject to running time averaging during the DNS. After the relatively short time, the time-averaged variables are volume averaged onto a coarser grid. Both time and volume averaging of the describing equations generate correlations in the averaged equations. These correlations are computed from the flow field and added as source terms to the computation on the next coarser mesh. They represent coupling between the two adjacent scales. Since they are computed directly from first principles, there is no modeling involved. However, there is approximation involved in the coupling correlations as the flow field has been computed for only a relatively short time. After the time and spatial averaging operations are applied at a given stage, new computations are performed on the next coarser mesh using a larger time step. The process continues until the coarsest scale needed is reached. New correlations are created for each averaging procedure. The number of averaging operations needed is expected to be problem dependent. The new DMA approach is applied to a relatively low Reynolds number flow in a square duct segment. Time-averaged stream-wise velocity and vorticity contours from the DMA approach appear to be very similar to a full DNS for a similar flow reported in the literature. Expected symmetry for the final results is produced for the DMA method. The results obtained indicate that DMA holds significant potential in being able to accurately compute turbulent flow without modeling for practical
Likelihood updating of random process load and resistance parameters by monitoring
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2003-01-01
Spectral parameters for a stationary Gaussian process are most often estimated by Fourier transformation of a realization followed by some smoothing procedure. This smoothing is often a weighted least square fitting of some prespecified parametric form of the spectrum. In this paper it is shown...... that maximum likelihood estimation is a rational alternative to an arbitrary weighting for least square fitting. The derived likelihood function gets singularities if the spectrum is prescribed with zero values at some frequencies. This is often the case for models of technically relevant processes....... The numerical problem caused by these singularities is easily overcome by adding simulated low intensity white noise to the realization. Without changing its parameters the spectrum is hereby lifted above zero by an amount equal to the white noise intensity. The knowledge of an explicit likelihood function...
Znoj, Hans-Jörg; Messerli-Burgy, Nadine; Tschopp, Simone; Weber, Rainer; Christen, Lisanne; Christen, Stephan; Grawe, Klaus
2010-03-01
The aim of this exploratory study was to examine the possible mechanisms of behavioral change in a cognitive-behavioral intervention supporting medication adherence in HIV-infected persons. A total of 60 persons currently under medical treatment were randomized to psychotherapy or usual care and were compared with a sociodemographically matched group of general psychotherapy clients. Outcome measures included therapy adherence using medication event-monitoring system psychotherapeutic processes and changes of experience and behavior. The general psychotherapy group was initially more distressed than HIV psychotherapy patients and reached higher levels of psychotherapeutic effect. In the HIV psychotherapy patients, a significant effect was found for maintaining adherence to medical treatment (Weber et al., 2004). These findings show that psychotherapy is a beneficial intervention for HIV-infected persons, and therapeutic alliance and activation of resources do not differ from a general psychotherapy treatment. Differential effects were detected for specific process variables, namely problem actuation.
Pechacek, Tomas; Karas, Vladimir; Czerny, Bozena; Dovciak, Michal
2013-01-01
We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermor...
Randomized Search Methods for Solving Markov Decision Processes and Global Optimization
2006-01-01
over relaxation (SOR) method ([81]). Puterman and Shin [62] proposed a modified policy iteration algorithm, which takes the basic form of PI, with the...99018) (1999). [61] Pintér, J. D., Global Optimization in Action, Kluwer Academic Publishers, The Netherlands, 1996. [62] Puterman, M. L. and Shin, M. C., "Modified policy iteration algorithms for discounted Markov decision processes," Management Science, 24, 1127–1137 (1978). [63] Puterman, M. L
Theodorsen, Audun; Rypdal, Martin
2016-01-01
The filtered Poisson process is often used as a reference model for intermittent fluctuations in physical systems. Here, this process is extended by adding a noise term, either as a purely additive term to the process or as a dynamical term in a stochastic differential equation. The moments, probability density function, autocorrelation function and power spectral density are derived and used to compare the effects of the different noise terms. Monte-Carlo studies of synthetic time series are used to investigate the accuracy of parameter estimation and to identify methods for separating the noise types. It is shown that the probability density function and the three lowest moments provide accurate estimations of the parameters, but are unable to separate the noise types. The auto-correlation function and the power spectral density also provide methods for estimating the model parameters, as well as being capable of determining the noise type. The number of times the signal passes a prescribed threshold in t...
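A minimal simulation sketch of a filtered Poisson process with exponential pulses plus a purely additive Gaussian noise term; all numeric parameters are illustrative, and the paper's estimators are not reproduced here:

```python
import math
import random

def filtered_poisson(rate, tau, amp, noise_sd, dt, n_steps, rng):
    # Discrete-time recursion: exponential decay between steps, a pulse
    # of height amp arriving with probability rate*dt per step, and a
    # purely additive Gaussian noise term on the observed signal.
    decay = math.exp(-dt / tau)
    x, out = 0.0, []
    for _ in range(n_steps):
        x *= decay
        if rng.random() < rate * dt:
            x += amp
        out.append(x + rng.gauss(0.0, noise_sd))
    return out

rng = random.Random(0)
series = filtered_poisson(rate=10.0, tau=1.0, amp=1.0, noise_sd=0.1,
                          dt=0.01, n_steps=20000, rng=rng)
mean_est = sum(series) / len(series)
```

The stationary mean of the pulse part is rate*tau*amp (10.0 here) and is unchanged by zero-mean additive noise, while the noise inflates the variance; this is one way to see why the lowest moments alone cannot separate the noise types.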
A new Gompertz-type diffusion process with application to random growth.
Gutiérrez-Jáimez, Ramón; Román, Patricia; Romero, Desirée; Serrano, Juan J; Torres, Francisco
2007-07-01
Stochastic models describing growth kinetics are very important for predicting many biological phenomena. In this paper, a new Gompertz-type diffusion process is introduced, by means of which bounded sigmoidal growth patterns can be modeled by time-continuous variables. The main innovation of the process is that the bound can depend on the initial value, a situation that is not provided by the models considered to date. After building the model, a comprehensive study is presented, including its main characteristics and a simulation of sample paths. With the aim of applying this model to real-life situations, and given its possibilities in forecasting via the mean function, discrete sampling based inference is developed. The likelihood equations are not directly solvable, and because of difficulties that arise with the usual numerical methods employed to solve them, an iterative procedure is proposed. The possibilities of the new process are illustrated by means of an application to real data, concretely, to growth in rabbits.
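For orientation, the deterministic Gompertz curve underlying such diffusions can be written so that its upper bound depends on the initial value, which is the innovation highlighted above. A sketch, with symbols α and β chosen for illustration rather than taken from the paper's parametrisation:

```python
import math

def gompertz_mean(t, x0, alpha, beta):
    # Solution of x'(t) = alpha * exp(-beta * t) * x(t) with x(0) = x0:
    # a sigmoidal curve whose upper bound x0 * exp(alpha / beta) scales
    # with the initial value x0, unlike the classical fixed
    # carrying-capacity parametrisation.
    return x0 * math.exp((alpha / beta) * (1.0 - math.exp(-beta * t)))
```

Doubling x0 doubles the asymptote, so fitting this form to data lets the bound track the starting size.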
Quantized average consensus with delay
Jafarian, Matin; De Persis, Claudio
2012-01-01
The average consensus problem is a special case of cooperative control in which the agents of a network asymptotically converge to the average state (i.e., position) of the network by exchanging information over a communication topology. One of the issues in large-scale networks is the cost of co
Simultaneous Range-Velocity Processing and SNR Analysis of AFIT’s Random Noise Radar
2012-03-22
Number of Processing Cores 4 8 Processor Speed 3.33 GHz 3.07 GHz Installed Memory 48 GB 48 GB GPU Make NVIDIA NVIDIA GPU Model Tesla 1060 Tesla C2070 GPU...Shiyan. “W-band Noise Radar Sensor for Car Collision Warning Systems”. Physics and Engineering of Millimeter and Sub-Millimeter Waves, 2001. The...Noise Radar with Correlation Receiver as the Basis of Car Collision Avoidance System”. Microwave Conference, 1995. 25th European, volume 1, 506–507
Bell's inequality violation due to misidentification of spatially non stationary random processes
Sica, L
2003-01-01
Correlations for the Bell gedankenexperiment are constructed using probabilities given by quantum mechanics, and nonlocal information. They satisfy Bell's inequality and exhibit spatial non-stationarity in angle. Correlations for three successive local spin measurements on one particle are computed as well. These correlations also exhibit non-stationarity and satisfy the Bell inequality. In both cases, the mistaken assumption that the underlying process is wide-sense stationary in angle results in violation of Bell's inequality. These results directly challenge the widespread belief that violation of Bell's inequality is a decisive test for nonlocality.
Likelihood updating of random process load and resistance parameters by monitoring
Friis-Hansen, Peter; Ditlevsen, Ove Dalager
2003-01-01
...The numerical problem caused by these singularities is easily overcome by adding simulated low-intensity white noise to the realization. Without changing its parameters, the spectrum is thereby lifted above zero by an amount equal to the white-noise intensity. The knowledge of an explicit likelihood function, even though it is of complicated mathematical form, allows an approximate Bayesian updating and control of the time development of the parameters. Some of these parameters can be structural parameters whose excessive change reveals progressing damage or other malfunctioning. Thus current process...
Rui Nouchi
BACKGROUND: Do brain training games work? The beneficial effects of brain training games are expected to transfer to other cognitive functions. Yet in all honesty, beneficial transfer effects of commercial brain training games in young adults have little scientific basis. Here we investigated the impact of a brain training game (Brain Age) on a wide range of cognitive functions in young adults. METHODS: We conducted a double-blind (de facto masking) randomized controlled trial using a popular brain training game (Brain Age) and a popular puzzle game (Tetris). Thirty-two volunteers were recruited through an advertisement in the local newspaper and randomly assigned to one of two game groups (Brain Age, Tetris). Participants in both the Brain Age and the Tetris groups played their game for about 15 minutes per day, at least 5 days per week, for 4 weeks. Measures of cognitive functions were conducted before and after training and fell into eight categories (fluid intelligence, executive function, working memory, short-term memory, attention, processing speed, visual ability, and reading ability). RESULTS AND DISCUSSION: Our results showed that the commercial brain training game improved executive functions, working memory, and processing speed in young adults. Moreover, the popular puzzle game engendered improvement in attention and visuo-spatial ability compared to playing the brain training game. The present study provides scientific evidence that the brain training game had beneficial effects on cognitive functions (executive functions, working memory, and processing speed) in healthy young adults. CONCLUSIONS: Our results do not indicate that everyone should play brain training games. However, the commercial brain training game might be a simple and convenient means to improve some cognitive functions. We believe that our findings are highly relevant to applications in educational and clinical fields.
Basse-O'Connor, Andreas
2011-01-01
Let $X_n$ be independent random elements in the Skorohod space $D([0,1]; E)$ of cadlag functions taking values in a separable Banach space $E$. Let $S_n = \sum_{j=1}^{n} X_j$. We show that if $S_n$ converges in finite dimensional distributions to a cadlag process, then $S_n + y_n$ converges a.s. pathwise uniformly over $[0,1]$, for some $y_n \in D([0,1]; E)$. This result extends the Ito-Nisio Theorem to the space $D([0,1]; E)$, which is surprisingly lacking in the literature even for $E = \mathbb{R}$. The main difficulties of dealing with $D([0,1]; E)$ in this context are its non-separability under the uniform norm and the discontinuity of the addition under Skorohod's $J_1$-topology. We use this result to prove the uniform convergence of various series representations of cadlag infinitely divisible processes. As a consequence, we obtain explicit representations of the jump process, and of related path functionals, in a general non-Markovian setting. Finally, we illustrate our results on an example of stable processes....
Gothe, Neha P; Kramer, Arthur F; McAuley, Edward
2017-01-01
Age-related cognitive decline is well documented across various aspects of cognitive function, including attention and processing speed, and lifestyle behaviors such as physical activity play an important role in preventing cognitive decline and maintaining or even improving cognitive function. The purpose of this study was to evaluate the effects of an 8-week Hatha yoga intervention on attention and processing speed among older adults. Participants (n = 118; mean age, 62 ± 5.59 years) were randomly assigned to an 8-week Hatha yoga group or a stretching control group and completed cognitive assessments (Attention Network Task, Trail Making Test parts A and B, and Pattern Comparison Test) at baseline and after the 8-week intervention. Analyses of covariance revealed significantly faster reaction times for the yoga group on the Attention Network Task's neutral, congruent, and incongruent conditions (p ≤ 0.04). The yoga intervention also improved participants' visuospatial and perceptual processing on the Trail Making Test part B (p = 0.002) and the Pattern Comparison Test, suggesting that yoga practice that includes postures, breathing, and meditative exercises leads to improved attentional and information-processing abilities. Although the underlying mechanisms remain largely speculative, more systematic trials are needed to explore the extent of cognitive benefits and their neurobiological mechanisms.
Fluctuations of wavefunctions about their classical average
Benet, L; Flores, J; Hernandez-Saldana, H; Izrailev, F M; Leyvraz, F; Seligman, T H [all at Centro Internacional de Ciencias, Ciudad Universitaria, Chamilpa, Cuernavaca (Mexico)]
2003-02-07
Quantum-classical correspondence for the average shape of eigenfunctions and the local spectral density of states are well-known facts. In this paper, the fluctuations of the quantum wavefunctions around the classical value are discussed. A simple random matrix model leads to a Gaussian distribution of the amplitudes whose width is determined by the classical shape of the eigenfunction. To compare this prediction with numerical calculations in chaotic models of coupled quartic oscillators, we develop a rescaling method for the components. The expectations are broadly confirmed, but deviations due to scars are observed. This effect is much reduced when both Hamiltonians have chaotic dynamics.
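The rescaling idea can be sketched numerically: draw component amplitudes whose variance follows a hypothetical classical envelope, divide out the local envelope width, and check that the result is standard Gaussian, as the random-matrix model predicts. Every number below is illustrative, not taken from the paper:

```python
import math
import random

rng = random.Random(42)
# Hypothetical smooth classical envelope W_i: average |c_i|^2 vs. index.
envelope = [math.exp(-((i - 50) / 20.0) ** 2) + 0.01 for i in range(100)]

# RMT-model amplitudes: Gaussian with locally varying width sqrt(W_i).
samples = [(i, rng.gauss(0.0, math.sqrt(w)))
           for i, w in enumerate(envelope) for _ in range(500)]

# Rescaling: divide each component by the local classical width.
rescaled = [a / math.sqrt(envelope[i]) for i, a in samples]
mean = sum(rescaled) / len(rescaled)
var = sum((r - mean) ** 2 for r in rescaled) / len(rescaled)
# After rescaling, the distribution should be Gaussian with unit variance;
# scars would show up as systematic deviations from this prediction.
```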
Adenauer Hannah
2011-12-01
Abstract Background Little is known about the neurobiological foundations of psychotherapy for posttraumatic stress disorder (PTSD). Prior studies have shown that PTSD is associated with altered processing of threatening and aversive stimuli. It remains unclear whether this functional abnormality can be changed by psychotherapy. This is the first randomized controlled treatment trial that examines whether narrative exposure therapy (NET) causes changes in affective stimulus processing in patients with chronic PTSD. Methods 34 refugees with PTSD were randomly assigned to a NET group or to a waitlist control (WLC) group. At pre-test and at four-month follow-up, the diagnostics included the assessment of clinical variables and measurements of neuromagnetic oscillatory brain activity (steady-state visual evoked fields, ssVEF) resulting from exposure to aversive pictures compared to neutral pictures. Results PTSD as well as depressive symptom severity scores declined in the NET group, whereas symptoms persisted in the WLC group. Only in the NET group did parietal and occipital activity towards threatening pictures increase significantly after therapy. Conclusions Our results indicate that NET causes an increase of activity associated with cortical top-down regulation of attention towards aversive pictures. The increased attention allocation to potential threat cues might allow treated patients to re-appraise the actual danger of the current situation and thereby reduce PTSD symptoms. Registration of the clinical trial Number: NCT00563888 Name: "Change of Neural Network Indicators Through Narrative Treatment of PTSD in Torture Victims" URL: http://www.clinicaltrials.gov/ct2/show/NCT00563888
Vocal attractiveness increases by averaging.
Bruckert, Laetitia; Bestelmeyer, Patricia; Latinus, Marianne; Rouger, Julien; Charest, Ian; Rousselet, Guillaume A; Kawahara, Hideki; Belin, Pascal
2010-01-26
Vocal attractiveness has a profound influence on listeners, a bias known as the "what sounds beautiful is good" vocal attractiveness stereotype [1], with tangible impact on a voice owner's success at mating, job applications, and/or elections. The prevailing view holds that attractive voices are those that signal desirable attributes in a potential mate [2-4], e.g., lower pitch in male voices. However, this account does not explain our preferences in more general social contexts in which voices of both genders are evaluated. Here we show that averaging voices via auditory morphing [5] results in more attractive voices, irrespective of the speaker's or listener's gender. Moreover, we show that this phenomenon is largely explained by two independent by-products of averaging: a smoother voice texture (reduced aperiodicities) and a greater similarity in pitch and timbre with the average of all voices (reduced "distance to mean"). These results provide the first evidence for a phenomenon of vocal attractiveness increases by averaging, analogous to a well-established effect of facial averaging [6, 7]. They highlight prototype-based coding [8] as a central feature of voice perception, emphasizing the similarity in the mechanisms of face and voice perception.
Quantum Averaging of Squeezed States of Light
Squeezing has been recognized as the main resource for quantum information processing and an important resource for beating classical detection strategies. It is therefore of high importance to reliably generate stable squeezing over longer periods of time. The averaging procedure for a single qu...
李云霞
2011-01-01
In this paper, we discuss the moving-average process X_k = ∑_{i=-∞}^{∞} a_{i+k} ε_i, where {ε_i; -∞ < i < ∞} is a doubly infinite sequence of independent and identically distributed random variables with mean zero and finite variance σ², and {a_i; -∞ < i < ∞} is an absolutely summable sequence of real numbers. Let S_n = ∑_{k=1}^{n} X_k, n ≥ 1. Assuming E|ε_1|³ < ∞, we prove that for any δ > -1, lim_{ϵ↘0} ϵ^{2δ+2} ∑_{n=1}^{∞} (log log n)^δ / (n^{3/2} log n) · E{|S_n| - ϵτ√(2n log log n)}_+ = √2 τ Γ(δ+2) / (√π (δ+1)(2δ+3)), where τ² = σ² (∑_{i=-∞}^{∞} a_i)² and Γ(·) is the Gamma function.
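The normalising constant τ² = σ²(∑ a_i)² can be checked exactly for a finitely supported coefficient sequence, since Var(S_n) = σ² ∑_i (∑_{k=1}^{n} a_{i+k})². A sketch with illustrative coefficients:

```python
from collections import defaultdict

def var_Sn(coeffs, sigma2, n):
    # coeffs: dict offset -> a_offset (finite support).
    # X_k = sum_i a_{i+k} eps_i, so the coefficient of eps_i in
    # S_n = X_1 + ... + X_n is c_i = sum_{k=1..n} a_{i+k}, and
    # Var(S_n) = sigma^2 * sum_i c_i^2 for iid eps with variance sigma^2.
    c = defaultdict(float)
    for k in range(1, n + 1):
        for off, a in coeffs.items():
            c[off - k] += a   # a_{i+k} contributes when i = off - k
    return sigma2 * sum(v * v for v in c.values())

coeffs = {-1: 0.2, 0: 0.5, 1: 0.3}            # illustrative, summable
sigma2 = 2.0
tau2 = sigma2 * sum(coeffs.values()) ** 2     # tau^2 = sigma^2 * (sum a_i)^2
```

Var(S_n)/(n τ²) tends to 1, which is why S_n is normalised by τ√(2n log log n) in the theorem.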
Estimation of Frequency Response Functions by Random Decrement
Asmussen, J. C.; Brincker, Rune
1996-01-01
A method for estimating frequency response functions by the Random Decrement technique is investigated in this paper. The method is based on the auto and cross Random Decrement functions of the input process and the output process of a linear system. The Fourier transformation of these functions is used to calculate the frequency response functions. The Random Decrement functions are obtained by averaging time segments of the processes under given initial conditions. The method will reduce the leakage problem because of the natural decay of the Random Decrement functions. Also, the influence...
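The triggering-and-averaging step can be sketched minimally with a level-crossing trigger; the test signal and all parameters below are illustrative, not the paper's setup:

```python
import math
import random

def random_decrement(x, level, seg_len):
    # Collect every segment of length seg_len that starts where the
    # signal up-crosses `level`, then average the segments pointwise.
    # The decay of the averaged signature reflects the system's free
    # decay, which is what reduces leakage in the Fourier step.
    starts = [i for i in range(1, len(x) - seg_len)
              if x[i - 1] < level <= x[i]]
    return [sum(x[s + j] for s in starts) / len(starts)
            for j in range(seg_len)]

# Illustrative signal: a noisy sinusoid standing in for a response process.
rng = random.Random(7)
x = [math.sin(0.2 * t) + 0.1 * rng.gauss(0.0, 1.0) for t in range(5000)]
rd = random_decrement(x, level=0.5, seg_len=40)
```

Because every collected segment starts at or above the trigger level, the averaged signature starts near that level and then decays as the uncorrelated parts cancel.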
Mednick, Zale; Irrcher, Isabella; Hopman, Wilma M; Sharma, Sanjay
2016-12-01
To determine if a narrated white board animation (nWBA) video as part of the consent process for intravenous fluorescein angiography (IVFA) improves patient comprehension compared with a standard consent process. Prospective, randomized study. Patients undergoing an initial IVFA investigation. Three groups of 26 patients (N = 78) naïve to the IVFA procedure were included. Groups 1 and 2 consisted of patients undergoing IVFA for diagnostic purposes. Group 1 received the IVFA information via standard physician-patient interaction to obtain standard consent. Group 2 received IVFA information by watching an nWBA explaining the purpose, method, and risks of the diagnostic test to obtain informed consent. Group 3 comprised patients who were not scheduled to undergo IVFA. This group was exposed to both the standard and nWBA consent. All groups completed a 6-question knowledge quiz to assess retained information and a survey to reflect on the consent experience. Participants receiving information via standard physician-patient interaction to obtain informed consent had a lower mean knowledge score (4.38 out of 6; 73%) than participants receiving the information to obtain consent via nWBA (5.04 out of 6, 84%; P = 0.023). Of participants receiving both forms of information (group 3) to obtain informed consent, 73% preferred the nWBA to the standard consent process. Participants receiving consent information for an IVFA diagnostic test via nWBA have better knowledge retention regarding the IVFA procedure and preferred this medium compared with participants receiving the standard physician-patient interaction for obtaining consent. Incorporation of multimedia into the informed consent process should be explored for other diagnostic tests. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Averaged Electroencephalic Audiometry in Infants
Lentz, William E.; McCandless, Geary A.
1971-01-01
Normal, preterm, and high-risk infants were tested at 1, 3, 6, and 12 months of age using averaged electroencephalic audiometry (AEA) to determine the usefulness of AEA as a measurement technique for assessing auditory acuity in infants, and to delineate some of the procedural and technical problems often encountered. (KW)
Borah, Dipu; Rasappa, Sozaraj; Senthamaraikannan, Ramsankar; Shaw, Matthew T; Holmes, Justin D; Morris, Michael A
2013-03-01
The use of random copolymer brushes (polystyrene-r-polymethylmethacrylate, PS-r-PMMA) to 'neutralise' substrate surfaces and ordain perpendicular orientation of the microphase-separated lamellae in symmetric polystyrene-b-polymethylmethacrylate (PS-b-PMMA) block copolymers (BCPs) is well known. However, less well known is how the brushes interact with both the substrate and the BCP, and how this might change during thermal processing. A detailed study of changes in these films for different brush and diblock PS-b-PMMA molecular weights is reported here. In general, self-assembly and pattern formation are altered little, and a range of brush molecular weights are seen to be effective. However, on extended annealing times, the microphase-separated films can undergo dimension changes and loss of order. This process is not related to any complex microphase-separation dynamics but rather to degradation of methacrylate components in the film. The data suggest that care must be taken in interpreting structural changes in these systems as being due to BCP-only effects.
Yang, Jin-Chen; Rodriguez, Annette; Royston, Ashley; Niu, Yu-Qiong; Avar, Merve; Brill, Ryan; Simon, Christa; Grigsby, Jim; Hagerman, Randi J; Olichney, John M
2016-02-22
Progressive cognitive deficits are common in patients with fragile X-associated tremor/ataxia syndrome (FXTAS), with no targeted treatment yet established. In this substudy of the first randomized controlled trial for FXTAS, we examined the effects of NMDA antagonist memantine on attention and working memory. Data were analyzed for patients (24 in each arm) who completed both the primary memantine trial and two EEG recordings (at baseline and follow-up) using an auditory "oddball" task. Results demonstrated significantly improved attention/working memory performance after one year only for the memantine group. The event-related potential P2 amplitude elicited by non-targets was significantly enhanced in the treated group, indicating memantine-associated improvement in attentional processes at the stimulus identification/discrimination level. P2 amplitude increase was positively correlated with improvement on the behavioral measure of attention/working memory during target detection. Analysis also revealed that memantine treatment normalized the P2 habituation effect at the follow-up visit. These findings indicate that memantine may benefit attentional processes that represent fundamental components of executive function/dysfunction, thought to comprise the core cognitive deficit in FXTAS. The results provide evidence of target engagement of memantine, as well as therapeutically relevant information that could further the development of specific cognitive or disease-modifying therapies for FXTAS.
Azkhosh, Manoochehr; Farhoudianm, Ali; Saadati, Hemn; Shoaee, Fateme; Lashani, Leila
2016-10-01
Objective: Substance abuse is a socio-psychological disorder. The aim of this study was to compare the effectiveness of acceptance and commitment therapy with the 12-step Narcotics Anonymous program on psychological well-being of opiate-dependent individuals in addiction treatment centers in Shiraz, Iran. Method: This was a randomized controlled trial. Data were collected at entry into the study and at post-test and follow-up visits. The participants were selected from opiate-addicted individuals who were referred to addiction treatment centers in Shiraz. Sixty individuals were evaluated according to inclusion/exclusion criteria and were randomly divided into three equal groups (20 participants per group). One group received acceptance and commitment group therapy (twelve 90-minute sessions), another group was provided with the 12-step Narcotics Anonymous program, and the control group received the usual methadone maintenance treatment. During the treatment process, seven participants dropped out. Data were collected using the psychological well-being questionnaire and the AAQ questionnaire in the three groups at pre-test, post-test and follow-up visits. Data were analyzed using repeated-measures analysis of variance. Results: Repeated-measures analysis of variance revealed that the mean difference between the three groups was significant. The acceptance and commitment therapy group showed improvement relative to the NA and control groups on psychological well-being and psychological flexibility. Conclusion: The results of this study revealed that acceptance and commitment therapy can be helpful in enhancing positive emotions and increasing psychological well-being of addicts who seek treatment.
Time-average dynamic speckle interferometry
Vladimirov, A. P.
2014-05-01
For the study of microscopic processes occurring at the structural level in solids and in thin biological objects, the method of dynamic speckle interferometry has been applied successfully. However, the method has disadvantages. The purpose of this report is to acquaint colleagues with the time-averaging method in dynamic speckle interferometry of microscopic processes, which eliminates these shortcomings. The main idea of the method is to choose an averaging time that exceeds the characteristic correlation (relaxation) time of the most rapid process. The theory of the method for a thin phase object and for a reflecting object is given. Results are presented from an experiment on the high-cycle fatigue of steel and from an experiment estimating the biological activity of a monolayer of cells cultivated on a transparent substrate. It is shown that the method allows real-time visualization of the accumulation of fatigue damage and reliable estimation of the activity of cells with and without viruses.
Jensen, J.L.
1993-01-01
Previous results on Edgeworth expansions for sums over a random field are extended to the case where the strong mixing coefficient depends not only on the distance between two sets of random variables, but also on the size of the two sets. The results are applied to the Poisson and the Strauss point processes, giving rise also to local limit results. © 1993 The Institute of Statistical Mathematics.
Nechaev, S
2003-01-01
We investigate the statistical properties of random walks on the simplest nontrivial braid group B_3, and on related hyperbolic groups. We provide a method using Cayley graphs of groups allowing us to compute explicitly the probability distribution of the basic statistical characteristics of random trajectories - the drift and the return probability. The action of the groups under consideration in the hyperbolic plane is investigated, and the distribution of a geometric invariant - the hyperbolic distance - is analysed. It is shown that a random walk on B_3 can be viewed as a 'magnetic random walk' on the group PSL(2, Z).
Aguiar, Elroy J; Morgan, Philip J; Collins, Clare E; Plotnikoff, Ronald C; Young, Myles D; Callister, Robin
2017-07-01
Men are underrepresented in weight loss and type 2 diabetes mellitus (T2DM) prevention studies. To determine the effectiveness of recruitment, and acceptability of the T2DM Prevention Using LifeStyle Education (PULSE) Program-a gender-targeted, self-administered intervention for men. Men (18-65 years, high risk for T2DM) were randomized to intervention (n = 53) or wait-list control groups (n = 48). The 6-month PULSE Program intervention focused on weight loss, diet, and exercise for T2DM prevention. A process evaluation questionnaire was administered at 6 months to examine recruitment and selection processes, and acceptability of the intervention's delivery and content. Associations between self-monitoring and selected outcomes were assessed using Spearman's rank correlation. A pragmatic recruitment and online screening process was effective in identifying men at high risk of T2DM (prediabetes prevalence 70%). Men reported the trial was appealing because it targeted weight loss, T2DM prevention, and getting fit, and because it was perceived as "doable" and tailored for men. The intervention was considered acceptable, with men reporting high overall satisfaction (83%) and engagement with the various components. Adherence to self-monitoring was poor, with only 13% meeting requisite criteria. However, significant associations were observed between weekly self-monitoring of weight and change in weight (rs = -.47, p = .004) and waist circumference (rs = -.38, p = .026). Men reported they would have preferred more intervention contact, for example, by phone or email. Gender-targeted, self-administered lifestyle interventions are feasible, appealing, and satisfying for men. Future studies should explore the effects of additional non-face-to-face contact on motivation, accountability, self-monitoring adherence, and program efficacy.
High average power supercontinuum sources
J C Travers
2010-11-01
The physical mechanisms and basic experimental techniques for the creation of high-average-spectral-power supercontinuum sources are briefly reviewed. We focus on the use of high-power ytterbium-doped fibre lasers as pump sources, and the use of highly nonlinear photonic crystal fibres as the nonlinear medium. The most common experimental arrangements are described, including both continuous-wave fibre laser systems with over 100 W pump power, and picosecond mode-locked, master oscillator power fibre amplifier systems with over 10 kW peak pump power. These systems can produce broadband supercontinua with over 50 mW/nm and 1 mW/nm average spectral power, respectively. Techniques for numerical modelling of the supercontinuum sources are presented and used to illustrate some supercontinuum dynamics. Some recent experimental results are presented.
Dependability in Aggregation by Averaging
Jesus, Paulo; Almeida, Paulo Sérgio
2010-01-01
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues, when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised, when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a funda...
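The iterative-averaging class discussed above can be illustrated with the simplest pairwise gossip scheme, in which a random pair of nodes replaces both values by their mean; the mass-conservation invariant whose violation under churn causes the dependability issues is then visible directly. A minimal sketch (node values and round count illustrative):

```python
import random

def gossip_average(values, rounds, rng):
    # Pairwise averaging gossip: each round, two random nodes set both
    # of their values to the pair's mean. The global sum ("mass") is
    # invariant, so every node converges to the true network average.
    vals = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(vals)), 2)
        m = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = m
    return vals

rng = random.Random(1)
final = gossip_average([0.0, 0.0, 0.0, 100.0], rounds=2000, rng=rng)
```

If a node departs mid-exchange and takes its current value with it, the invariant breaks and the remaining nodes converge to a biased average; this fragility under realistic churn and message loss is the kind of dependability issue the paper examines.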
Fu, Zhijian; Zhou, Xiaodong; Chen, Yanqiu; Gong, Junhui; Peng, Fei; Yan, Zidan; Zhang, Taolin; Yang, Lizhong
2015-03-01
The random slowdown process and the lock-step effect, observed in real life and in the experiments of other researchers, were investigated from the viewpoint of microscopic pedestrian behaviors. Given the limited controllability, repeatability and inherent randomness of pedestrian experiments, a new estimating-correction cellular automaton was established to research the influence of the random slowdown process and the lock-step effect on the fundamental diagram. The first step of the model is to estimate the next-time-step status of the neighbor cell in front of the tracked pedestrian. The second step is to correct the status and confirm the position of the tracked pedestrian in the next time step. It is found that the random slowdown process and the lock-step effect have significant influence on the curve configuration and the characteristic parameters, including the concavity-convexity, the inflection point, the maximum flow rate and the critical density. The random slowdown process reduces the utilization of the available space between two adjacent pedestrians in the longitudinal direction, especially in the region of intermediate density. However, the lock-step effect enhances the utilization of the available space, especially in the region of high density.
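The paper's estimating-correction model is not reproduced here; a minimal Nagel-Schreckenberg-style sketch of the random-slowdown ingredient alone, on a 1D ring, shows how slowdown discards usable forward space:

```python
import random

def step(cells, p_slow, rng):
    # Parallel update on a ring of cells (1 = pedestrian, 0 = empty):
    # each pedestrian advances one cell if the next cell was empty,
    # except that with probability p_slow it randomly stays put.
    # Returns the new configuration and the number of moves (the flow).
    n = len(cells)
    new = [0] * n
    moves = 0
    for i, occ in enumerate(cells):
        if occ:
            j = (i + 1) % n
            if cells[j] == 0 and rng.random() >= p_slow:
                new[j] = 1
                moves += 1
            else:
                new[i] = 1
    return new, moves
```

At density 1/2 with p_slow = 0, every pedestrian moves each step (flow 1/2 per cell per step); any p_slow > 0 wastes part of the empty space ahead, lowering the fundamental diagram most visibly at intermediate densities, in line with the finding above.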
Xing, Lizhi; Dong, Xianlei; Guan, Jun
2017-04-01
The input-output table is comprehensive and detailed in describing the national economic system, containing supply and demand information among industrial sectors. Complex-network theory, a method for measuring the structure of complex systems, can describe the structural characteristics of the research object by measuring structural indicators of the social and economic system, revealing the complex relationship between the inner hierarchy and the external economic function. This paper builds GIVCN-WIOT models based on the World Input-Output Database in order to depict the topological structure of the Global Value Chain (GVC), and assumes the competitive advantage of a nation is equal to the overall performance of its domestic sectors' impact on the GVC. From the perspective of econophysics, the Global Industrial Impact Coefficient (GIIC) is proposed to measure national competitiveness in gaining information superiority and intermediate interests. Analysis of the GIVCN-WIOT models yields several insights, including the following: (1) sectors with higher Random Walk Centrality contribute more to transmitting value streams within the global economic system; (2) the Half-Value Ratio can be used to measure the robustness of open-economy macroeconomics in the process of globalization; (3) the positive correlation between GIIC and GDP indicates that one country's global industrial impact could reveal its international competitive advantage.
Yisu Lu
2014-01-01
Brain-tumor segmentation is an important clinical requirement for brain-tumor diagnosis and radiotherapy planning. It is well known that the number of clusters is one of the most important parameters for automatic segmentation, but it is difficult to define owing to the high diversity in appearance of tumor tissue among different patients and the ambiguous boundaries of lesions. In this study, a nonparametric mixture of Dirichlet process (MDP) model is applied to segment the tumor images, and the MDP segmentation can be performed without initializing the number of clusters. Because the classical MDP segmentation cannot be applied for real-time diagnosis, a new nonparametric segmentation algorithm combined with anisotropic diffusion and a Markov random field (MRF) smoothness constraint is proposed in this study. Besides the segmentation of single-modal brain-tumor images, we extended the algorithm to segment multimodal brain-tumor images using magnetic resonance (MR) multimodal features, obtaining the active tumor and the edema at the same time. The proposed algorithm is evaluated on 32 multimodal MR glioma image sequences, and the segmentation results are compared with other approaches. The accuracy and computation time of our algorithm demonstrate very impressive performance and great potential for practical real-time clinical use.
Hu, Jun; Li, Yang; Yang, Jing-Yu; Shen, Hong-Bin; Yu, Dong-Jun
2016-02-01
G-protein-coupled receptors (GPCRs) are important targets of modern medicinal drugs. The accurate identification of interactions between GPCRs and drugs is of significant importance for both protein function annotations and drug discovery. In this paper, a new sequence-based predictor called TargetGDrug is designed and implemented for predicting GPCR-drug interactions. In TargetGDrug, the evolutionary feature of the GPCR sequence and the wavelet-based molecular fingerprint feature of the drug are integrated to form the combined feature of a GPCR-drug pair; then, the combined feature is fed to a trained random forest (RF) classifier to perform initial prediction; finally, a novel drug-association-matrix-based post-processing procedure is applied to reduce potential false positives and false negatives in the initial prediction. Experimental results on benchmark datasets demonstrate the efficacy of the proposed method, and an improvement of 15% in the Matthews correlation coefficient (MCC) was observed over independent validation tests when compared with the most recently released sequence-based GPCR-drug interactions predictor. The implemented webserver, together with the datasets used in this study, is freely available for academic use at http://csbio.njust.edu.cn/bioinf/TargetGDrug.
Brambilla, Michela; Cotelli, Maria; Manenti, Rosa; Dagani, Jessica; Sisti, Davide; Rocchi, Marco; Balestrieri, Matteo; Pini, Stefano; Raimondi, Sara; Saviotti, Francesco Maria; Scocco, Paolo; de Girolamo, Giovanni
2016-10-01
Deficits in social cognition, including emotional processing, are hallmarks of schizophrenia, and antipsychotic agents appear ineffective in improving these symptoms. However, oxytocin does seem to have beneficial effects on social cognition. The aim of this study was to examine the effects of four months of treatment with intranasal oxytocin, in 31 patients with schizophrenia, on distinct aspects of social cognition. This was assessed using standardized and experimental tests in a randomized, double-blind, placebo-controlled, cross-over trial. All patients underwent clinical and experimental assessment before treatment, four months after treatment and at the end of treatment. Social cognition abilities were assessed with the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) and the Reading the Mind in the Eyes task (RMET). Furthermore, an Emotional Priming Paradigm (EPP) was developed to examine the effects of oxytocin on implicit perceptual sensitivity to affective information and explicit facial affect recognition. We found that oxytocin improved performance on MSCEIT compared to placebo in Branch 3-Understanding Emotion (p-value=0.004; Cohen's d=1.12). In the EPP task, we observed a significant reduction of reaction times for facial affect recognition (p-value=0.021; Cohen's d=0.88). No effects were found for implicit priming or for theory of mind abilities. Further study is required to highlight the potential for integrating oxytocin with antipsychotic agents, as well as to evaluate psycho-social treatment as a multi-dimensional approach to increase explicit emotional-processing abilities and compensate for the social cognition deficits related to schizophrenia.
Fredric D Wolinsky
Age-related cognitive decline is common and may lead to substantial difficulties and disabilities in everyday life. We hypothesized that 10 hours of visual speed-of-processing training would prevent age-related declines and potentially improve cognitive processing speed. Within two age bands (50-64 and ≥65), 681 patients were randomized to (a) one of three computerized visual speed-of-processing training arms (10 hours on-site, 14 hours on-site, or 10 hours at-home) or (b) an on-site attention control group using computerized crossword puzzles for 10 hours. The primary outcome was the Useful Field of View (UFOV) test, and the secondary outcomes were the Trail Making Tests (Trails A and B), the Symbol Digit Modalities Test (SDMT), the Stroop Color and Word Tests, the Controlled Oral Word Association Test (COWAT), and the Digit Vigilance Test (DVT), all assessed at baseline and at one year. 620 participants (91%) completed the study and were included in the analyses. Linear mixed models were used with Blom rank transformations within age bands. All intervention groups had significant (p<0.05) small to medium standardized effect size improvements on UFOV (Cohen's d = -0.322 to -0.579, depending on intervention arm), Trails A (d = -0.204 to -0.265), Trails B (d = -0.225 to -0.320), SDMT (d = 0.263 to 0.351), and Stroop Word (d = 0.240 to 0.271). Converted to years of protection against age-related cognitive declines, these effects reflect 3.0 to 4.1 years on UFOV, 2.2 to 3.5 years on Trails A, 1.5 to 2.0 years on Trails B, 5.4 to 6.6 years on SDMT, and 2.3 to 2.7 years on Stroop Word. Visual speed-of-processing training delivered on-site or at-home to middle-aged or older adults using standard home computers resulted in stabilization or improvement on several cognitive function tests. Widespread implementation of this intervention is feasible. ClinicalTrials.gov NCT-01165463.
Mirror averaging with sparsity priors
Dalalyan, Arnak
2010-01-01
We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
Asymptotic Time Averages and Frequency Distributions
Muhammad El-Taha
2016-01-01
Consider an arbitrary nonnegative deterministic process (in a stochastic setting, {X(t), t≥0} is a fixed realization, i.e., a sample path of the underlying stochastic process) with state space S=(-∞,∞). Using a sample-path approach, we give necessary and sufficient conditions for the long-run time average of a measurable function of the process to be equal to the expectation taken with respect to the same measurable function of its long-run frequency distribution. The results are further extended to allow an unrestricted parameter (time) space. Examples are provided to show that our condition is not superfluous and that it is weaker than uniform integrability. The case of discrete-time processes is also considered. The relationship to previously known sufficient conditions, usually given in stochastic settings, is also discussed. Our approach is applied to regenerative processes, and an extension of a well-known result is given. For researchers interested in sample-path analysis, our results offer the choice to work with the time average of a process or with its frequency distribution function and to go back and forth between the two under a mild condition.
Coronel B, H.F.; Hernandez M, A.R.; Jimenez M, M.A. [Facultad de Fisica e Inteligencia Artificial, Universidad Veracruzana, A.P. 475, Xalapa, Veracruz (Mexico); Mora F, L.E. [CIMAT, A.P. 402, 36000 Guanajuato (Mexico)]. e-mail: hcoronel@uv.mx
2007-07-01
Empirical tests of pseudorandom number generators based on processes or physical models have been used successfully and are considered complementary to theoretical tests of randomness. In this work a statistical methodology for evaluating the quality of pseudorandom number generators is presented. The method is illustrated in the context of the so-called exponential decay process, using some pseudorandom number generators commonly used in physics. (Author)
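A minimal version of such an empirical test can be sketched as follows: simulate the exponential decay process with a given generator and compare the observed survival curve with the expected law E[n_t] = n0·(1-p)^t. The statistic and all parameter values here are illustrative assumptions, not the methodology of the paper:

```python
import random

N0, P, STEPS = 100_000, 0.1, 25

def decay_counts(rng, n0=N0, p=P, steps=STEPS):
    """Each surviving atom decays independently with probability p per
    time step; return the number of survivors after each step."""
    n, survivors = n0, []
    for _ in range(steps):
        n = sum(1 for _ in range(n) if rng.random() > p)
        survivors.append(n)
    return survivors

# Compare the observed survival curve with the exponential law
# E[n_t] = n0 * (1 - p)**t via a chi-square-like statistic: a good
# generator should keep each normalized deviation of order one.
obs = decay_counts(random.Random(42))
expected = [N0 * (1 - P) ** (t + 1) for t in range(STEPS)]
stat = sum((o - e) ** 2 / e for o, e in zip(obs, expected))
```

Repeating the computation with different generators (or seeds) and comparing the distribution of the statistic is the kind of empirical check the abstract describes.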
Taveras, Elsie M; Marshall, Richard; Horan, Christine M; Gillman, Matthew W; Hacker, Karen; Kleinman, Ken P; Koziol, Renata; Price, Sarah; Rifas-Shiman, Sheryl L; Simon, Steven R
2014-01-01
To examine the extent to which an intervention using electronic decision support delivered to pediatricians at the point of care of obese children, with or without direct-to-parent outreach, improved health care quality measures for child obesity. Process outcomes from a three-arm, cluster-randomized trial in 14 pediatric practices in Massachusetts were reported. Participants were 549 children aged 6-12 years with body mass index (BMI) ≥ 95th percentile. In five practices (Intervention-1), pediatricians received electronic decision support at the point of care. In five other practices (Intervention-2), pediatricians received point-of-care decision support and parents received information about their child's prior BMI before the scheduled visit. Four practices received usual care. The main outcomes were Healthcare Effectiveness Data and Information Set (HEDIS) performance measures for child obesity: documentation of BMI percentile and use of counseling codes for nutrition or physical activity. Compared to the usual-care condition, participants in Intervention-2, but not Intervention-1, had substantially higher odds of use of HEDIS codes for BMI percentile documentation (adjusted OR: 3.97; 95% CI: 1.92, 8.23) and higher prevalence of use of HEDIS codes for counseling for nutrition or physical activity (adjusted predicted prevalence 20.3% [95% CI 8.5, 41.2] for Intervention-2 vs. 0.0% [0.0, 2.0] for usual care). An intervention that included both decision support for clinicians and outreach to parents resulted in improved health care quality measures for child obesity. © 2013 The Obesity Society.
Bivariate copulas on the exponentially weighted moving average control chart
Sasigarn Kuvattana
2016-10-01
This paper proposes four types of copulas on the Exponentially Weighted Moving Average (EWMA) control chart when observations are from an exponential distribution, using a Monte Carlo simulation approach. The performance of the control chart is based on the Average Run Length (ARL), which is compared for each copula. Copula functions for specifying dependence between random variables are used, with dependence measured by Kendall's tau. The results show that the Normal copula can be used for almost all shifts.
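As a baseline for the chart described above, the ARL of an EWMA chart on independent exponential observations can be estimated by Monte Carlo as sketched below. The copula-based dependence modeling of the paper is not reproduced here; the asymptotic control limits and all parameter values are illustrative assumptions:

```python
import math
import random

def ewma_run_length(lam, L, scale, rng, max_n=10**6):
    """Run an EWMA chart z_t = lam*x_t + (1-lam)*z_{t-1} on Exp(mean=scale)
    observations; in control the mean and std. dev. are 1, and the
    asymptotic chart std. dev. is sqrt(lam/(2-lam)). Return the first
    time the statistic leaves the control limits."""
    sigma_z = math.sqrt(lam / (2.0 - lam))
    lcl, ucl = 1.0 - L * sigma_z, 1.0 + L * sigma_z
    z = 1.0  # start at the in-control mean
    for n in range(1, max_n + 1):
        z = lam * rng.expovariate(1.0 / scale) + (1.0 - lam) * z
        if not lcl <= z <= ucl:
            return n
    return max_n

def arl(lam=0.1, L=2.7, scale=1.0, runs=200, seed=7):
    """Average Run Length estimated over independent chart runs."""
    rng = random.Random(seed)
    return sum(ewma_run_length(lam, L, scale, rng) for _ in range(runs)) / runs
```

A shift in the exponential mean (scale > 1) should shorten the ARL relative to the in-control chart, which is the comparison the paper carries out under copula dependence.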
Simulating thermal boundary conditions of spin-lattice models with weighted averages
Wang, Wenlong
2016-07-01
Thermal boundary conditions have played an increasingly important role in revealing the nature of short-range spin glasses and are likely to be relevant also for other disordered systems. A diffusion method that initializes each replica with a random boundary condition at infinite temperature, using population annealing, has been used in recent large-scale simulations. However, the efficiency of this method can be greatly suppressed by temperature chaos. For example, most samples have some boundary conditions that are completely eliminated from the population during annealing at low temperatures. In this work, I study a weighted-average method that solves this problem by simulating each boundary condition separately and collecting data using weighted averages. The efficiency of the two methods is studied using both population annealing and parallel tempering, showing that the weighted-average method is more efficient and accurate.
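The weighted-average step can be sketched as follows: each boundary condition is simulated separately, and per-boundary-condition observable estimates are combined with Boltzmann weights w_j ∝ exp(-βF_j) built from the estimated free energies. This is a hedged sketch of the general idea, not the paper's exact estimator:

```python
import math

def weighted_bc_average(obs, free_energies, beta=1.0):
    """Combine per-boundary-condition estimates of an observable using
    Boltzmann weights w_j proportional to exp(-beta * F_j); the F values
    are shifted by their minimum for numerical stability."""
    f0 = min(free_energies)
    ws = [math.exp(-beta * (F - f0)) for F in free_energies]
    return sum(w * o for w, o in zip(ws, obs)) / sum(ws)
```

Boundary conditions with much higher free energy contribute negligibly, yet none is ever eliminated outright, which is the point of simulating them separately.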
Variability and self-average of impurity-limited resistance in quasi-one dimensional nanowires
Sano, Nobuyuki
2017-02-01
The impurity-limited resistance in quasi-one dimensional (quasi-1D) nanowires is studied under the framework of the Lippmann-Schwinger scattering theory. The resistance of cylindrical nanowires is calculated theoretically under various spatial configurations of localized impurities with a simplified short-range scattering potential. Then, the relationship between the phase interference and the variability in the impurity-limited resistances is clarified. We show that there are two different and independent mechanisms leading to the variability in impurity-limited resistances: incoherent and phase-coherent randomization processes. The latter is closely related to the so-called "self-average", and its physical origin in nanowire structures is clarified. We point out that the ensemble average also comes into play in the case of long-channel nanowires, which leads to the self-average resistance of multiple impurities.
Randomized selection on the GPU
Monroe, Laura Marie [Los Alamos National Laboratory; Wendelberger, Joanne R [Los Alamos National Laboratory; Michalak, Sarah E [Los Alamos National Laboratory
2011-01-13
We implement here a fast and memory-sparing probabilistic top-N selection algorithm on the GPU. To our knowledge, this is the first direct selection in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required for the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
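The Las Vegas guess-and-check principle can be illustrated with a plain CPU quickselect, a sketch of the same idea rather than the paper's GPU implementation: guess a random pivot, verify with a counting pass, and retry on the side that must contain the Nth element. The answer is always exact; only the running time is random:

```python
import random

def randomized_select(data, n, rng=random):
    """Return the n-th smallest element (1-indexed) of data.
    Las Vegas strategy: guess a random pivot, verify it with one
    counting pass, and recurse on the side that must contain the
    target, so the data needing heavy processing shrinks on average."""
    while True:
        pivot = rng.choice(data)
        below = [x for x in data if x < pivot]
        n_below = len(below)
        n_equal = sum(1 for x in data if x == pivot)
        if n_below < n <= n_below + n_equal:
            return pivot  # guess verified: pivot is the n-th element
        if n <= n_below:
            data = below
        else:
            data = [x for x in data if x > pivot]
            n -= n_below + n_equal
```

Each round discards, in expectation, a constant fraction of the remaining data, giving linear expected time regardless of the random choices.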
Akimoto, Takuma; Yamamoto, Eiji
2016-06-01
We consider the Langevin equation with dichotomously fluctuating diffusivity, where the diffusion coefficient changes dichotomously over time, in order to study fluctuations of time-averaged observables in temporally heterogeneous diffusion processes. We find that the time-averaged mean-square displacement (TMSD) can be represented by the occupation time of a state in the asymptotic limit of the measurement time and hence occupation time statistics is a powerful tool for calculating the TMSD in the model. We show that the TMSD increases linearly with time (normal diffusion) but the time-averaged diffusion coefficients are intrinsically random when the mean sojourn time for one of the states diverges, i.e., intrinsic nonequilibrium processes. Thus, we find that temporally heterogeneous environments provide anomalous fluctuations of time-averaged diffusivity, which have relevance to large fluctuations of the diffusion coefficients obtained by single-particle-tracking trajectories in experiments.
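A minimal simulation of such a process is sketched below: a discretized Langevin walk whose diffusivity switches dichotomously between two values with exponential sojourn times, together with the time-averaged mean-square displacement. Parameter values are illustrative assumptions:

```python
import math
import random

def trajectory(T, D1=0.1, D2=1.0, tau=100.0, dt=1.0, seed=3):
    """Discretized Langevin walk whose diffusion coefficient switches
    dichotomously between D1 and D2 with Exp(mean=tau) sojourn times."""
    rng = random.Random(seed)
    x, D = 0.0, D1
    t_switch = rng.expovariate(1.0 / tau)
    xs = [x]
    for t in range(T):
        while t * dt >= t_switch:  # advance through any pending switches
            D = D2 if D == D1 else D1
            t_switch += rng.expovariate(1.0 / tau)
        x += math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def tmsd(xs, lag):
    """Time-averaged mean-square displacement at the given lag."""
    n = len(xs) - lag
    return sum((xs[i + lag] - xs[i]) ** 2 for i in range(n)) / n
```

For long measurement times with finite mean sojourns, the TMSD grows linearly in the lag with an effective coefficient between 2·D1 and 2·D2, set by the occupation times of the two states; the anomalous fluctuations discussed in the abstract arise when a mean sojourn time diverges.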
Duality between random trap and barrier models
Jack, Robert L [Department of Chemistry, University of California at Berkeley, Berkeley, CA 94720 (United States); Sollich, Peter [Department of Mathematics, King' s College London, London WC2R 2LS (United Kingdom)
2008-08-15
We discuss the physical consequences of a duality between two models with quenched disorder, in which particles propagate in one dimension among random traps or across random barriers. We derive an exact relation between their diffusion fronts at fixed disorder and deduce from this that their disorder-averaged diffusion fronts are exactly equal. We use effective dynamics schemes to isolate the different physical processes by which particles propagate in the models and discuss how the duality arises from a correspondence between the rates for these different processes.
Cader SA
2012-10-01
Samária Ali Cader, Rodrigo Gomes de Souza Vale, Victor Emmanuel Zamora, Claudia Henrique Costa, Estélio Henrique Martin Dantas (Laboratory of Human Kinetics Bioscience, Federal University of Rio de Janeiro State; Pedro Ernesto University Hospital, School of Medicine, State University of Rio de Janeiro, Rio de Janeiro, Brazil). Background: The purpose of this study was to evaluate the extubation process in bed-ridden elderly intensive care patients receiving inspiratory muscle training (IMT) and to identify predictors of successful weaning. Methods: Twenty-eight elderly intubated patients in an intensive care unit were randomly assigned to an experimental group (n = 14) that received conventional physiotherapy plus IMT with a Threshold IMT® device, or to a control group (n = 14) that received only conventional physiotherapy. The experimental muscle-training protocol consisted of an initial load of 30% of maximum inspiratory pressure, which was increased by 10% daily. The training was administered for 5 minutes, twice daily, 7 days a week, with supplemental oxygen, from the beginning of weaning until extubation. Successful extubation was defined by the measurement of ventilation time with noninvasive positive pressure. A vacuum manometer was used for measurement of maximum inspiratory pressure, and the patients' Tobin index values were measured using a ventilometer. Results: The maximum inspiratory pressure increased significantly (by 7 cm H2O, 95% confidence interval [CI] 4-10), and the Tobin index decreased significantly (by 16 breaths/min/L, 95% CI -26 to 6), in the experimental group compared with the control group. The chi-squared distribution did not indicate a significant difference in weaning success between the groups (Χ² = 1.47; P = 0.20). However, a comparison of noninvasive positive pressure time dependence indicated a significantly lower value for the experimental group (P = 0.0001; 95% CI 13.08-18.06). The receiver ...
Levy random walks on multiplex networks
Guo, Quantong; Zheng, Zhiming; Moreno, Yamir
2016-01-01
Random walks constitute a fundamental mechanism for many dynamics taking place on complex networks. Besides, as a more realistic description of our society, multiplex networks have been receiving growing interest, as have the dynamical processes that occur on top of them. Here, inspired by one specific model of random walks that seems to be ubiquitous across many scientific fields, the Levy flight, we study a new navigation strategy on top of multiplex networks. Capitalizing on spectral graph and stochastic matrix theories, we derive analytical expressions for the mean first passage time and the average time to reach a node on these networks. Moreover, we also explore the efficiency of Levy random walks, which we find to be very different from the single-layer scenario, accounting for the structure and dynamics inherent to the multiplex network. Finally, by comparing with some other important random walk processes defined on multiplex networks, we find that in some region of the parameters, a ...
Spatial averaging infiltration model for layered soil
HU HePing; YANG ZhiYong; TIAN FuQiang
2009-01-01
To quantify the influences of soil heterogeneity on infiltration, a spatial averaging infiltration model for layered soil (SAI model) is developed by coupling the spatial averaging approach proposed by Chen et al. and the Generalized Green-Ampt model proposed by Jia et al. In the SAI model, the spatial heterogeneity along the horizontal direction is described by a probability distribution function, while that along the vertical direction is represented by the layered soils. The SAI model is tested on a typical soil using Monte Carlo simulations as the base model. The results show that the SAI model can directly incorporate the influence of spatial heterogeneity on infiltration on the macro scale. It is also found that the homogeneous assumption of soil hydraulic conductivity along the horizontal direction will overestimate the infiltration rate, while that along the vertical direction will underestimate the infiltration rate significantly during rainstorm periods. The SAI model is adopted in the spatial averaging hydrological model developed by the authors, and the results prove that it can be applied in the macro-scale hydrological and land surface process modeling in a promising way.
Newton, J. Stephen; Horner, Robert H.; Algozzine, Bob; Todd, Anne W.; Algozzine, Kate
2012-01-01
Members of Positive Behavior Interventions and Supports (PBIS) teams from 34 elementary schools participated in a Team-Initiated Problem Solving (TIPS) Workshop and follow-up technical assistance. Within the context of a randomized wait-list controlled trial, team members who were the first recipients of the TIPS intervention demonstrated greater…
Carla Simone de Lima Teixeira
2013-01-01
This paper proposes an approach for monitoring the average rate of defects per produced item in a finite production run, or order, of N items. In each cycle of m produced items, the last r items are inspected. For each inspected item the number of defects is counted, and the item is classified as approved if the number of defects satisfies the control-limit criterion. If all r items are approved, production continues; otherwise production is interrupted to search for special causes. The inspected items are discarded only when the process is stopped. After N items have been produced, an additional lot is produced to complete the quantity ordered, but these items do not undergo inspection. A finite discrete-state Markov chain is used to determine the state-transition probabilities, which enter the cost expressions used to determine the optimal monitoring strategy. This strategy is obtained by optimizing three parameters: the sampling interval (m), the retrospective sample size (r) and the control limit (LC). The parameters are obtained by direct search so as to minimize the average cost per produced item. A numerical example illustrates the proposal.
B. S. Daya Sagar
2001-01-01
This letter presents a brief framework based on nonlinear morphological transformations to generate a self-organized critical connectivity network map (SOCCNM) in 2-dimensional space. This simple and elegant framework is implemented on a section that contains a few simulated water bodies to generate the SOCCNM. It is based on the postulate that randomly situated surface water bodies of various sizes and shapes self-organize during the flooding process.
2010-01-01
7 CFR § 1209.12 — On average. "On average" means a rolling average of production or imports during the last two ...
Estimating a weighted average of stratum-specific parameters.
Brumback, Babette A; Winner, Larry H; Casella, George; Ghosh, Malay; Hall, Allyson; Zhang, Jianyi; Chorba, Lorna; Duncan, Paul
2008-10-30
This article investigates estimators of a weighted average of stratum-specific univariate parameters and compares them in terms of a design-based estimate of mean-squared error (MSE). The research is motivated by a stratified survey sample of Florida Medicaid beneficiaries, in which the parameters are population stratum means and the weights are known and determined by the population sampling frame. Assuming heterogeneous parameters, it is common to estimate the weighted average with the weighted sum of sample stratum means; under homogeneity, one ignores the known weights in favor of precision weighting. Adaptive estimators arise from random effects models for the parameters. We propose adaptive estimators motivated from these random effects models, but we compare their design-based performance. We further propose selecting the tuning parameter to minimize a design-based estimate of mean-squared error. This differs from the model-based approach of selecting the tuning parameter to accurately represent the heterogeneity of stratum means. Our design-based approach effectively downweights strata with small weights in the assessment of homogeneity, which can lead to a smaller MSE. We compare the standard random effects model with identically distributed parameters to a novel alternative, which models the variances of the parameters as inversely proportional to the known weights. We also present theoretical and computational details for estimators based on a general class of random effects models. The methods are applied to estimate average satisfaction with health plan and care among Florida beneficiaries just prior to Medicaid reform.
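The two classical estimators contrasted above, and a convex combination of them with a tuning parameter, can be sketched as follows. This is an illustrative sketch of the trade-off only; the paper's adaptive estimators are derived from random effects models with the tuning parameter chosen by a design-based MSE criterion:

```python
def stratum_estimators(means, variances, ns, W):
    """Two classical estimators of the weighted average sum_k W_k * mu_k:
    (a) the design-weighted sum of stratum sample means (heterogeneity),
    (b) the precision-weighted pooled mean that ignores the known
        weights W (homogeneity)."""
    design = sum(w * m for w, m in zip(W, means))
    prec = [n / v for n, v in zip(ns, variances)]  # 1 / Var(stratum mean)
    pooled = sum(p * m for p, m in zip(prec, means)) / sum(prec)
    return design, pooled

def adaptive(means, variances, ns, W, t):
    """Convex combination with tuning parameter t in [0, 1]: t = 1 gives
    the design-weighted estimator, t = 0 the pooled one."""
    design, pooled = stratum_estimators(means, variances, ns, W)
    return t * design + (1.0 - t) * pooled
```

Under homogeneity the pooled estimator has smaller variance; under heterogeneity it is biased for the weighted average, which is exactly the tension the tuning parameter mediates.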
Precise Asymptotics of Complete Moment Convergence on Moving Average
Zheng Yan LIN; Hui ZHOU
2012-01-01
Let {ξ_i, -∞ < i < ∞} be a doubly infinite sequence of identically distributed φ-mixing random variables with zero means and finite variances, let {a_i, -∞ < i < ∞} be an absolutely summable sequence of real numbers, and let X_k = ∑_{i=-∞}^{+∞} a_i ξ_{i+k} be a moving average process. Under some proper moment conditions, the precise asymptotics lim_{ε↘0} (1/(-log ε)) ∑_{n=1}^{∞} (1/n²) E[S_n² I{|S_n| ≥ nε}] = 2EZ² are established, where Z ~ N(0, γ²) with γ² = σ²(∑_{i=-∞}^{∞} a_i)², together with lim_{ε↘0} ε^{2δ} ∑_{n=2}^{∞} ((log n)^{δ-1}/n²) E[S_n² I{|S_n| ≥ √(n log n) ε}] = (γ^{2δ+2}/δ) E|N|^{2δ+2}.
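The variance parameter γ² = σ²(∑ a_i)² can be checked numerically for a moving average with finitely many nonzero coefficients: the empirical variance of S_n/√n should approach γ². The simulation below uses Gaussian innovations for simplicity, an illustrative assumption (the abstract concerns φ-mixing innovations):

```python
import math
import random

def var_normalized_sum(a, n=200, reps=500, sigma=1.0, seed=5):
    """Empirical variance of S_n / sqrt(n) for the moving average
    X_k = sum_i a[i] * xi[k + i] with iid N(0, sigma^2) innovations;
    for large n it should be close to gamma^2 = sigma^2 * (sum(a))^2."""
    rng = random.Random(seed)
    q = len(a)
    vals = []
    for _ in range(reps):
        xi = [rng.gauss(0.0, sigma) for _ in range(n + q)]
        s = sum(a[i] * xi[k + i] for k in range(n) for i in range(q))
        vals.append(s / math.sqrt(n))
    mean = sum(vals) / reps
    return sum((v - mean) ** 2 for v in vals) / reps
```

With a = [0.5, 0.3, 0.2] we have ∑a_i = 1 and σ = 1, so the empirical variance should be near γ² = 1, up to O(q/n) edge effects and Monte Carlo error.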
A random walk with a branching system in random environments
Ying-qiu LI; Xu LI; Quan-sheng LIU
2007-01-01
We consider a branching random walk in random environments, where the particles are reproduced as a branching process with a random environment (in time), and move independently as a random walk on Z with a random environment (in locations). We obtain asymptotic properties of the position of the rightmost particle at time n, revealing a phase transition phenomenon of the system.
Hellaby, Charles
2012-01-01
A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.
A Statistical Mechanics Approach to Approximate Analytical Bootstrap Averages
Malzahn, Dorthe; Opper, Manfred
2003-01-01
We apply the replica method of Statistical Physics combined with a variational method to the approximate analytical computation of bootstrap averages for estimating the generalization error. We demonstrate our approach on regression with Gaussian processes and compare our results with averages...
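The quantity being approximated can be made concrete with a plain Monte Carlo bootstrap of the generalization error, here for simple least-squares regression. This is an illustrative baseline only; the paper replaces such sampling with an analytical replica/variational computation:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares fit of a straight line; returns a predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) or 1e-12
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return lambda x: my + slope * (x - mx)

def bootstrap_error(xs, ys, fit, loss, B=200, seed=11):
    """Monte Carlo bootstrap average of the out-of-bag prediction loss:
    refit on B resamples and evaluate each fit on the left-out points."""
    rng = random.Random(seed)
    n = len(xs)
    losses = []
    for _ in range(B):
        idx = [rng.randrange(n) for _ in range(n)]
        chosen = set(idx)
        oob = [i for i in range(n) if i not in chosen]
        if not oob:
            continue  # degenerate resample with no held-out points
        model = fit([xs[i] for i in idx], [ys[i] for i in idx])
        losses.append(sum(loss(model(xs[i]), ys[i]) for i in oob) / len(oob))
    return sum(losses) / len(losses)
```

For data generated as y = 2x plus noise of standard deviation 0.1, the bootstrap out-of-bag squared loss should sit near the noise variance 0.01.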
Shah Nita H.
2015-01-01
An economic production quantity (EPQ) model is analyzed for trended demand, with units in inventory subject to deterioration at a constant rate. The system allows rework of imperfect units, and the preventive maintenance time is random. A search method is used to study the model, and the proposed methodology is validated by a numerical example. A sensitivity analysis is carried out to determine the critical model parameters. It is observed that the rate of change of demand and the deterioration rate have a significant impact on the decision variables and on the total cost of the inventory system. The model is highly sensitive to the production and demand rates.
Knowlden, Adam P; Sharma, Manoj
2014-09-01
Family-and-home-based interventions are an important vehicle for preventing childhood obesity. Systematic process evaluations have not been routinely conducted in assessment of these interventions. The purpose of this study was to plan and conduct a process evaluation of the Enabling Mothers to Prevent Pediatric Obesity Through Web-Based Learning and Reciprocal Determinism (EMPOWER) randomized control trial. The trial was composed of two web-based, mother-centered interventions for prevention of obesity in children between 4 and 6 years of age. Process evaluation used the components of program fidelity, dose delivered, dose received, context, reach, and recruitment. Categorical process evaluation data (program fidelity, dose delivered, dose exposure, and context) were assessed using Program Implementation Index (PII) values. Continuous process evaluation variables (dose satisfaction and recruitment) were assessed using ANOVA tests to evaluate mean differences between groups (experimental and control) and sessions (sessions 1 through 5). Process evaluation results found that both groups (experimental and control) were equivalent, and interventions were administered as planned. Analysis of web-based intervention process objectives requires tailoring of process evaluation models for online delivery. Dissemination of process evaluation results can advance best practices for implementing effective online health promotion programs.
Shafer, Michael S; Prendergast, Michael; Melnick, Gerald; Stein, Lynda A; Welsh, Wayne N
2014-01-01
The Organizational Process Improvement Intervention (OPII), conducted by the NIDA-funded Criminal Justice Drug Abuse Treatment Studies consortium of nine research centers, examined an organizational intervention to improve the processes used in correctional settings to assess substance abusing offenders, develop case plans, transfer this information to community-based treatment agencies, and monitor the services provided by these community based treatment agencies. A multi-site cluster randomized design was used to evaluate an inter-agency organizational process improvement intervention among dyads of correctional agencies and community based treatment agencies. Linked correctional and community based agencies were clustered among nine (9) research centers and randomly assigned to an early or delayed intervention condition. Participants included administrators, managers, and line staff from the participating agencies; some participants served on interagency change teams while other participants performed agency tasks related to offender services. A manualized organizational intervention that includes the use of external organizational coaches was applied to create and support interagency change teams that proceeded through a four-step process over a planned intervention period of 12 months. The primary outcome of the process improvement intervention was to improve processes associated with the assessment, case planning, service referral and service provision processes within the linked organizations. Providing substance abuse offenders with coordinated treatment and access to community-based services is critical to reducing offender recidivism. Results from this study protocol will provide new and critical information on strategies and processes that improve the assessment and case planning for such offenders as they transition between correctional and community based systems and settings. Further, this study extends current knowledge of and methods for, the study
Industrial Applications of High Average Power FELS
Shinn, Michelle D
2005-01-01
The use of lasers for material processing continues to expand, and the annual sales of such lasers exceed $1 B (US). Large-scale (many m2) processing of materials requires the economical production of laser powers in the tens of kilowatts, and therefore these are not yet commercial processes, although they have been demonstrated. The development of FELs based on superconducting RF (SRF) linac technology provides a scaleable path to laser outputs above 50 kW in the IR, rendering these applications economically viable, since the cost/photon drops as the output power increases. This approach also enables high average power (~1 kW) output in the UV spectrum. Such FELs will provide quasi-cw (PRFs in the tens of MHz), ultrafast (pulse width ~1 ps) output with very high beam quality. This talk will provide an overview of applications tests by our facility's users such as pulsed laser deposition, laser ablation, and laser surface modification, as well as present plans that will be tested with our upgraded FELs. These upg...
Sweeney, Dean; Quinlan, Leo R; OLaighin, Gearoid
2015-08-01
The use of NMES has evolved over the last five decades. Technological advancements have transformed these once complex systems into user-friendly devices with enhanced control functions, leading to new applications of NMES being investigated. The use of Randomized Control Trial (RCT) methodology in evaluating the effectiveness of new and existing applications of NMES is a demanding process, adding time and cost to translation into clinical practice. Poor quality trials may result in poor evidence of NMES effectiveness. In this paper some of the key challenges encountered in NMES clinical trials are identified, with the aim of proposing a solution to address these challenges through the adoption of smartphone technology. The design and evaluation of a smartphone application that provides automatic blinded randomization and facilitates the wireless temporal control of a portable Bluetooth-enabled NMES device are presented.
Vocks, S; Schulte, D; Busch, M; Grönemeyer, D; Herpertz, S; Suchan, B
2011-08-01
Previous neuroimaging studies have demonstrated abnormalities in visual body image processing in anorexia and bulimia nervosa, possibly underlying body image disturbance in these disorders. Although cognitive behavioural interventions have been shown to be successful in improving body image disturbance in eating disorders, no randomized controlled study has yet analysed treatment-induced changes in neuronal correlates of visual body image processing. Altogether, 32 females with eating disorders were randomly assigned either to a manualized cognitive behavioural body image therapy consisting of 10 group sessions, or to a waiting list control condition. Using functional magnetic resonance imaging, brain responses to viewing photographs of one's own and another female's body taken from 16 standardized perspectives while participants were wearing a uniform bikini were acquired before and after the intervention and the waiting time, respectively. Data indicate a general blood oxygen level dependent signal enhancement in response to looking at photographs of one's own body from pre- to post-treatment, whereas exclusively in the control group activation decreases from pre- to post-waiting time were observed. Focused activation increases from pre- to post-treatment were found in the left middle temporal gyrus covering the coordinates of the extrastriate body area and in bilateral frontal structures including the middle frontal gyrus. Results point to a more intense neuronal processing of one's own body after the cognitive behavioural body image therapy in cortical regions that are responsible for the visual processing of the human body and for self-awareness. © Cambridge University Press 2010
Time averages, recurrence and transience in the stochastic replicator dynamics
Hofbauer, Josef; 10.1214/08-AAP577
2009-01-01
We investigate the long-run behavior of a stochastic replicator process, which describes game dynamics for a symmetric two-player game under aggregate shocks. We establish an averaging principle that relates time averages of the process and Nash equilibria of a suitably modified game. Furthermore, a sufficient condition for transience is given in terms of mixed equilibria and definiteness of the payoff matrix. We also present necessary and sufficient conditions for stochastic stability of pure equilibria.
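The averaging principle relating time averages to equilibria can be illustrated on the deterministic replicator dynamics for zero-sum rock-paper-scissors, whose interior orbits cycle around the Nash equilibrium (1/3, 1/3, 1/3) while their time averages converge to it. This is a simplified sketch (no aggregate shocks; the payoff matrix and initial condition are standard textbook choices, not taken from the paper):

```python
import numpy as np

# Standard zero-sum rock-paper-scissors payoffs (illustrative, not from the paper).
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])

x = np.array([0.5, 0.3, 0.2])       # initial mixed strategy in the interior
dt, steps = 0.001, 200_000
acc = np.zeros(3)
for _ in range(steps):
    f = A @ x                        # payoffs of the pure strategies
    x = x + dt * x * (f - x @ f)     # replicator vector field, Euler step
    x = np.clip(x, 1e-12, None)
    x /= x.sum()                     # keep the state on the simplex
    acc += x

time_avg = acc / steps
print("time average of the orbit:", np.round(time_avg, 3))
```

The instantaneous state never settles, but the accumulated time average lands close to the Nash equilibrium, which is the deterministic shadow of the paper's averaging principle.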
Gurau, Razvan
2017-01-01
Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents in detail the theory and its applications to physics. The recent detections of the Higgs boson at the LHC and gravitational waves at LIGO mark new milestones in Physics confirming long standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce today the need to find an underlying common framework of the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....
Edgington, Eugene
2007-01-01
Contents: Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani...
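The central idea of a randomization test is that no random-sampling assumption is needed: the null distribution is built by re-randomizing the observed units themselves. A minimal sketch with made-up scores (all numbers are hypothetical, chosen only to illustrate the mechanics):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two small experimental groups (hypothetical scores; no random sampling assumed).
treatment = np.array([12.1, 14.3, 13.8, 15.0, 13.2])
control = np.array([11.0, 12.4, 11.9, 12.8, 11.5])

observed = treatment.mean() - control.mean()
pooled = np.concatenate([treatment, control])

# Randomization test: re-randomize group labels and recompute the statistic.
B, count = 10_000, 0
for _ in range(B):
    perm = rng.permutation(pooled)
    diff = perm[:5].mean() - perm[5:].mean()
    if diff >= observed:
        count += 1

p_value = count / B
print(f"one-sided randomization p-value: {p_value:.4f}")
```

With only 10 units the full set of 252 label assignments could be enumerated exactly; random permutations are used here to show the general recipe.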
Level sets of multiple ergodic averages
Ai-Hua, Fan; Ma, Ji-Hua
2011-01-01
We propose to study multiple ergodic averages from multifractal analysis point of view. In some special cases in the symbolic dynamics, Hausdorff dimensions of the level sets of multiple ergodic average limit are determined by using Riesz products.
Relton, Caroline L; Davey Smith, George
2012-01-01
The burgeoning interest in the field of epigenetics has precipitated the need to develop approaches to strengthen causal inference when considering the role of epigenetic mediators of environmental exposures on disease risk. Epigenetic markers, like any other molecular biomarker, are vulnerable to confounding and reverse causation. Here, we present a strategy, based on the well-established framework of Mendelian randomization, to interrogate the causal relationships between exposure, DNA methylation and outcome. The two-step approach first uses a genetic proxy for the exposure of interest to assess the causal relationship between exposure and methylation. A second step then utilizes a genetic proxy for DNA methylation to interrogate the causal relationship between DNA methylation and outcome. The rationale, origins, methodology, advantages and limitations of this novel strategy are presented. PMID:22422451
Deneubourg, J. L.; Aron, S.; Goss, S.; Pasteels, J. M.; Duerinck, G.
1986-10-01
Two major types of foraging organisation in ants are described and compared, being illustrated with experimental data and mathematical models. The first concerns large colonies of identical, unspecialised foragers. The communication and interaction between foragers, together with their randomness, generate collective and efficient structures. The second concerns small societies of deterministic and specialised foragers, rarely communicating with one another. The first organisation is discussed in relation to the different recruitment mechanisms, trail-following error, quality and degree of aggregation of food sources, and territorial marking, and is the key to many types of collective behaviour in social insects. The second is discussed in relation to spatial specialisation, foraging density, individual learning and genetic programming. The two organisations may be associated in the same colony. The choice of organisation is discussed in relation to colony size and the size and predictability of food sources.
Pulsar average waveforms and hollow cone beam models
Backer, D. C.
1975-01-01
An analysis of pulsar average waveforms at radio frequencies from 40 MHz to 15 GHz is presented. The analysis is based on the hypothesis that the observer sees one cut of a hollow-cone beam pattern and that stationary properties of the emission vary over the cone. The distributions of apparent cone widths for different observed forms of the average pulse profiles (single, double/unresolved, double/resolved, triple and multiple) are in modest agreement with a model of a circular hollow-cone beam with random observer-spin axis orientation, a random cone axis-spin axis alignment, and a small range of physical hollow-cone parameters for all objects.
Exact Averaging of Stochastic Equations for Flow in Porous Media
Karasaki, Kenzi; Shvidler, Mark; Karasaki, Kenzi
2008-03-15
It is well known that, at present, exact averaging of the equations for flow and transport in random porous media has been proposed only for limited special fields. Moreover, approximate averaging methods (for example, the convergence behavior and the accuracy of truncated perturbation series) are not well studied, and in addition, calculation of high-order perturbations is very complicated. These problems have long stimulated attempts to answer the question: do there exist exact and sufficiently general forms of averaged equations? Here, we present an approach for finding the general, exactly averaged system of basic equations for steady flow with sources in unbounded stochastically homogeneous fields. We do this by using (1) the existence and some general properties of Green's functions for the appropriate stochastic problem, and (2) some information about the random field of conductivity. This approach enables us to find the form of the averaged equations without directly solving the stochastic equations or invoking the usual assumptions about small parameters. In the common case of a stochastically homogeneous conductivity field we present the exactly averaged new basic nonlocal equation with a unique kernel-vector. We show that in the case of some type of global symmetry (isotropy, transversal isotropy, or orthotropy), we can in the same way derive, for three-dimensional and two-dimensional flow, the exact averaged nonlocal equations with a unique kernel-tensor. When global symmetry does not exist, the nonlocal equation with a kernel-tensor involves complications and leads to an ill-posed problem.
Accurate Switched-Voltage voltage averaging circuit
金光, 一幸; 松本, 寛樹
2006-01-01
Abstract: This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for the NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging circuit and an SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. The performance of this circuit is verified by PSpice simulations.
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
David S Rebergen; David J Bruinvels; Chris M Bos; Allard J van der Beek; Willem van Mechelen
2010-01-01
...) to the Dutch guideline on the management of common mental health problems and its effect on return to work as part of the process evaluation of a trial comparing adherence to the guideline to care as usual...
Mingle Guo
2012-01-01
The complete moment convergence of weighted sums for arrays of rowwise negatively associated random variables is investigated. Some sufficient conditions for complete moment convergence of weighted sums for arrays of rowwise negatively associated random variables are established. Moreover, the results of Baek et al. (2008) are complemented. As an application, the complete moment convergence of moving average processes based on negatively associated random sequences is obtained, which improves the result of Li et al. (2004).
Kunida, Katsuyuki; Matsuda, Michiyuki; Aoki, Kazuhiro
2012-05-15
Cell migration plays an important role in many physiological processes. Rho GTPases (Rac1, Cdc42, RhoA) and phosphatidylinositols have been extensively studied in directional cell migration. However, it remains unclear how Rho GTPases and phosphatidylinositols regulate random cell migration in space and time. We have attempted to address this issue using fluorescence resonance energy transfer (FRET) imaging and statistical signal processing. First, we acquired time-lapse images of random migration in HT-1080 fibrosarcoma cells expressing FRET biosensors of Rho GTPases and phosphatidylinositols. We developed an image-processing algorithm to extract FRET values and velocities at the leading edge of migrating cells. Auto- and cross-correlation analysis suggested the involvement of feedback regulation among Rac1, phosphatidylinositols and membrane protrusions. To verify the feedback regulation, we employed acute inhibition of the signaling pathway with pharmaceutical inhibitors. The inhibition of actin polymerization decreased Rac1 activity, indicating the presence of positive feedback from actin polymerization to Rac1. Furthermore, treatment with a PI3-kinase inhibitor induced an adaptation of Rac1 activity, i.e. a transient reduction of Rac1 activity followed by recovery to the basal level. In silico modeling that reproduced the adaptation predicted the existence of a negative feedback loop from Rac1 to actin polymerization. Finally, we identified MLCK as the probable controlling factor in the negative feedback. These findings quantitatively demonstrate positive and negative feedback loops that involve actin, Rac1 and MLCK, and account for the ordered patterns of membrane dynamics observed in randomly migrating cells.
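The lead-lag logic behind such cross-correlation analysis can be sketched with synthetic series (the signals, the 7-sample lag, and the noise level below are illustrative assumptions, not the study's data): the peak of the normalized cross-correlation recovers which signal leads and by how much.

```python
import numpy as np

rng = np.random.default_rng(5)

# Series b follows series a with a 7-sample delay, mimicking a lead-lag
# relation between two measured time courses (illustrative only).
n, lag = 500, 7
a = rng.standard_normal(n)
b = np.roll(a, lag) + 0.3 * rng.standard_normal(n)

def xcorr(u, v, max_lag=20):
    # Normalized cross-correlation of u[t] with v[t+k] over candidate lags k.
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    lags = np.arange(-max_lag, max_lag + 1)
    c = np.array([np.mean(u[max(0, -k):n - max(0, k)] * v[max(0, k):n - max(0, -k)])
                  for k in lags])
    return lags, c

lags, c = xcorr(a, b)
print("estimated lag:", lags[np.argmax(c)])   # peak sits at the true delay
```

In the study, the analogous peak positions between edge velocity and biosensor traces are what suggest the direction of the feedback loops.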
Average-Time Games on Timed Automata
Jurdzinski, Marcin; Trivedi, Ashutosh
2009-01-01
An average-time game is played on the infinite graph of configurations of a finite timed automaton. The two players, Min and Max, construct an infinite run of the automaton by taking turns to perform a timed transition. Player Min wants to minimise the average time per transition and player Max wants to maximise it. A solution of average-time games is presented using a reduction to an average-price game on a finite graph. A direct consequence is an elementary proof of determinacy for average-tim...
Yong, Liu; Dingfa, Huang; Yong, Jiang
2012-07-20
Temporal phase unwrapping is an important method for shape measurement in structured light projection. Its measurement errors mainly come from both the camera noise and nonlinearity. Analysis found that least-squares fitting cannot completely eliminate nonlinear errors, though it can significantly reduce the random errors. To further reduce the measurement errors of current temporal phase unwrapping algorithms, in this paper, we proposed a phase averaging method (PAM) in which an additional fringe sequence at the highest fringe density is employed in the process of data processing and the phase offset of each set of the four frames is carefully chosen according to the period of the phase nonlinear errors, based on fast classical temporal phase unwrapping algorithms. This method can decrease both the random errors and the systematic errors with statistical averaging. In addition, the length of the additional fringe sequence can be changed flexibly according to the precision of the measurement. Theoretical analysis and simulation experiment results showed the validity of the proposed method.
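The statistical-averaging idea can be reproduced numerically: unwrap several independently noisy wrapped-phase measurements and average them, so random phase errors shrink roughly as 1/sqrt(K). The phase ramp, noise level, and repetition count below are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(4)

# True continuous phase ramp (illustrative stand-in for a fringe measurement).
true_phase = np.linspace(0, 20 * np.pi, 1000)

def wrapped_measurement():
    noisy = true_phase + 0.05 * rng.standard_normal(true_phase.size)
    return np.angle(np.exp(1j * noisy))          # wrap into (-pi, pi]

# Unwrap each repetition, then average: random phase errors shrink statistically.
K = 16
unwrapped = [np.unwrap(wrapped_measurement()) for _ in range(K)]
avg_phase = np.mean(unwrapped, axis=0)

err_single = np.std(unwrapped[0] - true_phase)
err_avg = np.std(avg_phase - true_phase)
print(f"phase error std: single {err_single:.4f} rad, averaged {err_avg:.4f} rad")
```

The paper goes further by also choosing the phase offsets of the extra fringe sequence to cancel the periodic nonlinear errors, which plain averaging alone does not address.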
Goutte, Stéphane; Russo, Francesco
2012-01-01
Given a process with independent increments $X$ (not necessarily a martingale) and a large class of square integrable r.v. $H=f(X_T)$, $f$ being the Fourier transform of a finite measure $\\mu$, we provide explicit Kunita-Watanabe and F\\"ollmer-Schweizer decompositions. The representation is expressed by means of two significant maps: the expectation and derivative operators related to the characteristics of $X$. We also provide an explicit expression for the variance optimal error when hedging the claim $H$ with underlying process $X$. Those questions are motivated by finding the solution of the celebrated problem of global and local quadratic risk minimization in mathematical finance.
Mauricio Bustamante Jamid
2015-07-01
In Colombia, study and experimental design focused on agro-industrial processes are limited. It is evident that the country's comparative advantages in climatic conditions have been underexploited, as have the advantages our species offer, which, favored by our geographical situation, would benefit social and commercial development and the use of organic products, supported by research processes that apply experimental models to optimize operations. Consequently, the scarcity of established alternatives and of appropriate infrastructure affects the economic sector on a large scale, undermining the conditions necessary for competitiveness, which are fundamental in a global context (Jaramillo C. F., 1990-2000).
Nouchi, Rui; Saito, Toshiki; Nouchi, Haruka; Kawashima, Ryuta
2016-01-01
Background: Processing speed training using a 1-year intervention period improves cognitive functions and emotional states of elderly people. Nevertheless, it remains unclear whether short-term processing speed training such as 4 weeks can benefit elderly people. This study was designed to investigate effects of 4 weeks of processing speed training on cognitive functions and emotional states of elderly people. Methods: We used a single-blinded randomized control trial (RCT). Seventy-two older adults were assigned randomly to two groups: a processing speed training game (PSTG) group and knowledge quiz training game (KQTG) group, an active control group. In PSTG, participants were asked to play PSTG (12 processing speed games) for 15 min, during five sessions per week, for 4 weeks. In the KQTG group, participants were asked to play KQTG (four knowledge quizzes) for 15 min, during five sessions per week, for 4 weeks. We measured several cognitive functions and emotional states before and after the 4 week intervention period. Results: Our results revealed that PSTG improved performances in processing speed and inhibition compared to KQTG, but did not improve performance in reasoning, shifting, short term/working memory, and episodic memory. Moreover, PSTG reduced the depressive mood score as measured by the Profile of Mood State compared to KQTG during the 4 week intervention period, but did not change other emotional measures. Discussion: This RCT first provided scientific evidence related to small acute benefits of 4 week PSTG on processing speed, inhibition, and depressive mood in healthy elderly people. We discuss possible mechanisms for improvements in processing speed and inhibition and reduction of the depressive mood. Trial registration: This trial was registered in The University Hospital Medical Information Network Clinical Trials Registry (UMIN000022250).
Rohrbach, F; Vesztergombi, G
1997-01-01
In the near future, computer performance will be completely determined by how long it takes to access memory. There are bottlenecks in memory latency and memory-to-processor interface bandwidth. The IRAM initiative could be the answer by putting the Processor-In-Memory (PIM). Starting from the massively parallel processing concept, one reaches a similar conclusion. The MPPC (Massively Parallel Processing Collaboration) project and the 8K-processor ASTRA machine (Associative String Test bench for Research & Applications) developed at CERN can be regarded as forerunners of the IRAM concept. The computing power of the ASTRA machine, regarded as an IRAM with 64 one-bit processors on a 64×64 bit-matrix memory chip, has been demonstrated by running statistical physics algorithms: one-dimensional stochastic cellular automata, as a simple model for dynamical phase transitions. As a relevant result for physics, the damage spreading of this model has been investigated.
Basse-O'Connor, Andreas; Rosiński, Jan
2013-01-01
We characterize the finite variation property for stationary increment mixed moving averages driven by infinitely divisible random measures. Such processes include fractional and moving average processes driven by Levy processes, and also their mixtures. We establish two types of zero-one laws...
Basse-O'Connor, Andreas
2012-01-01
We characterize the finite variation property for stationary increment mixed moving averages driven by infinitely divisible random measures. Such processes include fractional and moving average processes driven by Levy processes, and also their mixtures. We establish two types of zero-one laws for the finite variation property. We also consider some examples to illustrate our results.
Tan, Khoon Kiat; Chan, Sally Wai-Chi; Wang, Wenru; Vehviläinen-Julkunen, Katri
2016-01-01
To determine the feasibility of a salutogenesis-based self-care program on quality of life, sense of coherence, activation and resilience among older community dwellers. This is a feasibility randomized controlled trial. Sixty-four older community-dwellers were recruited from a Singapore senior activity center and randomly assigned to intervention and control groups. The intervention group attended a 12-week Resource Enhancement and Activation Program. The outcomes were assessed with the Chinese versions of World Health Organization Quality of Life Scale, Sense of Coherence, Patient Activation Measure, and Connor-Davidson Resilience Scale. Process evaluation was conducted using focus groups with the intervention group. At the end of the program, the intervention group showed significant improvement in the Sense of Coherence scale and the psychological subscale of the WHO Quality of Life scale compared with the control group. Three themes emerged from the process evaluation: participation in the program, reflection on the experience, and improving the experience. A salutogenic self-care approach could be a potential health promotion strategy for older people. With improved sense of coherence and psychological aspect of quality of life, older people's self-care ability may improve, leading to better health and better quality of life. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Nouchi, Rui; Taki, Yasuyuki; Takeuchi, Hikaru; Sekiguchi, Atsushi; Hashizume, Hiroshi; Nozawa, Takayuki; Nouchi, Haruka; Kawashima, Ryuta
2014-04-01
Previous reports have described that long-term combination exercise training improves cognitive functions in healthy elderly people. This study investigates the effects of 4 weeks of short-term combination exercise training on various cognitive functions of elderly people. We conducted a single-blinded randomized controlled trial with two parallel groups. Sixty-four healthy older adults were assigned randomly to a combination exercise training group or a waiting list control group. Participants in the combination exercise training group participated in the combination exercise training (aerobic, strength, and stretching exercise trainings) 3 days per week during 4 weeks (12 workouts total). The waiting list control group did not participate in the combination exercise training. Measures of the cognitive functions (executive functions, episodic memory, working memory, reading ability, attention, and processing speed) were conducted before and after training. Results showed that the combination exercise training improved executive functions, episodic memory, and processing speed compared to those attributes of the waiting list control group. This report was the first of a study demonstrating the beneficial effects of short-term combination exercise training on diverse cognitive functions of elderly people. Our study provides important evidence of the short-term combination exercise's effectiveness.
The Optimal Selection for Restricted Linear Models with Average Estimator
Qichang Xie
2014-01-01
The essential task of risk investment is to select an optimal tracking portfolio among various portfolios. Statistically, this process can be achieved by choosing an optimal restricted linear model. This paper develops a statistical procedure to do this, based on selecting appropriate weights for averaging approximately restricted models. The method of weighted average least squares is adopted to estimate the approximately restricted models under a dependent error setting. The optimal weights are selected by minimizing a k-class generalized information criterion (k-GIC), which is an estimate of the average squared error from the model average fit. This model selection procedure is shown to be asymptotically optimal in the sense of obtaining the lowest possible average squared error. Monte Carlo simulations illustrate that the suggested method has efficiency comparable to some alternative model selection techniques.
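A crude numerical analogue of the weight-selection step: average the predictions of several restricted (here, polynomial) candidate models using simplex weights chosen to minimize a squared-error criterion on held-out data. The grid search below is a stand-in for minimizing the k-GIC, and the data and candidate models are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Candidate restricted models: polynomial fits of different degrees (illustrative).
x = rng.uniform(-1, 1, 60)
y = np.sin(2 * x) + 0.2 * rng.standard_normal(60)
x_tr, y_tr, x_va, y_va = x[:40], y[:40], x[40:], y[40:]

preds = []
for deg in (1, 3, 7):
    coef = np.polyfit(x_tr, y_tr, deg)
    preds.append(np.polyval(coef, x_va))
preds = np.array(preds)                      # shape (3, n_validation)

# Grid search over simplex weights minimizing validation squared error
# (a crude stand-in for minimizing the paper's k-GIC criterion).
best_w, best_err = None, np.inf
grid = np.linspace(0, 1, 21)
for w1 in grid:
    for w2 in grid:
        if w1 + w2 > 1:
            continue
        w = np.array([w1, w2, 1 - w1 - w2])
        err = np.mean((w @ preds - y_va) ** 2)
        if err < best_err:
            best_w, best_err = w, err

print("optimal weights:", best_w, "validation MSE:", round(float(best_err), 4))
```

Because the grid contains the vertex weights, the averaged model can never do worse on this criterion than the best single candidate, which is the intuition behind model averaging.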
Reducing Noise by Repetition: Introduction to Signal Averaging
Hassan, Umer; Anwar, Muhammad Sabieh
2010-01-01
This paper describes theory and experiments, taken from biophysics and physiological measurements, to illustrate the technique of signal averaging. In the process, students are introduced to the basic concepts of signal processing, such as digital filtering, Fourier transformation, baseline correction, pink and Gaussian noise, and the cross- and…
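The core effect, noise standard deviation falling as 1/sqrt(N) while the repeatable signal is preserved, can be reproduced in a few lines. The synthetic 5 Hz "evoked response" and the noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# A weak periodic "physiological" signal buried in Gaussian noise (illustrative).
t = np.linspace(0, 1, 500)
signal = 0.5 * np.sin(2 * np.pi * 5 * t)

def noisy_trial():
    return signal + rng.standard_normal(t.size)   # noise std = 1, so SNR << 1

# Averaging N aligned repetitions shrinks the noise by a factor of sqrt(N).
N = 400
avg = np.mean([noisy_trial() for _ in range(N)], axis=0)

residual_single = np.std(noisy_trial() - signal)
residual_avg = np.std(avg - signal)
print(f"noise std: single trial {residual_single:.3f}, after averaging {residual_avg:.3f}")
```

With N = 400 the residual noise drops by roughly a factor of 20, which is exactly the sqrt(N) gain that makes signal averaging useful for weak physiological measurements.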
WIDTHS AND AVERAGE WIDTHS OF SOBOLEV CLASSES
刘永平; 许贵桥
2003-01-01
This paper concerns the problem of the Kolmogorov n-width, the linear n-width, the Gel'fand n-width and the Bernstein n-width of Sobolev classes of the periodic multivariate functions in the space Lp(Td), and the average Bernstein σ-width, average Kolmogorov σ-widths, and average linear σ-widths of Sobolev classes of the multivariate quantities.
Victoria Jane Palmer
2016-10-01
Background: Process evaluations are essential to understand the contextual, relational, and organizational and system factors of complex interventions. The guidance for developing process evaluations for randomized controlled trials (RCTs) has, until recently, been fairly limited. Method/Design: A nested process evaluation (NPE) was designed and embedded across all stages of a stepped wedge cluster RCT called the CORE study. The aim of the CORE study is to test the effectiveness of an experience-based codesign methodology for improving psychosocial recovery outcomes for people living with severe mental illness (service users). Process evaluation data collection combines qualitative and quantitative methods with four aims: (1) to describe organizational characteristics, service models, policy contexts, and government reforms and examine the interaction of these with the intervention; (2) to understand how the codesign intervention works, the cluster variability in implementation, and whether the intervention is or is not sustained in different settings; (3) to assist in the interpretation of the primary and secondary outcomes and determine whether the causal assumptions underpinning the codesign interventions are accurate; and (4) to determine the impact of a purposefully designed engagement model on broader study retention and knowledge transfer in the trial. Discussion: Process evaluations require prespecified study protocols, but finding a balance between their iterative nature and the structure offered by protocol development is an important step forward. Taking this step will advance the role of qualitative research within trials research and enable more focused data collection to occur at strategic points within studies.
Quantum random number generator
Stipcevic, M
2006-01-01
We report on a novel principle for the realization of a fast nondeterministic random number generator whose randomness relies on the intrinsic randomness of the quantum physical processes of photonic emission in semiconductors and subsequent detection by the photoelectric effect. Timing information of detected photons is used to generate binary random digits (bits). The bit extraction method, based on a restartable clock, theoretically eliminates both bias and autocorrelation while reaching an efficiency of almost 0.5 bits per random event. A prototype has been built and statistically tested.
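One simple way to see how photon timing can yield unbiased bits is to compare successive inter-arrival intervals: for exponentially spaced events, either ordering of two intervals is equally likely. The sketch below is purely illustrative (it is not the paper's restartable-clock scheme), but it shows the same 0.5-bits-per-event efficiency figure, since each pair of events yields one bit.

```python
import random

def bits_from_intervals(arrivals):
    """Extract bits by comparing successive inter-arrival intervals.

    Comparing two i.i.d. exponential intervals gives an unbiased bit
    (exact ties are discarded), so the yield approaches 0.5 bits per
    detected event.
    """
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    bits = []
    for t1, t2 in zip(gaps[::2], gaps[1::2]):
        if t1 != t2:                 # discard exact ties
            bits.append(1 if t1 < t2 else 0)
    return bits

# Simulated Poisson photon arrivals (stand-in for detector timestamps)
rng = random.Random(42)
t, arrivals = 0.0, []
for _ in range(20001):
    t += rng.expovariate(1.0)       # exponential inter-arrival times
    arrivals.append(t)

bits = bits_from_intervals(arrivals)
mean = sum(bits) / len(bits)        # should sit close to 0.5
```

With 20000 intervals the extractor emits about 10000 bits whose empirical mean is close to 0.5, regardless of the detector's mean rate.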
Stochastic averaging of quasi-Hamiltonian systems
朱位秋
1996-01-01
A stochastic averaging method is proposed for quasi-Hamiltonian systems (Hamiltonian systems with light damping subject to weak stochastic excitations). Various versions of the method, depending on whether the associated Hamiltonian systems are integrable or nonintegrable, resonant or nonresonant, are discussed. It is pointed out that the standard stochastic averaging method and the stochastic averaging method of energy envelope are special cases of the stochastic averaging method for quasi-Hamiltonian systems, and the results obtained by this method for several examples demonstrate its effectiveness.
NOAA Average Annual Salinity (3-Zone)
California Department of Resources — The 3-Zone Average Annual Salinity Digital Geography is a digital spatial framework developed using geographic information system (GIS) technology. These salinity...
Mander, Johannes; Kröger, Paula; Heidenreich, Thomas; Flückiger, Christoph; Lutz, Wolfgang; Bents, Hinrich; Barnow, Sven
2015-01-01
Mindfulness has its origins in an Eastern Buddhist tradition that is over 2500 years old and can be defined as a specific form of attention that is non-judgmental, purposeful, and focused on the present moment. It has become well established in cognitive-behavior therapy over the last decades, where it has been investigated in manualized group settings such as mindfulness-based stress reduction and mindfulness-based cognitive therapy. However, there is scarce research evidence on the effects of mindfulness as a treatment element in individual therapy. Consequently, the demand to investigate mindfulness under effectiveness conditions with trainee therapists has been highlighted. To fill this research gap, we designed the PrOMET study. In our study, we will investigate the effects of brief, audiotape-presented, session-introducing interventions with mindfulness elements conducted by trainee therapists and their patients at the beginning of individual therapy sessions, in a prospective, randomized, controlled design under naturalistic conditions with a total of 30 trainee therapists and 150 patients with depression and anxiety disorders in a large outpatient training center. We hypothesize that the session-introducing intervention with mindfulness elements will have positive effects on the primary outcomes of therapeutic alliance (Working Alliance Inventory) and general clinical symptomatology (Brief Symptom Checklist), in contrast to the session-introducing progressive muscle relaxation and treatment-as-usual control conditions. Treatment duration is 25 therapy sessions. Therapeutic alliance will be assessed on a session-to-session basis. Clinical symptomatology will be assessed at baseline and at sessions 5, 15, and 25. We will conduct multilevel modeling to address the nested data structure. The secondary outcome measures include depression, anxiety, interpersonal functioning, mindful awareness, and mindfulness during the sessions. The study results could provide important
Average sampling theorems for shift invariant subspaces
无
2000-01-01
The sampling theorem is one of the most powerful results in signal analysis. In this paper, we study the average sampling on shift invariant subspaces, e.g. wavelet subspaces. We show that if a subspace satisfies certain conditions, then every function in the subspace is uniquely determined and can be reconstructed by its local averages near certain sampling points. Examples are given.
Testing linearity against nonlinear moving average models
de Gooijer, J.G.; Brännäs, K.; Teräsvirta, T.
1998-01-01
Lagrange multiplier (LM) test statistics are derived for testing a linear moving average model against an additive smooth transition moving average model. The latter model is introduced in the paper. The small-sample performance of the proposed tests is evaluated in a Monte Carlo study and compared
Averaging Einstein's equations : The linearized case
Stoeger, William R.; Helmi, Amina; Torres, Diego F.
2007-01-01
We introduce a simple and straightforward averaging procedure, which is a generalization of one commonly used in electrodynamics, and show that it possesses all the characteristics we require for linearized averaging in general relativity and cosmology for weak-field and perturbed FLRW situations.
Average excitation potentials of air and aluminium
Bogaardt, M.; Koudijs, B.
1951-01-01
By means of a graphical method the average excitation potential I may be derived from experimental data. Average values for I_air and I_Al have been obtained. It is shown that in representing range/energy relations by means of Bethe's well-known formula, I has to be taken as a continuously changing function
Shiffman, Bernard
2010-01-01
We introduce several notions of `random fewnomials', i.e. random polynomials with a fixed number f of monomials of degree N. The f exponents are chosen at random and then the coefficients are chosen to be Gaussian random, mainly from the SU(m + 1) ensemble. The results give limiting formulas as N goes to infinity for the expected distribution of complex zeros of a system of k random fewnomials in m variables. When k = m, for SU(m + 1) polynomials, the limit is the Monge-Ampere measure of a toric Kaehler potential on CP^m obtained by averaging a `discrete Legendre transform' of the Fubini-Study symplectic potential at f points of the unit simplex in R^m.
New results on averaging theory and applications
Cândido, Murilo R.; Llibre, Jaume
2016-08-01
The usual averaging theory reduces the computation of some periodic solutions of a system of ordinary differential equations to finding the simple zeros of an associated averaged function. When one of these zeros is not simple, i.e., the Jacobian of the averaged function at it is zero, the classical averaging theory provides no information about the periodic solution associated with that non-simple zero. Here we provide sufficient conditions under which the averaging theory can also be applied to non-simple zeros for studying their associated periodic solutions. Additionally, we present two applications of this new result, studying the zero-Hopf bifurcation in the Lorenz system and in the Fitzhugh-Nagumo system.
Analogue Divider by Averaging a Triangular Wave
Selvam, Krishnagiri Chinnathambi
2017-08-01
A new analogue divider circuit based on averaging a triangular wave with operational amplifiers is explained in this paper. The reference triangular waveform is shifted from the zero voltage level up towards the positive power supply voltage level; its positive portion is obtained by a positive rectifier and its average value by a low-pass filter. The same triangular waveform is shifted from the zero voltage level down towards the negative power supply voltage level; its negative portion is obtained by a negative rectifier and its average value by another low-pass filter. Both averaged voltages are combined in a summing amplifier, and the summed voltage is applied to the inverting input of an op-amp configured in a negative-feedback closed loop. The op-amp output is the divider output.
Ben-Ari, Mordechai
2004-01-01
The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…
Structure and defect processes in Si{sub 1-x-y}Ge{sub x}Sn{sub y} random alloys
Schwingenschloegl, U. [PSE Division, KAUST, Thuwal (Saudi Arabia); Chroneos, A.; Grimes, R.W. [Department of Materials, Imperial College London (United Kingdom); Jiang, C. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM (United States); Bracht, H. [Institute of Material Physics, University of Muenster (Germany)
2010-07-01
Binary and ternary Si{sub 1-x-y}Ge{sub x}Sn{sub y} random alloys are being considered as candidate materials to lattice match III-V or II-VI compounds with Si or Ge in optoelectronic or microelectronic devices. The simulation of the defect interactions of these alloys is hindered by their random nature. Here we use the special quasirandom structures (SQS) approach in conjunction with density functional theory calculations to study the structure and the defect processes. For the binary alloy Ge{sub x}Sn{sub 1-x} the SQS method correctly describes the deviation of the lattice parameters from Vegard's Law. For the ternary alloy Si{sub 0.375}Ge{sub 0.5}Sn{sub 0.125} we find an association of As atoms with lattice vacancies and the formation of As-vacancy pairs. It is predicted that the nearest-neighbour environment exerts a strong influence on the stability of these pairs.
GUO TieXin; CHEN XinXiang
2009-01-01
The purpose of this paper is to provide a random duality theory for the further development of the theory of random conjugate spaces for random normed modules. First, the complicated stratification structure of a module over the algebra L(μ, K) frequently makes our investigations into random duality theory considerably different from the corresponding ones in classical duality theory, so we must first overcome several substantial obstacles to the study of the stratification structure on random locally convex modules. Then we give the representation theorem for weakly continuous canonical module homomorphisms, the theorem on the existence of a random Mackey structure, and the random bipolar theorem with respect to a regular random duality pair, together with some important random compatible invariants.
Averaged Lemaître-Tolman-Bondi dynamics
Isidro, Eddy G Chirinos; Piattella, Oliver F; Zimdahl, Winfried
2016-01-01
We consider cosmological backreaction effects in Buchert's averaging formalism on the basis of an explicit solution of the Lemaître-Tolman-Bondi (LTB) dynamics which is linear in the LTB curvature parameter and has an inhomogeneous bang time. The volume Hubble rate is found in terms of the volume scale factor, which represents a derivation of the simplest phenomenological solution of Buchert's equations, in which the fractional densities corresponding to average curvature and kinematic backreaction are explicitly determined by the parameters of the underlying LTB solution at the boundary of the averaging volume. This configuration represents an exactly solvable toy model, but it does not adequately describe our "real" Universe.
Average-passage flow model development
Adamczyk, John J.; Celestina, Mark L.; Beach, Tim A.; Kirtley, Kevin; Barnett, Mark
1989-01-01
A 3-D model was developed for simulating multistage turbomachinery flows using supercomputers. This average passage flow model describes the time-averaged flow field within a typical passage of a bladed wheel within a multistage configuration. To date, a number of inviscid simulations have been executed to assess the resolution capabilities of the model. Recently, the viscous terms associated with the average passage model were incorporated into the inviscid computer code along with an algebraic turbulence model. A simulation of a stage-and-one-half, low-speed turbine was executed. The results of this simulation, including a comparison with experimental data, are discussed.
Averaging of Backscatter Intensities in Compounds
Donovan, John J.; Pingitore, Nicholas E.; Westphal, Andrew J.
2002-01-01
Low-uncertainty measurements on pure element stable isotope pairs demonstrate that mass has no influence on the backscattering of electrons at typical electron microprobe energies. The traditional prediction of average backscatter intensities in compounds using elemental mass fractions is improperly grounded in mass and thus has no physical basis. We propose an alternative model to mass-fraction averaging, based on the number of electrons or protons, termed "electron fraction," which predicts backscatter yield better than mass-fraction averaging. PMID:27446752
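The proposed electron fraction is easy to compute from a compound's mass fractions: each element contributes electrons in proportion to C_i·Z_i/A_i (electrons per unit mass), normalized over the compound. A minimal sketch follows; the PbS composition is a hypothetical illustration, and only the two averaged atomic numbers are contrasted, not a full backscatter-yield model.

```python
def electron_fractions(mass_fracs, Z, A):
    """Convert mass fractions to electron fractions.

    An element's electron contribution scales as C_i * Z_i / A_i
    (electrons per unit mass), normalized over the compound.
    """
    w = {el: mass_fracs[el] * Z[el] / A[el] for el in mass_fracs}
    total = sum(w.values())
    return {el: wi / total for el, wi in w.items()}

# Hypothetical example: PbS (galena) by mass fraction
Z = {"Pb": 82, "S": 16}
A = {"Pb": 207.2, "S": 32.07}
mass = {"Pb": 0.866, "S": 0.134}

ef = electron_fractions(mass, Z, A)
z_mass = sum(mass[el] * Z[el] for el in Z)   # mass-fraction averaged Z
z_elec = sum(ef[el] * Z[el] for el in Z)     # electron-fraction averaged Z
```

For a heavy/light element pair the electron-fraction average weights the light element more strongly than the mass-fraction average does, so the two averaged atomic numbers differ measurably.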
The Average Lower Connectivity of Graphs
Ersin Aslan
2014-01-01
For a vertex v of a graph G, the lower connectivity, denoted by s_v(G), is the smallest cardinality of a set of vertices that contains v and whose deletion from G produces a disconnected or a trivial graph. The average lower connectivity, denoted by κ_av(G), is the value (∑_{v∈V(G)} s_v(G)) / |V(G)|. It is shown that this parameter can be used to measure the vulnerability of networks. This paper contains results on bounds for the average lower connectivity and obtains the average lower connectivity of some graphs.
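The definition can be checked by brute force on small graphs: for each vertex v, search vertex sets containing v in increasing order of size until deleting one disconnects G or leaves at most one vertex. A minimal exponential-time sketch (for illustration only, using edge lists rather than any graph library):

```python
from itertools import combinations

def connected(sub, edges):
    """Is the induced subgraph on `sub` connected (<=1 vertex counts)?"""
    sub = set(sub)
    if len(sub) <= 1:
        return True
    adj = {u: set() for u in sub}
    for a, b in edges:
        if a in sub and b in sub:
            adj[a].add(b); adj[b].add(a)
    start = next(iter(sub))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w); stack.append(w)
    return seen == sub

def s_v(nodes, edges, v):
    """Lower connectivity s_v(G): size of a smallest vertex set that
    contains v and whose deletion disconnects G or leaves <= 1 vertex."""
    others = [u for u in nodes if u != v]
    for k in range(len(nodes)):
        for extra in combinations(others, k):
            rest = set(nodes) - {v, *extra}
            if len(rest) <= 1 or not connected(rest, edges):
                return 1 + k
    return len(nodes)

def kappa_av(nodes, edges):
    """Average lower connectivity: mean of s_v over all vertices."""
    return sum(s_v(nodes, edges, v) for v in nodes) / len(nodes)

# Path graph 1-2-3: deleting the middle vertex alone disconnects it
nodes, edges = [1, 2, 3], [(1, 2), (2, 3)]
```

On the path 1-2-3 the middle vertex has s_v = 1 while the endpoints have s_v = 2, so κ_av = 5/3.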
Cosmic inhomogeneities and averaged cosmological dynamics.
Paranjape, Aseem; Singh, T P
2008-10-31
If general relativity (GR) describes the expansion of the Universe, the observed cosmic acceleration implies the existence of a "dark energy." However, while the Universe is on average homogeneous on large scales, it is inhomogeneous on smaller scales. While GR governs the dynamics of the inhomogeneous Universe, the averaged homogeneous Universe obeys modified Einstein equations. Can such modifications alone explain the acceleration? For a simple generic model with realistic initial conditions, we show the answer to be "no." Averaging effects negligibly influence the cosmological dynamics.
Changing mortality and average cohort life expectancy
Schoen, Robert; Canudas-Romo, Vladimir
2005-01-01
of survivorship. An alternative aggregate measure of period mortality which has been seen as less sensitive to period changes, the cross-sectional average length of life (CAL), has been proposed as an alternative, but has received only limited empirical or analytical examination. Here we introduce a new measure, the average cohort life expectancy (ACLE), to provide a precise measure of the average length of life of cohorts alive at a given time. To compare the performance of ACLE with CAL and with period and cohort life expectancy, we first use population models with changing mortality. Then the four aggregate...
Jørgensen, Sanne Ellegaard; Jørgensen, Thea Suldrup; Aarestrup, Anne Kristine
Purpose: School-based dietary interventions often include a parental component, but the degree of implementation is seldom reported. This study evaluated the implementation of six parental newsletters in the Boost study, a multicomponent school-randomized controlled trial targeting fruit and vegetable intake among 7th graders (~13-year-olds) in school year 2010/11. Methods: Post-intervention questionnaire data from parents and teachers at 20 intervention schools were analysed descriptively. Process measures: Dose delivered: number of newsletters uploaded by teachers to the school's website... Fathers and low-OSC parents were difficult to reach. The findings may be subject to selection bias due to parent non-response. Strategies to improve parents' participation in school-based interventions and surveys should be developed. Funding source: TrygFonden, University of Southern Denmark
Sea Surface Temperature Average_SST_Master
National Oceanic and Atmospheric Administration, Department of Commerce — Sea surface temperature collected via satellite imagery from http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.html and averaged for each region using ArcGIS...
Average Vegetation Growth 1990 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1990 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1997 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1997 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1992 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1992 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2001 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2001 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1995 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1995 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2000 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2000 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1998 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1998 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1994 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1994 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Line
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Average Vegetation Growth 1996 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1996 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 2005 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 2005 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
Average Vegetation Growth 1993 - Direct Download
U.S. Geological Survey, Department of the Interior — This map layer is a grid map of 1993 average vegetation growth for Alaska and the conterminous United States. The nominal spatial resolution is 1 kilometer and the...
MN Temperature Average (1961-1990) - Polygon
Minnesota Department of Natural Resources — This data set depicts 30-year averages (1961-1990) of monthly and annual temperatures for Minnesota. Isolines and regions were created using kriging and...
Spacetime Average Density (SAD) Cosmological Measures
Page, Don N
2014-01-01
The measure problem of cosmology is how to obtain normalized probabilities of observations from the quantum state of the universe. This is particularly a problem when eternal inflation leads to a universe of unbounded size, so that there are apparently infinitely many realizations or occurrences of observations of each of many different kinds or types, making the ratios ambiguous. There is also the danger of domination by Boltzmann brains. Here two new Spacetime Average Density (SAD) measures are proposed, Maximal Average Density (MAD) and Biased Average Density (BAD), for obtaining a finite number of observation occurrences by using properties of the spacetime average density of observation occurrences to restrict to finite regions of spacetimes that have a preferred beginning or bounce hypersurface. These measures avoid Boltzmann brain domination and appear to give results consistent with other observations that are problematic for other widely used measures, such as the observation of a positive cosmolo...
A practical guide to averaging functions
Beliakov, Gleb; Calvo Sánchez, Tomasa
2016-01-01
This book offers an easy-to-use and practice-oriented reference guide to mathematical averages. It presents different ways of aggregating input values given on a numerical scale, and of choosing and/or constructing aggregating functions for specific applications. Building on a previous monograph by Beliakov et al. published by Springer in 2007, it outlines new aggregation methods developed in the interim, with a special focus on the topic of averaging aggregation functions. It examines recent advances in the field, such as aggregation on lattices, penalty-based aggregation and weakly monotone averaging, and extends many of the already existing methods, such as: ordered weighted averaging (OWA), fuzzy integrals and mixture functions. A substantial mathematical background is not called for, as all the relevant mathematical notions are explained here and reported on together with a wealth of graphical illustrations of distinct families of aggregation functions. The authors mainly focus on practical applications ...
Rotational averaging of multiphoton absorption cross sections
Friese, Daniel H., E-mail: daniel.h.friese@uit.no; Beerepoot, Maarten T. P.; Ruud, Kenneth [Centre for Theoretical and Computational Chemistry, University of Tromsø — The Arctic University of Norway, N-9037 Tromsø (Norway)
2014-11-28
Rotational averaging of tensors is a crucial step in the calculation of molecular properties in isotropic media. We present a scheme for the rotational averaging of multiphoton absorption cross sections. We extend existing literature on rotational averaging to even-rank tensors of arbitrary order and derive equations that require only the number of photons as input. In particular, we derive the first explicit expressions for the rotational average of five-, six-, and seven-photon absorption cross sections. This work is one of the required steps in making the calculation of these higher-order absorption properties possible. The results can be applied to any even-rank tensor provided linearly polarized light is used.
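For the rank-2 case the rotational average has the familiar closed form (tr T / 3)·I, which a numerical average over Haar-random rotations reproduces; the paper's contribution is the analogous analytic machinery for higher even-rank tensors. A minimal numerical check (NumPy, illustrative only, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    """Haar-random rotation matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # fix column signs for uniformity
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1                 # restrict to proper rotations
    return q

# An arbitrary (non-symmetric) rank-2 tensor
T = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 0.5],
              [0.0, 0.0, 1.0]])

n = 20000
avg = sum(R @ T @ R.T for R in (random_rotation() for _ in range(n))) / n
iso = np.trace(T) / 3 * np.eye(3)     # analytic rotational average
```

The Monte Carlo estimate converges to the isotropic tensor, and the trace is preserved exactly at every sample since tr(R T Rᵀ) = tr(T).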
Monthly snow/ice averages (ISCCP)
National Aeronautics and Space Administration — September Arctic sea ice is now declining at a rate of 11.5 percent per decade, relative to the 1979 to 2000 average. Data from NASA show that the land ice sheets in...
Average Annual Precipitation (PRISM model) 1961 - 1990
U.S. Geological Survey, Department of the Interior — This map layer shows polygons of average annual precipitation in the contiguous United States, for the climatological period 1961-1990. Parameter-elevation...
Symmetric Euler orientation representations for orientational averaging.
Mayerhöfer, Thomas G
2005-09-01
A new kind of orientation representation, called the symmetric Euler orientation representation (SEOR), is presented. It is based on a combination of the conventional Euler orientation representations (Euler angles) and Hamilton's quaternions. The properties of SEORs with respect to orientational averaging are explored and compared to those of averaging schemes based on conventional Euler orientation representations. To that end, the reflectance of a hypothetical polycrystalline material with orthorhombic crystal symmetry was calculated according to the average refractive index theory (ARIT [T.G. Mayerhöfer, Appl. Spectrosc. 56 (2002) 1194]). It is shown that the use of averaging schemes based on conventional Euler orientation representations makes the result depend on the specific Euler orientation representation utilized and on the initial position of the crystal. The latter problem can be partly overcome by introducing a weighting factor, but only for two-axes-type Euler orientation representations; even then, a residual difference remains in a numerical evaluation of the average. In contrast, this problem does not occur in principle if a symmetric Euler orientation representation is used, while the averaging result for both types of orientation representations converges with an increasing number of orientations considered in the numerical evaluation. Additionally, the use of a weighting factor and/or non-equally spaced steps in the numerical evaluation of the average is not necessary. The symmetric Euler orientation representations are therefore ideally suited for use in orientational averaging procedures.
Average Bandwidth Allocation Model of WFQ
Tomáš Balogh
2012-01-01
We present a new iterative method for calculating the average bandwidth assigned to traffic flows by a WFQ scheduler in IP-based NGN networks. The bandwidth assignment calculation is based on the link speed, the assigned weights, the arrival rate, and the average packet length or input rate of the traffic flows. We validate the model with examples and simulation results obtained using the NS2 simulator.
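The quantities the abstract lists (link speed, weights, input rates) suffice for a standard weighted max-min computation of the average WFQ bandwidth shares: flows whose input rate falls below their weighted share are capped at their demand and the surplus is redistributed iteratively. This is a generic model sketched here for illustration; the paper's exact iteration may differ.

```python
def wfq_shares(link_rate, weights, demands):
    """Weighted max-min bandwidth shares under a WFQ scheduler.

    Each round, every still-active flow is offered capacity in
    proportion to its weight; flows whose input rate (demand) is
    below the offer are capped at their demand and removed, and the
    surplus is redistributed among the remaining flows.
    """
    alloc = {f: 0.0 for f in weights}
    active = set(weights)
    cap = float(link_rate)
    while active:
        wsum = sum(weights[f] for f in active)
        share = {f: cap * weights[f] / wsum for f in active}
        capped = {f for f in active if demands[f] <= share[f]}
        if not capped:                  # every remaining flow is backlogged
            for f in active:
                alloc[f] = share[f]
            break
        for f in capped:                # satisfy low-rate flows fully
            alloc[f] = demands[f]
            cap -= demands[f]
        active -= capped
    return alloc

# Hypothetical 100 Mbit/s link; flow C has twice the weight of A and B
shares = wfq_shares(100.0, {"A": 1, "B": 1, "C": 2},
                    {"A": 10.0, "B": 60.0, "C": 80.0})
```

Here flow A is capped at its 10 Mbit/s input rate, and the remaining 90 Mbit/s is split 1:2 between B and C (30 and 60 Mbit/s).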
Average number of iterations of some polynomial interior-point——Algorithms for linear programming
黄思明
2000-01-01
We study the behavior of some polynomial interior-point algorithms for solving random linear programming (LP) problems. We show that the average number of iterations of these algorithms, coupled with a finite termination technique, is bounded above by O(n^1.5). The random LP problem is Todd's probabilistic model with the standard Gaussian distribution.
Carrier Noise Reduction in Speckle Correlation Interferometry by a Unique Averaging Technique
Pechersky, M.J.
1999-01-20
We present experimental results of carrier speckle noise averaging by a novel approach that generates numerous identical correlation fringes with randomly different speckles. The surface under study is sprayed with a fresh layer of dry paint before each repetition of the experiment, generating randomly different surfaces and hence randomly different carrier speckle patterns.
Does subduction zone magmatism produce average continental crust
Ellam, R. M.; Hawkesworth, C. J.
1988-01-01
The question of whether present day subduction zone magmatism produces material of average continental crust composition, which perhaps most would agree is andesitic, is addressed. It was argued that modern andesitic to dacitic rocks in Andean-type settings are produced by plagioclase fractionation of mantle derived basalts, leaving a complementary residue with low Rb/Sr and a positive Eu anomaly. This residue must be removed, for example by delamination, if the average crust produced in these settings is andesitic. The author argued against this, pointing out the absence of evidence for such a signature in the mantle. Either the average crust is not andesitic, a conclusion the author was not entirely comfortable with, or other crust forming processes must be sought. One possibility is that during the Archean, direct slab melting of basaltic or eclogitic oceanic crust produced felsic melts, which together with about 65 percent mafic material, yielded an average crust of andesitic composition.
CHAMP climate data based on inversion of monthly average bending angles
J. Danzer
2014-07-01
GNSS Radio Occultation (RO) refractivity climatologies for the stratosphere can be obtained from the Abel inversion of monthly average bending-angle profiles. The averaging of large numbers of profiles suppresses random noise, and this, in combination with simple exponential extrapolation above an altitude of 80 km, circumvents the need for a "statistical optimization" step in the processing. Using data from the US-Taiwanese COSMIC mission, which provides ~1500-2000 occultations per day, it has been shown that this Average-Profile Inversion (API) technique provides a robust method for generating stratospheric refractivity climatologies. Prior to the launch of COSMIC in mid-2006, the data records rely on data from the CHAMP mission; in order to exploit the full range of available RO data, the use of CHAMP data is also required. CHAMP only provided ~200 profiles per day, and the measurements were noisier than COSMIC's. As a consequence, the main research question in this study was whether the average bending-angle approach is also applicable to CHAMP data. Different methods for the suppression of random noise, both statistical and through data-quality pre-screening, were tested. The API retrievals were compared with the more conventional approach of averaging individual refractivity profiles, produced with the implementation of statistical optimization used in the EUMETSAT Radio Occultation Meteorology Satellite Application Facility (ROM SAF) operational processing. This study demonstrates that the API retrieval technique works well for CHAMP data, enabling the generation of long-term stratospheric RO climate data records from August 2001 onward. The resulting CHAMP refractivity climatologies are found to be practically identical to the standard retrieval at DMI below altitudes of 35 km. Between 35 km and 50 km the differences between the two retrieval methods start to increase, showing the largest differences at high latitudes and
The stability of a zonally averaged thermohaline circulation model
Schmidt, G A
1995-01-01
A combination of analytical and numerical techniques is used to efficiently determine the qualitative and quantitative behaviour of a one-basin zonally averaged thermohaline circulation ocean model. In contrast to earlier studies which use time stepping to find the steady solutions, the steady state equations are first solved directly to obtain the multiple equilibria under identical mixed boundary conditions. This approach is based on the differentiability of the governing equations and especially the convection scheme. A linear stability analysis is then performed, in which the normal modes and corresponding eigenvalues are found for the various equilibrium states. Resonant periodic solutions superimposed on these states are predicted for various types of forcing. The results are used to gain insight into the solutions obtained by Mysak, Stocker and Huang in a previous numerical study in which the eddy diffusivities were varied in a randomly forced one-basin zonally averaged model. Resonant stable oscillat...
The Average-Case Area of Heilbronn-Type Triangles
Jiang, T.; Li, Ming; Vitányi, Paul
1999-01-01
From among $ {n \\choose 3}$ triangles with vertices chosen from $n$ points in the unit square, let $T$ be the one with the smallest area, and let $A$ be the area of $T$. Heilbronn's triangle problem asks for the maximum value assumed by $A$ over all choices of $n$ points. We consider the average-case: If the $n$ points are chosen independently and at random (with a uniform distribution), then there exist positive constants $c$ and $C$ such that $c/n^3 < \\mu_n < C/n^3$ for all large enough val...
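The Θ(1/n³) average-case behaviour described in this abstract can be probed with a brute-force Monte Carlo sketch. Helper names are invented for illustration and only the standard library is used; for small n the O(n³) triangle enumeration is cheap enough.

```python
import itertools
import random

def min_triangle_area(points):
    """Smallest area among all triangles with vertices from the given points."""
    best = float("inf")
    for (x1, y1), (x2, y2), (x3, y3) in itertools.combinations(points, 3):
        # area from the cross product of two edge vectors
        area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
        best = min(best, area)
    return best

def average_min_area(n, trials=100, seed=0):
    """Estimate E[A] for n i.i.d. uniform points in the unit square."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        total += min_triangle_area(pts)
    return total / trials
```

Comparing, say, n = 5 with n = 20 should show the estimate shrinking by roughly the cubic factor predicted by the c/n³ < μ_n < C/n³ bounds, up to Monte Carlo noise.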
Multiscale Gossip for Efficient Decentralized Averaging in Wireless Packet Networks
Tsianos, Konstantinos I
2010-01-01
This paper describes and analyzes a hierarchical gossip algorithm for solving the distributed average consensus problem in wireless sensor networks. The network is recursively partitioned into subnetworks. Initially, nodes at the finest scale gossip to compute local averages. Then, using geographic routing to enable gossip between nodes that are not directly connected, these local averages are progressively fused up the hierarchy until the global average is computed. We show that the proposed hierarchical scheme with $k$ levels of hierarchy is competitive with state-of-the-art randomized gossip algorithms, in terms of message complexity, achieving $\\epsilon$-accuracy with high probability after $O\\big(n \\log \\log n \\log \\frac{kn}{\\epsilon} \\big)$ messages. Key to our analysis is the way in which the network is recursively partitioned. We find that the optimal scaling law is achieved when subnetworks at scale $j$ contain $O(n^{(2/3)^j})$ nodes; then the message complexity at any individual scale is $O(n \\log \\...
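As a point of reference for the hierarchical scheme above, plain randomized pairwise gossip (the flat, non-hierarchical baseline such algorithms are compared against) can be sketched on a complete graph. Names are illustrative, not from the paper; geographic routing and the multiscale partition are deliberately omitted.

```python
import random

def randomized_gossip(values, rounds=2000, seed=1):
    """Flat randomized gossip: each round, two randomly chosen nodes
    replace both their values with the pair's mean. The global average
    is conserved exactly, and the values contract toward consensus."""
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)  # two distinct nodes
        m = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = m
    return vals
```

On a complete graph this converges geometrically; the point of the multiscale construction in the paper is to reduce the number of messages needed when communication is constrained by network geometry.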
Post-model selection inference and model averaging
Georges Nguefack-Tsague
2011-07-01
Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
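The 0-1 random-weight view connects PMSEs to smooth model averaging. The standard Akaike-weight construction that such comparisons rely on is a one-liner; the helper names here are illustrative:

```python
import math

def aic_weights(aic_values):
    """Akaike weights: w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2),
    where Delta_i = AIC_i - min(AIC). Subtracting the minimum first
    avoids overflow for large AIC values."""
    lo = min(aic_values)
    raw = [math.exp(-(a - lo) / 2.0) for a in aic_values]
    s = sum(raw)
    return [r / s for r in raw]

def model_averaged(estimates, weights):
    """Weighted average of a per-model quantity (e.g. a point prediction)."""
    return sum(w * e for w, e in zip(weights, estimates))
```

Model selection corresponds to replacing these smooth weights with a random 0-1 vector concentrated on the selected model, which is exactly the special case the paper exploits.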
Random Intercept and Random Slope 2-Level Multilevel Models
Rehan Ahmad Khan
2012-11-01
Full Text Available Random intercept model and random intercept & random slope model carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teachers' influence. The variation at level-1 can be controlled by introducing the higher levels of hierarchy in the model. The fanning movement of the fitted lines shows the variation of student grades around teachers.
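A two-level random-intercept data-generating process of the kind fitted in such studies can be sketched as follows. All coefficients and helper names are invented for illustration, not taken from the paper:

```python
import random

def simulate_two_level(n_teachers=30, n_students=25, seed=42):
    """Simulate GPA_ij = b0 + b1 * satisfaction_ij + u_j + e_ij,
    where u_j ~ N(0, sd_u^2) is a teacher-level random intercept and
    e_ij ~ N(0, sd_e^2) is the student-level residual."""
    rng = random.Random(seed)
    b0, b1, sd_u, sd_e = 2.0, 0.3, 0.4, 0.2   # hypothetical values
    rows = []
    for j in range(n_teachers):
        u_j = rng.gauss(0.0, sd_u)            # shared by teacher j's students
        for _ in range(n_students):
            sat = rng.uniform(0.0, 5.0)        # student satisfaction score
            gpa = b0 + b1 * sat + u_j + rng.gauss(0.0, sd_e)
            rows.append((j, sat, gpa))
    return rows
```

Fitting an ordinary regression to such data ignores the shared u_j within each teacher; the random-intercept model recovers it, which is the comparison the abstract describes.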
Averaged controllability of parameter dependent conservative semigroups
Lohéac, Jérôme; Zuazua, Enrique
2017-02-01
We consider the problem of averaged controllability for parameter depending (either in a discrete or continuous fashion) control systems, the aim being to find a control, independent of the unknown parameters, so that the average of the states is controlled. We do it in the context of conservative models, both in an abstract setting and also analysing the specific examples of the wave and Schrödinger equations. Our first result is of perturbative nature. Assuming the averaging probability measure to be a small parameter-dependent perturbation (in a sense that we make precise) of an atomic measure given by a Dirac mass corresponding to a specific realisation of the system, we show that the averaged controllability property is achieved whenever the system corresponding to the support of the Dirac is controllable. Similar tools can be employed to obtain averaged versions of the so-called Ingham inequalities. Particular attention is devoted to the 1d wave equation in which the time-periodicity of solutions can be exploited to obtain more precise results, provided the parameters involved satisfy Diophantine conditions ensuring the lack of resonances.
Average Temperatures in the Southwestern United States, 2000-2015 Versus Long-Term Average
U.S. Environmental Protection Agency — This indicator shows how the average air temperature from 2000 to 2015 has differed from the long-term average (1895–2015). To provide more detailed information,...
Efficient measurement of quantum gate error by interleaved randomized benchmarking.
Magesan, Easwar; Gambetta, Jay M; Johnson, B R; Ryan, Colm A; Chow, Jerry M; Merkel, Seth T; da Silva, Marcus P; Keefe, George A; Rothwell, Mary B; Ohki, Thomas A; Ketchen, Mark B; Steffen, M
2012-08-24
We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates X(π/2) and Y(π/2). These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
Niu, Gang; Kim, Hee-Dong; Roelofs, Robin; Perez, Eduardo; Schubert, Markus Andreas; Zaumseil, Peter; Costina, Ioan; Wenger, Christian
2016-06-01
With the continuous scaling of resistive random access memory (RRAM) devices, in-depth understanding of the physical mechanism and the material issues, particularly by directly studying integrated cells, becomes more and more important to further improve device performance. In this work, HfO2-based integrated 1-transistor-1-resistor (1T1R) RRAM devices were processed in a standard 0.25 μm complementary-metal-oxide-semiconductor (CMOS) process line, using a batch atomic layer deposition (ALD) tool particularly designed for mass production. We demonstrate a systematic study on TiN/Ti/HfO2/TiN/Si RRAM devices to correlate key material factors (nano-crystallites and carbon impurities) with the filament-type resistive switching (RS) behaviours. Increasing the density of nano-crystallites in the film increases the forming voltage of devices and its variation. Carbon residues in HfO2 films turn out to be an even more significant factor, strongly impacting the RS behaviour. A relatively higher deposition temperature of 300 °C dramatically reduces the residual carbon concentration, thus leading to enhanced RS performance, including lower power consumption, better endurance and higher reliability. Such a thorough understanding of the physical mechanism of RS and of the correlation between material and device performance will facilitate the realization of high-density, reliable embedded RRAM devices with low power consumption.
Gopal K Basak; Arunangshu Biswas
2013-02-01
In this paper we show that the continuous version of the self-normalized process $Y_{n,p}(t)=S_n(t)/V_{n,p}+(nt-[nt])X_{[nt]+1}/V_{n,p}$, $0 < t \le 1$, $p>0$, where $S_n(t)=\sum^{[nt]}_{i=1}X_i$ and $V_{n,p}=\left(\sum^n_{i=1}|X_i|^p\right)^{1/p}$ and the $X_i$ are i.i.d. random variables belonging to $DA(\alpha)$, has a non-trivial limiting distribution iff $\alpha=2$. The cases $p>\alpha$ and $p\le\alpha<2$ are systematically eliminated by showing that either tightness or finite-dimensional convergence to a non-degenerate limiting distribution fails. This work is an extension of the work by Csörgő et al., who showed that Donsker's theorem for $Y_{n,2}(\cdot)$, i.e., for $p=2$, holds iff $\alpha=2$, and identified the limiting process as a standard Brownian motion in sup norm.
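For the finite-variance case ($\alpha = 2$), the behaviour of the self-normalized statistic at $t = 1$ can be illustrated numerically: with standard Gaussian increments, $Y_{n,2}(1) = S_n/V_{n,2}$ is asymptotically standard normal. A sketch with hypothetical helper names:

```python
import random

def self_normalized(n=400, p=2.0, seed=7):
    """Y_{n,p}(1) = S_n / V_{n,p} with S_n = sum(X_i),
    V_{n,p} = (sum |X_i|^p)^(1/p), X_i i.i.d. standard normal."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    s = sum(xs)
    v = sum(abs(x) ** p for x in xs) ** (1.0 / p)
    return s / v
```

Sampling this statistic over many seeds should give an empirical mean near 0 and variance near 1, consistent with the $\alpha = 2$ limit; for heavy-tailed increments ($\alpha < 2$) no such non-degenerate limit exists, which is the dichotomy the paper proves.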
Cosmic structure, averaging and dark energy
Wiltshire, David L
2013-01-01
These lecture notes review the theoretical problems associated with coarse-graining the observed inhomogeneous structure of the universe at late epochs, of describing average cosmic evolution in the presence of growing inhomogeneity, and of relating average quantities to physical observables. In particular, a detailed discussion of the timescape scenario is presented. In this scenario, dark energy is realized as a misidentification of gravitational energy gradients which result from gradients in the kinetic energy of expansion of space, in the presence of density and spatial curvature gradients that grow large with the growth of structure. The phenomenology and observational tests of the timescape model are discussed in detail, with updated constraints from Planck satellite data. In addition, recent results on the variation of the Hubble expansion on < 100/h Mpc scales are discussed. The spherically averaged Hubble law is significantly more uniform in the rest frame of the Local Group of galaxies than in t...
Books average previous decade of economic misery.
R Alexander Bentley
Full Text Available For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
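The construction described in the abstract (misery index = inflation + unemployment, smoothed by a trailing moving average over the previous decade) is simple to state in code. Helper names are illustrative, not from the paper:

```python
def misery_index(inflation, unemployment):
    """Annual economic misery index: inflation rate plus unemployment rate."""
    return [i + u for i, u in zip(inflation, unemployment)]

def trailing_average(series, window):
    """Moving average over the previous `window` values, so entry t
    summarizes the window ending just before year t."""
    return [sum(series[t - window:t]) / window
            for t in range(window, len(series) + 1)]
```

The paper's best fit uses an 11-year window of this trailing average, correlated against the same-year literary misery index.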
Benchmarking statistical averaging of spectra with HULLAC
Klapisch, Marcel; Busquet, Michel
2008-11-01
Knowledge of the radiative properties of hot plasmas is important for ICF, astrophysics, etc. When mid-Z or high-Z elements are present, the spectra are so complex that one commonly uses a statistically averaged description of atomic systems [1]. In a recent experiment on Fe [2], performed under controlled conditions, high-resolution transmission spectra were obtained. The new version of HULLAC [3] allows the use of the same model with different levels of detail/averaging. We will take advantage of this feature to check the effect of averaging by comparison with experiment. [1] A. Bar-Shalom, J. Oreg, and M. Klapisch, J. Quant. Spectrosc. Radiat. Transf. 65, 43 (2000). [2] J. E. Bailey, G. A. Rochau, C. A. Iglesias et al., Phys. Rev. Lett. 99, 265002 (2007). [3] M. Klapisch, M. Busquet, and A. Bar-Shalom, AIP Conference Proceedings 926, 206-15 (2007).
Stochastic Averaging and Stochastic Extremum Seeking
Liu, Shu-Jun
2012-01-01
Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis and to apply similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, vanishing stochastic perturbations, and prevent analysis over infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
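The model-free, perturbation-based optimization described above can be illustrated with a simplified discrete-time analogue. This is an SPSA-style finite-difference sketch, not the book's continuous-time algorithms, and all names and constants are invented for illustration:

```python
import random

def extremum_seek(J, theta0, gain=0.02, amp=0.1, iters=500, seed=3):
    """Minimize J without access to its gradient: a random +/-1
    perturbation eta probes J on both sides of theta, and the finite
    difference along eta estimates the gradient for a descent step."""
    rng = random.Random(seed)
    theta = theta0
    for _ in range(iters):
        eta = rng.choice((-1.0, 1.0))
        g = (J(theta + amp * eta) - J(theta - amp * eta)) / (2.0 * amp * eta)
        theta -= gain * g
    return theta
```

For a smooth cost the perturbation-averaged update follows the true gradient flow, which is the averaging argument underlying the convergence theory the book develops.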
Books average previous decade of economic misery.
Bentley, R Alexander; Acerbi, Alberto; Ormerod, Paul; Lampos, Vasileios
2014-01-01
For the 20th century since the Depression, we find a strong correlation between a 'literary misery index' derived from English-language books and a moving average of the previous decade of the annual U.S. economic misery index, which is the sum of inflation and unemployment rates. We find a peak in the goodness of fit at 11 years for the moving average. The fit between the two misery indices holds when using different techniques to measure the literary misery index, and this fit is significantly better than other possible correlations with different emotion indices. To check the robustness of the results, we also analysed books written in German and obtained very similar correlations with the German economic misery index. The results suggest that millions of books published every year average the authors' shared economic experiences over the past decade.
High Average Power Yb:YAG Laser
Zapata, L E; Beach, R J; Payne, S A
2001-05-23
We are working on a composite thin-disk laser design that can be scaled as a source of high brightness laser power for tactical engagement and other high average power applications. The key component is a diffusion-bonded composite comprising a thin gain-medium and thicker cladding that is strikingly robust and resolves prior difficulties with high average power pumping/cooling and the rejection of amplified spontaneous emission (ASE). In contrast to high power rods or slabs, the one-dimensional nature of the cooling geometry and the edge-pump geometry scale gracefully to very high average power. The crucial design ideas have been verified experimentally. Progress this last year included: extraction with high beam quality using a telescopic resonator, a heterogeneous thin film coating prescription that meets the unusual requirements demanded by this laser architecture, thermal management with our first generation cooler. Progress was also made in design of a second-generation laser.
Manoochehr Azkhosh
2016-12-01
Full Text Available Objective: Substance abuse is a socio-psychological disorder. The aim of this study was to compare the effectiveness of acceptance and commitment therapy with the 12-steps Narcotics Anonymous program on psychological well-being of opiate dependent individuals in addiction treatment centers in Shiraz, Iran. Method: This was a randomized controlled trial. Data were collected at entry into the study and at post-test and follow-up visits. The participants were selected from opiate addicted individuals who referred to addiction treatment centers in Shiraz. Sixty individuals were evaluated according to inclusion/exclusion criteria and were divided into three equal groups randomly (20 participants per group). One group received acceptance and commitment group therapy (twelve 90-minute sessions), the other group was provided with the 12-steps Narcotics Anonymous program, and the control group received the usual methadone maintenance treatment. During the treatment process, seven participants dropped out. Data were collected using the psychological well-being questionnaire and the AAQ questionnaire in the three groups at pre-test, post-test and follow-up visits. Data were analyzed using repeated measures analysis of variance. Results: Repeated measures analysis of variance revealed that the mean difference between the three groups was significant (P<0.05) and that the acceptance and commitment therapy group showed improvement relative to the NA and control groups on psychological well-being and psychological flexibility. Conclusion: The results of this study revealed that acceptance and commitment therapy can be helpful in enhancing positive emotions and increasing psychological well-being of addicts who seek treatment.
Azkhosh, Manoochehr; Farhoudianm, Ali; Saadati, Hemn; Shoaee, Fateme; Lashani, Leila
2016-01-01
Objective: Substance abuse is a socio-psychological disorder. The aim of this study was to compare the effectiveness of acceptance and commitment therapy with 12-steps Narcotics Anonymous on psychological well-being of opiate dependent individuals in addiction treatment centers in Shiraz, Iran. Method: This was a randomized controlled trial. Data were collected at entry into the study and at post-test and follow-up visits. The participants were selected from opiate addicted individuals who referred to addiction treatment centers in Shiraz. Sixty individuals were evaluated according to inclusion/exclusion criteria and were divided into three equal groups randomly (20 participants per group). One group received acceptance and commitment group therapy (twelve 90-minute sessions) and the other group was provided with the 12-steps Narcotics Anonymous program and the control group received the usual methadone maintenance treatment. During the treatment process, seven participants dropped out. Data were collected using the psychological well-being questionnaire and AAQ questionnaire in the three groups at pre-test, post-test and follow-up visits. Data were analyzed using repeated measures analysis of variance. Results: Repeated measures analysis of variance revealed that the mean difference between the three groups was significant (P<0.05) and that the acceptance and commitment therapy group showed improvement relative to the NA and control groups on psychological well-being and psychological flexibility. Conclusion: The results of this study revealed that acceptance and commitment therapy can be helpful in enhancing positive emotions and increasing psychological well-being of addicts who seek treatment. PMID:28050185
The XXZ Heisenberg model on random surfaces
Ambjørn, J., E-mail: ambjorn@nbi.dk [The Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radbaud University Nijmegen, Heyendaalseweg 135, 6525 AJ, Nijmegen (Netherlands); Sedrakyan, A., E-mail: sedrak@nbi.dk [The Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Yerevan Physics Institute, Br. Alikhanyan str. 2, Yerevan-36 (Armenia)
2013-09-21
We consider integrable models, or in general any model defined by an R-matrix, on random surfaces, which are discretized using random Manhattan lattices. The set of random Manhattan lattices is defined as the set dual to the lattice random surfaces embedded on a regular d-dimensional lattice. They can also be associated with the random graphs of multiparticle scattering nodes. As an example we formulate a random matrix model where the partition function reproduces the annealed average of the XXZ Heisenberg model over all random Manhattan lattices. A technique is presented which reduces the random matrix integration in the partition function to an integration over their eigenvalues.
The modulated average structure of mullite.
Birkenstock, Johannes; Petříček, Václav; Pedersen, Bjoern; Schneider, Hartmut; Fischer, Reinhard X
2015-06-01
Homogeneous and inclusion-free single crystals of 2:1 mullite (Al(4.8)Si(1.2)O(9.6)) grown by the Czochralski technique were examined by X-ray and neutron diffraction methods. The observed diffuse scattering together with the pattern of satellite reflections confirm previously published data and are thus inherent features of the mullite structure. The ideal composition was closely met as confirmed by microprobe analysis (Al(4.82 (3))Si(1.18 (1))O(9.59 (5))) and by average structure refinements. 8 (5) to 20 (13)% of the available Si was found in the T* position of the tetrahedral triclusters. The strong tendency for disorder in mullite may be understood from considerations of hypothetical superstructures which would have to be n-fivefold with respect to the three-dimensional average unit cell of 2:1 mullite and n-fourfold in the case of 3:2 mullite. In any of these the possible arrangements of the vacancies and of the tetrahedral units would inevitably be unfavorable. Three directions of incommensurate modulations were determined: q1 = [0.3137 (2) 0 ½], q2 = [0 0.4021 (5) 0.1834 (2)] and q3 = [0 0.4009 (5) -0.1834 (2)]. The one-dimensional incommensurately modulated crystal structure associated with q1 was refined for the first time using the superspace approach. The modulation is dominated by harmonic occupational modulations of the atoms in the di- and the triclusters of the tetrahedral units in mullite. The modulation amplitudes are small and the harmonic character implies that the modulated structure still represents an average structure in the overall disordered arrangement of the vacancies and of the tetrahedral structural units. In other words, when projecting the local assemblies at the scale of a few tens of average mullite cells into cells determined by either one of the modulation vectors q1, q2 or q3, a weak average modulation results with slightly varying average occupation factors for the tetrahedral units. As a result, the real...
A singularity theorem based on spatial averages
J M M Senovilla
2007-07-01
Inspired by Raychaudhuri's work, and using the equation named after him as a basic ingredient, a new singularity theorem is proved. Open non-rotating universes, expanding everywhere with a non-vanishing spatial average of the matter variables, show severe geodesic incompleteness in the past. Another way of stating the result is that, under the same conditions, any singularity-free model must have a vanishing spatial average of the energy density (and other physical variables). This is very satisfactory and provides a clear, decisive difference between singular and non-singular cosmologies.
Average: the juxtaposition of procedure and context
Watson, Jane; Chick, Helen; Callingham, Rosemary
2014-09-01
This paper presents recent data on the performance of 247 middle school students on questions concerning average in three contexts. Analysis includes considering levels of understanding linking definition and context, performance across contexts, the relative difficulty of tasks, and difference in performance for male and female students. The outcomes lead to a discussion of the expectations of the curriculum and its implementation, as well as assessment, in relation to students' skills in carrying out procedures and their understanding about the meaning of average in context.
SOURCE TERMS FOR AVERAGE DOE SNF CANISTERS
K. L. Goluoglu
2000-06-09
The objective of this calculation is to generate source terms for each type of Department of Energy (DOE) spent nuclear fuel (SNF) canister that may be disposed of at the potential repository at Yucca Mountain. The scope of this calculation is limited to generating source terms for average DOE SNF canisters, and is not intended to be used for subsequent calculations requiring bounding source terms. This calculation is to be used in future Performance Assessment calculations, or other shielding or thermal calculations requiring average source terms.
An approximate analytical approach to resampling averages
Malzahn, Dorthe; Opper, M.
2004-01-01
Using a novel reformulation, we develop a framework to compute approximate resampling data averages analytically. The method avoids multiple retraining of statistical models on the samples. Our approach uses a combination of the replica "trick" of statistical physics and the TAP approach...
Grassmann Averages for Scalable Robust PCA
Hauberg, Søren; Feragen, Aasa; Black, Michael J.
2014-01-01
As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—"big data" implies "big outliers". While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can ... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess relative importance of individual predictors are really a measure of relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions. The standardized estimates or equivalently the t...
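The standardization step described in this abstract can be sketched numerically, assuming the common definition of the partial standard deviation, s_j * sqrt(1/VIF_j) * sqrt((n-1)/(n-p)). Helper names are hypothetical and NumPy is used for the least-squares fits:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X:
    VIF_j = 1 / (1 - R2_j), with R2_j from regressing column j on the rest."""
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other columns
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        tot = (y - y.mean()) @ (y - y.mean())
        r2 = 1.0 - (resid @ resid) / tot
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

def partial_sd(X):
    """Partial standard deviation of each predictor, which shrinks toward
    zero as that predictor becomes collinear with the others."""
    n, p = X.shape
    s = X.std(axis=0, ddof=1)
    return s * np.sqrt(1.0 / vif(X)) * np.sqrt((n - 1) / (n - p))
```

With near-orthogonal predictors the partial standard deviation is close to the ordinary one; under multicollinearity it shrinks, flagging exactly the changing scales of the coefficient estimates that the abstract warns about.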
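As a toy illustration of the quantities this abstract discusses, the sketch below (our own, with made-up AIC values and coefficients, not from the paper) computes Akaike weights and a naively model-averaged coefficient. The paper's point is that such an average is only meaningful when the coefficients share a common scale across models.

```python
# Sketch with hypothetical numbers: Akaike weights from AIC values, and a
# naive model-averaged coefficient -- the practice the paper warns about
# when predictors are collinear.
import math

def akaike_weights(aics):
    """w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), delta_i = AIC_i - min AIC."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical AICs for three candidate regression models.
aics = [100.0, 102.0, 110.0]
w = akaike_weights(aics)
print([round(x, 3) for x in w])  # weights sum to 1

# Coefficient of the same predictor in each model (hypothetical values).
# Averaging them is only sensible if their scales are commensurate.
betas = [0.50, 0.41, 0.10]
beta_avg = sum(wi * bi for wi, bi in zip(w, betas))
print(round(beta_avg, 3))
```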
Perceptual averaging in individuals with Autism Spectrum Disorder
Jennifer Elise Corbett
2016-11-01
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD, using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above-chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean-size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.
Consensus in averager-copier-voter networks of moving dynamical agents
Shang, Yilun
2017-02-01
This paper deals with a hybrid opinion dynamics comprising averager, copier, and voter agents, which ramble as random walkers on a spatial network. Agents exchange information following deterministic and stochastic protocols if they reside at the same site at the same time. Based on stochastic stability of Markov chains, sufficient conditions guaranteeing consensus in the sense of almost sure convergence are obtained. The ultimate consensus state is identified in the form of an ergodicity result. Simulation studies are performed to validate the effectiveness and applicability of our theoretical results. The existence or non-existence of voters, and their proportion, are shown to play key roles in the consensus-reaching process.
Increasing average period lengths by switching of robust chaos maps in finite precision
Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.
2008-12-01
Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ∼ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits than simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
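The underlying phenomenon is easy to demonstrate: at finite precision a chaotic orbit has only finitely many representable states, so it must eventually cycle. The sketch below (our own illustration, not the authors' code; the map, seed, and precision are arbitrary choices) measures the transient and cycle length of a rounded logistic map orbit.

```python
# Sketch: iterating a chaotic map at finite precision forces the orbit onto
# a cycle; we find that cycle with a "first repeated state" search.

def orbit_period(f, x0, digits, max_steps=100_000):
    """Return (transient, period) of the orbit of f started at x0,
    with every iterate rounded to `digits` decimal digits."""
    seen = {}
    x = round(x0, digits)
    for step in range(max_steps):
        if x in seen:
            return seen[x], step - seen[x]
        seen[x] = step
        x = round(f(x), digits)
    raise RuntimeError("no cycle found within max_steps")

logistic = lambda x: 4.0 * x * (1.0 - x)

# At 3-digit precision there are at most 1001 states in [0, 1],
# so a cycle must appear quickly (pigeonhole argument).
transient, period = orbit_period(logistic, 0.123, digits=3)
print(transient, period)
```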
Parameterized Traveling Salesman Problem: Beating the Average
Gutin, G.; Patel, V.
2016-01-01
In the traveling salesman problem (TSP), we are given a complete graph Kn together with an integer weighting w on the edges of Kn, and we are asked to find a Hamilton cycle of Kn of minimum weight. Let h(w) denote the average weight of a Hamilton cycle of Kn for the weighting w. Vizing in 1973 asked…
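The quantity h(w) has a simple closed form: by symmetry every edge of Kn lies on the same number of Hamilton cycles, which gives h(w) = 2·(total edge weight)/(n − 1). The sketch below (our own illustration with a made-up weighting, not from the paper) checks this against a brute-force enumeration for small n.

```python
# Sketch: average Hamilton cycle weight in K_n, brute force vs. closed form.
from itertools import permutations

def h_bruteforce(n, w):
    """Average weight over all Hamilton cycles of K_n.
    w maps frozenset({i, j}) -> edge weight."""
    total, count = 0, 0
    for perm in permutations(range(1, n)):      # fix vertex 0 as the start
        cycle = (0,) + perm                     # each cycle appears twice
        total += sum(w[frozenset((cycle[i], cycle[(i + 1) % n]))]
                     for i in range(n))
        count += 1
    return total / count                        # double-counting cancels

n = 5
# Hypothetical integer weighting on the edges of K_5.
w = {frozenset((i, j)): (i + 1) * (j + 2)
     for i in range(n) for j in range(i + 1, n)}
closed_form = 2 * sum(w.values()) / (n - 1)
print(closed_form)
```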
On averaging methods for partial differential equations
Verhulst, F.
2001-01-01
The analysis of weakly nonlinear partial differential equations, both qualitatively and quantitatively, is emerging as an exciting field of investigation. In this report we consider specific results related to averaging, but we do not aim at completeness. The sections … and … contain important material which…
Discontinuities and hysteresis in quantized average consensus
Ceragioli, Francesca; Persis, Claudio De; Frasca, Paolo
2011-01-01
We consider continuous-time average consensus dynamics in which the agents’ states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of “practical consensus”. To cope with undesired chattering…
Bayesian Averaging is Well-Temperated
Hansen, Lars Kai
2000-01-01
Bayesian predictions are stochastic, just like predictions of any other inference scheme that generalizes from a finite sample. While a simple variational argument shows that Bayes averaging is generalization optimal given that the prior matches the teacher parameter distribution, the situation…
A Functional Measurement Study on Averaging Numerosity
Tira, Michael D.; Tagliabue, Mariaelena; Vidotto, Giulio
2014-01-01
In two experiments, participants judged the average numerosity between two sequentially presented dot patterns to perform an approximate arithmetic task. In Experiment 1, the response was given on a 0-20 numerical scale (categorical scaling), and in Experiment 2, the response was given by the production of a dot pattern of the desired numerosity…
Generalized Jackknife Estimators of Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
With the aim of improving the quality of asymptotic distributional approximations for nonlinear functionals of nonparametric estimators, this paper revisits the large-sample properties of an important member of that class, namely a kernel-based weighted average derivative estimator. Asymptotic li...
Bootstrapping Density-Weighted Average Derivatives
Cattaneo, Matias D.; Crump, Richard K.; Jansson, Michael
Employing the "small bandwidth" asymptotic framework of Cattaneo, Crump, and Jansson (2009), this paper studies the properties of a variety of bootstrap-based inference procedures associated with the kernel-based density-weighted averaged derivative estimator proposed by Powell, Stock, and Stoker...
Bayesian Model Averaging for Propensity Score Analysis
Kaplan, David; Chen, Jianshen
2013-01-01
The purpose of this study is to explore Bayesian model averaging in the propensity score context. Previous research on Bayesian propensity score analysis does not take into account model uncertainty. In this regard, an internally consistent Bayesian framework for model building and estimation must also account for model uncertainty. The…
A dynamic analysis of moving average rules
Chiarella, C.; He, X.Z.; Hommes, C.H.
2006-01-01
The use of various moving average (MA) rules remains popular with financial market practitioners. These rules have recently become the focus of a number of empirical studies, but there have been very few studies of financial market models where some agents employ technical trading rules of the type…
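A minimal sketch of the kind of rule the abstract refers to, assuming the simplest variant (go long when price is at or above its L-period moving average, otherwise stay out); this is our own illustration, not the paper's model or parameters.

```python
# Sketch: a simple price-vs-moving-average trading signal.

def moving_average(prices, L):
    """L-period simple moving average, defined from index L-1 onward."""
    return [sum(prices[i - L + 1:i + 1]) / L
            for i in range(L - 1, len(prices))]

def ma_signal(prices, L):
    """+1 (long) if price >= its L-period MA, else 0 (out of the market),
    aligned to the last len(prices) - L + 1 observations."""
    ma = moving_average(prices, L)
    return [1 if p >= m else 0 for p, m in zip(prices[L - 1:], ma)]

prices = [10, 11, 12, 11, 10, 9, 10, 12, 13]
print(ma_signal(prices, L=3))
```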
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences…
High average-power induction linacs
Prono, D.S.; Barrett, D.; Bowles, E.; Caporaso, G.J.; Chen, Yu-Jiuan; Clark, J.C.; Coffield, F.; Newton, M.A.; Nexsen, W.; Ravenscroft, D.
1989-03-15
Induction linear accelerators (LIAs) are inherently capable of accelerating several thousand amperes of approximately 50-ns duration pulses to > 100 MeV. In this paper we report progress and status in the areas of duty factor and stray power management. These technologies are vital if LIAs are to attain high average power operation. 13 figs.
High Average Power Optical FEL Amplifiers
Ben-Zvi, I; Litvinenko, V
2005-01-01
Historically, the first demonstration of the FEL was in an amplifier configuration at Stanford University. There were other notable instances of amplifying a seed laser, such as the LLNL amplifier and the BNL ATF High-Gain Harmonic Generation FEL. However, for the most part FELs are operated as oscillators or self-amplified spontaneous emission devices. Yet, in wavelength regimes where a conventional laser seed can be used, the FEL can serve as an amplifier. One promising application is very high average power generation, for instance a 100 kW average power FEL. The high electron beam power, high brightness and high efficiency that can be achieved with photoinjectors and superconducting energy recovery linacs combine well with the high-gain FEL amplifier to produce unprecedented average power FELs with some advantages. In addition to the general features of the high average power FEL amplifier, we will look at a 100 kW class FEL amplifier that is being designed to operate on the 0.5 ampere Energy Recovery Li…