Hung, Tran Loc; Giang, Le Truong
2016-01-01
Using the Stein-Chen method, some upper bounds in the Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomially distributed random variables are established in this note.
Extended Poisson Exponential Distribution
Directory of Open Access Journals (Sweden)
Anum Fatima
2015-09-01
A new mixture of the Modified Exponential (ME) and Poisson distributions is introduced in this paper. By taking the maximum of Modified Exponential random variables when the sample size follows a zero-truncated Poisson distribution, we derive the new distribution, named the Extended Poisson Exponential distribution. This distribution possesses increasing and decreasing failure rates. The Poisson-Exponential, Modified Exponential and Exponential distributions are special cases of this distribution. We also investigate some mathematical properties of the distribution, along with its information entropies and order statistics. The parameters are estimated using the maximum likelihood procedure. Finally, we illustrate a real-data application of our distribution.
Scaling the Poisson Distribution
Farnsworth, David L.
2014-01-01
We derive the additive property of Poisson random variables directly from the probability mass function. An important application of the additive property to quality testing of computer chips is presented.
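The additive property can be checked numerically as well: convolving the mass functions of Poisson(λ₁) and Poisson(λ₂) reproduces the mass function of Poisson(λ₁+λ₂) term by term. A minimal sketch (illustrative only, not taken from the article):

```python
import math

def poisson_pmf(k, lam):
    # P(X = k) for X ~ Poisson(lam)
    return math.exp(-lam) * lam ** k / math.factorial(k)

def convolve_pmfs(lam1, lam2, kmax):
    # pmf of X + Y via discrete convolution, X ~ Poisson(lam1), Y ~ Poisson(lam2)
    return [
        sum(poisson_pmf(j, lam1) * poisson_pmf(k - j, lam2) for j in range(k + 1))
        for k in range(kmax + 1)
    ]

lam1, lam2 = 1.5, 2.5
conv = convolve_pmfs(lam1, lam2, 20)
direct = [poisson_pmf(k, lam1 + lam2) for k in range(21)]
# The convolution matches Poisson(lam1 + lam2) term by term.
assert all(abs(a - b) < 1e-12 for a, b in zip(conv, direct))
```

The agreement is exact analytically; the assertion only allows for floating-point rounding.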
White Noise of Poisson Random Measures
Proske, Frank; Øksendal, Bernt
2002-01-01
We develop a white noise theory for Poisson random measures associated with a Lévy process. The starting point of this theory is a chaos expansion with kernels of polynomial type. We use this to construct the white noise of a Poisson random measure, which takes values in a certain distribution space. Then we show how a Skorohod/Itô integral for point processes can be represented by a Bochner integral in terms of white noise of the random measure and a Wick product. Further, we apply these co...
Independent production and Poisson distribution
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1994-01-01
The well-known statement that inclusive cross-sections factorize in the case of independent production of particles (or clusters, jets, etc.), and the resulting conclusion that their multiplicity is Poisson distributed, do not follow from probability theory in any way. Applying the theorem on the product of independent probabilities accurately, quite different equations are obtained, and no conclusions about multiplicity distributions follow. 11 refs
Compound Poisson Approximations for Sums of Random Variables
Serfozo, Richard F.
1986-01-01
We show that a sum of dependent random variables is approximately compound Poisson when the variables are rarely nonzero and, given they are nonzero, their conditional distributions are nearly identical. We give several upper bounds on the total-variation distance between the distribution of such a sum and a compound Poisson distribution. Included is an example for Markovian occurrences of a rare event. Our bounds are consistent with those that are known for Poisson approximations for sums of...
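A compound Poisson sum S = X₁ + … + X_N with N ~ Poisson(λ) satisfies E[S] = λE[X] and Var[S] = λE[X²]; a quick simulation check of these moments (an illustration with arbitrary jump sizes, not the paper's total-variation bounds):

```python
import random, math

random.seed(42)

def compound_poisson_sample(lam, jump_sampler):
    # Draw N ~ Poisson(lam) by Knuth's inversion, then sum N iid jumps.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            break
        k += 1
    return sum(jump_sampler() for _ in range(k))

def jump():
    # Uniform jumps on {1, 2, 3}: E[X] = 2, E[X^2] = 14/3
    return random.choice([1, 2, 3])

lam = 3.0
samples = [compound_poisson_sample(lam, jump) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Compound Poisson moments: E[S] = lam*E[X] = 6, Var[S] = lam*E[X^2] = 14
assert abs(mean - 6.0) < 0.1
assert abs(var - 14.0) < 0.3
```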
Bhamidi, S.; Van der Hofstad, R.; Hooghiemstra, G.
2010-01-01
We study first passage percolation (FPP) on the configuration model (CM) having power-law degrees with exponent τ ∈ [1, 2) and exponential edge weights. We derive the distributional limit of the minimal weight of a path between typical vertices in the network and the number of edges on the
Reduction of Nambu-Poisson Manifolds by Regular Distributions
Das, Apurba
2018-03-01
The version of Marsden-Ratiu reduction theorem for Nambu-Poisson manifolds by a regular distribution has been studied by Ibáñez et al. In this paper we show that the reduction is always ensured unless the distribution is zero. Next we extend the more general Falceto-Zambon Poisson reduction theorem for Nambu-Poisson manifolds. Finally, we define gauge transformations of Nambu-Poisson structures and show that these transformations commute with the reduction procedure.
Prescription-induced jump distributions in multiplicative Poisson processes
Suweis, Samir; Porporato, Amilcare; Rinaldo, Andrea; Maritan, Amos
2011-06-01
Generalized Langevin equations (GLE) with multiplicative white Poisson noise pose the usual prescription dilemma leading to different evolution equations (master equations) for the probability distribution. Contrary to the case of multiplicative Gaussian white noise, the Stratonovich prescription does not correspond to the well-known midpoint (or any other intermediate) prescription. By introducing an inertial term in the GLE, we show that the Itô and Stratonovich prescriptions naturally arise depending on two time scales, one induced by the inertial term and the other determined by the jump event. We also show that, when the multiplicative noise is linear in the random variable, one prescription can be made equivalent to the other by a suitable transformation in the jump probability distribution. We apply these results to a recently proposed stochastic model describing the dynamics of primary soil salinization, in which the salt mass balance within the soil root zone requires the analysis of different prescriptions arising from the resulting stochastic differential equation forced by multiplicative white Poisson noise, the features of which are tailored to the characters of the daily precipitation. A method is finally suggested to infer the most appropriate prescription from the data.
Minimum Hellinger distance estimation for k-component poisson mixture with random effects.
Xiang, Liming; Yau, Kelvin K W; Van Hui, Yer; Lee, Andy H
2008-06-01
The k-component Poisson regression mixture with random effects is an effective model for describing the heterogeneity of clustered count data arising from several latent subpopulations. However, the residual maximum likelihood (REML) estimates of the regression coefficients and variance component parameters tend to be unstable and may result in misleading inferences in the presence of outliers or extreme contamination. In the literature, minimum Hellinger distance (MHD) estimation has been investigated to obtain robust estimation for finite Poisson mixtures. This article develops a robust MHD estimation approach for k-component Poisson mixtures with normally distributed random effects. By applying the Gaussian quadrature technique to approximate the integrals involved in the marginal distribution, the marginal probability function of the k-component Poisson mixture with random effects can be approximated by a summation over a set of finite Poisson mixtures. A simulation study shows that the MHD estimates perform satisfactorily for data without outlying observation(s) and outperform the REML estimates when the data are contaminated. Application to a data set of recurrent urinary tract infections (UTI) with random institution effects demonstrates the practical use of the robust MHD estimation method.
Investigation of Random Switching Driven by a Poisson Point Process
DEFF Research Database (Denmark)
Simonsen, Maria; Schiøler, Henrik; Leth, John-Josef
2015-01-01
This paper investigates the switching mechanism of a two-dimensional switched system, when the switching events are generated by a Poisson point process. A model, in the shape of a stochastic process, for such a system is derived and the distribution of the trajectory's position is developed together with marginal density functions for the coordinate functions. Furthermore, the joint probability distribution is given explicitly.
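Switching events generated by a Poisson point process have iid exponential inter-arrival times, so the event stream itself is easy to simulate. A toy two-state sketch (the flip-at-every-event rule is an assumption for illustration, not the paper's system):

```python
import random

random.seed(7)

def switching_times(rate, horizon):
    # Event times of a homogeneous Poisson point process on [0, horizon]:
    # inter-arrival times are iid Exponential(rate).
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def state_at(t, times, initial=0):
    # Toy two-state switch: the state flips at each event time.
    flips = sum(1 for s in times if s <= t)
    return (initial + flips) % 2

rate, horizon = 2.0, 1000.0
times = switching_times(rate, horizon)
# The number of events over [0, horizon] is Poisson(rate*horizon),
# so the empirical rate should be close to `rate`.
assert abs(len(times) / horizon - rate) < 0.2
```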
Random walk in dynamically disordered chains: Poisson white noise disorder
International Nuclear Information System (INIS)
Hernandez-Garcia, E.; Pesquera, L.; Rodriguez, M.A.; San Miguel, M.
1989-01-01
Exact solutions are given for a variety of models of random walks in a chain with time-dependent disorder. Dynamic disorder is modeled by white Poisson noise. Models with site-independent (global) and site-dependent (local) disorder are considered. Results are described in terms of an effective random walk in a nondisordered medium. In the cases of global disorder the effective random walk contains multistep transitions, so that the continuous limit is not a diffusion process. In the cases of local disorder the effective process is equivalent to the usual random walk in the absence of disorder but with slower diffusion. Difficulties associated with the continuous-limit representation of random walk in a disordered chain are discussed. In particular, the authors consider explicit cases in which taking the continuous limit and averaging over disorder sources do not commute
Comparison between two bivariate Poisson distributions through the ...
African Journals Online (AJOL)
To remedy this problem, Berkhout and Plug proposed a bivariate Poisson distribution that allows the correlation to be negative, zero, or positive. In this paper, we show that these models are nearly everywhere asymptotically equal. Since the φ-divergence converges toward zero, both models are ...
Is neutron evaporation from highly excited nuclei a Poisson random process?
International Nuclear Information System (INIS)
Simbel, M.H.
1982-01-01
It is suggested that neutron emission from highly excited nuclei follows a Poisson random process. The continuous variable of the process is the excitation energy excess over the binding energy of the emitted neutrons and the discrete variable is the number of emitted neutrons. Cross sections for (HI,xn) reactions are analyzed using a formula containing a Poisson distribution function. The post- and pre-equilibrium components of the cross section are treated separately. The agreement between the predictions of this formula and the experimental results is very good. (orig.)
Seasonally adjusted birth frequencies follow the Poisson distribution.
Barra, Mathias; Lindstrøm, Jonas C; Adams, Samantha S; Augestad, Liv A
2015-12-15
Variations in birth frequencies have an impact on activity planning in maternity wards. Previous studies of this phenomenon have commonly included elective births. A Danish study of spontaneous births found that birth frequencies were well modelled by a Poisson process. Somewhat unexpectedly, there were also weekly variations in the frequency of spontaneous births. Another study claimed that birth frequencies follow the Benford distribution. Our objective was to test these results. We analysed 50,017 spontaneous births at Akershus University Hospital in the period 1999-2014. To investigate the Poisson distribution of these births, we plotted their variance over a sliding average. We specified various Poisson regression models, with the number of births on a given day as the outcome variable. The explanatory variables included various combinations of years, months, days of the week and the digit sum of the date. The relationship between the variance and the average fits well with an underlying Poisson process. A Benford distribution was disproved by a goodness-of-fit test. The fit of the regression model is significantly improved when monthly and day-of-the-week variables are included. Altogether 7.5% more children are born on Tuesdays than on Sundays. The digit sum of the date is non-significant as an explanatory variable (p = 0.23), nor does it increase the explained variance. INTERPRETATION: Spontaneous births are well modelled by a time-dependent Poisson process when monthly and day-of-the-week variation is included. The frequency is highest in summer towards June and July, Friday and Tuesday stand out as particularly busy days, and the activity level is at its lowest during weekends.
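The variance-versus-mean diagnostic used in this study is easy to reproduce on synthetic data: for counts from a constant-rate Poisson process, the index of dispersion (variance/mean) should be close to one. A sketch with a hypothetical daily rate of 8.6 births (an assumed value, not from the study):

```python
import random, math

random.seed(3)

def poisson_draw(lam):
    # Knuth's inversion method for a Poisson(lam) draw.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# Simulated daily birth counts with a constant rate; for a Poisson
# process the sample variance should track the sample mean.
counts = [poisson_draw(8.6) for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
assert abs(var / mean - 1.0) < 0.1
```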
A Raikov-Type Theorem for Radial Poisson Distributions: A Proof of Kingman's Conjecture
Van Nguyen, Thu
2011-01-01
In the present paper we prove the following conjecture from Kingman, J.F.C., Random walks with spherical symmetry, Acta Math. 109 (1963), 11-53, concerning Raikov's famous theorem on the decomposition of Poisson random variables: "If a radial sum of two independent random variables X and Y is radial Poisson, then each of them must be radial Poisson."
A comparison of Poisson-one-inflated power series distributions for ...
African Journals Online (AJOL)
A class of Poisson-one-inflated power series distributions (the binomial, the Poisson, the negative binomial, the geometric, the log-series and the misrecorded Poisson) are proposed for modeling rural out-migration at the household level. The probability mass functions of the mixture distributions are derived and fitted to the ...
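As a sketch of the one-inflation idea for the Poisson member of the class only (the mixing form below is an assumption based on standard one-inflated models, not the paper's derivation): with probability p the observation is an extra "one", otherwise it comes from an ordinary Poisson(λ).

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def one_inflated_poisson_pmf(k, lam, p):
    # With probability p the observation is an "extra" one;
    # otherwise it comes from an ordinary Poisson(lam).
    base = poisson_pmf(k, lam)
    return p + (1 - p) * base if k == 1 else (1 - p) * base

lam, p = 2.0, 0.3
probs = [one_inflated_poisson_pmf(k, lam, p) for k in range(60)]
assert abs(sum(probs) - 1.0) < 1e-9          # a valid pmf
assert one_inflated_poisson_pmf(1, lam, p) > poisson_pmf(1, lam)  # mass inflated at one
```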
Polyelectrolyte Microcapsules: Ion Distributions from a Poisson-Boltzmann Model
Tang, Qiyun; Denton, Alan R.; Rozairo, Damith; Croll, Andrew B.
2014-03-01
Recent experiments have shown that polystyrene-polyacrylic-acid-polystyrene (PS-PAA-PS) triblock copolymers in a solvent mixture of water and toluene can self-assemble into spherical microcapsules. Suspended in water, the microcapsules have a toluene core surrounded by an elastomer triblock shell. The longer, hydrophilic PAA blocks remain near the outer surface of the shell, becoming charged through dissociation of OH functional groups in water, while the shorter, hydrophobic PS blocks form a networked (glass or gel) structure. Within a mean-field Poisson-Boltzmann theory, we model these polyelectrolyte microcapsules as spherical charged shells, assuming different dielectric constants inside and outside the capsule. By numerically solving the nonlinear Poisson-Boltzmann equation, we calculate the radial distribution of anions and cations and the osmotic pressure within the shell as a function of salt concentration. Our predictions, which can be tested by comparison with experiments, may guide the design of microcapsules for practical applications, such as drug delivery. This work was supported by the National Science Foundation under Grant No. DMR-1106331.
Confidence limits for parameters of Poisson and binomial distributions
International Nuclear Information System (INIS)
Arnett, L.M.
1976-04-01
The confidence limits for the frequency in a Poisson process and for the proportion of successes in a binomial process were calculated and tabulated for situations in which the observed value of the frequency or proportion and an a priori distribution of these parameters are available. Methods are used that produce limits with exactly the stated confidence levels. The confidence interval [a,b] is calculated so that Pr[a ≤ λ ≤ b | c, μ] equals the stated confidence level, where c is the observed value of the parameter and μ is the a priori hypothesis of the distribution of this parameter. A Bayesian type of analysis is used. The intervals calculated are narrower than, and appreciably different from, results, known to be conservative, that are often used in problems of this type. Pearson and Hartley recognized the characteristics of their methods and contemplated that exact methods could someday be used. The calculation of the exact intervals requires involved numerical analyses readily implemented only on digital computers, which were not available to Pearson and Hartley. A Monte Carlo experiment was conducted to verify a selected interval from those calculated. This numerical experiment confirmed the results of the analytical methods and the prediction of Pearson and Hartley that their published tables give conservative results
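The Bayesian-type calculation can be sketched by discretizing the posterior on a grid and reading off a central interval. This toy version uses a flat prior rather than the tabulated a priori distributions of the report (prior, grid size, and observed count are all assumptions for illustration):

```python
import math

def poisson_like(lam, c):
    # Likelihood of observing count c under rate lam.
    return math.exp(-lam) * lam ** c / math.factorial(c)

def credible_interval(c, prior, lam_max=50.0, n=20000, level=0.95):
    # Discretize the posterior p(lam | c) ∝ prior(lam) * L(lam; c)
    # on a grid and read off a central credible interval.
    h = lam_max / n
    grid = [(i + 0.5) * h for i in range(n)]
    post = [prior(l) * poisson_like(l, c) for l in grid]
    z = sum(post)
    post = [w / z for w in post]
    cum, lo, hi = 0.0, None, None
    for l, w in zip(grid, post):
        cum += w
        if lo is None and cum >= (1 - level) / 2:
            lo = l
        if hi is None and cum >= 1 - (1 - level) / 2:
            hi = l
            break
    return lo, hi

# Flat prior, observed count c = 10: the posterior is Gamma(11, 1).
lo, hi = credible_interval(10, prior=lambda l: 1.0)
assert lo < 10 < hi
```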
Michael, A. J.
2012-12-01
Detecting trends in the rate of sporadic events is a problem for earthquakes and other natural hazards such as storms, floods, or landslides. I use synthetic events to judge the tests used to address this problem in seismology and consider their application to other hazards. Recent papers have analyzed the record of magnitude ≥7 earthquakes since 1900 and concluded that the events are consistent with a constant-rate Poisson process plus localized aftershocks (Michael, GRL, 2011; Shearer and Stark, PNAS, 2012; Daub et al., GRL, 2012; Parsons and Geist, BSSA, 2012). Each paper removed localized aftershocks and then used a different suite of statistical tests to test the null hypothesis that the remaining data could be drawn from a constant-rate Poisson process. The methods include KS tests between event times or inter-event times and predictions from a Poisson process, the autocorrelation function on inter-event times, and two tests on the number of events in time bins: the Poisson dispersion test and the multinomial chi-square test. The range of statistical tests gives us confidence in the conclusions, which are robust with respect to the choice of tests and parameters. But which tests are optimal and how sensitive are they to deviations from the null hypothesis? The latter point was raised by Dimer (arXiv, 2012), who suggested that the lack of consideration of Type 2 errors prevents these papers from being able to place limits on the degree of clustering and rate changes that could be present in the global seismogenic process. I produce synthetic sets of events that deviate from a constant-rate Poisson process using a variety of statistical simulation methods, including Gamma-distributed inter-event times and random walks. The sets of synthetic events are examined with the statistical tests described above. Preliminary results suggest that with 100 to 1000 events, a data set that does not reject the Poisson null hypothesis could have a variability that is 30% to
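One of the tests named above, the Poisson dispersion test, is straightforward to implement. The chi-square critical value below uses the Wilson-Hilferty approximation (an approximation assumed adequate here for moderate degrees of freedom; the event counts are synthetic, not the earthquake catalog):

```python
import math, random

random.seed(1)

def dispersion_statistic(counts):
    # Poisson dispersion (variance) test: D = sum (n_i - nbar)^2 / nbar
    # is approximately chi-square with len(counts)-1 df under H0.
    nbar = sum(counts) / len(counts)
    return sum((n - nbar) ** 2 for n in counts) / nbar

def chi2_upper_critical(df, z=1.6449):
    # Wilson-Hilferty approximation to the upper chi-square quantile;
    # z is the matching upper-tail normal quantile (1.6449 for alpha = 0.05).
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def poisson_draw(lam):
    # Knuth's inversion method for a Poisson(lam) draw.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# Counts binned from a constant-rate process; under H0 the test
# rejects only about 5% of the time.
counts = [poisson_draw(5.0) for _ in range(200)]
D = dispersion_statistic(counts)
crit = chi2_upper_critical(len(counts) - 1)
reject = D > crit
```

Deviations such as Gamma-distributed inter-event times inflate or deflate D relative to the chi-square reference, which is what makes the statistic useful for detecting clustering.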
A Hands-on Activity for Teaching the Poisson Distribution Using the Stock Market
Dunlap, Mickey; Studstill, Sharyn
2014-01-01
The number of increases a particular stock makes over a fixed period follows a Poisson distribution. This article discusses using this easily found data as an opportunity to let students become involved in the data collection and analysis process.
Ship-Track Models Based on Poisson-Distributed Port-Departure Times
National Research Council Canada - National Science Library
Heitmeyer, Richard
2006-01-01
... of those ships, and their nominal speeds. The probability law assumes that the ship departure times are Poisson-distributed with a time-varying departure rate and that the ship speeds and ship routes are statistically independent...
Random vibrations of Rayleigh vibroimpact oscillator under Parametric Poisson white noise
Yang, Guidong; Xu, Wei; Jia, Wantao; He, Meijuan
2016-04-01
Random vibration problems for a single-degree-of-freedom (SDOF) Rayleigh vibroimpact system with a rigid barrier under parametric Poisson white noise are considered. The averaged generalized Fokker-Planck-Kolmogorov (FPK) equations with parametric Poisson white noise are derived after using the nonsmooth variable transformation and the approximate stationary solutions for the system's response are obtained by perturbation method. The results are validated numerically by using Monte Carlo simulations from original vibroimpact system. Effects on the response for different damping coefficients, restitution coefficients and noise intensities are discussed. Furthermore, stochastic bifurcations are also explored.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
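The zero-truncated Poisson component of such a model has pmf P(K = k | K ≥ 1) = e^{-λ}λ^k / (k!(1 - e^{-λ})) and mean λ/(1 - e^{-λ}). A minimal check of these two facts (illustrative; the parameter value is arbitrary):

```python
import math

def truncated_poisson_pmf(k, lam):
    # Zero-truncated Poisson: P(K = k | K >= 1), k = 1, 2, ...
    if k < 1:
        return 0.0
    return math.exp(-lam) * lam ** k / (math.factorial(k) * (1 - math.exp(-lam)))

lam = 1.8
probs = [truncated_poisson_pmf(k, lam) for k in range(1, 80)]
assert abs(sum(probs) - 1.0) < 1e-12
# Mean of the zero-truncated Poisson is lam / (1 - e^{-lam}).
mean = sum(k * truncated_poisson_pmf(k, lam) for k in range(1, 80))
assert abs(mean - lam / (1 - math.exp(-lam))) < 1e-9
```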
e+e- hadronic multiplicity distributions: negative binomial or Poisson?
International Nuclear Information System (INIS)
Carruthers, P.; Shih, C.C.
1986-01-01
On the basis of fits to the multiplicity distributions for variable rapidity windows and to the forward-backward correlation for the 2-jet subset of e+e- data, it is impossible to distinguish between a global negative binomial and its generalization, the partially coherent distribution. It is suggested that intensity interferometry, especially the Bose-Einstein correlation, gives information which will discriminate among dynamical models. 16 refs
Poisson mixture distribution analysis for North Carolina SIDS counts using information criteria
Directory of Open Access Journals (Sweden)
Tyler Massaro
2017-09-01
Mixture distribution analysis provides us with a tool for identifying unlabeled clusters that naturally arise in a data set. In this paper, we demonstrate how to use the information criteria AIC and BIC to choose the optimal number of clusters for a given set of univariate Poisson data. We give an empirical comparison between minimum Hellinger distance (MHD) estimation and EM estimation for finding parameters in a mixture of Poisson distributions with artificial data. In addition, we discuss Bayes error in the context of classification problems with mixtures of 2, 3, 4, and 5 Poisson models. Finally, we provide an example with real data, taken from a study that looked at sudden infant death syndrome (SIDS) count data from 100 North Carolina counties (Symons et al., 1983). This gives us an opportunity to demonstrate the advantages of the proposed model framework in comparison with the original analysis.
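The EM-plus-information-criteria workflow can be sketched for a two-component Poisson mixture on synthetic data (the SIDS data and the paper's MHD estimator are not reproduced; component rates, weights, and sample size below are assumptions for illustration):

```python
import math, random

random.seed(0)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def em_two_poisson(data, lam1, lam2, w=0.5, iters=200):
    # EM for the mixture w*Pois(lam1) + (1-w)*Pois(lam2).
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        r = []
        for k in data:
            a = w * poisson_pmf(k, lam1)
            b = (1 - w) * poisson_pmf(k, lam2)
            r.append(a / (a + b))
        # M-step: responsibility-weighted means and mixing proportion.
        s = sum(r)
        lam1 = sum(ri * k for ri, k in zip(r, data)) / s
        lam2 = sum((1 - ri) * k for ri, k in zip(r, data)) / (len(data) - s)
        w = s / len(data)
    return lam1, lam2, w

def loglik(data, lam1, lam2, w):
    return sum(math.log(w * poisson_pmf(k, lam1) + (1 - w) * poisson_pmf(k, lam2))
               for k in data)

def poisson_draw(lam):
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# Synthetic data: half Poisson(2), half Poisson(9).
data = [poisson_draw(2.0) for _ in range(500)] + [poisson_draw(9.0) for _ in range(500)]
lam1, lam2, w = em_two_poisson(data, 1.0, 10.0)
ll = loglik(data, lam1, lam2, w)
n_params = 3  # lam1, lam2, w
aic = 2 * n_params - 2 * ll
bic = n_params * math.log(len(data)) - 2 * ll
```

Repeating this for 1, 2, 3, ... components and comparing AIC/BIC is the model-selection step the paper describes.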
TCP (truncated compound Poisson) process for multiplicity distributions in high energy collisions
International Nuclear Information System (INIS)
Srivastava, P.P.
1989-01-01
On using the Poisson distribution truncated at zero for intermediate cluster decay in a compound Poisson process, we obtain the TCP distribution, which describes quite well the multiplicity distributions in high energy collisions. A detailed comparison is made between TCP and the negative binomial (NB) for UA5 data. The reduced moments up to the fifth agree very well with the observed ones. The TCP curves are narrower than NB at the high-multiplicity tail, look narrower at very high energy, and develop shoulders and oscillations which become increasingly pronounced as the energy grows. At lower energies the curves are very close to the NB ones. We also compare the parameterizations by these two distributions of the data for fixed rapidity intervals for UA5 and of the low-energy data for e+e- annihilation and pion-proton collisions. Expressions for the reduced moments of the compound Poisson distribution and its Poisson transforms are also given. The TCP curves and the curves of the reduced moments for different values of the parameters are also presented. (author)
International Nuclear Information System (INIS)
Lewis, J.C.
2011-01-01
In a recent paper (Lewis, 2008) a class of models suitable for application to collision-sequence interference was introduced. In these models velocities are assumed to be completely randomized in each collision. The distribution of velocities was assumed to be Gaussian. The integrated induced dipole moment μk, for vector interference, or the scalar modulation μk, for scalar interference, was assumed to be a function of the impulse (integrated force) fk, or its magnitude fk, experienced by the molecule in a collision. For most of (Lewis, 2008) it was assumed that the induced dipole moment is proportional to the impulse, μk ∝ fk, but it proved to be possible to extend the models so that the magnitude of the induced dipole moment is equal to an arbitrary power or sum of powers of the intermolecular force. This allows estimates of the infilling of the interference dip due to the disproportionality of the induced dipole moment and force. One particular such model, using data from (Herman and Lewis, 2006), leads to the most realistic estimate for the infilling of the vector interference dip yet obtained. In (Lewis, 2008) the drastic assumption was made that collision times occurred at equal intervals. In the present paper that assumption is removed: the collision times are taken to form a Poisson process. This is much more realistic than the equal-intervals assumption. The interference dip is found to be a Lorentzian in this model
Directory of Open Access Journals (Sweden)
Hyungsuk Tak
2017-06-01
Rgbp is an R package that provides estimates and verifiable confidence intervals for random effects in two-level conjugate hierarchical models for overdispersed Gaussian, Poisson, and binomial data. Rgbp models aggregate data from k independent groups summarized by observed sufficient statistics for each random effect, such as sample means, possibly with covariates. Rgbp uses approximate Bayesian machinery with unique improper priors for the hyper-parameters, which leads to good repeated sampling coverage properties for random effects. A special feature of Rgbp is an option that generates synthetic data sets to check whether the interval estimates for random effects actually meet the nominal confidence levels. Additionally, Rgbp provides inference statistics for the hyper-parameters, e.g., regression coefficients.
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations that we coin "Paretian Poisson processes". This class is elemental in statistical physics, connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.
A random matrix approach to the crossover of energy-level statistics from Wigner to Poisson
International Nuclear Information System (INIS)
Datta, Nilanjana; Kunz, Herve
2004-01-01
We analyze a class of parametrized random matrix models, introduced by Rosenzweig and Porter, which is expected to describe the energy level statistics of quantum systems whose classical dynamics varies from regular to chaotic as a function of a parameter. We compute the generating function for the correlations of energy levels, in the limit of infinite matrix size. The crossover between Poisson and Wigner statistics is measured by a renormalized coupling constant. The model is exactly solved in the sense that, in the limit of infinite matrix size, the energy-level correlation functions and their generating function are given in terms of a finite set of integrals
Filling of a Poisson trap by a population of random intermittent searchers
Bressloff, Paul C.
2012-03-01
We extend the continuum theory of random intermittent search processes to the case of N independent searchers looking to deliver cargo to a single hidden target located somewhere on a semi-infinite track. Each searcher randomly switches between a stationary state and either a leftward or rightward constant velocity state. We assume that all of the particles start at one end of the track and realize sample trajectories independently generated from the same underlying stochastic process. The hidden target is treated as a partially absorbing trap in which a particle can only detect the target and deliver its cargo if it is stationary and within range of the target; the particle is removed from the system after delivering its cargo. As a further generalization of previous models, we assume that up to n successive particles can find the target and deliver their cargo. Assuming that the rate of target detection scales as 1/N, we show that there exists a well-defined mean-field limit N → ∞ in which the stochastic model reduces to a deterministic system of linear reaction-hyperbolic equations for the concentrations of particles in each of the internal states. These equations decouple from the stochastic process associated with filling the target with cargo. The latter can be modeled as a Poisson process in which the time-dependent rate of filling λ(t) depends on the concentration of stationary particles within the target domain. Hence, we refer to the target as a Poisson trap. We analyze the efficiency of filling the Poisson trap with n particles in terms of the waiting time density f_n(t). The latter is determined by the integrated Poisson rate μ(t) = ∫_0^t λ(s) ds, which in turn depends on the solution to the reaction-hyperbolic equations. We obtain an approximate solution for the particle concentrations by reducing the system of reaction-hyperbolic equations to a scalar advection-diffusion equation using a quasisteady-state analysis. We compare our analytical results for the
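A Poisson process with time-dependent rate λ(t) can be simulated by thinning, and E[N(t)] equals the integrated rate μ(t) = ∫_0^t λ(s) ds. A sketch with an assumed saturating rate (illustrative only, not the paper's reaction-hyperbolic solution for λ(t)):

```python
import random, math

random.seed(11)

def inhomogeneous_poisson(rate, rate_max, horizon):
    # Lewis-Shedler thinning: simulate a Poisson process with
    # time-dependent rate(t) <= rate_max by thinning a homogeneous
    # process of rate rate_max.
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_max)
        if t > horizon:
            return times
        if random.random() < rate(t) / rate_max:
            times.append(t)

def rate(t):
    # Assumed filling rate that ramps up and saturates: lambda(t) = 2(1 - e^{-t}).
    return 2.0 * (1.0 - math.exp(-t))

horizon = 50.0
mu = 2.0 * (horizon - (1.0 - math.exp(-horizon)))  # integral of rate over [0, horizon]

counts = [len(inhomogeneous_poisson(rate, 2.0, horizon)) for _ in range(2000)]
mean_count = sum(counts) / len(counts)
# E[N(horizon)] equals the integrated rate mu(horizon).
assert abs(mean_count - mu) < 1.5
```

The waiting-time density for the n-th arrival follows from the same μ(t), which is why the integrated rate is the central object in the paper's efficiency analysis.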
Chavanis, P H; Delfini, L
2014-03-01
We study random transitions between two metastable states that appear below a critical temperature in a one-dimensional self-gravitating Brownian gas with a modified Poisson equation experiencing a second order phase transition from a homogeneous phase to an inhomogeneous phase [P. H. Chavanis and L. Delfini, Phys. Rev. E 81, 051103 (2010)]. We numerically solve the N-body Langevin equations and the stochastic Smoluchowski-Poisson system, which takes fluctuations (finite N effects) into account. The system switches back and forth between the two metastable states (bistability) and the particles accumulate successively at the center or at the boundary of the domain. We explicitly show that these random transitions exhibit the phenomenology of the ordinary Kramers problem for a Brownian particle in a double-well potential. The distribution of the residence time is Poissonian and the average lifetime of a metastable state is given by the Arrhenius law; i.e., it is proportional to the exponential of the barrier of free energy ΔF divided by the energy of thermal excitation kBT. Since the free energy is proportional to the number of particles N for a system with long-range interactions, the lifetime of metastable states scales as e^N and is considerable for N≫1. As a result, in many applications, metastable states of systems with long-range interactions can be considered as stable states. However, for moderate values of N, or close to a critical point, the lifetime of the metastable states is reduced since the barrier of free energy decreases. In that case, the fluctuations become important and the mean field approximation is no longer valid. This is the situation considered in this paper. By an appropriate change of notations, our results also apply to bacterial populations experiencing chemotaxis in biology. Their dynamics can be described by a stochastic Keller-Segel model that takes fluctuations into account and goes beyond the usual mean field approximation.
Energy Technology Data Exchange (ETDEWEB)
Bu, W.; Vaknin, D.; Travesset, A. (Iowa State)
2010-07-13
Surface sensitive synchrotron-x-ray scattering studies reveal the distributions of monovalent ions next to highly charged interfaces. A lipid phosphate (dihexadecyl hydrogen phosphate) was spread as a monolayer at the air-water interface, containing CsI at various concentrations. Using anomalous reflectivity off and at the L3 Cs+ resonance, we provide spatial counterion distributions (Cs+) next to the negatively charged interface over a wide range of ionic concentrations. We argue that at low salt concentrations and for pure water the enhanced concentration of hydroniums H3O+ at the interface leads to proton transfer back to the phosphate group by a high contact potential, whereas high salt concentrations lower the contact potential resulting in proton release and increased surface charge density. The experimental ionic distributions are in excellent agreement with a renormalized-surface-charge Poisson-Boltzmann theory without fitting parameters or additional assumptions.
Bu, Wei; Vaknin, David; Travesset, Alex
2005-12-01
Surface sensitive synchrotron-x-ray scattering studies reveal the distributions of monovalent ions next to highly charged interfaces. A lipid phosphate (dihexadecyl hydrogen phosphate) was spread as a monolayer at the air-water interface, containing CsI at various concentrations. Using anomalous reflectivity off and at the L3 Cs+ resonance, we provide spatial counterion distributions (Cs+) next to the negatively charged interface over a wide range of ionic concentrations. We argue that at low salt concentrations and for pure water the enhanced concentration of hydroniums H3O+ at the interface leads to proton transfer back to the phosphate group by a high contact potential, whereas high salt concentrations lower the contact potential resulting in proton release and increased surface charge density. The experimental ionic distributions are in excellent agreement with a renormalized-surface-charge Poisson-Boltzmann theory without fitting parameters or additional assumptions.
SnIPRE: selection inference using a Poisson random effects model.
Directory of Open Access Journals (Sweden)
Kirsten E Eilertson
Full Text Available We present an approach for identifying genes under natural selection using polymorphism and divergence data from synonymous and non-synonymous sites within genes. A generalized linear mixed model is used to model the genome-wide variability among categories of mutations and estimate its functional consequence. We demonstrate how the model's estimated fixed and random effects can be used to identify genes under selection. The parameter estimates from our generalized linear model can be transformed to yield population genetic parameter estimates for quantities including the average selection coefficient for new mutations at a locus, the synonymous and non-synonymous mutation rates, and species divergence times. Furthermore, our approach incorporates stochastic variation due to the evolutionary process and can be fit using standard statistical software. The model is fit in both the empirical Bayes and Bayesian settings using the lme4 package in R, and Markov chain Monte Carlo methods in WinBUGS. Using simulated data we compare our method to existing approaches for detecting genes under selection: the McDonald-Kreitman test, and two versions of the Poisson random field based method MKprf. Overall, we find our method universally outperforms existing methods for detecting genes subject to selection using polymorphism and divergence data.
A Tutorial of the Poisson Random Field Model in Population Genetics
Directory of Open Access Journals (Sweden)
Praveen Sethupathy
2008-01-01
Full Text Available Population genetics is the study of allele frequency changes driven by various evolutionary forces such as mutation, natural selection, and random genetic drift. Although natural selection is widely recognized as a bona-fide phenomenon, the extent to which it drives evolution continues to remain unclear and controversial. Various qualitative techniques, or so-called “tests of neutrality”, have been introduced to detect signatures of natural selection. A decade and a half ago, Stanley Sawyer and Daniel Hartl provided a mathematical framework, referred to as the Poisson random field (PRF, with which to determine quantitatively the intensity of selection on a particular gene or genomic region. The recent availability of large-scale genetic polymorphism data has sparked widespread interest in genome-wide investigations of natural selection. To that end, the original PRF model is of particular interest for geneticists and evolutionary genomicists. In this article, we will provide a tutorial of the mathematical derivation of the original Sawyer and Hartl PRF model.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
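The gamma-mixed Poisson construction described above is easy to reproduce; the sketch below (our illustration, not the paper's code, with invented parameters) draws counts whose rate is gamma-distributed, so the marginal is negative binomial with variance mean·(1 + mean/shape) exceeding the mean:

```python
import math
import random
import statistics

random.seed(42)

def poisson_draw(lam):
    # Knuth's multiplication method; adequate for moderate rates
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def gamma_mixed_poisson(shape, rate, n=20000):
    """Counts whose Poisson rate is itself Gamma(shape, scale=1/rate)
    distributed; marginally these are negative binomial, hence overdispersed."""
    return [poisson_draw(random.gammavariate(shape, 1.0 / rate)) for _ in range(n)]

counts = gamma_mixed_poisson(shape=2.0, rate=0.5)
m = statistics.mean(counts)       # about shape/rate = 4
v = statistics.variance(counts)   # about m * (1 + m/shape) = 12, well above m
```

The inverse Gaussian mixture the paper advocates would replace `gammavariate` with an inverse Gaussian sampler; the marginal count distribution then has a heavier tail.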
Hsieh, Meng-Juei; Luo, Ray
2011-08-01
We have implemented and evaluated a coarse-grained distributive method for finite-difference Poisson-Boltzmann (FDPB) calculations of large biomolecular systems. This method is based on the electrostatic focusing principle of decomposing a large fine-grid FDPB calculation into multiple independent FDPB calculations, each of which focuses on only a small and a specific portion (block) of the large fine grid. We first analyzed the impact of the focusing approximation upon the accuracy of the numerical reaction field energies and found that a reasonable relative accuracy of 10^(-3) can be achieved when the buffering space is set to be 16 grid points and the block dimension is set to be at least (1/6)^3 of the fine-grid dimension, as in the one-block focusing method. The impact upon efficiency of the use of buffering space to maintain enough accuracy was also studied. It was found that an "optimal" multi-block dimension exists for a given computer hardware setup, and this dimension is more or less independent of the solute geometries. A parallel version of the distributive focusing method was also implemented. Given the proper settings, the distributive method was able to achieve respectable parallel efficiency with tested biomolecular systems on a loosely connected computer cluster.
International Nuclear Information System (INIS)
Wright, T.
1983-01-01
Consider a stratified population with L strata, so that a Poisson random variable is associated with each stratum. The parameter associated with the hth stratum is θ_h, h = 1, 2, ..., L. Let ω_h be the known proportion of the population in the hth stratum, h = 1, 2, ..., L. We want to estimate the parameter θ = Σ_{h=1}^{L} ω_h θ_h. We assume that prior information is available on θ_h and that it can be expressed in terms of a gamma distribution with parameters α_h and β_h, h = 1, 2, ..., L. We also assume that the prior distributions are independent. Using a squared error loss function, a Bayes allocation of total sample size with a cost constraint is given. The Bayes estimate using the Bayes allocation is shown to have an adjusted mean square error which is strictly less than the adjusted mean square error of the classical estimate using the classical allocation.
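Because the gamma prior is conjugate to the Poisson likelihood, the Bayes estimate of θ under squared error loss is a weighted sum of posterior means. A minimal sketch (the strata proportions, priors, and data below are invented for illustration, not from the paper):

```python
# Each stratum: (omega_h, alpha_h, beta_h, n_h unit exposures, x_h total count)
strata = [
    (0.5, 2.0, 1.0, 10, 24),
    (0.3, 1.0, 2.0, 5, 4),
    (0.2, 3.0, 1.5, 8, 12),
]

def bayes_theta(strata):
    """Posterior-mean estimate of theta = sum_h omega_h * theta_h.

    With a Gamma(alpha_h, beta_h) prior (rate parametrization) and x_h
    events observed over n_h unit exposures, the posterior for theta_h is
    Gamma(alpha_h + x_h, beta_h + n_h); under squared error loss the Bayes
    estimate of theta_h is the posterior mean (alpha_h + x_h)/(beta_h + n_h).
    """
    return sum(w * (a + x) / (b + n) for w, a, b, n, x in strata)

theta_hat = bayes_theta(strata)
```

The paper's allocation question, how to split a total sample size across strata under a cost constraint, sits on top of exactly this posterior-mean calculation.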
Fetsch, Corinna
2011-09-13
Preparation of defined and functional polymers has been one of the most active topics in polymer science and drug delivery over the past decade. Research on (bio)degradable polymers is also gaining interest, in particular at the interface of these two disciplines. However, in the majority of cases, combining definition, functionality, and degradability is problematic. Here we present the preparation and characterization (MALDI-ToF MS, NMR, GPC) of nonionic hydrophilic, hydrophobic, and amphiphilic N-substituted polyglycines (polypeptoids), which are expected to be main-chain degradable and are able to disperse a hydrophobic model compound in aqueous media. Polymerization kinetics suggest that the polymerization is well controlled, with strictly linear pseudo first-order kinetic plots up to high monomer consumption. Moreover, molar mass distributions of the products are Poisson-type and the molar mass can be controlled by the monomer to initiator ratio. The presented polymer platform is nonionic, backbone degradable, and synthetically highly flexible and may therefore be valuable for a broad range of applications, in particular as a biomaterial. © 2011 American Chemical Society.
Egan, Raphael; Gibou, Frédéric
2017-10-01
We present a discretization method for the multidimensional Dirac distribution. We show its applicability in the context of integration problems, and for discretizing Dirac-distributed source terms in Poisson equations with constant or variable diffusion coefficients. The discretization is cell-based and can thus be applied in a straightforward fashion to Quadtree/Octree grids. The method produces second-order accurate results for integration. Superlinear convergence is observed when it is used to model Dirac-distributed source terms in Poisson equations: the observed order of convergence is 2 or slightly smaller. The method is consistent with the discretization of Dirac delta distribution for codimension one surfaces presented in [1,2]. We present Quadtree/Octree construction procedures to preserve convergence and present various numerical examples, including multi-scale problems that are intractable with uniform grids.
The Poisson aggregation process
International Nuclear Information System (INIS)
Eliazar, Iddo
2016-01-01
In this paper we introduce and analyze the Poisson Aggregation Process (PAP): a stochastic model in which a random collection of random balls is stacked over a general metric space. The scattering of the balls’ centers follows a general Poisson process over the metric space, and the balls’ radii are independent and identically distributed random variables governed by a general distribution. For each point of the metric space, the PAP counts the number of balls that are stacked over it. The PAP model is a highly versatile spatial counterpart of the temporal M/G/∞ model in queueing theory. The surface of the moon, scarred by circular meteor-impact craters, exemplifies the PAP model in two dimensions: the PAP counts the number of meteor-impacts that any given moon-surface point sustained. A comprehensive analysis of the PAP is presented, and the closed-form results established include: general statistics, stationary statistics, short-range and long-range dependencies, a Central Limit Theorem, an Extreme Limit Theorem, and fractality.
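A one-dimensional toy simulation of the PAP (our sketch, not the paper's analysis): interval "balls" with Poisson-scattered centers and i.i.d. radii. At an interior point, the count of covering balls is Poisson with mean rate·E[2R], the spatial analogue of the M/G/∞ occupancy noted in the abstract:

```python
import math
import random

random.seed(7)

def poisson_draw(lam):
    # Knuth's multiplication method; adequate for the rates used here
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def pap_count_at(point, rate, region=(0.0, 40.0)):
    """One PAP realisation in d = 1: ball centers form a homogeneous
    Poisson process on `region`; radii are i.i.d. Uniform(0, 1).
    Returns the number of balls stacked over `point`."""
    a, b = region
    n = poisson_draw(rate * (b - a))
    centers = [random.uniform(a, b) for _ in range(n)]
    # each ball gets its own independent radius draw
    return sum(1 for c in centers if abs(c - point) <= random.uniform(0.0, 1.0))

# Stationary mean count at an interior point: rate * E[2R] = 3.0 * 1.0
samples = [pap_count_at(20.0, rate=3.0) for _ in range(3000)]
mean_count = sum(samples) / len(samples)
```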
Process of random distributions : classification and prediction ...
African Journals Online (AJOL)
Dirichlet random distribution. The parameter of this process can be the distribution of any usual process, such as the (multifractional) Brownian motion. We also extend Kraft random distribution to the continuous time case. We give an application in ...
Sadler, J. M.; Goodall, J. L.; Morsy, M. M.; Spencer, K.
2018-04-01
Sea level rise has already caused more frequent and severe coastal flooding and this trend will likely continue. Flood prediction is an essential part of a coastal city's capacity to adapt to and mitigate this growing problem. Complex coastal urban hydrological systems, however, do not always lend themselves easily to physically based flood prediction approaches. This paper presents a method for using a data-driven approach to estimate flood severity in an urban coastal setting using crowd-sourced data, a non-traditional but growing data source, along with environmental observation data. Two data-driven models, Poisson regression and Random Forest regression, are trained to predict the number of flood reports per storm event as a proxy for flood severity, given extensive environmental data (i.e., rainfall, tide, groundwater table level, and wind conditions) as input. The method is demonstrated using data from Norfolk, Virginia USA from September 2010 to October 2016. Quality-controlled, crowd-sourced street flooding reports ranging from 1 to 159 per storm event for 45 storm events are used to train and evaluate the models. Random Forest performed better than Poisson regression at predicting the number of flood reports and had a lower false negative rate. From the Random Forest model, total cumulative rainfall was by far the most dominant input variable in predicting flood severity, followed by low tide and lower low tide. These methods serve as a first step toward using data-driven methods for spatially and temporally detailed coastal urban flood prediction.
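The Poisson-regression half of such a comparison can be sketched with a tiny Newton/IRLS fit; the data below are synthetic stand-ins (rainfall as the lone predictor, with invented coefficients), not the Norfolk dataset or the paper's model:

```python
import math
import random

random.seed(1)

def poisson_draw(lam):
    # Knuth's multiplication method; adequate for moderate rates
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Synthetic storms: flood reports ~ Poisson(exp(b0 + b1 * rainfall))
true_b0, true_b1 = 0.2, 0.5
rain = [random.uniform(0.0, 6.0) for _ in range(300)]
reports = [poisson_draw(math.exp(true_b0 + true_b1 * r)) for r in rain]

def fit_poisson_glm(x, y, iters=25):
    """Poisson regression with log link, fit by Newton's method (IRLS):
    solve the 2x2 system H * delta = gradient at each step."""
    b0, b1 = math.log(sum(y) / len(y) + 1e-9), 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            g0 += yi - mu
            g1 += (yi - mu) * xi
            h00 += mu
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0_hat, b1_hat = fit_poisson_glm(rain, reports)
```

The Random Forest half would typically use an ensemble library (e.g. scikit-learn's `RandomForestRegressor`) and is omitted from this stdlib-only sketch.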
DEFF Research Database (Denmark)
Jensen, J.L.
1993-01-01
Previous results on Edgeworth expansions for sums over a random field are extended to the case where the strong mixing coefficient depends not only on the distance between two sets of random variables, but also on the size of the two sets. The results are applied to the Poisson and the Strauss...
Garcia, Jane Bernadette Denise M.; Esguerra, Jose Perico H.
2017-08-01
An approximate but closed-form expression for a Poisson-like steady state wealth distribution in a kinetic model of gambling was formulated from a finite number of its moments, which were generated from a βa,b(x) exchange distribution. The obtained steady-state wealth distributions have tails which are qualitatively similar to those observed in actual wealth distributions.
A zero-inflated occupancy distribution: exact results and Poisson convergence
Directory of Open Access Journals (Sweden)
Ljuben Mutafchiev
2003-05-01
Full Text Available We introduce the generalized zero-inflated allocation scheme of placing n labeled balls into N labeled cells. We study the asymptotic behavior of the number of empty cells when (n,N) belongs to the “right” and “left” domain of attraction. An application to the estimation of characteristics of agreement among a set of raters which independently classify subjects into one of two categories is also indicated. The case when a large number of raters acts following the zero-inflated binomial law with small probability for positive diagnosis is treated using the zero-inflated Poisson approximation.
Estimation of Poisson noise in spatial domain
Švihlík, Jan; Fliegel, Karel; Vítek, Stanislav; Kukal, Jaromír.; Krbcová, Zuzana
2017-09-01
This paper deals with modeling of astronomical images in the spatial domain. We consider astronomical light images contaminated by dark current, which is modeled by a Poisson random process. The dark frame image maps the thermally generated charge of the CCD sensor. In this paper, we solve the problem of the addition of two Poisson random variables. First, a noise analysis of images obtained from the astronomical camera is performed. It allows estimating the parameters of the Poisson probability mass functions in every pixel of the acquired dark frame. Then the resulting distributions of the light image can be found. If the distributions of the light image pixels are identified, then the denoising algorithm can be applied. The performance of the Bayesian approach in the spatial domain is compared with the direct approach based on the method of moments and the dark frame subtraction.
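The identity behind dark frame subtraction, Poisson(λ_dark) + Poisson(λ_photon) = Poisson(λ_dark + λ_photon), can be checked with a quick method-of-moments sketch (illustrative per-pixel rates, not the paper's data):

```python
import math
import random

random.seed(3)

def poisson_draw(lam):
    # Knuth's multiplication method; adequate for moderate rates
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

l_dark, l_photon = 2.0, 5.0   # hypothetical per-pixel rates
n = 10000

# Dark frame: thermal charge only. Light frame: thermal plus photon
# charge, i.e. the sum of two independent Poisson variables per pixel.
dark_frame = [poisson_draw(l_dark) for _ in range(n)]
light_frame = [poisson_draw(l_dark) + poisson_draw(l_photon) for _ in range(n)]

# Method of moments: a Poisson mean equals its rate, so subtracting the
# dark-rate estimate recovers the photon rate (dark frame subtraction).
l_dark_hat = sum(dark_frame) / n
l_photon_hat = sum(light_frame) / n - l_dark_hat
```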
Spatial Nonhomogeneous Poisson Process in Corrosion Management
López De La Cruz, J.; Kuniewski, S.P.; Van Noortwijk, J.M.; Guriérrez, M.A.
2008-01-01
A method to test the assumption of nonhomogeneous Poisson point processes is implemented to analyze corrosion pit patterns. The method is calibrated with three artificially generated patterns and manages to accurately assess whether a pattern distribution is random, regular, or clustered. The
Is extrapair mating random? On the probability distribution of extrapair young in avian broods
Brommer, Jon E.; Korsten, Peter; Bouwman, Karen A.; Berg, Mathew L.; Komdeur, Jan
2007-01-01
A dichotomy in female extrapair copulation (EPC) behavior, with some females seeking EPC and others not, is inferred if the observed distribution of extrapair young (EPY) over broods differs from a random process on the level of individual offspring (binomial, hypergeometrical, or Poisson). A review
DEFF Research Database (Denmark)
Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag
This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional variance, implying an interpretation as an integer valued GARCH process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and a nonlinear function of past observations. As a particular example an exponential autoregressive Poisson model for time series is considered. Under geometric ergodicity the maximum likelihood estimators of the parameters are shown to be asymptotically Gaussian in the linear model. In addition we provide a consistent estimator of the asymptotic covariance, which is used in the simulations and the analysis of some …
International Nuclear Information System (INIS)
Bluszcz, Andrzej; Adamiec, Grzegorz; Heer, Aleksandra J.
2015-01-01
The current work focuses on the estimation of equivalent dose and its uncertainty using the single aliquot regenerative protocol in optically stimulated luminescence measurements. The authors show that the count numbers recorded with the use of photomultiplier tubes are well described by negative binomial distributions, different ones for background counts and photon induced counts. This fact is then exploited in pseudo-random count number generation and simulations of D_e determination assuming a saturating exponential growth. A least squares fitting procedure is applied using different types of weights to determine whether the obtained D_e's and their error estimates are unbiased and accurate. A weighting procedure is suggested that leads to almost unbiased D_e estimates. It is also shown that the assumption of a Poisson distribution in D_e estimation may lead to severe underestimation of the D_e error. - Highlights: • Detailed analysis of statistics of count numbers in luminescence readers. • Generation of realistically scattered pseudo-random numbers of counts in luminescence measurements. • A practical guide for stringent analysis of D_e values and error assessment.
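The final point, that a Poisson assumption can severely understate the error, is easy to reproduce: negative binomial counts (here drawn as a gamma-mixed Poisson, with invented parameters) have variance well above the mean, so a Poisson-based standard error of the mean count is too small:

```python
import math
import random
import statistics

random.seed(5)

def poisson_draw(lam):
    # Knuth's multiplication method; fine for the rates drawn below
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Negative binomial photomultiplier counts via the gamma-Poisson mixture
counts = [poisson_draw(random.gammavariate(4.0, 25.0)) for _ in range(5000)]

m = statistics.mean(counts)              # about 100
v = statistics.variance(counts)          # about m + m^2/4 = 2600, far above m
se_poisson = math.sqrt(m / len(counts))  # naive error if counts were Poisson
se_true = math.sqrt(v / len(counts))     # moment-based error

# se_true exceeds se_poisson several-fold: assuming Poisson statistics
# here underestimates the uncertainty in the mean count accordingly.
```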
DEFF Research Database (Denmark)
Fokianos, Konstantinos; Rahbek, Anders Christian; Tjøstheim, Dag
2009-01-01
In this article we consider geometric ergodicity and likelihood-based inference for linear and nonlinear Poisson autoregression. In the linear case, the conditional mean is linked linearly to its past values, as well as to the observed values of the Poisson process. This also applies to the conditional variance, making possible interpretation as an integer-valued generalized autoregressive conditional heteroscedasticity process. In a nonlinear conditional Poisson model, the conditional mean is a nonlinear function of its past values and past observations. As a particular example, we consider … ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen …
DEFF Research Database (Denmark)
Fokianos, Konstantinos; Rahbæk, Anders; Tjøstheim, Dag
This paper considers geometric ergodicity and likelihood based inference for linear and nonlinear Poisson autoregressions. In the linear case the conditional mean is linked linearly to its past values as well as the observed values of the Poisson process. This also applies to the conditional variance. … Ergodicity proceeds via Markov theory and irreducibility. Finding transparent conditions for proving ergodicity turns out to be a delicate problem in the original model formulation. This problem is circumvented by allowing a perturbation of the model. We show that as the perturbations can be chosen to be arbitrarily …
Poisson branching point processes
International Nuclear Information System (INIS)
Matsuo, K.; Teich, M.C.; Saleh, B.E.A.
1984-01-01
We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule–Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers.
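A count-only simulation of the cascade (our sketch of the stated mechanism, with illustrative parameters and time delays ignored): starting from a Poisson population, each event of a stage is carried forward and independently adds a Poisson number of new events, so the mean population grows like n₀(1 + μ)^stages:

```python
import math
import random

random.seed(11)

def poisson_draw(lam):
    # Knuth's multiplication method; adequate for small rates
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def branching_total(initial_rate, mu, stages):
    """Initial events ~ Poisson(initial_rate); at each stage every event
    is carried forward and independently adds Poisson(mu) new events."""
    n = poisson_draw(initial_rate)
    for _ in range(stages):
        n += sum(poisson_draw(mu) for _ in range(n))
    return n

# Mean population after s stages: initial_rate * (1 + mu)^s = 5 * 1.5^4
totals = [branching_total(5.0, 0.5, 4) for _ in range(2000)]
mean_total = sum(totals) / len(totals)
```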
Bouleau, Nicolas
2015-01-01
A simplified approach to Malliavin calculus adapted to Poisson random measures is developed and applied in this book. Called the “lent particle method” it is based on perturbation of the position of particles. Poisson random measures describe phenomena involving random jumps (for instance in mathematical finance) or the random distribution of particles (as in statistical physics). Thanks to the theory of Dirichlet forms, the authors develop a mathematical tool for a quite general class of random Poisson measures and significantly simplify computations of Malliavin matrices of Poisson functionals. The method gives rise to a new explicit calculus that they illustrate on various examples: it consists in adding a particle and then removing it after computing the gradient. Using this method, one can establish absolute continuity of Poisson functionals such as Lévy areas, solutions of SDEs driven by Poisson measure and, by iteration, obtain regularity of laws. The authors also give applications to error calcul...
Modeling Repeated Count Data : Some Extensions of the Rasch Poisson Counts Model
van Duijn, M.A.J.; Jansen, Margo
1995-01-01
We consider data that can be summarized as an N × K table of counts, for example, test data obtained by administering K tests to N subjects. The cell entries y_ij are assumed to be conditionally independent Poisson-distributed random variables, given the NK Poisson intensity parameters μ_ij. The …
Rusakov, Oleg; Laskin, Michael
2017-06-01
We consider a stochastic model of changes of prices in real estate markets. We suppose that in a book of prices the changes happen at the jump points of a Poisson process with random intensity, i.e., the moments of change follow a random process of the Cox process type. We calculate cumulative mathematical expectations and variances for the random intensity of this point process. In the case that the random intensity process is a martingale, the cumulative variance grows linearly. We statistically process a number of observations of real estate prices and accept the hypothesis of linear growth for the estimates of both the cumulative average and the cumulative variance, for both input and output prices recorded in the book of prices.
Coordination of Conditional Poisson Samples
Directory of Open Access Journals (Sweden)
Grafström Anton
2015-12-01
Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP sampling is a modification of the classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
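The permanent-random-number mechanism behind positive coordination is small enough to sketch; the code below implements the classical (random-size) Poisson design, not the paper's conditional fixed-size variant, with invented inclusion probabilities:

```python
import random

random.seed(2)

N = 1000
prn = [random.random() for _ in range(N)]   # one permanent number per unit

def poisson_sample(pi, prn):
    """Classical Poisson sampling: unit i is selected iff prn[i] < pi[i].
    The sample size is random; reusing the same permanent random numbers
    across occasions maximizes the overlap (positive coordination)."""
    return {i for i, (p, u) in enumerate(zip(pi, prn)) if u < p}

s1 = poisson_sample([0.10] * N, prn)        # first occasion
s2 = poisson_sample([0.12] * N, prn)        # later occasion, larger probs

# With shared PRNs and nested inclusion probabilities, the first sample
# is contained in the second, so the overlap is as large as possible.
```

Conditional Poisson sampling additionally conditions this design on a fixed sample size, e.g. via the list-sequential implementation the abstract mentions.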
Tuckwell, H C; Walsh, J B
1983-01-01
The linear cable equation with uniform Poisson or white noise input current is employed as a model for the voltage across the membrane of a one-dimensional nerve cylinder, which may sometimes represent the dendritic tree of a nerve cell. From the Green's function representation of the solutions, the mean, variance and covariance of the voltage are found. At large times, the voltage becomes asymptotically wide-sense stationary and we find the spectral density functions for various cable lengths and boundary conditions. For large frequencies the voltage exhibits "1/f^(3/2) noise". Using the Fourier series representation of the voltage we study the moments of the firing times for the diffusion model with numerical techniques, employing a simplified threshold criterion. We also simulate the solution of the stochastic cable equation by two different methods in order to estimate the moments and density of the firing time.
Non-uniform approximations for sums of discrete m-dependent random variables
Vellaisamy, P.; Cekanavicius, V.
2013-01-01
Non-uniform estimates are obtained for Poisson, compound Poisson, translated Poisson, negative binomial and binomial approximations to sums of m-dependent integer-valued random variables. Estimates for the Wasserstein metric also follow easily from our results. The results are then exemplified by the approximation of the Poisson binomial distribution, 2-runs and m-dependent (k1, k2)-events.
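For intuition, the quality of the Poisson approximation to a Poisson binomial sum can be computed exactly in small cases; the classical Le Cam bound TV ≤ Σ p_i² is easy to verify numerically (our illustration of the independent case, unrelated to the paper's sharper m-dependent bounds):

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def poisson_binomial_pmf(probs):
    """Exact PMF of a sum of independent Bernoulli(p_i), built by
    convolving in one trial at a time (dynamic programming)."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1.0 - p)
            nxt[k + 1] += q * p
        pmf = nxt
    return pmf

probs = [0.02 * (i % 5 + 1) for i in range(50)]   # small, unequal p_i
lam = sum(probs)                                  # matching Poisson mean: 3.0
pb = poisson_binomial_pmf(probs)
tv = 0.5 * sum(abs(pb[k] - poisson_pmf(k, lam)) for k in range(len(pb)))
le_cam = sum(p * p for p in probs)                # Le Cam's bound: 0.22

# tv comes out well below le_cam, confirming the (uniform) bound here
```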
Distribution of interference in random quantum algorithms
International Nuclear Information System (INIS)
Arnaud, Ludovic; Braun, Daniel
2007-01-01
We study the amount of interference in random quantum algorithms using a recently derived quantitative measure of interference. To this end we introduce two random circuit ensembles composed of random sequences of quantum gates from a universal set, mimicking quantum algorithms in the quantum circuit representation. We show numerically that, concerning the interference distribution and the level spacing distribution, these ensembles converge to the well-known circular unitary ensemble (CUE) for general complex quantum algorithms, and to the Haar orthogonal ensemble (HOE) for real quantum algorithms. We provide exact analytical formulas for the average and typical interference in the circular ensembles, and show that for sufficiently large numbers of qubits a random quantum algorithm uses with probability close to one an amount of interference approximately equal to the dimension of the Hilbert space. As a by-product, we offer a new way of constructing approximate random unitary operators from the Haar measures of CUE or HOE in a high dimensional Hilbert space using universal sets of quantum gates
A random sampling procedure for anisotropic distributions
International Nuclear Information System (INIS)
Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.
1975-01-01
A procedure is described for sampling the scattering angle of neutrons as per specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb and these elements are of interest in dosimetry and shielding. (author)
Perturbation-induced emergence of Poisson-like behavior in non-Poisson systems
International Nuclear Information System (INIS)
Akin, Osman C; Grigolini, Paolo; Paradisi, Paolo
2009-01-01
The response of a system with ON–OFF intermittency to an external harmonic perturbation is discussed. ON–OFF intermittency is described by means of a sequence of random events, i.e., the transitions from the ON to the OFF state and vice versa. The unperturbed waiting times (WTs) between two events are assumed to satisfy a renewal condition, i.e., the WTs are statistically independent random variables. The response of a renewal model with non-Poisson ON–OFF intermittency, associated with non-exponential WT distribution, is analyzed by looking at the changes induced in the WT statistical distribution by the harmonic perturbation. The scaling properties are also studied by means of diffusion entropy analysis. It is found that, in the range of fast and relatively strong perturbation, the non-Poisson system displays a Poisson-like behavior in both WT distribution and scaling. In particular, the histogram of perturbed WTs becomes a sequence of equally spaced peaks, with intensity decaying exponentially in time. Further, the diffusion entropy detects an ordinary scaling (related to normal diffusion) instead of the expected unperturbed anomalous scaling related to the inverse power-law decay. Thus, an analysis based on the WT histogram and/or on scaling methods has to be considered with some care when dealing with perturbed intermittent systems
Poisson integrators for Lie-Poisson structures on R3
International Nuclear Information System (INIS)
Song Lina
2011-01-01
This paper is concerned with the study of Poisson integrators. We are interested in Lie-Poisson systems on R^3. First, we focus on Poisson integrators for constant Poisson systems and the transformations used for transforming Lie-Poisson structures to constant Poisson structures. Then, we construct local Poisson integrators for Lie-Poisson systems on R^3. Finally, we present the results of numerical experiments for two Lie-Poisson systems and compare our Poisson integrators with other known methods.
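As a concrete illustration (our sketch, not the paper's construction): for the free rigid body, a standard Lie-Poisson system on R^3, splitting the Hamiltonian H = Σ x_i²/(2 I_i) into three exactly solvable rotations yields a Poisson integrator that preserves the Casimir ‖x‖² to machine precision. Sign conventions for the bracket vary; either choice gives an exact rotation per sub-flow.

```python
import math

def splitting_step(x, inertia, dt):
    """One step of a splitting Poisson integrator for the Euler equations
    x' = x × (I⁻¹ x). Each partial Hamiltonian H_i = x_i^2 / (2 I_i)
    generates an exact rotation about axis i (x_i is constant along its
    own sub-flow), and every rotation preserves the Casimir |x|^2."""
    x = list(x)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        theta = dt * x[i] / inertia[i]
        c, s = math.cos(theta), math.sin(theta)
        x[j], x[k] = c * x[j] + s * x[k], -s * x[j] + c * x[k]
    return x

inertia = (1.0, 2.0, 3.0)
x = [1.0, 0.5, -0.2]
casimir0 = sum(v * v for v in x)
for _ in range(10000):
    x = splitting_step(x, inertia, 0.01)
casimir_error = abs(sum(v * v for v in x) - casimir0)
# casimir_error stays at floating-point roundoff level over 10^4 steps
```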
Swanson, C.; Jandovitz, P.; Cohen, S. A.
2018-02-01
We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.
Directory of Open Access Journals (Sweden)
C. Swanson
2018-02-01
Full Text Available We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.
Distributions on unbounded moment spaces and random moment sequences
Dette, Holger; Nagel, Jan
2012-01-01
In this paper we define distributions on moment spaces corresponding to measures on the real line with an unbounded support. We identify these distributions as limiting distributions of random moment vectors defined on compact moment spaces and as distributions corresponding to random spectral measures associated with the Jacobi, Laguerre and Hermite ensemble from random matrix theory. For random vectors on the unbounded moment spaces we prove a central limit theorem where the centering vecto...
Amalia, Junita; Purhadi, Otok, Bambang Widjanarko
2017-11-01
The Poisson distribution is a discrete distribution for count data; its single parameter defines both the mean and the variance. Poisson regression therefore assumes the mean and variance are equal (equidispersion). However, some count data violate this assumption because the variance exceeds the mean (over-dispersion). Ignoring over-dispersion leads to underestimated standard errors and, in turn, to incorrect conclusions in statistical tests. Paired count data are correlated and follow a bivariate Poisson distribution. When over-dispersion is present, simple bivariate Poisson regression is not sufficient for modeling paired count data. The Bivariate Poisson Inverse Gaussian Regression (BPIGR) model is a mixed Poisson regression for modeling paired count data with over-dispersion. The BPIGR model produces a single global model for all locations. On the other hand, each location has different geographic, social, cultural, and economic conditions, so Geographically Weighted Regression (GWR) is needed. The weighting function of each location in GWR generates a different local model. The Geographically Weighted Bivariate Poisson Inverse Gaussian Regression (GWBPIGR) model is used to handle over-dispersion and to generate local models. Parameter estimation of the GWBPIGR model is obtained by the Maximum Likelihood Estimation (MLE) method, while hypothesis testing is carried out with the Maximum Likelihood Ratio Test (MLRT) method.
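The equidispersion assumption and its failure under a Poisson mixture can be illustrated with a small stdlib-only simulation. This is a hedged sketch, not the BPIGR model itself: it contrasts plain Poisson counts (dispersion index near 1) with counts whose rate is gamma-distributed, i.e. a negative-binomial mixture (dispersion index well above 1). All parameter values are illustrative.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^(-lam)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

rng = random.Random(0)
n = 20000
# Equidispersed: plain Poisson(4) counts, so variance is close to the mean
poisson_counts = [poisson_sample(4.0, rng) for _ in range(n)]
# Overdispersed: Poisson with a Gamma(2, 2) rate (a Poisson-gamma mixture,
# i.e. negative binomial), so the variance exceeds the mean
mixed_counts = [poisson_sample(rng.gammavariate(2.0, 2.0), rng) for _ in range(n)]

m1, v1 = mean_var(poisson_counts)
m2, v2 = mean_var(mixed_counts)
print(v1 / m1)  # dispersion index near 1
print(v2 / m2)  # dispersion index well above 1 (about 3 for these parameters)
```

Testing the dispersion index on fitted residuals is the usual first diagnostic before moving from Poisson to a mixed Poisson model such as the negative binomial or Poisson inverse Gaussian.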
A New Distribution-Random Limit Normal Distribution
Gong, Xiaolin; Yang, Shuzhen
2013-01-01
This paper introduces a new distribution to improve tail risk modeling. Based on the classical normal distribution, we define a new distribution by a series of heat equations. Then, we use market data to verify our model.
Different Random Distributions Research on Logistic-Based Sample Assumption
Directory of Open Access Journals (Sweden)
Jing Pan
2014-01-01
Full Text Available Logistic-based sample assumption is proposed in this paper, with a study of different random distributions within this system. It provides an assumption system for logistic-based samples, including the structure of the sample space. Moreover, the influence of different random distributions for the inputs has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal distribution, uniform distribution, and beta distribution) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under different random distributions. Thereafter, numerical analysis infers that the distribution of the outputs depends on that of the inputs to some extent, and that this assumption system is not an independent-increment process but is quasistationary.
The randomly renewed general item and the randomly inspected item with exponential life distribution
International Nuclear Information System (INIS)
Schneeweiss, W.G.
1979-01-01
For a randomly renewed item the probability distributions of the time to failure and of the duration of down time and the expectations of these random variables are determined. Moreover, it is shown that the same theory applies to randomly checked items with exponential probability distribution of life such as electronic items. The case of periodic renewals is treated as an example. (orig.) [de
Homogeneous Poisson structures
International Nuclear Information System (INIS)
Shafei Deh Abad, A.; Malek, F.
1993-09-01
We provide an algebraic definition for the Schouten product and give a decomposition for any homogeneous Poisson structure in any n-dimensional vector space. A large class of n-homogeneous Poisson structures in R^k is also characterized. (author). 4 refs
Poisson point processes imaging, tracking, and sensing
Streit, Roy L
2010-01-01
This overview of non-homogeneous and multidimensional Poisson point processes and their applications features mathematical tools and applications from emission- and transmission-computed tomography to multiple target tracking and distributed sensor detection.
Wild Fluctuations of Random Functions with the Pareto Distribution
Directory of Open Access Journals (Sweden)
Ming Li
2013-01-01
Full Text Available This paper provides a fluctuation analysis of random functions with the Pareto distribution. Via the introduced concept of wild fluctuations, we give an alternative way to distinguish these fluctuations from those of light-tailed distributions. Moreover, the suggested term wildest fluctuation may be used to distinguish random functions with infinite variance from those with finite variance.
Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds
Martínez-Torres, David; Miranda, Eva
2018-01-01
We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.
International Nuclear Information System (INIS)
Harwood, L.H.
1981-01-01
At MSU we have used the POISSON family of programs extensively for magnetic field calculations. In the presently super-saturated computer situation, reducing the run time for the program is imperative. Thus, a series of modifications have been made to POISSON to speed up convergence. Two of the modifications aim at having the first guess solution as close as possible to the final solution. The other two aim at increasing the convergence rate. In this discussion, a working knowledge of POISSON is assumed. The amount of new code and expected time saving for each modification is discussed
DISTRIBUTION OF BOREHOLES IN RESIDENTIAL LAYOUTS AND ...
African Journals Online (AJOL)
IPPIS NAU
2017-07-01
Jul 1, 2017 ... ... is independent and random. The Poisson distribution is given as: P(x) = e^(-λ) λ^x / x! (4), where x = number of points per quadrat, λ = probability of obtaining a point per quadrat, and e = the base of the natural logarithms = 2.7183. For ease of computation, the Poisson distribution equation (4) may be rewritten as: ...
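The rewritten form mentioned at the end of the excerpt is truncated, but equation (4) itself can be evaluated directly. A minimal stdlib sketch (λ = 3 is an illustrative value):

```python
import math

def poisson_pmf(x, lam):
    """Equation (4): P(x) = e^(-lam) * lam**x / x! for the Poisson distribution."""
    return math.exp(-lam) * lam ** x / math.factorial(x)

# The probabilities over the support sum to (numerically) 1
total = sum(poisson_pmf(x, 3.0) for x in range(50))
print(round(total, 6))  # -> 1.0
```

For quadrat counts, comparing the observed frequency of x points per quadrat against n · P(x) is the standard test of whether the point pattern is random.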
Distributed synchronization of coupled neural networks via randomly occurring control.
Tang, Yang; Wong, Wai Keung
2013-03-01
In this paper, we study the distributed synchronization and pinning distributed synchronization of stochastic coupled neural networks via randomly occurring control. Two Bernoulli stochastic variables are used to describe the occurrences of distributed adaptive control and updating law according to certain probabilities. Both distributed adaptive control and updating law for each vertex in a network depend on state information on each vertex's neighborhood. By constructing appropriate Lyapunov functions and employing stochastic analysis techniques, we prove that the distributed synchronization and the distributed pinning synchronization of stochastic complex networks can be achieved in mean square. Additionally, randomly occurring distributed control is compared with periodically intermittent control. It is revealed that, although randomly occurring control is an intermediate method among the three types of control in terms of control costs and convergence rates, it has fewer restrictions to implement and can be more easily applied in practice than periodically intermittent control.
On Poisson Nonlinear Transformations
Directory of Open Access Journals (Sweden)
Nasir Ganikhodjaev
2014-01-01
Full Text Available We construct the family of Poisson nonlinear transformations defined on the countable sample space of nonnegative integers and investigate their trajectory behavior. We have proved that these nonlinear transformations are regular.
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
http://www.ias.ac.in/article/fulltext/pram/079/03/0493-0499. Keywords. Sensor networks; random geographical networks; distributed averaging; consensus algorithms. Abstract. Random geographical networks are realistic models for wireless sensor networks which are used in many applications. Achieving average ...
A distributed randomized algorithm for relative localization in sensor networks
Ravazzi, Chiara; Frasca, Paolo; Ishii, Hideaki; Tempo, Roberto
This paper regards the relative localization problem in sensor networks. We propose for its solution a distributed randomized algorithm, which is based on input-driven consensus dynamics and features pairwise "gossip" communications and updates. Due to the randomness of the updates, the state of this
On Distributed Computation in Noisy Random Planar Networks
Kanoria, Y.; Manjunath, D.
2007-01-01
We consider distributed computation of functions of distributed data in random planar networks with noisy wireless links. We present a new algorithm for computation of the maximum value which is order optimal in the number of transmissions and in computation time. We also adapt the histogram computation algorithm of Ying et al. to make the histogram computation time optimal.
Continuous Time Random Walks with memory and financial distributions
Montero, Miquel; Masoliver, Jaume
2017-11-01
We study financial distributions from the perspective of Continuous Time Random Walks with memory. We review some of our previous developments and apply them to financial problems. We also present some new models with memory that can be useful in characterizing tendency effects which are inherent in most markets. We also briefly study the effect on return distributions of fractional behaviors in the distribution of pausing times between successive transactions.
Fully-distributed randomized cooperation in wireless sensor networks
Bader, Ahmed
2015-01-07
When marrying randomized distributed space-time coding (RDSTC) to geographical routing, new performance horizons can be created. In order to reach those horizons however, routing protocols must evolve to operate in a fully distributed fashion. In this letter, we expose a technique to construct a fully distributed geographical routing scheme in conjunction with RDSTC. We then demonstrate the performance gains of this novel scheme by comparing it to one of the prominent classical schemes.
Equilibrium stochastic dynamics of Poisson cluster ensembles
Directory of Open Access Journals (Sweden)
L.Bogachev
2008-06-01
Full Text Available The distribution μ of a Poisson cluster process in Χ = R^d (with n-point clusters) is studied via the projection of an auxiliary Poisson measure in the space of configurations in Χ^n, with the intensity measure being the convolution of the background intensity (of cluster centres) with the probability distribution of a generic cluster. We show that μ is quasi-invariant with respect to the group of compactly supported diffeomorphisms of Χ, and prove an integration by parts formula for μ. The corresponding equilibrium stochastic dynamics is then constructed using the method of Dirichlet forms.
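A realization of an n-point Poisson cluster process is easy to simulate: Poisson-distributed parent centres, each dressed with a fixed-size cluster of offsets. This is a minimal illustrative sketch (unit square, Gaussian cluster shape, and all parameter values are assumptions, not from the paper):

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method for a Poisson(lam) draw
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def poisson_cluster(rng, intensity=50.0, n_per_cluster=5, spread=0.02):
    """One realization of an n-point Poisson cluster process in the unit
    square: Poisson(intensity) parent centres, each with n_per_cluster
    Gaussian offspring; the point intensity is the convolution of the
    background intensity with the cluster distribution."""
    n_parents = poisson_sample(intensity, rng)
    points = []
    for _ in range(n_parents):
        cx, cy = rng.random(), rng.random()
        for _ in range(n_per_cluster):
            points.append((cx + rng.gauss(0.0, spread),
                           cy + rng.gauss(0.0, spread)))
    return points

rng = random.Random(1)
sizes = [len(poisson_cluster(rng)) for _ in range(400)]
print(sum(sizes) / len(sizes))  # mean count near intensity * n_per_cluster = 250
```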
Coordination number for random distribution of parallel fibres
Directory of Open Access Journals (Sweden)
Darnowski Piotr
2017-03-01
Full Text Available This paper presents the results of computer simulations carried out to determine coordination numbers for a system of parallel cylindrical fibres distributed at random in a circular matrix according to a two-dimensional pattern created by a random sequential addition scheme. Two different methods to calculate the coordination number were utilized and compared. The first method was based on integration of the pair distribution function. The second method was a modified sequential analysis. The calculations following from the ensemble-average approach revealed that these two methods give very close results for the same neighbourhood area, irrespective of the wide range of radii used for the calculation.
On the minimum of independent geometrically distributed random variables
Ciardo, Gianfranco; Leemis, Lawrence M.; Nicol, David
1994-01-01
The expectations E(X(sub 1)), E(Z(sub 1)), and E(Y(sub 1)) of the minimum of n independent geometric, modified geometric, or exponential random variables with matching expectations differ. We show how this is accounted for by stochastic variability and how E(X(sub 1))/E(Y(sub 1)) equals the expected number of ties at the minimum for the geometric random variables. We then introduce the 'shifted geometric distribution' and show that there is a unique value of the shift for which the individual shifted geometric and exponential random variables match expectations both individually and in their minima.
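The quantities in the abstract are easy to check by simulation. For n iid geometrics on {1, 2, ...} with success probability p, the minimum is itself geometric with success probability 1 - (1-p)^n, and ties at the minimum occur with positive probability. A hedged sketch with assumed parameters (n = 5, p = 0.3):

```python
import random

def geom_sample(p, rng):
    # Support {1, 2, ...}: number of Bernoulli(p) trials up to the first success
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(2)
n, p, trials = 5, 0.3, 40000

# Exact E[min]: the minimum of n iid geometrics is geometric with
# success probability 1 - (1-p)^n
exact = 1.0 / (1.0 - (1.0 - p) ** n)

sims, ties = [], []
for _ in range(trials):
    xs = [geom_sample(p, rng) for _ in range(n)]
    m = min(xs)
    sims.append(m)
    ties.append(xs.count(m))  # number of variables tied at the minimum

print(exact)               # about 1.202
print(sum(sims) / trials)  # simulated E[min], close to exact
print(sum(ties) / trials)  # expected number of ties at the minimum (> 1)
```

The expected tie count being strictly greater than 1 is exactly the discreteness effect that separates E(X(sub 1)) from its exponential counterpart E(Y(sub 1)).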
Limit distributions of random walks on stochastic matrices
Indian Academy of Sciences (India)
condition that μ_m(P) > 0 for some positive integer m (as opposed to just m = 1, considered in [1]), where μ_m is the ... PROPOSITION 3.2. Let f be as introduced before Proposition 3.1. The probability distribution λ is the image of π by the map b ↦ f(b). In other words, λ = ∑...
Avoiding negative populations in explicit Poisson tau-leaping.
Cao, Yang; Gillespie, Daniel T; Petzold, Linda R
2005-08-01
The explicit tau-leaping procedure attempts to speed up the stochastic simulation of a chemically reacting system by approximating the number of firings of each reaction channel during a chosen time increment tau as a Poisson random variable. Since the Poisson random variable can have arbitrarily large sample values, there is always the possibility that this procedure will cause one or more reaction channels to fire so many times during tau that the population of some reactant species will be driven negative. Two recent papers have shown how that unacceptable occurrence can be avoided by replacing the Poisson random variables with binomial random variables, whose values are naturally bounded. This paper describes a modified Poisson tau-leaping procedure that also avoids negative populations, but is easier to implement than the binomial procedure. The new Poisson procedure also introduces a second control parameter, whose value essentially dials the procedure from the original Poisson tau-leaping at one extreme to the exact stochastic simulation algorithm at the other; therefore, the modified Poisson procedure will generally be more accurate than the original Poisson procedure.
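The basic explicit tau-leap is compact enough to sketch for a single decay reaction A → ∅ with propensity a(x) = c·x. The version below uses a naive clamp to keep the population non-negative; the modified Poisson procedure described in the paper, with its second control parameter, bounds the firings more carefully and is not reproduced here. Parameter values are illustrative.

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's method for a Poisson(lam) draw
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def tau_leap_decay(x0, c, tau, t_end, rng):
    """Explicit tau-leaping for A -> 0 with propensity a(x) = c * x.
    Firings per leap are Poisson(a(x) * tau); since a Poisson draw is
    unbounded, a naive clamp keeps the population non-negative (the
    paper's modified procedure controls this more carefully)."""
    x, t = x0, 0.0
    while t < t_end and x > 0:
        k = poisson_sample(c * x * tau, rng)
        x = max(x - k, 0)  # avoid driving the population negative
        t += tau
    return x

rng = random.Random(3)
finals = [tau_leap_decay(1000, 1.0, 0.01, 1.0, rng) for _ in range(200)]
mean_final = sum(finals) / len(finals)
print(mean_final)  # near 1000 * exp(-1), i.e. roughly 368
```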
Randomized algorithms for tracking distributed count, frequencies, and ranks
DEFF Research Database (Denmark)
Zengfeng, Huang; Ke, Yi; Zhang, Qin
2012-01-01
We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the count-tracking problem, where there are k players, each holding a counter ni that gets incremented over time, and the goal is to track an ε-approximation of the...
Peer-Assisted Content Distribution with Random Linear Network Coding
DEFF Research Database (Denmark)
Hundebøll, Martin; Ledet-Pedersen, Jeppe; Sluyterman, Georg
2014-01-01
-to-peer system, which applies random linear network coding. We focus on an experimental evaluation of the performance on 36 real nodes. The evaluation shows that BRONCO outperforms regular HTTP transfers and, with an extremely simple protocol structure, performs equivalently to BitTorrent distribution...
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
guaranteed convergence with this simple algorithm. Keywords. Sensor networks; random geographical networks; distributed averaging; consensus algorithms. PACS Nos 89.75.Hc; 89.75.Fb; 89.20.Ff. 1. Introduction. Wireless sensor networks are increasingly used in many applications ranging from environmental to ...
Stimulated luminescence emission from localized recombination in randomly distributed defects
DEFF Research Database (Denmark)
Jain, Mayank; Guralnik, Benny; Andersen, Martin Thalbitzer
2012-01-01
408–15) which assumes a fixed d → a tunnelling probability for the entire crystal, our model is based on nearest-neighbour recombination within randomly distributed centres. Such a random distribution can occur through the entire volume or within the defect complexes of the dosimeter, and implies...... that the tunnelling probability varies with the donor–acceptor (d–a) separation distance. We first develop an ‘exact kinetic model’ that incorporates this variation in tunnelling probabilities, and evolves both in spatial as well as temporal domains. We then develop a simplified one-dimensional, semi-analytical model...... results in a highly asymmetric TL peak; this peak can be understood to derive from a continuum of several first-order TL peaks. Our model also shows an extended power law behaviour for OSL (or prompt luminescence), which is expected from localized recombination mechanisms in materials with random...
The Poisson model limits in NBA basketball: Complexity in team sports
Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa
2016-12-01
Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described using the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a major role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution includes most baskets in any game and in most game situations, but in close games in the last minute the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of the two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference; the complex dynamics emerge at the limits of this model.
Free energy distribution function of a random Ising ferromagnet
International Nuclear Information System (INIS)
Dotsenko, Victor; Klumov, Boris
2012-01-01
We study the free energy distribution function of a weakly disordered Ising ferromagnet in terms of the D-dimensional random temperature Ginzburg–Landau Hamiltonian. It is shown that besides the usual Gaussian 'body' this distribution function exhibits non-Gaussian tails both in the paramagnetic and in the ferromagnetic phases. Explicit asymptotic expressions for these tails are derived. It is demonstrated that the tails are strongly asymmetric: the left tail (for large negative values of the free energy) is much slower than the right one (for large positive values of the free energy). It is argued that at the critical point the free energy of the random Ising ferromagnet in dimensions D < 4 is described by a non-trivial universal distribution function which is non-self-averaging
Nonparametric Estimation of Distributions in Random Effects Models
Hart, Jeffrey D.
2011-01-01
We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.
Fractional Poisson Fields and Martingales
Aletti, Giacomo; Leonenko, Nikolai; Merzbach, Ely
2018-01-01
We present new properties for the Fractional Poisson process (FPP) and the Fractional Poisson field on the plane. A martingale characterization for FPPs is given. We extend this result to Fractional Poisson fields, obtaining some other characterizations. The fractional differential equations are studied. We consider a more general Mixed-Fractional Poisson process and show that this process is the stochastic solution of a system of fractional differential-difference equations. Finally, we give some simulations of the Fractional Poisson field on the plane.
Natural Poisson structures of nonlinear plasma dynamics
International Nuclear Information System (INIS)
Kaufman, A.N.
1982-06-01
Hamiltonian field theories, for models of nonlinear plasma dynamics, require a Poisson bracket structure for functionals of the field variables. These are presented, applied, and derived for several sets of field variables: coherent waves, incoherent waves, particle distributions, and multifluid electrodynamics. Parametric coupling of waves and plasma yields concise expressions for ponderomotive effects (in kinetic and fluid models) and for induced scattering
Bayesian regression of piecewise homogeneous Poisson processes
Directory of Open Access Journals (Sweden)
Diego Sevilla
2015-12-01
Full Text Available In this paper, a Bayesian method for piecewise regression is adapted to handle counting-process data distributed as Poisson. A numerical code in Mathematica is developed and tested by analyzing simulated data. The resulting method is valuable for detecting breaking points in the count rate of time series for Poisson processes. Received: 2 November 2015; Accepted: 27 November 2015; Edited by: R. Dickman; Reviewed by: M. Hutter, Australian National University, Canberra, Australia. DOI: http://dx.doi.org/10.4279/PIP.070018 Cite as: D J R Sevilla, Papers in Physics 7, 070018 (2015)
d-records in geometrically distributed random variables
Directory of Open Access Journals (Sweden)
Helmut Prodinger
2006-01-01
Full Text Available We study d-records in sequences generated by independent geometric random variables and derive explicit and asymptotic formulæ for the expectation and the variance. Informally speaking, a d-record occurs when one computes the d-largest values, and the variable maintaining it changes its value while the sequence is scanned from left to right. This is done for the “strict model,” but a “weak model” is also briefly investigated. We also discuss the limit q → 1 (q being the parameter of the geometric distribution), which leads to the model of random permutations.
Application of Negative Binomial Regression to Overcome Overdispersion in Poisson Regression (PENERAPAN REGRESI BINOMIAL NEGATIF UNTUK MENGATASI OVERDISPERSI PADA REGRESI POISSON)
Directory of Open Access Journals (Sweden)
PUTU SUSAN PRADAWATI
2013-09-01
Full Text Available Poisson regression is used to analyze count data that are Poisson distributed. Poisson regression analysis requires equidispersion, in which the mean of the response variable equals its variance. However, deviations occur in which the variance of the response variable is greater than the mean; this is called overdispersion. If overdispersion is present and Poisson regression analysis is used anyway, the standard errors will be underestimated. Negative Binomial regression can handle overdispersion because it contains a dispersion parameter. On simulated data exhibiting overdispersion under the Poisson regression model, the Negative Binomial regression model was found to perform better than the Poisson regression model.
A simple consensus algorithm for distributed averaging in random ...
Indian Academy of Sciences (India)
Distributed averaging in random geographical networks. It can be simply proved that for values of the uniform step size σ in the range (0, 1/kmax], with kmax being the maximum degree of the graph, the above system is asymptotically globally convergent to [17]: ∀i: lim_{k→∞} x_i(k) = α = (1/N) ∑_{i=1}^{N} x_i(0), (3) which is ...
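Equation (3) can be checked numerically. Below is a minimal sketch of the linear consensus iteration x_i ← x_i + σ ∑_{j∈N(i)} (x_j − x_i) on an assumed 8-node ring graph (kmax = 2), with σ = 1/3 chosen inside the stated range; the initial values are arbitrary.

```python
def ring_neighbors(i, n):
    # Neighbourhood of node i on an n-node ring graph
    return [(i - 1) % n, (i + 1) % n]

def consensus_average(x0, sigma, steps):
    """Synchronous consensus iteration x_i <- x_i + sigma * sum_j (x_j - x_i);
    for step sizes in (0, 1/kmax] the states converge to the average
    of the initial values (equation (3))."""
    n = len(x0)
    x = list(x0)
    for _ in range(steps):
        # Build the new state from the old one (synchronous update)
        x = [xi + sigma * sum(x[j] - xi for j in ring_neighbors(i, n))
             for i, xi in enumerate(x)]
    return x

x0 = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
alpha = sum(x0) / len(x0)  # the average every node should reach: 3.875
x = consensus_average(x0, sigma=1.0 / 3.0, steps=300)
print(alpha)
print(x[0])  # each x_i approaches alpha
```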
Graphene materials having randomly distributed two-dimensional structural defects
Kung, Harold H; Zhao, Xin; Hayner, Cary M; Kung, Mayfair C
2013-10-08
Graphene-based storage materials for high-power battery applications are provided. The storage materials are composed of vertical stacks of graphene sheets and have reduced resistance for Li ion transport. This reduced resistance is achieved by incorporating a random distribution of structural defects into the stacked graphene sheets, whereby the structural defects facilitate the diffusion of Li ions into the interior of the storage materials.
Random generation of RNA secondary structures according to native distributions
Directory of Open Access Journals (Sweden)
Nebel Markus E
2011-10-01
Full Text Available Abstract Background Random biological sequences are a topic of great interest in genome analysis since, according to a powerful paradigm, they represent the background noise from which the actual biological information must differentiate. Accordingly, the generation of random sequences has been investigated for a long time. Similarly, random objects of more complicated structure, like RNA molecules or proteins, are of interest. Results In this article, we present a new general framework for deriving algorithms for the non-uniform random generation of combinatorial objects according to the encoding and probability distribution implied by a stochastic context-free grammar. Briefly, the framework extends the well-known recursive method for (uniform) random generation and uses the popular framework of admissible specifications of combinatorial classes, introducing weighted combinatorial classes to allow for non-uniform generation by means of unranking. This framework is used to derive an algorithm for the generation of RNA secondary structures of a given fixed size. We address the random generation of these structures according to a realistic distribution obtained from real-life data by using a very detailed context-free grammar that models the class of RNA secondary structures by distinguishing between all known motifs in RNA structure. Compared to well-known sampling approaches used in several structure prediction tools (such as SFold), ours has two major advantages: firstly, after a preprocessing step in time O(n²) for the computation of all weighted class sizes needed, our approach can compute a set of m random secondary structures of a given structure size n in worst-case time complexity O(m·n·log(n)), while other algorithms typically have a runtime in O(m·n²). Secondly, our approach works with integer arithmetic only, which is faster and saves us from all the discomforting details of using floating point arithmetic with
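The recursive method with weighted classes can be illustrated on a toy dot-bracket grammar for secondary-structure skeletons: a structure is empty, starts with an unpaired base, or starts with a pair enclosing a smaller structure. The weights WU and WP are assumed toy values standing in for grammar-derived probabilities; real tools use far more detailed grammars (and, as the article notes, integer arithmetic), and biological constraints such as minimum hairpin size are omitted.

```python
import random
from functools import lru_cache

WU = 1.0  # weight of an unpaired base (assumed toy value)
WP = 2.0  # weight of a base pair (assumed toy value)

@lru_cache(maxsize=None)
def Z(n):
    """Weighted count (partition function) of dot-bracket structures of
    length n: the first base is unpaired, or pairs to close a substructure
    of inner size k, for k = 0 .. n-2."""
    if n == 0:
        return 1.0
    total = WU * Z(n - 1)
    for k in range(n - 1):
        total += WP * Z(k) * Z(n - 2 - k)
    return total

def sample(n, rng):
    """Recursive-method sampler: choose each production with probability
    proportional to its weighted contribution to Z(n)."""
    if n == 0:
        return ""
    u = rng.random() * Z(n)
    if u < WU * Z(n - 1):
        return "." + sample(n - 1, rng)
    u -= WU * Z(n - 1)
    for k in range(n - 1):
        w = WP * Z(k) * Z(n - 2 - k)
        if u < w:
            return "(" + sample(k, rng) + ")" + sample(n - 2 - k, rng)
        u -= w
    return "(" + sample(n - 2, rng) + ")"  # guard against float round-off

rng = random.Random(4)
s = sample(30, rng)
print(s)       # a random dot-bracket structure, e.g. "((..))...."-style
print(len(s))  # -> 30
```

The preprocessing cost here is the memoized table of Z values; the sampler itself then only walks down the recursion, which is the structure of the O(n²)-preprocessing / fast-per-sample split described in the abstract.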
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
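The normalizing-constant issue is easy to see concretely. The sketch below assumes Efron's (1986) parameterization of the double Poisson and approximates the constant by brute-force truncated summation rather than the paper's approximation method; with θ = 1 the distribution reduces exactly to the ordinary Poisson, and θ > 1 gives under-dispersion (variance below the mean).

```python
import math

def dp_log_kernel(y, mu, theta):
    """Unnormalized log-density of the double Poisson, assuming Efron's
    (1986) form: theta^(1/2) e^(-theta*mu) (e^(-y) y^y / y!) (e*mu/y)^(theta*y),
    with the y = 0 factors taken as 1."""
    t = 0.5 * math.log(theta) - theta * mu
    if y > 0:
        t += -y + y * math.log(y) - math.lgamma(y + 1)
        t += theta * y * (1.0 + math.log(mu) - math.log(y))
    return t

def dp_pmf(y, mu, theta, support=400):
    # Normalizing constant approximated by summing the kernel over a
    # truncated support (the closed form is not available)
    c = sum(math.exp(dp_log_kernel(k, mu, theta)) for k in range(support))
    return math.exp(dp_log_kernel(y, mu, theta)) / c

# Sanity check: with theta = 1 the double Poisson is the ordinary Poisson
p_dp = dp_pmf(3, mu=4.0, theta=1.0)
p_po = math.exp(-4.0) * 4.0 ** 3 / math.factorial(3)
print(p_dp, p_po)  # the two values agree
```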
Earthquake slip distribution estimation, using a random vector approach
Hooper, A. J.
2012-12-01
InSAR and/or GNSS data are routinely used to invert for the slip distribution on faults that rupture during earthquakes. Where exactly slip occurred has implications for future seismic hazard. However, in order to regularize the inversion, extra assumptions about the smoothness of the slip distribution are usually included, which do not have a physical basis. Here we propose a new approach for constraining the slip distribution based on a random vector model following a von Karman autocorrelation function. While this approach also has no physical basis, it does have empirical support from a stochastic analysis of seismic finite-source slip inversions (Mai and Beroza, 2002). We implement the random vector constraint in a Bayesian fashion and use a Markov chain Monte Carlo (MCMC) algorithm to derive the posterior joint probability distribution for each of the slipping patches. The von Karman function depends on two parameters: correlation length and Hurst number (related to fractal dimension). We use histograms from the stochastic analysis for these two parameters, which differ in along-strike and down-dip directions, to derive prior probability distributions, but allow them to vary during the inversion as hyperparameters. We also let the model parameters that control the fault geometry vary freely. In other inversion approaches these are usually fixed prior to inversion for distributed slip, due primarily to the difficulty in searching the resulting model space within a reasonable CPU time. To overcome this problem we have implemented a variation to the usual MCMC approach, in which the step size for each of the model parameters is regularly updated to optimize convergence time. We have applied our approach to a number of earthquakes and find that the results sometimes differ markedly to those incorporating the common Laplacian smoothing constraint. In addition, the fast run times mean that this approach could be routinely applied to data from the upcoming Sentinel
Vasta, M.; Di Paola, M.
In this paper an approximate explicit probability density function for the analysis of oscillations of a linear and geometrically nonlinear simply supported beam driven by random pulses is proposed. The adopted impulsive loading model is Poisson white noise, that is, a process having Dirac delta occurrences with random intensity, distributed in time according to Poisson's law. The response probability density function can be obtained by solving the related Kolmogorov-Feller (KF) integro-differential equation. An approximate solution, using the path integral method, is derived by transforming the KF equation into a first-order partial differential equation. The method of characteristics is then applied to obtain an explicit solution. Different levels of approximation, depending on the physical assumptions on the transition probability density function, are found, and the solution for the response density is obtained as a series expansion using convolution integrals.
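The Poisson white noise loading model can be illustrated by simulating a first-order linear system driven by Dirac impulses arriving at Poisson times (a stand-in assumption; the paper's beam model is more elaborate). The stationary mean E[x] = λE[Y]/a gives a quick analytic check.

```python
import numpy as np

rng = np.random.default_rng(11)

# Response of x' = -a*x + W_p(t), where W_p is Poisson white noise:
# Dirac impulses at Poisson(lam) times with i.i.d. amplitudes Y ~ Exp(1)
# (system and amplitude law are illustrative assumptions).
a, lam, T, dt = 1.0, 5.0, 2_000.0, 0.01
n = int(T / dt)
x, xs = 0.0, np.empty(n)
decay = np.exp(-a * dt)
for i in range(n):
    x *= decay                       # exact free decay over one step
    if rng.random() < lam * dt:      # an impulse lands in this step
        x += rng.exponential(1.0)    # jump by the random amplitude
    xs[i] = x

# Stationary mean of the response: E[x] = lam * E[Y] / a = 5.0 here.
mean_x = float(xs[n // 10:].mean())
```

The empirical stationary mean should sit near λE[Y]/a, which is what a moment expansion of the KF equation predicts for the linear case.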
Poisson statistics of PageRank probabilities of Twitter and Wikipedia networks
Frahm, Klaus M.; Shepelyansky, Dima L.
2014-04-01
We use the methods of quantum chaos and random matrix theory to analyze the statistical fluctuations of PageRank probabilities in directed networks. In this approach the effective energy levels are given by the logarithm of the PageRank probability at a given node. After the standard energy-level unfolding procedure we establish that the nearest-spacing distribution of PageRank probabilities is described by the Poisson law typical of integrable quantum systems. Our studies are done for the Twitter network and three networks of Wikipedia editions in English, French and German. We argue that, due to the absence of level repulsion, the PageRank order of nearby nodes can be easily interchanged. The obtained Poisson law implies that nearby PageRank probabilities fluctuate as independent random variables.
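The spacing analysis can be sketched on synthetic uncorrelated levels, for which the nearest-spacing law is exactly the Poisson (exponential) form P(s) = e^(-s). The uniform stand-in levels are an assumption; the actual analysis would use unfolded log-PageRank values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic uncorrelated "levels" (stand-in for unfolded log-PageRank values).
levels = np.sort(rng.uniform(0.0, 1.0, 20_000))

# Nearest-neighbour spacings, rescaled to unit mean ("unfolding").
s = np.diff(levels)
s = s / s.mean()

mean_s = float(s.mean())
frac_below_1 = float((s < 1.0).mean())  # Poisson law predicts 1 - e^{-1} ≈ 0.632
```

For a spectrum with level repulsion (Wigner-Dyson statistics) the fraction of spacings below the mean would be noticeably smaller, which is how the two cases are distinguished in practice.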
Directory of Open Access Journals (Sweden)
Jun Xie
2018-03-01
The increasing penetration of distributed energy resources in distribution systems has brought a number of network management and operational challenges; reactive power variation has been identified as one of the dominant effects. A growing variety of controllable devices with complex control requirements are integrated in distribution networks, and traditional centralized control struggles to handle these problems through a single central controller. To deal with non-linear multi-objective functions with discrete and continuous optimization variables, a random gradient-free algorithm is employed for the optimal operation of controllable devices in reactive power optimization. This paper presents a distributed reactive power optimization algorithm, based on the random gradient-free algorithm, that can obtain the globally optimal solution for a distribution network without requiring a central coordinator. By utilizing local measurements and local communications among capacitor banks and distributed generators (DGs), the proposed reactive power control strategy can realize overall network voltage optimization and power loss minimization simultaneously. Simulation studies on the modified IEEE 69-bus distribution system demonstrate the effectiveness and superiority of the proposed reactive power optimization strategy.
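A random gradient-free (two-point) update of the kind referred to above can be sketched on a stand-in quadratic objective. The objective, smoothing parameter and step size are assumptions for illustration; the actual method runs distributed across agents with local communication.

```python
import numpy as np

rng = np.random.default_rng(2)

def loss(x):
    # Stand-in for a network objective (e.g. losses + voltage deviation).
    return float(x @ x)

x = rng.normal(size=10)
mu, lr = 1e-4, 0.02      # smoothing parameter and step size (assumptions)
loss0 = loss(x)
for _ in range(500):
    u = rng.normal(size=10)                    # random search direction
    # Two-point gradient-free estimate: no analytic gradient needed.
    g = (loss(x + mu * u) - loss(x)) / mu * u
    x = x - lr * g
loss1 = loss(x)
```

Only objective evaluations are used, which is why such schemes suit devices that cannot form analytic gradients of the network objective.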
Poisson hierarchy of discrete strings
Energy Technology Data Exchange (ETDEWEB)
Ioannidou, Theodora, E-mail: ti3@auth.gr [Faculty of Civil Engineering, School of Engineering, Aristotle University of Thessaloniki, 54249, Thessaloniki (Greece); Niemi, Antti J., E-mail: Antti.Niemi@physics.uu.se [Department of Physics and Astronomy, Uppsala University, P.O. Box 803, S-75108, Uppsala (Sweden); Laboratoire de Mathematiques et Physique Theorique CNRS UMR 6083, Fédération Denis Poisson, Université de Tours, Parc de Grandmont, F37200, Tours (France); Department of Physics, Beijing Institute of Technology, Haidian District, Beijing 100081 (China)
2016-01-28
The Poisson geometry of a discrete string in three dimensional Euclidean space is investigated. For this the Frenet frames are converted into a spinorial representation, the discrete spinor Frenet equation is interpreted in terms of a transfer matrix formalism, and Poisson brackets are introduced in terms of the spinor components. The construction is then generalised, in a self-similar manner, into an infinite hierarchy of Poisson algebras. As an example, the classical Virasoro (Witt) algebra that determines reparametrisation diffeomorphism along a continuous string, is identified as a particular sub-algebra, in the hierarchy of the discrete string Poisson algebra. - Highlights: • Witt (classical Virasoro) algebra is derived in the case of discrete string. • Infinite dimensional hierarchy of Poisson bracket algebras is constructed for discrete strings. • Spinor representation of discrete Frenet equations is developed.
Anti-counterfeit nanoscale fingerprints based on randomly distributed nanowires
International Nuclear Information System (INIS)
Kim, Jangbae; Yun, Je Moon; Jung, Jongwook; Song, Hyunjoon; Kim, Jin-Baek; Ihee, Hyotcherl
2014-01-01
Counterfeiting is conducted in almost every industry, and the losses caused by it are growing as today’s world trade continues to increase. In an attempt to provide an efficient method to fight such counterfeiting, we herein demonstrate anti-counterfeit nanoscale fingerprints generated by randomly distributed nanowires. Specifically, we prepare silver nanowires coated with fluorescent dyes and cast them onto the surface of transparent PET film. The resulting non-repeatable patterns characterized by the random location of the nanowires and their fluorescent colors provide unique barcodes suitable for anti-counterfeit purposes. Counterfeiting such a fingerprint pattern is impractical and expensive; the cost of replicating it would be higher than the value of the typical target item being protected. Fingerprint patterns can be visually authenticated in a simple and straightforward manner by using an optical microscope. The concept of generating unique patterns by randomness is not limited to the materials shown in this paper and should be readily applicable to other types of materials.
Weight Distributions for Turbo Codes Using Random and Nonrandom Permutations
Dolinar, S.; Divsalar, D.
1995-04-01
This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of the constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as √(2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are "semirandom" permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
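A semirandom ("S-random") interleaver of the kind discussed above can be drawn greedily: any two indices within S of each other must map at least S apart. The block length and spread parameter below are illustrative assumptions.

```python
import random

def s_random_permutation(n, s, seed=0, max_restarts=200):
    """Greedy draw of an S-random interleaver: positions i, j with
    |i - j| <= s must satisfy |perm[i] - perm[j]| >= s."""
    rng = random.Random(seed)
    for _ in range(max_restarts):
        remaining = list(range(n))
        rng.shuffle(remaining)
        perm, ok = [], True
        for _ in range(n):
            for k, cand in enumerate(remaining):
                # Compare only with the last s placed values.
                if all(abs(cand - p) >= s for p in perm[-s:]):
                    perm.append(cand)
                    del remaining[k]
                    break
            else:
                ok = False   # dead end: restart with a fresh shuffle
                break
        if ok:
            return perm
    raise RuntimeError("no S-random permutation found; lower s")

perm = s_random_permutation(64, 4)
```

The rule of thumb is that the greedy draw succeeds when S is below about √(N/2), which holds for the N = 64, S = 4 example here.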
Smooth conditional distribution function and quantiles under random censorship.
Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine
2002-09-01
We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean squared error performance than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
The rising power of random distributed feedback fiber laser
Zhou, Pu; Ye, Jun; Xu, Jiangming; Zhang, Hanwei; Huang, Long; Wu, Jian; Xiao, Hu; Leng, Jinyong
2018-01-01
Random distributed feedback fiber lasers (RDFFLs) are now attracting more and more attention for their unique cavity-free, mode-free and structurally simple features and their broad application potential in many fields, such as long-distance sensing, speckle-free imaging, nonlinear frequency conversion, as well as new pump sources. In this talk, we review recent research progress on high-power RDFFLs. We have achieved (1) more than 400 W RDFFL output with a nearly Gaussian beam profile based on the crucial employment of a fiber-mismatching architecture; (2) high-power RDFFLs with specialized optical properties, including a high-power narrow-band RDFFL, a hundred-watt-level linearly polarized RDFFL, and a hundred-watt-level high-order RDFFL; (3) power enhancement of RDFFLs to a record kilowatt level, demonstrated with the aid of a fiber master oscillator power amplifier (MOPA) with different pump schemes.
Fitting and Analyzing Randomly Censored Geometric Extreme Exponential Distribution
Directory of Open Access Journals (Sweden)
Muhammad Yameen Danish
2016-06-01
The paper presents the Bayesian analysis of the two-parameter geometric extreme exponential distribution with randomly censored data. A continuous conjugate prior for the scale and shape parameters of the model does not exist; while computing the Bayes estimates, it is assumed that the scale and shape parameters have independent gamma priors. Since closed-form expressions for the Bayes estimators are not possible, we suggest Lindley's approximation to obtain the Bayes estimates. However, Bayesian credible intervals cannot be constructed with this method, so we propose Gibbs sampling to obtain the Bayes estimates and also to construct the Bayesian credible intervals. A Monte Carlo simulation study is carried out to observe the behavior of the Bayes estimators and to compare them with the maximum likelihood estimators. One real data analysis is performed for illustration.
Analysis on Poisson and Gamma spaces
Kondratiev, Yuri; Silva, Jose Luis; Streit, Ludwig; Us, Georgi
1999-01-01
We study the spaces of Poisson, compound Poisson and Gamma noises as special cases of a general approach to non-Gaussian white noise calculus; see [KSS96]. We use a known unitary isomorphism between Poisson and compound Poisson spaces in order to transport analytic structures from Poisson space to compound Poisson space. Finally we study a Fock-type structure of chaos decomposition on Gamma space.
Zero inflated Poisson and negative binomial regression models: application in education.
Salehi, Masoud; Roudbari, Masoud
2015-01-01
The numbers of failed courses and semesters of students are indicators of their performance. These counts have zero-inflated (ZI) distributions. Using ZI Poisson and negative binomial distributions we can model these count data to find the associated factors and estimate the parameters. This study aims to investigate the important factors related to the educational performance of students. This cross-sectional study was performed in 2008-2009 at Iran University of Medical Sciences (IUMS) with a population of almost 6000 students; 670 students were selected using stratified random sampling. The educational and demographic data were collected from the university records. The study design was approved at IUMS and the students' data were kept confidential. Descriptive statistics and ZI Poisson and negative binomial regressions were used to analyze the data, using STATA. For the number of failed semesters, in both the ZI Poisson and negative binomial models, the students' total average and the quota system had the largest roles. For the number of failed courses, the total average and being at the undergraduate or master's level had the largest effects in both models. In all models the total average has the strongest (inverse) effect on the number of failed courses or semesters; the next most important factors are the quota system for failed semesters and the undergraduate or master's level for failed courses.
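A minimal sketch of fitting a zero-inflated Poisson model by maximum likelihood on simulated data. The mixing proportion and Poisson mean below are assumptions, and a real analysis like the study's would add covariates to both model parts.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)

# Simulate ZI Poisson data: with prob pi the count is a structural zero,
# otherwise Poisson(lam).  The "true" values are assumptions.
pi_true, lam_true, n = 0.3, 2.5, 5000
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, n))

def neg_loglik(theta):
    # Reparametrize so the optimizer works on an unconstrained scale.
    pi = 1.0 / (1.0 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))       # P(Y = 0)
    ll_pos = np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = float(np.exp(res.x[1]))
```

The key feature is the mixture likelihood at zero, P(Y=0) = π + (1−π)e^(−λ), which separates structural zeros from Poisson zeros.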
Radio pulsar glitches as a state-dependent Poisson process
Fulgenzi, W.; Melatos, A.; Hughes, B. D.
2017-10-01
Gross-Pitaevskii simulations of vortex avalanches in a neutron star superfluid are limited computationally to ≲10^2 vortices and ≲10^2 avalanches, making it hard to study the long-term statistics of radio pulsar glitches in realistically sized systems. Here, an idealized, mean-field model of the observed Gross-Pitaevskii dynamics is presented, in which vortex unpinning is approximated as a state-dependent, compound Poisson process in a single random variable, the spatially averaged crust-superfluid lag. Both the lag-dependent Poisson rate and the conditional distribution of avalanche-driven lag decrements are inputs into the model, which is solved numerically (via Monte Carlo simulations) and analytically (via a master equation). The output statistics are controlled by two dimensionless free parameters: α, the glitch rate at a reference lag, multiplied by the critical lag for unpinning, divided by the spin-down rate; and β, the minimum fraction of the lag that can be restored by a glitch. The system evolves naturally to a self-regulated stationary state, whose properties are determined by α/α_c(β), where α_c(β) ≈ β^(-1/2) is a transition value. In the regime α ≳ α_c(β), one recovers qualitatively the power-law size and exponential waiting-time distributions observed in many radio pulsars and Gross-Pitaevskii simulations. For α ≪ α_c(β), the size and waiting-time distributions are both power-law-like, and a correlation emerges between size and waiting time until the next glitch, contrary to what is observed in most pulsars. Comparisons with astrophysical data are restricted by the small sample sizes available at present, with ≤35 events observed per pulsar.
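A toy Monte Carlo version of a state-dependent Poisson glitch model can be sketched as follows. The rate law λ(x) = αx/(1−x) and the uniform decrement fraction are assumptions chosen for illustration, not the paper's exact inputs; the point is only the structure (lag grows with spin-down, glitch rate depends on the lag, each glitch restores at least a fraction β of it).

```python
import numpy as np

rng = np.random.default_rng(4)

alpha, beta, dt = 5.0, 0.2, 1e-3
x, t, t_last = 0.0, 0.0, 0.0
sizes, waits = [], []
for _ in range(500_000):
    t += dt
    x = min(x + dt, 0.999)                 # spin-down drives the lag up
    lam = alpha * x / (1.0 - x)            # assumed state-dependent rate
    if rng.random() < min(lam * dt, 1.0):  # crude thinning step
        frac = rng.uniform(beta, 1.0)      # glitch restores >= beta of lag
        sizes.append(frac * x)
        waits.append(t - t_last)
        t_last = t
        x *= 1.0 - frac

n_glitches = len(sizes)
mean_size = float(np.mean(sizes))
```

Histogramming `sizes` and `waits` from such a run is how the size and waiting-time distributions described in the abstract are estimated numerically.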
A geometric multigrid Poisson solver for domains containing solid inclusions
Botto, Lorenzo
2013-03-01
A Cartesian grid method for the fast solution of the Poisson equation in three-dimensional domains with embedded solid inclusions is presented and its performance analyzed. The efficiency of the method, which assumes Neumann conditions at the immersed boundaries, is comparable to that of a multigrid method for regular domains. The method is light in terms of memory usage, and easily adaptable to parallel architectures. Tests with random and ordered arrays of solid inclusions, including spheres and ellipsoids, demonstrate smooth convergence of the residual for small separation between the inclusion surfaces. This feature is important, for instance, in simulations of nearly-touching finite-size particles. The implementation of the method, “MG-Inc”, is available online. Catalogue identifier: AEOE_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOE_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 19068 No. of bytes in distributed program, including test data, etc.: 215118 Distribution format: tar.gz Programming language: C++ (fully tested with GNU GCC compiler). Computer: Any machine supporting standard C++ compiler. Operating system: Any OS supporting standard C++ compiler. RAM: About 150MB for 128^3 resolution Classification: 4.3. Nature of problem: Poisson equation in domains containing inclusions; Neumann boundary conditions at immersed boundaries. Solution method: Geometric multigrid with finite-volume discretization. Restrictions: Stair-case representation of the immersed boundaries. Running time: Typically a fraction of a minute for 128^3 resolution.
A Note On the Estimation of the Poisson Parameter
Directory of Open Access Journals (Sweden)
S. S. Chitgopekar
1985-01-01
distribution when there are errors in observing the zeros and ones and obtains both the maximum likelihood and moments estimates of the Poisson mean and the error probabilities. It is interesting to note that either method fails to give unique estimates of these parameters unless the error probabilities are functionally related. However, it is equally interesting to observe that the estimate of the Poisson mean does not depend on the functional relationship between the error probabilities.
International Nuclear Information System (INIS)
Akıncı, Ümit
2012-01-01
The effect of the random magnetic field distribution on the phase diagrams and ground state magnetizations of the Ising nanowire has been investigated with effective field theory with correlations. A Gaussian distribution has been chosen as the random magnetic field distribution. The variation of the phase diagrams with the distribution parameters has been obtained and some interesting results have been found, such as the disappearance of the reentrant behavior and of the first-order transitions that appear in the case of discrete distributions. Also, for single and double Gaussian distributions, ground state magnetizations for different distribution parameters have been determined, which can be regarded as separate partially ordered phases of the system. - Highlights: ► We give the phase diagrams of the Ising nanowire under a continuous randomly distributed magnetic field. ► Ground state magnetization values are obtained. ► Different partially ordered phases are observed.
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Rackauskas, Alfredas
2010-01-01
In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'
DEFF Research Database (Denmark)
de Nijs, Robin
2015-01-01
by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties…
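The Poisson resampling operation itself is binomial thinning, which can be sketched directly: thinning a Poisson(λ) count with keep-probability 1/2 yields an exact Poisson(λ/2) count. The image size and mean count below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Full-count "image": independent Poisson pixels with mean 20 (assumption).
full = rng.poisson(20.0, size=(256, 256))

# Poisson resampling to half counts: each pixel ~ Binomial(n=full, p=0.5).
# Binomial thinning of a Poisson variable is again exactly Poisson.
half = rng.binomial(full, 0.5)

ratio_mean = float(half.mean() / full.mean())
var_over_mean = float(half.var() / half.mean())   # ≈ 1 for Poisson data
```

The variance-to-mean ratio staying at 1 is precisely the property that direct Gaussian or Poisson redrawing can violate, especially after rounding at low counts.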
Brown, Timothy C.
2004-01-01
When can one find a smooth transformation of a random variable so that the transformed random variable has a specified distribution? If the random variable is continuous, the solution is elementary; if it is discrete, it may be impossible. In this paper, a simple method is given of transforming a random variable in a smooth way to match a specified number of quantiles of an arbitrary distribution. The problem arose from a request for a simple way of transforming marks giv...
Double generalized linear compound poisson models to insurance claims data
DEFF Research Database (Denmark)
Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo
2017-01-01
This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution … in a finite sample framework. The simulation studies are also used to validate the fitting algorithms and check the computational implementation. Furthermore, we investigate the impact of an unsuitable choice for the response variable distribution on both mean and dispersion parameter estimates. We provide R implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurance.
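The response structure in question, a point mass at zero combined with a continuous positive part, can be sketched by sampling a compound Poisson-gamma (Tweedie-type) variable. The parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(12)

# Compound Poisson-gamma responses as in insurance claims data:
# N ~ Poisson(lam) claims, each claim Gamma(shape, scale); the total
# has an exact point mass at zero plus a continuous positive part.
lam, shape, scale, n = 1.2, 2.0, 0.5, 100_000
N = rng.poisson(lam, n)
total = np.array([rng.gamma(shape, scale, k).sum() if k else 0.0 for k in N])

prob_zero = float((total == 0.0).mean())   # should be ≈ exp(-lam)
mean_total = float(total.mean())           # should be ≈ lam * shape * scale
```

The exact zero mass P(total = 0) = e^(−λ) is what distinguishes this family from, say, a gamma GLM, and is the reason it fits claim totals with many policy-years of zero claims.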
Okawa, S; Endo, Y; Hoshi, Y; Yamada, Y
2012-01-01
A method to reduce noise in time-domain diffuse optical tomography (DOT) is proposed. The Poisson noise which contaminates time-resolved photon counting data is reduced by use of maximum a posteriori estimation. The noise-free data are modeled as a Markov random process, and the measured time-resolved data are assumed to be Poisson distributed random variables. The posterior probability of the occurrence of the noise-free data is formulated. By maximizing this probability, the noise-free data are estimated, and the Poisson noise is reduced as a result. The performance of the Poisson noise reduction is demonstrated in experiments on image reconstruction in time-domain DOT. In simulations, the proposed method reduces the relative error between the noise-free and noisy data to about one thirtieth, and the reconstructed DOT image is smoothed by the proposed noise reduction. The variance of the reconstructed absorption coefficients decreased by 22% in a phantom experiment. The quality of DOT, which can be applied to breast cancer screening etc., is improved by the proposed noise reduction.
Modified Regression Correlation Coefficient for Poisson Regression Model
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study considers measures of the predictive power of the generalized linear model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables, E(Y|X), for the Poisson regression model, where the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables and in the presence of multicollinearity among the independent variables. The results show that the proposed regression correlation coefficient outperforms the traditional one in terms of bias and root mean square error (RMSE).
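The quantity being modified, the correlation between Y and the fitted E(Y|X), can be sketched by fitting a Poisson GLM via IRLS and computing corr(Y, μ̂). The simulated coefficients are assumptions; this shows the traditional coefficient only, not the paper's modification.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated data: Poisson response with log link (coefficients assumed).
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 0.3])
y = rng.poisson(np.exp(X @ beta_true))

# Fit the Poisson GLM by IRLS (Fisher scoring, canonical log link).
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                              # working weights for log link
    z = X @ beta + (y - mu) / mu        # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

mu_hat = np.exp(X @ beta)
# Traditional regression correlation coefficient: corr(Y, E(Y|X)).
r = float(np.corrcoef(y, mu_hat)[0, 1])
```

With several correlated predictors, this plain corr(Y, μ̂) is where the bias the paper addresses appears.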
The Stochastic stability of a Logistic model with Poisson white noise
International Nuclear Information System (INIS)
Duan Dong-Hai; Xu Wei; Zhou Bing-Chang; Su Jun
2011-01-01
The stochastic stability of a logistic model subjected to the effect of a random natural environment, modeled as Poisson white noise process, is investigated. The properties of the stochastic response are discussed for calculating the Lyapunov exponent, which had proven to be the most useful diagnostic tool for the stability of dynamical systems. The generalised Itô differentiation formula is used to analyse the stochastic stability of the response. The results indicate that the stability of the response is related to the intensity and amplitude distribution of the environment noise and the growth rate of the species. (general)
Graded geometry and Poisson reduction
Cattaneo, A S; Zambon, M
2009-01-01
The main result of [2] extends the Marsden-Ratiu reduction theorem [4] in Poisson geometry, and is proven by means of graded geometry. In this note we provide the background material about graded geometry necessary for the proof in [2]. Further, we provide an alternative algebraic proof for the main result. ©2009 American Institute of Physics
Periodic Poisson Solver for Particle Tracking
International Nuclear Information System (INIS)
Dohlus, M.; Henning, C.
2015-05-01
A method is described to solve the Poisson problem for a three-dimensional source distribution that is periodic in one direction. Perpendicular to the direction of periodicity, a free-space (or open) boundary is realized. In beam physics, this approach makes it possible to calculate the space charge field of a continualized charged particle distribution with a periodic pattern. The method is based on a particle-mesh approach with an equidistant grid and fast convolution with a Green's function. The periodic approach uses only one period of the source distribution, but a periodic extension of the Green's function. The approach is numerically efficient and allows the investigation of periodic and pseudo-periodic structures with period lengths that are small compared to the source dimensions, for instance of laser-modulated beams or of the evolution of micro-bunch structures. Applications for laser-modulated beams are given.
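The Green's-function convolution idea can be sketched in a fully periodic one-dimensional setting (a simplification: the method above combines one periodic direction with open transverse boundaries). In Fourier space the periodic Poisson equation φ'' = −ρ reduces to φ_k = ρ_k / k².

```python
import numpy as np

# Solve phi'' = -rho on a periodic 1D domain by spectral convolution.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rho = np.cos(3.0 * x)                # zero-mean source (assumption)

# Angular wavenumbers for the 2*pi-periodic grid.
k = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi
rho_k = np.fft.fft(rho)
phi_k = np.zeros_like(rho_k)
nonzero = k != 0
phi_k[nonzero] = rho_k[nonzero] / k[nonzero] ** 2   # phi_k = rho_k / k^2
phi = np.fft.ifft(phi_k).real

phi_exact = np.cos(3.0 * x) / 9.0    # analytic solution for this source
err = float(np.max(np.abs(phi - phi_exact)))
```

The k = 0 mode is skipped because the periodic Poisson problem only determines φ up to a constant (and requires a zero-mean source), which mirrors the gauge freedom in the space charge potential.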
The Allan variance in the presence of a compound Poisson process modelling clock frequency jumps
Formichella, Valerio
2016-12-01
Atomic clocks can be affected by frequency jumps occurring at random times and with a random amplitude. The frequency jumps degrade the clock stability and this is captured by the Allan variance. In this work we assume that the random jumps can be modelled by a compound Poisson process, independent of the other stochastic and deterministic processes affecting the clock stability. Then, we derive the analytical expression of the Allan variance of a jumping clock. We find that the analytical Allan variance does not depend on the actual shape of the jumps amplitude distribution, but only on its first and second moments, and its final form is the same as for a clock with a random walk of frequency and a frequency drift. We conclude that the Allan variance cannot distinguish between a compound Poisson process and a Wiener process, hence it may not be sufficient to correctly identify the fundamental noise processes affecting a clock. The result is general and applicable to any oscillator, whose frequency is affected by a jump process with the described statistics.
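A quick numerical check of the stated result: simulate a clock whose frequency undergoes compound Poisson jumps and verify that the Allan variance grows like τ, the random-walk-FM signature the abstract describes. The jump rate, amplitude law and noise floor are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Fractional-frequency samples: frequency jumps at Poisson times with
# Gaussian amplitudes, plus a small white-frequency noise floor.
n, dt, rate, jump_sigma = 200_000, 1.0, 1e-2, 1e-11
jump_mask = rng.binomial(1, rate * dt, n)
y = np.cumsum(jump_mask * rng.normal(0.0, jump_sigma, n))  # compound Poisson
y = y + rng.normal(0.0, 1e-13, n)                          # white-FM floor

def avar(y, m):
    """Non-overlapping Allan variance at averaging time tau = m * dt."""
    k = len(y) // m * m
    ybar = y[:k].reshape(-1, m).mean(axis=1)
    return 0.5 * float(np.mean(np.diff(ybar) ** 2))

a_10, a_100 = avar(y, 10), avar(y, 100)
ratio = a_100 / a_10   # random-walk FM: AVAR grows roughly linearly in tau
```

As the abstract notes, only the first and second moments of the jump amplitudes enter; replacing the Gaussian amplitudes with any law of the same moments leaves the Allan variance statistically unchanged.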
Directory of Open Access Journals (Sweden)
Bogdan Gheorghe Munteanu
2013-01-01
Using stochastic approximations, this paper studies the convergence in distribution of the fractional parts of the sum of random variables to the truncated exponential distribution with parameter lambda. This is achieved by means of the Fourier-Stieltjes sequence (FSS) of the random variable.
Distribution of level spacing ratios using one-plus two-body random ...
Indian Academy of Sciences (India)
2015-02-03
Random matrix ensembles; embedded ensembles; EGOE(1+2); BEGOE(1+2); Poisson–GOE transition; spacing distribution. The paper studies the distribution of the ratio of consecutive level spacings using one-body plus two-body random matrix ensembles for finite interacting many-fermion and many-boson systems.
Larwin, Karen H.; Larwin, David A.
2011-01-01
Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…
On the Fractional Poisson Process and the Discretized Stable Subordinator
Directory of Open Access Journals (Sweden)
Rudolf Gorenflo
2015-08-01
We consider the renewal counting number process N = N(t) as a forward march over the non-negative integers with independent identically distributed waiting times. We embed the values of the counting numbers N in a “pseudo-spatial” non-negative half-line x ≥ 0 and observe that for physical time likewise we have t ≥ 0. Thus we apply the Laplace transform with respect to both variables x and t. Applying then a modification of the Montroll-Weiss-Cox formalism of continuous time random walk, we obtain the essential characteristics of a renewal process in the transform domain and, if we are lucky, also in the physical domain. The process t = t(N) of accumulation of waiting times is inverse to the counting number process; in honour of the Danish mathematician and telecommunication engineer A.K. Erlang we call it the Erlang process. It yields the probability of exactly n renewal events in the interval (0, t]. We apply our Laplace-Laplace formalism to the fractional Poisson process, whose waiting times are of Mittag-Leffler type, and to a renewal process whose waiting times are of Wright type. The process of Mittag-Leffler type includes as a limiting case the classical Poisson process; the process of Wright type represents the discretized stable subordinator, and a re-scaled version of it was used in our method of parametric subordination of time-space fractional diffusion processes. Properly rescaling the counting number process N(t) and the Erlang process t(N) yields as diffusion limits the inverse stable and the stable subordinator, respectively.
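The fractional Poisson process can be simulated by drawing Mittag-Leffler waiting times; a standard inversion formula for this (not taken from the paper above, and stated here as an assumption) reduces to the exponential law at β = 1, which provides a built-in sanity check against the classical Poisson limit.

```python
import numpy as np

rng = np.random.default_rng(8)

def mittag_leffler_waits(beta, size, rng):
    """Sample Mittag-Leffler waiting times via the standard inversion
    formula; beta = 1 recovers the exponential (Poisson) law."""
    u, v = rng.random(size), rng.random(size)
    if beta == 1.0:
        return -np.log(u)
    return (-np.log(u)
            * (np.sin(beta * np.pi) / np.tan(beta * np.pi * v)
               - np.cos(beta * np.pi)) ** (1.0 / beta))

# Fractional Poisson process: count renewals with ML waits up to time T.
beta, T = 0.9, 50.0
waits = mittag_leffler_waits(beta, 20_000, rng)
arrival = np.cumsum(waits)
n_T = int(np.searchsorted(arrival, T))   # N(T)

# Classical limit beta = 1: ordinary Poisson process with unit-mean waits.
exp_waits = mittag_leffler_waits(1.0, 20_000, rng)
mean_exp = float(exp_waits.mean())
```

For β < 1 the waiting times are heavy-tailed, so N(T) grows sublinearly in T, which is the hallmark of the fractional Poisson process described above.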
Asymptotic distribution of products of sums of independent random ...
Indian Academy of Sciences (India)
Yanling Wang et al. Recently, Gonchigdanzan and Rempała [3] discussed an almost sure limit theorem for the product of the partial sums of i.i.d. positive random variables as follows. Theorem GR. Let (X_n)_{n≥1} be a sequence of i.i.d. positive square integrable random variables with EX_1 = μ > 0, Var X_1 = σ². Denote γ = σ/μ ...
Reliability Analysis of a Cold Standby System with Imperfect Repair and under Poisson Shocks
Directory of Open Access Journals (Sweden)
Yutian Chen
2014-01-01
Full Text Available This paper considers the reliability analysis of a two-component cold standby system with a repairman who may take vacations. The system may fail due to intrinsic factors such as aging or deterioration, or external factors such as Poisson shocks. Shocks arrive according to a Poisson process with intensity λ>0. Whenever the magnitude of a shock is larger than the prespecified threshold of the operating component, the operating component fails. The paper assumes that the intrinsic lifetime and the repair time of the component follow an extended Poisson process, that the magnitude of the shock and the threshold of the operating component are nonnegative random variables, and that the vacation time of the repairman obeys a general continuous probability distribution. By using the vector Markov process theory, the supplementary variable method, the Laplace transform, and Tauberian theory, the paper derives a number of reliability indices: system availability, system reliability, the rate of occurrence of system failure, and the mean time to the first failure of the system. Finally, a numerical example is given to validate the derived indices.
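The shock-failure mechanism alone (not the full cold-standby model) is easy to simulate. In this sketch the shock magnitudes are assumed exponential purely for illustration; a shock kills the component when it exceeds the threshold:

```python
import random

def time_to_shock_failure(lam, mean_magnitude, threshold, horizon):
    """First time a shock from a Poisson(lam) stream exceeds the threshold.
    Shock magnitudes are taken exponential purely for illustration."""
    t = 0.0
    while t < horizon:
        t += random.expovariate(lam)                      # Poisson shock arrivals
        magnitude = random.expovariate(1.0 / mean_magnitude)
        if magnitude > threshold:
            return t
    return horizon

random.seed(2)
times = [time_to_shock_failure(1.0, 1.0, 2.0, 1000.0) for _ in range(2000)]
mean_ttf = sum(times) / len(times)
# Each shock is fatal with probability exp(-2) ~ 0.135, so the mean
# failure time should be near 1 / (1.0 * 0.135) ~ 7.4.
print(round(mean_ttf, 1))
```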
Zero Distribution of System with Unknown Random Variables Case Study: Avoiding Collision Path
Directory of Open Access Journals (Sweden)
Parman Setyamartana
2014-07-01
Full Text Available This paper presents a stochastic analysis for finding feasible trajectories of a robotic arm moving among obstacles. The unknown variables are the coefficients of the polynomial joint angles that achieve collision-free motion; ãk is the matrix consisting of these unknown feasible polynomial coefficients. The pattern of feasible polynomials in the obstacle environment appears random, so this paper proposes to model these random values using a random polynomial with unknown variables as coefficients. The behaviour of the system is obtained from the zero distribution, the characteristic of such a random polynomial. Results show that the pattern of collision-avoiding random polynomials can be constructed from the zero distribution; the zero distribution acts as a building block of the system, with obstacles as the uncertainty factor. Given a scale factor k within a known range, the random coefficient pattern can be predicted.
Drikvandi, Reza
2017-06-01
Nonlinear mixed-effects models are frequently used for pharmacokinetic data analysis, and they account for inter-subject variability in pharmacokinetic parameters by incorporating subject-specific random effects into the model. The random effects are often assumed to follow a (multivariate) normal distribution. However, many articles have shown that misspecifying the random-effects distribution can introduce bias in the estimates of parameters and affect inferences about the random effects themselves, such as estimation of the inter-subject variability. Because random effects are unobservable latent variables, it is difficult to assess their distribution. In a recent paper we developed a diagnostic tool based on the so-called gradient function to assess the random-effects distribution in mixed models. There we evaluated the gradient function for generalized linear mixed models and in the presence of a single random effect. However, assessing the random-effects distribution in nonlinear mixed-effects models is more challenging, especially when multiple random effects are present, and therefore the results from linear and generalized linear mixed models may not be valid for such nonlinear models. In this paper, we further investigate the gradient function and evaluate its performance for such nonlinear mixed-effects models which are common in pharmacokinetics and pharmacodynamics. We use simulations as well as real data from an intensive pharmacokinetic study to illustrate the proposed diagnostic tool.
Thinning spatial point processes into Poisson processes
DEFF Research Database (Denmark)
Møller, Jesper; Schoenberg, Frederic Paik
2010-01-01
In this paper we describe methods for randomly thinning certain classes of spatial point processes. In the case of a Markov point process, the proposed method involves a dependent thinning of a spatial birth-and-death process, where clans of ancestors associated with the original points are identified, and where we simulate backwards and forwards in order to obtain the thinned process. In the case of a Cox process, a simple independent thinning technique is proposed. In both cases, the thinning results in a Poisson process if and only if the true Papangelou conditional intensity is used, and, thus, can be used as a graphical exploratory tool for inspecting the goodness-of-fit of a spatial point process model. Several examples, including clustered and inhibitive point processes, are considered.
Limit distributions of random walks on stochastic matrices
Indian Academy of Sciences (India)
all 2 × 2 rank one stochastic matrices. We show that S(λ), the support of λ, consists of the end points of a countable number of disjoint open intervals and we have calculated the λ-measure of each such point. To the best of our knowledge, these results are new. Keywords. Random walk; stochastic matrices; limiting measure.
Binomial vs poisson statistics in radiation studies
International Nuclear Information System (INIS)
Foster, J.; Kouris, K.; Spyrou, N.M.; Matthews, I.P.; Welsh National School of Medicine, Cardiff
1983-01-01
The processes of radioactive decay, decay and growth of radioactive species in a radioactive chain, prompt emission(s) from nuclear reactions, conventional activation and cyclic activation are discussed with respect to their underlying statistical density function. By considering the transformation(s) that each nucleus may undergo, it is shown that all these processes are fundamentally binomial. Formally, when the number of experiments N is large and the probability of success p is close to zero, the binomial is closely approximated by the Poisson density function. In radiation and nuclear physics, N is always large: each experiment can be conceived of as the observation of the fate of each of the N nuclei initially present. Whether p, the probability that a given nucleus undergoes a prescribed transformation, is close to zero depends on the process and nuclide(s) concerned. Hence, although a binomial description is always valid, the Poisson approximation is not always adequate. Therefore further clarification is provided as to when the binomial distribution must be used in the statistical treatment of detected events. (orig.)
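The adequacy of the Poisson approximation can be checked numerically. A small sketch (parameters illustrative) compares the two densities by total variation distance, using recursive pmf updates so the huge factorials never appear explicitly:

```python
import math

def total_variation(n, p):
    """TV distance between Binomial(n, p) and Poisson(np), computed with
    recursive pmf updates to avoid huge factorials."""
    mu = n * p
    b = (1.0 - p) ** n          # binomial pmf at k = 0
    q = math.exp(-mu)           # Poisson pmf at k = 0
    tv = abs(b - q)
    for k in range(1, n + 1):
        b *= (n - k + 1) / k * p / (1.0 - p)   # binomial pmf ratio p(k)/p(k-1)
        q *= mu / k                            # Poisson pmf ratio
        tv += abs(b - q)
    return 0.5 * tv

# Large N with small p: Poisson is excellent.  Moderate p: it is not.
d_good = total_variation(10000, 0.0001)
d_bad = total_variation(100, 0.3)
print(d_good, d_bad)
```

Le Cam's inequality bounds the first distance by np² = 10⁻⁴, in line with the computed value; the second case is visibly non-Poisson because the binomial variance npq is well below the mean.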
Limit distributions of random walks on stochastic matrices
Indian Academy of Sciences (India)
Problems similar to Ann. Prob. 22 (1994) 424–430 and J. Appl. Prob. 23 (1986) 1019–1024 are considered here. The limit distribution of the sequence X_n X_{n−1} … X_1, where (X_n)_{n≥1} is a sequence of i.i.d. 2 × 2 stochastic matrices with each X_n distributed as , is identified here in a number of discrete situations.
International Nuclear Information System (INIS)
Valor, Alma; Alfonso, Lester; Caleyo, Francisco; Vidal, Julio; Perez-Baruch, Eloy; Hallen, José M.
2015-01-01
Highlights: • Observed external-corrosion defects in underground pipelines revealed a tendency to cluster. • The Poisson distribution is unable to fit extensive count data for this type of defect. • In contrast, the negative binomial distribution provides a suitable count model for them. • Two spatial stochastic processes lead to the negative binomial distribution for defect counts. • They are the Gamma-Poisson mixed process and the compound Poisson process. • A Roger's process also arises as a plausible temporal stochastic process leading to corrosion defect clustering and to negative binomially distributed defect counts. - Abstract: The spatial distribution of external corrosion defects in buried pipelines is usually described as a Poisson process, which leads to corrosion defects being randomly distributed along the pipeline. However, in real operating conditions, the spatial distribution of defects departs considerably from Poisson statistics due to the aggregation of defects in groups or clusters. In this work, the statistical analysis of real corrosion data from underground pipelines operating in southern Mexico leads to the conclusion that the negative binomial distribution provides a better description for defect counts. The origin of this distribution from several processes is discussed. The analysed processes are: the mixed Gamma-Poisson, compound Poisson and Roger's processes. The physical reasons behind them are discussed for the specific case of soil corrosion.
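The Gamma-Poisson mixture mentioned above is easy to demonstrate by simulation: drawing each local defect intensity from a Gamma distribution and then a Poisson count with that intensity produces negative binomially distributed, overdispersed counts (the parameters below are illustrative, not fitted to the pipeline data):

```python
import random
import math

def poisson_sample(lam):
    """Knuth's method; adequate for moderate lam."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < limit:
            return k
        k += 1

random.seed(3)
shape, scale = 2.0, 2.0            # Gamma-distributed local intensity, mean 4
counts = [poisson_sample(random.gammavariate(shape, scale)) for _ in range(20000)]
m = sum(counts) / len(counts)
v = sum((c - m) ** 2 for c in counts) / len(counts)
# Negative binomial: mean = shape*scale = 4, variance = mean + shape*scale^2 = 12,
# so the dispersion index v/m should be near 3 (a pure Poisson would give 1).
print(round(m, 2), round(v, 2), round(v / m, 2))
```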
A method for generating skewed random numbers using two overlapping uniform distributions
International Nuclear Information System (INIS)
Ermak, D.L.; Nasstrom, J.S.
1995-02-01
The objective of this work was to implement and evaluate a method for generating skewed random numbers using a combination of uniform random numbers. The method provides a simple and accurate way of generating skewed random numbers from the specified first three moments without an a priori specification of the probability density function. We describe the procedure for generating skewed random numbers from uniform random numbers, and show that it accurately produces random numbers with the desired first three moments over a range of skewness values. We also show that in the limit of zero skewness, the distribution of random numbers is an accurate approximation to the Gaussian probability density function. Future work will use this method to provide skewed random numbers for a Langevin equation model for diffusion in skewed turbulence.
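The report's exact moment-matching formulas are not reproduced here, but the underlying idea is simple to sketch: a weighted mixture of two overlapping uniform distributions of different widths produces a skewed variate whose first three moments can be computed and tuned (the parameters below are illustrative choices, not the report's solution):

```python
import random

def skewed_sample(n, w=0.7, lo1=0.0, hi1=1.0, lo2=0.0, hi2=3.0):
    """Weighted mixture of two overlapping uniforms; the wider component
    creates a long right tail (parameters illustrative)."""
    return [random.uniform(lo1, hi1) if random.random() < w
            else random.uniform(lo2, hi2) for _ in range(n)]

random.seed(4)
xs = skewed_sample(100000)
m = sum(xs) / len(xs)
m2 = sum((x - m) ** 2 for x in xs) / len(xs)
m3 = sum((x - m) ** 3 for x in xs) / len(xs)
skew = m3 / m2 ** 1.5
print(round(m, 2), round(skew, 2))  # mean near 0.8, clearly positive skew
```

For these parameters the exact moments work out to mean 0.8 and skewness about 1.45; shrinking the second component toward the first drives the skewness to zero and the mixture toward a symmetric (near-uniform) shape.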
Poisson-Fermi Formulation of Nonlocal Electrostatics in Electrolyte Solutions
Directory of Open Access Journals (Sweden)
Liu Jinn-Liang
2017-10-01
Full Text Available We present a nonlocal electrostatic formulation of nonuniform ions and water molecules with interstitial voids that uses a Fermi-like distribution to account for steric and correlation effects in electrolyte solutions. The formulation is based on the volume exclusion of hard spheres leading to a steric potential and Maxwell's displacement field with Yukawa-type interactions resulting in a nonlocal electric potential. The classical Poisson-Boltzmann model fails to describe steric and correlation effects important in a variety of chemical and biological systems, especially in high field or large concentration conditions found in and near binding sites, ion channels, and electrodes. Steric effects and correlations are apparent when we compare nonlocal Poisson-Fermi results to Poisson-Boltzmann calculations in the electric double layer and to experimental measurements on the selectivity of potassium channels for K+ over Na+.
A new test for random events of an exponential distribution
International Nuclear Information System (INIS)
Schmidt, K.H.
2000-03-01
A new statistical test procedure is described to evaluate whether a set of radioactive-decay data is compatible with the assumption that these data originate from the decay of a single radioactive species. Criteria to detect contributions from other radioactive species and from different event sources are given. The test is applicable to samples of exponential distributions with two or more events. (orig.)
Electrospun dye-doped fiber networks: lasing emission from randomly distributed cavities
DEFF Research Database (Denmark)
Krammer, Sarah; Vannahme, Christoph; Smith, Cameron
2015-01-01
Dye-doped polymer fiber networks fabricated by electrospinning exhibit comb-like laser emission. Using spatially resolved spectroscopy, we identify randomly distributed ring resonators as being responsible for the lasing emission. Numerical simulations confirm this result quantitatively.
Sampling Random Bioinformatics Puzzles using Adaptive Probability Distributions
DEFF Research Database (Denmark)
Have, Christian Theil; Appel, Emil Vincent; Bork-Jensen, Jette
2016-01-01
We present a probabilistic logic program to generate an educational puzzle that introduces the basic principles of next generation sequencing, gene finding and the translation of genes to proteins following the central dogma in biology. In the puzzle, a secret "protein word" must be found by asse...... and the randomness of the generation process, sampling may fail to generate a satisfactory puzzle. To avoid failure we employ a strategy using adaptive probabilities which change in response to previous steps of the generative process, thus minimizing the risk of failure.
Dynamic Response of Non-Linear Inelastic Systems to Poisson-Driven Stochastic Excitations
DEFF Research Database (Denmark)
Nielsen, Søren R. K.; Iwankiewicz, R.
A single-degree-of-freedom inelastic system subject to a stochastic excitation in the form of a Poisson-distributed train of impulses is considered. The state variables of the system form a non-diffusive, Poisson-driven Markov process. Two approximate analytical techniques are developed: modification...
A Predictive Analysis of the Department of Defense Distribution System Utilizing Random Forests
2016-06-01
A Predictive Analysis of the Department of Defense Distribution System Utilizing Random Forests, by Amber G. Coleman, June 2016, Naval Postgraduate School (Thesis Advisor: Samuel E...). The thesis builds a linear regression, regression tree and random forest model for each sub-segment and finds that the weekday and month in which requisitions begin the…
Poisson process approximation for sequence repeats, and sequencing by hybridization.
Arratia, R; Martin, D; Reinert, G; Waterman, M S
1996-01-01
Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all left-most long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds.
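A quick numeric sanity check of the Poisson picture for repeats (an illustrative sketch, not the paper's Chen-Stein analysis): in an i.i.d. uniform four-letter sequence, the number of matching l-tuple pairs is approximately Poisson with mean about (n choose 2)/4^l:

```python
import random

random.seed(5)
n, l = 10000, 12
seq = ''.join(random.choice('ACGT') for _ in range(n))
occurrences = {}
for i in range(n - l + 1):
    w = seq[i:i + l]
    occurrences[w] = occurrences.get(w, 0) + 1
# Number of matching l-tuple pairs across the sequence
pairs = sum(c * (c - 1) // 2 for c in occurrences.values())
expected = (n * (n - 1) / 2) / 4 ** l   # Poisson mean, ignoring overlap effects
print(pairs, round(expected, 2))
```

With these sizes the expected count is about 3, so observing more than a dozen matching pairs would be extremely unlikely under the Poisson approximation.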
On the generation of log-Levy distributions and extreme randomness
International Nuclear Information System (INIS)
Eliazar, Iddo; Klafter, Joseph
2011-01-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness. (paper)
Parasites et parasitoses des poissons
De Kinkelin, Pierre; Morand, Marc; Hedrick, Ronald; Michel, Christian
2014-01-01
This richly illustrated book offers a representative panorama of the parasitic agents encountered in fish. Drawing on the new concepts of phylogenetic classification, it emphasizes the biological properties, epidemiology and clinical consequences of the groups of organisms involved, in light of the advances made possible by the new tools of biology. It is intended for a wide audience, ranging from the world of aquaculture to those of health...
Dualizing the Poisson summation formula.
Duffin, R J; Weinberger, H F
1991-01-01
If f(x) and g(x) are a Fourier cosine transform pair, then the Poisson summation formula can be written as 2∑_{n=1}^∞ g(n) + g(0) = 2∑_{n=1}^∞ f(n) + f(0). The concepts of linear transformation theory lead to the following dual of this classical relation. Let φ(x) and γ(x) = φ(1/x)/x have absolutely convergent integrals over the positive real line. Let F(x) = ∑_{n=1}^∞ φ(n/x)/x − ∫_0^∞ φ(t) dt and G(x) = ∑_{n=1}^∞ γ(n/x)/x − ∫_0^∞ γ(t) dt. Then F(x) and G(x) are a Fourier cosine transform pair. We term F(x) the "discrepancy" of φ because it is the error in estimating the integral of φ by its Riemann sum with the constant mesh spacing 1/x. PMID:11607208
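A concrete instance of the classical Poisson summation formula, applied to the self-dual Gaussian, is the Jacobi theta functional equation θ(t) = θ(1/t)/√t, which is easy to verify numerically:

```python
import math

def theta(t, terms=200):
    """Jacobi theta: sum over all integers n of exp(-pi * n^2 * t)."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms))

# Poisson summation applied to the (self-dual) Gaussian yields the
# functional equation theta(t) = theta(1/t) / sqrt(t).
for t in (0.5, 1.0, 2.0):
    lhs, rhs = theta(t), theta(1.0 / t) / math.sqrt(t)
    assert abs(lhs - rhs) < 1e-12
print("functional equation verified")
```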
Singular reduction of Nambu-Poisson manifolds
Das, Apurba
The version of the Marsden-Ratiu Poisson reduction theorem for Nambu-Poisson manifolds by a regular foliation has been studied by Ibáñez et al. In this paper, we show that this reduction procedure can be extended to the singular case. Under a suitable notion of Hamiltonian flow on the reduced space, we show that a set of Hamiltonians on a Nambu-Poisson manifold can also be reduced.
Wang, Xin-Fan; Wang, Jian-Qiang; Deng, Sheng-Yue
2013-01-01
We investigate the dynamic stochastic multicriteria decision making (SMCDM) problems, in which the criterion values take the form of log-normally distributed random variables, and the argument information is collected from different periods. We propose two new geometric aggregation operators, such as the log-normal distribution weighted geometric (LNDWG) operator and the dynamic log-normal distribution weighted geometric (DLNDWG) operator, and develop a method for dynamic SMCDM with log-normally distributed random variables. This method uses the DLNDWG operator and the LNDWG operator to aggregate the log-normally distributed criterion values, utilizes the entropy model of Shannon to generate the time weight vector, and utilizes the expectation values and variances of log-normal distributions to rank the alternatives and select the best one. Finally, an example is given to illustrate the feasibility and effectiveness of this developed method.
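The key closure property behind these aggregation operators can be sketched directly: a weighted geometric aggregation of independent log-normal criterion values is again log-normal, with log-mean Σwᵢμᵢ and log-variance Σwᵢ²σᵢ² (the parameters and weights below are illustrative):

```python
import random
import math

random.seed(11)
# Log-normal criterion values X_i ~ LN(mu_i, sigma_i^2) and weights w_i
params = [(0.2, 0.3), (0.5, 0.2), (0.1, 0.4)]
weights = [0.5, 0.3, 0.2]

def weighted_geometric(xs, ws):
    """Weighted geometric mean: product of x_i ** w_i."""
    return math.exp(sum(w * math.log(x) for x, w in zip(xs, ws)))

agg = [weighted_geometric([random.lognormvariate(mu, s) for mu, s in params], weights)
       for _ in range(100000)]
# The aggregate is log-normal with log-mean sum(w_i * mu_i) = 0.27
log_mean = sum(math.log(a) for a in agg) / len(agg)
expected = sum(w * mu for (mu, _), w in zip(params, weights))
print(round(log_mean, 3), round(expected, 3))
```

The closed-form log-moments are what let the ranking step use expectation values and variances of log-normal distributions directly, without simulation.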
Events in time: Basic analysis of Poisson data
Energy Technology Data Exchange (ETDEWEB)
Engelhardt, M.E.
1994-09-01
The report presents basic statistical methods for analyzing Poisson data, such as the number of events in some period of time. It gives point estimates, confidence intervals, and Bayesian intervals for the rate of occurrence per unit of time. It shows how to compare subsets of the data, both graphically and by statistical tests, and how to look for trends in time. It presents a compound model for when the rate of occurrence varies randomly. Examples and SAS programs are given.
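For the simplest of these tasks, estimating the rate of occurrence, a minimal sketch looks like this (a large-sample Wald interval for illustration; exact intervals would use chi-square quantiles instead):

```python
import math

# n events observed over exposure time T (illustrative numbers)
n_events, T = 42, 10.0
rate = n_events / T
# Large-sample (Wald) 95% interval for the rate; exact Poisson intervals
# use chi-square quantiles instead of the normal approximation.
half_width = 1.96 * math.sqrt(n_events) / T
lo, hi = rate - half_width, rate + half_width
print(round(rate, 2), round(lo, 2), round(hi, 2))
```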
Lawnik, Marcin
2018-01-01
The scope of the paper is the presentation of a new method of generating numbers from a given distribution. The method uses the inverse cumulative distribution function and a method of flattening probabilistic distributions. On the grounds of these methods, a new construction of chaotic maps was derived which generates values from a given distribution. The analysis of the new method was conducted on the example of newly constructed chaotic recurrences based on the Box-Muller transformation and the quantile function of the exponential distribution. The obtained results certify that the proposed method may be successfully applied to the construction of generators of pseudo-random numbers.
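The inverse-CDF step for the exponential quantile function mentioned above looks like this (the paper's chaotic-map and flattening machinery is not reproduced; a plain uniform generator stands in for it):

```python
import random
import math

def exponential_by_inversion(lam, u):
    """Quantile (inverse CDF) of Exp(lam) applied to a uniform draw u."""
    return -math.log(1.0 - u) / lam

random.seed(6)
lam = 2.0
xs = [exponential_by_inversion(lam, random.random()) for _ in range(100000)]
mean = sum(xs) / len(xs)
print(round(mean, 3))  # should be near 1/lam = 0.5
```

Feeding the quantile function with the output of a chaotic map whose values have been flattened to a uniform distribution, as the paper proposes, would replace `random.random()` here.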
Characterizing the performance of the Conway-Maxwell Poisson generalized linear model.
Francis, Royce A; Geedipally, Srinivas Reddy; Guikema, Seth D; Dhavala, Soma Sekhar; Lord, Dominique; LaRocca, Sarah
2012-01-01
Count data are pervasive in many areas of risk analysis; deaths, adverse health outcomes, infrastructure system failures, and traffic accidents are all recorded as count events, for example. Risk analysts often wish to estimate the probability distribution for the number of discrete events as part of doing a risk assessment. Traditional count data regression models of the type often used in risk assessment for this problem suffer from limitations due to the assumed variance structure. A more flexible model based on the Conway-Maxwell Poisson (COM-Poisson) distribution was recently proposed, a model that has the potential to overcome the limitations of the traditional model. However, the statistical performance of this new model has not yet been fully characterized. This article assesses the performance of a maximum likelihood estimation method for fitting the COM-Poisson generalized linear model (GLM). The objectives of this article are to (1) characterize the parameter estimation accuracy of the MLE implementation of the COM-Poisson GLM, and (2) estimate the prediction accuracy of the COM-Poisson GLM using simulated data sets. The results of the study indicate that the COM-Poisson GLM is flexible enough to model under-, equi-, and overdispersed data sets with different sample mean values. The results also show that the COM-Poisson GLM yields accurate parameter estimates. The COM-Poisson GLM provides a promising and flexible approach for performing count data regression. © 2011 Society for Risk Analysis.
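A sketch of the COM-Poisson pmf, p(k) ∝ λ^k/(k!)^ν, computed in log space for numerical safety, shows how ν moves the counts through over-, equi- and under-dispersed regimes (parameters illustrative):

```python
import math

def com_poisson_pmf(lam, nu, kmax=150):
    """COM-Poisson pmf p(k) proportional to lam**k / (k!)**nu, normalized
    numerically; log space avoids overflow in the factorials."""
    logw = [k * math.log(lam) - nu * math.lgamma(k + 1) for k in range(kmax)]
    top = max(logw)
    w = [math.exp(lw - top) for lw in logw]
    z = sum(w)
    return [x / z for x in w]

def dispersion(pmf):
    m = sum(k * p for k, p in enumerate(pmf))
    v = sum((k - m) ** 2 * p for k, p in enumerate(pmf))
    return v / m

# nu = 1 recovers the Poisson (dispersion 1); nu < 1 overdisperses,
# nu > 1 underdisperses.
print(round(dispersion(com_poisson_pmf(4.0, 1.0)), 2))
print(round(dispersion(com_poisson_pmf(4.0, 0.5)), 2))
print(round(dispersion(com_poisson_pmf(4.0, 2.0)), 2))
```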
Zheng, Shimin; Rao, Uma; Bartolucci, Alfred A.; Singh, Karan P.
2011-01-01
Bartolucci et al.(2003) extended the distribution assumption from the normal (Lyles et al., 2000) to the elliptical contoured distribution (ECD) for random regression models used in analysis of longitudinal data accounting for both undetectable values and informative drop-outs. In this paper, the random regression models are constructed on the multivariate skew ECD. A real data set is used to illustrate that the skew ECDs can fit some unimodal continuous data better than the Gaussian distributions or more general continuous symmetric distributions when the symmetric distribution assumption is violated. Also, a simulation study is done for illustrating the model fitness from a variety of skew ECDs. The software we used is SAS/STAT, V. 9.13. PMID:21637734
A computer code for calculating a γ-external dose from a randomly distributed radioactive cloud
International Nuclear Information System (INIS)
Kai, Michiaki
1984-02-01
A computer code (CIDE) has been developed to calculate the γ-external dose from a randomly distributed radioactive cloud. Atmospheric dispersion of radioactive materials accidentally released from a nuclear reactor needs to be estimated considering time-dependent meteorological data and terrain heights. The Particle-in-Cell (PIC) model is useful for that purpose, but it is not easy to calculate the dose from the randomly distributed concentration by numerical integration. In this study the mean concentration in a cell evaluated by the PIC model was assumed to be uniformly distributed over that cell, which was integrated as a constant concentration by a point-kernel method. The dose was obtained by summing the attributable cell doses. When the concentration of the plume had a Gaussian distribution, the results of the CIDE code agreed well with those of GAMPLE, the code for calculating the dose from a Gaussian distribution. The choice of cell sizes, which affects the accuracy of the calculated results, is discussed. (author)
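The cell-summation idea can be sketched as follows. The kernel, geometry and attenuation coefficient here are simplified placeholders (no buildup factor, arbitrary units), not the CIDE implementation:

```python
import math

def point_kernel(r, mu=0.01):
    """Simplified dose kernel: exponential attenuation over 1/(4 pi r^2)
    geometry; a real code would include a buildup factor."""
    return math.exp(-mu * r) / (4.0 * math.pi * r * r)

def cell_dose(receptor, cells):
    """Sum each cell's contribution, treating its mean concentration as
    uniform and lumped at the cell centre (coarse approximation)."""
    total = 0.0
    for (x, y, z, conc, vol) in cells:
        r = math.dist(receptor, (x, y, z))
        total += conc * vol * point_kernel(r)
    return total

# Hypothetical cells: centres along x, unit concentration, 1000-unit volume
cells = [(100.0 * i, 0.0, 50.0, 1.0, 1000.0) for i in range(1, 6)]
d = cell_dose((0.0, 0.0, 0.0), cells)
print(d)
```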
Optimizing Persistent Random Searches
Tejedor, Vincent; Voituriez, Raphael; Bénichou, Olivier
2012-02-01
We consider a minimal model of a persistent random searcher with short-range memory. We calculate exactly for such a searcher the mean first-passage time to a target in a bounded domain and find that it admits a nontrivial minimum as a function of the persistence length. This reveals an optimal search strategy which differs markedly from the simple ballistic motion obtained in the case of Poisson-distributed targets. Our results show that the distribution of targets plays a crucial role in the random search problem. In particular, in the biologically relevant cases of either a single target or regular patterns of targets, we find that, in strong contrast to repeated statements in the literature, persistent random walks with an exponential distribution of excursion lengths can minimize the search time, and in that sense perform better than any Lévy walk.
A Seemingly Unrelated Poisson Regression Model
King, Gary
1989-01-01
This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.
Associative and Lie deformations of Poisson algebras
Remm, Elisabeth
2011-01-01
Considering a Poisson algebra as a non-associative algebra satisfying the Markl-Remm identity, we study deformations of Poisson algebras as deformations of this non-associative algebra. This gives a natural interpretation of deformations which preserve the underlying associative structure, and we study deformations which preserve the underlying Lie algebra.
Multi-point Distribution Function for the Continuous Time Random Walk
Barkai, E.; Sokolov, I. M.
2007-01-01
We derive an explicit expression for the Fourier-Laplace transform of the two-point distribution function $p(x_1,t_1;x_2,t_2)$ of a continuous time random walk (CTRW), thus generalizing the result of Montroll and Weiss for the single point distribution function $p(x_1,t_1)$. The multi-point distribution function has a structure of a convolution of the Montroll-Weiss CTRW and the aging CTRW single point distribution functions. The correlation function $$ for the biased CTRW process is found. T...
Wide-area traffic: The failure of Poisson modeling
Energy Technology Data Exchange (ETDEWEB)
Paxson, V.; Floyd, S.
1994-08-01
Network arrivals are often modeled as Poisson processes for analytic simplicity, even though a number of traffic studies have shown that packet interarrivals are not exponentially distributed. The authors evaluate 21 wide-area traces, investigating a number of wide-area TCP arrival processes (session and connection arrivals, FTPDATA connection arrivals within FTP sessions, and TELNET packet arrivals) to determine the error introduced by modeling them using Poisson processes. The authors find that user-initiated TCP session arrivals, such as remote-login and file-transfer, are well-modeled as Poisson processes with fixed hourly rates, but that other connection arrivals deviate considerably from Poisson; that modeling TELNET packet interarrivals as exponential grievously underestimates the burstiness of TELNET traffic, but using the empirical Tcplib[DJCME92] interarrivals preserves burstiness over many time scales; and that FTPDATA connection arrivals within FTP sessions come bunched into ``connection bursts``, the largest of which are so large that they completely dominate FTPDATA traffic. Finally, they offer some preliminary results regarding how the findings relate to the possible self-similarity of wide-area traffic.
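The burstiness contrast the authors report can be reproduced in miniature: exponential interarrivals give an index of dispersion near 1, while heavy-tailed (here Pareto) interarrivals give markedly overdispersed counts. This is a generic illustration, not the Tcplib traces:

```python
import random

def index_of_dispersion(arrivals, width, horizon):
    """Variance-to-mean ratio of bin counts; equals 1 for a Poisson process."""
    nbins = int(horizon / width)
    counts = [0] * nbins
    for t in arrivals:
        if t < horizon:
            counts[int(t / width)] += 1
    m = sum(counts) / nbins
    v = sum((c - m) ** 2 for c in counts) / nbins
    return v / m

def arrival_times(draw_gap, horizon):
    t, out = 0.0, []
    while t < horizon:
        t += draw_gap()
        out.append(t)
    return out

random.seed(7)
horizon = 20000.0
poisson_arr = arrival_times(lambda: random.expovariate(1.0), horizon)
pareto_arr = arrival_times(lambda: random.paretovariate(1.2), horizon)  # heavy tail
d_poisson = index_of_dispersion(poisson_arr, 10.0, horizon)
d_pareto = index_of_dispersion(pareto_arr, 10.0, horizon)
print(round(d_poisson, 2), round(d_pareto, 2))  # near 1 vs clearly above 1
```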
An adaptive fast multipole accelerated Poisson solver for complex geometries
Askham, T.; Cerfon, A. J.
2017-09-01
We present a fast, direct and adaptive Poisson solver for complex two-dimensional geometries based on potential theory and fast multipole acceleration. More precisely, the solver relies on the standard decomposition of the solution as the sum of a volume integral to account for the source distribution and a layer potential to enforce the desired boundary condition. The volume integral is computed by applying the FMM on a square box that encloses the domain of interest. For the sake of efficiency and convergence acceleration, we first extend the source distribution (the right-hand side in the Poisson equation) to the enclosing box as a C⁰ function using a fast, boundary integral-based method. We demonstrate on multiply connected domains with irregular boundaries that this continuous extension leads to high accuracy without excessive adaptive refinement near the boundary and, as a result, to an extremely efficient "black box" fast solver.
Risk Assessment of Distribution Network Based on Random set Theory and Sensitivity Analysis
Zhang, Sh; Bai, C. X.; Liang, J.; Jiao, L.; Hou, Z.; Liu, B. Zh
2017-05-01
Considering the complexity and uncertainty of operating information in distribution networks, this paper introduces the use of random sets for risk assessment. The proposed method is based on operating conditions defined in the random set framework to obtain the upper and lower cumulative probability functions of the risk indices. Moreover, the sensitivity of the risk indices effectively reflects information about system reliability and operating conditions, and using this information the bottlenecks that suppress system reliability can be found. The analysis of a typical radial distribution network shows that the proposed method is reasonable and effective.
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform-distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
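The core of such an algorithm is the standard two-normals construction (a generic sketch, not the paper's FORTRAN specifics): x = μ₁ + σ₁z₁ and y = μ₂ + σ₂(ρz₁ + √(1−ρ²)z₂) for independent standard normals z₁, z₂:

```python
import random
import math

def bivariate_normal(mu1, mu2, s1, s2, rho):
    """One (x, y) pair from a bivariate normal, built from two independent
    standard normals via a Cholesky-style combination."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = mu1 + s1 * z1
    y = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y

random.seed(8)
pairs = [bivariate_normal(1.0, -2.0, 2.0, 0.5, 0.8) for _ in range(100000)]
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
corr = cov / (2.0 * 0.5)   # divide by the true standard deviations
print(round(mx, 2), round(my, 2), round(corr, 2))
```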
Thompson, J. R.; Taylor, M. S.
1982-01-01
Let X be a K-dimensional random variable serving as input for a system with output Y (not necessarily of dimension K). Given X, an outcome Y or a distribution of outcomes G(Y|X) may be obtained either explicitly or implicitly. The situation is considered in which there is a real-world data set {X_j}, j = 1, ..., n, and a means of simulating an outcome Y. A method for empirical random number generation based on the sample of observations of the random variable X, without estimating the underlying density, is discussed.
Da, Yao; Huan, Zhang; Wei, Deng
2017-05-01
As a branch power source, distributed photovoltaics (DPV) will increase or shunt the fault current. The random output of DPV also makes the fault current randomly distributed, while the breaking capacity of the breaker and the setting value of the current protection are pre-set and cannot change flexibly, so DPV affects the breaking margin and the sensitivity of protection to some degree. This paper builds a probability-distribution model of the fault current in networks containing DPV and, taking the IEEE 33-node system as an example, simulates the probability distribution of the fault current at different DPV penetration levels. Finally, the risk of protection failure after DPV access is analysed in terms of two indicators: the breaking margin of the breaker and the sensitivity of the protection.
Poplová, Michaela; Sovka, Pavel; Cifra, Michal
2017-01-01
Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
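The pre-processing method itself is specific to the paper (it preserves the Poisson form of the photocount distribution); as a standard point of comparison, the classical Anscombe transform stabilizes the variance of Poisson counts but does not preserve their distribution type:

```python
import math

def anscombe(counts):
    """Anscombe variance-stabilizing transform: for X ~ Poisson(lam)
    with lam of at least about 4, 2*sqrt(X + 3/8) has variance close
    to 1 regardless of lam, decoupling the mean-variance dependence."""
    return [2.0 * math.sqrt(x + 0.375) for x in counts]
```

This decoupling is exactly the obstacle the abstract describes: for raw Poisson data the variance tracks the (possibly nonstationary) mean, so any trend survives in the variance after subtraction.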
Ruin probabilities for a regenerative Poisson gap generated risk process
DEFF Research Database (Denmark)
Asmussen, Søren; Biard, Romain
A risk process with constant premium rate c and Poisson arrivals of claims is considered. A threshold r is defined for claim interarrival times, such that if k consecutive interarrival times are larger than r, then the next claim has distribution G; otherwise, the claim size distribution is F. Asymptotic expressions for the infinite horizon ruin probabilities are given both for the light- and the heavy-tailed case. A basic observation is that the process regenerates at each G-claim. Also an approach via Markov additive processes is outlined, and heuristics are given for the distribution of the time…
Constructions and classifications of projective Poisson varieties
Pym, Brent
2018-03-01
This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past 20 years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.
Lord, Dominique; Geedipally, Srinivas Reddy; Guikema, Seth D
2010-08-01
The objective of this article is to evaluate the performance of the COM-Poisson GLM for analyzing crash data exhibiting underdispersion (when conditional on the mean). The COM-Poisson distribution, originally developed in 1962, has recently been reintroduced by statisticians for analyzing count data subjected to either over- or underdispersion. Over the last year, the COM-Poisson GLM has been evaluated in the context of crash data analysis and it has been shown that the model performs as well as the Poisson-gamma model for crash data exhibiting overdispersion. To accomplish the objective of this study, several COM-Poisson models were estimated using crash data collected at 162 railway-highway crossings in South Korea between 1998 and 2002. This data set has been shown to exhibit underdispersion when models linking crash data to various explanatory variables are estimated. The modeling results were compared to those produced from the Poisson and gamma probability models documented in a previous published study. The results of this research show that the COM-Poisson GLM can handle crash data when the modeling output shows signs of underdispersion. Finally, they also show that the model proposed in this study provides better statistical performance than the gamma probability and the traditional Poisson models, at least for this data set.
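The COM-Poisson pmf that underlies the GLM is P(k) proportional to lam**k / (k!)**nu, with an infinite normalizing sum; a small evaluation sketch (our own truncation and names, computed in log space for stability):

```python
import math

def com_poisson_pmf(k, lam, nu, jmax=200):
    """Conway-Maxwell-Poisson pmf: P(K = k) = lam**k / (k!)**nu / Z(lam, nu).
    nu = 1 recovers the Poisson; nu > 1 gives underdispersion, nu < 1
    overdispersion. The normalizing sum is truncated at jmax terms."""
    logw = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(jmax)]
    m = max(logw)
    z = sum(math.exp(w - m) for w in logw)   # Z / e^m, summed stably
    return math.exp(logw[k] - m) / z
```

Setting nu = 1 reproduces the ordinary Poisson pmf, and nu > 1 yields a variance below the mean, which is the underdispersed regime the crash data exhibit.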
Khristoforov, Mikhail; Kleptsyn, Victor; Triestino, Michele
2016-07-01
This paper is inspired by the problem of understanding in a mathematical sense the Liouville quantum gravity on surfaces. Here we show how to define a stationary random metric on self-similar spaces which are the limit of nice finite graphs: these are the so-called hierarchical graphs. They possess a well-defined level structure and any level is built using a simple recursion. Stopping the construction at any finite level, we have a discrete random metric space when we set the edges to have random length (using a multiplicative cascade with fixed law m). We introduce a tool, the cut-off process, by means of which one finds that, renormalizing the sequence of metrics by an exponential factor, they converge in law to a non-trivial metric on the limit space. Such limit law is stationary, in the sense that glueing together a certain number of copies of the random limit space, according to the combinatorics of the brick graph, the obtained random metric has the same law when rescaled by a random factor of law m. In other words, the stationary random metric is the solution of a distributional equation. When the measure m has continuous positive density on the positive reals, the stationary law is unique up to rescaling and any other distribution tends to a rescaled stationary law under the iterations of the hierarchical transformation. We also investigate topological and geometric properties of the random space when m is log-normal, detecting a phase transition influenced by the branching random walk associated to the multiplicative cascade.
Directory of Open Access Journals (Sweden)
Abdou Amza
2014-09-01
Full Text Available Antibiotic use in animals demonstrates improved growth regardless of whether or not there is clinical evidence of infectious disease. Antibiotics used for trachoma control may confer an unintended benefit of improving child growth. In this sub-study of a larger randomized controlled trial, we assess anthropometry of pre-school children in a community-randomized trial of mass oral azithromycin distributions for trachoma in Niger. We measured height, weight, and mid-upper arm circumference (MUAC) in 12 communities randomized to receive annual mass azithromycin treatment of everyone versus 12 communities randomized to receive biannual mass azithromycin treatments for children, 3 years after the initial mass treatment. We collected measurements in 1,034 children aged 6-60 months. We found no difference in the prevalence of wasting among children in the 12 annually treated communities that received three mass azithromycin distributions compared to the 12 biannually treated communities that received six mass azithromycin distributions (odds ratio = 0.88, 95% confidence interval = 0.53 to 1.49). We were unable to demonstrate a statistically significant difference in stunting, underweight, and low MUAC of pre-school children in communities randomized to annual mass azithromycin treatment or biannual mass azithromycin treatment. The role of antibiotics in child growth and nutrition remains unclear, but larger studies and longitudinal trials may help determine any association.
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
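The link the abstract draws between random-effect Poisson models and the negative binomial can be made concrete: mixing the Poisson rate with a gamma random effect yields negative binomial counts. A small sampler sketch (illustrative parameterization with mean `mean` and shape `k`, so that Var = mean + mean**2/k):

```python
import math
import random

def neg_binomial_sample(mean, k, rng):
    """Negative binomial draw via its Poisson-gamma mixture representation:
    lam ~ Gamma(shape=k, scale=mean/k), then X ~ Poisson(lam).
    Marginally E[X] = mean and Var(X) = mean + mean**2 / k > mean,
    i.e. the overdispersion discussed in the text."""
    lam = rng.gammavariate(k, mean / k)
    # Knuth's Poisson sampler (adequate for modest lam)
    L = math.exp(-lam)
    n, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return n
        n += 1
```

For mean = 5 and k = 2 the theoretical variance is 5 + 25/2 = 17.5, well above the mean, which a moderate simulation confirms.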
Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.
Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique
2015-05-01
The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
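The key computational ingredient, drawing a random field with a prescribed power spectral density, can be sketched in one dimension (a toy spectral-synthesis illustration with random phases, not the authors' 2D fault algorithm or their derived spectrum):

```python
import cmath
import math
import random

def spectral_synthesis(n, beta, rng=random):
    """1D zero-mean random field with power spectrum S(k) ~ k**(-beta):
    each Fourier mode gets a prescribed amplitude and a uniform random
    phase, with conjugate symmetry so the field is real."""
    coeffs = [0j] * n
    for k in range(1, n // 2):
        amp = k ** (-beta / 2.0)
        phase = rng.uniform(0.0, 2.0 * math.pi)
        coeffs[k] = amp * cmath.exp(1j * phase)
        coeffs[n - k] = coeffs[k].conjugate()   # keep the field real
    # inverse DFT, O(n^2) but fine for small n
    field = []
    for x in range(n):
        s = sum(coeffs[k] * cmath.exp(2j * math.pi * k * x / n)
                for k in range(n))
        field.append(s.real / n)
    return field
```

Because the k = 0 mode is left empty, the synthesized field has exactly zero mean, mirroring a stress perturbation about a background level.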
The Poisson equation on Klein surfaces
Directory of Open Access Journals (Sweden)
Monica Rosiu
2016-04-01
Full Text Available We obtain a formula for the solution of the Poisson equation with Dirichlet boundary condition on a region of a Klein surface. This formula reveals the symmetric character of the solution.
Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B
2009-02-01
Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in the epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, ...). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of models was assessed via a 3-fold cross-validation using mean squared error of prediction (MSEP) as the end-point. Between-sire variance estimates for NCM were 0.065 in Poisson and 0.007 in the linear model. For CM the between-sire variance was 0.093 in logit and 0.003 in the linear model. The ratio between herd and sire variances for the models with NCM response was 4.6 and 3.5 for Poisson and linear, respectively, and for models for CM was 3.7 in both logit and linear models. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM response), 1.333 (logit), and 1.319 (linear for CM response). The models for count variables had a better performance when predicting diseased animals and also had a similar performance between them. Logit and linear models for CM had better predictive ability for healthy animals.
Brownian motion and parabolic Anderson model in a renormalized Poisson potential
Chen, Xia; Kulik, Alexey M.
2012-01-01
A method known as renormalization is proposed for constructing some more physically realistic random potentials in a Poisson cloud. The Brownian motion in the renormalized random potential and related parabolic Anderson models are investigated. With the renormalization, for example, models consistent with Newton's law of universal attraction can be rigorously constructed.
Xie, Wen-Jie; Han, Rui-Qi; Jiang, Zhi-Qiang; Wei, Lijian; Zhou, Wei-Xing
2017-08-01
Complex network is not only a powerful tool for the analysis of complex system, but also a promising way to analyze time series. The algorithm of horizontal visibility graph (HVG) maps time series into graphs, whose degree distributions are numerically and analytically investigated for certain time series. We derive the degree distributions of HVGs through an iterative construction process of HVGs. The degree distributions of the HVG and the directed HVG for random series are derived to be exponential, which confirms the analytical results from other methods. We also obtained the analytical expressions of degree distributions of HVGs and in-degree and out-degree distributions of directed HVGs transformed from multifractal binomial measures, which agree excellently with numerical simulations.
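For an i.i.d. random series the HVG degree distribution is known to be exponential, P(k) = (1/3)(2/3)**(k-2) with mean degree 4; a short sketch (our own O(n^2) construction, not the paper's iterative derivation) lets one check this numerically:

```python
def hvg_degrees(series):
    """Degree of each node in the horizontal visibility graph of `series`:
    nodes i < j are linked iff every point strictly between them lies
    below both series[i] and series[j]."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        m = float('-inf')   # running max over points strictly between i and j
        for j in range(i + 1, n):
            if m < series[i] and m < series[j]:
                deg[i] += 1
                deg[j] += 1
            m = max(m, series[j])
            if m >= series[i]:
                break       # i cannot see past a taller intermediate point
    return deg
```

On a uniform random series the empirical mean degree approaches 4 and about one third of the nodes have degree 2, matching the analytical exponential form cited above.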
Bering's proposal for boundary contribution to the Poisson bracket
International Nuclear Information System (INIS)
Soloviev, V.O.
1998-11-01
It is shown that the Poisson bracket with boundary terms recently proposed by Bering can be deduced from the Poisson bracket proposed by the present author if one omits terms free of Euler-Lagrange derivatives (''annihilation principle''). This corresponds to another definition of the formal product of distributions (or, saying it in other words, to another definition of the pairing between 1-forms and 1-vectors in the formal variational calculus). We extend the formula initially suggested by Bering only for the ultralocal case with constant coefficients onto the general non-ultralocal brackets with coefficients depending on fields and their spatial derivatives. The lack of invariance under changes of dependent variables (field redefinitions) seems a drawback of this proposal. (author)
Improved mesh generator for the POISSON Group Codes
International Nuclear Information System (INIS)
Gupta, R.C.
1987-01-01
This paper describes the improved mesh generator of the POISSON Group Codes. These improvements enable one to have full control over the way the mesh is generated and in particular the way the mesh density is distributed throughout this model. A higher mesh density in certain regions coupled with a successively lower mesh density in others keeps the accuracy of the field computation high and the requirements on the computer time and computer memory low. The mesh is generated with the help of codes AUTOMESH and LATTICE; both have gone through a major upgrade. Modifications have also been made in the POISSON part of these codes. We shall present an example of a superconducting dipole magnet to explain how to use this code. The results of field computations are found to be reliable within a few parts in a hundred thousand even in such complex geometries
Probability distribution for the Gaussian curvature of the zero level surface of a random function
Hannay, J. H.
2018-04-01
A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z) = 0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f = 0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.
Poisson-Like Spiking in Circuits with Probabilistic Synapses
Moreno-Bote, Rubén
2014-01-01
Neuronal activity in cortex is variable both spontaneously and during stimulation, and it has the remarkable property that it is Poisson-like over broad ranges of firing rates covering from virtually zero to hundreds of spikes per second. The mechanisms underlying cortical-like spiking variability over such a broad continuum of rates are currently unknown. We show that neuronal networks endowed with probabilistic synaptic transmission, a well-documented source of variability in cortex, robustly generate Poisson-like variability over several orders of magnitude in their firing rate without fine-tuning of the network parameters. Other sources of variability, such as random synaptic delays or spike generation jittering, do not lead to Poisson-like variability at high rates because they cannot be sufficiently amplified by recurrent neuronal networks. We also show that probabilistic synapses predict Fano factor constancy of synaptic conductances. Our results suggest that synaptic noise is a robust and sufficient mechanism for the type of variability found in cortex. PMID:25032705
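One elementary way to see why unreliable synapses are compatible with Poisson-like output is the thinning property: Bernoulli-deleting the events of a Poisson process leaves a Poisson process, so the Fano factor stays near 1. A toy sketch of that property (illustrative only, not the paper's recurrent-network model):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler (adequate for modest lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def thinned_counts(lam, release_prob, trials, rng):
    """Per-trial transmitted-spike counts when each of a Poisson(lam)
    number of presynaptic spikes is released with probability
    release_prob. By the thinning property the counts are
    Poisson(lam * release_prob), so variance/mean stays near 1."""
    out = []
    for _ in range(trials):
        n = poisson_sample(lam, rng)
        out.append(sum(1 for _ in range(n) if rng.random() < release_prob))
    return out
```

With lam = 20 and release_prob = 0.3 the transmitted counts have mean near 6 and a Fano factor near 1, the constancy the abstract highlights.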
A Permutation-Randomization Approach to Test the Spatial Distribution of Plant Diseases.
Lione, G; Gonthier, P
2016-01-01
The analysis of the spatial distribution of plant diseases requires the availability of trustworthy geostatistical methods. The mean distance tests (MDT) are here proposed as a series of permutation and randomization tests to assess the spatial distribution of plant diseases when the variable of phytopathological interest is categorical. A user-friendly software to perform the tests is provided. Estimates of power and type I error, obtained with Monte Carlo simulations, showed the reliability of the MDT (power > 0.80; low type I error). A validation on pathogens causing root rot on conifers was successfully performed by verifying the consistency between the MDT responses and previously published data. An application of the MDT was carried out to analyze the relation between the plantation density and the distribution of the infection of Gnomoniopsis castanea, an emerging fungal pathogen causing nut rot on sweet chestnut. Trees carrying nuts infected by the pathogen were randomly distributed in areas with different plantation densities, suggesting that the distribution of G. castanea was not related to the plantation density. The MDT could be used to analyze the spatial distribution of plant diseases both in agricultural and natural ecosystems.
Exact probability distribution function for multifractal random walk models of stocks
Saakian, D. B.; Martirosyan, A.; Hu, Chin-Kun; Struzik, Z. R.
2011-07-01
We investigate the multifractal random walk (MRW) model, popular in the modelling of stock fluctuations in the financial market. The exact probability distribution function (PDF) is derived by employing methods proposed in the derivation of correlation functions in string theory, including the analytical extension of Selberg integrals. We show that the recent results by Y. V. Fyodorov, P. Le Doussal and A. Rosso obtained with the logarithmic Random Energy Model (REM) model are sufficient to derive exact formulas for the PDF of the log returns in the MRW model.
Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.
Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen
2013-02-01
In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.
Directory of Open Access Journals (Sweden)
Wenzhi Wang
2016-07-01
Full Text Available Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs with statistical equivalent fiber distribution against the actual material microstructure. The realistic statistical data is utilized as inputs of the new method, which is archived through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. It is found that the proposed method presents a good match with experimental results in all aspects including the nearest neighbor distance, nearest neighbor orientation, Ripley’s K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insights of the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs for not only fiber-reinforced composites but also other materials such as foam materials and particle-reinforced composites.
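The paper's algorithm reproduces measured neighbor statistics; the simple baseline it improves on, random sequential adsorption (RSA) of non-overlapping fibres into a square cell, can be sketched as follows (parameter names are hypothetical):

```python
import math
import random

def rsa_fibers(vf, radius, size, max_tries=100_000, rng=random):
    """Random sequential adsorption: drop fibre centres uniformly in a
    size x size cell, rejecting any centre that would overlap an already
    accepted fibre, until the target fibre volume fraction vf is reached
    (or max_tries proposals have been spent)."""
    target = int(vf * size * size / (math.pi * radius ** 2))
    centres = []
    tries = 0
    while len(centres) < target and tries < max_tries:
        tries += 1
        x = rng.uniform(radius, size - radius)
        y = rng.uniform(radius, size - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centres):
            centres.append((x, y))
    return centres
```

Plain RSA is known to produce neighbor-distance statistics that differ from real micrographs at high volume fractions, which is precisely the gap statistically equivalent RVE generators aim to close.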
Poisson versus threshold models for genetic analysis of clinical mastitis in US Holsteins.
Vazquez, A I; Weigel, K A; Gianola, D; Bates, D M; Perez-Cabal, M A; Rosa, G J M; Chang, Y M
2009-10-01
Typically, clinical mastitis is coded as the presence or absence of disease in a given lactation, and records are analyzed with either linear models or binary threshold models. Because the presence of mastitis may include cows with multiple episodes, there is a loss of information when counts are treated as binary responses. Poisson models are appropriated for random variables measured as the number of events, and although these models are used extensively in studying the epidemiology of mastitis, they have rarely been used for studying the genetic aspects of mastitis. Ordinal threshold models are pertinent for ordered categorical responses; although one can hypothesize that the number of clinical mastitis episodes per animal reflects a continuous underlying increase in mastitis susceptibility, these models have rarely been used in genetic analysis of mastitis. The objective of this study was to compare probit, Poisson, and ordinal threshold models for the genetic evaluation of US Holstein sires for clinical mastitis. Mastitis was measured as a binary trait or as the number of mastitis cases. Data from 44,908 first-parity cows recorded in on-farm herd management software were gathered, edited, and processed for the present study. The cows were daughters of 1,861 sires, distributed over 94 herds. Predictive ability was assessed via a 5-fold cross-validation using 2 loss functions: mean squared error of prediction (MSEP) as the end point and a cost difference function. The heritability estimates were 0.061 for mastitis measured as a binary trait in the probit model and 0.085 and 0.132 for the number of mastitis cases in the ordinal threshold and Poisson models, respectively; because of scale differences, only the probit and ordinal threshold models are directly comparable. Among healthy animals, MSEP was smallest for the probit model, and the cost function was smallest for the ordinal threshold model. Among diseased animals, MSEP and the cost function were smallest
A distribution-free newsvendor model with balking penalty and random yield
Directory of Open Access Journals (Sweden)
Chongfeng Lan
2015-05-01
Full Text Available Purpose: The purpose of this paper is to extend the analysis of the distribution-free newsvendor problem in an environment of customer balking, which occurs when customers are reluctant to buy a product if its available inventory falls below a threshold level. Design/methodology/approach: We provide a new tradeoff tool as a replacement of the traditional one to weigh the holding cost and the goodwill costs segment: in addition to the shortage penalty, we also introduce the balking penalty. Furthermore, we extend our model to the case of random yield. Findings: A model is presented for determining both an optimal order quantity and a lower bound on the profit under the worst possible distribution of the demand. We also study the effects of shortage penalty and the balking penalty on the optimal order quantity, which have been largely bypassed in the existing distribution free single period models with balking. Numerical examples are presented to illustrate the result. Originality/value: The incorporation of balking penalty and random yield represents an important improvement in inventory policy performance for distribution-free newsvendor problem when customer balking occurs and the distributional form of demand is unknown.
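The balking penalty and random yield are the paper's contribution; the classical starting point for any distribution-free newsvendor analysis, Scarf's order quantity for known mean and variance (zero salvage, unit cost c, price p), is simple enough to state in code:

```python
import math

def scarf_order_quantity(mu, sigma, price, cost):
    """Scarf's distribution-free newsvendor rule (zero salvage):
    the order quantity maximizing worst-case expected profit over all
    demand distributions with mean mu and standard deviation sigma:
        Q* = mu + (sigma/2) * (sqrt((p-c)/c) - sqrt(c/(p-c)))."""
    ratio = (price - cost) / cost
    return mu + 0.5 * sigma * (math.sqrt(ratio) - math.sqrt(1.0 / ratio))
```

When price = 2*cost the two square-root terms cancel and the rule orders exactly the mean demand; higher margins push the order above the mean.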
Chord-length distribution function for two-phase random media
International Nuclear Information System (INIS)
Torquato, S.; Lu, B.
1993-01-01
A statistical correlation function of basic importance in the study of two-phase random media (such as suspensions, porous media, and composites) is the chord-length distribution function p(z). We show that p(z) is related to another fundamentally important morphological descriptor studied by us previously, namely, the lineal-path function L(z), which gives the probability of finding a line segment of length z wholly in one of the phases when randomly thrown into the sample. We derive exact series representations of the chord-length distribution function for media comprised of spheres with a polydispersity in size for arbitrary space dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres), we determine exactly p(z) and the mean chord length l_C, the first moment of p(z). We also obtain corresponding formulas for the case of impenetrable (i.e., spatially correlated) polydispersed spheres.
Online distribution channel increases article usage on Mendeley: a randomized controlled trial.
Kudlow, Paul; Cockerill, Matthew; Toccalino, Danielle; Dziadyk, Devin Bissky; Rutledge, Alan; Shachak, Aviv; McIntyre, Roger S; Ravindran, Arun; Eysenbach, Gunther
2017-01-01
Prior research shows that article reader counts (i.e. saves) on the online reference manager, Mendeley, correlate with future citations. There are currently no evidence-based distribution strategies that have been shown to increase article saves on Mendeley. We conducted a 4-week randomized controlled trial to examine how promotion of article links in a novel online cross-publisher distribution channel (TrendMD) affects article saves on Mendeley. Four hundred articles published in the Journal of Medical Internet Research were randomized to either the TrendMD arm (n = 200) or the control arm (n = 200) of the study. Our primary outcome compares the 4-week mean Mendeley saves of articles randomized to TrendMD versus control. Articles randomized to TrendMD showed a 77% increase in article saves on Mendeley relative to control. The difference in mean Mendeley saves for TrendMD articles versus control was 2.7, 95% CI (2.63, 2.77), and statistically significant (p < 0.01). There was a positive correlation between pageviews driven by TrendMD and article saves on Mendeley (Spearman's rho r = 0.60). This is the first randomized controlled trial to show how an online cross-publisher distribution channel (TrendMD) enhances article saves on Mendeley. While replication and further study are needed, these data suggest that cross-publisher article recommendations via TrendMD may enhance citations of scholarly articles.
Strong second-harmonic radiation from a thin silver film with randomly distributed small holes
Rakov, N; Xiao, M
2003-01-01
We report the observation of strong second-harmonic radiation from a thin silver film containing randomly distributed small holes. A pulsed laser beam of wavelength 1064 nm impinges at an angle of incidence 45 deg. on the film, and the reflection is collected by a CCD detector and analysed by a high-resolution spectrometer. Strong second-harmonic radiation was observed at the wavelength of 532 nm with a halfwidth of 40 nm. (letter to the editor)
Distributed Pseudo-Random Number Generation and Its Application to Cloud Database
Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua
2014-01-01
Cloud databases are a rapidly growing trend in the cloud computing market. They enable clients to run their computations on outsourced databases or to access distributed database services on the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...
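The abstract does not give the construction itself, so as a purely illustrative sketch of the kind of deterministic generation such schemes build on, here is a toy hash-counter generator. This is hypothetical, not the paper's scheme, and not audited for cryptographic use:

```python
import hashlib

class HashCounterPRNG:
    """Toy deterministic generator: SHA-256 over (seed || counter).
    Illustrative only -- not the scheme from the paper, and not
    suitable for production cryptography."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def next_block(self) -> bytes:
        digest = hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()
        self.counter += 1
        return digest

    def randint(self, upper: int) -> int:
        # Map a 256-bit block to [0, upper); slight modulo bias, fine for a sketch.
        return int.from_bytes(self.next_block(), "big") % upper

g1 = HashCounterPRNG(b"shared-seed")
g2 = HashCounterPRNG(b"shared-seed")
xs = [g1.randint(1000) for _ in range(5)]
ys = [g2.randint(1000) for _ in range(5)]
g3 = HashCounterPRNG(b"other-seed")
zs = [g3.randint(1000) for _ in range(5)]
print(xs == ys)  # same seed -> same stream
```

The determinism under a shared seed is what lets distributed parties reproduce the same stream without transmitting it, which is the property cloud-database encryption schemes rely on.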
International Nuclear Information System (INIS)
Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.
2001-01-01
Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
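The recalculation idea, convolving the dose data with the random setup-error distribution, can be sketched in one dimension. All numbers below (field width, target size, 3 mm Gaussian setup error) are hypothetical, chosen only to show the minimum target dose eroding under blur:

```python
import math

def gaussian_kernel(sigma, dx, half_width):
    ks = [math.exp(-0.5 * (i * dx / sigma) ** 2) for i in range(-half_width, half_width + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def convolve(profile, kernel):
    h = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - h, 0), len(profile) - 1)  # clamp at the edges
            acc += k * profile[idx]
        out.append(acc)
    return out

dx = 1.0                                                        # 1 mm grid
profile = [1.0 if 40 <= x < 60 else 0.0 for x in range(100)]    # 20 mm flat field
blurred = convolve(profile, gaussian_kernel(sigma=3.0, dx=dx, half_width=10))

target = slice(45, 55)      # hypothetical 10 mm target inside the field
min_dose = min(blurred[target])
print(min_dose)             # below 1.0: random setup error erodes target coverage
```

Shrinking the field margin around the target (or enlarging sigma) drives `min_dose` down further, which is the size dependence the study reports for small target volumes.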
Blind beam-hardening correction from Poisson measurements
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
Pagonis, Vasilis; Kulp, Christopher; Chaney, Charity-Grace; Tachiya, M
2017-09-13
During the past 10 years, quantum tunneling has been established as one of the dominant mechanisms for recombination in random distributions of electrons and positive ions, and in many dosimetric materials. Specifically quantum tunneling has been shown to be closely associated with two important effects in luminescence materials, namely long term afterglow luminescence and anomalous fading. Two of the common assumptions of quantum tunneling models based on random distributions of electrons and positive ions are: (a) An electron tunnels from a donor to the nearest acceptor, and (b) the concentration of electrons is much lower than that of positive ions at all times during the tunneling process. This paper presents theoretical studies for arbitrary relative concentrations of electrons and positive ions in the solid. Two new differential equations are derived which describe the loss of charge in the solid by tunneling, and they are solved analytically. The analytical solution compares well with the results of Monte Carlo simulations carried out in a random distribution of electrons and positive ions. Possible experimental implications of the model are discussed for tunneling phenomena in long term afterglow signals, and also for anomalous fading studies in feldspars and apatite samples.
Zhao, Youxuan; Li, Feilong; Cao, Peng; Liu, Yaolu; Zhang, Jianyu; Fu, Shaoyun; Zhang, Jun; Hu, Ning
2017-08-01
Since the identification of micro-cracks in engineering materials is very valuable for understanding the initial and slight changes in the mechanical properties of materials under complex working environments, numerical simulations of the propagation of the low-frequency S0 Lamb wave in thin plates with randomly distributed micro-cracks were performed to study the behavior of nonlinear Lamb waves. The results showed that while the influence of the randomly distributed micro-cracks on the phase velocity of the low-frequency S0 fundamental waves could be neglected, significant ultrasonic nonlinear effects caused by the randomly distributed micro-cracks were discovered, manifested mainly as second harmonic generation. By using a Monte Carlo simulation method, we found that the acoustic nonlinearity parameter increased linearly with the micro-crack density and the size of the micro-crack zone, and was also related to the excitation frequency and the friction coefficient of the micro-crack surfaces. In addition, it was found that the nonlinear effect of waves reflected by the micro-cracks was more noticeable than that of the transmitted waves. This study theoretically reveals that the low-frequency S0 mode of Lamb waves can be used as the fundamental wave to quantitatively identify micro-cracks in thin plates. Copyright © 2017 Elsevier B.V. All rights reserved.
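Second-harmonic generation by a distributed quadratic nonlinearity can be illustrated with a toy signal model (a hypothetical y = x + beta*x^2 transfer, not the plate simulation itself): the output spectrum acquires a component at twice the excitation frequency whose amplitude grows with beta.

```python
import math, cmath

N, f0 = 256, 8                        # samples, fundamental cycles per window
x = [math.sin(2 * math.pi * f0 * n / N) for n in range(N)]

# A toy quadratic "micro-crack" nonlinearity: y = x + beta * x^2
beta = 0.2
y = [xi + beta * xi * xi for xi in x]

def dft_mag(sig, k):
    # Magnitude of the k-th DFT bin, normalized by the signal length.
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / len(sig))
                   for n, s in enumerate(sig))) / len(sig)

fund = dft_mag(y, f0)        # component at the excitation frequency f0
second = dft_mag(y, 2 * f0)  # second harmonic at 2*f0, absent from the input
print(fund, second)
```

The input `x` has no energy at 2*f0; the quadratic term creates it, which is the signature used to detect micro-cracks in the simulations above.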
International Nuclear Information System (INIS)
Moore, Stephen R.; Papworth, David; Grosovsky, Andrew J.
2006-01-01
Genomic instability is observed in tumors and in a large fraction of the progeny surviving irradiation. One of the best-characterized phenotypic manifestations of genomic instability is delayed chromosome aberrations. Our working hypothesis for the current study was that if genomic instability is in part attributable to cis mechanisms, we should observe a non-random distribution of chromosomes or sites involved in instability-associated rearrangements, regardless of radiation quality, dose, or trans factor expression. We report here the karyotypic examination of 296 instability-associated chromosomal rearrangement breaksites (IACRB) from 118 unstable TK6 human B lymphoblast, and isogenic derivative, clones. When we tested whether IACRB were distributed across the chromosomes based on target size, a significant non-random distribution was evident (p < 0.00001), and three IACRB hotspots (chromosomes 11, 12, and 22) and one IACRB coldspot (chromosome 2) were identified. Statistical analysis at the chromosomal band-level identified four IACRB hotspots accounting for 20% of all instability-associated breaks, two of which account for over 14% of all IACRB. Further, analysis of independent clones provided evidence within 14 individual clones of IACRB clustering at the chromosomal band level, suggesting a predisposition for further breaks after an initial break at some chromosomal bands. All of these events, independently, or when taken together, were highly unlikely to have occurred by chance (p < 0.000001). These IACRB band-level cluster hotspots were observed independent of radiation quality, dose, or cellular p53 status. The non-random distribution of instability-associated chromosomal rearrangements described here significantly differs from the distribution that was observed in a first-division post-irradiation metaphase analysis (p = 0.0004). Taken together, these results suggest that genomic instability may be in part driven by chromosomal cis mechanisms
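The target-size test described above is, in essence, a chi-square goodness-of-fit test of observed breaks against expectations proportional to chromosome size. A sketch with made-up counts (not the study's data):

```python
# Chi-square goodness-of-fit sketch: are breaks distributed in proportion
# to chromosome size? Sizes and counts below are hypothetical, for illustration.
sizes = [250, 190, 160, 100]          # relative chromosome target sizes
observed = [30, 22, 55, 13]           # instability-associated breaks per chromosome

total = sum(observed)
expected = [total * s / sum(sizes) for s in sizes]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1
print(chi2, df)  # chi2 far above df suggests a non-random (hotspot) distribution
```

Here the third "chromosome" carries far more breaks than its size predicts, so the statistic lands well beyond the df = 3 critical values, the same logic behind the hotspots on chromosomes 11, 12, and 22 reported above.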
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Moser, Martin
2013-01-01
We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
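The maximum increment of a walk, the largest of S_j - S_i over all i < j, can be computed in a single pass with a running minimum. A small deterministic sketch:

```python
def max_increment(jumps):
    """Maximum increment max over i < j of (S_j - S_i), where S_0 = 0 and
    S_k is the partial sum of the first k jumps; one pass, running minimum."""
    s, running_min, best = 0.0, 0.0, float("-inf")
    for x in jumps:
        s += x
        best = max(best, s - running_min)
        running_min = min(running_min, s)
    return best

print(max_increment([1, -2, 3, 1, -1]))  # S = 0,1,-1,2,3,2 -> 3 - (-1) = 4
```

For heavy-tailed (regularly varying) jumps, the result above says this statistic, suitably normalized, is asymptotically Fréchet distributed.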
The First Order Correction to the Exit Distribution for Some Random Walks
Kennedy, Tom
2016-07-01
We study three different random walk models on several two-dimensional lattices by Monte Carlo simulations. One is the usual nearest neighbor random walk. Another is the nearest neighbor random walk which is not allowed to backtrack. The final model is the smart kinetic walk. For all three of these models the distribution of the point where the walk exits a simply connected domain D in the plane converges weakly to harmonic measure on ∂D as the lattice spacing δ → 0. Let ω(0,·;D) be harmonic measure for D, and let ω_δ(0,·;D) be the discrete harmonic measure for one of the random walk models. Our definition of the random walk models is unusual in that we average over the orientation of the lattice with respect to the domain. We are interested in the limit of (ω_δ(0,·;D) − ω(0,·;D))/δ. Our Monte Carlo simulations of the three models lead to the conjecture that this limit equals c_{M,L} ρ_D(z) times Lebesgue measure with respect to arc length along the boundary, where the function ρ_D(z) depends on the domain, but not on the model or lattice, and the constant c_{M,L} depends on the model and on the lattice, but not on the domain. So there is a form of universality for this first order correction. We also give an explicit formula for the conjectured density ρ_D.
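A minimal simulation of the first model (the nearest neighbor walk, here with a fixed lattice orientation rather than the orientation averaging used in the paper): started from the center of the unit disk, the exit distribution should be close to uniform harmonic measure, so quadrant counts roughly balance.

```python
import math, random

random.seed(1)
delta, n_walks = 0.1, 4000
steps = [(delta, 0.0), (-delta, 0.0), (0.0, delta), (0.0, -delta)]
quadrants = [0, 0, 0, 0]

for _ in range(n_walks):
    x = y = 0.0
    while x * x + y * y < 1.0:            # walk until exiting the unit disk
        dx, dy = random.choice(steps)
        x, y = x + dx, y + dy
    theta = math.atan2(y, x) % (2 * math.pi)
    quadrants[min(int(theta // (math.pi / 2)), 3)] += 1

print(quadrants)  # roughly equal: discrete harmonic measure from 0 is near-uniform
```

Detecting the O(δ) correction studied in the paper would require far more samples and a comparison against exact harmonic measure; this sketch only shows the leading-order convergence.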
High order Poisson Solver for unbounded flows
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2015-01-01
This paper presents a high order method for solving the unbounded Poisson equation on a regular mesh using a Green's function solution. The high order convergence was achieved by formulating mollified integration kernels that were derived from a filter regularisation of the solution field. We use the equations of fluid mechanics as an example, but the method can be used in many physical problems to solve the Poisson equation on a rectangular unbounded domain. For the two-dimensional case we propose an infinitely smooth test function which allows for arbitrary high order convergence using Gaussian smoothing. The method was implemented on a rectangular domain using fast Fourier transforms (FFT) to increase computational efficiency. The Poisson solver was extended to directly solve the derivatives of the solution. This is achieved either by including the differential operator in the integration kernel.
Selective Contrast Adjustment by Poisson Equation
Directory of Open Access Journals (Sweden)
Ana-Belen Petro
2013-09-01
Full Text Available Poisson Image Editing is a technique for modifying the gradient vector field of an image and then recovering an image whose gradient approaches this modified gradient field. This amounts to solving a Poisson equation, an operation which can be performed efficiently by the Fast Fourier Transform (FFT). This paper describes an algorithm applying this technique, with two different variants. The first variant enhances the contrast by increasing the gradient in the dark regions of the image. This method is well adapted to images with back light or strong shadows, and reveals details in the shadows. The second variant of the same Poisson technique enhances all small gradients in the image, thus also sometimes revealing details and texture.
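The FFT route to the Poisson equation can be sketched on a periodic grid (a simplification; the paper's algorithm handles the boundary conditions appropriate to real images): transform the right-hand side, divide by the symbol of the Laplacian, and transform back.

```python
import numpy as np

n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")

u_true = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)   # smooth test "image"
f = -8 * np.pi ** 2 * u_true                              # its Laplacian, known analytically

# Solve  laplacian(u) = f  with periodic boundaries by division in Fourier space.
k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
k2 = k[:, None] ** 2 + k[None, :] ** 2
f_hat = np.fft.fft2(f)
u_hat = np.zeros_like(f_hat)
nonzero = k2 > 0
u_hat[nonzero] = f_hat[nonzero] / (-k2[nonzero])          # zero mode pinned to 0
u = np.real(np.fft.ifft2(u_hat))

err = np.max(np.abs(u - u_true))
print(err)  # tiny: the test function is a single Fourier mode, so inversion is exact
```

In the editing application, `f` would be the divergence of the modified gradient field rather than an analytic Laplacian; the solve itself is identical.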
Poisson-Jacobi reduction of homogeneous tensors
International Nuclear Information System (INIS)
Grabowski, J; Iglesias, D; Marrero, J C; Padron, E; Urbanski, P
2004-01-01
The notion of homogeneous tensors is discussed. We show that there is a one-to-one correspondence between multivector fields on a manifold M, homogeneous with respect to a vector field Δ on M, and first-order polydifferential operators on a closed submanifold N of codimension 1 such that Δ is transversal to N. This correspondence relates the Schouten-Nijenhuis bracket of multivector fields on M to the Schouten-Jacobi bracket of first-order polydifferential operators on N and generalizes the Poissonization of Jacobi manifolds. Actually, it can be viewed as a super-Poissonization. This procedure of passing from a homogeneous multivector field to a first-order polydifferential operator can also be understood as a sort of reduction; in the standard case, a half of a Poisson reduction. A dual version of the above correspondence yields in particular the correspondence between Δ-homogeneous symplectic structures on M and contact structures on N.
[Application of detecting and taking overdispersion into account in Poisson regression model].
Bouche, G; Lepage, B; Migeot, V; Ingrand, P
2009-08-01
Researchers often use the Poisson regression model to analyze count data. Overdispersion can occur when a Poisson regression model is used, resulting in an underestimation of the variance of the regression model parameters. Our objective was to take overdispersion into account and assess its impact with an illustration based on the data of a study investigating the relationship between use of the Internet to seek health information and the number of primary care consultations. Three methods, overdispersed Poisson, a robust estimator, and negative binomial regression, were performed to take overdispersion into account in explaining variation in the number (Y) of primary care consultations. We tested overdispersion in the Poisson regression model using the ratio of the sum of squared Pearson residuals over the number of degrees of freedom (χ²/df). We then fitted the three models and compared parameter estimation to the estimations given by the Poisson regression model. The variance of the number of primary care consultations (Var[Y] = 21.03) was greater than the mean (E[Y] = 5.93) and the χ²/df ratio was 3.26, which confirmed overdispersion. Standard errors of the parameters varied greatly between the Poisson regression model and the three other regression models. The interpretation of estimates from two variables (using the Internet to seek health information and single parent family) would have changed according to the model retained, with significance levels of 0.06 and 0.002 (Poisson), 0.29 and 0.09 (overdispersed Poisson), 0.29 and 0.13 (robust estimator) and 0.45 and 0.13 (negative binomial), respectively. Different methods exist to solve the problem of underestimating the variance in the Poisson regression model when overdispersion is present. The negative binomial regression model seems to be particularly accurate because of its theoretical distribution; in addition, this regression is easy to perform with ordinary statistical software packages.
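The χ²/df diagnostic is easy to reproduce on synthetic data. A sketch with an intercept-only model (made-up counts, not the consultation data from the study): a Gamma-Poisson mixture, which is negative binomial, inflates the ratio well above 1, while genuinely Poisson data stay near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Overdispersed counts: a Gamma-Poisson mixture is negative binomial.
lam = rng.gamma(shape=1.0, scale=5.0, size=n)   # heterogeneous rates, mean 5
y = rng.poisson(lam)

# Intercept-only Poisson model: the fitted mean is just the sample mean.
mu = y.mean()
dispersion = np.sum((y - mu) ** 2 / mu) / (n - 1)   # Pearson chi2 / df

# The same diagnostic on genuinely Poisson data stays near 1.
y0 = rng.poisson(5.0, size=n)
disp0 = np.sum((y0 - y0.mean()) ** 2 / y0.mean()) / (n - 1)
print(dispersion, disp0)  # a ratio well above 1 flags overdispersion
```

With covariates, the same ratio is computed from the residuals of the fitted regression; values much above 1, like the 3.26 reported above, signal that Poisson standard errors are too small.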
Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.
2017-09-01
In this paper, a Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited to improve marginal ice-water classification. Pixel-level ice concentration is presented for the comparison of methods based on CRF. Furthermore, in order to explore the most effective statistical distribution model to be integrated into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on 2 scenes around Prydz Bay and Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method can resolve the sea ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction of ice and water.
Directory of Open Access Journals (Sweden)
Y. Zhang
2017-09-01
Full Text Available In this paper, a Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited to improve marginal ice-water classification. Pixel-level ice concentration is presented for the comparison of methods based on CRF. Furthermore, in order to explore the most effective statistical distribution model to be integrated into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on 2 scenes around Prydz Bay and Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method can resolve the sea ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction of ice and water.
International Nuclear Information System (INIS)
Ni Xiaohui; Jiang Zhiqiang; Zhou Weixing
2009-01-01
The dynamics of a complex system is usually recorded in the form of time series, which can be studied through its visibility graph from a complex network perspective. We investigate the visibility graphs extracted from fractional Brownian motions and multifractal random walks, and find that the degree distributions exhibit power-law behaviors, in which the power-law exponent α is a linear function of the Hurst index H of the time series. We also find that the degree distribution of the visibility graph is mainly determined by the temporal correlation of the original time series with minor influence from the possible multifractal nature. As an example, we study the visibility graphs constructed from three Chinese stock market indexes and unveil that the degree distributions have power-law tails, where the tail exponents of the visibility graphs and the Hurst indexes of the indexes are close to the α∼H linear relationship.
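A natural visibility graph can be built directly from its definition with an O(n²) scan: two samples are linked when every sample between them lies strictly below the straight line joining them. A small sketch on a made-up series:

```python
def visibility_graph(series):
    """Natural visibility graph: nodes i < j are linked when every sample
    between them lies below the line segment joining (i, y_i) and (j, y_j)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

ts = [0.7, 0.3, 0.9, 0.2, 0.5, 0.8]          # toy time series
edges = visibility_graph(ts)
degree = [sum(1 for a, b in edges if i in (a, b)) for i in range(len(ts))]
print(sorted(edges))
print(degree)   # the peak at index 2 sees the most nodes
```

On long fractional-Brownian or multifractal series, the empirical degree distribution of this graph is what exhibits the power-law tail with exponent depending linearly on the Hurst index, as described above.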
Vincent, Jean-Louis; Privalle, Christopher T; Singer, Mervyn; Lorente, José A; Boehm, Erwin; Meier-Hellmann, Andreas; Darius, Harald; Ferrer, Ricard; Sirvent, Josep-Maria; Marx, Gernot; DeAngelo, Joseph
2015-01-01
To compare the effectiveness and safety of the hemoglobin-based nitric oxide scavenger, pyridoxalated hemoglobin polyoxyethylene, against placebo in patients with vasopressor-dependent distributive shock. Multicenter, randomized, placebo-controlled, open-label study. Sixty-one participating ICUs in six European countries (Austria, Belgium, Germany, the Netherlands, Spain, and United Kingdom). All patients admitted with distributive shock, defined as the presence of at least two systemic inflammatory response syndrome criteria, persisting norepinephrine dependence and evidence of organ dysfunction/hypoperfusion despite adequate fluid resuscitation. Patients were randomized to receive 0.25 mL/kg/hr pyridoxalated hemoglobin polyoxyethylene (20 mg Hb/kg/hr) or an equal volume of placebo, infused for up to 150 hours, in addition to conventional vasopressor therapy. The study was stopped after interim analysis showed higher mortality in the pyridoxalated hemoglobin polyoxyethylene group and an increased prevalence of adverse events. At this time, 377 patients had been randomized to pyridoxalated hemoglobin polyoxyethylene (n = 183) or placebo (n = 194). Age, gender, type of patient (medical/surgical), and Acute Physiology and Chronic Health Evaluation II scores were similar between groups. Twenty-eight-day mortality rate was 44.3% in the pyridoxalated hemoglobin polyoxyethylene group versus 37.6% in the placebo group (OR, 1.29; 95% CI, 0.85-1.95; p = 0.227). In patients with higher organ dysfunction scores (Sepsis-related Organ Failure Assessment > 13), mortality rates were significantly higher in the pyridoxalated hemoglobin polyoxyethylene group when compared with those in placebo-treated patients (60.9% vs 39.2%; p = 0.014). Survivors who received pyridoxalated hemoglobin polyoxyethylene had a longer vasopressor-free time (21.3 vs 19.7 d; p = 0.035). In this randomized, controlled phase III trial in patients with vasopressor-dependent distributive shock
Optimized thick-wall cylinders by virtue of Poisson's ratio selection
International Nuclear Information System (INIS)
Whitty, J.P.M.; Henderson, B.; Francis, J.; Lloyd, N.
2011-01-01
The principal stress distributions in thick-wall cylinders due to variation in the Poisson's ratio are predicted using analytical and finite element methods. Analyses of appropriate brittle and ductile failure criteria show that, under the isochoric pressure conditions investigated, auxetic (i.e. those possessing a negative Poisson's ratio) materials act as stress concentrators; hence they are predicted to fail before their conventional (i.e. possessing a positive Poisson's ratio) material counterparts. The key finding of the work presented shows that for constrained thick-wall cylinders the maximum tensile principal stress can vanish at a particular Poisson's ratio and aspect ratio. This phenomenon is exploited in order to present an optimized design criterion for thick-wall cylinders. Moreover, via the use of a cogent finite element model, this criterion is also shown to be applicable for the design of micro-porous materials.
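The role of Poisson's ratio can be seen in the classical Lamé solution, which underlies the analytical part of such an analysis: for a constrained (plane-strain) cylinder the axial stress is sigma_z = nu*(sigma_r + sigma_t), so its sign follows the sign of nu. A sketch with hypothetical geometry and pressure (not the paper's cases):

```python
def lame_stresses(r, a, b, p_int, nu):
    """Lame solution for an internally pressurised thick-wall cylinder
    (inner radius a, outer radius b). The axial stress uses the constrained
    plane-strain form sigma_z = nu * (sigma_r + sigma_t)."""
    A = p_int * a ** 2 / (b ** 2 - a ** 2)
    B = p_int * a ** 2 * b ** 2 / (b ** 2 - a ** 2)
    s_r = A - B / r ** 2
    s_t = A + B / r ** 2
    s_z = nu * (s_r + s_t)          # = 2 * nu * A, constant through the wall
    return s_r, s_t, s_z

a, b, p = 1.0, 2.0, 100.0            # hypothetical radii and internal pressure
s_r, s_t, s_z = lame_stresses(a, a, b, p, nu=0.3)
s_rb, _, _ = lame_stresses(b, a, b, p, nu=0.3)
_, _, sz_aux = lame_stresses(a, a, b, p, nu=-0.3)   # auxetic: axial stress flips sign
print(s_r, s_t, s_z)   # sigma_r = -p at the bore; sigma_t is tensile there
```

Scanning `nu` in this model shows how the constrained axial stress passes through zero, the mechanism behind the vanishing maximum tensile principal stress reported above; the paper's full result additionally depends on the aspect ratio via finite element analysis.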
Fluid limit of the continuous-time random walk with general Levy jump distribution functions
Energy Technology Data Exchange (ETDEWEB)
Cartea, A. [Birbeck College, University of London; Del-Castillo-Negrete, Diego B [ORNL
2007-01-01
The continuous time random walk (CTRW) is a natural generalization of the Brownian random walk that allows the incorporation of waiting time distributions ψ(t) and general jump distribution functions η(x). There are two well-known fluid limits of this model in the uncoupled case. For exponentially decaying waiting times and Gaussian jump distribution functions the fluid limit leads to the diffusion equation. On the other hand, for algebraically decaying waiting times, ψ(t) ~ t^(-(1+β)), and algebraically decaying jump distributions, η(x) ~ x^(-(1+α)), corresponding to Levy stable processes, the fluid limit leads to the fractional diffusion equation of order α in space and order β in time. However, these are two special cases of a wider class of models. Here we consider the CTRW for the most general Levy stochastic processes in the Levy-Khintchine representation for the jump distribution function and obtain an integrodifferential equation describing the dynamics in the fluid limit. The resulting equation contains as special cases the regular and the fractional diffusion equations. As an application we consider the case of CTRWs with exponentially truncated Levy jump distribution functions. In this case the fluid limit leads to a transport equation with exponentially truncated fractional derivatives which describes the interplay between memory, long jumps, and truncation effects in the intermediate asymptotic regime. The dynamics exhibits a transition from superdiffusion to subdiffusion with the crossover time scaling as τ_c ~ λ^(-α/β), where 1/λ is the truncation length scale. The asymptotic behavior of the propagator (Green's function) of the truncated fractional equation exhibits a transition from algebraic decay for t <
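The first fluid limit mentioned above (exponential waiting times and Gaussian jumps giving ordinary diffusion) can be checked by direct simulation: the position variance at time T should approach (T/τ)σ². Illustrative parameters:

```python
import random

random.seed(7)
tau, sigma, T, n_walkers = 1.0, 1.0, 50.0, 3000

# Uncoupled CTRW: exponential waits (mean tau), Gaussian jumps (std sigma).
positions = []
for _ in range(n_walkers):
    t, x = 0.0, 0.0
    while True:
        t += random.expovariate(1.0 / tau)   # next renewal time
        if t > T:
            break                            # no further jump before time T
        x += random.gauss(0.0, sigma)
    positions.append(x)

mean = sum(positions) / n_walkers
var = sum((p - mean) ** 2 for p in positions) / n_walkers
print(var)  # close to (T / tau) * sigma**2 = 50
```

Replacing the exponential waits or Gaussian jumps with heavy-tailed (Pareto-type) draws turns this into a simulation of the fractional, sub- or superdiffusive regimes discussed above.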
Effect of particle size distribution on permeability in the randomly packed porous media
Markicevic, Bojan
2017-11-01
How porous medium heterogeneity influences the medium permeability remains an open question, with both increases and decreases in the permeability value reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum to minimum particle size ratio ranging from three to eight for the different distributions. In all six cases, the average particle size is kept the same. For all media generated, the stochastic homogeneity is checked from the distribution of the three coordinates of the particle centers, where uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, in which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in such a domain, and after checking for the axial linearity of the pressure, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher than for the other media, which can be explained by the volumetric contribution of larger particles and the larger passages available for fluid flow.
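For rough intuition (not the paper's numerical procedure), the Kozeny-Carman correlation with the Sauter mean diameter d32 = Σd³/Σd² predicts the direction of the bi-modal result: larger particles dominate d32 and enlarge the flow passages. Note it overstates the magnitude compared with the roughly 25% increase found numerically, so this is illustrative only:

```python
# Kozeny-Carman estimate (a standard correlation, not the paper's simulations):
# k = eps^3 * d32^2 / (180 * (1 - eps)^2), with d32 the Sauter mean diameter.
def sauter_diameter(diameters):
    return sum(d ** 3 for d in diameters) / sum(d ** 2 for d in diameters)

def kozeny_carman(eps, d32):
    return eps ** 3 * d32 ** 2 / (180.0 * (1.0 - eps) ** 2)

eps = 0.36                                   # porosity reported in the abstract
mono = [1.0] * 1000                          # mono-disperse, mean diameter 1
bimodal = [0.5] * 500 + [1.5] * 500          # bi-modal, same number-mean diameter

k_mono = kozeny_carman(eps, sauter_diameter(mono))
k_bi = kozeny_carman(eps, sauter_diameter(bimodal))
print(k_bi / k_mono)   # > 1: the bi-modal packing is predicted to be more permeable
```

The gap between this correlation-based factor and the simulated 25% is itself informative: pore-scale geometry matters beyond what a single effective diameter captures.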
ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION
Directory of Open Access Journals (Sweden)
C. Li
2012-07-01
Full Text Available POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, but it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation
Li, C.
2012-07-01
POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, but it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
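One concrete version of the "triangle geometric relationship" is the orthocenter property of three orthogonal vanishing points: with zero skew and square pixels, the principal point is the orthocenter of their triangle. Under that assumption VZ solves a 2x2 linear system in VX, VY and the principal point, and its error distribution can be estimated by Monte Carlo. All coordinates and noise levels below are hypothetical:

```python
import random

def third_vanishing_point(vx, vy, p):
    """Given two orthogonal-direction vanishing points and the principal point p,
    the third VP follows from p being the orthocenter of the VP triangle:
    (vx - p) . (vz - vy) = 0  and  (vy - p) . (vz - vx) = 0."""
    a1 = (vx[0] - p[0], vx[1] - p[1])
    a2 = (vy[0] - p[0], vy[1] - p[1])
    b1 = a1[0] * vy[0] + a1[1] * vy[1]
    b2 = a2[0] * vx[0] + a2[1] * vx[1]
    det = a1[0] * a2[1] - a1[1] * a2[0]
    return ((b1 * a2[1] - b2 * a1[1]) / det, (a1[0] * b2 - a2[0] * b1) / det)

p = (0.0, 0.0)                                # principal point (hypothetical)
vx, vy = (1000.0, 50.0), (-80.0, 900.0)       # two detected VPs (hypothetical)
vz = third_vanishing_point(vx, vy, p)

# Monte Carlo propagation of (hypothetical) VP noise to VZ.
random.seed(3)
samples = [third_vanishing_point(
    (vx[0] + random.gauss(0, 5), vx[1] + random.gauss(0, 2)),
    (vy[0] + random.gauss(0, 2), vy[1] + random.gauss(0, 5)), p)
    for _ in range(2000)]
mx = sum(s[0] for s in samples) / len(samples)
my = sum(s[1] for s in samples) / len(samples)
print(vz, (mx, my))
```

The scatter of `samples` around `vz` plays the role of the error distribution of VZ evaluated in the paper; replacing the isotropic Gaussian noise with draws from the estimated error ellipses of VX and VY gives the corresponding anisotropic result.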
Takahashi, T.; Obana, K.; Yamamoto, Y.; Nakanishi, A.; Kaiho, Y.; Kodaira, S.; Kaneda, Y.
2012-12-01
The Nankai trough in southwestern Japan is a convergent margin where the Philippine Sea plate is subducted beneath the Eurasian plate. There are major fault segments of huge earthquakes, called the Tokai, Tonankai and Nankai earthquakes. According to the earthquake occurrence history over the past several hundred years, we must expect various rupture patterns, such as simultaneous or nearly continuous ruptures of plural fault segments. The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) conducted seismic surveys at the Nankai trough in order to clarify the mutual relations between seismic structures and fault segments, as a part of "Research concerning Interaction Between the Tokai, Tonankai and Nankai Earthquakes" funded by the Ministry of Education, Culture, Sports, Science and Technology, Japan. This study evaluated the spatial distribution of random velocity inhomogeneities from Hyuga-nada to the Kii channel by using velocity seismograms of small and moderate sized earthquakes. Random velocity inhomogeneities are estimated by the peak delay time analysis of S-wave envelopes (e.g., Takahashi et al. 2009). Peak delay time is defined as the time lag from the S-wave onset to its maximal amplitude arrival. This quantity mainly reflects the accumulated multiple forward scattering effect due to random inhomogeneities, and is quite insensitive to inelastic attenuation. Peak delay times are measured from the rms envelopes of horizontal components at 4-8 Hz, 8-16 Hz and 16-32 Hz. This study used velocity seismograms recorded by 495 ocean bottom seismographs and 378 onshore seismic stations. The onshore stations comprise the F-net and Hi-net stations maintained by the National Research Institute for Earth Science and Disaster Prevention (NIED) of Japan. It is assumed that the random inhomogeneities are represented by a von Karman type PSDF. A preliminary result of the inversion analysis shows that the spectral gradient of the PSDF (i.e., the scale dependence of
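The peak delay measurement defined above (the time lag from the S-wave onset to the maximum of the rms envelope) can be sketched on a synthetic trace. The envelope shape, sampling rate and smoothing window below are illustrative assumptions, not the authors' processing parameters.

```python
import numpy as np

def peak_delay_time(trace, onset_index, fs, win=0.5):
    """Peak delay: lag (s) from the S-wave onset to the maximum of the
    RMS envelope, computed here with a simple moving-average smoother."""
    n = max(1, int(win * fs))
    env = np.sqrt(np.convolve(trace**2, np.ones(n) / n, mode="same"))
    return np.argmax(env[onset_index:]) / fs

# synthetic scattered S-wave: modulated noise whose envelope peaks about
# 2 s after a 5 s onset (all parameters are illustrative)
fs = 100.0
t = np.arange(0.0, 20.0, 1.0 / fs)
onset = 5.0
shape = np.where(t >= onset, (t - onset) * np.exp(-(t - onset) / 2.0), 0.0)
rng = np.random.default_rng(1)
trace = shape * rng.standard_normal(t.size)
tau = peak_delay_time(trace, int(onset * fs), fs)
```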
International Nuclear Information System (INIS)
Zhang Yu; Wang Guangyi; Lu Xinmiao; Hu Yongcai; Xu Jiangtao
2016-01-01
The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in CMOS image sensors under low light. In this paper, a physical and statistical model of the random telegraph signal noise in the pixel source follower, based on the binomial distribution, is set up. The number of electrons captured or released by the oxide traps in unit time is described as a random variable which obeys the binomial distribution. As a result, the output states and the corresponding probabilities of the first and the second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has statistical characteristics similar to those of existing models under the effect of the channel length and the density of the oxide traps. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. (paper)
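A minimal sketch of the binomial trap model and correlated double sampling is given below; the trap count, capture probability and voltage step per electron are hypothetical values, and independence of the two CDS samples is assumed so that a closed-form check is available.

```python
import numpy as np

rng = np.random.default_rng(42)
n_traps, p_capture = 5, 0.3        # hypothetical trap count and capture prob.
dv = 0.5e-3                        # assumed output step per trapped electron (V)
n_pixels = 100_000                 # one hundred thousand simulated samples

# trapped-electron counts at the two CDS sampling instants (binomial model)
s1 = rng.binomial(n_traps, p_capture, n_pixels)
s2 = rng.binomial(n_traps, p_capture, n_pixels)
cds_out = (s2 - s1) * dv           # correlated double sampling output (V)

sigma_sim = cds_out.std()
# for independent samples: Var(s2 - s1) = 2 n p (1 - p)
sigma_theory = dv * np.sqrt(2.0 * n_traps * p_capture * (1.0 - p_capture))
```

The histogram of `cds_out` is the (discrete) noise histogram of this simplified model.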
AbdelNabi, Amr A.
2018-02-12
This paper presents new approaches to characterize the achieved performance of hybrid control-access small cells in the context of two-tier multi-input multi-output (MIMO) cellular networks with random interference distributions. The hybrid scheme at small cells (such as femtocells) allows for sharing radio resources between the two network tiers according to the densities of small cells and their associated users, as well as the observed interference power levels in the two network tiers. The analysis considers MIMO transceivers at all nodes, for which antenna arrays can be utilized to implement transmit antenna selection (TAS) and receive maximal ratio combining (MRC) under MIMO point-to-point channels. Moreover, it targets network-level models of interference sources inside each tier and between the two tiers, which are assumed to follow Poisson field processes, in order to fully capture the effect of the Poisson field distribution on the MIMO spatial domain. Two practical scenarios of interference sources are addressed, including highly-correlated or uncorrelated transmit antenna arrays of the serving macrocell base station. The analysis presents new analytical approaches that can characterize the downlink outage probability performance in any tier. Furthermore, the outage performance in the high signal-to-noise ratio (SNR) regime is also obtained, which can be useful to deduce diversity and/or coding gains.
Stationary response of multi-degree-of-freedom vibro-impact systems to Poisson white noises
International Nuclear Information System (INIS)
Wu, Y.; Zhu, W.Q.
2008-01-01
The stationary response of multi-degree-of-freedom (MDOF) vibro-impact (VI) systems to random pulse trains is studied. The system is formulated as a stochastically excited and dissipated Hamiltonian system. The constraints are modeled as non-linear springs according to the Hertz contact law. The random pulse trains are modeled as Poisson white noises. The approximate stationary probability density function (PDF) for the response of MDOF dissipated Hamiltonian systems to Poisson white noises is obtained by solving the fourth-order generalized Fokker-Planck-Kolmogorov (FPK) equation using a perturbation approach. As examples, two-degree-of-freedom (2DOF) VI systems under external and parametric Poisson white noise excitations, respectively, are investigated. The validity of the proposed approach is confirmed by using the results obtained from Monte Carlo simulation. It is shown that the non-Gaussian behaviour depends on the product of the mean arrival rate of the impulses and the relaxation time of the oscillator.
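A Poisson white noise excitation (a pulse train with Poisson arrival times and independent random magnitudes) can be generated and applied to a single-DOF oscillator as sketched below. This is a plain Monte Carlo illustration with assumed parameters, not the generalized FPK perturbation solution used in the paper, and the impact constraint is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
rate, T, dt = 50.0, 200.0, 1e-3       # impulse arrival rate, duration, time step
omega0, zeta = 2.0 * np.pi, 0.05      # natural frequency, damping ratio

# Poisson white noise: impulses at Poisson arrival times, Gaussian magnitudes
n_imp = rng.poisson(rate * T)
arrivals = rng.uniform(0.0, T, n_imp)
force = np.zeros(int(T / dt))
np.add.at(force, (arrivals / dt).astype(int), rng.standard_normal(n_imp) / dt)

# semi-implicit Euler for x'' + 2*zeta*omega0*x' + omega0^2*x = F(t)
x = v = 0.0
xs = np.empty(force.size)
for i, f in enumerate(force):
    v += dt * (f - 2.0 * zeta * omega0 * v - omega0**2 * x)
    x += dt * v
    xs[i] = x
```

The histogram of `xs` estimates the stationary PDF; its departure from a Gaussian is governed by the product of the arrival rate and the oscillator relaxation time, as noted in the abstract.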
Honjo, Toshimori; Uchida, Atsushi; Amano, Kazuya; Hirano, Kunihito; Someya, Hiroyuki; Okumura, Haruka; Yoshimura, Kazuyuki; Davis, Peter; Tokura, Yasuhiro
2009-05-25
A high speed physical random bit generator is applied for the first time to a gigahertz clocked quantum key distribution system. Random phase-modulation in a differential-phase-shift quantum key distribution (DPS-QKD) system is performed using a 1-Gbps random bit signal which is generated by a physical random bit generator with chaotic semiconductor lasers. Stable operation is demonstrated for over one hour, and sifted keys are successfully generated at a rate of 9.0 kbps with a quantum bit error rate of 3.2% after 25-km fiber transmission.
Spectral statistics in semiclassical random-matrix ensembles
International Nuclear Information System (INIS)
Feingold, M.; Leitner, D.M.; Wilkinson, M.
1991-01-01
A novel random-matrix ensemble is introduced which mimics the global structure inherent in the Hamiltonian matrices of autonomous, ergodic systems. Changes in its parameters induce a transition between a Poisson and a Wigner distribution for the level spacings, P(s). The intermediate distributions are uniquely determined by a single scaling variable. Semiclassical constraints force the ensemble to be in a regime with Wigner P(s) for systems with more than two degrees of freedom.
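The Poisson-to-Wigner contrast in the level-spacing distribution P(s) can be illustrated numerically: independent (Poisson) levels give exponential spacings with many near-degeneracies, whereas a GOE matrix shows level repulsion. The sketch below uses the central part of the spectrum in place of a proper spectral unfolding, which is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# Poisson spectrum: independent levels -> exponential nearest-neighbor spacings
levels = np.sort(rng.uniform(0.0, 1.0, 5001))
s_poisson = np.diff(levels)
s_poisson /= s_poisson.mean()

# GOE spectrum: real symmetric Gaussian matrix -> level repulsion (Wigner-like)
n = 1000
a = rng.standard_normal((n, n))
ev = np.sort(np.linalg.eigvalsh((a + a.T) / 2.0))
s_goe = np.diff(ev[n // 4: 3 * n // 4])   # central part: roughly flat density
s_goe /= s_goe.mean()

# level repulsion: small spacings are common for Poisson, rare for GOE
frac_small_poisson = np.mean(s_poisson < 0.1)   # ~ 1 - exp(-0.1) ~ 0.095
frac_small_goe = np.mean(s_goe < 0.1)
```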
Efficient information transfer by Poisson neurons
Czech Academy of Sciences Publication Activity Database
Košťál, Lubomír; Shinomoto, S.
2016-01-01
Roč. 13, č. 3 (2016), s. 509-520 ISSN 1547-1063 R&D Projects: GA ČR(CZ) GA15-08066S Institutional support: RVO:67985823 Keywords : information capacity * Poisson neuron * metabolic cost * decoding error Subject RIV: BD - Theory of Information Impact factor: 1.035, year: 2016
Poisson brackets for fluids and plasmas
International Nuclear Information System (INIS)
Morrison, P.J.
1982-01-01
Noncanonical yet Hamiltonian descriptions are presented of many of the non-dissipative field equations that govern fluids and plasmas. The dynamical variables are the usually encountered physical variables. These descriptions have the advantage that gauge conditions are absent, but at the expense of introducing peculiar Poisson brackets. Clebsch-like potential descriptions that reverse this situation are also introduced.
Almost Poisson integration of rigid body systems
International Nuclear Information System (INIS)
Austin, M.A.; Krishnaprasad, P.S.; Li-Sheng Wang
1993-01-01
In this paper we discuss the numerical integration of Lie-Poisson systems using the mid-point rule. Since such systems result from the reduction of Hamiltonian systems with symmetry by Lie group actions, we also present examples of reconstruction rules for the full dynamics. A primary motivation is to preserve in the integration process various conserved quantities of the original dynamics. A main result of this paper is an O(h³) error estimate for the Lie-Poisson structure, where h is the integration step-size. We note that Lie-Poisson systems appear naturally in many areas of physical science and engineering, including theoretical mechanics of fluids and plasmas, satellite dynamics, and polarization dynamics. In the present paper we consider a series of progressively complicated examples related to rigid body systems. We also consider a dissipative example associated with a Lie-Poisson system. The behavior of the mid-point rule and an associated reconstruction rule is numerically explored. 24 refs., 9 figs
Dimensional reduction for generalized Poisson brackets
Acatrinei, Ciprian Sorin
2008-02-01
We discuss dimensional reduction for Hamiltonian systems which possess nonconstant Poisson brackets between pairs of coordinates and between pairs of momenta. The associated Jacobi identities imply that the dimensionally reduced brackets are always constant. Some examples are given alongside the general theory.
Affine Poisson Groups and WZW Model
Directory of Open Access Journals (Sweden)
Ctirad Klimcík
2008-01-01
Full Text Available We give a detailed description of a dynamical system which enjoys a Poisson-Lie symmetry with two non-isomorphic dual groups. The system is obtained by taking the q → ∞ limit of the q-deformed WZW model and the understanding of its symmetry structure results in uncovering an interesting duality of its exchange relations.
Extended q-Gaussian and q-exponential distributions from gamma random variables
Budini, Adrián A.
2015-05-01
The family of q-Gaussian and q-exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q-Gaussian and q-exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q-Gaussian and modified q-exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.
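One concrete instance of this construction, offered here as a sketch of the related superstatistical route rather than the paper's exact mapping, is that an exponential variable whose rate is itself gamma distributed is a function of two independent gamma variables and follows a Lomax (q-exponential) law with q = 1 + 1/(k + 1). The shape and rate values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
k, beta, n = 3.0, 2.0, 200_000     # gamma shape, gamma rate (assumed values)

lam = rng.gamma(k, 1.0 / beta, n)  # fluctuating rate: Gamma(shape k, rate beta)
x = rng.exponential(1.0 / lam)     # Exp(1)/lam, i.e. a ratio of gamma variables

# marginal of x is Lomax(k, beta): a q-exponential with q = 1 + 1/(k + 1)
q = 1.0 + 1.0 / (k + 1.0)
survival_emp = np.mean(x > 1.0)
survival_theory = (1.0 + 1.0 / beta) ** (-k)   # P(X > x) = (1 + x/beta)^(-k)
```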
Directory of Open Access Journals (Sweden)
Jayaweera SudharmanK
2010-01-01
Full Text Available Performance gain achieved by adding mobile nodes to a stationary sensor network for target detection depends on factors such as the number of mobile nodes deployed, mobility patterns, speed and energy constraints of mobile nodes, and the nature of the target locations (deterministic or random). In this paper, we address the problem of distributed detection of a randomly located target by a hybrid sensor network. Specifically, we develop two decision-fusion architectures for detection: in the first, the impact of node mobility is taken into account in decision updating at the fusion center, while in the second, it is accounted for in decision updating at the node level. The cost of deploying mobile nodes is analyzed in terms of the minimum fraction of mobile nodes required to achieve the desired performance level within a desired delay constraint. Moreover, we consider managing node mobility under given constraints.
Camposeo, Andrea; Del Carro, Pompilio; Persano, Luana; Cyprych, Konrad; Szukalski, Adam; Sznitko, Lech; Mysliwiec, Jaroslaw; Pisignano, Dario
2014-10-28
Room-temperature nanoimprinted, DNA-based distributed feedback (DFB) laser operation at 605 nm is reported. The laser is made of a pure DNA host matrix doped with gain dyes. At high excitation densities, the emission of the untextured dye-doped DNA films is characterized by a broad emission peak with an overall line width of 12 nm and superimposed narrow peaks, characteristic of random lasing. Moreover, direct patterning of the DNA films is demonstrated with a resolution down to 100 nm, enabling the realization of both surface-emitting and edge-emitting DFB lasers with a typical line width of <0.3 nm. The resulting emission is polarized, with a ratio between the TE- and TM-polarized intensities exceeding 30. In addition, the nanopatterned devices dissolve in water within less than 2 min. These results demonstrate the possibility of realizing various physically transient nanophotonics and laser architectures, including random lasing and nanoimprinted devices, based on natural biopolymers.
Hacking on decoy-state quantum key distribution system with partial phase randomization.
Sun, Shi-Hai; Jiang, Mu-Sheng; Ma, Xiang-Chun; Li, Chun-Yan; Liang, Lin-Mei
2014-04-23
Quantum key distribution (QKD) provides means for unconditionally secure key transmission between two distant parties. However, in practical implementations, it suffers from quantum hacking due to device imperfections. Here we propose a hybrid measurement attack, with only linear optics, homodyne detection, and single photon detection, on the widely used vacuum + weak decoy state QKD system when the phase of the source is partially randomized. Our analysis shows that, in some parameter regimes, the proposed attack would result in an entanglement breaking channel but still be able to trick the legitimate users into believing they have transmitted secure keys. That is, the eavesdropper is able to steal all the key information without being discovered by the users. Thus, our proposal reveals that partial phase randomization is not sufficient to guarantee the security of phase-encoding QKD systems with weak coherent states.
3D vector distribution of the electro-magnetic fields on a random gold film
Canneson, Damien; Berini, Bruno; Buil, Stéphanie; Hermier, Jean-Pierre; Quélin, Xavier
2018-05-01
The 3D vector distribution of the electro-magnetic fields at the very close vicinity of the surface of a random gold film is studied. Such films are well known for their properties of light confinement and large fluctuations of local density of optical states. Using Finite-Difference Time-Domain simulations, we show that it is possible to determine the local orientation of the electro-magnetic fields. This allows us to obtain a complete characterization of the fields. Large fluctuations of their amplitude are observed as previously shown. Here, we demonstrate large variations of their direction depending both on the position on the random gold film, and on the distance to it. Such characterization could be useful for a better understanding of applications like the coupling of point-like dipoles to such films.
ACORN—A new method for generating sequences of uniformly distributed Pseudo-random Numbers
Wikramaratna, R. S.
1989-07-01
A new family of pseudo-random number generators, the ACORN (additive congruential random number) generators, is proposed. The resulting numbers are distributed uniformly in the interval [0, 1). The ACORN generators are defined recursively, and the (k + 1)th order generator is easily derived from the kth order generator. Some theorems concerning the period length are presented and compared with existing results for linear congruential generators. A range of statistical tests are applied to the ACORN generators, and their performance is compared with that of the linear congruential generators and the Chebyshev generators. The tests show the ACORN generators to be statistically superior to the Chebyshev generators, while being statistically similar to the linear congruential generators. However, the ACORN generators execute faster than linear congruential generators for the same statistical faithfulness. The main advantages of the ACORN generator are speed of execution, long period length, and simplicity of coding.
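A kth order ACORN generator is straightforward to code from its recursive definition. The sketch below uses an assumed modulus of 2^60 and an odd seed (a standard recommendation for power-of-two moduli), and is meant as an illustration rather than a vetted implementation.

```python
def acorn(seed, order, modulus, n):
    """kth order ACORN: y[m] <- (y[m-1] + y[m]) mod M for m = 1..k;
    the m = 0 stream is the constant seed, and the output is y[k] / M
    in [0, 1). The seed should be odd when M is a power of two."""
    y = [seed] + [0] * order
    out = []
    for _ in range(n):
        for m in range(1, order + 1):
            y[m] = (y[m - 1] + y[m]) % modulus
        out.append(y[order] / modulus)
    return out

u = acorn(seed=123456789, order=10, modulus=2**60, n=10_000)
```

Note that the (k + 1)th order generator reuses the whole state of the kth order one, exactly as the recursive definition in the abstract suggests.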
Dorin, Bryce; Parkinson, Patrick; Scully, Patricia
2018-04-01
The development of cost-effective electrical packaging for randomly distributed micro/nano-scale devices is a widely recognized challenge for fabrication technologies. Three-dimensional direct laser writing (DLW) has been proposed as a solution to this challenge, and has enabled the rapid creation of low-resistance graphitic wires within commercial polyimide substrates. In this work, we utilize the DLW technique to electrically contact three fully encapsulated and randomly positioned light-emitting diodes (LEDs) in a one-step process. The resolution of the contacts is on the order of 20 μm, with an average circuit resistance of 29 ± 18 kΩ per LED contacted. The speed and simplicity of this technique are promising for meeting the needs of future microelectronics and device packaging.
Pure random search for ambient sensor distribution optimisation in a smart home environment.
Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming
2011-01-01
Smart homes are living spaces facilitated with technology that allows individuals to remain in their own homes for longer, rather than be institutionalised. Sensors are the fundamental physical layer within any smart home, as the data they generate are used to inform decision support systems, facilitating appropriate actuator actions. Positioning of sensors is therefore a fundamental characteristic of a smart home. Contemporary smart home sensor distribution is aligned to either (a) a total coverage approach or (b) a human assessment approach. These methods for sensor arrangement are not data-driven strategies; they are unempirical and frequently irrational. This study hypothesised that sensor deployment directed by an optimisation method that uses inhabitants' spatial frequency data as the search space would produce more optimal sensor distributions than the current method of sensor deployment by engineers. Seven human engineers were tasked to create sensor distributions based on perceived utility for 9 deployment scenarios. A Pure Random Search (PRS) algorithm was then tasked to create matched sensor distributions. The PRS method produced superior distributions in 98.4% of test cases (n=64) against human-engineer-instructed deployments when the engineers had no access to the spatial frequency data, and in 92.0% of test cases (n=64) when engineers had full access to these data. These results thus confirmed the hypothesis.
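The study's approach, Pure Random Search over candidate sensor placements scored against an inhabitant's spatial frequency data, can be sketched as below. The grid size, sensor range model and frequency map are all hypothetical stand-ins for the study's real data.

```python
import numpy as np

rng = np.random.default_rng(11)

# hypothetical spatial-frequency map: how often the inhabitant occupies
# each cell of a 10 x 10 discretised living space (normalised to sum to 1)
freq = rng.random((10, 10))
freq /= freq.sum()

def coverage(sensor_cells, radius=1):
    """Fraction of spatial frequency within `radius` (Chebyshev) of a sensor."""
    covered = np.zeros_like(freq, dtype=bool)
    for (r, c) in sensor_cells:
        covered[max(0, r - radius):r + radius + 1,
                max(0, c - radius):c + radius + 1] = True
    return freq[covered].sum()

def pure_random_search(n_sensors, iters=2000):
    """PRS: sample placements uniformly at random, keep the best scoring one."""
    best, best_score = None, -1.0
    for _ in range(iters):
        cand = [tuple(rng.integers(0, 10, 2)) for _ in range(n_sensors)]
        score = coverage(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

placement, score = pure_random_search(n_sensors=4)
```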
Topology determines force distributions in one-dimensional random spring networks
Heidemann, Knut M.; Sageman-Furnas, Andrew O.; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F.; Wardetzky, Max
2018-02-01
Networks of elastic fibers are ubiquitous in biological systems and often provide mechanical stability to cells and tissues. Fiber-reinforced materials are also common in technology. An important characteristic of such materials is their resistance to failure under load. Rupture occurs when fibers break under excessive force and when that failure propagates. Therefore, it is crucial to understand force distributions. Force distributions within such networks are typically highly inhomogeneous and are not well understood. Here we construct a simple one-dimensional model system with periodic boundary conditions by randomly placing linear springs on a circle. We consider ensembles of such networks that consist of N nodes and have an average degree of connectivity z but vary in topology. Using a graph-theoretical approach that accounts for the full topology of each network in the ensemble, we show that, surprisingly, the force distributions can be fully characterized in terms of the parameters (N, z). Despite the universal properties of such (N, z) ensembles, our analysis further reveals that a classical mean-field approach fails to capture force distributions correctly. We demonstrate that network topology is a crucial determinant of force distributions in elastic spring networks.
First passage time distribution and the number of returns for ultrametric random walks
Avetisov, V. A.; Bikulov, A. Kh; Zubarev, A. P.
2009-02-01
In this paper, we consider a homogeneous Markov process ξ(t, ω) on an ultrametric space Q_p, with distribution density f(x, t), x ∈ Q_p, t ∈ R_+, satisfying the equation ∂f(x, t)/∂t = −D_x^α f(x, t), usually called the ultrametric diffusion equation. We construct and examine a random variable τ_{Z_p}(ω) that has the meaning of a first passage time. Also, we obtain a formula for the mean number of returns on the interval (0, t] and give its asymptotic estimates for large t.
On Marginal Distributions of the Ordered Eigenvalues of Certain Random Matrices
Directory of Open Access Journals (Sweden)
Jin Shi
2010-01-01
Full Text Available This paper presents a general expression for the marginal distributions of the ordered eigenvalues of certain important random matrices. The expression, given in terms of matrix determinants, is more compact in representation and more efficient in computational complexity than existing results in the literature. As an illustrative application of the new result, we then analyze the performance of the multiple-input multiple-output singular value decomposition system. Analytical expressions for the average symbol error rate and the outage probability are derived, assuming the general double-scattering fading condition.
Scattering of Dirac Electrons by Randomly Distributed Nitrogen Substitutional Impurities in Graphene
Directory of Open Access Journals (Sweden)
Khamdam Rakhimov
2016-09-01
Full Text Available The propagation of wave packets in a monolayer graphene containing a random distribution of dopant atoms has been explored. The time-dependent, two-dimensional Weyl-Dirac equation was solved numerically to propagate an initial Gaussian-type wave front and to investigate how the set of impurities influences its motion. It has been observed that the charge transport in doped graphene differs from the pristine case. In particular, nitrogen substitutional doping reduces the charge mobility in graphene due to backscattering effects.
Electromagnetic wave propagation in a random distribution of C₆₀ molecules
Energy Technology Data Exchange (ETDEWEB)
Moradi, Afshin, E-mail: a.moradi@kut.ac.ir [Department of Engineering Physics, Kermanshah University of Technology, Kermanshah, Iran and Department of Nano Sciences, Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran (Iran, Islamic Republic of)
2014-10-15
Propagation of electromagnetic waves in a random distribution of C₆₀ molecules is investigated within the framework of classical electrodynamics. Electronic excitations over each C₆₀ molecule surface are modeled by a spherical layer of electron gas represented by two interacting fluids, which takes into account the different nature of the π and σ electrons. It is found that the present medium supports four modes of electromagnetic waves, which can be divided into two groups: one group with shorter wavelength than light waves of the same frequency, and the other with longer wavelength than the free-space radiation.
Magneto-transport properties of a random distribution of few-layer graphene patches
International Nuclear Information System (INIS)
Iacovella, Fabrice; Mitioglu, Anatolie; Pierre, Mathieu; Raquet, Bertrand; Goiran, Michel; Plochocka, Paulina; Escoffier, Walter; Trinsoutrot, Pierre; Vergnes, Hugues; Caussat, Brigitte; Conédéra, Véronique
2014-01-01
In this study, we address the electronic properties of conducting films constituted of an array of randomly distributed few-layer graphene patches and investigate their most salient galvanometric features in the moderate and extreme disorder limits. We demonstrate that, in annealed devices, the ambipolar behaviour and the onset of Landau level quantization in high magnetic field constitute robust hallmarks of few-layer graphene films. In the strong disorder limit, however, the magneto-transport properties are best described by a variable-range hopping behaviour. A large negative magneto-conductance is observed at the charge neutrality point, consistent with a localized transport regime.
DEFF Research Database (Denmark)
Fitzek, Frank; Toth, Tamas; Szabados, Áron
2014-01-01
This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce...... techniques do not require us to retrieve the full original information in order to store meaningful information. Our numerical results show a high resilience over a large number of regeneration cycles compared to other approaches....
Random Linear Network Coding is Key to Data Survival in Highly Dynamic Distributed Storage
DEFF Research Database (Denmark)
Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani
2015-01-01
as the number of available nodes varies greatly over time and keeping track of the system's state becomes unfeasible. As a consequence, conventional erasure correction approaches are ill-suited for maintaining data integrity. In this highly dynamic context, random linear network coding (RLNC) provides...... an interesting solution. Our goal is to characterize RLNC's guaranteed data integrity region in terms of the total number of storage devices that need to be available and stored data per device. We compare our fully distributed RLNC approach to centralized (genie aided) and fully decentralized replication...
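The core idea of RLNC-based storage discussed above, that stored blocks are random linear combinations of the source blocks and the data survives as long as the surviving coefficient vectors span the source space, can be sketched over GF(2). The block sizes and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def gf2_rank(m):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    m = m.copy() % 2
    rank = 0
    rows, cols = m.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

k, n = 8, 16                       # k source blocks, n coded blocks stored
source = rng.integers(0, 2, (k, 32), dtype=np.uint8)   # 32-bit source blocks
coeffs = rng.integers(0, 2, (n, k), dtype=np.uint8)    # random coefficients
coded = (coeffs @ source) % 2      # each stored block: random XOR of sources

# the data is recoverable iff the surviving coefficient matrix has rank k
decodable = gf2_rank(coeffs) == k
```

Losing storage nodes corresponds to deleting rows of `coeffs`; the same rank test then decides whether the survivors still suffice.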
Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features
Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios
2018-04-01
We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way to use standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the networkwise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.
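The fixed-size approximation at the heart of the method can be illustrated with standard Random Fourier Features for the Gaussian (RBF) kernel; the dimensions and kernel width below are arbitrary choices, and this sketch shows only the feature map, not the diffusion protocol itself.

```python
import numpy as np

rng = np.random.default_rng(9)
d, D, sigma = 5, 5000, 1.0          # input dim, feature count, RBF width

W = rng.normal(0.0, 1.0 / sigma, (D, d))   # spectral samples of the RBF kernel
b = rng.uniform(0.0, 2.0 * np.pi, D)

def z(x):
    """Fixed-size feature map: z(x).z(y) ~ exp(-||x - y||^2 / (2 sigma^2))."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
approx = z(x) @ z(y)
exact = float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2)))
```

Because `z(x)` has fixed length D regardless of how many samples have been seen, any linear online learner on these features avoids the growing-parameter problem described above.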
Directory of Open Access Journals (Sweden)
Rodrigues-Motta Mariana
2008-07-01
Full Text Available Abstract Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep.
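A zero-inflated Poisson model of the kind compared above can be fitted by a short EM iteration, treating "structural zero" as the latent indicator. The simulated parameters below are arbitrary, and this sketch omits the regression and sire effects of the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
pi_true, lam_true, n = 0.4, 2.5, 20_000      # assumed simulation parameters
# ZIP data: with prob pi the count is a structural zero, else Poisson(lam)
y = rng.poisson(lam_true, n) * (rng.random(n) > pi_true)

# EM for the zero-inflated Poisson
pi_hat, lam_hat = 0.2, 1.0                   # crude starting values
n_zero = np.sum(y == 0)
for _ in range(200):
    # E-step: P(structural zero | y = 0) under current parameters
    tau = pi_hat / (pi_hat + (1 - pi_hat) * np.exp(-lam_hat))
    # M-step: update mixing weight and Poisson mean
    pi_hat = n_zero * tau / n
    lam_hat = y.sum() / (n - n_zero * tau)
```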
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments
Energy Technology Data Exchange (ETDEWEB)
Fisicaro, G., E-mail: giuseppe.fisicaro@unibas.ch; Goedecker, S. [Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel (Switzerland); Genovese, L. [University of Grenoble Alpes, CEA, INAC-SP2M, L-Sim, F-38000 Grenoble (France); Andreussi, O. [Institute of Computational Science, Università della Svizzera Italiana, Via Giuseppe Buffi 13, CH-6904 Lugano (Switzerland); Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland); Marzari, N. [Theory and Simulations of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, Station 12, CH-1015 Lausanne (Switzerland)
2016-01-07
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, taking into account the non-trivial electrostatic screening coming from the solvent and the electrolytes. As a consequence, the electrostatic potential has to be found by solving the generalized Poisson and the Poisson-Boltzmann equations for neutral and ionic solutions, respectively. In the present work, solvers for both problems have been developed. A preconditioned conjugate gradient method has been implemented for the solution of the generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively with some ten iterations of the ordinary Poisson equation solver. In addition, a self-consistent procedure enables us to solve the non-linear Poisson-Boltzmann problem. Both solvers exhibit very high accuracy and parallel efficiency and allow for the treatment of periodic, free, and slab boundary conditions. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes.
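The conjugate gradient strategy can be illustrated in miniature with a plain (unpreconditioned) CG solve of a 1D Poisson problem with Dirichlet boundaries; this is a toy sketch, not the solver integrated into BigDFT/Quantum-ESPRESSO.

```python
import numpy as np

def cg(apply_A, b, tol=1e-10, maxiter=10_000):
    """Plain conjugate gradient for a symmetric positive-definite operator."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# 1D Poisson problem -phi'' = rho on [0, 1] with phi(0) = phi(1) = 0
n, h = 199, 1.0 / 200
xgrid = np.linspace(h, 1.0 - h, n)
rho = np.pi**2 * np.sin(np.pi * xgrid)        # chosen so phi = sin(pi x)

def laplacian(phi):                           # -phi'' with Dirichlet BCs
    out = 2.0 * phi
    out[:-1] -= phi[1:]
    out[1:] -= phi[:-1]
    return out / h**2

phi = cg(laplacian, rho)
```

A preconditioner (as in the paper) would simply wrap `apply_A` and the residual updates with an approximate inverse to cut the iteration count further.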
A generalized Poisson and Poisson-Boltzmann solver for electrostatic environments
International Nuclear Information System (INIS)
Fisicaro, G.; Goedecker, S.; Genovese, L.; Andreussi, O.; Marzari, N.
2016-01-01
Directory of Open Access Journals (Sweden)
John H Graham
Full Text Available Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
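The model-selection step can be sketched with synthetic data. SciPy has no DPLN implementation and the yeast data set is not reproduced here, so this hedged example compares only two of the seven candidates by AIC:

```python
import numpy as np
from scipy import stats

# Compare candidate distributions for a skewed sample by the Akaike
# Information Criterion, AIC = 2k - 2*logL. The data are synthetic
# (lognormal), standing in for the knockout phenotypic-variation data.
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.6, size=2000)

def aic(sum_logpdf, k):
    return 2 * k - 2 * sum_logpdf

# lognormal fit with location pinned at 0: two free parameters
shape, loc, scale = stats.lognorm.fit(data, floc=0)
aic_logn = aic(np.sum(stats.lognorm.logpdf(data, shape, loc, scale)), 2)

# normal fit: two free parameters
mu, sd = stats.norm.fit(data)
aic_norm = aic(np.sum(stats.norm.logpdf(data, mu, sd)), 2)

best = 'lognormal' if aic_logn < aic_norm else 'normal'
```

Lower AIC wins; with genuinely skewed data the lognormal model should dominate the normal one, mirroring the paper's comparison across its seven candidates.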
Czernik, Pawel
2013-10-01
A hardware random number generator based on 74121 monostable multivibrators, intended for cryptographically secure distributed measurement and control systems with asymmetric resources, is presented. The device is built around a physical electronic oscillator whose circuit consists of two "loop" 74121 monostable multivibrators, a D flip-flop, and an external clock signal source. The clock signal, which controls the D flip-flop, is generated by a computer on one of the parallel-port pins. The author's software-controlled process for acquiring random data from the measuring system into a computer is also described. The system was designed, built, and thoroughly tested for cryptographic security in our laboratory, which constitutes the most important part of this publication. Practical cryptographic security was evaluated using the author's software and the RDieHarder test environment. The obtained results are presented and analyzed in detail, with particular reference to the specific requirements of distributed measurement and control systems with asymmetric resources.
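A minimal software check in the spirit of the DieHarder-style testing described above is the frequency (monobit) test; the bitstreams below are synthetic stand-ins for the hardware output, not data from the 74121 device:

```python
import math

# Frequency (monobit) test: under the null hypothesis of unbiased,
# independent bits, the sum of +/-1-mapped bits divided by sqrt(n) is
# approximately standard normal, which yields the p-value below.
def monobit_p_value(bits):
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

balanced = [0, 1] * 50000        # perfectly balanced stream: s = 0
stuck = [1] * 1000               # a failed generator stuck at 1
p_balanced = monobit_p_value(balanced)   # exactly 1.0
p_stuck = monobit_p_value(stuck)         # vanishingly small
```

Real suites (RDieHarder, the NIST battery) combine many such tests; a single passing monobit p-value is necessary but nowhere near sufficient for cryptographic quality.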
Scattering of elastic waves on fractures randomly distributed in a three-dimensional medium
Strizhkov, S. A.; Ponyatovskaya, V. I.
1985-02-01
The purpose of this work is to determine the variation in basic characteristics of the wave field formed in a jointed medium, such as the intensity of fluctuations of amplitude, correlation radius, scattering coefficient and frequency composition of waves, as functions of jointing parameters. Fractures are simulated by flat plates randomly distributed and chaotically oriented in a three-dimensional medium. Experiments were performed using an alabaster model, a rectangular block measuring 50 x 50 x 120 mm. The plates were introduced into liquid alabaster which was then agitated. Models made in this way contain randomly distributed and chaotically oriented fractures. The influence of these fractures appears as fluctuations in the wave field formed in the medium. The data obtained in experimental studies showed that the dimensions of heterogeneities determined by waves in the jointed medium and the dimensions of the fractures themselves coincide only if the distance between fractures is rather great. If the distance between fractures is less than the wavelength, the dimensions of the heterogeneities located by the wave depend on wavelength.
On the Distribution of Indefinite Quadratic Forms in Gaussian Random Variables
Al-Naffouri, Tareq Y.
2015-10-30
© 2015 IEEE. In this work, we propose a unified approach to evaluating the CDF and PDF of indefinite quadratic forms in Gaussian random variables. Such a quantity appears in many applications in communications, signal processing, information theory, and adaptive filtering. For example, it appears in the mean-square-error (MSE) analysis of the normalized least-mean-square (NLMS) adaptive algorithm and in the SINR associated with each beam in beamforming applications. The key step of the proposed approach is to replace the inequalities that appear in the CDF calculation with unit step functions and to use a complex integral representation of the unit step function. Complex integration then allows us to evaluate the CDF in closed form for the zero-mean case and as a one-dimensional integral for the non-zero-mean case. The saddle-point technique allows us to closely approximate such integrals in the non-zero-mean case. We demonstrate how our approach can be extended to other scenarios, such as the joint distribution of quadratic forms and ratios of such forms, and to characterize quadratic forms in isotropically distributed random variables. We also evaluate the outage probability in multiuser beamforming using our approach, as an application of indefinite forms in communications.
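A Monte Carlo baseline for such CDFs is easy to write down; the small indefinite matrix below is invented for illustration, and it is exactly this brute force that the paper's closed-form and saddle-point evaluations replace:

```python
import numpy as np

# Estimate P(x^T A x <= t) for an indefinite A and standard Gaussian x.
# For A = diag(2, -1) and t = 0, the event is |X2/X1| >= sqrt(2) with
# X2/X1 standard Cauchy, so the exact CDF value is
# 1 - (2/pi)*arctan(sqrt(2)) ~ 0.392, a handy sanity check.
rng = np.random.default_rng(0)
A = np.diag([2.0, -1.0])                    # eigenvalues of mixed sign
x = rng.standard_normal((200000, 2))
q = np.einsum('ij,jk,ik->i', x, A, x)       # batched x^T A x

def cdf_est(t):
    return float(np.mean(q <= t))

p0 = cdf_est(0.0)
exact = 1.0 - (2.0 / np.pi) * np.arctan(np.sqrt(2.0))
```

Monte Carlo needs on the order of 1/p samples to resolve a tail probability p, which is why analytic or saddle-point evaluation matters for outage probabilities.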
Distributed clone detection in static wireless sensor networks: random walk with network division.
Directory of Open Access Journals (Sweden)
Wazir Zada Khan
Full Text Available Wireless Sensor Networks (WSNs) are vulnerable to clone attacks, or node replication attacks, because they are deployed in hostile, unattended environments without physical protection or tamper-resistant sensor nodes. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not detected efficiently, the adversary can mount a wide variety of internal attacks that undermine the network's protocols and sensor applications. Several solutions to the crucial problem of clone detection have been proposed in the literature, but they suffer from serious drawbacks. In this paper we propose a novel distributed solution called Random Walk with Network Division (RWND) for the detection of node replication attacks in static WSNs; it is based on the claimer-reporter-witness framework and combines a simple random walk with network division. RWND detects clone(s) by following the claimer-reporter-witness framework, with a random walk employed within each area to select witness nodes. Splitting the network into levels and areas makes clone detection more efficient, and the high security of the witness nodes is ensured. Our simulation results show that RWND outperforms the existing witness-node-based strategies with moderate communication and memory overheads.
Distributed clone detection in static wireless sensor networks: random walk with network division.
Khan, Wazir Zada; Aalsalem, Mohammed Y; Saad, N M
2015-01-01
Wireless Sensor Networks (WSNs) are vulnerable to clone attacks, or node replication attacks, because they are deployed in hostile, unattended environments without physical protection or tamper-resistant sensor nodes. As a result, an adversary can easily capture and compromise sensor nodes and, after replicating them, insert an arbitrary number of clones/replicas into the network. If these clones are not detected efficiently, the adversary can mount a wide variety of internal attacks that undermine the network's protocols and sensor applications. Several solutions to the crucial problem of clone detection have been proposed in the literature, but they suffer from serious drawbacks. In this paper we propose a novel distributed solution called Random Walk with Network Division (RWND) for the detection of node replication attacks in static WSNs; it is based on the claimer-reporter-witness framework and combines a simple random walk with network division. RWND detects clone(s) by following the claimer-reporter-witness framework, with a random walk employed within each area to select witness nodes. Splitting the network into levels and areas makes clone detection more efficient, and the high security of the witness nodes is ensured. Our simulation results show that RWND outperforms the existing witness-node-based strategies with moderate communication and memory overheads.
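The core mechanism, a random walk that deposits witness copies of a node's location claim, can be sketched as follows. The grid topology, walk length, and detection check are simplifications invented for the example; this is not the RWND protocol itself:

```python
import random

# A claim message performs a simple random walk on a grid of sensor
# nodes; every node the walk visits becomes a witness storing the
# claimer's (id, location) pair and can later flag a conflict.
def random_walk_witnesses(width, height, start, steps, rng):
    x, y = start
    witnesses = {(x, y)}
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 0), width - 1)   # clamp to the grid
        y = min(max(y + dy, 0), height - 1)
        witnesses.add((x, y))
    return witnesses

rng = random.Random(42)
# the same node id claimed from two different locations (a clone)
w1 = random_walk_witnesses(10, 10, (0, 0), 50, rng)
w2 = random_walk_witnesses(10, 10, (9, 9), 50, rng)
# the clone is detected whenever the two walks share a witness node
detected = len(w1 & w2) > 0
```

Dividing the network into areas, as RWND does, confines each walk and raises the chance that two conflicting claims meet at a common witness with bounded overhead.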
Analysis and applications of a frequency selective surface via a random distribution method
International Nuclear Information System (INIS)
Xie Shao-Yi; Huang Jing-Jian; Yuan Nai-Chang; Liu Li-Guo
2014-01-01
A novel frequency selective surface (FSS) for reducing radar cross section (RCS) is proposed in this paper. This FSS is based on the random distribution method, so it can be called a random surface. In this paper, stacked patches serving as periodic elements are employed for RCS reduction. Previous work has demonstrated the efficiency of microstrip patches for this purpose, especially for the reflectarray. First, the relevant theory of the method is described. Then a sample of a three-layer variable-sized stacked-patch random surface with a dimension of 260 mm×260 mm is simulated, fabricated, and measured in order to demonstrate the validity of the proposed design. For normal incidence, an 8-dB RCS reduction is achieved both in simulation and in measurement over 8 GHz–13 GHz. Oblique incidence at 30° is also investigated, where a 7-dB RCS reduction is obtained in the frequency range 8 GHz–14 GHz. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
Stochastic Flows: Dispersion of a Mass Distribution and Lagrangian Observations of a Random Field.
Zirbel, Craig Lee
1993-01-01
We consider two classical problems from statistical fluid mechanics in the modern probabilistic setting of stochastic flows based on stochastic differential equations. Such flows model spatially coherent "noise" superimposed on classical velocity fields. Their theory was developed over the past fifteen years by Kunita, Harris, Baxendale, and Le Jan, among others. The first problem we treat is the dispersion of a mass distribution carried by an isotropic Brownian flow F = {F_{s,t}: 0 ≤ s ≤ t}. Let M_0 be a measure on R^d and define a random measure M_t by M_t(R) = M_0({x in R^d : F_{0,t}(x) in R}) for sets R in R^d. Then M_t(R) is the amount of mass in set R at time t. The dispersion matrix D_t is the centered spatial second moment of M_t, which describes the spreading of M_t relative to its center of mass. We show that, in incompressible flows, E[D_t] grows linearly in t for large t and that deviations from linearity are of order sqrt(t). This is a signature of classical diffusion. In one dimension we find that E[D_t] grows as sqrt(t) for large t. We give other results for the remaining cases. Our methods involve an exact analysis of a one-dimensional diffusion process and the use of comparison theorems. Finally, to sharpen our intuition about the evolution of M_t, we present plots of M_t generated by numerical simulations of isotropic Brownian flows. The second problem concerns Lagrangian observations of a random field A made by a particle carried by a general stochastic flow F. Assuming that A and F are homogeneous, we obtain a formula for the distribution of the Lagrangian observation A(F_{0,t}(x), t) in terms of the Eulerian distribution of A and a function representing a tracer density. This result extends the seminal work of Lumley; our proof refines other work along these lines. In addition, we introduce a class of flows which regenerate at certain random times. This independence assumption allows us to prove the
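The dispersion matrix itself is simple bookkeeping once particle positions are available. The toy dynamics below (a shared smooth drift plus independent Brownian increments, all parameters invented) merely illustrates the computation and is not an isotropic Brownian flow:

```python
import numpy as np

# Track a cloud of equal-mass particles and compute the dispersion
# matrix D_t: the spatial second moment of the mass distribution
# about its center of mass.
rng = np.random.default_rng(3)
n, dim, steps, dt = 500, 2, 200, 0.01
pos = 0.1 * rng.standard_normal((n, dim))     # initial compact blob

for _ in range(steps):
    pos += 0.1 * np.sin(pos[:, ::-1]) * dt            # shared drift
    pos += np.sqrt(dt) * rng.standard_normal((n, dim))  # Brownian part

com = pos.mean(axis=0)
centered = pos - com
D = centered.T @ centered / n    # dispersion matrix D_t (2x2, symmetric)
spread = np.trace(D)             # scalar measure of spreading
```

For independent Brownian increments the trace of D_t grows linearly in t, the "classical diffusion" signature mentioned above; a genuinely correlated flow changes the constants and, in special cases, the growth law.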
Linear odd Poisson bracket on Grassmann variables
International Nuclear Information System (INIS)
Soroka, V.A.
1999-01-01
A linear odd Poisson bracket (antibracket) realized solely in terms of Grassmann variables is suggested. It is revealed that the bracket, which corresponds to a semi-simple Lie group, has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, the second and the third orders with respect to Grassmann derivatives, in contrast with the canonical odd Poisson bracket having the only Grassmann-odd nilpotent differential Δ-operator of the second order. It is shown that these Δ-like operators together with a Grassmann-odd nilpotent Casimir function of this bracket form a finite-dimensional Lie superalgebra. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
Degenerate odd Poisson bracket on Grassmann variables
International Nuclear Information System (INIS)
Soroka, V.A.
2000-01-01
A linear degenerate odd Poisson bracket (antibracket) realized solely on Grassmann variables is proposed. It is revealed that this bracket has at once three Grassmann-odd nilpotent Δ-like differential operators of the first, second and third orders with respect to the Grassmann derivatives. It is shown that these Δ-like operators, together with the Grassmann-odd nilpotent Casimir function of this bracket, form a finite-dimensional Lie superalgebra.
Poisson/Superfish codes for personal computers
International Nuclear Information System (INIS)
Humphries, S.
1992-01-01
The Poisson/Superfish codes calculate static E or B fields in two dimensions and electromagnetic fields in resonant structures. New versions for 386/486 PCs and Macintosh computers have capabilities that exceed the mainframe versions. Notable improvements are interactive graphical post-processors, improved field calculation routines, and a new program for charged particle orbit tracking. (author). 4 refs., 1 tab., figs
Elementary derivation of Poisson structures for fluid dynamics and electrodynamics
International Nuclear Information System (INIS)
Kaufman, A.N.
1982-01-01
The canonical Poisson structure of the microscopic Lagrangian is used to deduce the noncanonical Poisson structure for the macroscopic Hamiltonian dynamics of a compressible neutral fluid and of fluid electrodynamics
Poisson Plus Quantification for Digital PCR Systems.
Majumdar, Nivedita; Banerjee, Swapnonil; Pallas, Michael; Wessel, Thomas; Hegerich, Patricia
2017-08-29
Digital PCR, a state-of-the-art nucleic acid quantification technique, works by spreading the target material across a large number of partitions. The average number of molecules per partition is estimated using Poisson statistics and then converted into a concentration by dividing by the partition volume. This standard approach assumes identical partition sizes; violations of that assumption cause the target quantity to be underestimated under Poisson modeling, especially at higher concentrations. The Poisson-Plus model corrects this underestimation, provided the statistics of the volume variation are well characterized. The volume variation was measured on the chip-array-based QuantStudio 3D Digital PCR System using the ROX fluorescence level as a proxy for effective load volume per through-hole. Monte Carlo simulations demonstrate the efficacy of the proposed correction. Empirical measurement of the model parameters characterizing the effective load volume on QuantStudio 3D Digital PCR chips is presented. The model was used to analyze digital PCR experiments and showed improved accuracy in quantification. At higher concentrations, the modeling must take effective fill-volume variation into account to produce accurate estimates. The difference between the standard and the new modeling is positively correlated with the extent of fill-volume variation in the effective load of the reactions.
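The standard Poisson step described above is compact. The partition counts and volume below are made up for illustration, and the Poisson-Plus correction itself additionally requires the measured volume-variation statistics, which are not modeled here:

```python
import math

# Standard Poisson quantification for digital PCR: with N partitions
# of volume v (here in nanolitres) and k positive partitions,
#   lambda = -ln(1 - k/N)   copies per partition,
#   concentration = lambda / v.
def dpcr_concentration(positives, partitions, volume_nl):
    p = positives / partitions          # fraction of positive partitions
    lam = -math.log(1.0 - p)            # mean copies per partition
    return lam / volume_nl              # copies per nanolitre

# hypothetical run: 12000 of 20000 partitions positive, 0.8 nL each
c = dpcr_concentration(positives=12000, partitions=20000, volume_nl=0.8)
```

The logarithm accounts for partitions holding more than one molecule; when effective volumes vary, the same positive fraction corresponds to a higher true concentration, which is the underestimation the Poisson-Plus model corrects.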
Collision prediction models using multivariate Poisson-lognormal regression.
El-Basyouny, Karim; Sayed, Tarek
2009-07-01
This paper advocates the use of multivariate Poisson-lognormal (MVPLN) regression to develop models for collision count data. The MVPLN approach presents an opportunity to incorporate the correlations across collision severity levels and their influence on safety analyses. The paper introduces a new multivariate hazardous location identification technique, which generalizes the univariate posterior probability of excess that has been commonly proposed and applied in the literature. In addition, the paper presents an alternative approach for quantifying the effect of the multivariate structure on the precision of expected collision frequency. The MVPLN approach is compared with independent (separate) univariate Poisson-lognormal (PLN) models with respect to model inference, goodness of fit, identification of hot spots, and precision of expected collision frequency. The MVPLN is modeled using the WinBUGS platform, which facilitates computation of posterior distributions as well as providing a goodness-of-fit measure for model comparisons. The results indicate that the estimates of the extra Poisson variation parameters were considerably smaller under MVPLN, leading to higher precision. The improvement in precision is due mainly to the fact that MVPLN accounts for the correlation between the latent variables representing property damage only (PDO) and injuries plus fatalities (I+F). This correlation was estimated at 0.758, which is highly significant, suggesting that higher PDO rates are associated with higher I+F rates, as the collision likelihood for both types is likely to rise due to similar deficiencies in roadway design and/or other unobserved factors. In terms of goodness of fit, the MVPLN model provided a superior fit compared with the independent univariate models. The multivariate hazardous location identification results demonstrated that some hazardous locations could be overlooked if the analysis were restricted to the univariate models.
Conflict-cost based random sampling design for parallel MRI with low rank constraints
Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie
2015-05-01
In compressed sensing MRI, the design of the random sampling pattern is very important. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling; it formulates image reconstruction as a structured low-rank matrix completion problem. Variable-density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high-density region, so the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to the k-space location with the least conflict cost on the circle of highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields a lower normalized mean square error (NMSE) than VD Poisson discs.
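The core Poisson-disc constraint, samples neither too close together nor too far apart, is easiest to see in a plain dart-throwing sampler. The parameters below are invented, and variable-density patterns additionally shrink the minimum radius toward the k-space center:

```python
import random

# Dart-throwing Poisson-disc sampling in the unit square: a candidate
# point is accepted only if it lies at least r away from every point
# accepted so far, enforcing the minimum-distance property.
def poisson_disc(n_target, r, rng, max_tries=20000):
    pts = []
    for _ in range(max_tries):
        c = (rng.random(), rng.random())
        if all((c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 >= r * r
               for p in pts):
            pts.append(c)
            if len(pts) == n_target:
                break
    return pts

rng = random.Random(0)
pts = poisson_disc(50, 0.08, rng)
min_gap = min(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
              for i, a in enumerate(pts) for b in pts[:i])
```

Dart throwing slows badly as the domain fills, which is one reason practical k-space pattern generators (and the conflict-cost method above) replace it with smarter candidate selection.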
Statistical modelling of Poisson/log-normal data
International Nuclear Information System (INIS)
Miller, G.
2007-01-01
In statistical data fitting, self-consistency is checked by examining the closeness of the quantity χ²/NDF to 1, where χ² is the sum of squares of (data minus fit) divided by standard deviation, and NDF is the number of data points minus the number of fit parameters. In order to calculate χ² one needs an expression for the standard deviation. In this note several alternative expressions for the standard deviation of data distributed according to a Poisson/log-normal distribution are proposed and evaluated by Monte Carlo simulation. Two preferred alternatives are identified. The use of replicate data to obtain the uncertainty is problematic for a small number of replicates; a method to correct this problem is proposed. The log-normal approximation is good for sufficiently positive data. A modification of the log-normal approximation is proposed, which allows it to be used to test the hypothesis that the true value is zero. (authors)
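The self-consistency check itself is a one-liner once a fit and the standard deviations are in hand; the straight-line data below are synthetic, standing in for the Poisson/log-normal case studied in the note:

```python
import numpy as np

# Reduced chi-square check: chi2 = sum(((data - fit)/sigma)^2) and
# NDF = n_data - n_parameters; a consistent fit gives chi2/NDF ~ 1.
rng = np.random.default_rng(7)
x = np.linspace(0, 10, 50)
sigma = np.full_like(x, 0.5)                 # known per-point sigma
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma)   # noisy line

# weighted least-squares line fit: 2 parameters
coeffs = np.polyfit(x, y, 1, w=1.0 / sigma)
fit = np.polyval(coeffs, x)

chi2 = np.sum(((y - fit) / sigma) ** 2)
ndf = len(x) - 2
reduced = chi2 / ndf
```

If the assumed sigmas are too small, chi2/NDF drifts well above 1; too large, well below 1. That is exactly why the choice among the note's alternative standard-deviation expressions matters.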
Tavala, Amir; Dovzhik, Krishna; Schicker, Klaus; Koschak, Alexandra; Zeilinger, Anton
Probing the visual system of humans and animals in the very-low-photon-rate regime has recently attracted the quantum optics community. In an experiment on isolated photoreceptor cells of Xenopus, the cell output signal was measured while stimulating it with pulses of sub-Poisson-distributed photons. The results showed a single-photon detection efficiency of 29 +/- 4.7% [1]. Another behavioral experiment on humans suggests a lower detection capability at the perception level, with a success probability of 0.516 +/- 0.01 (i.e., slightly better than a random guess) [2]. Although the species are different, both biological models and experimental observations with classical light stimuli suggest that a fraction of single-photon responses is filtered somewhere within the retina network and/or during neural processing in the brain. In this ongoing experiment, we look for a quantitative answer to this question by measuring the output signals of the last neural layer of WT mouse retina using microelectrode arrays. We use a heralded downconversion single-photon source. We stimulate the retina directly, since the eye lens (responsible for 20-50% of optical loss and scattering [2]) has been removed. Here, we demonstrate our first results, which confirm a response to the sub-Poisson-distributed pulses. This project was supported by the Austrian Academy of Sciences, SFB FoQuS F 4007-N23 funded by FWF, and ERC QIT4QAD 227844 funded by the EU Commission.
Bao, Haibo; Cao, Jinde
2011-01-01
This paper is concerned with the state estimation problem for a class of discrete-time stochastic neural networks (DSNNs) with random delays. The effects of both the variation range and the distribution probability of the time delay are taken into account in the proposed approach. The stochastic disturbances are described in terms of a Brownian motion, and the time-varying delay is characterized by introducing a Bernoulli stochastic variable. By employing a Lyapunov-Krasovskii functional, sufficient delay-distribution-dependent conditions are established in terms of linear matrix inequalities (LMIs) that guarantee the existence of the state estimator and can be checked readily with the Matlab toolbox. The main feature of the results obtained in this paper is that they depend not only on the bound but also on the distribution probability of the time delay; we obtain a larger admissible variation range of the delay, and hence our results are less conservative than the traditional delay-independent ones. One example is given to illustrate the effectiveness of the proposed result. Copyright © 2010 Elsevier Ltd. All rights reserved.
Modeling highway-traffic headway distributions using superstatistics.
Abul-Magd, A Y
2007-11-01
We study traffic clearance distributions (i.e., the instantaneous gap between successive vehicles) and time-headway distributions by applying the Beck and Cohen superstatistics. We model the transition from the free phase to the congested phase with increasing vehicle density as a transition from Poisson statistics to those of random-matrix theory. We derive an analytic expression for the spacing distributions that interpolates between the Poisson distribution and Wigner's surmise and apply it to the distributions of the net distance and time gaps between succeeding cars at different traffic-flow densities. The obtained distribution fits the experimental results for single-vehicle data of the Dutch freeway A9 and the German freeway A5.
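The two limiting statistics are easy to sample and compare numerically. This sketch uses inverse-CDF sampling rather than actual headway data:

```python
import numpy as np

# Unit-mean spacing samples for the two limits used above:
#   Poisson:  p(s) = exp(-s)
#   Wigner (GOE surmise): p(s) = (pi*s/2) exp(-pi*s^2/4),
# with inverse CDF s = sqrt(-(4/pi) ln(1-u)).
rng = np.random.default_rng(5)
n = 200000
s_poisson = -np.log(1.0 - rng.random(n))
s_wigner = np.sqrt(-4.0 * np.log(1.0 - rng.random(n)) / np.pi)

var_p = s_poisson.var()      # exponential: variance ~ 1
var_w = s_wigner.var()       # Wigner: variance ~ 4/pi - 1 ~ 0.27
# level repulsion: Wigner strongly suppresses very small gaps
repulsion = np.mean(s_wigner < 0.1) < np.mean(s_poisson < 0.1)
```

Both families have unit mean, so the contrast lives in the shape: the Wigner side suppresses near-zero gaps (vehicles "repel" at high density) and has markedly smaller variance, which is what an interpolating superstatistical distribution must bridge.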
Random distribution of background charge density for numerical simulation of discharge inception
International Nuclear Information System (INIS)
Grange, F.; Loiseau, J.F.; Spyrou, N.
1998-01-01
Models of electric streamers based on a uniform background density of electrons may appear unphysical, as the number of electrons in the small active region located in the vicinity of the electrode tip under regular conditions can be less than one. To avoid this, the electron background is modelled by a random density distribution such that, after a certain time lag, at least one electron is present in the grid close to the point electrode. The modelling performed shows that streamer inception is not very sensitive to the initial location of the charged particles; the ionizing front, however, may be delayed by several tens of nanoseconds, depending on the distance the electron has to drift before reaching the anode. (J.U.)
Random matrix analysis of the monopole strength distribution in {sup 208}Pb
Energy Technology Data Exchange (ETDEWEB)
Severyukhin, A. P., E-mail: sever@theor.jinr.ru [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics (Russian Federation); Åberg, S. [LTH, Lund University, Mathematical Physics (Sweden); Arsenyev, N. N. [Joint Institute for Nuclear Research, Bogoliubov Laboratory of Theoretical Physics (Russian Federation); Nazmitdinov, R. G. [Universitat de les Illes Balears, Departament de Física (Spain); Pichugin, K. N. [Kirensky Institute of Physics (Russian Federation)
2016-11-15
We study statistical properties of the 0{sup +} spectrum of {sup 208}Pb in the energy region E{sub x} ≤ 20 MeV. We use the Skyrme interaction SLy4 as our model Hamiltonian to create a single-particle spectrum and to analyze excited states. The finite-rank separable approximation for the particle–hole interaction enables us to perform the calculations in large configuration spaces. We show that while the position of the monopole resonance centroid is determined by one-phonon excitations of 0{sup +}, the phonon–phonon coupling is crucial for the description of the strength distribution of the 0{sup +} spectrum. In fact, this coupling has an impact on the spectral rigidity Δ{sub 3}(L) which is shifted towards the random matrix limit of the Gaussian orthogonal ensembles.
Hsieh, S T; Crawford, T O; Griffin, J W
1994-04-11
The nature of neurofilament organization within the axonal cytoskeleton has been the subject of controversy for many years. Previous reports have suggested that neurofilaments are randomly distributed in the radial dimension of the myelinated axon. Randomness of distribution implies that there is no interaction between neurofilaments, while order in distribution suggests the presence of forces between neurofilaments. To address the issue of randomness vs. order, we evaluated neurofilament distribution by two different statistical approaches--nearest-neighbor distance and the Poisson tile-counting method. Neurofilament nearest-neighbor distances in a myelinated axon differ from nearest-neighbor distances of a set of random points with similar density (40.6 +/- 7.0 nm vs. 30.7 +/- 12.9 nm, P masking of other organelles. To further characterize the distribution of neurofilaments, we compared the relationship between nearest-neighbor distance and density for three sets of data: evenly spaced points, randomly distributed points, and measured neurofilament coordinates. Neurofilaments conform to neither the evenly spaced nor the random distribution model; instead, neurofilament distribution falls into an intermediate position between evenly spaced and random distributions. This study also demonstrates that the nearest-neighbor distance method of assessing neurofilament distribution offers several technical and theoretical advantages over the Poisson tile-counting method.
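The nearest-neighbor benchmark for complete spatial randomness is straightforward to reproduce numerically. The points below are synthetic, not filament coordinates:

```python
import numpy as np
from scipy.spatial import cKDTree

# For a 2D homogeneous Poisson point process of intensity lam, the
# expected nearest-neighbour distance is 0.5/sqrt(lam). Measured
# coordinates can be compared against this random benchmark, as in
# the nearest-neighbour analysis described above.
rng = np.random.default_rng(11)
n = 20000
pts = rng.random((n, 2))            # unit square, so lam = n

tree = cKDTree(pts)
d, _ = tree.query(pts, k=2)         # k=2: nearest point besides itself
nn = d[:, 1]
mean_nn = float(nn.mean())
expected = 0.5 / np.sqrt(n)
```

A measured mean noticeably above the benchmark (at matched density) indicates spacing that is more even than random, which is the intermediate, partially ordered regime the neurofilament data fall into.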
Sparsity-based Poisson denoising with dictionary learning.
Giryes, Raja; Elad, Michael
2014-12-01
The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to apply sparse-representation modeling to the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods in high SNR and achieves state-of-the-art results in cases of low SNR.
Localized buckling of a microtubule surrounded by randomly distributed cross linkers.
Jin, M Z; Ru, C Q
2013-07-01
Microtubules supported by surrounding cross linkers in eukaryotic cells can bear a much higher compressive force than free-standing microtubules. Different from some previous studies, which treated the surroundings as a continuum elastic foundation or elastic medium, the present paper develops a micromechanics numerical model to examine the role of randomly distributed discrete cross linkers in the buckling of compressed microtubules. First, the proposed numerical approach is validated by reproducing the uniform multiwave buckling mode predicted by the existing elastic-foundation model. For more realistic buckling of microtubules surrounded by randomly distributed cross linkers, the present numerical model predicts that the buckling mode is localized at one end in agreement with some known experimental observations. In particular, the critical force for localized buckling, predicted by the present model, is insensitive to microtubule length and can be about 1 order of magnitude lower than those given by the elastic-foundation model, which suggests that the elastic-foundation model may have overestimated the critical force for buckling of microtubules in vivo. In addition, unlike the elastic-foundation model, the present model can capture the effect of end conditions on the critical force and wavelength of localized buckling. Based on the known data of spacing and elastic constants of cross linkers available in literature, the critical force and wavelength of the localized buckling mode, predicted by the present model, are compared to some experimental data with reasonable agreement. Finally, two empirical formulas are proposed for the critical force and wavelength of the localized buckling of microtubules surrounded by cross linkers.
Szmyt, Wojciech; Guerra, Carlos; Utke, Ivo
2017-01-01
In this work we modelled the diffusive transport of a dilute gas along arrays of randomly distributed, vertically aligned nanocylinders (nanotubes or nanowires), as opposed to gas diffusion in long pores, which is described by the well-known Knudsen theory. Analytical expressions for (i) the gas diffusion coefficient inside such arrays, (ii) the time between collisions of molecules with the nanocylinder walls (mean time of flight), (iii) the surface impingement rate, and (iv) the Knudsen number of such a system were rigorously derived based on a random-walk model of a molecule that undergoes memoryless, diffusive reflections from nanocylinder walls, assuming the molecular regime of gas transport. We specifically show that the gas diffusion coefficient inside such arrays is inversely proportional to the areal density of cylinders and their mean diameter. An example calculation of a diffusion coefficient is presented for a system of titanium isopropoxide molecules diffusing between vertically aligned carbon nanotubes. Our findings are important for the correct modelling and optimisation of gas-based deposition techniques, such as atomic layer deposition or chemical vapour deposition, frequently used for surface functionalisation of high-aspect-ratio nanocylinder arrays in solar cells and energy storage applications. Furthermore, gas sensing devices with high-aspect-ratio nanocylinder arrays and the growth of vertically aligned carbon nanotubes require a fundamental understanding and precise modelling of gas transport to optimise such processes.
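The central scaling result, that the diffusion coefficient is inversely proportional to the areal density of cylinders and their mean diameter, can be sketched numerically. This is a back-of-the-envelope model, not the authors' derivation: the kinetic-theory prefactor 1/3, the nanotube density, diameter, and molecular speed below are all assumed for illustration.

```python
import math

def mean_free_path(areal_density, diameter):
    # For randomly placed vertical cylinders, a molecule travelling in the
    # plane meets on average one cylinder per length 1/(n*d), where n is
    # the cylinders' areal density and d their mean diameter.
    return 1.0 / (areal_density * diameter)

def diffusion_coefficient(areal_density, diameter, mean_speed):
    # Kinetic-theory style estimate D ~ (1/3) * v * lambda; the 1/3
    # prefactor is an assumption of this sketch, not the paper's result.
    return mean_free_path(areal_density, diameter) * mean_speed / 3.0

# Hypothetical carbon-nanotube array: n = 1e14 cylinders/m^2, d = 10 nm,
# and a rough thermal speed of 150 m/s for a heavy precursor molecule.
n, d, v = 1e14, 10e-9, 150.0
D = diffusion_coefficient(n, d, v)
print(f"{D:.2e} m^2/s")  # → 5.00e-05 m^2/s
```

Doubling either the areal density or the mean diameter halves D, which is the paper's inverse-proportionality claim.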
Ledford, Christy J W; Womack, Jasmyne J; Rider, Heather A; Seehusen, Angela B; Conner, Stephen J; Lauters, Rebecca A; Hodge, Joshua A
2017-09-01
As pregnant mothers increasingly engage in shared decision making regarding prenatal decisions, such as induction of labor, the patient's level of activation may influence pregnancy outcomes. One potential tool to increase patient activation in the clinical setting is mobile applications. However, research is limited in comparing mobile apps with other modalities of patient education and engagement tools. This study was designed to test the effectiveness of a mobile app as a replacement for a spiral notebook guide as a patient education and engagement tool in the prenatal clinical setting. This randomized controlled trial was conducted in the Women's Health Clinic and Family Health Clinic of three hospitals. Repeated-measures analysis of covariance was used to test intervention effects in the study sample of 205 patients. Mothers used a mobile app interface to more frequently record information about their pregnancy; however, across time, mothers using a mobile app reported a significant decrease in patient activation. The unexpected negative effects in the group of patients randomized to the mobile app prompt these authors to recommend that health systems pause before distributing their own version of mobile apps that may decrease patient activation. Mobile apps can be inherently empowering and engaging, but how a system encourages their use may ultimately determine their adoption and success.
Discrete random distribution of source dopants in nanowire tunnel transistors (TFETs)
Sylvia, Somaia; Abul Khayer, M.; Alam, Khairul; Park, Hong-Hyun; Klimeck, Gerhard; Lake, Roger
2013-03-01
InAs and InSb nanowire (NW) tunnel field effect transistors (TFETs) require highly degenerate source doping to support the high electric fields in the tunnel region. For a target on-current of 1 μA, the doping requirement may be as high as 1.5 × 10^20 cm^-3 in a NW with a diameter as low as 4 nm. The small size of these devices demands that the dopants near the tunneling region be treated discretely. Therefore, the effects resulting from the random distribution of dopant atoms in the source of a TFET are studied for 30 test devices. Compared with the transfer characteristics of the same device simulated with a continuum doping model, our results show (1) a spread of I-V toward the positive gate voltage axis, (2) the same average threshold voltage, (3) an average 62% reduction in the on-current, and (4) a slight degradation of the subthreshold slope. Random fluctuations in both the number and placement of dopants will be discussed. Also, as the channel length is scaled down, direct tunneling through the channel starts limiting the device performance. Therefore, a comparison of materials is also performed, showing their ability to block direct tunneling for sub-10 nm channel FETs and TFETs. This work was supported in part by the Center on Functional Engineered Nano Architectonics and the Materials, Structures and Devices Focus Center, under the Focus Center Research Program, and by the National Science Foundation under Grant OCI-0749140.
The probability distribution of extreme precipitation
Korolev, V. Yu.; Gorshenin, A. K.
2017-12-01
On the basis of the negative binomial distribution of the duration of wet periods measured in days, an asymptotic model is proposed for the distribution of the maximum daily rainfall volume during a wet period. The model has the form of a mixture of Fréchet distributions and coincides with the distribution of a positive power of a random variable having the Fisher-Snedecor distribution. The proof of the corresponding result is based on limit theorems for extreme order statistics in samples of random size with a mixed Poisson distribution. The adequacy of the proposed models and methods of their statistical analysis is demonstrated by the example of estimating the extreme-value distribution parameters from real data.
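The limiting scheme described above can be mimicked by direct simulation: draw a wet-period length from a negative binomial distribution, then take the maximum of that many daily volumes. The sketch below assumes exponential daily volumes and arbitrary parameter values purely for illustration; it demonstrates only the heavy upper tail of such maxima, not the Fréchet-mixture limit itself.

```python
import math, random

random.seed(1)

def rpois(lam):
    # Poisson sampler by inversion (fine for the small means used here)
    L, k, p = math.exp(-lam), 0, random.random()
    while p > L:
        p *= random.random()
        k += 1
    return k

def rnegbin(r, p):
    # Negative binomial via its Gamma-Poisson mixture representation
    return rpois(random.gammavariate(r, (1 - p) / p))

def max_daily_rain(r, p, scale):
    # Maximum of iid exponential daily volumes over a wet period whose
    # length is negative binomial (+1 so the period is never empty)
    days = rnegbin(r, p) + 1
    return max(random.expovariate(1.0 / scale) for _ in range(days))

sample = sorted(max_daily_rain(r=2.0, p=0.4, scale=5.0) for _ in range(20000))
median, p99 = sample[10000], sample[19800]
print(round(median, 1), round(p99, 1))  # the 99th percentile sits far above the bulk
```

The random (mixed-Poisson-type) sample size is what stretches the upper quantiles well beyond what a fixed-length maximum would give.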
Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David
2012-01-01
Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
A LATENT CLASS POISSON REGRESSION-MODEL FOR HETEROGENEOUS COUNT DATA
WEDEL, M; DESARBO, WS; BULT, [No Value; RAMASWAMY, [No Value
1993-01-01
In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing
A note on the time decay of solutions for the linearized Wigner-Poisson system
Gamba, Irene
2009-01-01
We consider the one-dimensional Wigner-Poisson system of plasma physics, linearized around a (spatially homogeneous) Lorentzian distribution and prove that the solution of the corresponding linearized problem decays to zero in time. We also give an explicit algebraic decay rate.
General solution of Poisson equation in three dimensions for disk-like galaxies
International Nuclear Information System (INIS)
Tong, Y.; Zheng, X.; Peng, O.
1982-01-01
The general solution of the Poisson equation is obtained by means of integral transformations for |k|r >> 1, provided that the perturbed density of disk-like galaxies is distributed along the radial direction according to the Hankel function. This solution can more accurately represent the outer spiral arms of disk-like galaxies.
Trophallaxis-inspired model for distributed transport between randomly interacting agents
Gräwer, Johannes; Ronellenfitsch, Henrik; Mazza, Marco G.; Katifori, Eleni
2017-08-01
Trophallaxis, the regurgitation and mouth-to-mouth transfer of liquid food between members of eusocial insect societies, is an important process that allows the fast and efficient dissemination of food in the colony. Trophallactic systems are typically treated as a network of agent interactions. This approach, though valuable, does not easily lend itself to analytic predictions. In this work we consider a simple trophallactic system of randomly interacting agents with finite carrying capacity, and calculate analytically and via a series of simulations the global food intake rate for the whole colony as well as observables describing how uniformly the food is distributed within the nest. Our model and predictions provide a useful benchmark to assess to what extent the observed food uptake rates and efficiency of food distribution are due to stochastic effects or to specific trophallactic strategies of the ant colony. Our work also serves as a stepping stone to describing the collective properties of more complex trophallactic systems, such as those including division of labor between foragers and workers.
DEFF Research Database (Denmark)
Yura, Harold; Hanson, Steen Grüner
2012-01-01
Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...
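The two-step method described in this abstract, spectral shaping of white Gaussian noise followed by a memoryless inverse-transform of the marginal, can be sketched in one dimension. The 3-tap moving-average filter and the Exponential(1) target marginal below are stand-ins chosen for illustration, not the patent's specific spectra or distributions.

```python
import math, random

random.seed(2)

def gaussian_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Step 1: white Gaussian noise, coloured by a 3-tap moving average as a
# stand-in for shaping to an arbitrary power spectral density; the
# 1/sqrt(3) factor keeps the marginal at unit variance.
white = [random.gauss(0.0, 1.0) for _ in range(10000)]
coloured = [(white[i] + white[i - 1] + white[i - 2]) / math.sqrt(3.0)
            for i in range(2, len(white))]

# Step 2: memoryless inverse transform to the target marginal, here
# Exponential(1): x = F^-1(Phi(g)) = -ln(1 - Phi(g)).
target = [-math.log(1.0 - gaussian_cdf(g)) for g in coloured]

mean = sum(target) / len(target)
print(round(mean, 2))  # close to 1.0, the Exponential(1) mean
```

The correlation imposed in step 1 survives step 2 (the transform is applied pointwise), which is why the method can control spectrum and amplitude distribution simultaneously, at least approximately.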
Algebraic properties of compatible Poisson brackets
Zhang, Pumei
2014-05-01
We discuss algebraic properties of a pencil generated by two compatible Poisson tensors A(x) and B(x). From the algebraic viewpoint this amounts to studying the properties of a pair of skew-symmetric bilinear forms A and B defined on a finite-dimensional vector space. We describe the Lie group G_P of linear automorphisms of the pencil P = {A + λB}. In particular, we obtain an explicit formula for the dimension of G_P and discuss some other algebraic properties such as solvability and the Levi-Malcev decomposition.
DEFF Research Database (Denmark)
Workman, Christopher; Krogh, Anders Stærmose
1999-01-01
This work investigates whether mRNA has a lower estimated folding free energy than random sequences. The free energy estimates are calculated by the mfold program for prediction of RNA secondary structures. For a set of 46 mRNAs it is shown that the predicted free energy is not significantly different from random sequences with the same dinucleotide distribution. For random sequences with the same mononucleotide distribution it has previously been shown that the native mRNA sequences have a lower predicted free energy, which indicates a more stable structure than random sequences. However, dinucleotide content is important when assessing the significance of predicted free energy, as the physical stability of RNA secondary structure is known to depend on dinucleotide base stacking energies. Even known RNA secondary structures, like tRNAs, can be shown to have predicted free energies...
Analysing count data of Butterflies communities in Jasin, Melaka: A Poisson regression analysis
Afiqah Muhamad Jamil, Siti; Asrul Affendi Abdullah, M.; Kek, Sie Long; Nor, Maria Elena; Mohamed, Maryati; Ismail, Norradihah
2017-09-01
Count outcomes typically have values highly skewed to the right, as they are often characterized by a large number of zeros. The butterfly community data were collected in Jasin, Melaka and consist of 131 subject visits. This paper analyses these count data, with their many zero observations, using Poisson regression, which is better suited to a counting process. Software for Poisson regression is readily available and increasingly used across many fields of research; here the data were analysed with SAS software. The analysis was framed around identifying the concerns and, through Poisson regression, assessing the fit of the data and the reliability of using the count data. The findings indicate that the highest and lowest numbers of subjects come from the third family (Nymphalidae) and the fifth family (Hesperiidae), respectively, and that the Poisson distribution appears to fit the zero values.
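A Poisson regression of the kind applied in such studies can be fitted by Fisher scoring in a few lines. The sketch below uses simulated counts with assumed coefficients (0.5, 0.8) and a single uniform predictor rather than the butterfly data, and hand-rolls the estimator instead of calling SAS:

```python
import math, random

random.seed(3)

def rpois(lam):
    # Poisson sampler by inversion (adequate for the small means used here)
    L, k, p = math.exp(-lam), 0, random.random()
    while p > L:
        p *= random.random()
        k += 1
    return k

# Simulated counts with log E[y] = b0 + b1*x; the true values (0.5, 0.8)
# and the uniform predictor are assumptions of this sketch.
xs = [random.uniform(0.0, 2.0) for _ in range(500)]
ys = [rpois(math.exp(0.5 + 0.8 * x)) for x in xs]

# Newton-Raphson / Fisher scoring for the Poisson log-likelihood, log link
b0, b1 = math.log(sum(ys) / len(ys)), 0.0
for _ in range(25):
    g0 = g1 = h00 = h01 = h11 = 0.0
    for x, y in zip(xs, ys):
        mu = math.exp(b0 + b1 * x)
        g0 += y - mu                  # score w.r.t. intercept
        g1 += (y - mu) * x            # score w.r.t. slope
        h00 += mu; h01 += mu * x; h11 += mu * x * x  # information matrix
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det

print(round(b0, 2), round(b1, 2))  # recovered coefficients, near (0.5, 0.8)
```

The log link guarantees positive fitted means, which is the main reason Poisson regression suits counts better than ordinary least squares.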
Poisson regression for modeling count and frequency outcomes in trauma research.
Gagnon, David R; Doron-LaMarca, Susan; Bell, Margret; O'Farrell, Timothy J; Taft, Casey T
2008-10-01
The authors describe how the Poisson regression method for analyzing count or frequency outcome variables can be applied in trauma studies. The outcome of interest in trauma research may represent a count of the number of incidents of behavior occurring in a given time interval, such as acts of physical aggression or substance abuse. Traditional linear regression approaches assume a normally distributed outcome variable with equal variances over the range of predictor variables, and may not be optimal for modeling count outcomes. An application of Poisson regression is presented using data from a study of intimate partner aggression among male patients in an alcohol treatment program and their female partners. Results of Poisson regression and linear regression models are compared.
PB-AM: An open-source, fully analytical linear poisson-boltzmann solver
Energy Technology Data Exchange (ETDEWEB)
Felberg, Lisa E. [Department of Chemical and Biomolecular Engineering, University of California Berkeley, Berkeley California 94720; Brookes, David H. [Department of Chemistry, University of California Berkeley, Berkeley California 94720; Yap, Eng-Hui [Department of Systems and Computational Biology, Albert Einstein College of Medicine, Bronx New York 10461; Jurrus, Elizabeth [Division of Computational and Statistical Analytics, Pacific Northwest National Laboratory, Richland Washington 99352; Scientific Computing and Imaging Institute, University of Utah, Salt Lake City Utah 84112; Baker, Nathan A. [Advanced Computing, Mathematics, and Data Division, Pacific Northwest National Laboratory, Richland Washington 99352; Division of Applied Mathematics, Brown University, Providence Rhode Island 02912; Head-Gordon, Teresa [Department of Chemical and Biomolecular Engineering, University of California Berkeley, Berkeley California 94720; Department of Chemistry, University of California Berkeley, Berkeley California 94720; Department of Bioengineering, University of California Berkeley, Berkeley California 94720; Chemical Sciences Division, Lawrence Berkeley National Labs, Berkeley California 94720
2016-11-02
We present the open-source distributed software package Poisson-Boltzmann Analytical Method (PB-AM), a fully analytical solution to the linearized Poisson-Boltzmann equation. The PB-AM software package includes the generation of output files appropriate for visualization using VMD, a Brownian dynamics scheme that uses periodic boundary conditions to simulate dynamics, the ability to specify docking criteria, and two different kinetics schemes to evaluate biomolecular association rate constants. Given that PB-AM defines mutual polarization completely and accurately, it can be refactored as a many-body expansion to explore 2- and 3-body polarization. Additionally, the software has been integrated into the Adaptive Poisson-Boltzmann Solver (APBS) software package to make it more accessible to the larger group of scientists, educators and students who are more familiar with the APBS framework.
Canon, Louis-Claude; Jeannot, Emmanuel
2016-01-01
Coping with uncertainties when scheduling task graphs on parallel machines requires non-trivial evaluations. When each computation and communication duration is considered a random variable, evaluating the distribution of the critical path length of such graphs involves computing maximums and sums of possibly dependent random variables. The discrete version of this evaluation problem is known to be #P-hard. Here, we propose two heuristics, CorLCA an...
DEFF Research Database (Denmark)
Ersbøll, Annette Kjær; Ersbøll, Bjarne Kjær
2009-01-01
The K-function is often used to detect spatial clustering in spatial point processes, e.g. clustering of infected herds. Clustering is identified by testing the observed K-function for complete spatial randomness modelled, e.g. by a homogeneous Poisson process. The approach provides information a...
Tetrahedral meshing via maximal Poisson-disk sampling
Guo, Jianwei
2016-02-15
In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
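The dart-throwing intuition behind Poisson-disk sampling can be sketched in a few lines. This naive rejection sketch only approximates maximality and is far from the efficient MPS algorithms the paper builds on; the radius and trial count below are arbitrary.

```python
import math, random

random.seed(7)

def poisson_disk_2d(radius, n_trials=20000):
    # Naive dart throwing in the unit square: keep a candidate only if it
    # stays at least `radius` away from every accepted point. Real MPS
    # implementations track the uncovered area to guarantee maximality;
    # many random trials only approximate that here.
    pts = []
    for _ in range(n_trials):
        cx, cy = random.random(), random.random()
        if all(math.hypot(cx - px, cy - py) >= radius for px, py in pts):
            pts.append((cx, cy))
    return pts

pts = poisson_disk_2d(0.1)
ok = all(math.hypot(ax - bx, ay - by) >= 0.1
         for i, (ax, ay) in enumerate(pts) for bx, by in pts[:i])
print(len(pts), ok)  # a well-spread point set honouring the distance bound
```

The minimum-distance guarantee is what makes such point sets attractive seeds for Delaunay-based tetrahedral meshing: it bounds how badly shaped the resulting elements can be.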
Yanchev, I
2003-01-01
A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is that it leads to a finite dispersion of the random potential, in contrast to previously known correlation functions, which lead to a divergent dispersion. The dispersion, an important characteristic of the random potential distribution that determines the amplitude of the potential fluctuations, is calculated.
Park, Gyeong Cheol; Song, Young Min; Ha, Jong-Hoon; Lee, Yong Tak
2011-07-01
We demonstrate broadband antireflective glasses with subwavelength structures (SWSs) using randomly distributed Ag nanoparticles. Ag nanoparticles formed by a thermal dewetting process were used as an etch mask for dry etching to fabricate antireflective SWSs on the glass surface. The size and shape of the Ag nanoparticles are changed by varying the thickness of the Ag thin film, and the morphology of the SWSs fabricated using the Ag thin films is consistent with that of the Ag nanoparticles. The single-side SWS-integrated glass exhibits an improved transmittance of approximately 96% at 750 nm due to the graded refractive index profile, while the transmittance is only approximately 92.5% for a flat surface. To reduce Fresnel reflection at the other side of the glass substrate, SWSs with optimized Ag film thickness and dry etching conditions were formed on both sides of the glass. The dual-side SWS-integrated glass shows an average transmittance of approximately 97.5% in the wavelength range of 350-750 nm. Transmission band shrinkage effects of the SWS-integrated glass are also observed with increased average size of the Ag nanoparticles.
On a Poisson homogeneous space of bilinear forms with a Poisson-Lie action
Chekhov, L. O.; Mazzocco, M.
2017-12-01
Let \\mathscr A be the space of bilinear forms on C^N with defining matrices A endowed with a quadratic Poisson structure of reflection equation type. The paper begins with a short description of previous studies of the structure, and then this structure is extended to systems of bilinear forms whose dynamics is governed by the natural action A\\mapsto B ABT} of the {GL}_N Poisson-Lie group on \\mathscr A. A classification is given of all possible quadratic brackets on (B, A)\\in {GL}_N× \\mathscr A preserving the Poisson property of the action, thus endowing \\mathscr A with the structure of a Poisson homogeneous space. Besides the product Poisson structure on {GL}_N× \\mathscr A, there are two other (mutually dual) structures, which (unlike the product Poisson structure) admit reductions by the Dirac procedure to a space of bilinear forms with block upper triangular defining matrices. Further generalisations of this construction are considered, to triples (B,C, A)\\in {GL}_N× {GL}_N× \\mathscr A with the Poisson action A\\mapsto B ACT}, and it is shown that \\mathscr A then acquires the structure of a Poisson symmetric space. Generalisations to chains of transformations and to the quantum and quantum affine algebras are investigated, as well as the relations between constructions of Poisson symmetric spaces and the Poisson groupoid. Bibliography: 30 titles.
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
Poisson and negative binomial item count techniques for surveys with sensitive question.
Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin
2017-04-01
Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed-form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
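The mechanics of a Poisson item count design can be illustrated with a simulation: each respondent reports the sum of a Poisson count with known mean and the sensitive indicator, so no individual answer reveals the trait. The parameter values below are hypothetical, and the moment estimator with its rough standard error is a simplified stand-in for the closed-form estimators derived in the paper.

```python
import math, random

random.seed(4)

def rpois(lam):
    # Poisson sampler by inversion
    L, k, p = math.exp(-lam), 0, random.random()
    while p > L:
        p *= random.random()
        k += 1
    return k

# Hypothetical Poisson item count survey: each respondent reports the
# total Y = Z + X, where Z ~ Poisson(lam) with lam known by design and
# X = 1 only if the respondent carries the sensitive trait (true rate 0.15).
lam, pi_true, n = 3.0, 0.15, 50000
responses = [rpois(lam) + (1 if random.random() < pi_true else 0)
             for _ in range(n)]

pi_hat = sum(responses) / n - lam                   # simple moment estimator
se = math.sqrt((lam + pi_hat * (1 - pi_hat)) / n)   # rough standard error
print(round(pi_hat, 3), round(se, 4))
```

Because every non-negative integer response is consistent with either value of X, the design protects privacy while still identifying the population proportion.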
Stochastic Interest Model Based on Compound Poisson Process and Applications in Actuarial Science
Directory of Open Access Journals (Sweden)
Shilong Li
2017-01-01
Full Text Available Considering the stochastic behavior of interest rates in financial markets, we construct a new class of interest models based on the compound Poisson process. Unlike previous work, this paper describes the randomness of interest rates by modeling the force of interest directly with Poisson random jumps. To solve the problem of calculating the accumulated interest force function, an important integral technique is employed, and a concept called the critical value is introduced to investigate the validity condition of this new model. We also discuss actuarial present values of several life annuities under this new interest model. Simulations are done to illustrate the theoretical results, and the effect of the interest model's parameters on actuarial present values is also analyzed.
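A minimal sketch of a jump-driven force of interest can be simulated directly. The model below is an assumption of this sketch, not the paper's specification: each event of a Poisson process permanently raises the force of interest by a fixed amount, and the accumulation factor is the exponential of the integrated force.

```python
import math, random

random.seed(5)

def accumulation_factor(T, delta0, jump_rate, jump_size):
    # Accumulation factor exp(integral of delta(t) dt) on [0, T], where the
    # force of interest starts at delta0 and is raised permanently by
    # jump_size at every event of a Poisson process with intensity jump_rate.
    t, integral, delta = 0.0, 0.0, delta0
    while True:
        wait = random.expovariate(jump_rate)   # time to the next jump
        if t + wait >= T:
            integral += delta * (T - t)
            return math.exp(integral)
        integral += delta * wait
        t += wait
        delta += jump_size

sims = [accumulation_factor(T=10.0, delta0=0.03, jump_rate=0.5, jump_size=0.002)
        for _ in range(5000)]
mean_factor = sum(sims) / len(sims)
print(round(mean_factor, 3))  # above exp(0.3), since jumps only raise the force
```

Averaging such simulated accumulation (or discount) factors over many paths is exactly how actuarial present values would be estimated under a stochastic force of interest.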
Sepúlveda, Nuno
2013-02-26
Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
Simulation on Poisson and negative binomial models of count road accident modeling
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion. In addition, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which an accident happens at a T-junction, with the dependent variable of the generated data following a given distribution, namely the Poisson or negative binomial distribution, and with sample sizes ranging from n = 30 to n = 500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. Model validity was compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from the model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
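The overdispersion that motivates the negative binomial alternative can be demonstrated directly: generate Poisson and negative binomial counts with the same mean and compare their variance-to-mean ratios. All parameter values below are illustrative.

```python
import math, random

random.seed(6)

def rpois(lam):
    # Poisson sampler by inversion
    L, k, p = math.exp(-lam), 0, random.random()
    while p > L:
        p *= random.random()
        k += 1
    return k

def rnegbin(r, p):
    # Gamma-Poisson mixture: NB(r, p) with mean r(1-p)/p and variance mean/p
    return rpois(random.gammavariate(r, (1 - p) / p))

def dispersion(xs):
    # Variance-to-mean ratio: 1 for Poisson data, > 1 under overdispersion
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return v / m

n = 20000
poisson_counts = [rpois(2.0) for _ in range(n)]
negbin_counts = [rnegbin(2.0, 0.5) for _ in range(n)]  # same mean of 2.0
print(round(dispersion(poisson_counts), 2), round(dispersion(negbin_counts), 2))
```

A Poisson regression fitted to the second sample would understate the variance, which is why overdispersed accident data call for negative binomial or hurdle models.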
Poisson-Boltzmann versus Size-Modified Poisson-Boltzmann Electrostatics Applied to Lipid Bilayers.
Wang, Nuo; Zhou, Shenggao; Kekenes-Huskey, Peter M; Li, Bo; McCammon, J Andrew
2014-12-26
Mean-field methods, such as the Poisson-Boltzmann equation (PBE), are often used to calculate the electrostatic properties of molecular systems. In the past two decades, an enhancement of the PBE, the size-modified Poisson-Boltzmann equation (SMPBE), has been reported. Here, the PBE and the SMPBE are reevaluated for realistic molecular systems, namely, lipid bilayers, under eight different sets of input parameters. The SMPBE appears to reproduce the molecular dynamics simulation results better than the PBE only under specific parameter sets, but in general, it performs no better than the Stern layer correction of the PBE. These results emphasize the need for careful discussions of the accuracy of mean-field calculations on realistic systems with respect to the choice of parameters and call for reconsideration of the cost-efficiency and the significance of the current SMPBE formulation.
DEFF Research Database (Denmark)
Jeong, Cheol-Ho
2009-01-01
Most acoustic measurements are based on an assumption of ideal conditions. One such ideal condition is a diffuse and reverberant field. In practice, a perfectly diffuse sound field cannot be achieved in a reverberation chamber. Uneven incident energy density under measurement conditions can cause...... discrepancies between the measured value and the theoretical random incidence absorption coefficient. Therefore the angular distribution of the incident acoustic energy onto an absorber sample should be taken into account. The angular distribution of the incident energy density was simulated using the beam...... tracing method for various room shapes and source positions. The averaged angular distribution is found to be similar to a Gaussian distribution. As a result, an angle-weighted absorption coefficient was proposed by considering the angular energy distribution to improve the agreement between...
Measured PET Data Characterization with the Negative Binomial Distribution Model.
Santarelli, Maria Filomena; Positano, Vincenzo; Landini, Luigi
2017-01-01
An accurate statistical model of PET measurements is a prerequisite for correct image reconstruction when using statistical image reconstruction algorithms, or when pre-filtering operations must be performed. Although radioactive decay follows a Poisson distribution, deviation from Poisson statistics occurs on projection data prior to reconstruction due to physical effects, measurement errors, and correction of scatter and random coincidences. Modelling projection data can aid in understanding the statistical nature of the data in order to develop efficient processing methods and to reduce noise. This paper outlines the statistical behaviour of measured emission data by evaluating the goodness of fit of the negative binomial (NB) distribution model to PET data for a wide range of emission activity values. An NB distribution model is characterized by the mean of the data and the dispersion parameter α that describes the deviation from Poisson statistics. Monte Carlo simulations were performed to evaluate: (a) the performance of the dispersion parameter α estimator, (b) the goodness of fit of the NB model for a wide range of activity values. We focused on the effect produced by correction for random and scatter events in the projection (sinogram) domain, due to their importance in quantitative analysis of PET data. The analysis developed herein allowed us to assess the accuracy of the NB distribution model in fitting corrected sinogram data, and to evaluate the sensitivity of the dispersion parameter α in quantifying deviation from Poisson statistics. The sinogram ROI-based analysis demonstrated that the deviation of the measured data from Poisson statistics can be quantitatively characterized by the dispersion parameter α, in any noise conditions and corrections.
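With the NB parameterization used here (Var = μ + αμ², with α → 0 recovering Poisson), a simple estimator of the dispersion parameter α is the method of moments. The sketch below is a generic estimator on simulated data, not the authors' estimator:

```python
import numpy as np

def nb_dispersion_mom(x):
    """Method-of-moments estimate of the NB dispersion parameter alpha,
    where Var[X] = mu + alpha * mu**2 (alpha -> 0 recovers Poisson)."""
    mu = x.mean()
    var = x.var(ddof=1)
    return max((var - mu) / mu**2, 0.0)

rng = np.random.default_rng(0)
alpha_true, mu = 0.1, 50.0          # illustrative values
n_nb = 1.0 / alpha_true
p = n_nb / (n_nb + mu)
sample = rng.negative_binomial(n_nb, p, size=20000)
alpha_hat = nb_dispersion_mom(sample)
print(alpha_hat)                     # should be near 0.1
```

A Poisson sample run through the same estimator returns a value near zero, so α doubles as a one-number diagnostic of how far the data deviate from Poisson statistics, as the abstract describes.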
Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.
2000-01-01
DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. To model stochastic DSB clustering at large scales, from > 100 Mbp down to much smaller scales, a random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
Directory of Open Access Journals (Sweden)
Wanxing Sheng
2016-05-01
Full Text Available In this paper, a reactive power optimization method based on historical data is investigated to solve the dynamic reactive power optimization problem in distribution networks. In order to reflect the variation of loads, network loads are represented in the form of a random matrix. Load similarity (LS) is defined to measure the degree of similarity between the loads in different days, and the calculation method of the load similarity of the load random matrix (LRM) is presented. By calculating the load similarity between the forecasting random matrix and the random matrix of historical load, the historical reactive power optimization dispatching scheme that best matches the forecasting load can be found for reactive power control usage. The differences in daily load curves between working days and weekends in different seasons are considered in the proposed method. The proposed method is tested on a standard 14-node distribution network with three different types of load. The computational result demonstrates that the proposed method for reactive power optimization is fast, feasible and effective in distribution networks.
International Nuclear Information System (INIS)
Vanderhaegen, D.; Deutsch, C.
1988-01-01
Scattering effects are considered for radiative transfer within randomly distributed binary mixtures in one dimension. The most general formalism is developed within the framework of the invariant imbedding method. The length L of the random sample thus appears as a new variable. One transmission coefficient T(L) suffices to specify the intensities locally. By analogy with the homogeneous situation, one introduces an effective opacity σ_eff with ⟨T(L)⟩ = (1 + σ_eff L)^(-1), fulfilling σ_eff ≤ ⟨σ⟩ = p_0 σ_0 + p_1 σ_1 (0 and 1 respectively refer to the components involved in the mixture). Equality is reached when L → 0, ∞. Otherwise, σ_eff exhibits a deep transmission window.
Poisson cohomology of scalar multidimensional Dubrovin-Novikov brackets
Carlet, Guido; Casati, Matteo; Shadrin, Sergey
2017-04-01
We compute the Poisson cohomology of a scalar Poisson bracket of Dubrovin-Novikov type with D independent variables. We find that the second and third cohomology groups are generically non-vanishing in D > 1. Hence, in contrast with the D = 1 case, the deformation theory in the multivariable case is non-trivial.
Estimation of a Non-homogeneous Poisson Model: An Empirical ...
African Journals Online (AJOL)
This article aims at applying the Nonhomogeneous Poisson process to trends of economic development. For this purpose, a modified Nonhomogeneous Poisson process is derived when the intensity rate is considered as a solution of stochastic differential equation which satisfies the geometric Brownian motion. The mean ...
Formulation of Hamiltonian mechanics with even and odd Poisson brackets
International Nuclear Information System (INIS)
Khudaverdyan, O.M.; Nersesyan, A.P.
1987-01-01
The possibility is studied of constructing an odd Poisson bracket and odd Hamiltonian from the given dynamics in phase superspace (the even Poisson bracket and even Hamiltonian) such that the transition to the new structure does not change the equations of motion. 9 refs
Cluster X-varieties, amalgamation, and Poisson-Lie groups
DEFF Research Database (Denmark)
Fock, V. V.; Goncharov, A. B.
2006-01-01
In this paper, starting from a split semisimple real Lie group G with trivial center, we define a family of varieties with additional structures. We describe them as cluster χ-varieties, as defined in [FG2]. In particular they are Poisson varieties. We define canonical Poisson maps of these varie...
Derivation of relativistic wave equation from the Poisson process
Indian Academy of Sciences (India)
Abstract. A Poisson process is one of the fundamental descriptions for relativistic particles: both fermions and bosons. A generalized linear photon wave equation in dispersive and homogeneous medium with dissipation is derived using the formulation of the Poisson process. This formulation provides a possible ...
Mohebbi, Mohammadreza; Wolfe, Rory; Jolley, Damien
2011-10-03
Analytic methods commonly used in epidemiology do not account for spatial correlation between observations. In regression analyses, omission of that autocorrelation can bias parameter estimates and yield incorrect standard error estimates. We used age standardised incidence ratios (SIRs) of esophageal cancer (EC) from the Babol cancer registry from 2001 to 2005, and extracted socioeconomic indices from the Statistical Centre of Iran. The following models for SIR were used: (1) Poisson regression with agglomeration-specific nonspatial random effects; (2) Poisson regression with agglomeration-specific spatial random effects. Distance-based and neighbourhood-based autocorrelation structures were used for defining the spatial random effects and a pseudolikelihood approach was applied to estimate model parameters. The Bayesian information criterion (BIC), Akaike's information criterion (AIC) and adjusted pseudo R2, were used for model comparison. A Gaussian semivariogram with an effective range of 225 km best fit spatial autocorrelation in agglomeration-level EC incidence. The Moran's I index was greater than its expected value indicating systematic geographical clustering of EC. The distance-based and neighbourhood-based Poisson regression estimates were generally similar. When residual spatial dependence was modelled, point and interval estimates of covariate effects were different to those obtained from the nonspatial Poisson model. The spatial pattern evident in the EC SIR and the observation that point estimates and standard errors differed depending on the modelling approach indicate the importance of accounting for residual spatial correlation in analyses of EC incidence in the Caspian region of Iran. Our results also illustrate that spatial smoothing must be applied with care.
Directory of Open Access Journals (Sweden)
Jolley Damien
2011-10-01
Full Text Available Abstract Background Analytic methods commonly used in epidemiology do not account for spatial correlation between observations. In regression analyses, omission of that autocorrelation can bias parameter estimates and yield incorrect standard error estimates. Methods We used age standardised incidence ratios (SIRs) of esophageal cancer (EC) from the Babol cancer registry from 2001 to 2005, and extracted socioeconomic indices from the Statistical Centre of Iran. The following models for SIR were used: (1) Poisson regression with agglomeration-specific nonspatial random effects; (2) Poisson regression with agglomeration-specific spatial random effects. Distance-based and neighbourhood-based autocorrelation structures were used for defining the spatial random effects and a pseudolikelihood approach was applied to estimate model parameters. The Bayesian information criterion (BIC), Akaike's information criterion (AIC) and adjusted pseudo R2 were used for model comparison. Results A Gaussian semivariogram with an effective range of 225 km best fit spatial autocorrelation in agglomeration-level EC incidence. The Moran's I index was greater than its expected value indicating systematic geographical clustering of EC. The distance-based and neighbourhood-based Poisson regression estimates were generally similar. When residual spatial dependence was modelled, point and interval estimates of covariate effects were different to those obtained from the nonspatial Poisson model. Conclusions The spatial pattern evident in the EC SIR and the observation that point estimates and standard errors differed depending on the modelling approach indicate the importance of accounting for residual spatial correlation in analyses of EC incidence in the Caspian region of Iran. Our results also illustrate that spatial smoothing must be applied with care.
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
International Nuclear Information System (INIS)
Stoneking, M.R.; Den Hartog, D.J.
1997-01-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. We have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. We compare this method with a χ²-minimization routine applied to both simulated and real Thomson scattering data. Differences in the returned fits are greater at low signal level (less than ∼10 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers. copyright 1997 American Institute of Physics
Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties
International Nuclear Information System (INIS)
Stoneking, M.R.; Den Hartog, D.J.
1996-06-01
The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ∼20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
Unimodularity criteria for Poisson structures on foliated manifolds
Pedroza, Andrés; Velasco-Barreras, Eduardo; Vorobiev, Yury
2018-03-01
We study the behavior of the modular class of an orientable Poisson manifold and formulate some unimodularity criteria in the semilocal context, around a (singular) symplectic leaf. Our results generalize some known unimodularity criteria for regular Poisson manifolds related to the notion of the Reeb class. In particular, we show that the unimodularity of the transverse Poisson structure of the leaf is a necessary condition for the semilocal unimodular property. Our main tool is an explicit formula for a bigraded decomposition of modular vector fields of a coupling Poisson structure on a foliated manifold. Moreover, we also exploit the notion of the modular class of a Poisson foliation and its relationship with the Reeb class.
Hadayeghi, Alireza; Shalaby, Amer S; Persaud, Bhagwant N
2010-03-01
A common technique used for the calibration of collision prediction models is the Generalized Linear Modeling (GLM) procedure with the assumption of Negative Binomial or Poisson error distribution. In this technique, fixed coefficients that represent the average relationship between the dependent variable and each explanatory variable are estimated. However, the stationary relationship assumed may hide some important spatial factors of the number of collisions at a particular traffic analysis zone. Consequently, the accuracy of such models for explaining the relationship between the dependent variable and the explanatory variables may be suspected since collision frequency is likely influenced by many spatially defined factors such as land use, demographic characteristics, and traffic volume patterns. The primary objective of this study is to investigate the spatial variations in the relationship between the number of zonal collisions and potential transportation planning predictors, using the Geographically Weighted Poisson Regression modeling technique. The secondary objective is to build on knowledge comparing the accuracy of Geographically Weighted Poisson Regression models to that of Generalized Linear Models. The results show that the Geographically Weighted Poisson Regression models are useful for capturing spatially dependent relationships and generally perform better than the conventional Generalized Linear Models. Copyright 2009 Elsevier Ltd. All rights reserved.
Poisson regression approach for modeling fatal injury rates amongst Malaysian workers
International Nuclear Information System (INIS)
Kamarulzaman Ibrahim; Heng Khai Theng
2005-01-01
Many safety studies are based on the analysis carried out on injury surveillance data. The injury surveillance data gathered for the analysis include information on the number of employees at risk of injury in each of several strata, where the strata are defined in terms of a series of important predictor variables. Further insight into the relationship between fatal injury rates and predictor variables may be obtained by the Poisson regression approach. Poisson regression is widely used in analyzing count data. In this study, Poisson regression is used to model the relationship between fatal injury rates and predictor variables, which are year (1995-2002), gender, recording system and industry type. Data for the analysis were obtained from PERKESO and Jabatan Perangkaan Malaysia. It is found that the assumption that the data follow a Poisson distribution has been violated. After correction for the problem of overdispersion, the predictor variables that are found to be significant in the model are gender, system of recording, industry type, and two interaction effects (interaction between recording system and industry type and between year and industry type).
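A Poisson regression of the kind used here is a log-link GLM, typically fitted by iteratively reweighted least squares (IRLS). A self-contained sketch on simulated data (the coefficients and sample size are illustrative, not the study's):

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Fit a log-link Poisson regression by iteratively reweighted
    least squares (the standard GLM fitting algorithm)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # working response
        w = mu                           # working weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))   # true coefficients 0.5, 0.8
beta_hat = poisson_irls(X, y)
print(beta_hat)
```

The overdispersion correction the abstract mentions is typically applied afterwards by scaling the standard errors with the Pearson dispersion statistic; the point estimates above are unchanged by it.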
Shenoi, Rajesh A; Lai, Benjamin F L; Imran ul-haq, Muhammad; Brooks, Donald E; Kizhakkedathu, Jayachandran N
2013-08-01
Biodegradable multi-functional polymeric nanostructures that undergo controlled degradation in response to physiological cues are important in numerous biomedical applications including drug delivery, bio-conjugation and tissue engineering. In this paper, we report the development of a new class of water soluble multi-functional branched biodegradable polymer with high molecular weight and biocompatibility which demonstrates good correlation of in vivo biodegradation and in vitro hydrolysis. Main chain degradable hyperbranched polyglycerols (HPG) (20-100 kDa) were synthesized by the introduction of acid labile groups within the polymer structure by an anionic ring opening copolymerization of glycidol with ketal-containing epoxide monomers with different ketal structures. The water soluble biodegradable HPGs with randomly distributed ketal groups (RBHPGs) showed controlled degradation profiles in vitro depending on the pH of solution, temperature and the structure of incorporated ketal groups, and resulted in non-toxic degradation products. NMR studies demonstrated the branched nature of the RBHPGs, which correlates with their smaller hydrodynamic radii. The RBHPGs and their degradation products exhibited excellent blood compatibility and tissue compatibility based on various analysis methods, independent of their molecular weight and ketal group structure. When administered intravenously in mice, tritium labeled RBHPG of molecular weight 100 kDa with dimethyl ketal group showed a circulation half life of 2.7 ± 0.3 h, correlating well with the in vitro polymer degradation half life (4.3 h) and changes in the molecular weight profile during the degradation (as measured by gel permeation chromatography) in buffer conditions at 37 °C. The RBHPG degraded into low molecular weight fragments that were cleared from circulation rapidly. The biodistribution and excretion studies demonstrated that RBHPG exhibited significantly lower tissue accumulation and enhanced urinary
Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates
Energy Technology Data Exchange (ETDEWEB)
Laurence, T; Chromy, B
2009-11-10
Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the Levenberg-Marquardt algorithm, commonly used for nonlinear least squares minimization, for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, and is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, which requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE
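Independent of the Levenberg-Marquardt machinery in the paper, the core idea is to minimize the Poisson negative log-likelihood, sum(μ_i − c_i log μ_i), instead of a least squares sum. A sketch using a generic minimizer on a simulated low-count decay histogram (the model, seed and parameters are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def fit_poisson_mle(t, counts, model, x0):
    """Fit by minimizing the Poisson negative log-likelihood
    (dropping the log(c!) term, constant in the parameters)."""
    def nll(params):
        mu = model(t, *params)
        if np.any(mu <= 0):
            return np.inf                 # keep the model rates positive
        return np.sum(mu - counts * np.log(mu))
    return minimize(nll, x0, method="Nelder-Mead").x

# Simulated low-count decay histogram (illustrative parameters).
rng = np.random.default_rng(7)
t = np.arange(0, 50, 1.0)
model = lambda t, a, tau: a * np.exp(-t / tau)
counts = rng.poisson(model(t, 8.0, 12.0))  # true amp 8, lifetime 12

amp, tau = fit_poisson_mle(t, counts, model, x0=[5.0, 8.0])
print(amp, tau)
```

At these count levels a least squares fit of the same histogram gives a biased lifetime, which is the effect both versions of the paper quantify.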
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Laurençot, P.
2007-01-01
Roč. 88, - (2007), s. 325-349 ISSN 0021-7824 R&D Projects: GA ČR GA201/05/0164 Institutional research plan: CEZ:AV0Z10190503 Keywords: Navier-Stokes-Fourier-Poisson system * Smoluchowski-Poisson system * singular limit Subject RIV: BA - General Mathematics Impact factor: 1.118, year: 2007
Directory of Open Access Journals (Sweden)
García-Artiles, María Dolores
2014-12-01
Full Text Available This paper presents the zero-inflated generalised Poisson distribution, which is useful when there is a large presence of zeros in the sample. After presenting the model, we develop a specific program based on Mathematica, overcoming some limitations of alternative approaches such as STATA or EViews, which do not include the zero-inflated Poisson distribution among their routines. The advantages of the model used and the proposed program are illustrated with a real example that is very appropriate to its features, namely an analysis of the factors influencing university students' attendance at tutoring sessions. This example is particularly suitable for showing the usefulness of the methodology presented because it includes a large number of zeros, reflecting the many occasions on which the students do not attend these sessions. The students' place of residence, their attendance at lectures and the application of continual assessment are variables that seem to account for attendance at tutoring sessions.
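The zero-inflated Poisson underlying the generalised model has a two-part likelihood, P(0) = π + (1−π)e^(−λ) and P(k) = (1−π)e^(−λ)λ^k/k! for k ≥ 1, that is straightforward to fit directly. A minimal sketch on simulated data (the generalised version adds a further dispersion parameter, omitted here; all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_nll(params, y):
    """Negative log-likelihood of the zero-inflated Poisson:
    P(0) = pi + (1-pi)e^{-lam}; P(k) = (1-pi) e^{-lam} lam^k / k!."""
    pi, lam = params
    if not (0 < pi < 1) or lam <= 0:
        return np.inf
    log_p0 = np.log(pi + (1 - pi) * np.exp(-lam))
    log_pk = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, log_p0, log_pk))

rng = np.random.default_rng(3)
n = 5000
pi_true, lam_true = 0.4, 2.5
structural_zero = rng.random(n) < pi_true
y = np.where(structural_zero, 0, rng.poisson(lam_true, size=n))

pi_hat, lam_hat = minimize(zip_nll, [0.3, 1.0], args=(y,),
                           method="Nelder-Mead").x
print(pi_hat, lam_hat)
```

Note that π is only partially identified from the zeros alone, since a Poisson also produces zeros; the joint fit above separates the two sources, which is what makes the model suitable for the tutoring-attendance data described.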
Akemann, G; Bloch, J; Shifrin, L; Wettig, T
2008-01-25
We analyze how individual eigenvalues of the QCD Dirac operator at nonzero quark chemical potential are distributed in the complex plane. Exact and approximate analytical results for both quenched and unquenched distributions are derived from non-Hermitian random matrix theory. When comparing these to quenched lattice QCD spectra close to the origin, excellent agreement is found for zero and nonzero topology at several values of the quark chemical potential. Our analytical results are also applicable to other physical systems in the same symmetry class.
Directory of Open Access Journals (Sweden)
Daniel R Feikin
Full Text Available BACKGROUND: Zinc treatment shortens diarrhea episodes and can prevent future episodes. In rural Africa, most children with diarrhea are not brought to health facilities. In a village-randomized trial in rural Kenya, we assessed if zinc treatment might have a community-level preventive effect on diarrhea incidence if available at home versus only at health facilities. METHODS: We randomized 16 Kenyan villages (1,903 eligible children) to receive a 10-day course of zinc and two oral rehydration solution (ORS) sachets every two months at home and 17 villages (2,241 eligible children) to receive ORS at home, but zinc at the health facility only. Children's caretakers were educated in zinc/ORS use by village workers, both unblinded to intervention arm. We evaluated whether incidence of diarrhea and acute lower respiratory illness (ALRI) reported at biweekly home visits and presenting to clinic were lower in zinc villages, using Poisson regression adjusting for baseline disease rates, distance to clinic, and children's age. RESULTS: There were no differences between village groups in diarrhea incidence either reported at the home or presenting to clinic. In zinc villages (1,440 children analyzed), 61.2% of diarrheal episodes were treated with zinc, compared to 5.4% in comparison villages (1,584 children analyzed; p<0.0001). There were no differences in ORS use between zinc (59.6%) and comparison villages (58.8%). Among children with fever or cough without diarrhea, zinc use was low (<0.5%). There was a lower incidence of reported ALRI in zinc villages (adjusted RR 0.68, 95% CI 0.46-0.99), but not presenting at clinic. CONCLUSIONS: In this study, home zinc use to treat diarrhea did not decrease disease rates in the community. However, with proper training, availability of zinc at home could lead to more episodes of pediatric diarrhea being treated with zinc in parts of rural Africa where healthcare utilization is low. TRIAL REGISTRATION: ClinicalTrials.gov NCT
Extremal Properties of an Intermittent Poisson Process Generating 1/f Noise
Grüneis, Ferdinand
2016-08-01
It is well known that the total power of a signal exhibiting a pure 1/f shape is divergent. This phenomenon is also called the infrared catastrophe. Mandelbrot claims that the infrared catastrophe can be overcome by stochastic processes which alternate between active and quiescent states. We investigate an intermittent Poisson process (IPP) which belongs to the family of stochastic processes suggested by Mandelbrot. During the intermission δ (quiescent period) the signal is zero. The active period is divided into random intervals of mean length τ0 consisting of a fluctuating number of events, giving rise to so-called clusters. The advantage of our treatment is that the spectral features of the IPP can be derived analytically. Our considerations are focused on the case that intermission is only a small disturbance of the Poisson process, i.e., on the case that δ ≤ τ0. This makes it difficult or even impossible to discriminate a spike train of such an IPP from that of a Poisson process. We investigate the conditions under which a 1/f spectrum can be observed. It is shown that 1/f noise generated by the IPP is accompanied by extreme variance. In agreement with the considerations of Mandelbrot, the IPP avoids the infrared catastrophe. Spectral analysis of the simulated IPP confirms our theoretical results. The IPP is a model for an almost random walk generating both white and 1/f noise and can be applied to the interpretation of 1/f noise in metallic resistors.
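An intermittent Poisson process of the kind described (active periods of mean length τ0 alternating with quiescent intermissions of mean length δ) can be simulated directly. The sketch below uses exponential period lengths as a simplifying assumption, with all rates and lengths illustrative:

```python
import numpy as np

def intermittent_poisson(rate, t_active, t_quiet, total_time, rng):
    """Event times of a Poisson process of given rate that alternates
    between active periods (mean length t_active) and quiescent
    intermissions (mean length t_quiet), both exponential here."""
    events, t = [], 0.0
    while t < total_time:
        t_on = rng.exponential(t_active)               # active period
        n = rng.poisson(rate * t_on)                   # events in it
        events.extend(t + np.sort(rng.random(n)) * t_on)
        t += t_on + rng.exponential(t_quiet)           # add intermission
    # (the last cycle may slightly overshoot total_time; fine for a sketch)
    return np.array(events)

rng = np.random.default_rng(5)
ev = intermittent_poisson(rate=10.0, t_active=1.0, t_quiet=0.2,
                          total_time=1000.0, rng=rng)
# Effective rate is reduced by the active fraction, ~ 1/(1 + 0.2).
print(len(ev) / 1000.0)
```

With δ small relative to τ0, as in the regime the paper focuses on, the event count statistics are close to an ordinary Poisson process, and the intermittency shows up only in the low-frequency part of the spectrum.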
Stochastic Dynamics of a Time-Delayed Ecosystem Driven by Poisson White Noise Excitation
Directory of Open Access Journals (Sweden)
Wantao Jia
2018-02-01
Full Text Available We investigate the stochastic dynamics of a prey-predator type ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter and the effect of the environmental randomness is modeled as Poisson white noise. The stochastic averaging method and the perturbation method are applied to calculate the approximate stationary probability density functions for both predator and prey populations. The influences of system parameters and the Poisson white noises are investigated in detail based on the approximate stationary probability density functions. It is found that increasing the time delay parameter, as well as the mean arrival rate and the variance of the amplitude of the Poisson white noise, will enhance the fluctuations of the prey and predator populations, while a larger value of the self-competition parameter will reduce the fluctuation of the system. Furthermore, results from Monte Carlo simulation are also obtained to show the effectiveness of the results from the averaging method.
Spectral shaping of a randomized PWM DC-DC converter using maximum entropy probability distributions
CSIR Research Space (South Africa)
Dove, Albert
2017-01-01
Full Text Available It is shown that, given a pool of randomized parameters, there exists a region through which maximum spreading occurs, and how this spreading is compromised by constraints.
Czech Academy of Sciences Publication Activity Database
Jordanova, P.; Dušek, Jiří; Stehlík, M.
2013-01-01
Roč. 128, OCT 15 (2013), s. 124-134 ISSN 0169-7439 R&D Projects: GA ČR(CZ) GAP504/11/1151; GA MŠk(CZ) ED1.1.00/02.0073 Institutional support: RVO:67179843 Keywords : environmental chemistry * ebullition of methane * mixed poisson processes * renewal process * pareto distribution * moving average process * robust statistics * sedge–grass marsh Subject RIV: EH - Ecology, Behaviour Impact factor: 2.381, year: 2013
Statistical Assessment on Cancer Risks of Ionizing Radiation and Smoking Based on Poisson Models
Tomita, Makoto; Otake, Masanori
2001-01-01
In many epidemiological and medical studies, the number of cancer mortalities in a categorical classification may be considered as having a Poisson distribution with person-years at risk depending upon time. The cancer mortalities have been evaluated by additive or multiplicative models with regard to background and excess risks based on several covariates such as sex, age at the time of the bombings, time of exposure, ionizing radiation, cigarette smoking habits, duration of smoking habits, etc. A...
On Some Compound Random Variables Motivated by Bulk Queues
Directory of Open Access Journals (Sweden)
Romeo Meštrović
2015-01-01
Full Text Available We consider the distribution of the number of customers that arrive in an arbitrary bulk arrival queue system. Under certain conditions on the distributions of the time of arrival of an arriving group, Y(t), and its size, X, with respect to the considered bulk queue, we derive a general expression for the probability mass function of the random variable Q(t), which expresses the number of customers that arrive in this bulk queue during any considered period t. Notice that Q(t) can be considered as a well-known compound random variable. Using this expression, without the use of generating functions, we establish the expressions for the probability mass function of some compound distributions Q(t) concerning certain pairs (Y(t), X) of discrete random variables, which play an important role in applications of batch arrival queues, which in turn have a wide range of applications in different forms of transportation. In particular, we consider the cases when Y(t) and/or X follow some of the following distributions: Poisson, shifted Poisson, geometric, or uniform.
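When the group-arrival process is Poisson, Q(t) reduces to a compound Poisson variable, whose moments are easy to check by simulation. A minimal sketch follows; the arrival rate, observation period, and geometric group-size parameter are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Compound Poisson sketch: groups arrive as a Poisson process with rate lam,
# each group of geometric size X, so Q(t) = X_1 + ... + X_N(t).
lam, t, p = 3.0, 2.0, 0.4
n_trials = 50_000
N = rng.poisson(lam * t, size=n_trials)              # number of groups in (0, t]
Q = np.array([rng.geometric(p, size=n).sum() for n in N])

# Wald's identity: E[Q(t)] = E[N] * E[X] = lam * t / p
print(Q.mean(), lam * t / p)
```

The sample mean should agree with Wald's identity to within Monte Carlo error.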
Light generation and manipulation from nonlinear randomly distributed domains in SBN
Yao, Can
2014-01-01
Disordered media with refractive index variations can be found in the atmosphere, the ocean, and in many materials or biological tissues. Several technologies that make use of such random media, such as image formation, satellite communication, astronomy or microscopy, must deal with unavoidable light scattering or diffusion. This is why, for many years, light propagation through random media has been a subject of intensive study. Interesting phenomena such as speckle, coherent backscattering or ...
International Nuclear Information System (INIS)
Rosales, J.; Perez, J.; Garcia, C.; Munnoz, A.; Lira, C. A. B. O.
2015-01-01
TRISO particles are a specific feature of HTR-10 and, more generally, of HTGR reactors. Their heterogeneity and random arrangement in the graphite matrix of these reactors create a significant modeling challenge. In simulations of spherical fuel elements using MCNPX, repetitive structures based on uniform distribution models are usually created. The use of these repetitive structures introduces two major approximations: the non-randomness of the TRISO particles inside the pebbles and the intersection of the pebble surface with the TRISO particles. These approximations could significantly affect the multiplicative properties of the core. In order to study their influence on the multiplicative properties, the K-inf value was estimated in one pebble with white boundary conditions using 4 different configurations for the distribution of the TRISO particles inside the pebble: a uniform hexagonal model, a uniform cubic model, a uniform cubic model without the cutting effect, and a random distribution model. The impact of these models at the core scale was studied by solving problem B1 from the Benchmark Problems presented in a Coordinated Research Program of the IAEA. (Author)
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, this leads either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Dynamics of a prey-predator system under Poisson white noise excitation
Pan, Shan-Shan; Zhu, Wei-Qiu
2014-10-01
The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of the stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for the prey-predator ecosystem driven by Poisson white noise. An approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of the prey self-competition parameter on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.
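A crude illustration of such a pulse-driven LV system can be obtained by Euler simulation with compound Poisson shocks on the prey equation. All parameter values below are assumptions chosen for illustration, not those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler simulation of a Lotka-Volterra prey-predator system with Poisson
# white noise (compound Poisson pulses) acting on the prey population.
a, b, c, d = 1.0, 0.5, 1.0, 0.2   # LV growth/interaction coefficients (assumed)
lam, sigma = 2.0, 0.05            # pulse arrival rate and amplitude scale (assumed)
dt, T = 1e-3, 20.0
steps = int(T / dt)

x, y = 5.0, 2.0                   # start at the deterministic equilibrium (c/d, a/b)
xs, ys = np.empty(steps), np.empty(steps)
for k in range(steps):
    # deterministic LV drift
    x += dt * x * (a - b * y)
    y += dt * y * (-c + d * x)
    # Poisson pulses: multiplicative shocks keep the prey population positive
    for _ in range(rng.poisson(lam * dt)):
        x *= np.exp(sigma * rng.standard_normal())
    xs[k], ys[k] = x, y

print(xs.min() > 0 and ys.min() > 0)   # populations remain positive
```

Histograms of `xs` and `ys` over a long run approximate the stationary densities that the averaging method derives analytically.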
Pricing Zero-Coupon Catastrophe Bonds Using EVT with Doubly Stochastic Poisson Arrivals
Directory of Open Access Journals (Sweden)
Zonggang Ma
2017-01-01
Full Text Available The frequency and severity of climate abnormal change displays an irregular upward cycle as global warming intensifies. Therefore, this paper employs a doubly stochastic Poisson process with Black Derman Toy (BDT intensity to describe the catastrophic characteristics. By using the Property Claim Services (PCS loss index data from 2001 to 2010 provided by the US Insurance Services Office (ISO, the empirical result reveals that the BDT arrival rate process is superior to the nonhomogeneous Poisson and lognormal intensity process due to its smaller RMSE, MAE, MRPE, and U and larger E and d. Secondly, to depict extreme features of catastrophic risks, this paper adopts the Peak Over Threshold (POT in extreme value theory (EVT to characterize the tail characteristics of catastrophic loss distribution. And then the loss distribution is analyzed and assessed using a quantile-quantile (QQ plot to visually check whether the PCS index observations meet the generalized Pareto distribution (GPD assumption. Furthermore, this paper derives a pricing formula for zero-coupon catastrophe bonds with a stochastic interest rate environment and aggregate losses generated by a compound doubly stochastic Poisson process under the forward measure. Finally, simulation results verify pricing model predictions and show how catastrophic risks and interest rate risk affect the prices of zero-coupon catastrophe bonds.
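The POT step can be sketched with a method-of-moments GPD fit to threshold excesses. The Pareto loss model and the 95% threshold below are illustrative assumptions, not the PCS index data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Peaks-over-threshold (POT) sketch: excesses of heavy-tailed losses over a
# high threshold are approximately generalized Pareto (GPD).
alpha = 5.0
losses = rng.pareto(alpha, 100_000) + 1.0      # Pareto(alpha) losses on [1, inf)
u = np.quantile(losses, 0.95)                  # high threshold (assumed 95% quantile)
excess = losses[losses > u] - u

# Method-of-moments GPD fit from the mean m and variance v of the excesses:
# mean = beta/(1-xi), var = beta^2 / ((1-xi)^2 (1-2 xi))
m, v = excess.mean(), excess.var()
xi = 0.5 * (1.0 - m * m / v)
beta = 0.5 * m * (m * m / v + 1.0)
print(xi, beta)   # xi should be roughly 1/alpha = 0.2 for Pareto tails
```

A QQ plot of the excesses against the fitted GPD quantiles is the visual check described in the abstract.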
Casimir meets Poisson: improved quark/gluon discrimination with counting observables
Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; Zhou, Kevin
2017-09-01
Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a "soft drop multiplicity" which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.
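The core observation, that a Poisson-distributed count whose means are separated by the color-factor ratio discriminates well, can be illustrated with a toy Monte Carlo; the quark mean below is an assumed value, not a measured multiplicity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model: quark and gluon jets as Poisson multiplicities whose means
# differ by the color-factor ratio CA/CF = 9/4.
n_q_mean = 8.0                      # assumed quark-jet mean multiplicity
n_g_mean = n_q_mean * 9.0 / 4.0     # gluon mean scaled by CA/CF

quarks = rng.poisson(n_q_mean, 100_000)
gluons = rng.poisson(n_g_mean, 100_000)

# For Poisson likelihoods, the optimal discriminant is a threshold on the
# count itself; estimate the AUC from randomly paired samples.
auc = np.mean(gluons > quarks) + 0.5 * np.mean(gluons == quarks)
print(auc)   # well above 0.5: strong quark/gluon separation
```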
Poisson Mixture Regression Models for Heart Disease Prediction
Erol, Hamza
2016-01-01
Early heart disease control can be achieved by high disease prediction and diagnosis efficiency. This paper focuses on the use of model-based clustering techniques to predict and diagnose heart disease via Poisson mixture regression models. Analysis and application of Poisson mixture regression models is here addressed under two different classes: standard and concomitant variable mixture regression models. Results show that a two-component concomitant variable Poisson mixture regression model predicts heart disease better than both the standard Poisson mixture regression model and the ordinary general linear Poisson regression model, due to its low Bayesian Information Criterion value. Furthermore, a zero-inflated Poisson mixture regression model turned out to be the best model overall for heart disease prediction, as it both clusters individuals into high- or low-risk categories and predicts the rate of heart disease component-wise given the available clusters. It is deduced that heart disease prediction can be effectively done by identifying the major risks component-wise using a Poisson mixture regression model. PMID:27999611
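A plain two-component Poisson mixture can be fitted with a short EM iteration. This is a minimal sketch of the mixture idea on simulated counts, not the concomitant-variable regression models of the paper; the component means 2 and 15 are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

# EM for a two-component Poisson mixture on simulated count data.
x = np.concatenate([rng.poisson(2.0, 3000), rng.poisson(15.0, 2000)])

# log(k!) table for the Poisson log-pmf
log_fact = np.concatenate([[0.0], np.cumsum(np.log(np.arange(1, x.max() + 1)))])

def log_pmf(x, lam):
    return x * np.log(lam) - lam - log_fact[x]

lam = np.array([1.0, 10.0])   # initial component means
pi = np.array([0.5, 0.5])     # initial mixing weights
for _ in range(200):
    # E-step: posterior responsibility of each component for each count
    logp = np.log(pi) + np.stack([log_pmf(x, l) for l in lam], axis=1)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights and component means
    pi = r.mean(axis=0)
    lam = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(np.sort(lam))   # recovers means near 2 and 15
```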
Directory of Open Access Journals (Sweden)
Lope Virginia
2009-01-01
Full Text Available Abstract Background Non-Hodgkin's lymphomas (NHLs) have been linked to proximity to industrial areas, but evidence regarding the health risk posed by residence near pollutant industries is very limited. The European Pollutant Emission Register (EPER) is a public register that furnishes valuable information on industries that release pollutants to air and water, along with their geographical location. This study sought to explore the relationship between NHL mortality in small areas in Spain and environmental exposure to pollutant emissions from EPER-registered industries, using three Poisson-regression-based mathematical models. Methods Observed cases were drawn from mortality registries in Spain for the period 1994–2003. Industries were grouped into the following sectors: energy; metal; mineral; organic chemicals; waste; paper; food; and use of solvents. Populations having an industry within a radius of 1, 1.5, or 2 kilometres from the municipal centroid were deemed to be exposed. Municipalities outside those radii were considered as reference populations. The relative risks (RRs) associated with proximity to pollutant industries were estimated using the following methods: Poisson regression; mixed Poisson model with random provincial effect; and spatial autoregressive modelling (BYM model). Results Only proximity of paper industries to population centres (<2 km) could be associated with a greater risk of NHL mortality (mixed model: RR:1.24, 95% CI:1.09–1.42; BYM model: RR:1.21, 95% CI:1.01–1.45; Poisson model: RR:1.16, 95% CI:1.06–1.27). Spatial models yielded higher estimates. Conclusion The reported association between exposure to air pollution from the paper, pulp and board industry and NHL mortality is independent of the model used. Inclusion of spatial random effects terms in the risk estimate improves the study of associations between environmental exposures and mortality. The EPER could be of great utility when studying the effects of
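The basic Poisson-regression relative-risk estimate (the first of the three models) can be sketched with a hand-rolled Newton solver on simulated data. The baseline rate, person-years, and an assumed true RR of 1.2 below are illustrative, not the Spanish registry data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hand-rolled Poisson regression (Newton-Raphson / IRLS) with a person-years
# offset, estimating the relative risk (RR) for an exposure indicator.
n = 20_000
exposed = rng.integers(0, 2, n)             # 1 = lives near a pollutant industry
person_years = rng.uniform(50.0, 500.0, n)  # observation time per small area
true_rr = 1.2                               # assumed relative risk
rate = 1e-3 * np.where(exposed == 1, true_rr, 1.0)
deaths = rng.poisson(rate * person_years)

X = np.column_stack([np.ones(n), exposed.astype(float)])
offset = np.log(person_years)
beta = np.array([np.log(deaths.sum() / person_years.sum()), 0.0])
for _ in range(25):
    mu = np.exp(X @ beta + offset)          # expected counts
    grad = X.T @ (deaths - mu)              # score vector
    hess = X.T @ (X * mu[:, None])          # Fisher information
    beta += np.linalg.solve(hess, grad)

print(np.exp(beta[1]))   # estimated RR, close to the assumed 1.2
```

The mixed and BYM variants add provincial or spatial random effects on top of this same likelihood.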
Boundary Lax pairs from non-ultra-local Poisson algebras
International Nuclear Information System (INIS)
Avan, Jean; Doikou, Anastasia
2009-01-01
We consider non-ultra-local linear Poisson algebras on a continuous line. Suitable combinations of representations of these algebras yield representations of novel generalized linear Poisson algebras or 'boundary' extensions. They are parametrized by a boundary scalar matrix and depend, in addition, on the choice of an antiautomorphism. The new algebras are the classical-linear counterparts of the known quadratic quantum boundary algebras. For any choice of parameters, the non-ultra-local contribution of the original Poisson algebra disappears. We also systematically construct the associated classical Lax pair. The classical boundary principal chiral model is examined as a physical example.
The Effect of Distributed Practice in Undergraduate Statistics Homework Sets: A Randomized Trial
Crissinger, Bryan R.
2015-01-01
Most homework sets in statistics courses are constructed so that students concentrate or "mass" their practice on a certain topic in one problem set. Distributed practice homework sets include review problems in each set so that practice on a topic is distributed across problem sets. There is a body of research that points to the…
Distribution of level spacing ratios using one- plus two-body random ...
Indian Academy of Sciences (India)
2015-02-03
The probability distribution P(r) of the level spacing ratios has been introduced recently and is used to investigate many-body localization as well as to quantify the distance from integrability on finite-size lattices. In this paper, we study the distribution of the ratio of consecutive level spacings using ...
Random distance distribution for spherical objects: general theory and applications to physics
International Nuclear Information System (INIS)
Tu Shuju; Fischbach, Ephraim
2002-01-01
A formalism is presented for analytically obtaining the probability density function, P_n(s), for the random distance s between two random points in an n-dimensional spherical object of radius R. Our formalism allows P_n(s) to be calculated for a spherical n-ball having an arbitrary volume density, and reproduces the well-known results for the case of uniform density. The results find applications in geometric probability, computational science, molecular biological systems, statistical physics, astrophysics, condensed matter physics, nuclear physics and elementary particle physics. As one application of these results, we propose a new statistical method derived from our formalism to study random number generators used in Monte Carlo simulations. (author)
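The uniform-density special case is easy to verify by Monte Carlo: the mean distance between two independent uniform points in a 3-ball of radius R is known to be 36R/35.

```python
import numpy as np

rng = np.random.default_rng(6)

# Monte Carlo check of the uniform-density case of P_n(s) for n = 3.
def uniform_ball(n, R=1.0):
    v = rng.standard_normal((n, 3))            # isotropic directions
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = R * rng.random(n) ** (1.0 / 3.0)       # radii with density ~ r^2
    return v * r[:, None]

p, q = uniform_ball(200_000), uniform_ball(200_000)
s = np.linalg.norm(p - q, axis=1)
print(s.mean())   # close to 36/35 ≈ 1.0286
```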
Downlink Non-Orthogonal Multiple Access (NOMA) in Poisson Networks
Ali, Konpal S.
2018-03-21
A network model is considered where Poisson distributed base stations transmit to $N$ power-domain non-orthogonal multiple access (NOMA) users (UEs) each that employ successive interference cancellation (SIC) for decoding. We propose three models for the clustering of NOMA UEs and consider two different ordering techniques for the NOMA UEs: mean signal power-based and instantaneous signal-to-intercell-interference-and-noise-ratio-based. For each technique, we present a signal-to-interference-and-noise ratio analysis for the coverage of the typical UE. We plot the rate region for the two-user case and show that neither ordering technique is consistently superior to the other. We propose two efficient algorithms for finding a feasible resource allocation that maximizes the cell sum rate $\mathcal{R}_{\rm tot}$, for general $N$, constrained to: 1) a minimum rate $\mathcal{T}$ for each UE, 2) identical rates for all UEs. We show the existence of: 1) an optimum $N$ that maximizes the constrained $\mathcal{R}_{\rm tot}$ given a set of network parameters, 2) a critical SIC level necessary for NOMA to outperform orthogonal multiple access. The results highlight the importance of choosing the network parameters $N$, the constraints, and the ordering technique to balance the $\mathcal{R}_{\rm tot}$ and fairness requirements. We also show that interference-aware UE clustering can significantly improve performance.
METHOD OF FOREST FIRES PROBABILITY ASSESSMENT WITH POISSON LAW
Directory of Open Access Journals (Sweden)
A. S. Plotnikova
2016-01-01
Full Text Available The article describes a method for forest fire burn probability estimation on the basis of the Poisson distribution. The λ parameter is assumed to be the mean daily number of fires detected for each Forest Fire Danger Index class within a specific period of time; thus, λ was calculated for the spring, summer and autumn seasons separately. Multi-annual daily Forest Fire Danger Index values together with an EO-derived hot spot map were the input data for the statistical analysis. The major result of the study is the generation of a database of forest fire burn probabilities. Results were validated against EO daily data on forest fires detected over Irkutsk oblast in 2013. Daily weighted average probability was shown to be linked with the daily number of detected forest fires. Meanwhile, a number of fires were found to have developed when the estimated probability was low. A possible explanation of this phenomenon is provided.
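Under the Poisson assumption, the daily burn probability per danger class follows directly as 1 - exp(-λ). A minimal sketch; the per-class rates below are assumed values for illustration, not rates derived from the hot spot map.

```python
import math

# Poisson burn-probability sketch: if lam is the mean daily number of fires
# for a danger class, then P(at least one fire that day) = 1 - exp(-lam).
daily_rate = {"I": 0.02, "II": 0.10, "III": 0.35, "IV": 0.80, "V": 1.60}
for cls, lam in daily_rate.items():
    p_fire = 1.0 - math.exp(-lam)
    print(f"class {cls}: P(>=1 fire) = {p_fire:.3f}")
```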
International Nuclear Information System (INIS)
Lajoie, M-A.; Marleau, G.
2010-01-01
The analysis of VHTR fuel tends to be difficult when using deterministic methods currently employed in lattice codes notably because of limitations on geometry representation and the stochastic positioning of spherical elements. The method proposed here and implemented in the lattice code DRAGON is to generate the positions of multi-layered spheres using random sequential addition, and to analyze the resulting geometry using a full three-dimensional spherical collision probability method. The preliminary validation runs are consistent with results obtained using a Monte-Carlo method, for both regularly and randomly positioned pins. (author)
Cooper, M A
2000-01-01
We present various approximations for the angular distribution of particles emerging from an optically thick, purely isotropically scattering region into a vacuum. Our motivation is to use such a distribution for the Fleck-Canfield random walk method [1] for implicit Monte Carlo (IMC) [2] radiation transport problems. We demonstrate that the cosine distribution recommended in the original random walk paper [1] is a poor approximation to the angular distribution predicted by transport theory. Then we examine other approximations that more closely match the transport angular distribution.
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
To explore the application of negative binomial regression and modified Poisson regression analysis in analyzing the influential factors for injury frequency and the risk factors leading to an increase in injury frequency. 2917 primary and secondary school students were selected from Hefei by the cluster random sampling method and surveyed by questionnaire. The count data on event-based injuries were used to fit modified Poisson regression and negative binomial regression models. The risk factors incurring an increase in unintentional injury frequency for juvenile students were explored, so as to probe the efficiency of these two models in studying the influential factors for injury frequency. The Poisson model showed over-dispersion, so the negative binomial regression model fitted better. Both showed that male gender, younger age, a father working outside the hometown, a guardian educated above junior high school level, and smoking might be associated with higher injury frequencies. For clustered frequency data on injury events, both modified Poisson regression analysis and negative binomial regression analysis can be used. However, based on our data, the modified Poisson regression fitted better and could give a more accurate interpretation of the relevant factors affecting the frequency of injury.
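The over-dispersion that motivates the negative binomial model can be demonstrated with a gamma-Poisson mixture, whose variance exceeds its mean, unlike a pure Poisson sample. The mean and dispersion parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Over-dispersion sketch: counts with unobserved heterogeneity follow a
# gamma-Poisson (negative binomial) mixture with variance mu + mu^2/k,
# while a homogeneous Poisson sample has variance ~ mean.
n = 100_000
mu, k = 2.0, 1.5                                   # assumed mean and dispersion
rates = rng.gamma(shape=k, scale=mu / k, size=n)   # subject-level rates
nb_counts = rng.poisson(rates)                     # negative binomial counts
po_counts = rng.poisson(mu, size=n)                # homogeneous Poisson counts

print(nb_counts.mean(), nb_counts.var())   # variance well above the mean
print(po_counts.mean(), po_counts.var())   # variance close to the mean
```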
DEFF Research Database (Denmark)
Visser, Andre
1997-01-01
Random walk simulation has the potential to be an extremely powerful tool in the investigation of turbulence in environmental processes. However, care must be taken in applying such simulations to the motion of particles in turbulent marine systems where turbulent diffusivity is commonly spatially variable...
van der Wel, A.P.; Klumperink, Eric A.M.; Hoekstra, E.; Nauta, Bram
2005-01-01
In this work, we study random telegraph signal (RTS) noise in metal-oxide-semiconductor field effect transistors when the device is periodically and rapidly cycled between an "on" and an "off" bias state. We derive the effective RTS time constants for this case using Shockley–Read–Hall statistics.
The random field model of the spatial distribution of heavy vehicle loads on long-span bridges
Chen, Zhicheng; Bao, Yuequan; Li, Hui
2016-04-01
A stochastic model based on a Markov random field is proposed to model the spatial distribution of vehicle loads on long-span bridges. The bridge deck is divided into a finite set of discrete grid cells, each of which has two states according to whether or not the cell is occupied by a heavy vehicle load. A four-neighbor, lattice-structured, undirected graphical model, with each node corresponding to a cell state variable, is then proposed to model the location distribution of heavy vehicle loads on the bridge deck. The node potential is defined to quantitatively describe the randomness of the node state, and the edge potential is defined to quantitatively describe the correlation of each connected node pair. The junction tree algorithm is employed to obtain systematic solutions of the inference problems of the graphical model. A marked random variable is assigned to each node to represent the amplitude of the total vehicle weight applied to the corresponding cell of the bridge deck. The rationality of the model is validated by a Monte Carlo simulation of a learned model based on monitored data from a cable-stayed bridge.
A high order solver for the unbounded Poisson equation
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2013-01-01
A high order converging Poisson solver is presented, based on the Green's function solution to Poisson's equation subject to free-space boundary conditions. The high order convergence is achieved by formulating regularised integration kernels, analogous to a smoothing of the solution field. ... The method is extended to directly solve the derivatives of the solution to Poisson's equation. In this way differential operators such as the divergence or curl of the solution field can be solved to the same high order convergence without additional computational effort. The method is applied and validated, however not restricted, to the equations of fluid mechanics, and can be used in many applications to solve Poisson's equation on a rectangular unbounded domain. ...
On the Poisson's ratio of the nucleus pulposus.
Farrell, M D; Riches, P E
2013-10-01
Existing experimental data on the Poisson's ratio of nucleus pulposus (NP) tissue is limited. This study aims to determine whether the Poisson's ratio of NP tissue is strain-dependent, strain-rate-dependent, or varies with axial location in the disk. Thirty-two cylindrical plugs of bovine tail NP tissue were subjected to ramp-hold unconfined compression to 20% axial strain in 5% increments, at either 30 μm/s or 0.3 μm/s ramp speeds and the radial displacement determined using biaxial video extensometry. Following radial recoil, the true Poisson's ratio of the solid phase of NP tissue increased linearly with increasing strain and demonstrated strain-rate dependency. The latter finding suggests that the solid matrix undergoes stress relaxation during the test. For small strains, we suggest a Poisson's ratio of 0.125 to be used in biphasic models of the intervertebral disk.
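The "true" (logarithmic) Poisson's ratio used in such unconfined compression tests can be computed directly from the measured dimensions. The specimen numbers below are illustrative, not the bovine NP measurements.

```python
import math

# True (logarithmic) Poisson's ratio from an unconfined compression test:
# nu = -ln(r/r0) / ln(l/l0), using large-strain (Hencky) measures.
l0, r0 = 10.0, 5.0      # assumed initial specimen height and radius (mm)
l, r = 9.0, 5.064       # assumed dimensions after 10% nominal axial compression
nu_true = -math.log(r / r0) / math.log(l / l0)
print(round(nu_true, 3))   # about 0.121
```

For small strains this logarithmic ratio approaches the engineering ratio -ε_radial/ε_axial, consistent with the value of about 0.125 recommended in the abstract.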
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we
Directory of Open Access Journals (Sweden)
Christophe Coupé
2018-04-01
Full Text Available As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is being, however, made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships
Energy Technology Data Exchange (ETDEWEB)
Conover, W.J. [Texas Tech Univ., Lubbock, TX (United States); Cox, D.D. [Rice Univ., Houston, TX (United States); Martz, H.F. [Los Alamos National Lab., NM (United States)
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
International Nuclear Information System (INIS)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
Spatial organisation of the fish assemblage in the Bandama ...
African Journals Online (AJOL)
The evolution of fish assemblages in the Bandama River was studied by considering four sampling zones: upstream of Lake Kossou, in Lakes Kossou and Taabo, between Lakes Kossou and Taabo, and downstream of Lake Taabo. In total, 74 fish species belonging to 49 genera and 28 families ...
Formality theory from Poisson structures to deformation quantization
Esposito, Chiara
2015-01-01
This book is a survey of the theory of formal deformation quantization of Poisson manifolds, in the formalism developed by Kontsevich. It is intended as an educational introduction for mathematical physicists who are dealing with the subject for the first time. The main topics covered are the theory of Poisson manifolds, star products and their classification, deformations of associative algebras and the formality theorem. Readers will also be familiarized with the relevant physical motivations underlying the purely mathematical construction.
Poisson structure of the equations of ideal multispecies fluid electrodynamics
International Nuclear Information System (INIS)
Spencer, R.G.
1984-01-01
The equations of the two- (or multi-) fluid model of plasma physics are recast in Hamiltonian form, following general methods of symplectic geometry. The dynamical variables are the fields of physical interest, but are noncanonical, so that the Poisson bracket in the theory is not the standard one. However, it is a skew-symmetric bilinear form which, from the method of derivation, automatically satisfies the Jacobi identity; therefore, this noncanonical structure has all the essential properties of a canonical Poisson bracket
On the Fedosov deformation quantization beyond the regular Poisson manifolds
International Nuclear Information System (INIS)
Dolgushev, V.A.; Isaev, A.P.; Lyakhovich, S.L.; Sharapov, A.A.
2002-01-01
A simple iterative procedure is suggested for the deformation quantization of (irregular) Poisson brackets associated to the classical Yang-Baxter equation. The construction is shown to admit a pure algebraic reformulation giving the Universal Deformation Formula (UDF) for any triangular Lie bialgebra. A simple proof of classification theorem for inequivalent UDF's is given. As an example the explicit quantization formula is presented for the quasi-homogeneous Poisson brackets on two-plane
Receiver design for SPAD-based VLC systems under Poisson-Gaussian mixed noise model.
Mao, Tianqi; Wang, Zhaocheng; Wang, Qi
2017-01-23
Single-photon avalanche diode (SPAD) is a promising photosensor because of its high sensitivity to optical signals in weak-illuminance environments. Recently, it has drawn much attention from researchers in visible light communications (VLC). However, the existing literature deals only with a simplified channel model, which considers the effects of Poisson noise introduced by the SPAD but neglects other noise sources. Specifically, when an analog SPAD detector is applied, there exists Gaussian thermal noise generated by the transimpedance amplifier (TIA) and the digital-to-analog converter (D/A). Therefore, in this paper, we propose an SPAD-based VLC system with pulse-amplitude modulation (PAM) under a Poisson-Gaussian mixed noise model, where Gaussian-distributed thermal noise at the receiver is also investigated. The closed-form conditional likelihood of received signals is derived using the Laplace transform and the saddle-point approximation method, and the corresponding quasi-maximum-likelihood (quasi-ML) detector is proposed. Furthermore, the Poisson-Gaussian-distributed signals are converted to Gaussian variables with the aid of the generalized Anscombe transform (GAT), leading to an equivalent additive white Gaussian noise (AWGN) channel, and a hard-decision-based detector is invoked. Simulation results demonstrate that the proposed GAT-based detector can reduce the computational complexity with marginal performance loss compared with the proposed quasi-ML detector, and both detectors are capable of accurately demodulating the SPAD-based PAM signals.
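The variance-stabilization step can be sketched as follows. The constants follow the commonly used Murtagh-Starck-Bijaoui form of the generalized Anscombe transform; the gain `alpha`, Gaussian noise level `sigma` and offset `mu` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gat(x, alpha=1.0, sigma=0.5, mu=0.0):
    """Generalized Anscombe transform: maps Poisson-Gaussian data
    x = alpha*Poisson + N(mu, sigma^2) to roughly unit-variance Gaussian."""
    arg = alpha * x + 0.375 * alpha**2 + sigma**2 - alpha * mu
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))

# Check variance stabilization across a range of Poisson intensities.
for lam in (5.0, 20.0, 80.0):
    x = rng.poisson(lam, 200_000) + rng.normal(0.0, 0.5, 200_000)
    print(lam, gat(x).std())   # std should be close to 1 for each lam
```

After the transform the data can be treated as an AWGN channel, which is what makes the hard-decision detector in the abstract applicable.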
An X-ray CCD signal generator with true random arrival time
International Nuclear Information System (INIS)
Huo Jia; Xu Yuming; Chen Yong; Cui Weiwei; Li Wei; Zhang Ziliang; Han Dawei; Wang Yusan; Wang Juan
2011-01-01
An FPGA-based true random signal generator with adjustable amplitude and an exponential distribution of time intervals is presented. Since traditional true random number generators (TRNGs) are resource costly and difficult to transplant, we employed a method of random number generation based on jitter and phase noise in ring oscillators formed by gates in an FPGA. In order to improve the random characteristics, a combination of two different pseudo-random processing circuits is used for post-processing. The effects of the design parameters, such as the sampling frequency, are discussed. Statistical tests indicate that the generator can well simulate the timing behavior of random signals with a Poisson distribution. The X-ray CCD signal generator will be used in debugging the CCD readout system of the Low Energy X-ray Instrument onboard the Hard X-ray Modulation Telescope (HXMT). (authors)
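The timing behavior being emulated, exponential inter-arrival times yielding Poisson-distributed counts, can be checked in a few lines (an illustrative simulation, not the FPGA design):

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 3.0                       # mean event rate (events per unit time)

# Exponentially distributed inter-arrival times -> Poisson arrival process
gaps = rng.exponential(1.0 / rate, size=300_000)
times = np.cumsum(gaps)

# Count events in unit-length windows; a Poisson process gives mean ~= var
t_end = int(times[-1])
counts = np.bincount(times[times < t_end].astype(int), minlength=t_end)
print(counts.mean(), counts.var())   # both should be close to `rate`
```

The near-equality of mean and variance of the window counts is the signature of Poisson timing that the statistical tests in the abstract verify.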
Background stratified Poisson regression analysis of cohort data.
Richardson, David B; Langholz, Bryan
2012-03-01
Background stratified Poisson regression is an approach that has been used in the analysis of data derived from a variety of epidemiologically important studies of radiation-exposed populations, including uranium miners, nuclear industry workers, and atomic bomb survivors. We describe a novel approach to fit Poisson regression models that adjust for a set of covariates through background stratification while directly estimating the radiation-disease association of primary interest. The approach makes use of an expression for the Poisson likelihood that treats the coefficients for stratum-specific indicator variables as 'nuisance' variables and avoids the need to estimate these stratum-specific parameters explicitly. Log-linear models, as well as other general relative rate models, are accommodated. This approach is illustrated using data from the Life Span Study of Japanese atomic bomb survivors and data from a study of underground uranium miners. The point estimate and confidence interval obtained from this 'conditional' regression approach are identical to the values obtained using unconditional Poisson regression with model terms for each background stratum. Moreover, it is shown that the proposed approach allows estimation of background stratified Poisson regression models of non-standard form, such as models that parameterize latency effects, as well as regression models in which the number of strata is large, thereby overcoming the limitations of previously available statistical software for fitting background stratified Poisson regression models.
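A minimal numerical sketch of the profiling idea (simulated data with a single binary exposure; not the authors' software). For a log-linear model with rate exp(alpha_s + beta*z), the stratum intercepts alpha_s have a closed-form MLE given beta, so they can be profiled out and only beta need be estimated directly:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated cohort: S background strata, binary exposure z, person-time T,
# Poisson counts with rate T * exp(alpha_s + beta*z) (log-linear model).
S, n_per, beta_true = 30, 40, 0.5
strata = np.repeat(np.arange(S), n_per)
alpha = rng.normal(-2.0, 0.5, S)
z = rng.integers(0, 2, S * n_per)
T = rng.exponential(10.0, S * n_per)
y = rng.poisson(T * np.exp(alpha[strata] + beta_true * z))

def profile_loglik(beta):
    """Poisson log-likelihood with each stratum intercept profiled out:
    given beta, the MLE of alpha_s is log(D_s / E_s(beta)), where D_s is
    the event count and E_s the 'expected' count in stratum s."""
    mu0 = T * np.exp(beta * z)                       # rate without intercepts
    D = np.bincount(strata, weights=y, minlength=S)  # events per stratum
    E = np.bincount(strata, weights=mu0, minlength=S)
    a_hat = np.log(np.maximum(D, 1e-12) / E)         # profiled intercepts
    mu = mu0 * np.exp(a_hat[strata])
    return np.sum(y * np.log(np.maximum(mu, 1e-300)) - mu)

grid = np.linspace(-0.5, 1.5, 401)
beta_hat = grid[np.argmax([profile_loglik(b) for b in grid])]
```

Maximizing this profile likelihood over beta alone reproduces the unconditional fit with explicit stratum indicator terms, which is the equivalence the abstract exploits.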
Beyond standard Poisson-Boltzmann theory: ion-specific interactions in aqueous solutions
International Nuclear Information System (INIS)
Ben-Yaakov, Dan; Andelman, David; Harries, Daniel; Podgornik, Rudi
2009-01-01
The Poisson-Boltzmann mean-field description of ionic solutions has been successfully used in predicting charge distributions and interactions between charged macromolecules. While the electrostatic model of charged fluids, on which the Poisson-Boltzmann description rests, and its statistical mechanical consequences have been scrutinized in great detail, much less is understood about its probable shortcomings when dealing with various aspects of real physical, chemical and biological systems. These shortcomings are not only a consequence of the limitations of the mean-field approximation per se, but perhaps are primarily due to the fact that the purely Coulombic model Hamiltonian does not take into account various additional interactions that are not electrostatic in their origin. We explore several possible non-electrostatic contributions to the free energy of ions in confined aqueous solutions and investigate their ramifications and consequences on ionic profiles and interactions between charged surfaces and macromolecules.
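For context, the mean-field description the article starts from can be solved numerically in its simplest setting. The sketch below (illustrative, not from the article) solves the dimensionless 1-D Poisson-Boltzmann equation psi'' = sinh(psi) for a 1:1 electrolyte near a charged plate by Newton iteration on a finite-difference grid; it can be checked against the exact Gouy-Chapman solution psi(x) = 2 ln[(1 + g e^-x)/(1 - g e^-x)] with g = tanh(psi0/4):

```python
import numpy as np

# 1-D nonlinear Poisson-Boltzmann equation in dimensionless form:
#   psi''(x) = sinh(psi),  psi(0) = psi0 (charged wall), psi(L) = 0 (bulk)
N, L, psi0 = 201, 10.0, 2.0
x = np.linspace(0.0, L, N)
h = x[1] - x[0]
psi = psi0 * np.exp(-x)          # initial guess: Debye-Hueckel decay

for _ in range(50):
    # Newton iteration: solve J*step = -F for the residual F of the ODE
    F = np.zeros(N)
    J = np.zeros((N, N))
    F[0], F[-1] = psi[0] - psi0, psi[-1]
    J[0, 0] = J[-1, -1] = 1.0
    i = np.arange(1, N - 1)
    F[i] = (psi[i-1] - 2*psi[i] + psi[i+1]) / h**2 - np.sinh(psi[i])
    J[i, i-1] = J[i, i+1] = 1.0 / h**2
    J[i, i] = -2.0 / h**2 - np.cosh(psi[i])
    step = np.linalg.solve(J, -F)
    psi += step
    if np.abs(step).max() < 1e-10:
        break
```

The non-electrostatic, ion-specific contributions discussed in the article enter as corrections on top of this purely Coulombic mean-field profile.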
Numerical solution of stochastic differential equations with Poisson and Lévy white noise
Grigoriu, M.
2009-08-01
A fixed time step method is developed for integrating stochastic differential equations (SDE’s) with Poisson white noise (PWN) and Lévy white noise (LWN). The method for integrating SDE’s with PWN has the same structure as that proposed by Kim [Phys. Rev. E 76, 011109 (2007)], but is established by using different arguments. The integration of SDE’s with LWN is based on a representation of Lévy processes by sums of scaled Brownian motions and compound Poisson processes. It is shown that the numerical solutions of SDE’s with PWN and LWN converge weakly to the exact solutions of these equations, so that they can be used to estimate not only marginal properties but also distributions of functionals of the exact solutions. Numerical examples are used to demonstrate the applications and the accuracy of the proposed integration algorithms.
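A minimal fixed-time-step scheme of the kind described, for a linear SDE driven by Poisson white noise (illustrative parameters; the jump marks are assumed standard normal, so the stationary variance is lam*E[Y^2]/(2a)):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed-time-step integration of dX = -a X dt + dC(t), where C(t) is a
# compound Poisson process with rate lam and N(0,1) jump marks.
a, lam, dt, n_steps, n_paths = 1.0, 5.0, 1e-3, 10_000, 2_000
X = np.zeros(n_paths)
for _ in range(n_steps):
    n_jumps = rng.poisson(lam * dt, n_paths)   # jumps in this time step
    # sum of n iid N(0,1) marks is N(0, n), hence the sqrt scaling
    dC = rng.normal(0.0, 1.0, n_paths) * np.sqrt(n_jumps)
    X += -a * X * dt + dC
print(X.var())   # should approach lam * E[Y^2] / (2a) = 2.5
```

Weak convergence, as the abstract notes, means such path ensembles can be used to estimate distributions of functionals of the solution, not just moments.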
A note on optimal (s,S) and (R,nQ) policies under a stuttering Poisson demand process
DEFF Research Database (Denmark)
Larsen, Christian
2015-01-01
In this note, a new efficient algorithm is proposed to find an optimal (s, S) replenishment policy for inventory systems under continuous review in which demand follows a stuttering Poisson process (the compound element is geometrically distributed). We also derive three upper bounds...
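The stuttering Poisson demand process (compound Poisson with geometric batch sizes) can be simulated directly; the moment formulas in the comments follow from standard compound-Poisson identities and the parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

# Stuttering Poisson demand: customers arrive as a Poisson process and
# each orders a geometrically distributed batch (support 1, 2, 3, ...).
lam, p, tau, n = 4.0, 0.4, 5.0, 20_000   # arrival rate, geometric p, lead time

n_cust = rng.poisson(lam * tau, n)       # customers per lead-time window
demand = np.array([rng.geometric(p, k).sum() for k in n_cust])

# Theory: mean = lam*tau/p = 50, variance = lam*tau*(2-p)/p^2 = 200
print(demand.mean(), demand.var())
```

Lead-time demand distributions of this kind are the input to evaluating candidate (s, S) policies.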
Bouleau, Nicolas; Chorro, Christophe
2017-08-01
In this paper we consider some elementary and fair zero-sum games of chance in order to study the impact of random effects on the wealth distribution of N interacting players. Even if an exhaustive analytical study of such games between many players may be tricky, numerical experiments highlight interesting asymptotic properties. In particular, we emphasize that randomness plays a key role in concentrating wealth in the extreme, in the hands of a single player. From a mathematical perspective, we adopt diffusion limits for small and high-frequency transactions of the kind extensively used in population genetics. Finally, the impact of small tax rates on the preceding dynamics is discussed for several regulation mechanisms. We show that taxation of income is not sufficient to overcome this extreme concentration process, in contrast to the uniform taxation of capital, which stabilizes the economy and prevents agents from being ruined.
Impact factor distribution revisited
Huang, Ding-wei
2017-09-01
We explore the consistency of a new type of frequency distribution, one whose corresponding rank distribution is the Lavalette distribution. Empirical data on journal impact factors are well described by it. This distribution is distinct from the Poisson and negative binomial distributions, which were suggested by a previous study. By a log transformation, we obtain a bell-shaped distribution, which is then compared to Gaussian and catenary curves. Possible mechanisms behind the shape of the impact factor distribution are suggested.
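Assuming the common parameterization of the Lavalette rank distribution, y(r) = K * (N*r / (N - r + 1))^(-b) for ranks r = 1..N, the log transformation mentioned in the abstract linearizes it exactly:

```python
import numpy as np

# Lavalette rank-size law: a straight line on log axes against
# u = log(r / (N - r + 1)), with slope -b.
N, K, b = 200, 10.0, 0.7
r = np.arange(1, N + 1)
y = K * (N * r / (N - r + 1.0)) ** (-b)

u = np.log(r / (N - r + 1.0))
slope, intercept = np.polyfit(u, np.log(y), 1)
print(slope)   # recovers -b, since the relation is exactly linear in logs
```

Unlike a pure power law, the (N - r + 1) factor bends the tail downward at the largest ranks, which is what lets the curve track impact-factor data across the whole range.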
Bayesian spatial modeling of HIV mortality via zero-inflated Poisson models.
Musal, Muzaffer; Aktekin, Tevfik
2013-01-30
In this paper, we investigate the effects of poverty and inequality on the number of HIV-related deaths in 62 New York counties via Bayesian zero-inflated Poisson models that exhibit spatial dependence. We quantify inequality via the Theil index and poverty via the ratios of two Census 2000 variables, the number of people under the poverty line and the number of people for whom poverty status is determined, in each Zip Code Tabulation Area. The purpose of this study was to investigate the effects of inequality and poverty in addition to spatial dependence between neighboring regions on HIV mortality rate, which can lead to improved health resource allocation decisions. In modeling county-specific HIV counts, we propose Bayesian zero-inflated Poisson models whose rates are functions of both covariate and spatial/random effects. To show how the proposed models work, we used three different publicly available data sets: TIGER Shapefiles, Census 2000, and mortality index files. In addition, we introduce parameter estimation issues of Bayesian zero-inflated Poisson models and discuss MCMC method implications. Copyright © 2012 John Wiley & Sons, Ltd.
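For reference, the zero-inflated Poisson mixes a point mass at zero with a Poisson count; the textbook sketch below omits the covariate and spatial/random effects of the paper's Bayesian model:

```python
import numpy as np
from scipy import stats

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability pi the count is a
    'structural' zero, otherwise it is Poisson(lam)."""
    pmf = (1.0 - pi) * stats.poisson.pmf(k, lam)
    return np.where(k == 0, pi + pmf, pmf)

k = np.arange(10)
probs = zip_pmf(k, lam=2.0, pi=0.3)
# mean of a ZIP is (1-pi)*lam; the excess zeros inflate P(0) above exp(-lam)
print(probs)
```

In the paper's setting, lam (and possibly pi) become functions of the covariate and spatial effects for each county.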
Sharma, Ajeet K.; Ahmed, Nabeel; O'Brien, Edward P.
2018-02-01
Ribosome profiling experiments have found greater than 100-fold variation in ribosome density along mRNA transcripts, indicating that individual codon elongation rates can vary to a similar degree. This wide range of elongation times, coupled with differences in codon usage between transcripts, suggests that the average codon translation rate per gene can vary widely. Yet, ribosome run-off experiments have found that the average codon translation rate for different groups of transcripts in mouse stem cells is constant at 5.6 AA/s. How these seemingly contradictory results can be reconciled is the focus of this study. Here, we combine knowledge of the molecular factors shown to influence translation speed with genomic information from Escherichia coli, Saccharomyces cerevisiae and Homo sapiens to simulate the synthesis of cytosolic proteins in these organisms. The model recapitulates a near constant average translation rate, which we demonstrate arises because the molecular determinants of translation speed are distributed nearly randomly amongst most of the transcripts. Consequently, codon translation rates are also randomly distributed and fast-translating segments of a transcript are likely to be offset by equally probable slow-translating segments, resulting in similar average elongation rates for most transcripts. We also show that codon usage bias does not significantly affect the near random distribution of codon translation rates, because only about 10% of the total transcripts in an organism have high codon usage bias while the rest have little to no bias. Analysis of Ribo-Seq data and an in vivo fluorescent assay support these conclusions.
International Nuclear Information System (INIS)
Bunzl, K.
2002-01-01
In the field, the distribution coefficient, K_d, for the sorption of a radionuclide by the soil cannot be expected to be constant. Even in a well-defined soil horizon, K_d will vary stochastically in the horizontal as well as the vertical direction around a mean value. While the horizontal random variability of K_d produces a pronounced tailing effect in the concentration depth profile of a fallout radionuclide, much less is known about the corresponding effect of the vertical random variability. To analyze this effect theoretically, the classical convection-dispersion model in combination with the random-walk particle method was applied. The concentration depth profile of a radionuclide was calculated one year after deposition assuming constant values of the pore water velocity, the diffusion/dispersion coefficient, and the distribution coefficient (K_d = 100 cm³·g⁻¹), and exhibiting a vertical variability of K_d according to a log-normal distribution with a geometric mean of 100 cm³·g⁻¹ and a coefficient of variation of CV = 0.53. The results show that these two concentration depth profiles are only slightly different: the location of the peak is shifted somewhat upwards, and the dispersion of the concentration depth profile is slightly larger. A substantial tailing effect of the concentration depth profile is not perceivable. Especially with respect to the location of the peak, a very good approximation of the concentration depth profile is obtained if the arithmetic mean of the K_d values (K_d = 113 cm³·g⁻¹) and a slightly increased dispersion coefficient are used in the analytical solution of the classical convection-dispersion equation with constant K_d. The evaluation of the observed concentration depth profile with the analytical solution of the classical convection-dispersion equation with constant parameters will, within the usual experimental limits, hardly reveal the presence of a log-normal random distribution of K_d in the vertical direction in
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. In this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
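The "exploding" step can be sketched as follows (a simplified stand-in for what the %PCFrailty macro automates in SAS): each subject's follow-up is cut into pieces, and the piecewise-exponential likelihood then equals a Poisson likelihood on the exploded rows with offset log(person-time):

```python
import numpy as np

def explode(time, event, cuts):
    """'Explode' survival data into one row per subject x piece for the
    piecewise-exponential (Poisson) representation.
    Returns tuples (subject id, piece id, person-time at risk, event)."""
    lo = np.concatenate(([0.0], cuts))
    hi = np.concatenate((cuts, [np.inf]))
    rows = []
    for i, (t, d) in enumerate(zip(time, event)):
        for j in range(len(lo)):
            at_risk = min(t, hi[j]) - lo[j]
            if at_risk <= 0:
                break
            died = int(d and t <= hi[j])   # event falls in this piece
            rows.append((i, j, at_risk, died))
    return rows

rows = explode(time=[0.7, 2.5, 4.0], event=[1, 0, 1], cuts=[1.0, 3.0])
# subject 0 contributes one row (event in piece 0); subject 2 three rows
```

Fitting a Poisson GLMM to such rows, with a random intercept per cluster, yields the log-normal frailty model of the abstract.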
Mean and Fluctuating Force Distribution in a Random Array of Spheres
Akiki, Georges; Jackson, Thomas; Balachandar, Sivaramakrishnan
2015-11-01
This study presents a numerical investigation of the force distribution within a cluster of mono-disperse spherical particles. A direct-forcing immersed boundary method is used to calculate the forces on individual particles for a volume fraction range of [0.1, 0.4] and a Reynolds number range of [10, 625]. The overall drag is compared to several drag laws found in the literature. The fluctuation of the hydrodynamic streamwise force among individual particles is shown to have a normal distribution with a standard deviation that varies with the volume fraction only. The standard deviation remains approximately 25% of the mean streamwise force on a single sphere. The force distribution shows a good correlation between the location of the two to three nearest upstream and downstream neighbors and the magnitude of the forces. A detailed analysis of the pressure and shear force contributions calculated on a ghost sphere in the vicinity of a single particle in a uniform flow yields a mapping of those contributions. The combination of the mapping and the number of nearest neighbors leads to a first-order correction of the force distribution within a cluster which can be used in Lagrangian-Eulerian techniques. We also explore the possibility of a binary force model that systematically accounts for the effect of the nearest neighbors. This work was supported by the National Science Foundation (NSF OISE-0968313) under Partnership for International Research and Education (PIRE) in Multiphase Flows at the University of Florida.
Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment cont...
Modeling the magnitude and distribution of estuarine sediment contamination by pollutants of historic (e.g. PCB) and emerging concern (e.g., personal care products, PCP) is often limited by incomplete site knowledge and inadequate sediment contamination sampling. We tested a mode...
Exit times for a class of random walks: exact distribution results
DEFF Research Database (Denmark)
Jacobsen, Martin
2011-01-01
the exit possible has a Laplace transform which is a rational function. The expected exit time is also determined and the paper concludes with exact distribution results concerning exits from bounded intervals. The proofs use simple martingale techniques together with some classical expansions...
How does Poisson kriging compare to the popular BYM model for mapping disease risks?
Directory of Open Access Journals (Sweden)
Gebreab Samson
2008-02-01
geography becomes more heterogeneous and when data beyond the adjacent counties are used in the estimation. The trade-off cost for the easier implementation of point Poisson kriging is slightly larger kriging variances, which reduces the precision of the model of uncertainty. Conclusion Bayesian spatial models are increasingly used by public health officials to map mortality risk from observed rates, a preliminary step towards the identification of areas of excess. More attention should however be paid to the spatial and distributional assumptions underlying the popular BYM model. Poisson kriging offers more flexibility in modeling the spatial structure of the risk and generates less smoothing, reducing the likelihood of missing areas of high risk.
King, Douglas M; Jacobson, Sheldon H
2017-12-01
Recent mass killings, such as those in Newtown, Connecticut, and Aurora, Colorado, have brought new attention to mass killings in the United States. This article examines 323 mass killings taking place between January 1, 2006, and October 4, 2016, to assess how they are distributed over time. In particular, we find that they appear to be uniformly distributed over time, which suggests that their rate has remained stable over the past decade. Moreover, analysis of subsets of these mass killings sharing a common trait (e.g., family killings, public killings) suggests that they exhibit a memoryless property: mass killing events within each category are random in the sense that the occurrence of one event does not signal whether another is imminent. However, the same memoryless property is not found when combining all mass killings into a single analysis, consistent with earlier research that found evidence of a contagion effect among mass killing events. Because of the temporal randomness of public mass killings and the wide geographic area over which they can occur, these results imply that these events may be best addressed by systemic infrastructure-based interventions that deter such events, incorporate resiliency into the response system, or impede such events until law enforcement can respond when they do occur.
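The memoryless property can be probed by testing inter-event gaps against an exponential distribution. The sketch below uses synthetic uniformly distributed event dates (not the study's data); note that estimating the exponential scale from the same data makes the KS p-value conservative (the Lilliefors caveat):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# If events occur uniformly at random in time (a homogeneous Poisson
# process), inter-event gaps are exponential, i.e. memoryless. A KS test
# of the gaps against the fitted exponential checks this.
dates = np.sort(rng.uniform(0.0, 3927.0, 323))   # 323 events over ~10.75 yr, in days
gaps = np.diff(dates)
ks = stats.kstest(gaps, 'expon', args=(0.0, gaps.mean()))
print(ks.statistic, ks.pvalue)   # large p-value: no evidence against memorylessness
```

A rejection for the pooled data but not for the category subsets would mirror the contagion-effect finding the abstract describes.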
Zhang, Li-Zhi; Yuan, Wu-Zhi
2018-04-01
The motion of coalescence-induced condensate droplets on superhydrophobic surfaces (SHS) has attracted increasing attention in energy-related applications. Previous research focused on regularly rough surfaces. Here a new approach, a mesoscale lattice Boltzmann method (LBM), is proposed and used to model the dynamic behavior of coalescence-induced droplet jumping on SHS with randomly distributed rough structures. A fast Fourier transform (FFT) method is used to generate non-Gaussian randomly distributed rough surfaces with the skewness (Sk), kurtosis (K) and root mean square roughness (Rq) obtained from real surfaces. Three typical spreading states of coalesced droplets are observed through LBM modeling on various rough surfaces, which are found to significantly influence the jumping ability of the coalesced droplet. The coalesced droplets spreading in the Cassie state or in a composite state will jump off the rough surfaces, while the ones spreading in the Wenzel state will eventually remain on the rough surfaces. It is demonstrated that rough surfaces with smaller Sk, larger Rq and K near 3.0 are beneficial to coalescence-induced droplet jumping. The new approach gives more detailed insights into the design of SHS.
Poisson sigma model with branes and hyperelliptic Riemann surfaces
International Nuclear Information System (INIS)
Ferrario, Andrea
2008-01-01
We derive the explicit form of the superpropagators in the presence of general boundary conditions (coisotropic branes) for the Poisson sigma model. This generalizes the results presented by Cattaneo and Felder [''A path integral approach to the Kontsevich quantization formula,'' Commun. Math. Phys. 212, 591 (2000)] and Cattaneo and Felder ['Coisotropic submanifolds in Poisson geometry and branes in the Poisson sigma model', Lett. Math. Phys. 69, 157 (2004)] for Kontsevich's angle function [Kontsevich, M., 'Deformation quantization of Poisson manifolds I', e-print arXiv:hep.th/0101170] used in the deformation quantization program of Poisson manifolds. The relevant superpropagators for n branes are defined as gauge fixed homotopy operators of a complex of differential forms on n sided polygons P n with particular ''alternating'' boundary conditions. In the presence of more than three branes we use first order Riemann theta functions with odd singular characteristics on the Jacobian variety of a hyperelliptic Riemann surface (canonical setting). In genus g the superpropagators present g zero mode contributions
Tanaka, Satoshi; Yoshikawa, Kohji; Minoshima, Takashi; Yoshida, Naoki
2017-11-01
We develop new numerical schemes for Vlasov-Poisson equations with high-order accuracy. Our methods are based on a spatially monotonicity-preserving (MP) scheme and are modified suitably so that the positivity of the distribution function is also preserved. We adopt an efficient semi-Lagrangian time integration scheme that is more accurate and computationally less expensive than the three-stage TVD Runge-Kutta integration. We apply our spatially fifth- and seventh-order schemes to a suite of simulations of collisionless self-gravitating systems and electrostatic plasma simulations, including linear and nonlinear Landau damping in one dimension and Vlasov-Poisson simulations in a six-dimensional phase space. The high-order schemes achieve a significantly improved accuracy in comparison with the third-order positive-flux-conserved scheme adopted in our previous study. With the semi-Lagrangian time integration, the computational cost of our high-order schemes does not significantly increase, but remains roughly the same as that of the third-order scheme. Vlasov-Poisson simulations on 128³ × 128³ mesh grids have been successfully performed on a massively parallel computer.
A Local Poisson Graphical Model for inferring networks from sequencing data.
Allen, Genevera I; Liu, Zhandong
2013-09-01
Gaussian graphical models, a class of undirected graphs or Markov networks, are often used to infer gene networks based on microarray expression data. Many scientists, however, have begun using high-throughput sequencing technologies such as RNA-sequencing or next generation sequencing to measure gene expression. As the resulting data consists of counts of sequencing reads for each gene, Gaussian graphical models are not optimal for this discrete data. In this paper, we propose a novel method for inferring gene networks from sequencing data: the Local Poisson Graphical Model. Our model assumes a Local Markov property where each variable conditional on all other variables is Poisson distributed. We develop a neighborhood selection algorithm to fit our model locally by performing a series of l1-penalized Poisson, or log-linear, regressions. This yields a fast parallel algorithm for estimating networks from next generation sequencing data. In simulations, we illustrate the effectiveness of our methods for recovering network structure from count data. A case study on breast cancer microRNAs (miRNAs), a novel application of graphical models, finds known regulators of breast cancer genes and discovers novel miRNA clusters and hubs that are targets for future research.
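One node-wise l1-penalized Poisson regression of the kind described can be sketched with proximal gradient descent (an illustrative implementation, not the authors' package; the function name `l1_poisson` and all parameters are assumptions):

```python
import numpy as np

def l1_poisson(X, y, lam_pen, lr=0.2, n_iter=3000):
    """l1-penalized Poisson regression, the building block of node-wise
    neighborhood selection: a gradient step on the Poisson negative
    log-likelihood followed by soft-thresholding, which drives small
    coefficients exactly to zero (ISTA)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (mu - y) / n
        beta -= lr * grad
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam_pen, 0.0)
    return beta

rng = np.random.default_rng(2)
n, p = 500, 10
X = rng.normal(0.0, 1.0, (n, p))
beta_true = np.zeros(p)
beta_true[:2] = [0.8, -0.6]
y = rng.poisson(np.exp(X @ beta_true))
beta = l1_poisson(X, y, lam_pen=0.1)
# the nonzero entries should concentrate on the two true predictors
```

Running one such regression per gene, with the other genes' counts as predictors, and collecting the nonzero coefficients gives the estimated neighborhood structure of the graph.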
International Nuclear Information System (INIS)
Tahir-Kheli, R.A.
1975-01-01
A few simple problems relating to random magnetic systems are presented. Translational symmetry is assumed for these systems only on the macroscopic scale; on the microscopic scale, the parameters of the various regions form a random set that obeys a probability distribution. Knowledge of the form of these probability distributions is assumed in all cases.
Non-random temporal distribution of sleep onset REM periods in the MSLT in narcolepsy.
Sansa, Gemma; Falup-Pecurariu, Cristian; Salamero, Manel; Iranzo, Alex; Santamaria, Joan
2014-06-15
The diagnosis of narcolepsy is supported by the presence of two or more sleep onset REM periods (SOREMPs) in the multiple sleep latency test (MSLT). The distribution of SOREMPs throughout the MSLT has not been systematically studied in narcolepsy. We studied the temporal distribution of SOREMPs in the MSLT of a large series of narcoleptics and calculated the effects of age and the diagnostic value of shorter versions of the test. 129 patients consecutively diagnosed with narcolepsy (73.4% with cataplexy) underwent nocturnal polysomnography followed by a five-nap MSLT. 429 SOREMPs were recorded in 645 MSLT naps (66.5%). The probability of presenting SOREMPs in the fourth nap (3:30 pm) was significantly lower than in the remaining naps: 22.4% of SOREMPs occurred in the first nap, 20.5% in the second, 20.5% in the third, 16% in the fourth and 20.5% in the fifth nap (p < 0.034). Patients older than 29 years had fewer SOREMPs than younger patients (p = 0.045). Shortening the MSLT to three or four naps decreased the capability of the test to support the diagnosis of narcolepsy by 14.7% and 10%, respectively. The temporal distribution of SOREMPs in the MSLT is not even in narcolepsy, with the fourth nap having the lowest probability of presenting a SOREMP. This should be taken into account when evaluating the results of the MSLT, and particularly when using shorter versions of the test. Copyright © 2014 Elsevier B.V. All rights reserved.
A spectral Poisson solver for kinetic plasma simulation
Szeremley, Daniel; Obberath, Jens; Brinkmann, Ralf
2011-10-01
Plasma resonance spectroscopy is a well established plasma diagnostic method, realized in several designs. One of these designs is the multipole resonance probe (MRP). In its idealized - geometrically simplified - version it consists of two dielectrically shielded, hemispherical electrodes to which an RF signal is applied. A numerical tool is under development which is capable of simulating the dynamics of the plasma surrounding the MRP in electrostatic approximation. In this contribution we concentrate on the specialized Poisson solver for that tool. The plasma is represented by an ensemble of point charges. By expanding both the charge density and the potential into spherical harmonics, a largely analytical solution of the Poisson problem can be employed. For a practical implementation, the expansion must be appropriately truncated. With this spectral solver we are able to efficiently solve the Poisson equation in a kinetic plasma simulation without the need of introducing a spatial discretization.
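The key identity behind such a spectral solver, the spherical-harmonic (multipole) expansion of the free-space Green's function, can be illustrated with a minimal sketch. This is not the solver described above; it merely evaluates the truncated expansion for a single point charge with `scipy.special.sph_harm`, and it assumes the observation point lies outside the source radius so the expansion converges.

```python
import numpy as np
from scipy.special import sph_harm

def multipole_potential(q, src, obs, lmax=20):
    """Potential of a point charge q at src, evaluated at obs (|obs| > |src|),
    via the truncated spherical-harmonic expansion of 1/|r - r'|:

        1/|r - r'| = sum_{l,m} 4*pi/(2l+1) * r'^l / r^(l+1)
                                * conj(Y_lm(src)) * Y_lm(obs)
    """
    def to_sph(p):
        r = np.linalg.norm(p)
        theta = np.arctan2(p[1], p[0]) % (2 * np.pi)   # azimuthal angle
        phi = np.arccos(p[2] / r)                      # polar angle
        return r, theta, phi

    rs, ts, ps = to_sph(src)
    ro, to, po = to_sph(obs)
    total = 0.0
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuth, polar)
            term = np.conj(sph_harm(m, l, ts, ps)) * sph_harm(m, l, to, po)
            total += (4 * np.pi / (2 * l + 1)) * (rs**l / ro**(l + 1)) * term
    return float(np.real(total)) * q
```

Truncating the l-sum is the practical step the abstract alludes to; the error decays like (|src|/|obs|)^lmax.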
A high order solver for the unbounded Poisson equation
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
In mesh-free particle methods a high order solution to the unbounded Poisson equation is usually achieved by constructing regularised integration kernels for the Biot-Savart law. Here the singular point particles are regularised using smoothed particles to obtain an accurate solution with an order of convergence consistent with the moments conserved by the applied smoothing function. In the hybrid particle-mesh method of Hockney and Eastwood (HE) the particles are interpolated onto a regular mesh where the unbounded Poisson equation is solved by a discrete non-cyclic convolution of the mesh values and the integration kernel. In this work we show an implementation of high order regularised integration kernels in the HE algorithm for the unbounded Poisson equation to formally achieve an arbitrarily high order of convergence. We further present a quantitative study of the convergence rate to give further insight.
Markov modulated Poisson process models incorporating covariates for rainfall intensity.
Thayakaran, R; Ramesh, N I
2013-01-01
Time series of rainfall bucket tip times at the Beaufort Park station, Bracknell, in the UK are modelled by a class of Markov modulated Poisson processes (MMPP) which may be thought of as a generalization of the Poisson process. Our main focus in this paper is to investigate the effects of including covariate information into the MMPP model framework on statistical properties. In particular, we look at three types of time-varying covariates namely temperature, sea level pressure, and relative humidity that are thought to be affecting the rainfall arrival process. Maximum likelihood estimation is used to obtain the parameter estimates, and likelihood ratio tests are employed in model comparison. Simulated data from the fitted model are used to make statistical inferences about the accumulated rainfall in the discrete time interval. Variability of the daily Poisson arrival rates is studied.
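As a minimal illustration of the underlying process class (not the covariate-dependent MMPP fitted in the paper), a two-state MMPP can be simulated by alternating exponential sojourns of a hidden Markov chain, each sojourn generating homogeneous Poisson arrivals at that state's rate. All parameter values below are hypothetical.

```python
import numpy as np

def simulate_mmpp(rates, q, t_end, rng):
    """Simulate a two-state Markov modulated Poisson process.

    rates[i] -- Poisson arrival rate while the hidden chain is in state i
    q[i]     -- rate of leaving state i (with two states, every jump switches)
    Returns the arrival times in [0, t_end].
    """
    t, state, arrivals = 0.0, 0, []
    while t < t_end:
        dwell = rng.exponential(1.0 / q[state])   # sojourn in current state
        seg_end = min(t + dwell, t_end)
        s = t
        while True:                               # Poisson arrivals in sojourn
            s += rng.exponential(1.0 / rates[state])
            if s > seg_end:
                break
            arrivals.append(s)
        t, state = seg_end, 1 - state
    return arrivals
```

Because the rate is itself random, counts from an MMPP are over-dispersed relative to an ordinary Poisson process, which is what makes the class attractive for bursty rainfall arrivals.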
The coupling of Poisson sigma models to topological backgrounds
Energy Technology Data Exchange (ETDEWEB)
Rosa, Dario [School of Physics, Korea Institute for Advanced Study,Seoul 02455 (Korea, Republic of)
2016-12-13
We extend the coupling to the topological backgrounds, recently worked out for the 2-dimensional BF-model, to the most general Poisson sigma models. The coupling involves the choice of a Casimir function on the target manifold and modifies the BRST transformations. This in turn induces a change in the BRST cohomology of the resulting theory. The observables of the coupled theory are analyzed and their geometrical interpretation is given. We finally couple the theory to 2-dimensional topological gravity: this is the first step to study a topological string theory in propagation on a Poisson manifold. As an application, we show that the gauge-fixed vectorial supersymmetry of the Poisson sigma models has a natural explanation in terms of the theory coupled to topological gravity.
Effect of Poisson noise on adiabatic quantum control
Kiely, A.; Muga, J. G.; Ruschhaupt, A.
2017-01-01
We present a detailed derivation of the master equation describing a general time-dependent quantum system with classical Poisson white noise and outline its various properties. We discuss the limiting cases of Poisson white noise and provide approximations for the different noise strength regimes. We show that using the eigenstates of the noise superoperator as a basis can be a useful way of expressing the master equation. Using this, we simulate various settings to illustrate different effects of Poisson noise. In particular, we show a dip in the fidelity as a function of noise strength where high fidelity can occur in the strong-noise regime for some cases. We also investigate recent claims [J. Jing et al., Phys. Rev. A 89, 032110 (2014), 10.1103/PhysRevA.89.032110] that this type of noise may improve rather than destroy adiabaticity.
Esmaili, Esmat; Mardaani, Mohammad; Rabani, Hassan
2018-01-01
The electronic transport of a ladder-like graphene nanoribbon, in which the on-site or hopping energies of a small part can be random, is modeled using the Green's function technique within the nearest-neighbor tight-binding approach. We employ a unitary transformation to convert the Hamiltonian of the nanoribbon into the Hamiltonian of a tight-binding ladder-like network. In this case, the disturbed part of the system includes second-neighbor hopping interactions, while the converted Hamiltonian of each ideal part is equivalent to the Hamiltonian of two periodic on-site chains. Therefore, we can insert the self-energies of the alternative on-site tight-binding chains into the inverse of the Green's function matrix of the ladder-like part. From this viewpoint, the conductance is constructed from two contributions, trans and cis. The results show that increasing the disorder strength increases the conductance of the trans contribution and decreases that of the cis contribution.
Gap-size distribution functions of a random sequential adsorption model of segments on a line
Araújo, N. A. M.; Cadilhe, A.
2006-05-01
We performed extensive simulations, accompanied by a detailed study, of a random sequential adsorption model of segments of two sizes on a line. We followed the kinetics towards the jamming state, but paid particular attention to the characterization of the jamming-state structure. In particular, we studied the effect of the size ratio on the mean gap size, the gap-size dispersion, the gap-size skewness, and the gap-size kurtosis at the jamming state. We also analyzed the above quantities for the four possible segment-to-segment gap types. We varied the size ratio from one to twenty. In the limit of a size ratio of one, one recovers the classical car-parking problem. We observed that at low size ratios the jamming state consists of short streaks of small and large segments, while at high size ratios the jamming-state structure is formed by long streaks of small segments separated by a single large segment. This view of the jamming-state structure as a function of the size ratio is supported by the various measured quantities. The present work can help provide insight, for example, on how to minimize the interparticle distance or minimize fluctuations around the mean particle-to-particle distance.
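In the size-ratio-one limit mentioned above, the model reduces to the classical car-parking problem, whose jamming coverage approaches Rényi's constant (about 0.7476). A brute-force sketch, with illustrative parameters, is:

```python
import bisect
import random

def car_parking_coverage(line_len, seg_len, attempts, rng):
    """Random sequential adsorption of equal segments on [0, line_len].

    Segments are dropped uniformly at random and rejected if they overlap a
    previously placed one.  With many attempts the coverage approaches the
    jamming density (~0.7476 for equal segments, the size-ratio-one limit).
    """
    lefts = []  # sorted left endpoints of the placed segments
    for _ in range(attempts):
        x = rng.uniform(0.0, line_len - seg_len)
        i = bisect.bisect_left(lefts, x)
        # no overlap with the neighbour on either side
        ok = (i == len(lefts) or lefts[i] >= x + seg_len) and \
             (i == 0 or lefts[i - 1] + seg_len <= x)
        if ok:
            bisect.insort(lefts, x)
    return len(lefts) * seg_len / line_len
```

Tracking the sorted gaps between consecutive `lefts` entries would give exactly the gap-size statistics (dispersion, skewness, kurtosis) studied in the paper.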
Quadratic Hamiltonians on non-symmetric Poisson structures
International Nuclear Information System (INIS)
Arribas, M.; Blesa, F.; Elipe, A.
2007-01-01
Many dynamical systems may be represented in a set of non-canonical coordinates that generate an su(2) algebraic structure. The topology of the phase space is that of the sphere S², the Poisson structure is that of the rigid body, and the Hamiltonian is a parametric quadratic form in these 'spherical' coordinates. However, there are other problems in which the Poisson structure loses its symmetry. In this paper we analyze this case and show how the loss of the spherical symmetry affects the phase flow and parametric bifurcations for the bi-parametric cases.
Efficient triangulation of Poisson-disk sampled point sets
Guo, Jianwei
2014-05-06
In this paper, we present a simple yet efficient algorithm for triangulating a 2D input domain containing a Poisson-disk sampled point set. The proposed algorithm combines a regular grid and a discrete clustering approach to speedup the triangulation. Moreover, our triangulation algorithm is flexible and performs well on more general point sets such as adaptive, non-maximal Poisson-disk sets. The experimental results demonstrate that our algorithm is robust for a wide range of input domains and achieves significant performance improvement compared to the current state-of-the-art approaches. © 2014 Springer-Verlag Berlin Heidelberg.
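The grid-based conflict checking that such methods rely on can be sketched with plain dart throwing (not the clustering approach of the paper): a background grid with cell size r/sqrt(2) holds at most one accepted sample per cell, so each candidate only needs to inspect a 5 x 5 neighbourhood of cells.

```python
import math
import random

def poisson_disk_sample(width, height, r, attempts, rng):
    """Dart-throwing Poisson-disk sampling with a uniform background grid.

    Cell size r/sqrt(2) guarantees at most one accepted point per cell
    (two points in one cell would be closer than r), so a conflict check
    only inspects cells within two cells of the candidate.
    """
    cell = r / math.sqrt(2)
    nx, ny = int(width / cell) + 1, int(height / cell) + 1
    grid = [[None] * ny for _ in range(nx)]
    pts = []
    for _ in range(attempts):
        x, y = rng.uniform(0.0, width), rng.uniform(0.0, height)
        gx, gy = int(x / cell), int(y / cell)
        ok = True
        for i in range(max(gx - 2, 0), min(gx + 3, nx)):
            for j in range(max(gy - 2, 0), min(gy + 3, ny)):
                p = grid[i][j]
                if p is not None and (p[0] - x) ** 2 + (p[1] - y) ** 2 < r * r:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            grid[gx][gy] = (x, y)
            pts.append((x, y))
    return pts
```

The same grid can then serve as the spatial index for the subsequent triangulation step, which is the acceleration idea the abstract describes.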
Gyrokinetic energy conservation and Poisson-bracket formulation
International Nuclear Information System (INIS)
Brizard, A.
1988-11-01
An integral expression for the gyrokinetic total energy of a magnetized plasma with general magnetic field configuration perturbed by fully electromagnetic fields was recently derived through the use of a gyro-center Lie transformation. We show that the gyrokinetic energy is conserved by the gyrokinetic Hamiltonian flow to all orders in perturbed fields. This paper is concerned with the explicit demonstration that a gyrokinetic Hamiltonian containing quadratic nonlinearities preserves the gyrokinetic energy up to third order. The Poisson-bracket formulation greatly facilitates this demonstration with the help of the Jacobi identity and other properties of the Poisson brackets. 18 refs
Adaptive maximal poisson-disk sampling on surfaces
Yan, Dongming
2012-01-01
In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of the power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM.
Robust iterative observer for source localization for Poisson equation
Majeed, Muhammad Usman
2017-01-05
The source localization problem for the Poisson equation with available noisy boundary data is well known to be highly sensitive to noise. The problem is ill posed and fails to fulfill Hadamard's stability criterion for well-posedness. In this work, a robust iterative observer is first presented for the boundary estimation problem for the Laplace equation, and then this algorithm, together with the available noisy boundary data from the Poisson problem, is used to localize point sources inside a rectangular domain. The algorithm is inspired by Kalman filter design; however, one of the space variables is treated as time-like. Numerical implementation and simulation results are detailed towards the end.
Efficient maximal Poisson-disk sampling and remeshing on surfaces
Guo, Jianwei
2015-02-01
Poisson-disk sampling is one of the fundamental research problems in computer graphics and has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves on the state-of-the-art approach in efficiency, quality, and memory consumption.
Kinasewitz, Gary T; Privalle, Christopher T; Imm, Amy; Steingrub, Jay S; Malcynski, John T; Balk, Robert A; DeAngelo, Joseph
2008-07-01
To assess the safety and efficacy of the hemoglobin-based nitric oxide scavenger, pyridoxalated hemoglobin polyoxyethylene (PHP), in patients with distributive shock. Phase II multicenter, randomized (1:1), placebo-controlled study. Fifteen intensive care units in North America. Sixty-two patients with distributive shock, > or = 2 systemic inflammatory response syndrome criteria, and persistent catecholamine dependence despite adequate fluid resuscitation (pulmonary capillary wedge pressure > or = 12). Patients were randomized to PHP at 0.25 mL/kg/hr (20 mg/kg/hr), or an equal volume of placebo, infused for up to 100 hrs, in addition to conventional vasopressor therapy. Because treatment could not be blinded, vasopressors and ventilatory support were weaned by protocol. Sixty-two patients were randomized to PHP (n = 33) or placebo (n = 29). Age, sex, etiology of shock (sepsis in 94%), and Acute Physiology and Chronic Health Evaluation II scores (33.1 +/- 8.3 vs. 30 +/- 7) were similar in PHP and placebo patients, respectively. Baseline plasma nitrite and nitrate levels were markedly elevated in both groups. PHP infusion increased systemic blood pressure within minutes. Overall 28-day mortality was similar (58% PHP vs. 59% placebo), but PHP survivors were weaned off vasopressors faster (13.7 +/- 8.2 vs. 26.3 +/- 21.4 hrs; p = .07) and spent less time on mechanical ventilation (10.4 +/- 10.2 vs. 17.4 +/- 9.9 days; p = .21). The risk ratio (PHP/placebo) for mortality was .79 (95% confidence interval, .39-1.59) when adjusted for age, sex, Acute Physiology and Chronic Health Evaluation II score, and etiology of sepsis. No excess medical interventions were noted with PHP use. PHP survivors left the intensive care unit earlier (13.6 +/- 8.6 vs. 17.9 +/- 8.2 days; p = .21) and more were discharged by day 28 (57.1 vs. 41.7%). PHP is a hemodynamically active nitric oxide scavenger. The role of PHP in distributive shock remains to be determined.
Randomized Soil Survey of the Distribution of Burkholderia pseudomallei in Rice Fields in Laos ▿ †
Rattanavong, Sayaphet; Wuthiekanun, Vanaporn; Langla, Sayan; Amornchai, Premjit; Sirisouk, Joy; Phetsouvanh, Rattanaphone; Moore, Catrin E.; Peacock, Sharon J.; Buisson, Yves; Newton, Paul N.
2011-01-01
Melioidosis is a major cause of morbidity and mortality in Southeast Asia, where the causative organism (Burkholderia pseudomallei) is present in the soil. In the Lao People's Democratic Republic (Laos), B. pseudomallei is a significant cause of sepsis around the capital, Vientiane, and has been isolated in soil near the city, adjacent to the Mekong River. We explored whether B. pseudomallei occurs in Lao soil distant from the Mekong River, drawing three axes across northwest, northeast, and southern Laos to create nine sampling areas in six provinces. Within each sampling area, a random rice field site containing a grid of 100 sampling points each 5 m apart was selected. Soil was obtained from a depth of 30 cm and cultured for B. pseudomallei. Four of nine sites (44%) were positive for B. pseudomallei, including all three sites in Saravane Province, southern Laos. The highest isolation frequency was in east Saravane, where 94% of soil samples were B. pseudomallei positive with a geometric mean concentration of 464 CFU/g soil (95% confidence interval, 372 to 579 CFU/g soil; range, 25 to 10,850 CFU/g soil). At one site in northwest Laos (Luangnamtha), only one sample (1%) was positive for B. pseudomallei, at a concentration of 80 CFU/g soil. Therefore, B. pseudomallei occurs in Lao soils beyond the immediate vicinity of the Mekong River, alerting physicians to the likelihood of melioidosis in these areas. Further studies are needed to investigate potential climatic, soil, and biological determinants of this heterogeneity. PMID:21075883
Energy Technology Data Exchange (ETDEWEB)
Granger, S.; Perotin, L. [Electricite de France (EDF), 78 - Chatou (France)
1997-12-31
Maintaining the PWR components under reliable operating conditions requires a complex design to prevent various damaging processes, including fatigue and wear problems due to flow-induced vibration. In many practical situations, it is difficult, if not impossible, to perform direct measurements or calculations of the external forces acting on vibrating structures. Instead, vibrational responses can often be conveniently measured. This paper presents an inverse method for estimating a distributed random excitation from the measurement of the structural response at a number of discrete points. This paper is devoted to the presentation of the theoretical development. The force identification method is based on a modal model for the structure and a spatial orthonormal decomposition of the excitation field. The estimation of the Fourier coefficients of this orthonormal expansion is presented. As this problem turns out to be ill-posed, a regularization process is introduced. The minimization problem associated to this process is then formulated and its solutions is developed. (author) 17 refs.
Phase diagrams of a spin-1/2 transverse Ising model with three-peak random field distribution
International Nuclear Information System (INIS)
Bassir, A.; Bassir, C.E.; Benyoussef, A.; Ez-Zahraouy, H.
1996-07-01
The effect of the transverse magnetic field on the phase diagram structure of the Ising model in a random longitudinal magnetic field with a trimodal symmetric distribution is investigated within a finite cluster approximation. We find that a small-magnetization ordered phase (small ordered phase) disappears completely for a sufficiently large value of the transverse field and/or of the concentration of the disorder of the magnetic field. Multicritical behaviour and reentrant phenomena are discussed. The regions where the tricritical behaviour, the reentrant phenomena and the small ordered phase persist are delimited as a function of the transverse field and the concentration p. Longitudinal magnetizations are also presented. (author). 33 refs, 6 figs
A symplectic Poisson solver based on Fast Fourier Transformation. The first trial
Energy Technology Data Exchange (ETDEWEB)
Vorobiev, L.G. [Gosudarstvennyj Komitet po Ispol`zovaniyu Atomnoj Ehnergii SSSR, Moscow (Russian Federation). Inst. Teoreticheskoj i Ehksperimental`noj Fiziki; Hirata, Kohji
1995-11-01
A symplectic Poisson solver numerically calculates the potential and fields due to a 2D distribution of particles in such a way that symplecticity and smoothness are assured automatically. Such a code, based on Fast Fourier Transformation combined with bicubic interpolation, is developed for use in multi-turn particle simulation in circular accelerators. Besides that, it may have a number of applications where computations of space-charge forces should obey a symplecticity criterion. Detailed computational schemes of all the algorithms are outlined to facilitate practical programming. (author).
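The FFT part of such a solver can be sketched for a periodic domain (the paper's bicubic interpolation and the symplecticity machinery are not reproduced here): each Fourier mode of the Poisson equation is solved by a single division.

```python
import numpy as np

def fft_poisson_periodic(rho, dx):
    """Solve laplacian(phi) = rho on a 2D periodic grid via FFT.

    In Fourier space the equation becomes -(kx^2 + ky^2) * phi_hat = rho_hat,
    so each mode is solved by one division.  The k = 0 mode (the mean of phi)
    is set to zero, which requires rho to have zero mean.
    """
    n0, n1 = rho.shape
    kx = 2 * np.pi * np.fft.fftfreq(n0, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(n1, d=dx)
    k2 = kx[:, None] ** 2 + ky[None, :] ** 2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nz = k2 > 0
    phi_hat[nz] = -rho_hat[nz] / k2[nz]
    return np.fft.ifft2(phi_hat).real
```

For a charge density made of a single Fourier mode the spectral solve is exact up to round-off, which gives a direct check of the implementation.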
Czech Academy of Sciences Publication Activity Database
Poplová, Michaela; Sovka, P.; Cifra, Michal
2017-01-01
Roč. 12, č. 12 (2017), č. článku e0188622. E-ISSN 1932-6203 R&D Projects: GA ČR(CZ) GA13-29294S Grant - others:AV ČR(CZ) SAV-15-22 Program:Bilaterální spolupráce Institutional support: RVO:67985882 Keywords : Poisson distribution * Photons * Neutrophils Subject RIV: JB - Sensors, Measurment, Regulation OBOR OECD: Electrical and electronic engineering Impact factor: 2.806, year: 2016
Marey, Isabelle; Ben Yaou, Rabah; Deburgrave, Nathalie; Vasson, Aurélie; Nectoux, Juliette; Leturcq, France; Eymard, Bruno; Laforet, Pascal; Behin, Anthony; Stojkovic, Tanya; Mayer, Michèle; Tiffreau, Vincent; Desguerre, Isabelle; Boyer, François Constant; Nadaj-Pakleza, Aleksandra; Ferrer, Xavier; Wahbi, Karim; Becane, Henri-Marc; Claustres, Mireille; Chelly, Jamel; Cossee, Mireille
2016-05-27
Dystrophinopathies are mostly caused by copy number variations, especially deletions, in the dystrophin gene (DMD). Despite the large size of the gene, deletions do not occur randomly but mainly in two hot spots, the main one involving exons 45 to 55. The underlying processes are complex and implicate two main mechanisms: non-homologous end joining (NHEJ) and micro-homology-mediated replication-dependent recombination (MMRDR). Our goals were to assess the distribution of intronic breakpoints (BPs) in the genomic sequence of the main deletion hot spot of the DMD gene and to search for specific sequences at or near BPs that might promote BP occurrence or be associated with DNA break repair. Using comparative genomic hybridization microarrays, 57 deletions within the intron 44 to 55 region were mapped. Moreover, 21 junction fragments were sequenced to search for specific sequences. Non-randomly distributed BPs were found in introns 44, 47, 48, 49 and 53, and 50% of BPs clustered within genomic regions of less than 700 bp. Repeated elements (REs), known to promote gene rearrangement via several mechanisms, were present in the vicinity of 90% of clustered BPs and less frequently (72%) close to scattered BPs, illustrating the important role of such elements in the occurrence of DMD deletions. Palindromic and TTTAAA sequences, which also promote DNA instability, were identified at fragment junctions in 20% and 5% of cases, respectively. Micro-homologies (76%) and insertions or deletions of small sequences were frequently found at BP junctions. Our results illustrate, in a large series of patients, the important role of REs and other genomic features in DNA breaks, and the involvement of different mechanisms in DMD gene deletions: mainly replication-error repair mechanisms, but also NHEJ and potentially aberrant firing of replication origins. A combination of these mechanisms may also be possible.
International Nuclear Information System (INIS)
Ruthe, Sebastian
2015-01-01
The ongoing shift towards decentralized power systems and the related rapidly growing number of decentralized energy resources (DER) like wind and PV units, CHP units, storage devices and shiftable loads requires new information systems and control algorithms in order to plan and optimize the commitment of DER in line with the conventional generation system. In this context the paradigm of market based control derived from the Lagrangian relaxation of the unit commitment problem represents a promising solution approach for building highly scalable distributed systems able to perform this task within the required time limits. Market based control approaches typically achieve high quality solutions and protect the private data of the controlled units. However, in the case of DER with discontinuous utility functions, market based control approaches suffer from the problem of ''joint commitment'', which may lead to divergence of the iterative solution algorithm, resulting in highly cost-inefficient solutions. This thesis introduces a new concept of randomizing the Lagrangian multipliers to spread the individual commitment thresholds of DER, thereby mitigating the negative effects of ''joint commitments''. Based on the randomized solution approach, different bounds for the solution quality regarding the overall energy production costs and the equilibrium constraints are established. Furthermore, it is shown how the developed approach can be utilized to build new scalable information systems for future energy markets and their interfaces to the existing energy markets.
Multi-parameter full waveform inversion using Poisson
Oh, Juwon
2016-07-21
In multi-parameter full waveform inversion (FWI), the success of recovering each parameter depends on the characteristics of the partial derivative wavefields (or virtual sources), which differ according to parameterisation. Elastic FWIs based on the two conventional parameterisations (one uses the Lamé constants and density; the other employs P- and S-wave velocities and density) have low resolution of gradients for P-wave velocities (or λ). Limitations occur because the virtual sources for P-wave velocity or λ (one of the Lamé constants) are related only to P-P diffracted waves, and generate isotropic explosions, which reduce the spatial resolution of the FWI for these parameters. To increase the spatial resolution, we propose a new parameterisation using P-wave velocity, Poisson's ratio, and density for frequency-domain multi-parameter FWI for isotropic elastic media. By introducing Poisson's ratio instead of S-wave velocity, the virtual source for the P-wave velocity generates P-S and S-S diffracted waves as well as P-P diffracted waves in the partial derivative wavefields for the P-wave velocity. Numerical examples of the cross-triangle-square (CTS) model indicate that the new parameterisation provides highly resolved descent directions for the P-wave velocity. Numerical examples of noise-free and noisy data synthesised for the elastic Marmousi-II model support the fact that the new parameterisation is more robust to noise than the two conventional parameterisations.
On covariant Poisson brackets in classical field theory
International Nuclear Information System (INIS)
Forger, Michael; Salles, Mário O.
2015-01-01
How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem, as testified by the extensive literature on "multisymplectic Poisson brackets," together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate, which has come to be known as the Peierls-De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic "multisymplectic Poisson bracket" already proposed in the 1970s can be derived from the Peierls-De Witt bracket, applied to a special class of functionals. This relation allows one to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.
Poisson processes on groups and Feynman path integrals
International Nuclear Information System (INIS)
Combe, P.; Rodriguez, R.; Sirugue-Collin, M.; Centre National de la Recherche Scientifique, 13 - Marseille; Sirugue, M.
1979-09-01
An expression is given for the perturbation of a free evolution by a gentle, possibly velocity-dependent, potential, in terms of the expectation with respect to a Poisson process on a group. Various applications are given, in particular to ordinary quantum mechanics but also to Fermi and spin systems.
An application of the Autoregressive Conditional Poisson (ACP) model
CSIR Research Space (South Africa)
Holloway, Jennifer P
2010-11-01
Full Text Available When modelling count data that comes in the form of a time series, the static Poisson regression and standard time series models are often not appropriate. A current study therefore involves the evaluation of several observation-driven and parameter...
The Quantum Poisson Bracket and Transformation Theory in ...
Indian Academy of Sciences (India)
The Quantum Poisson Bracket and Transformation Theory in Quantum Mechanics: Dirac's Early Work in Quantum Theory. Kamal Datta. General Article, Resonance – Journal of Science Education, Volume 8, Issue 8, August 2003, pp. 75-85.
A high order solver for the unbounded Poisson equation
DEFF Research Database (Denmark)
Hejlesen, Mads Mølholm; Rasmussen, Johannes Tophøj; Chatelain, Philippe
2012-01-01
This work improves upon Hockney and Eastwood's Fourier-based algorithm for the unbounded Poisson equation to formally achieve arbitrary high order of convergence without any additional computational cost. We assess the methodology on the kinematic relations between the velocity and vorticity fields....
Coefficient Inverse Problem for Poisson's Equation in a Cylinder
Solov'ev, V. V.
2011-01-01
The inverse problem of determining the coefficient on the right-hand side of Poisson's equation in a cylindrical domain is considered. The Dirichlet boundary value problem is studied. Two types of additional information (overdetermination) can be specified: (i) the trace of the solution to the
Modeling corporate defaults: Poisson autoregressions with exogenous covariates (PARX)
DEFF Research Database (Denmark)
Agosto, Arianna; Cavaliere, Guiseppe; Kristensen, Dennis
We develop a class of Poisson autoregressive models with additional covariates (PARX) that can be used to model and forecast time series of counts. We establish the time series properties of the models, including conditions for stationarity and existence of moments. These results are in turn used...
Is it safe to use Poisson statistics in nuclear spectrometry?
International Nuclear Information System (INIS)
Pomme, S.; Robouch, P.; Arana, G.; Eguskiza, M.; Maguregui, M.I.
2000-01-01
The boundary conditions in which Poisson statistics can be applied in nuclear spectrometry are investigated. Improved formulas for the uncertainty of nuclear counting with deadtime and pulse pileup are presented. A comparison is made between the expected statistical uncertainty for loss-free counting, fixed live-time and fixed real-time measurements. (author)
Nambu-Poisson reformulation of the finite dimensional dynamical systems
International Nuclear Information System (INIS)
Baleanu, D.; Makhaldiani, N.
1998-01-01
A system of nonlinear ordinary differential equations which in a particular case reduces to Volterra's system is introduced. In the two simplest cases, we find the complete sets of integrals of motion using the Nambu-Poisson reformulation of Hamiltonian dynamics. In these cases the systems are solved by quadratures.
A Poisson type formula for Hardy classes on Heisenberg's group
Directory of Open Access Journals (Sweden)
Lopushansky O.V.
2010-06-01
Full Text Available The Hardy-type class of complex functions with infinitely many variables, defined on the Schrödinger irreducible unitary orbit of the reduced Heisenberg group generated by the Gauss density, is investigated. A Poisson-type integral formula for their analytic extensions to an open ball is established. Taylor coefficients of the analytic extensions are described via the associated symmetric Fock space.
Subsonic Flow for the Multidimensional Euler-Poisson System
Bae, Myoungjean; Duan, Ben; Xie, Chunjing
2016-04-01
We establish the existence and stability of subsonic potential flow for the steady Euler-Poisson system in a multidimensional nozzle of a finite length when prescribing the electric potential difference on a non-insulated boundary from a fixed point at the exit, and prescribing the pressure at the exit of the nozzle. The Euler-Poisson system for subsonic potential flow can be reduced to a nonlinear elliptic system of second order. In this paper, we develop a technique to achieve a priori {C^{1,α}} estimates of solutions to a quasi-linear second order elliptic system with mixed boundary conditions in a multidimensional domain enclosed by a Lipschitz continuous boundary. In particular, we discovered a special structure of the Euler-Poisson system which enables us to obtain {C^{1,α}} estimates of the velocity potential and the electric potential functions, and this leads us to establish structural stability of subsonic flows for the Euler-Poisson system under perturbations of various data.
Poisson-generalized gamma empirical Bayes model for disease ...
African Journals Online (AJOL)
In spatial disease mapping, the use of Bayesian models of estimation technique is becoming popular for smoothing relative risks estimates for disease mapping. The most common Bayesian conjugate model for disease mapping is the Poisson-Gamma Model (PG). To explore further the activity of smoothing of relative risk ...
Inhibition in speed and concentration tests: The Poisson inhibition model
Smit, J.C.; Ven, A.H.G.S. van der
1995-01-01
A new model is presented to account for the reaction time fluctuations in concentration tests. The model is a natural generalization of an earlier model, the so-called Poisson-Erlang model, published by Pieters & van der Ven (1982). First, a description is given of the type of tasks for which the
Boundary singularity of Poisson and harmonic Bergman kernels
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2015-01-01
Roč. 429, č. 1 (2015), s. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords : harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170
Characterization and global analysis of a family of Poisson structures
Energy Technology Data Exchange (ETDEWEB)
Hernandez-Bermejo, Benito [Escuela Superior de Ciencias Experimentales y Tecnologia, Edificio Departamental II, Universidad Rey Juan Carlos, Calle Tulipan S/N, 28933 (Mostoles), Madrid (Spain)]. E-mail: benito.hernandez@urjc.es
2006-06-26
A three-dimensional family of solutions of the Jacobi equations for Poisson systems is characterized. In spite of its general form, the explicit and global determination of its main features is possible, such as the symplectic structure and the construction of the Darboux canonical form. Examples are given.
On covariant Poisson brackets in classical field theory
Energy Technology Data Exchange (ETDEWEB)
Forger, Michael [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Salles, Mário O. [Instituto de Matemática e Estatística, Universidade de São Paulo, Caixa Postal 66281, BR–05315-970 São Paulo, SP (Brazil); Centro de Ciências Exatas e da Terra, Universidade Federal do Rio Grande do Norte, Campus Universitário – Lagoa Nova, BR–59078-970 Natal, RN (Brazil)
2015-10-15
How to give a natural geometric definition of a covariant Poisson bracket in classical field theory has for a long time been an open problem—as testified by the extensive literature on “multisymplectic Poisson brackets,” together with the fact that all these proposals suffer from serious defects. On the other hand, the functional approach does provide a good candidate which has come to be known as the Peierls–De Witt bracket and whose construction in a geometrical setting is now well understood. Here, we show how the basic “multisymplectic Poisson bracket” already proposed in the 1970s can be derived from the Peierls–De Witt bracket, applied to a special class of functionals. This relation allows us to trace back most (if not all) of the problems encountered in the past to ambiguities (the relation between differential forms on multiphase space and the functionals they define is not one-to-one) and also to the fact that this class of functionals does not form a Poisson subalgebra.
Poisson sampling - The adjusted and unadjusted estimator revisited
Michael S. Williams; Hans T. Schreuder; Gerardo H. Terrazas
1998-01-01
The prevailing assumption, that for Poisson sampling the adjusted estimator "Y-hat a" is always substantially more efficient than the unadjusted estimator "Y-hat u" , is shown to be incorrect. Some well known theoretical results are applicable since "Y-hat a" is a ratio-of-means estimator and "Y-hat u" a simple unbiased estimator...
Poisson Regression Analysis of Illness and Injury Surveillance Data
Energy Technology Data Exchange (ETDEWEB)
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational database, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in tabular and graphical form, and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion, and a more detailed analysis attributes the over-dispersion to extra-Poisson variation.
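The core dispersion check this abstract describes can be sketched in a few lines. The rate estimate and Pearson statistic below assume a single common rate across strata, and the counts and person-times are hypothetical, not DOE data.

```python
def poisson_rate_and_dispersion(counts, persontime):
    """MLE of a common event rate and a Pearson dispersion statistic.

    Under a pure Poisson model the expected count in stratum i is
    rate * persontime[i]; the Pearson statistic divided by its degrees
    of freedom should be near 1.  Values well above 1 point to
    extra-Poisson variation (over-dispersion).
    """
    rate = sum(counts) / sum(persontime)           # MLE under a common rate
    expected = [rate * t for t in persontime]
    pearson = sum((y - e) ** 2 / e for y, e in zip(counts, expected))
    df = len(counts) - 1                           # one fitted parameter
    return rate, pearson / df

# Hypothetical absence counts and person-years per stratum.
rate, phi = poisson_rate_and_dispersion([12, 7, 30, 5],
                                        [100.0, 60.0, 240.0, 50.0])
```

A full analysis would use Poisson regression with covariates (the log-linear model described above); this sketch only shows the dispersion diagnostic.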
Application of random matrix theory to biological networks
Energy Technology Data Exchange (ETDEWEB)
Luo Feng [Department of Computer Science, Clemson University, 100 McAdams Hall, Clemson, SC 29634 (United States); Department of Pathology, U.T. Southwestern Medical Center, 5323 Harry Hines Blvd. Dallas, TX 75390-9072 (United States); Zhong Jianxin [Department of Physics, Xiangtan University, Hunan 411105 (China) and Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)]. E-mail: zhongjn@ornl.gov; Yang Yunfeng [Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Scheuermann, Richard H. [Department of Pathology, U.T. Southwestern Medical Center, 5323 Harry Hines Blvd. Dallas, TX 75390-9072 (United States); Zhou Jizhong [Department of Botany and Microbiology, University of Oklahoma, Norman, OK 73019 (United States) and Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)]. E-mail: zhouj@ornl.gov
2006-09-25
We show that spectral fluctuation of interaction matrices of a yeast protein-protein interaction network and a yeast metabolic network follows the description of the Gaussian orthogonal ensemble (GOE) of random matrix theory (RMT). Furthermore, we demonstrate that while the global biological networks evaluated belong to GOE, removal of interactions between constituents transitions the networks to systems of isolated modules described by the Poisson distribution. Our results indicate that although biological networks are very different from other complex systems at the molecular level, they display the same statistical properties at network scale. The transition point provides a new objective approach for the identification of functional modules.
Zhang, Guo-Qiang; Tao, Shiqiang; Xing, Guangming; Mozes, Jeno; Zonjy, Bilal; Lhatoo, Samden D
2015-01-01
Background A unique study identifier serves as a key for linking research data about a study subject without revealing protected health information in the identifier. While sufficient for single-site and limited-scale studies, the use of common unique study identifiers has several drawbacks for large multicenter studies, where thousands of research participants may be recruited from multiple sites. An important property of study identifiers is error tolerance (or validatability), in that inadvertent editing mistakes during their transmission and use will most likely result in invalid study identifiers. Objective This paper introduces a novel method called "Randomized N-gram Hashing (NHash)" for generating unique study identifiers in a distributed and validatable fashion in multicenter research. NHash has a unique set of properties: (1) it is a pseudonym serving the purpose of linking research data about a study participant for research purposes; (2) it can be generated automatically in a completely distributed fashion with virtually no risk of identifier collision; (3) it incorporates a set of cryptographic hash functions based on N-grams, with a combination of additional encryption techniques such as a shift cipher; (4) it is validatable (error tolerant) in the sense that inadvertent edit errors will mostly result in invalid identifiers. Methods NHash consists of 2 phases. First, an intermediate string using randomized N-gram hashing is generated. This string consists of a collection of N-gram hashes f_1, f_2, ..., f_k. The input for each function f_i has 3 components: a random number r, an integer n, and input data m. The result, f_i(r, n, m), is an n-gram of m with a starting position s, which is computed as (r mod |m|), where |m| represents the length of m. The output of Step 1 is the concatenation of the sequence f_1(r_1, n_1, m_1), f_2(r_2, n_2, m_2), ..., f_k(r_k, n_k, m_k). In the second phase, the intermediate string generated in Phase 1 is encrypted
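A minimal sketch of the Phase 1 n-gram extraction described above. The wrap-around behaviour at the end of m and the sample inputs are assumptions, and the Phase 2 encryption (e.g. the shift cipher) is omitted.

```python
def ngram_hash(r, n, m):
    """One component f_i(r, n, m): the n-gram of m starting at
    s = r mod |m|.  Wrapping around the end of m is an assumption;
    the abstract does not specify the boundary behaviour."""
    s = r % len(m)
    return (m + m)[s:s + n]          # doubling m gives cheap wrap-around

def nhash_intermediate(triples):
    """Phase 1: concatenate f_1(r_1, n_1, m_1), ..., f_k(r_k, n_k, m_k)."""
    return "".join(ngram_hash(r, n, m) for r, n, m in triples)

# Hypothetical (r, n, m) inputs; the real NHash then encrypts this
# intermediate string in Phase 2 (e.g. with a shift cipher).
inter = nhash_intermediate([(7, 3, "JOHNSMITH"), (4, 2, "19800101")])
```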
Poisson traces, D-modules, and symplectic resolutions
Etingof, Pavel; Schedler, Travis
2018-03-01
We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.
Poisson structure of dynamical systems with three degrees of freedom
Gümral, Hasan; Nutku, Yavuz
1993-12-01
It is shown that the Poisson structure of dynamical systems with three degrees of freedom can be defined in terms of an integrable one-form in three dimensions. Advantage is taken of this fact and the theory of foliations is used in discussing the geometrical structure underlying complete and partial integrability. Techniques for finding Poisson structures are presented and applied to various examples such as the Halphen system which has been studied as the two-monopole problem by Atiyah and Hitchin. It is shown that the Halphen system can be formulated in terms of a flat SL(2,R)-valued connection and belongs to a nontrivial Godbillon-Vey class. On the other hand, for the Euler top and a special case of three-species Lotka-Volterra equations which are contained in the Halphen system as limiting cases, this structure degenerates into the form of globally integrable bi-Hamiltonian structures. The globally integrable bi-Hamiltonian case is a linear and the SL(2,R) structure is a quadratic unfolding of an integrable one-form in 3+1 dimensions. It is shown that the existence of a vector field compatible with the flow is a powerful tool in the investigation of Poisson structure and some new techniques for incorporating arbitrary constants into the Poisson one-form are presented herein. This leads to some extensions, analogous to q extensions, of Poisson structure. The Kermack-McKendrick model and some of its generalizations describing the spread of epidemics, as well as the integrable cases of the Lorenz, Lotka-Volterra, May-Leonard, and Maxwell-Bloch systems admit globally integrable bi-Hamiltonian structure.
Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers
Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan
2018-04-01
The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi random (NU-QR) reordering of the phase encoding (k_y–k_z) lines was incorporated into 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of the k-space. Respiratory self-gating in combination with SPIRiT-reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan times (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile of a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times. Therefore, within the same scan time 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion artifact reduced 4D data sets with isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm³ in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed. NU-QR sampling enables for stable 4D
Non-Poisson counting statistics of a hybrid G-M counter dead time model
International Nuclear Information System (INIS)
Lee, Sang Hoon; Jae, Moosung; Gardner, Robin P.
2007-01-01
The counting statistics of a G-M counter with a considerable dead time event rate deviates from Poisson statistics. Important characteristics such as observed counting rates as a function of true counting rates, variances, and interval distributions were analyzed for three dead time models (non-paralyzable, paralyzable, and hybrid) with the help of GMSIM, a Monte Carlo dead time effect simulator. The simulation results showed good agreement with the models in observed counting rates and variances. It was found through GMSIM simulations that the interval distribution for the hybrid model shows three distinctive regions: a complete cutoff region for the duration of the total dead time, a degraded exponential region, and an enhanced exponential region. By measuring the cutoff and the duration of the degraded exponential from the pulse interval distribution, it is possible to evaluate the two dead times in the hybrid model.
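The two classical dead time models that the hybrid model combines can be sketched as follows. The observed-rate formulas for true rate n and dead time tau are the standard textbook ones, and the rates used are illustrative.

```python
import math

def nonparalyzable(n, tau):
    """Observed rate for a non-paralyzable dead time tau at true rate n."""
    return n / (1.0 + n * tau)

def paralyzable(n, tau):
    """Observed rate for a paralyzable dead time tau at true rate n."""
    return n * math.exp(-n * tau)

# Illustrative values: 10 kcps true rate, 10 microsecond dead time.
n, tau = 1.0e4, 1.0e-5
m_np = nonparalyzable(n, tau)    # ~9091 cps
m_p = paralyzable(n, tau)        # ~9048 cps; the models diverge as n*tau grows
```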
Directory of Open Access Journals (Sweden)
E.O. Ulloa-Dávila
2017-12-01
Full Text Available An approximate analytical solution to the fluctuation potential problem in the modified Poisson-Boltzmann theory of electrolyte solutions in the restricted primitive model is presented. The solution is valid for all inter-ionic distances, including contact values. The fluctuation potential solution is implemented in the theory to describe the structure of the electrolyte in terms of the radial distribution functions, and to calculate some aspects of thermodynamics, viz., configurational reduced energies and osmotic coefficients. The calculations have been made for symmetric valence 1:1 systems at the physical parameters of ionic diameter 4.25·10^{-10} m, relative permittivity 78.5, absolute temperature 298 K, and molar concentrations 0.1038, 0.425, 1.00, and 1.968. Radial distribution functions are compared with the corresponding results from the symmetric Poisson-Boltzmann theory and the conventional and modified Poisson-Boltzmann theories. Comparisons have also been done for the contact values of the radial distributions, reduced configurational energies, and osmotic coefficients as functions of electrolyte concentration. Some Monte Carlo simulation data from the literature are also included in the assessment of the thermodynamic predictions. Results show very good agreement with the Monte Carlo results and some improvement in osmotic coefficients and radial distribution function contact values relative to these theories. The reduced energy curve shows excellent agreement with Monte Carlo data for molarities up to 1 mol/dm^3.
International Nuclear Information System (INIS)
Raharjo, W.; Palupi, I. R.; Nurdian, S. W.; Giamboro, W. S.; Soesilo, J.
2016-01-01
Poisson's ratio reflects the elastic properties of a rock. Its value is governed by the ratio of P- to S-wave velocity: high ratios are associated with partial melting, while low ratios are associated with gas-saturated rock. Java, which has many volcanoes resulting from the collision between the Australian and Eurasian plates, also experiences earthquakes that generate P and S waves. Using tomography techniques, the distribution of Poisson's ratio can be mapped. Western Java is dominated by high Poisson's ratio up to Mount Slamet and Dieng in Central Java, while the eastern part of Java is dominated by low Poisson's ratio. The transition in Poisson's ratio is located in Central Java, which is also supported by the differing characteristics of hot-water manifestations in the geothermal potential areas in the west and east of Central Java Province. Poisson's ratio also decreases with increasing depth, consistent with the cold oceanic plate subducting beneath the continental plate. (paper)
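The velocity ratio mentioned above maps onto Poisson's ratio through the standard elastic relation ν = (Vp² − 2Vs²) / (2(Vp² − Vs²)). A minimal sketch, with illustrative velocities rather than the Java tomography values:

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities (standard elasticity)."""
    return (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))

# Illustrative velocities: vp/vs = sqrt(3) gives a ratio of ~0.25;
# partial melt raises vp/vs (and the ratio), gas saturation lowers it.
nu = poissons_ratio(3.0 ** 0.5, 1.0)
```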
International Nuclear Information System (INIS)
Birchall, A.; Muirhead, C.R.; James, A.C.
1988-01-01
An analytical expression has been derived for the k-sum distribution, formed by summing k random variables from a lognormal population. Poisson statistics are used with this distribution to derive the distribution of intake when breathing an atmosphere with a constant particle number concentration. Bayesian inference is then used to calculate the posterior probability distribution of concentrations from a given measurement. This is combined with the above intake distribution to give the probability distribution of intake resulting from a single measurement of activity made by an ideal sampler. It is shown that the probability distribution of intake is very dependent on the prior distribution used in Bayes' theorem. The usual prior assumption, that all number concentrations are equally probable, leads to an imbalance in the posterior intake distribution. This can be resolved if a new prior proportional to w^{-2/3} is used, where w is the expected number of particles collected. (author)
Haris, A.; Nenggala, Y.; Suparno, S.; Raguwanti, R.; Riyanto, A.
2017-07-01
Low impedance contrast between shale and sand layers, which can be found where a shale layer is wrapped in a sand reservoir, is a challenging case for explorationists in distinguishing the sand distribution from the shale layer. In this paper, we present the implementation of Poisson impedance in mapping sand distribution in the Gumai formation, Jambas Field, Jambi Sub-basin. The Gumai formation has become a prospective zone, which contains sandstone with strong lateral change. The facies characteristics of the Gumai formation, which change laterally, have been mapped based on the acoustic impedance (AI) and shear impedance (SI). These two impedances, which are yielded by performing simultaneous seismic inversion, are then combined to generate the Poisson impedance. The Poisson impedance is conceptually formulated as a contrast between AI and a scaled SI, with the scale estimated from the gradient of the relationship between AI and SI. Our experiment shows that the Poisson impedance map is able to separate the sand distribution from the shale layer. Therefore, the sand facies is clearly delineated from the contrast of Poisson impedance.
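The construction described here, PI = AI − c·SI with c taken from the AI–SI trend, can be sketched as follows. The least-squares slope and the sample impedances are assumptions for illustration, not the Jambas Field workflow.

```python
def poisson_impedance(ai, si):
    """PI_i = AI_i - c * SI_i, with the scale c taken here as the
    least-squares slope of AI against SI (one reading of the
    'gradient of the relationship' in the abstract)."""
    n = len(ai)
    mean_ai, mean_si = sum(ai) / n, sum(si) / n
    cov = sum((a - mean_ai) * (s - mean_si) for a, s in zip(ai, si))
    var = sum((s - mean_si) ** 2 for s in si)
    c = cov / var
    return [a - c * s for a, s in zip(ai, si)]

# Toy impedances with AI exactly 2*SI, so the scaled SI removes everything
# and the Poisson impedance is zero; real data would leave a residual
# that highlights deviations from the background trend.
pi = poisson_impedance([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
```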
Directory of Open Access Journals (Sweden)
Regad Leslie
2010-01-01
Full Text Available Abstract Background In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also avoids any product of convolutions of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Conclusions Our algorithms prove to be effective and able to handle real data sets with
Random matrix theory and higher genus integrability: the quantum chiral Potts model
International Nuclear Information System (INIS)
Angles d'Auriac, J.Ch.; Maillard, J.M.; Viallet, C.M.
2002-01-01
We perform a random matrix theory (RMT) analysis of the quantum four-state chiral Potts chain for different sizes of the chain up to size L = 8. Our analysis gives clear evidence of Gaussian orthogonal ensemble (GOE) statistics, suggesting the existence of a generalized time-reversal invariance. Furthermore, a change from the (generic) GOE distribution to a Poisson distribution occurs when the integrability conditions are met. The chiral Potts model is known to correspond to a (star-triangle) integrability associated with curves of genus higher than zero or one. Therefore, the RMT analysis can also be seen as a detector of 'higher genus integrability'. (author)
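The two spacing statistics contrasted above are commonly compared through the Wigner surmise for the GOE and the exponential law for Poisson spectra. A minimal sketch; these closed forms are the standard RMT ones, not taken from the paper:

```python
import math

def wigner_goe(s):
    """Wigner surmise for GOE level spacings; vanishes at s = 0 (level repulsion)."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

def poisson_spacing(s):
    """Spacing density for uncorrelated (integrable) spectra; peaks at s = 0."""
    return math.exp(-s)

# The statistics differ most at small spacings, which is what the
# GOE-to-Poisson transition at the integrability conditions probes.
at_zero = (wigner_goe(0.0), poisson_spacing(0.0))   # (0.0, 1.0)
```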
2D sigma models and differential Poisson algebras
Energy Technology Data Exchange (ETDEWEB)
Arias, Cesar [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Boulanger, Nicolas [Service de Mécanique et Gravitation, Université de Mons - UMONS,20 Place du Parc, 7000 Mons (Belgium); Laboratoire de Mathématiques et Physique Théorique,Unité Mixte de Recherche 7350 du CNRS, Fédération de Recherche 2964 Denis Poisson,Université François Rabelais, Parc de Grandmont, 37200 Tours (France); Sundell, Per [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Torres-Gomez, Alexander [Departamento de Ciencias Físicas, Universidad Andres Bello,Republica 220, Santiago (Chile); Instituto de Ciencias Físicas y Matemáticas, Universidad Austral de Chile-UACh,Valdivia (Chile)
2015-08-18
We construct a two-dimensional topological sigma model whose target space is endowed with a Poisson algebra for differential forms. The model consists of an equal number of bosonic and fermionic fields of worldsheet form degrees zero and one. The action is built using exterior products and derivatives, without any reference to a worldsheet metric, and is of the covariant Hamiltonian form. The equations of motion define a universally Cartan integrable system. In addition to gauge symmetries, the model has one rigid nilpotent supersymmetry corresponding to the target space de Rham operator. The rigid and local symmetries of the action, respectively, are equivalent to the Poisson bracket being compatible with the de Rham operator and obeying graded Jacobi identities. We propose that perturbative quantization of the model yields a covariantized differential star product algebra of Kontsevich type. We comment on the resemblance to the topological A model.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, sparse representations of images have recently been shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, a pixel-based total variation regularization term, and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and method noise, the proposed algorithm outperforms state-of-the-art methods.
Invariants and labels for Lie-Poisson Systems
International Nuclear Information System (INIS)
Thiffeault, J.L.; Morrison, P.J.
1998-04-01
Reduction is a process that uses symmetry to lower the order of a Hamiltonian system. The new variables in the reduced picture are often not canonical: there are no clear variables representing positions and momenta, and the Poisson bracket obtained is not of the canonical type. Specifically, we give two examples that give rise to brackets of the noncanonical Lie-Poisson form: the rigid body and the two-dimensional ideal fluid. From these simple cases, we then use the semidirect product extension of algebras to describe more complex physical systems. The Casimir invariants in these systems are examined, and some are shown to be linked to the recovery of information about the configuration of the system. We discuss a case in which the extension is not a semidirect product, namely compressible reduced MHD, and find for this case that the Casimir invariants lend partial information about the configuration of the system
Reference manual for the POISSON/SUPERFISH Group of Codes
Energy Technology Data Exchange (ETDEWEB)
1987-01-01
The POISSON/SUPERFISH Group codes were set up to solve two separate problems: the design of magnets and the design of rf cavities in a two-dimensional geometry. The first stage of either problem is to describe the layout of the magnet or cavity in a way that can be used as input to solve the generalized Poisson equation for magnets or the Helmholtz equations for cavities. The computer codes require that the problems be discretized by replacing the differentials (dx,dy) by finite differences ({delta}X,{delta}Y). Instead of defining the function everywhere in a plane, the function is defined only at a finite number of points on a mesh in the plane.
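The finite-difference discretization described above can be illustrated with a short sketch (not the POISSON/SUPERFISH code itself; a toy Jacobi iteration on the 5-point stencil for the ordinary Poisson equation):

```python
import numpy as np

def solve_poisson_jacobi(rho, h=1.0, iters=5000):
    """Solve -(u_xx + u_yy) = rho on a mesh with u = 0 on the boundary,
    replacing differentials (dx, dy) by finite differences on the grid."""
    u = np.zeros_like(rho)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + h * h * rho[1:-1, 1:-1])
    return u

# point source in the middle of a 33x33 mesh
n = 33
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0
u = solve_poisson_jacobi(rho)
assert u[n // 2, n // 2] == u.max() > 0      # potential peaks at the source
```

The function is defined only at the finite number of mesh points, exactly as the abstract describes; production codes use far better meshes and solvers.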
Exponential Stability of Stochastic Systems with Delay and Poisson Jumps
Directory of Open Access Journals (Sweden)
Wenli Zhu
2014-01-01
Full Text Available This paper focuses on a class of nonlinear stochastic delay systems with Poisson jumps, based on Lyapunov stability theory, stochastic analysis, and inequality techniques. The existence and uniqueness of the adapted solution to such systems are proved by applying the fixed point theorem. By constructing a Lyapunov function and using Doob's martingale inequality and the Borel-Cantelli lemma, sufficient conditions are given for the exponential stability in the mean square of such systems, and we prove that exponential stability in the mean square implies almost sure exponential stability. The obtained results show that if the stochastic system is exponentially stable and the time delay is sufficiently small, then the corresponding stochastic delay system with Poisson jumps remains exponentially stable; an upper limit on the time delay is derived from the obtained results when the system is exponentially stable, and these conditions are easily verified and applied in practice.
Nickele, Mariane A; Oliveira, Edilson B de; Reis Filho, Wilson; Iede, Edson T; Ribeiro, Rodrigo D
2010-01-01
The spatial distribution of insects is essential for designing control strategies, improving sampling techniques, and estimating economic losses. We aimed to determine the spatial distribution of nests of Acromyrmex crassispinus (Forel) in Pinus taeda plantations. The experiments were carried out in P. taeda plantations of different ages (treatments: recently planted, three-year-old, and six-year-old plants). The study took place in Rio Negrinho and in Três Barras, SC. Three plots of one hectare were delimited in each treatment, and plots were divided into 64 sample units. The analysis of the dispersion indices [variance/mean relationship (I), index of Morisita (Iδ), and k exponent of the negative binomial distribution] showed that the majority of the samplings presented random distribution. Among the three probability distributions studied (Poisson, positive binomial, and negative binomial), the Poisson distribution was the best model to fit the spatial distribution of A. crassispinus nests in all samplings. The result was a random distribution in the plantings of different ages.
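The dispersion indices mentioned above can be computed directly; a minimal sketch (illustrative only, with simulated rather than field data):

```python
import numpy as np

def dispersion_index(counts):
    """Variance-to-mean ratio I: ~1 random (Poisson), >1 aggregated, <1 uniform."""
    counts = np.asarray(counts, float)
    return counts.var(ddof=1) / counts.mean()

def morisita_index(counts):
    """Morisita's index of dispersion over q sample units."""
    counts = np.asarray(counts, float)
    q, total = counts.size, counts.sum()
    return q * np.sum(counts * (counts - 1)) / (total * (total - 1))

rng = np.random.default_rng(1)
random_counts = rng.poisson(3.0, size=64)     # 64 sample units, Poisson-like
I = dispersion_index(random_counts)
assert 0.5 < I < 1.5                          # near 1 for random dispersion
```

Values of I near 1 are what lead such studies to accept the Poisson model for the counts.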
Generating clustered scale-free networks using Poisson based localization of edges
Türker, İlker
2018-05-01
We introduce a variety of network models using a Poisson-based edge localization strategy, which results in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with limited retention of the scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapted the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we used a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieved clustered scale-free networks in a computational fashion, diverging from recent studies by following a simple but efficient approach.
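A toy sketch of Poisson-based edge localization (a loose illustration of the idea, not the authors' growth model; the ring-offset construction here is an assumption):

```python
import numpy as np

def poisson_localized_ring(n, k, lam, rng):
    """Each node gets k edges whose ring offsets are drawn from 1 + Poisson(lam),
    so small lam concentrates edges locally (clustering) and large lam spreads
    them out toward a random-graph-like wiring."""
    edges = set()
    for u in range(n):
        for _ in range(k):
            offset = 1 + rng.poisson(lam)
            v = (u + offset) % n
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return edges

def mean_offset(edges, n):
    """Average ring distance spanned by an edge."""
    d = [min(abs(u - v), n - abs(u - v)) for u, v in edges]
    return sum(d) / len(d)

rng = np.random.default_rng(6)
local = poisson_localized_ring(200, 4, lam=1.0, rng=rng)
spread = poisson_localized_ring(200, 4, lam=20.0, rng=rng)
assert mean_offset(local, 200) < mean_offset(spread, 200)   # localization works
```

Tuning `lam` interpolates between locally clustered and long-range wiring, which is the knob the abstract's strategy exploits.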
Ingels, Frank; Owens, John; Daniel, Steven
1989-01-01
The protocol definition and terminal hardware for the modified free access (MFA) protocol, a communications protocol similar to Ethernet, are developed. An MFA protocol simulator and a CSMA/CD math model are also developed. The protocol is tailored to communication systems where the total traffic may be divided into scheduled traffic and Poisson traffic. The scheduled traffic should occur on a periodic basis but may occur after a given event such as a request for data from a large number of stations. The Poisson traffic will include alarms and other random traffic. The purpose of the protocol is to guarantee that scheduled packets will be delivered without collision. This is required in many control and data collection systems. The protocol uses standard Ethernet hardware and software, requiring minimal modifications to an existing system. The modification to the protocol only affects the Ethernet transmission privileges and does not affect the Ethernet receiver.
Estimating small signals by using maximum likelihood and Poisson statistics
Hannam, M D
1999-01-01
Estimation of small signals from counting experiments with backgrounds larger than signals is solved using maximum likelihood estimation for situations in which both signal and background statistics are Poissonian. Confidence levels are discussed, and Poisson, Gauss and least-squares fitting methods are compared. Efficient algorithms that estimate signal strengths and confidence levels are devised for computer implementation. Examples from simulated data and a low count rate experiment in nuclear physics are given. (author)
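For the simplest case described above, a known background b and one observed count n ~ Poisson(s + b), the ML estimate of a non-negative signal has a closed form; a minimal sketch (not the paper's full algorithms or confidence-level machinery):

```python
import numpy as np

def signal_mle(n_obs, background):
    """ML estimate of a non-negative signal s when n_obs ~ Poisson(s + b)."""
    return max(0.0, n_obs - background)

def log_likelihood(s, n_obs, background):
    """Poisson log-likelihood up to the constant -log(n!)."""
    mu = s + background
    return n_obs * np.log(mu) - mu

# low-count example: 5 counts observed on an expected background of 3.2
n_obs, b = 5, 3.2
s_hat = signal_mle(n_obs, b)
grid = np.linspace(0.0, 10.0, 1001)
ll = log_likelihood(grid, n_obs, b)
assert abs(grid[np.argmax(ll)] - s_hat) < 0.02   # likelihood scan peaks at the MLE
```

Scanning the likelihood as above is also the starting point for the confidence intervals discussed in the abstract.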
A hybrid sampler for Poisson-Kingman mixture models
Lomeli, M.; Favaro, S.; Teh, Y. W.
2015-01-01
This paper concerns the introduction of a new Markov Chain Monte Carlo scheme for posterior sampling in Bayesian nonparametric mixture models with priors that belong to the general Poisson-Kingman class. We present a novel compact way of representing the infinite dimensional component of the model such that while explicitly representing this infinite component it has less memory and storage requirements than previous MCMC schemes. We describe comparative simulation results demonstrating the e...
A generalized Poisson solver for first-principles device simulations
Energy Technology Data Exchange (ETDEWEB)
Bani-Hashemian, Mohammad Hossein; VandeVondele, Joost, E-mail: joost.vandevondele@mat.ethz.ch [Nanoscale Simulations, ETH Zürich, 8093 Zürich (Switzerland); Brück, Sascha; Luisier, Mathieu [Integrated Systems Laboratory, ETH Zürich, 8092 Zürich (Switzerland)
2016-01-28
Electronic structure calculations of atomistic systems based on density functional theory involve solving the Poisson equation. In this paper, we present a plane-wave based algorithm for solving the generalized Poisson equation subject to periodic or homogeneous Neumann conditions on the boundaries of the simulation cell and Dirichlet type conditions imposed at arbitrary subdomains. In this way, source, drain, and gate voltages can be imposed across atomistic models of electronic devices. Dirichlet conditions are enforced as constraints in a variational framework giving rise to a saddle point problem. The resulting system of equations is then solved using a stationary iterative method in which the generalized Poisson operator is preconditioned with the standard Laplace operator. The solver can make use of any sufficiently smooth function modelling the dielectric constant, including density dependent dielectric continuum models. For all the boundary conditions, consistent derivatives are available and molecular dynamics simulations can be performed. The convergence behaviour of the scheme is investigated and its capabilities are demonstrated.
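A one-dimensional analogue of the generalized Poisson problem above, with a spatially varying dielectric and Dirichlet "contacts", can be sketched as follows (a toy direct solve, not the plane-wave preconditioned scheme of the paper):

```python
import numpy as np

def generalized_poisson_1d(eps, rho, u_left, u_right, h):
    """Solve d/dx( eps(x) du/dx ) = -rho on a 1-D grid with Dirichlet ends.
    eps is sampled at the n-1 cell faces; rho at the n nodes."""
    n = rho.size
    A = np.zeros((n, n))
    b = -rho * h * h
    A[0, 0] = A[-1, -1] = 1.0       # Dirichlet rows pin the boundary values
    b[0], b[-1] = u_left, u_right
    for i in range(1, n - 1):
        A[i, i - 1] = eps[i - 1]
        A[i, i] = -(eps[i - 1] + eps[i])
        A[i, i + 1] = eps[i]
    return np.linalg.solve(A, b)

# uniform dielectric, no charge: solution is the linear ramp between the contacts
n = 11
u = generalized_poisson_1d(np.ones(n - 1), np.zeros(n), 0.0, 1.0, h=0.1)
assert np.allclose(u, np.linspace(0.0, 1.0, n))
```

Imposing source/drain/gate voltages in the paper amounts to such Dirichlet constraints, enforced variationally rather than by row replacement.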
Brain, music, and non-Poisson renewal processes
Bianco, Simone; Ignaccolo, Massimiliano; Rider, Mark S.; Ross, Mary J.; Winsor, Phil; Grigolini, Paolo
2007-06-01
In this paper we show that both music composition and brain function, as revealed by electroencephalogram (EEG) analysis, are renewal non-Poisson processes living in the nonergodic dominion. To reach this important conclusion we process the data with the minimum spanning tree method, so as to detect significant events, thereby building a sequence of times, which is the time series to analyze. Then we show that in both cases, EEG and music composition, these significant events are the signature of a non-Poisson renewal process. This conclusion is reached using a technique of statistical analysis recently developed by our group, the aging experiment (AE). First, we find that in both cases the distances between two consecutive events are described by nonexponential histograms, thereby proving the non-Poisson nature of these processes. The corresponding survival probabilities Ψ(t) are well fitted by stretched exponentials [Ψ(t) ∝ exp(−(γt)^α), with 0.5 < α < 1]. [...] EEG and music composition yield μ [...] of music on the human brain.
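The stretched-exponential fit mentioned above can be sketched with a simple log-log linearization (illustrative, on synthetic data; the paper's aging-experiment analysis is not reproduced):

```python
import numpy as np

def fit_stretched_exponential(t, psi):
    """Fit psi(t) = exp(-(g*t)**alpha) by linear regression of
    log(-log psi) on log t; returns (gamma, alpha)."""
    y = np.log(-np.log(psi))
    A = np.column_stack([np.log(t), np.ones_like(t)])
    alpha, c = np.linalg.lstsq(A, y, rcond=None)[0]
    gamma = np.exp(c / alpha)
    return gamma, alpha

t = np.linspace(0.1, 10.0, 200)
psi = np.exp(-(0.5 * t) ** 0.7)              # synthetic survival probability
gamma, alpha = fit_stretched_exponential(t, psi)
assert abs(gamma - 0.5) < 1e-6 and abs(alpha - 0.7) < 1e-6
```

An exponent alpha strictly below 1 recovered this way is exactly the non-Poisson signature the abstract refers to.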
Optimal smoothing of poisson degraded nuclear medicine image data
International Nuclear Information System (INIS)
Hull, D.M.
1985-01-01
The development of a method that removes Poisson noise from nuclear medicine studies will have a significant impact on the quantitative analysis and clinical reliability of these data. The primary objective of the work described in this thesis was to develop a linear, non-stationary optimal filter to reduce Poisson noise. The derived filter is automatically calculated from a large group (library) of similar patient studies representing all similarly acquired studies (the ensemble). The filter design was evaluated under controlled conditions using two computer-simulated ensembles devised to represent selected properties of real patient gated blood pool studies. Fortran programs were developed to generate libraries of Poisson-degraded simulated studies for each ensemble. These libraries were then used to estimate optimal filters specific to the ensemble. Libraries of previously acquired patient gated blood pool studies were then used to estimate the optimal filters for an ensemble of similarly acquired gated blood pool studies. These filters were applied to studies of 13 patients who received multiple repeat studies at one time. Comparisons of both the filtered and raw data to averages of the repeat studies demonstrated that the optimal filters, calculated from a library of 800 studies, reduce the mean square error in the patient data by 60%. It is expected that optimally filtered gated blood pool studies will improve quantitative analysis of the data.
Directory of Open Access Journals (Sweden)
Fermín Segovia
2017-10-01
Full Text Available 18F-DMFP-PET is an emerging neuroimaging modality used to diagnose Parkinson's disease (PD) that allows us to examine postsynaptic dopamine D2/3 receptors. As in other neuroimaging modalities used for PD diagnosis, most of the total intensity of 18F-DMFP-PET images is concentrated in the striatum. However, other regions can also be useful for diagnostic purposes. An appropriate delimitation of the regions of interest contained in 18F-DMFP-PET data is crucial to improve the automatic diagnosis of PD. In this manuscript we propose a novel methodology to preprocess 18F-DMFP-PET data that improves the accuracy of computer-aided diagnosis systems for PD. First, the data were segmented using an algorithm based on Hidden Markov Random Fields. As a result, each neuroimage was divided into 4 maps according to the intensity and the neighborhood of the voxels. The maps were then individually normalized so that the shape of their histograms could be modeled by a Gaussian distribution with equal parameters for all the neuroimages. This approach was evaluated using a dataset with neuroimaging data from 87 parkinsonian patients. After these preprocessing steps, a Support Vector Machine classifier was used to separate idiopathic and non-idiopathic PD. Data preprocessed by the proposed method provided higher accuracy results than the ones preprocessed with previous approaches.
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys such as BOSS usually assign redshifts to the random samples by randomly drawing values from the measured redshift distribution of the data, which necessarily introduces fiducial fluctuation signals into the random samples, weakening the BAO signals, if the cosmic variance cannot be ignored. We propose a smooth function of the redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such improvements will be valuable for future measurements of galaxy clustering.
Narukawa, Masaki; Nohara, Katsuhito
2018-04-01
This study proposes an estimation approach to panel count data, truncated at zero, in order to apply a contingent behavior travel cost method to revealed and stated preference data collected via a web-based survey. We develop zero-truncated panel Poisson mixture models by focusing on respondents who visited a site. In addition, we introduce an inverse Gaussian distribution to unobserved individual heterogeneity as an alternative to a popular gamma distribution, making it possible to capture effectively the long tail typically observed in trip data. We apply the proposed method to estimate the impact on tourism benefits in Fukushima Prefecture as a result of the Fukushima Nuclear Power Plant No. 1 accident. Copyright © 2018 Elsevier Ltd. All rights reserved.
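The zero-truncated Poisson building block used above can be sketched as follows (the panel structure and mixture/heterogeneity layers of the paper's models are omitted):

```python
import math
import numpy as np

def zt_poisson_pmf(k, lam):
    """P(K = k | K > 0) for a zero-truncated Poisson."""
    if k < 1:
        return 0.0
    return math.exp(-lam) * lam ** k / (math.factorial(k) * (1.0 - math.exp(-lam)))

def zt_poisson_sample(lam, size, rng):
    """Sample by rejecting zeros (fine when lam is not tiny)."""
    out = []
    while len(out) < size:
        draws = rng.poisson(lam, size)
        out.extend(int(d) for d in draws if d > 0)
    return np.array(out[:size])

lam = 1.5
# pmf sums to 1 over k >= 1
total = sum(zt_poisson_pmf(k, lam) for k in range(1, 50))
assert abs(total - 1.0) < 1e-12
# truncated mean is lam / (1 - exp(-lam))
rng = np.random.default_rng(2)
sample = zt_poisson_sample(lam, 20000, rng)
assert abs(sample.mean() - lam / (1 - math.exp(-lam))) < 0.05
```

Conditioning on visitors (K > 0) is exactly why trip-count models such as the one above truncate at zero.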
DEFF Research Database (Denmark)
Sibani, Paolo
2007-01-01
We study the intermittent behavior of the energy decay and the linear magnetic response of a glassy system during isothermal aging after a deep thermal quench, using the Edward-Anderson (EA) spin glass model as a paradigmatic example. The large intermittent changes in the two observables occur in a correlated fashion and through irreversible bursts, 'quakes', which punctuate reversible and equilibrium-like fluctuations of zero average. The temporal distribution of the quakes is a Poisson distribution with an average growing logarithmically in time, indicating that the quakes are triggered by record [...] to capture the time dependencies of the EA simulation results. Finally, we argue that whenever the changes of the linear response function and of its conjugate autocorrelation function follow from the same intermittent events, a fluctuation-dissipation-like relation can arise between the two in off-equilibrium conditions.
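The record-triggered ('quake') picture above has a simple toy illustration: for i.i.d. noise, the number of records up to step n grows logarithmically, which is what makes the quake statistics log-Poisson (a sketch, not the EA-model simulation):

```python
import numpy as np

def record_times(series):
    """Indices (1-based) where a running maximum is broken."""
    series = np.asarray(series, float)
    running = np.maximum.accumulate(series)
    is_record = np.concatenate([[True], series[1:] > running[:-1]])
    return np.flatnonzero(is_record) + 1

rng = np.random.default_rng(3)
n = 200_000
counts = [len(record_times(rng.standard_normal(n))) for _ in range(40)]
# for iid noise the expected number of records up to n is H_n ≈ ln n + 0.577
expected = np.log(n) + 0.5772
assert abs(np.mean(counts) - expected) < 2.0
```

Counting records in equal intervals of ln(t) instead of t gives a homogeneous Poisson process, the "log-Poisson" statistics invoked in the abstract.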
Directory of Open Access Journals (Sweden)
Jensen Just
2002-05-01
Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log-normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found; otherwise, unbiased estimates are given) for prediction error variance, accuracy of selection, and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general, the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model, or a generalised version of heritability) plays a central role in these formulas.
International Nuclear Information System (INIS)
Cheng, J-C; Rahmim, Arman; Blinder, Stephan; Camborde, Marie-Laure; Raywood, Kelvin; Sossi, Vesna
2007-01-01
We describe an ordinary Poisson list-mode expectation maximization (OP-LMEM) algorithm with a sinogram-based scatter correction method based on the single scatter simulation (SSS) technique and a random correction method based on the variance-reduced delayed-coincidence technique. We also describe a practical approximate scatter and random-estimation approach for dynamic PET studies based on a time-averaged scatter and random estimate followed by scaling according to the global numbers of true coincidences and randoms for each temporal frame. The quantitative accuracy achieved using OP-LMEM was compared to that obtained using the histogram-mode 3D ordinary Poisson ordered subset expectation maximization (3D-OP) algorithm with similar scatter and random correction methods, and they showed excellent agreement. The accuracy of the approximated scatter and random estimates was tested by comparing time activity curves (TACs) as well as the spatial scatter distribution from dynamic non-human primate studies obtained from the conventional (frame-based) approach and those obtained from the approximate approach. An excellent agreement was found, and the time required for the calculation of scatter and random estimates in the dynamic studies became much less dependent on the number of frames (we achieved a nearly four times faster performance on the scatter and random estimates by applying the proposed method). The precision of the scatter fraction was also demonstrated for the conventional and the approximate approach using phantom studies
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
Energy Technology Data Exchange (ETDEWEB)
Munoz Montplet, C.; Jurado Bruggeman, D.
2010-07-01
Geometrical random uncertainty in radiotherapy is usually characterized by a unique value for each group of patients. We propose a novel approach based on a statistically accurate characterization of the uncertainty distribution, thus reducing the risk of obtaining potentially unsafe results in CTV-PTV margins or in the selection of correction protocols.
Yang, Yu-Guang; Xu, Peng; Yang, Rui; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
Quantum information and quantum computation have achieved a huge success during the last years. In this paper, we investigate the capability of quantum Hash function, which can be constructed by subtly modifying quantum walks, a famous quantum computation model. It is found that quantum Hash function can act as a hash function for the privacy amplification process of quantum key distribution systems with higher security. As a byproduct, quantum Hash function can also be used for pseudo-random number generation due to its inherent chaotic dynamics. Further we discuss the application of quantum Hash function to image encryption and propose a novel image encryption algorithm. Numerical simulations and performance comparisons show that quantum Hash function is eligible for privacy amplification in quantum key distribution, pseudo-random number generation and image encryption in terms of various hash tests and randomness tests. It extends the scope of application of quantum computation and quantum information.
Action-angle variables and a KAM theorem for b-Poisson manifolds
Kiesenhofer, Anna; Miranda Galcerán, Eva; Scott, Geoffrey
2015-01-01
In this article we prove an action-angle theorem for b-integrable systems on b-Poisson manifolds improving the action-angle theorem contained in [14] for general Poisson manifolds in this setting. As an application, we prove a KAM-type theorem for b-Poisson manifolds. (C) 2015 Elsevier Masson SAS. All rights reserved.
Computation of long-distance propagation of impulses elicited by Poisson-process stimulation.
Moradmand, K; Goldfinger, M D
1995-12-01
1. The purpose of this work was to determine whether temporally coded axonal information generated by Poisson-process stimulation is modified during long-distance propagation, as originally suggested by S. A. George. Propagated impulses were computed with the use of the Hodgkin-Huxley equations and cable theory to simulate excitation and current spread in 100-micron-diameter unmyelinated axons, whose total length was 8.1 cm (25 lambda) or 101.4 cm (312.5 lambda). Differential equations were solved numerically, with the use of trapezoidal integration over small, constant electrotonic and temporal steps (0.125 lambda and 1.0 microsecond, respectively). 2. Using dual-pulse stimulation, we confirmed that for interstimulus intervals between 5 and 11 ms, the conduction velocity of the second of a short-interval pair of impulses was slower than that of the first impulse. Further, with sufficiently long propagation distance, the second impulse's conduction velocity increased steadily and eventually approached that of the first impulse. This effect caused a spatially varying interspike interval: as propagation proceeded, the interspike interval increased and eventually approached stabilization. 3. With Poisson stimulation, the peak amplitude of propagating action potentials varied with interspike interval durations between 5 and 11 ms. Such amplitude attenuation was caused by the incomplete relaxation of parameters n (macroscopic K-conductance activation) and h (macroscopic Na-conductance inactivation) during the interspike period. 4. The stochastic properties of the impulse train became less Poisson-like with propagation distance. In cases of propagation over 99.4 cm, the impulse trains developed marked periodicities in the interevent interval distribution and expectation density function because of the axially modulated transformation of interspike intervals. 5. Despite these changes in impulse train parameters, the arithmetic value of the mean interspike interval did
Poisson and Porter-Thomas fluctuations in off-yrast rotational transitions
International Nuclear Information System (INIS)
Matsuo, M.; Doessing, T.; Herskind, B.; Frauendorf, S.
1993-01-01
Fluctuations associated with stretched E2 transitions from high-spin levels in nuclei around 168Yb are investigated by a cranked shell model extended to include residual two-body interactions. In the cranked mean-field model without residual interactions, it is found that gamma-ray energies behave like random variables and the energy spectra show Poisson fluctuations. With two-body residual interactions included, the discrete transition pattern with unmixed rotational bands is still valid up to around 600 keV above yrast, in good agreement with experiments. At higher excitation energy, a gradual onset of rotational damping emerges. At 1.8 MeV above yrast, complete damping is observed, with GOE-type fluctuations for both energy levels and transition strengths (Porter-Thomas fluctuations). (orig.)
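The Poisson-versus-GOE contrast above can be illustrated with the nearest-neighbour spacing-ratio statistic, which needs no unfolding (a generic sketch, not the cranked-shell-model calculation; the reference values ⟨r⟩ ≈ 0.386 and ⟨r⟩ ≈ 0.531 are the standard ones):

```python
import numpy as np

def mean_spacing_ratio(levels):
    """Mean of r = min(s_i, s_{i+1}) / max(s_i, s_{i+1}) over consecutive
    level spacings; ~0.386 for Poisson statistics, ~0.531 for GOE."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

rng = np.random.default_rng(4)

# Poisson case: independent uniform levels
poisson_r = np.mean([mean_spacing_ratio(rng.uniform(0, 1, 500)) for _ in range(50)])

# GOE case: eigenvalues of real symmetric random matrices (central bulk only)
goe_r = []
for _ in range(50):
    a = rng.standard_normal((200, 200))
    ev = np.linalg.eigvalsh((a + a.T) / 2)
    goe_r.append(mean_spacing_ratio(ev[50:150]))
goe_r = np.mean(goe_r)

assert abs(poisson_r - 0.386) < 0.02
assert goe_r > poisson_r + 0.05               # level repulsion raises <r>
```

The crossover from the first value to the second is the fluctuation signature of rotational damping described in the abstract.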
Use of Poisson spatiotemporal regression models for the Brazilian Amazon Forest: malaria count data.
Achcar, Jorge Alberto; Martinez, Edson Zangiacomi; Souza, Aparecida Doniseti Pires de; Tachibana, Vilma Mayumi; Flores, Edilson Ferreira
2011-01-01
Malaria is a serious problem in the Brazilian Amazon region, and the detection of possible risk factors could be of great interest for public health authorities. The objective of this article was to investigate the association between environmental variables and the yearly registers of malaria in the Amazon region using bayesian spatiotemporal methods. We used Poisson spatiotemporal regression models to analyze the Brazilian Amazon forest malaria count for the period from 1999 to 2008. In this study, we included some covariates that could be important in the yearly prediction of malaria, such as deforestation rate. We obtained the inferences using a bayesian approach and Markov Chain Monte Carlo (MCMC) methods to simulate samples for the joint posterior distribution of interest. The discrimination of different models was also discussed. The model proposed here suggests that deforestation rate, the number of inhabitants per km², and the human development index (HDI) are important in the prediction of malaria cases. It is possible to conclude that human development, population growth, deforestation, and their associated ecological alterations are conducive to increasing malaria risk. We conclude that the use of Poisson regression models that capture the spatial and temporal effects under the bayesian paradigm is a good strategy for modeling malaria counts.
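A Poisson regression of counts on covariates, the basic ingredient of the spatiotemporal models above, can be fitted by IRLS; a minimal sketch (maximum likelihood, not the paper's Bayesian MCMC, and with a synthetic covariate standing in for, e.g., deforestation rate):

```python
import numpy as np

def poisson_regression(X, y, iters=25):
    """Fit log E[y] = X @ beta by iteratively reweighted least squares
    (Newton's method on the Poisson log-likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        W = mu                               # Poisson variance equals the mean
        z = X @ beta + (y - mu) / mu         # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(5)
n = 2000
x = rng.uniform(0, 1, n)                     # synthetic covariate
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = poisson_regression(X, y)
assert np.allclose(beta_hat, true_beta, atol=0.2)
```

The Bayesian spatiotemporal models add spatial and temporal random effects on top of this same log-linear count likelihood.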
Directory of Open Access Journals (Sweden)
Asma Shaheen
2018-03-01
Full Text Available In third world countries, industries mainly cause environmental contamination due to lack of environmental policies or oversight during their implementation. The Sheikhupura industrial zone, which includes industries such as tanneries, leather, chemical, textiles, and colour and dyes, contributes massive amounts of untreated effluents that are released directly into drains and used for the irrigation of crops and vegetables. This practice causes not only soil contamination with an excessive amount of heavy metals, but is also considered a source of toxicity in the food chain, i.e., bioaccumulation in plants and ultimately in human body organs. The objective of this research study was to assess the spatial distribution of the heavy metals chromium (Cr), cadmium (Cd), and lead (Pb), at three depths of soil using geostatistics and the selection of significant contributing variables to soil contamination using the Random Forest (RF) function of the Boruta Algorithm. A total of 60 sampling locations were selected in the study area to collect soil samples (180 samples) at three depths (0–15 cm, 15–30 cm, and 60–90 cm). The soil samples were analysed for their physico-chemical properties, i.e., soil saturation, electrical conductivity (EC), organic matter (OM), pH, phosphorus (P), potassium (K), and Cr, Cd, and Pb using standard laboratory procedures. The data were analysed with comprehensive statistics and geostatistical techniques. The correlation coefficient matrix between the heavy metals and the physico-chemical properties revealed that electrical conductivity (EC) had a significant (p ≤ 0.05) negative correlation with Cr, Cd, and Pb. The RF function of the Boruta Algorithm employed soil depth as a classifier and ranked the significant soil contamination parameters (Cr, Cd, Pb, EC, and P) in relation to depth. The mobility factor indicated the leachate percentage of heavy metals at different vertical depths of soil. The spatial distribution pattern of
Directory of Open Access Journals (Sweden)
Jinyu Ren
2015-01-01
Full Text Available Purpose: The purpose of this paper is to set up a coordinating mechanism, by means of contracts, for a decentralized distribution system consisting of a manufacturer and multiple independent retailers. In this two-stage supply chain system, all retailers sell an identical product made by the manufacturer and determine their order quantities, which directly affect the expected profit of the supply chain under random demand. Design/methodology/approach: First, a comparison of the optimal order quantities in the centralized and decentralized systems shows that the supply chain needs coordination. Then the coordination model is given based on buyback cost and compensation benefit. Finally, the coordination mechanism is set up in which the manufacturer, as the leader, uses a buyback policy to give incentives to the retailers, and the retailers pay profit returns to compensate the manufacturer. Findings: The results of a numerical example show that perfect supply chain coordination and flexible allocation of the profit can be achieved in the multi-retailer supply chain by the buyback and compensation contracts. Research limitations: The results are based on assumptions that might not completely hold in practice, and the paper only studies a single product in a two-stage supply chain. Practical implications: The coordination mechanism is applicable to a realistic supply chain under a private-information setting, and the research results are the foundation for further developing a coordination mechanism for a realistic multi-stage supply chain system with more products. Originality/value: This paper focuses on the coordination mechanism for a decentralized multi-retailer supply chain through the joint application of buyback and compensation contracts, achieving perfect supply chain coordination and flexible allocation of the profit.
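The buyback coordination logic above can be illustrated with the classical newsvendor critical fractile (a textbook sketch under assumed uniform demand and hypothetical prices, not the paper's model with compensation payments):

```python
def newsvendor_q(price, cost, salvage, demand_cdf_inv):
    """Critical-fractile order quantity maximizing expected newsvendor profit."""
    fractile = (price - cost) / (price - salvage)
    return demand_cdf_inv(fractile)

# assumed demand ~ Uniform(0, 100); inverse CDF is F^{-1}(p) = 100 p
inv_cdf = lambda p: 100.0 * p

# centralized system: production cost 4, retail price 10, no salvage
q_central = newsvendor_q(10.0, 4.0, 0.0, inv_cdf)

# a decentralized retailer paying wholesale price 7 orders less...
q_decentral = newsvendor_q(10.0, 7.0, 0.0, inv_cdf)

# ...unless the manufacturer buys back unsold units at 5, restoring q*
q_buyback = newsvendor_q(10.0, 7.0, 5.0, inv_cdf)
assert q_decentral < q_central and abs(q_central - q_buyback) < 1e-9
```

Choosing the buyback price so that the retailer's critical fractile matches the centralized one is the coordination idea; the compensation payments then redistribute the recovered profit.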
de Bock, Martin; Derraik, José G B; Brennan, Christine M; Biggs, Janene B; Smith, Greg C; Cameron-Smith, David; Wall, Clare R; Cutfield, Wayne S
2012-01-01
We aimed to assess the effects of psyllium supplementation on insulin sensitivity and other parameters of the metabolic syndrome in an at risk adolescent population. This study encompassed a participant-blinded, randomized, placebo-controlled, crossover trial. Subjects were 47 healthy adolescent males aged 15-16 years, recruited from secondary schools in lower socio-economic areas with high rates of obesity. Participants received 6 g/day of psyllium or placebo for 6 weeks, with a two-week washout before crossing over. Fasting lipid profiles, ambulatory blood pressure, auxological data, body composition, activity levels, and three-day food records were collected at baseline and after each 6-week intervention. Insulin sensitivity was measured by the Matsuda method using glucose and insulin values from an oral glucose tolerance test. 45 subjects completed the study, and compliance was very high: 87% of participants took >80% of prescribed capsules. At baseline, 44% of subjects were overweight or obese. 28% had decreased insulin sensitivity, but none had impaired glucose tolerance. Fibre supplementation led to a 4% reduction in android fat to gynoid fat ratio (p = 0.019), as well as a 0.12 mmol/l (6%) reduction in LDL cholesterol (p = 0.042). No associated adverse events were recorded. Dietary supplementation with 6 g/day of psyllium over 6 weeks improves fat distribution and lipid profile (parameters of the metabolic syndrome) in an at risk population of adolescent males. Australian New Zealand Clinical Trials Registry ACTRN12609000888268.
Directory of Open Access Journals (Sweden)
Martin de Bock
Full Text Available AIMS: We aimed to assess the effects of psyllium supplementation on insulin sensitivity and other parameters of the metabolic syndrome in an at-risk adolescent population. METHODS: This study encompassed a participant-blinded, randomized, placebo-controlled, crossover trial. Subjects were 47 healthy adolescent males aged 15-16 years, recruited from secondary schools in lower socio-economic areas with high rates of obesity. Participants received 6 g/day of psyllium or placebo for 6 weeks, with a two-week washout before crossing over. Fasting lipid profiles, ambulatory blood pressure, auxological data, body composition, activity levels, and three-day food records were collected at baseline and after each 6-week intervention. Insulin sensitivity was measured by the Matsuda method using glucose and insulin values from an oral glucose tolerance test. RESULTS: 45 subjects completed the study, and compliance was very high: 87% of participants took >80% of prescribed capsules. At baseline, 44% of subjects were overweight or obese. 28% had decreased insulin sensitivity, but none had impaired glucose tolerance. Fibre supplementation led to a 4% reduction in android fat to gynoid fat ratio (p = 0.019), as well as a 0.12 mmol/l (6%) reduction in LDL cholesterol (p = 0.042). No associated adverse events were recorded. CONCLUSIONS: Dietary supplementation with 6 g/day of psyllium over 6 weeks improves fat distribution and lipid profile (parameters of the metabolic syndrome) in an at-risk population of adolescent males. TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry ACTRN12609000888268.
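The Matsuda method mentioned in both records computes a whole-body insulin sensitivity index from OGTT samples using the standard published formula; the sketch below uses hypothetical sample values, not the study's data:

```python
import math

def matsuda_index(glucose, insulin):
    """Matsuda whole-body insulin sensitivity index from OGTT samples.
    glucose in mg/dL, insulin in microU/mL; the first sample is fasting.
    ISI = 10000 / sqrt(G0 * I0 * mean(G) * mean(I))."""
    g0, i0 = glucose[0], insulin[0]
    g_mean = sum(glucose) / len(glucose)
    i_mean = sum(insulin) / len(insulin)
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Hypothetical 0/30/60/90/120-minute OGTT values, for illustration only.
isi = matsuda_index([90, 150, 140, 120, 100], [8, 60, 50, 40, 20])
```

Lower index values indicate reduced insulin sensitivity, which is how the "28% had decreased insulin sensitivity" classification above would be derived against a chosen cut-off.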
Chemosensory bases of feeding behaviour in fish
Directory of Open Access Journals (Sweden)
SAGLIO Ph.
1981-07-01
Full Text Available Feeding behaviour, indispensable to the survival of the individual and hence of the species, occupies a position of prime importance in the hierarchy of fundamental behaviours, all of which depend on it very closely. In fish, this pre-eminence is illustrated by the extreme diversity of the sensory channels involved and of the behavioural expressions linked to them. Following a number of neurophysiological and ethological demonstrations of the importance of the chemical senses (olfaction, gustation) in the feeding behaviour of fish, very substantial programmes of electrophysiological studies and physico-chemical analyses, aimed at determining its exact nature (in terms of active substances), have developed over the last twenty years. From all this work, the most advanced of which is presented here, it emerges that L-series amino acids, more or less associated with other compounds of molecular weight < 1000, are chemical compounds that play a decisive role in the feeding behaviour of many carnivorous fish species.
Random functions and turbulence
Panchev, S
1971-01-01
International Series of Monographs in Natural Philosophy, Volume 32: Random Functions and Turbulence focuses on the use of random functions as mathematical methods. The manuscript first offers information on the elements of the theory of random functions. Topics include determination of statistical moments by characteristic functions; functional transformations of random variables; multidimensional random variables with spherical symmetry; and random variables and distribution functions. The book then discusses random processes and random fields, including stationarity and ergodicity of random
On population size estimators in the Poisson mixture model.
Mao, Chang Xuan; Yang, Nan; Zhong, Jinhua
2013-09-01
Estimating population sizes via capture-recapture experiments has numerous applications. The Poisson mixture model can be adopted for those applications with a single list in which individuals appear one or more times. We compare several nonparametric estimators, including the Chao estimator, the Zelterman estimator, two jackknife estimators and the bootstrap estimator. The target parameter of the Chao estimator is a lower bound of the population size. Those of the other four estimators are not lower bounds, and they may produce lower confidence limits for the population size with poor coverage probabilities. A simulation study is reported and two examples are investigated. © 2013, The International Biometric Society.
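The Chao estimator discussed above has a simple closed form built from the singleton and doubleton frequencies; a minimal sketch (with a hypothetical example list, not the paper's data):

```python
from collections import Counter

def chao_estimate(capture_counts):
    """Chao lower-bound estimator for population size from a single list.
    capture_counts: how many times each *observed* individual appeared (all >= 1).
    N_hat = S_obs + f1^2 / (2 * f2), where f_k = number of individuals seen k times."""
    freq = Counter(capture_counts)
    s_obs = len(capture_counts)          # observed individuals
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        # Common bias-corrected variant when no doubletons are observed.
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# 6 singletons, 3 doubletons, 2 individuals seen three times:
n_hat = chao_estimate([1] * 6 + [2] * 3 + [3] * 2)
```

Because its target is only a lower bound, the estimate should be read as "at least this many individuals", which is the distinction the abstract draws against the other four estimators.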
Team behaviour analysis in sports using the poisson equation
Direkoglu, Cem; O'Connor, Noel E.
2012-01-01
We propose a novel physics-based model for analysing team players’ positions and movements on a sports playing field. The goal is to detect for each frame the region with the highest population of a given team’s players and the region towards which the team is moving as they press for territorial advancement, termed the region of intent. Given the positions of team players from a plan view of the playing field at any given time, we solve a particular Poisson equation to generate a smooth di...
An approach to numerically solving the Poisson equation
Feng, Zhichen; Sheng, Zheng-Mao
2015-06-01
We introduce an approach for numerically solving the Poisson equation by using a physical model, which is a way to solve a partial differential equation without the finite difference method. This method is especially useful for obtaining the solutions in very many free-charge neutral systems with open boundary conditions. It can be used for arbitrary geometry and mesh style and is more efficient comparing with the widely-used iterative algorithm with multigrid methods. It is especially suitable for parallel computing. This method can also be applied to numerically solving other partial differential equations whose Green functions exist in analytic expression.
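For contrast with the physical-model approach above, the "widely-used iterative algorithm" it is compared against can be illustrated by a plain Jacobi sweep on a square grid; this is the standard baseline, not the paper's method, and the boundary condition and grid here are assumptions:

```python
def solve_poisson_jacobi(rho, h, iters=2000):
    """Jacobi iteration for -laplacian(u) = rho on an n x n grid with
    u = 0 on the boundary (the standard iterative baseline, NOT the
    physical-model method of the paper). rho: list of lists; h: grid spacing."""
    n = len(rho)
    u = [[0.0] * n for _ in range(n)]
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Five-point stencil: average of neighbours plus source term.
                new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                    + u[i][j - 1] + u[i][j + 1]
                                    + h * h * rho[i][j])
        u = new
    return u
```

Multigrid and the paper's method both aim to beat the slow convergence of exactly this kind of sweep, whose cost grows quickly with grid resolution.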
Large Time Behavior of the Vlasov-Poisson-Boltzmann System
Directory of Open Access Journals (Sweden)
Li Li
2013-01-01
Full Text Available The motion of dilute charged particles can be modeled by the Vlasov-Poisson-Boltzmann (VPB) system. We study the large-time stability of the VPB system. To be precise, we prove that as time goes to infinity, the solution of the VPB system tends to the global Maxwellian state at a rate O(t^{-∞}), by using a method developed for the Boltzmann equation without force in the work of Desvillettes and Villani (2005). The improvement of the present paper is the removal of the condition on the parameter λ as in the work of Li (2008).
Localization of Point Sources for Poisson Equation using State Observers
Majeed, Muhammad Usman
2016-08-09
A method based on iterative observer design is presented to solve the point source localization problem for the Poisson equation with given boundary data. The procedure involves the solution of multiple boundary estimation sub-problems using the available Dirichlet and Neumann data from different parts of the boundary. A weighted sum of the solution profiles of these sub-problems localizes point sources inside the domain. A method to compute these weights is also provided. Numerical results are presented using finite differences in a rectangular domain. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
Supersymmetric quantum corrections and Poisson-Lie T-duality
International Nuclear Information System (INIS)
Assaoui, F.; Lhallabi, T.; Abdus Salam International Centre for Theoretical Physics, Trieste
2000-07-01
The quantum actions of the (4,4) supersymmetric non-linear sigma model and its dual in the Abelian case are constructed by using the background superfield method. The propagators of the quantum superfield and its dual and the gauge fixing actions of the original and dual (4,4) supersymmetric sigma models are determined. On the other hand, the BRST transformations are used to obtain the quantum dual action of the (4,4) supersymmetric nonlinear sigma model in the sense of Poisson-Lie T-duality. (author)
Improving EWMA Plans for Detecting Unusual Increases in Poisson Counts
Directory of Open Access Journals (Sweden)
R. S. Sparks
2009-01-01
An adaptive exponentially weighted moving average (EWMA) plan is developed for signalling unusually high incidence when monitoring a time series of nonhomogeneous daily disease counts. A Poisson transitional regression model is used to fit the background/expected trend in counts and provides “one-day-ahead” forecasts of the next day's count. Departures of counts from their forecasts are monitored. The paper outlines an approach for improving early outbreak signals by dynamically adjusting the exponential weights to be efficient at signalling local persistent high-side changes. We emphasise outbreak signals in steady-state situations; that is, changes that occur after the EWMA statistic has run through several in-control counts.
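The monitoring step described above can be sketched with a fixed-weight EWMA of standardized departures from the one-day-ahead forecasts; this is a simplified illustration (the paper's contribution is adapting the weights dynamically, which is not shown here), and the threshold rule is an assumption:

```python
import math

def ewma_signals(counts, forecasts, lam=0.2, limit=3.0):
    """Fixed-weight EWMA monitoring of Poisson count departures (simplified
    sketch; the paper adjusts the weights adaptively). counts and forecasts
    are parallel sequences; True flags a high-side signal."""
    z, flags = 0.0, []
    for c, f in zip(counts, forecasts):
        y = (c - f) / math.sqrt(f)            # Poisson: variance ~ mean
        z = lam * y + (1.0 - lam) * z         # exponential smoothing
        sigma = math.sqrt(lam / (2.0 - lam))  # asymptotic EWMA std. dev.
        flags.append(z > limit * sigma)
    return flags
```

A run of in-control days keeps z near zero, so a subsequent persistent excess over the forecast accumulates in z and trips the limit, matching the steady-state emphasis in the abstract.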
Standard Test Method for Determining Poisson's Ratio of Honeycomb Cores
American Society for Testing and Materials. Philadelphia
2002-01-01
1.1 This test method covers the determination of the honeycomb Poisson's ratio from the anticlastic curvature radii. 1.2 The values stated in SI units are to be regarded as the standard. The inch-pound units given may be approximate. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
Maslov indices, Poisson brackets, and singular differential forms
Esterlis, I.; Haggard, H. M.; Hedeman, A.; Littlejohn, R. G.
2014-06-01
Maslov indices are integers that appear in semiclassical wave functions and quantization conditions. They are often notoriously difficult to compute. We present methods of computing the Maslov index that rely only on typically elementary Poisson brackets and simple linear algebra. We also present a singular differential form, whose integral along a curve gives the Maslov index of that curve. The form is closed but not exact, and transforms by an exact differential under canonical transformations. We illustrate the method with the 6j-symbol, which is important in angular-momentum theory and in quantum gravity.
Gap processing for adaptive maximal poisson-disk sampling
Yan, Dongming
2013-10-17
In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
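For orientation, the baseline that gap-based methods improve upon is naive dart throwing: propose random points and keep those at least one disk radius from all accepted points. The sketch below is this brute-force baseline in the unit square, not the paper's triangulation/power-diagram algorithm:

```python
import random

def dart_throwing(radius, attempts=5000, seed=1):
    """Naive dart throwing for a Poisson-disk point set in the unit square
    (brute-force baseline; the paper instead detects and fills gaps using
    regular triangulations and the power diagram)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(attempts):
        p = (rng.random(), rng.random())
        # Accept only if no accepted point lies within `radius`.
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= radius * radius
               for q in pts):
            pts.append(p)
    return pts
```

Dart throwing alone stalls before reaching maximality because late rejections become overwhelmingly likely; detecting the remaining gaps explicitly, as the paper does, is what makes maximal and adaptive (varying-radius) sampling tractable.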
Random numbers spring from alpha decay
International Nuclear Information System (INIS)
Frigerio, N.A.; Sanathanan, L.P.; Morley, M.; Clark, N.A.; Tyler, S.A.
1980-05-01
Congruential random number generators, which are widely used in Monte Carlo simulations, are deficient in that the numbers they generate are concentrated in a relatively small number of hyperplanes. While this deficiency may not be a limitation in small Monte Carlo studies involving a few variables, it introduces a significant bias in large simulations requiring high resolution. This bias was recognized and assessed during preparations for an accident analysis study of nuclear power plants. This report describes a random number device based on the radioactive decay of alpha particles from a 235U source in a high-resolution gas proportional counter. The signals were fed to a 4096-channel analyzer and for each channel the frequency of signals registered in a 20,000-microsecond interval was recorded. The parity bits of these frequency counts (0 for an even count and 1 for an odd count) were then assembled in sequence to form 31-bit binary random numbers and transcribed to a magnetic tape. This cycle was repeated as many times as necessary to create 3 million random numbers. The frequency distribution of counts from the present device conforms to the Brockwell-Moyal distribution, which takes into account the dead time of the counter (both the dead time and decay constant of the underlying Poisson process were estimated). Analysis of the count data and tests of randomness on a sample set of the 31-bit binary numbers indicate that this random number device is a highly reliable source of truly random numbers. Its use is, therefore, recommended in Monte Carlo simulations for which the congruential pseudorandom number generators are found to be inadequate. 6 figures, 5 tables
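The parity-bit assembly step described in the report is mechanical and can be sketched directly: take each count modulo 2 and pack successive bits into 31-bit words (the bit-packing order here is an assumption; the report does not specify endianness):

```python
def parity_bits_to_numbers(counts, width=31):
    """Assemble fixed-width random integers from the parity of decay counts,
    as in the report: bit = count mod 2, packed `width` bits per word.
    Most-significant-bit-first packing is an assumption for illustration."""
    bits = [c & 1 for c in counts]
    numbers = []
    for k in range(0, len(bits) - width + 1, width):
        word = 0
        for b in bits[k:k + width]:
            word = (word << 1) | b
        numbers.append(word)
    return numbers
```

Because only the parity of each interval's count is kept, slow drifts in source activity or counter efficiency largely cancel out, which is why parity extraction is a common debiasing step for physical noise sources.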
Randomness at the root of things 1: Random walks
Ogborn, Jon; Collins, Simon; Brown, Mick
2003-09-01
This is the first of a pair of articles about randomness in physics. In this article, we use some variations on the idea of a `random walk' to consider first the path of a particle in Brownian motion, and then the random variation to be expected in radioactive decay. The arguments are set in the context of the general importance of randomness both in physics and in everyday life. We think that the ideas could usefully form part of students' A-level work on random decay and quantum phenomena, as well as being good for their general education. In the second article we offer a novel and simple approach to Poisson sequences.
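The central random-walk result the article builds on, that the root-mean-square displacement after N unbiased steps grows like sqrt(N), is easy to check by simulation; this is a generic classroom sketch, not code from the article:

```python
import random

def random_walk_rms(steps, trials, seed=0):
    """Monte Carlo estimate of the root-mean-square end-to-end distance of
    a 1-D unbiased random walk; theory predicts sqrt(steps)."""
    rng = random.Random(seed)
    total_sq = 0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x += rng.choice((-1, 1))  # one unit step left or right
        total_sq += x * x
    return (total_sq / trials) ** 0.5

# 100-step walks should end, on average, about 10 units from the origin.
rms = random_walk_rms(100, 2000)
```

The same spread-grows-as-sqrt(N) behaviour reappears in the radioactive-decay context: the count of decays in a fixed interval fluctuates about its mean with a standard deviation of roughly the square root of the mean.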
Fitting the Statistical Distribution for Daily Rainfall in Ibadan, Based ...
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
2013-06-01
Jun 1, 2013 ... followed by the normal and Poisson models that have the same estimated rainfall amount for describing the daily rainfall in the Ibadan metropolis. Keywords: scale parameter, asymptotically, exponential distribution, gamma distribution, Poisson and Kolmogorov-Smirnov.
Directory of Open Access Journals (Sweden)
Dongkyun Kim
2014-01-01
Full Text Available A novel approach for a Poisson cluster stochastic rainfall generator was validated in its ability to reproduce important rainfall and watershed response characteristics at 104 locations in the United States. The suggested novel approach, the Hybrid Model (THM), has, compared to traditional Poisson cluster rainfall modeling approaches, an additional capability to account for the interannual variability of rainfall statistics. THM and a traditional Poisson cluster rainfall model (the modified Bartlett-Lewis rectangular pulse model) were compared in their ability to reproduce the characteristics of extreme rainfall and watershed response variables such as runoff and peak flow. The results of the comparison indicate that THM generally outperforms the traditional approach in reproducing the distributions of peak rainfall, peak flow, and runoff volume. In addition, THM significantly outperformed the traditional approach in reproducing extreme rainfall by 2.3% to 66% and extreme flow values by 32% to 71%.
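The general shape of a Poisson cluster rainfall generator can be sketched as follows: storm origins arrive as a Poisson process, each storm spawns a random number of rain cells, and each cell carries a random depth. This is a minimal illustrative sketch with assumed exponential distributions and parameter names; it is neither THM nor the modified Bartlett-Lewis model:

```python
import math
import random

def _poisson_variate(rng, lam):
    """Knuth's method for a Poisson random variate with mean lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_cluster_rain(t_end, storm_rate, cells_per_storm, mean_depth, seed=42):
    """Toy Poisson cluster rainfall generator (illustrative only): storm
    origins are a Poisson process with rate storm_rate; each storm spawns
    a Poisson(cells_per_storm) number of cells, displaced exponentially
    after the origin, with exponential depths. Returns (time, depth) pairs."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(storm_rate)      # next storm origin
        if t >= t_end:
            break
        for _ in range(_poisson_variate(rng, cells_per_storm)):
            lag = rng.expovariate(1.0)        # cell displacement after origin
            events.append((t + lag, rng.expovariate(1.0 / mean_depth)))
    return sorted(events)
```

Clustered cell arrivals are what let such generators reproduce the burstiness of observed rainfall; THM's additional step, per the abstract, is conditioning these statistics so they also vary from year to year.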