WorldWideScience

Sample records for non-randomly distributed locations

  1. Real-time definition of non-randomness in the distribution of genomic events.

    Directory of Open Access Journals (Sweden)

    Ulrich Abel

    Features such as mutations or structural characteristics can be non-randomly or non-uniformly distributed within a genome. So far, computer simulations were required for statistical inferences on the distribution of sequence motifs. Here, we show that these analyses are possible using an analytical, mathematical approach. For the assessment of non-randomness, our calculations only require information including genome size, number of (sampled) sequence motifs and distance parameters. We have developed computer programs evaluating our analytical formulas for the real-time determination of expected values and p-values. This approach permits a flexible cluster definition that can be applied to most effectively identify non-random or non-uniform sequence motif distribution. As an example, we show the effectiveness and reliability of our mathematical approach in clinical retroviral vector integration site distribution.
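
    The record describes analytical formulas for expected values and p-values of motif clustering; those formulas are not reproduced here. As a rough illustrative sketch (not the authors' method), the snippet below computes, under uniform random placement, the expected number of adjacent sites closer together than a chosen window, and a binomial tail p-value that treats the gaps as approximately independent exponential variables. The genome size, site count and window are made-up numbers.

```python
import math
from scipy.stats import binom

def expected_close_pairs(n_sites, genome_size, window):
    """Under uniform random placement of n_sites positions on a genome of
    length genome_size, an adjacent gap is shorter than `window` with
    probability p ~ 1 - exp(-n_sites * window / genome_size)."""
    p = 1.0 - math.exp(-n_sites * window / genome_size)
    return (n_sites - 1) * p, p

def cluster_p_value(observed_close_pairs, n_sites, genome_size, window):
    """One-sided binomial tail probability of seeing at least the observed
    number of close adjacent pairs, treating gaps as independent (an
    approximation, not the record's exact formulas)."""
    _, p = expected_close_pairs(n_sites, genome_size, window)
    return binom.sf(observed_close_pairs - 1, n_sites - 1, p)

# Illustrative numbers: 200 integration sites, 3 Gbp genome, 10 kb window
expected, _ = expected_close_pairs(200, 3_000_000_000, 10_000)
print(f"expected close pairs: {expected:.2f}")
print(f"p-value for observing 5: {cluster_p_value(5, 200, 3_000_000_000, 10_000):.3g}")
```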

  2. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape

    Directory of Open Access Journals (Sweden)

    Christophe Coupé

    2018-04-01

    As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships

  3. Modeling Linguistic Variables With Regression Models: Addressing Non-Gaussian Distributions, Non-independent Observations, and Non-linear Predictors With Random Effects and Generalized Additive Models for Location, Scale, and Shape.

    Science.gov (United States)

    Coupé, Christophe

    2018-01-01

    As statistical approaches are increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we

  4. Microstructural descriptors and cellular automata simulation of the effects of non-random nuclei location on recrystallization in two dimensions

    Directory of Open Access Journals (Sweden)

    Paulo Rangel Rios

    2006-06-01

    The effect of non-random nuclei location and the efficiency of microstructural descriptors in assessing such a situation are studied. Cellular automata simulation of recrystallization in two dimensions is carried out to simulate microstructural evolution for nuclei distributions ranging from a periodic arrangement to clusters of nuclei. The simulation results are compared in detail with microstructural descriptors normally used to follow transformation evolution. It is shown that the contiguity is particularly relevant for detecting microstructural deviations from randomness. This work focuses on recrystallization, but its results are applicable to any nucleation and growth transformation.

  5. Non-random distribution of instability-associated chromosomal rearrangement breakpoints in human lymphoblastoid cells

    International Nuclear Information System (INIS)

    Moore, Stephen R.; Papworth, David; Grosovsky, Andrew J.

    2006-01-01

    Genomic instability is observed in tumors and in a large fraction of the progeny surviving irradiation. One of the best-characterized phenotypic manifestations of genomic instability is delayed chromosome aberrations. Our working hypothesis for the current study was that if genomic instability is in part attributable to cis mechanisms, we should observe a non-random distribution of chromosomes or sites involved in instability-associated rearrangements, regardless of radiation quality, dose, or trans factor expression. We report here the karyotypic examination of 296 instability-associated chromosomal rearrangement breaksites (IACRB) from 118 unstable clones of TK6 human B lymphoblasts and isogenic derivatives. When we tested whether IACRB were distributed across the chromosomes based on target size, a significant non-random distribution was evident (p < 0.00001), and three IACRB hotspots (chromosomes 11, 12, and 22) and one IACRB coldspot (chromosome 2) were identified. Statistical analysis at the chromosomal band level identified four IACRB hotspots accounting for 20% of all instability-associated breaks, two of which account for over 14% of all IACRB. Further, analysis of independent clones provided evidence within 14 individual clones of IACRB clustering at the chromosomal band level, suggesting a predisposition for further breaks after an initial break at some chromosomal bands. All of these events, independently or when taken together, were highly unlikely to have occurred by chance (p < 0.000001). These IACRB band-level cluster hotspots were observed independent of radiation quality, dose, or cellular p53 status. The non-random distribution of instability-associated chromosomal rearrangements described here differs significantly from the distribution observed in a first-division post-irradiation metaphase analysis (p = 0.0004). Taken together, these results suggest that genomic instability may be in part driven by chromosomal cis mechanisms.
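
    The target-size test described in this record can be illustrated with a simple chi-square goodness-of-fit computation. The sketch below is only a stand-in for the paper's actual analysis; the breaksite counts and genome fractions are hypothetical numbers.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical breaksite counts on four chromosomes and the fraction of the
# genome (target size) each chromosome represents; illustrative numbers only.
observed = np.array([58, 12, 41, 37])
genome_fraction = np.array([0.045, 0.080, 0.045, 0.017])
genome_fraction = genome_fraction / genome_fraction.sum()  # restrict to these chromosomes

expected = observed.sum() * genome_fraction
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p = {p:.2g}")
# A small p indicates breaksites are not distributed in proportion to target size.
```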

  6. The area distribution of two-dimensional random walks and non-Hermitian Hofstadter quantum mechanics

    International Nuclear Information System (INIS)

    Matveenko, Sergey; Ouvry, Stéphane

    2014-01-01

    When random walks on a square lattice are biased horizontally to move solely to the right, the probability distribution of their algebraic area can be obtained exactly (Mashkevich and Ouvry 2009 J. Stat. Phys. 137 71). We explicitly map this biased classical random system onto a non-Hermitian Hofstadter-like quantum model where a charged particle on a square lattice coupled to a perpendicular magnetic field hops only to the right. For the commensurate case, when the magnetic flux per unit cell is rational, an exact solution of the quantum model is obtained. The periodicity of the lattice allows one to relate traces of the Nth power of the Hamiltonian to probability distribution generating functions of biased walks of length N. (paper)

  7. Nonparametric Estimation of Distributions in Random Effects Models

    KAUST Repository

    Hart, Jeffrey D.

    2011-01-01

    We propose using minimum distance to obtain nonparametric estimates of the distributions of components in random effects models. A main setting considered is equivalent to having a large number of small datasets whose locations, and perhaps scales, vary randomly, but which otherwise have a common distribution. Interest focuses on estimating the distribution that is common to all datasets, knowledge of which is crucial in multiple testing problems where a location/scale invariant test is applied to every small dataset. A detailed algorithm for computing minimum distance estimates is proposed, and the usefulness of our methodology is illustrated by a simulation study and an analysis of microarray data. Supplemental materials for the article, including R-code and a dataset, are available online. © 2011 American Statistical Association.

  8. The relationship between randomness and power-law distributed move lengths in random walk algorithms

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2014-05-01

    Recently, we proposed a new random walk algorithm, termed the REV algorithm, in which the agent alters the directional rule that governs it using the most recent four random numbers. Here, we examined how a non-bounded number, i.e., "randomness" regarding move direction, was important for optimal searching and power-law distributed step lengths in rule change. We proposed two algorithms: the REV and REV-bounded algorithms. In the REV algorithm, one of the four random numbers used to change the rule is non-bounded. In contrast, all four random numbers in the REV-bounded algorithm are bounded. We showed that the REV algorithm exhibited more consistent power-law distributed step lengths and flexible searching behavior.

  9. A polymer, random walk model for the size-distribution of large DNA fragments after high linear energy transfer radiation

    Science.gov (United States)

    Ponomarev, A. L.; Brenner, D.; Hlatky, L. R.; Sachs, R. K.

    2000-01-01

    DNA double-strand breaks (DSBs) produced by densely ionizing radiation are not located randomly in the genome: recent data indicate DSB clustering along chromosomes. Stochastic DSB clustering at large scales, from >100 Mbp down to much smaller scales, is modeled using computer simulations and analytic equations. A random-walk, coarse-grained polymer model for chromatin is combined with a simple track structure model in Monte Carlo software called DNAbreak and is applied to data on alpha-particle irradiation of V-79 cells. The chromatin model neglects molecular details but systematically incorporates an increase in average spatial separation between two DNA loci as the number of base-pairs between the loci increases. Fragment-size distributions obtained using DNAbreak match data on large fragments about as well as distributions previously obtained with a less mechanistic approach. Dose-response relations, linear at small doses of high linear energy transfer (LET) radiation, are obtained. They are found to be non-linear when the dose becomes so large that there is a significant probability of overlapping or close juxtaposition, along one chromosome, for different DSB clusters from different tracks. The non-linearity is more evident for large fragments than for small. The DNAbreak results furnish an example of the RLC (randomly located clusters) analytic formalism, which generalizes the broken-stick fragment-size distribution of the random-breakage model that is often applied to low-LET data.
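
    For contrast with the clustered-break results, the random-breakage ("broken-stick") baseline mentioned at the end of the record is easy to simulate. The sketch below is not the DNAbreak code or the RLC formalism; it simply places breaks uniformly at random on a single chromosome and tallies fragment sizes, with illustrative numbers.

```python
import numpy as np

def random_breakage_fragments(chrom_length_mbp, n_breaks, rng):
    """Fragment sizes when n_breaks double-strand breaks are placed uniformly
    at random along one chromosome (the 'broken-stick' baseline model)."""
    breaks = np.sort(rng.uniform(0.0, chrom_length_mbp, size=n_breaks))
    edges = np.concatenate(([0.0], breaks, [chrom_length_mbp]))
    return np.diff(edges)

rng = np.random.default_rng(42)
sizes = np.concatenate([random_breakage_fragments(100.0, 20, rng) for _ in range(1000)])
hist, bin_edges = np.histogram(sizes, bins=np.logspace(-2, 2, 30))
print(hist)  # fragment-size spectrum; clustered breaks would shift weight
             # toward very small fragments relative to this baseline
```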

  10. Free energy distribution function of a random Ising ferromagnet

    International Nuclear Information System (INIS)

    Dotsenko, Victor; Klumov, Boris

    2012-01-01

    We study the free energy distribution function of a weakly disordered Ising ferromagnet in terms of the D-dimensional random temperature Ginzburg–Landau Hamiltonian. It is shown that besides the usual Gaussian 'body' this distribution function exhibits non-Gaussian tails both in the paramagnetic and in the ferromagnetic phases. Explicit asymptotic expressions for these tails are derived. It is demonstrated that the tails are strongly asymmetric: the left tail (for large negative values of the free energy) is much slower than the right one (for large positive values of the free energy). It is argued that at the critical point the free energy of the random Ising ferromagnet in dimensions D < 4 is described by a non-trivial universal distribution function which is non-self-averaging

  11. Weighted Scaling in Non-growth Random Networks

    International Nuclear Information System (INIS)

    Chen Guang; Yang Xuhua; Xu Xinli

    2012-01-01

    We propose a weighted model to explain the self-organizing formation of scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weights of all single-edges within it and the strength of a vertex as the sum of weights for those multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constantly. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into scale-free state. It is interesting that the network also obtains the character of an exponential edge weight distribution. Namely, coexistence of scale-free distribution and exponential distribution emerges.

  12. Practice Location Characteristics of Non-Traditional Dental Practices.

    Science.gov (United States)

    Solomon, Eric S; Jones, Daniel L

    2016-04-01

    Current and future dental school graduates are increasingly likely to choose a non-traditional dental practice (a group practice managed by a dental service organization or a corporate practice with employed dentists) for their initial practice experience. In addition, the growth of non-traditional practices, which are located primarily in major urban areas, could accelerate the movement of dentists to those areas and contribute to geographic disparities in the distribution of dental services. To help the profession understand the implications of these developments, the aim of this study was to compare the location characteristics of non-traditional practices and traditional dental practices. After identifying non-traditional practices across the United States, the authors located those practices and traditional dental practices geographically by zip code. Non-traditional dental practices were found to represent about 3.1% of all dental practices, but they had a greater impact on the marketplace with almost twice the average number of staff and annual revenue. Virtually all non-traditional dental practices were located in zip codes that also had a traditional dental practice. Zip codes with non-traditional practices had significant differences from zip codes with only a traditional dental practice: the populations in areas with non-traditional practices had higher income levels and higher education and were slightly younger and proportionally more Hispanic; those practices also had a much higher likelihood of being located in a major metropolitan area. Dental educators and leaders need to understand the impact of these trends in the practice environment in order to both prepare graduates for practice and make decisions about planning for the workforce of the future.

  13. Location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling

    Directory of Open Access Journals (Sweden)

    Alireza Goli

    2015-09-01

    Distribution and optimum allocation of emergency resources are among the most important tasks that need to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible; it is therefore important to find the optimum location and distribution of emergency relief resources. When a natural disaster occurs, it is not possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this approach, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is tested on randomly generated instances of different sizes. The preliminary results indicate that it is capable of reaching desirable solutions in a reasonable amount of time.

  14. A story about distributions of dimensions and locations of boulders

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    2006-01-01

    …for making a bored tunnel through the till deposit. Geographical universality was discovered through the statistical analysis of observations of boulder coordinates and dimension measures from widespread cliff beach locations. One conclusion is that the joint size distribution up to some degree of modeling … distribution. Moreover, these ratios are independent of the maximal dimension. The random point field structure of the boulder coordinates as isolated points or as clusters of points makes Poisson fields reasonable modeling candidates for the fields of both single points and cluster points. The cluster size…

  15. An Approach to Distinguish between Plasticity and Non-random Distributions of Behavioral Types Along Urban Gradients in a Wild Passerine Bird

    Directory of Open Access Journals (Sweden)

    Philipp Sprau

    2017-08-01

    The impact of urbanization has been widely studied in the context of species diversity and life history evolution. Behavioral adaptation, by contrast, remains poorly understood because empirical studies rarely investigate the relative importance of two key mechanisms: plastic responses vs. non-random distributions of behavioral types. We propose here an approach that enables the simultaneous estimation of the respective roles of these distinct mechanisms. We investigated why risky behaviors are often associated with urbanization, using an urban nest box population of great tits (Parus major) as a study system. We simultaneously and repeatedly quantified individual behavior (aggression and flight initiation distance) as well as environmental factors characterizing the level of urbanization (numbers of pedestrians, cars and cyclists). This enabled us to statistically distinguish plastic responses from patterns of non-random distributions of behavioral types. Data analyses revealed that individuals did not plastically adjust their behavior to the level of urbanization. Behavioral types were instead non-randomly distributed: bold birds occurred more frequently in areas with more cars and fewer pedestrians while shy individuals were predominantly found in areas with fewer cars and more pedestrians. These novel findings imply a major role for behavioral types in the evolutionary ecology of urban environments and call for the full integration of among- and within-individual variation in urban ecological studies.

  16. Efficiency of the human observer for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds.

    Science.gov (United States)

    Park, Subok; Gallas, Bradon D; Badano, Aldo; Petrick, Nicholas A; Myers, Kyle J

    2007-04-01

    A previous study [J. Opt. Soc. Am. A22, 3 (2005)] has shown that human efficiency for detecting a Gaussian signal at a known location in non-Gaussian distributed lumpy backgrounds is approximately 4%. This human efficiency is much less than the reported 40% efficiency that has been documented for Gaussian-distributed lumpy backgrounds [J. Opt. Soc. Am. A16, 694 (1999) and J. Opt. Soc. Am. A18, 473 (2001)]. We conducted a psychophysical study with a number of changes, specifically in display-device calibration and data scaling, from the design of the aforementioned study. Human efficiency relative to the ideal observer was found again to be approximately 5%. Our variance analysis indicates that neither scaling nor display made a statistically significant difference in human performance for the task. We conclude that the non-Gaussian distributed lumpy background is a major factor in our low human-efficiency results.
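
    Human efficiency in this literature is usually quantified as the squared ratio of human to ideal-observer detectability. Assuming that convention and the standard two-alternative forced-choice (2AFC) relation between proportion correct and d', a minimal sketch looks like the following; the proportions are made-up numbers, not the study's data.

```python
import math
from scipy.stats import norm

def dprime_from_2afc(pc):
    """Detectability index from proportion correct in a 2AFC task,
    assuming the standard equal-variance Gaussian model."""
    return math.sqrt(2.0) * norm.ppf(pc)

def efficiency(pc_human, pc_ideal):
    """Statistical efficiency as the squared ratio of human to ideal d'."""
    return (dprime_from_2afc(pc_human) / dprime_from_2afc(pc_ideal)) ** 2

# Illustrative numbers only: ~62% correct for humans vs. ~92% for the ideal observer
print(f"efficiency ~ {efficiency(0.62, 0.92):.1%}")
```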

  17. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.; Lombard, F.

    2012-01-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal

  18. The compaction of a random distribution of metal cylinders by the discrete element method

    DEFF Research Database (Denmark)

    Redanz, Pia; Fleck, N. A.

    2001-01-01

    The cold compaction of a 2D random distribution of metal circular cylinders has been investigated numerically by the discrete element method. Each cylindrical particle is located by a node at its centre and the plastic indentation of the contacts between neighbouring particles is represented by non-linear springs. The initial packing of the particles is generated by the ballistic deposition method. Salient micromechanical features of closed die and isostatic powder compaction are elucidated for both frictionless and sticking contacts. It is found that substantial rearrangement of frictionless particles…

  19. Simulating Pre-Asymptotic, Non-Fickian Transport Although Doing Simple Random Walks - Supported By Empirical Pore-Scale Velocity Distributions and Memory Effects

    Science.gov (United States)

    Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.

    2016-12-01

    Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. For the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are skewed velocity distribution, statistical dependencies between subsequent increments of particle positions (memory) and dependence between the x, y and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross dependence. Thus, it is able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time-series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values between the three velocity components in x, y and z. We will show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation and we analyze the approach to asymptotic behavior.
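
    The CDF-value random walk described in this record can be sketched in a few lines. The one-dimensional toy below is only an interpretation of the stated idea (the diffusion term and the cross-component coupling are omitted); the lognormal velocity sample, the step counts and the fitting parameter d_cdf are invented for illustration.

```python
import numpy as np

def ptrw_with_memory(velocity_samples, n_steps, dt, d_cdf, rng=None):
    """Minimal 1-D sketch of a particle-tracking random walk whose velocity is
    drawn from an empirical distribution, with memory enforced by letting the
    velocity's CDF value perform a small reflected random walk on [0, 1]."""
    rng = np.random.default_rng(rng)
    v_sorted = np.sort(velocity_samples)           # empirical quantile table
    u = rng.uniform()                              # current CDF value
    x = 0.0
    path = np.empty(n_steps)
    for i in range(n_steps):
        # small random step of the CDF value -> strongly correlated velocities
        u += rng.normal(0.0, np.sqrt(2.0 * d_cdf * dt))
        u = abs(u) % 2.0
        u = 2.0 - u if u > 1.0 else u              # reflect back into [0, 1]
        v = np.interp(u, np.linspace(0, 1, v_sorted.size), v_sorted)
        x += v * dt
        path[i] = x
    return path

# Example with a skewed (lognormal) stand-in for pore-scale velocities
rng = np.random.default_rng(3)
velocities = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
print(ptrw_with_memory(velocities, n_steps=10, dt=1.0, d_cdf=1e-3, rng=3)[:5])
```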

  20. Partial summations of stationary sequences of non-Gaussian random variables

    DEFF Research Database (Denmark)

    Mohr, Gunnar; Ditlevsen, Ove Dalager

    1996-01-01

    The distribution of the sum of a finite number of identically distributed random variables is in many cases easily determined given that the variables are independent. The moments of any order of the sum can always be expressed by the moments of the single term without computational problems … of convergence of the distribution of a sum (or an integral) of mutually dependent random variables to the Gaussian distribution. The paper is closely related to the work in Ditlevsen et al. [Ditlevsen, O., Mohr, G. & Hoffmeyer, P. Integration of non-Gaussian fields. Prob. Engng Mech 11 (1996) 15-23](2) … lognormal variables or polynomials of standard Gaussian variables. The dependency structure is induced by specifying the autocorrelation structure of the sequence of standard Gaussian variables. Particularly useful polynomials are the Winterstein approximations that distributionally fit with non…

  1. SSRscanner: a program for reporting distribution and exact location of simple sequence repeats.

    Science.gov (United States)

    Anwar, Tamanna; Khan, Asad U

    2006-02-20

    Simple sequence repeats (SSRs) have become important molecular markers for a broad range of applications, such as genome mapping and characterization, phenotype mapping, marker assisted selection of crop plants and a range of molecular ecology and diversity studies. These repeated DNA sequences are found in both prokaryotes and eukaryotes. They are distributed almost at random throughout the genome, ranging from mononucleotide to trinucleotide repeats. They are also found at longer lengths (> 6 repeating units) of tracts. Most of the computer programs that find SSRs do not report their exact positions. A computer program, SSRscanner, was written to find out the distribution, frequency and exact location of each SSR in the genome. SSRscanner is user friendly. It can search repeats of any length and produce outputs with their exact position on the chromosome and their frequency of occurrence in the sequence. This program has been written in PERL and is freely available for non-commercial users by request from the authors. Please contact the authors by E-mail: huzzi99@hotmail.com.
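
    SSRscanner itself is written in Perl and available from the authors. As a hedged illustration of the task it performs (reporting repeats together with their exact positions), here is a small Python sketch based on regular expressions; the motif lengths, the repeat threshold and the example sequence are arbitrary choices for this sketch.

```python
import re

def find_ssrs(seq, motif_lengths=(1, 2, 3), min_repeats=6):
    """Report simple sequence repeats with 1-based start/end positions,
    motif and copy number."""
    seq = seq.upper()
    hits = []
    for k in motif_lengths:
        # a k-mer followed by at least (min_repeats - 1) further copies of itself
        pattern = re.compile(rf"([ACGT]{{{k}}})\1{{{min_repeats - 1},}}")
        for m in pattern.finditer(seq):
            motif = m.group(1)
            copies = len(m.group(0)) // k
            hits.append((m.start() + 1, m.end(), motif, copies))
    return sorted(hits)

example = "TTTAGAGAGAGAGAGACCCGATATATATATATGGC"
for start, end, motif, copies in find_ssrs(example, min_repeats=6):
    print(f"{motif} x{copies} at {start}-{end}")
```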

  2. Non-random intrachromosomal distribution of radiation-induced chromatid aberrations in Vicia faba. [Aberration clustering

    Energy Technology Data Exchange (ETDEWEB)

    Schubert, I; Rieger, R [Akademie der Wissenschaften der DDR, Gatersleben. Zentralinst. fuer Genetik und Kulturpflanzenforschung]

    1976-04-01

    A reconstructed karyotype of Vicia faba, with all chromosomes individually distinguishable, was treated with X-rays, fast neutrons, and (3H)uridine (3HU). The distribution within metaphase chromosomes of induced chromatid aberrations was non-random for all agents used. Aberration clustering, in part agent specific, occurred in chromosome segments containing heterochromatin as defined by the presence of G bands. The pattern of aberration clustering found after treatment with 3HU did not allow the recognition of chromosome regions active in transcription during treatment. Furthermore, it was impossible to obtain unambiguous indications of the presence of AT- and GC-base clusters from the patterns of 3HT- and 3HC-induced chromatid aberrations, respectively. Possible reasons underlying these observations are discussed.

  3. Random distributed feedback fibre lasers

    Energy Technology Data Exchange (ETDEWEB)

    Turitsyn, Sergei K., E-mail: s.k.turitsyn@aston.ac.uk [Aston Institute of Photonic Technologies, Aston University, Birmingham B4 7ET (United Kingdom); Novosibirsk State University, 2 Pirogova str., 630090, Novosibirsk (Russian Federation); Babin, Sergey A. [Novosibirsk State University, 2 Pirogova str., 630090, Novosibirsk (Russian Federation); Institute of Automation and Electrometry SB RAS, 1 Ac. Koptug. ave., 630090, Novosibirsk (Russian Federation); Churkin, Dmitry V. [Aston Institute of Photonic Technologies, Aston University, Birmingham B4 7ET (United Kingdom); Novosibirsk State University, 2 Pirogova str., 630090, Novosibirsk (Russian Federation); Institute of Automation and Electrometry SB RAS, 1 Ac. Koptug. ave., 630090, Novosibirsk (Russian Federation); Vatnik, Ilya D.; Nikulin, Maxim [Institute of Automation and Electrometry SB RAS, 1 Ac. Koptug. ave., 630090, Novosibirsk (Russian Federation); Podivilov, Evgenii V. [Novosibirsk State University, 2 Pirogova str., 630090, Novosibirsk (Russian Federation); Institute of Automation and Electrometry SB RAS, 1 Ac. Koptug. ave., 630090, Novosibirsk (Russian Federation)

    2014-09-10

    The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with “negative absorption” of the interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors–random distributed feedback fibre laser–was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by the Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although an effective reflection due to the Rayleigh scattering is extremely small (∼0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the

  4. Random distributed feedback fibre lasers

    International Nuclear Information System (INIS)

    Turitsyn, Sergei K.; Babin, Sergey A.; Churkin, Dmitry V.; Vatnik, Ilya D.; Nikulin, Maxim; Podivilov, Evgenii V.

    2014-01-01

    The concept of random lasers exploiting multiple scattering of photons in an amplifying disordered medium in order to generate coherent light without a traditional laser resonator has attracted a great deal of attention in recent years. This research area lies at the interface of the fundamental theory of disordered systems and laser science. The idea was originally proposed in the context of astrophysics in the 1960s by V.S. Letokhov, who studied scattering with “negative absorption” of the interstellar molecular clouds. Research on random lasers has since developed into a mature experimental and theoretical field. A simple design of such lasers would be promising for potential applications. However, in traditional random lasers the properties of the output radiation are typically characterized by complex features in the spatial, spectral and time domains, making them less attractive than standard laser systems in terms of practical applications. Recently, an interesting and novel type of one-dimensional random laser that operates in a conventional telecommunication fibre without any pre-designed resonator mirrors–random distributed feedback fibre laser–was demonstrated. The positive feedback required for laser generation in random fibre lasers is provided by the Rayleigh scattering from the inhomogeneities of the refractive index that are naturally present in silica glass. In the proposed laser concept, the randomly backscattered light is amplified through the Raman effect, providing distributed gain over distances up to 100 km. Although an effective reflection due to the Rayleigh scattering is extremely small (∼0.1%), the lasing threshold may be exceeded when a sufficiently large distributed Raman gain is provided. Such a random distributed feedback fibre laser has a number of interesting and attractive features. The fibre waveguide geometry provides transverse confinement, and effectively one-dimensional random distributed feedback leads to the

  5. Smart intimation and location of faults in distribution system

    Science.gov (United States)

    Hari Krishna, K.; Srinivasa Rao, B.

    2018-04-01

    Locating faults in the distribution system is one of the most complicated problems we face today. Identifying the location and severity of a fault within a short time is required to provide a continuous power supply, but fault identification and conveying that information to the operator are the biggest challenges in the distribution network. This paper proposes a fault location method for the distribution system based on an Arduino Nano and a GSM module with a flame sensor. The main idea is to locate the fault in the distribution transformer by sensing the arc coming out of the fuse element. The biggest challenge in the distribution network is to identify the location and the severity of faults under different conditions. Well-operated transmission and distribution systems play a key role in an uninterrupted power supply. Whenever a fault occurs in the distribution system, the time taken to locate and eliminate the fault has to be reduced. The proposed design was achieved with a flame sensor and a GSM module. Under faulty conditions, the system automatically sends an alert message to the operator in the distribution system about the abnormal conditions near the transformer, the site code and its exact location for possible power restoration.

  6. The relationship between multilevel models and non-parametric multilevel mixture models: Discrete approximation of intraclass correlation, random coefficient distributions, and residual heteroscedasticity.

    Science.gov (United States)

    Rights, Jason D; Sterba, Sonya K

    2016-11-01

    Multilevel data structures are common in the social sciences. Often, such nested data are analysed with multilevel models (MLMs) in which heterogeneity between clusters is modelled by continuously distributed random intercepts and/or slopes. Alternatively, the non-parametric multilevel regression mixture model (NPMM) can accommodate the same nested data structures through discrete latent class variation. The purpose of this article is to delineate analytic relationships between NPMM and MLM parameters that are useful for understanding the indirect interpretation of the NPMM as a non-parametric approximation of the MLM, with relaxed distributional assumptions. We define how seven standard and non-standard MLM specifications can be indirectly approximated by particular NPMM specifications. We provide formulas showing how the NPMM can serve as an approximation of the MLM in terms of intraclass correlation, random coefficient means and (co)variances, heteroscedasticity of residuals at level 1, and heteroscedasticity of residuals at level 2. Further, we discuss how these relationships can be useful in practice. The specific relationships are illustrated with simulated graphical demonstrations, and direct and indirect interpretations of NPMM classes are contrasted. We provide an R function to aid in implementing and visualizing an indirect interpretation of NPMM classes. An empirical example is presented and future directions are discussed. © 2016 The British Psychological Society.
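
    The intraclass-correlation approximation mentioned in this record can be written compactly. The LaTeX sketch below shows the usual moment-matching form one would expect rather than the article's exact formulas; the notation (class probabilities, class intercepts, level-1 residual variance) is assumed here.

```latex
% Discrete approximation of the intraclass correlation from K latent classes,
% assuming class probabilities \pi_k, class intercepts \mu_k and level-1
% residual variance \sigma^2_e (notation assumed for this sketch).
\[
  \hat{\mu} = \sum_{k=1}^{K} \pi_k \mu_k, \qquad
  \hat{\tau}^2 = \sum_{k=1}^{K} \pi_k \bigl(\mu_k - \hat{\mu}\bigr)^2, \qquad
  \widehat{\mathrm{ICC}} = \frac{\hat{\tau}^2}{\hat{\tau}^2 + \sigma^2_e}.
\]
```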

  7. Modelling Thomson scattering for systems with non-equilibrium electron distributions

    Directory of Open Access Journals (Sweden)

    Chapman D.A.

    2013-11-01

    We investigate the effect of non-equilibrium electron distributions in the analysis of Thomson scattering for a range of conditions of interest to inertial confinement fusion experiments. Firstly, a generalised one-component model based on quantum statistical theory is given in the random phase approximation (RPA). The Chihara expression for electron-ion plasmas is then adapted to include the new non-equilibrium electron physics. The theoretical scattering spectra for both diffuse and dense plasmas in which non-equilibrium electron distributions are expected to arise are considered. We find that such distributions strongly influence the spectra and are hence an important consideration for accurately determining the plasma conditions.

  8. Stable and efficient retrospective 4D-MRI using non-uniformly distributed quasi-random numbers

    Science.gov (United States)

    Breuer, Kathrin; Meyer, Cord B.; Breuer, Felix A.; Richter, Anne; Exner, Florian; Weng, Andreas M.; Ströhle, Serge; Polat, Bülent; Jakob, Peter M.; Sauer, Otto A.; Flentje, Michael; Weick, Stefan

    2018-04-01

    The purpose of this work is the development of a robust and reliable three-dimensional (3D) Cartesian imaging technique for fast and flexible retrospective 4D abdominal MRI during free breathing. To this end, a non-uniform quasi-random (NU-QR) reordering of the phase encoding (ky–kz) lines was incorporated into 3D Cartesian acquisition. The proposed sampling scheme allocates more phase encoding points near the k-space origin while reducing the sampling density in the outer part of the k-space. Respiratory self-gating in combination with SPIRiT reconstruction is used for the reconstruction of abdominal data sets in different respiratory phases (4D-MRI). Six volunteers and three patients were examined at 1.5 T during free breathing. Additionally, data sets with conventional two-dimensional (2D) linear and 2D quasi-random phase encoding order were acquired for the volunteers for comparison. A quantitative evaluation of image quality versus scan times (from 70 s to 626 s) for the given sampling schemes was obtained by calculating the normalized mutual information (NMI) for all volunteers. Motion estimation was accomplished by calculating the maximum derivative of a signal intensity profile of a transition (e.g. tumor or diaphragm). The 2D non-uniform quasi-random distribution of phase encoding lines in Cartesian 3D MRI yields more efficient undersampling patterns for parallel imaging compared to conventional uniform quasi-random and linear sampling. Median NMI values of NU-QR sampling are the highest for all scan times. Therefore, within the same scan time 4D imaging could be performed with improved image quality. The proposed method allows for the reconstruction of motion artifact reduced 4D data sets with isotropic spatial resolution of 2.1 × 2.1 × 2.1 mm³ in a short scan time, e.g. 10 respiratory phases in only 3 min. Cranio-caudal tumor displacements between 23 and 46 mm could be observed. NU-QR sampling enables stable 4D

  9. Cluster analysis for determining distribution center location

    Science.gov (United States)

    Lestari Widaningrum, Dyah; Andika, Aditya; Murphiyanto, Richard Dimas Julian

    2017-12-01

    Determination of distribution facility locations is highly important for surviving the high level of competition in today’s business world. Companies can operate multiple distribution centers to mitigate supply chain risk. Thus, new problems arise, namely how many facilities should be provided and where. This study examines a fast-food restaurant brand located in Greater Jakarta. This brand is included in the category of the top 5 fast-food restaurant chains based on retail sales. There were three stages in this study: compiling spatial data, cluster analysis, and network analysis. Cluster analysis results are used to consider the location of the additional distribution center. Network analysis results show a more efficient process, referring to a shorter distance in the distribution process.
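
    A simple way to see how cluster analysis suggests candidate distribution-center sites is to run k-means on outlet coordinates and read off the cluster centroids. The sketch below uses scikit-learn with randomly generated, hypothetical coordinates; it is not the study's data or exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical outlet coordinates (longitude, latitude) around Greater Jakarta;
# cluster centroids serve as candidate distribution-centre locations.
rng = np.random.default_rng(7)
stores = rng.normal(loc=[106.85, -6.2], scale=[0.15, 0.12], size=(120, 2))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(stores)
for i, (lon, lat) in enumerate(kmeans.cluster_centers_):
    n = int(np.sum(kmeans.labels_ == i))
    print(f"candidate DC {i}: ({lon:.3f}, {lat:.3f}) serving {n} outlets")
```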

  10. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    Science.gov (United States)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency.

  11. Protection of Location Privacy Based on Distributed Collaborative Recommendations.

    Science.gov (United States)

    Wang, Peng; Yang, Jing; Zhang, Jian-Pei

    2016-01-01

    In the existing centralized location services system structure, the server is easily attacked and becomes a communication bottleneck, which can lead to the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy based on a distributed system. In this strategy, each node establishes a profile of its own location information. When a request for location services appears, the user can obtain the corresponding location services according to the recommendations of the neighboring users' location information profiles. If no suitable recommended location service results are obtained, the user can send a service request to the server based on the construction of a k-anonymous data set with the centroid position of the neighbors. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users' location information profiles and used generalization and encryption to ensure the safety of the users' location privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the strategy proposed in this paper is capable of reducing the frequency of access to the location server, providing better location services, and better protecting the user's location privacy.
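
    The fallback step (building a k-anonymous request around the centroid of neighbouring users) can be sketched as follows. The function name, coordinate format and numbers are assumptions for illustration, not the paper's implementation.

```python
import random

def k_anonymous_request_point(user_loc, neighbor_locs, k):
    """Build a k-anonymous query location: pick k - 1 neighbouring users'
    locations, add the requester, and send only the centroid to the server."""
    if len(neighbor_locs) < k - 1:
        raise ValueError("not enough neighbours for k-anonymity")
    group = random.sample(neighbor_locs, k - 1) + [user_loc]
    lat = sum(p[0] for p in group) / k
    lon = sum(p[1] for p in group) / k
    return (lat, lon)

neighbors = [(45.78, 126.65), (45.76, 126.68), (45.77, 126.63), (45.75, 126.66)]
print(k_anonymous_request_point((45.77, 126.64), neighbors, k=4))
```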

  12. Robustness to non-normality of common tests for the many-sample location problem

    Directory of Open Access Journals (Sweden)

    Azmeri Khan

    2003-01-01

    This paper studies the effect of deviating from the normal distribution assumption when considering the power of two many-sample location test procedures: ANOVA (parametric) and Kruskal-Wallis (non-parametric). Power functions for these tests under various conditions are produced using simulation, where the simulated data are produced using MacGillivray and Cannon's [10] recently suggested g-and-k distribution. This distribution can provide data with selected amounts of skewness and kurtosis by varying two nearly independent parameters.
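
    A minimal version of such a power comparison can be run with SciPy. The g-and-k sampler below uses one common parameterization of the quantile function (with the conventional c = 0.8), and the shift, sample sizes and parameter values are arbitrary, so this is a sketch of the study design rather than a reproduction of it.

```python
import numpy as np
from scipy.stats import f_oneway, kruskal

def g_and_k_sample(n, a=0.0, b=1.0, g=0.5, k=0.1, c=0.8, rng=None):
    """Draw n values via the g-and-k quantile function applied to standard
    normal deviates (one common parameterization; c = 0.8 by convention)."""
    z = np.random.default_rng(rng).standard_normal(n)
    return a + b * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

def power(shift, n=20, groups=3, reps=1000, alpha=0.05, seed=1):
    """Estimate rejection rates of ANOVA and Kruskal-Wallis when group i is
    shifted by i * shift, with skewed, heavy-tailed g-and-k errors."""
    rng = np.random.default_rng(seed)
    hits_anova = hits_kw = 0
    for _ in range(reps):
        samples = [g_and_k_sample(n, a=i * shift, rng=rng) for i in range(groups)]
        hits_anova += f_oneway(*samples).pvalue < alpha
        hits_kw += kruskal(*samples).pvalue < alpha
    return hits_anova / reps, hits_kw / reps

print("power (ANOVA, Kruskal-Wallis):", power(shift=0.5))
```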

  13. Growth-induced strong pinning sites in laser ablated YBa2Cu3O7-δ films with a non-random distribution

    International Nuclear Information System (INIS)

    Huijbregtse, J.M.; Klaassen, F.C.; Geest, R.C.F. van der; Dam, B.; Griessen, R.

    1999-01-01

    Recently, the authors showed that natural linear defects are the origin of the high critical currents in laser ablated YBa2Cu3O7-δ films. Combining wet-chemical etching and Atomic Force Microscopy, they find that these dislocations are created by island coalescence during growth. Consequently, the defect density can be reproducibly varied by manipulating the density of growth islands, which in turn depends on the substrate temperature. Interestingly, the radial defect distribution function approaches zero at small distances, indicating short range order. Therefore, they are now able to study vortex matter in films with a tailored non-random distribution of natural strong pinning sites

  14. Statistical Feature Extraction for Fault Locations in Nonintrusive Fault Detection of Low Voltage Distribution Systems

    Directory of Open Access Journals (Sweden)

    Hsueh-Hsien Chang

    2017-04-01

    This paper proposes statistical feature extraction methods combined with artificial intelligence (AI) approaches for fault locations in non-intrusive single-line-to-ground fault (SLGF) detection of low voltage distribution systems. The input features of the AI algorithms are extracted using statistical moment transformation to reduce the dimensions of the power signature inputs measured by using non-intrusive fault monitoring (NIFM) techniques. The data required to develop the network are generated by simulating SLGF using the Electromagnetic Transient Program (EMTP) in a test system. To enhance the identification accuracy, these features, after normalization, are given to the AI algorithms presented and evaluated in this paper. Different AI techniques are then utilized to compare which identification algorithms are suitable for diagnosing the SLGF for various power signatures in a NIFM system. The simulation results show that the proposed method is effective and can identify the fault locations by using non-intrusive monitoring techniques for low voltage distribution systems.
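
    As a toy illustration of "statistical moment transformation followed by a classifier" (not the EMTP-based NIFM pipeline of the paper), the sketch below reduces synthetic waveforms to four moments and trains a k-nearest-neighbour classifier; all data, labels and the classifier choice are invented for this sketch.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.neighbors import KNeighborsClassifier

def moment_features(signal):
    """Reduce a sampled power signature to a few statistical moments."""
    return np.array([signal.mean(), signal.std(), skew(signal), kurtosis(signal)])

# Hypothetical training data: each row is a measured waveform, labelled with
# the feeder section where the single-line-to-ground fault occurred.
rng = np.random.default_rng(0)
waveforms = [rng.normal(loc=0, scale=1 + 0.3 * (i % 4), size=256) for i in range(200)]
labels = [i % 4 for i in range(200)]
X = np.array([moment_features(w) for w in waveforms])

clf = KNeighborsClassifier(n_neighbors=5).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```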

  15. Place field assembly distribution encodes preferred locations.

    Directory of Open Access Journals (Sweden)

    Omar Mamad

    2017-09-01

    The hippocampus is the main locus of episodic memory formation and the neurons there encode the spatial map of the environment. Hippocampal place cells represent location, but their role in the learning of preferential location remains unclear. The hippocampus may encode locations independently from the stimuli and events that are associated with these locations. We have discovered a unique population code for the experience-dependent value of the context. The degree of reward-driven navigation preference highly correlates with the spatial distribution of the place fields recorded in the CA1 region of the hippocampus. We show place field clustering towards rewarded locations. Optogenetic manipulation of the ventral tegmental area demonstrates that the experience-dependent place field assembly distribution is directed by tegmental dopaminergic activity. The ability of the place cells to remap parallels the acquisition of reward context. Our findings present key evidence that the hippocampal neurons are not merely mapping the static environment but also store the concurrent context reward value, enabling episodic memory for past experience to support future adaptive behavior.

  16. On the Distribution of Random Geometric Graphs

    DEFF Research Database (Denmark)

    Badiu, Mihai Alin; Coon, Justin P.

    2018-01-01

    Random geometric graphs (RGGs) are commonly used to model networked systems that depend on the underlying spatial embedding. We concern ourselves with the probability distribution of an RGG, which is crucial for studying its random topology, properties (e.g., connectedness), or Shannon entropy as a measure of the graph’s topological uncertainty (or information content). Moreover, the distribution is also relevant for determining average network performance or designing protocols. However, a major impediment in deducing the graph distribution is that it requires the joint probability distribution of the n(n − 1)/2 distances between n nodes randomly distributed in a bounded domain. As no such result exists in the literature, we make progress by obtaining the joint distribution of the distances between three nodes confined in a disk in R². This enables the calculation of the probability distribution
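
    The analytic joint distribution is not reproduced here, but the setting is easy to explore numerically. The Monte Carlo sketch below samples three uniform points in a unit disk and checks the mean pairwise distance against the known disk line-picking value 128/(45π); all counts and seeds are arbitrary.

```python
import numpy as np

def random_points_in_disk(n, radius=1.0, rng=None):
    """Uniform points in a disk via inverse-CDF sampling of the radius."""
    rng = np.random.default_rng(rng)
    r = radius * np.sqrt(rng.uniform(size=n))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Empirical behaviour of the three pairwise distances between three nodes
# confined to a unit disk (a Monte Carlo stand-in for the analytic result).
rng = np.random.default_rng(0)
trials = 100_000
pts = random_points_in_disk(3 * trials, rng=rng).reshape(trials, 3, 2)
d = np.linalg.norm(pts[:, [0, 1, 2]] - pts[:, [1, 2, 0]], axis=2)  # d01, d12, d20
print("mean pairwise distance ~", d.mean())  # ~ 128/(45*pi) ~ 0.905 for a unit disk
```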

  17. Random generation of RNA secondary structures according to native distributions

    Directory of Open Access Journals (Sweden)

    Nebel Markus E

    2011-10-01

    Background: Random biological sequences are a topic of great interest in genome analysis since, according to a powerful paradigm, they represent the background noise from which the actual biological information must differentiate. Accordingly, the generation of random sequences has been investigated for a long time. Similarly, random objects of a more complicated structure, like RNA molecules or proteins, are of interest. Results: In this article, we present a new general framework for deriving algorithms for the non-uniform random generation of combinatorial objects according to the encoding and probability distribution implied by a stochastic context-free grammar. Briefly, the framework extends the well-known recursive method for (uniform) random generation and uses the popular framework of admissible specifications of combinatorial classes, introducing weighted combinatorial classes to allow for non-uniform generation by means of unranking. This framework is used to derive an algorithm for the generation of RNA secondary structures of a given fixed size. We address the random generation of these structures according to a realistic distribution obtained from real-life data by using a very detailed context-free grammar (that models the class of RNA secondary structures by distinguishing between all known motifs in RNA structure). Compared to well-known sampling approaches used in several structure prediction tools (such as SFold), ours has two major advantages: Firstly, after a preprocessing step in time O(n²) for the computation of all weighted class sizes needed, with our approach a set of m random secondary structures of a given structure size n can be computed in worst-case time complexity O(m·n·log(n)), while other algorithms typically have a runtime in O(m·n²). Secondly, our approach works with integer arithmetic only, which is faster and saves us from all the discomforting details of using floating point arithmetic with

  18. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    International Nuclear Information System (INIS)

    Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W

    2013-01-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)

  19. Parameter estimation of multivariate multiple regression model using bayesian with non-informative Jeffreys’ prior distribution

    Science.gov (United States)

    Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.

    2018-05-01

    The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. The Bayesian method involves two distributions, the prior and the posterior; the posterior distribution is influenced by the choice of prior distribution. Jeffreys’ prior distribution is a kind of non-informative prior distribution, used when information about the parameters is not available. The non-informative Jeffreys’ prior distribution is combined with the sample information, resulting in the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys’ prior distribution. Based on the results and discussion, the parameter estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions. The marginal posterior distributions for β and Σ are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals of functions whose values are difficult to determine analytically. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
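
    A minimal sketch of such a Gibbs sampler is given below, assuming the standard conditionals implied by the Jeffreys prior (matrix-normal for B given Σ, inverse Wishart for Σ given B). The synthetic data, dimensions and iteration counts are arbitrary, and this is an illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import invwishart

def gibbs_mv_regression(Y, X, n_iter=2000, seed=0):
    """Minimal Gibbs sampler for Y = X B + E with rows of E ~ N(0, Sigma) and
    the non-informative Jeffreys prior p(B, Sigma) ~ |Sigma|^{-(m+1)/2}.
    Assumed conditionals: vec(B) | Sigma ~ N(vec(B_ols), Sigma kron (X'X)^{-1})
    and Sigma | B ~ Inverse-Wishart(n, (Y - XB)'(Y - XB))."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    p = X.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    B_ols = XtX_inv @ X.T @ Y
    Sigma = np.cov(Y - X @ B_ols, rowvar=False)
    draws_B, draws_S = [], []
    for _ in range(n_iter):
        cov = np.kron(Sigma, XtX_inv)                       # column-stacked vec(B)
        vecB = rng.multivariate_normal(B_ols.flatten(order="F"), cov)
        B = vecB.reshape((p, m), order="F")
        resid = Y - X @ B
        Sigma = invwishart.rvs(df=n, scale=resid.T @ resid, random_state=rng)
        draws_B.append(B)
        draws_S.append(Sigma)
    return np.array(draws_B), np.array(draws_S)

# Tiny synthetic example (hypothetical data)
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
B_true = np.array([[1.0, -0.5], [2.0, 0.3], [0.0, 1.5]])
Y = X @ B_true + rng.normal(scale=0.5, size=(50, 2))
B_draws, _ = gibbs_mv_regression(Y, X, n_iter=500)
print(B_draws[100:].mean(axis=0))  # posterior mean estimate of B after burn-in
```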

  20. Lower limits for distribution tails of randomly stopped sums

    NARCIS (Netherlands)

    Denisov, D.E.; Korshunov, D.A.; Foss, S.G.

    2008-01-01

    We study lower limits for the ratio $\overline{F^{*\tau}}(x)/\overline{F}(x)$ of tail distributions, where $F^{*\tau}$ is the distribution of a sum of a random size $\tau$ of independent identically distributed random variables having a common distribution $F$, and the random variable $\tau$ does not depend on the summands.

  1. Laser absorption of carbon fiber reinforced polymer with randomly distributed carbon fibers

    Science.gov (United States)

    Hu, Jun; Xu, Hebing; Li, Chao

    2018-03-01

    Laser processing of carbon fiber reinforced polymer (CFRP) is a non-traditional machining method which has many prospective applications. The laser absorption characteristics of CFRP are analyzed in this paper. A ray tracing model describing the interaction of the laser spot with CFRP is established. The material model contains randomly distributed carbon fibers which are generated using an improved carbon fiber placement method. It was found that CFRP has good laser absorption due to multiple reflections of the light rays in the material’s microstructure. The randomly distributed carbon fibers make the absorptivity of the light rays change randomly in the laser spot. Meanwhile, the average absorptivity fluctuation is obvious during movement of the laser. The experimental measurements agree well with the values predicted by the ray tracing model.

  2. Distributional and efficiency results for subset selection

    NARCIS (Netherlands)

    Laan, van der P.

    1996-01-01

    Assume k (k ≥ 2) populations are given. The associated independent random variables have continuous distribution functions with an unknown location parameter. The statistical selection goal is to select a non-empty subset which contains the best population, that is, the population with…

  3. Non-compact random generalized games and random quasi-variational inequalities

    OpenAIRE

    Yuan, Xian-Zhi

    1994-01-01

    In this paper, existence theorems of random maximal elements, random equilibria for the random one-person game and random generalized game with a countable number of players are given as applications of random fixed point theorems. By employing existence theorems of random generalized games, we deduce the existence of solutions for non-compact random quasi-variational inequalities. These in turn are used to establish several existence theorems of noncompact generalized random ...

  4. Potential use of the non-random distribution of N2 and N2O mole masses in the atmosphere as a tool for tracing atmospheric mixing and isotope fractionation processes

    International Nuclear Information System (INIS)

    Well, R.; Langel, R.; Reineking, A.

    2002-01-01

    The variation in the natural abundance of 15N in atmospheric gas species is often used to determine the mixing of trace gases from different sources. With conventional budget calculations one unknown quantity can be determined if the remaining quantities are known. From 15N tracer studies in soils with highly enriched 15N-nitrate a procedure is known to calculate the mixing of atmospheric and soil-derived N2 based on the measurement of the 30/28 and 29/28 ratios in gas samples collected from soil covers. Because of the non-random distribution of the mole masses 30N2, 29N2 and 28N2 in the mixing gas it is possible to calculate two quantities simultaneously, i.e. the mixing ratio of atmospheric and soil-derived N2, and the isotopic signature of the soil-derived N2. Routine standard measurements of laboratory air had suggested a non-random distribution of N2 mole masses. The objective of this study was to investigate and explain the existence of non-random distributions of 15N15N, 14N15N and 14N14N in N2 and N2O in environmental samples. The calculation of theoretical isotope data resulting from hypothetical mixing of two sources differing in 15N natural abundance demonstrated that the deviation from an ideal random distribution of mole masses is not detectable with the current precision of mass spectrometry. 15N analysis of N2 or N2O was conducted with randomised and non-randomised replicate samples of different origin. 15N abundances calculated from 29/28 ratios were generally higher in randomised samples. The differences between the treatments ranged between 0.05 and 0.17 δ‰ 15N. It was concluded that the observed randomisation effect is probably caused by 15N15N fractionation during environmental processes. (author)
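
    The two-quantity calculation mentioned above (the mixing fraction and the 15N signature of the soil-derived N2 from the measured 29/28 and 30/28 ratios) can be sketched as a small nonlinear system. The sketch below is a generic illustration of the principle with made-up numbers, assuming random (binomial) isotope pairing within each pool; it is not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import fsolve

A_AIR = 0.003663  # approximate 15N atom fraction of atmospheric N2

def mixture_ratios(x_soil, a_soil, a_air=A_AIR):
    """29/28 and 30/28 ratios of a mixture of two N2 pools, each with random
    (binomial) pairing of 14N/15N within the pool but non-random overall."""
    def masses(a):
        return np.array([(1 - a) ** 2, 2 * a * (1 - a), a ** 2])  # masses 28, 29, 30
    m = x_soil * masses(a_soil) + (1 - x_soil) * masses(a_air)
    return m[1] / m[0], m[2] / m[0]

def solve_mixing(r29_meas, r30_meas, guess=(0.1, 0.5)):
    """Solve simultaneously for the soil-derived fraction and its 15N abundance."""
    def eqs(p):
        x, a = p
        r29, r30 = mixture_ratios(x, a)
        return [r29 - r29_meas, r30 - r30_meas]
    return fsolve(eqs, guess)

# Forward-simulate a mixture (5% soil-derived N2 at 60 atom% 15N), then recover both unknowns
r29, r30 = mixture_ratios(0.05, 0.60)
x_est, a_est = solve_mixing(r29, r30)
print(f"soil fraction = {x_est:.3f}, soil 15N abundance = {a_est:.3f}")
```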

  5. Distribution Locational Marginal Pricing for Optimal Electric Vehicle Charging Management

    DEFF Research Database (Denmark)

    Li, Ruoyang; Wu, Qiuwei; Oren, Shmuel S.

    2013-01-01

    This paper presents an integrated distribution locational marginal pricing (DLMP) method designed to alleviate congestion induced by electric vehicle (EV) loads in future power systems. In the proposed approach, the distribution system operator (DSO) determines distribution locational marginal...... shown that the socially optimal charging schedule can be implemented through a decentralized mechanism where loads respond autonomously to the posted DLMPs by maximizing their individual net surplus...

  6. Selection of City Distribution Locations in Urbanized Areas

    NARCIS (Netherlands)

    Bu, L.; Van Duin, J.H.R.; Wiegmans, B.; Luo, Z.; Yin, C.

    2012-01-01

    This paper aims to apply a preference method for selecting optimal city distribution reloading locations in urbanized areas. The focus in the optimization is on trucks entering the urbanized area where the truck can choose between at least two locations with similar distances determined by a

  7. A location-based multiple point statistics method: modelling the reservoir with non-stationary characteristics

    Directory of Open Access Journals (Sweden)

    Yin Yanshu

    2017-12-01

    In this paper, a location-based multiple point statistics method is developed to model a non-stationary reservoir. The proposed method characterizes the relationship between the sedimentary pattern and the deposit location using the relative central position distance function, which alleviates the requirement that the training image and the simulated grids have the same dimension. The weights in every direction of the distance function can be changed to characterize the reservoir heterogeneity in various directions. The local integral replacements of data events, structured random path, distance tolerance and multi-grid strategy are applied to reproduce the sedimentary patterns and obtain a more realistic result. This method is compared with the traditional Snesim method using a synthesized 3-D training image of Poyang Lake and a reservoir model of Shengli Oilfield in China. The results indicate that the new method can reproduce the non-stationary characteristics better than the traditional method and is more suitable for simulation of delta-front deposits. These results show that the new method is a powerful tool for modelling a reservoir with non-stationary characteristics.

  8. Joint location, inventory, and preservation decisions for non-instantaneous deterioration items under delay in payments

    Science.gov (United States)

    Tsao, Yu-Chung

    2016-02-01

    This study models a joint location, inventory and preservation decision-making problem for non-instantaneous deteriorating items under delay in payments. An outside supplier provides a credit period to the wholesaler, which has a distribution system with distribution centres (DCs). Non-instantaneous deterioration means that no deterioration occurs in the earlier stage, which is typical of items such as fresh food and fruits. This paper also considers that the deterioration rate will decrease and the preservation cost will increase as the preservation effort increases. Therefore, how much preservation effort should be made is a crucial decision. The objective of this paper is to determine the optimal locations and number of DCs, the optimal replenishment cycle time at DCs, and the optimal preservation effort simultaneously such that the total network profit is maximised. The problem is formulated as piecewise nonlinear functions and has three different cases. Algorithms based on piecewise nonlinear optimisation are provided to solve the joint location and inventory problem for all cases. Computational analysis illustrates the solution procedures and the impacts of the related parameters on decisions and profits. The results of this study can serve as references for business managers or administrators.

  9. Non-uniform approximations for sums of discrete m-dependent random variables

    OpenAIRE

    Vellaisamy, P.; Cekanavicius, V.

    2013-01-01

    Non-uniform estimates are obtained for Poisson, compound Poisson, translated Poisson, negative binomial and binomial approximations to sums of m-dependent integer-valued random variables. Estimates for the Wasserstein metric also follow easily from our results. The results are then exemplified by the approximation of the Poisson binomial distribution, 2-runs and $m$-dependent $(k_1,k_2)$-events.

  10. Three dimensional multi perspective imaging with randomly distributed sensors

    International Nuclear Information System (INIS)

    DaneshPanah, Mehdi; Javidi, Bahram

    2008-01-01

    In this paper, we review a three dimensional (3D) passive imaging system that exploits the visual information captured from the scene from multiple perspectives to reconstruct the scene voxel by voxel in 3D space. The primary contribution of this work is to provide a computational reconstruction scheme based on randomly distributed sensor locations in space. In virtually all multi perspective techniques (e.g. integral imaging, synthetic aperture integral imaging, etc.), there is an implicit assumption that the sensors lie on a simple, regular pickup grid. Here, we relax this assumption and suggest a computational reconstruction framework that unifies the available methods as its special cases. The importance of this work is that it enables three dimensional imaging technology to be implemented in a multitude of novel application domains such as 3D aerial imaging, collaborative imaging and long range 3D imaging, where sustaining a regular pickup grid is not possible and/or the parallax requirements call for an irregular or sparse synthetic aperture mode. Although the sensors can be distributed in any random arrangement, we assume that the pickup position is measured at the time of capture of each elemental image. We demonstrate the feasibility of the methods proposed here by experimental results.

  11. Distribution of blood types in a sample of 245 New Zealand non-purebred cats.

    Science.gov (United States)

    Cattin, R P

    2016-05-01

    To determine the distribution of feline blood types in a sample of non-pedigree, domestic cats in New Zealand, whether a difference exists in this distribution between domestic short haired and domestic long haired cats, and between the North and South Islands of New Zealand; and to calculate the risk of a random blood transfusion causing a severe transfusion reaction, and the risk of a random mating producing kittens susceptible to neonatal isoerythrolysis. The results of 245 blood typing tests in non-pedigree cats performed at the New Zealand Veterinary Pathology (NZVP) and Gribbles Veterinary Pathology laboratories between the beginning of 2009 and the end of 2014 were retrospectively collated and analysed. Cats that were identified as domestic short or long haired were included. For the cats tested at Gribbles Veterinary Pathology 62 were from the North Island, and 27 from the South Island. The blood type distribution differed between samples from the two laboratories (p=0.029), but not between domestic short and long haired cats (p=0.50), or between the North and South Islands (p=0.76). Of the 89 cats tested at Gribbles Veterinary Pathology, 70 (79%) were type A, 18 (20%) type B, and 1 (1%) type AB; for NZVP 139/156 (89.1%) cats were type A, 16 (10.3%) type B, and 1 (0.6%) type AB. It was estimated that 18.3-31.9% of random blood transfusions would be at risk of a transfusion reaction, and neonatal isoerythrolysis would be a risk in 9.2-16.1% of random matings between non-pedigree cats. The results from this study suggest that there is a high risk of complications for a random blood transfusion between non-purebred cats in New Zealand. Neonatal isoerythrolysis should be considered an important differential diagnosis in illness or mortality in kittens during the first days of life.

  12. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
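
    A minimal version of the two-step recipe described above (spectral shaping of white Gaussian noise, followed by a memoryless transform to the target amplitude distribution) might look like the sketch below. The Gaussian-shaped power spectrum and exponential target PDF are arbitrary choices for illustration, not the cases treated in the paper.

```python
import numpy as np
from scipy.special import erf

def random_field(n=256, corr_len=10.0, seed=0):
    """2-D random field with a Gaussian-shaped power spectrum and exponential marginal PDF."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal((n, n))
    # 1) color the white noise: multiply its FFT by the square root of the target PSD
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k)
    psd = np.exp(-(kx**2 + ky**2) * (np.pi * corr_len) ** 2)
    colored = np.fft.ifft2(np.fft.fft2(white) * np.sqrt(psd)).real
    colored /= colored.std()
    # 2) memoryless transform: Gaussian CDF -> uniform -> target inverse CDF (exponential)
    u = 0.5 * (1 + erf(colored / np.sqrt(2)))        # standard normal CDF
    u = np.clip(u, 1e-12, 1 - 1e-12)
    field = -np.log(1 - u)                           # exponential(1) quantile function
    return field

f = random_field()
print(f.mean(), f.std())   # both should be close to 1 for an exponential(1) marginal
```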

  13. Nonparametric estimation of location and scale parameters

    KAUST Repository

    Potgieter, C.J.

    2012-12-01

    Two random variables X and Y belong to the same location-scale family if there are constants μ and σ such that Y and μ+σX have the same distribution. In this paper we consider non-parametric estimation of the parameters μ and σ under minimal assumptions regarding the form of the distribution functions of X and Y. We discuss an approach to the estimation problem that is based on asymptotic likelihood considerations. Our results enable us to provide a methodology that can be implemented easily and which yields estimators that are often near optimal when compared to fully parametric methods. We evaluate the performance of the estimators in a series of Monte Carlo simulations. © 2012 Elsevier B.V. All rights reserved.
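
    For contrast with the likelihood-based estimators discussed above, a crude nonparametric location-scale fit can be obtained from medians and interquartile ranges. The sketch below is only an illustrative baseline under the same model, Y distributed as μ + σX; it is not the estimator proposed in the paper.

```python
import numpy as np

def location_scale_fit(x, y):
    """Estimate (mu, sigma) such that Y is distributed as mu + sigma * X,
    using robust quantile summaries of the two samples."""
    iqr = lambda a: np.subtract(*np.percentile(a, [75, 25]))
    sigma = iqr(y) / iqr(x)
    mu = np.median(y) - sigma * np.median(x)
    return mu, sigma

rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=5000)              # common shape, unspecified
y = 2.0 + 3.0 * rng.standard_t(df=5, size=5000)
print(location_scale_fit(x, y))                  # roughly (2, 3)
```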

  14. The randomly renewed general item and the randomly inspected item with exponential life distribution

    International Nuclear Information System (INIS)

    Schneeweiss, W.G.

    1979-01-01

    For a randomly renewed item the probability distributions of the time to failure and of the duration of down time and the expectations of these random variables are determined. Moreover, it is shown that the same theory applies to randomly checked items with exponential probability distribution of life such as electronic items. The case of periodic renewals is treated as an example. (orig.)

  15. Cover estimation and payload location using Markov random fields

    Science.gov (United States)

    Quach, Tu-Thach

    2014-02-01

    Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive against current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.

  16. Obtaining location/arrival-time and location/outflow-quantity distributions for steady flow systems

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    A steady, two-dimensional flow system is used to demonstrate the application of location/arrival-time and location/outflow-quantity curves in determining the environmental consequences of groundwater contamination. The subsurface geologic and hydrologic evaluations needed to obtain the arrival results involve a sequence of four phases: system identification, new potential determination, flow systems kinematics, and contaminant transport analysis. Once these phases are completed, they are effectively summarized and easily used to evaluate environmental consequences through the arrival distributions

  17. Location Analysis of Freight Distribution Terminal of Jakarta City, Indonesia

    Directory of Open Access Journals (Sweden)

    Nahry Nahry

    2016-03-01

    Currently Jakarta has two freight terminals, namely Pulo Gebang and Tanah Merdeka. However, both terminals are used only for parking and have not yet been utilized properly, e.g. for consolidation. Goods consolidation, which is usually performed in a distribution terminal, may reduce the number of freight flows within the city. This paper aims to determine the best location for a distribution terminal in Jakarta among those two terminals and two additional alternative sites, namely Lodan and Rawa Buaya. It begins with the identification of important factors that affect the location selection, carried out by Likert analysis of questionnaires distributed to logistics firms. The best location is determined by applying Overlay Analysis using ArcGIS 9.2. Four grid maps are produced to represent the accessibility, cost, time, and environment factors as the important location factors. The result shows that the ranking from best to worst is: Lodan, Tanah Merdeka, Pulo Gebang, and Rawa Buaya.

  18. High Dimensional Spectral Graph Theory and Non-backtracking Random Walks on Graphs

    Science.gov (United States)

    Kempton, Mark

    This thesis has two primary areas of focus. First, we study connection graphs, which are weighted graphs in which each edge is associated with a d-dimensional rotation matrix for some fixed dimension d, in addition to a scalar weight. Second, we study non-backtracking random walks on graphs, which are random walks with the additional constraint that they cannot return to the immediately previous state at any given step. Our work in connection graphs is centered on the notion of consistency, that is, the product of rotations moving from one vertex to another is independent of the path taken, and a generalization called epsilon-consistency. We present higher dimensional versions of the combinatorial Laplacian matrix and normalized Laplacian matrix from spectral graph theory, and give results characterizing the consistency of a connection graph in terms of the spectra of these matrices. We generalize several tools from classical spectral graph theory, such as PageRank and effective resistance, to apply to connection graphs. We use these tools to give algorithms for sparsification, clustering, and noise reduction on connection graphs. In non-backtracking random walks, we address the question raised by Alon et al. concerning how the mixing rate of a non-backtracking random walk to its stationary distribution compares to the mixing rate for an ordinary random walk. Alon et al. address this question for regular graphs. We take a different approach, and use a generalization of Ihara's Theorem to give a new proof of Alon's result for regular graphs, and to extend the result to biregular graphs. Finally, we give a non-backtracking version of Polya's Random Walk Theorem for 2-dimensional grids.
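
    A non-backtracking walk is straightforward to simulate: at each step, choose uniformly among the neighbours of the current vertex excluding the vertex just left. The sketch below compares the empirical convergence of the two walks toward the uniform stationary distribution on a small regular graph; the graph and the total-variation comparison are illustrative choices, not the spectral analysis used in the thesis.

```python
import numpy as np

def simulate_walk(adj, steps, n_walkers=20000, non_backtracking=False, seed=0):
    """Empirical vertex distribution after `steps` steps of a (non-)backtracking walk."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    pos = rng.integers(n, size=n_walkers)
    prev = np.full(n_walkers, -1)
    for _ in range(steps):
        new_pos = np.empty_like(pos)
        for w in range(n_walkers):
            nbrs = adj[pos[w]]
            if non_backtracking and prev[w] in nbrs and len(nbrs) > 1:
                nbrs = [v for v in nbrs if v != prev[w]]   # forbid immediate return
            new_pos[w] = nbrs[rng.integers(len(nbrs))]
        prev, pos = pos, new_pos
    return np.bincount(pos, minlength=n) / n_walkers

# 3-regular circulant graph on 20 vertices: neighbours i-1, i+1 and i+10 (mod 20)
n = 20
adj = [[(i - 1) % n, (i + 1) % n, (i + n // 2) % n] for i in range(n)]
for steps in (5, 10, 20):
    for nb in (False, True):
        dist = simulate_walk(adj, steps, non_backtracking=nb)
        tv = 0.5 * np.abs(dist - 1 / n).sum()
        print(f"steps={steps:2d} non_backtracking={nb}: TV distance to uniform = {tv:.3f}")
```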

  19. Probability distribution for the Gaussian curvature of the zero level surface of a random function

    Science.gov (United States)

    Hannay, J. H.

    2018-04-01

    A rather natural construction for a smooth random surface in space is the level surface of value zero, or ‘nodal’ surface f(x,y,z)  =  0, of a (real) random function f; the interface between positive and negative regions of the function. A physically significant local attribute at a point of a curved surface is its Gaussian curvature (the product of its principal curvatures) because, when integrated over the surface it gives the Euler characteristic. Here the probability distribution for the Gaussian curvature at a random point on the nodal surface f  =  0 is calculated for a statistically homogeneous (‘stationary’) and isotropic zero mean Gaussian random function f. Capitalizing on the isotropy, a ‘fixer’ device for axes supplies the probability distribution directly as a multiple integral. Its evaluation yields an explicit algebraic function with a simple average. Indeed, this average Gaussian curvature has long been known. For a non-zero level surface instead of the nodal one, the probability distribution is not fully tractable, but is supplied as an integral expression.

  20. On the Distribution of Indefinite Quadratic Forms in Gaussian Random Variables

    KAUST Repository

    Al-Naffouri, Tareq Y.

    2015-10-30

    © 2015 IEEE. In this work, we propose a unified approach to evaluating the CDF and PDF of indefinite quadratic forms in Gaussian random variables. Such a quantity appears in many applications in communications, signal processing, information theory, and adaptive filtering. For example, this quantity appears in the mean-square-error (MSE) analysis of the normalized least-mean-square (NLMS) adaptive algorithm, and in the SINR associated with each beam in beamforming applications. The trick of the proposed approach is to replace inequalities that appear in the CDF calculation with unit step functions and to use a complex integral representation of the unit step function. Complex integration then allows us to evaluate the CDF in closed form for the zero mean case and as a single-dimensional integral for the non-zero mean case. Utilizing the saddle point technique allows us to closely approximate such integrals in the non-zero mean case. We demonstrate how our approach can be extended to other scenarios, such as the joint distribution of quadratic forms and ratios of such forms, and to characterize quadratic forms in isotropically distributed random variables. We also evaluate the outage probability in multiuser beamforming using our approach to provide an application of indefinite forms in communications.
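
    As a quick numerical cross-check for results of this kind, the CDF of an indefinite quadratic form in Gaussian variables can always be estimated by brute-force Monte Carlo. The matrix below is an arbitrary indefinite example; this sketch is not the closed-form/saddle-point evaluation developed in the paper.

```python
import numpy as np

def quadratic_form_cdf(A, mean, cov, thresholds, n_samples=200000, seed=0):
    """Monte Carlo estimate of P(x^T A x <= t) for x ~ N(mean, cov)."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(mean, cov, size=n_samples)
    q = np.einsum("ni,ij,nj->n", x, A, x)          # quadratic form for every sample
    return np.array([(q <= t).mean() for t in thresholds])

A = np.diag([2.0, 1.0, -1.0])                      # indefinite: mixed-sign eigenvalues
mean, cov = np.zeros(3), np.eye(3)
print(quadratic_form_cdf(A, mean, cov, thresholds=[-2.0, 0.0, 2.0, 5.0]))
```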

  1. Random distribution of background charge density for numerical simulation of discharge inception

    International Nuclear Information System (INIS)

    Grange, F.; Loiseau, J.F.; Spyrou, N.

    1998-01-01

    The models of electric streamers based on a uniform background density of electrons may appear not to be physical, as the number of electrons in the small active region located in the vicinity of the electrode tip under regular conditions can be less than one. To avoid this, the electron background is modelled by a random density distribution such that, after a certain time lag, at least one electron is present in the grid close to the point electrode. The modelling performed shows that the streamer inception is not very sensitive to the initial location of the charged particles; the ionizing front, however, may be delayed by several tens of nanoseconds, depending on the way the electron has to drift before reaching the anode. (J.U.)

  2. Modulation of early cortical processing during divided attention to non-contiguous locations.

    Science.gov (United States)

    Frey, Hans-Peter; Schmid, Anita M; Murphy, Jeremy W; Molholm, Sophie; Lalor, Edmund C; Foxe, John J

    2014-05-01

    We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. The reliability of measuring pain distribution and location using body pain diagrams in patients with acute whiplash-associated disorders.

    Science.gov (United States)

    Southerst, Danielle; Stupar, Maja; Côté, Pierre; Mior, Silvano; Stern, Paula

    2013-09-01

    The objective of this study was to measure the interexaminer reliability of scoring pain distribution using paper and electronic body pain diagrams in patients with acute whiplash-associated disorder and to assess the intermethod reliability of measuring pain distribution and location using paper and electronic diagrams. We conducted an interexaminer reliability study on 80 participants recruited from a randomized controlled trial on the conservative management of acute grade I/II whiplash-associated disorder. Participants were assessed for inclusion/exclusion criteria by an experienced clinician. As part of the baseline assessment, participants independently completed paper and electronic pain diagrams. Diagrams were scored independently by 2 examiners using the body region method. Interexaminer and intermethod reliability was computed using intraclass correlation coefficients (ICCs) for pain distribution and κ coefficient for pain location. We used Bland-Altman plots to compute limits of agreement. The interexaminer reliability was ICC = 0.925 for paper and ICC = 0.997 for the electronic body pain diagram. The intermethod reliability for measuring pain distribution ranged from ICC = 0.63 to ICC = 0.93. For pain location, the intermethod reliability varied from κ = 0.23 (posterior neck) to κ = 0.90 (right side of the face). We found good to excellent interexaminer reliability for scoring 2 versions of the body pain diagram. Pain distribution and pain location were reliably and consistently measured on body pain diagrams using paper and electronic methods; therefore, clinicians and researchers may choose either medium when using body pain diagrams. Copyright © 2013 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.

  4. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving this procedure to take expected time close to O(n^{1/(d+1)}) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
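
    The jump-and-walk idea can be sketched on top of scipy.spatial.Delaunay: sample a handful of sites, start at a simplex incident to the closest one, then walk toward the query using barycentric coordinates. This is an illustrative sketch (scipy's own find_simplex already performs point location and is used here only to verify the answer); it is not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay

def jump_and_walk(tri, q, rng):
    """Locate the simplex of `tri` containing point q by random sampling + walking."""
    pts = tri.points
    n, d = pts.shape
    # Jump: sample about n^(1/(d+1)) sites, start from a simplex incident to the closest one
    m = max(1, int(round(n ** (1.0 / (d + 1)))))
    sample = rng.choice(n, size=m, replace=False)
    best = sample[np.argmin(np.linalg.norm(pts[sample] - q, axis=1))]
    simplex = tri.vertex_to_simplex[best]
    # Walk: move to the neighbour opposite the most negative barycentric coordinate
    for _ in range(10 * n):
        T = tri.transform[simplex]
        b = T[:d].dot(q - T[d])
        bary = np.append(b, 1 - b.sum())
        if (bary >= -1e-12).all():
            return simplex                       # q lies in this simplex
        simplex = tri.neighbors[simplex][np.argmin(bary)]
        if simplex == -1:
            return -1                            # q is outside the triangulation
    return -1

rng = np.random.default_rng(0)
pts = rng.random((2000, 3))                      # n random points in d = 3 dimensions
tri = Delaunay(pts)
q = rng.random(3) * 0.8 + 0.1
print(jump_and_walk(tri, q, rng), tri.find_simplex(q))   # the two answers should agree
```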

  5. Random graph states, maximal flow and Fuss-Catalan distributions

    International Nuclear Information System (INIS)

    Collins, BenoIt; Nechita, Ion; Zyczkowski, Karol

    2010-01-01

    For any graph consisting of k vertices and m edges we construct an ensemble of random pure quantum states which describe a system composed of 2m subsystems. Each edge of the graph represents a bipartite, maximally entangled state. Each vertex represents a random unitary matrix generated according to the Haar measure, which describes the coupling between subsystems. Dividing all subsystems into two parts, one may study entanglement with respect to this partition. A general technique to derive an expression for the average entanglement entropy of random pure states associated with a given graph is presented. Our technique relies on Weingarten calculus and flow problems. We analyze the statistical properties of spectra of such random density matrices and show for which cases they are described by the free Poissonian (Marchenko-Pastur) distribution. We derive a discrete family of generalized, Fuss-Catalan distributions and explicitly construct graphs which lead to ensembles of random states characterized by these novel distributions of eigenvalues.

  6. Influence of random setup error on dose distribution

    International Nuclear Information System (INIS)

    Zhai Zhenyu

    2008-01-01

    Objective: To investigate the influence of random setup error on dose distribution in radiotherapy and determine the margin from ITV to PTV. Methods: A random sample approach was used to simulate the field positions in the target coordinate system. The cumulative effect of random setup error was the sum of the dose distributions of all individual treatment fractions. A study of 100 cumulative effects yielded the shift sizes of the 90% dose point position. Margins from ITV to PTV caused by random setup error were chosen at the 95% probability level. Spearman's correlation was used to analyze the influence of each factor. Results: The average shift size of the 90% dose point position was 0.62, 1.84, 3.13, 4.78, 6.34 and 8.03 mm if the random setup error was 1, 2, 3, 4, 5 and 6 mm, respectively. Univariate analysis showed that the size of the margin was associated only with the size of the random setup error. Conclusions: The margin from ITV to PTV is 1.2 times the random setup error for head-and-neck cancer and 1.5 times for thoracic and abdominal cancer. Field size, energy and target depth, unlike random setup error, have no relation with the size of the margin. (authors)

  7. Partial discharge location technique for covered-conductor overhead distribution lines

    Energy Technology Data Exchange (ETDEWEB)

    Isa, M.

    2013-02-01

    In Finland, covered-conductor (CC) overhead lines are commonly used in medium voltage (MV) networks because the loads are widely distributed in the forested terrain. Such parts of the network are exposed to leaning trees which produce partial discharges (PDs) in CC lines. This thesis presents a technique to locate the PD source on CC overhead distribution line networks. The algorithm is developed and tested using a simulation study and experimental measurements. The Electromagnetic Transient Program-Alternative Transient Program (EMTP-ATP) is used to simulate and analyze a three-phase PD monitoring system, while MATLAB is used for post-processing of the measured high-frequency signals. A Rogowski coil is used as the measuring sensor. A multi-end correlation-based technique for PD location is implemented using the theory of the maximum correlation factor in order to find the time difference of arrival (TDOA) between signal arrivals at three synchronized measuring points. The three stages of signal analysis used are: (1) denoising by applying the discrete wavelet transform (DWT); (2) extracting the PD features using the absolute or windowed standard deviation (STD); and (3) locating the PD point. The advantage of this technique is the ability to locate the PD source without the need to know the first arrival time and the propagation velocity of the signals. In addition, the faulty section of the CC line between three measuring points can also be identified based on the degrees of correlation. An experimental analysis is performed to evaluate the PD measurement system performance for PD location on CC overhead lines. The measuring set-up is arranged in a high voltage (HV) laboratory. A multi-end measuring method is chosen as the technique to locate the PD source point on the line. A 110/20 kV power transformer was used to supply AC voltage up to 11.5 kV/phase (20 kV system). The tests were designed to cover different conditions such as offline and online
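
    For context, a classical two-end arrival-time-difference location is sketched below: the delay between the signals recorded at the two line ends is taken from the peak of their cross-correlation and converted to a distance. This generic illustration assumes a known propagation velocity, unlike the multi-end correlation technique of the thesis, which is designed to avoid that requirement; all signal parameters are invented.

```python
import numpy as np

def locate_pd_two_end(sig_a, sig_b, fs, line_length, velocity):
    """Estimate the PD distance from end A using the cross-correlation time
    difference of arrival between the signals measured at ends A and B."""
    xcorr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(xcorr) - (len(sig_b) - 1)        # samples; > 0 means A lags B
    dt = lag / fs                                    # t_A - t_B
    return 0.5 * (line_length + velocity * dt)       # d_A = (L + v * (t_A - t_B)) / 2

# Synthetic example: 2 km line, wave speed 1.8e8 m/s, PD at 600 m from end A
fs, L, v, d = 100e6, 2000.0, 1.8e8, 600.0
t = np.arange(2000) / fs
pulse = np.exp(-((t - 5e-6) / 2e-7) ** 2)            # PD pulse template
rng = np.random.default_rng(0)
shift_a, shift_b = int(round(d / v * fs)), int(round((L - d) / v * fs))
sig_a = np.roll(pulse, shift_a) + 0.05 * rng.standard_normal(t.size)
sig_b = np.roll(pulse, shift_b) + 0.05 * rng.standard_normal(t.size)
print(f"estimated distance from end A = {locate_pd_two_end(sig_a, sig_b, fs, L, v):.0f} m")
```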

  8. Effect of transverse power distribution on the ONB location in the subcooled boiling flow

    International Nuclear Information System (INIS)

    Al-Yahia, Omar S.; Lee, Yong Joong; Jo, Daeseong

    2017-01-01

    Highlights: • Effect of transverse power distribution on ONB incipient. • Uniform and non-uniform heat distribution is simulated in a narrow rectangular channel. • Simulations are performed using CFX and TMAP codes. • For uniform heating, ONB incipient by CFX occurs between predictions by TMAP analyses. • For non-uniform heating, ONB incipient by CFX occurs at a higher power than that by TMAP analysis. - Abstract: This study investigates the effect of transverse power distribution on the ONB (Onset of Nucleate Boiling) incipient. For this purpose, a subcooled boiling model with uniform and non-uniform heat flux distribution is simulated in a narrow vertical rectangular channel heated from both sides by applying a wide range of thermal power (8–16 kW). The simulations are performed using the CFX and TMAP codes. The CFX code incorporates both a two-fluid model and RPI wall boiling model to investigate coolant and wall temperature distributions along the heated channel. The TMAP code implements two different sets of heat transfer correlations to evaluate the wall temperature. The results obtained from the TMAP analyses show that the wall temperatures predicted by the Jo et al. heat transfer correlation are higher than the ones predicted by the Dittus and Boelter heat transfer correlation. The wall temperatures predicted by the CFX analyses lie between the predicted wall temperatures obtained by the TMAP analyses. Based on the superheated temperature on the heated surface, the ONB incipient is determined. The axial locations of the ONB incipient are predicted differently by the CFX and TMAP analyses. For uniform heating, the ONB incipient predicted by the CFX analysis occurs between the predictions made by the TMAP analyses. For non-uniform heating, the ONB incipient by the CFX analysis occurs at a higher power than the power required by the TMAP analyses.

  9. Distribution Locational Marginal Pricing through Quadratic Programming for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Oren, Shmuel S.

    2015-01-01

    This paper presents the distribution locational marginal pricing (DLMP) method through quadratic programming (QP) designed to alleviate the congestion that might occur in a distribution network with high penetration of flexible demands. In the DLMP method, the distribution system operator (DSO) calculates dynamic tariffs and publishes them to the aggregators, who make the optimal energy plans for the flexible demands. The DLMP through QP, instead of the linear programming studied in previous literature, solves the multiple-solution issue of the aggregator optimization which may cause...

  10. Scattering of elastic waves on fractures randomly distributed in a three-dimensional medium

    Science.gov (United States)

    Strizhkov, S. A.; Ponyatovskaya, V. I.

    1985-02-01

    The purpose of this work is to determine the variation in basic characteristics of the wave field formed in a jointed medium, such as the intensity of fluctuations of amplitude, correlation radius, scattering coefficient and frequency composition of waves, as functions of jointing parameters. Fractures are simulated by flat plates randomly distributed and chaotically oriented in a three-dimensional medium. Experiments were performed using an alabaster model, a rectangular block measuring 50 x 50 x 120 mm. The plates were introduced into liquid alabaster which was then agitated. Models made in this way contain randomly distributed and chaotically oriented fractures. The influence of these fractures appears as fluctuations in the wave field formed in the medium. The data obtained in experimental studies showed that the dimensions of heterogeneities determined by waves in the jointed medium and the dimensions of the fractures themselves coincide only if the distance between fractures is rather great. If the distance between fractures is less than the wavelength, the dimensions of the heterogeneities located by the wave depend on wavelength.

  11. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti.

    Science.gov (United States)

    Wampler, Peter J; Rediske, Richard R; Molla, Azizur R

    2013-01-18

    A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households. This

  12. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti

    Directory of Open Access Journals (Sweden)

    Wampler Peter J

    2013-01-01

    Background: A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. Methods: The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. Results: A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. Conclusions: The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only

  13. High-power random distributed feedback fiber laser: From science to application

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xueyuan [College of Optoelectronic Science and Engineering, National University of Defense Technology, Changsha 410073 (China); Naval Academy of Armament, Beijing 100161 (China); Zhang, Hanwei; Xiao, Hu; Ma, Pengfei; Wang, Xiaolin; Zhou, Pu; Liu, Zejin [College of Optoelectronic Science and Engineering, National University of Defense Technology, Changsha 410073 (China)

    2016-10-15

    A fiber laser based on random distributed feedback has attracted increasing attention in recent years, as it has become an important photonic device and has found wide applications in fiber communications or sensing. In this article, recent advances in high-power random distributed feedback fiber laser are reviewed, including the theoretical analyses, experimental approaches, discussion on the practical applications and outlook. It is found that a random distributed feedback fiber laser can not only act as an information photonics device, but also has the feasibility for high-efficiency/high-power generation, which makes it competitive with conventional high-power laser sources. In addition, high-power random distributed feedback fiber laser has been successfully applied for midinfrared lasing, frequency doubling to the visible and high-quality imaging. It is believed that the high-power random distributed feedback fiber laser could become a promising light source with simple and economic configurations. (copyright 2016 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  14. Design and analysis of the location of an online resale business distribution centre in Japan

    Directory of Open Access Journals (Sweden)

    K. Suzuki

    2016-01-01

    The location of distribution centres for online retailers has become an important issue for many companies. The operation of a distribution centre for e-commerce seems to be different from the traditional concept of distribution centres. This study considers how the locations of distribution centres are designed and operated for online retailing using a computer simulation with actual data. This paper analyses current issues in order to propose an effective location for e-commerce distribution centres. An effective location for a multi-distribution system over a wide geographical area is required. This paper clearly points out the importance of locating e-commerce distribution centres over a wide geographical area by performing a computer simulation using actual data on the assumption that online resale business customers live in a metropolitan area. The study created a business model in which a buffer warehouse, which primarily handled commodities with high-frequency shipments, was located near a large consumption area and was used in addition to a large-scale distribution centre. The validity of the created business model was verified by performing a simulation based on actual data. The result revealed that these recommended improvement measures for the location of e-commerce distribution centres can function effectively.

  15. Quantitative non-monotonic modeling of economic uncertainty by probability and possibility distributions

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2012-01-01

    uncertainty can be calculated. The possibility approach is particular well suited for representation of uncertainty of a non-statistical nature due to lack of knowledge and requires less information than the probability approach. Based on the kind of uncertainty and knowledge present, these aspects...... to the understanding of similarities and differences of the two approaches as well as practical applications. The probability approach offers a good framework for representation of randomness and variability. Once the probability distributions of uncertain parameters and their correlations are known the resulting...... are thoroughly discussed in the case of rectangular representation of uncertainty by the uniform probability distribution and the interval, respectively. Also triangular representations are dealt with and compared. Calculation of monotonic as well as non-monotonic functions of variables represented...

  16. Equivalent non-Gaussian excitation method for response moment calculation of systems under non-Gaussian random excitation

    International Nuclear Information System (INIS)

    Tsuchida, Takahiro; Kimura, Koji

    2015-01-01

    An equivalent non-Gaussian excitation method is proposed to obtain the moments up to the fourth order of the response of systems under non-Gaussian random excitation. The excitation is prescribed by its probability density and power spectrum. Moment equations for the response can be derived from the stochastic differential equations for the excitation and the system. However, the moment equations are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation. In the proposed method, the diffusion coefficient is approximately replaced with an equivalent diffusion coefficient to obtain a closed set of moment equations. The square of the equivalent diffusion coefficient is expressed by a second-order polynomial. In order to demonstrate the validity of the method, a linear system subjected to non-Gaussian excitation with a generalized Gaussian distribution is analyzed. The results show the method is applicable to non-Gaussian excitation with widely different kurtosis and bandwidth. (author)

  17. Coverage of space by random sets

    Indian Academy of Sciences (India)

    Consider the non-negative integer line. For each integer point we toss a coin. If the toss at location i is Heads, we place an interval (of random length) there and move to location i + 1; if it is Tails, we move to location i + 1.
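
    A small simulation of this process is straightforward; the coin bias and the uniform interval-length distribution below are arbitrary illustrative choices.

```python
import random

def simulate_coverage(n=100000, p_heads=0.5, max_len=5, seed=0):
    """Toss a coin at each integer point; on Heads, cover an interval of random
    (uniform) length starting there. Return the fraction of [0, n) covered."""
    rng = random.Random(seed)
    covered_until = -1          # right end of the union of intervals placed so far
    covered = 0
    for i in range(n):
        if rng.random() < p_heads:                       # Heads: place an interval
            length = rng.randint(1, max_len)
            covered_until = max(covered_until, i + length - 1)
        if i <= covered_until:                           # is point i covered?
            covered += 1
    return covered / n

print(simulate_coverage())      # fraction of the line covered by the random intervals
```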

  18. Location Privacy with Randomness Consistency

    Directory of Open Access Journals (Sweden)

    Wu Hao

    2016-10-01

    Location-Based Social Network (LBSN) applications that support geo-location-based posting and queries to provide location-relevant information to mobile users are increasingly popular, but pose a location-privacy risk to posts. We investigated existing LBSNs and location privacy mechanisms, and found a powerful potential attack that can accurately locate users with relatively few queries, even when location data is well secured and location noise is applied. Our technique defeats previously proposed solutions including fake-location detection and query rate limits.

  19. On bounds in Poisson approximation for distributions of independent negative-binomial distributed random variables.

    Science.gov (United States)

    Hung, Tran Loc; Giang, Le Truong

    2016-01-01

    Using the Stein-Chen method some upper bounds in Poisson approximation for distributions of row-wise triangular arrays of independent negative-binomial distributed random variables are established in this note.

  20. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    Science.gov (United States)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in an underground distribution system is presented. The Discrete Wavelet Transform (DWT) based on the traveling wave is employed in order to detect the high frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. It is found that the proposed technique provides satisfactory results and will be very useful in the development of power system protection schemes.

  1. Probabilistic SSME blades structural response under random pulse loading

    Science.gov (United States)

    Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.

    1987-01-01

    The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
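
    The random pulse model summarized above (Poisson arrivals, zero-mean Gaussian amplitudes, a random choice among a few impact locations) is easy to simulate. In the sketch below, a single-degree-of-freedom oscillator stands in for the blade's structural model, and every numerical value (arrival rate, frequency, damping, per-location gains) is invented for illustration.

```python
import numpy as np

def simulate_response(t_end=1.0, rate=50.0, amp_std=1.0, n_locations=3,
                      wn=2 * np.pi * 200, zeta=0.02, seed=0):
    """One realization of the response of a damped oscillator to a random pulse train:
    Poisson arrival times, N(0, amp_std^2) amplitudes, uniformly random impact location."""
    rng = np.random.default_rng(seed)
    n_pulses = rng.poisson(rate * t_end)
    arrivals = np.sort(rng.uniform(0, t_end, n_pulses))
    amps = rng.normal(0, amp_std, n_pulses)
    locs = rng.integers(n_locations, size=n_pulses)      # which impact point is hit
    gain = np.array([1.0, 0.8, 0.6])[locs]               # assumed influence of each location
    t = np.linspace(0, t_end, 5000)
    wd = wn * np.sqrt(1 - zeta**2)
    resp = np.zeros_like(t)
    for tk, ak, gk in zip(arrivals, amps, gain):
        tau = t - tk
        h = np.where(tau >= 0, np.exp(-zeta * wn * tau) * np.sin(wd * tau) / wd, 0.0)
        resp += gk * ak * h                               # superpose impulse responses
    return t, resp

# Distribution of the peak response over many realizations
peaks = [np.abs(simulate_response(seed=s)[1]).max() for s in range(200)]
print(np.percentile(peaks, [50, 90, 99]))
```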

  2. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in that it uses a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With the speed-up offered by vector or parallel computation in the future, it is expected to be widely used in the calculation of nuclear reactor cores, especially HTGR cores. (author).

  3. Effects of the randomly distributed magnetic field on the phase diagrams of the Ising Nanowire II: Continuous distributions

    International Nuclear Information System (INIS)

    Akıncı, Ümit

    2012-01-01

    The effect of the random magnetic field distribution on the phase diagrams and ground state magnetizations of the Ising nanowire has been investigated with effective field theory with correlations. Gaussian distribution has been chosen as a random magnetic field distribution. The variation of the phase diagrams with that distribution parameters has been obtained and some interesting results have been found such as disappearance of the reentrant behavior and first order transitions which appear in the case of discrete distributions. Also for single and double Gaussian distributions, ground state magnetizations for different distribution parameters have been determined which can be regarded as separate partially ordered phases of the system. - Highlights: ► We give the phase diagrams of the Ising nanowire under the continuous randomly distributed magnetic field. ► Ground state magnetization values obtained. ► Different partially ordered phases observed.

  4. Super Generalized Central Limit Theorem —Limit Distributions for Sums of Non-identical Random Variables with Power Laws—

    Science.gov (United States)

    Shintani, Masaru; Umeno, Ken

    2018-04-01

    The power law is present ubiquitously in nature and in our societies. Therefore, it is important to investigate the characteristics of power laws in the current era of big data. In this paper we prove that the superposition of non-identical stochastic processes with power laws converges in density to a unique stable distribution. This property can be used to explain the universality of stable laws that the sums of the logarithmic returns of non-identical stock price fluctuations follow stable distributions.

  5. The limit distribution of the maximum increment of a random walk with regularly varying jump size distribution

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Rackauskas, Alfredas

    2010-01-01

    In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space

  6. High-resolution characterization of sequence signatures due to non-random cleavage of cell-free DNA.

    Science.gov (United States)

    Chandrananda, Dineika; Thorne, Natalie P; Bahlo, Melanie

    2015-06-17

    High-throughput sequencing of cell-free DNA fragments found in human plasma has been used to non-invasively detect fetal aneuploidy, monitor organ transplants and investigate tumor DNA. However, many biological properties of this extracellular genetic material remain unknown. Research that further characterizes circulating DNA could substantially increase its diagnostic value by allowing the application of more sophisticated bioinformatics tools that lead to an improved signal to noise ratio in the sequencing data. In this study, we investigate various features of cell-free DNA in plasma using deep-sequencing data from two pregnant women (>70X, >50X) and compare them with matched cellular DNA. We utilize a descriptive approach to examine how the biological cleavage of cell-free DNA affects different sequence signatures such as fragment lengths, sequence motifs at fragment ends and the distribution of cleavage sites along the genome. We show that the size distributions of these cell-free DNA molecules are dependent on their autosomal and mitochondrial origin as well as the genomic location within chromosomes. DNA mapping to particular microsatellites and alpha repeat elements display unique size signatures. We show how cell-free fragments occur in clusters along the genome, localizing to nucleosomal arrays and are preferentially cleaved at linker regions by correlating the mapping locations of these fragments with ENCODE annotation of chromatin organization. Our work further demonstrates that cell-free autosomal DNA cleavage is sequence dependent. The region spanning up to 10 positions on either side of the DNA cleavage site show a consistent pattern of preference for specific nucleotides. This sequence motif is present in cleavage sites localized to nucleosomal cores and linker regions but is absent in nucleosome-free mitochondrial DNA. These background signals in cell-free DNA sequencing data stem from the non-random biological cleavage of these fragments. This

  7. Magnetic field line random walk in non-axisymmetric turbulence

    International Nuclear Information System (INIS)

    Tautz, R.C.; Lerche, I.

    2011-01-01

    Including a random component of a magnetic field parallel to an ambient field introduces a mean perpendicular motion to the average field line. This effect is normally not discussed because one customarily chooses at the outset to ignore such a field component in discussing random walk and diffusion of field lines. A discussion of the basic effect is given, indicating that any random magnetic field with a non-zero helicity will lead to such a non-zero perpendicular mean motion. Several exact analytic illustrations are given of the effect as well as a simple numerical illustration. -- Highlights: → For magnetic field line random walk all magnetic field components are important. → Non-vanishing magnetic helicity leads to mean perpendicular motion. → Analytically exact stream functions illustrate that the novel transverse effect exists.

  8. LPPS: A Distributed Cache Pushing Based K-Anonymity Location Privacy Preserving Scheme

    Directory of Open Access Journals (Sweden)

    Ming Chen

    2016-01-01

    Full Text Available Recent years have witnessed the rapid growth of location-based services (LBSs) for mobile social network applications. To enable location-based services, mobile users are required to report their location information to the LBS servers and receive answers to location-based queries. Location privacy leaks happen when such servers are compromised, which has been a primary concern for information security. To address this issue, we propose the Location Privacy Preservation Scheme (LPPS) based on distributed cache pushing. Unlike existing solutions, LPPS deploys distributed cache proxies to cover users' most visited locations and proactively push cache content to mobile users, which can reduce the risk of leaking users' location information. The proposed LPPS includes three major processes. First, we propose an algorithm to find the optimal deployment of proxies to cover popular locations. Second, we present cache strategies for location-based queries based on the Markov chain model and propose update and replacement strategies for cache content maintenance. Third, we introduce a privacy protection scheme which is proved to achieve a k-anonymity guarantee for location-based services. Extensive experiments illustrate that the proposed LPPS achieves a decent service coverage ratio and cache hit ratio with lower communication overhead compared to existing solutions.
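
    A minimal sketch of the Markov-chain idea behind the cache strategies mentioned above, assuming a discretized location trace per user; the function names, example trace and top_k parameter are invented for illustration and are not part of LPPS.

        from collections import Counter, defaultdict

        def transition_probs(trace):
            """Estimate first-order Markov transition probabilities from a location trace."""
            counts = defaultdict(Counter)
            for a, b in zip(trace, trace[1:]):
                counts[a][b] += 1
            return {loc: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
                    for loc, nxts in counts.items()}

        def locations_to_prefetch(trace, current, top_k=2):
            """Return the top_k most probable next locations, i.e. where to pre-push cache content."""
            probs = transition_probs(trace).get(current, {})
            return [loc for loc, _ in sorted(probs.items(), key=lambda kv: -kv[1])[:top_k]]

        # hypothetical discretized trace of one user's visited locations
        trace = ["home", "cafe", "office", "cafe", "office", "gym", "home", "cafe", "office"]
        print(locations_to_prefetch(trace, current="cafe"))   # -> ['office']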

  9. On the generation of log-Levy distributions and extreme randomness

    International Nuclear Information System (INIS)

    Eliazar, Iddo; Klafter, Joseph

    2011-01-01

    The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Levy distributions. The log-Levy distributions are the Levy counterparts of the log-normal distribution; they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Levy distributions emerge universally: the former in the case of a deterministic underlying setting, and the latter in the case of a stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot's extreme randomness. (paper)
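
    A quick, generic simulation (not from the paper) of the multiplicative mechanism in the deterministic regime: multiplying many i.i.d. positive factors yields an approximately log-normal product, which shows up as near-zero skewness and excess kurtosis of the log of the product.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        factors = rng.uniform(0.9, 1.1, size=(100_000, 200))   # 200 multiplicative growth steps
        log_growth = np.log(factors.prod(axis=1))

        # near-zero skewness / excess kurtosis of the log indicates a log-normal product
        print("skewness of log(product):       ", round(stats.skew(log_growth), 3))
        print("excess kurtosis of log(product):", round(stats.kurtosis(log_growth), 3))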

  10. Randomness determines practical security of BB84 quantum key distribution

    Science.gov (United States)

    Li, Hong-Wei; Yin, Zhen-Qiang; Wang, Shuang; Qian, Yong-Jun; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu

    2015-11-01

    Unconditional security of the BB84 quantum key distribution protocol has been proved by exploiting the fundamental laws of quantum mechanics, but a practical quantum key distribution system may be hacked by exploiting imperfect state preparation and measurement. Until now, different attacking schemes have been proposed that utilize imperfect devices, but a general security analysis model covering all of the practical attacking schemes has not been proposed. Here, we demonstrate that the general practical attacking schemes can be divided into the Trojan horse attack, the strong randomness attack and the weak randomness attack. We prove the security of the BB84 protocol under these randomness attacking models, and the results can be applied to guarantee the security of the practical quantum key distribution system.

  11. Mobile agent location in distributed environments

    Science.gov (United States)

    Fountoukis, S. G.; Argyropoulos, I. P.

    2012-12-01

    An agent is a small program acting on behalf of a user or an application which plays the role of a user. Artificial intelligence can be encapsulated in agents so that they can be capable of both behaving autonomously and showing an elementary decision ability regarding movement and some specific actions. Therefore they are often called autonomous mobile agents. In a distributed system, they can move themselves from one processing node to another through the interconnecting network infrastructure. Their purpose is to collect useful information and to carry it back to their user. Also, agents are used to start, monitor and stop processes running on the individual interconnected processing nodes of computer cluster systems. An agent has a unique id to discriminate itself from other agents and a current position. The position can be expressed as the address of the processing node which currently hosts the agent. Very often, it is necessary for a user, a processing node or another agent to know the current position of an agent in a distributed system. Several procedures and algorithms have been proposed for the purpose of position location of mobile agents. The most basic of all employs a fixed computing node, which acts as agent position repository, receiving messages from all the moving agents and keeping records of their current positions. The fixed node, responds to position queries and informs users, other nodes and other agents about the position of an agent. Herein, a model is proposed that considers pairs and triples of agents instead of single ones. A location method, which is investigated in this paper, attempts to exploit this model.

  12. Application of algorithms and artificial-intelligence approach for locating multiple harmonics in distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Y.-Y.; Chen, Y.-C. [Chung Yuan University (China). Dept. of Electrical Engineering

    1999-05-01

    A new method is proposed for locating multiple harmonic sources in distribution systems. The proposed method first determines the proper locations for metering measurement using fuzzy clustering. Next, an artificial neural network based on the back-propagation approach is used to identify the most likely location for multiple harmonic sources. A set of systematic algorithmic steps is developed until all harmonic locations are identified. The simulation results for an 18-busbar system show that the proposed method is very efficient in locating the multiple harmonics in a distribution system. (author)

  13. Fully-distributed randomized cooperation in wireless sensor networks

    KAUST Repository

    Bader, Ahmed

    2015-01-07

    When marrying randomized distributed space-time coding (RDSTC) to geographical routing, new performance horizons can be created. In order to reach those horizons however, routing protocols must evolve to operate in a fully distributed fashion. In this letter, we expose a technique to construct a fully distributed geographical routing scheme in conjunction with RDSTC. We then demonstrate the performance gains of this novel scheme by comparing it to one of the prominent classical schemes.

  14. Fully-distributed randomized cooperation in wireless sensor networks

    KAUST Repository

    Bader, Ahmed; Abed-Meraim, Karim; Alouini, Mohamed-Slim

    2015-01-01

    When marrying randomized distributed space-time coding (RDSTC) to geographical routing, new performance horizons can be created. In order to reach those horizons however, routing protocols must evolve to operate in a fully distributed fashion. In this letter, we expose a technique to construct a fully distributed geographical routing scheme in conjunction with RDSTC. We then demonstrate the performance gains of this novel scheme by comparing it to one of the prominent classical schemes.

  15. Location Model for Distribution Centers for Fulfilling Electronic Orders of Fresh Foods under Uncertain Demand

    Directory of Open Access Journals (Sweden)

    Hao Zhang

    2017-01-01

    Full Text Available The problem of locating distribution centers for delivering fresh food as a part of electronic commerce is a strategic decision problem for enterprises. This paper establishes a model for locating distribution centers that considers the uncertainty of customer demands for fresh goods in terms of time sensitivity and freshness. Based on the methodology of robust optimization for dealing with uncertain problems, this paper optimizes the location model over discrete probabilistic demand scenarios. In this paper, an improved fruit fly optimization algorithm is proposed to solve the distribution center location problem. An example is given to show that the proposed model and algorithm are robust and can effectively handle the complications caused by uncertain demand. The model proposed in this paper proves valuable both theoretically and practically in the selection of locations of distribution centers.

  16. Vertical random variability of the distribution coefficient in the soil and its effect on the migration of fallout radionuclides

    International Nuclear Information System (INIS)

    Bunzl, K.

    2002-01-01

    In the field, the distribution coefficient, Kd, for the sorption of a radionuclide by the soil cannot be expected to be constant. Even in a well defined soil horizon, Kd will vary stochastically in the horizontal as well as in the vertical direction around a mean value. While the horizontal random variability of Kd produces a pronounced tailing effect in the concentration depth profile of a fallout radionuclide, much less is known about the corresponding effect of the vertical random variability. To analyze this effect theoretically, the classical convection-dispersion model in combination with the random-walk particle method was applied. The concentration depth profile of a radionuclide was calculated one year after deposition assuming constant values of the pore water velocity and the diffusion/dispersion coefficient, and comparing a constant distribution coefficient (Kd = 100 cm³ g⁻¹) with a Kd that varies randomly in the vertical direction according to a log-normal distribution with a geometric mean of 100 cm³ g⁻¹ and a coefficient of variation of CV = 0.53. The results show that these two concentration depth profiles are only slightly different: the location of the peak is shifted somewhat upwards, and the dispersion of the concentration depth profile is slightly larger. A substantial tailing effect of the concentration depth profile is not perceivable. Especially with respect to the location of the peak, a very good approximation of the concentration depth profile is obtained if the arithmetic mean of the Kd values (Kd = 113 cm³ g⁻¹) and a slightly increased dispersion coefficient are used in the analytical solution of the classical convection-dispersion equation with constant Kd. The evaluation of the observed concentration depth profile with the analytical solution of the classical convection-dispersion equation with constant parameters will, within the usual experimental limits, hardly reveal the presence of a log-normal random distribution of Kd in the vertical direction in the soil.
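
    The following is a highly simplified sketch of the random-walk particle method for the classical convection-dispersion model with a retarded velocity, where Kd is drawn per particle from a log-normal distribution with geometric mean 100 cm³ g⁻¹ and CV = 0.53 as above; the remaining parameter values (velocity, dispersion, bulk-density-to-water-content ratio, time step) are illustrative assumptions, not those of the cited study.

        import numpy as np

        rng = np.random.default_rng(2)
        n_particles, n_steps, dt = 20_000, 365, 1.0   # one year in daily steps
        v, D = 0.05, 0.02                             # pore-water velocity (cm/d), dispersion (cm^2/d)
        rho_over_theta = 4.0                          # bulk density / water content (g/cm^3)

        cv = 0.53
        sigma = np.sqrt(np.log(1.0 + cv**2))          # log-normal shape parameter from the CV
        kd = rng.lognormal(mean=np.log(100.0), sigma=sigma, size=n_particles)
        R = 1.0 + rho_over_theta * kd                 # retardation factor per particle

        z = np.zeros(n_particles)                     # depth below surface (cm)
        for _ in range(n_steps):
            z += (v / R) * dt + np.sqrt(2.0 * (D / R) * dt) * rng.standard_normal(n_particles)
            z = np.abs(z)                             # reflect particles at the soil surface

        iqr = np.subtract(*np.percentile(z, [75, 25]))
        print(f"peak (median) depth ~ {np.median(z):.3f} cm, spread (IQR) ~ {iqr:.3f} cm")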

  17. Location and Size Planning of Distributed Photovoltaic Generation in Distribution network System Based on K-means Clustering Analysis

    Science.gov (United States)

    Lu, Siqi; Wang, Xiaorong; Wu, Junyong

    2018-01-01

    The paper presents a method to generate planning scenarios, based on a data-driven K-means clustering analysis algorithm, for the location and size planning of distributed photovoltaic (PV) units in the network. Taking the power losses of the network, the installation and maintenance costs of distributed PV, the profit of distributed PV and the voltage offset as objectives, and the locations and sizes of distributed PV as decision variables, the Pareto optimal front is obtained through a self-adaptive genetic algorithm (GA) and the solutions are ranked by a method called technique for order preference by similarity to an ideal solution (TOPSIS). Finally, the planning schemes at the top of the ranking list are selected according to different planning priorities after detailed analysis. The proposed method is applied to a 10-kV distribution network in Gansu Province, China and the results are discussed.
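
    To make the ranking step concrete, here is a small TOPSIS sketch on a made-up decision matrix whose columns mirror the four objectives named above (losses, cost, PV profit, voltage offset); the weights and numbers are invented for illustration, and the clustering/GA stages are omitted.

        import numpy as np

        # rows = candidate planning schemes; columns = losses, cost, PV profit, voltage offset
        X = np.array([[1.2, 300.0, 80.0, 0.02],
                      [1.0, 350.0, 95.0, 0.03],
                      [1.5, 280.0, 70.0, 0.01]])
        weights = np.array([0.3, 0.3, 0.2, 0.2])
        benefit = np.array([False, False, True, False])   # only PV profit is "larger is better"

        V = weights * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)

        print("ranking of schemes, best first:", np.argsort(-closeness))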

  18. Interpolation between Airy and Poisson statistics for unitary chiral non-Hermitian random matrix ensembles

    International Nuclear Information System (INIS)

    Akemann, G.; Bender, M.

    2010-01-01

    We consider a family of chiral non-Hermitian Gaussian random matrices in the unitarily invariant symmetry class. The eigenvalue distribution in this model is expressed in terms of Laguerre polynomials in the complex plane. These are orthogonal with respect to a non-Gaussian weight including a modified Bessel function of the second kind, and we give an elementary proof for this. In the large n limit, the eigenvalue statistics at the spectral edge close to the real axis are described by the same family of kernels interpolating between Airy and Poisson that was recently found by one of the authors for the elliptic Ginibre ensemble. We conclude that this scaling limit is universal, appearing for two different non-Hermitian random matrix ensembles with unitary symmetry. As a second result we give an equivalent form for the interpolating Airy kernel in terms of a single real integral, similar to representations for the asymptotic kernel in the bulk and at the hard edge of the spectrum. This makes its structure as a one-parameter deformation of the Airy kernel more transparent.

  19. Location of Urban Logistic Terminals as Hub Location Problem

    Directory of Open Access Journals (Sweden)

    Jasmina Pašagić Škrinjar

    2012-09-01

    Full Text Available In this paper the problems of locating urban logistic terminals are studied as hub location problems that, due to the large number of potential nodes in big cities, belong to the class of computationally hard, so-called NP-hard problems. The hub location problems have found wide application in physical planning of transport and telecommunication systems, especially systems of fast delivery, networks of logistic and distribution centres and cargo traffic terminals of big cities, etc. The paper defines single and multiple allocations and studies numerical examples. The capacitated single allocation hub location problems have been studied, with the provision of a mathematical model for selecting the locations of the hubs on the network. The paper also presents the differences in the possibilities of implementing exact and heuristic methods to solve actual location problems of big dimensions, i.e. hub problems of big cities.

  20. Comparative study of random and uniform models for the distribution of TRISO particles in HTR-10 fuel elements

    International Nuclear Information System (INIS)

    Rosales, J.; Perez, J.; Garcia, C.; Munnoz, A.; Lira, C. A. B. O.

    2015-01-01

    TRISO particles are the specific feature of the HTR-10 and, more generally, of HTGR reactors. Their heterogeneity and random arrangement in the graphite matrix of these reactors create a significant modeling challenge. In MCNPX simulations of spherical fuel elements, repetitive structures based on uniform distribution models are usually created. The use of these repetitive structures introduces two major approximations: the loss of randomness of the TRISO particles inside the pebbles and the intersection of the pebble surface with the TRISO particles. These approximations could significantly affect the multiplicative properties of the core. In order to study their influence on the multiplicative properties, the K-inf value was estimated for one pebble with white boundary conditions using 4 different configurations of the TRISO particle distribution inside the pebble: a uniform hexagonal model, a uniform cubic model, a uniform cubic model without the cutting effect, and a random distribution model. The impact of these models at core scale was studied by solving problem B1 from the Benchmark Problems presented in a Coordinated Research Program of the IAEA. (Author)

  1. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    Energy Technology Data Exchange (ETDEWEB)

    Zentner, I. [IMSIA, UMR EDF-ENSTA-CNRS-CEA 9219, Université Paris-Saclay, 828 Boulevard des Maréchaux, 91762 Palaiseau Cedex (France); Ferré, G., E-mail: gregoire.ferre@ponts.org [CERMICS – Ecole des Ponts ParisTech, 6 et 8 avenue Blaise Pascal, Cité Descartes, Champs sur Marne, 77455 Marne la Vallée Cedex 2 (France); Poirion, F. [Department of Structural Dynamics and Aeroelasticity, ONERA, BP 72, 29 avenue de la Division Leclerc, 92322 Chatillon Cedex (France); Benoit, M. [Institut de Recherche sur les Phénomènes Hors Equilibre (IRPHE), UMR 7342 (CNRS, Aix-Marseille Université, Ecole Centrale Marseille), 49 rue Frédéric Joliot-Curie, BP 146, 13384 Marseille Cedex 13 (France)

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aiming at representing spatio-temporal stochastic fields. The proposed double expansion allows the model to be built even in the case of large-size problems by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).
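
    A rough sketch of the general idea, not the authors' exact algorithm: decompose a database of realizations with an SVD (playing the role of a discrete biorthogonal decomposition), estimate the joint density of the retained random coefficients with a Gaussian kernel estimator, and resample it to generate new realizations. The synthetic database and the number of retained modes below are arbitrary assumptions.

        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(3)
        n_real, n_points = 200, 64                   # realizations x time samples
        t = np.linspace(0.0, 1.0, n_points)
        # synthetic non-Gaussian database: random-amplitude, random-phase oscillations + noise
        data = (rng.lognormal(size=(n_real, 1)) *
                np.sin(2 * np.pi * (2 * t + rng.uniform(size=(n_real, 1)))) +
                0.1 * rng.standard_normal((n_real, n_points)))

        mean = data.mean(axis=0)
        U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
        k = 5                                        # retained modes
        coeffs = U[:, :k] * s[:k]                    # random coefficients of each realization

        kde = gaussian_kde(coeffs.T)                 # joint density of the k coefficients
        new_coeffs = kde.resample(1000).T            # simulate new coefficient vectors
        new_fields = mean + new_coeffs @ Vt[:k]      # synthetic realizations of the field
        print(new_fields.shape)                      # (1000, 64)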

  2. On lower limits and equivalences for distribution tails of randomly stopped sums

    NARCIS (Netherlands)

    Denisov, D.E.; Foss, S.G.; Korshunov, D.A.

    2008-01-01

    For the distribution F^{*t} of a random sum S_t = ξ_1 + … + ξ_t of i.i.d. random variables with a common distribution F on the half-line [0, ∞), we study the limits of the ratios of tails as x → ∞ (here, t is a counting random variable which does not depend on {ξ_n}, n ≥ 1). We also consider applications of the results

  3. Response moments of dynamic systems under non-Gaussian random excitation by the equivalent non-Gaussian excitation method

    International Nuclear Information System (INIS)

    Tsuchida, Takahiro; Kimura, Koji

    2016-01-01

    The equivalent non-Gaussian excitation method is proposed to obtain the response moments up to the 4th order of dynamic systems under non-Gaussian random excitation. The non-Gaussian excitation is prescribed by the probability density and the power spectrum, and is described by an Ito stochastic differential equation. Generally, moment equations for the response, which are derived from the governing equations for the excitation and the system, are not closed due to the nonlinearity of the diffusion coefficient in the equation for the excitation, even though the system is linear. In the equivalent non-Gaussian excitation method, the diffusion coefficient is replaced approximately with an equivalent diffusion coefficient to obtain a closed set of moment equations. The square of the equivalent diffusion coefficient is expressed by a quadratic polynomial. In numerical examples, a linear system subjected to non-Gaussian excitations with bimodal and Rayleigh distributions is analyzed by using the present method. The results show that the method yields the variance, skewness and kurtosis of the response with high accuracy for non-Gaussian excitation with widely different probability densities and bandwidths. The statistical moments of the equivalent non-Gaussian excitation are also investigated to describe the feature of the method. (paper)

  4. Distributed Detection with Collisions in a Random, Single-Hop Wireless Sensor Network

    Science.gov (United States)

    2013-05-26

    Approved for public release; distribution is unlimited. Distributed detection with collisions in a random, single-hop wireless sensor network. Gene T. Whipps (U.S. Army Research Laboratory, Adelphi, MD 20783), Emre Ertin and Randolph L. Moses (The Ohio State University). We consider the problem of ...

  5. On the Shaker Simulation of Wind-Induced Non-Gaussian Random Vibration

    Directory of Open Access Journals (Sweden)

    Fei Xu

    2016-01-01

    Full Text Available Gaussian signal is produced by ordinary random vibration controllers to test the products in the laboratory, while the field data is usually non-Gaussian. Two methodologies are presented in this paper for shaker simulation of wind-induced non-Gaussian vibration. The first methodology synthesizes the non-Gaussian signal offline and replicates it on the shaker in the Time Waveform Replication (TWR) mode. A new synthesis method is used to model the non-Gaussian signal as a Gaussian signal multiplied by an amplitude modulation function (AMF). A case study is presented to show that the synthesized non-Gaussian signal has the same power spectral density (PSD), probability density function (PDF), and loading cycle distribution (LCD) as the field data. The second methodology derives a damage equivalent Gaussian signal from the non-Gaussian signal based on the fatigue damage spectrum (FDS) and the extreme response spectrum (ERS) and reproduces it on the shaker in the closed-loop frequency domain control mode. The PSD level and the duration time of the derived Gaussian signal can be manipulated for accelerated testing purpose. A case study is presented to show that the derived PSD matches the damage potential of the non-Gaussian environment for both fatigue and peak response.
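
    A minimal illustration of the core idea of the first methodology, under simplifying assumptions (arbitrary band limits and an invented log-normal AMF): multiplying a band-limited Gaussian signal by a slowly varying positive amplitude modulation function raises the kurtosis well above the Gaussian value of 3 while leaving the PSD shape essentially unchanged.

        import numpy as np
        from scipy import signal, stats

        rng = np.random.default_rng(5)
        fs, n = 1000.0, 200_000
        b, a = signal.butter(4, [20, 120], btype="bandpass", fs=fs)
        gaussian = signal.lfilter(b, a, rng.standard_normal(n))     # stationary Gaussian core

        # slowly varying, strictly positive AMF: exponential of low-pass filtered noise
        lp = signal.lfilter(*signal.butter(2, 2.0, fs=fs), rng.standard_normal(n))
        amf = np.exp(lp / lp.std())
        non_gaussian = gaussian * amf

        print("kurtosis, Gaussian signal:", round(stats.kurtosis(gaussian, fisher=False), 2))
        print("kurtosis, AMF-modulated  :", round(stats.kurtosis(non_gaussian, fisher=False), 2))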

  6. Limit distributions of random walks on stochastic matrices

    Indian Academy of Sciences (India)

    condition that μ_m(P) > 0 for some positive integer m (as opposed to just m = 1, considered in [1]), where μ_m is the ... PROPOSITION 3.2. Let f be as introduced before Proposition 3.1. The probability distribution λ is the image of π under the map b ↦ f(b); in other words, λ = ∑ ...

  7. Zero Distribution of System with Unknown Random Variables Case Study: Avoiding Collision Path

    Directory of Open Access Journals (Sweden)

    Parman Setyamartana

    2014-07-01

    Full Text Available This paper presents a stochastic analysis of finding feasible trajectories for robotic arm motion in an environment containing obstacles. The unknown variables are the coefficients of the joint-angle polynomials for which collision-free motion is achieved; ãk is the matrix consisting of these unknown feasible polynomial coefficients. The pattern of feasible polynomials in the obstacle environment appears random. This paper proposes to model this pattern of random values by a random polynomial with the unknown variables as coefficients. The behavior of the system is obtained from the zero distribution, which characterizes such a random polynomial. Results show that the pattern of random polynomials for collision avoidance can be constructed from the zero distribution. The zero distribution acts as a building block of the system, with obstacles as the uncertainty factor. Using the scale factor k, which has a bounded range, the random coefficient pattern can be predicted.

  8. A randomized clinical trial of hyperthermia and radiation versus radiation alone for superficially located cancers

    International Nuclear Information System (INIS)

    Egawa, Sunao; Tsukiyama, Iwao; Watanabe, Shaw

    1989-01-01

    A randomized clinical trial was performed in order to evaluate the effect of combined hyperthermia and radiation for superficially located tumors. Ten institutions participated in this study and 92 evaluable patients were entered from September 1985 to March 1987 (44 patients for radiation plus hyperthermia and 48 for radiation only). Superficially located tumors, more than 3x3 cm in diameter, regardless of whether they were primary or metastatic, and of their histology, were included in the study. Radiotherapy was performed by the conventional fractionation method (2 Gyx5/week). Hyperthermia was conducted once a week. There was no statistical difference between the two groups regarding age, sex, the distribution of tumors and treatment parameters. The complete response (CR) and partial response (PR) rate for the hyperthermia plus radiation group was 81.8%, while the rate for the radiation alone group was 62.6% (p<0.05). Six factors were selected for analysis of the above effect by a multiple logistic model. Sex contributed the most (p=0.001), then the site of the tumor (p=0.016) and the method of treatment (p=0.023). Sex and the site influenced the results. Age, irradiation dose and frequency and duration of heating were not significant factors for response to treatment. (author)

  9. Integrated planning of electric vehicles routing and charging stations location considering transportation networks and power distribution systems

    Directory of Open Access Journals (Sweden)

    Andrés Arias

    2018-09-01

    Full Text Available Electric Vehicles (EVs) represent a significant option that contributes to improving mobility and reducing pollution, creating future expectations in the merchandise transportation sector, as has been demonstrated with pilot projects of companies operating EVs for product delivery. In this work a new approach of EVs for merchandise transportation, considering the location of Electric Vehicle Charging Stations (EVCSs) and the impact on the Power Distribution System (PDS), is addressed. This integrated planning is formulated through a mixed integer non-linear mathematical model. Test systems of different sizes are designed to evaluate the model performance, considering the transportation network and PDS. The results show a trade-off between EVs routing, PDS energy losses and EVCSs location.

  10. Raney Distributions and Random Matrix Theory

    Science.gov (United States)

    Forrester, Peter J.; Liu, Dang-Zheng

    2015-03-01

    Recent works have shown that the family of probability distributions with moments given by the Fuss-Catalan numbers permit a simple parameterized form for their density. We extend this result to the Raney distribution which by definition has its moments given by a generalization of the Fuss-Catalan numbers. Such computations begin with an algebraic equation satisfied by the Stieltjes transform, which we show can be derived from the linear differential equation satisfied by the characteristic polynomial of random matrix realizations of the Raney distribution. For the Fuss-Catalan distribution, an equilibrium problem characterizing the density is identified. The Stieltjes transform for the limiting spectral density of the singular values squared of the matrix product formed from inverse standard Gaussian matrices, and standard Gaussian matrices, is shown to satisfy a variant of the algebraic equation relating to the Raney distribution. Supported on , we show that it too permits a simple functional form upon the introduction of an appropriate choice of parameterization. As an application, the leading asymptotic form of the density as the endpoints of the support are approached is computed, and is shown to have some universal features.
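
    As a numerical aside (not part of the paper), the Fuss-Catalan special case mentioned above can be checked directly: the n-th moment of the squared singular values of a product of p normalized Gaussian matrices approaches the Fuss-Catalan number FC_p(n) = C((p+1)n, n)/(pn+1). Matrix size, p and the number of trials below are arbitrary.

        import numpy as np
        from math import comb

        rng = np.random.default_rng(6)
        N, p, trials = 400, 2, 20

        def fuss_catalan(p, n):
            return comb((p + 1) * n, n) // (p * n + 1)

        moments = np.zeros(4)
        for _ in range(trials):
            W = np.eye(N)
            for _ in range(p):
                W = W @ (rng.standard_normal((N, N)) / np.sqrt(N))
            ev = np.linalg.eigvalsh(W @ W.T)          # squared singular values of the product
            moments += np.array([np.mean(ev ** n) for n in range(1, 5)])
        moments /= trials

        for n in range(1, 5):
            print(f"n={n}: empirical {moments[n - 1]:.3f}   Fuss-Catalan {fuss_catalan(p, n)}")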

  11. Estimating anisotropic diffusion of neutrons near the boundary of a pebble bed random system

    Energy Technology Data Exchange (ETDEWEB)

    Vasques, R. [Department of Mathematics, Center for Computational Engineering Science, RWTH Aachen University, Schinkel Strasse 2, D-52062 Aachen (Germany)

    2013-07-01

    Due to the arrangement of the pebbles in a Pebble Bed Reactor (PBR) core, if a neutron is located close to a boundary wall, its path length probability distribution function in directions of flight parallel to the wall is significantly different than in other directions. Hence, anisotropic diffusion of neutrons near the boundaries arises. We describe an analysis of neutron transport in a simplified 3-D pebble bed random system, in which we investigate the anisotropic diffusion of neutrons born near one of the system's boundary walls. While this simplified system does not model the actual physical process that takes place near the boundaries of a PBR core, the present work paves the road to a formulation that may enable more accurate diffusion simulations of such problems to be performed in the future. Monte Carlo codes have been developed for (i) deriving realizations of the 3-D random system, and (ii) performing 3-D neutron transport inside the heterogeneous model; numerical results are presented for three different choices of parameters. These numerical results are used to assess the accuracy of estimates for the mean-squared displacement of neutrons obtained with the diffusion approximations of the Atomic Mix Model and of the recently introduced [1] Non-Classical Theory with angular-dependent path length distribution. The Non-Classical Theory makes use of a Generalized Linear Boltzmann Equation in which the locations of the scattering centers in the system are correlated and the distance to collision is not exponentially distributed. We show that the results predicted using the Non-Classical Theory successfully model the anisotropic behavior of the neutrons in the random system, and more closely agree with experiment than the results predicted by the Atomic Mix Model. (authors)

  12. Estimating anisotropic diffusion of neutrons near the boundary of a pebble bed random system

    International Nuclear Information System (INIS)

    Vasques, R.

    2013-01-01

    Due to the arrangement of the pebbles in a Pebble Bed Reactor (PBR) core, if a neutron is located close to a boundary wall, its path length probability distribution function in directions of flight parallel to the wall is significantly different than in other directions. Hence, anisotropic diffusion of neutrons near the boundaries arises. We describe an analysis of neutron transport in a simplified 3-D pebble bed random system, in which we investigate the anisotropic diffusion of neutrons born near one of the system's boundary walls. While this simplified system does not model the actual physical process that takes place near the boundaries of a PBR core, the present work paves the road to a formulation that may enable more accurate diffusion simulations of such problems to be performed in the future. Monte Carlo codes have been developed for (i) deriving realizations of the 3-D random system, and (ii) performing 3-D neutron transport inside the heterogeneous model; numerical results are presented for three different choices of parameters. These numerical results are used to assess the accuracy of estimates for the mean-squared displacement of neutrons obtained with the diffusion approximations of the Atomic Mix Model and of the recently introduced [1] Non-Classical Theory with angular-dependent path length distribution. The Non-Classical Theory makes use of a Generalized Linear Boltzmann Equation in which the locations of the scattering centers in the system are correlated and the distance to collision is not exponentially distributed. We show that the results predicted using the Non-Classical Theory successfully model the anisotropic behavior of the neutrons in the random system, and more closely agree with experiment than the results predicted by the Atomic Mix Model. (authors)

  13. Optimal sensor placement for leak location in water distribution networks using genetic algorithms.

    Science.gov (United States)

    Casillas, Myrna V; Puig, Vicenç; Garza-Castañón, Luis E; Rosich, Albert

    2013-11-04

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.
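
    A toy genetic-algorithm sketch of the placement idea (random mock sensitivity data, a simplified isolability criterion and invented GA settings; not the paper's exact formulation): chromosomes are sets of m sensor nodes, and the fitness to be minimized is the number of leak pairs whose binarized signatures on the chosen sensors coincide.

        import numpy as np

        rng = np.random.default_rng(7)
        n_leaks, n_nodes, m = 40, 25, 5
        S = rng.random((n_leaks, n_nodes)) > 0.6      # mock binarized leak-sensitivity matrix

        def non_isolable_pairs(sensors):
            """Count pairs of leaks with identical signatures on the chosen sensors."""
            _, counts = np.unique(S[:, sensors], axis=0, return_counts=True)
            return int(sum(c * (c - 1) // 2 for c in counts))

        def mutate(ind):
            out = ind.copy()
            out[rng.integers(m)] = rng.integers(n_nodes)
            return out if len(set(out)) == m else ind  # keep only duplicate-free mutations

        pop = [rng.choice(n_nodes, size=m, replace=False) for _ in range(40)]
        for _ in range(100):                           # generations
            pop.sort(key=non_isolable_pairs)
            elite = pop[:10]
            children = []
            while len(children) < 30:
                p1, p2 = elite[rng.integers(10)], elite[rng.integers(10)]
                child = rng.choice(np.unique(np.concatenate([p1, p2])), size=m, replace=False)
                children.append(mutate(child))
            pop = elite + children

        best = min(pop, key=non_isolable_pairs)
        print("sensors:", sorted(best), " non-isolable pairs:", non_isolable_pairs(best))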

  14. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Luis E. Garza-Castañón

    2013-11-01

    Full Text Available This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach.

  15. Optimal Sensor Placement for Leak Location in Water Distribution Networks Using Genetic Algorithms

    Science.gov (United States)

    Casillas, Myrna V.; Puig, Vicenç; Garza-Castañón, Luis E.; Rosich, Albert

    2013-01-01

    This paper proposes a new sensor placement approach for leak location in water distribution networks (WDNs). The sensor placement problem is formulated as an integer optimization problem. The optimization criterion consists in minimizing the number of non-isolable leaks according to the isolability criteria introduced. Because of the large size and non-linear integer nature of the resulting optimization problem, genetic algorithms (GAs) are used as the solution approach. The obtained results are compared with a semi-exhaustive search method with higher computational effort, proving that GA allows one to find near-optimal solutions with less computational load. Moreover, three ways of increasing the robustness of the GA-based sensor placement method have been proposed using a time horizon analysis, a distance-based scoring and considering different leaks sizes. A great advantage of the proposed methodology is that it does not depend on the isolation method chosen by the user, as long as it is based on leak sensitivity analysis. Experiments in two networks allow us to evaluate the performance of the proposed approach. PMID:24193099

  16. GLN standard as a facilitator of physical location identification within process of distribution

    Directory of Open Access Journals (Sweden)

    Davor Dujak

    2017-09-01

    Full Text Available Background: Distribution, from the business point of view, is a set of decisions and actions that will provide the right products at the right time and place, in line with customer expectations. It is a process that generates significant cost, but, implemented effectively, it significantly affects the positive perception of the company. The Institute of Logistics and Warehousing (IliM), based on research results related to the optimization of the distribution network and on consulting projects for companies, indicates the high importance of the correct description of physical locations within supply chains in order to make transport processes more effective. Individual companies work on their own geocoding of warehouse locations and the locations of their business partners (suppliers, customers), but the lack of standardization in this area causes delays related to delivery problems with reaching the right destination. Furthermore, the cooperating companies do not have a precise indication of the operating conditions of each location, e.g. the time windows of the plant, the logistic units accepted by the parties, the supported transport modes, etc. The lack of this information generates additional costs associated with re-operation and the costs of lost benefits due to goods not arriving on time. The solution to this problem seems to be a wide-scale implementation of the GS1 standard known as the Global Location Number (GLN), which, thanks to a broad base of information, will assist the distribution processes. Material and methods: The results of a survey conducted among Polish companies in the second half of 2016 indicate an unsatisfactory degree of implementation of transport processes, resulting from incorrect or inaccurate descriptions of locations and, thus, a significant number of errors in deliveries. Accordingly, the authors studied the literature and examined case studies indicating the possibility of using the GLN standard to identify the physical location and to show the

  17. On the Optimal Location of Sensors for Parametric Identification of Linear Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Brincker, Rune

    1994-01-01

    An outline of the field of optimal location of sensors for parametric identification of linear structural systems is presented. There are few papers devoted to the case of optimal location of sensors in which the measurements are modeled by a random field with non-trivial covariance function. It is assumed most often that the results of the measurements are statistically independent random variables. In an example the importance of considering the measurements as statistically dependent random variables is shown. The covariance of the model parameters expected to be obtained is investigated.

  18. Pneumatic Performance of a Non-Axisymmetric Floating Oscillating Water Column Wave Energy Conversion Device in Random Waves

    OpenAIRE

    Bull, Diana

    2014-01-01

    A stochastic approach is used to gain a sophisticated understanding of a non-axisymmetric floating oscillating water column's response to random waves. A linear, frequency-domain performance model that links the oscillating structure to air-pressure fluctuations with a Wells Turbine in 3-dimensions is used to study the device performance at a northern California deployment location. Both short-term, sea-state, and long-term, annual, predictions are made regarding the device's performance. U...

  19. Randomly displaced phase distribution design and its advantage in page-data recording of Fourier transform holograms.

    Science.gov (United States)

    Emoto, Akira; Fukuda, Takashi

    2013-02-20

    For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.

  20. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    Directory of Open Access Journals (Sweden)

    Fang Li

    2013-10-01

    Full Text Available This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using a method of digital signal analysis. In this, autocorrelation is used to extract the location coefficient from the periodic AE signal, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. Then a new location algorithm based on the location coefficient is presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to take into account two different types of AE source for location.

  1. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  2. Transient modeling of non-Fickian transport and first-order reaction using continuous time random walk

    Science.gov (United States)

    Burnell, Daniel K.; Hansen, Scott K.; Xu, Jie

    2017-09-01

    Contaminants in groundwater may experience a broad spectrum of velocities and multiple rates of mass transfer between mobile and immobile zones during transport. These conditions may lead to non-Fickian plume evolution which is not well described by the advection-dispersion equation (ADE). Simultaneously, many groundwater contaminants are degraded by processes that may be modeled as first-order decay. It is now known that non-Fickian transport and reaction are intimately coupled, with reaction affecting the transport operator. However, closed-form solutions for these important scenarios have not been published for use in applications. In this paper, we present four new Green's function analytic solutions in the uncoupled, uncorrelated continuous time random walk (CTRW) framework for reactive non-Fickian transport, corresponding to the quartet of conservative tracer solutions presented by Kreft and Zuber (1978) for Fickian transport. These consider pulse injection for both resident and flux concentration combined with detection in both resident and flux concentration. A pair of solutions for resident concentration temporal pulses with detection in both flux and resident concentration is also presented. We also derive the relationship between flux and resident concentration for non-Fickian transport with first-order reaction for this CTRW formulation. An explicit discussion of employment of the new solutions to model transport with arbitrary upgradient boundary conditions as well as mobile-immobile mass transfer is then presented. Using the new solutions, we show that first-order reaction has no effect on the anomalous spatial spreading rate of concentration profiles, but produces breakthrough curves at fixed locations that appear to have been generated by Fickian transport. Under the assumption of a Pareto CTRW transition distribution, we present a variety of numerical simulations including results showing coherence of our analytic solutions and CTRW particle
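
    A bare-bones particle sketch of an uncoupled CTRW with first-order decay, in the spirit of the framework described above but with invented parameters and without the paper's analytic Green's functions: heavy-tailed Pareto waiting times between advective-dispersive jumps produce non-Fickian behavior, and decay enters through the survival weight exp(-lambda*t) attached to each arriving particle.

        import numpy as np

        rng = np.random.default_rng(10)
        n_particles, L, lam = 20_000, 50.0, 0.002   # particles, detection plane, decay rate
        beta = 1.5                                  # Pareto waiting-time exponent (1 < beta < 2)

        arrival, weight = [], []
        for _ in range(n_particles):
            x = t = 0.0
            while x < L and t < 5e4:
                t += 1.0 + rng.pareto(beta)              # heavy-tailed waiting time before the next jump
                x += 1.0 + 0.5 * rng.standard_normal()   # advective jump plus dispersion
            if x >= L:
                arrival.append(t)
                weight.append(np.exp(-lam * t))          # surviving mass fraction after first-order decay

        arrival, weight = np.array(arrival), np.array(weight)
        print(f"median arrival time at x = {L:g}: {np.median(arrival):.0f}")
        print(f"mass fraction surviving decay   : {weight.sum() / n_particles:.3f}")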

  3. Representing Degree Distributions, Clustering, and Homophily in Social Networks With Latent Cluster Random Effects Models.

    Science.gov (United States)

    Krivitsky, Pavel N; Handcock, Mark S; Raftery, Adrian E; Hoff, Peter D

    2009-07-01

    Social network data often involve transitivity, homophily on observed attributes, clustering, and heterogeneity of actor degrees. We propose a latent cluster random effects model to represent all of these features, and we describe a Bayesian estimation method for it. The model is applicable to both binary and non-binary network data. We illustrate the model using two real datasets. We also apply it to two simulated network datasets with the same, highly skewed, degree distribution, but very different network behavior: one unstructured and the other with transitivity and clustering. Models based on degree distributions, such as scale-free, preferential attachment and power-law models, cannot distinguish between these very different situations, but our model does.

  4. Statistical distributions of optimal global alignment scores of random protein sequences

    Directory of Open Access Journals (Sweden)

    Tang Jiaowei

    2005-10-01

    Full Text Available Abstract. Background: The inference of homology from statistically significant sequence similarity is a central issue in sequence alignments. So far the statistical distribution function underlying the optimal global alignments has not been completely determined. Results: In this study, random and real but unrelated sequences prepared in six different ways were selected as reference datasets to obtain their respective statistical distributions of global alignment scores. All alignments were carried out with the Needleman-Wunsch algorithm and optimal scores were fitted to the Gumbel, normal and gamma distributions respectively. The three-parameter gamma distribution performs the best as the theoretical distribution function of global alignment scores, as it agrees perfectly well with the distribution of alignment scores. The normal distribution also agrees well with the score distribution frequencies when the shape parameter of the gamma distribution is sufficiently large, for this is the scenario when the normal distribution can be viewed as an approximation of the gamma distribution. Conclusion: We have shown that the optimal global alignment scores of random protein sequences fit the three-parameter gamma distribution function. This would be useful for the inference of homology between sequences whose relationship is unknown, through the evaluation of gamma distribution significance between sequences.
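
    A small self-contained illustration (toy scoring scheme, short random sequences, uniform residue composition, none of which match the study's setup): global alignment scores from a basic Needleman-Wunsch implementation are fitted with scipy's three-parameter gamma distribution; with a large fitted shape parameter the gamma is close to normal, consistent with the remark above.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)
        AA = np.arange(20)                     # 20 amino-acid types, uniform composition
        MATCH, MISMATCH, GAP = 5, -1, -4       # toy scoring scheme

        def needleman_wunsch(a, b):
            """Optimal global alignment score with linear gap penalties."""
            n, m = len(a), len(b)
            H = np.zeros((n + 1, m + 1))
            H[:, 0] = GAP * np.arange(n + 1)
            H[0, :] = GAP * np.arange(m + 1)
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = H[i - 1, j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
                    H[i, j] = max(diag, H[i - 1, j] + GAP, H[i, j - 1] + GAP)
            return H[n, m]

        scores = np.array([needleman_wunsch(rng.choice(AA, 60), rng.choice(AA, 60))
                           for _ in range(300)])
        shape, loc, scale = stats.gamma.fit(scores)
        print(f"three-parameter gamma fit: shape={shape:.1f}, loc={loc:.1f}, scale={scale:.2f}")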

  5. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    Science.gov (United States)

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods are only used to deal with the single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they only adopt a simple strategy, that is, transforming the multi-location proteins to multiple proteins with single location, which doesn't take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection), is proposed to learn from multi-location proteins in an effective and efficient way. Through five-fold cross validation test on a benchmark dataset, we demonstrate our proposed method with consideration of label correlations obviously outperforms the baseline BR method without consideration of label correlations, indicating correlations among different subcellular locations really exist and contribute to improvement of prediction performance. Experimental results on two benchmark datasets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for the public usage.

  6. Disparities in the Population Distribution of African American and Non-Hispanic White Smokers along the Quitting Continuum

    Science.gov (United States)

    Trinidad, Dennis R.; Xie, Bin; Fagan, Pebbles; Pulvers, Kim; Romero, Devan R.; Blanco, Lyzette; Sakuma, Kari-Lyn K.

    2015-01-01

    Purpose: To examine disparities and changes over time in the population-level distribution of smokers along a cigarette quitting continuum among African American smokers compared with non-Hispanic Whites. Methods: Secondary data analyses of the 1999, 2002, 2005, and 2008 California Tobacco Surveys (CTS). The CTS are large, random-digit-dialed,…

  7. Decompounding random sums: A nonparametric approach

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    Observations from sums of random variables with a random number of summands, known as random, compound or stopped sums, arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably...

  8. Fermi-dirac and random carrier distributions in quantum dot lasers

    OpenAIRE

    Hutchings, M.; O'Driscoll, Ian; Smowton, P. M.; Blood, P.

    2014-01-01

    Using experimental gain and emission measurements as functions of temperature, a method is described to characterise the carrier distribution of radiative states in a quantum dot (QD) laser structure in terms of a temperature. This method is independent of the form of the inhomogeneous dot distribution. A thermal distribution at the lattice temperature is found between 200 and 300K. Below 200K the characteristic temperature exceeds the lattice temperature and the distribution becomes random b...

  9. Accurately bearing measurement in non-cooperative passive location system

    International Nuclear Information System (INIS)

    Liu Zhiqiang; Ma Hongguang; Yang Lifeng

    2007-01-01

    A system for non-cooperative passive location based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With a long baseline, the bearing is measured accurately but ambiguously. To realize unambiguous, accurate bearing measurement, beam width and multiple-constraint adaptive beamforming techniques are used to resolve the azimuth ambiguity. Theory and simulation results show that this method is effective for accurate bearing measurement in a non-cooperative passive location system. (authors)

  10. Towards the characterization of short-term memory of zebrafish: effect of fixed versus random reward location.

    Science.gov (United States)

    Fernandes, Yohaan; Talpos, Andrea; Gerlai, Robert

    2015-01-02

    The zebrafish has been proposed as an efficient tool for the analysis of behavioral and neurobiological mechanisms of learning and memory. However, compared to traditional laboratory rodents, it is a relative newcomer. In fact, only limited information on its mnemonic and cognitive abilities has been obtained, and only a small number of learning and memory paradigms have been available for its testing. Previously, we have shown that zebrafish are capable of learning the systematic alternating sequence of reward location in a shuttle box task in which we evaluated behavioral responses manually. Here, we employ a computerized, automated version of this task. We study whether zebrafish can remember the prior location of a reward (the sight of conspecifics) when the location is fixed (constant), or when the sequence of the location of presentation randomly changes between the left and the right side of the experimental tank. We also analyze performance features including the swim speed of experimental fish as well as the temporal changes of the position of fish when the reward (stimulus) is not presented. Our results show that under both the fixed and randomly changing reward location conditions zebrafish exhibit a significant preference for the prior location of reward, albeit the preference is stronger under the fixed location condition. We conclude that adult zebrafish have short-term associative memory that can be induced and quantified in an automated manner. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Random walk generated by random permutations of {1, 2, 3, ..., n + 1}

    International Nuclear Information System (INIS)

    Oshanin, G; Voituriez, R

    2004-01-01

    We study properties of a non-Markovian random walk X_l^(n), l = 0, 1, 2, ..., n, evolving in discrete time l on a one-dimensional lattice of integers, whose moves to the right or to the left are prescribed by the rise-and-descent sequences characterizing random permutations π of [n + 1] = {1, 2, 3, ..., n + 1}. We determine exactly the probability of finding the end-point X_n = X_n^(n) of the trajectory of such a permutation-generated random walk (PGRW) at site X, and show that in the limit n → ∞ it converges to a normal distribution with a smaller, compared to the conventional Polya random walk, diffusion coefficient. We formulate, as well, an auxiliary stochastic process whose distribution is identical to the distribution of the intermediate points X_l^(n), l < n, which enables us to obtain the probability measure of different excursions and to define the asymptotic distribution of the number of 'turns' of the PGRW trajectories.
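
    A quick simulation sketch of the PGRW (parameters arbitrary): the steps follow the rises and descents of a random permutation of {1, ..., n+1}, and the empirical endpoint variance comes out well below the value n of the conventional Polya walk, illustrating the smaller diffusion coefficient.

        import numpy as np

        rng = np.random.default_rng(9)
        n, reps = 200, 20_000

        endpoints = np.empty(reps)
        for r in range(reps):
            perm = rng.permutation(n + 1)               # random permutation of {1, ..., n+1}
            steps = np.where(np.diff(perm) > 0, 1, -1)  # rise -> +1, descent -> -1
            endpoints[r] = steps.sum()

        print("PGRW endpoint variance   :", round(endpoints.var(), 1))
        print("Polya walk variance (= n):", n)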

  12. A random sampling procedure for anisotropic distributions

    International Nuclear Information System (INIS)

    Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.

    1975-01-01

    A procedure is described for sampling the scattering angle of neutrons as per specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb and these elements are of interest in dosimetry and shielding. (author)

  13. On the Optimal Location of Sensors for Parametric Identification of Linear Structural Systems

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Brincker, Rune

    A survey of the field of optimal location of sensors for parametric identification of linear structural systems is presented. The survey shows that few papers are devoted to the case of optimal location of sensors in which the measurements are modelled by a random field with non-trivial covariance function. Most often it is assumed that the results of the measurements are statistically independent variables. In an example the importance of considering the measurements as statistically dependent random variables is shown. The example is concerned with optimal location of sensors for parametric identification of modal parameters for a vibrating beam under random loading. The covariance of the modal parameters expected to be obtained is investigated with respect to variations in the number and location of sensors. Further, the influence of the noise on the optimal location of the sensors is investigated.

  14. Drop Spreading with Random Viscosity

    Science.gov (United States)

    Xu, Feng; Jensen, Oliver

    2016-11-01

    Airway mucus acts as a barrier to protect the lung. However as a biological material, its physical properties are known imperfectly and can be spatially heterogeneous. In this study we assess the impact of these uncertainties on the rate of spreading of a drop (representing an inhaled aerosol) over a mucus film. We model the film as Newtonian, having a viscosity that depends linearly on the concentration of a passive solute (a crude proxy for mucin proteins). Given an initial random solute (and hence viscosity) distribution, described as a Gaussian random field with a given correlation structure, we seek to quantify the uncertainties in outcomes as the drop spreads. Using lubrication theory, we describe the spreading of the drop in terms of a system of coupled nonlinear PDEs governing the evolution of film height and the vertically-averaged solute concentration. We perform Monte Carlo simulations to predict the variability in the drop centre location and width (1D) or area (2D). We show how simulation results are well described (at much lower computational cost) by a low-order model using a weak disorder expansion. Our results show, for example, that variability in the drop location is a non-monotonic function of the solute correlation length. Engineering and Physical Sciences Research Council.

  15. Non-classical radiation transport in random media with fluctuating densities

    International Nuclear Information System (INIS)

    Dyuldya, S.V.; Bratchenko, M.I.

    2012-01-01

    The ensemble averaged propagation kernels of non-classical radiation transport are studied by means of the proposed application of stochastic differential equation random medium generators. It is shown that non-classical transport is favored in long-correlated, weakly fluctuating media. The developed kernel models have been implemented in GEANT4 and validated against the double Monte Carlo modeling of absorption curves of disperse neutron absorbers and γ-albedos from a scatterer/absorber random mix.

  16. Commercial milk distribution profiles and production locations

    International Nuclear Information System (INIS)

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1994-04-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk (Napier 1992). Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year (Heeb 1993); therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed

  17. Discovering non-random segregation of sister chromatids: The naïve treatment of a premature discovery

    Directory of Open Access Journals (Sweden)

    Karl G. Lark

    2013-02-01

    Full Text Available The discovery of non-random chromosome segregation is discussed from the perspective of what was known in 1965 and 1966. The distinction between daughter, parent or grandparent strands of DNA was developed in a bacterial system and led to the discovery that multiple copies of DNA elements of bacteria are not distributed randomly with respect to the age of the template strand. Experiments with higher eukaryotic cells demonstrated that during mitosis Mendel’s laws were violated; and the initial serendipitous choice of eukaryotic cell system led to the striking example of non-random segregation of parent and grandparent DNA template strands in primary cultures of cells derived from mouse embryos. Attempts to extrapolate these findings to established TC lines demonstrated that the property could be lost. Experiments using plant root tips demonstrated that the phenomenon exists in plants and that it was, at some level, under genetic control. Despite publication in major journals and symposia (Lark et al., 1966a; Lark, 1967a, 1967b, 1969, 1969a, 1969b), the potential implications of these findings were ignored for several decades. Here we explore possible reasons for the pre-maturity (Stent, 1972) of this discovery.

  18. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
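    A minimal sketch of the general idea, assuming the standard Cholesky route to correlated samples; it is not the authors' algorithm and does not include their accuracy control. For log-normal variables the mean and covariance are taken to refer to the underlying normal distribution.

```python
import numpy as np

def correlated_samples(mean, cov, n_samples, lognormal=False, rng=None):
    """Draw correlated multivariate normal (or log-normal) samples."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(cov)                 # cov = L @ L.T
    z = rng.standard_normal((n_samples, len(mean)))
    x = mean + z @ L.T                          # correlated normal draws
    return np.exp(x) if lognormal else x

# Example: two parameters with correlation 0.8, sampled log-normally.
cov = np.array([[1.0, 0.8],
                [0.8, 1.0]])
samples = correlated_samples(np.zeros(2), cov, 10000, lognormal=True)
print(np.corrcoef(np.log(samples).T))           # off-diagonal close to 0.8
```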

  19. Regularized κ-distributions with non-diverging moments

    Science.gov (United States)

    Scherer, K.; Fichtner, H.; Lazar, M.

    2017-12-01

    For various plasma applications the so-called (non-relativistic) κ-distribution is widely used to reproduce and interpret the suprathermal particle populations exhibiting a power-law distribution in velocity or energy. Despite its reputation the standard κ-distribution as a concept is still disputable, mainly due to the velocity moments M_l which make a macroscopic characterization possible, but whose existence is restricted to low orders l < 2κ − 1. Moreover, the definition of the κ-distribution itself is conditioned by the existence of the moment of order l = 2 (i.e., the kinetic temperature), satisfied only for κ > 3/2. In order to resolve these critical limitations we introduce the regularized κ-distribution with non-diverging moments. For the evaluation of all velocity moments a general analytical expression is provided, enabling a significant step towards a macroscopic (fluid-like) description of space plasmas and, in general, any system of κ-distributed particles.

  20. Estimation of distributed Fermat-point location for wireless sensor networking.

    Science.gov (United States)

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies a triangle-area location estimation formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point with the shortest total path to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.
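    A hedged sketch of the geometric step only: the Fermat point, i.e. the point minimising the total distance to the three beacon positions, computed by Weiszfeld iteration. The coordinates are invented, and the subsequent DFPLE refinement and network protocol are not reproduced here.

```python
import numpy as np

def fermat_point(a, b, c, iters=200, eps=1e-9):
    """Point minimising the summed distance to the triangle vertices a, b, c."""
    pts = np.array([a, b, c], dtype=float)
    x = pts.mean(axis=0)                        # start at the centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(pts - x, axis=1), eps)
        w = 1.0 / d                             # Weiszfeld weights
        x_new = (pts * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

# Three hypothetical beacon positions; the result lies inside the triangle.
print(fermat_point((0.0, 0.0), (4.0, 0.0), (0.0, 3.0)))
```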

  1. Distributed Synchronization in Networks of Agent Systems With Nonlinearities and Random Switchings.

    Science.gov (United States)

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2013-02-01

    In this paper, the distributed synchronization problem of networks of agent systems with controllers and nonlinearities subject to Bernoulli switchings is investigated. Controllers and adaptive updating laws injected in each vertex of networks depend on the state information of its neighborhood. Three sets of Bernoulli stochastic variables are introduced to describe the occurrence probabilities of distributed adaptive controllers, updating laws and nonlinearities, respectively. By the Lyapunov functions method, we show that the distributed synchronization of networks composed of agent systems with multiple randomly occurring nonlinearities, multiple randomly occurring controllers, and multiple randomly occurring updating laws can be achieved in mean square under certain criteria. The conditions derived in this paper can be solved by semi-definite programming. Moreover, by mathematical analysis, we find that the coupling strength, the probabilities of the Bernoulli stochastic variables, and the form of nonlinearities have great impacts on the convergence speed and the terminal control strength. The synchronization criteria and the observed phenomena are demonstrated by several numerical simulation examples. In addition, the advantage of distributed adaptive controllers over conventional adaptive controllers is illustrated.

  2. MODELS OF COVARIANCE FUNCTIONS OF GAUSSIAN RANDOM FIELDS ESCAPING FROM ISOTROPY, STATIONARITY AND NON NEGATIVITY

    Directory of Open Access Journals (Sweden)

    Pablo Gregori

    2014-03-01

    Full Text Available This paper presents a survey of recent advances in the modeling of space or space-time Gaussian Random Fields (GRF), tools of geostatistics at hand for the understanding of special cases of noise in image analysis. They can be used when stationarity or isotropy are unrealistic assumptions, or even when negative covariance between some pairs of locations is evident. We show some strategies for escaping these restrictions, on the basis of rich classes of well-known stationary or isotropic non-negative covariance models, and through suitable operations such as linear combinations, generalized means, or particular Fourier transforms.

  3. A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems

    Science.gov (United States)

    Akinbode, Oluwaseyi Wemimo

    The smart grid initiative is the impetus behind changes that are expected to culminate into an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem; a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price sensitive resources for the transmission system problem and vice versa is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. Individual load

  4. A non-negative Wigner-type distribution

    International Nuclear Information System (INIS)

    Cartwright, N.D.

    1976-01-01

    The Wigner function, which is commonly used as a joint distribution for non-commuting observables, is shown to be non-negative in all quantum states when smoothed with a gaussian whose variances are greater than or equal to those of the minimum uncertainty wave packet. (Auth.)

  5. Targeting health subsidies through a non-price mechanism: A randomized controlled trial in Kenya

    Science.gov (United States)

    Dupas, Pascaline; Hoffmann, Vivian; Kremer, Michael; Zwane, Alix Peterson

    2016-01-01

    Free provision of preventive health products can dramatically increase access in low income countries. A cost concern about free provision is that some recipients may not use the product, wasting resources (over-inclusion). Yet charging a price to screen out non-users may screen out poor people who need and would use the product (over-exclusion). We report on a randomized controlled trial of a screening mechanism that combines the free provision of chlorine solution for water treatment with a small non-monetary cost (household vouchers that need to be redeemed monthly). Relative to a non-voucher free distribution program, this mechanism reduces the quantity of chlorine procured by 60 percentage points, but reduces the share of households whose stored water tests positive for chlorine residual by only one percentage point, dramatically improving the tradeoff between over-inclusion and over-exclusion. PMID:27563091

  6. Statistics for Ratios of Rayleigh, Rician, Nakagami-m, and Weibull Distributed Random Variables

    Directory of Open Access Journals (Sweden)

    Dragana Č. Pavlović

    2013-01-01

    Full Text Available The distributions of ratios of random variables are of interest in many areas of the sciences. In this brief paper, we present the joint probability density function (PDF) and the PDF of the maximum of the ratios μ_1 = R_1/r_1 and μ_2 = R_2/r_2 for the cases where R_1, R_2, r_1, and r_2 are Rayleigh, Rician, Nakagami-m, and Weibull distributed random variables. The random variables R_1 and R_2, as well as r_1 and r_2, are correlated. Given the suitability of the Weibull distribution for describing fading in both indoor and outdoor environments, special attention is dedicated to the case of Weibull random variables. For this case, analytical expressions for the joint PDF, the PDF of the maximum, the PDF of the minimum, and the product moments of an arbitrary number of ratios μ_i = R_i/r_i, i = 1, …, L are obtained. The random variables in the numerator, R_i, as well as those in the denominator, r_i, are exponentially correlated. To the best of the authors' knowledge, analytical expressions for the PDF of the minimum and the product moments of {μ_i}_{i=1}^{L} are novel in the open technical literature. The proposed mathematical analysis is complemented by various numerical results. An application of the presented theoretical results is illustrated with respect to performance assessment of wireless systems.

  7. Location-Based Mapping Services to Support Collaboration in Spatially Distributed Workgroups

    Science.gov (United States)

    Meyer, Eike Michael; Wichmann, Daniel; Büsch, Henning; Boll, Susanne

    Mobile devices and systems have reached almost every part of our daily life. Following the mobile computing trend, the business logic of distributed, cooperative applications has also started to move into mobile client applications. With this shift, the cooperation aspect may also exploit the user’s location and situation context and the capabilities of the mobile device, and integrate them into the actual cooperation and collaboration. In this paper, we present an approach for a Collaborative Map that exploits the spatial context of the members of a distributed group as a means to visualize and provide collaboration functionality. A number of location-related cooperation methods then become feasible, such as getting an overview of the spatial distribution of the team members, identifying an ad-hoc meeting place nearby, or chatting with a group member who has a certain expertise in his or her profile. With CoMa, we move from standard collaboration tools that marginally consider spatial information towards context-aware mobile collaborative systems that can support a wide range of applications such as emergency response, maintenance work or event organization, where human resources have to be coordinated in a spatial context and tasks need to be assigned dynamically depending on capabilities and situation context.

  8. Chaotic spin exchange: is the spin non-flip rate observable?

    International Nuclear Information System (INIS)

    Senba, Masayoshi

    1994-01-01

    If spin exchange is of the Poisson nature, that is, if the time distribution of collisions obeys an exponential distribution function and the collision process is random, the muon spin depolarization is determined only by the spin flip rate regardless of the spin non-flip rate. In this work, spin exchange is discussed in the case of chaotic spin exchange, where the distribution of collision time sequences, generated by a deterministic equation, is exponential but not random (deterministic chaos). Even though this process has the same time distribution as a Poisson process, the muon polarization is affected by the spin non-flip rate. Having an exponential time distribution function is not a sufficient condition for the non-observation of the spin non-flip rate and it is essential that the process is also random. (orig.)

  9. Generating log-normally distributed random numbers by using the Ziggurat algorithm

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2016-01-01

    Uncertainty analyses are usually based on the Monte Carlo method. Using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs. This paper proposes an approach to generate log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can be helpful to improve the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions. - The Ziggurat algorithm is an excellent random number generator for producing normally distributed variates. - The Ziggurat algorithm is computationally much faster than the most commonly used method, the Marsaglia polar method.
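    A minimal sketch of the idea in the abstract: log-normally distributed variates are obtained by exponentiating normal variates produced by a Ziggurat-type generator. NumPy's default Generator, which uses a ziggurat algorithm for standard normals, serves here as a stand-in for the paper's own implementation.

```python
import numpy as np

def lognormal_via_ziggurat(mu, sigma, size, rng=None):
    """exp(mu + sigma * Z), with Z drawn by a ziggurat-based normal sampler."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(size)       # ziggurat-based normal draws
    return np.exp(mu + sigma * z)

x = lognormal_via_ziggurat(mu=0.0, sigma=1.0, size=100_000)
print(x.mean(), np.exp(0.5))            # sample mean vs. theoretical exp(sigma^2/2)
```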

  10. Non-Linguistic Vocal Event Detection Using Online Random

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    Online random forest techniques have been successfully applied in areas such as object detection, face recognition, and audio event detection. This paper proposes to use the online random forest technique for detecting laughter and filler and for analyzing the importance of various features for non-linguistic vocal event classification through permutation. The results show that, according to the Area Under Curve measure, the online random forest achieved 88.1% compared to 82.9% obtained by the baseline support vector machines for laughter classification, and 86.8% compared to 83.6% for filler classification.

  11. A distributed location management strategy for next generation IP-based wireless networks

    Institute of Scientific and Technical Information of China (English)

    SONG Mei; FENG Rui-jun; HUANG Jian-wen; SONG Jun-de

    2006-01-01

    Location management is the most important function in mobility management technology. The hierarchical structure of the proposed hierarchical network-layer mobility management (HNMM) can reduce the signaling cost. The self-organizing topology scheme can enhance the robustness and quality of the mobility management. The location information of the mobile node is stored in a distributed database, which makes the storage of the location information more reliable and robust and allows more flexible strategies to be used. The numerical results show that HNMM can provide better performance than the general structure of mobile IPv6 when the mobile nodes move frequently and when there is high traffic throughput.

  12. CONVERGENCE OF THE FRACTIONAL PARTS OF THE RANDOM VARIABLES TO THE TRUNCATED EXPONENTIAL DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Bogdan Gheorghe Munteanu

    2013-01-01

    Full Text Available Using stochastic approximations, this paper studies the convergence in distribution of the fractional parts of sums of random variables to the truncated exponential distribution with parameter lambda. This is made feasible by means of the Fourier-Stieltjes sequence (FSS) of the random variable.

  13. Angular Distribution of GRBs

    Directory of Open Access Journals (Sweden)

    L. G. Balázs

    2012-01-01

    Full Text Available We studied the complete randomness of the angular distribution of BATSE gamma-ray bursts (GRBs). Based on their durations and peak fluxes, we divided the BATSE sample into 5 subsamples (short1, short2, intermediate, long1, long2) and studied the angular distributions separately. We used three methods to search for non-randomness in the subsamples: Voronoi tesselation, minimal spanning tree, and multifractal spectra. To study any non-randomness in the subsamples we defined 13 test-variables (9 from the Voronoi tesselation, 3 from the minimal spanning tree and one from the multifractal spectrum). We made Monte Carlo simulations taking into account the BATSE sky-exposure function. We tested the randomness by introducing squared Euclidean distances in the parameter space of the test-variables. We recognized that the short1 and short2 groups deviate significantly (99.90%, 99.98%) from the fully random case in the distribution of the squared Euclidean distances, but this is not true for the long samples. In the intermediate group, the squared Euclidean distances also give a significant deviation (98.51%).

  14. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    Science.gov (United States)

    Pato, Mauricio P.; Oshanin, Gleb

    2013-03-01

    We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2/((1/n)∑_{j=1}^n x_j^2), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko-Pastur form, i.e. it is given by P_n^(β)(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour.
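    A hedged Monte Carlo check of the limiting law quoted above: sample β = 1 (GOE-type) matrices, form w = x_1^2/((1/n)∑_j x_j^2) for a randomly chosen eigenvalue, and compare the histogram with the normalised density √((4 − w)/w)/(2π) on [0, 4]. Matrix sizes and sample counts are arbitrary illustrative choices.

```python
import numpy as np

def schmidt_like_samples(n, n_matrices, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    out = []
    for _ in range(n_matrices):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / 2.0                  # GOE-type symmetric matrix
        x = np.linalg.eigvalsh(h)
        x1 = rng.choice(x)                   # a randomly chosen eigenvalue
        out.append(x1 ** 2 / np.mean(x ** 2))
    return np.array(out)

w = schmidt_like_samples(n=200, n_matrices=500)
hist, edges = np.histogram(w, bins=20, range=(0.0, 4.0), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
# empirical density vs. sqrt((4 - w)/w) / (2*pi)
print(np.c_[mid, hist, np.sqrt((4.0 - mid) / mid) / (2.0 * np.pi)])
```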

  15. Distribution of Schmidt-like eigenvalues for Gaussian ensembles of the random matrix theory

    International Nuclear Information System (INIS)

    Pato, Mauricio P; Oshanin, Gleb

    2013-01-01

    We study the probability distribution function P_n^(β)(w) of the Schmidt-like random variable w = x_1^2/((1/n)∑_{j=1}^n x_j^2), where x_j (j = 1, 2, …, n) are unordered eigenvalues of a given n × n β-Gaussian random matrix, β being the Dyson symmetry index. This variable, by definition, can be considered as a measure of how any individual (randomly chosen) eigenvalue deviates from the arithmetic mean value of all eigenvalues of a given random matrix, and its distribution is calculated with respect to the ensemble of such β-Gaussian random matrices. We show that in the asymptotic limit n → ∞ and for arbitrary β the distribution P_n^(β)(w) converges to the Marčenko–Pastur form, i.e. it is given by P_n^(β)(w) ∼ √((4 − w)/w) for w ∈ [0, 4] and equals zero outside of the support, despite the fact that formally w is defined on the interval [0, n]. Furthermore, for Gaussian unitary ensembles (β = 2) we present exact explicit expressions for P_n^(β=2)(w) which are valid for arbitrary n and analyse their behaviour. (paper)

  16. Estimation of Distributed Fermat-Point Location for Wireless Sensor Networking

    Directory of Open Access Journals (Sweden)

    Yanuarius Teofilus Larosa

    2011-04-01

    Full Text Available This work presents a localization scheme for use in wireless sensor networks (WSNs) that is based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE applies a triangle-area location estimation formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point with the shortest total path to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to achieve minimum error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes that are based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals that, when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated. However, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms, and improves upon conventional bounding box strategies.

  17. Fermi-Dirac and random carrier distributions in quantum dot lasers

    International Nuclear Information System (INIS)

    Hutchings, M.; Smowton, P. M.; Blood, P.; O'Driscoll, I.

    2014-01-01

    Using experimental gain and emission measurements as functions of temperature, a method is described to characterise the carrier distribution of radiative states in a quantum dot (QD) laser structure in terms of a temperature. This method is independent of the form of the inhomogeneous dot distribution. A thermal distribution at the lattice temperature is found between 200 and 300 K. Below 200 K the characteristic temperature exceeds the lattice temperature and the distribution becomes random below about 60 K. This enables the temperature range for which Fermi-Dirac statistics are applicable in QD laser threshold calculations to be identified

  18. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.

  19. Location priority for non-formal early childhood education school based on promethee method and map visualization

    Science.gov (United States)

    Ayu Nurul Handayani, Hemas; Waspada, Indra

    2018-05-01

    Non-formal Early Childhood Education (non-formal ECE) is education provided for children under 4 years old. In the District of Banyumas, non-formal ECE is monitored by the District Government of Banyumas with the help of Sanggar Kegiatan Belajar (SKB) Purwokerto, one of the organizers of non-formal education. The government has a program for distributing ECE to all villages in Indonesia; however, the locations where ECE schools will be constructed in the coming years have not yet been determined. Therefore, to support that program, a decision support system was built to recommend villages for constructing ECE buildings. The data are projected using Brown’s Double Exponential Smoothing Method, and the Preference Ranking Organization Method for Enrichment Evaluation (Promethee) is used to generate the priority order. As a recommendation system, it generates a map visualization colored according to the priority level of each sub-district and village area. The system was tested with black-box testing, Promethee testing, and usability testing. The results showed that the system functionality and the Promethee algorithm worked properly and that users were satisfied.
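    A hedged illustration of the Promethee ranking step mentioned above, using the simple "usual" preference function and invented scores and weights for three hypothetical candidate villages; the criteria, weights and preference functions actually used in the system are not reproduced here.

```python
import numpy as np

def promethee_net_flows(scores, weights):
    """Net outranking flows; scores is alternatives x criteria (higher is better)."""
    n = scores.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            pref = (scores[i] > scores[j]).astype(float)  # "usual" criterion
            pi_ij = np.dot(weights, pref)                 # weighted preference index
            phi[i] += pi_ij
            phi[j] -= pi_ij
    return phi / (n - 1)

scores = np.array([[7, 5, 9],      # invented criterion scores per village
                   [6, 8, 7],
                   [8, 6, 5]])
weights = np.array([0.5, 0.3, 0.2])
print(np.argsort(-promethee_net_flows(scores, weights)))  # priority order
```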

  20. Randomly and Non-Randomly Missing Renal Function Data in the Strong Heart Study: A Comparison of Imputation Methods.

    Directory of Open Access Journals (Sweden)

    Nawar Shara

    Full Text Available Kidney and cardiovascular disease are widespread among populations with a high prevalence of diabetes, such as the American Indians participating in the Strong Heart Study (SHS). Studying these conditions simultaneously in longitudinal studies is challenging, because the morbidity and mortality associated with these diseases result in missing data, and these data are likely not missing at random. When such data are merely excluded, study findings may be compromised. In this article, a subset of 2264 participants with complete renal function data from Strong Heart Exams 1 (1989-1991), 2 (1993-1995), and 3 (1998-1999) was used to examine the performance of five methods used to impute missing data: listwise deletion, mean of serial measures, adjacent value, multiple imputation, and pattern-mixture. Three missing-at-random models and one non-missing-at-random model were used to compare the performance of the imputation techniques on randomly and non-randomly missing data. The pattern-mixture method was found to perform best for imputing renal function data that were not missing at random. Determining whether data are missing at random or not can help in choosing the imputation method that will provide the most accurate results.

  1. Football fever: goal distributions and non-Gaussian statistics

    Science.gov (United States)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2009-02-01

    Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopical point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows us to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the cold war times as well as the unified league after 1990, to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
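    A hedged sketch of the "self-affirmation" idea described above: a Bernoulli scoring process in which each goal raises the scoring probability for the remaining minutes. The parameter values are illustrative and are not fitted to the data analysed in the paper.

```python
import numpy as np

def goals_with_self_affirmation(rng, p0=0.015, kappa=1.5, minutes=90):
    """Number of goals in one match under a self-affirming Bernoulli process."""
    goals, p = 0, p0
    for _ in range(minutes):
        if rng.random() < p:
            goals += 1
            p *= kappa          # self-affirmation: scoring becomes easier
    return goals

rng = np.random.default_rng(2)
sample = [goals_with_self_affirmation(rng) for _ in range(20000)]
# The goal distribution develops a heavier tail than a plain binomial/Poisson.
print(np.bincount(sample) / len(sample))
```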

  2. Evaluating the Use of Random Distribution Theory to Introduce Statistical Inference Concepts to Business Students

    Science.gov (United States)

    Larwin, Karen H.; Larwin, David A.

    2011-01-01

    Bootstrapping methods and random distribution methods are increasingly recommended as better approaches for teaching students about statistical inference in introductory-level statistics courses. The authors examined the effect of teaching undergraduate business statistics students using random distribution and bootstrapping simulations. It is the…

  3. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.

  4. Random surfaces: A non-perturbative regularization of strings?

    International Nuclear Information System (INIS)

    Ambjoern, J.

    1989-12-01

    I review the basic properties of the theory of random surfaces. While it is by now well known that the theory of (discretized) random surfaces correctly describes the (perturbative) aspects of non-critical strings in d ≤ 1, it is less clear what the theory tells us for d > 1. In these lectures I intend to show that the theory of dynamically triangulated random surfaces provides us with a lot of information about the dynamics of both the bosonic string and the superstring even for d > 1. I also briefly review recent attempts to define a string field theory (sum over all genus) in this approach. (orig.)

  5. Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution

    Directory of Open Access Journals (Sweden)

    Hare Krishna

    2017-01-01

    Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purpose, a randomly censored real data set is discussed.
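    A brief sketch of the maximum likelihood step, under the assumptions that the geometric variable counts trials on {1, 2, ...} with success probability p and that censoring is right-censoring at known times; setting the score function to zero then gives a closed-form estimate. The Bayes estimators and interval procedures of the article are not reproduced.

```python
import numpy as np

def geometric_mle_censored(times, observed):
    """MLE of p from randomly right-censored geometric data.

    times:    observed value (failure time, or censoring time if censored)
    observed: 1 if the event was observed, 0 if the observation was censored
    """
    times, observed = np.asarray(times), np.asarray(observed)
    d = observed.sum()                                   # uncensored count
    s = (times[observed == 1] - 1).sum() + times[observed == 0].sum()
    return d / (d + s)                                   # from d/p = s/(1 - p)

# Toy data: six observed failures and two observations censored at time 5.
print(geometric_mle_censored([1, 2, 2, 3, 4, 6, 5, 5],
                             [1, 1, 1, 1, 1, 1, 0, 0]))
```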

  6. A distribution-free newsvendor model with balking penalty and random yield

    Directory of Open Access Journals (Sweden)

    Chongfeng Lan

    2015-05-01

    Full Text Available Purpose: The purpose of this paper is to extend the analysis of the distribution-free newsvendor problem in an environment of customer balking, which occurs when customers are reluctant to buy a product if its available inventory falls below a threshold level. Design/methodology/approach: We provide a new tradeoff tool as a replacement of the traditional one to weigh the holding cost and the goodwill costs segment: in addition to the shortage penalty, we also introduce the balking penalty. Furthermore, we extend our model to the case of random yield. Findings: A model is presented for determining both an optimal order quantity and a lower bound on the profit under the worst possible distribution of the demand. We also study the effects of shortage penalty and the balking penalty on the optimal order quantity, which have been largely bypassed in the existing distribution free single period models with balking. Numerical examples are presented to illustrate the result. Originality/value: The incorporation of balking penalty and random yield represents an important improvement in inventory policy performance for distribution-free newsvendor problem when customer balking occurs and the distributional form of demand is unknown.

  7. Online distribution channel increases article usage on Mendeley: a randomized controlled trial.

    Science.gov (United States)

    Kudlow, Paul; Cockerill, Matthew; Toccalino, Danielle; Dziadyk, Devin Bissky; Rutledge, Alan; Shachak, Aviv; McIntyre, Roger S; Ravindran, Arun; Eysenbach, Gunther

    2017-01-01

    Prior research shows that article reader counts (i.e. saves) on the online reference manager, Mendeley, correlate with future citations. There are currently no evidence-based distribution strategies that have been shown to increase article saves on Mendeley. We conducted a 4-week randomized controlled trial to examine how promotion of article links in a novel online cross-publisher distribution channel (TrendMD) affects article saves on Mendeley. Four hundred articles published in the Journal of Medical Internet Research were randomized to either the TrendMD arm (n = 200) or the control arm (n = 200) of the study. Our primary outcome compares the 4-week mean Mendeley saves of articles randomized to TrendMD versus control. Articles randomized to TrendMD showed a 77% increase in article saves on Mendeley relative to control. The difference in mean Mendeley saves for TrendMD articles versus control was 2.7, 95% CI (2.63, 2.77), and statistically significant (p < 0.01). There was a positive correlation between pageviews driven by TrendMD and article saves on Mendeley (Spearman's rho r = 0.60). This is the first randomized controlled trial to show how an online cross-publisher distribution channel (TrendMD) enhances article saves on Mendeley. While replication and further study are needed, these data suggest that cross-publisher article recommendations via TrendMD may enhance citations of scholarly articles.

  8. Challenges in Locating Microseismic Events Using Distributed Acoustic Sensors

    Science.gov (United States)

    Williams, A.; Kendall, J. M.; Clarke, A.; Verdon, J.

    2017-12-01

    Microseismic monitoring is an important method of assessing the behaviour of subsurface fluid processes, and is commonly acquired using geophone arrays in boreholes or on the surface. A new alternative technology has been recently developed - fibre-optic Distributed Acoustic Sensing (DAS) - using strain along a fibre-optic cable as a measure of seismic signals. DAS can offer high density arrays and full-well coverage from the surface to bottom, with less overall disruption to operations, so there are many exciting possible applications in monitoring both petroleum and other subsurface industries. However, there are challenges in locating microseismic events recorded using current DAS systems, which only record seismic data in one-component and consequently omit the azimuthal information provided by a three-component geophone. To test the impact of these limitations we used finite difference modelling to generate one-component synthetic DAS datasets and investigated the impact of picking solely P-wave or both P- and S-wave arrivals and the impact of different array geometries. These are then compared to equivalent 3-component synthetic geophone datasets. In simple velocity models, P-wave arrivals along linear arrays cannot be used to constrain locations using DAS, without further a priori information. We then tested the impact of straight cables vs. L-shaped arrays and found improved locations when the cable is deviated, especially when both P- and S-wave picks are included. There is a trade-off between the added coverage of DAS cables versus sparser 3C geophone arrays where particle motion helps constrains locations, which cannot be assessed without forward modelling.

  9. Sharp lower bounds on the extractable randomness from non-uniform sources

    NARCIS (Netherlands)

    Skoric, B.; Obi, C.; Verbitskiy, E.A.; Schoenmakers, B.

    2011-01-01

    Extraction of uniform randomness from (noisy) non-uniform sources is an important primitive in many security applications, e.g. (pseudo-)random number generators, privacy-preserving biometrics, and key storage based on Physical Unclonable Functions. Generic extraction methods exist, using universal

  10. A method for generating skewed random numbers using two overlapping uniform distributions

    International Nuclear Information System (INIS)

    Ermak, D.L.; Nasstrom, J.S.

    1995-02-01

    The objective of this work was to implement and evaluate a method for generating skewed random numbers using a combination of uniform random numbers. The method provides a simple and accurate way of generating skewed random numbers from the specified first three moments without an a priori specification of the probability density function. We describe the procedure for generating skewed random numbers from uniform random numbers, and show that it accurately produces random numbers with the desired first three moments over a range of skewness values. We also show that in the limit of zero skewness, the distribution of random numbers is an accurate approximation to the Gaussian probability density function. Future work will use this method to provide skewed random numbers for a Langevin equation model for diffusion in skewed turbulence.

  11. Covert Communication in MIMO-OFDM System Using Pseudo Random Location of Fake Subcarriers

    Directory of Open Access Journals (Sweden)

    Rizky Pratama Hudhajanto

    2016-08-01

    Full Text Available Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) is the most widely used wireless transmission scheme in the world. However, its security is an important problem when the scheme is used to transmit sensitive data, such as in military and commercial communication systems. In this paper, we propose a new method to increase the security of the MIMO-OFDM system by changing the locations of fake subcarriers. The fake subcarrier locations are generated per data packet using a pseudo-random sequence generator. The simulation results show that the proposed scheme does not decrease the performance of conventional MIMO-OFDM. The attacker or eavesdropper gets a worse Bit Error Rate (BER) than the legal receiver compared to the conventional MIMO-OFDM system.

  12. A lattice Boltzmann simulation of coalescence-induced droplet jumping on superhydrophobic surfaces with randomly distributed structures

    Science.gov (United States)

    Zhang, Li-Zhi; Yuan, Wu-Zhi

    2018-04-01

    The motion of coalescence-induced condensate droplets on superhydrophobic surfaces (SHS) has attracted increasing attention in energy-related applications. Previous research has focused on regularly structured rough surfaces. Here a new approach, a mesoscale lattice Boltzmann method (LBM), is proposed and used to model the dynamic behavior of coalescence-induced droplet jumping on SHS with randomly distributed rough structures. A Fast Fourier Transformation (FFT) method is used to generate non-Gaussian randomly distributed rough surfaces with the skewness (Sk), kurtosis (K) and root mean square roughness (Rq) obtained from real surfaces. Three typical spreading states of coalesced droplets are observed through LBM modeling on various rough surfaces, which are found to significantly influence the jumping ability of the coalesced droplet. Coalesced droplets spreading in the Cassie state or in a composite state will jump off the rough surfaces, while those spreading in the Wenzel state eventually remain on the rough surfaces. It is demonstrated that rough surfaces with smaller Sk, larger Rq and a K at 3.0 are beneficial to coalescence-induced droplet jumping. The new approach gives more detailed insights into the design of SHS.
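    A hedged sketch of the FFT route to a correlated random rough surface: white noise is filtered with a Gaussian spectral filter and rescaled to a target root mean square roughness Rq. Matching a target skewness Sk and kurtosis K, as done in the paper, requires an additional translation step (for example a Johnson-system transform) that is omitted here; all parameter values are illustrative.

```python
import numpy as np

def rough_surface(n=256, corr_len=8.0, rq=1.0, rng=None):
    """Correlated Gaussian height field with RMS roughness rq (lengths in samples)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((n, n))
    k = np.fft.fftfreq(n)                       # spatial frequencies (cycles/sample)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    filt = np.exp(-0.5 * corr_len ** 2 * (2 * np.pi) ** 2 * (kx ** 2 + ky ** 2))
    h = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))
    h -= h.mean()
    return h * (rq / h.std())                   # impose the target Rq

surface = rough_surface()
print(surface.std())                            # ~1.0 by construction
```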

  13. Micromechanical Modeling of Fiber-Reinforced Composites with Statistically Equivalent Random Fiber Distribution

    Directory of Open Access Journals (Sweden)

    Wenzhi Wang

    2016-07-01

    Full Text Available Modeling the random fiber distribution of a fiber-reinforced composite is of great importance for studying the progressive failure behavior of the material on the micro scale. In this paper, we develop a new algorithm for generating random representative volume elements (RVEs) with a statistically equivalent fiber distribution against the actual material microstructure. Realistic statistical data are utilized as inputs to the new method, which is achieved through implementation of the probability equations. Extensive statistical analysis is conducted to examine the capability of the proposed method and to compare it with existing methods. It is found that the proposed method presents a good match with experimental results in all aspects including the nearest neighbor distance, nearest neighbor orientation, Ripley’s K function, and the radial distribution function. Finite element analysis is presented to predict the effective elastic properties of a carbon/epoxy composite, to validate the generated random representative volume elements, and to provide insights into the effect of fiber distribution on the elastic properties. The present algorithm is shown to be highly accurate and can be used to generate statistically equivalent RVEs for not only fiber-reinforced composites but also other materials such as foam materials and particle-reinforced composites.

  14. Softening in Random Networks of Non-Identical Beams.

    Science.gov (United States)

    Ban, Ehsan; Barocas, Victor H; Shephard, Mark S; Picu, Catalin R

    2016-02-01

    Random fiber networks are assemblies of elastic elements connected in random configurations. They are used as models for a broad range of fibrous materials including biopolymer gels and synthetic nonwovens. Although the mechanics of networks made from the same type of fibers has been studied extensively, the behavior of composite systems of fibers with different properties has received less attention. In this work we numerically and theoretically study random networks of beams and springs of different mechanical properties. We observe that the overall network stiffness decreases on average as the variability of fiber stiffness increases, at constant mean fiber stiffness. Numerical results and analytical arguments show that for small variabilities in fiber stiffness the amount of network softening scales linearly with the variance of the fiber stiffness distribution. This result holds for any beam structure and is expected to apply to a broad range of materials including cellular solids.

  15. Distribution Locational Marginal Pricing for Optimal Electric Vehicle Charging through Chance Constrained Mixed-Integer Programming

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Oren, Shmuel S.

    2017-01-01

    This paper presents a distribution locational marginal pricing (DLMP) method through chance constrained mixed-integer programming designed to alleviate the possible congestion in the future distribution network with high penetration of electric vehicles (EVs). In order to represent the stochastic...

  16. Functional redundancy patterns reveal non-random assembly rules in a species-rich marine assemblage.

    Directory of Open Access Journals (Sweden)

    Nicolas Guillemot

    Full Text Available The relationship between species and the functional diversity of assemblages is fundamental in ecology because it contains key information on functional redundancy, and functionally redundant ecosystems are thought to be more resilient, resistant and stable. However, this relationship is poorly understood and undocumented for species-rich coastal marine ecosystems. Here, we used underwater visual censuses to examine the patterns of functional redundancy for one of the most diverse vertebrate assemblages, the coral reef fishes of New Caledonia, South Pacific. First, we found that the relationship between functional and species diversity displayed a non-asymptotic power-shaped curve, implying that rare functions and species mainly occur in highly diverse assemblages. Second, we showed that the distribution of species amongst possible functions was significantly different from a random distribution up to a threshold of ∼90 species/transect. Redundancy patterns for each function further revealed that some functions displayed fast rates of increase in redundancy at low species diversity, whereas others were only becoming redundant past a certain threshold. This suggested non-random assembly rules and the existence of some primordial functions that would need to be fulfilled in priority so that coral reef fish assemblages can gain a basic ecological structure. Last, we found little effect of habitat on the shape of the functional-species diversity relationship and on the redundancy of functions, although habitat is known to largely determine assemblage characteristics such as species composition, biomass, and abundance. Our study shows that low functional redundancy is characteristic of this highly diverse fish assemblage, and, therefore, that even species-rich ecosystems such as coral reefs may be vulnerable to the removal of a few keystone species.

  17. Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile

    Science.gov (United States)

    Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.

    2012-09-01

    Most superdiffusive Non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question, by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half the present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
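    An illustrative simulation of the memory rule described above, assuming unit steps and an elephant-walk-style feedback parameter p (repeat a recalled step with probability p, invert it otherwise); the recalled step is drawn with Gaussian weights centred at t/2 and a width growing linearly with t. Parameter values are arbitrary and not taken from the paper.

```python
import numpy as np

def gaussian_memory_walk(n_steps, rng, p=0.75, width_frac=0.1):
    steps = [1 if rng.random() < 0.5 else -1]        # first step is unbiased
    for t in range(1, n_steps):
        centre, sigma = t / 2.0, max(width_frac * t, 1e-6)
        weights = np.exp(-0.5 * ((np.arange(t) - centre) / sigma) ** 2)
        weights /= weights.sum()
        recalled = rng.choice(steps, p=weights)      # recall a weighted past step
        steps.append(recalled if rng.random() < p else -recalled)
    return np.cumsum(steps)

rng = np.random.default_rng(3)
ends = np.array([gaussian_memory_walk(1000, rng)[-1] for _ in range(100)])
# For ordinary diffusion the end-point variance would be ~1000 here;
# persistence (p > 1/2) typically pushes it above that.
print(ends.var())
```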

  18. Correction of confounding bias in non-randomized studies by appropriate weighting.

    Science.gov (United States)

    Schmoor, Claudia; Gall, Christine; Stampf, Susanne; Graf, Erika

    2011-03-01

    In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. INEMO: Distributed RF-Based Indoor Location Determination with Confidence Indicator

    Directory of Open Access Journals (Sweden)

    Youxian Sun

    2007-12-01

    Full Text Available Using radio signal strength (RSS) for localization in sensor networks is an attractive method since it is a cost-efficient way to provide range indication. In this paper, we present a two-tier distributed approach for RF-based indoor location determination. Our approach, namely INEMO, provides positioning accuracy at room granularity and office cube granularity. A target can first issue a room-granularity request, and the background anchor nodes cooperate to accomplish the positioning process. Anchors in the same room can provide cube granularity if the target requires further accuracy. Fixed anchor nodes keep monitoring the status of nearby anchors, and local reference matching is used to support room separation. Furthermore, we utilize the RSS difference to infer the positioning confidence. The simulation results demonstrate the efficiency of the proposed RF-based indoor location determination.

  20. Simulation of the K-function in the analysis of spatial clustering for non-randomly distributed locations-Exemplified by bovine virus diarrhoea virus (BVDV) infection in Denmark

    DEFF Research Database (Denmark)

    Ersbøll, Annette Kjær; Ersbøll, Bjarne Kjær

    2009-01-01

    -infected (N-N+)). The differences between the empirical and the estimated null-hypothesis version of the K-function are plotted together with the 95% simulation envelopes versus the distance, h. In this way we test if the spatial distribution of the infected herds differs from the spatial distribution...

  1. The distribution of the number of node neighbors in random hypergraphs

    International Nuclear Information System (INIS)

    López, Eduardo

    2013-01-01

    Hypergraphs, the generalization of graphs in which edges become conglomerates of r nodes called hyperedges of rank r ⩾ 2, are excellent models to study systems with interactions that are beyond the pairwise level. For hypergraphs, the node degree ℓ (number of hyperedges connected to a node) and the number of neighbors k of a node differ from each other, in contrast to the case of graphs, where counting the number of edges is equivalent to counting the number of neighbors. In this paper, I calculate the distribution of the number of node neighbors in random hypergraphs in which hyperedges of uniform rank r have a homogeneous (equal for all hyperedges) probability p to appear. This distribution is equivalent to the degree distribution of ensembles of graphs created as projections of hypergraph or bipartite network ensembles, where the projection connects any two nodes in the projected graph when they are also connected in the hypergraph or bipartite network. The calculation is non-trivial due to the possibility that neighbor nodes belong simultaneously to multiple hyperedges (node overlaps). From the exact results, the traditional asymptotic approximation to the distribution in the sparse regime (small p), where overlaps are ignored, is rederived and improved; the approximation exhibits Poisson-like behavior accompanied by strong fluctuations modulated by power-law decays in the system size N, with decay exponents equal to the minimum number of overlapping nodes possible for a given number of neighbors. It is shown that the dense limit cannot be explained if overlaps are ignored, and the correct asymptotic distribution is provided. The neighbor distribution requires the calculation of a new combinatorial coefficient Q_{r−1}(k, ℓ), which counts the number of distinct labeled hypergraphs of k nodes, ℓ hyperedges of rank r − 1, and where every node is connected to at least one hyperedge. Some identities of Q_{r−1}(k, ℓ) are derived and applied to the
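    A hedged Monte Carlo counterpart of the quantity computed analytically in the paper: the empirical distribution of the number of distinct neighbours of one node in a random r-uniform hypergraph where each possible hyperedge appears independently with probability p. The parameters are small illustrative values, chosen so the enumeration of hyperedges stays cheap.

```python
import itertools
import numpy as np

def neighbour_distribution(n=12, r=3, p=0.05, trials=2000, rng=None):
    """Empirical P(k) for the number of neighbours k of node 0."""
    rng = np.random.default_rng() if rng is None else rng
    edges_with_0 = [e for e in itertools.combinations(range(n), r) if 0 in e]
    counts = []
    for _ in range(trials):
        neighbours = set()
        for e in edges_with_0:
            if rng.random() < p:        # this hyperedge is present
                neighbours.update(e)
        neighbours.discard(0)
        counts.append(len(neighbours))
    return np.bincount(counts, minlength=n) / trials

print(neighbour_distribution())         # node overlaps make this non-trivial
```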

  2. Measurement of fuel importance distribution in non-uniformly distributed fuel systems

    International Nuclear Information System (INIS)

    Yamane, Yoshihiro; Hirano, Yasushi; Yasui, Hazime; Izima, Kazunori; Shiroya, Seiji; Kobayashi, Keiji.

    1995-01-01

    The reactivity effect due to a spatial variation of nuclear fuel concentration is an important problem for nuclear criticality safety in a reprocessing plant. The Goertzel and fuel importance theories are well-known approaches for estimating this reactivity effect. It has been shown that Goertzel's theory is valid in the range of our experiments, based on measurements of the reactivity effect and thermal neutron flux in non-uniformly distributed fuel systems. On the other hand, there have been no reports of systematic experimental studies on the flatness of fuel importance, which is a more general index than Goertzel's theory. It follows from perturbation theory that the fuel importance is proportional to the reactivity change resulting from a change of a small amount of fuel mass. Using a uniform and three kinds of non-uniform fuel systems consisting of 93.2% enriched uranium plates and polyethylene plates, the fuel importance distributions were measured. It was found experimentally that the fuel importance distribution became flatter as its reactivity effect became larger. It was therefore concluded that the flatness of the fuel importance distribution is a useful index for estimating the reactivity effect of non-uniformly distributed fuel systems. (author)

  3. Hessian eigenvalue distribution in a random Gaussian landscape

    Science.gov (United States)

    Yamada, Masaki; Vilenkin, Alexander

    2018-03-01

    The energy landscape of multiverse cosmology is often modeled by a multi-dimensional random Gaussian potential. The physical predictions of such models crucially depend on the eigenvalue distribution of the Hessian matrix at potential minima. In particular, the stability of vacua and the dynamics of slow-roll inflation are sensitive to the magnitude of the smallest eigenvalues. The Hessian eigenvalue distribution has been studied earlier, using the saddle point approximation, in the leading order of 1/ N expansion, where N is the dimensionality of the landscape. This approximation, however, is insufficient for the small eigenvalue end of the spectrum, where sub-leading terms play a significant role. We extend the saddle point method to account for the sub-leading contributions. We also develop a new approach, where the eigenvalue distribution is found as an equilibrium distribution at the endpoint of a stochastic process (Dyson Brownian motion). The results of the two approaches are consistent in cases where both methods are applicable. We discuss the implications of our results for vacuum stability and slow-roll inflation in the landscape.

  4. Multi criteria decision making methods for location selection of distribution centers

    Directory of Open Access Journals (Sweden)

    Romita Chakraborty

    2013-10-01

    Full Text Available In recent years, facing challenges such as increasingly inflexible consumer demands and the need to improve competitive advantage, industrial organizations all over the world have had to focus on strategies that help them achieve cost reduction, continual quality improvement, increased customer satisfaction and on-time delivery performance. As a result, selection of the most suitable facility location for a new organization, or for expansion of an existing one, is one of the most important strategic issues in fulfilling these objectives. In order to survive in the global competitive market of the 21st century, many industrial organizations have begun to concentrate on the proper selection of the plant site or best facility location. The best location is the one that yields the highest economic benefit through increased productivity and a good distribution network. When a choice is to be made from among several alternative facility locations, it is necessary to compare their performance characteristics in a decisive way. As the facility location selection problem involves multiple conflicting criteria and a finite set of potential candidate alternatives, different multi-criteria decision-making (MCDM) methods can be effectively applied to solve this type of problem. In this paper, four well-known MCDM methods are applied to a facility location selection problem and their relative ranking performances are compared. Because of disagreement among the ranks obtained by the four MCDM methods, a final ranking method based on REGIME is proposed by the authors to facilitate the decision-making process.
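
    As a minimal illustration of MCDM-style ranking (a simple additive weighting scheme, not one of the four methods compared in the paper and not REGIME), the sketch below normalizes a made-up decision matrix, flips the cost-type criteria, and ranks candidate locations by their weighted scores; all scores and weights are assumptions.

```python
import numpy as np

# Hypothetical decision matrix: 4 candidate locations x 4 criteria
# (cost, distance to market, labour availability, expansion potential).
scores = np.array([
    [120.0, 35.0, 7.0, 6.0],
    [ 95.0, 50.0, 6.0, 8.0],
    [140.0, 20.0, 8.0, 5.0],
    [110.0, 40.0, 9.0, 7.0],
])
benefit = np.array([False, False, True, True])  # cost and distance: lower is better
weights = np.array([0.35, 0.25, 0.20, 0.20])    # assumed importance weights

# Normalise each criterion to [0, 1]; invert the cost-type criteria.
lo, hi = scores.min(axis=0), scores.max(axis=0)
norm = (scores - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

ranking = norm @ weights
for idx in np.argsort(ranking)[::-1]:
    print(f"Location {chr(65 + idx)}: weighted score = {ranking[idx]:.3f}")
```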

  5. An efficient method of randomly sampling the coherent angular scatter distribution

    International Nuclear Information System (INIS)

    Williamson, J.F.; Morin, R.L.

    1983-01-01

    Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)

  6. Stereotactic Body Radiotherapy for Centrally Located Non-small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Yuming WAN

    2018-05-01

    Full Text Available Several studies have shown that local control rates of about 90% can be achieved with stereotactic body radiotherapy (SBRT) in patients with medically inoperable stage I non-small cell lung cancer (NSCLC), and the reported SBRT-associated overall survival and tumor-specific survival are comparable with those of patients treated with surgery. SBRT has been accepted as the first-line treatment for inoperable patients with peripherally located stage I NSCLC. However, the role of SBRT in centrally located lesions is controversial because of potential toxic effects on adjacent anatomical structures. This paper reviews the definition, indications, dose regimens, dose-volume constraints for organs at risk, radiation technology and treatment side effects of centrally located NSCLC treated with SBRT and stereotactic body proton therapy.

  7. A method for the generation of random multiple Coulomb scattering angles

    International Nuclear Information System (INIS)

    Campbell, J.R.

    1995-06-01

    A method for the random generation of spatial angles drawn from non-Gaussian multiple Coulomb scattering distributions is presented. The method employs direct numerical inversion of cumulative probability distributions computed from the universal non-Gaussian angular distributions of Marion and Zimmerman. (author). 12 refs., 3 figs
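
    The method above rests on direct numerical inversion of a tabulated cumulative distribution (as does the coherent-scatter sampling record earlier in this list). The sketch below applies that idea to an arbitrary stand-in angular density; it is not the Marion-Zimmerman universal distribution nor a coherent-scatter form factor, and the grid, density shape and function names are assumptions.

```python
import numpy as np

def build_inverse_cdf(theta_grid, pdf_values):
    """Tabulate the CDF of an angular distribution and return a sampler that
    inverts it numerically (linear interpolation on the tabulated points)."""
    cdf = np.cumsum(pdf_values)
    cdf /= cdf[-1]
    def sample(n, rng=np.random.default_rng()):
        u = rng.random(n)
        # np.interp clamps u below cdf[0] to theta_grid[0]
        return np.interp(u, cdf, theta_grid)
    return sample

if __name__ == "__main__":
    theta = np.linspace(1e-3, 0.2, 500)            # scattering-angle grid (rad)
    pdf = theta * np.exp(-(theta / 0.02) ** 1.5)   # stand-in non-Gaussian shape
    sampler = build_inverse_cdf(theta, pdf)
    draws = sampler(100000)
    print("mean angle:", draws.mean(), "95th percentile:", np.percentile(draws, 95))
```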

  8. Optimal Location and Sizing of UPQC in Distribution Networks Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    Seyed Abbas Taher

    2012-01-01

    Full Text Available A differential evolution (DE) algorithm is used to determine the optimal location of a unified power quality conditioner (UPQC), considering its size, in radial distribution systems. The problem is formulated to find the optimum location of the UPQC based on an objective function (OF) defined for improving the voltage and current profiles, reducing power loss and minimizing the investment costs, considering the OF's weighting factors. Hence, a steady-state model of the UPQC is derived and embedded in a forward/backward sweep load flow. Studies are performed on two standard IEEE distribution networks (33-bus and 69-bus). Accuracy was evaluated by reapplying the procedures using both genetic (GA) and immune (IA) algorithms. Comparative results indicate that DE comes closer to the global optimum in minimizing the OF and reaching all the desired conditions than GA and IA.
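
    As a toy illustration of using differential evolution for a placement-and-sizing search (not the paper's objective function or its forward/backward sweep load flow), the sketch below minimizes a made-up two-term objective, a loss proxy plus an investment-cost proxy, over a continuous feeder position and UPQC size with SciPy's differential_evolution; the weights, bounds and proxy formulas are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical objective: weighted sum of a loss proxy (depends on where along
# the feeder the device sits and how big it is) and an investment-cost proxy.
W_LOSS, W_COST = 0.7, 0.3

def objective(x):
    location, size_mva = x            # location in [0, 1] along the feeder
    loss_proxy = (location - 0.62) ** 2 + 0.5 / (1.0 + 4.0 * size_mva)
    cost_proxy = 0.15 * size_mva ** 1.2
    return W_LOSS * loss_proxy + W_COST * cost_proxy

result = differential_evolution(objective,
                                bounds=[(0.0, 1.0), (0.1, 5.0)],
                                seed=42, tol=1e-8)
print("optimal (location, size):", np.round(result.x, 3),
      "objective:", round(result.fun, 4))
```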

  9. Two-terminal reliability of a mobile ad hoc network under the asymptotic spatial distribution of the random waypoint model

    International Nuclear Information System (INIS)

    Chen, Binchao; Phillips, Aaron; Matis, Timothy I.

    2012-01-01

    The random waypoint (RWP) mobility model is frequently used in describing the movement pattern of mobile users in a mobile ad hoc network (MANET). As the asymptotic spatial distribution of nodes under a RWP model exhibits central tendency, the two-terminal reliability of the MANET is investigated as a function of the source node location. In particular, analytical expressions for one and two hop connectivities are developed as well as an efficient simulation methodology for two-terminal reliability. A study is then performed to assess the effect of nodal density and network topology on network reliability.

  10. On intra-supply chain system with an improved distribution plan, multiple sales locations and quality assurance.

    Science.gov (United States)

    Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang

    2015-01-01

    Transnational companies, operating in extremely competitive global markets, always seek to lower their various operating costs, such as inventory holding costs in their intra-supply chain system. This paper incorporates a cost-reducing product distribution policy into the intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost-reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework, when the remaining production lot has gone through quality assurance, n fixed-quantity installments of finished items are transported to the sales locations at a fixed time interval. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and the sales locations. The alternative of outsourcing the product delivery task to an external distributor is analyzed to assist managerial decision making on potential outsourcing issues and to facilitate further reduction in operating costs.

  11. Dipole location using SQUID based measurements: Application to magnetocardiography

    Science.gov (United States)

    Mariyappa, N.; Parasakthi, C.; Sengottuvel, S.; Gireesan, K.; Patel, Rajesh; Janawadkar, M. P.; Sundar, C. S.; Radhakrishnan, T. S.

    2012-07-01

    We report a method of inferring the dipole location using iterative nonlinear least square optimization based on Levenberg-Marquardt algorithm, wherein, we use different sets of pseudo-random numbers as initial parameter values. The method has been applied to (i) the simulated data representing the calculated magnetic field distribution produced by a point dipole placed at a known position, (ii) the experimental data from SQUID based measurements of the magnetic field distribution produced by a source coil carrying current, and (iii) the actual experimentally measured magnetocardiograms of human subjects using a SQUID based system.
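
    A minimal sketch of this kind of fit, under assumptions of my own (a single point dipole with its moment fixed along z, a small planar sensor grid, and synthetic noisy data), is given below: SciPy's Levenberg-Marquardt least-squares solver is restarted from several pseudo-random initial guesses and the lowest-cost solution is kept. It is not the authors' actual forward model or sensor geometry.

```python
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu0 / (4*pi), SI units

def dipole_bz(params, sensors):
    """Bz at the sensor positions from a point dipole whose moment points along z."""
    x0, y0, z0, m = params
    r = sensors - np.array([x0, y0, z0])
    d = np.linalg.norm(r, axis=1)
    return MU0_4PI * m * (3.0 * r[:, 2] ** 2 / d ** 5 - 1.0 / d ** 3)

def fit_dipole(sensors, measured, n_starts=20, seed=0):
    """Levenberg-Marquardt fits started from several pseudo-random initial guesses."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        guess = np.array([rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1),
                          rng.uniform(-0.15, -0.02), rng.uniform(0.1, 10.0)])
        res = least_squares(lambda p: dipole_bz(p, sensors) - measured,
                            guess, method="lm")
        if best is None or res.cost < best.cost:
            best = res
    return best

if __name__ == "__main__":
    g = np.linspace(-0.1, 0.1, 6)                        # 6 x 6 sensor grid (metres)
    xs, ys = np.meshgrid(g, g)
    sensors = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
    truth = np.array([0.02, -0.03, -0.06, 2.5])          # hypothetical source
    data = dipole_bz(truth, sensors)
    data += np.random.default_rng(1).normal(0.0, 1e-6, data.size)  # sensor noise
    fit = fit_dipole(sensors, data)
    print("recovered (x, y, z, moment):", np.round(fit.x, 4))
```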

  12. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    Science.gov (United States)

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
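
    The model-selection machinery described here (maximum-likelihood fits compared by AIC) is easy to reproduce; the DPLN family itself is not shipped with SciPy, so the sketch below only runs the comparison for three of the simpler candidates (lognormal, exponential, Pareto) on synthetic lognormal data. The data, the fixed-location constraints and the parameter counts are assumptions for illustration.

```python
import numpy as np
from scipy import stats

def aic(logL, k):
    """Akaike Information Criterion: lower is better."""
    return 2 * k - 2 * logL

# Synthetic positive "phenotypic variation" data (stand-in for the knockout set).
rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.6, size=2000)

candidates = {
    "lognormal":   (stats.lognorm, {"floc": 0}),
    "exponential": (stats.expon,   {"floc": 0}),
    "pareto":      (stats.pareto,  {"floc": 0}),
}

for name, (dist, fit_kwargs) in candidates.items():
    params = dist.fit(data, **fit_kwargs)
    logL = np.sum(dist.logpdf(data, *params))
    # Free parameters = all fitted parameters minus the ones held fixed.
    k = len(params) - sum(1 for key in fit_kwargs if key.startswith("f"))
    print(f"{name:12s} AIC = {aic(logL, k):10.1f}")
```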

  13. Averaging in SU(2) open quantum random walk

    International Nuclear Information System (INIS)

    Ampadu Clement

    2014-01-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT

  14. Averaging in SU(2) open quantum random walk

    Science.gov (United States)

    Clement, Ampadu

    2014-03-01

    We study the average position and the symmetry of the distribution in the SU(2) open quantum random walk (OQRW). We show that the average position in the central limit theorem (CLT) is non-uniform compared with the average position in the non-CLT. The symmetry of distribution is shown to be even in the CLT.

  15. Modelling the spectral irradiance distribution in sunny inland locations using an ANN-based methodology

    International Nuclear Information System (INIS)

    Torres-Ramírez, M.; Elizondo, D.; García-Domingo, B.; Nofuentes, G.; Talavera, D.L.

    2015-01-01

    This work is aimed at verifying that in sunny inland locations artificial intelligence techniques may provide an estimation of the spectral irradiance with adequate accuracy for photovoltaic applications. An ANN (artificial neural network) based method was developed, trained and tested to model the spectral distributions between wavelengths ranging from 350 to 1050 nm. Only commonly available input data such as geographical information regarding location, specific date and time together with horizontal global irradiance and ambient temperature are required. Historical information from a 24-month experimental campaign carried out in Jaén (Spain) provided the necessary data to train and test the ANN tool. A Kohonen self-organized map was used as an innovative technique to classify the whole input dataset and build a small and representative training dataset. The shape of the spectral irradiance distribution, the in-plane global irradiance (G_T) and irradiation (H_T) and the APE (average photon energy) values obtained through the ANN method were statistically compared to the experimental ones. In terms of shape distribution fitting, the mean relative deformation error stays below 4.81%. The root mean square percentage error is around 6.89% and 0.45% when estimating G_T and APE, respectively. Regarding H_T, errors lie below 3.18% in all cases. - Highlights: • ANN-based model to estimate the spectral irradiance distribution in sunny inland locations. • MRDE values stay below 4.81% in spectral irradiance distribution shape fitting. • RMSPE is about 6.89% for the in-plane global irradiance and 0.45% for the average photon energy. • Errors stay below 3.18% for all the months of the year in incident irradiation terms. • Improvement of assessment of the impact of the solar spectrum in the performance of a PV module

  16. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    Science.gov (United States)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, the task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It is proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.

  17. Robustness to non-normality of various tests for the one-sample location problem

    Directory of Open Access Journals (Sweden)

    Michelle K. McDougall

    2004-01-01

    Full Text Available This paper studies the effect of the normal distribution assumption on the power and size of the sign test, Wilcoxon's signed rank test and the t-test when used in one-sample location problems. Power functions for these tests under various skewness and kurtosis conditions are produced for several sample sizes from simulated data using the g-and-k distribution of MacGillivray and Cannon [5].
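
    A small power simulation in this spirit is sketched below: it estimates the empirical size and power of the sign test, Wilcoxon's signed-rank test and the one-sample t-test against a skewed alternative. For simplicity the skewed data come from a centred lognormal rather than the g-and-k distribution used in the paper, and the sample size, shift and replication counts are arbitrary choices.

```python
import numpy as np
from scipy import stats

def power_estimates(n=30, shift=0.3, reps=2000, alpha=0.05, seed=0):
    """Empirical rejection rates of three one-sample location tests of
    H0: location = 0 when the data are skewed and shifted by `shift`."""
    rng = np.random.default_rng(seed)
    hits = {"sign": 0, "wilcoxon": 0, "t": 0}
    for _ in range(reps):
        x = rng.lognormal(0.0, 0.75, n) - 1.0 + shift   # lognormal median is 1
        # Sign test via the binomial distribution of positive signs.
        k = int(np.sum(x > 0))
        hits["sign"] += stats.binomtest(k, n, 0.5).pvalue < alpha
        hits["wilcoxon"] += stats.wilcoxon(x).pvalue < alpha
        hits["t"] += stats.ttest_1samp(x, 0.0).pvalue < alpha
    return {name: count / reps for name, count in hits.items()}

if __name__ == "__main__":
    print("size  (shift = 0):  ", power_estimates(shift=0.0))
    print("power (shift = 0.3):", power_estimates(shift=0.3))
```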

  18. A Randomized Central Limit Theorem

    International Nuclear Information System (INIS)

    Eliazar, Iddo; Klafter, Joseph

    2010-01-01

    The Central Limit Theorem (CLT), one of the most elemental pillars of Probability Theory and Statistical Physics, asserts that: the universal probability law of large aggregates of independent and identically distributed random summands with zero mean and finite variance, scaled by the square root of the aggregate-size (√(n)), is Gaussian. The scaling scheme of the CLT is deterministic and uniform - scaling all aggregate-summands by the common and deterministic factor √(n). This Letter considers scaling schemes which are stochastic and non-uniform, and presents a 'Randomized Central Limit Theorem' (RCLT): we establish a class of random scaling schemes which yields universal probability laws of large aggregates of independent and identically distributed random summands. The RCLT universal probability laws, in turn, are the one-sided and the symmetric Levy laws.

  19. Detection of arcing ground fault location on a distribution network connected PV system; Hikarihatsuden renkei haidensen ni okeru koko chiryaku kukan no kenshutsuho

    Energy Technology Data Exchange (ETDEWEB)

    Sato, M; Iwaya, K; Morooka, Y [Hachinohe Institute of Technology, Aomori (Japan)

    1996-10-27

    In the near future, a great number of small-scale distributed power sources, such as photovoltaic generation for ordinary houses, are expected to be interconnected with the ungrounded-neutral distribution system in Japan. Once a ground fault at commercial frequency occurs, serious damage can easily be expected. This paper discusses the effect of the ground fault on the ground-phase current using a 6.6 kV high-voltage model system, taking into account the non-linear self-inductance in the line and the non-linear frequency dependence of the arcing ground fault current. In the present method, the marked difference in series resonance frequency, determined by the inductance and the earth capacitance, between the source side and the load side is utilized for detecting the location of a high-voltage arcing ground fault. In this method, there are cases in which the non-linear effect obtained by measuring the inductance of the sound phase, including the secondary winding of the transformer, cannot be neglected. In particular, for the actual high-voltage system, it was shown that the frequency characteristics of the distribution transformer inductance should be theoretically derived in the frequency range between 2 kHz and 6 kHz. 2 refs., 5 figs., 1 tab.

  20. Quantifying geocode location error using GIS methods

    Directory of Open Access Journals (Sweden)

    Gardner Bennett R

    2007-04-01

    Full Text Available Abstract Background The Metropolitan Atlanta Congenital Defects Program (MACDP collects maternal address information at the time of delivery for infants and fetuses with birth defects. These addresses have been geocoded by two independent agencies: (1 the Georgia Division of Public Health Office of Health Information and Policy (OHIP and (2 a commercial vendor. Geographic information system (GIS methods were used to quantify uncertainty in the two sets of geocodes using orthoimagery and tax parcel datasets. Methods We sampled 599 infants and fetuses with birth defects delivered during 1994–2002 with maternal residence in either Fulton or Gwinnett County. Tax parcel datasets were obtained from the tax assessor's offices of Fulton and Gwinnett County. High-resolution orthoimagery for these counties was acquired from the U.S. Geological Survey. For each of the 599 addresses we attempted to locate the tax parcel corresponding to the maternal address. If the tax parcel was identified the distance and the angle between the geocode and the residence were calculated. We used simulated data to characterize the impact of geocode location error. In each county 5,000 geocodes were generated and assigned their corresponding Census 2000 tract. Each geocode was then displaced at a random angle by a random distance drawn from the distribution of observed geocode location errors. The census tract of the displaced geocode was determined. We repeated this process 5,000 times and report the percentage of geocodes that resolved into incorrect census tracts. Results Median location error was less than 100 meters for both OHIP and commercial vendor geocodes; the distribution of angles appeared uniform. Median location error was approximately 35% larger in Gwinnett (a suburban county relative to Fulton (a county with urban and suburban areas. Location error occasionally caused the simulated geocodes to be displaced into incorrect census tracts; the median percentage

  1. Extended q -Gaussian and q -exponential distributions from gamma random variables

    Science.gov (United States)

    Budini, Adrián A.

    2015-05-01

    The family of q -Gaussian and q -exponential probability densities fit the statistical behavior of diverse complex self-similar nonequilibrium systems. These distributions, independently of the underlying dynamics, can rigorously be obtained by maximizing Tsallis "nonextensive" entropy under appropriate constraints, as well as from superstatistical models. In this paper we provide an alternative and complementary scheme for deriving these objects. We show that q -Gaussian and q -exponential random variables can always be expressed as a function of two statistically independent gamma random variables with the same scale parameter. Their shape index determines the complexity q parameter. This result also allows us to define an extended family of asymmetric q -Gaussian and modified q -exponential densities, which reduce to the standard ones when the shape parameters are the same. Furthermore, we demonstrate that a simple change of variables always allows relating any of these distributions with a beta stochastic variable. The extended distributions are applied in the statistical description of different complex dynamics such as log-return signals in financial markets and motion of point defects in a fluid flow.

  2. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    Science.gov (United States)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over Rp, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
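
    As a reference point of the kind the saddlepoint results are checked against, the sketch below Monte Carlo simulates the p = 3 case with uniform step directions, exponentially distributed step lengths and a Poisson-distributed number of steps, and reports features of the resulting distance distribution. The parameters are arbitrary and the saddlepoint approximation itself is not computed here.

```python
import numpy as np

def ctrw_distances(n_walks=20000, mean_steps=10.0, mean_len=1.0, seed=0):
    """Distance from the origin after a Poisson(mean_steps) number of steps in
    R^3 with uniform directions and exponential(mean_len) step lengths."""
    rng = np.random.default_rng(seed)
    distances = np.empty(n_walks)
    for i in range(n_walks):
        n = rng.poisson(mean_steps)
        if n == 0:
            distances[i] = 0.0
            continue
        # Uniform directions on the sphere via normalised Gaussian vectors.
        dirs = rng.normal(size=(n, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        steps = rng.exponential(mean_len, size=n)[:, None] * dirs
        distances[i] = np.linalg.norm(steps.sum(axis=0))
    return distances

if __name__ == "__main__":
    d = ctrw_distances()
    print("mean distance:", round(d.mean(), 3),
          "; P(distance > 5):", round(np.mean(d > 5.0), 4))
```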

  3. Patterns of particle distribution in multiparticle systems by random walks with memory enhancement and decay

    Science.gov (United States)

    Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi

    2002-07-01

    We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension D_{f,p} characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of D_{f,p} on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.

  4. RESEARCH OF THE LAW OF DISTRIBUTION OF THE RANDOM VARIABLE OF THE COMPRESSION

    Directory of Open Access Journals (Sweden)

    I. Sarayeva

    2011-01-01

    Full Text Available In research on diagnosing modern automobile engines by means of mathematical statistics, experimental data on the random variable of compression are analysed, and it is shown that the random variable of compression follows the normal law of distribution.

  5. Electrospun dye-doped fiber networks: lasing emission from randomly distributed cavities

    DEFF Research Database (Denmark)

    Krammer, Sarah; Vannahme, Christoph; Smith, Cameron

    2015-01-01

    Dye-doped polymer fiber networks fabricated with electrospinning exhibit comb-like laser emission. Using spatially resolved spectroscopy, we identify randomly distributed ring resonators as being responsible for the lasing emission. Numerical simulations confirm this result quantitatively.

  6. Three-dimensional distribution of random velocity inhomogeneities at the Nankai trough seismogenic zone

    Science.gov (United States)

    Takahashi, T.; Obana, K.; Yamamoto, Y.; Nakanishi, A.; Kaiho, Y.; Kodaira, S.; Kaneda, Y.

    2012-12-01

    The Nankai trough in southwestern Japan is a convergent margin where the Philippine sea plate is subducted beneath the Eurasian plate. There are major fault segments of huge earthquakes that are called the Tokai, Tonankai and Nankai earthquakes. According to the earthquake occurrence history over the past hundreds of years, we must expect various rupture patterns such as simultaneous or nearly continuous ruptures of plural fault segments. Japan Agency for Marine-Earth Science and Technology (JAMSTEC) conducted seismic surveys at Nankai trough in order to clarify mutual relations between seismic structures and fault segments, as a part of "Research concerning Interaction Between the Tokai, Tonankai and Nankai Earthquakes" funded by Ministry of Education, Culture, Sports, Science and Technology, Japan. This study evaluated the spatial distribution of random velocity inhomogeneities from Hyuga-nada to Kii-channel by using velocity seismograms of small and moderate-sized earthquakes. Random velocity inhomogeneities are estimated by the peak delay time analysis of S-wave envelopes (e.g., Takahashi et al. 2009). Peak delay time is defined as the time lag from the S-wave onset to its maximal amplitude arrival. This quantity mainly reflects the accumulated multiple forward scattering effect due to random inhomogeneities, and is quite insensitive to the inelastic attenuation. Peak delay times are measured from the rms envelopes of horizontal components at 4-8Hz, 8-16Hz and 16-32Hz. This study used the velocity seismograms that are recorded by 495 ocean bottom seismographs and 378 onshore seismic stations. Onshore stations are composed of the F-net and Hi-net stations that are maintained by National Research Institute for Earth Science and Disaster Prevention (NIED) of Japan. It is assumed that the random inhomogeneities are represented by the von Karman type PSDF. Preliminary results of the inversion analysis show that the spectral gradient of the PSDF (i.e., scale dependence of

  7. A Note on the Tail Behavior of Randomly Weighted Sums with Convolution-Equivalently Distributed Random Variables

    Directory of Open Access Journals (Sweden)

    Yang Yang

    2013-01-01

    Full Text Available We investigate the tail asymptotic behavior of randomly weighted sums with increments having convolution-equivalent distributions. Our result can be directly applied to a discrete-time insurance risk model with insurance and financial risks to derive the asymptotics of the finite-time probability for this risk model.

  8. Prediction future asset price which is non-concordant with the historical distribution

    Science.gov (United States)

    Seong, Ng Yew; Hin, Pooi Ah

    2015-12-01

    This paper attempts to predict the major characteristics of the future asset price which is non-concordant with the distribution estimated from the price today and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present non-concordant asset price, the indicator which denotes that the non-concordant price is extremely small or large by its values -1 and 1 respectively, and the degree of non-concordance given by the negative logarithm of the probability of the left tail or right tail of which one of the end points is given by the observed future price. The vector of three major characteristics of the next non-concordant price is modelled to be dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution which is derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the value of the j-th characteristics of the next non-concordant price. Meanwhile, the 100(α/2) % and 100(1 - α/2) % points of the j-th marginal distribution can be used to form a prediction interval for the j-th characteristic of the next non-concordant price. The performance measures of the above estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus the incorporation of the distribution of the characteristics of the next non-concordant price in the model for asset price has a good potential of yielding a more realistic model.

  9. Does the location of a vascular loop in the cerebellopontine angle explain pulsatile and non-pulsatile tinnitus?

    International Nuclear Information System (INIS)

    Nowe, V.; Wang, X.L.; Gielen, J.; Goethem, J.Van; Oezsarlak, Oe.; De Schepper, A.M.; Parizel, P.M.; Ridder, D. De; Heyning, P.H.Van de

    2004-01-01

    The purpose was to investigate patients with unexplained pulsatile and non-pulsatile tinnitus by means of MR imaging of the cerebellopontine angle (CPA) and to correlate the clinical subtype of tinnitus with the location of a blood vessel (in the internal auditory canal or at the cisternal part of the VIIIth cranial nerve). Clinical presentation of tinnitus and perceptive hearing loss were correlated. In 47 patients with unexplained tinnitus, an MR examination of the CPA was performed. Virtual endoscopy reconstructions were obtained using a 3D axial thin-section high-resolution heavily T2-weighted gradient echo constructive interference in steady state (CISS) data-set. High-resolution T2-weighted CISS images showed a significantly higher number of vascular loops in the internal auditory canal in patients with arterial pulsatile tinnitus compared to patients with non-pulsatile tinnitus (P<0.00001). Virtual endoscopy images were used to investigate vascular contacts at the cisternal part of the VIIIth cranial nerve in patients with low pitch and high pitch non-pulsatile tinnitus. A significantly different distribution of the vascular contacts (P=0.0320) was found. Furthermore, a correlation between the clinical presentation of non-pulsatile tinnitus (high pitch and low pitch) and the perceptive hearing loss was found (P=0.0235). High-resolution heavily T2-weighted CISS images and virtual endoscopy of the CPA can be used to evaluate whether a vascular contact is present in the internal auditory canal or at the cisternal part of the VIIIth cranial nerve and whether the location of the vascular contact correlates with the clinical subtype of tinnitus. Our findings suggest that there is a tonotopical structure of the cisternal part of the VIIIth cranial nerve. A correlation between the clinical presentation of tinnitus and hearing loss was found. (orig.)

  10. Does the location of a vascular loop in the cerebellopontine angle explain pulsatile and non-pulsatile tinnitus?

    Energy Technology Data Exchange (ETDEWEB)

    Nowe, V; Wang, X L; Gielen, J; Goethem, J Van; Oezsarlak, Oe; De Schepper, A M; Parizel, P M [University of Antwerp, Department of Radiology, Edegem (Belgium); Ridder, D De [University of Antwerp, Department of Neurosurgery, Edegem (Belgium); Heyning, P.H.Van de [University of Antwerp, Department of Otorhinolaryngology, Edegem (Belgium)

    2004-12-01

    The purpose was to investigate patients with unexplained pulsatile and non-pulsatile tinnitus by means of MR imaging of the cerebellopontine angle (CPA) and to correlate the clinical subtype of tinnitus with the location of a blood vessel (in the internal auditory canal or at the cisternal part of the VIIIth cranial nerve). Clinical presentation of tinnitus and perceptive hearing loss were correlated. In 47 patients with unexplained tinnitus, an MR examination of the CPA was performed. Virtual endoscopy reconstructions were obtained using a 3D axial thin-section high-resolution heavily T2-weighted gradient echo constructive interference in steady state (CISS) data-set. High-resolution T2-weighted CISS images showed a significantly higher number of vascular loops in the internal auditory canal in patients with arterial pulsatile tinnitus compared to patients with non-pulsatile tinnitus (P<0.00001). Virtual endoscopy images were used to investigate vascular contacts at the cisternal part of the VIIIth cranial nerve in patients with low pitch and high pitch non-pulsatile tinnitus. A significantly different distribution of the vascular contacts (P=0.0320) was found. Furthermore, a correlation between the clinical presentation of non-pulsatile tinnitus (high pitch and low pitch) and the perceptive hearing loss was found (P=0.0235). High-resolution heavily T2-weighted CISS images and virtual endoscopy of the CPA can be used to evaluate whether a vascular contact is present in the internal auditory canal or at the cisternal part of the VIIIth cranial nerve and whether the location of the vascular contact correlates with the clinical subtype of tinnitus. Our findings suggest that there is a tonotopical structure of the cisternal part of the VIIIth cranial nerve. A correlation between the clinical presentation of tinnitus and hearing loss was found. (orig.)

  11. Gravitational lensing by eigenvalue distributions of random matrix models

    Science.gov (United States)

    Martínez Alonso, Luis; Medina, Elena

    2018-05-01

    We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.

  12. Graphene materials having randomly distributed two-dimensional structural defects

    Science.gov (United States)

    Kung, Harold H; Zhao, Xin; Hayner, Cary M; Kung, Mayfair C

    2013-10-08

    Graphene-based storage materials for high-power battery applications are provided. The storage materials are composed of vertical stacks of graphene sheets and have reduced resistance for Li ion transport. This reduced resistance is achieved by incorporating a random distribution of structural defects into the stacked graphene sheets, whereby the structural defects facilitate the diffusion of Li ions into the interior of the storage materials.

  13. Distributed Random Process for a Large-Scale Peer-to-Peer Lottery

    OpenAIRE

    Grumbach, Stéphane; Riemann, Robert

    2017-01-01

    Most online lotteries today fail to ensure the verifiability of the random process and rely on a trusted third party. This issue has received little attention since the emergence of distributed protocols like Bitcoin that demonstrated the potential of protocols with no trusted third party. We argue that the security requirements of online lotteries are similar to those of online voting, and propose a novel distributed online lottery protocol that applies techniques dev...

  14. The Influence of Emission Location on the Magnitude and Spatial Distribution of Aerosols' Climate Effects

    Science.gov (United States)

    Persad, G.; Caldeira, K.

    2017-12-01

    The global distribution of anthropogenic aerosol emissions has evolved continuously since the preindustrial era - from 20th century North American and Western European emissions hotspots to present-day South and East Asian ones. With this comes a relocation of the regional radiative, dynamical, and hydrological impacts of aerosol emissions, which may influence global climate differently depending on where they occur. A lack of understanding of this relationship between aerosol emissions' location and their global climate effects, however, obscures the potential influence that aerosols' evolving geographic distribution may have on global and regional climate change—a gap which we address in this work. Using a novel suite of experiments in the CESM CAM5 atmospheric general circulation model coupled to a slab ocean, we systematically test and analyze mechanisms behind the relative climate impact of identical black carbon and sulfate aerosol emissions located in each of 8 past, present, or projected future major emissions regions. Results indicate that historically high emissions regions, such as North America and Western Europe, produce a stronger cooling effect than current and projected future high emissions regions. Aerosol emissions located in Western Europe produce 3 times the global mean cooling (-0.34 °C) as those located in East Africa or India (-0.11 °C). The aerosols' in-situ radiative effects remain relatively confined near the emissions region, but large distal cooling results from remote feedback processes - such as ice albedo and cloud changes - that are excited more strongly by emissions from certain regions than others. Results suggest that aerosol emissions from different countries should not be considered equal in the context of climate mitigation accounting, and that the evolving geographic distribution of aerosol emissions may have a substantial impact on the magnitude and spatial distribution of global climate change.

  15. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

    In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist $(2,d_c)$-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.

  16. Telemedicine Provides Non-Inferior Research Informed Consent for Remote Study Enrollment: A Randomized Controlled Trial

    Science.gov (United States)

    Bobb, Morgan R.; Van Heukelom, Paul G.; Faine, Brett A.; Ahmed, Azeemuddin; Messerly, Jeffrey T.; Bell, Gregory; Harland, Karisa K.; Simon, Christian; Mohr, Nicholas M.

    2016-01-01

    Objective Telemedicine networks are beginning to provide an avenue for conducting emergency medicine research, but using telemedicine to recruit participants for clinical trials has not been validated. The goal of this consent study is to determine whether patient comprehension of telemedicine-enabled research informed consent is non-inferior to standard face-to-face research informed consent. Methods A prospective, open-label randomized controlled trial was performed in a 60,000-visit Midwestern academic Emergency Department (ED) to test whether telemedicine-enabled research informed consent provided non-inferior comprehension compared with standard consent. This study was conducted as part of a parent clinical trial evaluating the effectiveness of oral chlorhexidine gluconate 0.12% in preventing hospital-acquired pneumonia among adult ED patients with expected hospital admission. Prior to being recruited into the study, potential participants were randomized in a 1:1 allocation ratio to consent by telemedicine versus standard face-to-face consent. Telemedicine connectivity was provided using a commercially available interface (REACH platform, Vidyo Inc., Hackensack, NJ) to an emergency physician located in another part of the ED. Comprehension of research consent (primary outcome) was measured using the modified Quality of Informed Consent (QuIC) instrument, a validated tool for measuring research informed consent comprehension. Parent trial accrual rate and qualitative survey data were secondary outcomes. Results One-hundred thirty-one patients were randomized (n = 64, telemedicine), and 101 QuIC surveys were completed. Comprehension of research informed consent using telemedicine was not inferior to face-to-face consent (QuIC scores 74.4 ± 8.1 vs. 74.4 ± 6.9 on a 100-point scale, p = 0.999). Subjective understanding of consent (p=0.194) and parent trial study accrual rates (56% vs. 69%, p = 0.142) were similar. Conclusion Telemedicine is non-inferior to face

  17. A Randomized Clinical Trial Comparing Methotrexate and Mycophenolate Mofetil for Non-Infectious Uveitis

    Science.gov (United States)

    Rathinam, Sivakumar R; Babu, Manohar; Thundikandy, Radhika; Kanakath, Anuradha; Nardone, Natalie; Esterberg, Elizabeth; Lee, Salena M; Enanoria, Wayne TA; Porco, Travis C; Browne, Erica N; Weinrib, Rachel; Acharya, Nisha R

    2014-01-01

    Objective To compare the relative effectiveness of methotrexate and mycophenolate mofetil for non-infectious intermediate uveitis, posterior uveitis, or panuveitis. Design Multicenter, block-randomized, observer-masked clinical trial Participants Eighty patients with non-infectious intermediate, posterior or panuveitis requiring corticosteroid-sparing therapy at Aravind Eye Hospitals in Madurai and Coimbatore, India. Intervention Patients were randomized to receive 25mg weekly oral methotrexate or 1g twice daily oral mycophenolate mofetil and were monitored monthly for 6 months. Oral prednisone and topical corticosteroids were tapered. Main Outcome Measures Masked examiners assessed the primary outcome of treatment success, defined by achieving the following at 5 and 6 months: (1) ≤0.5+ anterior chamber cells, ≤0.5+ vitreous cells, ≤0.5+ vitreous haze and no active retinal/choroidal lesions in both eyes, (2) ≤ 10 mg of prednisone and ≤ 2 drops of prednisolone acetate 1% a day and (3) no declaration of treatment failure due to intolerability or safety. Additional outcomes included time to sustained corticosteroid-sparing control of inflammation, change in best spectacle-corrected visual acuity, resolution of macular edema, adverse events, subgroup analysis by anatomic location, and medication adherence. Results Forty-one patients were randomized to methotrexate and 39 to mycophenolate mofetil. A total of 67 patients (35 methotrexate, 32 mycophenolate mofetil) contributed to the primary outcome. Sixty-nine percent of patients achieved treatment success with methotrexate and 47% with mycophenolate mofetil (p=0.09). Treatment failure due to adverse events or tolerability was not significantly different by treatment arm (p=0.99). There were no statistically significant differences between treatment groups in time to corticosteroid-sparing control of inflammation (p=0.44), change in best spectacle-corrected visual acuity (p=0.68), and resolution of macular

  18. A Calculus of Located Entities

    Directory of Open Access Journals (Sweden)

    Adriana Compagnoni

    2014-03-01

    Full Text Available We define BioScapeL, a stochastic pi-calculus in 3D-space. A novel aspect of BioScapeL is that entities have programmable locations. The programmer can specify a particular location where to place an entity, or a location relative to the current location of the entity. The motivation for the extension comes from the need to describe the evolution of populations of biochemical species in space, while keeping a sufficiently high level description, so that phenomena like diffusion, collision, and confinement can remain part of the semantics of the calculus. Combined with the random diffusion movement inherited from BioScape, programmable locations allow us to capture the assemblies of configurations of polymers, oligomers, and complexes such as microtubules or actin filaments. Further new aspects of BioScapeL include random translation and scaling. Random translation is instrumental in describing the location of new entities relative to the old ones. For example, when a cell secretes a hydronium ion, the ion should be placed at a given distance from the originating cell, but in a random direction. Additionally, scaling allows us to capture at a high level events such as division and growth; for example, daughter cells after mitosis have half the size of the mother cell.

  19. Facility Location Modeling in Multi-Echelon Distribution System: A Case Study of Indonesian Liquefied Petroleum Gas Supply Chain

    Directory of Open Access Journals (Sweden)

    Ilyas Masudin

    2013-04-01

    Full Text Available This paper presents a model of the Indonesian LPG supply chain in which new facilities (a new echelon) are opened while taking into account the current facilities. The objective is to investigate the relation between distribution costs, such as transportation and inventory costs, and facility location in the Indonesian multi-echelon LPG supply chain. A fixed-charge capacitated facility location problem is used to determine the optimal solution of the proposed model. In the sensitivity analysis, it is reported that trade-offs between facility locations and distribution costs exist. Results show that as the number of facilities increases, total transportation and inventory costs also increase.
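
    A brute-force sketch of a fixed-charge capacitated facility location model of this general kind is given below: it enumerates which candidate depots to open, solves the resulting transportation LP with SciPy for each choice, and adds the fixed opening costs. All costs, capacities and demands are invented, enumeration is only workable for a handful of candidate sites, and the paper's actual network and solution method are not reproduced here.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 candidate depots, 4 demand points.
fixed_cost = np.array([500.0, 420.0, 610.0])       # cost of opening each depot
capacity = np.array([120.0, 100.0, 150.0])
demand = np.array([60.0, 45.0, 30.0, 80.0])
ship_cost = np.array([[4.0, 6.0, 9.0, 5.0],        # per-unit shipping cost
                      [7.0, 3.0, 4.0, 8.0],
                      [6.0, 5.0, 3.0, 4.0]])

def transport_cost(open_idx):
    """Min-cost assignment of all demand to the open depots (transportation LP)."""
    m, n = len(open_idx), len(demand)
    c = ship_cost[open_idx].ravel()
    # Capacity rows: sum_j x_ij <= cap_i ; demand columns: sum_i x_ij = d_j.
    A_ub = np.zeros((m, m * n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    A_eq = np.zeros((n, m * n))
    for j in range(n):
        A_eq[j, j::n] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=capacity[open_idx],
                  A_eq=A_eq, b_eq=demand, bounds=(0, None))
    return res.fun if res.success else np.inf

best_cost, best_open = np.inf, None
for r in range(1, len(fixed_cost) + 1):
    for subset in itertools.combinations(range(len(fixed_cost)), r):
        total = fixed_cost[list(subset)].sum() + transport_cost(list(subset))
        if total < best_cost:
            best_cost, best_open = total, subset
print("best total cost:", round(best_cost, 1), "open depots:", best_open)
```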

  20. A random phased-array for MR-guided transcranial ultrasound neuromodulation in non-human primates

    Science.gov (United States)

    Chaplin, Vandiver; Phipps, Marshal A.; Caskey, Charles F.

    2018-05-01

    Transcranial focused ultrasound (FUS) is a non-invasive technique for therapy and study of brain neural activation. Here we report on the design and characterization of a new MR-guided FUS transducer for neuromodulation in non-human primates at 650 kHz. The array is randomized with 128 elements 6.6 mm in diameter, radius of curvature 7.2 cm, opening diameter 10.3 cm (focal ratio 0.7), and 46% coverage. Simulations were used to optimize transducer geometry with respect to focus size, grating lobes, and directivity. Focus size and grating lobes during electronic steering were quantified using hydrophone measurements in water and a three-axis stage. A novel combination of optical tracking and acoustic mapping enabled measurement of the 3D pressure distribution in the cortical region of an ex vivo skull to within ~3.5 mm of the surface, and allowed accurate modelling of the experiment via non-homogeneous 3D acoustic simulations. The data demonstrates acoustic focusing beyond the skull bone, with the focus slightly broadened and shifted proximal to the skull. The fabricated design is capable of targeting regions within the S1 sensorimotor cortex of macaques.

  1. Three-Phase AC Optimal Power Flow Based Distribution Locational Marginal Price: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Rui; Zhang, Yingchen

    2017-05-17

    Designing market mechanisms for electricity distribution systems has been a hot topic due to the increased presence of smart loads and distributed energy resources (DERs) in distribution systems. The distribution locational marginal pricing (DLMP) methodology is one of the real-time pricing methods to enable such market mechanisms and provide economic incentives to active market participants. Determining the DLMP is challenging due to high power losses, the voltage volatility, and the phase imbalance in distribution systems. Existing DC Optimal Power Flow (OPF) approaches are unable to model power losses and the reactive power, while single-phase AC OPF methods cannot capture the phase imbalance. To address these challenges, in this paper, a three-phase AC OPF based approach is developed to define and calculate DLMP accurately. The DLMP is modeled as the marginal cost to serve an incremental unit of demand at a specific phase at a certain bus, and is calculated using the Lagrange multipliers in the three-phase AC OPF formulation. Extensive case studies have been conducted to understand the impact of system losses and the phase imbalance on DLMPs as well as the potential benefits of flexible resources.
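
    The defining idea, that the DLMP at a bus is the marginal cost of serving one more unit of demand there, can be illustrated with a much simpler single-period dispatch LP. The sketch below recovers bus-level marginal prices by finite-differencing the optimal dispatch cost with respect to each bus's demand on a made-up two-bus, two-generator system with one congested line; it ignores the three-phase modelling, losses and voltage effects that motivate the paper.

```python
from scipy.optimize import linprog

def dispatch_cost(d1, d2, line_cap=40.0):
    """Least-cost dispatch on a two-bus system.
    Gen 1 (cheap, at bus 1): 20 $/MWh, 100 MW max.
    Gen 2 (expensive, at bus 2): 50 $/MWh, 100 MW max.
    The line from bus 1 to bus 2 carries g1 - d1, limited to line_cap."""
    c = [20.0, 50.0]
    A_eq = [[1.0, 1.0]]
    b_eq = [d1 + d2]
    A_ub = [[1.0, 0.0], [-1.0, 0.0]]        # |g1 - d1| <= line_cap
    b_ub = [line_cap + d1, line_cap - d1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 100), (0, 100)])
    assert res.success
    return res.fun

def marginal_price(bus, d1, d2, eps=0.1):
    """Finite-difference approximation of the locational marginal price."""
    base = dispatch_cost(d1, d2)
    bumped = dispatch_cost(d1 + eps, d2) if bus == 1 else dispatch_cost(d1, d2 + eps)
    return (bumped - base) / eps

if __name__ == "__main__":
    d1, d2 = 30.0, 60.0
    print("LMP at bus 1:", marginal_price(1, d1, d2))  # set by the cheap generator
    print("LMP at bus 2:", marginal_price(2, d1, d2))  # congestion raises it
```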

  2. Effects of vortex-like and non-thermal ion distributions on non-linear dust-acoustic waves

    International Nuclear Information System (INIS)

    Mamun, A.A.; Cairns, R.A.; Shukla, P.K.

    1996-01-01

    The effects of vortex-like and non-thermal ion distributions are incorporated in the study of nonlinear dust-acoustic waves in an unmagnetized dusty plasma. It is found that owing to the departure from the Boltzmann ion distribution to a vortex-like phase space distribution, the dynamics of small but finite amplitude dust-acoustic waves is governed by a modified Korteweg–de Vries equation. The latter admits a stationary dust-acoustic solitary wave solution, which has larger amplitude, smaller width, and higher propagation velocity than that involving adiabatic ions. On the other hand, consideration of a non-thermal ion distribution provides the possibility of coexistence of large amplitude rarefactive as well as compressive dust-acoustic solitary waves, whereas these structures appear independently when the wave amplitudes become infinitely small. The present investigation should help us to understand the salient features of the non-linear dust-acoustic waves that have been observed in a recent numerical simulation study. copyright 1996 American Institute of Physics

  3. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    International Nuclear Information System (INIS)

    Olson, Gordon L.

    2008-01-01

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution

  4. Chord length distributions between hard disks and spheres in regular, semi-regular, and quasi-random structures

    Energy Technology Data Exchange (ETDEWEB)

    Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net

    2008-11-15

    In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.
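
    The exponential behaviour described in the two copies of this record is easy to probe numerically. The sketch below places non-overlapping disks at roughly a 9% area fraction, casts random horizontal test lines, and collects the background chord lengths between disk crossings; a roughly linear decay of log-counts versus length indicates the exponential distribution reported for dilute random media. The box size, disk count and binning are arbitrary choices, and the regular or perturbed lattices that are the paper's main subject are not generated here.

```python
import numpy as np

def place_disks(n_disks, radius, box=100.0, seed=0):
    """Randomly place impenetrable (non-overlapping) disks by rejection."""
    rng = np.random.default_rng(seed)
    centers = []
    while len(centers) < n_disks:
        c = rng.uniform(radius, box - radius, size=2)
        if all(np.hypot(*(c - np.array(p))) >= 2 * radius for p in centers):
            centers.append(tuple(c))
    return np.array(centers)

def background_chords(centers, radius, box=100.0, n_lines=2000, seed=1):
    """Chord lengths in the background material along random horizontal lines."""
    rng = np.random.default_rng(seed)
    chords = []
    for _ in range(n_lines):
        y = rng.uniform(0.0, box)
        dy = np.abs(centers[:, 1] - y)
        hit = dy < radius
        if not hit.any():
            chords.append(box)
            continue
        half = np.sqrt(radius ** 2 - dy[hit] ** 2)
        lo = np.sort(centers[hit, 0] - half)
        hi = np.sort(centers[hit, 0] + half)
        # Gaps between successive disk crossings plus the two end segments.
        chords.append(lo[0])                       # from x = 0 to the first disk
        chords.extend(np.maximum(lo[1:] - hi[:-1], 0.0))
        chords.append(box - hi[-1])                # from the last disk to x = box
    return np.array(chords)

if __name__ == "__main__":
    R = 1.0
    centers = place_disks(n_disks=300, radius=R)   # ~9% area fraction in 100 x 100
    chords = background_chords(centers, R)
    chords = chords[chords > 0]
    print("mean background chord:", round(chords.mean(), 2))
    # A roughly straight decay of log-counts vs. length indicates an exponential tail.
    hist, edges = np.histogram(chords, bins=20, range=(0, 40))
    for h, e in zip(hist, edges):
        print(f"{e:5.1f}  {'#' * int(30 * h / max(hist.max(), 1))}")
```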

  5. Distribution of sizes of erased loops for loop-erased random walks

    OpenAIRE

    Dhar, Deepak; Dhar, Abhishek

    1997-01-01

    We study the distribution of sizes of erased loops for loop-erased random walks on regular and fractal lattices. We show that for arbitrary graphs the probability $P(l)$ of generating a loop of perimeter $l$ is expressible in terms of the probability $P_{st}(l)$ of forming a loop of perimeter $l$ when a bond is added to a random spanning tree on the same graph by the simple relation $P(l)=P_{st}(l)/l$. On $d$-dimensional hypercubical lattices, $P(l)$ varies as $l^{-\sigma}$ for large $l$, whe...
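
    For a quick empirical look at the erased-loop size distribution $P(l)$, the sketch below runs loop-erased random walks on a finite square box, erases a loop whenever the walk revisits a site on its current path, and records the loop perimeters. The box size, walk count and boundary treatment (absorption at the box edge) are arbitrary, so this is only a rough finite-size check, not the exact spanning-tree relation stated in the record.

```python
import random
from collections import Counter

def erased_loop_sizes(L=60, walks=500, seed=0):
    """Run loop-erased random walks on an L x L box, started at the centre and
    stopped at the boundary; return a histogram of erased-loop perimeters."""
    rng = random.Random(seed)
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    loops = Counter()
    for _ in range(walks):
        path = [(L // 2, L // 2)]
        index = {path[0]: 0}          # site -> position in the loop-erased path
        while True:
            x, y = path[-1]
            dx, dy = rng.choice(steps)
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < L and 0 <= nxt[1] < L):
                break                 # walk absorbed at the boundary
            if nxt in index:          # revisit: erase the loop just formed
                i = index[nxt]
                loops[len(path) - i] += 1   # perimeter of the erased loop
                for site in path[i + 1:]:
                    del index[site]
                del path[i + 1:]
            else:
                index[nxt] = len(path)
                path.append(nxt)
    return loops

if __name__ == "__main__":
    hist = erased_loop_sizes()
    total = sum(hist.values())
    for l in sorted(hist)[:10]:
        print(f"l = {l:3d}   P(l) ~ {hist[l] / total:.4f}")
```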

  6. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.

  7. Dose distribution of non-coplanar irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Fukui, Toshiharu; Wada, Yoichi; Takenaka, Eiichi

    1987-02-01

    Non-coplanar irradiation was applied to the treatment of brain tumors. The dose around the target area due to non-coplanar irradiation was half or less of that delivered when coplanar irradiation was used. The integral volume dose due to this irradiation was not always less than that due to conventional opposing or rotational irradiation. This irradiation is best applied to the following: as a boost therapy, glioblastoma multiforme; as a radical therapy, recurrent brain tumors, well differentiated brain tumors such as craniopharyngioma and hypophyseal tumors, and AV malformations.

  8. An Ad Hoc Adaptive Hashing Technique for Non-Uniformly Distributed IP Address Lookup in Computer Networks

    Directory of Open Access Journals (Sweden)

    Christopher Martinez

    2007-02-01

    Full Text Available Hashing algorithms have long been widely adopted to design a fast address look-up process which involves a search through a large database to find a record associated with a given key. Hashing algorithms involve transforming a key inside each target data item into a hash value, hoping that the hashing renders the database uniformly distributed with respect to this new hash value. The closer the final distribution is to uniform, the less search time is required when a query is made. When the database is already key-wise uniformly distributed, any regular hashing algorithm, such as bit-extraction, bit-group XOR, etc., would easily lead to a statistically perfect uniform distribution after the hashing. On the other hand, if records in the database are not uniformly distributed, as in almost all known practical applications, then different regular hash functions lead to very different performance. When the target database has a key with a highly skewed value distribution, the performance delivered by regular hashing algorithms usually becomes far from desirable. This paper aims at designing a hashing algorithm that achieves the highest probability of leading to a uniformly distributed hash result from a non-uniformly distributed database. An analytical pre-process on the original database is first performed to extract critical information that significantly benefits the design of a better hashing algorithm. This process includes sorting the bits of the key to prioritize their use in the XOR hashing sequence, in simple bit extraction, or in a combination of both. Such an ad hoc hash design is critical for adapting to real-time situations where there exists a changing (and/or expanding) database with an irregular non-uniform distribution. Significant improvement is obtained in simulation results on randomly generated data as well as real data.
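
    The adaptive idea sketched in this abstract (analyze the key bits first, then fold the most informative ones into the hash index with XOR) can be illustrated with a small Python sketch. This is only a hedged illustration under assumed details, not the paper's algorithm: the balance score, the round-robin grouping of bits, and the toy skewed key set are all choices made for the example.

        import random
        from collections import Counter

        def bit_balance(keys, bit):
            # 1.0 when the bit splits the keys 50/50, 0.0 when the bit is constant
            ones = sum((k >> bit) & 1 for k in keys)
            p = ones / len(keys)
            return 1.0 - 2.0 * abs(0.5 - p)

        def build_hash(keys, index_bits=8, key_width=32):
            # rank key bits by balance, then XOR-fold them round-robin into the index bits
            ranked = sorted(range(key_width), key=lambda b: bit_balance(keys, b), reverse=True)
            groups = [ranked[i::index_bits] for i in range(index_bits)]
            def h(key):
                idx = 0
                for i, grp in enumerate(groups):
                    bit = 0
                    for b in grp:
                        bit ^= (key >> b) & 1
                    idx |= bit << i
                return idx
            return h

        # toy, highly skewed key set: the low 4 bits never vary
        random.seed(0)
        keys = [random.getrandbits(12) << 4 for _ in range(5000)]
        h = build_hash(keys)
        load = Counter(h(k) for k in keys)
        print("max bucket:", max(load.values()), "ideal bucket:", len(keys) / 2**8)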

  9. Probability evolution method for exit location distribution

    Science.gov (United States)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as noise approaches zero, and the majority of that time is wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified by examining two classical examples and is compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  10. Self-adaptive change detection in streaming data with non-stationary distribution

    KAUST Repository

    Zhang, Xiangliang; Wang, Wei

    2010-01-01

    Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting the changes in massive streaming data with a non

  11. A Campbell random process

    International Nuclear Information System (INIS)

    Reuss, J.D.; Misguich, J.H.

    1993-02-01

    The Campbell process is a stationary random process which can have various correlation functions, according to the choice of an elementary response function. The statistical properties of this process are presented. A numerical algorithm and a subroutine for generating such a process are built up and tested for the physically interesting case of a Campbell process with Gaussian correlations. The (non-Gaussian) probability distribution appears to be similar to the Gamma distribution.

  12. Extension of the Accurate Voltage-Sag Fault Location Method in Electrical Power Distribution Systems

    Directory of Open Access Journals (Sweden)

    Youssef Menchafou

    2016-03-01

    Full Text Available Accurate fault location in an Electric Power Distribution System (EPDS) is important in maintaining system reliability. Several methods have been proposed in the past. However, these methods either prove to be inefficient or depend on the fault type (fault classification), because they require the use of an appropriate algorithm for each fault type. In contrast to traditional approaches, an accurate impedance-based Fault Location (FL) method is presented in this paper. It is based on the voltage-sag calculation between two measurement points chosen carefully from the available strategic measurement points of the line, the network topology, and current measurements at the substation. The effectiveness and the accuracy of the proposed technique are demonstrated for different fault types using a radial power flow system. The test results are obtained from numerical simulation using the data of a distribution line recognized in the literature.

  13. Location tests for biomarker studies: a comparison using simulations for the two-sample case.

    Science.gov (United States)

    Scheinhardt, M O; Ziegler, A

    2013-01-01

    Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed, and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in the case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy tailed distributions.
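
    A minimal version of such a simulation can be run with SciPy; the sketch below compares only Welch's t-test with the Mann-Whitney U-test and, since the g-and-k family is not readily available in SciPy, uses a lognormal sample as a stand-in for skewed, heavy-tailed expression data. All parameters are assumptions for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def rejection_rate(sampler, n=30, shift=0.0, reps=2000, alpha=0.05):
            # empirical rejection rates of the t-test and the U-test for two groups of size n
            rej_t = rej_u = 0
            for _ in range(reps):
                x = sampler(n)
                y = sampler(n) + shift
                rej_t += stats.ttest_ind(x, y, equal_var=False).pvalue < alpha
                rej_u += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
            return rej_t / reps, rej_u / reps

        skewed = lambda n: rng.lognormal(mean=0.0, sigma=1.0, size=n)   # skewed, heavy-tailed stand-in

        print("type I error (t, U):   ", rejection_rate(skewed, shift=0.0))
        print("power at shift=1 (t, U):", rejection_rate(skewed, shift=1.0))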

  14. Variation in size frequency distribution of coral populations under different fishing pressures in two contrasting locations in the Indian Ocean.

    Science.gov (United States)

    Grimsditch, G; Pisapia, C; Huck, M; Karisa, J; Obura, D; Sweet, M

    2017-10-01

    This study aimed to assess how the size-frequency distributions of coral genera varied between reefs under different fishing pressures in two contrasting Indian Ocean locations (the Maldives and East Africa). Using generalized linear mixed models, we were able to demonstrate that complex interactions occurred between coral genera, coral size class, and fishing pressure. In both locations, we found Acropora corals to be more abundant in non-fished than in fished sites (a pattern that was consistent for nearly all the assessed size classes). Coral genera classified as 'stress tolerant' showed the opposite pattern, i.e., they were more abundant in fished than in non-fished sites. Site-specific variations were also observed. For example, Maldivian reefs exhibited a significantly higher abundance in all size classes of 'competitive' corals compared to East Africa. This possibly indicates that East African reefs have already been subjected to higher levels of stress and are therefore less suitable environments for 'competitive' corals. This study also highlights the potential structure and composition of reefs under future degradation scenarios, for example with a loss of Acropora corals and an increase in dominance of 'stress tolerant' and 'generalist' coral genera. Copyright © 2017. Published by Elsevier Ltd.

  15. Optimal Sizing and Location of Distributed Generators Based on PBIL and PSO Techniques

    Directory of Open Access Journals (Sweden)

    Luis Fernando Grisales-Noreña

    2018-04-01

    Full Text Available The optimal location and sizing of distributed generation is a suitable option for improving the operation of electric systems. This paper proposes a parallel implementation of the Population-Based Incremental Learning (PBIL) algorithm to locate distributed generators (DGs), and the use of Particle Swarm Optimization (PSO) to define the size of those devices. The resulting method is a master-slave hybrid approach based on both the parallel PBIL (PPBIL) algorithm and PSO, which reduces the computation time in comparison with other techniques commonly used to address this problem. Moreover, the new hybrid method also reduces the active power losses and improves the nodal voltage profiles. In order to verify the performance of the new method, test systems with 33 and 69 buses are implemented in Matlab, using Matpower, for evaluating multiple cases. Finally, the proposed method is contrasted with the Loss Sensitivity Factor (LSF), a Genetic Algorithm (GA), and a Parallel Monte-Carlo algorithm. The results demonstrate that the proposed PPBIL-PSO method provides the best balance between processing time, voltage profiles, and reduction of power losses.
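
    The PBIL part of the method can be sketched in a few lines: a probability vector over candidate buses is sampled, the best placement in each generation pulls the vector toward itself, and the vector gradually concentrates on good locations. The sketch below is a hedged toy version with an invented loss model; it does not reproduce the paper's Matpower-based power-flow evaluation or the PSO sizing stage.

        import numpy as np

        rng = np.random.default_rng(7)
        n_bus, n_dg, pop = 33, 3, 40                 # toy sizes; the paper uses 33- and 69-bus feeders

        def losses(placement):
            # placeholder loss model: losses drop when DGs sit near heavily loaded buses
            load = np.linspace(1.0, 2.0, n_bus)      # fictitious per-bus loading
            return load.sum() - 0.5 * load[placement == 1].sum()

        prob = np.full(n_bus, n_dg / n_bus)          # PBIL probability vector
        for gen in range(100):
            cands = np.zeros((pop, n_bus), dtype=int)
            for c in cands:                          # sample placements with exactly n_dg generators
                c[rng.choice(n_bus, size=n_dg, replace=False, p=prob / prob.sum())] = 1
            best = cands[np.argmin([losses(c) for c in cands])]
            prob = 0.9 * prob + 0.1 * best           # shift the distribution toward the best sample
            prob = np.clip(prob, 0.01, 0.99)

        print("most likely DG buses:", np.argsort(prob)[-n_dg:])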

  16. Factors influencing non-native tree species distribution in urban landscapes

    Science.gov (United States)

    Wayne C. Zipperer

    2010-01-01

    Non-native species are presumed to be pervasive across the urban landscape. Yet we know very little about their actual distribution. For this study, vegetation plot data from Syracuse, NY and Baltimore, MD were used to examine non-native tree species distribution in urban landscapes. Data were collected from remnant and emergent forest patches on upland sites...

  17. Large deviations of heavy-tailed random sums with applications in insurance and finance

    NARCIS (Netherlands)

    Kluppelberg, C; Mikosch, T

    We prove large deviation results for the random sum $S(t)=\sum_{i=1}^{N(t)} X_i$, $t \ge 0$, where $(N(t))_{t \ge 0}$ are non-negative integer-valued random variables and $(X_n)_{n \in \mathbb{N}}$ are i.i.d. non-negative random variables with common distribution

  18. A data based random number generator for a multivariate distribution (using stochastic interpolation)

    Science.gov (United States)

    Thompson, J. R.; Taylor, M. S.

    1982-01-01

    Let X be a k-dimensional random variable serving as input for a system with output Y (not necessarily of dimension k). Given X, an outcome Y or a distribution of outcomes G(Y|X) may be obtained either explicitly or implicitly. The situation is considered in which there is a real-world data set $\{X_j\}_{j=1}^{n}$ and a means of simulating an outcome Y. A method for empirical random number generation based on the sample of observations of the random variable X, without estimating the underlying density, is discussed.

  19. New large-deviation local theorems for sums of independent and identically distributed random vectors when the limit distribution is α-stable

    OpenAIRE

    Nagaev, Alexander; Zaigraev, Alexander

    2005-01-01

    A class of absolutely continuous distributions in $R^d$ is considered. Each distribution belongs to the domain of normal attraction of an α-stable law. The limit law is characterized by a spectral measure which is absolutely continuous with respect to the spherical Lebesgue measure. The large-deviation problem for sums of independent and identically distributed random vectors when the underlying distribution belongs to that class is studied. At the focus of attention are the deviations in the di...

  20. School-located Influenza Vaccinations for Adolescents: A Randomized Controlled Trial.

    Science.gov (United States)

    Szilagyi, Peter G; Schaffer, Stanley; Rand, Cynthia M; Goldstein, Nicolas P N; Vincelli, Phyllis; Hightower, A Dirk; Younge, Mary; Eagan, Ashley; Blumkin, Aaron; Albertin, Christina S; DiBitetto, Kristine; Yoo, Byung-Kwang; Humiston, Sharon G

    2018-02-01

    We aimed to evaluate the effect of school-located influenza vaccination (SLIV) on adolescents' influenza vaccination rates. In 2015-2016, we performed a cluster-randomized trial of adolescent SLIV in middle/high schools. We selected 10 pairs of schools (identical grades within pairs) and randomly allocated schools within pairs to SLIV or usual care control. At eight suburban SLIV schools, we sent parents e-mail notifications about upcoming SLIV clinics and promoted online immunization consent. At two urban SLIV schools, we sent parents (via student backpack fliers) paper immunization consent forms and information about SLIV. E-mails were unavailable at these schools. Local health department nurses administered nasal or injectable influenza vaccine at dedicated SLIV clinics and billed insurers. We compared influenza vaccination rates at SLIV versus control schools using school directories to identify the student sample in each school. We used the state immunization registry to determine receipt of influenza vaccination. The final sample comprised 17,650 students enrolled in the 20 schools. Adolescents at suburban SLIV schools had higher overall influenza vaccination rates than did adolescents at control schools (51% vs. 46%, p < .001; adjusted odds ratio = 1.27, 95% confidence interval 1.18-1.38, controlling for vaccination during the prior two seasons). No effect of SLIV was noted among urban schools on multivariate analysis. SLIV did not substitute for vaccinations in primary care or other settings; in suburban settings, SLIV was associated with increased vaccinations in primary care or other settings (adjusted odds ratio = 1.10, 95% confidence interval 1.02-1.19). SLIV in this community increased influenza vaccination rates among adolescents attending suburban schools. Copyright © 2018. Published by Elsevier Inc.

  1. Pharmaceutical Distribution Market Channels in Poland

    Directory of Open Access Journals (Sweden)

    Agnieszka Woś

    2009-09-01

    Full Text Available Distribution is an interesting and the most difficult sphere of the pharmaceutical market in Poland to manage. The numerous, varied, and specialized companies operating on the market make the processes of choosing middlemen in distribution channels very complex. This article presents the role and location of the companies operating within distribution channels on the pharmaceutical market. It draws attention to the development of non-pharmacy and non-wholesale sales channels.

  2. A location-inventory model for distribution centers in a three-level supply chain under uncertainty

    OpenAIRE

    Ali Bozorgi-Amiri; M. Saeed Jabalameli; Sara Gharegozloo Hamedani

    2013-01-01

    We study a location-inventory problem in a three-level supply chain network under uncertainty, which leads to risk. The (r,Q) inventory control policy is applied for this problem. Besides, uncertainty exists in different parameters such as procurement and transportation costs, supply, demand, and the capacity of different facilities (due to disasters, man-made events, etc.). We present a robust optimization model, which concurrently specifies: locations of distribution centers to be opened, inve...

  3. Spatial distribution

    DEFF Research Database (Denmark)

    Borregaard, Michael Krabbe; Hendrichsen, Ditte Katrine; Nachman, Gøsta Støger

    2008-01-01

    Living organisms are distributed over the entire surface of the planet. The distribution of the individuals of each species is not random; on the contrary, they are strongly dependent on the biology and ecology of the species, and vary over different spatial scales. The structure of whole populations reflects the location and fragmentation pattern of the habitat types preferred by the species, and the complex dynamics of migration, colonization, and population growth taking place over the landscape. Within these, individuals are distributed among each other in regular or clumped patterns, depending on the nature of intraspecific interactions between them: while the individuals of some species repel each other and partition the available area, others form groups of varying size, determined by the fitness of each group member. The spatial distribution pattern of individuals again strongly...

  4. Does mass azithromycin distribution impact child growth and nutrition in Niger? A cluster-randomized trial.

    Directory of Open Access Journals (Sweden)

    Abdou Amza

    2014-09-01

    Full Text Available Antibiotic use in animals demonstrates improved growth regardless of whether or not there is clinical evidence of infectious disease. Antibiotics used for trachoma control may provide an unintended benefit of improving child growth. In this sub-study of a larger randomized controlled trial, we assess anthropometry of pre-school children in a community-randomized trial of mass oral azithromycin distributions for trachoma in Niger. We measured height, weight, and mid-upper arm circumference (MUAC) in 12 communities randomized to receive annual mass azithromycin treatment of everyone versus 12 communities randomized to receive biannual mass azithromycin treatments for children, 3 years after the initial mass treatment. We collected measurements in 1,034 children aged 6-60 months. We found no difference in the prevalence of wasting among children in the 12 annually treated communities that received three mass azithromycin distributions compared to the 12 biannually treated communities that received six mass azithromycin distributions (odds ratio = 0.88, 95% confidence interval = 0.53 to 1.49). We were unable to demonstrate a statistically significant difference in stunting, underweight, and low MUAC of pre-school children in communities randomized to annual mass azithromycin treatment or biannual mass azithromycin treatment. The role of antibiotics in child growth and nutrition remains unclear, but larger studies and longitudinal trials may help determine any association.

  5. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Birgitte

    2017-01-01

    This study determines optimum aggregation areas for a given distribution network considering spatial distribution of loads and costs of aggregation. An elitist genetic algorithm combined with a hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations...... operational decisions by understanding during which time of the day will be in need of flexibility, from which specific area, and in which amount, but also enables the flexibilities stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local...... aggregation scheme where a central aggregator enables market participation, while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performances with and without aggregation. For a given network...

  6. Optimal random search for a single hidden target.

    Science.gov (United States)

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
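
    The square-root rule quoted above can be checked numerically for the very-close limit: when the searcher must land almost exactly on the target, the per-trial success probability at x is proportional to the search density q(x), so the expected number of trials behaves like the integral of p(x)/q(x). The sketch below minimizes that integral over Gaussian search densities and recovers a best width close to sqrt(2) times the target width, which is exactly what q proportional to sqrt(p) gives for a Gaussian target; the grid and the restriction to Gaussian q are assumptions of the example.

        import numpy as np

        sigma_t = 1.0                                    # target hidden with this standard deviation
        x = np.linspace(-12, 12, 4001)
        dx = x[1] - x[0]
        p = np.exp(-x**2 / (2 * sigma_t**2)) / np.sqrt(2 * np.pi * sigma_t**2)

        def expected_trials(sigma_s):
            # expected trials ~ integral p(x)/q(x) dx for a Gaussian search density of width sigma_s
            q = np.exp(-x**2 / (2 * sigma_s**2)) / np.sqrt(2 * np.pi * sigma_s**2)
            return np.sum(p / q) * dx

        widths = np.linspace(1.05, 3.0, 40)              # below sigma_t the integral diverges
        costs = [expected_trials(s) for s in widths]
        print("best search width:", widths[int(np.argmin(costs))], "~ sqrt(2) =", np.sqrt(2))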

  7. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in power distribution networks is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics are extracted from the recorded fault signals in the relay. These characteristics...... components of the sequences as well as three-phase signals could be obtained using statistics to extract the hidden features inside them and present them separately to train the neural network. Also, since the obtained inputs for the training of the neural network strongly depend on the fault angle, fault...... resistance, and fault location, the training data should be selected such that these differences are properly presented so that the neural network does not face any issues for identification. Therefore, selecting the signal processing function, data spectrum and subsequently, statistical parameters...

  8. Network meta-analysis incorporating randomized controlled trials and non-randomized comparative cohort studies for assessing the safety and effectiveness of medical treatments: challenges and opportunities

    OpenAIRE

    Cameron, Chris; Fireman, Bruce; Hutton, Brian; Clifford, Tammy; Coyle, Doug; Wells, George; Dormuth, Colin R.; Platt, Robert; Toh, Sengwee

    2015-01-01

    Network meta-analysis is increasingly used to allow comparison of multiple treatment alternatives simultaneously, some of which may not have been compared directly in primary research studies. The majority of network meta-analyses published to date have incorporated data from randomized controlled trials (RCTs) only; however, inclusion of non-randomized studies may sometimes be considered. Non-randomized studies can complement RCTs or address some of their limitations, such as short follow-up...

  9. Non-Hermitian Extensions of Wishart Random Matrix Ensembles

    International Nuclear Information System (INIS)

    Akemann, G.

    2011-01-01

    We briefly review the solution of three ensembles of non-Hermitian random matrices generalizing the Wishart-Laguerre (also called chiral) ensembles. These generalizations are realized as Gaussian two-matrix models, where the complex eigenvalues of the product of the two independent rectangular matrices are sought, with the matrix elements of both matrices being either real, complex or quaternion real. We also present the more general case depending on a non-Hermiticity parameter that allows us to interpolate between the corresponding three Hermitian Wishart ensembles with real eigenvalues and the maximally non-Hermitian case. All three symmetry classes are explicitly solved for finite matrix size N x M for all complex eigenvalue correlation functions (and real or mixed correlations for real matrix elements). These are given in terms of the corresponding kernels built from orthogonal or skew-orthogonal Laguerre polynomials in the complex plane. We then present the corresponding three Bessel kernels in the complex plane in the microscopic large-N scaling limit at the origin, both at weak and strong non-Hermiticity with M - N ≥ 0 fixed. (author)
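
    As a purely numerical companion to this abstract, the complex eigenvalues of the product of two independent rectangular Gaussian matrices can be generated directly; the sizes and the normalization below are arbitrary choices for illustration, not the paper's conventions.

        import numpy as np

        rng = np.random.default_rng(0)
        N, M = 200, 260                          # rectangular sizes with M - N >= 0

        def complex_gaussian(rows, cols):
            return (rng.standard_normal((rows, cols))
                    + 1j * rng.standard_normal((rows, cols))) / np.sqrt(2)

        A = complex_gaussian(N, M)
        B = complex_gaussian(M, N)
        eigs = np.linalg.eigvals(A @ B)          # N complex eigenvalues of the two-matrix product
        print("mean |z| of the eigenvalue cloud:", np.abs(eigs).mean())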

  10. Non-random alkylation of DNA sequences induced in vivo by chemical mutagens

    Energy Technology Data Exchange (ETDEWEB)

    Durante, M.; Geri, C.; Bonatti, S.; Parenti, R. (Universita di Pisa (Italy))

    1989-08-01

    Previous studies of the interaction of alkylating agents with the eukaryotic genome support the idea that induction of DNA adducts occurs at specific genomic sites. Here we show molecular and cytological evidence that alkylation is rather specific. Mammalian cell cultures were exposed to different doses of mutagens and the DNA was analyzed by density gradient ultracentrifugation, hydroxylapatite fractionation, and restriction enzyme analysis. Studies with the labelled mutagens N-ethyl-N-nitrosourea and N-methyl-N'-nitro-N-nitrosoguanidine show that there is a non-random distribution of the adducts. The adducts are found more frequently in A-T, G-C rich satellite DNA and highly repetitive sequences. Analysis with restriction enzymes shows that both methyl and ethyl groups influence the restriction patterns of the enzymes HpaII and MspI, which recognize specific endogenous DNA methylation. These data suggest, as a subsequent mechanism, a modification in the pattern of the normal endogenous methylation of 5-methylcytosine.

  11. Approach of the value of an annuity when non-central moments of the capitalization factor are known: an R application with interest rates following normal and beta distributions

    Directory of Open Access Journals (Sweden)

    Salvador Cruz Rambaud

    2015-07-01

    Full Text Available This paper proposes an expression for the value of an annuity with payments of 1 unit each when the interest rate is random. In order to attain this objective, we proceed on the assumption that the non-central moments of the capitalization factor are known. Specifically, to calculate the value of these annuities, we propose two different expressions. First, we suppose that the random interest rate is normally distributed; then, we assume that it follows the beta distribution. A practical application of these two methodologies is also implemented using the R statistical software.
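
    A Monte Carlo counterpart of this calculation is easy to write down: draw i.i.d. interest rates, discount each payment by the running product of capitalization factors, and average. The sketch below is an illustration only; the rate parameters are assumptions, and the paper itself works analytically (in R) from the non-central moments of the capitalization factor rather than by simulation.

        import numpy as np

        rng = np.random.default_rng(3)

        def annuity_value(draw_rates, n_payments=10, sims=100_000):
            # Monte Carlo estimate of E[ sum_{t=1..n} prod_{s=1..t} 1/(1+i_s) ] for i.i.d. rates i_s
            rates = draw_rates((sims, n_payments))
            discount = np.cumprod(1.0 / (1.0 + rates), axis=1)
            return discount.sum(axis=1).mean()

        normal_rates = lambda shape: rng.normal(0.03, 0.01, shape)        # assumed parameters
        beta_rates   = lambda shape: 0.10 * rng.beta(2.0, 5.0, shape)     # rates scaled into (0, 0.10)

        print("annuity value, normal rates:", annuity_value(normal_rates))
        print("annuity value, beta rates:  ", annuity_value(beta_rates))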

  12. Universality of non-leading logarithmic contributions in transverse-momentum distributions

    CERN Document Server

    Catani, S; Grazzini, Massimiliano

    2001-01-01

    We consider the resummation of the logarithmic contributions to the region of small transverse momenta in the distributions of high-mass systems (lepton pairs, vector bosons, Higgs particles, ....) produced in hadron collisions. We point out that the resummation formulae that are usually used to compute the distributions in perturbative QCD involve process-dependent form factors and coefficient functions. We present a new universal form of the resummed distribution, in which the dependence on the process is embodied in a single perturbative factor. The new form simplifies the calculation of non-leading logarithms at higher perturbative orders. It can also be useful to systematically implement process-independent non-perturbative effects in transverse-momentum distributions. We also comment on the dependence of these distributions on the factorization and renormalization scales.

  13. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit

    2014-07-28

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  14. A non-Gaussian multivariate distribution with all lower-dimensional Gaussians and related families

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2014-01-01

    Several fascinating examples of non-Gaussian bivariate distributions whose marginal distribution functions are Gaussian have been proposed in the literature. These examples often clarify several properties associated with the normal distribution. In this paper, we generalize this result in the sense that we construct a p-dimensional distribution for which any proper subset of its components has the Gaussian distribution. However, the joint p-dimensional distribution is inconsistent with the distribution of these subsets because it is not Gaussian. We study the probabilistic properties of this non-Gaussian multivariate distribution in detail. Interestingly, several popular tests of multivariate normality fail to identify this p-dimensional distribution as non-Gaussian. We further extend our construction to a class of elliptically contoured distributions as well as skewed distributions arising from selections, for instance the multivariate skew-normal distribution.

  15. Modeling of parallel-plate regenerators with non-uniform plate distributions

    DEFF Research Database (Denmark)

    Jensen, Jesper Buch; Engelbrecht, Kurt; Bahl, Christian Robert Haffenden

    2010-01-01

    plate spacing distributions are presented in order to understand the impact of spacing non-uniformity. Simulations of more realistic distributions where the plate spacings follow normal distributions are then discussed in order to describe the deviation of the performance of a regenerator relative...

  16. Quantum tunneling recombination in a system of randomly distributed trapped electrons and positive ions.

    Science.gov (United States)

    Pagonis, Vasilis; Kulp, Christopher; Chaney, Charity-Grace; Tachiya, M

    2017-09-13

    During the past 10 years, quantum tunneling has been established as one of the dominant mechanisms for recombination in random distributions of electrons and positive ions, and in many dosimetric materials. Specifically quantum tunneling has been shown to be closely associated with two important effects in luminescence materials, namely long term afterglow luminescence and anomalous fading. Two of the common assumptions of quantum tunneling models based on random distributions of electrons and positive ions are: (a) An electron tunnels from a donor to the nearest acceptor, and (b) the concentration of electrons is much lower than that of positive ions at all times during the tunneling process. This paper presents theoretical studies for arbitrary relative concentrations of electrons and positive ions in the solid. Two new differential equations are derived which describe the loss of charge in the solid by tunneling, and they are solved analytically. The analytical solution compares well with the results of Monte Carlo simulations carried out in a random distribution of electrons and positive ions. Possible experimental implications of the model are discussed for tunneling phenomena in long term afterglow signals, and also for anomalous fading studies in feldspars and apatite samples.
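
    Assumption (a) above lends itself to a very small Monte Carlo sketch: place electrons and ions at random, give each electron a recombination time that grows exponentially with the distance to its nearest ion, and watch the surviving fraction decay very slowly over many decades of time. The lifetime law tau0*exp(r/a), the densities, and the neglect of ion depletion are all simplifying assumptions of this illustration, not the model solved analytically in the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(5)
        n, box, a, tau0 = 5000, 1.0, 0.005, 1.0          # assumed density and tunnelling length

        electrons = rng.uniform(0, box, size=(n, 3))
        ions      = rng.uniform(0, box, size=(n, 3))

        r, _ = cKDTree(ions).query(electrons)            # nearest-ion distance for each electron
        t_rec = tau0 * np.exp(r / a)                     # lifetime grows exponentially with distance

        for t in np.logspace(0, 12, 5):
            print(f"t = {t:9.2e}   remaining fraction = {(t_rec > t).mean():.3f}")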

  17. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
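
    The activity described above is, at its core, a label-shuffling exercise, which can be mirrored in code: compute the observed F statistic, permute the group labels many times, and read the p-value off the randomization distribution. The group sizes and effect size below are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        def f_statistic(groups):
            # classical one-way ANOVA F statistic
            all_x = np.concatenate(groups)
            grand = all_x.mean()
            ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
            ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
            df_b, df_w = len(groups) - 1, len(all_x) - len(groups)
            return (ss_between / df_b) / (ss_within / df_w)

        def randomization_f_test(groups, reps=5000):
            # shuffle group labels to build the null sampling distribution of F
            observed = f_statistic(groups)
            pooled = np.concatenate(groups)
            split_at = np.cumsum([len(g) for g in groups])[:-1]
            exceed = sum(f_statistic(np.split(rng.permutation(pooled), split_at)) >= observed
                         for _ in range(reps))
            return observed, exceed / reps

        groups = [rng.normal(m, 1.0, 20) for m in (0.0, 0.0, 0.8)]   # third group shifted
        print("F = %.2f, randomization p = %.4f" % randomization_f_test(groups))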

  18. ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    C. Li

    2012-07-01

    Full Text Available POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, but it is also very expensive. Therefore, in this paper error distributions of vanishing points are studied and tested in order to substitute INS for MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results of vanishing point coordinates and their error distributions are shown and analyzed.

  19. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    Science.gov (United States)

    Li, C.

    2012-07-01

    POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, but it is also very expensive. Therefore, in this paper error distributions of vanishing points are studied and tested in order to substitute INS for MMS in some special land-based scenes, such as ground façades where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of the two vanishing points (VX, VY). How to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results of vanishing point coordinates and their error distributions are shown and analyzed.
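
    The third step can be imitated with a short Monte Carlo sketch. Under the usual assumption of square pixels, the principal point is the orthocenter of the triangle formed by three orthogonal vanishing points, so VZ follows from VX, VY, and the principal point by solving two linear constraints; sampling VX and VY from their error ellipses then yields an empirical error distribution for VZ. The principal point, means, and covariances below are invented numbers, and camera distortion is ignored as in the abstract.

        import numpy as np

        rng = np.random.default_rng(4)

        def third_vp(vx, vy, pp):
            # pp is the orthocenter of the triangle (VX, VY, VZ): two dot-product constraints on VZ
            A = np.vstack([pp - vx, pp - vy])
            b = np.array([(pp - vx) @ vy, (pp - vy) @ vx])
            return np.linalg.solve(A, b)

        pp = np.array([0.0, 0.0])                                  # assumed principal point
        vx_mean, vy_mean = np.array([1800.0, 150.0]), np.array([-120.0, 1500.0])
        vx_cov = np.diag([40.0, 15.0]) ** 2                        # assumed error ellipses (pixels^2)
        vy_cov = np.diag([20.0, 60.0]) ** 2

        samples = np.array([
            third_vp(rng.multivariate_normal(vx_mean, vx_cov),
                     rng.multivariate_normal(vy_mean, vy_cov), pp)
            for _ in range(20000)
        ])
        print("VZ mean:", samples.mean(axis=0))
        print("VZ covariance:\n", np.cov(samples.T))               # error distribution of VZ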

  20. An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution

    Science.gov (United States)

    Campbell, C. W.

    1983-01-01

    An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired value of the two means, two standard deviations, and correlation coefficient can be selected. Theoretically the technique is exact and in practice its accuracy is limited only by the quality of the uniform distribution random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
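
    The standard construction behind such generators starts from two independent standard normal draws and mixes them through the correlation coefficient; the sketch below shows that construction (it is the textbook conditional/Cholesky form, not necessarily the exact routine of the 1983 report).

        import numpy as np

        rng = np.random.default_rng(6)

        def bivariate_normal_pairs(mu1, mu2, s1, s2, rho, size=1):
            # correlated normal pairs from two independent standard normal draws
            z1 = rng.standard_normal(size)
            z2 = rng.standard_normal(size)
            x = mu1 + s1 * z1
            y = mu2 + s2 * (rho * z1 + np.sqrt(1.0 - rho**2) * z2)
            return x, y

        x, y = bivariate_normal_pairs(1.0, -2.0, 2.0, 0.5, rho=0.7, size=200_000)
        print("sample means:      ", x.mean(), y.mean())
        print("sample correlation:", np.corrcoef(x, y)[0, 1])       # should be close to 0.7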

  1. A random phased-array for MR-guided transcranial ultrasound neuromodulation in non-human primates.

    Science.gov (United States)

    Chaplin, Vandiver; Phipps, Marshal A; Caskey, Charles F

    2018-04-18

    Transcranial focused ultrasound (FUS) is a non-invasive technique for therapy and study of brain neural activation. Here we report on the design and characterization of a new MR-guided FUS transducer for neuromodulation in non-human primates at 650 kHz. The array is randomized with 128 elements 6.6 mm in diameter, radius of curvature 7.2 cm, opening diameter 10.3 cm (focal ratio 0.7), and 46% coverage. Simulations were used to optimize transducer geometry with respect to focus size, grating lobes, and directivity. Focus size and grating lobes during electronic steering were quantified using hydrophone measurements in water and a three-axis stage. A novel combination of optical tracking and acoustic mapping enabled measurement of the 3D pressure distribution in the cortical region of an ex vivo skull to within ~3.5 mm of the surface, and allowed accurate modelling of the experiment via non-homogeneous 3D acoustic simulations. The data demonstrates acoustic focusing beyond the skull bone, with the focus slightly broadened and shifted proximal to the skull. The fabricated design is capable of targeting regions within the S1 sensorimotor cortex of macaques. © 2018 Institute of Physics and Engineering in Medicine.

  2. Piecewise linearisation of the first order loss function for families of arbitrarily distributed random variables

    NARCIS (Netherlands)

    Rossi, R.; Hendrix, E.M.T.

    2014-01-01

    We discuss the problem of computing optimal linearisation parameters for the first order loss function of a family of arbitrarily distributed random variables. We demonstrate that, in contrast to the problem in which parameters must be determined for the loss function of a single random variable,

  3. On Origin of Power-Law Distributions in Self-Organized Criticality from Random Walk Treatment

    International Nuclear Information System (INIS)

    Cao Xiaofeng; Deng Zongwei; Yang Chunbin

    2008-01-01

    The origin of power-law distributions in self-organized criticality is investigated by treating the variation of the number of active sites in the system as a stochastic process. An avalanche is then regarded as a first-return random walk process in a one-dimensional lattice. We assume that the variation of the number of active sites has three possibilities in each update: to increase by 1 with probability $f_1$, to decrease by 1 with probability $f_2$, or remain unchanged with probability $1-f_1-f_2$. This mimics the dynamics in the system. Power-law distributions of the lifetime are found when the random walk is unbiased with equal probability to move in opposite directions. This shows that power-law distributions in self-organized criticality may be caused by the balance of competitive interactions.
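
    The first-return picture is easy to reproduce numerically for the unbiased case: simulate +/-1 walks, record the step at which each walk first returns to the origin, and look at the tail of the lifetime distribution. The walk length cap and sample size below are arbitrary choices for the illustration.

        import numpy as np

        rng = np.random.default_rng(8)

        def first_return_time(max_steps=100_000):
            # steps until an unbiased +/-1 walk first returns to the origin (capped at max_steps)
            path = np.cumsum(rng.choice([-1, 1], size=max_steps))
            hits = np.nonzero(path == 0)[0]
            return hits[0] + 1 if hits.size else max_steps

        lifetimes = np.array([first_return_time() for _ in range(5000)])
        # the lifetime density decays like t^(-3/2), so P(lifetime > t) falls off like t^(-1/2)
        for t in (10, 100, 1000, 10000):
            print(f"P(lifetime > {t:5d}) = {(lifetimes > t).mean():.4f}")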

  4. GRD: An SPSS extension command for generating random data

    Directory of Open Access Journals (Sweden)

    Bradley Harding

    2014-09-01

    Full Text Available To master statistics and data analysis tools, it is necessary to understand a number of concepts, many of which are quite abstract. For example, sampling from a theoretical distribution can help individuals explore and understand randomness. Sampling can also be used to build exercises aimed at helping students master statistics. Here, we present GRD (Generator of Random Data), an extension command for SPSS (version 17 and above). With GRD, it is possible to get random data from a given distribution. In its simplest use, GRD will return a set of simulated data from a normal distribution. With subcommands to GRD, it is possible to get data from multiple groups, over multiple repeated measures, and with desired effect sizes. Group sizes can be equal or unequal. With further subcommands, it is possible to sample from any theoretical population (not simply the normal distribution), introduce non-homogeneous variances, fix or randomize subject effects, etc. Finally, GRD's generated data are in a format ready to be analyzed.

  5. The random walk model of intrafraction movement

    International Nuclear Information System (INIS)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-01-01

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model. (paper)

  6. The random walk model of intrafraction movement.

    Science.gov (United States)

    Ballhausen, H; Reiner, M; Kantz, S; Belka, C; Söhn, M

    2013-04-07

    The purpose of this paper is to understand intrafraction movement as a stochastic process driven by random external forces. The hypothetically proposed three-dimensional random walk model has significant impact on optimal PTV margins and offers a quantitatively correct explanation of experimental findings. Properties of the random walk are calculated from first principles, in particular fraction-average population density distributions for displacements along the principal axes. When substituted into the established optimal margin recipes these fraction-average distributions yield safety margins about 30% smaller as compared to the suggested values from end-of-fraction Gaussian fits. Stylized facts of a random walk are identified in clinical data, such as the increase of the standard deviation of displacements with the square root of time. Least squares errors in the comparison to experimental results are reduced by about 50% when accounting for non-Gaussian corrections from the random walk model.
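
    A stripped-down numerical version of the model is shown below: each fraction is a cumulative sum of independent Gaussian steps in three dimensions, the standard deviation of the displacement grows with the square root of time, and pooling all time points within the fractions gives the fraction-average distribution along one axis. The number of fractions, the step size, and the fraction length are assumed values, not clinical parameters from the paper.

        import numpy as np

        rng = np.random.default_rng(9)
        n_fractions, n_steps, step_sd = 500, 600, 0.05    # assumed: 600 steps per fraction, 0.05 mm steps

        # cumulative sum of independent Gaussian steps = a 3D random walk per fraction
        walks = np.cumsum(rng.normal(0.0, step_sd, size=(n_fractions, n_steps, 3)), axis=1)

        # stylized fact: the sd of the displacement grows with sqrt(time)
        for t in (50, 200, 600):
            print(f"t = {t:4d}   sd(x) = {walks[:, t - 1, 0].std():.3f}"
                  f"   sqrt(t)*step_sd = {np.sqrt(t) * step_sd:.3f}")

        # fraction-average population density along one principal axis (pooled over all time points)
        fraction_average_x = walks[:, :, 0].ravel()
        print("fraction-average sd(x):", fraction_average_x.std())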

  7. Critical current of the nonuniform Josephson transition at intergranular boundary with random dislocation distribution

    International Nuclear Information System (INIS)

    Mejlikhov, E.Z.; Farzetdinova, R.M.

    1997-01-01

    The critical current of an inhomogeneous intergranular Josephson transition is calculated under the assumption that superconductivity is suppressed by the local strains of randomly distributed boundary dislocations.

  8. Topology determines force distributions in one-dimensional random spring networks

    Science.gov (United States)

    Heidemann, Knut M.; Sageman-Furnas, Andrew O.; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F.; Wardetzky, Max

    2018-02-01

    Networks of elastic fibers are ubiquitous in biological systems and often provide mechanical stability to cells and tissues. Fiber-reinforced materials are also common in technology. An important characteristic of such materials is their resistance to failure under load. Rupture occurs when fibers break under excessive force and when that failure propagates. Therefore, it is crucial to understand force distributions. Force distributions within such networks are typically highly inhomogeneous and are not well understood. Here we construct a simple one-dimensional model system with periodic boundary conditions by randomly placing linear springs on a circle. We consider ensembles of such networks that consist of N nodes and have an average degree of connectivity z but vary in topology. Using a graph-theoretical approach that accounts for the full topology of each network in the ensemble, we show that, surprisingly, the force distributions can be fully characterized in terms of the parameters (N ,z ) . Despite the universal properties of such (N ,z ) ensembles, our analysis further reveals that a classical mean-field approach fails to capture force distributions correctly. We demonstrate that network topology is a crucial determinant of force distributions in elastic spring networks.

  9. High Voltage Distribution System (HVDS) as a better system compared to Low Voltage Distribution System (LVDS) applied at Medan city power network

    Science.gov (United States)

    Dinzi, R.; Hamonangan, TS; Fahmi, F.

    2018-02-01

    In the current distribution system, a large-capacity distribution transformer supplies loads at remote locations. The use of a 220/380 V network is nowadays less common compared to a 20 kV network. This results in losses due to a non-optimally placed distribution transformer that neglects the load locations, poor consumer voltage profiles, and large power losses along the carrier. This paper discusses how a High Voltage Distribution System (HVDS) can be a better system for distribution networks than the currently used Low Voltage Distribution System (LVDS). The proposed change to the new configuration is made by replacing a large-capacity distribution transformer with several smaller-capacity distribution transformers installed in positions closest to the loads. The use of a high voltage distribution system will result in better voltage profiles and fewer power losses. From the non-technical side, the annual savings and payback period of a high voltage distribution system are also advantages.

  10. A cloud based model to facilitate software development outsourcing to globally distributed locations

    OpenAIRE

    Hashmi, Sajid Ibrahim; Richardson, Ita

    2013-01-01

    peer-reviewed Outsourcing is an essential part of global software development and entails software development distributed across geographical borders. More specifically, it deals with software development teams dispersed across multiple geographical locations to carry out software development activities. By means of this business model, organizations expect to benefit from enhanced corporate value through advantages such as round the clock software development, availability of skills and ...

  11. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise.

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  12. Scaling characteristics of one-dimensional fractional diffusion processes in the presence of power-law distributed random noise

    Science.gov (United States)

    Nezhadhaghighi, Mohsen Ghasemi

    2017-08-01

    Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ -stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α . We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ -stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.

  13. (Non-) Gibbsianness and Phase Transitions in Random Lattice Spin Models

    NARCIS (Netherlands)

    Külske, C.

    1999-01-01

    We consider disordered lattice spin models with finite-volume Gibbs measures $\mu_\Lambda[\eta](d\sigma)$. Here $\sigma$ denotes a lattice spin variable and $\eta$ a lattice random variable with product distribution $P$ describing the quenched disorder of the model. We ask: when will the joint measures $\lim_{\Lambda\uparrow\mathbb{Z}^d} P(d\eta)\mu_\Lambda[\eta](d\sigma)$ be

  14. Wishart and anti-Wishart random matrices

    International Nuclear Information System (INIS)

    Janik, Romuald A; Nowak, Maciej A

    2003-01-01

    We provide a compact exact representation for the distribution of the matrix elements of the Wishart-type random matrices $A^\dagger A$, for any finite number of rows and columns of A, without any large N approximations. In particular, we treat the case when the Wishart-type random matrix contains redundant, non-random information, which is a new result. This representation is of interest for a procedure for reconstructing the redundant information hidden in Wishart matrices, with potential applications to numerous models based on biological, social and artificial intelligence networks.

  15. Liquid water breakthrough location distances on a gas diffusion layer of polymer electrolyte membrane fuel cells

    Science.gov (United States)

    Yu, Junliang; Froning, Dieter; Reimer, Uwe; Lehnert, Werner

    2018-06-01

    The lattice Boltzmann method is adopted to simulate the three-dimensional dynamic process of liquid water breaking through the gas diffusion layer (GDL) in a polymer electrolyte membrane fuel cell. 22 micro-structures of Toray GDL are built based on a stochastic geometry model. It is found that more than one breakthrough location forms randomly on the GDL surface. Breakthrough location distances (BLD) are analyzed statistically in two ways, and the distribution is evaluated by the Lilliefors test. It is concluded that the BLD can be described by a normal distribution with certain statistical characteristics. Information on the shortest neighbor breakthrough location distance can serve as input for cell-scale modeling setups in the field of fuel cell simulation.
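
    The statistical part of this workflow (nearest-neighbour breakthrough distances plus a Lilliefors test of normality) can be sketched with SciPy and statsmodels. The breakthrough coordinates below are synthetic stand-in points, so the test will not necessarily accept normality the way the paper's simulated BLD data do.

        import numpy as np
        from scipy.spatial import cKDTree
        from statsmodels.stats.diagnostic import lilliefors

        rng = np.random.default_rng(10)

        # stand-in data: breakthrough (x, y) locations on the GDL surface, in assumed units of mm
        points = rng.uniform(0.0, 1.0, size=(200, 2))

        # shortest neighbour breakthrough location distance for every breakthrough point
        dists, _ = cKDTree(points).query(points, k=2)
        nearest = dists[:, 1]                      # column 0 is the point itself (distance 0)

        stat, pval = lilliefors(nearest, dist='norm')
        print(f"mean BLD = {nearest.mean():.4f}, sd = {nearest.std():.4f}")
        print(f"Lilliefors statistic = {stat:.4f}, p-value = {pval:.4f}")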

  16. k-means algorithm and mixture distributions for locating faults in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Mora-Florez, J. [The Technological University of Pereira, La Julita, Ciudad Universitaria, Pereira, Risaralda (Colombia); Cormane-Angarita, J.; Ordonez-Plata, G. [The Industrial University of Santander (Colombia)

    2009-05-15

    Enhancement of power distribution system reliability requires a considerable investment in studies and equipment; however, not all utilities have the capability to spend the time and money to assume it. Therefore, any strategy that allows the improvement of reliability should be reflected directly in the reduction of the duration and frequency interruption indexes (SAIDI and SAIFI). In this paper, an alternative solution to the problem of power service continuity associated with fault location is presented. A methodology of statistical nature based on finite mixtures is proposed. A statistical model is obtained from the magnitude of the voltage sag registered during a fault event, along with the network parameters and topology. The objective is to offer an economical alternative of easy implementation for the development of strategies oriented to improve reliability by reducing restoration times in power distribution systems. In an application example in a power distribution system, the faulted zones were identified with low error rates. (author)
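
    The clustering step can be sketched with scikit-learn: voltage-sag magnitudes recorded at the substation are grouped either by k-means or by a Gaussian (finite) mixture, and a new sag is assigned to the most likely faulted zone. The single-feature data set and the zone-dependent sag depths below are invented for the illustration and are not the utility data used in the paper.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(11)

        # stand-in history: per-fault voltage-sag magnitude (p.u.) seen at the substation,
        # generated so that each of three network zones produces a characteristic sag depth
        zones = rng.integers(0, 3, size=300)
        sag_depth = np.array([0.35, 0.55, 0.80])[zones] + rng.normal(0, 0.04, 300)
        features = sag_depth.reshape(-1, 1)

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
        gm = GaussianMixture(n_components=3, random_state=0).fit(features)

        new_fault = np.array([[0.52]])             # sag magnitude of a newly recorded event
        print("k-means cluster for the new fault:", km.predict(new_fault)[0])
        print("mixture posterior probabilities: ", gm.predict_proba(new_fault).round(3))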

  17. Random skew plane partitions with a piecewise periodic back wall

    DEFF Research Database (Denmark)

    Boutillier, Cedric; Mkrtchyan, Sevak; Reshetikhin, Nicolai

    Random skew plane partitions of large size distributed according to an appropriately scaled Schur process develop limit shapes. In the present work we consider the limit of large random skew plane partitions where the inner boundary approaches a piecewise linear curve with non-lattice slopes. Muc...

  18. Probability calculus of fractional order and fractional Taylor's series application to Fokker-Planck equation and information of non-random functions

    International Nuclear Information System (INIS)

    Jumarie, Guy

    2009-01-01

    A probability distribution of fractional (or fractal) order is defined by the measure $\mu\{dx\} = p(x)(dx)^\alpha$, $0 < \alpha < 1$. Combining this definition with the fractional Taylor series $f(x+h) = E_\alpha(D_x^\alpha h^\alpha)f(x)$ provided by the modified Riemann-Liouville definition, one can expand a probability calculus parallel to the standard one. A Fourier transform of fractional order using the Mittag-Leffler function is introduced, together with its inversion formula, and it provides a suitable generalization of the characteristic function of fractal random variables. It appears that the state moments of fractional order are more especially relevant. The main properties of this fractional probability calculus are outlined; it is shown that it provides a sound approach to Fokker-Planck equations which are fractional in both space and time, and it provides new results in the information theory of non-random functions.

  19. ON THE ESTIMATION OF DISTANCE DISTRIBUTION FUNCTIONS FOR POINT PROCESSES AND RANDOM SETS

    Directory of Open Access Journals (Sweden)

    Dietrich Stoyan

    2011-05-01

    Full Text Available This paper discusses various estimators for the nearest neighbour distance distribution function D of a stationary point process and for the quadratic contact distribution function Hq of a stationary random closed set. It recommends the use of Hanisch's estimator of D, which is of Horvitz-Thompson type, and the minus-sampling estimator of Hq. This recommendation is based on simulations for Poisson processes and Boolean models.
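
    As a hedged companion sketch, the snippet below computes a naive empirical estimate of the nearest-neighbour distance distribution function D for a synthetic planar point pattern; unlike the Hanisch (Horvitz-Thompson-type) estimator recommended in the paper, it ignores edge effects.

        # Sketch: naive empirical estimate of the nearest-neighbour distance
        # distribution function D(r) for a planar point pattern (no edge correction).
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(2)
        points = rng.uniform(0.0, 1.0, size=(500, 2))   # synthetic pattern in the unit square

        tree = cKDTree(points)
        # k=2 because the nearest neighbour of each point is the point itself (distance 0).
        nn_dist = tree.query(points, k=2)[0][:, 1]

        def D_hat(r):
            """Fraction of points whose nearest neighbour lies within distance r."""
            return np.mean(nn_dist <= r)

        for r in (0.01, 0.02, 0.05):
            print(f"D({r}) ~ {D_hat(r):.3f}")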

  20. Field signatures of non-Fickian transport processes: transit time distributions, spatial correlations, reversibility and hydrogeophysical imaging

    Science.gov (United States)

    Le Borgne, T.; Kang, P. K.; Guihéneuf, N.; Shakas, A.; Bour, O.; Linde, N.; Dentz, M.

    2015-12-01

    Non-Fickian transport phenomena are observed in a wide range of scales across hydrological systems. They are generally manifested by a broad range of transit time distributions, as measured for instance in tracer breakthrough curves. However, similar transit time distributions may be caused by different origins, including broad velocity distributions, flow channeling or diffusive mass transfer [1,2]. The identification of these processes is critical for defining relevant transport models. How can we distinguish the different origins of non-Fickian transport in the field? In this presentation, we will review recent experimental developments to decipher the different causes of anomalous transport, based on tracer tests performed at different scales in cross-borehole and push-pull conditions, and time-lapse hydrogeophysical imaging of tracer motion [3,4]. References: [1] de Anna, P., T. Le Borgne, M. Dentz, A. M. Tartakovsky, D. Bolster, P. Davy (2013) Flow Intermittency, Dispersion and Correlated Continuous Time Random Walks in Porous Media, Phys. Rev. Lett., 110, 184502 [2] Le Borgne T., Dentz M., and Carrera J. (2008) Lagrangian Statistical Model for Transport in Highly Heterogeneous Velocity Fields, Phys. Rev. Lett., 101, 090601 [3] Kang, P. K., T. Le Borgne, M. Dentz, O. Bour, and R. Juanes (2015) Impact of velocity correlation and distribution on transport in fractured media: Field evidence and theoretical model, Water Resour. Res., 51, 940-959 [4] Dorn C., Linde N., Le Borgne T., O. Bour and L. Baron (2011) Single-hole GPR reflection imaging of solute transport in a granitic aquifer, Geophys. Res. Lett., Vol. 38, L08401

  1. A Fast Reactive Power Optimization in Distribution Network Based on Large Random Matrix Theory and Data Analysis

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2016-05-01

    Full Text Available In this paper, a reactive power optimization method based on historical data is investigated to solve the dynamic reactive power optimization problem in distribution networks. In order to reflect the variation of loads, network loads are represented in the form of a random matrix. Load similarity (LS) is defined to measure the degree of similarity between the loads on different days, and the calculation method of the load similarity of the load random matrix (LRM) is presented. By calculating the load similarity between the forecast random matrix and the random matrices of historical loads, the historical reactive power optimization dispatching scheme that best matches the forecast load can be found and reused for reactive power control. The differences in daily load curves between working days and weekends in different seasons are considered in the proposed method. The proposed method is tested on a standard 14-node distribution network with three different types of load. The computational results demonstrate that the proposed method for reactive power optimization is fast, feasible and effective in distribution networks.
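
    As a hedged sketch of the matching step only, the snippet below selects the historical day whose load matrix is most similar to a forecast matrix; the similarity measure (negative Frobenius distance) and all load values are illustrative assumptions, not the LS metric defined in the paper.

        # Sketch: pick the historical day whose load matrix most resembles the
        # forecast matrix, so its reactive-power dispatch scheme can be reused.
        import numpy as np

        rng = np.random.default_rng(3)
        n_nodes, n_hours = 14, 24
        historical = {f"day_{d}": rng.uniform(0.4, 1.0, size=(n_nodes, n_hours))
                      for d in range(30)}                 # 30 days of per-node hourly loads (p.u.)
        forecast = historical["day_7"] + rng.normal(0, 0.02, size=(n_nodes, n_hours))

        def load_similarity(a, b):
            """Higher is more similar; based on the Frobenius norm of the difference."""
            return -np.linalg.norm(a - b)

        best_day = max(historical, key=lambda d: load_similarity(historical[d], forecast))
        print("reuse reactive-power dispatch scheme of:", best_day)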

  2. Non-homogeneous Behaviour of the Spatial Distribution of Macrospicules

    Science.gov (United States)

    Gyenge, N.; Bennett, S.; Erdélyi, R.

    2015-03-01

    In this paper, the longitudinal and latitudinal spatial distribution of macrospicules is examined. We found a statistical relationship between the active longitude (determined by sunspot groups) and the longitudinal distribution of macrospicules. This distribution of macrospicules shows inhomogeneous and non-axisymmetric behaviour in the time interval between June 2010 and December 2012, covered by observations of the Solar Dynamics Observatory (SDO) satellite. The enhanced positions of the activity and their time variation have been calculated. The migration of the longitudinal distribution of macrospicules shows a behaviour similar to that of the sunspot groups.

  3. A New Distribution-Random Limit Normal Distribution

    OpenAIRE

    Gong, Xiaolin; Yang, Shuzhen

    2013-01-01

    This paper introduces a new distribution to improve tail risk modeling. Based on the classical normal distribution, we define a new distribution by a series of heat equations. Then, we use market data to verify our model.

  4. Non-periodic pseudo-random numbers used in Monte Carlo calculations

    Science.gov (United States)

    Barberis, Gaston E.

    2007-09-01

    The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because the common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which allows any calculation to be repeated systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for simple and more elaborate cases. Chemistry and information technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, and show that they lack periodicity up to 10^13 numbers and that they are not correlated.
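
    A minimal sketch of the idea (the logistic map in the fully chaotic regime r = 4) is shown below; the seed, the arcsine transform used to flatten the invariant density, and the sample size are illustrative choices, not taken from the paper.

        # Sketch: pseudo-random numbers from the logistic map x -> r*x*(1-x) with r = 4.
        # At r = 4 the iterates follow the invariant density 1/(pi*sqrt(x*(1-x)));
        # the arcsine transform below maps them to roughly uniform values on (0, 1).
        import math

        def logistic_uniform(n, x0=0.123456789, r=4.0):
            x = x0
            out = []
            for _ in range(n):
                x = r * x * (1.0 - x)                  # deterministic, non-periodic iteration
                out.append(2.0 / math.pi * math.asin(math.sqrt(x)))
            return out

        sample = logistic_uniform(10)
        print(sample)
        # Re-running with the same x0 reproduces the sequence exactly, which is what
        # allows Monte Carlo calculations to be repeated systematically.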

  5. Non-periodic pseudo-random numbers used in Monte Carlo calculations

    International Nuclear Information System (INIS)

    Barberis, Gaston E.

    2007-01-01

    The generation of pseudo-random numbers is one of the interesting problems in Monte Carlo simulations, mostly because the common computer generators produce periodic numbers. We used simple pseudo-random numbers generated with the simplest chaotic system, the logistic map, with excellent results. The numbers generated in this way are non-periodic, which we demonstrated for 10^13 numbers, and they are obtained in a deterministic way, which allows any calculation to be repeated systematically. Monte Carlo calculations are the ideal field in which to apply these numbers, and we did so for simple and more elaborate cases. Chemistry and information technology use this kind of simulation, and the application of these numbers to quantum Monte Carlo and cryptography is immediate. I present here the techniques to calculate, analyze and use these pseudo-random numbers, and show that they lack periodicity up to 10^13 numbers and that they are not correlated.

  6. Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting

    Directory of Open Access Journals (Sweden)

    Mohd Nizam Husen

    2016-11-01

    Full Text Available A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics here represent the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides a 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown to be 40% finer, with an execution time more than an order of magnitude faster than that of the conventional methods. These results are also backed up by theoretical analysis.
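
    As a hedged, simplified sketch of fingerprinting against per-location reference statistics (not the paper's method), the snippet below stores the mean and standard deviation of the RSS per Wi-Fi source at each calibration location and assigns an unknown sample to the reference class with the highest Gaussian log-likelihood; all RSS values are synthetic.

        # Sketch: location fingerprinting with per-location RSS statistics.
        import numpy as np

        rng = np.random.default_rng(4)
        true_means = {"loc_A": [-40, -70, -60], "loc_B": [-65, -45, -72], "loc_C": [-58, -63, -44]}

        # Calibration phase: collect repeated RSS samples (dBm) at each location.
        reference = {}
        for loc, mu in true_means.items():
            samples = rng.normal(mu, 3.0, size=(100, 3))
            reference[loc] = (samples.mean(axis=0), samples.std(axis=0) + 1e-6)

        def locate(rss):
            """Return the reference location that maximally supports the observed RSS."""
            def loglik(loc):
                mean, std = reference[loc]
                return -0.5 * np.sum(((rss - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2))
            return max(reference, key=loglik)

        observed = rng.normal(true_means["loc_B"], 3.0)   # spontaneous RSS at an unknown position
        print("estimated location:", locate(observed))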

  7. Cost Optimisation in Freight Distribution with Cross-Docking: N-Echelon Location Routing Problem

    Directory of Open Access Journals (Sweden)

    Jesus Gonzalez-Feliu

    2012-03-01

    Full Text Available Freight transportation constitutes one of the main activities that influence the economy and society, as it assures a vital link between suppliers and customers and represents a major source of employment. Multi-echelon distribution is one of the most common strategies adopted by transportation companies with the aim of reducing costs. Although vehicle routing problems are very common in operational research, they are essentially related to single-echelon cases. This paper presents the main concepts of multi-echelon distribution with cross-docks and a unified notation for the N-echelon location routing problem. A literature review is also presented, in order to list the main problems and methods that can be helpful for scientists and transportation practitioners.

  8. Non-Gaussian Velocity Distributions in Solar Flares from Extreme Ultraviolet Lines: A Possible Diagnostic of Ion Acceleration

    International Nuclear Information System (INIS)

    Jeffrey, Natasha L. S.; Fletcher, Lyndsay; Labrosse, Nicolas

    2017-01-01

    In a solar flare, a large fraction of the magnetic energy released is converted rapidly to the kinetic energy of non-thermal particles and bulk plasma motion. This will likely result in non-equilibrium particle distributions and turbulent plasma conditions. We investigate this by analyzing the profiles of high temperature extreme ultraviolet emission lines from a major flare (SOL2014-03-29T17:44) observed by the EUV Imaging Spectrometer (EIS) on Hinode. We find that in many locations the line profiles are non-Gaussian, consistent with a kappa distribution of emitting ions with properties that vary in space and time. At the flare footpoints, close to sites of hard X-ray emission from non-thermal electrons, the κ index for the Fe XVI 262.976 Å line at 3 MK takes values of 3–5. In the corona, close to a low-energy HXR source, the Fe XXIII 263.760 Å line at 15 MK shows κ values of typically 4–7. The observed trends in the κ parameter show that we are most likely detecting the properties of the ion population rather than any instrumental effects. We calculate that a non-thermal ion population could exist if locally accelerated on timescales ≤0.1 s. However, observations of net redshifts in the lines also imply the presence of plasma downflows, which could lead to bulk turbulence, with increased non-Gaussianity in cooler regions. Both interpretations have important implications for theories of solar flare particle acceleration.

  9. Layered Fiberconcrete with Non-Homogeneous Fibers Distribution

    OpenAIRE

    Lūsis, V; Krasņikovs, A

    2013-01-01

    The aim of the present research is to create a fiberconcrete construction with a non-homogeneous distribution of fibers in it. Traditionally, fibers are homogeneously dispersed in concrete. At the same time, in many situations fiberconcretes with homogeneously dispersed fibers are not optimal (the majority of the added fibers do not participate in the load-bearing process).

  10. Quantized vortices in the ideal bose gas: a physical realization of random polynomials.

    Science.gov (United States)

    Castin, Yvan; Hadzibabic, Zoran; Stock, Sabine; Dalibard, Jean; Stringari, Sandro

    2006-02-03

    We propose a physical system allowing one to experimentally observe the distribution of the complex zeros of a random polynomial. We consider a degenerate, rotating, quasi-ideal atomic Bose gas prepared in the lowest Landau level. Thermal fluctuations provide the randomness of the bosonic field and of the locations of the vortex cores. These vortices can be mapped to zeros of random polynomials, and observed in the density profile of the gas.

  11. Strong result for real zeros of random algebraic polynomials

    Directory of Open Access Journals (Sweden)

    T. Uno

    2001-01-01

    Full Text Available An estimate is given for the lower bound of real zeros of random algebraic polynomials whose coefficients are non-identically distributed dependent Gaussian random variables. Moreover, our estimated measure of the exceptional set, which is independent of the degree of the polynomials, tends to zero as the degree of the polynomial tends to infinity.

  12. Distribution and correlates of non-high-density lipoprotein cholesterol and triglycerides in Lebanese school children.

    Science.gov (United States)

    Gannagé-Yared, Marie-Hélène; Farah, Vanessa; Chahine, Elise; Balech, Nicole; Ibrahim, Toni; Asmar, Nadia; Barakett-Hamadé, Vanda; Jambart, Selim

    2016-01-01

    The prevalence of dyslipidemia in pediatric Middle-Eastern populations is unknown. Our study aims to investigate the distribution and correlates of non-high-density lipoprotein cholesterol (non-HDL-C) and triglycerides among Lebanese school children. A total of 969 subjects aged 8-18 years were included in the study (505 boys and 464 girls). Recruitment was done from 10 schools located in the Greater Beirut and Mount-Lebanon areas. Non-fasting total cholesterol, triglycerides, and HDL-cholesterol (HDL-C) were measured, and non-HDL-C was calculated. Schools were categorized into 3 socioeconomic statuses (SESs; low, middle, and high). In the overall population, the prevalence of high non-HDL-C (>3.8 mmol/L), very high non-HDL-C (>4.9 mmol/L), and high triglycerides (>1.5 mmol/L) is respectively 9.2%, 1.24%, and 26.6%. There is no significant gender difference for non-HDL-C or triglycerides. Non-HDL-C and triglycerides are inversely correlated with age in girls, and triglycerides are higher in children from lower SES schools. After adjustment for age and body mass index (BMI), testosterone is inversely associated with triglycerides in boys, and triglycerides are independently associated with BMI and schools' SES in both girls and boys. This study confirms, in our population, the association between obesity and both high non-HDL-C and triglycerides, and between high triglycerides and low SES. Copyright © 2016 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  13. Impact of distributed generation on distribution investment deferral

    International Nuclear Information System (INIS)

    Mendez, V.H.; Rivier, J.; Fuente, J.I. de la; Gomez, T.; Arceluz, J.; Marin, J.; Madurga, A.

    2006-01-01

    The amount of distributed generation (DG) is increasing worldwide, and it is foreseen that in the future it will play an important role in electrical energy systems. DG is located in distribution networks close to consumers or even on the consumers' side of the meter. Therefore, the net demand to be supplied through transmission and distribution networks may decrease, allowing reinforcement of existing networks to be postponed. This paper proposes a method to assess the impact of DG on distribution network investment deferral in the long term. Due to the randomness of the variables that have an impact on this matter (load demand patterns, DG hourly energy production, DG availability, etc.), a probabilistic approach using a Monte Carlo simulation is adopted. Several scenarios characterized by different DG penetration and concentration levels, and DG technology mixes, are analyzed. Results show that, once initial network reinforcements for DG connection have been accomplished, in the medium and long term DG can defer feeder and/or transformer reinforcements. (author)

  14. Random errors in the magnetic field coefficients of superconducting quadrupole magnets

    International Nuclear Information System (INIS)

    Herrera, J.; Hogue, R.; Prodell, A.; Thompson, P.; Wanderer, P.; Willen, E.

    1987-01-01

    The random multipole errors of superconducting quadrupoles are studied. For analyzing the multipoles which arise due to random variations in the size and locations of the current blocks, a model is outlined which gives the fractional field coefficients from the current distributions. With this approach, based on the symmetries of the quadrupole magnet, estimates are obtained of the random multipole errors for the arc quadrupoles envisioned for the Relativistic Heavy Ion Collider and for a single-layer quadrupole proposed for the Superconducting Super Collider

  15. Motile male gametes of the araphid diatom Tabularia fasciculata search randomly for mates.

    Directory of Open Access Journals (Sweden)

    Robyn Edgar

    Full Text Available Sexuality in the marine araphid diatom Tabularia involves an unusual type of gamete, not only among diatoms but possibly in all of nature. The non-flagellated male gamete is free and vigorously motile, propelled by pseudopodia. However, the cues (if any) in their search for compatible female gametes and the general search patterns used to locate them are unknown. We tracked and compared male gamete movements in the presence and absence of receptive female gametes. Path linearity of male movement was not affected by the presence of female gametes. Male gametes did not move towards female gametes regardless of their proximity to each other, suggesting that the detection range for a compatible mate is very small compared to known algal examples (mostly spermatozoids) and that mate recognition requires (near) contact with a female gamete. We therefore investigated how male gametes move to gain insight into their search strategy and found that it was consistent with the predictions of a random-walk model with changes in direction drawn from a uniform distribution. We further investigated the type of random walk by determining the best-fit distribution on the tail of the move length distribution and found it to be consistent with a truncated power law distribution with an exponent of 2.34. Although consistent with a Lévy walk search pattern, the range of move lengths in the tail was too narrow for Lévy properties to emerge, and so the movement would be best described as Brownian motion. This is somewhat surprising because female gametes were often outnumbered by male gametes, contrary to the assumption that a Brownian search mode is most optimal when the target resource is abundant. This is also the first mathematically analysed search pattern of a non-flagellated protistan gamete, supporting the notion that principles of Brownian motion have wide application in biology.
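
    As a hedged illustration of the tail-fitting step only (not the authors' analysis), the sketch below applies the standard continuous maximum-likelihood estimator for a power-law exponent, alpha = 1 + n / sum(ln(x_i / x_min)), to synthetic move lengths; the cutoff x_min, the sample and the true exponent are invented.

        # Sketch: ML estimate of a power-law tail exponent for move lengths above x_min.
        import numpy as np

        rng = np.random.default_rng(5)
        alpha_true, x_min = 2.34, 1.0
        u = rng.uniform(size=5000)
        moves = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling of a Pareto tail

        tail = moves[moves >= x_min]
        alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / x_min))
        print(f"estimated exponent: {alpha_hat:.2f}")
        # Choosing x_min and testing against a *truncated* power law are separate steps
        # not shown here.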

  16. Time distributions of solar energetic particle events: Are SEPEs really random?

    Science.gov (United States)

    Jiggens, P. T. A.; Gabriel, S. B.

    2009-10-01

    Solar energetic particle events (SEPEs) can exhibit flux increases of several orders of magnitude over background levels and have always been considered to be random in nature in statistical models with no dependence of any one event on the occurrence of previous events. We examine whether this assumption of randomness in time is correct. Engineering modeling of SEPEs is important to enable reliable and efficient design of both Earth-orbiting and interplanetary spacecraft and future manned missions to Mars and the Moon. All existing engineering models assume that the frequency of SEPEs follows a Poisson process. We present analysis of the event waiting times using alternative distributions described by Lévy and time-dependent Poisson processes and compared these with the usual Poisson distribution. The results show significant deviation from a Poisson process and indicate that the underlying physical processes might be more closely related to a Lévy-type process, suggesting that there is some inherent “memory” in the system. Inherent Poisson assumptions of stationarity and event independence are investigated, and it appears that they do not hold and can be dependent upon the event definition used. SEPEs appear to have some memory indicating that events are not completely random with activity levels varying even during solar active periods and are characterized by clusters of events. This could have significant ramifications for engineering models of the SEP environment, and it is recommended that current statistical engineering models of the SEP environment should be modified to incorporate long-term event dependency and short-term system memory.
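
    As a simple, hedged illustration of how the Poisson assumption can be probed (not the method or data of the study), the sketch below compares the inter-event waiting times of a synthetic clustered event catalogue with the exponential distribution expected under a homogeneous Poisson process, using a Kolmogorov-Smirnov test from scipy.

        # Sketch: under a homogeneous Poisson process, waiting times are exponential;
        # a KS test against the fitted exponential gives a first indication of
        # clustering or "memory". Event times here are synthetic.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        event_times = np.cumsum(rng.weibull(0.7, size=300))   # clustered (non-Poisson) toy catalogue
        waits = np.diff(event_times)

        scale = waits.mean()                                  # fitted exponential scale (loc fixed at 0)
        stat, p_value = stats.kstest(waits, 'expon', args=(0, scale))
        print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")
        # Small p-values indicate the waiting times deviate from an exponential law,
        # i.e. the events are not consistent with a simple Poisson process.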

  17. Joint probability distributions for a class of non-Markovian processes.

    Science.gov (United States)

    Baule, A; Friedrich, R

    2005-02-01

    We consider joint probability distributions for the class of coupled Langevin equations introduced by Fogedby [H. C. Fogedby, Phys. Rev. E 50, 1657 (1994)]. We generalize well-known results for the single-time probability distributions to the case of N -time joint probability distributions. It is shown that these probability distribution functions can be obtained by an integral transform from distributions of a Markovian process. The integral kernel obeys a partial differential equation with fractional time derivatives reflecting the non-Markovian character of the process.

  18. Vocal activities reflect the temporal distribution of bottlenose dolphin social and non-social activity in a zoological park.

    Science.gov (United States)

    Lima, Alice; Lemasson, Alban; Boye, Martin; Hausberger, Martine

    2017-12-01

    Under natural conditions bottlenose dolphins (Tursiops truncatus) spend their time mostly feeding and then travelling, socializing, or resting. These activities are not randomly distributed, with feeding being higher in early morning and late afternoon. Social activities and vocal behavior seem to be very important in dolphin daily activity. This study aimed to describe the activity time-budget and its relation to vocal behavior for dolphins in a zoological park. We recorded behaviors and vocalizations of six dolphins over 2 months. All subjects performed more non-agonistic social interactions and play in the morning than in the afternoon. The different categories of vocalizations were distributed non-randomly throughout the day, with more chirps in the afternoon, when the animals were "less social." The most striking result was the strong correlation between activities and the categories of vocalizations produced. The results confirm the association between burst pulses and whistles with social activities, but also reveal that both are also associated with solitary play. More chirps were produced when dolphins were engaged in socio-sexual behaviors, emphasizing the need for further questioning about the function of this vocal category. This study reveals that: (i) in a group kept in zoological management, social activities are mostly present in the morning; and (ii) the acoustic signals produced by dolphins may give a reliable representation of their current activities. While more studies on the context of signal production are needed, our findings provide a useful tool for understanding free ranging dolphin behavior when they are not visible. © 2017 Wiley Periodicals, Inc.

  19. Scattering by a slab containing randomly located cylinders: comparison between radiative transfer and electromagnetic simulation.

    Science.gov (United States)

    Roux, L; Mareschal, P; Vukadinovic, N; Thibaud, J B; Greffet, J J

    2001-02-01

    This study is devoted to the examination of scattering of waves by a slab containing randomly located cylinders. For the first time to our knowledge, the complete transmission problem has been solved numerically. We have compared the radiative transfer theory with a numerical solution of the wave equation. We discuss the coherent effects, such as forward-scattering dip and backscattering enhancement. It is seen that the radiative transfer equation can be used with great accuracy even for optically thin systems whose geometric thickness is comparable with the wavelength. We have also shown the presence of dependent scattering.

  20. INFLUENCE OF NON-PERFORATED SCREEN LOCATION ON HEAT TRANSFER PROCESS IN BUILDING ENCLOSING PARTS

    Directory of Open Access Journals (Sweden)

    V. D. Sizov

    2017-01-01

    Full Text Available A vapor-proof barrier is recommended on the internal side of the heat-insulation system in multi-layer building enclosing parts in order to protect the heat-insulation layer against humidification, because the relative humidity of the internal air is generally higher than that of the external air, so that water vapor diffuses from the premises outwards. With a barrier of high vapor permeability, part of the moisture can accumulate in the structure and the heat-insulation core, and the difference between the actual and the maximum possible partial pressures leads to condensate formation. In order to improve the thermal properties of enclosing parts, it becomes necessary to create a vapor-proof protection screen. This corresponds to the design of a panel with a vapor-proof screen in the form of non-perforated aluminium foil. Such a screen, located at the internal panel layer, prevents penetration of water vapor from the premises into the enclosing part and the heat-insulation layer. In this case, condensation zones and, consequently, moistening can occur in some layers of the enclosing parts, depending on their thermal and physical characteristics. The paper contains a calculation of the thermal and moisture regime of enclosing parts with a vapor-proof layer (non-perforated aluminium foil) located in the enclosing-part core between various layers. An analysis of the thermal and moisture regime diagrams for a multi-layer external enclosing part demonstrates that placing the non-perforated screen (aluminium foil) between the internal concrete layer and the perforated heat-insulation layer is considered the most rational option. At the same time, the other screens between separate layers are perforated.

  1. Non-surgical treatment of lateral epicondylitis: a systematic review of randomized controlled trials.

    Science.gov (United States)

    Sims, Susan E G; Miller, Katherine; Elfar, John C; Hammert, Warren C

    2014-12-01

    Non-surgical approaches to treatment of lateral epicondylitis are numerous. The aim of this systematic review is to examine randomized, controlled trials of these treatments. Numerous databases were systematically searched from earliest records to February 2013. Search terms included "lateral epicondylitis," "lateral elbow pain," "tennis elbow," "lateral epicondylalgia," and "elbow tendinopathy" combined with "randomized controlled trial." Two reviewers examined the literature for eligibility via article abstract and full text. Fifty-eight articles met eligibility criteria: (1) a target population of patients with symptoms of lateral epicondylitis; (2) evaluation of treatment of lateral epicondylitis with the following non-surgical techniques: corticosteroid injection, injection technique, iontophoresis, botulinum toxin A injection, prolotherapy, platelet-rich plasma or autologous blood injection, bracing, physical therapy, shockwave therapy, or laser therapy; and (3) a randomized controlled trial design. Lateral epicondylitis is a condition that is usually self-limited. There may be a short-term pain relief advantage found with the application of corticosteroids, but no demonstrable long-term pain relief. Injection of botulinum toxin A and prolotherapy are superior to placebo but not to corticosteroids, and botulinum toxin A is likely to produce concomitant extensor weakness. Platelet-rich plasma or autologous blood injections have been found to be both more and less effective than corticosteroid injections. Non-invasive treatment methods such as bracing, physical therapy, and extracorporeal shockwave therapy do not appear to provide definitive benefit regarding pain relief. Some studies of low-level laser therapy show superiority to placebo whereas others do not. There are multiple randomized controlled trials for non-surgical management of lateral epicondylitis, but the existing literature does not provide conclusive evidence that there is one preferred method

  2. Leak Signature Space: An Original Representation for Robust Leak Location in Water Distribution Networks

    Directory of Open Access Journals (Sweden)

    Myrna V. Casillas

    2015-03-01

    Full Text Available In this paper, an original model-based scheme for leak location using pressure sensors in water distribution networks is introduced. The proposed approach is based on a new representation called the Leak Signature Space (LSS) that associates a specific signature with each leak location, the signature being minimally affected by leak magnitude. The LSS considers a linear model approximation of the relation between pressure residuals and leaks that is projected onto a selected hyperplane. This new approach allows the location of a given leak to be inferred by comparing the position of its signature with the other leak signatures. Moreover, two ways of improving the method's robustness are proposed: first, by associating a domain of influence with each signature and, second, through a time-horizon analysis. The efficiency of the method is highlighted by means of a real network using several scenarios involving different numbers of sensors and considering the presence of noise in the measurements.
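
    The following is only a loose, hedged reading of the idea of magnitude-insensitive signatures (normalised residual directions compared by similarity); the sensitivity matrix, noise level and node indices are invented, and this is not the LSS implementation of the paper.

        # Sketch: magnitude-insensitive leak signatures from pressure residuals.
        # Each candidate leak node has a residual direction (column of a sensitivity
        # matrix); normalising both the observed residual and the signatures makes
        # the comparison largely independent of leak magnitude.
        import numpy as np

        rng = np.random.default_rng(7)
        n_sensors, n_nodes = 5, 20
        S = rng.normal(size=(n_sensors, n_nodes))            # hypothetical residual sensitivities

        signatures = S / np.linalg.norm(S, axis=0)           # unit-norm signature per candidate leak node

        true_node, magnitude = 13, 3.7
        residual = magnitude * S[:, true_node] + rng.normal(0, 0.05, size=n_sensors)
        residual /= np.linalg.norm(residual)

        scores = signatures.T @ residual                     # cosine similarity with each signature
        print("estimated leak node:", int(np.argmax(scores)))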

  3. Locations of Sampling Stations for Water Quality Monitoring in Water Distribution Networks.

    Science.gov (United States)

    Rathi, Shweta; Gupta, Rajesh

    2014-04-01

    Water quality must be monitored at salient locations in water distribution networks (WDNs) to assure the safe quality of the water supplied to consumers. Such monitoring stations (MSs) provide warning against any accidental contamination. Various objectives, such as demand coverage, time to detection, volume of water contaminated before detection, extent of contamination, expected population affected prior to detection, detection likelihood and others, have been considered independently or jointly in determining the optimal number and location of MSs in WDNs. "Demand coverage", defined as the percentage of network demand monitored by a particular monitoring station, is a simple measure for locating MSs. Several methods based on the formulation of a coverage matrix using pre-specified coverage criteria and optimization have been suggested. A coverage criterion is defined as some minimum percentage of the total flow received at a monitoring station that must have passed through an upstream node for that node to be counted as covered by the station. The number of monitoring stations increases with the value of the coverage criterion, and the design of the monitoring system therefore becomes subjective. A simple methodology is proposed herein which iteratively selects MSs in order of priority to achieve a targeted demand coverage. The proposed methodology provided the same number and locations of MSs for an illustrative network as an optimization method did. Further, the proposed method is simple and avoids the subjectivity that could arise from the consideration of coverage criteria. The application of the methodology is also shown on the WDN of the Dharampeth zone (Nagpur city WDN in Maharashtra, India), which has 285 nodes and 367 pipes.
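
    As a rough illustration of priority-wise selection (not the paper's algorithm or data), the sketch below greedily adds monitoring stations until a target fraction of the total demand is covered; the node demands and coverage sets are invented.

        # Sketch: iteratively pick monitoring stations until a target fraction of the
        # network demand is covered. covers[s] is the set of nodes whose flow would be
        # observed at station s; demands are nodal demands. Both are made up here.
        demands = {"n1": 40, "n2": 25, "n3": 10, "n4": 15, "n5": 30, "n6": 20}
        covers = {
            "n1": {"n1", "n2", "n3"},
            "n4": {"n3", "n4"},
            "n5": {"n2", "n5"},
            "n6": {"n5", "n6"},
        }
        target = 0.90 * sum(demands.values())

        selected, covered = [], set()
        while sum(demands[n] for n in covered) < target and len(selected) < len(covers):
            # Priority: the candidate adding the largest uncovered demand.
            best = max(covers, key=lambda s: sum(demands[n] for n in covers[s] - covered))
            if not covers[best] - covered:
                break                                        # no further improvement possible
            selected.append(best)
            covered |= covers[best]

        print("monitoring stations:", selected)
        print("demand coverage:", sum(demands[n] for n in covered) / sum(demands.values()))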

  4. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated. Distribution types as required for performance assessment modeling of geologic nuclear waste disposal are provided. These are: - Uniform, - Log-uniform (base 10 or natural), - Normal, - Lognormal (base 10 or natural), - Exponential, - Bernoulli, - User defined continuous distribution. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions, whose usual definition has an infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
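
    As a hedged sketch of the general approach (not the BWIP code itself), the snippet below draws uniforms from a linear congruential generator and transforms them into exponential and log-uniform samples; the LCG constants and distribution parameters are illustrative.

        # Sketch: a linear congruential generator supplies uniforms on (0, 1), and
        # simple transforms turn them into other distribution types.
        import math

        def lcg(seed, n, a=1664525, c=1013904223, m=2**32):   # illustrative LCG constants
            x = seed
            for _ in range(n):
                x = (a * x + c) % m
                yield (x + 0.5) / m                           # uniform in (0, 1)

        def exponential(u, rate=1.0):
            return -math.log(1.0 - u) / rate                  # inverse-transform sampling

        def log_uniform(u, low=1e-3, high=1e2):
            return 10.0 ** (math.log10(low) + u * (math.log10(high) - math.log10(low)))

        uniforms = list(lcg(seed=12345, n=5))
        print([round(exponential(u, rate=0.5), 3) for u in uniforms])
        print([round(log_uniform(u), 5) for u in uniforms])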

  5. Characterisation of random Gaussian and non-Gaussian stress processes in terms of extreme responses

    Directory of Open Access Journals (Sweden)

    Colin Bruno

    2015-01-01

    Full Text Available In the field of military land vehicles, the random vibration processes generated by all-terrain wheeled vehicles in motion are not classical stochastic processes of a stationary and Gaussian nature. The non-stationarity induced by the variability of the vehicle speed is not a major difficulty, because the designer can keep good control over the vehicle speed by characterising the histogram of the instantaneous speed of the vehicle in an operational situation. Beyond this non-stationarity problem, the main difficulty clearly lies in the fact that the random processes are not Gaussian; they are generated mainly by the non-linear behaviour of the undercarriage and the frequent occurrence of shocks caused by the roughness of the terrain. This non-Gaussian nature is expressed in particular by very high flattening (kurtosis) levels, which can affect the design of structures under extreme stresses conventionally derived by spectral approaches that are inherent to Gaussian processes and based essentially on the spectral moments of the stress processes. Because of these technical considerations, the techniques for characterising the random excitation processes generated by this type of carrier need to be changed, and innovative characterisation methods based on time-domain rather than spectral-domain approaches are proposed, as described in the body of the text.

  6. Analysis of the spatial distribution of infant mortality by cause of death in Austria in 1984 to 2006

    Directory of Open Access Journals (Sweden)

    Heinzl Harald

    2008-05-01

    Full Text Available Abstract Background In Austria, over the last 20 years infant mortality declined from 11.2 per 1,000 live births (1985) to 4.7 per 1,000 (1997) but has remained rather constant since then. In addition to this time trend, we already reported a non-random spatial distribution of infant mortality rates in a recent study covering the time period 1984 to 2002. The present study includes four additional years and now covers about 1.9 million individual birth certificates. It aims to elucidate the observed non-random spatial distribution in more detail. We split infant mortality into six groups according to the underlying cause of death. The underlying spatial distribution of standardized mortality ratios (SMRs) is estimated by univariate models as well as by two models incorporating all six groups simultaneously. Results We observe strong correlations between the individual spatial patterns of SMRs except for "Sudden Infant Death Syndrome" and, to some extent, for "Peripartal Problems". The spatial distribution of SMRs is non-random, with an area of decreased risk in the South-East of Austria. The group "Sudden Infant Death Syndrome" clearly, and the group "Peripartal Problems" slightly, show deviations from the common pattern. When comparing univariate and multivariate SMR estimates, we observe that the resulting spatial distributions are very similar. Conclusion We observe different non-random spatial distributions of infant mortality rates when grouped by cause of death. The models applied were based on individual data, thereby avoiding ecological regression bias. The estimated spatial distributions do not substantially depend on the employed estimation method. The observed non-random spatial patterns of Austrian infant mortality remain ambiguous.

  7. Non-equilibrium work distribution for interacting colloidal particles under friction

    International Nuclear Information System (INIS)

    Gomez-Solano, Juan Ruben; July, Christoph; Mehl, Jakob; Bechinger, Clemens

    2015-01-01

    We experimentally investigate the non-equilibrium steady-state distribution of the work done by an external force on a mesoscopic system with many coupled degrees of freedom: a colloidal crystal mechanically driven across a commensurate periodic light field. Since this system mimics the spatiotemporal dynamics of a crystalline surface moving on a corrugated substrate, our results show general properties of the work distribution for atomically flat surfaces undergoing friction. We address the role of several parameters which can influence the shape of the work distribution, e.g. the number of particles used to locally probe the properties of the system and the time interval to measure the work. We find that, when tuning the control parameters to induce particle depinning from the substrate, there is an abrupt change of the shape of the work distribution. While in the completely static and sliding friction regimes the work distribution is Gaussian, non-Gaussian tails show up due to the spatiotemporal heterogeneity of the particle dynamics during the transition between these two regimes. (paper)

  8. Suppression of thermal noise in a non-Markovian random velocity field

    International Nuclear Information System (INIS)

    Ueda, Masahiko

    2016-01-01

    We study the diffusion of Brownian particles in a Gaussian random velocity field with short memory. By extending the derivation of an effective Fokker–Planck equation for the Langevin equation with weakly colored noise to a random velocity-field problem, we find that the effect of thermal noise on particles is suppressed by the existence of memory. We also find that the renormalization effect for the relative diffusion of two particles is stronger than that for single-particle diffusion. The results are compared with those of molecular dynamics simulations. (paper: classical statistical mechanics, equilibrium and non-equilibrium)

  9. Load flow analysis for determining the location of NPP power distribution in West Kalimantan

    International Nuclear Information System (INIS)

    Citra Candranurani; Rizki Finnansyah Setya Budi; Sahala M Lumbanraja

    2015-01-01

    An electricity crisis has occurred in West Kalimantan (Kalbar) because the power plant capacity is almost equal to the peak load. The system will experience a shortfall if any plant is out of operation, since there is no reserve. The electricity planning policy until 2022 is to replace diesel power plants with steam power plants. For long-term planning, new and renewable energy, such as NPP utilization, is required in order to reduce dependency on fossil fuel consumption. The purpose of this study was to determine the optimum location for NPP power distribution in order to prepare the electricity infrastructure. The load flow calculation in this study uses the ETAP 12.5 software. The NPP is planned to supply the base load, so the optimum capacity factor is above 80 %. The results show that there are three locations where the NPP can generate over 80 % of its capacity, namely the Mempawah, Singkawang and Sambas Substations. The most optimal location is the Mempawah Substation, with a capacity factor of 83.5 %. All three substations are located onshore, in line with one requirement for NPP construction, namely the availability of cooling water. (author)

  10. The Location Dynamics of Firms in Transitional Shanghai, 1978-2005

    Directory of Open Access Journals (Sweden)

    Bo Qin

    2012-01-01

    Full Text Available China's economic reform, started in 1978, has brought profound changes to firms by transforming the state-owned enterprises and by encouraging the growth of the non-state business sector. These changes have been accompanied by broader institutional changes and economic restructuring in the cities, especially in the larger ones. This paper focuses on the changing spatial distribution patterns and the underlying location factors of firms in different sectors within Shanghai, one of China's largest and most dynamic cities. The central research question is: do the rapidly changing spatial patterns of corporate activities within Shanghai since the onset of China's economic reform reflect the influence of market forces? Data were collected from the Shanghai Administration of Industry and Commerce. Both GIS mapping and statistics (i.e., Moran's Index and density gradient) were used to assess the spatial distribution pattern of firms in Shanghai. An empirical model derived from neo-classical location theory is employed to examine the location factors of firms in different sectors. The findings of the paper indicate that the spatial distribution and location factors of firms in Shanghai demonstrate the city's unique urban restructuring process, which is closely related to the city's specific economic stage and unique "transitional" characteristics. However, market forces do play an increasingly important role in firms' location-choice behavior in Shanghai. This study contributes to the understanding of firm location dynamics in post-socialist cities.

  11. Fatigue Reliability under Random Loads

    DEFF Research Database (Denmark)

    Talreja, R.

    1979-01-01

    We consider the problem of estimating the probability of survival (non-failure) and the probability of safe operation (strength greater than a limiting value) of structures subjected to random loads. These probabilities are formulated in terms of the probability distributions of the loads...... propagation stage. The consequences of this behaviour on the fatigue reliability are discussed....

  12. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
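
    The record describes a full Bayesian model; as a hedged sketch of just the transformation idea, the snippet below Box-Cox-transforms a synthetic, skewed set of study-level effect estimates with scipy and back-transforms median and quartile summaries. The data, the lognormal choice and the summary steps are assumptions for illustration only.

        # Sketch of the transformation step only (the paper embeds it in a Bayesian
        # random-effects model): Box-Cox-transform skewed, positive effect estimates,
        # summarise on the transformed scale, then back-transform the summaries.
        import numpy as np
        from scipy import stats
        from scipy.special import inv_boxcox

        rng = np.random.default_rng(8)
        effects = rng.lognormal(mean=0.3, sigma=0.6, size=25)   # synthetic skewed study-level estimates

        transformed, lam = stats.boxcox(effects)                # lambda chosen by maximum likelihood
        q25, q50, q75 = np.percentile(transformed, [25, 50, 75])

        print(f"Box-Cox lambda: {lam:.2f}")
        print("overall median (original scale):", round(inv_boxcox(q50, lam), 3))
        print("interquartile range (original scale):",
              [round(inv_boxcox(q, lam), 3) for q in (q25, q75)])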

  13. A random effects meta-analysis model with Box-Cox transformation

    Directory of Open Access Journals (Sweden)

    Yusuke Yamaguchi

    2017-07-01

    Full Text Available Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise an overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied for any kind of variable once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and

  14. Potential fluctuations due to randomly distributed charges at the semiconductor-insulator interface in MIS-structures

    International Nuclear Information System (INIS)

    Yanchev, I.

    2003-01-01

    A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS-structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential to which it leads, in contrast to the previously known correlation functions, which lead to a divergent dispersion. The dispersion, an important characteristic of the random potential distribution that determines the amplitude of the potential fluctuations, is calculated.

  15. Potential fluctuations due to randomly distributed charges at the semiconductor-insulator interface in MIS-structures

    CERN Document Server

    Yanchev, I

    2003-01-01

    A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS-structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential to which it leads, in contrast to the previously known correlation functions, which lead to a divergent dispersion. The dispersion, an important characteristic of the random potential distribution that determines the amplitude of the potential fluctuations, is calculated.

  16. Potential fluctuations due to randomly distributed charges at the semiconductor-insulator interface in MIS-structures

    Energy Technology Data Exchange (ETDEWEB)

    Yanchev, I

    2003-07-01

    A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS-structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential to which it leads, in contrast to the previously known correlation functions, which lead to a divergent dispersion. The dispersion, an important characteristic of the random potential distribution that determines the amplitude of the potential fluctuations, is calculated.

  17. Statistical properties of random clique networks

    Science.gov (United States)

    Ding, Yi-Min; Meng, Jun; Fan, Jing-Fang; Ye, Fang-Fu; Chen, Xiao-Song

    2017-10-01

    In this paper, a random clique network model is introduced to mimic the large clustering coefficient and the modular structure that exist in many real complex networks, such as social networks, artificial networks, and protein interaction networks; it combines the random selection rule of the Erdős and Rényi (ER) model with the concept of cliques. We find that random clique networks having a small average degree differ from the ER network in that they have a large clustering coefficient and a power-law clustering spectrum, while networks having a high average degree have properties similar to those of the ER model. In addition, we find that the relation between the clustering coefficient and the average degree shows a non-monotonic behavior and that the degree distributions can be fitted by multiple Poisson curves; we explain the origin of such novel behaviors and degree distributions.
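
    As a hedged sketch of one plausible construction (randomly chosen node subsets wired as cliques), not necessarily the paper's exact rule, the snippet below builds such a graph with networkx and reports its average degree, clustering coefficient and the start of its degree histogram; all sizes are arbitrary.

        # Sketch: build a graph by repeatedly choosing small node subsets at random
        # and wiring each subset as a clique, then inspect clustering and degrees.
        import itertools
        import random
        from collections import Counter

        import networkx as nx

        random.seed(0)
        n_nodes, n_cliques, clique_size = 500, 300, 4

        G = nx.Graph()
        G.add_nodes_from(range(n_nodes))
        for _ in range(n_cliques):
            members = random.sample(range(n_nodes), clique_size)
            G.add_edges_from(itertools.combinations(members, 2))   # complete subgraph on the sample

        degrees = [d for _, d in G.degree()]
        print("average degree:", sum(degrees) / n_nodes)
        print("average clustering coefficient:", round(nx.average_clustering(G), 3))
        print("degree histogram (degree: count):", dict(sorted(Counter(degrees).items())[:10]))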

  18. Non-perturbative inputs for gluon distributions in the hadrons

    International Nuclear Information System (INIS)

    Ermolaev, B.I.; Troyan, S.I.

    2017-01-01

    Description of hadronic reactions at high energies is conventionally done in the framework of QCD factorization. All factorization convolutions comprise non-perturbative inputs mimicking non-perturbative contributions and perturbative evolution of those inputs. We construct inputs for the gluon-hadron scattering amplitudes in the forward kinematics and, using the optical theorem, convert them into inputs for gluon distributions in the hadrons, embracing the cases of polarized and unpolarized hadrons. In the first place, we formulate mathematical criteria which any model for the inputs should obey and then suggest a model satisfying those criteria. This model is based on a simple reasoning: after emitting an active parton off the hadron, the remaining set of spectators becomes unstable and therefore it can be described through factors of the resonance type, so we call it the resonance model. We use it to obtain non-perturbative inputs for gluon distributions in unpolarized and polarized hadrons for all available types of QCD factorization: basic, K_T- and collinear factorizations. (orig.)

  19. Non-perturbative inputs for gluon distributions in the hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Ermolaev, B.I. [Ioffe Physico-Technical Institute, Saint Petersburg (Russian Federation); Troyan, S.I. [St. Petersburg Institute of Nuclear Physics, Gatchina (Russian Federation)

    2017-03-15

    Description of hadronic reactions at high energies is conventionally done in the framework of QCD factorization. All factorization convolutions comprise non-perturbative inputs mimicking non-perturbative contributions and perturbative evolution of those inputs. We construct inputs for the gluon-hadron scattering amplitudes in the forward kinematics and, using the optical theorem, convert them into inputs for gluon distributions in the hadrons, embracing the cases of polarized and unpolarized hadrons. In the first place, we formulate mathematical criteria which any model for the inputs should obey and then suggest a model satisfying those criteria. This model is based on a simple reasoning: after emitting an active parton off the hadron, the remaining set of spectators becomes unstable and therefore it can be described through factors of the resonance type, so we call it the resonance model. We use it to obtain non-perturbative inputs for gluon distributions in unpolarized and polarized hadrons for all available types of QCD factorization: basic, K_T- and collinear factorizations. (orig.)

  20. Approximate Forward Difference Equations for the Lower Order Non-Stationary Statistics of Geometrically Non-Linear Systems subject to Random Excitation

    DEFF Research Database (Denmark)

    Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.

    Geometrically non-linear multi-degree-of-freedom (MDOF) systems subject to random excitation are considered. New semi-analytical approximate forward difference equations for the lower order non-stationary statistical moments of the response are derived from the stochastic differential equations... of motion, and the accuracy of these equations is numerically investigated. For stationary excitations, the proposed method computes the stationary statistical moments of the response from the solution of non-linear algebraic equations....

  1. A Location-Inventory-Routing Problem in Forward and Reverse Logistics Network Design

    Directory of Open Access Journals (Sweden)

    Qunli Yuchi

    2016-01-01

    Full Text Available We study a new problem of location-inventory-routing in forward and reverse logistics (LIRP-FRL) network design, which simultaneously integrates the location decisions of distribution centers (DCs), the inventory policies of opened DCs, and the vehicle routing decisions in serving customers, in which new goods are produced and damaged goods are repaired by a manufacturer and then returned to the market to satisfy customers' demands as new ones. Our objective is to minimize the total costs of manufacturing and remanufacturing goods, building DCs, shipping goods (new or recovered) between the manufacturer and opened DCs, distributing new or recovered goods to customers, and the ordering and storage costs of goods. A nonlinear integer programming model is proposed to formulate the LIRP-FRL. A new tabu search (NTS) algorithm is developed to achieve a near-optimal solution of the problem. Numerical experiments on the benchmark instances of a simplified version of the LIRP-FRL, the capacitated location routing problem, and randomly generated LIRP-FRL instances demonstrate the effectiveness and efficiency of the proposed NTS algorithm in problem resolution.

  2. Mutual trust method for forwarding information in wireless sensor networks using random secret pre-distribution

    Directory of Open Access Journals (Sweden)

    Chih-Hsueh Lin

    2016-04-01

    Full Text Available In wireless sensor networks, sensing information must be transmitted from sensor nodes to the base station over multiple hops. Every sensor node is both a sender and a relay node that forwards the sensing information sent by other nodes. Under an attack, the sensing information may be intercepted, modified, interrupted, or fabricated during transmission. Accordingly, the development of mutual trust to enable a secure path to be established for forwarding information is an important issue. Random key pre-distribution has been proposed to establish mutual trust among sensor nodes. This article modifies random key pre-distribution into random secret pre-distribution and incorporates identity-based cryptography to establish an effective method of building mutual trust in a wireless sensor network. In the proposed method, the base station assigns an identity to every sensor node and embeds n secrets into its private secret keys. Based on the identity and private secret keys, the mutual trust method is utilized to explore the types of trust among neighboring sensor nodes. The novel method can resist malicious attacks and satisfy the requirements of wireless sensor networks, namely resistance to node-compromise attacks, masquerading attacks, forgery attacks and replay attacks, authentication of forwarded messages, and security of sensing information.
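
    As a hedged sketch of the underlying idea of random (key) pre-distribution only, the snippet below gives each node a random subset of a global pool and checks whether two neighbours share an element; the identity-based and secret-based extensions of the article are not reproduced, and all sizes are illustrative.

        # Sketch: each sensor node receives a random subset of a global key pool;
        # two neighbours can establish a trusted (encrypted) link if their subsets intersect.
        import random

        random.seed(1)
        POOL_SIZE, KEYS_PER_NODE, N_NODES = 1000, 50, 20

        key_pool = list(range(POOL_SIZE))
        key_rings = {node: set(random.sample(key_pool, KEYS_PER_NODE)) for node in range(N_NODES)}

        def shared_keys(a, b):
            """Keys known to both nodes; non-empty means a direct trusted link is possible."""
            return key_rings[a] & key_rings[b]

        for a, b in [(0, 1), (2, 3), (4, 5)]:
            common = shared_keys(a, b)
            status = f"can establish trust via key {min(common)}" if common else "need a multi-hop path"
            print(f"nodes {a} and {b}: {status}")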

  3. An Adaptive Tabu Search Heuristic for the Location Routing Pickup and Delivery Problem with Time Windows with a Theater Distribution Application

    National Research Council Canada - National Science Library

    Burks, Jr, Robert E

    2006-01-01

    .... The location routing problem (LRP) is an extension of the vehicle routing problem where the solution identifies the optimal location of the depots and provides the vehicle schedules and distribution routes...

  4. Line-to-Line Fault Analysis and Location in a VSC-Based Low-Voltage DC Distribution Network

    Directory of Open Access Journals (Sweden)

    Shi-Min Xue

    2018-03-01

    Full Text Available A DC cable short-circuit fault is the most severe fault type that occurs in DC distribution networks, having a negative impact on transmission equipment and the stability of system operation. When a short-circuit fault occurs in a DC distribution network based on a voltage source converter (VSC), an in-depth analysis and characterization of the fault is of great significance to establish relay protection, devise fault current limiters and realize fault location. However, research on short-circuit faults in VSC-based low-voltage DC (LVDC) systems, which are greatly different from high-voltage DC (HVDC) systems, is currently stagnant. The existing research in this area is not conclusive, with further study required to explain findings in HVDC systems that do not fit with simulated results or lack thorough theoretical analyses. In this paper, faults are divided into transient- and steady-state faults, and detailed formulas are provided. A more thorough and practical theoretical analysis with fewer errors can be used to develop protection schemes and short-circuit fault location methods based on transient- and steady-state analytic formulas. Compared to the classical methods, the fault analyses in this paper provide more accurate computed fault currents. Thus, the fault location method can rapidly evaluate the distance between the fault and the converter. Analyses of the error increase and an improved handshaking method that coordinates with the proposed location method are also presented.

  5. Evidence of significant bias in an elementary random number generator

    International Nuclear Information System (INIS)

    Borgwaldt, H.; Brandl, V.

    1981-03-01

    An elementary pseudo random number generator for isotropically distributed unit vectors in 3-dimensional space has been tested for bias. This generator uses the IBM-supplied routine RANDU and a transparent rejection technique. The tests show clearly that non-randomness in the pseudo random numbers generated by the primary IBM generator leads to bias on the order of 1 percent in estimates obtained from the secondary random number generator. FORTRAN listings of 4 variants of the random number generator called by a simple test programme and output listings are included for direct reference. (orig.) [de
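
    The original report ships FORTRAN listings; as a hedged modern re-illustration of the same construction (not a reproduction of the report's code, nor of its 1 percent figure), the sketch below feeds the classic rejection technique for isotropic unit vectors once with the notoriously flawed RANDU recurrence and once with a modern generator, and prints a crude isotropy statistic for each.

    ```python
    import numpy as np

    def randu(seed, n):
        """The classic RANDU linear congruential generator, x_{k+1} = 65539*x_k mod 2^31."""
        out, x = np.empty(n), seed
        for i in range(n):
            x = (65539 * x) % 2**31
            out[i] = x / 2**31
        return out

    def unit_vectors(uniforms):
        """Rejection technique: keep points falling inside the unit sphere, then normalise."""
        pts = 2.0 * uniforms.reshape(-1, 3) - 1.0
        r2 = (pts ** 2).sum(axis=1)
        accepted = pts[(r2 > 0.0) & (r2 <= 1.0)]
        return accepted / np.sqrt((accepted ** 2).sum(axis=1, keepdims=True))

    n = 3 * 200_000
    v_bad = unit_vectors(randu(1, n))                           # RANDU-driven vectors
    v_good = unit_vectors(np.random.default_rng(1).random(n))   # modern PCG64 generator

    # For an isotropic distribution each mean component should be close to zero.
    print("RANDU mean components:", v_bad.mean(axis=0))
    print("PCG64 mean components:", v_good.mean(axis=0))
    ```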

  6. Lack of correlation between the location of choroidal melanoma and ultraviolet radiation dose distribution

    International Nuclear Information System (INIS)

    Schwartz, L.; Ferrand, R.; Boelle, P.Y.; Maylin, C.; D'Hermies, F.; Virmont, J.

    1998-01-01

    Full text of publication follows: ocular melanomas arise from the choroid. The result of our study of a total of 92 ocular melanomas would indicate that there is no preferential location for tumors on the eye. We estimated the ultraviolet (UV) radiation dose distribution using data available in the literature. We then compared tumor location and UV radiation. UVC and UVB do not reach the choroid and UVA is filtered by the cornea and the lens. Only a small percentage of the incoming rays reach the posterior and inferior part of the retina, but none reach the superior and anterior part of the eye. We concluded that it is therefore very unlikely that UV radiation exposure is responsible for choroidal melanoma. (authors)

  7. Selective attention to sound location or pitch studied with event-related brain potentials and magnetic fields.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Särkkä, Anna-Kaisa; Salmi, Juha; Alho, Kimmo

    2008-06-01

    Event-related brain potentials (ERPs) and magnetic fields (ERFs) were used to compare brain activity associated with selective attention to sound location or pitch in humans. Sixteen healthy adults participated in the ERP experiment, and 11 adults in the ERF experiment. In different conditions, the participants focused their attention on a designated sound location or pitch, or pictures presented on a screen, in order to detect target sounds or pictures among the attended stimuli. In the Attend Location condition, the location of sounds varied randomly (left or right), while their pitch (high or low) was kept constant. In the Attend Pitch condition, sounds of varying pitch (high or low) were presented at a constant location (left or right). Consistent with previous ERP results, selective attention to either sound feature produced a negative difference (Nd) between ERPs to attended and unattended sounds. In addition, ERPs showed a more posterior scalp distribution for the location-related Nd than for the pitch-related Nd, suggesting partially different generators for these Nds. The ERF source analyses found no source distribution differences between the pitch-related Ndm (the magnetic counterpart of the Nd) and location-related Ndm in the superior temporal cortex (STC), where the main sources of the Ndm effects are thought to be located. Thus, the ERP scalp distribution differences between the location-related and pitch-related Nd effects may have been caused by activity of areas outside the STC, perhaps in the inferior parietal regions.

  8. Mental Health and Drivers of Need in Emergent and Non-Emergent Emergency Department (ED) Use: Do Living Location and Non-Emergent Care Sources Matter?

    Science.gov (United States)

    McManus, Moira C; Cramer, Robert J; Boshier, Maureen; Akpinar-Elci, Muge; Van Lunen, Bonnie

    2018-01-13

    Emergency department (ED) utilization has increased due to factors such as admissions for mental health conditions, including suicide and self-harm. We investigate direct and moderating influences on non-emergent ED utilization through the Behavioral Model of Health Services Use. Through logistic regression, we examined correlates of ED use via 2014 New York State Department of Health Statewide Planning and Research Cooperative System outpatient data. Consistent with the primary hypothesis, mental health admissions were associated with emergent use across models, with only a slight decrease in effect size in rural living locations. Concerning moderating effects, Spanish/Hispanic origin was associated with increased likelihood for emergent ED use in the rural living location model, and non-emergent ED use for the no non-emergent source model. 'Other' ethnic origin increased the likelihood of emergent ED use for rural living location and no non-emergent source models. The findings reveal 'need', including mental health admissions, as the largest driver for ED use. This may be due to mental healthcare access, or patients with mental health emergencies being transported via first responders to the ED, as in the case of suicide, self-harm, manic episodes or psychotic episodes. Further educating ED staff on this patient population through gatekeeper training may ensure patients receive the best treatment and aid in driving access to mental healthcare delivery changes.

  9. Approximations to the Non-Isothermal Distributed Activation Energy Model for Biomass Pyrolysis Using the Rayleigh Distribution

    Directory of Open Access Journals (Sweden)

    Dhaundiyal Alok

    2017-09-01

    Full Text Available This paper deals with the influence of some parameters relevant to biomass pyrolysis on the numerical solutions of the non-isothermal nth-order distributed activation energy model (DAEM) using the Rayleigh distribution. The investigated parameters are the integral upper limit, the frequency factor, the heating rate, the reaction order and the scale parameter of the Rayleigh distribution. The influence of these parameters has been considered for the determination of the kinetic parameters of the non-isothermal nth-order Rayleigh distribution model from the experimentally derived thermoanalytical data of biomass pyrolysis.
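
    To make the structure of the model concrete, the sketch below numerically evaluates a first-order, non-isothermal DAEM in which the activation energies follow a Rayleigh density, i.e. the unreacted fraction is a double integral over temperature and activation energy. All kinetic values (frequency factor, heating rate, Rayleigh scale parameter) are illustrative placeholders, not fitted biomass parameters, and the reaction order is fixed at one for brevity.

    ```python
    import numpy as np
    from scipy.integrate import quad

    R = 8.314            # gas constant, J/(mol K)
    A = 1.0e13           # frequency factor, 1/s (illustrative)
    beta = 10.0 / 60.0   # heating rate, K/s (10 K/min)
    sigma = 1.5e5        # Rayleigh scale parameter, J/mol (illustrative)
    T0 = 300.0           # starting temperature, K

    def rayleigh_pdf(E):
        return (E / sigma**2) * np.exp(-E**2 / (2.0 * sigma**2))

    def unreacted_fraction(T):
        """1 - alpha(T) for a first-order non-isothermal DAEM with a Rayleigh f(E)."""
        def integrand(E):
            # inner temperature integral of the Arrhenius term
            psi, _ = quad(lambda t: np.exp(-E / (R * t)), T0, T)
            return np.exp(-(A / beta) * psi) * rayleigh_pdf(E)
        val, _ = quad(integrand, 0.0, 6.0e5, limit=200)
        return val

    for T in (500.0, 600.0, 700.0):
        print(f"T = {T:.0f} K  ->  1 - alpha = {unreacted_fraction(T):.3f}")
    ```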

  10. Evaluation of physical activity interventions in children via the reach, efficacy/effectiveness, adoption, implementation, and maintenance (RE-AIM) framework: A systematic review of randomized and non-randomized trials.

    Science.gov (United States)

    McGoey, Tara; Root, Zach; Bruner, Mark W; Law, Barbi

    2016-01-01

    Existing reviews of physical activity (PA) interventions designed to increase PA behavior exclusively in children (ages 5 to 11 years) focus primarily on the efficacy (e.g., internal validity) of the interventions without addressing the applicability of the results in terms of generalizability and translatability (e.g., external validity). This review used the RE-AIM (Reach, Efficacy/Effectiveness, Adoption, Implementation, Maintenance) framework to measure the degree to which randomized and non-randomized PA interventions in children report on internal and external validity factors. A systematic search for controlled interventions conducted within the past 12 years identified 78 studies that met the inclusion criteria. Based on the RE-AIM criteria, most of the studies focused on elements of internal validity (e.g., sample size, intervention location and efficacy/effectiveness) with minimal reporting of external validity indicators (e.g., representativeness of participants, start-up costs, protocol fidelity and sustainability). Results of this RE-AIM review emphasize the need for future PA interventions in children to report on real-world challenges and limitations, and to highlight considerations for translating evidence-based results into health promotion practice. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Size distributions of non-volatile particle residuals (Dp<800 nm) at a rural site in Germany and relation to air mass origin

    Directory of Open Access Journals (Sweden)

    T. Tuch

    2007-11-01

    Full Text Available Atmospheric aerosol particle size distributions at a continental background site in Eastern Germany were examined for a one-year period. Particles were classified using a twin differential mobility particle sizer in a size range between 3 and 800 nm. As a novelty, every second measurement of this experiment involved the removal of volatile chemical compounds in a thermodenuder at 300°C. This concept allowed us to quantify the number size distribution of non-volatile particle cores – primarily associated with elemental carbon – and to compare this to the original non-conditioned size distribution. As a byproduct of the volatility analysis, new particles originating from nucleation inside the thermodenuder can be observed, however overwhelmingly at diameters below 6 nm. Within the measurement uncertainty, every particle down to particle sizes of 15 nm is concluded to contain a non-volatile core. The volume fraction of non-volatile particulate matter (non-conditioned diameter < 800 nm) varied between 10 and 30% and was largely consistent with the experimentally determined mass fraction of elemental carbon. The average size of the non-volatile particle cores was estimated as a function of original non-conditioned size using a summation method, which showed that larger particles (>200 nm) contained more non-volatile compounds than smaller particles (<50 nm), thus indicating a significantly different chemical composition. Two alternative air mass classification schemes based on either synoptic chart analysis (Berliner Wetterkarte) or back trajectories showed that the volume and number fraction of non-volatile cores depended less on air mass than the total particle number concentration. In all air masses, the non-volatile size distributions showed a more and a less volatile ("soot") mode, the latter being located at about 50 nm. During unstable conditions and in maritime air masses, smaller values were observed compared to stable or continental conditions

  12. Modelling population distribution using remote sensing imagery and location-based data

    Science.gov (United States)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution and city emergency response, and even for estimating pressure on the environment and human exposure and health risks. However, most studies have used census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. Firstly, urban functional zones within a city were mapped using high-resolution remote sensing images and POIs. The workflow of functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy by validation points. The result is shown in Fig. 1. Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially non-stationary relationship between the light digital number (DN) and the population density of sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR model was on the order of 0.7, and the GWR model captured significant variations over the region that the traditional OLS model could not; the result is shown in Fig. 2. Validation with sampling points of population density demonstrated that the result predicted by the GWR model correlated well with the light value; the result is shown in Fig. 3. Results showed: (1) population density is not linearly correlated with light brightness in a global model; (2) VIIRS night-time light data can estimate population density when integrating functional zones at the city level; (3) GWR is a robust model to map population distribution, and the adjusted R² of the corresponding GWR models was higher than that of the optimal OLS models.
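
    A minimal, hand-rolled version of the regression step can clarify what GWR does: a separate weighted least-squares fit at every sample location with Gaussian distance-decay weights. The sketch below uses synthetic coordinates and night-light values (dedicated packages would normally be used in practice, and the bandwidth and all data here are invented).

    ```python
    import numpy as np

    def gwr_coefficients(coords, X, y, bandwidth):
        """Geographically weighted regression: one weighted least-squares fit per
        location, with Gaussian kernel weights that decay with distance."""
        n = len(y)
        Xd = np.column_stack([np.ones(n), X])           # add an intercept column
        betas = np.empty((n, Xd.shape[1]))
        for i in range(n):
            d = np.linalg.norm(coords - coords[i], axis=1)
            w = np.exp(-0.5 * (d / bandwidth) ** 2)     # Gaussian distance-decay weights
            W = np.diag(w)
            betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
        return betas

    # Synthetic illustration: population density responds to night-light DN with a
    # slope that drifts across the study area (all numbers are invented).
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 10, size=(200, 2))
    light = rng.uniform(0, 60, size=200)
    true_slope = 5 + 0.8 * coords[:, 0]                 # spatially varying relationship
    density = true_slope * light + rng.normal(0, 20, 200)

    b = gwr_coefficients(coords, light.reshape(-1, 1), density, bandwidth=2.0)
    print("local slope estimates range from", b[:, 1].min(), "to", b[:, 1].max())
    ```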

  13. Large-area imaging reveals biologically driven non-random spatial patterns of corals at a remote reef

    Science.gov (United States)

    Edwards, Clinton B.; Eynaud, Yoan; Williams, Gareth J.; Pedersen, Nicole E.; Zgliczynski, Brian J.; Gleason, Arthur C. R.; Smith, Jennifer E.; Sandin, Stuart A.

    2017-12-01

    For sessile organisms such as reef-building corals, differences in the degree of dispersion of individuals across a landscape may result from important differences in life-history strategies or may reflect patterns of habitat availability. Descriptions of spatial patterns can thus be useful not only for the identification of key biological and physical mechanisms structuring an ecosystem, but also by providing the data necessary to generate and test ecological theory. Here, we used an in situ imaging technique to create large-area photomosaics of 16 plots at Palmyra Atoll, central Pacific, each covering 100 m² of benthic habitat. We mapped the location of 44,008 coral colonies and identified each to the lowest taxonomic level possible. Using metrics of spatial dispersion, we tested for departures from spatial randomness. We also used targeted model fitting to explore candidate processes leading to differences in spatial patterns among taxa. Most taxa were clustered and the degree of clustering varied by taxon. A small number of taxa did not significantly depart from randomness and none revealed evidence of spatial uniformity. Importantly, taxa that readily fragment or tolerate stress through partial mortality were more clustered. With little exception, clustering patterns were consistent with models of fragmentation and dispersal limitation. In some taxa, dispersion was linearly related to abundance, suggesting density dependence of spatial patterning. The spatial patterns of stony corals are non-random and reflect fundamental life-history characteristics of the taxa, suggesting that the reef landscape may, in many cases, have important elements of spatial predictability.
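
    A standard first check for the kind of departure from spatial randomness described above is the Clark-Evans nearest-neighbour index. The hedged sketch below computes it for a synthetic clustered point set and a uniform one; it is not the dispersion metrics or model fitting used in the study, and the plot size and parameters are arbitrary.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def clark_evans_ratio(points, area):
        """Clark-Evans index R: observed mean nearest-neighbour distance divided by
        its expectation under complete spatial randomness.
        R < 1 suggests clustering, R ~ 1 randomness, R > 1 regularity."""
        d, _ = cKDTree(points).query(points, k=2)   # k=2: nearest neighbour besides self
        observed = d[:, 1].mean()
        density = len(points) / area
        expected = 0.5 / np.sqrt(density)
        return observed / expected

    # Synthetic 10 m x 10 m plot: "colonies" clustered around a few parent points.
    rng = np.random.default_rng(1)
    parents = rng.uniform(0, 10, size=(20, 2))
    clustered = parents[rng.integers(0, 20, 800)] + rng.normal(0, 0.3, size=(800, 2))
    uniform = rng.uniform(0, 10, size=(800, 2))

    print("Clark-Evans R, clustered pattern:", clark_evans_ratio(clustered, area=100.0))
    print("Clark-Evans R, random pattern   :", clark_evans_ratio(uniform, area=100.0))
    ```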

  14. Generation of pseudo-random numbers from given probabilistic distribution with the use of chaotic maps

    Science.gov (United States)

    Lawnik, Marcin

    2018-01-01

    The scope of the paper is the presentation of a new method of generating numbers from a given distribution. The method uses the inverse cumulative distribution function and a method of flattening probabilistic distributions. On the grounds of these methods, a new construction of chaotic maps was derived, which generates values from a given distribution. The analysis of the new method was conducted on the example of newly constructed chaotic recurrences based on the Box-Muller transformation and the quantile function of the exponential distribution. The obtained results confirm that the proposed method may be successfully applied to the construction of pseudo-random number generators.
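
    As a hedged illustration of the general idea (generating values from a target law by pushing a chaotic orbit through a quantile function), the sketch below iterates the fully chaotic logistic map, conjugates it to the tent map so that its orbit is approximately uniform on (0,1), and applies the exponential quantile function. This is not the paper's specific construction, and successive values remain serially correlated because the source is deterministic.

    ```python
    import numpy as np

    def chaotic_exponential(n, lam=1.0, x0=0.2026):
        """Exponential-like variates from a chaotic source: logistic map -> tent-map
        conjugation (approximately uniform) -> inverse CDF of the exponential law."""
        x = x0
        samples = np.empty(n)
        for i in range(n):
            x = 4.0 * x * (1.0 - x)                      # logistic map with r = 4
            u = (2.0 / np.pi) * np.arcsin(np.sqrt(x))    # invariant density becomes uniform
            samples[i] = -np.log(1.0 - u) / lam          # exponential quantile function
        return samples

    s = chaotic_exponential(100_000, lam=2.0)
    print("sample mean:", s.mean(), " (theoretical mean 1/lambda = 0.5)")
    ```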

  15. Performance Analysis of 5G Transmission over Fading Channels with Random IG Distributed LOS Components

    Directory of Open Access Journals (Sweden)

    Dejan Jaksic

    2017-01-01

    Full Text Available Mathematical modelling of the behavior of radio propagation at mmWave bands is crucial to the development of transmission and reception algorithms of new 5G systems. In this study we will model 5G propagation in nondeterministic line-of-sight (LOS) conditions, where the random nature of the LOS component ratio will be observed as an Inverse Gamma (IG) distributed process. Closed-form expressions will be presented for the probability density function (PDF) and cumulative distribution function (CDF) of such a random process. Further, closed-form expressions will be provided for important performance measures such as the level crossing rate (LCR) and average fade duration (AFD). Capitalizing on the proposed expressions, the LCR and AFD will be discussed as functions of the transmission parameters.
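
    A Monte Carlo counterpart of the modelling assumption is easy to write down: draw the line-of-sight power ratio from an Inverse Gamma law, generate the conditional Rician-type channel samples, and estimate the envelope CDF empirically. The shape and scale values below are arbitrary assumptions, not parameters from the paper, and no closed-form LCR/AFD expressions are reproduced.

    ```python
    import numpy as np
    from scipy.stats import invgamma

    rng = np.random.default_rng(0)
    n = 200_000

    # Rician K-factor (ratio of LOS to scattered power) drawn from an Inverse Gamma law.
    K = invgamma.rvs(a=3.0, scale=4.0, size=n, random_state=0)

    # Conditional channel sample: deterministic LOS phasor plus complex Gaussian scatter.
    scatter = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2.0)
    h = np.sqrt(K / (K + 1.0)) + np.sqrt(1.0 / (K + 1.0)) * scatter
    envelope = np.abs(h)

    # Empirical CDF values, i.e. Monte Carlo outage probabilities at a few thresholds.
    for thr in (0.3, 0.5, 0.8):
        print(f"P(envelope < {thr}) = {(envelope < thr).mean():.4f}")
    ```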

  16. Potential fluctuations due to randomly distributed charges at the semiconductor-insulator interface in MIS-structures

    International Nuclear Information System (INIS)

    Yanchev, I; Slavcheva, G.

    1993-01-01

    A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS-structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential, Γ², to which it leads, in contrast to the previously known correlation functions, which lead to a divergent dispersion. The important characteristic of the random potential distribution, Γ², which determines the amplitude of the potential fluctuations, is calculated. 7 refs. (orig.)

  17. Estimation of the location parameter of distributions with known coefficient of variation by record values

    Directory of Open Access Journals (Sweden)

    N. K. Sajeevkumar

    2014-09-01

    Full Text Available In this article, we derive the Best Linear Unbiased Estimator (BLUE) of the location parameter of certain distributions with known coefficient of variation by record values. Efficiency comparisons of the proposed estimator with some of the usual estimators are also made. Finally, we give real-life data to illustrate the utility of the results developed in this article.

  18. Distribution and speciation of metals (Cu, Zn, Cd, and Pb) in agricultural and non-agricultural soils near a stream upriver from the Pearl River, China

    International Nuclear Information System (INIS)

    Yang, Silin; Zhou, Dequn; Yu, Huayong; Wei, Rong; Pan, Bo

    2013-01-01

    The distribution and chemical speciation of typical metals (Cu, Zn, Cd and Pb) in agricultural and non-agricultural soils were investigated in the area of the Nanpan River, upstream of the Pearl River. The four investigated metals showed higher concentrations in agricultural soils than in non-agricultural soils, and the site located in the factory district contained much higher metal concentrations than the other sampling sites. These observations suggest that human activities, such as irrigation, fertilizer and pesticide applications, might have a major impact on the distribution of metals. Metal speciation analysis showed that Cu, Zn and Cd were dominated by the residual fraction, while Pb was dominated by the reducible fraction. Because of the low mobility of the metals in the investigated area, no remarkable difference could be observed between upstream and downstream separated by the factory site. -- Highlights: ► Agricultural soils contain higher metal concentrations than non-agricultural soils. ► The site located in the factory district has the highest metal concentration. ► Cu, Zn and Cd are dominated by residual fraction, and Pb by reducible fraction. ► Cd pollution should not be overlooked in soils upstream of Pearl River. -- The mobility of four investigated metals is low but Cd pollution should not be overlooked in soils upstream of Pearl River

  19. Distribution and Characteristics of Non Carious Cervical lesions in ...

    African Journals Online (AJOL)

    Background: Controversy rages in the literature as to the characteristics of non-carious cervical lesions (NCCLs) in terms of their location and severity. Objective: The study aims to investigate the characteristics of NCCLs in adult patients with a high incidence of these lesions and to see if there is any association with the ...

  20. An Analysis of Spherical Particles Distribution Randomly Packed in a Medium for the Monte Carlo Implicit Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Yong; Kim, Song Hyun; Shin, Chang Ho; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of)

    2014-05-15

    In this study, as a preliminary step toward developing an implicit method of high accuracy, the distribution characteristics of spherical particles were evaluated by using explicit modeling techniques at various volume packing fractions. This study was performed to evaluate the implicitly simulated distribution of randomly packed spheres in a medium. First, an explicit modeling method to simulate randomly packed spheres in a hexahedral medium was proposed. The distribution characteristics of l_p and r_p, which are used in the particle position sampling, were estimated. The analysis shows that use of the direct exponential distribution, which is generally used in implicit modeling, can bias the distribution of the spheres. It is expected that the findings in this study can be utilized for improving the accuracy of the implicit method. Spherical particles randomly distributed in a medium are utilized in radiation shields, fusion reactor blankets, and the fuels of VHTR reactors. Due to the difficulty of simulating the stochastic distribution, the Monte Carlo (MC) method has been mainly considered as the tool for the analysis of particle transport. For the MC modeling of spherical particles, three methods are known: repeated structure, explicit modeling, and implicit modeling. The implicit method (also called the track length sampling method) is a modeling technique that samples each spherical geometry (or the track length within a sphere) during the MC simulation. The implicit modeling method has advantages in high computational efficiency and user convenience. However, it is noted that the implicit method has lower modeling accuracy in various finite mediums.
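
    The explicit modelling step referred to above can be illustrated with a simple random sequential addition routine: propose uniformly random centres and reject any sphere overlapping a previously accepted one or the box walls. This is a hedged sketch only (it does not reproduce the authors' sampling of l_p and r_p or their packing fractions), and all sizes are arbitrary.

    ```python
    import numpy as np

    def pack_spheres(box=(10.0, 10.0, 10.0), radius=0.5, target_fraction=0.15,
                     max_attempts=100_000, seed=0):
        """Explicit model of randomly packed spheres via random sequential addition."""
        rng = np.random.default_rng(seed)
        box = np.asarray(box, dtype=float)
        sphere_vol = 4.0 / 3.0 * np.pi * radius**3
        target_n = int(target_fraction * box.prod() / sphere_vol)
        centres = np.empty((target_n, 3))
        count = 0
        for _ in range(max_attempts):
            c = rng.uniform(radius, box - radius)        # keep the sphere inside the box
            if count == 0 or np.min(np.linalg.norm(centres[:count] - c, axis=1)) >= 2 * radius:
                centres[count] = c                       # no overlap: accept the sphere
                count += 1
                if count == target_n:
                    break
        return centres[:count]

    spheres = pack_spheres()
    achieved = len(spheres) * (4.0 / 3.0) * np.pi * 0.5**3 / 1000.0
    print(f"packed {len(spheres)} spheres, volume fraction = {achieved:.3f}")
    ```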

  1. Distributed Pseudo-Random Number Generation and Its Application to Cloud Database

    OpenAIRE

    Chen, Jiageng; Miyaji, Atsuko; Su, Chunhua

    2014-01-01

    Cloud databases are a rapidly growing trend in the cloud computing market. They enable clients to run their computations on outsourced databases or to access distributed database services on the cloud. At the same time, security and privacy concerns are a major challenge for cloud databases to continue growing. To enhance the security and privacy of cloud database technology, pseudo-random number generation (PRNG) plays an important role in data encryption and privacy-pr...

  2. Residual Defect Density in Random Disks Deposits.

    Science.gov (United States)

    Topic, Nikola; Pöschel, Thorsten; Gallas, Jason A C

    2015-08-03

    We investigate the residual distribution of structural defects in very tall packings of disks deposited randomly in large channels. By performing simulations involving the sedimentation of up to 50 × 10⁹ particles we find all deposits to consistently show a non-zero residual density of defects obeying a characteristic power-law as a function of the channel width. This remarkable finding corrects the widespread belief that the density of defects should vanish algebraically with growing height. A non-zero residual density of defects implies a type of long-range spatial order in the packing, as opposed to only local ordering. In addition, we find deposits of particles to involve considerably less randomness than generally presumed.

  3. Characterization of geometrical random uncertainty distribution for a group of patients in radiotherapy

    International Nuclear Information System (INIS)

    Munoz Montplet, C.; Jurado Bruggeman, D.

    2010-01-01

    Geometrical random uncertainty in radiotherapy is usually characterized by a unique value in each group of patients. We propose a novel approach based on a statistically accurate characterization of the uncertainty distribution, thus reducing the risk of obtaining potentially unsafe results in CTV-PTV margins or in the selection of correction protocols.

  4. Estimating Invasion Success by Non-Native Trees in a National Park Combining WorldView-2 Very High Resolution Satellite Data and Species Distribution Models

    Directory of Open Access Journals (Sweden)

    Antonio T. Monteiro

    2017-01-01

    Full Text Available Invasion by non-native tree species is an environmental and societal challenge requiring predictive tools to assess invasion dynamics. The frequent scale mismatch between such tools and on-ground conservation is currently limiting invasion management. This study aimed to reduce these scale mismatches, assess the success of non-native tree invasion and determine the environmental factors associated with it. A hierarchical scaling approach combining species distribution models (SDMs) and satellite mapping at very high resolution (VHR) was developed to assess invasion by Acacia dealbata in Peneda-Gerês National Park, the only national park in Portugal. SDMs were first used to predict the climatically suitable areas for A. dealbata, and satellite mapping with the random-forests classifier was then applied to WorldView-2 very-high-resolution imagery to determine whether A. dealbata had actually colonized the predicted areas (invasion success). Environmental attributes (topographic, disturbance and canopy-related) differing between invaded and non-invaded vegetated areas were then analyzed. The SDM results indicated that most (67%) of the study area was climatically suitable for A. dealbata invasion. The onset of invasion was documented to 1905 and satellite mapping highlighted that 12.6% of the study area was colonized. However, this species had only colonized 62.5% of the maximum potential range, although it was registered within 55.6% of grid cells that were considered unsuitable. Across these areas, the specific success rate of invasion was mostly below 40%, indicating that A. dealbata invasion was not dominant and effective management may still be possible. Environmental attributes related to topography (slope), canopy (normalized difference vegetation index (NDVI), land surface albedo) and disturbance (historical burnt area) differed between invaded and non-invaded vegetated areas, suggesting that landscape attributes may alter at specific locations with Acacia
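
    The satellite mapping step relies on a random-forests classifier. The hedged sketch below shows the shape of such a pixel/segment classification with scikit-learn on simulated data; the band count, the band indices used for NDVI and the labels are all assumptions for illustration, not the study's training set.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical training table: one row per labelled segment, with eight band
    # reflectances plus NDVI as predictors (values simulated for the sketch).
    rng = np.random.default_rng(0)
    n = 2000
    bands = rng.uniform(0.0, 0.6, size=(n, 8))
    ndvi = (bands[:, 6] - bands[:, 4]) / (bands[:, 6] + bands[:, 4] + 1e-9)  # assumed NIR/red columns
    X = np.column_stack([bands, ndvi])
    # Fake labels: 1 = invaded, 0 = other vegetated area (illustration only).
    y = (ndvi + 0.3 * bands[:, 7] + rng.normal(0, 0.05, n) > 0.35).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy  :", clf.score(X_te, y_te))
    print("feature importances:", np.round(clf.feature_importances_, 3))
    ```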

  5. Shear elastic modulus of magnetic gels with random distribution of magnetizable particles

    Science.gov (United States)

    Iskakova, L. Yu; Zubarev, A. Yu

    2017-04-01

    Magnetic gels present a new type of composite material with a rich set of unique physical properties, which finds active applications in many industrial and bio-medical technologies. We present the results of a mathematically rigorous theoretical study of the elastic modulus of these systems with randomly distributed magnetizable particles in an elastic medium. The results show that an external magnetic field can pronouncedly increase the shear modulus of these composites.

  6. Comparison of the distribution of non-AIDS Kaposi's sarcoma and non-Hodgkin's lymphoma in Europe

    Science.gov (United States)

    Maso, L Dal; Franceschi, S; Re, A Lo; Vecchia, C La

    1999-01-01

    To evaluate whether some form of mild immunosuppression may influence the geographical distribution of non-AIDS Kaposi's sarcoma (KS), we correlated incidence rates of KS and non-Hodgkin's lymphoma in individuals aged 60 or more in 18 European countries and Israel. Significant positive correlations emerged but, within the highest-risk countries (i.e., Italy and Israel), internal correlations were inconsistent. © 1999 Cancer Research Campaign PMID:10408708

  7. Clustering, randomness, and regularity in cloud fields. 4: Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-01-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (more than 900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  8. Clustering, randomness, and regularity in cloud fields. 4. Stratocumulus cloud fields

    Science.gov (United States)

    Lee, J.; Chou, J.; Weger, R. C.; Welch, R. M.

    1994-07-01

    To complete the analysis of the spatial distribution of boundary layer cloudiness, the present study focuses on nine stratocumulus Landsat scenes. The results indicate many similarities between stratocumulus and cumulus spatial distributions. Most notably, at full spatial resolution all scenes exhibit a decidedly clustered distribution. The strength of the clustering signal decreases with increasing cloud size; the clusters themselves consist of a few clouds (less than 10), occupy a small percentage of the cloud field area (less than 5%), contain between 20% and 60% of the cloud field population, and are randomly located within the scene. In contrast, stratocumulus in almost every respect are more strongly clustered than are cumulus cloud fields. For instance, stratocumulus clusters contain more clouds per cluster, occupy a larger percentage of the total area, and have a larger percentage of clouds participating in clusters than the corresponding cumulus examples. To investigate clustering at intermediate spatial scales, the local dimensionality statistic is introduced. Results obtained from this statistic provide the first direct evidence for regularity among large (>900 m in diameter) clouds in stratocumulus and cumulus cloud fields, in support of the inhibition hypothesis of Ramirez and Bras (1990). Also, the size compensated point-to-cloud cumulative distribution function statistic is found to be necessary to obtain a consistent description of stratocumulus cloud distributions. A hypothesis regarding the underlying physical mechanisms responsible for cloud clustering is presented. It is suggested that cloud clusters often arise from 4 to 10 triggering events localized within regions less than 2 km in diameter and randomly distributed within the cloud field. As the size of the cloud surpasses the scale of the triggering region, the clustering signal weakens and the larger cloud locations become more random.

  9. Uncertainty Visualization Using Copula-Based Analysis in Mixed Distribution Models.

    Science.gov (United States)

    Hazarika, Subhashis; Biswas, Ayan; Shen, Han-Wei

    2018-01-01

    Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, histogram, KDE, etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in more cost-efficient modeling without significantly sacrificing analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real-world datasets. We also study various modeling criteria to help users in the task of univariate model selection.
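
    The key point, separating the marginals from the dependence structure, can be demonstrated with a small Gaussian copula: two locations share a chosen correlation while keeping completely different marginal models. This is a hedged two-variable sketch with arbitrary parameters, not the paper's full framework or its model-selection tests.

    ```python
    import numpy as np
    from scipy import stats

    rho = 0.7
    cov = np.array([[1.0, rho], [rho, 1.0]])
    rng = np.random.default_rng(0)

    # Gaussian copula: correlated normals mapped to uniforms carry only the dependence.
    z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=50_000)
    u = stats.norm.cdf(z)

    # Different marginal models at the two grid locations.
    x1 = stats.norm.ppf(u[:, 0], loc=10.0, scale=2.0)     # location 1: Gaussian marginal
    x2 = stats.gamma.ppf(u[:, 1], a=2.0, scale=3.0)       # location 2: skewed gamma marginal

    print("rank correlation carried by the copula:", stats.spearmanr(x1, x2).correlation)
    ```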

  10. Tukey g-and-h Random Fields

    KAUST Repository

    Xu, Ganggang; Genton, Marc G.

    2016-01-01

    We propose a new class of trans-Gaussian random fields named Tukey g-and-h (TGH) random fields to model non-Gaussian spatial data. The proposed TGH random fields have extremely flexible marginal distributions, possibly skewed and/or heavy-tailed, and, therefore, have a wide range of applications. The special formulation of the TGH random field enables an automatic search for the most suitable transformation for the dataset of interest while estimating model parameters. Asymptotic properties of the maximum likelihood estimator and the probabilistic properties of the TGH random fields are investigated. An efficient estimation procedure, based on maximum approximated likelihood, is proposed and an extreme spatial outlier detection algorithm is formulated. Kriging and probabilistic prediction with TGH random fields are developed along with prediction confidence intervals. The predictive performance of TGH random fields is demonstrated through extensive simulation studies and an application to a dataset of total precipitation in the south east of the United States.
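
    The marginal transformation that gives the TGH field its flexibility is short enough to sketch: a standard normal variate z is mapped through the Tukey g-and-h transform, with g controlling skewness and h controlling tail heaviness. In the random-field setting z would be a correlated Gaussian field; here i.i.d. normals and arbitrary parameter values are used purely for illustration.

    ```python
    import numpy as np
    from scipy import stats

    def tukey_gh(z, g=0.5, h=0.2, xi=0.0, omega=1.0):
        """Tukey g-and-h transform of standard normal variates: g sets skewness,
        h >= 0 sets tail heaviness; xi and omega are location and scale."""
        core = z if g == 0.0 else (np.exp(g * z) - 1.0) / g
        return xi + omega * core * np.exp(h * z**2 / 2.0)

    rng = np.random.default_rng(0)
    z = rng.standard_normal(200_000)
    x = tukey_gh(z, g=0.8, h=0.1)
    print("sample skewness:", stats.skew(x), " excess kurtosis:", stats.kurtosis(x))
    ```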

  11. Tukey g-and-h Random Fields

    KAUST Repository

    Xu, Ganggang

    2016-07-15

    We propose a new class of trans-Gaussian random fields named Tukey g-and-h (TGH) random fields to model non-Gaussian spatial data. The proposed TGH random fields have extremely flexible marginal distributions, possibly skewed and/or heavy-tailed, and, therefore, have a wide range of applications. The special formulation of the TGH random field enables an automatic search for the most suitable transformation for the dataset of interest while estimating model parameters. Asymptotic properties of the maximum likelihood estimator and the probabilistic properties of the TGH random fields are investigated. An efficient estimation procedure, based on maximum approximated likelihood, is proposed and an extreme spatial outlier detection algorithm is formulated. Kriging and probabilistic prediction with TGH random fields are developed along with prediction confidence intervals. The predictive performance of TGH random fields is demonstrated through extensive simulation studies and an application to a dataset of total precipitation in the south east of the United States.

  12. Three-dimensional direct laser written graphitic electrical contacts to randomly distributed components

    Science.gov (United States)

    Dorin, Bryce; Parkinson, Patrick; Scully, Patricia

    2018-04-01

    The development of cost-effective electrical packaging for randomly distributed micro/nano-scale devices is a widely recognized challenge for fabrication technologies. Three-dimensional direct laser writing (DLW) has been proposed as a solution to this challenge, and has enabled the creation of rapid and low resistance graphitic wires within commercial polyimide substrates. In this work, we utilize the DLW technique to electrically contact three fully encapsulated and randomly positioned light-emitting diodes (LEDs) in a one-step process. The resolution of the contacts is on the order of 20 μm, with an average circuit resistance of 29 ± 18 kΩ per LED contacted. The speed and simplicity of this technique are promising to meet the needs of future microelectronics and device packaging.

  13. Green facility location

    NARCIS (Netherlands)

    Velázquez Martínez, J.C.; Fransoo, J.C.; Bouchery, Y.; Corbett, C.J.; Fransoo, J.C.; Tan, T.

    2017-01-01

    Transportation is one of the main contributing factors of global carbon emissions, and thus, when dealing with facility location models in a distribution context, transportation emissions may be substantially higher than the emissions due to production or storage. Because facility location models

  14. Measuring symmetry, asymmetry and randomness in neural network connectivity.

    Directory of Open Access Journals (Sweden)

    Umberto Esposito

    Full Text Available Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity.
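
    As a hedged, generic illustration of how a single number can summarise how symmetric a weight matrix is (an energy-based score, not the specific statistic defined in the paper), the sketch below compares the symmetric and antisymmetric parts of random, symmetrised and antisymmetrised connectivity matrices.

    ```python
    import numpy as np

    def symmetry_index(W):
        """Score in [-1, 1]: +1 for perfectly symmetric (equal-strength bidirectional)
        connectivity, -1 for perfectly antisymmetric, near 0 for unstructured weights."""
        S = 0.5 * (W + W.T)                   # symmetric part
        A = 0.5 * (W - W.T)                   # antisymmetric part
        s2, a2 = np.sum(S**2), np.sum(A**2)
        return (s2 - a2) / (s2 + a2)

    rng = np.random.default_rng(0)
    R = rng.normal(size=(200, 200))
    print("random Gaussian weights :", symmetry_index(R))
    print("symmetrised weights     :", symmetry_index(R + R.T))
    print("antisymmetrised weights :", symmetry_index(R - R.T))
    ```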

  15. Measuring symmetry, asymmetry and randomness in neural network connectivity.

    Science.gov (United States)

    Esposito, Umberto; Giugliano, Michele; van Rossum, Mark; Vasilaki, Eleni

    2014-01-01

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity.

  16. A location-inventory model for distribution centers in a three-level supply chain under uncertainty

    Directory of Open Access Journals (Sweden)

    Ali Bozorgi-Amiri

    2013-01-01

    Full Text Available We study a location-inventory problem in a three-level supply chain network under uncertainty, which leads to risk. The (r,Q) inventory control policy is applied to this problem. In addition, uncertainty exists in different parameters such as procurement, transportation costs, supply, demand and the capacity of different facilities (due to disasters, man-made events, etc.). We present a robust optimization model, which concurrently specifies: locations of distribution centers to be opened, inventory control parameters (r,Q), and allocation of supply chain components. The model is formulated as a multi-objective mixed-integer nonlinear program in order to minimize the expected total cost of such a supply chain network, comprising location, procurement, transportation, holding, ordering, and shortage costs. Moreover, we develop an effective solution approach on the basis of multi-objective particle swarm optimization for solving the proposed model. Finally, computational results for different examples of the problem and a sensitivity analysis are presented to show the feasibility and efficiency of the model and algorithm.

  17. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
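
    For orientation, the objective such metaheuristics maximise is Otsu's between-class variance evaluated at a set of thresholds. The hedged sketch below computes it for two thresholds and brute-forces the optimum on a synthetic histogram, which is exactly the exhaustive search that algorithms like the flower pollination variant are designed to avoid; it does not implement the paper's algorithm.

    ```python
    import numpy as np

    def otsu_between_class_variance(hist, thresholds):
        """Otsu's between-class variance for a grey-level histogram split at the
        given thresholds (the quantity maximised in multilevel thresholding)."""
        p = hist / hist.sum()
        levels = np.arange(len(hist))
        edges = [0, *sorted(thresholds), len(hist)]
        mu_total = np.sum(levels * p)
        var_between = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                mu = np.sum(levels[lo:hi] * p[lo:hi]) / w
                var_between += w * (mu - mu_total) ** 2
        return var_between

    # Synthetic three-mode image histogram and an exhaustive two-threshold search.
    rng = np.random.default_rng(0)
    pixels = np.concatenate([rng.normal(60, 10, 4000), rng.normal(120, 12, 4000),
                             rng.normal(200, 8, 2000)]).clip(0, 255).astype(int)
    hist = np.bincount(pixels, minlength=256)

    best = max(((t1, t2) for t1 in range(1, 255) for t2 in range(t1 + 1, 255)),
               key=lambda t: otsu_between_class_variance(hist, t))
    print("optimal pair of thresholds:", best)
    ```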

  18. Critical location identification and vulnerability analysis of interdependent infrastructure systems under spatially localized attacks

    International Nuclear Information System (INIS)

    Ouyang, Min

    2016-01-01

    Infrastructure systems are usually spatially distributed over a wide area and are subject to many types of hazards. For each type of hazard, modeling its direct impact on infrastructure components and analyzing the induced system-level vulnerability are important for identifying mitigation strategies. This paper mainly studies spatially localized attacks, in which a set of infrastructure components located within or crossing a circle-shaped localized area is damaged while other components do not directly fail. For this type of attack, taking the interdependent power and gas systems in Harris County, Texas, USA as an example, this paper proposes an approach to exactly identify critical locations in interdependent infrastructure systems and perform a pertinent vulnerability analysis. Results show that (a) infrastructure interdependencies and the attack radius largely affect the position of critical locations; (b) spatially localized attacks cause less vulnerability than equivalent random failures; (c) for most values of the attack radius, critical locations identified by considering only node failures do not change when considering both node and edge failures in the attack area; (d) for many values of the attack radius, critical locations identified by the topology-based model are also critical from the flow-based perspective. - Highlights: • We propose a method to identify critical locations in interdependent infrastructures. • Geographical interdependencies and attack radius largely affect critical locations. • Localized attacks cause less vulnerability than equivalent random failures. • Whether considering both node and edge failures affects critical locations. • Topology-based critical locations are also critical from flow-based perspective.
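
    A toy version of a spatially localized attack study can be set up on a random geometric graph standing in for an infrastructure network: remove every node inside a circular attack area and score the surviving giant component. This hedged sketch ignores interdependencies, flows and the paper's exact identification procedure, and all sizes and radii are arbitrary.

    ```python
    import networkx as nx
    import numpy as np

    def localized_attack(G, centre, radius):
        """Remove all nodes inside a circular attack area; return the fraction of the
        original network that remains in the largest connected component."""
        pos = nx.get_node_attributes(G, "pos")
        hit = [n for n, (x, y) in pos.items()
               if (x - centre[0]) ** 2 + (y - centre[1]) ** 2 <= radius ** 2]
        H = G.copy()
        H.remove_nodes_from(hit)
        if H.number_of_nodes() == 0:
            return 0.0
        giant = max(nx.connected_components(H), key=len)
        return len(giant) / G.number_of_nodes()

    # Toy spatial network in the unit square.
    G = nx.random_geometric_graph(500, 0.08, seed=1)

    rng = np.random.default_rng(1)
    centres = rng.uniform(0, 1, size=(200, 2))
    scores = [(localized_attack(G, c, 0.15), tuple(np.round(c, 2))) for c in centres]
    print("most damaging attack centre:", min(scores)[1], "surviving fraction:", min(scores)[0])
    ```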

  19. The Surprising Impact of Seat Location on Student Performance

    Science.gov (United States)

    Perkins, Katherine K.; Wieman, Carl E.

    2005-01-01

    Every physics instructor knows that the most engaged and successful students tend to sit at the front of the class and the weakest students tend to sit at the back. However, it is normally assumed that this is merely an indication of the respective seat location preferences of weaker and stronger students. Here we present evidence suggesting that in fact this may be mixing up the cause and effect. It may be that the seat selection itself contributes to whether the student does well or poorly, rather than the other way around. While a number of studies have looked at the effect of seat location on students, the results are often inconclusive, and few, if any, have studied the effects in college classrooms with randomly assigned seats. In this paper, we report on our observations of a large introductory physics course in which we randomly assigned students to particular seat locations at the beginning of the semester. Seat location during the first half of the semester had a noticeable impact on student success in the course, particularly in the top and bottom parts of the grade distribution. Students sitting in the back of the room for the first half of the term were nearly six times as likely to receive an F as students who started in the front of the room. A corresponding but less dramatic reversal was evident in the fractions of students receiving As. These effects were in spite of many unusual efforts to engage students at the back of the class and a front-to-back reversal of seat location halfway through the term. These results suggest there may be inherent detrimental effects of large physics lecture halls that need to be further explored.

  20. Random magnetism

    International Nuclear Information System (INIS)

    Tahir-Kheli, R.A.

    1975-01-01

    A few simple problems relating to random magnetic systems are presented. Translational symmetry, only on the macroscopic scale, is assumed for these systems. A random set of parameters, on the microscopic scale, for the various regions of these systems is also assumed. A probability distribution for randomness is obeyed. Knowledge of the form of these probability distributions is assumed in all cases [pt

  1. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Izacard, Olivier, E-mail: izacard@llnl.gov [Lawrence Livermore National Laboratory, 7000 East Avenue, L-637, Livermore, California 94550 (United States)

    2016-08-15

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it
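
    Since the record revolves around analytic non-Maxwellian distribution functions such as the Kappa distribution, the hedged sketch below compares a numerically normalised one-dimensional Kappa-type profile with a Maxwellian to show the suprathermal tail. The (2*kappa - 3) scaling is one common convention, the normalisation is done numerically rather than with the convention-dependent prefactor, and none of this reproduces the paper's INMDF construction.

    ```python
    import numpy as np

    def maxwellian_1d(v, vth=1.0):
        """Normalised one-dimensional Maxwellian with thermal speed vth."""
        return np.exp(-v**2 / (2 * vth**2)) / (np.sqrt(2 * np.pi) * vth)

    def kappa_like_1d(v, vth=1.0, kappa=3.0):
        """Kappa-type (generalised Lorentzian) profile, normalised numerically;
        kappa -> infinity recovers the Maxwellian, small kappa gives heavy tails."""
        shape = lambda u: (1 + u**2 / ((2 * kappa - 3) * vth**2)) ** (-kappa)
        grid = np.linspace(-50, 50, 20001)
        norm = shape(grid).sum() * (grid[1] - grid[0])   # simple numerical quadrature
        return shape(np.asarray(v)) / norm

    v = np.linspace(0, 6, 7)                             # velocities in thermal-speed units
    print("v/vth      :", v)
    print("Maxwellian :", np.round(maxwellian_1d(v), 6))
    print("kappa = 3  :", np.round(kappa_like_1d(v), 6))
    ```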

  2. Distribution center consolidation games

    NARCIS (Netherlands)

    Klijn, F.; Slikker, M.

    2005-01-01

    We study a location-inventory model to analyze the impact of consolidation of distribution centers on facility and inventory costs. We introduce a cooperative game and show that when demand processes are i.i.d. the core is non-empty, i.e., consolidation allows for a stable division of the minimal

  3. Specialty resident perceptions of the impact of a distributed education model on practice location intentions.

    Science.gov (United States)

    Myhre, Douglas L; Adamiak, Paul J; Pedersen, Jeanette S

    2015-01-01

    There is an increased focus internationally on the social mandate of postgraduate training programs. This study explores specialty residents' perceptions of the impact of the University of Calgary's (UC) distributed education rotations on their self-perceived likelihood of practice location, and if this effect is influenced by resident specialty or stage of program. Residents participating in the UC Distributed Royal College Initiative (DistRCI) between July 2010 and June 2013 completed an online survey following their rotation. Descriptive statistics and student's t-test were employed to analyze quantitative survey data, and a constant comparative approach was used to analyze free text qualitative responses. Residents indicated they were satisfied with the program (92%), and that the distributed rotations significantly increased their self-reported likelihood of practicing in smaller centers (p education program in contributing to future practice and career development, and its relevance in the social accountability of postgraduate programs.

  4. Non-specific filtering of beta-distributed data.

    Science.gov (United States)

    Wang, Xinhui; Laird, Peter W; Hinoue, Toshinori; Groshen, Susan; Siegmund, Kimberly D

    2014-06-19

    Non-specific feature selection is a dimension reduction procedure performed prior to cluster analysis of high dimensional molecular data. Not all measured features are expected to show biological variation, so only the most varying are selected for analysis. In DNA methylation studies, DNA methylation is measured as a proportion, bounded between 0 and 1, with variance a function of the mean. Filtering on standard deviation biases the selection of probes to those with mean values near 0.5. We explore the effect this has on clustering, and develop alternate filter methods that utilize a variance stabilizing transformation for Beta distributed data and do not share this bias. We compared results for 11 different non-specific filters on eight Infinium HumanMethylation data sets, selected to span a variety of biological conditions. We found that for data sets having a small fraction of samples showing abnormal methylation of a subset of normally unmethylated CpGs, a characteristic of the CpG island methylator phenotype in cancer, a novel filter statistic that utilized a variance-stabilizing transformation for Beta distributed data outperformed the common filter of using standard deviation of the DNA methylation proportion, or its log-transformed M-value, in its ability to detect the cancer subtype in a cluster analysis. However, the standard deviation filter always performed among the best for distinguishing subgroups of normal tissue. The novel filter and standard deviation filter tended to favour features in different genome contexts; for the same data set, the novel filter always selected more features from CpG island promoters and the standard deviation filter always selected more features from non-CpG island intergenic regions. Interestingly, despite selecting largely non-overlapping sets of features, the two filters did find sample subsets that overlapped for some real data sets. We found two different filter statistics that tended to prioritize features with
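
    The contrast between the two filters can be sketched directly: rank features either by the standard deviation of the raw Beta-distributed proportions or by the standard deviation after an approximate variance-stabilising transform. The arcsine-square-root transform below is a common stand-in for such a transform and is not necessarily the exact statistic defined in the paper; the methylation matrix is simulated.

    ```python
    import numpy as np

    def top_variable_features(beta_values, k=1000, method="vst"):
        """Non-specific filtering of Beta-distributed values.
        method='sd' : rank by the SD of the raw proportions (biased toward means near 0.5)
        method='vst': rank by the SD of arcsine-square-root transformed values."""
        data = np.arcsin(np.sqrt(beta_values)) if method == "vst" else beta_values
        return np.argsort(data.std(axis=1))[::-1][:k]

    # Simulated matrix: 20,000 CpG features x 60 samples with Beta-distributed values.
    rng = np.random.default_rng(0)
    a = rng.uniform(0.2, 5.0, size=20_000)
    b = rng.uniform(0.2, 5.0, size=20_000)
    betas = rng.beta(a[:, None], b[:, None], size=(20_000, 60))

    keep_sd = set(top_variable_features(betas, method="sd"))
    keep_vst = set(top_variable_features(betas, method="vst"))
    print("features selected by both filters:", len(keep_sd & keep_vst))
    ```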

  5. Cross-modal and intra-modal binding between identity and location in spatial working memory: The identity of objects does not help recalling their locations.

    Science.gov (United States)

    Del Gatto, Claudia; Brunetti, Riccardo; Delogu, Franco

    2016-01-01

    In this study we tested incidental feature-to-location binding in a spatial task, in both unimodal and cross-modal conditions. In Experiment 1 we administered a computerised version of the Corsi Block-Tapping Task (CBTT) in three different conditions: the first analogous to the original CBTT; the second in which locations were associated with unfamiliar images; the third in which locations were associated with non-verbal sounds. Results showed that adding identity information had no effect on performance. In Experiment 2, locations on the screen were associated with pitched sounds in two different conditions: one in which different pitches were randomly associated with locations, and the other in which pitches were assigned to match the vertical position of the CBTT squares congruently with their frequencies. In Experiment 2 we found marginal evidence of a pitch facilitation effect in the spatial memory task. We ran a third experiment to test the same conditions as Experiment 2 with a within-subject design. Results of Experiment 3 did not confirm the pitch-location facilitation effect. We conclude that the identity of objects does not affect recall of their locations. We discuss our results within the framework of the debate about the mechanisms of "what" and "where" feature binding in working memory.

  6. The optimal number, type and location of devices in automation of electrical distribution networks

    Directory of Open Access Journals (Sweden)

    Popović Željko N.

    2015-01-01

    Full Text Available This paper presents a mixed integer linear programming model for determining the optimal number, type and location of remotely controlled and supervised devices in distribution networks in the presence of distributed generators. The proposed model simultaneously considers a number of different devices (remotely controlled circuit breakers/reclosers, sectionalizing switches, and remotely supervised and local fault passage indicators) along with the following: the expected outage cost to consumers and producers due to momentary and long-term interruptions; automated device expenses (capital investment, installation, and annual operation and maintenance costs); and the number and expenses of crews involved in the isolation and restoration process. Furthermore, other possible benefits of each automated device are also taken into account (e.g., reduced cost of switching operations in normal conditions). The obtained numerical results emphasize the importance of considering different types of automation devices simultaneously. They also show that the proposed approach has the potential to improve the process of determining the best automation strategy in real-life distribution networks.
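
    As a rough illustration of the kind of optimization described above (not the paper's actual formulation), the toy mixed-integer linear program below chooses devices for candidate locations so that device cost minus expected outage-cost savings is minimized under a budget; the locations, costs and savings are hypothetical, and the sketch uses the PuLP modeling library.

        # Toy MILP in the spirit of the abstract (not the paper's formulation):
        # pick which candidate locations get a remotely controlled switch or a
        # fault passage indicator. All numbers below are hypothetical.
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

        locations = ["n1", "n2", "n3", "n4"]
        devices = ["rc_switch", "fpi"]
        cost = {"rc_switch": 12.0, "fpi": 3.0}                     # capex + O&M (k$)
        saving = {("n1", "rc_switch"): 18.0, ("n1", "fpi"): 4.0,   # expected outage-cost
                  ("n2", "rc_switch"): 9.0,  ("n2", "fpi"): 5.0,   # reduction (k$)
                  ("n3", "rc_switch"): 14.0, ("n3", "fpi"): 2.0,
                  ("n4", "rc_switch"): 6.0,  ("n4", "fpi"): 4.5}
        budget = 20.0

        x = {(n, d): LpVariable(f"x_{n}_{d}", cat=LpBinary) for n in locations for d in devices}

        prob = LpProblem("device_placement", LpMinimize)
        # objective: device cost minus expected savings
        prob += lpSum(cost[d] * x[n, d] - saving[n, d] * x[n, d] for n in locations for d in devices)
        # budget constraint on total device cost
        prob += lpSum(cost[d] * x[n, d] for n in locations for d in devices) <= budget
        for n in locations:                      # at most one device per location
            prob += lpSum(x[n, d] for d in devices) <= 1

        prob.solve()
        print([(n, d) for (n, d) in x if value(x[n, d]) > 0.5])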

  7. Artificial Neural Network for Location Estimation in Wireless Communication Systems

    Directory of Open Access Journals (Sweden)

    Chien-Sheng Chen

    2012-03-01

    Full Text Available In a wireless communication system, wireless location is the technique used to estimate the location of a mobile station (MS). To enhance the accuracy of MS location prediction, we propose a novel algorithm that utilizes time of arrival (TOA) measurements and angle of arrival (AOA) information to locate the MS when three base stations (BSs) are available. Artificial neural networks (ANN) are widely used in various areas to overcome problems involving complex, nonlinear relationships. When the MS is heard by only three BSs, the proposed algorithm utilizes the intersections of three TOA circles (and the AOA line), based on various neural networks, to estimate the MS location in non-line-of-sight (NLOS) environments. Simulations were conducted to evaluate the performance of the algorithm for different NLOS error distributions. The numerical analysis and simulation results show that the proposed algorithm can obtain more precise location estimates under different NLOS environments.

  8. Artificial neural network for location estimation in wireless communication systems.

    Science.gov (United States)

    Chen, Chien-Sheng

    2012-01-01

    In a wireless communication system, wireless location is the technique used to estimate the location of a mobile station (MS). To enhance the accuracy of MS location prediction, we propose a novel algorithm that utilizes time of arrival (TOA) measurements and the angle of arrival (AOA) information to locate the MS when three base stations (BSs) are available. Artificial neural networks (ANN) are widely used in various areas to overcome problems involving complex, nonlinear relationships. When the MS is heard by only three BSs, the proposed algorithm utilizes the intersections of three TOA circles (and the AOA line), based on various neural networks, to estimate the MS location in non-line-of-sight (NLOS) environments. Simulations were conducted to evaluate the performance of the algorithm for different NLOS error distributions. The numerical analysis and simulation results show that the proposed algorithm can obtain more precise location estimates under different NLOS environments.
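
    The geometric core of the problem, intersecting three TOA circles, can be sketched as a linearized least-squares solve; this is only the conventional baseline that the abstract's neural-network approach refines, and the base-station coordinates and NLOS bias below are made up.

        # Minimal sketch: given three base stations and noisy, NLOS-biased TOA
        # range measurements, solve the linearized circle-intersection equations
        # by least squares.
        import numpy as np

        c = 3e8                                             # propagation speed (m/s)
        bs = np.array([[0.0, 0.0], [2000.0, 0.0], [1000.0, 1800.0]])   # BS positions (m)
        true_ms = np.array([700.0, 600.0])

        toa = np.linalg.norm(bs - true_ms, axis=1) / c
        ranges = c * toa + np.random.default_rng(1).uniform(0, 120, size=3)  # NLOS adds positive bias

        # Subtract the first circle equation from the others to get a linear system A p = b.
        A = 2 * (bs[1:] - bs[0])
        b = (ranges[0] ** 2 - ranges[1:] ** 2
             + np.sum(bs[1:] ** 2, axis=1) - np.sum(bs[0] ** 2))
        est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("estimated MS position:", est, "error (m):", np.linalg.norm(est - true_ms))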

  9. Locating distribution/service centers based on multi objective decision making using set covering and proximity to stock market

    Directory of Open Access Journals (Sweden)

    Mazyar Dabibi

    2016-09-01

    Full Text Available In today's competitive world, facility location is an important aspect of supply chain (SC) optimization. It involves selecting specific locations for facility construction and allocating the distribution channel among different SC levels. In fact, it is a strategic issue which directly affects many operational and tactical decisions. Besides accessibility, which results in customer satisfaction, the present paper optimizes the establishment costs of a number of distribution centers by considering their proximity to the stock market of the goods they distribute, and proposes mathematical models for two objective functions using the set covering problem. The two objective functions are then combined into one through the ε-constraint method and solved by the metaheuristic Genetic Algorithm (GA). To test the resulting model, a smaller-scale problem is solved. Results from running the algorithm with different ε-values show that, on average, a 10% increase in ε, which increases the value of the second objective function (the distance covered by customers), causes a 2% decrease in the value of the first objective function (the costs of establishing distribution centers). The repeatability and solution convergence of the two-objective model solved by the GA are further results obtained in this study.
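
    The ε-constraint step mentioned above has a standard generic form: one objective is minimized while the other is turned into a constraint bounded by ε, which is then swept over a grid of values. Schematically (with f1 the establishment cost and f2 the customer distance, as in the abstract):

        % Generic epsilon-constraint scalarization (objectives shown schematically).
        \begin{aligned}
        \min_{x \in X} \quad & f_1(x) \\
        \text{s.t.}    \quad & f_2(x) \le \varepsilon
        \end{aligned}

    Sweeping ε over a grid traces an approximation of the Pareto front, which is how the reported trade-off (a 10% increase in ε giving roughly a 2% decrease in f1) is read off.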

  10. Predictions for an invaded world: A strategy to predict the distribution of native and non-indigenous species at multiple scales

    Science.gov (United States)

    Reusser, D.A.; Lee, H.

    2008-01-01

    Habitat models can be used to predict the distributions of marine and estuarine non-indigenous species (NIS) over several spatial scales. At an estuary scale, our goal is to predict the estuaries most likely to be invaded, but at a habitat scale, the goal is to predict the specific locations within an estuary that are most vulnerable to invasion. As an initial step in evaluating several habitat models, model performance for a suite of benthic species with reasonably well-known distributions on the Pacific coast of the US needs to be compared. We discuss the utility of non-parametric multiplicative regression (NPMR) for predicting habitat- and estuary-scale distributions of native and NIS. NPMR incorporates interactions among variables, allows qualitative and categorical variables, and utilizes data on absence as well as presence. Preliminary results indicate that NPMR generally performs well at both spatial scales and that distributions of NIS are predicted as well as those of native species. For most species, latitude was the single best predictor, although similar model performance could be obtained at both spatial scales with combinations of other habitat variables. Errors of commission were more frequent at a habitat scale, with omission and commission errors approximately equal at an estuary scale. © 2008 International Council for the Exploration of the Sea. Published by Oxford Journals. All rights reserved.

  11. Locating non-volcanic tremor along the San Andreas Fault using a multiple array source imaging technique

    Science.gov (United States)

    Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.

    2010-01-01

    Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located by using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor; these locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF. © 2010 The Authors, Geophysical Journal International. © 2010 RAS.
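
    A heavily simplified sketch of the imaging idea — back-projecting envelope-transformed traces onto a grid of trial source locations and stacking — is given below. It assumes a homogeneous velocity, a single array and a known origin time, none of which the paper's multiple-array method requires; the station geometry and synthetic tremor burst are invented for illustration.

        # Schematic back-projection of envelope traces onto a grid of trial
        # source locations (homogeneous velocity, origin time fixed at 20 s;
        # in practice the origin time is scanned as well).
        import numpy as np

        rng = np.random.default_rng(2)
        v, dt = 3.5, 0.01                                   # S velocity (km/s), sample step (s)
        stations = rng.uniform(-10, 10, size=(8, 2))        # station coordinates (km)
        src = np.array([2.0, -3.0])

        t = np.arange(0, 60, dt)
        envelopes = np.zeros((len(stations), len(t)))
        for i, st in enumerate(stations):                   # synthetic tremor burst at 20 s
            arrival = 20.0 + np.linalg.norm(st - src) / v
            envelopes[i] = np.exp(-0.5 * ((t - arrival) / 1.0) ** 2) + 0.05 * rng.random(len(t))

        xg, yg = np.meshgrid(np.linspace(-10, 10, 81), np.linspace(-10, 10, 81))
        image = np.zeros_like(xg)
        for i, st in enumerate(stations):
            delay = np.sqrt((xg - st[0]) ** 2 + (yg - st[1]) ** 2) / v
            idx = np.clip(np.round((20.0 + delay) / dt).astype(int), 0, len(t) - 1)
            image += envelopes[i][idx]                      # stack envelope amplitude at predicted arrival

        best = np.unravel_index(np.argmax(image), image.shape)
        print("imaged source:", xg[best], yg[best], "true source:", src)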

  12. Comparison of four modeling tools for the prediction of potential distribution for non-indigenous weeds in the United States

    Science.gov (United States)

    Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony

    2018-01-01

    This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, the simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and were then evaluated against the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity was compared using a generalized linear mixed model. The choice of modeling tool had low statistical significance, while weed species alone accounted for 69.1% and 48.5% of the variance for prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones for predicting the potential distribution of a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and the testing of both new and experienced users under blind conditions that approximate operational conditions.

  13. Anomalous Anticipatory Responses in Networked Random Data

    International Nuclear Information System (INIS)

    Nelson, Roger D.; Bancel, Peter A.

    2006-01-01

    We examine an 8-year archive of synchronized, parallel time series of random data from a world spanning network of physical random event generators (REGs). The archive is a publicly accessible matrix of normally distributed 200-bit sums recorded at 1 Hz which extends from August 1998 to the present. The primary question is whether these data show non-random structure associated with major events such as natural or man-made disasters, terrible accidents, or grand celebrations. Secondarily, we examine the time course of apparently correlated responses. Statistical analyses of the data reveal consistent evidence that events which strongly affect people engender small but significant effects. These include suggestions of anticipatory responses in some cases, leading to a series of specialized analyses to assess possible non-random structure preceding precisely timed events. A focused examination of data collected around the time of earthquakes with Richter magnitude 6 and greater reveals non-random structure with a number of intriguing, potentially important features. Anomalous effects in the REG data are seen only when the corresponding earthquakes occur in populated areas. No structure is found if they occur in the oceans. We infer that an important contributor to the effect is the relevance of the earthquake to humans. Epoch averaging reveals evidence for changes in the data some hours prior to the main temblor, suggestive of reverse causation

  14. Internet Geo-Location

    Science.gov (United States)

    2017-12-01

    INTERNET GEO-LOCATION. Duke University, December 2017. Final technical report covering May 2014 – May 2017. Approved for public release; distribution unlimited. The work makes use of SpeedTest servers that end users employ to measure the speed of their Internet connection; the servers log the IP address and the location

  15. Expectation-Maximization Tensor Factorization for Practical Location Privacy Attacks

    Directory of Open Access Journals (Sweden)

    Murakami Takao

    2017-10-01

    Full Text Available Location privacy attacks based on a Markov chain model have been widely studied to de-anonymize or de-obfuscate mobility traces. An adversary can perform various kinds of location privacy attacks using a personalized transition matrix, which is trained for each target user. However, the amount of training data available to the adversary can be very small, since many users do not disclose much location information in their daily lives. In addition, many locations can be missing from the training traces, since many users do not disclose their locations continuously but rather sporadically. In this paper, we show that the Markov chain model can be a threat even in this realistic situation. Specifically, we focus on the training phase (i.e., the mobility profile building phase) and propose Expectation-Maximization Tensor Factorization (EMTF), which alternates between computing a distribution of missing locations (E-step) and computing personalized transition matrices via tensor factorization (M-step). Since the time complexity of EMTF is exponential in the number of missing locations, we propose two approximate learning methods, one of which uses the Viterbi algorithm while the other uses the Forward Filtering Backward Sampling (FFBS) algorithm. We apply our learning methods to a de-anonymization attack and a localization attack, and evaluate them using three real datasets. The results show that our learning methods significantly outperform a random guess, even when there is only one training trace composed of 10 locations per user, and each location is missing with probability 80% (i.e., even when users hardly disclose two temporally-continuous locations).
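
    A drastically simplified sketch of the E-step/M-step alternation described above follows; it is not EMTF (there is no tensor factorization, and only isolated missing locations are handled, exactly rather than approximately), and the example traces are made up.

        # Simplified EM for a single user's transition matrix with isolated
        # missing locations: the E-step computes the exact posterior of each
        # isolated gap given its observed neighbours; the M-step re-normalizes
        # the expected transition counts.
        import numpy as np

        def em_transition(traces, n_states, n_iter=50, eps=1e-9):
            P = np.full((n_states, n_states), 1.0 / n_states)   # uniform initialization
            for _ in range(n_iter):
                counts = np.full((n_states, n_states), eps)
                for tr in traces:
                    for t in range(len(tr) - 1):
                        a, b = tr[t], tr[t + 1]
                        if a is not None and b is not None:
                            counts[a, b] += 1.0                  # fully observed transition
                    for t in range(1, len(tr) - 1):
                        if tr[t] is None and tr[t - 1] is not None and tr[t + 1] is not None:
                            a, b = tr[t - 1], tr[t + 1]
                            q = P[a, :] * P[:, b]                # E-step: posterior over the gap
                            q /= q.sum()
                            counts[a, :] += q                    # expected counts into and out of the gap
                            counts[:, b] += q
                P = counts / counts.sum(axis=1, keepdims=True)   # M-step
            return P

        traces = [[0, 1, None, 2, 0, None, 1], [1, 2, 2, None, 0]]
        print(em_transition(traces, n_states=3).round(2))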

  16. Smooth conditional distribution function and quantiles under random censorship.

    Science.gov (United States)

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).
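
    For reference, the generalized Kaplan-Meier (Beran) estimator that the paper uses as its baseline can be written compactly with Nadaraya-Watson kernel weights in the covariate; the sketch below implements only that baseline (it is not smooth in the response, which is the paper's contribution) on simulated censored data.

        # Beran (generalized Kaplan-Meier) estimator of S(t | x) with Gaussian
        # kernel weights in the covariate.
        import numpy as np

        def beran_survival(t_grid, x0, X, T, delta, h):
            """S(t | x0) from covariates X, possibly censored times T, event flags delta."""
            w = np.exp(-0.5 * ((X - x0) / h) ** 2)          # kernel weights
            w = w / w.sum()
            order = np.argsort(T)
            T, delta, w = T[order], delta[order], w[order]
            at_risk = np.cumsum(w[::-1])[::-1]              # weight still at risk at each ordered time
            factors = np.where(delta == 1, 1.0 - w / np.maximum(at_risk, 1e-12), 1.0)
            surv_at_T = np.cumprod(factors)
            # step function: S(t) = product over event times T_i <= t
            return np.array([surv_at_T[T <= t][-1] if np.any(T <= t) else 1.0 for t in t_grid])

        rng = np.random.default_rng(3)
        X = rng.uniform(0, 1, 300)
        latent = rng.exponential(scale=1.0 + X)             # true survival depends on x
        censor = rng.exponential(scale=2.0, size=300)
        T, delta = np.minimum(latent, censor), (latent <= censor).astype(int)
        print(beran_survival(np.array([0.5, 1.0, 2.0]), x0=0.8, X=X, T=T, delta=delta, h=0.1).round(3))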

  17. 2 × 2 random matrix ensembles with reduced symmetry: from Hermitian to PT -symmetric matrices

    International Nuclear Information System (INIS)

    Gong Jiangbin; Wang Qinghai

    2012-01-01

    A possibly fruitful extension of conventional random matrix ensembles is proposed by imposing symmetry constraints on conventional Hermitian matrices or parity–time (PT)-symmetric matrices. To illustrate the main idea, we first study 2 × 2 complex Hermitian matrix ensembles with O(2)-invariant constraints, yielding novel level-spacing statistics such as singular distributions, the half-Gaussian distribution, distributions interpolating between the GOE (Gaussian orthogonal ensemble) distribution and half-Gaussian distributions, as well as the gapped-GOE distribution. Such a symmetry-reduction strategy is then used to explore 2 × 2 PT-symmetric matrix ensembles with real eigenvalues. In particular, PT-symmetric random matrix ensembles with U(2) invariance can be constructed, with the conventional complex Hermitian random matrix ensemble being a special case. In two examples of PT-symmetric random matrix ensembles, the level-spacing distributions are found to be the standard GUE (Gaussian unitary ensemble) statistics or the ‘truncated-GUE’ statistics. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Quantum physics with non-Hermitian operators’. (paper)

  18. Analysis of random number generators in abnormal usage conditions

    International Nuclear Information System (INIS)

    Soucarros, M.

    2012-01-01

    Random numbers have been used through the ages for games of chance, more recently for secret codes, and today they are necessary to the execution of computer programs. Random number generators have evolved from simple dice to electronic circuits and algorithms. Accordingly, the ability to distinguish between random and non-random numbers has become more difficult. Furthermore, whereas in the past dice were loaded in order to increase winning chances, it is now possible to influence the outcome of random number generators. In consequence, this subject is still very much an issue and has recently made the headlines. Indeed, there has been talk of the PS3 game console, which generates constant random numbers, and of redundant distribution of secret keys on the internet. This thesis presents a study of several generators as well as different means to perturb them. It shows the inherent defects of their conception and the possible consequences of their failure when they are embedded inside security components. Moreover, this work highlights problems yet to be solved concerning the testing of random numbers and the post-processing eliminating bias in the distribution of these numbers. (author) [fr

  19. Geological Effects on Lightning Strike Distributions

    KAUST Repository

    Berdahl, J. Scott

    2016-05-16

    Recent advances in lightning detection networks allow for detailed mapping of lightning flash locations. Longstanding rumors of geological influence on cloud-to-ground (CG) lightning distribution and recent commercial claims based on such influence can now be tested empirically. If present, such influence could represent a new, cheap and efficient geophysical tool with applications in mineral, hydrothermal and oil exploration, regional geological mapping, and infrastructure planning. This project applies statistical analysis to lightning data collected by the United States National Lightning Detection Network from 2006 through 2015 in order to assess whether the huge range in electrical conductivities of geological materials plays a role in the spatial distribution of CG lightning. CG flash densities are mapped for twelve areas in the contiguous United States and compared to elevation and geology, as well as to the locations of faults, railroads and tall towers including wind turbines. Overall spatial randomness is assessed, along with spatial correlation of attributes. Negative and positive polarity lightning are considered separately and together. Topography and tower locations show a strong influence on CG distribution patterns. Geology, faults and railroads do not. This suggests that ground conductivity is not an important factor in determining lightning strike location on scales larger than current flash location accuracies, which are generally several hundred meters. Once a lightning channel is established, however, ground properties at the contact point may play a role in determining properties of the subsequent stroke.
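
    As a generic illustration of testing spatial randomness of point locations such as flash positions (not the statistical analysis actually used in the thesis), the Clark-Evans nearest-neighbour index compares the observed mean nearest-neighbour distance with its expectation under complete spatial randomness; the coordinates below are simulated.

        # Clark-Evans nearest-neighbour index for a point pattern on a square.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(4)
        pts = rng.uniform(0, 100, size=(2000, 2))             # hypothetical flash locations (km)
        area = 100.0 * 100.0

        tree = cKDTree(pts)
        d, _ = tree.query(pts, k=2)                           # k=1 is the point itself
        mean_nn = d[:, 1].mean()

        density = len(pts) / area
        expected = 0.5 / np.sqrt(density)                     # E[NN distance] under CSR
        se = 0.26136 / np.sqrt(len(pts) * density)            # standard error of the mean NN distance
        z = (mean_nn - expected) / se
        print(f"Clark-Evans R = {mean_nn / expected:.3f}, z = {z:.2f}")  # R < 1: clustered, R > 1: dispersed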

  20. Self-adaptive change detection in streaming data with non-stationary distribution

    KAUST Repository

    Zhang, Xiangliang

    2010-01-01

    Non-stationary distribution, in which the data distribution evolves over time, is a common issue in many application fields, e.g., intrusion detection and grid computing. Detecting changes in massive streaming data with a non-stationary distribution helps to flag anomalies, to clean noise, and to report new patterns. In this paper, we employ a novel approach for detecting changes in streaming data with the purpose of improving the quality of modeling the data streams. By observing outliers, this change detection approach uses a weighted standard deviation to monitor the evolution of the distribution of data streams. A cumulative statistical test, the Page-Hinkley test, is employed to collect evidence of changes in distribution. The parameter used for reporting the changes is self-adaptively adjusted according to the distribution of the data streams, rather than set to a fixed empirical value. The self-adaptability of the novel approach enhances the effectiveness of modeling data streams by catching distribution changes in a timely manner. We validated the approach on an online clustering framework with the benchmark KDDcup 1999 intrusion detection data set as well as with a real-world grid data set. The validation results demonstrate better performance, achieving higher accuracy and a lower percentage of outliers compared to other change detection approaches. © 2010 Springer-Verlag.
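
    The Page-Hinkley test mentioned above is simple to state: accumulate the deviations of the monitored statistic from its running mean and raise an alarm when the accumulated deviation exceeds a threshold. The sketch below uses a fixed threshold, whereas the paper's contribution is precisely to adapt that parameter to the stream; the stream itself is simulated.

        # Minimal Page-Hinkley change detector for a stream of scalar values.
        class PageHinkley:
            def __init__(self, delta=0.005, threshold=5.0):
                self.delta, self.threshold = delta, threshold
                self.n, self.mean, self.cum, self.min_cum = 0, 0.0, 0.0, 0.0

            def update(self, x):
                self.n += 1
                self.mean += (x - self.mean) / self.n            # running mean
                self.cum += x - self.mean - self.delta           # cumulative deviation
                self.min_cum = min(self.min_cum, self.cum)
                return (self.cum - self.min_cum) > self.threshold  # change detected?

        import random
        random.seed(0)
        ph = PageHinkley()
        stream = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(2, 1) for _ in range(300)]
        alarms = [i for i, x in enumerate(stream) if ph.update(x)]
        print("first alarm at index:", alarms[0] if alarms else None)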

  1. A simple approximation of moments of the quasi-equilibrium distribution of an extended stochastic theta-logistic model with non-integer powers.

    Science.gov (United States)

    Bhowmick, Amiya Ranjan; Bandyopadhyay, Subhadip; Rana, Sourav; Bhattacharya, Sabyasachi

    2016-01-01

    The stochastic versions of the logistic and extended logistic growth models have been applied successfully to explain many real-life population dynamics and share a central body of literature in stochastic modeling of ecological systems. To understand the randomness in the population dynamics of the underlying processes completely, it is important to have a clear idea about the quasi-equilibrium distribution and its moments. Bartlett et al. (1960) made a pioneering attempt at estimating the moments of the quasi-equilibrium distribution of the stochastic logistic model. Matis and Kiffe (1996) obtained a set of more accurate and elegant approximations for the mean, variance and skewness of the quasi-equilibrium distribution of the same model using a cumulant truncation method. The method was extended to the stochastic power-law logistic family by the same and several other authors (Nasell, 2003; Singh and Hespanha, 2007). Cumulant truncation and some alternative methods (e.g., saddle point approximation, derivative matching) can be applied if the powers involved in the extended logistic set-up are integers, although plenty of evidence is available for non-integer powers in many practical situations (Sibly et al., 2005). In this paper, we develop a set of new approximations for the mean, variance and skewness of the quasi-equilibrium distribution under a more general family of growth curves, which is applicable for both integer and non-integer powers. The deterministic counterpart of this family of models captures both monotonic and non-monotonic behavior of the per capita growth rate, of which the theta-logistic is a special case. The approximations accurately estimate the first three moments of the quasi-equilibrium distribution. The proposed method is illustrated with simulated data and real data from the global population dynamics database. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Optimal sizing and location of SVC devices for improvement of voltage profile in distribution network with dispersed photovoltaic and wind power plants

    International Nuclear Information System (INIS)

    Savić, Aleksandar; Đurišić, Željko

    2014-01-01

    Highlights: • Significant voltage variations occur in a distribution network with dispersed generation. • The use of SVC devices to improve voltage profiles is an effective solution. • The number, size and location of SVC devices are optimized using a genetic algorithm. • The methodology is demonstrated on an example of a real distribution system in Serbia. - Abstract: Intermittent power generation from wind turbines and photovoltaic plants creates voltage disturbances in power distribution networks which may not be acceptable to consumers. To control the deviations of the nodal voltages, it is necessary to use fast dynamic control of reactive power in the distribution network. Implementation of power electronic devices such as the Static Var Compensator (SVC) enables effective control of nodal voltages in both dynamic and static states of the distribution network. This paper analyzes the optimal sizing and location of SVC devices, using a genetic algorithm, to improve the nodal voltage profile in a distribution network with dispersed photovoltaic and wind power plants. Practical application of the developed methodology was tested on an example of a real distribution network

  3. Degradation of the compressive strength of unstiffened/stiffened steel plates due to both-sides randomly distributed corrosion wastage

    Directory of Open Access Journals (Sweden)

    Zorareh Hadj Mohammad

    Full Text Available The paper addresses the problem of the influence of randomly distributed corrosion wastage on the collapse strength and behaviour of unstiffened/stiffened steel plates under longitudinal compression. A series of elastic-plastic large-deflection finite element analyses is performed on steel plates and stiffened plates corroded randomly on both sides. The effects of general corrosion are introduced into the finite element models using a novel random thickness surface model. Buckling strength, post-buckling behaviour, ultimate strength and post-ultimate behaviour of the models are investigated as a result of both-sides random corrosion.

  4. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
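
    Stripped of the boosted GAMLSS machinery, the permutation idea can be illustrated generically: compare the observed between-device differences in location (bias) and scale (random error) with their distribution under random relabelling of the devices. The data and statistics below are hypothetical stand-ins, not the skin-pigmentation measurements of the study.

        # Generic permutation test of a "device effect" on location and scale.
        import numpy as np

        rng = np.random.default_rng(5)
        a = rng.normal(50, 5, 200)                 # device A measurements (hypothetical)
        b = rng.normal(51, 7, 200)                 # device B measurements (hypothetical)

        def stats(x, y):
            return np.array([x.mean() - y.mean(), x.std(ddof=1) - y.std(ddof=1)])

        obs = stats(a, b)
        pooled = np.concatenate([a, b])
        perm = np.empty((5000, 2))
        for i in range(5000):
            rng.shuffle(pooled)                    # permute device labels
            perm[i] = stats(pooled[:len(a)], pooled[len(a):])

        p_values = (np.abs(perm) >= np.abs(obs)).mean(axis=0)
        print("p-values (location, scale):", p_values.round(4))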

  5. Detection of Leaks in Water Distribution System using Non-Destructive Techniques

    Science.gov (United States)

    Aslam, H.; Kaur, M.; Sasi, S.; Mortula, Md M.; Yehia, S.; Ali, T.

    2018-05-01

    Water is scarce and needs to be conserved. A considerable amount of the water which flows in water distribution systems is lost due to pipe leaks. Consequently, innovation in methods of pipe leakage detection for early recognition and repair of these leaks is vital to ensure minimum wastage of water in distribution systems. A major component of pipe leak detection is the ability to accurately locate the leak in pipes with minimum invasion. Therefore, this paper studies the leak detection abilities of non-destructive techniques (NDTs), Ground Penetrating Radar (GPR) and a spectrometer, and aims to determine whether these instruments are effective in identifying leaks. An experimental setup was constructed to simulate the underground conditions of water distribution systems. After analysing the experimental data, it was concluded that both the GPR and the spectrometer were effective in detecting leaks in the pipes. However, the results obtained from the spectrometer were less discriminating in terms of observing the leaks than those obtained from the GPR. In addition, it was concluded that neither instrument could be used if the water from the leaks had reached the surface, resulting in surface ponding.

  6. Non extensivity and frequency magnitude distribution of earthquakes

    International Nuclear Information System (INIS)

    Sotolongo-Costa, Oscar; Posadas, Antonio

    2003-01-01

    Starting from first principles (in this case a non-extensive formulation of the maximum entropy principle) and a phenomenological approach, an explicit formula for the magnitude distribution of earthquakes is derived, which describes earthquakes in the whole range of magnitudes. The Gutenberg-Richter law appears as a particular case of the obtained formula. Comparison with geophysical data gives a very good agreement

  7. Statistical analysis of COMPTEL maximum likelihood-ratio distributions: evidence for a signal from previously undetected AGN

    International Nuclear Information System (INIS)

    Williams, O. R.; Bennett, K.; Much, R.; Schoenfelder, V.; Blom, J. J.; Ryan, J.

    1997-01-01

    The maximum likelihood-ratio method is frequently used in COMPTEL analysis to determine the significance of a point source at a given location. In this paper we do not consider whether the likelihood-ratio at a particular location indicates a detection, but rather whether distributions of likelihood-ratios derived from many locations depart from that expected for source-free data. We have constructed distributions of likelihood-ratios by reading values from standard COMPTEL maximum likelihood-ratio maps at positions corresponding to the locations of different categories of AGN. Distributions derived from the locations of Seyfert galaxies are indistinguishable, according to a Kolmogorov-Smirnov test, from those obtained from "random" locations, but differ slightly from those obtained from the locations of flat-spectrum radio-loud quasars, OVVs, and BL Lac objects. This difference is not due to known COMPTEL sources, since regions near these sources are excluded from the analysis. We suggest that it might arise from a number of sources with fluxes below the COMPTEL detection threshold
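
    The comparison described above — likelihood-ratio values read off a map at AGN positions versus at random positions — reduces, in its simplest form, to a two-sample Kolmogorov-Smirnov test; the values below are simulated rather than COMPTEL data.

        # Two-sample KS comparison of likelihood-ratio values at two sets of positions.
        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(6)
        at_random_positions = rng.chisquare(df=1, size=500)           # source-free expectation
        at_agn_positions = rng.chisquare(df=1, size=200) + rng.exponential(0.2, 200)  # slight excess

        stat, p = ks_2samp(at_agn_positions, at_random_positions)
        print(f"KS statistic = {stat:.3f}, p = {p:.4f}")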

  8. Hybrid algorithm of ensemble transform and importance sampling for assimilation of non-Gaussian observations

    Directory of Open Access Journals (Sweden)

    Shin'ya Nakano

    2014-05-01

    Full Text Available A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique, and is applicable to problems with nonlinear or non-Gaussian observations. However, the PF usually requires an unrealistically large sample size in order to achieve a good estimate, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted to approximate the posterior distribution according to the importance sampling principle. Since importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to the nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while we use a relatively large number of samples in the importance sampling in order to capture the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating a number of random samples from the proposal distribution and of weighting each of the samples. The proposed algorithm is not necessarily effective when the ensemble is located far from the true state. However, monitoring the effective sample size and tuning the factor for covariance inflation could resolve this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.

  9. Skewness and kurtosis analysis for non-Gaussian distributions

    Science.gov (United States)

    Celikoglu, Ahmet; Tirnakli, Ugur

    2018-06-01

    In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.

  10. Slip-Size Distribution and Self-Organized Criticality in Block-Spring Models with Quenched Randomness

    Science.gov (United States)

    Sakaguchi, Hidetsugu; Kadowaki, Shuntaro

    2017-07-01

    We study slowly pulling block-spring models in random media. Second-order phase transitions exist in a model pulled by a constant force in the case of velocity-strengthening friction. If external forces are slowly increased, nearly critical states are self-organized. Slips of various sizes occur, and the probability distributions of slip size roughly obey power laws. The exponent is close to that in the quenched Edwards-Wilkinson model. Furthermore, the slip-size distributions are investigated in cases of Coulomb friction, velocity-weakening friction, and two-dimensional block-spring models.

  11. Algorithm describing pressure distribution of non-contact TNT explosion

    Directory of Open Access Journals (Sweden)

    Radosław Kiciński

    2014-12-01

    Full Text Available Abstract. The aim of this study is to develop a computational algorithm describing the shock wave pressure distribution in space induced by a non-contact TNT explosion. The procedure describes the pressure distribution on the wetted surface of the hull. Simulations have been carried out using Abaqus/CAE. The study also shows the pressure waveform descriptions provided by various authors and presents them in charts. The conclusions demonstrate the efficiency of the algorithm's application. Keywords: Underwater explosion, shock wave, CAE, TNT, Kobben class submarine

  12. Diffusion in randomly perturbed dissipative dynamics

    Science.gov (United States)

    Rodrigues, Christian S.; Chechkin, Aleksei V.; de Moura, Alessandro P. S.; Grebogi, Celso; Klages, Rainer

    2014-11-01

    Dynamical systems having many coexisting attractors present interesting properties from both fundamental theoretical and modelling points of view. When such dynamics is under bounded random perturbations, the basins of attraction are no longer invariant and there is the possibility of transport among them. Here we introduce a basic theoretical setting which enables us to study this hopping process from the perspective of anomalous transport using the concept of a random dynamical system with holes. We apply it to a simple model by investigating the role of hyperbolicity for the transport among basins. We show numerically that our system exhibits non-Gaussian position distributions, power-law escape times, and subdiffusion. Our simulation results are reproduced consistently from stochastic continuous time random walk theory.

  13. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    Science.gov (United States)

    Potdar, Alka A; Jeon, Junhwan; Weaver, Alissa M; Quaranta, Vito; Cummings, Peter T

    2010-03-10

    Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.

  14. Human mammary epithelial cells exhibit a bimodal correlated random walk pattern.

    Directory of Open Access Journals (Sweden)

    Alka A Potdar

    2010-03-01

    Full Text Available Organisms, at scales ranging from unicellular to mammals, have been known to exhibit foraging behavior described by random walks whose segments conform to Lévy or exponential distributions. For the first time, we present evidence that single cells (mammary epithelial cells) that exist in multi-cellular organisms (humans) follow a bimodal correlated random walk (BCRW). Cellular tracks of MCF-10A pBabe, neuN and neuT random migration on 2-D plastic substrates, analyzed using bimodal analysis, were found to reveal the BCRW pattern. We find two types of exponentially distributed correlated flights (corresponding to what we refer to as the directional and re-orientation phases), each having its own correlation between move step-lengths within flights. The exponential distribution of flight lengths was confirmed using different analysis methods (logarithmic binning with normalization, survival frequency plots and maximum likelihood estimation). Because of the presence of a non-uniform turn angle distribution of move step-lengths within a flight and two different types of flights, we propose that the epithelial random walk is a BCRW comprising two alternating modes with varying degrees of correlation, rather than a simple persistent random walk. A BCRW model rather than a simple persistent random walk correctly matches the super-diffusivity in the cell migration paths, as indicated by simulations based on the BCRW model.
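
    One of the checks named in the abstract — confirming that flight lengths are exponentially distributed via maximum likelihood and survival-frequency plots — can be sketched as follows on simulated flight lengths (not the MCF-10A tracking data).

        # Exponential fit of flight lengths: MLE rate and slope of the log-survival curve.
        import numpy as np

        rng = np.random.default_rng(7)
        flights = rng.exponential(scale=12.0, size=400)      # simulated flight lengths (microns)

        rate_mle = 1.0 / flights.mean()                      # MLE of the exponential rate
        x = np.sort(flights)
        survival = 1.0 - np.arange(1, len(x) + 1) / len(x)   # empirical survival S(x)

        # Under an exponential law, log S(x) is linear in x with slope -rate.
        slope = np.polyfit(x[:-1], np.log(survival[:-1] + 1e-12), 1)[0]
        print(f"MLE rate = {rate_mle:.3f}, slope of log-survival fit = {-slope:.3f}")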

  15. Commercial milk distribution profiles and production locations. Hanford Environmental Dose Reconstruction Project

    Energy Technology Data Exchange (ETDEWEB)

    Deonigi, D.E.; Anderson, D.M.; Wilfert, G.L.

    1994-04-01

    The Hanford Environmental Dose Reconstruction (HEDR) Project was established to estimate radiation doses that people could have received from nuclear operations at the Hanford Site since 1944. For this period iodine-131 is the most important offsite contributor to radiation doses from Hanford operations. Consumption of milk from cows that ate vegetation contaminated by iodine-131 is the dominant radiation pathway for individuals who drank milk (Napier 1992). Information has been developed on commercial milk cow locations and commercial milk distribution during 1945 and 1951. The year 1945 was selected because during 1945 the largest amount of iodine-131 was released from Hanford facilities in a calendar year (Heeb 1993); therefore, 1945 was the year in which an individual was likely to have received the highest dose. The year 1951 was selected to provide data for comparing the changes that occurred in commercial milk flows (i.e., sources, processing locations, and market areas) between World War II and the post-war period. To estimate the doses people could have received from this milk flow, it is necessary to estimate the amount of milk people consumed, the source of the milk, the specific feeding regime used for milk cows, and the amount of iodine-131 contamination deposited on feed.

  16. Air bubble migration is a random event post embryo transfer.

    Science.gov (United States)

    Confino, E; Zhang, J; Risquez, F

    2007-06-01

    Air bubble location following embryo transfer (ET) is presumed to mark the placement site of the embryos. The purpose of this study was to document endometrial air bubble position and migration following embryo transfer. Multicenter prospective case study. Eighty-eight embryo transfers were performed under abdominal ultrasound guidance in two countries by two authors. A single or double air bubble was loaded with the embryos using soft, coaxial, end-opened catheters. The embryos were slowly injected 10-20 mm from the fundus. Air bubble position was recorded immediately, 30 minutes later, and when the patient stood up. Bubble marker location analysis revealed a random distribution without a visible gravity effect when the patients stood up. The bubble markers demonstrated splitting, movement in all directions, and dispersion. Air bubbles move and split frequently post ET with the patient in the horizontal position, suggestive of active uterine contractions. Bubble migration analysis supports a rather random movement of the bubbles, and possibly the embryos. Standing up somewhat changed bubble configuration and distribution in the uterine cavity. Gravity-related bubble motion was uncommon, suggesting that horizontal rest post ET may not be necessary. This report challenges the common belief that very accurate ultrasound-guided embryo placement is mandatory. The very random bubble movement observed in this two-center study suggests that a large "window" of embryo placement may be present.

  17. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    International Nuclear Information System (INIS)

    Michael G. Waddell; William J. Domoracki; Tom J. Temples; Jerome Eyer

    2001-01-01

    The Earth Sciences and Resources Institute, University of South Carolina is conducting a 14-month proof-of-concept study to determine the location and distribution of subsurface Dense Nonaqueous Phase Liquid (DNAPL) carbon tetrachloride (CCl4) contamination at the 216-Z-9 crib, 200 West Area, Department of Energy (DOE) Hanford Site, Washington, by use of two-dimensional high-resolution seismic reflection surveys and borehole geophysical data. The study makes use of recent advances in seismic reflection amplitude versus offset (AVO) technology to directly detect the presence of subsurface DNAPL. The techniques proposed are a noninvasive means of site characterization and direct free-phase DNAPL detection. This report covers the results of Task 3 and the change of scope of Tasks 4-6. Task 1 comprised site evaluation and seismic modeling studies. The site evaluation consisted of identifying and collecting preexisting geological and geophysical information regarding subsurface structure and the presence and quantity of DNAPL. The seismic modeling studies were undertaken to determine the likelihood that an AVO response exists and its probable manifestation. Task 2 was the design and acquisition of 2-D seismic reflection data designed to image areas of probable high concentration of DNAPL. Task 3 was the processing and interpretation of the 2-D data. Tasks 4, 5, and 6 covered the design, acquisition, processing, and interpretation of a three-dimensional (3-D) seismic survey of the Z-9 crib area in the 200 West Area, Hanford

  18. Non-fragile observer design for discrete-time genetic regulatory networks with randomly occurring uncertainties

    International Nuclear Information System (INIS)

    Banu, L Jarina; Balasubramaniam, P

    2015-01-01

    This paper investigates the problem of non-fragile observer design for a class of discrete-time genetic regulatory networks (DGRNs) with time-varying delays and randomly occurring uncertainties. A non-fragile observer is designed for estimating the true concentrations of mRNAs and proteins from available measurement outputs. One important feature of the results reported here is that the parameter uncertainties are assumed to be random and their probabilities of occurrence are known a priori. On the basis of the Lyapunov–Krasovskii functional approach and using a convex combination technique, a delay-dependent estimation criterion is established for DGRNs in terms of linear matrix inequalities (LMIs) that can be efficiently solved using any available LMI solver. Finally, numerical examples are provided to substantiate the theoretical results. (paper)

  19. PARAMETRIC IDENTIFICATION OF STOCHASTIC SYSTEM BY NON-GRADIENT RANDOM SEARCHING

    Directory of Open Access Journals (Sweden)

    A. A. Lobaty

    2017-01-01

    Full Text Available A great variety of identification objects, tasks and methods is known at present, and their significance is constantly increasing in various fields of science and technology. The identification problem depends on the a priori information available about the identification object, and the existing approaches and methods of identification are determined by the form of the mathematical model (deterministic, stochastic, frequency, temporal, spectral, etc.). The paper considers the problem of determining the parameters of a system (the identification object) that is described by a stochastic mathematical model including random functions of time. It has been shown that, when optimizing stochastic systems subject to random actions, deterministic methods can be applied only for a limited, approximate optimization of the system, by taking into account average random effects and a fixed structure of the system. The paper proposes an algorithm for identifying the parameters of a mathematical model of a stochastic system by non-gradient random searching. A specific feature of the algorithm is its applicability to mathematical models of practically any type, because the algorithm does not depend on the linearization or differentiability of the functions included in the mathematical model of the system. The proposed algorithm ensures searching for an extremum of the specified quality criteria under external uncertainties and limitations while using random searching of the parameters of the mathematical model of the system. The paper presents results of investigations of the operational capability of the considered identification method using mathematical simulation of a hypothetical control system with a priori unknown parameter values of the mathematical model. The presented results of the mathematical simulation clearly demonstrate the operational capability of the proposed identification method.
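
    A minimal non-gradient random-search loop of the kind described above is easy to sketch: candidate parameters are perturbed at random and kept only when they reduce a simulation-based quality criterion. The model, criterion and step size below are hypothetical stand-ins for those in the paper.

        # Non-gradient random search for parameter identification of a simple model.
        import numpy as np

        rng = np.random.default_rng(8)
        true_theta = np.array([0.7, -1.3])
        t = np.linspace(0, 5, 200)
        measured = true_theta[0] * np.exp(true_theta[1] * t) + 0.02 * rng.standard_normal(len(t))

        def criterion(theta):
            model = theta[0] * np.exp(theta[1] * t)
            return np.mean((model - measured) ** 2)          # mean squared output error

        theta = np.array([0.0, 0.0])
        best = criterion(theta)
        for k in range(5000):
            candidate = theta + rng.normal(0, 0.1, size=2)   # random, gradient-free step
            c = criterion(candidate)
            if c < best:                                     # accept only improvements
                theta, best = candidate, c
        print("identified parameters:", theta.round(3), "criterion:", round(best, 5))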

  20. Generation and distribution of wealth in Blumenau non-profit social service

    Directory of Open Access Journals (Sweden)

    Loriberto Starosky Filho

    2013-08-01

    Full Text Available Non-profit organizations exist all over the world and they play an important role in the economy. They are not aimed at profit and appeared in order to develop initiatives of a social nature. The main goal of this research is to examine how wealth is generated and distributed by the non-profits that are enrolled in the Welfare Assistance Council of the city of Blumenau. The data were obtained through qualitative, descriptive and documentary research based on an analysis of the published financial statements of a sample consisting of nineteen non-profit welfare assistance organizations. The results showed that: a) to maintain their activities, most institutions rely on resources coming from social grants, partnerships and donations; b) the added value distributed represents more than fifty percent of the total proceeds in a large number of institutions; c) in most organizations the biggest share of the wealth distribution was used to pay workers; d) a low percentage of the wealth goes to lenders and government. As a general rule, most organizations presented a very low rate of retention for themselves because they do not seek profits. Their goals are related to social service activities.

  1. Inference of median difference based on the Box-Cox model in randomized clinical trials.

    Science.gov (United States)

    Maruo, K; Isogawa, N; Gosho, M

    2015-05-10

    In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
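
    The core of the idea can be sketched without the paper's covariance adjustments, standard errors and tests: fit a common Box-Cox transformation, work on the transformed (approximately normal) scale, and map the group centres back to the original scale, where a monotone transformation preserves medians. The data below are simulated.

        # Median difference on the original scale via a common Box-Cox transformation.
        import numpy as np
        from scipy.stats import boxcox
        from scipy.special import inv_boxcox

        rng = np.random.default_rng(9)
        group_a = rng.lognormal(mean=5.5, sigma=0.6, size=120)   # skewed outcome, arm A
        group_b = rng.lognormal(mean=5.8, sigma=0.6, size=120)   # skewed outcome, arm B

        pooled, lam = boxcox(np.concatenate([group_a, group_b]))  # common lambda
        za, zb = boxcox(group_a, lam), boxcox(group_b, lam)       # transform each arm

        # On the transformed (approximately normal) scale the mean estimates the median.
        median_a = inv_boxcox(za.mean(), lam)
        median_b = inv_boxcox(zb.mean(), lam)
        print(f"lambda = {lam:.2f}, estimated median difference = {median_b - median_a:.1f}")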

  2. Analytic degree distributions of horizontal visibility graphs mapped from unrelated random series and multifractal binomial measures

    Science.gov (United States)

    Xie, Wen-Jie; Han, Rui-Qi; Jiang, Zhi-Qiang; Wei, Lijian; Zhou, Wei-Xing

    2017-08-01

    Complex networks are not only a powerful tool for the analysis of complex systems, but also a promising way to analyze time series. The horizontal visibility graph (HVG) algorithm maps time series into graphs, whose degree distributions are numerically and analytically investigated for certain time series. We derive the degree distributions of HVGs through an iterative construction process. The degree distributions of the HVG and the directed HVG for random series are derived to be exponential, which confirms the analytical results from other methods. We also obtain the analytical expressions of the degree distributions of HVGs and the in-degree and out-degree distributions of directed HVGs transformed from multifractal binomial measures, which agree excellently with numerical simulations.
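
    A direct O(n²) construction of the HVG makes the degree distribution easy to check empirically: two points are linked if every intermediate value lies strictly below both, and for i.i.d. random series the analytical degree distribution is P(k) = (1/3)(2/3)^(k-2). The sketch below compares the empirical histogram with that formula.

        # Horizontal visibility graph of a random series and its degree distribution.
        import numpy as np
        from collections import Counter

        def hvg_degrees(x):
            n = len(x)
            degree = np.zeros(n, dtype=int)
            for i in range(n):
                for j in range(i + 1, n):
                    # link i-j if all intermediate values lie strictly below both endpoints
                    if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                        degree[i] += 1
                        degree[j] += 1
            return degree

        x = np.random.default_rng(10).random(1000)
        deg = hvg_degrees(x)
        hist = Counter(deg)
        for k in range(2, 8):   # empirical frequency vs analytical P(k)
            print(k, round(hist[k] / len(deg), 3), round((1 / 3) * (2 / 3) ** (k - 2), 3))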

  3. Hacking on decoy-state quantum key distribution system with partial phase randomization

    Science.gov (United States)

    Sun, Shi-Hai; Jiang, Mu-Sheng; Ma, Xiang-Chun; Li, Chun-Yan; Liang, Lin-Mei

    2014-04-01

    Quantum key distribution (QKD) provides a means for unconditionally secure key transmission between two distant parties. However, in practical implementations, it suffers from quantum hacking due to device imperfections. Here we propose a hybrid measurement attack, with only linear optics, homodyne detection, and single photon detection, on the widely used vacuum + weak decoy state QKD system when the phase of the source is partially randomized. Our analysis shows that, in some parameter regimes, the proposed attack would result in an entanglement-breaking channel but still be able to trick the legitimate users into believing they have transmitted secure keys. That is, the eavesdropper is able to steal all the key information without being discovered by the users. Thus, our proposal reveals that partial phase randomization is not sufficient to guarantee the security of phase-encoding QKD systems with weak coherent states.

  4. Hacking on decoy-state quantum key distribution system with partial phase randomization.

    Science.gov (United States)

    Sun, Shi-Hai; Jiang, Mu-Sheng; Ma, Xiang-Chun; Li, Chun-Yan; Liang, Lin-Mei

    2014-04-23

    Quantum key distribution (QKD) provides a means for unconditionally secure key transmission between two distant parties. However, in practical implementations, it suffers from quantum hacking due to device imperfections. Here we propose a hybrid measurement attack, with only linear optics, homodyne detection, and single photon detection, on the widely used vacuum + weak decoy state QKD system when the phase of the source is partially randomized. Our analysis shows that, in some parameter regimes, the proposed attack would result in an entanglement-breaking channel but still be able to trick the legitimate users into believing they have transmitted secure keys. That is, the eavesdropper is able to steal all the key information without being discovered by the users. Thus, our proposal reveals that partial phase randomization is not sufficient to guarantee the security of phase-encoding QKD systems with weak coherent states.

  5. Random distribution of nucleoli in metabolic cells

    Energy Technology Data Exchange (ETDEWEB)

    Beckman, R.J.; Waterman, M.S.

    1977-01-01

    Hasofer (1974) has studied a probabilistic model for the fusion of nucleoli in metabolic cells. The nucleoli are uniformly distributed at points in the nucleus, assumed to be a sphere. The nucleoli grow from a point to a maximum size during interphase, and fusion is said to occur if the nucleoli touch. For this model, Hasofer calculated the probability of fusion and found it much smaller than experimental data would indicate. Experimental data of this type are obtained with a microscope, which yields a two-dimensional view or projection of the three-dimensional cell. Hasofer implicitly assumes that actual fusion can be distinguished from the case where the two nucleoli do not touch but their two-dimensional projections overlap. It is assumed, in this letter, that these two cases cannot be distinguished. The probability obtained by Beckman and Waterman is larger than Hasofer's, and a much better fit to the experimental data is obtained. Even if true fusion can be unfailingly distinguished from overlap of the two-dimensional projections, it is hoped that these calculations will allow someone to propose the correct (non-uniform) model. It is concluded, for the assumptions used, that there is not sufficient evidence to reject the hypothesis of uniform distribution of the nucleoli.
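
    The difference between true contact and apparent overlap in projection under the uniform model can be illustrated with a small Monte Carlo sketch; the nucleolus radius and sample size below are arbitrary choices, not values from the letter:

      import numpy as np

      rng = np.random.default_rng(2)

      def uniform_in_sphere(n, R=1.0):
          # rejection sampling of points uniformly distributed inside a sphere of radius R
          pts = []
          while len(pts) < n:
              p = rng.uniform(-R, R, size=3)
              if p @ p <= R * R:
                  pts.append(p)
          return np.array(pts)

      def overlap_probabilities(r, trials=100_000):
          a, b = uniform_in_sphere(trials), uniform_in_sphere(trials)
          d3 = np.linalg.norm(a - b, axis=1)            # true 3-D separation
          d2 = np.linalg.norm((a - b)[:, :2], axis=1)   # separation seen in the 2-D projection
          return np.mean(d3 < 2 * r), np.mean(d2 < 2 * r)

      p_true, p_proj = overlap_probabilities(r=0.2)
      print(f"P(true contact) = {p_true:.3f},  P(projected overlap) = {p_proj:.3f}")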

  6. Potential fluctuations due to the randomly distributed charges at the semiconductor-insulator interface in MIS-structures

    International Nuclear Information System (INIS)

    Slavcheva, G.; Yanchev, I.

    1991-01-01

    A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening due to the image charge with respect to the metal electrode in an MIS structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is that it leads to a finite dispersion Γ² of the random potential, in contrast with the previously known correlation functions, which lead to a divergent dispersion. The important characteristic of the random potential distribution, Γ², which determines the amplitude of the potential fluctuations, is calculated. (author). 7 refs, 1 fig

  7. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions

    DEFF Research Database (Denmark)

    Yura, Harold; Hanson, Steen Grüner

    2012-01-01

    with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative...

  8. Trophallaxis-inspired model for distributed transport between randomly interacting agents

    Science.gov (United States)

    Gräwer, Johannes; Ronellenfitsch, Henrik; Mazza, Marco G.; Katifori, Eleni

    2017-08-01

    Trophallaxis, the regurgitation and mouth-to-mouth transfer of liquid food between members of eusocial insect societies, is an important process that allows the fast and efficient dissemination of food in the colony. Trophallactic systems are typically treated as a network of agent interactions. This approach, though valuable, does not easily lend itself to analytic predictions. In this work we consider a simple trophallactic system of randomly interacting agents with finite carrying capacity, and calculate analytically and via a series of simulations the global food intake rate for the whole colony as well as observables describing how uniformly the food is distributed within the nest. Our model and predictions provide a useful benchmark to assess to what extent the observed food uptake rates and efficiency of food distribution are due to stochastic effects or to specific trophallactic strategies of the ant colony. Our work also serves as a stepping stone to describing the collective properties of more complex trophallactic systems, such as those including division of labor between foragers and workers.
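
    A toy simulation in the same spirit, random pairwise exchanges between agents of finite carrying capacity with a single always-refilled forager, is sketched below; the exchange rule and all parameters are illustrative assumptions rather than the authors' model:

      import numpy as np

      rng = np.random.default_rng(3)

      N, CAPACITY, STEPS = 100, 1.0, 20_000
      load = np.zeros(N)       # crop load of each agent
      load[0] = CAPACITY       # agent 0 acts as the forager

      for t in range(STEPS):
          i, j = rng.choice(N, size=2, replace=False)
          # move half of the load difference, clipped by the receiver's free capacity
          flow = 0.5 * (load[i] - load[j])
          flow = np.clip(flow, -(CAPACITY - load[i]), CAPACITY - load[j])
          load[i] -= flow
          load[j] += flow
          load[0] = CAPACITY   # the forager returns to the source and refills
          if (t + 1) % 5000 == 0:
              print(f"step {t + 1:6d}: mean fill {load.mean():.3f}, spread {load.std():.3f}")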

  9. Inflation with a graceful exit in a random landscape

    International Nuclear Information System (INIS)

    Pedro, F.G.; Westphal, A.

    2016-11-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  10. Inflation with a graceful exit in a random landscape

    Energy Technology Data Exchange (ETDEWEB)

    Pedro, F.G. [Univ. Autonoma de Madrid (Spain). Dept. de Fisica Teorica y Inst. de Fisica Teorica UAM/CSIC; Westphal, A. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group

    2016-11-15

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  11. Inflation with a graceful exit in a random landscape

    Energy Technology Data Exchange (ETDEWEB)

    Pedro, F.G. [Departamento de Física Teórica and Instituto de Física Teórica UAM/CSIC,Universidad Autónoma de Madrid,Cantoblanco, 28049 Madrid (Spain); Westphal, A. [Deutsches Elektronen-Synchrotron DESY, Theory Group,D-22603 Hamburg (Germany)

    2017-03-30

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N≪10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  12. Evaluation of the differential energy distribution of systems of non-thermally activated molecules

    International Nuclear Information System (INIS)

    Rogers, E.B.

    1986-01-01

    A non-thermally activated molecule may undergo pressure dependent deactivation or energy dependent decomposition. It should be possible to use the pressure dependent stabilization/decomposition yields to determine the energy distribution in non-thermal systems. The numerical technique of regularization has been applied to this chemical problem to evaluate this distribution. The resulting method has been tested with a number of simulated distributions and kinetic models. Application was then made to several real chemical systems to determine the energy distribution resulting from the primary excitation process. Testing showed the method to be quite effective in reproducing input distributions from simulated data in all test cases. The effect of experimental error proved to be negligible when the error-filled data were first smoothed with a parabolic spline. This method has been applied to three different hot atom activated systems. Application to ¹⁸F-for-F substituted CH₃CF₃ generated a broad distribution extending from 62 to 318 kcal/mol, with a median energy of 138 kcal/mol. The shape of this distribution (and those from the other applications) indicated the involvement of two mechanisms in the excitation process. Analysis of the T-for-H substituted CH₃CH₂F system showed a more narrow distribution (56-218 kcal/mol) with a median energy of 79.8 kcal/mol. The distribution of the T-for-H substituted CH₃CH₂Cl system, extending from 54.5 to 199 kcal/mol, was seen to be quite similar. It was concluded that this method is a valid approach to evaluating differential energy distributions in non-thermal systems, specifically those activated by hot atom substitution

  13. What influences national and foreign physicians' geographic distribution? An analysis of medical doctors' residence location in Portugal.

    Science.gov (United States)

    Russo, Giuliano; Ferrinho, Paulo; de Sousa, Bruno; Conceição, Cláudia

    2012-07-02

    The debate over physicians' geographical distribution has attracted the attention of the economic and public health literature over the last forty years. Nonetheless, it is still to date unclear what influences physicians' location, and whether foreign physicians contribute to fill the geographical gaps left by national doctors in any given country. The present research sets out to investigate the current distribution of national and international physicians in Portugal, with the objective to understand its determinants and provide an evidence base for policy-makers to identify policies to influence it. A cross-sectional study of physicians currently registered in Portugal was conducted to describe the population and explore the association of physician residence patterns with relevant personal and municipality characteristics. Data from the Portuguese Medical Council on physicians' residence and characteristics were analysed, as well as data from the National Institute of Statistics on municipalities' population, living standards and health care network. Descriptive statistics, chi-square tests, negative binomial and logistic regression modelling were applied to determine: (a) municipality characteristics predicting Portuguese and International physicians' geographical distribution, and; (b) doctors' characteristics that could increase the odds of residing outside the country's metropolitan areas. There were 39,473 physicians in Portugal in 2008, 51.1% of whom male, and 40.2% between 41 and 55 years of age. They were predominantly Portuguese (90.5%), with Spanish, Brazilian and African nationalities also represented. Population, Population's Purchasing Power, Nurses per capita and Municipality Development Index (MDI) were the municipality characteristics displaying the strongest association with national physicians' location. For foreign physicians, the MDI was not statistically significant, while municipalities' foreign population applying for residence

  14. What influences national and foreign physicians’ geographic distribution? An analysis of medical doctors’ residence location in Portugal

    Directory of Open Access Journals (Sweden)

    Russo Giuliano

    2012-07-01

    Full Text Available Abstract Background The debate over physicians’ geographical distribution has attracted the attention of the economic and public health literature over the last forty years. Nonetheless, it is still to date unclear what influences physicians’ location, and whether foreign physicians contribute to fill the geographical gaps left by national doctors in any given country. The present research sets out to investigate the current distribution of national and international physicians in Portugal, with the objective to understand its determinants and provide an evidence base for policy-makers to identify policies to influence it. Methods A cross-sectional study of physicians currently registered in Portugal was conducted to describe the population and explore the association of physician residence patterns with relevant personal and municipality characteristics. Data from the Portuguese Medical Council on physicians’ residence and characteristics were analysed, as well as data from the National Institute of Statistics on municipalities’ population, living standards and health care network. Descriptive statistics, chi-square tests, negative binomial and logistic regression modelling were applied to determine: (a) municipality characteristics predicting Portuguese and International physicians’ geographical distribution, and; (b) doctors’ characteristics that could increase the odds of residing outside the country’s metropolitan areas. Results There were 39,473 physicians in Portugal in 2008, 51.1% of whom male, and 40.2% between 41 and 55 years of age. They were predominantly Portuguese (90.5%), with Spanish, Brazilian and African nationalities also represented. Population, Population’s Purchasing Power, Nurses per capita and Municipality Development Index (MDI) were the municipality characteristics displaying the strongest association with national physicians’ location. For foreign physicians, the MDI was not statistically significant

  15. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    Science.gov (United States)

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-12-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, for which there is limited public data on deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  16. Chemical Continuous Time Random Walks

    Science.gov (United States)

    Aquino, T.; Dentz, M.

    2017-12-01

    Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
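
    The algorithmic change relative to the classical Gillespie scheme is the waiting-time draw; the sketch below illustrates this for a single bimolecular reaction with a pluggable inter-reaction time law (the gamma choice and the rate constant are illustrative assumptions, not the derivation summarized above):

      import numpy as np

      rng = np.random.default_rng(4)

      def reaction_sim(nA, nB, k, waiting, t_max):
          """A + B -> C with a pluggable inter-reaction time law.

          Passing an exponential draw recovers the classical Gillespie algorithm,
          while a broader law (e.g. a gamma with shape < 1) mimics the delays
          caused by incomplete mixing."""
          t, history = 0.0, [(0.0, nA)]
          while nA > 0 and nB > 0:
              rate = k * nA * nB
              t += waiting(rate)
              if t > t_max:
                  break
              nA, nB = nA - 1, nB - 1
              history.append((t, nA))
          return history

      exp_wait = lambda rate: rng.exponential(1.0 / rate)
      gamma_wait = lambda rate: rng.gamma(0.3, 1.0 / (0.3 * rate))  # same mean delay 1/rate

      well_mixed = reaction_sim(500, 500, 1e-3, exp_wait, t_max=50.0)
      delayed = reaction_sim(500, 500, 1e-3, gamma_wait, t_max=50.0)
      print("A remaining (exponential):", well_mixed[-1][1], " (gamma):", delayed[-1][1])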

  17. [Intel random number generator-based true random number generator].

    Science.gov (United States)

    Huang, Feng; Shen, Hong

    2004-09-01

    The aim was to establish a true random number generator based on certain Intel chips. The random numbers were acquired by programming in Microsoft Visual C++ 6.0 via register reads from the random number generator (RNG) unit of an Intel 815 chipset-based computer with the Intel Security Driver (ISD). We tested the generator with 500 random numbers using the NIST FIPS 140-1 and chi-square (χ²) tests, and the results showed that the generated random numbers satisfied the requirements of independence and uniform distribution. We also compared the random numbers generated by the Intel RNG-based true random number generator with those from a random number table statistically, using the same amount of 7500 random numbers in the same value domain, which showed that the SD, SE and CV of the Intel RNG-based random number generator were smaller than those of the random number table. A u test of the two CVs revealed no significant difference between the two methods. The Intel RNG-based random number generator can produce high-quality random numbers with good independence and uniform distribution, and solves some problems encountered with random number tables in the acquisition of random numbers.
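
    A chi-square uniformity check of the kind mentioned above can be sketched as follows; os.urandom stands in for bytes read from the hardware RNG, and the sample and bin sizes are arbitrary choices:

      import os
      import numpy as np
      from scipy import stats

      # stand-in for bytes read from a hardware random number generator
      sample = np.frombuffer(os.urandom(5000), dtype=np.uint8)

      # chi-square goodness-of-fit against a uniform distribution over 16 bins
      observed, _ = np.histogram(sample, bins=16, range=(0, 256))
      chi2, p = stats.chisquare(observed)
      print(f"chi2 = {chi2:.1f}, p = {p:.3f}  (p > 0.05 gives no evidence against uniformity)")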

  18. Effects of random pebble distribution on the multiplication factor in HTR pebble bed reactors

    Energy Technology Data Exchange (ETDEWEB)

    Auwerda, G.J., E-mail: g.j.auwerda@tudelft.n [Department of Physics of Nuclear Reactors at the Delft University of Technology, Mekelweg 15, Delft (Netherlands); Kloosterman, J.L.; Lathouwers, D.; Hagen, T.H.J.J. van der [Department of Physics of Nuclear Reactors at the Delft University of Technology, Mekelweg 15, Delft (Netherlands)

    2010-08-15

    In pebble bed reactors the pebbles have a random distribution within the core. The usual approach in modeling the bed is homogenizing the entire bed. To quantify the errors arising in such a model, this article investigates the effect on k_eff of three phenomena in random pebble distributions: non-uniform packing density, neutron streaming in between the pebbles, and variations in Dancoff factor. For a 100 cm high cylinder with reflective top and bottom boundary conditions 25 pebble beds were generated. Of each bed three core models were made: a homogeneous model, a zones model including density fluctuations, and an exact model with all pebbles modeled individually. The same was done for a model of the PROTEUS facility. k_eff calculations were performed with three codes: Monte Carlo, diffusion, and finite element transport. By comparing k_eff of the homogenized and zones model the effect of including density fluctuations in the pebble bed was found to increase k_eff by 71 pcm for the infinite cylinder and 649 pcm for PROTEUS. The large value for PROTEUS is due to the low packing fraction near the top of the pebble bed, causing a significantly lower packing fraction for the bulk of the pebble bed in the homogenized model. The effect of neutron streaming was calculated by comparing the zones model with the exact model, and was found to decrease k_eff by 606 pcm for the infinite cylinder, and by 1240 pcm for PROTEUS. This was compared with the effect of using a streaming correction factor on the diffusion coefficient in the zones model, which resulted in Δ_streaming values of 340 and 1085 pcm. From this we conclude neutron streaming is an important effect in pebble bed reactors, and is not accurately described by the correction factor on the diffusion coefficient. Changing the Dancoff factor in the outer part of the pebble bed to compensate for the lower probability of neutrons to enter other fuel pebbles caused no significant changes

  19. New Distribution Centre Locations for a growing retail company in Thailand: The case study of Siam Global House Plc.

    OpenAIRE

    Suriyawanakul, Kriangkai

    2010-01-01

    Logistics and distribution management has been a very important topic for many industries over the last couple of decades. Many companies have restructured and redesigned their logistics networks to compete in the market. Among these decisions, the warehouse location decision is one of the most important elements, as good location decisions are known to save companies millions every year. Siam Globalhouse, one of the largest hardware retailers in Thailand, wants to improve the efficiency of their supply n...

  20. Exceptional diversity, non-random distribution, and rapid evolution of retroelements in the B73 maize genome.

    Directory of Open Access Journals (Sweden)

    Regina S Baucom

    2009-11-01

    Full Text Available Recent comprehensive sequence analysis of the maize genome now permits detailed discovery and description of all transposable elements (TEs) in this complex nuclear environment. Reiteratively optimized structural and homology criteria were used in the computer-assisted search for retroelements, TEs that transpose by reverse transcription of an RNA intermediate, with the final results verified by manual inspection. Retroelements were found to occupy the majority (>75%) of the nuclear genome in maize inbred B73. Unprecedented genetic diversity was discovered in the long terminal repeat (LTR) retrotransposon class of retroelements, with >400 families (>350 newly discovered) contributing >31,000 intact elements. The two other classes of retroelements, SINEs (four families) and LINEs (at least 30 families), were observed to contribute 1,991 and approximately 35,000 copies, respectively, or a combined approximately 1% of the B73 nuclear genome. With regard to fully intact elements, the median copy number for all retroelement families in maize was 2, because >250 LTR retrotransposon families contained only one or two intact members that could be detected in the B73 draft sequence. The majority, perhaps all, of the investigated retroelement families exhibited non-random dispersal across the maize genome, with LINEs, SINEs, and many low-copy-number LTR retrotransposons exhibiting a bias for accumulation in gene-rich regions. In contrast, most (but not all) medium- and high-copy-number LTR retrotransposons were found to preferentially accumulate in gene-poor regions like pericentromeric heterochromatin, while a few high-copy-number families exhibited the opposite bias. Regions of the genome with the highest LTR retrotransposon density contained the lowest LTR retrotransposon diversity. These results indicate that the maize genome provides a great number of different niches for the survival and procreation of a great variety of retroelements that have evolved to

  1. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    Science.gov (United States)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location patterns and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcomings of existing DT algorithms, which create a node for each value of a given attribute, and has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks is of high accuracy, and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared with Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
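
    The core idea, replacing Euclidean with geodesic distances before tree induction, can be sketched with standard tools; this is not the authors' Cl-DT implementation, and the swiss-roll data, labels, and tree depth are illustrative assumptions:

      import numpy as np
      from sklearn.datasets import make_swiss_roll
      from sklearn.manifold import Isomap
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      X, t = make_swiss_roll(n_samples=1500, noise=0.3, random_state=0)
      y = (t > np.median(t)).astype(int)   # binary labels along the roll

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      # decision tree on the raw, non-linearly distributed coordinates
      raw_acc = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xtr, ytr).score(Xte, yte)

      # decision tree on an Isomap embedding, where geodesic distances replace Euclidean ones
      iso = Isomap(n_neighbors=10, n_components=2).fit(Xtr)
      iso_acc = DecisionTreeClassifier(max_depth=5, random_state=0).fit(
          iso.transform(Xtr), ytr).score(Xte, yte)

      print(f"raw-coordinate tree: {raw_acc:.3f}   Isomap + tree: {iso_acc:.3f}")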

  2. AN ISOMETRIC MAPPING BASED CO-LOCATION DECISION TREE ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Zhou

    2018-05-01

    Full Text Available Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location patterns and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcomings of existing DT algorithms, which create a node for each value of a given attribute, and has a higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. In order to overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks is of high accuracy, and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared with Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.

  3. Non-Random Distribution of 5S rDNA Sites and Its Association with 45S rDNA in Plant Chromosomes.

    Science.gov (United States)

    Roa, Fernando; Guerra, Marcelo

    2015-01-01

    5S and 45S rDNA sites are the best mapped chromosome regions in eukaryotic chromosomes. In this work, a database was built gathering information about the position and number of 5S rDNA sites in 784 plant species, aiming to identify patterns of distribution along the chromosomes and its correlation with the position of 45S rDNA sites. Data revealed that in most karyotypes (54.5%, including polyploids) two 5S rDNA sites (a single pair) are present, with 58.7% of all sites occurring in the short arm, mainly in the proximal region. In karyotypes of angiosperms with only 1 pair of sites (single sites) they are mostly found in the proximal region (52.0%), whereas in karyotypes with multiple sites the location varies according to the average chromosome size. Karyotypes with multiple sites and small chromosomes (6 µm) more commonly show terminal or interstitial sites. In species with holokinetic chromosomes, the modal value of sites per karyotype was also 2, but they were found mainly in a terminal position. Adjacent 5S and 45S rDNA sites were often found in the short arm, reflecting the preferential distribution of both sites in this arm. The high frequency of genera with at least 1 species with adjacent 5S and 45S sites reveals that this association appeared several times during angiosperm evolution, but it has been maintained only rarely as the dominant array in plant genera. © 2015 S. Karger AG, Basel.

  4. Hardware random number generator base on monostable multivibrators dedicated for distributed measurement and control systems

    Science.gov (United States)

    Czernik, Pawel

    2013-10-01

    A hardware random number generator based on 74121 monostable multivibrators, for applications in cryptographically secure distributed measurement and control systems with asymmetric resources, is presented. The device is implemented on the basis of a physical electronic vibration generator whose circuit is composed of two "looped" 74121 monostable multivibrators, a D flip-flop, and an external clock signal source. The clock signal that controls the D flip-flop is generated by a computer on one of the parallel-port pins. The author's process for acquiring random data from the measuring system into a computer is also presented. The presented system was designed, built, and thoroughly tested in terms of cryptographic security in our laboratory, which is the most important part of this publication. Real cryptographic security was tested using the author's software and the software environment called RDieHarder. The obtained results are presented and analyzed in detail, with particular reference to the specificity of distributed measurement and control systems with asymmetric resources.

  5. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    Science.gov (United States)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters are obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.
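
    The final matrix-weighted least-squares fusion step can be sketched as follows for the simplified case of uncorrelated local estimation errors (the algorithm above additionally carries the cross-covariances between local filters); the two local estimates are hypothetical numbers:

      import numpy as np

      def fuse(estimates, covariances):
          # information (inverse-covariance) weighted least-squares combination
          info = [np.linalg.inv(P) for P in covariances]
          P_fused = np.linalg.inv(sum(info))
          x_fused = P_fused @ sum(Pi @ x for Pi, x in zip(info, estimates))
          return x_fused, P_fused

      # two hypothetical local filters estimating the same 2-D signal
      x1, P1 = np.array([1.02, -0.48]), np.diag([0.10, 0.30])
      x2, P2 = np.array([0.95, -0.52]), np.diag([0.25, 0.08])
      x, P = fuse([x1, x2], [P1, P2])
      print("fused estimate:", x.round(3), " fused covariance diagonal:", np.diag(P).round(3))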

  6. Random Forest Based Coarse Locating and KPCA Feature Extraction for Indoor Positioning System

    Directory of Open Access Journals (Sweden)

    Yun Mo

    2014-01-01

    Full Text Available With the fast development of mobile terminals, positioning techniques based on the fingerprinting method have drawn attention from many researchers and even world-famous companies. To overcome some shortcomings of existing fingerprinting systems and further improve system performance, on the one hand, we propose a coarse positioning method based on random forest, which is able to define several subregions and classify a test point into the correct region with outstanding accuracy compared with some typical clustering algorithms. On the other hand, through mathematical analysis, the proposed kernel principal component analysis algorithm is applied for radio map processing, which may provide better robustness and adaptability compared with linear feature extraction methods and manifold learning techniques. We build both a theoretical model and a real environment for verifying the feasibility and reliability. The experimental results show that the proposed indoor positioning system can achieve 99% coarse locating accuracy and improve fine positioning accuracy by 15% on average in a strongly noisy environment compared with some typical fingerprinting-based methods.
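
    A rough sketch of such a coarse-locating pipeline with standard tools is given below; the synthetic RSSI fingerprints stand in for a real radio map, and the kernel and forest parameters are illustrative assumptions rather than the authors' settings:

      import numpy as np
      from sklearn.decomposition import KernelPCA
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(5)

      # synthetic radio map: 4 subregions, 8 access points, noisy RSSI fingerprints (dBm)
      centers = rng.uniform(-80, -40, size=(4, 8))
      X = np.vstack([c + rng.normal(0, 4, size=(250, 8)) for c in centers])
      y = np.repeat(np.arange(4), 250)

      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      model = make_pipeline(
          KernelPCA(n_components=4, kernel="rbf", gamma=1e-3),      # non-linear feature extraction
          RandomForestClassifier(n_estimators=200, random_state=0), # coarse region classifier
      )
      print("coarse-locating accuracy:", model.fit(Xtr, ytr).score(Xte, yte))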

  7. Statistical mechanics of the fashion game on random networks

    International Nuclear Information System (INIS)

    Sun, YiFan

    2016-01-01

    A model of fashion on networks is studied. This model consists of two groups of agents that are located on a network and have opposite viewpoints towards being fashionable: behaving consistently with either the majority or the minority of adjacent agents. Checking whether the fashion game has a pure Nash equilibrium (pure NE) is an NP-complete problem. Using replica-symmetric mean-field theory, the largest proportion of satisfied agents and the region where at least one pure NE should exist are determined for several types of random networks. Furthermore, a quantitative analysis of the asynchronous best-response dynamics yields the phase diagram of existence and detectability of pure NE in the fashion game on some random networks. (paper: classical statistical mechanics, equilibrium and non-equilibrium).

  8. Epidermis Microstructure Inspired Graphene Pressure Sensor with Random Distributed Spinosum for High Sensitivity and Large Linearity.

    Science.gov (United States)

    Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling

    2018-03-27

    Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa⁻¹ in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.

  9. Non-motor outcomes of subthalamic stimulation in Parkinson's disease depend on location of active contacts.

    Science.gov (United States)

    Dafsari, Haidar Salimi; Petry-Schmelzer, Jan Niklas; Ray-Chaudhuri, K; Ashkan, Keyoumars; Weis, Luca; Dembek, Till A; Samuel, Michael; Rizos, Alexandra; Silverdale, Monty; Barbe, Michael T; Fink, Gereon R; Evans, Julian; Martinez-Martin, Pablo; Antonini, Angelo; Visser-Vandewalle, Veerle; Timmermann, Lars

    2018-03-16

    Subthalamic nucleus (STN) deep brain stimulation (DBS) improves quality of life (QoL), motor, and non-motor symptoms (NMS) in Parkinson's disease (PD). Few studies have investigated the influence of the location of neurostimulation on NMS. To investigate the impact of active contact location on NMS in STN-DBS in PD. In this prospective, open-label, multicenter study including 50 PD patients undergoing bilateral STN-DBS, we collected NMSScale (NMSS), NMSQuestionnaire (NMSQ), Hospital Anxiety and Depression Scale (anxiety/depression, HADS-A/-D), PDQuestionnaire-8 (PDQ-8), Scales for Outcomes in PD-motor examination, motor complications, activities of daily living (ADL), and levodopa equivalent daily dose (LEDD) preoperatively and at 6 months follow-up. Changes were analyzed with Wilcoxon signed-rank/t-test and Bonferroni-correction for multiple comparisons. Although the STN was targeted visually, we employed an atlas-based approach to explore the relationship between active contact locations and DBS outcomes. Based on fused MRI/CT-images, we identified Cartesian coordinates of active contacts with patient-specific Mai-atlas standardization. We computed linear mixed-effects models with x-/y-/z-coordinates as independent, hemispheres as within-subject, and test change scores as dependent variables. NMSS, NMSQ, PDQ-8, motor examination, complications, and LEDD significantly improved at follow-up. Linear mixed-effect models showed that NMS and QoL improvement significantly depended on more medial (HADS-D, NMSS), anterior (HADS-D, NMSQ, PDQ-8), and ventral (HADS-A/-D, NMSS, PDQ-8) neurostimulation. ADL improved more in posterior, LEDD in lateral neurostimulation locations. No relationship was observed for motor examination and complications scores. Our study provides evidence that more anterior, medial, and ventral STN-DBS is significantly related to more beneficial non-motor outcomes. Copyright © 2018. Published by Elsevier Inc.

  10. MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES

    Institute of Scientific and Technical Information of China (English)

    程乾生

    1990-01-01

    The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.

  11. Tumour control probability (TCP) for non-uniform activity distribution in radionuclide therapy

    International Nuclear Information System (INIS)

    Uusijaervi, Helena; Bernhardt, Peter; Forssell-Aronsson, Eva

    2008-01-01

    Non-uniform radionuclide distribution in tumours will lead to a non-uniform absorbed dose. The aim of this study was to investigate how tumour control probability (TCP) depends on the radionuclide distribution in the tumour, both macroscopically and at the subcellular level. The absorbed dose in the cell nuclei of tumours was calculated for ⁹⁰Y, ¹⁷⁷Lu, ¹⁰³ᵐRh and ²¹¹At. The radionuclides were uniformly distributed within the subcellular compartment and they were uniformly, normally or log-normally distributed among the cells in the tumour. When all cells contain the same amount of activity, the cumulated activities required for TCP = 0.99 (Ã_TCP=0.99) were 1.5-2 and 2-3 times higher when the activity was distributed on the cell membrane compared to in the cell nucleus for ¹⁰³ᵐRh and ²¹¹At, respectively. TCP for ⁹⁰Y was not affected by different radionuclide distributions, whereas for ¹⁷⁷Lu, it was slightly affected when the radionuclide was in the nucleus. TCP for ¹⁰³ᵐRh and ²¹¹At were affected by different radionuclide distributions to a great extent when the radionuclides were in the cell nucleus and to lesser extents when the radionuclides were distributed on the cell membrane or in the cytoplasm. When the activity was distributed in the nucleus, Ã_TCP=0.99 increased when the activity distribution became more heterogeneous for ¹⁰³ᵐRh and ²¹¹At, and the increase was large when the activity was normally distributed compared to log-normally distributed. When the activity was distributed on the cell membrane, Ã_TCP=0.99 was not affected for ¹⁰³ᵐRh and ²¹¹At when the activity distribution became more heterogeneous. Ã_TCP=0.99 for ⁹⁰Y and ¹⁷⁷Lu were not affected by different activity distributions, neither macroscopic nor subcellular
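
    The qualitative effect of dose heterogeneity on TCP can be illustrated with the standard Poissonian TCP model; the clonogen number, radiosensitivity, and log-normal spread below are hypothetical values, not those used in the paper:

      import numpy as np

      rng = np.random.default_rng(6)

      N_CELLS, ALPHA = 10_000, 0.35   # hypothetical clonogen number and sensitivity (Gy^-1)

      def tcp(doses):
          # Poissonian TCP: each cell survives with probability exp(-alpha * D)
          expected_survivors = np.exp(-ALPHA * doses).sum()
          return np.exp(-expected_survivors)

      mean_dose = 40.0   # Gy, same average absorbed dose in both scenarios
      sigma = 0.5        # log-normal heterogeneity of the cell-level dose
      uniform = np.full(N_CELLS, mean_dose)
      heterogeneous = rng.lognormal(np.log(mean_dose) - 0.5 * sigma**2, sigma, N_CELLS)

      print(f"TCP, uniform dose:       {tcp(uniform):.3f}")
      print(f"TCP, heterogeneous dose: {tcp(heterogeneous):.3f}")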

  12. On properties of continuous-time random walks with non-Poissonian jump-times

    International Nuclear Information System (INIS)

    Villarroel, Javier; Montero, Miquel

    2009-01-01

    The usual development of the continuous-time random walk (CTRW) proceeds by assuming that the present is one of the jumping times. Under this restrictive assumption integral equations for the propagator and mean escape times have been derived. We generalize these results to the case when the present is an arbitrary time by recourse to renewal theory. The case of Erlang distributed times is analyzed in detail. Several concrete examples are considered.
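
    The renewal-theory point, that the waiting time seen from an arbitrary "present" differs from a fresh waiting time unless the jump times are Poissonian, can be checked numerically for Erlang-distributed times; the sketch below uses arbitrary parameters:

      import numpy as np

      rng = np.random.default_rng(7)

      k, lam = 3, 1.0                  # Erlang(3) waiting times, mean k / lam = 3
      jumps = np.cumsum(rng.gamma(k, 1.0 / lam, size=200_000))

      # pick many arbitrary "present" instants and measure the time to the next jump
      t_obs = rng.uniform(0.1, 0.9, size=20_000) * jumps[-1]
      forward = jumps[np.searchsorted(jumps, t_obs)] - t_obs

      print("mean of a fresh Erlang waiting time:", k / lam)
      print("mean forward recurrence time       :", round(forward.mean(), 2))
      # renewal theory gives E[forward] = E[T^2] / (2 E[T]) = (k + 1) / (2 * lam) = 2 here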

  13. In the eye of the beholder: Inhomogeneous distribution of high-resolution shapes within the random-walk ensemble

    Science.gov (United States)

    Müller, Christian L.; Sbalzarini, Ivo F.; van Gunsteren, Wilfred F.; Žagrović, Bojan; Hünenberger, Philippe H.

    2009-06-01

    The concept of high-resolution shapes (also referred to as folds or states, depending on the context) of a polymer chain plays a central role in polymer science, structural biology, bioinformatics, and biopolymer dynamics. However, although the idea of shape is intuitively very useful, there is no unambiguous mathematical definition for this concept. In the present work, the distributions of high-resolution shapes within the ideal random-walk ensembles with N = 3,…,6 beads (or up to N = 10 for some properties) are investigated using a systematic (grid-based) approach based on a simple working definition of shapes relying on the root-mean-square atomic positional deviation as a metric (i.e., to define the distance between pairs of structures) and a single cutoff criterion for the shape assignment. Although the random-walk ensemble appears to represent the paramount of homogeneity and randomness, this analysis reveals that the distribution of shapes within this ensemble, i.e., in the total absence of interatomic interactions characteristic of a specific polymer (beyond the generic connectivity constraint), is significantly inhomogeneous. In particular, a specific (densest) shape occurs with a local probability that is 1.28, 1.79, 2.94, and 10.05 times (N = 3,…,6) higher than the corresponding average over all possible shapes (these results can tentatively be extrapolated to a factor as large as about 10²⁸ for N = 100). The qualitative results of this analysis lead to a few rather counterintuitive suggestions, namely, that, e.g., (i) a fold classification analysis applied to the random-walk ensemble would lead to the identification of random-walk "folds;" (ii) a clustering analysis applied to the random-walk ensemble would also lead to the identification of random-walk "states" and associated relative free energies; and (iii) a random-walk ensemble of polymer chains could lead to well-defined diffraction patterns in hypothetical fiber or crystal diffraction experiments

  14. Modeling random telegraph signal noise in CMOS image sensor under low light based on binomial distribution

    International Nuclear Information System (INIS)

    Zhang Yu; Wang Guangyi; Lu Xinmiao; Hu Yongcai; Xu Jiangtao

    2016-01-01

    The random telegraph signal noise in the pixel source follower MOSFET is the principal component of the noise in the CMOS image sensor under low light. In this paper, the physical and statistical model of the random telegraph signal noise in the pixel source follower based on the binomial distribution is set up. The number of electrons captured or released by the oxide traps in the unit time is described by random variables that obey the binomial distribution. As a result, the output states and the corresponding probabilities of the first and the second samples of the correlated double sampling circuit are acquired. The standard deviation of the output states after the correlated double sampling circuit can be obtained accordingly. In the simulation section, one hundred thousand samples of the source follower MOSFET have been simulated, and the simulation results show that the proposed model has similar statistical characteristics to the existing models under the effect of the channel length and the density of the oxide trap. Moreover, the noise histogram of the proposed model has been evaluated at different environmental temperatures. (paper)
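
    A much-simplified numerical sketch of the idea is given below: binomially distributed trap occupancy is sampled at the two correlated-double-sampling instants, ignoring the trap time constants and the other terms of the full model; the trap number and per-trap voltage step are hypothetical:

      import numpy as np

      rng = np.random.default_rng(8)

      N_TRAPS, P_OCCUPIED = 4, 0.3   # hypothetical trap count and occupancy probability
      DV_PER_TRAP = 0.45e-3          # hypothetical output shift per occupied trap (V)
      N_PIXELS = 100_000

      # trap occupancy at the two correlated double sampling (CDS) instants
      s1 = rng.binomial(N_TRAPS, P_OCCUPIED, N_PIXELS)
      s2 = rng.binomial(N_TRAPS, P_OCCUPIED, N_PIXELS)
      cds = (s2 - s1) * DV_PER_TRAP  # RTS contribution left after CDS

      values, counts = np.unique(cds, return_counts=True)
      print("possible CDS output states (V):", values)
      print("corresponding probabilities   :", (counts / N_PIXELS).round(4))
      print("RTS noise after CDS (V, std)  :", cds.std().round(6))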

  15. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
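
    The blurring and shifting of a planned dose profile by random and systematic errors can be sketched in one dimension as follows; the field size, penumbra, and error magnitudes are illustrative assumptions, not the study's beam data:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d, shift

      grid_mm = 0.1
      x = np.arange(-40, 40, grid_mm)                    # mm
      planned = np.where(np.abs(x) <= 15, 100.0, 0.0)    # idealised 30 mm field
      planned = gaussian_filter1d(planned, sigma=2.0 / grid_mm)  # ~2 mm penumbra
      target = np.abs(x) <= 10                           # 20 mm target volume

      def delivered(dose, sigma_random_mm, systematic_mm):
          out = dose
          if sigma_random_mm > 0:    # random setup errors blur the profile
              out = gaussian_filter1d(out, sigma=sigma_random_mm / grid_mm)
          if systematic_mm != 0:     # a systematic error shifts it
              out = shift(out, systematic_mm / grid_mm)
          return out

      for sig, sys in [(0.0, 0.0), (2.0, 0.0), (2.0, 3.0)]:
          d = delivered(planned, sig, sys)
          print(f"random {sig} mm, systematic {sys} mm -> minimum target dose {d[target].min():.1f}%")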

  16. Analysis and applications of a frequency selective surface via a random distribution method

    International Nuclear Information System (INIS)

    Xie Shao-Yi; Huang Jing-Jian; Yuan Nai-Chang; Liu Li-Guo

    2014-01-01

    A novel frequency selective surface (FSS) for reducing radar cross section (RCS) is proposed in this paper. This FSS is based on the random distribution method, so it can be called a random surface. In this paper, stacked patches serving as periodic elements are employed for RCS reduction. Previous work has demonstrated the efficiency of utilizing microstrip patches, especially for the reflectarray. First, the relevant theory of the method is described. Then a sample of a three-layer variable-sized stacked-patch random surface with a dimension of 260 mm × 260 mm is simulated, fabricated, and measured in order to demonstrate the validity of the proposed design. For normal incidence, an 8-dB RCS reduction is achieved in both simulation and measurement over 8 GHz–13 GHz. Oblique incidence at 30° is also investigated, in which a 7-dB RCS reduction is obtained over the frequency range 8 GHz–14 GHz. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  17. A multi-objective possibilistic programming approach for locating distribution centers and allocating customers demands in supply chains

    Directory of Open Access Journals (Sweden)

    Seyed Ahmad Yazdian

    2011-01-01

    Full Text Available In this paper, we present a multi-objective possibilistic programming model to locate distribution centers (DCs) and allocate customers' demands in a supply chain network design (SCND) problem. The SCND problem deals with determining locations of facilities (DCs and/or plants), and also shipment quantities between each two consecutive tiers of the supply chain. The primary objective of this study is to consider different risk factors which are involved in both locating DCs and shipping products as an objective function. The risk consists of various components: the risks related to each potential DC location, the risk associated with each arc connecting a plant to a DC and the risk of shipment from a DC to a customer. The proposed method of this paper considers the risk phenomenon in fuzzy forms to handle the uncertainties inherent in these factors. A possibilistic programming approach is proposed to solve the resulting multi-objective problem and a numerical example for three levels of possibility is conducted to analyze the model.

  18. On the remarkable spectrum of a non-Hermitian random matrix model

    International Nuclear Information System (INIS)

    Holz, D E; Orland, H; Zee, A

    2003-01-01

    A non-Hermitian random matrix model proposed a few years ago has a remarkably intricate spectrum. Various attempts have been made to understand the spectrum, but even its dimension is not known. Using the Dyson-Schmidt equation, we show that the spectrum consists of a non-denumerable set of lines in the complex plane. Each line is the support of the spectrum of a periodic Hamiltonian, obtained by the infinite repetition of any finite sequence of the disorder variables. Our approach is based on the 'theory of words'. We make a complete study of all four-letter words. The spectrum is complicated because our matrix contains everything that will ever be written in the history of the universe, including this particular paper

  19. Validation of the k-filtering technique for a signal composed of random-phase plane waves and non-random coherent structures

    Directory of Open Access Journals (Sweden)

    O. W. Roberts

    2014-12-01

    Full Text Available Recent observations of astrophysical magnetic fields have shown the presence of fluctuations being wave-like (propagating in the plasma frame) and those described as being structure-like (advected by the plasma bulk velocity). Typically with single-spacecraft missions it is impossible to differentiate between these two fluctuations, due to the inherent spatio-temporal ambiguity associated with a single point measurement. However missions such as Cluster which contain multiple spacecraft have allowed for temporal and spatial changes to be resolved, using techniques such as k filtering. While this technique does not assume Taylor's hypothesis it requires both weak stationarity of the time series and that the fluctuations can be described by a superposition of plane waves with random phases. In this paper we test whether the method can cope with a synthetic signal which is composed of a combination of non-random-phase coherent structures with a mean radius d and a mean separation λ, as well as plane waves with random phase.
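
    A one-dimensional analogue of such a synthetic signal, random-phase waves plus localized coherent structures of mean radius d and mean separation λ, can be generated as follows; the spectral slope and the Gaussian structure shape are arbitrary choices, not those of the paper:

      import numpy as np

      rng = np.random.default_rng(9)

      L, n = 1000.0, 4096
      x = np.linspace(0.0, L, n, endpoint=False)

      # superposition of plane waves with random phases and power-law amplitudes
      k_modes = 2 * np.pi * np.arange(1, 65) / L
      amps = k_modes ** (-5.0 / 6.0)
      phases = rng.uniform(0, 2 * np.pi, k_modes.size)
      waves = (amps[:, None] * np.cos(np.outer(k_modes, x) + phases[:, None])).sum(axis=0)

      # coherent (non-random-phase) structures: Gaussians of mean radius d,
      # placed with mean separation lam along the signal
      d, lam = 5.0, 100.0
      centers = np.cumsum(rng.exponential(lam, size=int(2 * L / lam)))
      centers = centers[centers < L]
      radii = rng.normal(d, 0.2 * d, size=centers.size)
      structures = sum(np.exp(-(((x - c) / r) ** 2)) for c, r in zip(centers, radii))

      signal = waves + structures
      print(f"{centers.size} structures embedded; signal rms = {signal.std():.2f}")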

  20. Lattice location of Tm in Si and Ge determined from ion channeling followed by Monte Carlo simulations

    International Nuclear Information System (INIS)

    Yamamoto, Y.; Wakaiki, M.; Ikeda, A.; Kido, Y.

    1999-01-01

    The lattice location of Tm implanted into Si(1 0 0) and Ge(1 1 1) with an energy of 180 keV was determined precisely by ion channeling followed by Monte Carlo simulations of ion trajectories. The implantations were performed at 550 °C with a dose of 5 × 10¹⁴ ions/cm². In the case of Tm in Si, 25 at.% and 50 at.% of Tm are located in the tetrahedral interstitial site and in the random site, respectively, and the rest takes the substitutional position. The assumption of a Gaussian distribution centered at the exact tetrahedral site with a standard deviation of 0.2 Å reproduced the azimuthal angular-scan spectrum around the [1 1 0] axis. However, the observed angular spectrum is significantly broader than the simulated one. This is probably due to the fact that there exist Tm lattice sites slightly different from the exact tetrahedral position. For Ge(1 1 1) substrates, 25 at.% of Tm occupied the tetrahedral interstitial site and the rest was located randomly

  1. Fitting and Analyzing Randomly Censored Geometric Extreme Exponential Distribution

    Directory of Open Access Journals (Sweden)

    Muhammad Yameen Danish

    2016-06-01

    Full Text Available The paper presents a Bayesian analysis of the two-parameter geometric extreme exponential distribution with randomly censored data. A continuous conjugate prior for the scale and shape parameters of the model does not exist; while computing the Bayes estimates, it is assumed that the scale and shape parameters have independent gamma priors. Since closed-form expressions for the Bayes estimators are not possible, we suggest Lindley's approximation to obtain the Bayes estimates. However, Bayesian credible intervals cannot be constructed with this method, so we propose Gibbs sampling to obtain the Bayes estimates and also to construct the Bayesian credible intervals. A Monte Carlo simulation study is carried out to observe the behavior of the Bayes estimators and to compare them with the maximum likelihood estimators. A real-data analysis is performed for illustration.

  2. Plasma Dielectric Tensor for Non-Maxwellian Distributions in the FLR Limit

    International Nuclear Information System (INIS)

    Phillips, C.K.; Pletzer, A.; Dumont, R.J.; Smithe, D.N.

    2003-01-01

    Previous analytical and numerical studies have noted that the presence of fully non-Maxwellian plasma species can significantly alter the dynamics of electromagnetic waves in magnetized plasmas. In this paper, a general form for the hot plasma dielectric tensor for non-Maxwellian distributions is derived that is valid in the finite Larmor radius approximation. This model provides some insight into understanding the limitations on representing non-Maxwellian plasma species with equivalent Maxwellian components in modeling radio-frequency wave propagation and absorption

  3. Random-Resistor-Random-Temperature Kirchhoff-Law-Johnson-Noise (RRRT-KLJN) Key Exchange

    Directory of Open Access Journals (Sweden)

    Kish Laszlo B.

    2016-03-01

    Full Text Available We introduce two new Kirchhoff-law-Johnson-noise (KLJN) secure key distribution schemes which are generalizations of the original KLJN scheme. The first of these, the Random-Resistor (RR-) KLJN scheme, uses random resistors with values chosen from a quasi-continuum set. It has been well known since the creation of the KLJN concept that such a system could work in cryptography, because Alice and Bob can calculate the unknown resistance value from measurements, but the RR-KLJN system has not been addressed in prior publications since it was considered impractical. The reason for discussing it now is the second scheme, the Random-Resistor-Random-Temperature (RRRT-) KLJN key exchange, inspired by a recent paper of Vadai, Mingesz and Gingl, wherein security was shown to be maintained at non-zero power flow. In the RRRT-KLJN secure key exchange scheme, both the resistances and their temperatures are continuum random variables. We prove that the security of the RRRT-KLJN scheme can prevail at a non-zero power flow, and thus the physical law guaranteeing security is not the Second Law of Thermodynamics but the Fluctuation-Dissipation Theorem. Alice and Bob know their own resistances and temperatures and can calculate the resistance and temperature values at the other end of the communication channel from measured voltage, current and power-flow data in the wire. However, Eve cannot determine these values because, for her, there are four unknown quantities while she can set up only three equations. The RRRT-KLJN scheme has several advantages and makes all former attacks on the KLJN scheme invalid or incomplete.

  4. Effects of Action Video Game on Attention Distribution: A Cognitive Study

    Science.gov (United States)

    Zhang, Xuemin; Yan, Bin; Shu, Hua

    Based on previous research, the flanker compatibility effect paradigm was applied to explore the degree to which people process visual information presented at to-be-ignored locations. In the present study, this paradigm was used to investigate the attention distribution of Video Game Players (VGPs) and Non Video Game Players (NVGPs). The results suggested that, under low perceptual load, VGPs tried to focus their attention on the task at hand whereas the NVGPs tried to explore the adjacent locations with the left-over resources from the search task; however, under high perceptual load, the players would process the visual information at the locations adjacent to the target with the left-over resources, because they had comparatively greater attention capacity, whereas the non-players focused their attention on the target locations to finish the search task. In conclusion, the present study suggests that action video game play can not only enhance attention capacity but also lead to a different distribution of attention under different perceptual load conditions.

  5. Random phenomena; Phenomenes aleatoires

    Energy Technology Data Exchange (ETDEWEB)

    Bonnet, G. [Commissariat a l' energie atomique et aux energies alternatives - CEA, C.E.N.G., Service d' Electronique, Section d' Electronique, Grenoble (France)

    1963-07-01

    This document gathers a set of lectures presented in 1962. The first one offers a mathematical introduction to the analysis of random phenomena. The second presents an axiomatic treatment of probability calculus. The third gives an overview of one-dimensional random variables. The fourth addresses random pairs and presents basic theorems regarding the algebra of mathematical expectations. The fifth lecture discusses some probability laws: the binomial distribution, the Poisson distribution, and the Laplace-Gauss distribution. The last one deals with the issues of stochastic convergence and asymptotic distributions.

  6. The investigation of social networks based on multi-component random graphs

    Science.gov (United States)

    Zadorozhnyi, V. N.; Yudin, E. B.

    2018-01-01

    Methods for calibrating non-homogeneous random graphs are developed for the simulation of social networks. The graphs are calibrated by the degree distributions of the vertices and the edges. The mathematical foundation of the methods is formed by the theory of random graphs with a nonlinear preferential attachment rule and the theory of Erdős-Rényi random graphs. Well-calibrated network graph models, and computer experiments with these models, would help developers (owners) of networks to predict their development correctly and to choose effective strategies for controlling network projects.
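    As a rough illustration of the nonlinear preferential attachment rule mentioned above (a generic sketch, not the authors' calibration procedure), the following code grows a graph in which an existing node receives a new edge with probability proportional to degree**alpha. The seed-clique size, alpha and m are hypothetical choices.

```python
import random
from collections import defaultdict

def grow_graph(n_nodes, alpha=1.0, m=2, seed=0):
    """Grow a random graph by nonlinear preferential attachment:
    an existing node is chosen as a target with probability
    proportional to degree**alpha (alpha=1 recovers the linear rule)."""
    rng = random.Random(seed)
    degree = defaultdict(int)
    edges = []
    # start from a small seed clique so every node has non-zero degree
    for u in range(m + 1):
        for v in range(u):
            edges.append((u, v))
            degree[u] += 1
            degree[v] += 1
    for new in range(m + 1, n_nodes):
        nodes = list(degree)
        weights = [degree[v] ** alpha for v in nodes]
        targets = set()
        while len(targets) < m:                      # m distinct targets
            targets.add(rng.choices(nodes, weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, degree

edges, degree = grow_graph(2000, alpha=1.2)
# empirical degree distribution, e.g. to compare with a calibration target
hist = defaultdict(int)
for d in degree.values():
    hist[d] += 1
```

    In a calibration setting one would tune alpha (and the attachment kernel more generally) until the simulated degree histogram matches the observed one.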

  7. Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features

    Science.gov (United States)

    Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios

    2018-04-01

    We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, this leads to an increased need for communication resources in a distributed setting. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way for standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the network-wise regret are also provided. The simulated tests illustrate the performance of the proposed scheme.
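    A minimal sketch of the Random Fourier Feature idea the abstract relies on, assuming a Gaussian (RBF) kernel and a simple LMS update in the fixed-size feature space; the diffusion (combine-then-adapt) step across network nodes is omitted, and all parameter values are illustrative.

```python
import numpy as np

def rff_features(X, D=200, gamma=1.0, seed=0):
    """Map inputs to a fixed-size random Fourier feature vector so that
    z(x).z(y) approximates the Gaussian kernel exp(-gamma*||x-y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # spectral sampling for the RBF kernel: omega ~ N(0, 2*gamma*I)
    omega = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

# each node of a network could run an ordinary linear LMS update on z(x)
# and then average its fixed-size weight vector with its neighbours
X = np.random.default_rng(1).normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=500)
Z = rff_features(X)
w = np.zeros(Z.shape[1])
mu = 0.05
for z_i, y_i in zip(Z, y):
    w += mu * (y_i - z_i @ w) * z_i   # LMS in the random feature space
```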

  8. Non-canonical spectral decomposition of random functions of the traction voltage and current in electric transportation systems

    Directory of Open Access Journals (Sweden)

    N.A. Kostin

    2015-03-01

    Full Text Available The paper proposes a non-canonical spectral decomposition of the random functions of traction voltage and current, adapted to electric transportation systems. The numerical representation is carried out for the random function of the voltage on the pantograph of VL8 and DE1 electric locomotives.

  9. Risk of viral acute gastrointestinal illness from non-disinfected drinking water distribution systems

    Science.gov (United States)

    Acute gastrointestinal illness (AGI) resulting from pathogens directly entering the piping of drinking water distribution systems is insufficiently understood. Here, we estimate AGI incidence attributable to virus intrusions into non-disinfecting municipal distribution systems. Viruses were enumerat...

  10. Mining Significant Semantic Locations from GPS Data

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian Søndergaard

    2010-01-01

    With the increasing deployment and use of GPS-enabled devices, massive amounts of GPS data are becoming available. We propose a general framework for the mining of semantically meaningful, significant locations, e.g., shopping malls and restaurants, from such data. We present techniques capable of extracting semantic locations from GPS data. We capture the relationships between locations and between locations and users with a graph. Significance is then assigned to locations using random walks over the graph that propagate significance among the locations. In doing so, mutual reinforcement between...
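    The significance-propagation step described above can be approximated by a PageRank-style random walk with restart; the sketch below is a generic version over a location-location graph, not the paper's exact formulation, and the damping factor and the toy adjacency matrix are assumptions.

```python
import numpy as np

def significance_by_random_walk(adj, damping=0.85, tol=1e-10, max_iter=200):
    """Propagate significance over a location graph with a PageRank-style
    random walk: a location becomes significant if significant locations
    (or, in a bipartite variant, active users) link to it."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix; dangling rows become uniform
    P = np.divide(adj, out, out=np.full_like(adj, 1.0 / n), where=out > 0)
    score = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = (1 - damping) / n + damping * P.T @ score
        if np.abs(new - score).sum() < tol:
            break
        score = new
    return score

# toy graph: 4 locations, edge weights from shared visits (hypothetical)
adj = np.array([[0, 3, 1, 0],
                [3, 0, 2, 1],
                [1, 2, 0, 0],
                [0, 1, 0, 0]], dtype=float)
print(significance_by_random_walk(adj))
```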

  12. Study of Landau spectrum for a two-dimensional random magnetic field

    International Nuclear Information System (INIS)

    Furtlehner, C.

    1997-01-01

    This thesis deals with the two-dimensional problem of a charged particle coupled to a random magnetic field. Various situations are considered, according to the relative importance of the mean value of the field and the random component. The latter is conceived as a distribution of magnetic impurities (point vortices) having various statistical properties (local or non-local correlations, Poisson distribution, etc.). The study of this system leads to two distinct situations: - the case of the charged particle feeling the influence of the mean field, which manifests its presence in a spectrum of broadened Landau levels; - the disordered situation, in which the spectrum can be distinguished from the free one only by a low-energy Lifshits behaviour. Additional properties occur in the limit of 'strong' mean field, namely a non-conventional low-energy behaviour (in contrast to the Lifshits behaviour), which is interpreted in terms of localized states. (author)

  13. Characterizing the strand-specific distribution of non-CpG methylation in human pluripotent cells.

    Science.gov (United States)

    Guo, Weilong; Chung, Wen-Yu; Qian, Minping; Pellegrini, Matteo; Zhang, Michael Q

    2014-03-01

    DNA methylation is an important defense and regulatory mechanism. In mammals, most DNA methylation occurs at CpG sites, and asymmetric non-CpG methylation has only been detected at appreciable levels in a few cell types. We are the first to systematically study the strand-specific distribution of non-CpG methylation. With the divide-and-compare strategy, we show that CHG and CHH methylation are not intrinsically different in human embryonic stem cells (ESCs) and induced pluripotent stem cells (iPSCs). We also find that non-CpG methylation is skewed between the two strands in introns, especially at intron boundaries and in highly expressed genes. Controlling for the proximal sequences of non-CpG sites, we show that the skew of non-CpG methylation in introns is mainly guided by sequence skew. By studying subgroups of transposable elements, we also found that non-CpG methylation is distributed in a strand-specific manner in both short interspersed nuclear elements (SINE) and long interspersed nuclear elements (LINE), but not in long terminal repeats (LTR). Finally, we show that on the antisense strand of Alus, a non-CpG site just downstream of the A-box is highly methylated. Together, the divide-and-compare strategy leads us to identify regions with strand-specific distributions of non-CpG methylation in humans.

  14. Size Distribution Imaging by Non-Uniform Oscillating-Gradient Spin Echo (NOGSE) MRI.

    Directory of Open Access Journals (Sweden)

    Noam Shemesh

    Full Text Available Objects making up complex porous systems in Nature usually span a range of sizes. These size distributions play fundamental roles in defining the physicochemical, biophysical and physiological properties of a wide variety of systems - ranging from advanced catalytic materials to Central Nervous System diseases. Accurate and noninvasive measurements of size distributions in opaque, three-dimensional objects have thus remained long-standing and important challenges. Herein we describe how a recently introduced diffusion-based magnetic resonance methodology, Non-Uniform-Oscillating-Gradient-Spin-Echo (NOGSE), can determine such distributions noninvasively. The method relies on its ability to probe confining lengths with a (length)^6 parametric sensitivity, in a constant-time, constant-number-of-gradients fashion; combined, these attributes provide sufficient sensitivity for characterizing the underlying distributions in μm-scaled cellular systems. Theoretical derivations and simulations are presented to verify NOGSE's ability to faithfully reconstruct size distributions through suitable modeling of their distribution parameters. Experiments in yeast cell suspensions - where the ground truth can be determined from ancillary microscopy - corroborate these trends experimentally. Finally, by appending an imaging acquisition to the NOGSE protocol, novel MRI maps of cellular size distributions were collected from a mouse brain. The ensuing micro-architectural contrasts successfully delineated distinctive hallmark anatomical sub-structures, in both white matter and gray matter tissues, in a non-invasive manner. Such findings highlight NOGSE's potential for characterizing aberrations in cellular size distributions upon disease, or during normal processes such as development.

  15. NON-INVASIVE DETERMINATION OF THE LOCATION AND DISTRIBUTION OF FREE-PHASE DENSE NONAQUEOUS PHASE LIQUIDS (DNAPL) BY SEISMIC REFLECTION TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Michael G. Waddell; William J. Domoracki; Tom J. Temples

    2001-12-01

    This annual technical progress report covers part of Task 4 (site evaluation), Task 5 (2D seismic design, acquisition, and processing), and Task 6 (2D seismic reflection, interpretation, and AVO analysis) on DOE contract number DE-AR26-98FT40369. The project had planned one additional deployment to a site other than the Savannah River Site (SRS) or the DOE Hanford Site. After the SUBCON midyear review in Albuquerque, NM, it was decided that two additional deployments would be performed. The first deployment is to test the feasibility of using non-invasive seismic reflection and AVO analysis as a monitoring tool to assist in determining the effectiveness of Dynamic Underground Stripping (DUS) in the removal of DNAPL. The second deployment is to the Department of Defense (DOD) Charleston Naval Weapons Station Solid Waste Management Unit 12 (SWMU-12), Charleston, SC, to further test the technique to detect high concentrations of DNAPL. The Charleston Naval Weapons Station SWMU-12 site was selected in consultation with National Energy Technology Laboratory (NETL) and DOD Naval Facilities Engineering Command Southern Division (NAVFAC) personnel. Based upon the review of existing data, and due to the shallow target depth, the project team collected three Vertical Seismic Profiles (VSPs) and an experimental P-wave seismic reflection line. After preliminary analysis of the VSP data and the experimental reflection line data, it was decided to proceed with Task 5 and Task 6. Three high-resolution P-wave reflection profiles were collected with two objectives: (1) to design the reflection survey to image a target depth of 20 feet below land surface, to assist in determining the geologic controls on the DNAPL plume geometry, and (2) to apply AVO analysis to the seismic data to locate the zone of high DNAPL concentration. Based upon the results of the data processing and interpretation of the seismic data, the project team was able to map the channel that is controlling the DNAPL plume

  16. Angular Distribution of Particles Emerging from a Diffusive Region and its Implications for the Fleck-Canfield Random Walk Algorithm for Implicit Monte Carlo Radiation Transport

    CERN Document Server

    Cooper, M A

    2000-01-01

    We present various approximations for the angular distribution of particles emerging from an optically thick, purely isotropically scattering region into a vacuum. Our motivation is to use such a distribution for the Fleck-Canfield random walk method [1] for implicit Monte Carlo (IMC) [2] radiation transport problems. We demonstrate that the cosine distribution recommended in the original random walk paper [1] is a poor approximation to the angular distribution predicted by transport theory. Then we examine other approximations that more closely match the transport angular distribution.

  17. A Simulated Annealing method to solve a generalized maximal covering location problem

    Directory of Open Access Journals (Sweden)

    M. Saeed Jabalameli

    2011-04-01

    Full Text Available The maximal covering location problem (MCLP) seeks to locate a predefined number of facilities in order to maximize the number of covered demand points. In its classical form, the MCLP makes three main implicit assumptions: all-or-nothing coverage, individual coverage, and a fixed coverage radius. By relaxing these assumptions, three classes of model formulations have been developed: gradual cover models, cooperative cover models, and variable radius models. In this paper, we develop a special form of the MCLP which combines the characteristics of gradual cover, cooperative cover, and variable radius models. The proposed problem has many applications, such as locating cell phone towers. The model is formulated as a mixed integer non-linear program (MINLP). In addition, a simulated annealing algorithm is used to solve the resulting problem, and the performance of the proposed method is evaluated on a set of randomly generated problems.
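    A minimal simulated-annealing sketch for the classical (all-or-nothing) MCLP, which the paper generalizes; the neighbourhood move, cooling schedule and objective below are illustrative assumptions rather than the authors' formulation.

```python
import math
import random

def covered_demand(facilities, demand_pts, demand_wts, sites, radius):
    """Weighted demand within `radius` of at least one open facility."""
    total = 0.0
    for (px, py), w in zip(demand_pts, demand_wts):
        for f in facilities:
            sx, sy = sites[f]
            if (px - sx) ** 2 + (py - sy) ** 2 <= radius ** 2:
                total += w
                break
    return total

def anneal_mclp(sites, demand_pts, demand_wts, p, radius,
                t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    current = rng.sample(range(len(sites)), p)
    best, f_best = list(current), -1.0
    f_cur = covered_demand(current, demand_pts, demand_wts, sites, radius)
    t = t0
    for _ in range(steps):
        # neighbour move: swap one open site for a closed one
        cand = list(current)
        cand[rng.randrange(p)] = rng.choice(
            [s for s in range(len(sites)) if s not in current])
        f_cand = covered_demand(cand, demand_pts, demand_wts, sites, radius)
        if f_cand >= f_cur or rng.random() < math.exp((f_cand - f_cur) / t):
            current, f_cur = cand, f_cand
            if f_cur > f_best:
                best, f_best = list(current), f_cur
        t *= cooling
    return best, f_best

rng = random.Random(1)
sites = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(30)]
demand = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(200)]
weights = [1.0] * len(demand)
print(anneal_mclp(sites, demand, weights, p=4, radius=2.0))
```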

  18. Multifractal detrended fluctuation analysis of analog random multiplicative processes

    Energy Technology Data Exchange (ETDEWEB)

    Silva, L.B.M.; Vermelho, M.V.D. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil); Lyra, M.L. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)], E-mail: marcelo@if.ufal.br; Viswanathan, G.M. [Instituto de Fisica, Universidade Federal de Alagoas, Maceio - AL, 57072-970 (Brazil)

    2009-09-15

    We investigate the non-Gaussian statistical properties of stationary stochastic signals generated by an analog circuit that simulates a random multiplicative process with weak additive noise. The random noises originate from thermal shot noise and avalanche processes, while the multiplicative process is generated by a fully analog circuit. The resulting signal describes stochastic time series of current interest in several areas, such as turbulence, finance, biology and the environment, which exhibit power-law distributions. Specifically, we study the correlation properties of the signal by employing a detrended fluctuation analysis and explore its multifractal nature. The singularity spectrum is obtained and analyzed as a function of the control circuit parameter that tunes the asymptotic power-law form of the probability distribution function.
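    A compact sketch of multifractal detrended fluctuation analysis of the kind used above, assuming first-order detrending and an illustrative surrogate signal; the generalised Hurst exponent h(q) then follows from the log-log slopes of F(q, s).

```python
import numpy as np

def mfdfa(signal, scales, q_values, order=1):
    """Minimal multifractal DFA: returns the fluctuation functions F(q, s).
    A q-dependent slope h(q) of log F(q, s) vs log s signals multifractality."""
    profile = np.cumsum(signal - np.mean(signal))
    F = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        var = []
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            fit = np.polyval(np.polyfit(t, seg, order), t)
            var.append(np.mean((seg - fit) ** 2))
        var = np.array(var)
        for i, q in enumerate(q_values):
            if q == 0:
                F[i, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                F[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)
    return F

# example: a heavy-tailed surrogate signal (not the circuit data)
rng = np.random.default_rng(0)
x = np.exp(rng.normal(size=2 ** 12)) * rng.normal(size=2 ** 12)
scales = [16, 32, 64, 128, 256]
F = mfdfa(x, scales, q_values=[-3, -1, 0, 1, 3])
h = [np.polyfit(np.log(scales), np.log(F[i]), 1)[0] for i in range(F.shape[0])]
```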

  19. Physically transient photonics: random versus distributed feedback lasing based on nanoimprinted DNA.

    Science.gov (United States)

    Camposeo, Andrea; Del Carro, Pompilio; Persano, Luana; Cyprych, Konrad; Szukalski, Adam; Sznitko, Lech; Mysliwiec, Jaroslaw; Pisignano, Dario

    2014-10-28

    Room-temperature nanoimprinted, DNA-based distributed feedback (DFB) laser operation at 605 nm is reported. The laser is made of a pure DNA host matrix doped with gain dyes. At high excitation densities, the emission of the untextured dye-doped DNA films is characterized by a broad emission peak with an overall line width of 12 nm and superimposed narrow peaks, characteristic of random lasing. Moreover, direct patterning of the DNA films is demonstrated with a resolution down to 100 nm, enabling the realization of both surface-emitting and edge-emitting DFB lasers with a typical line width of <0.3 nm. The resulting emission is polarized, with a ratio between the TE- and TM-polarized intensities exceeding 30. In addition, the nanopatterned devices dissolve in water within less than 2 min. These results demonstrate the possibility of realizing various physically transient nanophotonics and laser architectures, including random lasing and nanoimprinted devices, based on natural biopolymers.

  20. Towards an accurate real-time locator of infrasonic sources

    Science.gov (United States)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source through media with stochastic and rapidly space-varying conditions. Hence, their travel time, their amplitude at sensor recordings and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published on the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a lower computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm can be among the most accurate, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, but this leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). The other is the outcome of previous findings of linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability
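    A heavily simplified, hypothetical sketch of the grid-summation idea: the un-normalised PPDF is the product of a (here uniform) prior and the arrival-time likelihood, summed over a discrete celerity histogram. The Gaussian residual model, the station geometry and all parameter values are assumptions for illustration, not the BISL implementation.

```python
import numpy as np

def posterior_grid(stations, arrivals, t_grid, x_grid, y_grid,
                   celerity_bins, celerity_probs, sigma_t=5.0):
    """Un-normalised source-location PPDF on a space-time grid: a uniform
    prior times the likelihood of the observed arrival times, summed over
    a celerity-range histogram instead of integrating over the model."""
    ppdf = np.zeros((len(t_grid), len(x_grid), len(y_grid)))
    for it, t0 in enumerate(t_grid):
        for ix, x in enumerate(x_grid):
            for iy, y in enumerate(y_grid):
                like = 1.0
                for (sx, sy), t_obs in zip(stations, arrivals):
                    r = np.hypot(sx - x, sy - y)
                    pred = t0 + r / celerity_bins      # one value per bin
                    resid = t_obs - pred
                    like *= np.sum(celerity_probs *
                                   np.exp(-0.5 * (resid / sigma_t) ** 2))
                ppdf[it, ix, iy] = like
    return ppdf / ppdf.sum()

stations = [(0.0, 50.0), (400.0, -30.0), (-250.0, 300.0)]   # km (hypothetical)
arrivals = [177.0, 1347.0, 1312.0]                          # s after reference
c_bins = np.array([0.28, 0.30, 0.32, 0.34])                 # km/s celerity bins
c_probs = np.array([0.2, 0.4, 0.3, 0.1])
post = posterior_grid(stations, arrivals, np.linspace(-50, 50, 11),
                      np.linspace(-500, 500, 41), np.linspace(-500, 500, 41),
                      c_bins, c_probs)
```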

  1. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M. [VTT Energy, Espoo (Finland); Hakola, T.; Antila, E. [ABB Power Oy, Helsinki (Finland); Seppaenen, M. [North-Carelian Power Company (Finland)

    1996-12-31

    In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed

  2. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1998-08-01

    In this chapter, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerized relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed

  3. Automatic location of short circuit faults

    Energy Technology Data Exchange (ETDEWEB)

    Lehtonen, M [VTT Energy, Espoo (Finland); Hakola, T; Antila, E [ABB Power Oy, Helsinki (Finland); Seppaenen, M [North-Carelian Power Company (Finland)

    1997-12-31

    In this presentation, the automatic location of short circuit faults on medium voltage distribution lines, based on the integration of computer systems of medium voltage distribution network automation is discussed. First the distribution data management systems and their interface with the substation telecontrol, or SCADA systems, is studied. Then the integration of substation telecontrol system and computerised relay protection is discussed. Finally, the implementation of the fault location system is presented and the practical experience with the system is discussed

  4. Modeling and optimization of an electric power distribution network ...

    African Journals Online (AJOL)

    Modeling and optimization of an electric power distribution network planning system using ... of the network was modelled with non-linear mathematical expressions. ... given feasible locations, re-conductoring of existing feeders in the network, ...

  5. Integrating Non-Spatial Preferences into Spatial Location Queries

    DEFF Research Database (Denmark)

    Qu, Qiang; Liu, Siyuan; Yang, Bin

    2014-01-01

    Increasing volumes of geo-referenced data are becoming available. This data includes so-called points of interest that describe businesses, tourist attractions, etc. by means of a geo-location and properties such as a textual description or ratings. We propose and study the efficient implementation of a new kind of query on points of interest that takes into account both the locations and properties of the points of interest. The query takes a result cardinality, a spatial range, and property-related preferences as parameters, and it returns a compact set of points of interest with the given cardinality and in the given range that satisfies the preferences. Specifically, the points of interest in the result set cover so-called allying preferences and are located far from points of interest that possess so-called alienating preferences. A unified result rating function integrates the two kinds...

  6. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  7. A distributed scheduling algorithm for heterogeneous real-time systems

    Science.gov (United States)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While random task allocation is very sensitive to heterogeneities, the algorithm is shown to be robust to such non-uniformities in system components and load.

  8. Changes in the content of edible and non-edible components and distribution of tissue components in cockerels and capons

    Directory of Open Access Journals (Sweden)

    Magdalena Zawacka

    2018-04-01

    Full Text Available The aim of this study was to determine the effects of castration and age on the content of edible and non-edible components, and the distribution of tissue components, in the carcasses of cockerels and capons. The study was conducted on 200 birds (Green-legged Partridge), divided into two sex categories (with 5 replications per group and 20 birds per replication), raised to 28 wk of age. At 8 wk of age, 100 birds were surgically castrated and afterwards, at 12 wk of age and at four-wk intervals, 10 intact cockerels and 10 capons were selected randomly and slaughtered. Cockerels, compared with capons, were characterized by a higher proportion of edible components at 24 and 28 wk of age, and a more desirable carcass tissue composition due to a higher content of lean meat in total body weight (BW). Capons had higher abdominal fat content than cockerels, which resulted in a higher percentage of non-edible components in their BW at 24 and 28 wk of age. Differences in the distribution of lean meat in the carcass were noted from 20 wk of age in both castrated and intact birds. The content of breast muscles increased in capons, and the content of leg muscles (thigh and drumstick) increased in cockerels. The results of this study indicate that, in view of the optimal lean meat content of the carcass and the optimal distribution of major tissue components, Green-legged Partridge capons should be fattened for a maximum period of 24 wk.

  9. Changes in the content of edible and non-edible components and distribution of tissue components in cockerels and capons

    International Nuclear Information System (INIS)

    Zawacka, M.; Gesek, M.; Michalik, D.; Murawska, D.

    2018-01-01

    The aim of this study was to determine the effects of castration and age on the content of edible and non-edible components, and the distribution of tissue components in the carcasses of cockerels and capons. The study was conducted on 200 birds (Green-legged Partridge), divided into two sex categories (with 5 replications per group and 20 birds per replication), raised to 28 wk of age. At 8 wk of age, 100 birds were surgically castrated and afterwards at 12 wk of age and at four-wk intervals, 10 intact cockerels and 10 capons were selected randomly and slaughtered. Cockerels, compared with capons, were characterized by a higher proportion of edible components at 24 and 28 wk of age, and a more desirable carcass tissue composition due to a higher content of lean meat in total body weight (BW). Capons had higher abdominal fat content than cockerels, which resulted in a higher percentage of non-edible components in their BW at 24 and 28 wk of age. Differences in the distribution of lean meat in the carcass were noted from 20 wk of age in both castrated and intact birds. The content of breast muscles increased in capons, and the content of leg muscles (thigh and drumstick) increased in cockerels. The results of this study indicate that in view of the optimal lean meat content of the carcass and the optimal distribution of major tissue components, Green-legged Partridge capons should be fattened for a maximum period of 24 wk.

  10. The effectiveness of non-pharmacological interventions in improvement of sleep quality among non-remissive cancer patients: A systematic review of randomized trials

    Directory of Open Access Journals (Sweden)

    Fatmawati Fadli

    2016-12-01

    Full Text Available Statistical estimates suggest that most non-remissive cancer patients face sleep problems and experience symptoms of insomnia throughout and after the completion of cancer treatment. The purpose of this review was to compare the effectiveness of several types of non-pharmacological interventions with that of standard care or treatment in improving sleep quality among non-remissive cancer patients. All randomized studies focused on non-pharmacological interventions to improve sleep quality among non-remissive cancer patients were included. Thirteen studies were selected, with a total of 1,617 participants. The results showed that only four interventions were significantly effective in improving sleep quality among non-remissive cancer patients: cognitive behavioral therapy, a relaxation and guided imagery program, a self-care behavior education program, and an energy and sleep enhancement program.

  11. Sub-micron particle number size distribution characteristics at two urban locations in Leicester

    Science.gov (United States)

    Hama, Sarkawt M. L.; Cordell, Rebecca L.; Kos, Gerard P. A.; Weijers, E. P.; Monks, Paul S.

    2017-09-01

    The particle number size distribution (PNSD) of atmospheric particles not only provides information about the sources and atmospheric processing of particles, but also plays an important role in determining regional lung dose. Owing to the importance of the PNSD in understanding particulate pollution, two short-term measurement campaigns (March-June 2014) of sub-micron PNSD were conducted at two urban background locations in Leicester, UK. At the first site, Leicester Automatic Urban Rural Network (AURN), the mean number concentrations of the nucleation, Aitken and accumulation modes and of the total particles, and the equivalent black carbon (eBC) mass concentration, were 2002, 3258, 1576 and 6837 # cm^-3 and 1.7 μg m^-3, respectively; at the second site, Brookfield (BF), they were 1455, 2407, 874 and 4737 # cm^-3 and 0.77 μg m^-3, respectively. The total particle number was dominated by the nucleation and Aitken modes, which together made up 77% and 81% of the total number concentrations at the AURN and BF sites, respectively. This behaviour could be attributed to primary (traffic) emissions of ultrafine particles and the temporal evolution of the mixing layer. The size distribution at the AURN site shows a bimodal distribution at 22 nm with a minor peak at 70 nm. The size distribution at the BF site, however, exhibits a unimodal distribution at 35 nm. This study has for the first time investigated the effect of the Easter holiday on PNSD in the UK. The temporal variation of the PNSD demonstrated a good degree of correlation with traffic-related pollutants (NOX and eBC at both sites). The meteorological conditions also had an impact on the PNSD and eBC at both sites. During the measurement period, the frequency of new particle formation (NPF) events was 13.3% and 22.2% at the AURN and BF sites, respectively. The average formation and growth rates of nucleation-mode particles were 1.3 and 1.17 cm^-3 s^-1, and 7.42 and 5.3 nm h^-1, at the AURN and BF sites, respectively. It can be suggested that aerosol particles in Leicester originate mainly

  12. Pure random search for ambient sensor distribution optimisation in a smart home environment.

    Science.gov (United States)

    Poland, Michael P; Nugent, Chris D; Wang, Hui; Chen, Liming

    2011-01-01

    Smart homes are living spaces facilitated with technology that allows individuals to remain in their own homes for longer, rather than be institutionalised. Sensors are the fundamental physical layer of any smart home, as the data they generate are used to inform decision support systems, facilitating appropriate actuator actions. The positioning of sensors is therefore a fundamental characteristic of a smart home. Contemporary smart home sensor distribution is aligned to either (a) a total coverage approach or (b) a human assessment approach. These methods for sensor arrangement are not data-driven strategies; they are unempirical and frequently irrational. This study hypothesised that sensor deployment directed by an optimisation method that utilises inhabitants' spatial frequency data as the search space would produce better sensor distributions than the current method of sensor deployment by engineers. Seven human engineers were tasked with creating sensor distributions based on perceived utility for 9 deployment scenarios. A Pure Random Search (PRS) algorithm was then tasked with creating matched sensor distributions. The PRS method produced superior distributions in 98.4% of test cases (n=64) against human-engineer-instructed deployments when the engineers had no access to the spatial frequency data, and in 92.0% of test cases (n=64) when the engineers had full access to these data. These results thus confirmed the hypothesis.
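    A minimal sketch of Pure Random Search over spatial-frequency data, assuming a simple coverage objective (the fraction of inhabitant activity falling in sensed cells); the cell identifiers, counts and iteration budget are hypothetical.

```python
import random

def coverage_score(sensor_cells, spatial_freq):
    """Fraction of inhabitant activity (spatial-frequency counts)
    falling in cells that carry a sensor."""
    total = sum(spatial_freq.values())
    return sum(spatial_freq.get(c, 0) for c in sensor_cells) / total

def pure_random_search(spatial_freq, n_sensors, n_iter=10000, seed=0):
    """Pure Random Search: repeatedly draw a candidate sensor placement
    uniformly at random from the search space (the inhabited cells)
    and keep the best-scoring placement seen so far."""
    rng = random.Random(seed)
    cells = list(spatial_freq)
    best, best_score = None, -1.0
    for _ in range(n_iter):
        candidate = frozenset(rng.sample(cells, n_sensors))
        s = coverage_score(candidate, spatial_freq)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# hypothetical spatial-frequency data: cell id -> visit count
freq = {(0, 0): 120, (0, 1): 30, (1, 1): 250, (2, 1): 40, (2, 2): 90}
print(pure_random_search(freq, n_sensors=2))
```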

  13. A Danish randomized trial comparing breast-preserving therapy with mastectomy in mammary carcinoma

    International Nuclear Information System (INIS)

    Blichert-Toft, M.; Brincker, H.; Andersen, J.A.; Andersen, K.W.; Axelsson, C.K.; Mouridsen, H.T.; Dombernowsky, P.; Overgaard, M.; Gadeberg, C.; Knudsen, G.; Borgeskov, S.; Bertelsen, S.; Knudsen, J.B.; Hansen, J.B.; Poulsen, P.E.; Willumsen, H.; Schousen, P.; Froberg, D.; Oernsholt, J.; Andersen, M.; Olesen, S.; Skovgaard, S.; Oester, M.; Schumacher, H.; Lynderup, E.K.; Holm, C.N.

    1988-01-01

    The present study comprises 847 women operated upon for invasive breast carcinoma at 19 surgical departments and enrolled in protocol DBCG-82TM from January 1983 to November 1987. Among them 662 (78%) were allocated for breast-preserving therapy or mastectomy by randomization, while 185 patients (22%) did not accept randomization. Within the randomized group 6% could not be entered into adjuvant protocols, i.e. subsequent programmes of postoperative therapy and follow-up. This left 619 evaluable patients. In the non-randomized series 26% did not fulfil the demands for entrance into the adjuvant protocols, leaving 136 evaluable patients, 60 of whom had chosen a breast-preserving operation and 76 mastectomy. In the randomized series the patients in the two treatment arms were comparable in age, menopausal status, site of tumour, pathoanatomical diameter of the tumour, number of removed axillary lymph nodes, number of metastatic axillary lymph nodes, and distribution on adjuvant regimens. Ninety per cent of the patients in the randomized group accepted the method offered, whereas 10% declined and wanted the alternate form of operation. The median follow-up period was approximately 1.75 years. The cumulative recurrence rate in the randomized group was 13% and in the non-randomized group 7%. These results are preliminary. Life-table analyses have not so far demonstrated differences in recurrence-free survival either in the randomized or the non-randomized series. (orig.)

  14. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    Science.gov (United States)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying, as a random dynamical system, the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.

  15. Limit distribution function of inhomogeneities in regions with random boundary in the Hilbert space

    International Nuclear Information System (INIS)

    Rasulova, M.Yu.; Tashpulatov, S.M.

    2004-10-01

    The interaction of charged particle systems with a membrane consisting of inhomogeneities which are randomly distributed by the same law in the vicinity of appropriate sites of a planar crystal lattice is studied. A system of equations for the self-consistent potential U_1(x, ξ_0, ..., ξ_N, ...) and the density of induced charges σ(x, ξ_0, ..., ξ_N, ...) is solved in the Hilbert space. (author)

  16. Electronic oscillations in a hot plasma due to non-Maxwellian velocity distributions

    International Nuclear Information System (INIS)

    Dias, L.A.V.; Nakamura, Y.

    1977-01-01

    In a completely ionized hot plasma with a non-Maxwellian electron velocity distribution, it is shown that, depending on the electron temperature, oscillations may occur at the electron plasma and gyro frequencies. For three different electron velocity distributions, the dependence of the oscillations on the temperature is shown. This situation occurs in the ionospheric plasma when it is artificially heated by HF radio waves. If the distribution is Maxwellian, the oscillations occur only near the electron plasma frequency.

  17. Stochastic space interval as a link between quantum randomness and macroscopic randomness?

    Science.gov (United States)

    Haug, Espen Gaarder; Hoff, Harald

    2018-03-01

    For many stochastic phenomena, we observe statistical distributions that have fat-tails and high-peaks compared to the Gaussian distribution. In this paper, we will explain how observable statistical distributions in the macroscopic world could be related to the randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian - or very close to Gaussian-distributed subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three and a half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat-tails and high-peaks relative to the Gaussian distribution in stocks and commodity prices and many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).

  18. Superstatistical generalised Langevin equation: non-Gaussian viscoelastic anomalous diffusion

    Science.gov (United States)

    Ślęzak, Jakub; Metzler, Ralf; Magdziarz, Marcin

    2018-02-01

    Recent advances in single particle tracking and supercomputing techniques demonstrate the emergence of normal or anomalous, viscoelastic diffusion in conjunction with non-Gaussian distributions in soft, biological, and active matter systems. We here formulate a stochastic model based on a generalised Langevin equation in which non-Gaussian shapes of the probability density function and normal or anomalous diffusion have a common origin, namely a random parametrisation of the stochastic force. We perform a detailed analysis demonstrating how various types of parameter distributions for the memory kernel result in exponential, power law, or power-log law tails of the memory functions. The studied system is also shown to exhibit a further unusual property: the velocity has a Gaussian one point probability density but non-Gaussian joint distributions. This behaviour is reflected in the relaxation from a Gaussian to a non-Gaussian distribution observed for the position variable. We show that our theoretical results are in excellent agreement with stochastic simulations.

  19. Random field assessment of nanoscopic inhomogeneity of bone.

    Science.gov (United States)

    Dong, X Neil; Luo, Qing; Sparkman, Daniel M; Millwater, Harry R; Wang, Xiaodu

    2010-12-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic details. Copyright © 2010 Elsevier Inc. All rights reserved.
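    A short sketch of the underlying idea, assuming a Gaussian random field with exponential covariance C(r) = std^2 * exp(-r / corr_length) sampled by Cholesky factorisation; the grid geometry and modulus statistics are illustrative stand-ins, not the paper's bone data.

```python
import numpy as np

def exponential_random_field(coords, mean, std, corr_length, seed=0):
    """Draw one realisation of a Gaussian random field whose covariance is
    C(r) = std**2 * exp(-r / corr_length); a smaller correlation length gives
    faster spatial fluctuation of the property (e.g. elastic modulus)."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = std ** 2 * np.exp(-d / corr_length)
    # small jitter keeps the Cholesky factorisation numerically stable
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(coords)))
    return mean + L @ rng.standard_normal(len(coords))

# grid of indentation sites across a lamella (hypothetical geometry, microns)
x, y = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 5, 10))
coords = np.column_stack([x.ravel(), y.ravel()])
modulus_map = exponential_random_field(coords, mean=22.0, std=2.5,
                                       corr_length=1.5).reshape(x.shape)
```

    Fitting corr_length to measured modulus maps (e.g. by maximising the Gaussian likelihood of the exponential covariance model) is one way to quantify the spatial, rather than merely marginal, variability of the tissue.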

  20. Bayesian optimization for computationally extensive probability distributions.

    Science.gov (United States)

    Tamura, Ryo; Hukushima, Koji

    2018-01-01

    An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions obtained by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribution. Our Bayesian optimization technique is applied to the posterior distribution in effective physical model estimation, which is a computationally extensive probability distribution. Even when the number of sampling points on the posterior distribution is fixed to be small, the Bayesian optimization provides a better maximizer of the posterior distribution than the random search method, the steepest descent method, or the Monte Carlo method. Furthermore, the Bayesian optimization improves the results efficiently when combined with the steepest descent method, and thus it is a powerful tool for searching for a better maximizer of computationally extensive probability distributions.
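    A minimal Bayesian-optimization sketch in the spirit of the abstract, using scikit-learn's Gaussian process regressor and an upper-confidence-bound acquisition over a 1D candidate grid; the "expensive" log-posterior, kernel and evaluation budget are stand-ins, not the authors' model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_log_posterior(theta):
    """Stand-in for a computationally expensive posterior evaluation."""
    return -0.5 * ((theta - 1.3) / 0.4) ** 2 + 0.1 * np.sin(5 * theta)

candidates = np.linspace(-3, 3, 601).reshape(-1, 1)
rng = np.random.default_rng(0)
idx = rng.choice(len(candidates), size=3, replace=False)   # initial design
X = candidates[idx]
y = np.array([expensive_log_posterior(t[0]) for t in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-6, normalize_y=True)
for _ in range(15):                      # small evaluation budget
    gp.fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    acq = mu + 2.0 * sd                  # upper-confidence-bound acquisition
    nxt = candidates[int(np.argmax(acq))]
    X = np.vstack([X, nxt])
    y = np.append(y, expensive_log_posterior(nxt[0]))

best = X[int(np.argmax(y))]              # current best maximizer estimate
```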

  1. Ice Water Classification Using Statistical Distribution Based Conditional Random Fields in RADARSAT-2 Dual Polarization Imagery

    Science.gov (United States)

    Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.

    2017-09-01

    In this paper, Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited for improving marginal ice-water classification. Pixel level ice concentration is presented as the comparison of methods based on CRF. Furthermore, in order to explore the effective statistical distribution model to be integrated into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on 2 scenes around Prydz Bay and Adélie Depression, where contain a variety of ice types during melt season. Experimental results indicate that the proposed method can resolve sea ice edge well in Marginal Ice Zone (MIZ) and show a robust distinction of ice and water.

  2. OLBS: Offline location based services

    OpenAIRE

    Coelho, P; Ana Aguiar; João Correia Lopes

    2011-01-01

    Most existing location-based services rely on ubiquitous connectivity to deliver location-based contents to the users. However, connectivity is not available everywhere at all times, even in urban centres. Underground locations, indoor spaces, remote areas, and foreign countries are examples of situations where users commonly do not have guaranteed connectivity but could profit from location-based contents. In this work, we propose an open platform for publishing, distributing and maintaining location-based contents...

  3. Oral Health-Related Quality of Life in Edentulous Patients with Two- vs Four-Locator-Retained Mandibular Overdentures: A Prospective, Randomized, Crossover Study.

    Science.gov (United States)

    Karbach, Julia; Hartmann, Sinsa; Jahn-Eimermacher, Antje; Wagner, Wilfried

    2015-01-01

    To compare the oral health-related quality of life (OHRQoL) in a prospective, randomized crossover trial in patients with mandibular overdentures retained with two or four locators. In 30 patients with edentulous mandibles, four implants (ICX-plus implants [Medentis Medical]) were placed in the intraforaminal area. Eight weeks after transgingival healing, patients were randomly assigned to have two or four implants incorporated in the prosthesis. After 3 months, the retention concepts were switched. The patients with a two-implant-supported overdenture had four implants incorporated, whereas patients with a four-implant-supported overdenture had two retention locators taken out. After 3 more months, all four implants were retained in the implant-supported overdenture in every patient. To measure OHRQoL of the patients, the Oral Health Impact Profile 14, German version (OHIP-14 G), was used. A considerable increase in OHRQoL could be seen in all patients after the prosthesis was placed on the implants. Also, a statistically significant difference of OHRQoL could be seen in the OHIP-14 G scores between two-implant and four-implant overdentures. Patients had a higher OHRQoL after incorporation of four implants in the overdenture compared with only two implants. Patients with implant-retained overdentures had better OHRQoL compared with those with conventional dentures. The number of incorporated implants in the locator-retained overdenture also influenced the increase in OHRQoL, with four implants having a statistically significant advantage over two implants.

  4. Non-intersecting Brownian walkers and Yang-Mills theory on the sphere

    International Nuclear Information System (INIS)

    Forrester, Peter J.; Majumdar, Satya N.; Schehr, Gregory

    2011-01-01

    We study a system of N non-intersecting Brownian motions on a line segment [0,L] with periodic, absorbing and reflecting boundary conditions. We show that the normalized reunion probabilities of these Brownian motions in the three models can be mapped to the partition function of two-dimensional continuum Yang-Mills theory on a sphere with gauge groups U(N), Sp(2N) and SO(2N), respectively. Consequently, we show that in each of these Brownian motion models, as one varies the system size L, a third-order phase transition occurs at a critical value L = L_c(N) ∼ √(N) in the large-N limit. Close to the critical point, the reunion probability, properly centered and scaled, is identical to the Tracy-Widom distribution describing the probability distribution of the largest eigenvalue of a random matrix. For the periodic case we obtain the Tracy-Widom distribution corresponding to GUE random matrices, while for the absorbing and reflecting cases we get the Tracy-Widom distribution corresponding to GOE random matrices. In the absorbing case, the reunion probability is also identified as the maximal height of N non-intersecting Brownian excursions ('watermelons' with a wall), whose distribution in the asymptotic scaling limit is then described by the GOE Tracy-Widom law. In addition, large deviation formulas for the maximum height are also computed.

  5. Generalized parton distribution for non-zero skewness

    International Nuclear Information System (INIS)

    Kumar, Narinder; Dahiya, Harleen; Teryaev, Oleg

    2012-01-01

    In the theory of strong interactions, the main open question is how the nucleon and other hadrons are built from quarks and gluons, the fundamental degrees of freedom in QCD. An essential tool for investigating hadron structure is the study of deep inelastic scattering processes, where individual quarks and gluons can be resolved. The parton densities extracted from such processes encode the distribution of longitudinal momentum and polarization carried by quarks, antiquarks and gluons within a fast-moving hadron. They have done much to shape the physical picture of hadron structure. In recent years, it has become clear that appropriate exclusive scattering processes may provide such information encoded in the generalized parton distributions (GPDs). Here, we investigate the GPDs for deeply virtual Compton scattering (DVCS) at non-zero skewness. The GPDs are studied by expressing them in terms of overlaps of light-front wave functions (LFWFs). The work represents a spin-1/2 system as a composite of a spin-1/2 fermion and a spin-1 boson with arbitrary masses

  6. Calculation of momentum distribution function of a non-thermal fermionic dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Anirban; Gupta, Aritra, E-mail: anirbanbiswas@hri.res.in, E-mail: aritra@hri.res.in [Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad 211 019 (India)

    2017-03-01

    The most widely studied scenario in dark matter phenomenology is the thermal WIMP scenario. In spite of numerous efforts to detect WIMPs, we have so far no direct evidence for them. A possible explanation for this non-observation of dark matter could be its very feeble interaction strength, which prevents it from thermalising with the rest of the cosmic soup. In other words, the dark matter might be of non-thermal origin, where the relic density is obtained by the so-called freeze-in mechanism. Furthermore, if this non-thermal dark matter is itself produced substantially from the decay of another non-thermal mother particle, then their distribution functions may differ in both size and shape from the usual equilibrium distribution function. In this work, we have studied such a non-thermal (fermionic) dark matter scenario in the light of a new type of U(1)_{B-L} model. The U(1)_{B-L} model is interesting since, besides being anomaly free, it can give rise to neutrino mass by the Type II see-saw mechanism. Moreover, as we show, it can accommodate a non-thermal fermionic dark matter as well. Starting from the collision terms, we have calculated the momentum distribution function of the dark matter by solving a coupled system of Boltzmann equations. We then used it to calculate the final relic abundance, as well as other relevant physical quantities. We have also compared our result with that obtained from solving the usual Boltzmann (or rate) equations directly in terms of the comoving number density Y. Our findings suggest that the latter approximation is valid only in cases where the system under study is close to equilibrium, and hence it should be used with caution.

  7. The Spotting Distribution of Wildfires

    Directory of Open Access Journals (Sweden)

    Jonathan Martin

    2016-06-01

    Full Text Available In wildfire science, spotting refers to non-local creation of new fires, due to downwind ignition of brands launched from a primary fire. Spotting is often mentioned as being one of the most difficult problems for wildfire management, because of its unpredictable nature. Since spotting is a stochastic process, it makes sense to talk about a probability distribution for spotting, which we call the spotting distribution. Given a location ahead of the fire front, we would like to know how likely is it to observe a spot fire at that location in the next few minutes. The aim of this paper is to introduce a detailed procedure to find the spotting distribution. Most prior modelling has focused on the maximum spotting distance, or on physical subprocesses. We will use mathematical modelling, which is based on detailed physical processes, to derive a spotting distribution. We discuss the use and measurement of this spotting distribution in fire spread, fire management and fire breaching. The appendix of this paper contains a comprehensive review of the relevant underlying physical sub-processes of fire plumes, launching fire brands, wind transport, falling and terminal velocity, combustion during transport, and ignition upon landing.

  8. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty.

    Science.gov (United States)

    Lash, Timothy L

    2007-11-26

    The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is
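    A schematic Monte Carlo bias analysis in the spirit described above: probability distributions are assigned to bias parameters, a value is drawn per iteration, the conventional estimate is adjusted, and a simulation interval is read off the resulting frequency distribution. The bias-parameter distributions below are purely illustrative assumptions; only the conventional 2.6 (95% CI 0.7 to 9.4) result is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n_iter = 100_000

# conventional result: hazard ratio 2.6 with 95% CI 0.7 to 9.4
log_hr = np.log(2.6)
se = (np.log(9.4) - np.log(0.7)) / (2 * 1.96)   # standard error of log-HR

# bias parameters with assigned probability distributions (assumed forms):
# multiplicative bias from exposure misclassification and from confounding
bias_misclass = rng.lognormal(mean=np.log(1.3), sigma=0.20, size=n_iter)
bias_confound = rng.lognormal(mean=np.log(1.1), sigma=0.15, size=n_iter)

# draw a bias-adjusted estimate in every iteration, then add random error
adj_log_hr = log_hr - np.log(bias_misclass) - np.log(bias_confound)
simulated = adj_log_hr + rng.normal(0.0, se, size=n_iter)

median_hr = np.exp(np.median(simulated))
sim_interval = np.exp(np.percentile(simulated, [2.5, 97.5]))
```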

  9. Bias analysis applied to Agricultural Health Study publications to estimate non-random sources of uncertainty

    Directory of Open Access Journals (Sweden)

    Lash Timothy L

    2007-11-01

    Full Text Available Background: The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. Methods: For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. Results: The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Conclusion: Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a

  10. Effect of a data buffer on the recorded distribution of time intervals for random events

    Energy Technology Data Exchange (ETDEWEB)

    Barton, J C [Polytechnic of North London (UK)]

    1976-03-15

    The use of a data buffer enables the distribution of the time intervals between events to be studied for times less than the recording system dead-time, but the usual negative exponential distribution for random events has to be modified. The theory for this effect is developed for an n-stage buffer followed by an asynchronous recorder. Results are evaluated for values of n from 1 to 5. In the language of queueing theory, the system studied is of type M/D/1/n+1, i.e. with constant service time and a finite number of places.
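
    The M/D/1/n+1 picture is easy to check by direct simulation: Poisson arrivals, a recorder with constant dead-time, and an n-stage buffer whose overflow events are lost. The parameter values in the sketch below are arbitrary; the point is that the fraction of short intervals among accepted events departs from the negative-exponential prediction.

    ```python
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(1)
    rate, dead_time, n = 1.5, 0.5, 2        # arrival rate, recorder dead-time, buffer stages
    arrivals = np.cumsum(rng.exponential(1.0 / rate, size=500_000))

    depart = deque(maxlen=n + 1)   # departure times of the last n+1 accepted events
    accepted = []                  # arrival times of events that are not lost

    for t in arrivals:
        # Buffer plus recorder hold at most n+1 events; an arrival finding them full is lost.
        if len(depart) == n + 1 and depart[0] > t:
            continue
        start = max(t, depart[-1]) if depart else t   # recording starts once the recorder is free
        accepted.append(t)
        depart.append(start + dead_time)

    intervals = np.diff(accepted)
    print("observed P(interval < dead-time):", (intervals < dead_time).mean())
    print("negative-exponential prediction :", 1.0 - np.exp(-rate * dead_time))
    ```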

  11. Scale-dependent bias from the reconstruction of non-Gaussian distributions

    International Nuclear Information System (INIS)

    Chongchitnan, Sirichai; Silk, Joseph

    2011-01-01

    Primordial non-Gaussianity introduces a scale-dependent variation in the clustering of density peaks corresponding to rare objects. This variation, parametrized by the bias, is investigated on scales where a linear perturbation theory is sufficiently accurate. The bias is obtained directly in real space by comparing the one- and two-point probability distributions of density fluctuations. We show that these distributions can be reconstructed using a bivariate Edgeworth series, presented here up to an arbitrarily high order. The Edgeworth formalism is shown to be well-suited for "local" cubic-order non-Gaussianity parametrized by g_NL. We show that a strong scale dependence in the bias can be produced by g_NL of order 10^5, consistent with cosmic microwave background constraints. On a separation length of ∼100 Mpc, current constraints on g_NL still allow the bias for the most massive clusters to be enhanced by 20-30% of the Gaussian value. We further examine the bias as a function of mass scale, and also explore the relationship between the clustering and the abundance of massive clusters in the presence of g_NL. We explain why the Edgeworth formalism, though technically challenging, is a very powerful technique for constraining high-order non-Gaussianity with large-scale structures.
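
    As a much-reduced illustration of the Edgeworth reconstruction (one-point only, lowest order, with assumed reduced cumulants; the paper's bivariate series to arbitrary order is far more involved), the corrected PDF can be written as the Gaussian times a Hermite-polynomial series:

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval

    def edgeworth_pdf_1pt(nu, skew, exkurt):
        """One-point Edgeworth-corrected PDF of a standardized field nu = delta/sigma,
        keeping only the leading skewness and kurtosis terms (a sketch, not the paper's series)."""
        gauss = np.exp(-0.5 * nu**2) / np.sqrt(2.0 * np.pi)
        he3 = hermeval(nu, [0, 0, 0, 1])        # He3(x) = x^3 - 3x
        he4 = hermeval(nu, [0, 0, 0, 0, 1])     # He4(x) = x^4 - 6x^2 + 3
        return gauss * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)

    # Illustrative (assumed) cumulant values -- not values derived from g_NL.
    nu = np.linspace(-4, 4, 9)
    print(edgeworth_pdf_1pt(nu, skew=0.05, exkurt=0.02))
    ```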

  12. Monofractal or multifractal: a case study of spatial distribution of mining-induced seismic activity

    Directory of Open Access Journals (Sweden)

    M. Eneva

    1994-01-01

    Full Text Available Using finite data sets and limited size of study volumes may result in significant spurious effects when estimating the scaling properties of various physical processes. These effects are examined with an example featuring the spatial distribution of induced seismic activity in Creighton Mine (northern Ontario, Canada). The events studied in the present work occurred during a three-month period, March-May 1992, within a volume of approximate size 400 x 400 x 180 m³. Two sets of microearthquake locations are studied: Data Set 1 (14,338 events) and Data Set 2 (1654 events). Data Set 1 includes the more accurately located events and amounts to about 30 per cent of all recorded data. Data Set 2 represents a portion of the first data set that is formed by the most accurately located and the strongest microearthquakes. The spatial distribution of events in the two data sets is examined for scaling behaviour using the method of generalized correlation integrals featuring various moments q. From these, generalized correlation dimensions are estimated using the slope method. Similar estimates are made for randomly generated point sets using the same numbers of events and the same study volumes as for the real data. Uniform and monofractal random distributions are used for these simulations. In addition, samples from the real data are randomly extracted and the dimension spectra for these are examined as well. The spectra for the uniform and monofractal random generations show spurious multifractality due only to the use of finite numbers of data points and limited size of study volume. Comparing these with the spectra of dimensions for Data Set 1 and Data Set 2 allows us to estimate the bias likely to be present in the estimates for the real data. The strong multifractality suggested by the spectrum for Data Set 2 appears to be largely spurious; the spatial distribution, while different from uniform, could originate from a monofractal process. The spatial
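
    The slope method for generalized correlation dimensions (q ≠ 1) can be sketched as follows; running it on a uniform random cloud confined to the same study volume illustrates exactly the finite-size and finite-sample bias discussed above. This is a generic reading of the standard estimator, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist
    from scipy.stats import linregress

    def generalized_dimension(points, q, radii):
        """Estimate D_q from the generalized correlation integral by the slope method (q != 1)."""
        dist = cdist(points, points)
        np.fill_diagonal(dist, np.inf)
        n = len(points)
        cq = []
        for r in radii:
            p_i = (dist < r).sum(axis=1) / (n - 1)   # per-point correlation sums
            p_i = p_i[p_i > 0]                       # guard needed for negative q
            cq.append(np.mean(p_i ** (q - 1)) ** (1.0 / (q - 1)))
        return linregress(np.log(radii), np.log(cq)).slope

    # Uniform random cloud in a 400 x 400 x 180 m box, mimicking the study volume.
    rng = np.random.default_rng(0)
    pts = rng.uniform([0.0, 0.0, 0.0], [400.0, 400.0, 180.0], size=(2000, 3))
    radii = np.logspace(1.0, 2.0, 10)                # 10-100 m
    print("D_2 estimate (expect < 3 from edge effects):",
          generalized_dimension(pts, q=2, radii=radii))
    ```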

  13. Non-Gaussian probability distributions of solar wind fluctuations

    Directory of Open Access Journals (Sweden)

    E. Marsch

    Full Text Available The probability distributions of field differences ∆x(τ) = x(t+τ) - x(t), where the variable x(t) may denote any solar wind scalar field or vector field component at time t, have been calculated from time series of Helios data obtained in 1976 at heliocentric distances near 0.3 AU. It is found that for comparatively long time lag τ, ranging from a few hours to 1 day, the differences are normally distributed according to a Gaussian. For shorter time lags, of less than ten minutes, significant changes in shape are observed. The distributions are often spikier and narrower than the equivalent Gaussian distribution with the same standard deviation, and they are enhanced for large, reduced for intermediate and enhanced for very small values of ∆x. This result is in accordance with fluid observations and numerical simulations. Hence statistical properties are dominated at small scale τ by large fluctuation amplitudes that are sparsely distributed, which is direct evidence for spatial intermittency of the fluctuations. This is in agreement with results from earlier analyses of the structure functions of ∆x. The non-Gaussian features are differently developed for the various types of fluctuations. The relevance of these observations to the interpretation and understanding of the nature of solar wind magnetohydrodynamic (MHD) turbulence is pointed out, and contact is made with existing theoretical concepts of intermittency in fluid turbulence.
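
    The increment analysis described above can be reproduced on any time series. The toy signal below is an assumption standing in for the Helios data: a random walk with heavy-tailed steps, which shows the same qualitative signature of spikier-than-Gaussian increment PDFs at short lags relaxing toward Gaussian at long lags.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    n = 200_000
    # Toy signal: Gaussian steps with lognormal amplitude modulation (not Helios data).
    signal = np.cumsum(rng.standard_normal(n) * np.exp(0.5 * rng.standard_normal(n)))

    for lag in (1, 10, 100, 1000):
        dx = signal[lag:] - signal[:-lag]            # field differences ∆x(τ)
        dx = (dx - dx.mean()) / dx.std()
        # Positive excess kurtosis marks a spikier-than-Gaussian increment PDF;
        # it decays toward zero (Gaussian) as the lag grows.
        print(f"lag {lag:5d}: excess kurtosis {kurtosis(dx):.2f}")
    ```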

  14. Non-destructive radiometry inspection technique for locating reinforcements and void/porosity in bridge bearings

    International Nuclear Information System (INIS)

    Yahaya bin Jafar; Jaafar bin Abdullah; Mohamad Azmi bin Ismail.

    1989-01-01

    Defect detection in bridge bearings is very important for quality control and safety. Typical manufacturing defects include misaligned or bent steel plates and the presence of voids/porosity within the rubber. A non-destructive radiometry inspection technique was used to locate the position of steel plates and the presence of voids/porosity in bridge bearing samples provided by the Rubber Research Institute of Malaysia (RRIM). Preliminary studies show that these defects can readily be detected by this technique. Some of the results are also presented. (author)

  15. Application of a stratified random sampling technique to the estimation and minimization of respirable quartz exposure to underground miners

    International Nuclear Information System (INIS)

    Makepeace, C.E.; Horvath, F.J.; Stocker, H.

    1981-11-01

    The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information on the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of the collected data, one can determine the individual exposures and the mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and the collective exposures.
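
    A minimal sketch of such a stratified random sampling plan is shown below; the stratum labels, number of samples per stratum and 90-day sampling window are illustrative placeholders, not values from the scheme itself.

    ```python
    import random
    from itertools import product

    # Strata considered in the plan: locations, occupations and times of sampling
    # (all names below are hypothetical).
    locations = ["stope_A", "stope_B", "haulage_drift", "crusher_station"]
    occupations = ["driller", "mucker", "bolter", "maintenance"]
    shifts = ["day", "night"]

    samples_per_stratum = 2
    random.seed(7)

    plan = []
    for loc, occ, shift in product(locations, occupations, shifts):
        # Random selection within each stratum gives every location, occupation and
        # time of sampling an equal, unbiased chance of being selected.
        for _ in range(samples_per_stratum):
            day = random.randint(1, 90)              # sampling day within the quarter
            plan.append((loc, occ, shift, day))

    print(len(plan), "gravimetric samples scheduled, e.g.:", plan[0])
    ```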

  16. Law of Iterated Logarithm for NA Sequences with Non-Identical ...

    Indian Academy of Sciences (India)

    Based on a law of the iterated logarithm for sequences of independent random variables, an iterated logarithm theorem for NA sequences with non-identical distributions is obtained. The proof is based on a Kolmogorov-type exponential inequality.
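
    For orientation, the classical statement being generalized (independent summands, Kolmogorov form, under suitable boundedness conditions on the summands) reads as follows; the paper's theorem replaces independence by negative association and allows non-identical distributions:

    ```latex
    % S_n = X_1 + \dots + X_n,  B_n = \sum_{k \le n} \operatorname{Var} X_k \to \infty
    \limsup_{n \to \infty} \frac{S_n - \mathbb{E} S_n}{\sqrt{2 B_n \log\log B_n}} = 1
    \quad \text{almost surely.}
    ```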

  17. Randomized phase III trial of regorafenib in metastatic colorectal cancer: analysis of the CORRECT Japanese and non-Japanese subpopulations.

    Science.gov (United States)

    Yoshino, Takayuki; Komatsu, Yoshito; Yamada, Yasuhide; Yamazaki, Kentaro; Tsuji, Akihito; Ura, Takashi; Grothey, Axel; Van Cutsem, Eric; Wagner, Andrea; Cihon, Frank; Hamada, Yoko; Ohtsu, Atsushi

    2015-06-01

    In the international, phase III, randomized, double-blind CORRECT trial, regorafenib significantly prolonged overall survival (OS) versus placebo in patients with metastatic colorectal cancer (mCRC) that had progressed on all standard therapies. This post hoc analysis evaluated the efficacy and safety of regorafenib in Japanese and non-Japanese subpopulations in the CORRECT trial. Patients were randomized 2 : 1 to regorafenib 160 mg once daily or placebo for weeks 1-3 of each 4-week cycle. The primary endpoint was OS. Outcomes were assessed using descriptive statistics. One hundred Japanese and 660 non-Japanese patients were randomized to regorafenib (n = 67 and n = 438) or placebo (n = 33 and n = 222). Regorafenib had a consistent OS benefit in the Japanese and non-Japanese subpopulations, with hazard ratios of 0.81 (95 % confidence interval [CI] 0.43-1.51) and 0.77 (95 % CI 0.62-0.94), respectively. Regorafenib-associated hand-foot skin reaction, hypertension, proteinuria, thrombocytopenia, and lipase elevations occurred more frequently in the Japanese subpopulation than in the non-Japanese subpopulation, but were generally manageable. Regorafenib appears to have comparable efficacy in Japanese and non-Japanese subpopulations, with a manageable adverse-event profile, suggesting that this agent could potentially become a standard of care in patients with mCRC.

  18. ACORN—A new method for generating sequences of uniformly distributed Pseudo-random Numbers

    Science.gov (United States)

    Wikramaratna, R. S.

    1989-07-01

    A new family of pseudo-random number generators, the ACORN (additive congruential random number) generators, is proposed. The resulting numbers are distributed uniformly in the interval [0, 1). The ACORN generators are defined recursively, and the (k+1)th order generator is easily derived from the kth order generator. Some theorems concerning the period length are presented and compared with existing results for linear congruential generators. A range of statistical tests are applied to the ACORN generators, and their performance is compared with that of the linear congruential generators and the Chebyshev generators. The tests show the ACORN generators to be statistically superior to the Chebyshev generators, while being statistically similar to the linear congruential generators. However, the ACORN generators execute faster than linear congruential generators for the same statistical faithfulness. The main advantages of the ACORN generator are speed of execution, long period length, and simplicity of coding.
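
    A minimal sketch of the recursive definition described above is given below; the modulus, order and seed are our own choices for illustration, and a production implementation would need more care over seed and modulus selection.

    ```python
    def acorn(k, seed, n_values, modulus=2**30):
        """k-th order ACORN-style generator (sketch): Y[m]_n = (Y[m-1]_n + Y[m]_{n-1}) mod M,
        with outputs X_n = Y[k]_n / M in [0, 1)."""
        assert seed % 2 == 1 and 0 < seed < modulus   # an odd seed is relatively prime to M = 2^30
        y = [seed] + [0] * k      # y[m] holds Y[m]_{n-1}; zero initial values assumed
        out = []
        for _ in range(n_values):
            for m in range(1, k + 1):
                y[m] = (y[m - 1] + y[m]) % modulus
            out.append(y[k] / modulus)
        return out

    print(acorn(k=8, seed=123456789, n_values=5))
    ```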

  19. Effect of particle size distribution on permeability in the randomly packed porous media

    Science.gov (United States)

    Markicevic, Bojan

    2017-11-01

    The question of how porous medium heterogeneity influences the medium permeability remains inconclusive, with both increases and decreases in the permeability value reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum-to-minimum particle size ratio ranging from three to eight across the distributions. In all six cases, the average particle size is kept the same. For all media generated, the stochastic homogeneity is checked from the distribution of the three coordinates of the particle centers, where uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, for which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in such a domain, and after checking that the pressure varies linearly along the flow axis, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is smallest, and that the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher than for the other media, which can be explained by the volumetric contribution of the larger particles and the larger passages available for fluid flow.
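
    The permeability extraction step, once the pressure has been checked to vary linearly along the flow axis, is simply Darcy's law rearranged; a small helper with placeholder numbers:

    ```python
    def darcy_permeability(flow_rate, viscosity, length, area, pressure_drop):
        """k = Q * mu * L / (A * dP), all quantities in SI units."""
        return flow_rate * viscosity * length / (area * pressure_drop)

    # Placeholder values, loosely representative of a water-saturated packed bed.
    k = darcy_permeability(flow_rate=1e-6,      # m^3/s
                           viscosity=1e-3,      # Pa.s
                           length=0.05,         # m
                           area=1e-4,           # m^2
                           pressure_drop=5e4)   # Pa
    print(f"permeability k = {k:.2e} m^2")
    ```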

  20. Distribution of orientation selectivity in recurrent networks of spiking neurons with different random topologies.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for the emergence of such broad distributions is the recurrent network within which the stimulus is processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance-dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
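
    A compact rate-based sketch in the spirit of the linear theory described above: solve the linear steady state of a randomly connected network driven by weakly tuned input and read off the distribution of orientation selectivity. The Erdos-Renyi topology, parameter values and selectivity index used here are our assumptions, not the paper's networks.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, p = 400, 0.1
    J = 0.9 / np.sqrt(N * p)   # keeps the spectral radius of W below 1 (stable linear dynamics)

    # Random connectivity with balanced positive/negative weights (an assumed topology).
    W = (rng.random((N, N)) < p) * rng.choice([J, -J], size=(N, N))
    np.fill_diagonal(W, 0.0)

    pref = rng.uniform(0, np.pi, N)                     # feedforward preferred orientations
    thetas = np.linspace(0, np.pi, 16, endpoint=False)  # stimulus orientations

    rates = []
    for th in thetas:
        h = 1.0 + 0.2 * np.cos(2 * (th - pref))            # weakly tuned feedforward input
        rates.append(np.linalg.solve(np.eye(N) - W, h))    # linear (rate-based) steady state
    rates = np.clip(np.array(rates), 0.0, None)            # crude rectification for the selectivity measure

    # Orientation selectivity per neuron from the modulation (F2) over mean (F0) ratio.
    F2 = np.abs((rates * np.exp(2j * thetas[:, None])).mean(axis=0))
    F0 = rates.mean(axis=0)
    osi = F2 / (F0 + 1e-12)
    print(f"mean OSI {osi.mean():.2f}; fraction with OSI > 0.3: {(osi > 0.3).mean():.2f}")
    ```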