WorldWideScience

Sample records for scale-invariant transition probabilities

  1. Scale-invariant transition probabilities in free word association trajectories

    Directory of Open Access Journals (Sweden)

    Martin Elias Costa

    2009-09-01

    Full Text Available Free-word association has been used as a vehicle to understand the organization of human thoughts. The original studies relied mainly on qualitative assertions, yielding the widely intuitive notion that trajectories of word associations are structured, yet considerably more random than organized linguistic text. Here we set out to determine a precise characterization of this space, generating a large number of word association trajectories in a web-implemented game. We embedded the trajectories in the graph of word co-occurrences from a linguistic corpus. To constrain possible transport models we measured the memory loss and the cycling probability. These two measures could not be reconciled by a bounded diffusive model, since the cycling probability was very high (16% of order-2 cycles), implying a majority of short-range associations, whereas the memory loss was very rapid (converging to the asymptotic value in ∼7 steps), which, in turn, forced a high fraction of long-range associations. We show that memory loss and cycling probabilities of free word association trajectories can be simultaneously accounted for by a model in which transitions are determined by a scale-invariant probability distribution.
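
    As an illustration of the model class this abstract describes, the following sketch samples a walker whose jump rank follows a power-law (scale-invariant) distribution and measures the order-2 cycling probability. The symmetric ring-shaped neighbor ranking and the exponent gamma are hypothetical stand-ins for the paper's co-occurrence graph and fitted distribution, not the authors' actual construction.

      import numpy as np

      rng = np.random.default_rng(0)
      n_words, gamma, n_steps = 1000, 1.5, 100_000

      # Scale-invariant transition kernel: P(jump to rank k) ~ k**(-gamma)
      ranks = np.arange(1, n_words)              # rank 1 = closest associate
      p = ranks.astype(float) ** -gamma
      p /= p.sum()

      trajectory = [0]
      for _ in range(n_steps):
          # Hypothetical graph: the rank-k neighbor of word w sits k steps
          # away on a ring, in either direction
          step = rng.choice(ranks, p=p) * rng.choice([-1, 1])
          trajectory.append((trajectory[-1] + step) % n_words)

      # Order-2 cycling probability: chance of returning to the word two steps back
      traj = np.array(trajectory)
      print(f"order-2 cycling probability: {np.mean(traj[2:] == traj[:-2]):.4f}")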

  2. Scale invariance: from phase transitions to turbulence

    CERN Document Server

    Lesne, Annick

    2012-01-01

    For a century, from the van der Waals mean-field description of gases (1874) to the introduction of renormalization group (RG) techniques (1970), thermodynamics and statistical physics were simply unable to account for the remarkable universality observed in numerous critical phenomena. The great success of RG techniques is not only to have solved this challenge of critical behaviour in thermal transitions, but also to have introduced extremely useful tools in a wide range of everyday situations where a system exhibits scale invariance. The introduction of the concepts of scaling, scale invariance and universality marked a significant turn in modern physics and, more generally, in the natural sciences. Since then, a new "physics of scaling laws and critical exponents", rooted in scaling approaches, has allowed quantitative descriptions of numerous phenomena, ranging from phase transitions to earthquakes, polymer conformations, heartbeat rhythm, diffusion, interface growth and roughening, DNA sequences, dynamical systems, chaos ...

  3. Low temperature electroweak phase transition in the Standard Model with hidden scale invariance

    Directory of Open Access Journals (Sweden)

    Suntharan Arunasalam

    2018-01-01

    Full Text Available We discuss a cosmological phase transition within the Standard Model which incorporates spontaneously broken scale invariance as a low-energy theory. In addition to the Standard Model fields, the minimal model involves a light dilaton, which acquires a large vacuum expectation value (VEV) through the mechanism of dimensional transmutation. Under the assumption of the cancellation of the vacuum energy, the dilaton develops a very small mass at 2-loop order. As a result, a flat direction is present in the classical dilaton-Higgs potential at zero temperature, while the quantum potential admits two (almost) degenerate local minima with unbroken and broken electroweak symmetry. We found that the cosmological electroweak phase transition in this model can only be triggered by a QCD chiral symmetry breaking phase transition at low temperatures, T ≲ 132 MeV. Furthermore, unlike the standard case, the universe settles into the chiral symmetry breaking vacuum via a first-order phase transition which gives rise to a stochastic gravitational wave background with a peak frequency ∼10⁻⁸ Hz, as well as triggers the production of approximately solar mass primordial black holes. The observation of these signatures of cosmological phase transitions, together with the detection of a light dilaton, would provide a strong hint of the fundamental role of scale invariance in particle physics.

  4. Low temperature electroweak phase transition in the Standard Model with hidden scale invariance

    Science.gov (United States)

    Arunasalam, Suntharan; Kobakhidze, Archil; Lagger, Cyril; Liang, Shelley; Zhou, Albert

    2018-01-01

    We discuss a cosmological phase transition within the Standard Model which incorporates spontaneously broken scale invariance as a low-energy theory. In addition to the Standard Model fields, the minimal model involves a light dilaton, which acquires a large vacuum expectation value (VEV) through the mechanism of dimensional transmutation. Under the assumption of the cancellation of the vacuum energy, the dilaton develops a very small mass at 2-loop order. As a result, a flat direction is present in the classical dilaton-Higgs potential at zero temperature, while the quantum potential admits two (almost) degenerate local minima with unbroken and broken electroweak symmetry. We found that the cosmological electroweak phase transition in this model can only be triggered by a QCD chiral symmetry breaking phase transition at low temperatures, T ≲ 132 MeV. Furthermore, unlike the standard case, the universe settles into the chiral symmetry breaking vacuum via a first-order phase transition which gives rise to a stochastic gravitational wave background with a peak frequency ∼10⁻⁸ Hz, as well as triggers the production of approximately solar mass primordial black holes. The observation of these signatures of cosmological phase transitions, together with the detection of a light dilaton, would provide a strong hint of the fundamental role of scale invariance in particle physics.

  5. Void probability as a function of the void's shape and scale-invariant models. [in studies of spatial galactic distribution]

    Science.gov (United States)

    Elizalde, E.; Gaztanaga, E.

    1992-01-01

    The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for certain elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.

  6. Scale invariance and universality of economic fluctuations

    Science.gov (United States)

    Stanley, H. E.; Amaral, L. A. N.; Gopikrishnan, P.; Plerou, V.

    2000-08-01

    In recent years, physicists have begun to apply concepts and methods of statistical physics to study economic problems, and the neologism "econophysics" is increasingly used to refer to this work. Much recent work is focused on understanding the statistical properties of time series. One reason for this interest is that economic systems are examples of complex interacting systems for which a huge amount of data exist, and it is possible that economic time series viewed from a different perspective might yield new results. This manuscript is a brief summary of a talk that was designed to address the question of whether two of the pillars of the field of phase transitions and critical phenomena - scale invariance and universality - can be useful in guiding research on economics. We shall see that while scale invariance has been tested for many years, universality is relatively less frequently discussed. This article reviews the results of two recent studies - (i) The probability distribution of stock price fluctuations: Stock price fluctuations occur in all magnitudes, in analogy to earthquakes - from tiny fluctuations to drastic events, such as market crashes. The distribution of price fluctuations decays with a power-law tail well outside the Lévy stable regime and describes fluctuations that differ in size by as much as eight orders of magnitude. (ii) Quantifying business firm fluctuations: We analyze the Compustat database comprising all publicly traded United States manufacturing companies for the years 1974-1993. We find that the distributions of growth rates are different for different bins of firm size, with a width that varies inversely with a power of firm size. Similar variation is found for other complex organizations, including country size, university research budget size, and the size of bird species populations.
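
    The power-law tail claim in (i) is the kind of statement usually checked with a Hill-type tail estimator. A minimal sketch, using synthetic Student-t "returns" (whose tail exponent of 3 mimics the cubic law reported for stocks) in place of real market data:

      import numpy as np

      rng = np.random.default_rng(1)
      # Synthetic heavy-tailed "returns": Student-t with 3 dof has a cubic tail
      returns = rng.standard_t(df=3, size=100_000)

      x = np.sort(np.abs(returns))[::-1]     # largest fluctuations first
      k = int(0.01 * len(x))                 # use the top 1% as the tail (arbitrary cutoff)
      hill_alpha = k / np.sum(np.log(x[:k] / x[k]))
      print(f"estimated tail exponent alpha ~ {hill_alpha:.2f}")  # ~3 expected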

  7. Hidden scale invariance of metals

    DEFF Research Database (Denmark)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.

    2015-01-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general “hidden” scale invariance...... of iron and phosphorus are shown to increase at elevated pressures. Finally, we discuss how scale invariance explains the Grüneisen equation of state and a number of well-known empirical melting and freezing rules...

  8. Transition Probabilities of Gd I

    Science.gov (United States)

    Bilty, Katherine; Lawler, J. E.; Den Hartog, E. A.

    2011-01-01

    Rare earth transition probabilities are needed within the astrophysics community to determine rare earth abundances in stellar photospheres. The current work is part of an ongoing study of rare earth element neutrals. Transition probabilities are determined by combining radiative lifetimes measured using time-resolved laser-induced fluorescence on a slow atom beam with branching fractions measured from high resolution Fourier transform spectra. Neutral rare earth transition probabilities will be helpful in improving abundances in cool stars in which a significant fraction of rare earths are neutral. Transition probabilities are also needed for research and development in the lighting industry. Rare earths have rich spectra containing hundreds to thousands of transitions throughout the visible and near UV. This makes rare earths valuable additives in Metal Halide - High Intensity Discharge (MH-HID) lamps, giving them a pleasing white light with good color rendering. This poster presents the work done on neutral gadolinium. We will report radiative lifetimes for 135 levels and transition probabilities for upwards of 1500 lines of Gd I. The lifetimes are reported to ±5%, and the transition probability uncertainties range from 5% for strong lines to 25% for weak lines. This work is supported by the National Science Foundation under grant CTS 0613277 and the National Science Foundation's REU program through NSF Award AST-1004881.
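
    The reduction described here, combining branching fractions with radiative lifetimes, amounts to A_ul = BF_ul / τ_u for every line from a given upper level. A minimal sketch with purely illustrative numbers:

      # The transition probability of each line is its branching fraction
      # divided by the upper-level radiative lifetime: A_ul = BF_ul / tau_u.
      tau_u = 8.0e-9                      # radiative lifetime of upper level (s), hypothetical
      intensities = {"line_a": 100.0,     # relative emission intensities of all
                     "line_b": 40.0,      # branches from the same upper level
                     "line_c": 10.0}      # (hypothetical values)

      total = sum(intensities.values())
      for line, inten in intensities.items():
          bf = inten / total              # branching fraction
          A = bf / tau_u                  # transition probability (s^-1)
          print(f"{line}: BF = {bf:.3f}, A = {A:.2e} s^-1")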

  9. Scale invariance in road networks.

    Science.gov (United States)

    Kalapala, Vamsi; Sanwalani, Vishal; Clauset, Aaron; Moore, Cristopher

    2006-02-01

    We study the topological and geographic structure of the national road networks of the United States, England, and Denmark. By transforming these networks into their dual representation, where roads are vertices and an edge connects two vertices if the corresponding roads ever intersect, we show that they exhibit both topological and geographic scale invariance. That is, we show that for sufficiently large geographic areas, the dual degree distribution follows a power law with exponent 2.2 ≤ α ≤ 2.4, and that journeys, regardless of their length, have a largely identical structure. To explain these properties, we introduce and analyze a simple fractal model of road placement that reproduces the observed structure, and suggests a testable connection between the scaling exponent and the fractal dimensions governing the placement of roads and intersections.
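
    A minimal sketch of the degree-distribution measurement: a Barabási-Albert graph stands in for the road-network dual (for which the authors report 2.2 ≤ α ≤ 2.4), and the exponent is fitted with a continuous maximum-likelihood estimator.

      import numpy as np
      import networkx as nx

      # Synthetic scale-free graph as a stand-in for the dual road network
      G = nx.barabasi_albert_graph(n=10_000, m=2, seed=0)
      degrees = np.array([d for _, d in G.degree()])

      # Continuous maximum-likelihood estimate of the power-law exponent
      d_min = 2
      tail = degrees[degrees >= d_min]
      alpha = 1 + len(tail) / np.sum(np.log(tail / d_min))
      print(f"fitted exponent alpha ~ {alpha:.2f}")   # ~3 for BA; 2.2-2.4 reported for roads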

  10. Scale invariant Volkov–Akulov supergravity

    Directory of Open Access Journals (Sweden)

    S. Ferrara

    2015-10-01

    Full Text Available A scale invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.

  11. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Full Text Available Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant, and it proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance, the energy cascade is due to horizontal advection, and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.

  12. Modified dispersion relations, inflation, and scale invariance

    Science.gov (United States)

    Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward

    2018-02-01

    For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant, and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies of the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, for which slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power-law modified dispersion relation quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.

  13. Holography for chiral scale-invariant models

    NARCIS (Netherlands)

    Caldeira Costa, R.N.; Taylor, M.

    2011-01-01

    Deformation of any d-dimensional conformal field theory by a constant null source for a vector operator of dimension (d + z -1) is exactly marginal with respect to anisotropic scale invariance, of dynamical exponent z. The holographic duals to such deformations are AdS plane waves, with z=2 being

  14. Holography for chiral scale-invariant models

    NARCIS (Netherlands)

    Caldeira Costa, R.N.; Taylor, M.

    2010-01-01

    Deformation of any d-dimensional conformal field theory by a constant null source for a vector operator of dimension (d + z -1) is exactly marginal with respect to anisotropic scale invariance, of dynamical exponent z. The holographic duals to such deformations are AdS plane waves, with z=2 being

  15. Nonequilibrium random matrix theory: Transition probabilities

    Science.gov (United States)

    Pedro, Francisco Gil; Westphal, Alexander

    2017-03-01

    In this paper we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that in the Coulomb gas language, in the large N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, transition probabilities converge to those of the static ensemble.
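
    A minimal sketch of the underlying process: Euler-Maruyama integration of Dyson Brownian motion, where the eigenvalues diffuse under mutual Coulomb repulsion. The parameters (β = 1, small N, step size) are illustrative and unrelated to the paper's analytic treatment.

      import numpy as np

      rng = np.random.default_rng(2)
      N, dt, n_steps = 10, 1e-4, 5000
      lam = np.linspace(-1.0, 1.0, N)         # initial eigenvalue spectrum

      for _ in range(n_steps):
          diff = lam[:, None] - lam[None, :]
          np.fill_diagonal(diff, np.inf)      # exclude self-interaction
          drift = np.sum(1.0 / diff, axis=1)  # Coulomb repulsion between eigenvalues
          lam += drift * dt + np.sqrt(dt) * rng.standard_normal(N)

      print("final eigenvalues:", np.round(np.sort(lam), 3))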

  16. Hidden Scale Invariance in Condensed Matter

    DEFF Research Database (Denmark)

    Dyre, J. C.

    2014-01-01

    ... This means that the phase diagram becomes effectively one-dimensional with regard to several physical properties. Liquids and solids with isomorphs include most or all van der Waals bonded systems and metals, as well as weakly ionic or dipolar systems. On the other hand, systems with directional bonding...... (hydrogen bonds or covalent bonds) or strong Coulomb forces generally do not exhibit hidden scale invariance. The article reviews the theory behind this picture of condensed matter and the evidence for it coming from computer simulations and experiments...

  17. Natural inflation with hidden scale invariance

    Directory of Open Access Journals (Sweden)

    Neil D. Barrie

    2016-05-01

    Full Text Available We propose a new class of natural inflation models based on a hidden scale invariance. In a very generic Wilsonian effective field theory with an arbitrary number of scalar fields, which exhibits scale invariance via the dilaton, the potential necessarily contains a flat direction in the classical limit. This flat direction is lifted by small quantum corrections, and inflation is realised without need for an unnatural fine-tuning. In the conformal limit, the effective potential becomes linear in the inflaton field, yielding specific predictions for the spectral index and the tensor-to-scalar ratio, being, respectively, n_s − 1 ≈ −0.025 (N⋆/60)⁻¹ and r ≈ 0.0667 (N⋆/60)⁻¹, where N⋆ ≈ 30-65 is the number of e-folds during observable inflation. These predictions are in reasonable agreement with cosmological measurements. Further improvement of the accuracy of these measurements may turn out to be critical in falsifying our scenario.

  18. Transit probabilities for debris around white dwarfs

    Science.gov (United States)

    Lewis, John Arban; Johnson, John A.

    2017-01-01

    The discovery of WD 1145+017 (Vanderburg et al. 2015), a metal-polluted white dwarf with an infrared excess and transits, confirmed the long-held theory that at least some metal-polluted white dwarfs are actively accreting material from crushed-up planetesimals. A statistical understanding of WD 1145-like systems would inform us about the various pathways for metal pollution and the end states of planetary systems around medium- to high-mass stars. However, we only have one example, and there are presently no published studies of transit detection/discovery probabilities for white dwarfs within this interesting regime. We present a preliminary look at the transit probabilities for metal-polluted white dwarfs and their projected space density in the Solar Neighborhood, which will inform future searches for analogs to WD 1145+017.

  19. Estimation of transition probabilities of credit ratings

    Science.gov (United States)

    Peng, Gan Chew; Hin, Pooi Ah

    2015-12-01

    The present research is based on the quarterly credit ratings of ten companies over 15 years taken from the database of the Taiwan Economic Journal. The components of the vector m_i = (m_{i1}, m_{i2}, …, m_{i10}) may first be used to denote the credit ratings of the ten companies in the i-th quarter. The vector m_{i+1} in the next quarter is modelled to be dependent on the vector m_i via a conditional distribution which is derived from a 20-dimensional power-normal mixture distribution. The transition probability P_{kl}(i, j) for getting m_{i+1,j} = l given that m_{ij} = k is then computed from the conditional distribution. It is found that the variation of the transition probability P_{kl}(i, j) as i varies is able to give an indication of the possible transition of the credit rating of the j-th company in the near future.
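
    For contrast with the power-normal mixture approach described here, the plain maximum-likelihood estimate of a transition matrix simply counts observed rating moves. The sketch below uses a hypothetical rating history, not the Taiwan Economic Journal data.

      import numpy as np

      rng = np.random.default_rng(3)
      n_ratings = 5
      # Hypothetical quarterly rating history of one company (integers 0..4)
      history = rng.integers(0, n_ratings, size=60)

      counts = np.zeros((n_ratings, n_ratings))
      for k, l in zip(history[:-1], history[1:]):
          counts[k, l] += 1      # one observed move from rating k to rating l

      row_sums = counts.sum(axis=1, keepdims=True)
      P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
      print(np.round(P, 2))      # P[k, l] ~ probability of moving from rating k to l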

  20. Higgs mass naturalness and scale invariance in the UV

    CERN Document Server

    Tavares, Gustavo Marques; Skiba, Witold

    2014-01-01

    It has been suggested that electroweak symmetry breaking in the Standard Model may be natural if the Standard Model merges into a conformal field theory (CFT) at short distances. In such a scenario the Higgs mass would be protected from quantum corrections by the scale invariance of the CFT. In order for the Standard Model to merge into a CFT, at least one new ultraviolet (UV) scale is required at which the couplings turn over from their usual Standard Model running to the fixed-point behavior. We argue that the Higgs mass is sensitive to such a turnover scale even if there are no associated massive particles and the scale arises purely from dimensional transmutation. We demonstrate this sensitivity to the turnover scale explicitly in toy models. Thus if scale invariance is responsible for Higgs mass naturalness, then the transition to CFT dynamics must occur near the TeV scale with observable consequences at colliders. In addition, the UV fixed point theory in such a scenario must be interacting because loga...

  1. System Geometries and Transit/Eclipse Probabilities

    Directory of Open Access Journals (Sweden)

    Howard A.

    2011-02-01

    Full Text Available Transiting exoplanets provide access to data to study the mass-radius relation and internal structure of extrasolar planets. Long-period transiting planets allow insight into planetary environments similar to the Solar System where, in contrast to hot Jupiters, planets are not constantly exposed to the intense radiation of their parent stars. Observations of secondary eclipses additionally permit studies of exoplanet temperatures and large-scale exo-atmospheric properties. We show how transit and eclipse probabilities are related to planet-star system geometries, particularly for long-period, eccentric orbits. The resulting target selection and observational strategies represent the principal ingredients of our photometric survey of known radial-velocity planets with the aim of detecting transit signatures (TERMS).
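
    The geometric relation referred to here is commonly written P = (R_star + R_p)/a · (1 ± e sin ω)/(1 − e²), with "+" for transits and "−" for eclipses, which makes explicit how eccentric orbits can have boosted probabilities. A sketch with illustrative parameter values:

      import numpy as np

      R_sun, R_jup, AU = 6.957e8, 7.149e7, 1.496e11   # meters

      def transit_probability(R_star, R_p, a, e, omega, eclipse=False):
          # Geometric transit/eclipse probability for an eccentric orbit
          sign = -1.0 if eclipse else 1.0
          return (R_star + R_p) / a * (1 + sign * e * np.sin(omega)) / (1 - e**2)

      # A hot-Jupiter-like case vs. a long-period eccentric orbit
      print(transit_probability(R_sun, R_jup, 0.05 * AU, 0.0, 0.0))     # ~0.1
      print(transit_probability(R_sun, R_jup, 1.0 * AU, 0.3, np.pi/2))  # boosted by e, omega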

  2. Atomic Transition Probabilities in Ti I

    Science.gov (United States)

    Nitz, David E.; Siewert, Lowell K.; Schneider, Matthew N.

    2001-05-01

    We have measured branching fractions and atomic transition probabilities in Ti I for 50 visible and near-IR transitions which connect odd-parity levels lying 25 000 cm⁻¹ to 27 000 cm⁻¹ above the ground state to low-lying even-parity levels. Branching fractions are obtained from the analysis of six hollow cathode emission spectra recorded using the Fourier transform spectrometer at the National Solar Observatory, supplemented in cases susceptible to radiation-trapping problems by conventional emission spectroscopy using a commercial sealed lamp operated at very low discharge current. The absolute scale for normalizing the branching fractions is established using radiative lifetimes from time-resolved laser-induced fluorescence measurements (S. Salih and J. E. Lawler, Astronomy and Astrophysics 239, 407 (1990)). Uncertainties of the transition probabilities range from ±5% for the stronger branches to ±20% for the weaker ones. Among the 16 lines for which previously measured transition probabilities are listed in the NIST critical compilation (G. A. Martin, J. R. Fuhr, and W. L. Wiese, J. Phys. Chem. Ref. Data 17, Suppl. 3, 85 (1988)), several significant discrepancies are noted.

  3. Scale-invariant nonlinear optics in gases

    CERN Document Server

    Heyl, C M; Miranda, M; Louisy, M; Kovacs, K; Tosa, V; Balogh, E; Varjú, K; L'Huillier, A; Couairon, A; Arnold, C L

    2015-01-01

    Nonlinear optical methods are becoming ubiquitous in many areas of modern photonics. They are, however, often limited to a certain range of input parameters, such as pulse energy and average power, since restrictions arise from, for example, parasitic nonlinear effects, damage problems and geometrical considerations. Here, we show that many nonlinear optics phenomena in gaseous media are scale-invariant if spatial coordinates, gas density and laser pulse energy are scaled appropriately. We develop a general scaling model for (3+1)-dimensional wave equations, demonstrating the invariant scaling of nonlinear pulse propagation in gases. Our model is numerically applied to high-order harmonic generation and filamentation as well as experimentally verified using the example of pulse post-compression via filamentation. Our results provide a simple recipe for up- or downscaling of nonlinear processes in gases with numerous applications in many areas of science.

  4. Magnetic compressibility and Isotropic Scale-Invariant Dissipation of Solar Wind Turbulence

    Science.gov (United States)

    Kiyani, K. H.; Chapman, S. C.; Khotyaintsev, Y. V.; Hnat, B.; Sahraoui, F.

    2010-12-01

    The anisotropic nature of solar wind magnetic fluctuations is investigated scale-by-scale using high-cadence in situ magnetic field observations from ACE and from the Cluster FGM and STAFF instruments, spanning five decades in scale from the inertial to the dissipation range of plasma turbulence. We find an abrupt transition at ion kinetic scales to a single isotropic stochastic process, as characterized by the single functional form of the probability density functions (PDFs) of fluctuations that characterizes the dissipation range on all observable scales. In contrast to the inertial range, this is accompanied by a successive scale-invariant reduction in the ratio between parallel and transverse power. We suggest that this reflects the phase space nature of the cascade, which operates in a scale-invariant isotropic manner in the (kinetic) dissipation range - distinct from the anisotropic phenomenology in the (magnetohydrodynamic) inertial range. Alternatively, if we assume that non-linear effects are weak in the dissipation range and use the results of the linear dispersion theory of waves, then our measurements of fluctuation anisotropy provide deep insight into the nature of these waves. In particular, using these measurements to form a measure of the scale-by-scale magnetic compressibility, we can distinguish between the competing hypotheses of oblique kinetic Alfven waves versus whistler waves dominating the energy transfer in the dissipation range. By looking at the scale-by-scale PDFs of the fluctuations we also comment on how reasonable the assumption of linear theory is as we cross from the inertial to the dissipation range of plasma turbulence.

  5. Atomic transition probabilities of Er I

    Energy Technology Data Exchange (ETDEWEB)

    Lawler, J E; Den Hartog, E A [Department of Physics, University of Wisconsin, 1150 University Ave., Madison, WI 53706 (United States); Wyart, J-F, E-mail: jelawler@wisc.edu, E-mail: jean-francois.wyart@lac.u-psud.fr, E-mail: eadenhar@wisc.edu [Laboratoire Aimé Cotton, CNRS (UPR3321), Bat. 505, Centre Universitaire Paris-Sud, 91405-Orsay (France)

    2010-12-14

    Atomic transition probabilities for 562 lines of the first spectrum of erbium (Er I) are reported. These data are from new branching fraction measurements on Fourier transform spectra normalized with previously reported radiative lifetimes from time-resolved laser-induced fluorescence measurements (Den Hartog et al 2010 J. Phys. B: At. Mol. Opt. Phys. 43 155004). The wavelength range of the data set is from 298 to 1981 nm. In this work we explore the utility of parametric fits based on the Cowan code in assessing branching fraction errors due to lines connecting to unobserved lower levels.

  6. Atomic transition probabilities of Gd I

    Energy Technology Data Exchange (ETDEWEB)

    Lawler, J E; Den Hartog, E A [Department of Physics, University of Wisconsin, 1150 University Avenue, Madison, WI 53706 (United States); Bilty, K A, E-mail: jelawler@wisc.edu, E-mail: biltyka@uwec.edu, E-mail: eadenhar@wisc.edu [Department of Physics and Astronomy, University of Wisconsin-Eau Claire, Eau Claire, WI 54702 (United States)

    2011-05-14

    Fourier transform spectra are used to determine emission branching fractions for 1290 lines of the first spectrum of gadolinium (Gd I). These branching fractions are converted to absolute atomic transition probabilities using previously reported radiative lifetimes from time-resolved laser-induced-fluorescence measurements (Den Hartog et al 2011 J. Phys. B: At. Mol. Opt. Phys. 44 055001). The wavelength range of the data set is from 300 to 1850 nm. A least squares technique for separating blends of the first and second spectra lines is also described and demonstrated in this work.

  7. The Scale Invariant Synchrotron Jet of Flat Spectrum Radio Quasars

    Indian Academy of Sciences (India)

    2016-01-27

    In this paper, the scale invariance of the synchrotron jet of Flat Spectrum Radio Quasars has been studied using a sample of combined sources from FKM04 and from the SDSS DR3 catalogue. Since the research of scale invariance has been focused on sub-Eddington cases that can be fitted onto the ...

  8. The Scale Invariant Synchrotron Jet of Flat Spectrum Radio Quasars ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, the scale invariance of the synchrotron jet of Flat Spectrum Radio Quasars has been studied using a sample of combined sources from FKM04 and from the SDSS DR3 catalogue. Since the research of scale invariance has been focused on sub-Eddington cases that can be fitted onto the fundamental ...

  9. Network connectivity modulates power spectrum scale invariance.

    Science.gov (United States)

    Rădulescu, Anca; Mujica-Parodi, Lilianne R

    2014-04-15

    Measures of complexity are sensitive in detecting disease, which has made them attractive candidates for diagnostic biomarkers; one complexity measure that has shown promise in fMRI is power spectrum scale invariance (PSSI). Even if scale-free features of neuroimaging turn out to be diagnostically useful, however, their underlying neurobiological basis is poorly understood. Using modeling and simulations of a schematic prefrontal-limbic meso-circuit, with excitatory and inhibitory networks of nodes, we present here a framework for how network density within a control system can affect the complexity of signal outputs. Our model demonstrates that scale-free behavior, similar to that observed in fMRI PSSI data, can be obtained for sufficiently large networks in a context as simple as a linear stochastic system of differential equations, although the scale-free range improves when introducing more realistic, nonlinear behavior in the system. PSSI values (reflective of complexity) vary as a function of both input type (excitatory, inhibitory) and input density (mean number of long-range connections, or strength), independent of their node-specific geometric distribution. Signals show pink noise (1/f) behavior when excitatory and inhibitory influences are balanced. As excitatory inputs are increased and decreased, signals shift towards white and brown noise, respectively. As inhibitory inputs are increased and decreased, signals shift towards brown and white noise, respectively. The results hold qualitatively at the hemodynamic scale, which we modeled by introducing a neurovascular component. Comparing hemodynamic simulation results to fMRI PSSI results from 96 individuals across a wide spectrum of anxiety-levels, we show how our model can generate concrete and testable hypotheses for understanding how connectivity affects regulation of meso-circuits in the brain.
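
    A minimal sketch of the paper's simplest setting, a linear stochastic system of differential equations, showing how one would read off a power-spectrum scaling exponent. The connectivity matrix below is a generic random stand-in for the meso-circuit model, not the authors' construction.

      import numpy as np

      rng = np.random.default_rng(4)
      n, dt, n_steps = 50, 1e-2, 100_000
      # Random connectivity plus a leak term to keep the system stable (assumed form)
      A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)

      x = np.zeros(n)
      out = np.empty(n_steps)
      for t in range(n_steps):
          x += (A @ x) * dt + np.sqrt(dt) * rng.standard_normal(n)  # Euler-Maruyama
          out[t] = x[0]                                             # record one node

      # Log-log slope of the power spectrum estimates the PSSI exponent
      f = np.fft.rfftfreq(n_steps, d=dt)[1:]
      psd = np.abs(np.fft.rfft(out - out.mean())[1:]) ** 2
      band = (f > 0.01) & (f < 1.0)
      slope = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
      print(f"spectral slope ~ {slope:.2f}  (0 = white, -2 = brown noise)")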

  10. The Mond Limit from Spacetime Scale Invariance

    Science.gov (United States)

    Milgrom, Mordehai

    2009-06-01

    The modified Newtonian dynamics (MOND) limit is shown to follow from a requirement of spacetime scale invariance of the equations of motion for nonrelativistic, purely gravitational systems, i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a₀ → ∞. It is suggested that this should replace the definition of the MOND limit based on the low-acceleration behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results—asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.—follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of "zero mass": rest masses, whose presence obstructs scaling symmetry, become negligible compared to the "phantom," dynamical masses—those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection in light of another, previously known symmetry of the deep-MOND limit. In particular, it is suggested that MOND is related to the asymptotic de Sitter geometry of our universe. It is conjectured, for example, that in an exact de Sitter cosmos, deep-MOND physics would exactly apply to local systems. I also point out, in this connection, the possible relevance of a de Sitter-conformal-field-theory (dS/CFT) duality.
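
    The scaling argument for flat rotation curves quoted above fixes the asymptotic speed at v = (G M a₀)^(1/4): velocities are scale-invariant while radii are not. A one-line check with standard values (a₀ ≈ 1.2 × 10⁻¹⁰ m/s²; the galaxy mass is an arbitrary example):

      # Deep-MOND asymptotic rotation speed, v^4 = G * M * a0
      G, a0, M_sun = 6.674e-11, 1.2e-10, 1.989e30   # SI units

      M = 5e10 * M_sun                     # example baryonic mass of a galaxy
      v_flat = (G * M * a0) ** 0.25        # independent of radius: a flat curve
      print(f"asymptotic rotation speed: {v_flat / 1e3:.0f} km/s")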

  11. Manifestly scale-invariant regularization and quantum effective operators

    CERN Document Server

    Ghilencea, D.M.

    2016-01-01

    Scale-invariant theories are often used to address the hierarchy problem; however, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function $\mu(\sigma)$ which after spontaneous breaking of scale symmetry generates the usual DR subtraction scale $\mu(\langle\sigma\rangle)$. One consequence is that "evanescent" interactions generated by scale invariance of the action in $d=4-2\epsilon$ (but vanishing in $d=4$) give rise to new, finite quantum corrections. We find a (finite) correction $\Delta U(\phi,\sigma)$ to the one-loop scalar potential for $\phi$ and $\sigma$, beyond the Coleman-Weinberg term. $\Delta U$ is due to an evanescent correction ($\propto\epsilon$) to the field-dependent masses (of...

  12. Atomic Transition Probabilities for Neutral Cerium

    Science.gov (United States)

    Lawler, J. E.; den Hartog, E. A.; Wood, M. P.; Nitz, D. E.; Chisholm, J.; Sobeck, J.

    2009-10-01

    The spectra of neutral cerium (Ce I) and singly ionized cerium (Ce II) are more complex than spectra of other rare earth species. The resulting high density of lines in the visible makes Ce ideal for use in metal halide (MH) High Intensity Discharge (HID) lamps. Inclusion of cerium-iodide in a lamp dose can improve both the Color Rendering Index and luminous efficacy of a MH-HID lamp. Basic spectroscopic data including absolute atomic transition probabilities for Ce I and Ce II are needed for diagnosing and modeling these MH-HID lamps. Recent work on Ce II [1] is now being augmented with similar work on Ce I. Radiative lifetimes from laser induced fluorescence measurements [2] on neutral Ce are being combined with emission branching fractions from spectra recorded using a Fourier transform spectrometer. A total of 14 high resolution spectra are being analyzed to determine branching fractions for 2000 to 3000 lines from 153 upper levels in neutral Ce. Representative data samples and progress to date will be presented. [1] J. E. Lawler, C. Sneden, J. J. Cowan, I. I. Ivans, and E. A. Den Hartog, Astrophys. J. Suppl. Ser. 182, 51-79 (2009). [2] E. A. Den Hartog, K. P. Buettner, and J. E. Lawler, J. Phys. B: Atomic, Molecular & Optical Physics 42, 085006 (7pp) (2009).

  13. Scale-invariant structure of energy fluctuations in real earthquakes

    Science.gov (United States)

    Wang, Ping; Chang, Zhe; Wang, Huanyu; Lu, Hong

    2017-11-01

    Earthquakes are obviously complex phenomena associated with complicated spatiotemporal correlations, and they are generally characterized by two power laws: the Gutenberg-Richter (GR) and the Omori-Utsu laws. However, an important challenge has been to explain two apparently contrasting features: the GR and Omori-Utsu laws are scale-invariant and unaffected by energy or time scales, whereas earthquakes occasionally exhibit a characteristic energy or time scale, such as with asperity events. In this paper, three high-quality earthquake datasets were used to calculate the earthquake energy fluctuations at various spatiotemporal scales, and the results reveal correlations between seismic events regardless of their critical or characteristic features. The probability density functions (PDFs) of the fluctuations exhibit evidence of another scaling that behaves as a q-Gaussian rather than a Gaussian random process. The scaling behaviors are observed for scales spanning three orders of magnitude. Considering the spatial heterogeneities in a real earthquake fault, we propose an inhomogeneous Olami-Feder-Christensen (OFC) model to describe the statistical properties of real earthquakes. The numerical simulations show that the inhomogeneous OFC model shares the same statistical properties with real earthquakes.
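
    For reference, the q-Gaussian form invoked here generalizes the Gaussian via the q-exponential and reduces to it as q → 1. A minimal sketch comparing the two, with purely illustrative parameters:

      import numpy as np

      def q_gaussian(x, q, beta):
          # Unnormalized q-Gaussian: e_q(-beta x^2) = [1 - (1 - q) beta x^2]^(1/(1-q)), q != 1
          return np.power(1.0 - (1.0 - q) * beta * x * x, 1.0 / (1.0 - q))

      x = np.linspace(-5, 5, 11)
      print(np.round(q_gaussian(x, q=1.5, beta=1.0), 4))   # fat power-law tails
      print(np.round(np.exp(-x * x), 4))                   # Gaussian for comparison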

  14. Rigidity-induced scale invariance in polymer ejection from capsid

    Science.gov (United States)

    Linna, R. P.; Suhonen, P. M.; Piili, J.

    2017-11-01

    While the dynamics of a fully flexible polymer ejecting from a capsid through a nanopore has been extensively studied, the ejection dynamics of semiflexible polymers has not been properly characterized. Here we report results from simulations of the ejection dynamics of semiflexible polymers ejecting from spherical capsids. Ejections start from strongly confined polymer conformations of constant initial monomer density. We find that, unlike for fully flexible polymers, for semiflexible polymers the force measured at the pore does not show a direct relation to the instantaneous ejection velocity. The cumulative waiting time t(s), that is, the time at which monomer s exits the capsid for the last time, shows a clear change when increasing the polymer rigidity κ. The major part of an ejecting polymer is driven out of the capsid by internal pressure. At the final stage the polymer escapes the capsid by diffusion. For the driven part there is a crossover from the essentially exponential growth of t with s of the fully flexible polymers to a scale-invariant form. In addition, a clear dependence of t on the polymer length N₀ was found. These findings combined give the dependence t(s) ∝ N₀^0.55 s^1.33 for strongly rigid polymers. This crossover in dynamics, where κ acts as a control parameter, is reminiscent of a phase transition. This analogy is further enhanced by our finding a perfect data collapse of t for polymers of different N₀ and any constant κ.

  15. Exact scale-invariant background of gravitational waves from cosmic defects.

    Science.gov (United States)

    Figueroa, Daniel G; Hindmarsh, Mark; Urrestilla, Jon

    2013-03-08

    We demonstrate that any scaling source in the radiation era produces a background of gravitational waves with an exact scale-invariant power spectrum. Cosmic defects, created after a phase transition in the early universe, are such a scaling source. We emphasize that the result is independent of the topology of the cosmic defects, the order of phase transition, and the nature of the symmetry broken, global or gauged. As an example, using large-scale numerical simulations, we calculate the scale-invariant gravitational wave power spectrum generated by the dynamics of a global O(N) scalar theory. The result approaches the large-N theoretical prediction as N⁻², albeit with a large coefficient. The signal from global cosmic strings is O(100) times larger than the large-N prediction.

  16. Searching and fixating: Scale-invariance vs. characteristic timescales in attentional processes

    Science.gov (United States)

    Shinde, D. P.; Mehta, Anita; Mishra, R. K.

    2011-06-01

    In an experiment involving semantic search, the visual movements of sample populations subjected to visual and aural input were tracked in a taskless paradigm. The probability distributions of saccades and fixations were obtained and analyzed. Scale-invariance was observed in the saccadic distributions, while the fixation distributions revealed the presence of a characteristic (attentional) time scale for literate subjects. A detailed analysis of our results suggests that saccadic eye motions are an example of Lévy, rather than Brownian, dynamics.
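
    The Lévy-versus-Brownian distinction drawn here is about tail weight: Lévy-stable step distributions are scale-free and heavy-tailed, Gaussian ones are not. A minimal sketch contrasting the two with synthetic samples (the Cauchy distribution being the α = 1 Lévy-stable law):

      import numpy as np

      rng = np.random.default_rng(5)
      n = 100_000
      gaussian_steps = np.abs(rng.standard_normal(n))   # Brownian-like steps
      levy_steps = np.abs(rng.standard_cauchy(n))       # Cauchy = alpha-stable, alpha = 1

      for name, s in [("Gaussian", gaussian_steps), ("Levy (Cauchy)", levy_steps)]:
          # Heavy tails show up as a non-negligible fraction of very large steps
          print(f"{name}: fraction of steps > 10 = {np.mean(s > 10):.4f}")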

  17. Tuning the cosmological constant, broken scale invariance, unitarity

    Energy Technology Data Exchange (ETDEWEB)

    Förste, Stefan; Manz, Paul [Bethe Center for Theoretical Physics, Nussallee 12, 53115 Bonn (Germany); Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany)]

    2016-06-10

    We study gravity coupled to a cosmological constant and a sector that is scale but not conformally invariant. In Minkowski vacuum, scale invariance is spontaneously broken. We consider small fluctuations around the Minkowski vacuum. At the linearised level we find that the trace of metric perturbations receives a positive or negative mass squared contribution. However, only for the Fierz-Pauli combination is the theory free of ghosts. The mass term for the trace of metric perturbations can be cancelled by explicitly breaking scale invariance. This reintroduces fine-tuning. Models based on a four-form field strength show similarities with explicit scale symmetry breaking due to quantisation conditions.

  18. Binary optical filters for scale invariant pattern recognition

    Science.gov (United States)

    Reid, Max B.; Downie, John D.; Hine, Butler P.

    1992-01-01

    Binary synthetic discriminant function (BSDF) optical filters which are invariant to scale changes in the target object of more than 50 percent are demonstrated in simulation and experiment. Efficient databases of scale invariant BSDF filters can be designed which discriminate between two very similar objects at any view scaled over a factor of 2 or more. The BSDF technique has considerable advantages over other methods for achieving scale invariant object recognition, as it also allows determination of the object's scale. In addition to scale, the technique can be used to design recognition systems invariant to other geometric distortions.

  19. Scalar dark matter in scale invariant standard model

    Energy Technology Data Exchange (ETDEWEB)

    Ghorbani, Karim [Physics Department, Faculty of Sciences, Arak University, Arak 38156-8-8349 (Iran, Islamic Republic of); Ghorbani, Hossein [Institute for Research in Fundamental Sciences (IPM), School of Particles and Accelerators, P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-04-05

    We investigate single and two-component scalar dark matter scenarios in the classically scale-invariant standard model, which is free of the hierarchy problem in the Higgs sector. We show that despite the very restricted space of parameters imposed by the scale invariance symmetry, both single and two-component scalar dark matter models overcome the direct and indirect constraints provided by the Planck/WMAP observational data and the LUX/Xenon100 experiment. We also comment on the radiative mass corrections of the classically massless scalon that plays a crucial role in our study.

  20. Spacetime scale-invariance and the super p-brane

    NARCIS (Netherlands)

    Bergshoeff, E.; London, L.A.J.; Townsend, P.K.

    1992-01-01

    We generalize to p-dimensional extended objects and type II superstrings a recently proposed Green-Schwarz type I superstring action in which the tension T emerges as an integration constant of the equations of motion. The action is spacetime scale-invariant but its equations of motion are

  1. On scale invariant features and sequential Monte Carlo sampling for bronchoscope tracking

    Science.gov (United States)

    Luó, Xióngbiao; Feuerstein, Marco; Kitasaka, Takayuki; Natori, Hiroshi; Takabatake, Hirotsugu; Hasegawa, Yoshinori; Mori, Kensaku

    2011-03-01

    This paper presents an improved bronchoscope tracking method for bronchoscopic navigation using scale invariant features and sequential Monte Carlo sampling. Although image-based methods are widely discussed in the community of bronchoscope tracking, they are still limited to characteristic information such as bronchial bifurcations or folds and cannot automatically resume the tracking procedure after failures, which usually result from problematic bronchoscopic video frames or airway deformation. To overcome these problems, we propose a new approach that integrates scale invariant feature-based camera motion estimation into sequential Monte Carlo sampling to achieve accurate and robust tracking. In our approach, sequential Monte Carlo sampling is employed to recursively estimate the posterior probability densities of the bronchoscope camera motion parameters according to the observation model based on scale invariant feature-based camera motion recovery. We evaluate our proposed method on patient datasets. Experimental results illustrate that our proposed method can track a bronchoscope more accurately and robustly than the current state-of-the-art method, particularly increasing the tracking performance by 38.7% without using an additional position sensor.
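
    A minimal sketch of the sequential Monte Carlo (particle filter) recursion the method builds on: predict, reweight by the observation likelihood, resample. This is a 1-D toy problem, not the paper's bronchoscope observation model.

      import numpy as np

      rng = np.random.default_rng(6)
      n_particles, n_frames = 500, 50
      true_x, obs_noise, proc_noise = 0.0, 0.5, 0.2

      particles = rng.standard_normal(n_particles)
      weights = np.full(n_particles, 1.0 / n_particles)

      for _ in range(n_frames):
          true_x += proc_noise * rng.standard_normal()      # hidden state motion
          z = true_x + obs_noise * rng.standard_normal()    # noisy observation

          particles += proc_noise * rng.standard_normal(n_particles)      # predict
          weights *= np.exp(-0.5 * ((z - particles) / obs_noise) ** 2)    # reweight
          weights /= weights.sum()

          # Resample when the effective sample size collapses
          if 1.0 / np.sum(weights**2) < n_particles / 2:
              idx = rng.choice(n_particles, n_particles, p=weights)
              particles = particles[idx]
              weights = np.full(n_particles, 1.0 / n_particles)

      print(f"true: {true_x:.3f}, estimate: {np.sum(weights * particles):.3f}")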

  2. Fluctuating States: What is the Probability of a Thermodynamical Transition?

    Directory of Open Access Journals (Sweden)

    Álvaro M. Alhambra

    2016-10-01

    Full Text Available If the second law of thermodynamics forbids a transition from one state to another, then it is still possible to make the transition happen by using a sufficient amount of work. But if we do not have access to this amount of work, can the transition happen probabilistically? In the thermodynamic limit, this probability tends to zero, but here we find that for finite-sized and quantum systems it can be finite. We compute the maximum probability of a transition or a thermodynamical fluctuation from any initial state to any final state, and show that this maximum can be achieved for any final state that is block diagonal in the energy eigenbasis. We also find upper and lower bounds on this transition probability, in terms of the work of transition. As a by-product, we introduce a finite set of thermodynamical monotones related to the thermomajorization criteria which govern state transitions, and compute the work of transition in terms of them. The trade-off between the probability of a transition and any partial work added to aid in that transition is also considered. Our results have applications in entanglement theory, and we find the amount of entanglement required (or gained) when transforming one pure entangled state into any other.

  3. Transition probabilities in a problem of stochastic process switching

    NARCIS (Netherlands)

    Veestraeten, D.

    2012-01-01

    Extant solutions for state-contingent process switching use first-passage time densities or differential equations. We alternatively employ transition probabilities. These conditional likelihood functions also have obvious appeal for econometric analyses as well as derivative pricing and decision

  4. Transition probabilities and radiative lifetimes of levels in F I

    Energy Technology Data Exchange (ETDEWEB)

    Celik, Gueltekin, E-mail: gultekin@selcuk.edu.tr; Dogan, Duygu; Ates, Sule; Taser, Mehmet

    2012-07-15

    The electric dipole transition probabilities and the lifetimes of excited levels have been calculated using the weakest bound electron potential model theory (WBEPMT) and the quantum defect orbital theory (QDOT) in atomic fluorine. In the calculations, many transition arrays, including both multiplet and fine-structure transitions, are considered. We employed numerical Coulomb approximation (NCA) wave functions and numerical non-relativistic Hartree-Fock (NRHF) wave functions for the expectation values of radii in the determination of parameters. The necessary energy values have been taken from experimental energy data in the literature. The calculated transition probabilities and lifetimes have been compared with available theoretical and experimental results, and good agreement with the literature has been obtained. Moreover, some transition probability and lifetime values not previously available in the literature for some highly excited levels have been obtained using these methods.

  5. Gauge coupling unification in a classically scale invariant model

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki; Ishida, Hiroyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Takahashi, Ryo [Graduate School of Science, Tohoku University, Sendai 980-8578 (Japan); Yamaguchi, Yuya [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2016-02-08

    There are many works within the class of classically scale-invariant models, which are motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to classical scale invariance and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not so far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve gauge coupling unification at the UV scale. In the same way, the model can realize vacuum stability, the smallness of active neutrino masses, the baryon asymmetry of the universe, and the dark matter relic abundance. The model predicts the existence of vector-like fermions charged under SU(3)_C with masses lower than 1 TeV, and an SM singlet Majorana dark matter with mass lower than 2.6 TeV.
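
    For context, such unification claims are checked with the one-loop renormalization group running 1/α_i(μ) = 1/α_i(M_Z) − b_i/(2π) ln(μ/M_Z). A sketch of this standard machinery: with Standard Model field content alone (coefficients below) the three couplings only roughly converge, and the paper's TeV-scale vector-like fermions modify the b_i to achieve unification.

      import numpy as np

      b = np.array([41/10, -19/6, -7])            # one-loop SM beta coefficients
      alpha_inv_mz = np.array([59.0, 29.6, 8.5])  # approximate 1/alpha_i at M_Z (GUT-normalized g1)

      def alpha_inv(mu_gev, mz=91.19):
          # One-loop solution: 1/alpha_i(mu) = 1/alpha_i(MZ) - b_i/(2 pi) ln(mu/MZ)
          return alpha_inv_mz - b / (2 * np.pi) * np.log(mu_gev / mz)

      for mu in (1e3, 1e10, 1e15):
          print(f"mu = {mu:.0e} GeV: 1/alpha =", np.round(alpha_inv(mu), 1))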

  6. On logarithmic extensions of local scale-invariance

    Energy Technology Data Exchange (ETDEWEB)

    Henkel, Malte, E-mail: malte.henkel@ijl.nancy-universite.fr [Groupe de Physique Statistique, Département de Physique de la Matière et des Matériaux, Institut Jean Lamour (CNRS UMR 7198), Université de Lorraine Nancy, B.P. 70239, F-54506 Vandoeuvre lès Nancy Cedex (France)

    2013-04-11

    Ageing phenomena far from equilibrium naturally present dynamical scaling, and in many situations this may be generalised to local scale-invariance. Generically, the absence of time-translation-invariance implies that each scaling operator is characterised by two independent scaling dimensions. Building on analogies with logarithmic conformal invariance and logarithmic Schrödinger-invariance, this work proposes a logarithmic extension of local scale-invariance without time-translation-invariance. Carrying this out requires, in general, replacing both scaling dimensions of each scaling operator by Jordan cells. Covariant two-point functions are derived for the most simple case of a two-dimensional logarithmic extension. Their form is compared to simulational data for autoresponse functions in several universality classes of non-equilibrium ageing phenomena.

  7. Scale-invariance as the origin of dark radiation?

    Directory of Open Access Journals (Sweden)

    Dmitry Gorbunov

    2014-12-01

    Full Text Available Recent cosmological data favor R²-inflation and some amount of non-standard dark radiation in the Universe. We show that a framework of high energy scale invariance can explain these data. The spontaneous breaking of this symmetry provides gravity with the Planck mass and particle physics with the electroweak scale. We found that the corresponding massless Nambu–Goldstone bosons – dilatons – are produced at reheating by the inflaton decay in just the amount needed to explain the primordial abundances of light chemical elements and the anisotropy of the cosmic microwave background. We then extend the discussion to the interplay with Higgs-inflation and to a general class of inflationary models where dilatons are allowed and may form the dark radiation. As a result, we put a lower limit on the reheating temperature in a general scale-invariant model of inflation.

  8. The evolving Planck mass in classically scale-invariant theories

    Science.gov (United States)

    Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.

    2017-04-01

    We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.

  9. The evolving Planck mass in classically scale-invariant theories

    Energy Technology Data Exchange (ETDEWEB)

    Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H. [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia)]

    2017-04-05

    We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.

  10. One-loop potential with scale invariance and effective operators

    CERN Document Server

    Ghilencea, D M

    2016-01-01

    We study quantum corrections to the scalar potential in classically scale-invariant theories, using a manifestly scale-invariant regularization. To this purpose, the subtraction scale $\mu$ of dimensional regularization is generated after spontaneous scale symmetry breaking, from a subtraction function of the fields, $\mu(\phi,\sigma)$. This function is then uniquely determined from general principles, showing that it depends on the dilaton only, with $\mu(\sigma)\sim\sigma$. The result is a scale-invariant one-loop potential $U$ for a Higgs field $\phi$ and dilaton $\sigma$ that contains an additional finite quantum correction $\Delta U(\phi,\sigma)$, beyond the Coleman-Weinberg term. $\Delta U$ contains new, non-polynomial effective operators like $\phi^6/\sigma^2$ whose quantum origin is explained. A flat direction is maintained at the quantum level, the model has vanishing vacuum energy and the one-loop correction to the mass of $\phi$ remains small without tuning (of its self-coupling, etc) bey...

  11. Standard model with spontaneously broken quantum scale invariance

    Science.gov (United States)

    Ghilencea, D. M.; Lalak, Z.; Olszewski, P.

    2017-09-01

    We explore the possibility that scale symmetry is a quantum symmetry that is broken only spontaneously and apply this idea to the standard model. We compute the quantum corrections to the potential of the Higgs field (ϕ ) in the classically scale-invariant version of the standard model (mϕ=0 at tree level) extended by the dilaton (σ ). The tree-level potential of ϕ and σ , dictated by scale invariance, may contain nonpolynomial effective operators, e.g., ϕ6/σ2, ϕ8/σ4, ϕ10/σ6, etc. The one-loop scalar potential is scale invariant, since the loop calculations manifestly preserve the scale symmetry, with the dimensional regularization subtraction scale μ generated spontaneously by the dilaton vacuum expectation value μ ˜⟨σ ⟩. The Callan-Symanzik equation of the potential is verified in the presence of the gauge, Yukawa, and the nonpolynomial operators. The couplings of the nonpolynomial operators have nonzero beta functions that we can actually compute from the quantum potential. At the quantum level, the Higgs mass is protected by spontaneously broken scale symmetry, even though the theory is nonrenormalizable. We compare the one-loop potential to its counterpart computed in the "traditional" dimensional regularization scheme that breaks scale symmetry explicitly (μ =constant) in the presence at the tree level of the nonpolynomial operators.

  12. Non-equilibrium random matrix theory. Transition probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Pedro, Francisco Gil [Univ. Autonoma de Madrid (Spain). Dept. de Fisica Teorica]; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie]

    2016-06-15

    In this letter we present an analytic method for calculating the transition probability between two random Gaussian matrices with given eigenvalue spectra in the context of Dyson Brownian motion. We show that in the Coulomb gas language, in the large-N limit, memory of the initial state is preserved in the form of a universal linear potential acting on the eigenvalues. We compute the likelihood of any given transition as a function of time, showing that as memory of the initial state is lost, the transition probabilities converge to those of the static ensemble.
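
    As a complement to the analytic result, the underlying Dyson Brownian motion is easy to simulate directly: eigenvalues diffuse while repelling one another through a Coulomb term, and spectrum-to-spectrum transition probabilities can be estimated from many such runs. A minimal Python sketch (an illustration only, not the authors' analytic method; the step size and the choice β = 2 are arbitrary):

        import numpy as np

        def dyson_step(lam, dt, beta=2.0, rng=None):
            """One Euler-Maruyama step of Dyson Brownian motion for eigenvalues lam."""
            if rng is None:
                rng = np.random.default_rng()
            diff = lam[:, None] - lam[None, :]
            np.fill_diagonal(diff, np.inf)        # drop the self-interaction term
            drift = (beta / 2.0) * np.sum(1.0 / diff, axis=1)
            noise = rng.normal(scale=np.sqrt(dt), size=len(lam))
            return lam + drift * dt + noise

        # Relax an arbitrary initial spectrum toward the static ensemble; repeating
        # this from fixed initial conditions gives Monte Carlo estimates of the
        # transition probability between two given spectra.
        rng = np.random.default_rng(0)
        lam = np.linspace(-1.0, 1.0, 20)          # initial eigenvalue spectrum
        for _ in range(10000):
            lam = dyson_step(lam, dt=1e-5, rng=rng)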

  13. Scale invariance of shallow seismicity and the prognostic signatures of earthquakes

    Science.gov (United States)

    Stakhovsky, I. R.

    2017-08-01

    The results of seismic investigations based on methods of the theory of nonequilibrium processes and self-similarity theory have shown that a shallow earthquake can be treated as a critical transition that occurs during the evolution of a non-equilibrium seismogenic system and is preceded by phenomena such as the scale invariance of spatiotemporal seismic structures. The implication is that seismicity can be interpreted as a purely multifractal process. Modeling the focal domain as a fractal cluster of microcracks makes it possible to formulate the prognostic signatures of earthquakes actually observed in seismic data. Seismic scaling permits monitoring the state of a seismogenic system as it approaches instability.

  14. Atomic transition probabilities of Ce I from Fourier transform spectra

    Science.gov (United States)

    Lawler, J. E.; Chisholm, J.; Nitz, D. E.; Wood, M. P.; Sobeck, J.; Den Hartog, E. A.

    2010-04-01

    Atomic transition probabilities for 2874 lines of the first spectrum of cerium (Ce I) are reported. These data are from new branching fraction measurements on Fourier transform spectra normalized with previously reported radiative lifetimes from time-resolved laser-induced-fluorescence measurements (Den Hartog et al 2009 J. Phys. B: At. Mol. Opt. Phys. 42 085006). The wavelength range of the data set is from 360 to 1500 nm. Comparisons are made to previous investigations which are less extensive. Accurate Ce I transition probabilities are needed for lighting research and development on metal halide high-intensity discharge lamps.

  15. Atomic transition probabilities of Ce I from Fourier transform spectra

    Energy Technology Data Exchange (ETDEWEB)

    Lawler, J E; Wood, M P; Den Hartog, E A [Department of Physics, University of Wisconsin, 1150 University Ave., Madison, WI 53706 (United States)]; Chisholm, J [Department of Physics, Boston College, 140 Commonwealth Ave., Chestnut Hill, MA 02467 (United States)]; Nitz, D E [Department of Physics, St. Olaf College, 1520 St. Olaf Ave., Northfield, MN 55057 (United States)]; Sobeck, J, E-mail: jelawler@wisc.edu, E-mail: chishojd@bc.edu, E-mail: nitz@stolaf.edu, E-mail: mpwood@wisc.edu, E-mail: jsobeck@uchicago.edu, E-mail: eadenhar@wisc.edu [Department of Astronomy and Astrophysics, University of Chicago, 5640 Ellis Ave., Chicago, IL 60637 (United States)]

    2010-04-28

    Atomic transition probabilities for 2874 lines of the first spectrum of cerium (Ce I) are reported. These data are from new branching fraction measurements on Fourier transform spectra normalized with previously reported radiative lifetimes from time-resolved laser-induced-fluorescence measurements (Den Hartog et al 2009 J. Phys. B: At. Mol. Opt. Phys. 42 085006). The wavelength range of the data set is from 360 to 1500 nm. Comparisons are made to previous investigations which are less extensive. Accurate Ce I transition probabilities are needed for lighting research and development on metal halide high-intensity discharge lamps.

  16. Exact Scale Invariance in Mixing of Binary Candidates in Voting Model

    Science.gov (United States)

    Mori, Shintaro; Hisakado, Masato

    2010-03-01

    We introduce a voting model and discuss the scale invariance in the mixing of candidates. The candidates are classified into two categories μ ∈ {0,1} and are called "binary" candidates. There are in total N = N_0 + N_1 candidates, and voters vote for them one by one. The probability that a candidate gets a vote is proportional to the number of votes it already has. The initial number of votes ("seed") of a candidate μ is set to s_μ. After an infinite count of votes, the probability function of the share of votes of candidate μ obeys a gamma distribution with shape exponent s_μ in the thermodynamic limit Z_0 = N_1 s_1 + N_0 s_0 → ∞. Between the cumulative functions {x_μ} of the binary candidates, the power-law relation 1 − x_1 ∼ (1 − x_0)^α with critical exponent α = s_1/s_0 holds in the region 1 − x_0, 1 − x_1 ≪ 1. In the double scaling limit (s_1, s_0) → (0, 0) and Z_0 → ∞ with s_1/s_0 = α fixed, the relation 1 − x_1 = (1 − x_0)^α holds exactly over the entire range 0 ≤ x_0, x_1 ≤ 1. We study data on horse races obtained from the Japan Racing Association for the period 1986 to 2006 and confirm the scale invariance.
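
    The voting dynamics described here is a Pólya-urn scheme, which is straightforward to simulate and check against the predicted gamma distribution of vote shares. A hedged Python sketch (seed values, candidate counts and vote counts are illustrative choices, not taken from the paper):

        import numpy as np

        def simulate_shares(seeds, n_votes, rng):
            """Polya-urn voting: P(candidate gets the next vote) ∝ its current votes."""
            votes = np.asarray(seeds, dtype=float)   # initial "seed" votes s_mu
            for _ in range(n_votes):
                p = votes / votes.sum()
                votes[rng.choice(len(votes), p=p)] += 1.0
            return votes / votes.sum()               # final vote shares

        rng = np.random.default_rng(1)
        seeds = [0.5] * 50 + [1.5] * 50              # binary candidates, s0 = 0.5, s1 = 1.5
        runs = np.array([simulate_shares(seeds, 2000, rng) for _ in range(100)])
        # In the long-vote limit, the share of a category-mu candidate should follow
        # a gamma distribution with shape parameter s_mu, as stated in the abstract.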

  17. Measurement of "optical" transition probabilities in the silver atom

    NARCIS (Netherlands)

    Terpstra, J.; Smit, J.A.

    1958-01-01

    For 22 spectral lines of the silver atom the probability of spontaneous transition has been derived from measurements of the emission intensity of the line and the population of the corresponding upper level. The medium of excitation was the column of a vertical arc discharge in air at atmospheric pressure.

  18. Evaluation of scaling invariance embedded in short time series.

    Directory of Open Access Journals (Sweden)

    Xue Pan

    Full Text Available Scaling invariance of time series has been making great contributions in diverse research fields. But how to evaluate the scaling exponent from a real-world series is still an open problem. The finite length of a time series may induce unacceptable fluctuation and bias in statistical quantities, and the consequent invalidation of currently used standard methods. In this paper a new concept called correlation-dependent balanced estimation of diffusion entropy is developed to evaluate scale invariance in very short time series with length ~10^2. Calculations with specified Hurst exponent values of 0.2, 0.3, ..., 0.9 show that by using the standard central moving average de-trending procedure this method can evaluate the scaling exponents for short time series with negligible bias (≤0.03) and a sharp confidence interval (standard deviation ≤0.05). Considering the stride series from ten volunteers along an approximately oval path of a specified length, we observe that though the averages and deviations of the scaling exponents are close, their evolutionary behaviors display rich patterns. The method has potential use in analyzing physiological signals, detecting early warning signals, and so on. As an emphasis, our core contribution is that by means of the proposed method one can estimate the Shannon entropy precisely from limited records.
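
    For readers who want to experiment, a bare-bones diffusion entropy estimator conveys the core idea. The Python sketch below is the plain variant, not the correlation-dependent balanced estimator developed in the paper; the bin count and window choices are arbitrary assumptions:

        import numpy as np

        def diffusion_entropy_exponent(x, windows, bins=50):
            """Shannon entropy S(t) of the diffusion PDF built from all length-t
            sub-trajectories of the zero-mean series x. For a scale-invariant
            series, S(t) ~ A + delta * ln(t); the slope delta is returned."""
            x = np.asarray(x, float) - np.mean(x)
            csum = np.concatenate([[0.0], np.cumsum(x)])
            entropies = []
            for t in windows:
                disp = csum[t:] - csum[:-t]                 # displacements over window t
                p, edges = np.histogram(disp, bins=bins, density=True)
                dx = edges[1] - edges[0]
                p = p[p > 0]
                entropies.append(-np.sum(p * np.log(p) * dx))
            return np.polyfit(np.log(windows), entropies, 1)[0]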

  19. Criticality in the scale invariant standard model (squared)

    Directory of Open Access Journals (Sweden)

    Robert Foot

    2015-07-01

    Full Text Available We consider first the standard model Lagrangian with the μ_h^2 Higgs potential term set to zero. We point out that this classically scale invariant theory potentially exhibits radiative electroweak/scale symmetry breaking with a very high vacuum expectation value (VEV) for the Higgs field, 〈ϕ〉 ≈ 10^17–10^18 GeV. Furthermore, if such a vacuum were realized then cancellation of vacuum energy automatically implies that this nontrivial vacuum is degenerate with the trivial unbroken vacuum. Such a theory would therefore be critical, with the Higgs self-coupling and its beta function nearly vanishing at the symmetry breaking minimum, λ(μ = 〈ϕ〉) ≈ β_λ(μ = 〈ϕ〉) ≈ 0. A phenomenologically viable model that predicts this criticality property arises if we consider two copies of the standard model Lagrangian, with an exact Z_2 symmetry swapping each ordinary particle with a partner. The spontaneously broken vacuum can then arise where one sector gains the high scale VEV, while the other gains the electroweak scale VEV. The low scale VEV is perturbed away from zero due to a Higgs portal coupling, or via the usual small Higgs mass terms μ_h^2, which softly break the scale invariance. In either case, the cancellation of vacuum energy requires M_t = (171.53 ± 0.42) GeV, which is close to its measured value of (173.34 ± 0.76) GeV.

  20. Scaling invariance of spherical projectile fragmentation upon high-velocity impact on a thin continuous shield

    Energy Technology Data Exchange (ETDEWEB)

    Myagkov, N. N., E-mail: nn-myagkov@mail.ru [Russian Academy of Sciences, Institute of Applied Mechanics (Russian Federation)]

    2017-01-15

    The problem of aluminum projectile fragmentation upon high-velocity impact on a thin aluminum shield is considered. A distinctive feature of this description is that the fragmentation has been numerically simulated using the complete system of equations of deformed solid mechanics, by the method of smoothed particle hydrodynamics in a three-dimensional setting. The transition from damage to fragmentation is analyzed and scaling relations are derived in terms of the impact velocity (V), the ratio of shield thickness to projectile diameter (h/D), and the ultimate strength (σ_p) in the fracture criterion of the projectile and shield. Analysis shows that the critical impact velocity V_c (separating the damage and fragmentation regions) is a power function of σ_p and h/D. In the supercritical region (V > V_c), the weight-average fragment mass asymptotically tends to a power function of the impact velocity with an exponent independent of h/D and σ_p. Mean cumulative fragment mass distributions at the critical point are scale-invariant with respect to the parameters h/D and σ_p. Average masses of the largest fragments are also scale-invariant at V > V_c, but only with respect to the variable parameter σ_p.

  1. Executable Code Recognition in Network Flows Using Instruction Transition Probabilities

    Science.gov (United States)

    Kim, Ikkyun; Kang, Koohong; Choi, Yangseo; Kim, Daewon; Oh, Jintae; Jang, Jongsoo; Han, Kijun

    The ability to quickly recognize whether the content of a network flow is executable is a prerequisite for malware detection. For this purpose, we introduce an instruction transition probability matrix (ITPX), which is built over the IA-32 instruction set and reveals the characteristics of executable code's instruction transition patterns. We then propose a simple algorithm to detect executable code inside network flows using a reference ITPX learned from known Windows Portable Executable files. We have tested the algorithm with thousands of executable and non-executable samples. The results show that it is promising enough to use in the real world.
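
    In outline, the detector reduces to two steps: learn a reference transition matrix from known executables, then score unseen flows against it. A simplified Python sketch over raw byte values; note the paper works with disassembled IA-32 instructions rather than bytes, and the cosine score, function names and threshold here are our own illustrative assumptions:

        import numpy as np

        def transition_matrix(symbols, n_symbols=256):
            """Row-normalised first-order transition matrix over a symbol stream."""
            m = np.zeros((n_symbols, n_symbols))
            for a, b in zip(symbols[:-1], symbols[1:]):
                m[a, b] += 1
            row = m.sum(axis=1, keepdims=True)
            return np.divide(m, row, out=np.zeros_like(m), where=row > 0)

        def looks_executable(flow_bytes, reference_itpx, threshold=0.8):
            """Cosine similarity between a flow's matrix and a reference ITPX
            learned from known PE files (the threshold is a guess)."""
            m = transition_matrix(list(flow_bytes)).ravel()
            r = reference_itpx.ravel()
            sim = m @ r / (np.linalg.norm(m) * np.linalg.norm(r) + 1e-12)
            return sim > threshold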

  2. Generalized scale invariance, clouds and radiative transfer on multifractal clouds

    Energy Technology Data Exchange (ETDEWEB)

    Lovejoy, S.; Schertzer, D. [Univ. Pierre et Marie Curie, Paris (France)]

    1995-09-01

    Recent systematic satellite studies (LANDSAT, AVHRR, METEOSAT) of cloud radiances using (isotropic) energy spectra have displayed excellent scaling from at least about 300 m to about 4000 km, even for individual cloud pictures. At first sight, this contradicts the observed diversity of cloud morphology, texture and type. The authors argue that the explanation of this apparent paradox is that the differences are due to anisotropy, e.g. differential stratification and rotation. A general framework for anisotropic scaling, expressed in terms of isotropic self-similar scaling, fractals and multifractals, is needed. Schertzer and Lovejoy have proposed Generalized Scale Invariance (GSI) in response to this need. In GSI, the statistics of the large and small scales of a system can be related to each other by a scale-changing operator T_λ which depends only on the scale ratio λ; there is no characteristic size. 3 refs., 1 fig.

  3. Inertial Spontaneous Symmetry Breaking and Quantum Scale Invariance

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Pedro G. [Oxford U.]; Hill, Christopher T. [Fermilab]; Ross, Graham G. [Oxford U., Theor. Phys.]

    2018-01-23

    Weyl invariant theories of scalars and gravity can generate all mass scales spontaneously, initiated by a dynamical process of "inertial spontaneous symmetry breaking" that does not involve a potential. This is dictated by the structure of the Weyl current, $K_\mu$, and a cosmological phase during which the universe expands and the Einstein-Hilbert effective action is formed. Maintaining exact Weyl invariance in the renormalised quantum theory is straightforward when renormalisation conditions are referred back to the VEVs of fields in the action of the theory, which implies a conserved Weyl current. We do not require scale invariant regulators. We illustrate the computation of a Weyl invariant Coleman-Weinberg potential.

  4. Weyl current, scale-invariant inflation, and Planck scale generation

    Science.gov (United States)

    Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.

    2017-02-01

    Scalar fields, ϕ_i, can be coupled nonminimally to curvature and satisfy the general criteria: (i) the theory has no mass input parameters, including M_P = 0; (ii) the ϕ_i have arbitrary values and gradients, but undergo a general expansion and relaxation to constant values that satisfy a nontrivial constraint, K(ϕ_i) = constant; (iii) this constraint breaks scale symmetry spontaneously, and the Planck mass is dynamically generated; (iv) there can be adequate inflation associated with slow roll in a scale-invariant potential subject to the constraint; (v) the final vacuum can have a small to vanishing cosmological constant; (vi) large hierarchies in vacuum expectation values can naturally form; (vii) there is a harmless dilaton which naturally eludes the usual constraints on massless scalars. These models are governed by a global Weyl scale symmetry and its conserved current, K_μ. At the quantum level the Weyl scale symmetry can be maintained by an invariant specification of renormalized quantities.

  5. Scale-invariant gauge theories of gravity: theoretical foundations

    CERN Document Server

    Lasenby, Anthony

    2015-01-01

    We consider the construction of gauge theories of gravity, focussing in particular on the extension of local Poincaré invariance to include invariance under local changes of scale. We work exclusively in terms of finite transformations, which allow for a more transparent interpretation of such theories in terms of gauge fields in Minkowski spacetime. Our approach therefore differs from the usual geometrical description of locally scale-invariant Poincaré gauge theory (PGT) and Weyl gauge theory (WGT) in terms of Riemann-Cartan and Weyl-Cartan spacetimes, respectively. In particular, we reconsider the interpretation of the Einstein gauge and also the equations of motion of matter fields and test particles in these theories. Inspired by the observation that the PGT and WGT matter actions for the Dirac field and electromagnetic field have more general invariance properties than those imposed by construction, we go on to present a novel alternative to WGT by considering an 'extended' form for the transforma...

  6. Scale Invariance in Lateral Head Scans During Spatial Exploration

    Science.gov (United States)

    Yadav, Chetan K.; Doreswamy, Yoganarasimha

    2017-04-01

    Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes, provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.

  7. Camera-Model Identification Using Markovian Transition Probability Matrix

    Science.gov (United States)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the brands and models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify the statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
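
    The feature construction can be sketched compactly: difference the JPEG 2-D array, threshold it, and use the resulting Markov transition matrix as the feature vector. A hedged Python illustration for a single (horizontal) direction; the threshold T = 4 and the function name are assumptions, and a full pipeline would repeat this for four directions on the Y and Cb arrays and feed the concatenated features to a multi-class SVM (e.g. sklearn.svm.SVC):

        import numpy as np

        def markov_features(channel, T=4):
            """Thresholded horizontal difference array -> row-normalised transition
            probability matrix, flattened into a (2T+1)^2-dim feature vector."""
            d = channel[:, :-1].astype(int) - channel[:, 1:].astype(int)
            d = np.clip(d, -T, T) + T                      # map values into 0..2T
            m = np.zeros((2 * T + 1, 2 * T + 1))
            for a, b in zip(d[:, :-1].ravel(), d[:, 1:].ravel()):
                m[a, b] += 1
            row = m.sum(axis=1, keepdims=True)
            m = np.divide(m, row, out=np.zeros_like(m), where=row > 0)
            return m.ravel()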

  8. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
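
    For contrast with the maximum-entropy machinery developed in the paper, the simplest data-driven baseline is the empirical (maximum-likelihood) estimator built from pair counts. A small Python sketch; the uniform fallback for unvisited states is our own assumption, not part of the paper:

        import numpy as np

        def estimate_transitions(states, n):
            """Empirical transition probabilities of a chain on {0, ..., n-1},
            estimated from one observed state sequence."""
            counts = np.zeros((n, n))
            for a, b in zip(states[:-1], states[1:]):
                counts[a, b] += 1
            rows = counts.sum(axis=1, keepdims=True)
            # Unvisited states get a uniform row as a harmless default.
            return np.divide(counts, rows, out=np.full_like(counts, 1.0 / n),
                             where=rows > 0)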

  9. Energy levels and transition probabilities for Fe XXV ions

    Energy Technology Data Exchange (ETDEWEB)

    Norrington, P.H.; Kingston, A.E.; Boone, A.W. [Department of Applied Maths and Theoretical Physics, Queen's University, Belfast BT7 1NN (United Kingdom)]

    2000-05-14

    The energy levels of the 1s^2, 1s2l and 1s3l states of helium-like iron Fe XXV have been calculated using two sets of configuration-interaction wavefunctions. One set of wavefunctions was generated using the fully relativistic GRASP code and the other was obtained using CIV3, in which relativistic effects are introduced using the Breit-Pauli approximation. For transitions from the ground state to the n=2 and 3 states and for transitions between the n=2 and 3 states, the calculated excitation energies obtained by these two independent methods are in very good agreement, and there is good agreement between these results and recent theoretical and experimental results. However, there is considerable disagreement between the various excitation energies for the transitions among the n=2 and also among the n=3 states. The two sets of wavefunctions are also used to calculate the E1, E2, M1 and M2 transition probabilities between all of the 1s^2, 1s2l and 1s3l states of helium-like iron Fe XXV. The results from the two calculations are found to be similar and to compare very well with other recent results for Δn=1 or 2 transitions. For Δn=0 transitions the agreement is much less satisfactory; this is mainly due to differences in the excitation energies. (author)

  10. Recursive recovery of Markov transition probabilities from boundary value data

    Energy Technology Data Exchange (ETDEWEB)

    Patch, Sarah Kathyrn [Univ. of California, Berkeley, CA (United States)]

    1994-04-01

    In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue, Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Plücker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.

  11. Stark shifts and transition probabilities within the Kr I spectrum

    Science.gov (United States)

    Milosavljević, V.; Simić, Z.; Daniels, S.; Dimitrijević, M. S.

    2012-05-01

    On the basis of 28 experimentally determined prominent neutral krypton (Kr I) line shapes (in the 5s-5p and 5s-6p transitions), we have obtained the electron (d_e) and ion (d_i) contributions to the total Stark shifts (d_t). Stark shifts are also calculated using the semiclassical perturbation formalism (SCPF) for electrons, protons and helium ions as perturbers, for electron temperatures up to 50 000 K. Transition probabilities of spontaneous emission (Einstein's A_{k,i} values) have been obtained using the relative line-intensity ratio method. The separate electron (d_e) and ion (d_i) contributions to the total Stark shifts are presented, as well as the ion-broadening parameters, which describe the influence of the ion-dynamical effect on the shift of the line shape, for neutral krypton spectral lines. We compare our measured and calculated d_e data with each other and with other available experimental and theoretical d_e values.

  12. Strain estimation in elastography using scale-invariant keypoints tracking.

    Science.gov (United States)

    Xiao, Yang; Shen, Yang; Niu, Lili; Ling, Tao; Wang, Congzhi; Zheng, Hairong

    2013-04-01

    This paper proposes a novel strain estimator using scale-invariant keypoints tracking (SIKT) for ultrasonic elastography. The method tracks stable features between the pre- and post-compression A-lines to obtain tissue displacement estimates. The proposed features, termed scale-invariant keypoints, are independent of signal scale change according to scale-space theory, and can therefore preserve their patterns while undergoing a substantial range of compression. The keypoints are produced by searching for repeatedly assigned points across all possible scales constructed from the convolution with a one-parameter family of Gaussian kernels. Because of this distinctive property, the SIKT method can provide reliable tracking over changing strains, effective resistance to anamorphic and sonographic noise, and a significant reduction in processing time. Simulation and experimental results show that the SIKT method provides better sensitivity, a larger dynamic range of the strain filter, higher resolution, and a better contrast-to-noise ratio (CNR_e) than the conventional methods. Moreover, the computation time of the SIKT method is approximately one fifth that of the cross-correlation techniques.

  13. Scale invariant for one-sided multivariate likelihood ratio tests

    Directory of Open Access Journals (Sweden)

    Samruam Chongcharoen

    2010-07-01

    Full Text Available Suppose X_1, X_2, ..., X_n is a random sample from an N_p(μ, V) distribution. Consider H_0: μ_1 = μ_2 = ... = μ_p = 0 and H_1: μ_i ≥ 0 for i = 1, 2, ..., p; let H_1 − H_0 denote the hypothesis that H_1 holds but H_0 does not, and let ~H_0 denote the hypothesis that H_0 does not hold. Because the likelihood ratio test (LRT) of H_0 versus H_1 − H_0 is complicated, several ad hoc tests have been proposed. Tang, Gnecco and Geller (1989) proposed an approximate LRT, Follmann (1996) suggested rejecting H_0 if the usual test of H_0 versus ~H_0 rejects H_0 with significance level 2α and a weighted sum of the sample means is positive, and Chongcharoen, Singh and Wright (2002) modified Follmann's test to include information about the correlation structure in the sum of the sample means. Chongcharoen and Wright (2007, 2006) give versions of the Tang-Gnecco-Geller tests and Follmann-type tests, respectively, with invariance properties. With the LRT's desired scale-invariance property, we investigate its powers by using Monte Carlo techniques and compare them with the tests recommended in Chongcharoen and Wright (2007, 2006).

  14. Spectral-Spatial Scale Invariant Feature Transform for Hyperspectral Images.

    Science.gov (United States)

    Al-Khafaji, Suhad Lateef; Zhou, Jun; Zia, Ali; Liew, Alan Wee-Chung

    2017-09-04

    Spectral-spatial feature extraction is an important task in hyperspectral image processing. In this paper we propose a novel method to extract distinctive invariant features from hyperspectral images for the registration of hyperspectral images with different spectral conditions. Spectral condition means that images are captured under different incident lights, from different viewing angles, or using different hyperspectral cameras. In addition, spectral condition includes images of objects with the same shape but different materials. This method, which is named Spectral-Spatial Scale Invariant Feature Transform (SS-SIFT), explores both spectral and spatial dimensions simultaneously to extract spectral and geometric transformation invariant features. Similar to the classic SIFT algorithm, SS-SIFT consists of keypoint detection and descriptor construction steps. Keypoints are extracted from the spectral-spatial scale space and are detected from extrema after a 3D difference-of-Gaussian filter is applied to the data cube. Two descriptors are proposed for each keypoint by exploring the distribution of spectral-spatial gradient magnitudes in its local 3D neighborhood. The effectiveness of the SS-SIFT approach is validated on images collected under different light conditions, with different geometric projections, and using two hyperspectral cameras with different spectral wavelength ranges and resolutions. The experimental results show that our method generates robust invariant features for spectral-spatial image matching.

  15. Scale Invariant Gabor Descriptor-based Noncooperative Iris Recognition

    Directory of Open Access Journals (Sweden)

    Zhi Zhou

    2010-01-01

    Full Text Available A new noncooperative iris recognition method is proposed. In this method, the iris features are extracted using a Gabor descriptor. The feature extraction and comparison are scale, deformation, rotation, and contrast-invariant. It works with off-angle and low-resolution iris images. The Gabor wavelet is incorporated with scale-invariant feature transformation (SIFT for feature extraction to better extract the iris features. Both the phase and magnitude of the Gabor wavelet outputs were used in a novel way for local feature point description. Two feature region maps were designed to locally and globally register the feature points and each subregion in the map is locally adjusted to the dilation/contraction/deformation. We also developed a video-based non-cooperative iris recognition system by integrating video-based non-cooperative segmentation, segmentation evaluation, and score fusion units. The proposed method shows good performance for frontal and off-angle iris matching. Video-based recognition methods can improve non-cooperative iris recognition accuracy.

  16. Scale Invariant Gabor Descriptor-Based Noncooperative Iris Recognition

    Directory of Open Access Journals (Sweden)

    Du Yingzi

    2010-01-01

    Full Text Available A new noncooperative iris recognition method is proposed. In this method, the iris features are extracted using a Gabor descriptor. The feature extraction and comparison are scale, deformation, rotation, and contrast-invariant. It works with off-angle and low-resolution iris images. The Gabor wavelet is incorporated with scale-invariant feature transformation (SIFT for feature extraction to better extract the iris features. Both the phase and magnitude of the Gabor wavelet outputs were used in a novel way for local feature point description. Two feature region maps were designed to locally and globally register the feature points and each subregion in the map is locally adjusted to the dilation/contraction/deformation. We also developed a video-based non-cooperative iris recognition system by integrating video-based non-cooperative segmentation, segmentation evaluation, and score fusion units. The proposed method shows good performance for frontal and off-angle iris matching. Video-based recognition methods can improve non-cooperative iris recognition accuracy.

  17. Classical scale invariance in the inert doublet model

    Energy Technology Data Exchange (ETDEWEB)

    Plascencia, Alexis D. [Institute for Particle Physics Phenomenology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom)]

    2015-09-04

    The inert doublet model (IDM) is a minimal extension of the Standard Model (SM) that can account for the dark matter in the universe. Naturalness arguments motivate us to study whether the model can be embedded into a theory with dynamically generated scales. In this work we study a classically scale invariant version of the IDM with a minimal hidden sector, which has a U(1)_CW gauge symmetry and a complex scalar Φ. The mass scale is generated in the hidden sector via the Coleman-Weinberg (CW) mechanism and communicated to the two Higgs doublets via portal couplings. Since the CW scalar remains light, acquires a vacuum expectation value and mixes with the SM Higgs boson, the phenomenology of this construction can be modified with respect to the traditional IDM. We analyze the impact of adding this CW scalar and the Z′ gauge boson on the calculation of the dark matter relic density and on the spin-independent nucleon cross section for direct detection experiments. Finally, by studying the RG equations we find regions in parameter space which remain valid all the way up to the Planck scale.

  18. Dark matter and leptogenesis linked by classical scale invariance

    Science.gov (United States)

    Khoze, Valentin V.; Plascencia, Alexis D.

    2016-11-01

    In this work we study a classically scale invariant extension of the Standard Model that can explain simultaneously dark matter and the baryon asymmetry in the universe. In our set-up we introduce a dark sector, namely a non-Abelian SU(2) hidden sector coupled to the SM via the Higgs portal, and a singlet sector responsible for generating Majorana masses for three right-handed sterile neutrinos. The gauge bosons of the dark sector are mass-degenerate and stable, and this makes them suitable as dark matter candidates. Our model also accounts for the matter-antimatter asymmetry. The lepton flavour asymmetry is produced during CP-violating oscillations of the GeV-scale right-handed neutrinos, and converted to the baryon asymmetry by the electroweak sphalerons. All the characteristic scales in the model (the electroweak, dark matter, and leptogenesis/neutrino mass scales) are generated radiatively, have a common origin, and are related to each other via scalar field couplings in perturbation theory.

  19. Higgs naturalness and dark matter stability by scale invariance

    Directory of Open Access Journals (Sweden)

    Jun Guo

    2015-09-01

    Full Text Available Extending the spacetime symmetries of the standard model (SM) by scale invariance (SI) may address the Higgs naturalness problem. In this article we attempt to embed accidental dark matter (DM) in the SI SM, requiring that the symmetry protecting DM stability is accidental due to the model structure rather than imposed by hand. In this framework, if the light SM-like Higgs boson is the pseudo Goldstone boson of spontaneous SI breaking, we can even pin down the model: two Higgs doublets plus a real singlet. The singlet is the DM candidate and the extra Higgs doublet triggers electroweak symmetry breaking via the Coleman-Weinberg mechanism; moreover, it dominates the DM dynamics. We study spontaneous breaking of SI using the Gildener-Weinberg approach and find that the second doublet should acquire a vacuum expectation value near the weak scale. Moreover, its components should acquire masses around 380 GeV, except for a light CP-odd Higgs boson. Based on these features, we explore viable ways to achieve the correct relic density of DM, facing stringent constraints from direct detection of DM. For instance, DM annihilates into bb̄ near the SM-like Higgs boson pole, or into a pair of CP-odd Higgs bosons with mass above that pole.

  20. ε-meson coupling constants and scale invariance breaking

    CERN Document Server

    Petersen, J L

    1972-01-01

    A general method for obtaining ratios of coupling constants (defined by pole residues) in a way which is completely free of resonance/background separation troubles is devised and applied to the ε meson. Huge discrepancies between previous determinations are shown to arise (i) from inherent ambiguities in the methods used, (ii) from lack of knowledge about the ε-pole position and (iii) from the well-known up-down ambiguity in the isospin-0 s-wave ππ phase shift δ_0^0. Taking as input πN phase shifts and available information on δ_0^0, and including all possible uncertainties, the authors find for down-up or up-up δ_0^0: g_εNN/g_εππ = (6 ± 3) μ^-1, and for down-down or up-down δ_0^0: g_εNN/g_εππ = (1.8 ± 0.5) μ^-1. The precise validity of the scale invariance breaking prediction (g_εNN/g_εππ) · m_ε^2/2M = 1 is fulfilled in some th...

  1. Dark matter and leptogenesis linked by classical scale invariance

    Energy Technology Data Exchange (ETDEWEB)

    Khoze, Valentin V.; Plascencia, Alexis D. [Institute for Particle Physics Phenomenology, Department of Physics, Durham University, South Road, Durham, DH1 3LE (United Kingdom)]

    2016-11-07

    In this work we study a classically scale invariant extension of the Standard Model that can explain simultaneously dark matter and the baryon asymmetry in the universe. In our set-up we introduce a dark sector, namely a non-Abelian SU(2) hidden sector coupled to the SM via the Higgs portal, and a singlet sector responsible for generating Majorana masses for three right-handed sterile neutrinos. The gauge bosons of the dark sector are mass-degenerate and stable, and this makes them suitable as dark matter candidates. Our model also accounts for the matter-antimatter asymmetry. The lepton flavour asymmetry is produced during CP-violating oscillations of the GeV-scale right-handed neutrinos, and converted to the baryon asymmetry by the electroweak sphalerons. All the characteristic scales in the model (the electroweak, dark matter, and leptogenesis/neutrino mass scales) are generated radiatively, have a common origin, and are related to each other via scalar field couplings in perturbation theory.

  2. Scale invariance of a diode-like tunnel junction

    Science.gov (United States)

    Cabrera, Hugo; Zanin, Danilo Andrea; de Pietro, Lorenzo Giuseppe; Michaels, Thomas; Thalmann, Peter; Ramsperger, Urs; Vindigni, Alessandro; Pescia, Danilo

    2013-03-01

    In Near Field-Emission SEM (NFESEM), electrostatic considerations favor a diode-like tunnel junction consisting of an atomic-sized source mounted at the apex of a thin wire placed at nanometric distance from a collector. The quantum mechanical tunnel process, instead, can provide a barrier toward miniaturization. In the first place, it deteriorates the generation of electrons by introducing non-linearities within the classically forbidden zone that increase exponentially with decreasing size. In addition, in the direct tunnelling regime, i.e. when the distance d between emitter and collector approaches the subnanometer range, a characteristic length appears, marking the cross-over from the (almost) scale-invariant electric-field-assisted regime to the essentially different STM regime. We have observed that the experimental data relating the current I to the two experimental variables V (bias voltage between tip and collector) and d can be made to (almost) collapse onto a "scaling curve" relating I to the single variable V·d^-λ, λ being an exponent that depends solely on the geometry of the junction. This scaling property can be used to highlight non-linear aspects of the quantum mechanical tunnelling process.

  3. Scale invariant energy smoothing estimates for the Schrödinger equation with small magnetic potential

    OpenAIRE

    Georgiev, Vladimir; Tarulli, Mirko

    2005-01-01

    We consider some scale invariant generalizations of the smoothing estimates for the free Schrödinger equation obtained by Kenig, Ponce and Vega. Applying these estimates and using appropriate commutator estimates, we obtain similar scale invariant smoothing estimates for the perturbed Schrödinger equation with small magnetic potential.

  4. Scale invariance implies conformal invariance for the three-dimensional Ising model.

    Science.gov (United States)

    Delamotte, Bertrand; Tissier, Matthieu; Wschebor, Nicolás

    2016-01-01

    Using the Wilson renormalization group, we show that if no integrated vector operator of scaling dimension -1 exists, then scale invariance implies conformal invariance. By using the Lebowitz inequalities, we prove that this necessary condition is fulfilled in all dimensions for the Ising universality class. This shows, in particular, that scale invariance implies conformal invariance for the three-dimensional Ising model.

  5. Energy probability distribution zeros: A route to study phase transitions

    Science.gov (United States)

    Costa, B. V.; Mól, L. A. S.; Rocha, J. C. S.

    2017-07-01

    In the study of phase transitions very few models are accessible to exact solution. In most cases analytical simplifications have to be made, or numerical techniques have to be used, to gain insight into their critical properties. Numerically, the most common approaches are those based on Monte Carlo simulations together with finite-size scaling analysis. The use of Monte Carlo techniques requires the estimation of quantities like the specific heat or susceptibilities over a wide range of temperatures, or the construction of the density of states over large intervals of energy. Although many of these techniques are well developed, they may be very time consuming when the system size becomes large enough. It would be desirable to have a method that could surpass those difficulties. In this work we present an iterative method to study the critical behavior of a system based on partial knowledge of the complex Fisher zeros set of the partition function. The method is general, with advantages over most conventional techniques, since it does not need to identify any order parameter a priori. The critical temperature and exponents can be obtained with great precision even in the most unamenable cases, like the two-dimensional XY model. To test the method and to show how it works we applied it to some selected models where the transitions are well known: the 2D Ising, Potts and XY models, and a homopolymer system. Our choices cover systems with first-order, continuous, and Berezinskii-Kosterlitz-Thouless transitions, as well as the homopolymer, which has two pseudo-transitions. The strategy can easily be adapted to any model, classical or quantum, once we are able to build the corresponding energy probability distribution.
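
    The practical core of zero-based methods is easy to illustrate: once the density of states (and hence the energy probability distribution) is known, the partition function is a polynomial in x = e^(-β) whose complex zeros can be located directly. A toy Python sketch, not the authors' iterative algorithm; the five-level g(E) is invented, and a real density of states could come from, e.g., Wang-Landau sampling:

        import numpy as np

        def partition_zeros(g):
            """Zeros in x = exp(-beta) of Z(x) = sum_E g(E) x^E for a model with
            integer energy levels E = 0 .. len(g)-1 and density of states g(E)."""
            return np.roots(g[::-1])   # np.roots wants highest-degree coefficient first

        g = np.array([1.0, 0.0, 3.0, 6.0, 3.0])   # made-up toy density of states
        zeros = partition_zeros(g)
        # As the system size grows, zeros pinching the positive real x-axis at x_c
        # signal a phase transition at beta_c = -log(x_c).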

  6. Transition probability generating function of a transitionless quantum parametric oscillator

    Science.gov (United States)

    Mishima, Hiroaki; Izumida, Yuki

    2017-07-01

    The transitionless tracking (TT) algorithm enables the exact tracking of quantum adiabatic dynamics in an arbitrarily short time by adding a counterdiabatic Hamiltonian to the original adiabatic Hamiltonian. By applying Husimi's method, originally developed for a quantum parametric oscillator (QPO), to the transitionless QPO achieved using the TT algorithm, we obtain the transition probability generating function with a time-dependent parameter constructed from solutions of the corresponding classical parametric oscillator (CPO). By obtaining the explicit solutions of this CPO using the phase-amplitude method, we find that the time-dependent parameter can be reduced to the frequency ratio between the Hamiltonians without and with the counterdiabatic Hamiltonian, from which we can easily characterize the result achieved by the TT algorithm. We illustrate our theory by showing the trajectories of the CPO on the classical phase space, which elucidate the effect of the counterdiabatic Hamiltonian of the QPO.

  7. Transition probabilities in neutron-rich 84,86Se

    Science.gov (United States)

    Litzinger, J.; Blazhev, A.; Dewald, A.; Didierjean, F.; Duchêne, G.; Fransen, C.; Lozeva, R.; Sieja, K.; Verney, D.; de Angelis, G.; Bazzacco, D.; Birkenbach, B.; Bottoni, S.; Bracco, A.; Braunroth, T.; Cederwall, B.; Corradi, L.; Crespi, F. C. L.; Désesquelles, P.; Eberth, J.; Ellinger, E.; Farnea, E.; Fioretto, E.; Gernhäuser, R.; Goasduff, A.; Görgen, A.; Gottardo, A.; Grebosz, J.; Hackstein, M.; Hess, H.; Ibrahim, F.; Jolie, J.; Jungclaus, A.; Kolos, K.; Korten, W.; Leoni, S.; Lunardi, S.; Maj, A.; Menegazzo, R.; Mengoni, D.; Michelagnoli, C.; Mijatovic, T.; Million, B.; Möller, O.; Modamio, V.; Montagnoli, G.; Montanari, D.; Morales, A. I.; Napoli, D. R.; Niikura, M.; Pollarolo, G.; Pullia, A.; Quintana, B.; Recchia, F.; Reiter, P.; Rosso, D.; Sahin, E.; Salsac, M. D.; Scarlassara, F.; Söderström, P.-A.; Stefanini, A. M.; Stezowski, O.; Szilner, S.; Theisen, Ch.; Valiente Dobón, J. J.; Vandone, V.; Vogt, A.

    2015-12-01

    Reduced quadrupole transition probabilities for low-lying transitions in neutron-rich 84,86Se are investigated with a recoil distance Doppler shift (RDDS) experiment. The experiment was performed at the Istituto Nazionale di Fisica Nucleare (INFN) Laboratori Nazionali di Legnaro using the Cologne Plunger device for the RDDS technique and the AGATA Demonstrator array for the γ-ray detection, coupled to the PRISMA magnetic spectrometer for event-by-event particle identification. In 86Se the level lifetime of the yrast 2_1^+ state and an upper limit for the lifetime of the 4_1^+ state are determined for the first time. The results for 86Se are in agreement with previously reported predictions of large-scale shell-model calculations using the Ni78-I and Ni78-II effective interactions. In addition, intrinsic shape parameters of the lowest yrast states in 86Se are calculated. In semimagic 84Se the level lifetimes of the yrast 4_1^+ and 6_1^+ states are determined for the first time. Large-scale shell-model calculations using the effective interactions Ni78-II, JUN45, jj4b, and jj4pna are performed. The calculations describe B(E2; 2_1^+ → 0_1^+) and B(E2; 6_1^+ → 4_1^+) fairly well and point out problems in reproducing the experimental B(E2; 4_1^+ → 2_1^+).

  8. Sphaleron and critical bubble in the scale invariant two Higgs doublet model

    Directory of Open Access Journals (Sweden)

    Kaori Fuyuto

    2015-07-01

    Full Text Available We revisit the electroweak phase transition and the critical bubble in the scale invariant two Higgs doublet model in the light of recent LHC data. Moreover, the sphaleron decoupling condition is newly evaluated in this model. The analysis is done by using the resummed finite-temperature one-loop effective potential. It is found that the 125 GeV Higgs boson inevitably leads to a strong first-order electroweak phase transition, whose strength is always large enough to satisfy the sphaleron decoupling condition, v_N/T_N > 1.2, where T_N denotes the nucleation temperature and v_N is the Higgs vacuum expectation value at T_N. In this model, even if the Higgs boson couplings to gauge bosons and fermions are similar to the standard model values, the signal strength of the Higgs decay to two photons is reduced by 10% and the triple Higgs boson coupling is enhanced by 82% compared to the standard model predictions.

  9. Time-scale invariances in preseismic electromagnetic radiation, magnetization and damage evolution of rocks

    Directory of Open Access Journals (Sweden)

    Y. Kawada

    2007-10-01

    Full Text Available We investigate the time-scale invariant changes in electromagnetic and mechanical energy releases prior to a rock failure or a large earthquake. The energy release processes are caused by damage evolutions such as crack propagation, motion of charged dislocation, area-enlargement of sheared asperities and repetitive creep-rate changes. Damage mechanics can be used to represent the time-scale invariant evolutions of both brittle and plastic damages. Irreversible thermodynamics applied to the damage mechanics reveals that the damage evolution produces the variations in charge, dipole and electromagnetic signals in addition to mechanical energy release, and yields the time-scale invariant patterns of Benioff electromagnetic radiation and cumulative Benioff strain-release. The irreversible thermodynamic framework of damage mechanics is also applicable to the seismo-magnetic effect, and the time-scale invariance is recognized in the remanent magnetization change associated with damage evolution prior to a rock failure.

  10. Two-measure approach to breaking scale-invariance in a standard-model extension

    Directory of Open Access Journals (Sweden)

    Eduardo I. Guendelman

    2017-02-01

    Full Text Available We introduce Weyl's scale invariance as an additional global symmetry in the standard model of electroweak interactions. A natural consequence is the introduction of general relativity coupled to scalar fields à la Dirac, which include the Higgs doublet and a singlet σ-field required for implementing global scale invariance. We introduce a mechanism for 'spontaneous breaking' of scale invariance by introducing a coupling of the σ-field to a new metric-independent measure Φ defined in terms of four scalars ϕ_i (i = 1, 2, 3, 4). Global scale invariance is regained by combining it with an internal diffeomorphism of these four scalars. We show that once the global scale invariance is broken, the phenomenon (a) generates Newton's gravitational constant G_N and (b) triggers spontaneous symmetry breaking in the normal manner, resulting in masses for the conventional fermions and bosons. In the absence of fine-tuning the scale at which the scale symmetry breaks can be of order Planck mass. If right-handed neutrinos are also introduced, their absence at present energy scales is attributed to their mass terms being tied to the scale where scale invariance breaks.

  11. Lack of exercise leads to significant and reversible loss of scale invariance in both aged and young mice

    Science.gov (United States)

    Gu, Changgui; Coomans, Claudia P.; Hu, Kun; Scheer, Frank A. J. L.; Stanley, H. Eugene; Meijer, Johanna H.

    2015-01-01

    In healthy humans and other animals, behavioral activity exhibits scale invariance over multiple timescales from minutes to 24 h, whereas in aging or diseased conditions, scale invariance is usually reduced significantly. Accordingly, scale invariance can be a potential marker for health. Given compelling indications that exercise is beneficial for mental and physical health, we tested to what extent a lack of exercise affects scale invariance in young and aged animals. We studied six or more mice in each of four age groups (0.5, 1, 1.5, and 2 y) and observed an age-related deterioration of scale invariance in activity fluctuations. We found that limiting the amount of exercise, by removing the running wheels, leads to loss of scale-invariant properties in all age groups. Remarkably, in both young and old animals a lack of exercise reduced the scale invariance in activity fluctuations to the same level. We next showed that scale invariance can be restored by returning the running wheels. Exercise during the active period also improved scale invariance during the resting period, suggesting that activity during the active phase may also be beneficial for the resting phase. Finally, our data showed that exercise had a stronger influence on scale invariance than the effect of age. The data suggest that exercise is beneficial as revealed by scale-invariant parameters and that, even in young animals, a lack of exercise leads to strong deterioration in these parameters. PMID:25675516
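
    Scale invariance of activity records of this kind is commonly quantified with detrended fluctuation analysis (DFA). The abstract does not spell out the estimator, so the following Python sketch is a generic DFA implementation rather than the authors' exact pipeline; window scales are left to the user:

        import numpy as np

        def dfa_alpha(x, scales):
            """Detrended fluctuation analysis: returns the scaling exponent alpha.
            Alpha near 1 indicates the scale-invariant fluctuation pattern
            associated with healthy activity records."""
            y = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profile
            F = []
            for s in scales:
                n = len(y) // s
                segs = y[:n * s].reshape(n, s)
                t = np.arange(s)
                rms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, 1)               # linear detrend per window
                    rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            return np.polyfit(np.log(scales), np.log(F), 1)[0]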

  12. Scale-invariant neuronal avalanche dynamics and the cut-off in size distributions.

    Directory of Open Access Journals (Sweden)

    Shan Yu

    Full Text Available Identification of cortical dynamics strongly benefits from the simultaneous recording of as many neurons as possible. Yet current technologies provide only incomplete access to the mammalian cortex, from which adequate conclusions about dynamics need to be derived. Here, we identify constraints introduced by sub-sampling with a limited number of electrodes, i.e. spatial 'windowing', for well-characterized critical dynamics: neuronal avalanches. The local field potential (LFP) was recorded from premotor and prefrontal cortices in two awake macaque monkeys during rest using chronically implanted 96-microelectrode arrays. Negative deflections in the LFP (nLFP) were identified on the full array as well as on compact sub-regions of the array quantified by the number of electrodes N (10-95), i.e., the window size. Spatiotemporal nLFP clusters organized as neuronal avalanches, i.e., the probability of cluster size, p(s), invariably followed a power law with exponent -1.5 up to N, beyond which p(s) declined more steeply, producing a 'cut-off' that varied with N and the LFP filter parameters. Clusters of size s ≤ N consisted mainly of nLFPs from unique, non-repeated cortical sites, emerged from local propagation between nearby sites, and carried spatial information about cluster organization. In contrast, clusters of size s > N were dominated by repeated site activations and carried little spatial information, reflecting greatly distorted sampling conditions. Our findings were confirmed in a neuron-electrode network model. Thus, avalanche analysis needs to be constrained to the size of the observation window to reveal the underlying scale-invariant organization produced by locally unfolding, predominantly feed-forward neuronal cascades.

  13. On supersymmetric geometric flows and R2 inflation from scale invariant supergravity

    Science.gov (United States)

    Rajpoot, Subhash; Vacaru, Sergiu I.

    2017-09-01

    Models of geometric flows pertaining to R2 scale invariant (super) gravity theories coupled to conformally invariant matter fields are investigated. Related to this work are supersymmetric scalar manifolds that are isomorphic to the Kählerian spaces Mn = SU(1 , 1 + k) / U(1) × SU(1 + k) as generalizations of the non-supersymmetric analogs with SO(1 , 1 + k) / SO(1 + k) manifolds. For curved superspaces with geometric evolution of physical objects, a complete supersymmetric theory has to be elaborated on nonholonomic (super) manifolds and bundles determined by non-integrable superdistributions with additional constraints on (super) field dynamics and geometric evolution equations. We also consider generalizations of Perelman's functionals using such nonholonomic variables which result in the decoupling of geometric flow equations and Ricci soliton equations with supergravity modifications of the R2 gravity theory. As such, it is possible to construct exact non-homogeneous and locally anisotropic cosmological solutions for various types of (super) gravity theories modeled as modified Ricci soliton configurations. Such solutions are defined by employing the general ansatz encompassing coefficients of generic off-diagonal metrics and generalized connections that depend generically on all spacetime coordinates. We consider nonholonomic constraints resulting in diagonal homogeneous configurations encoding contributions from possible nonlinear parametric geometric evolution scenarios, off-diagonal interactions and anisotropic polarization/modification of physical constants. In particular, we analyze small parametric deformations when the underlying scale symmetry is preserved and the nontrivial anisotropic vacuum corresponds to generalized de Sitter spaces. Such configurations may mimic quantum effects whenever transitions to flat space are possible. Our approach allows us to generate solutions with scale violating terms induced by geometric flows, off

  14. Scale-invariant properties of public-debt growth

    Science.gov (United States)

    Petersen, A. M.; Podobnik, B.; Horvatic, D.; Stanley, H. E.

    2010-05-01

    Public debt is one of the important economic variables that quantitatively describes a nation's economy. Because bankruptcy is a risk faced even by institutions as large as governments (e.g., Iceland), national debt should be strictly controlled with respect to national wealth. Also, the problem of eliminating extreme poverty in the world is closely connected to the study of extremely poor debtor nations. We analyze the time evolution of national public debt and find "convergence": initially less-indebted countries increase their debt more quickly than initially more-indebted countries. We also analyze the public debt-to-GDP ratio R, a proxy for default risk, and approximate the probability density function P(R) with a Gamma distribution, which can be used to establish thresholds for sustainable debt. We also observe "convergence" in R: countries with initially small R increase their R more quickly than countries with initially large R. The scaling relationships for debt and R have practical applications, e.g. the Maastricht Treaty requires members of the European Monetary Union to maintain R < 0.6.
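
    Fitting the Gamma form of P(R) and reading off tail risks takes only a few lines. A Python sketch with synthetic data standing in for the real debt-to-GDP ratios; the shape and scale values are invented purely for illustration:

        import numpy as np
        from scipy import stats

        # Hypothetical debt-to-GDP ratios R for a cross-section of countries
        R = np.random.default_rng(2).gamma(shape=2.0, scale=0.25, size=150)

        # Fit P(R) by a Gamma density with the location pinned at zero
        shape, loc, scale = stats.gamma.fit(R, floc=0)

        # e.g. tail probability of exceeding the Maastricht threshold R = 0.6
        p_exceed = stats.gamma.sf(0.6, shape, loc=loc, scale=scale)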

  15. Transition probabilities and duration analysis among disability states: Some evidence from Spanish data

    OpenAIRE

    López i Casasnovas, Guillem; Nicodemo, Catia

    2012-01-01

    In this paper we study the disability transition probabilities (as well as the mortality probabilities) due to factors concurrent with age, such as income, gender and education. Although it is well known that ageing and socioeconomic status influence the probability of developing functional disorders, surprisingly little attention has been paid to the combined effect of those factors over an individual's life and to how this affects the transition from one degree of disability to another. The assump...

  16. Curing Black Hole Singularities with Local Scale Invariance

    Directory of Open Access Journals (Sweden)

    Predrag Dominis Prester

    2016-01-01

    Full Text Available We show that Weyl-invariant dilaton gravity provides a description of black holes without classical space-time singularities. Singularities appear due to the ill behaviour of gauge fixing conditions, one example being the gauge in which the theory is classically equivalent to standard General Relativity. The main conclusions of our analysis are as follows: (1) singularities signal a phase transition from the broken to the unbroken phase of Weyl symmetry; (2) instead of a singularity, there is a “baby universe” or a white hole inside a black hole; (3) in the baby-universe scenario, there is a critical mass after which reducing the mass makes the black hole larger as viewed by outside observers; (4) if a black hole could be connected with a white hole through the “singularity,” this would require a breakdown of the (classical) geometric description; (5) the singularity of the Schwarzschild BH solution is nongeneric, and so it is dangerous to rely on it in deriving general results. Our results may have important consequences for resolving issues related to the information loss puzzle. Though quantum effects are still crucial and may change the proposed classical picture, building a quantum theory around essentially regular classical solutions normally provides a much better starting point.

  17. Characteristic Changes of Scale Invariance of Seismicity Prior to Large Earthquakes: A Constructive Review

    Directory of Open Access Journals (Sweden)

    Qiang Li

    2013-01-01

    Full Text Available Recently, research on the characteristic changes of scale invariance of seismicity before large earthquakes has received considerable attention. However, in some circumstances it is not easy to detect these characteristic changes, because the features of seismicity vary from region to region. In this paper, we first review some important research developments concerning the characteristic changes of scale invariance of seismicity before large earthquakes, which are of particular importance to researchers in earthquake forecasting and seismic activity. We then discuss the strengths and weaknesses of different scale-invariance methods, such as the local scaling property, the multifractal spectrum, the Hurst exponent analysis, and the correlation dimension. Finally, we offer a constructive suggestion for the research strategy in this topic: when extracting precursory information before large earthquakes or studying the fractal properties of seismicity by means of these scale-invariance methods, their strengths and weaknesses have to be taken into consideration; ignoring them can greatly reduce the efficiency of the research.

  18. Electronic cleansing for computed tomography (CT) colonography using a scale-invariant three-material model

    NARCIS (Netherlands)

    Serlie, Iwo W. O.; Vos, Frans M.; Truyen, Roel; Post, Frits H.; Stoker, Jaap; van Vliet, Lucas J.

    2010-01-01

    A well-known reading pitfall in computed tomography (CT) colonography is posed by artifacts at T-junctions, i.e., locations where air-fluid levels interface with the colon wall. This paper presents a scale-invariant method to determine material fractions in voxels near such T-junctions. The proposed

  19. Definition of fractal topography to essential understanding of scale-invariance

    Science.gov (United States)

    Jin, Yi; Wu, Ying; Li, Hui; Zhao, Mengyu; Pan, Jienan

    2017-01-01

    Fractal behavior is scale-invariant and widely characterized by fractal dimension. However, the correspondence between them is that a fractal behavior uniquely determines a fractal dimension, while a fractal dimension can be related to many possible fractal behaviors. Therefore, fractal behavior is independent of the fractal generator and its geometries, spatial pattern, and statistical properties, in addition to scale. To mathematically describe fractal behavior, we propose a novel concept of fractal topography defined by two scale-invariant parameters, scaling lacunarity (P) and scaling coverage (F). The scaling lacunarity is defined as the scale ratio between two successive fractal generators, whereas the scaling coverage is defined as the number ratio between them. Consequently, a strictly scale-invariant definition for self-similar fractals can be derived as D = log F / log P. To reflect the direction-dependence of fractal behaviors, we introduce another parameter Hxy, a general Hurst exponent, which is analytically expressed by Hxy = log Px / log Py, where Px and Py are the scaling lacunarities in the x and y directions, respectively. Thus, a unified definition of fractal dimension is proposed for arbitrary self-similar and self-affine fractals by averaging the fractal dimensions of all directions in a d-dimensional space. Our definitions provide a theoretical, mechanistic basis for understanding the essentials of the scale-invariant property that reduces the complexity of modeling fractals. PMID:28436450
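
    A worked instance of the two definitions, using the Sierpinski carpet (a standard fractal whose generator shrinks scale by P = 3 and keeps F = 8 of the 9 sub-cells); the Hxy values are arbitrary illustrations.

```python
import math

# Worked instance of D = log F / log P for the Sierpinski carpet:
# the generator shrinks scale by P = 3 and keeps F = 8 of the 9 sub-cells.
P, F = 3, 8
D = math.log(F) / math.log(P)
print(D)  # ~1.8928, the usual similarity dimension of the carpet

# Direction-dependent case: a general Hurst exponent from scaling lacunarities
# Px, Py in the x and y directions (values here are illustrative only).
Px, Py = 4, 2
Hxy = math.log(Px) / math.log(Py)
print(Hxy)  # 2.0
```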

  20. Discrete Scale Invariance in the Cascade Heart Rate Variability Of Healthy Humans

    OpenAIRE

    Lin, Der Chyan

    2004-01-01

    Evidence of discrete scale invariance (DSI) in daytime healthy heart rate variability (HRV) is presented based on the log-periodic power law scaling of the heart beat interval increment. Our analysis suggests multiple DSI groups and a dynamic cascading process. A cascade model is presented to simulate such a property.

  1. The social brain: scale-invariant layering of Erdős-Rényi networks in small-scale human societies.

    Science.gov (United States)

    Harré, Michael S; Prokopenko, Mikhail

    2016-05-01

    The cognitive ability to form social links that can bind individuals together into large cooperative groups for safety and resource sharing was a key development in human evolutionary and social history. The 'social brain hypothesis' argues that the size of these social groups is based on a neurologically constrained capacity for maintaining long-term stable relationships. No model to date has been able to combine a specific socio-cognitive mechanism with the discrete scale invariance observed in ethnographic studies. We show that these properties result in nested layers of self-organizing Erdős-Rényi networks formed by each individual's ability to maintain only a small number of social links. Each set of links plays a specific role in the formation of different social groups. The scale invariance in our model is distinct from previous 'scale-free networks' studied using much larger social groups; here, the scale invariance is in the relationship between group sizes, rather than in the link degree distribution. We also compare our model with a dominance-based hierarchy and conclude that humans were probably egalitarian in hunter-gatherer-like societies, maintaining an average maximum of four or five social links connecting all members in a largest social network of around 132 people. © 2016 The Author(s).

  2. Estimation and asymptotic theory for transition probabilities in Markov renewal multi-state models.

    Science.gov (United States)

    Spitoni, Cristian; Verduijn, Marion; Putter, Hein

    2012-08-07

    In this paper we discuss estimation of transition probabilities for semi-Markov multi-state models. Non-parametric and semi-parametric estimators of the transition probabilities for a large class of models (forward-going models) are proposed. Large-sample theory is derived using the functional delta method, and the use of resampling is proposed to derive confidence bands for the transition probabilities. The last part of the paper presents the main ideas of the R implementation of the proposed estimators; data from a renal replacement study are used to illustrate their behavior.

  3. Are Einstein's transition probabilities for spontaneous emission constant in plasmas?

    Science.gov (United States)

    Griem, H. R.; Huang, Y. W.; Wang, J.-S.; Moreno, J. C.

    1991-01-01

    An investigation is conducted with a ruby laser to experimentally confirm the quenching of spontaneous emission coefficients and propose a mechanism for the phenomenon. Results of previous experiments are examined to determine the consistency and validity of interpretations of the spontaneous emissions. For the C IV 3s-3p and 2s-3p transitions, the line-intensity ratios are found to be dependent on the separation of the laser from the target. Density gradients and Stark broadening are proposed to interpret the results in a way that does not invalidate the Einstein A values. The interpretation is extended to C III and N V, both of which demonstrate similar changes in A values in previous experiments. The apparent quenching of Ar II by photon collisions is explained by Rabi oscillations and power broadening in the argon-ion laser cavity. It is concluded that the changes in A values cannot result from dense plasma effects.

  4. Transition probabilities of health states for workers in Malaysia using a Markov chain model

    Science.gov (United States)

    Samsuddin, Shamshimah; Ismail, Noriszura

    2017-04-01

    The aim of our study is to estimate the transition probabilities of health states for workers in Malaysia who contribute to the Employment Injury Scheme under the Social Security Organization Malaysia using a Markov chain model. Our study uses four states of health (active, temporary disability, permanent disability and death) based on data collected from longitudinal studies of workers in Malaysia over 5 years. The transition probabilities vary by health state, age and gender. The results show that male employees are more likely to have higher transition probabilities to any health state than female employees. The transition probabilities can be used to predict the future health of workers as a function of current age, gender and health state.
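
    A minimal sketch of how such a four-state chain yields multi-year predictions; the matrix entries below are invented placeholders, not the estimated Malaysian probabilities, which in the study vary by age and gender.

```python
import numpy as np

# Toy 4-state chain (active, temporary disability, permanent disability, death).
states = ["active", "temp", "perm", "death"]
P = np.array([
    [0.94, 0.04, 0.01, 0.01],
    [0.50, 0.40, 0.07, 0.03],
    [0.00, 0.00, 0.95, 0.05],
    [0.00, 0.00, 0.00, 1.00],   # death is absorbing
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

# k-year-ahead state distribution for a worker who is active today:
start = np.array([1.0, 0.0, 0.0, 0.0])
for k in (1, 5, 10):
    dist = start @ np.linalg.matrix_power(P, k)
    print(k, dict(zip(states, dist.round(3))))
```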

  5. Systematics of Absolute Gamma Ray Transition Probabilities in Deformed Odd-A Nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Malmskog, S.G.

    1965-11-15

    All known experimentally determined absolute gamma ray transition probabilities between different intrinsic states of deformed odd-A nuclei in the rare earth region (153 < A < 181) and in the actinide region (A ≥ 227) are compared with theoretical transition probabilities (Weisskopf and Nilsson estimates). Systematic deviations from the theoretical values are found. Possible explanations for these deviations are given. This discussion includes Coriolis coupling, ΔK = ±2 band-mixing effects and pairing interaction.

  6. Inflation and reheating in scale-invariant scalar-tensor gravity

    CERN Document Server

    Tambalo, Giovanni

    2016-01-01

    We consider the scale-invariant inflationary model studied in [1]. The Lagrangian includes all the scale-invariant operators that can be built with combinations of $R$, $R^{2}$ and one scalar field. The equations of motion show that the symmetry is spontaneously broken after an arbitrarily long inflationary period, and a fundamental mass scale is generated. Upon symmetry breaking, and in the Jordan frame, both the Hubble function and the scalar field undergo damped oscillations that can eventually amplify Standard Model fields and reheat the Universe. In the present work, we study in detail inflation and the reheating mechanism of this model in the Einstein frame, and we compare some of the results with the latest observational data.

  7. A new dynamics of electroweak symmetry breaking with classically scale invariance

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Ishida, Hiroyuki, E-mail: ishida@riko.shimane-u.ac.jp [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kitazawa, Noriaki [Department of Physics, Tokyo Metropolitan University, Hachioji, Tokyo 192-0397 (Japan); Yamaguchi, Yuya [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2016-04-10

    We propose a new dynamics of the electroweak symmetry breaking in a classically scale invariant version of the standard model. The scale invariance is broken by the condensations of additional fermions under a strong coupling dynamics. The electroweak symmetry breaking is triggered by negative mass squared of the elementary Higgs doublet, which is dynamically generated through the bosonic seesaw mechanism. We introduce a real pseudo-scalar singlet field interacting with additional fermions and Higgs doublet in order to avoid massless Nambu–Goldstone bosons from the chiral symmetry breaking in a strong coupling sector. We investigate the mass spectra and decay rates of these pseudo-Nambu–Goldstone bosons, and show they can decay fast enough without cosmological problems. We further show that our model can make the electroweak vacuum stable.

  8. A new dynamics of electroweak symmetry breaking with classically scale invariance

    Directory of Open Access Journals (Sweden)

    Naoyuki Haba

    2016-04-01

    Full Text Available We propose a new dynamics of the electroweak symmetry breaking in a classically scale invariant version of the standard model. The scale invariance is broken by the condensations of additional fermions under a strong coupling dynamics. The electroweak symmetry breaking is triggered by negative mass squared of the elementary Higgs doublet, which is dynamically generated through the bosonic seesaw mechanism. We introduce a real pseudo-scalar singlet field interacting with additional fermions and Higgs doublet in order to avoid massless Nambu–Goldstone bosons from the chiral symmetry breaking in a strong coupling sector. We investigate the mass spectra and decay rates of these pseudo-Nambu–Goldstone bosons, and show they can decay fast enough without cosmological problems. We further show that our model can make the electroweak vacuum stable.

  9. Direct detection of singlet dark matter in classically scale-invariant standard model

    Directory of Open Access Journals (Sweden)

    Kazuhiro Endo

    2015-10-01

    Full Text Available Classical scale invariance is one of the possible solutions to explain the origin of the electroweak scale. The simplest extension is the classically scale-invariant standard model augmented by a multiplet of gauge-singlet real scalars. In a previous study it was shown that the properties of the Higgs potential deviate substantially, which can be observed at the International Linear Collider. On the other hand, since the multiplet does not acquire a vacuum expectation value, the singlet components are stable and can be dark matter. In this letter we study the detectability of the real singlet scalar bosons in direct-detection experiments for dark matter. It is shown that a part of the parameter space of this model has already been excluded and the rest is within the reach of future experiments.

  10. Reduced probabilities for E2 transitions between excited collective states of triaxial even–even nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Nadyrbekov, M. S., E-mail: nodirbekov@inp.uz; Bozarov, O. A. [Uzbek Academy of Sciences, Institute of Nuclear Physics (Uzbekistan)

    2017-01-15

    Reduced probabilities for intra- and interband E2 transitions in excited collective states of even–even lanthanide and actinide nuclei are analyzed on the basis of a model that admits an arbitrary triaxiality. They are studied in detail in the energy spectra of ¹⁵⁴Sm, ¹⁵⁶Gd, ¹⁵⁸Dy, ¹⁶²,¹⁶⁴Er, ²³⁰,²³²Th, and ²³²,²³⁴,²³⁶,²³⁸U even–even nuclei. Theoretical and experimental values of the reduced probabilities for the respective E2 transitions are compared. This comparison shows good agreement for all states, including high-spin ones. The ratios of the reduced probabilities for the E2 transitions in question are compared with results following from the Alaga rules. These comparisons make it possible to assess the sensitivity of the probabilities being considered to the presence of quadrupole deformations.

  11. Two-loop scale-invariant scalar potential and quantum effective operators

    Energy Technology Data Exchange (ETDEWEB)

    Ghilencea, D.M. [National Institute of Physics and Nuclear Engineering (IFIN-HH), Theoretical Physics Department, Bucharest (Romania); CERN, Theory Division, Geneva 23 (Switzerland); Lalak, Z.; Olszewski, P. [University of Warsaw, Faculty of Physics, Institute of Theoretical Physics, Warsaw (Poland)

    2016-12-15

    Spontaneous breaking of quantum scale invariance may provide a solution to the hierarchy and cosmological constant problems. In a scale-invariant regularization, we compute the two-loop potential of a Higgs-like scalar φ in theories in which scale symmetry is broken only spontaneously by the dilaton (σ). Its VEV ⟨σ⟩ generates the DR subtraction scale (μ ∝ ⟨σ⟩), which avoids the explicit scale symmetry breaking by traditional regularizations (where μ = fixed scale). The two-loop potential contains effective operators of non-polynomial nature as well as new corrections, beyond those obtained with explicit breaking (μ = fixed scale). These operators have the form φ⁶/σ², φ⁸/σ⁴, etc., which generate an infinite series of higher dimensional polynomial operators upon expansion about ⟨σ⟩ >> ⟨φ⟩, where such hierarchy is arranged by one initial, classical tuning. These operators emerge at the quantum level from evanescent interactions (∝ ε) between σ and φ that vanish in d = 4 but are required by classical scale invariance in d = 4 - 2ε. The Callan-Symanzik equation of the two-loop potential is respected and the two-loop beta functions of the couplings differ from those of the same theory regularized with μ = fixed scale. Therefore the running of the couplings enables one to distinguish between spontaneous and explicit scale symmetry breaking. (orig.)

  12. Dimensional reduction in momentum space and scale-invariant cosmological fluctuations

    Science.gov (United States)

    Amelino-Camelia, Giovanni; Arzano, Michele; Gubitosi, Giulia; Magueijo, João

    2013-11-01

    We adopt a framework where quantum gravity’s dynamical dimensional reduction of spacetime at short distances is described in terms of modified dispersion relations. We observe that by subjecting such models to a momentum-space diffeomorphism one obtains a “dual picture” with unmodified dispersion relations, but a modified measure of integration over momenta. We then find that the UV Hausdorff dimension of momentum space which can be inferred from this modified integration measure coincides with the short-distance spectral dimension of spacetime. This result sheds light into why scale-invariant fluctuations are obtained if the original model for two UV spectral dimensions is combined with Einstein gravity. By studying the properties of the inner product we derive the result that it is only in two energy-momentum dimensions that microphysical vacuum fluctuations are scale invariant. This is true ignoring gravity, but then we find that if Einstein gravity is postulated in the original frame, in the dual picture gravity switches off, since all matter becomes conformally coupled. We argue that our findings imply that the following concepts are closely connected: scale invariance of vacuum quantum fluctuations, conformal invariance of the gravitational coupling, UV reduction to spectral dimension two in position space, and UV reduction to Hausdorff dimension two in energy-momentum space.

  13. Dynamical and scale invariance of charged particles slipping on a rough surface with periodic excitation

    Science.gov (United States)

    Zhang, Hao; Luo, Pengcheng; Ding, Huifang

    2017-07-01

    This letter deals with the dynamical and scaling invariance of charged particles slipping on a rough surface with periodic excitation. A variant of the Fermi-Ulam model (FUM) is proposed to describe the transport behavior of the particles when the electric field force Fe is smaller or larger than the friction force Ff, i.e., A < 0 or A > 0, where A is the nondimensional acceleration. For these two cases, the stability of fixed points is analyzed with the help of the eigenvalue analysis method, and further the invariant manifolds are constructed to investigate the dynamical invariance, such as the energy diffusion for some initial conditions in the case A > 0 and the decay process in the case A < 0, as well as the scaling law of the statistical behavior. It follows that both the FA phenomenon for A > 0 and the velocity decay process for A < 0 satisfy scaling invariance with respect to the nondimensional acceleration A. Besides, for A < 0, the transient number nx is proposed to evaluate the speed of the velocity decay process. More importantly, nx is found to possess the attribute of scaling invariance with respect to both the initial velocity V0 and the nondimensional acceleration A. These results are very useful for the in-depth understanding of the energy transport properties of charged particle systems.

  14. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis

    Science.gov (United States)

    Chiba, Tomoaki; Akaho, Shotaro; Murata, Noboru

    2017-01-01

    In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of the product share using a multivariate time series of the product share. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of the share of automobiles, that the proposed method can find intrinsic transition of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between TOYOTA group and GM group for the fiscal year where TOYOTA group’s sales beat GM’s sales, which is a reasonable scenario. PMID:28076383

  15. Time-Varying Transition Probability Matrix Estimation and Its Application to Brand Share Analysis.

    Science.gov (United States)

    Chiba, Tomoaki; Hino, Hideitsu; Akaho, Shotaro; Murata, Noboru

    2017-01-01

    In a product market or stock market, different products or stocks compete for the same consumers or purchasers. We propose a method to estimate the time-varying transition matrix of the product share using a multivariate time series of the product share. The method is based on the assumption that each of the observed time series of shares is a stationary distribution of the underlying Markov processes characterized by transition probability matrices. We estimate transition probability matrices for every observation under natural assumptions. We demonstrate, on a real-world dataset of the share of automobiles, that the proposed method can find intrinsic transition of shares. The resulting transition matrices reveal interesting phenomena, for example, the change in flows between TOYOTA group and GM group for the fiscal year where TOYOTA group's sales beat GM's sales, which is a reasonable scenario.
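
    A sketch of the core estimation idea under the stated assumption s_{t+1} ≈ s_t P, posed as constrained least squares; the data, dimensions, and solver choice (SLSQP) are ours, not the authors' estimator.

```python
import numpy as np
from scipy.optimize import minimize

# Recover a row-stochastic transition matrix P from a share series, assuming
# s_{t+1} ≈ s_t P. Data are an illustrative toy, not the automobile dataset.
rng = np.random.default_rng(0)
K = 3
true_P = np.array([[0.80, 0.15, 0.05],
                   [0.10, 0.85, 0.05],
                   [0.05, 0.10, 0.85]])
shares = [np.array([0.5, 0.3, 0.2])]
for _ in range(30):
    shares.append(shares[-1] @ true_P)
S = np.array(shares)

def loss(p_flat):
    P = p_flat.reshape(K, K)
    return np.sum((S[1:] - S[:-1] @ P) ** 2)   # one-step prediction error

# Constraints: every row of P sums to 1; every entry lies in [0, 1].
cons = [{"type": "eq", "fun": lambda p, i=i: p.reshape(K, K)[i].sum() - 1.0}
        for i in range(K)]
res = minimize(loss, np.full(K * K, 1.0 / K), bounds=[(0, 1)] * (K * K),
               constraints=cons, method="SLSQP")
print(res.x.reshape(K, K).round(3))  # close to true_P on this noiseless toy
```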

  16. Improving the Estimation of Markov Transition Probabilities Using Mechanistic-Empirical Models

    Directory of Open Access Journals (Sweden)

    Daijiro Mizutani

    2017-10-01

    Full Text Available In many current state-of-the-art bridge management systems, Markov models are used for both the prediction of deterioration and the determination of optimal intervention strategies. Although transition probabilities of Markov models are generally estimated using inspection data, it is not uncommon that inadequate data are available to estimate them. In this article, a methodology is proposed to estimate the transition probabilities from mechanistic-empirical models for reinforced concrete elements. The proposed methodology estimates the transition probabilities analytically when possible, and otherwise through the use of Bayesian statistics, which requires the formulation of a likelihood function and the use of Markov chain Monte Carlo simulations. In an example, the difference between the average condition predicted over a 100-year time period with a Markov model developed using the proposed methodology and the condition predicted using mechanistic-empirical models was found to be 54% of that obtained when the state-of-the-art methodology, i.e., a methodology that estimates the transition probabilities using best-fit curves based on yearly condition distributions, was used. The variation in accuracy of the Markov model as a function of the number of deterioration paths generated using the mechanistic-empirical models is also shown.

  17. Computation of probabilities of transit of gas particles through channels with rough wall

    Science.gov (United States)

    Kuznetsov, Maksim; Nekrasov, Kirill; Tokmantsev, Valeriy

    2017-11-01

    An approach to calculating the probability of transit of gas particles through narrow channels with rough walls is suggested. The probability is computed in the free-molecular flow regime using the test-particle Monte Carlo method. The microscopic structure of the inner surface of the wall is emulated with the aid of experimental data obtained by atomic-force microscopy of a silicon carbide sample. A procedure that provides fast processing of the collisions of the gas particles with this model surface is developed and implemented. The simulation is applied to cylindrical channels with length-to-radius ratios in the range from 0 to 400 and an average height of the microscopic irregularities up to 0.2 of the radius. The probability of transit of the gas particles through these channels is computed. The reduction in this probability caused by the irregularities reaches 50%.
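
    A test-particle Monte Carlo sketch for the smooth-wall baseline of this problem (unit-radius cylinder, diffuse cosine-law entry and wall re-emission); the rough-surface emulation from AFM data is beyond this sketch, and all names and parameters are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def cosine_direction(normal):
    """Sample a unit vector from the cosine (diffuse) law about `normal`."""
    u, v = rng.random(2)
    ct, st = np.sqrt(u), np.sqrt(1.0 - u)   # cos(theta), sin(theta); pdf ~ cos
    phi = 2.0 * np.pi * v
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    return st * np.cos(phi) * t1 + st * np.sin(phi) * t2 + ct * normal

def transmit_probability(L, n=20000):
    """Free-molecular transmission through a smooth cylinder, radius 1, length L."""
    passed = 0
    for _ in range(n):
        r, ang = np.sqrt(rng.random()), 2 * np.pi * rng.random()
        pos = np.array([r * np.cos(ang), r * np.sin(ang), 0.0])
        d = cosine_direction(np.array([0.0, 0.0, 1.0]))   # diffuse influx at inlet
        while True:
            a = d[0]**2 + d[1]**2
            if a < 1e-12:                                 # (anti)parallel to axis
                z_exit = np.inf if d[2] > 0 else -np.inf
            else:                                         # ray-wall intersection
                b = pos[0]*d[0] + pos[1]*d[1]
                c = pos[0]**2 + pos[1]**2 - 1.0
                t_wall = (-b + np.sqrt(b*b - a*c)) / a
                z_exit = pos[2] + t_wall * d[2]
            if z_exit >= L:                               # leaves through far end
                passed += 1; break
            if z_exit <= 0.0:                             # returns through inlet
                break
            pos = pos + t_wall * d                        # wall hit: diffuse re-emission
            normal = np.array([-pos[0], -pos[1], 0.0])
            normal /= np.linalg.norm(normal)
            d = cosine_direction(normal)
    return passed / n

print(transmit_probability(L=5.0))  # roughly 0.31 for L/R = 5 (Clausing factor)
```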

  18. Absolute Transition Probabilities from the 453.1 keV Level in ¹⁸³W

    Energy Technology Data Exchange (ETDEWEB)

    Malmskog, S.G.

    1966-10-15

    The half-life of the 453.1 keV level in ¹⁸³W has been measured by the delayed coincidence method to be 18.4 ± 0.5 nsec. This determines twelve absolute M1 and E2 transition probabilities, out of which nine are K-forbidden. All transition probabilities are compared with the single-particle estimate. The three K-allowed E2, ΔK = 2 transition rates to the 1/2⁻ (510) rotational band are furthermore compared with the Nilsson model. An attempt to give a quantitative explanation of the observed transition rates has been made by including the effects of admixtures into the single-particle wave functions.

  19. Transition probabilities for Be I, Be II, Mg I, and Mg II

    CERN Document Server

    Zheng Neng Wu; Yangru Yi; Zhou Tao; Ma Dong Xia; Wu Yong Gang; Xu Hai Ta

    2001-01-01

    The Weakest Bound Electron Potential Model (WBEPM) is used to calculate transition probabilities between LS multiplets of Be I, Be II, Mg I, and Mg II. In this calculation, a coupled set of equations is employed to determine effective charges Z* and effective quantum numbers n* and l* using, as input data, experimental energy levels and radial expectation values obtained with the numerical Coulomb approximation. Transition probabilities between highly excited states are evaluated using modified hydrogenic wavefunctions. Good agreement is seen in comparisons of the present results with those from other works.

  20. A Semi-Continuous State-Transition Probability HMM-Based Voice Activity Detector

    Directory of Open Access Journals (Sweden)

    Othman H

    2007-01-01

    Full Text Available We introduce an efficient hidden Markov model-based voice activity detection (VAD algorithm with time-variant state-transition probabilities in the underlying Markov chain. The transition probabilities vary in an exponential charge/discharge scheme and are softly merged with state conditional likelihood into a final VAD decision. Working in the domain of ITU-T G.729 parameters, with no additional cost for feature extraction, the proposed algorithm significantly outperforms G.729 Annex B VAD while providing a balanced tradeoff between clipping and false detection errors. The performance compares very favorably with the adaptive multirate VAD, option 2 (AMR2.
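
    A toy sketch of the charge/discharge idea: a two-state forward filter whose speech self-transition probability charges exponentially toward a ceiling while speech is being decided and discharges during silence, yielding hangover-like smoothing when merged with per-frame likelihoods. The update rule, parameters, and function name are our assumptions, not the paper's exact scheme or its G.729 features.

```python
import numpy as np

def vad(loglik_speech, loglik_noise, a_min=0.5, a_max=0.99, tau=8.0):
    """Sketch of an HMM VAD with a time-variant speech self-transition prob."""
    a = a_min          # time-varying speech self-transition probability
    b = 0.9            # noise self-transition probability (fixed here)
    p = 0.5            # running P(speech | frames so far)
    out = []
    for ls, ln in zip(loglik_speech, loglik_noise):
        prior = a * p + (1 - b) * (1 - p)        # predict P(speech at t)
        num = prior * np.exp(ls)                  # merge with state-conditional
        p = num / (num + (1 - prior) * np.exp(ln))  # likelihoods (Bayes update)
        speech = p > 0.5
        target = a_max if speech else a_min       # exponential charge/discharge
        a = target + (a - target) * np.exp(-1.0 / tau)
        out.append(speech)
    return out
```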

  1. Shift, rotation and scale invariant optical information authentication with binary digital holography

    Science.gov (United States)

    Jiao, Shuming; Zhou, Changyuan; Zou, Wenbin; Li, Xia

    2017-12-01

    An optical information authentication system using binary holography was proposed recently, with high security, flexibility and reduced cipher-text size. Despite this success, we point out one limitation of the system: it cannot verify scaled and rotated versions of correct images and simply regards them as wrong images. In fact, this limitation generally exists in many other optical authentication systems. In this paper, a preprocessing method based on the Fourier transform and the log-polar transform is employed to make optical authentication systems shift, rotation and scale invariant. Numerical simulation results demonstrate that our proposed scheme significantly outperforms the existing method.
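
    One standard way to realize such a preprocessing stage (a Fourier-Mellin-style construction; the record does not spell out the exact pipeline, and all function names here are ours): the FFT magnitude removes shifts, and a log-polar resampling turns rotation and scale into cyclic translations recoverable by phase correlation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(img, n_theta=256, n_r=256):
    """Resample a 2D grayscale image onto a (angle, log-radius) grid."""
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    log_r = np.linspace(0, np.log(min(cx, cy)), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    r = np.exp(log_r)
    ys = cy + np.outer(np.sin(theta), r)
    xs = cx + np.outer(np.cos(theta), r)
    return map_coordinates(img, [ys, xs], order=1)

def invariant_signature(img):
    """|FFT| discards shifts; log-polar maps rotation/scale to cyclic shifts."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    return log_polar(np.log1p(mag))

def phase_correlate(a, b):
    """Peak location gives the (rotation, log-scale) offset between signatures."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    return np.unravel_index(np.argmax(corr), corr.shape)
```

    In the correlation output, the row offset of the peak corresponds to the rotation angle between the two images and the column offset to the logarithm of the scale factor, so a match can be declared independently of shift, rotation and scale.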

  2. Scale-invariance underlying the logistic equation and its social applications

    Energy Technology Data Exchange (ETDEWEB)

    Hernando, A., E-mail: alberto.hernando@irsamc.ups-tlse.fr [Laboratoire Collisions, Agrégats, Réactivité, IRSAMC, Université Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse Cedex 09 (France); Plastino, A., E-mail: plastino@fisica.unlp.edu.ar [National University La Plata, IFLP-CCT-CONICET, C.C. 727, 1900 La Plata (Argentina); Universitat de les Illes Balears and IFISC-CSIC, 07122 Palma de Mallorca (Spain)

    2013-01-03

    On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.

  3. The proton mass and scale-invariant hidden local symmetry for compressed baryonic matter

    Science.gov (United States)

    Rho, Mannque

    2017-12-01

    I discuss how to access dense baryonic matter of compact stars by combining hidden local symmetry (HLS) of light-quark vector mesons with spontaneously broken scale invariance of a (pseudo) Nambu-Goldstone boson, the dilaton, in a description that parallels the approach to the dilatonic Higgs. Some of the surprising observations are that the bulk of the proton mass is not Nambu-Goldstonian, parity doubling emerges at high density, and the EoS of baryonic matter can be soft enough for heavy-ion processes at low density and stiff enough at high density for ∼2 solar-mass neutron stars.

  4. E1, M1, E2 transition energies and probabilities of W$^{54+}$ ions

    CERN Document Server

    Ding, Xiao-bin; Liu, Jia-xin; Koike, Fumihiro; Murakami, Izumi; Kato, Daiji; Sakaue, Hiroyuki A; Nakamura, Nobuyuki; Dong, Chen-zhong

    2016-01-01

    A comprehensive theoretical study of the E1, M1, E2 transitions of Ca-like tungsten ions is presented. Using the multi-configuration Dirac-Fock (MCDF) method with a restricted active space treatment, the wavelengths and probabilities of the M1 and E2 transitions between the multiplets of the ground-state configuration ([Ne]3s$^{2}$3p$^{6}$3d$^{2}$) and of the E1 transitions between [Ne]3s$^{2}$3p$^{5}$3d$^{3}$ and [Ne]3s$^{2}$3p$^{6}$3d$^{2}$ have been calculated. The results are in reasonable agreement with available experimental data. The present E1 and M1 calculations are compared with previous theoretical values. For the E2 transitions, the importance of electron correlation involving the 3s and 3p orbitals is pointed out. Several strong E1 transitions are predicted, which have potential advantages for plasma diagnostics.

  5. K X-ray relative transition probabilities for 23 ≤ Z ≤ 33

    CERN Document Server

    Chen Xi Meng; Liu Zhao Yuan; Ma Shu Xun; Zhang Hua Lin; Cai Xiao

    2003-01-01

    The K X-ray relative transition probabilities Kβ/Kα of some elements with atomic numbers 23 ≤ Z ≤ 33, induced by 3 MeV protons, were measured. The experimental results are compared with relativistic Hartree-Fock (RHF) calculations. Good agreement is obtained within the experimental error.

  6. A Computational Model of Word Segmentation from Continuous Speech Using Transitional Probabilities of Atomic Acoustic Events

    Science.gov (United States)

    Rasanen, Okko

    2011-01-01

    Word segmentation from continuous speech is a difficult task that is faced by human infants when they start to learn their native language. Several studies indicate that infants might use several different cues to solve this problem, including intonation, linguistic stress, and transitional probabilities between subsequent speech sounds. In this…

  7. Direct modeling of regression effects for transition probabilities in the progressive illness-death model

    DEFF Research Database (Denmark)

    Azarang, Leyla; Scheike, Thomas; de Uña-Álvarez, Jacobo

    2017-01-01

    In this work, we present direct regression analysis for the transition probabilities in the possibly non-Markov progressive illness–death model. The method is based on binomial regression, where the response is the indicator of the occupancy for the given state along time. Randomly weighted score equations that are able to remove the bias due to censoring are introduced. By solving these equations, one can estimate the possibly time-varying regression coefficients, which have an immediate interpretation as covariate effects on the transition probabilities. The performance of the proposed estimator is investigated through simulations. We apply the method to data from the Registry of Systemic Lupus Erythematosus RELESSER, a multicenter registry created by the Spanish Society of Rheumatology. Specifically, we investigate the effect of age at Lupus diagnosis, sex, and ethnicity on the probability of damage...

  8. Generating scale-invariant tensor perturbations in the non-inflationary universe

    Directory of Open Access Journals (Sweden)

    Mingzhe Li

    2014-09-01

    Full Text Available It is believed that the recent detection of large tensor perturbations strongly favors the inflation scenario in the early universe. This belief depends on the assumption that Einstein's general relativity was valid in the early universe. In this paper we show that nearly scale-invariant primordial tensor perturbations can be generated during a contracting phase before the radiation-dominated epoch if the theory of gravity is modified by the scalar–tensor theory at that time. The scale invariance protects the tensor perturbations from being suppressed at large scales, so they may have significant amplitudes, able to fit BICEP2's result. We construct a model to achieve this purpose and show that the universe can bounce to the hot big bang after a long period of contraction, and at almost the same time the theory of gravity approaches general relativity through the stabilization of the scalar field. Theoretically, such models are dual to inflation models if we change to the frame in which the theory of gravity is general relativity. Dual models are related by conformal transformations. With this study we reinforce the point that only conformally invariant quantities such as the scalar and tensor perturbations are physical. How the background evolved before the radiation era depends on the frame and has no physical meaning. It is impossible to distinguish different pictures by later-time cosmological probes.

  9. Two-loop scale-invariant scalar potential and quantum effective operators

    CERN Document Server

    Ghilencea, D.M.

    2016-11-29

    Spontaneous breaking of quantum scale invariance may provide a solution to the hierarchy and cosmological constant problems. In a scale-invariant regularization, we compute the two-loop potential of a higgs-like scalar $\phi$ in theories in which scale symmetry is broken only spontaneously by the dilaton ($\sigma$). Its vev $\langle\sigma\rangle$ generates the DR subtraction scale ($\mu\sim\langle\sigma\rangle$), which avoids the explicit scale symmetry breaking by traditional regularizations (where $\mu$=fixed scale). The two-loop potential contains effective operators of non-polynomial nature as well as new corrections, beyond those obtained with explicit breaking ($\mu$=fixed scale). These operators have the form: $\phi^6/\sigma^2$, $\phi^8/\sigma^4$, etc, which generate an infinite series of higher dimensional polynomial operators upon expansion about $\langle\sigma\rangle\gg \langle\phi\rangle$, where such hierarchy is arranged by {\it one} initial, classical tuning. These operators emerge at the quantum...

  10. Estimating transition probabilities for stage-based population projection matrices using capture-recapture data

    Science.gov (United States)

    Nichols, J.D.; Sauer, J.R.; Pollock, K.H.; Hestbeck, J.B.

    1992-01-01

    In stage-based demography, animals are often categorized into size (or mass) classes, and size-based probabilities of surviving and changing mass classes must be estimated before demographic analyses can be conducted. In this paper, we develop two procedures for the estimation of mass transition probabilities from capture-recapture data. The first approach uses a multistate capture-recapture model that is parameterized directly with the transition probabilities of interest. Maximum likelihood estimates are then obtained numerically using program SURVIV. The second approach involves a modification of Pollock's robust design. Estimation proceeds by conditioning on animals caught in a particular class at time i, and then using closed models to estimate the number of these that are alive in other classes at i + 1. Both methods are illustrated by application to meadow vole, Microtus pennsylvanicus, capture-recapture data. The two methods produced reasonable estimates that were similar. Advantages of these two approaches include the directness of estimation, the absence of need for restrictive assumptions about the independence of survival and growth, the testability of assumptions, and the testability of related hypotheses of ecological interest (e.g., the hypothesis of temporal variation in transition probabilities).

  11. Calculation of transition probabilities and ac Stark shifts in two-photon laser transitions of antiprotonic helium

    CERN Document Server

    Hori, Masaki

    2010-01-01

    Numerical ab initio variational calculations of the transition probabilities and ac Stark shifts in two-photon transitions of antiprotonic helium atoms driven by two counter-propagating laser beams are presented. We found that sub-Doppler spectroscopy is in principle possible by exciting transitions of the type (n,L) -> (n-2,L-2) between antiprotonic states of principal and angular momentum quantum numbers n ~ L-1 ~ 35, first by using highly monochromatic, nanosecond laser beams of intensities 10^4-10^5 W/cm^2, and then by tuning the virtual intermediate state close (e.g., within 10-20 GHz) to the real state (n-1,L-1) to enhance the nonlinear transition probability. We expect that ac Stark shifts of a few MHz or more will become an important source of systematic error at fractional precisions better than a few parts in 10^9. These shifts can in principle be minimized and even canceled by selecting an optimum combination of laser intensities and frequencies. We simulated the resonance profiles of some two-photon ...

  12. Limit law for transition probabilities and moderate deviations for Sinai's random walk in random environment

    CERN Document Server

    Comets, F

    2003-01-01

    We consider a one-dimensional random walk in random environment in Sinai's regime. Our main result is that the logarithms of the transition probabilities, after a suitable rescaling, converge in distribution as time tends to infinity to some functional of the Brownian motion. We compute the law of this functional when the initial and final points agree. Also, among other things, we estimate the probability of being at time $t$ at distance at least $z$ from the initial position, when $z$ is larger than $\ln^2 t$, but still of logarithmic order in time.

  13. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset.

    Directory of Open Access Journals (Sweden)

    Haitao Zhang

    Full Text Available Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified.

  14. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset.

    Science.gov (United States)

    Zhang, Haitao; Chen, Zewei; Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified.
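
    A minimal sketch of the normalize-then-power step described above; the single-step counts are illustrative stand-ins for the rule-derived frequencies.

```python
import numpy as np

# Normalize single-step transition counts into a row-stochastic matrix,
# then raise it to the power n to obtain the n-step transition matrix.
counts = np.array([[20,  5,  0],
                   [ 4, 10,  6],
                   [ 1,  3, 16]], dtype=float)
P = counts / counts.sum(axis=1, keepdims=True)   # normalize each row

n = 4
P_n = np.linalg.matrix_power(P, n)               # n-step transition matrix

# Probability of reaching location 2 from location 0 in exactly n steps:
print(P_n[0, 2])
```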

  15. Radiative lifetimes, branching fractions, and transition probabilities for Lu I, Lu II, and Lu III

    Science.gov (United States)

    Fedchak, J. A.; den Hartog, E. A.; Lawler, J. E.; Biemont, E.; Palmeri, P.; Quinet, P.

    2000-06-01

    In astrophysics, rare-earth abundances are particularly relevant to the study of chemically peculiar stars, stellar nucleosynthesis, and other problems. Accurate oscillator strengths are required to disentangle blends and obtain reliable abundance values. Rare-earth salts are also used in many commercial metal-halide high-intensity discharge lamps. Accurate transition probabilities are required in the models used for lamp design and for diagnostics. We have determined accurate radiative lifetimes for the first three spectra of Lu using time-resolved laser-induced fluorescence on a slow beam of Lu ions and atoms. Lu I branching fractions have been determined from emission spectra taken with a 1.0 m Fourier transform spectrometer at the National Solar Observatory (NSO). These are combined with the radiative lifetimes to produce 38 accurate transition probabilities for Lu I. The Lu I measurements are compared to new relativistic Hartree-Fock calculations.

  16. The Time Course of the Probability of Transition Into and Out of REM Sleep

    Science.gov (United States)

    Bassi, Alejandro; Vivaldi, Ennio A.; Ocampo-Garcés, Adrián

    2009-01-01

    Study Objectives: A model of rapid eye movement (REM) sleep expression is proposed that assumes underlying regulatory mechanisms operating as inhomogeneous Poisson processes, the overt results of which are the transitions into and out of REM sleep. Design: Based on spontaneously occurring REM sleep episodes (“Episode”) and intervals without REM sleep (“Interval”), 3 variables are defined and evaluated over discrete 15-second epochs using a nonlinear logistic regression method: “Propensity” is the instantaneous rate of into-REM transition occurrence throughout an Interval, “Volatility” is the instantaneous rate of out-of-REM transition occurrence throughout an Episode, and “Opportunity” is the probability of being in non-REM (NREM) sleep at a given time throughout an Interval, a requisite for transition. Setting: 12:12 light:dark cycle, isolated boxes. Participants: Sixteen male Sprague-Dawley rats. Interventions: None; spontaneous sleep cycles. Measurements and Results: The highest levels of volatility and propensity occur, respectively, at the very beginning of Episodes and Intervals. The new condition stabilizes rapidly, and the variables reach nadirs at minutes 1.25 and 2.50, respectively. Afterward, volatility increases markedly, reaching values close to the initial level. Propensity increases moderately, the increment being stronger through NREM sleep bouts occurring at the end of long Intervals. Short-term homeostasis is evidenced by longer REM sleep episodes lowering propensity in the following Interval. Conclusions: The stabilization after transitions into Episodes or Intervals and the destabilization after remaining for some time in either condition may be described as resulting from continuous processes building up during Episodes and Intervals. These processes underlie the overt occurrence of transitions. Citation: Bassi A; Vivaldi EA; Ocampo-Garcés A. The time course of the probability of transition into and out of REM sleep. SLEEP 2009
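
    A minimal sketch of a discrete-time "volatility" estimate of the kind defined above, assuming episode durations measured in 15-s epochs; the synthetic geometric durations (which give a flat hazard) are our placeholder, not the rat data.

```python
import numpy as np

# Discrete-time hazard: the rate of out-of-REM transitions per epoch, computed
# as (episodes ending at epoch k) / (episodes still ongoing at epoch k).
rng = np.random.default_rng(0)
durations = rng.geometric(p=0.15, size=500)      # toy REM episode lengths, in epochs

max_k = durations.max()
hazard = np.array([
    np.sum(durations == k) / np.sum(durations >= k)
    for k in range(1, max_k + 1)
])
print(hazard[:10])   # for geometric (memoryless) durations the hazard is flat (~0.15)
```

    A hazard that instead starts high, dips, and rises again, as reported for volatility, is the signature of the inhomogeneous (time-varying) Poisson rate the model assumes.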

  17. Theoretical oscillator strengths, transition probabilities, and radiative lifetimes of levels in Pb V

    Energy Technology Data Exchange (ETDEWEB)

    Colón, C., E-mail: cristobal.colon@upm.es [Dpto. Física Aplicada. E.U.I.T. Industrial, Universidad Politécnica de Madrid, Ronda de Valencia 3, 28012 Madrid (Spain); Alonso-Medina, A. [Dpto. Física Aplicada. E.U.I.T. Industrial, Universidad Politécnica de Madrid, Ronda de Valencia 3, 28012 Madrid (Spain); Porcher, P. [Laboratoire de Chimie Appliquée de l’Etat Solide, CNRS-UMR 7574, Paris (France)

    2014-01-15

    Theoretical values of oscillator strengths and transition probabilities for 306 spectral lines arising from the 5d⁹ns (n = 7,8,9), 5d⁹np (n = 6,7), 5d⁹6d, and 5d⁹5f configurations, and radiative lifetimes of 9 levels of Pb V, have been obtained. These values were obtained in intermediate coupling (IC) and using ab initio relativistic Hartree–Fock calculations including core-polarization effects. For the IC calculations we use the standard method of least-squares fitting of experimental energy levels by means of computer codes from Cowan. We included in these calculations the 5d⁸6s6p and 5d⁸6s² configurations. These calculations have facilitated the identification of the 214.25, 216.79, and 227.66 nm spectral lines of Pb V. In the absence of experimental results for oscillator strengths and transition probabilities, we could not make a direct comparison with our results. However, the Stark broadening parameters calculated from these values are in excellent agreement with the experimental broadening found in the literature. -- Highlights: • Theoretical values of transition probabilities of Pb V have been obtained. • We use for the IC calculations the standard method of least squares. • The parameters calculated from these values are in agreement with the experimental values.

  18. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    Science.gov (United States)

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
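
    A sketch of the transport step under the squared-Euclidean cost, posed as a linear program with scipy; the site coordinates and counts are invented for illustration, and the paper's marked-bird comparison is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Recover movement between sites from counts at two times by minimizing
# total squared-Euclidean transport cost (the optimal transport constraint).
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])   # three site locations
n_before = np.array([40.0, 30.0, 30.0])               # counts at time 1
n_after  = np.array([25.0, 45.0, 30.0])               # counts at time 2

K = len(xy)
cost = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1).ravel()  # squared distances

# Constraints: row sums = counts before, column sums = counts after, flows >= 0.
A_eq = np.zeros((2 * K, K * K))
for i in range(K):
    A_eq[i, i * K:(i + 1) * K] = 1.0          # flow out of site i
    A_eq[K + i, i::K] = 1.0                   # flow into site i
b_eq = np.concatenate([n_before, n_after])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
flows = res.x.reshape(K, K)
P = flows / n_before[:, None]                 # estimated transition probabilities
print(P.round(3))
```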

  19. New Transition Probabilities for Neutral Gadolinium from Boltzmann Analysis of Fourier Transform Spectra

    Science.gov (United States)

    Nitz, David; Ouyang, Chao; Atomic Spectroscopy Team

    2015-05-01

    The recent availability of a large set of absolute transition probabilities for neutral gadolinium (Lawler et al., J. Phys. B: At. Mol. Opt. Phys. 44, 095001 (2011)) makes it possible to investigate the relative populations of a large range of upper levels in radiometrically calibrated Gd spectra. In cases where these populations follow a Boltzmann distribution, the effective temperature which characterizes the distribution provides a means of obtaining new transition probabilities for observable decay branches of nearby levels. While not as accurate as measurements based on branching fractions and lifetimes, this method can be applied to levels whose lifetimes are not known and does not require accounting for all of the decay branches. We are analyzing Fourier transform spectra of Gd from the National Solar Observatory data archive at Kitt Peak used by Lawler et al. in their study and have identified two broadband spectra (9000-24000 cm⁻¹) which exhibit Boltzmann behavior for energy levels in the range 17750-36650 cm⁻¹. These analyses and a summary of new transition probabilities obtained from them to date will be presented. Work supported by the St. Olaf College Collaborative Undergraduate Research program.

  20. Scale-invariance underlying the logistic equation and its social applications

    CERN Document Server

    Hernando, A

    2012-01-01

    On the basis of dynamical principles we derive the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and demonstrate that scale invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way. So as to assess the predictive power of our present formalism, we advance a prediction, regarding the next 60 months, for the number of users of the three main web browsers (Explorer, Firefox and Chrome), popularly referred to as the "Browser Wars".
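
    For reference, a minimal numerical integration of the LE itself, dN/dt = rN(1 − N/K); r and K are arbitrary illustrative values, not fitted to any of the datasets mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Logistic equation dN/dt = r N (1 - N/K), integrated from a small seed.
r, K = 0.5, 1000.0
sol = solve_ivp(lambda t, N: r * N * (1 - N / K), t_span=(0, 30),
                y0=[10.0], dense_output=True)

t = np.linspace(0, 30, 7)
print(sol.sol(t)[0].round(1))   # S-shaped growth saturating at K
```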

  1. Retinal Identification Based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Xiaoming Xi

    2013-07-01

    Full Text Available Retinal identification based on the vasculature of the retina provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retinal identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, some shortcomings, like the difficulty of feature extraction and mismatching, exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smooth method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.

  2. High momentum transfer inelastic muon scattering and test of scale invariance at NAL

    Energy Technology Data Exchange (ETDEWEB)

    Chen, K.Wendell, (Spokesperson); /Princeton U.; Hand, L.N.; /Cornell U., LNS

    1970-06-01

    We propose a relatively simple first stage experiment with muons in the 50-150 GeV range. The experiment is designed to optimize conditions for testing scale invariance while providing some information about the final state, as a test of various theories of high energy interactions. The proposed use of an iron spectrometer and of a high Z (>1) target with a low intensity (~10^6/sec) muon beam should greatly reduce the cost and complexity of the experiment and especially ease the construction of the beam. It may even be possible to make an adequate muon beam for this purpose from the planned 3.5 mrad high intensity pion beam. A higher intensity muon beam can be used to extend the range in q^2. Information gained in this first experiment could greatly assist the planning of a more sophisticated experiment proposed for the high intensity μ beam.

  3. 1/f^β noise for scale-invariant processes: how long you wait matters

    Science.gov (United States)

    Leibovich, Nava; Barkai, Eli

    2017-11-01

    We study the power spectrum which is estimated from a nonstationary signal. In particular we examine the case when the signal is observed in a measurement time window [tw, tw + tm], namely the observation started after a waiting time tw, and tm is the measurement duration. We introduce a generalized aging Wiener-Khinchin theorem which relates the spectrum to the time- and ensemble-averaged correlation functions for arbitrary tm and tw. Furthermore we provide a general relation between the non-analytical behavior of the scale-invariant correlation function and the aging 1/f^β noise. We illustrate our general results with two-state renewal models whose sojourn-time distributions have a broad tail. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
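
    A small simulation sketch of the setting described above, under stated assumptions: a two-state renewal signal with heavy-tailed (Pareto) sojourn times, and a periodogram restricted to the window [tw, tw + tm]. All parameters are illustrative, and a single realization stands in for the ensemble average.

      import numpy as np

      rng = np.random.default_rng(0)

      def telegraph(total_time, alpha=0.8, dt=1.0):
          """Two-state (+1/-1) renewal signal with Pareto sojourn times
          P(tau > t) ~ t^(-alpha), sampled on a grid of step dt."""
          n = int(total_time / dt)
          sig = np.empty(n)
          state, i = 1.0, 0
          while i < n:
              tau = rng.pareto(alpha) + 1.0          # heavy-tailed sojourn, >= 1
              j = min(n, i + int(tau / dt))
              sig[i:j] = state
              state, i = -state, j
          return sig

      def windowed_spectrum(sig, tw, tm, dt=1.0):
          """Periodogram of the piece observed in [tw, tw + tm]."""
          seg = sig[int(tw / dt): int((tw + tm) / dt)]
          f = np.fft.rfftfreq(seg.size, d=dt)
          s = np.abs(np.fft.rfft(seg)) ** 2 * dt / seg.size
          return f[1:], s[1:]                        # drop the zero frequency

      sig = telegraph(2 ** 20)
      for tw in (0.0, 2 ** 18):                      # spectrum depends on tw
          f, s = windowed_spectrum(sig, tw, tm=2 ** 16)
          print(tw, s[:3])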

  4. Estimating Transitional Probabilities with Cross-Sectional Data to Assess Smoking Behavior Progression: A Validation Analysis.

    Science.gov (United States)

    Chen, Xinguang; Lin, Feng

    2012-09-03

    New analytical tools are needed to advance tobacco research, tobacco control planning and tobacco use prevention practice. In this study, we validated a method to extract information from cross-sectional surveys for quantifying population dynamics of adolescent smoking behavior progression. With a 3-stage 7-path model, probabilities of smoking behavior progression were estimated employing the Probabilistic Discrete Event System (PDES) method and the cross-sectional data from the 1997-2006 National Survey on Drug Use and Health (NSDUH). Validity of the PDES method was assessed using data from the National Longitudinal Survey of Youth 1997 and trends in smoking transition covering the period during which funding for tobacco control was cut substantively in 2003 in the United States. Probabilities for all seven smoking progression paths were successfully estimated with the PDES method and the NSDUH data. The absolute differences in the estimated probabilities between the two approaches varied from 0.002 to 0.076 (p>0.05 for all) and the two sets of estimates were highly correlated with each other (R^2 = 0.998). Transitional probabilities can thus be estimated from cross-sectional survey data. The estimated transitional probabilities add new evidence supporting more advanced tobacco research, tobacco control planning and tobacco use prevention practice. This method can be easily extended to study other health risk behaviors.
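
    The PDES machinery itself is more involved; as a simplified, hypothetical illustration of the underlying idea, a time-homogeneous transition matrix can be recovered from successive cross-sectional prevalence vectors by least squares (all numbers below are invented).

      import numpy as np

      # Prevalence of three stages (never / ever / current smoker) from a
      # series of hypothetical cross-sectional surveys; rows are survey years.
      P = np.array([[0.70, 0.20, 0.10],
                    [0.66, 0.22, 0.12],
                    [0.62, 0.24, 0.14],
                    [0.59, 0.25, 0.16]])

      # Assume a time-homogeneous chain: p_{t+1} = p_t @ M.  Estimate M by
      # unconstrained least squares, then clip and renormalize as a crude
      # projection onto the set of stochastic matrices (a real analysis
      # would use constrained optimization).
      M, *_ = np.linalg.lstsq(P[:-1], P[1:], rcond=None)
      M = np.clip(M, 0.0, None)
      M /= M.sum(axis=1, keepdims=True)
      print(np.round(M, 3))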

  5. Self-organization without conservation: true or just apparent scale-invariance?

    Science.gov (United States)

    Bonachela, Juan A.; Muñoz, Miguel A.

    2009-09-01

    The existence of true scale-invariance in slowly driven models of self-organized criticality without a conservation law, such as forest-fires or earthquake automata, is scrutinized in this paper. By using three different levels of description—(i) a simple mean field, (ii) a more detailed mean-field description in terms of a (self-organized) branching process, and (iii) a full stochastic representation in terms of a Langevin equation—it is shown on general grounds that non-conserving dynamics does not lead to bona fide criticality. Contrary to the case for conserving systems, a parameter, which we term the 're-charging' rate (e.g. the tree-growth rate in forest-fire models), needs to be fine-tuned in non-conserving systems to obtain criticality. In the infinite-size limit, such a fine-tuning of the loading rate is easy to achieve, as it emerges by imposing a second separation of timescales but, for any finite size, a precise tuning is required to achieve criticality and a coherent finite-size scaling picture. Using the approaches above, we shed light on the common mechanisms by which 'apparent criticality' is observed in non-conserving systems, and explain in detail (both qualitatively and quantitatively) the difference with respect to true criticality obtained in conserving systems. We propose to call this self-organized quasi-criticality (SOqC). Some of the reported results are already known and some of them are new. We hope that the unified framework presented here will help to elucidate the confusing and contradictory literature in this field. In a forthcoming paper, we shall discuss the implications of the general results obtained here for models of neural avalanches in neuroscience for which self-organized scale-invariance in the absence of conservation has been claimed.

  6. Assessing Uncertainties of Theoretical Atomic Transition Probabilities with Monte Carlo Random Trials

    Directory of Open Access Journals (Sweden)

    Alexander Kramida

    2014-04-01

    This paper suggests a method of evaluation of uncertainties in calculated transition probabilities by randomly varying parameters of an atomic code and comparing the results. A control code has been written to randomly vary the input parameters with a normal statistical distribution around initial values with a certain standard deviation. For this particular implementation, Cowan's suite of atomic codes (R.D. Cowan, The Theory of Atomic Structure and Spectra, Berkeley, CA: University of California Press, 1981) was used to calculate radiative rates of magnetic-dipole and electric-quadrupole transitions within the ground configuration of titanium-like iron, Fe V. The Slater parameters used in the calculations were adjusted to fit experimental energy levels with Cowan's least-squares fitting program, RCE. The standard deviations of the fitted parameters were used as input of the control code providing the distribution widths of random trials for these parameters. Propagation of errors through the matrix diagonalization and summation of basis state expansions leads to significant variations in the resulting transition rates. These variations vastly differ in their magnitude for different transitions, depending on their sensitivity to errors in parameters. With this method, the rate uncertainty can be individually assessed for each calculated transition.
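
    A sketch of the random-trial bookkeeping only: the real method drives Cowan's codes, whereas the toy function below merely stands in for the code's parameter-to-rate mapping, with invented sensitivities.

      import numpy as np

      rng = np.random.default_rng(1)

      def rates_from_params(params):
          """Stand-in for an atomic-structure code: maps Slater-like
          parameters to a vector of transition rates.  Purely illustrative."""
          f2, zeta = params
          return np.array([1.0e3 * (1 + 0.004 * f2 - 0.002 * zeta),
                           2.0e2 * (1 + 0.030 * f2 + 0.010 * zeta)])

      p0 = np.array([10.0, 5.0])       # fitted parameter values
      sigma = np.array([0.5, 0.3])     # standard deviations from the LSF fit

      # Random trials: normally distributed parameters around the fit values.
      trials = np.array([rates_from_params(rng.normal(p0, sigma))
                         for _ in range(10_000)])
      mean, std = trials.mean(axis=0), trials.std(axis=0)
      for i, (m, s) in enumerate(zip(mean, std)):
          print(f"transition {i}: A = {m:.4g} s^-1, relative uncertainty {s/m:.2%}")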

  7. Suppression of the Landau-Zener transition probability by weak classical noise

    Science.gov (United States)

    Malla, Rajesh K.; Mishchenko, E. G.; Raikh, M. E.

    2017-08-01

    When the drive, which causes the level crossing in a qubit, is slow, the probability P_LZ of the Landau-Zener transition is close to 1. In this regime, which is most promising for applications, the noise due to the coupling to the environment reduces the average P_LZ. At the same time, the survival probability, 1 - P_LZ, which is exponentially small for a slow drive, can be completely dominated by the noise-induced correction. Our main message is that the effect of weak classical noise can be captured analytically by treating it as a perturbation in the Schrödinger equation. This allows us to study the dependence of the noise-induced correction to P_LZ on the correlation time of the noise. As this correlation time exceeds the bare Landau-Zener transition time, the effect of noise becomes negligible. On the physical level, the mechanism of enhancement of the survival probability can be viewed as an absorption of "noise quanta" across the gap. With the characteristic quantum energy governed by the noise spectrum, the slower the noise, the fewer quanta are energetically allowed to be absorbed. We consider two conventional realizations of noise: Gaussian noise and telegraph noise.

  8. Calculating Absolute Transition Probabilities for Deformed Nuclei in the Rare-Earth Region

    Science.gov (United States)

    Stratman, Anne; Casarella, Clark; Aprahamian, Ani

    2017-09-01

    Absolute transition probabilities are the cornerstone of understanding nuclear structure physics in comparison to nuclear models. We have developed a code to calculate absolute transition probabilities from measured lifetimes, using a Python script and a Mathematica notebook. Both of these methods take pertinent quantities such as the lifetime of a given state, the energy and intensity of the emitted gamma ray, and the multipolarities of the transitions to calculate the appropriate B(E1), B(E2), B(M1) or in general, any B(σλ) values. The program allows for the inclusion of mixing ratios of different multipolarities and the electron conversion of gamma-rays to correct for their intensities, and yields results in absolute units or results normalized to Weisskopf units. The code has been tested against available data in a wide range of nuclei from the rare earth region (28 in total), including 146-154Sm, 154-160Gd, 158-164Dy, 162-170Er, 168-176Yb, and 174-182Hf. It will be available from the Notre Dame Nuclear Science Laboratory webpage for use by the community. This work was supported by the University of Notre Dame College of Science, and by the National Science Foundation, under Contract PHY-1419765.
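
    A minimal sketch of such a conversion for the E2 case only, using the standard textbook rate-strength coefficient A(E2)[s^-1] ≈ 1.223×10^9 E_γ^5[MeV] B(E2)[e^2 fm^4] and the usual Weisskopf-unit estimate; the input numbers are illustrative, and this is not the Notre Dame code itself.

      def b_e2(tau_ps, e_gamma_mev, branching=1.0, alpha_ce=0.0):
          """B(E2) in e^2 fm^4 from a mean lifetime tau (ps).
          branching: total branching fraction of this transition;
          alpha_ce:  total internal-conversion coefficient, which reduces
                     the gamma-ray partial rate by a factor 1/(1 + alpha)."""
          a_gamma = branching / (tau_ps * 1e-12 * (1.0 + alpha_ce))
          return a_gamma / (1.223e9 * e_gamma_mev ** 5)

      def to_weisskopf_e2(b_e2fm4, mass_number):
          """Convert B(E2) from e^2 fm^4 to Weisskopf units."""
          return b_e2fm4 / (5.940e-2 * mass_number ** (4.0 / 3.0))

      # Illustrative (not measured) case: a 2+ -> 0+ transition at 300 keV
      b = b_e2(tau_ps=50.0, e_gamma_mev=0.300, branching=1.0, alpha_ce=0.05)
      print(f"B(E2) = {b:.0f} e^2 fm^4 = {to_weisskopf_e2(b, 160):.1f} W.u.")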

  9. A Procedure for Deriving Formulas to Convert Transition Rates to Probabilities for Multistate Markov Models.

    Science.gov (United States)

    Jones, Edmund; Epstein, David; García-Mochón, Leticia

    2017-10-01

    For health-economic analyses that use multistate Markov models, it is often necessary to convert from transition rates to transition probabilities, and for probabilistic sensitivity analysis and other purposes it is useful to have explicit algebraic formulas for these conversions, to avoid having to resort to numerical methods. However, if there are four or more states then the formulas can be extremely complicated. These calculations can be made using packages such as R, but many analysts and other stakeholders still prefer to use spreadsheets for these decision models. We describe a procedure for deriving formulas that use intermediate variables so that each individual formula is reasonably simple. Once the formulas have been derived, the calculations can be performed in Excel or similar software. The procedure is illustrated by several examples and we discuss how to use a computer algebra system to assist with it. The procedure works in a wide variety of scenarios but cannot be employed when there are several backward transitions and the characteristic equation has no algebraic solution, or when the eigenvalues of the transition rate matrix are very close to each other.
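
    For orientation, here is a sketch of the numerical reference calculation that such algebraic formulas must reproduce, with a hypothetical three-state illness-death rate matrix; the paper's contribution is precisely the spreadsheet-friendly algebra that avoids this matrix exponential.

      import numpy as np
      from scipy.linalg import expm

      # Transition *rate* matrix Q (per year): rows sum to zero,
      # off-diagonal entries are instantaneous transition rates.
      Q = np.array([[-0.15,  0.10, 0.05],
                    [ 0.00, -0.20, 0.20],
                    [ 0.00,  0.00, 0.00]])   # absorbing death state

      dt = 1.0                     # Markov cycle length in years
      P = expm(Q * dt)             # exact transition-probability matrix
      print(np.round(P, 4))

      # For a single transition out of a state, the same result reduces
      # to the familiar closed form p = 1 - exp(-r * dt).
      r = 0.10
      print(1.0 - np.exp(-r * dt))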

  10. NDVI, scale invariance and the modifiable areal unit problem : An assessment of vegetation in the Adelaide Parklands

    NARCIS (Netherlands)

    Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela; Jarchow, Christopher J.; Roberts, Dar A.

    2017-01-01

    This research addresses the question as to whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has long been recognized that there are issues related to the modifiable areal unit problem.

  11. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.
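
    A rough sketch of the two-stage idea (quantize color, then run SIFT on the index image). Note the substitution of simple k-means for the paper's Fibonacci lattice-quantization, and the hypothetical input file name.

      import cv2
      import numpy as np

      img = cv2.imread("frame.png")                  # hypothetical video frame
      pixels = np.float32(img.reshape(-1, 3))
      criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
      # Quantize to 32 color indices (k-means stands in for the Fibonacci
      # lattice-quantization used in the paper).
      _, labels, _ = cv2.kmeans(pixels, 32, None, criteria, 3,
                                cv2.KMEANS_PP_CENTERS)
      index_img = np.uint8(labels.reshape(img.shape[:2]) * (255 // 31))

      # SIFT keypoints and descriptors computed on the color-index image.
      sift = cv2.SIFT_create()
      keypoints, descriptors = sift.detectAndCompute(index_img, None)
      print(len(keypoints), "keypoints on the color-index image")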

  13. Online fringe projection profilometry based on scale-invariant feature transform

    Science.gov (United States)

    Li, Hongru; Feng, Guoying; Yang, Peng; Wang, Zhaomin; Zhou, Shouhuan; Asundi, Anand

    2016-08-01

    An online fringe projection profilometry (OFPP) based on the scale-invariant feature transform (SIFT) is proposed. Both rotary and linear models are discussed. First, the captured images are enhanced by "retinex" theory for better contrast and an improved reprojection technique is carried out to rectify pixel size while keeping the right aspect ratio. Then the SIFT algorithm with the random sample consensus algorithm is used to match feature points between frames. In this process, a quick response code is innovatively adopted as a feature pattern as well as object modulation. The characteristic parameters, which include the rotation angle in rotary OFPP and the rectilinear displacement in linear OFPP, are calculated by a vector-based solution. Moreover, a statistical filter is applied to obtain more accurate values. The equivalent aligned fringe patterns are then extracted from each frame. The equal step algorithm, the advanced iterative algorithm, and principal component analysis are selected for phase retrieval according to whether or not the direction of object motion coincides with the fringe direction. The three-dimensional profile of the moving object can finally be reconstructed. Numerical simulations and experimental results verified the validity and feasibility of the proposed method.

  14. Electronic cleansing for computed tomography (CT) colonography using a scale-invariant three-material model.

    Science.gov (United States)

    Serlie, Iwo W O; Vos, Frans M; Truyen, Roel; Post, Frits H; Stoker, Jaap; van Vliet, Lucas J

    2010-06-01

    A well-known reading pitfall in computed tomography (CT) colonography is posed by artifacts at T-junctions, i.e., locations where air-fluid levels interface with the colon wall. This paper presents a scale-invariant method to determine material fractions in voxels near such T-junctions. The proposed electronic cleansing method particularly improves the segmentation at those locations. The algorithm takes a vector of Gaussian derivatives as input features. The measured features are made invariant to the orientation-dependent apparent scale of the data and normalized in a way to obtain equal noise variance. A so-called parachute model is introduced that maps Gaussian derivatives onto material fractions near T-junctions. Projection of the noisy derivatives onto the model yields improved estimates of the true, underlying feature values. The method is shown to render an accurate representation of the object boundary without artifacts near junctions. Therefore, it enhances the reading of CT colonography in a 3-D display mode.

  15. A Registration Scheme for Multispectral Systems Using Phase Correlation and Scale Invariant Feature Matching

    Directory of Open Access Journals (Sweden)

    Hanlun Li

    2016-01-01

    In the past few years, many multispectral systems consisting of several identical monochrome cameras equipped with different bandpass filters have been developed. However, due to the significant difference in intensity between different band images, image registration becomes very difficult. Considering the common structural characteristic of multispectral systems, this paper proposes an effective method for registering the different band images. First we use the phase correlation method to calculate the parameters of a coarse-offset relationship between the different band images. Then we use the scale invariant feature transform (SIFT) to detect the feature points. For every feature point in a reference image, we can use the coarse-offset parameters to predict the location of its matching point. We then only need to compare the feature point in the reference image with the few feature points near the predicted location instead of the feature points all over the input image. Our experiments show that this method not only avoids false matches and increases correct matches, but also solves the matching problem between an infrared band image and a visible band image in cases lacking man-made objects.
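
    A sketch of the two-stage registration under stated assumptions: hypothetical file names, an illustrative 20-pixel gate, and OpenCV's phaseCorrelate sign convention left to be verified against the data at hand.

      import cv2
      import numpy as np

      ref = cv2.imread("band_ref.png", cv2.IMREAD_GRAYSCALE)
      tgt = cv2.imread("band_tgt.png", cv2.IMREAD_GRAYSCALE)

      # Stage 1: coarse offset from phase correlation (check sign convention).
      (dx, dy), _ = cv2.phaseCorrelate(np.float32(ref), np.float32(tgt))

      # Stage 2: SIFT matches, kept only if consistent with the coarse offset.
      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(ref, None)
      kp2, des2 = sift.detectAndCompute(tgt, None)

      matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=1)
      good = []
      for (m,) in matches:
          px, py = kp1[m.queryIdx].pt
          qx, qy = kp2[m.trainIdx].pt
          if abs(qx - (px + dx)) < 20 and abs(qy - (py + dy)) < 20:
              good.append(m)
      print(len(good), "matches consistent with the coarse offset")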

  16. Discriminative phenomenological features of scale invariant models for electroweak symmetry breaking

    Directory of Open Access Journals (Sweden)

    Katsuya Hashino

    2016-01-01

    Classical scale invariance (CSI) may be one of the solutions for the hierarchy problem. Realistic models for electroweak symmetry breaking based on CSI require extended scalar sectors without mass terms, and the electroweak symmetry is broken dynamically at the quantum level by the Coleman-Weinberg mechanism. We discuss discriminative features of these models. First, using the experimental value of the mass of the discovered Higgs boson h(125), we obtain an upper bound on the mass of the lightest additional scalar boson (≃ 543 GeV), which does not depend on its isospin and hypercharge. Second, a discriminative prediction on the Higgs-photon-photon coupling is given as a function of the number of charged scalar bosons, by which we can narrow down possible models using current and future data for the di-photon decay of h(125). Finally, for the triple Higgs boson coupling a large deviation (∼ +70%) from the SM prediction is universally predicted, which is independent of masses, quantum numbers and even the number of additional scalars. These models based on CSI can be well tested at LHC Run II and at future lepton colliders.

  17. Equation of state with scale-invariant hidden local symmetry and gravitational waves

    Directory of Open Access Journals (Sweden)

    Lee Hyun Kyu

    2018-01-01

    The equation of state (EoS) for the effective theory proposed recently in the framework of the scale-invariant hidden local symmetry is discussed briefly. The EoS is found to be relatively stiffer at lower density but relatively softer at higher density. The particular effects of this EoS on gravitational waves are discussed. A relatively stiffer EoS for neutron stars with lower density induces a larger deviation of the gravitational waveform from the point-particle approximation. On the other hand, a relatively softer EoS for the higher-density merger remnant might invoke the possibility of the immediate formation of a black hole for short gamma ray bursts, or the appearance of a higher peak frequency for gravitational waves from remnant oscillations. It is anticipated that these particular features could be probed in detail by the detection of gravitational waves from binary neutron star mergers.

  18. Stochastic Stability for Time-Delay Markovian Jump Systems with Sector-Bounded Nonlinearities and More General Transition Probabilities

    Directory of Open Access Journals (Sweden)

    Dan Ye

    2013-01-01

    This paper is concerned with delay-dependent stochastic stability for time-delay Markovian jump systems (MJSs) with sector-bounded nonlinearities and more general transition probabilities. Different from previous results where the transition probability matrix is completely known, a more general transition probability matrix is considered which includes completely known elements, elements with known bounds, and completely unknown ones. In order to obtain a less conservative criterion, the state and transition probability information is used as much as possible to construct the Lyapunov-Krasovskii functional and to carry out the stability analysis. Delay-dependent sufficient conditions are derived in terms of linear matrix inequalities to guarantee the stability of the systems. Finally, numerical examples are exploited to demonstrate the effectiveness of the proposed method.

  19. Strong dynamics in a classically scale invariant extension of the standard model with a flat potential

    Science.gov (United States)

    Haba, Naoyuki; Yamada, Toshifumi

    2017-06-01

    We investigate the scenario where the standard model is extended with classical scale invariance, which is broken by chiral symmetry breaking and confinement in a new strongly coupled gauge theory that resembles QCD. The standard model Higgs field emerges as a result of the mixing of a scalar meson in the new strong dynamics and a massless elementary scalar field. The mass and scalar decay constant of that scalar meson, which are generated dynamically in the new gauge theory, give rise to the Higgs field mass term, automatically possessing the correct negative sign by the bosonic seesaw mechanism. Using analogy with QCD, we evaluate the dynamical scale of the new gauge theory and further make quantitative predictions for light pseudo-Nambu-Goldstone bosons associated with the spontaneous breaking of axial symmetry along chiral symmetry breaking in the new gauge theory. A prominent consequence of the scenario is that there should be a standard model gauge singlet pseudo-Nambu-Goldstone boson with mass below 220 GeV, which couples to two electroweak gauge bosons through the Wess-Zumino-Witten term, whose strength is thus determined by the dynamical scale of the new gauge theory. Other pseudo-Nambu-Goldstone bosons, charged under the electroweak gauge groups, also appear. Concerning the theoretical aspects, it is shown that the scalar quartic coupling can vanish at the Planck scale with the top quark pole mass as large as 172.5 GeV, realizing the flatland scenario without being in tension with the current experimental data.

  20. A Hidden Semi-Markov Model with Duration-Dependent State Transition Probabilities for Prognostics

    Directory of Open Access Journals (Sweden)

    Ning Wang

    2014-01-01

    Realistic prognostic tools are essential for effective condition-based maintenance systems. In this paper, a Duration-Dependent Hidden Semi-Markov Model (DD-HSMM) is proposed, which overcomes the shortcomings of traditional Hidden Markov Models (HMMs), including the Hidden Semi-Markov Model (HSMM): (1) it allows explicit modeling of state transition probabilities between the states; (2) it relaxes the observations' independence assumption by accommodating a connection between consecutive observations; and (3) it does not follow the unrealistic Markov chain memoryless assumption and therefore provides a more powerful modeling and analysis capability for real-world problems. To facilitate the computation of the proposed DD-HSMM methodology, a new forward-backward algorithm is developed. The demonstration and evaluation of the proposed methodology is carried out through a case study. The experimental results show that the DD-HSMM methodology is effective for equipment health monitoring and management.

  1. VizieR Online Data Catalog: Transition probabilities for 183 lines of Cr II (Lawler+, 2017)

    Science.gov (United States)

    Lawler, J. E.; Sneden, C.; Nave, G.; den Hartog, E. A.; Emrahoglu, N.; Cowan, J. J.

    2017-03-01

    New emission branching fraction (BF) measurements for 183 lines of the second spectrum of chromium (Cr II) and new radiative lifetime measurements from laser-induced fluorescence for 8 levels of Cr+ are reported. The goals of this study are to improve transition probability measurements in Cr II and reconcile solar and stellar Cr abundance values based on Cr I and Cr II lines. Eighteen spectra from three Fourier Transform Spectrometers supplemented with ultraviolet spectra from a high-resolution echelle spectrometer are used in the BF measurements. Radiative lifetimes from this study and earlier publications are used to convert the BFs into absolute transition probabilities. These new laboratory data are applied to determine the Cr abundance log ε in the Sun and the metal-poor star HD 84937. The mean result in the Sun is <log ε(Cr I)> = 5.624 ± 0.009 compared to <log ε(Cr II)> = 5.644 ± 0.006, on a scale with the hydrogen abundance log ε(H) = 12 and with the uncertainty representing only line-to-line scatter. A Saha (ionization balance) test on the photosphere of HD 84937 is also performed, yielding <log ε(Cr II)> = 3.417 ± 0.006 and <log ε(Cr I, excitation potential E.P. > 0 eV)> = 3.374 ± 0.011 for this dwarf star. We find a correlation of Cr with the iron-peak element Ti, suggesting an associated nucleosynthetic production. Four iron-peak elements (Cr along with Ti, V, and Sc) appear to have a similar (or correlated) production history; other iron-peak elements appear not to be associated with Cr. (1 data file).
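
    The BF-to-A conversion at the core of such studies is a one-liner; a sketch with illustrative numbers (not from the Cr II data set):

      # Branching fractions of all decay channels from one upper level sum
      # to 1; combined with the level's radiative lifetime they give absolute
      # transition probabilities: A_ul = BF_ul / tau_u.
      tau_u = 3.2e-9                            # radiative lifetime in seconds
      bf = [0.62, 0.25, 0.09, 0.04]             # measured branching fractions
      A = [b / tau_u for b in bf]               # Einstein A coefficients, s^-1
      print(["%.3e" % a for a in A])
      # log(gf) then follows from A, the wavelength, and the degeneracies.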

  2. Radiative Lifetimes and Atomic Transition Probabilities for Rare-Earth Elements

    Science.gov (United States)

    den Hartog, E. A.; Curry, J. J.; Anderson, Heidi M.; Wickliffe, M. E.; Lawler, J. E.

    1997-10-01

    Interest in rare-earth elements has been on the rise in recent years in both the lighting and astrophysics communities. The lighting industry increasingly uses rare-earths in high-intensity discharge (HID) lamps and requires comprehensive sets of accurate oscillator strengths for the modelling of these lamps. Spectroscopic data on rare-earths are also needed in astrophysical studies such as those dealing with the evolution of chemically-peculiar stars. The present work addresses this need with extensive radiative lifetime and branching fraction measurements, which when combined will yield a large database of absolute transition probabilities of the elements thulium, dysprosium, and holmium. Radiative lifetimes are measured using laser-induced fluorescence of a slow atomic/ionic beam. Branching fractions are determined from spectra recorded using the 1.0 meter Fourier-transform spectrometer at the National Solar Observatory. Lifetimes for 298 levels of Tm I and Tm II and 440 levels of Dy I and Dy II are complete. Branching fractions have been measured for 522 transitions of Tm I and Tm II. Work is underway on lifetimes of Ho and branching fractions of Dy. Representative lifetime and branching fraction data will be presented and discussed.

  3. Effects of heterogeneity in site-site couplings for tight-binding models on scale-invariant structures

    Science.gov (United States)

    Yang, Bingjia; Xie, Pinchen; Zhang, Zhongzhi

    2017-11-01

    We studied the thermodynamic behavior of non-interacting bosons and fermions trapped by a scale-invariant branching structure with an adjustable degree of heterogeneity. The full energy spectrum in the tight-binding approximation was analytically solved. We found that the log-periodic oscillation of the specific heat for the Fermi gas depends on the heterogeneity of hopping. Also, low-dimensional Bose-Einstein condensation occurred only for a non-homogeneous setup.

  4. Project SEMACODE: a scale-invariant object recognition system for content-based queries in image databases

    OpenAIRE

    Brause, Rüdiger W.; Arlt, Björn; Tratar, Erwin

    1999-01-01

    For the efficient management of large image databases, the automated characterization of images and the usage of that characterization for searching and ordering tasks is highly desirable. The purpose of the project SEMACODE is to combine the still unsolved problem of content-oriented characterization of images with scale-invariant object recognition and model-based compression methods. To achieve this goal, existing techniques as well as new concepts related to pattern matching, image encodin...

  5. Dynamical Effects of the Scale Invariance of the Empty Space: The Fall of Dark Matter?

    Science.gov (United States)

    Maeder, Andre

    2017-11-01

    The hypothesis of the scale invariance of the macroscopic empty space, which intervenes through the cosmological constant, has led to new cosmological models. They show an accelerated cosmic expansion after the initial stages and satisfy several major cosmological tests. No unknown particles are needed. Developing the weak-field approximation, we find that the here-derived equation of motion corresponding to Newton's equation also contains a small outward acceleration term. Its order of magnitude is about √(ϱ_c/ϱ) × Newton's gravity (ϱ being the mean density of the system and ϱ_c the usual critical density). The new term is thus particularly significant for very low density systems. A modified virial theorem is derived and applied to clusters of galaxies. For the Coma Cluster and Abell 2029, the dynamical masses are about a factor of 5-10 smaller than in the standard case. This tends to leave no room for dark matter in these clusters. Then, the two-body problem is studied and an equation corresponding to the Binet equation is obtained. It implies some secular variations of the orbital parameters. The results are applied to the rotation curve of the outer layers of the Milky Way. Starting backward from the present rotation curve, we calculate the past evolution of the Galactic rotation and find that, in the early stages, it was steep and Keplerian. Thus, the flat rotation curves of galaxies appear as an age effect, a result consistent with recent observations of distant galaxies by Genzel et al. and Lang et al. Finally, in an appendix we also study the long-standing problem of the increase with age of the vertical velocity dispersion in the Galaxy. The observed increase appears to result from the new small acceleration term in the equation of the harmonic oscillator describing stellar motions around the Galactic plane. Thus, we tend to conclude that neither dark energy nor dark matter seems to be needed in the proposed framework.

  6. Transition probability-based stochastic geological modeling using airborne geophysical data and borehole data

    Science.gov (United States)

    He, Xin; Koch, Julian; Sonnenborg, Torben O.; Jørgensen, Flemming; Schamper, Cyril; Christian Refsgaard, Jens

    2014-04-01

    Geological heterogeneity is a very important factor to consider when developing geological models for hydrological purposes. Using statistically based stochastic geological simulations, the spatial heterogeneity in such models can be accounted for. However, various types of uncertainties are associated with both the geostatistical method and the observation data. In the present study, TProGS is used as the geostatistical modeling tool to simulate structural heterogeneity for glacial deposits in a headwater catchment in Denmark. The focus is on how the observation data uncertainty can be incorporated in the stochastic simulation process. The study uses two types of observation data: borehole data and airborne geophysical data. It is commonly acknowledged that the density of borehole data is usually too sparse to characterize the horizontal heterogeneity. The use of geophysical data gives an unprecedented opportunity to obtain high-resolution information and thus to identify geostatistical properties more accurately, especially in the horizontal direction. However, since such data are not a direct measurement of the lithology, larger uncertainty of point estimates can be expected as compared to the use of borehole data. We have proposed a histogram probability matching method in order to link the information on resistivity to hydrofacies, while considering the data uncertainty at the same time. Transition probabilities and Markov Chain models are established using the transformed geophysical data. It is shown that such transformation is in fact practical; however, the cutoff value for dividing the resistivity data into facies is difficult to determine. The simulated geological realizations indicate significant differences of spatial structure depending on the type of conditioning data selected. It is to our knowledge the first time that grid-to-grid airborne geophysical data including the data uncertainty are used in conditional geostatistical simulations in TProGS.
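
    A simplified, hypothetical sketch in the spirit of histogram probability matching: a two-facies Bayes weighting of borehole-calibrated resistivity histograms that yields soft facies probabilities instead of a hard cutoff. This is not the authors' exact procedure, and all numbers are invented.

      import numpy as np

      rng = np.random.default_rng(2)

      # Resistivity samples (ohm-m) at borehole-confirmed sand and clay picks.
      res_sand = rng.lognormal(mean=4.0, sigma=0.4, size=500)
      res_clay = rng.lognormal(mean=3.0, sigma=0.4, size=500)

      bins = np.logspace(0.5, 3.0, 40)
      h_sand, _ = np.histogram(res_sand, bins=bins, density=True)
      h_clay, _ = np.histogram(res_clay, bins=bins, density=True)

      def p_sand(resistivity, prior=0.5):
          """Probability that a geophysical grid cell is sand, from the
          overlap of the two facies' resistivity histograms."""
          i = np.clip(np.digitize(resistivity, bins) - 1, 0, len(h_sand) - 1)
          num = prior * h_sand[i]
          den = num + (1.0 - prior) * h_clay[i]
          return np.where(den > 0, num / den, prior)

      print(p_sand(np.array([20.0, 55.0, 150.0])))   # soft facies probabilities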

  7. Delineation of an Optimal Location for Oil Sand Exploration through Transition Probabilities of Composing Lithology

    Science.gov (United States)

    Kwon, M.; Jeong, J.; Park, E.; Han, W. S.; Kim, K. Y.

    2014-12-01

    Three-dimensional geostatistical studies to delineate an optimal exploitation location for oil sand in the McMurray Formation, Athabasca, Canada were carried out. The area is mainly composed of unconsolidated to semi-consolidated sand where breccia, mud, clay, etc. are associated as alternating layers. For the prediction of the optimal location for the steam assisted gravity drainage (SAGD) technique, the conventional approach of cumulating the predicted thickness of the media with higher bitumen-bearing possibility (i.e. breccia and sand) was pursued. As an alternative measure, the mean vertical extension of the permeable media was also assessed based on the vertical transition rate of each medium, and the corresponding optimal location was decided. For both predictions, 110 borehole data acquired from the study area were analyzed under the Markovian transition probability (TP) framework and three-dimensional distributions of the composing media were predicted stochastically through an existing TP based geostatistical model. The effectiveness of the two competing measures (cumulative thickness and mean vertical extension) for SAGD applications was verified through two-dimensional dual-phase flow simulations where high temperature steam was injected in the delineated reservoirs, and the sizes of the steam chambers were compared. The results of the two-dimensional SAGD simulations have shown that the geologic formation containing the highest mean vertical extension of permeable media is more suitable for the development of the oil sand, developing a larger steam chamber than that from the highest cumulative thickness. Given those two-dimensional results, the cumulative thickness alone may not be a sufficient criterion for an optimal SAGD site, and the mean vertical extension of the permeable media needs to be jointly considered for a sound selection.

  8. Evaluation of aquifer heterogeneity effects on river flow loss using a transition probability framework

    Science.gov (United States)

    Engdahl, Nicholas B.; Vogler, Eric T.; Weissmann, Gary S.

    2010-01-01

    River-aquifer exchange is considered within a transition probability framework along the Rio Grande in Albuquerque, New Mexico, to provide a stochastic estimate of aquifer heterogeneity and river loss. Six plausible hydrofacies configurations were determined using categorized drill core and wetland survey data processed through the TPROGS geostatistical package. A base case homogeneous model was also constructed for comparison. River loss was simulated for low, moderate, and high Rio Grande stages and several different riverside drain stage configurations. Heterogeneity effects were quantified by determining the mean and variance of the K field for each realization compared to the root-mean-square (RMS) error of the observed groundwater head data. Simulation results showed that the heterogeneous models produced smaller estimates of loss than the homogeneous approximation. Differences between heterogeneous and homogeneous model results indicate that the use of a homogeneous K in a regional-scale model may result in an overestimation of loss but comparable RMS error. We find that the simulated river loss is dependent on the aquifer structure and is most sensitive to the volumetric proportion of fines within the river channel.

  9. E3 transition probabilities in the platinum, mercury, and lead isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Egido, J.L.; Martin, V.; Robledo, L.M.; Sun, Y. [Departamento de Fisica Teorica C-XI, Universidad Autonoma de Madrid, E-28049 Madrid (Spain); Analisis Numerico, Facultad de Informatica, Universidad Politecnica de Madrid, E-28660 Madrid (Spain); Department of Physics and Atmospheric Science, Drexel University, Philadelphia, Pennsylvania 19104 (United States); Joint Institute for Heavy Ion Research, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)]

    1996-06-01

    Spectroscopic properties of the platinum, mercury, and lead isotopes are studied within the Hartree-Fock plus BCS framework with the finite range density-dependent Gogny force. These properties are also studied beyond mean-field theory by combining the use of generator-coordinate-method-like wave functions with the angular momentum projection technique, so as to generate many-body correlated wave functions that are at the same time eigenstates of the angular momentum operator. We apply this formalism to the calculation of reduced transition probabilities B(E3) from the lowest-lying octupole collective state to the ground state of several isotopes of the platinum, mercury, and lead nuclei whose experimental B(E3) values present a peculiar behavior. The projected calculations show a large improvement over the unprojected ones when compared with the experimental data. The unprojected calculations are unable to predict any structure in the B(E3). © 1996 The American Physical Society.

  11. Inverse modeling of hydraulic tests in fractured crystalline rock based on a transition probability geostatistical approach

    Science.gov (United States)

    Blessent, Daniela; Therrien, René; Lemieux, Jean-Michel

    2011-12-01

    This paper presents numerical simulations of a series of hydraulic interference tests conducted in crystalline bedrock at Olkiluoto (Finland), a potential site for the disposal of the Finnish high-level nuclear waste. The tests are in a block of crystalline bedrock of about 0.03 km^3 that contains low-transmissivity fractures. Fracture density, orientation, and fracture transmissivity are estimated from Posiva Flow Log (PFL) measurements in boreholes drilled in the rock block. On the basis of those data, a geostatistical approach relying on transition probability and Markov chain models is used to define a conceptual model based on stochastic fractured rock facies. Four facies are defined, from sparsely fractured bedrock to highly fractured bedrock. Using this conceptual model, three-dimensional groundwater flow is then simulated to reproduce interference pumping tests in either open or packed-off boreholes. Hydraulic conductivities of the fracture facies are estimated through automatic calibration using either hydraulic heads or both hydraulic heads and PFL flow rates as targets for calibration. The latter option produces a narrower confidence interval for the calibrated hydraulic conductivities, therefore reducing the associated uncertainty and demonstrating the usefulness of the measured PFL flow rates. Furthermore, the stochastic facies conceptual model is a suitable alternative to discrete fracture network models to simulate fluid flow in fractured geological media.

  12. Radiative lifetimes and transition probabilities for electric-dipole delta n equals zero transitions in highly stripped sulfur ions

    Science.gov (United States)

    Pegg, D. J.; Elston, S. B.; Griffin, P. M.; Forester, J. P.; Thoe, R. S.; Peterson, R. S.; Sellin, I. A.; Hayden, H. C.

    1976-01-01

    The beam-foil time-of-flight method has been used to investigate radiative lifetimes and transition rates involving allowed intrashell transitions within the L shell of highly ionized sulfur. The results for these transitions, which can be particularly correlation-sensitive, are compared with current calculations based upon multiconfigurational models.

  13. Alzheimer's Disease as Subcellular 'Cancer' --- The Scale-Invariant Principles Underlying the Mechanisms of Aging ---

    Science.gov (United States)

    Murase, M.

    1996-01-01

    ... with self-organization, has been thought to underlie 'creative' aspects of biological phenomena such as the origin of life, adaptive evolution of viruses, immune recognition and brain function. It may therefore be surprising to find that the same principles also underlie 'non-creative' aspects, for example, the development of cancer and the aging of complex organisms. Although self-organization has been extensively studied in nonliving things such as chemical reactions and laser physics, it is undoubtedly true that similar sources of order are available to living things at different levels and scales. Several paradigm shifts are, however, required to realize how the general principles of natural selection can be extended to non-DNA molecules which do not possess the intrinsic capacity for self-reproduction. One of them is the shift from the traditional genetic inheritance view, that DNA (or RNA) molecules are the ultimate unit of heritable variation and natural selection at any organizational level, to the epigenetic (nongenetic) inheritance view, that any non-DNA molecule can be the target of heritable variation and molecular selection and accumulate in a certain biochemical environment. Because they are all enriched in β-sheet content, ready to interact with one another, different denatured proteins like β-amyloid, PHF and prions can individually undergo self-templating or self-aggregating processes out of gene control. Other paradigm shifts requisite for a breakthrough in the etiology of neurodegenerative disorders will be discussed. As it is based on scale-invariant principles, the present theory also predicts plausible mechanisms underlying quite different classes of disorders such as amyotrophic lateral sclerosis (ALS), atherosclerosis, senile cataract and many other symptoms of aging. The present theory thus provides a consistent and comprehensive account of the origin of aging by means of natural selection and self-organization.

  14. Impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via multiple integral approach.

    Science.gov (United States)

    Chandrasekar, A; Rakkiyappan, R; Cao, Jinde

    2015-10-01

    This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time-delay. Synchronization under the considered impulsive effects can be achieved even when the transition probabilities are only partly known. Besides, the multiple integral approach is also proposed to strengthen the results for Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and impulsive synchronization criteria are then obtained as a set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Numerical calculations of the probabilities for quantum transitions in atoms and molecules by the path integral method

    Science.gov (United States)

    Biryukov, Alexander; Degtyareva, Yana

    2017-10-01

    The probabilities of molecular quantum transitions induced by an electromagnetic field are expressed as path integrals of a real alternating functional. We propose a new method for computing these integrals by means of recurrence relations. We apply this approach to the description of two-photon Rabi oscillations.
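
    As a cross-check for any such transition-probability computation, the driven two-level system can be integrated directly and compared with the analytic Rabi formula. This generic sketch (rotating-wave approximation, illustrative parameters) is not the paper's path-integral recurrence scheme.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Two-level system driven near resonance (rotating-wave approximation):
      # i dc/dt = H c with H = [[0, Omega/2], [Omega/2, Delta]].
      Omega, Delta = 1.0, 0.3          # Rabi frequency and detuning
      H = np.array([[0.0, Omega / 2.0], [Omega / 2.0, Delta]], dtype=complex)

      def rhs(t, c):
          return -1j * (H @ c)

      t_eval = np.linspace(0.0, 20.0, 201)
      sol = solve_ivp(rhs, (0.0, 20.0), np.array([1.0 + 0j, 0.0 + 0j]),
                      t_eval=t_eval, rtol=1e-8, atol=1e-10)
      p_numeric = np.abs(sol.y[1]) ** 2

      W = np.sqrt(Omega**2 + Delta**2)     # generalized Rabi frequency
      p_analytic = (Omega / W) ** 2 * np.sin(W * t_eval / 2.0) ** 2
      print(np.max(np.abs(p_numeric - p_analytic)))   # should be tiny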

  16. Remark about Transition Probabilities Calculation for Single Server Queues with Lognormal Inter-Arrival or Service Time Distributions

    Science.gov (United States)

    Lee, Moon Ho; Dudin, Alexander; Shaban, Alexy; Pokhrel, Subash Shree; Ma, Wen Ping

    Formulae required for accurate approximate calculation of the transition probabilities of the embedded Markov chain for single-server queues of the GI/M/1, GI/M/1/K, M/G/1, M/G/1/K type with heavy-tailed lognormal distributions of inter-arrival or service time are given.
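
    For the M/G/1 case, the embedded-chain entries reduce to the probabilities of k Poisson arrivals during one service time; here is a direct numerical sketch with an illustrative lognormal service distribution (the paper's point is precisely the accurate approximation of such integrals).

      import math

      import numpy as np
      from scipy import integrate, stats

      lam = 0.8                                       # Poisson arrival rate
      service = stats.lognorm(s=1.0, scale=1.0)       # heavy right tail

      def a(k):
          """P(k arrivals during one service): integral of the Poisson
          probability against the service-time density."""
          integrand = lambda t: (np.exp(-lam * t) * (lam * t) ** k
                                 / math.factorial(k) * service.pdf(t))
          val, _ = integrate.quad(integrand, 0.0, np.inf, limit=200)
          return val

      probs = np.array([a(k) for k in range(8)])
      print(probs, probs.sum())        # remaining mass sits in the k > 7 tail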

  17. Robust Guaranteed Cost Observer Design for Singular Markovian Jump Time-Delay Systems with Generally Incomplete Transition Probability

    Directory of Open Access Journals (Sweden)

    Yanbo Li

    2014-01-01

    This paper is devoted to the investigation of the design of a robust guaranteed cost observer for a class of linear singular Markovian jump time-delay systems with generally incomplete transition probability. In this singular model, each transition rate can be completely unknown, or only its estimated value is known. Based on the stability theory of stochastic differential equations and the linear matrix inequality (LMI) technique, we design an observer to ensure that, for all uncertainties, the resulting augmented system is regular, impulse free, and robustly stochastically stable with the proposed guaranteed cost performance. Finally, a convex optimization problem with LMI constraints is formulated to design the suboptimal guaranteed cost filters for linear singular Markovian jump time-delay systems with generally incomplete transition probability.

  18. Scale genesis and gravitational wave in a classically scale invariant extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few hundred GeV, and that the transition is strongly first-order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observed by DECIGO.

  19. Tomographic probability representation in the problem of transitions between the Landau levels

    OpenAIRE

    Zhebrak, E. D.

    2012-01-01

    The motion of a charged particle in an electromagnetic field is considered in terms of the tomographic probability representation. The coherent and Fock states of a charge moving in a varying homogeneous magnetic field are studied in the tomographic probability representation of quantum mechanics. The states are expressed in terms of quantum tomograms. The Fock state tomograms are given in the form of probability distributions described by multivariable Hermite polynomials with time-depende...

  20. Transition probabilities for two-photon H (1s–2s) and He (1 1S–2 1S ...

    Indian Academy of Sciences (India)

    Transition amplitudes and transition probabilities for the two-photon 1s–2s transition in the hydrogen atom and the 1 1S–2 1S transition in the helium atom have been calculated using a partial-closure approach. The dominant term is calculated exactly and the remaining sum over intermediate states is calculated using a mean ...

  1. scEpath: Energy landscape-based inference of transition probabilities and cellular trajectories from single-cell transcriptomic data.

    Science.gov (United States)

    Jin, Suoqin; MacLean, Adam L; Peng, Tao; Nie, Qing

    2018-02-05

    Single-cell RNA-sequencing (scRNA-seq) offers unprecedented resolution for studying cellular decision-making processes. Robust inference of cell state transition paths and probabilities is an important yet challenging step in the analysis of these data. Here we present scEpath, an algorithm that calculates energy landscapes and probabilistic directed graphs in order to reconstruct developmental trajectories. We quantify the energy landscape using "single-cell energy" and distance-based measures, and find that the combination of these enables robust inference of the transition probabilities and lineage relationships between cell states. We also identify marker genes and gene expression patterns associated with cell state transitions. Our approach produces pseudotemporal orderings that are - in combination - more robust and accurate than current methods, and offers higher resolution dynamics of the cell state transitions, leading to new insight into key transition events during differentiation and development. Moreover, scEpath is robust to variation in the size of the input gene set, and is broadly unsupervised, requiring few parameters to be set by the user. Applications of scEpath led to the identification of a cell-cell communication network implicated in early human embryo development, and novel transcription factors important for myoblast differentiation. scEpath allows us to identify common and specific temporal dynamics and transcriptional factor programs along branched lineages, as well as the transition probabilities that control cell fates. A MATLAB package of scEpath is available at https://github.com/sqjin/scEpath. qnie@uci.edu. Supplementary data are available at Bioinformatics online.

  2. INVESTIGATION OF SCALE-INVARIANT PROPERTY OF ORGANIZATION SYSTEM OF TRAIN TRAFFIC VOLUME BASED ON THE PERCOLATION THEORY

    Directory of Open Access Journals (Sweden)

    A. V. Prokhorchenko

    2014-10-01

    Purpose. The work is devoted to the study of the scale-invariance property of the system organizing train traffic volume on Ukrainian railways. Methodology. To show that the real network of Train Formation Plan (TFP) destinations belongs to the class of so-called scale-invariant networks, it is proposed to generate scale-free networks of Barabási–Albert type, with dimensions and parameters matching the real TFP destination networks, and to investigate their survivability using a node-percolation procedure. The percolation process is treated as a modified version of the spatial movement of cars on the network as the number of railway stations that have lost the ability to perform their basic function of passing cars along TFP destinations under adverse effects (an accident, overload) increases. Findings. A comparative analysis of percolation under random and targeted destruction of network nodes showed agreement with the percolation results for the real network of TFP destinations, which demonstrates self-similarity. The compared percolation indicators were: the percentage of removed stations at which the network fragments, the average inverse path length between network nodes, the diameter of the graph structure, and the size of the second-largest cluster as a function of the destruction step. Originality. The hypothesis that the graph of TFP destinations on the railways of Ukraine possesses scale invariance, and can be attributed to the class of scale-free networks, was confirmed for the first time. Existing results from the theory of scale-free networks can therefore be used to describe the survivability of the transportation system on the railways of Ukraine. Practical value. Based on the identified properties of the system of train traffic directions, a mathematical model can be created in the future to predict the behavior of the system.
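
    A sketch of the node-percolation comparison described above, on a generated Barabási–Albert graph; graph size, batch counts, and the random seed are illustrative.

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(3)

      def giant_fraction(g):
          """Relative size of the largest connected component."""
          return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

      def percolate(g, targeted, steps=20):
          """Remove nodes in batches, randomly or by descending degree,
          and track how the giant component shrinks."""
          g = g.copy()
          n0 = g.number_of_nodes()
          sizes = []
          for _ in range(steps):
              if targeted:
                  batch = [v for v, _ in sorted(g.degree, key=lambda kv: -kv[1])
                           [: n0 // steps]]
              else:
                  batch = list(rng.choice(list(g.nodes), n0 // steps,
                                          replace=False))
              g.remove_nodes_from(batch)
              if g.number_of_nodes() == 0:
                  break
              sizes.append(giant_fraction(g))
          return sizes

      g = nx.barabasi_albert_graph(2000, 2, seed=7)
      print("random  :", [round(s, 2) for s in percolate(g, targeted=False)[:8]])
      print("targeted:", [round(s, 2) for s in percolate(g, targeted=True)[:8]])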

  3. Scale invariance and scaling law of Thomson backscatter spectra by electron moving in laser-magnetic resonance regime

    CERN Document Server

    Fu, Yi-Jia; Wan, Feng; Sang, Hai-Bo; Xie, Bai-Song

    2016-01-01

    The Thomson scattering spectra of an electron moving in the laser-magnetic resonance acceleration regime are computed numerically and analytically. The dependence of the fundamental frequency on the laser intensity and the magnetic resonance parameter is examined carefully. By calculating the emission of a single electron in a circularly polarized plane-wave laser field and a constant external magnetic field, the scale invariance of the radiation spectra in terms of harmonic orders becomes evident. A remarkable scaling law of the backscattered spectra is exhibited for the laser intensity, as well as for the initial axial momentum of the electron, when the cyclotron frequency of the electron approaches the laser frequency. The results indicate that the magnetic resonance parameter plays an important role in the strength of emission. The rich features found in the scattering spectra may be applicable to tunable radiation sources.

  4. Wireless capsule endoscopy video segmentation using an unsupervised learning approach based on probabilistic latent semantic analysis with scale invariant features.

    Science.gov (United States)

    Shen, Yao; Guturu, Parthasarathy Partha; Buckles, Bill P

    2012-01-01

    Since wireless capsule endoscopy (WCE) is a novel technology for recording the videos of the digestive tract of a patient, the problem of segmenting the WCE video of the digestive tract into subvideos corresponding to the entrance, stomach, small intestine, and large intestine regions is not well addressed in the literature. The select few papers addressing this problem follow supervised learning approaches that presume availability of a large database of correctly labeled training samples. Considering the difficulties in procuring sizable WCE training data sets needed for achieving high classification accuracy, we introduce in this paper an unsupervised learning approach that employs the Scale Invariant Feature Transform (SIFT) for extraction of local image features and the probabilistic latent semantic analysis (pLSA) model, used in linguistic content analysis, for data clustering. Results of experimentation indicate that this method compares well in classification accuracy with the state-of-the-art supervised classification approaches to WCE video segmentation.
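
    A compact sketch of such a pipeline is given below. Since pLSA itself is not in scikit-learn, KL-divergence NMF, a close relative of pLSA, stands in for the topic model; the vocabulary size, the number of regions, and the input frames are hypothetical choices, and enough keypoints across frames are assumed for the vocabulary clustering.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import NMF

        def frame_histograms(frames, n_words=200):
            # SIFT keypoints per frame -> global visual vocabulary -> one
            # bag-of-visual-words histogram per frame.
            sift = cv2.SIFT_create()
            per_frame = []
            for img in frames:   # each img: grayscale uint8 array
                _, d = sift.detectAndCompute(img, None)
                per_frame.append(d if d is not None else np.zeros((0, 128), np.float32))
            vocab = KMeans(n_clusters=n_words, n_init=4).fit(np.vstack(per_frame))
            return np.array([np.bincount(vocab.predict(d), minlength=n_words)
                             if len(d) else np.zeros(n_words) for d in per_frame])

        def segment(frames, n_regions=4):
            # KL-NMF as a pLSA surrogate: topics play the role of digestive
            # tract regions; each frame is labeled by its dominant topic.
            H = frame_histograms(frames)
            topics = NMF(n_components=n_regions, solver='mu',
                         beta_loss='kullback-leibler', max_iter=500).fit_transform(H + 1e-9)
            return topics.argmax(axis=1)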

  5. Strong correlations in the model of the scale-invariant (2+1)-dimensional nonlinear Schroedinger equation

    CERN Document Server

    Protogenov, A P

    2001-01-01

    A brief review of effects caused by strong correlations of nonlinear modes in planar systems is presented. The analysis is limited to the nonlinear Schroedinger equation model. The stationary distributions of the fields are determined. The dependence of the particle number on the parameter characterizing the degree of linking of the universal oscillation lines is obtained. It is shown that for small values of this parameter a universal attraction exists on the two-dimensional lattice, which may be the dynamical cause of the transition to a coherent state. The connection of the chiral nonlinear boundary modes with violations of the Galilean invariance of the considered system is discussed

  6. Scaling invariance for the escape of particles from a periodically corrugated waveguide

    Energy Technology Data Exchange (ETDEWEB)

    Leonel, Edson D., E-mail: edleonel@rc.unesp.br [Departamento de Estatística, Matemática Aplicada e Computação, UNESP – Univ Estadual Paulista, Av. 24A, 1515, CEP 13506-900, Rio Claro, SP (Brazil); Costa, Diogo R. da [Departamento de Estatística, Matemática Aplicada e Computação, UNESP – Univ Estadual Paulista, Av. 24A, 1515, CEP 13506-900, Rio Claro, SP (Brazil); Instituto de Física, Univ São Paulo, Rua do Matão, Cidade Universitária, CEP 05314-970, São Paulo, SP (Brazil); Dettmann, Carl P. [School of Mathematics, University of Bristol, Bristol BS8 1TW (United Kingdom)

    2012-01-09

    The escape dynamics of a classical light ray inside a corrugated waveguide is characterised by the use of scaling arguments. The model is described via a two-dimensional nonlinear and area preserving mapping. The phase space of the mapping contains a set of periodic islands surrounded by a large chaotic sea that is confined by a set of invariant tori. When a hole is introduced in the chaotic sea, letting the ray escape, the histogram of frequency of the number of escaping particles exhibits rapid growth, reaching a maximum value at n_p and later decaying asymptotically to zero. The behaviour of the histogram of escape frequency is characterised using scaling arguments. The scaling formalism is widely applicable to critical phenomena and useful in characterisation of phase transitions, including transitions from limited to unlimited energy growth in two-dimensional time varying billiard problems. -- Highlights: ► Escape of light ray inside a corrugated waveguide ► Two-dimensional nonlinear and area preserving mapping ► Scaling for escaping particles.
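
    The escape-histogram measurement is straightforward to prototype. In the sketch below the standard map stands in for the corrugated-waveguide map, and the kick strength, hole position and ensemble size are illustrative; the histogram of escape iterations grows quickly, peaks at some n_p, and then decays, as described above.

        import numpy as np

        K, HOLE = 1.5, (0.45, 0.55)    # kick strength; escape window in the action
        rng = np.random.default_rng(2)

        def escape_time(theta, action, n_max=100000):
            # Iterate an area-preserving (standard) map until the orbit
            # falls into the hole introduced in the chaotic sea.
            for n in range(1, n_max + 1):
                action = (action + K / (2 * np.pi) * np.sin(2 * np.pi * theta)) % 1.0
                theta = (theta + action) % 1.0
                if HOLE[0] < action < HOLE[1]:
                    return n
            return None

        times = [escape_time(rng.random(), 0.0) for _ in range(5000)]
        times = np.array([t for t in times if t is not None])
        hist, edges = np.histogram(times, bins=np.logspace(0, 5, 40))
        print("histogram peaks near n_p =", edges[hist.argmax()])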

  7. Statistical properties of a dissipative kicked system: Critical exponents and scaling invariance

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Diego F.M., E-mail: diegofregolente@gmail.com [CAMTP – Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Robnik, Marko, E-mail: robnik@uni-mb.si [CAMTP – Center for Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Leonel, Edson D., E-mail: edleonel@rc.unesp.br [Departamento de Estatística, Matemática Aplicada e Computação, UNESP – Universidade Estadual Paulista, Av. 24A, 1515, Bela Vista, 13506-900 Rio Claro, SP (Brazil)

    2012-01-16

    A new universal empirical function that depends on a single critical exponent (acceleration exponent) is proposed to describe the scaling behavior in a dissipative kicked rotator. The scaling formalism is used to describe two regimes of dissipation: (i) strong dissipation and (ii) weak dissipation. For case (i) the model exhibits a route to chaos known as period doubling and the Feigenbaum constant along the bifurcations is obtained. When weak dissipation is considered the average action as well as its standard deviation are described using scaling arguments with critical exponents. The universal empirical function describes remarkably well a phase transition from limited to unlimited growth of the average action. -- Highlights: ► A new universal empirical function is proposed. ► The scaling formalism is used to describe two regimes of dissipation. ► The model exhibits a route to chaos known as period doubling. ► The average action as well as its standard deviation are described using scaling.
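
    A minimal version of the weak-dissipation measurement is sketched below for a dissipative standard map: the ensemble-averaged action first grows and then saturates, and curves for different kick strengths can be collapsed using critical exponents. The parameter values are illustrative, not those of the study.

        import numpy as np

        def avg_action(K, gamma, n_steps=2000, n_ens=500, seed=3):
            # gamma < 1 introduces dissipation; gamma = 1 recovers the
            # area-preserving kicked rotator.
            rng = np.random.default_rng(seed)
            theta = rng.uniform(0.0, 2.0 * np.pi, n_ens)
            action = 1e-3 * rng.standard_normal(n_ens)   # start near zero action
            mean_abs = np.empty(n_steps)
            for n in range(n_steps):
                action = gamma * action + K * np.sin(theta)
                theta = (theta + action) % (2.0 * np.pi)
                mean_abs[n] = np.abs(action).mean()
            return mean_abs

        for K in (10.0, 100.0, 1000.0):
            curve = avg_action(K, gamma=0.99)
            print(K, curve[10], curve[-1])   # growth regime vs. saturated regime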

  8. Dynamical properties of a particle in a wave packet: Scaling invariance and boundary crisis

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Diego F.M., E-mail: diegofregolente@gmail.com [CAMTP, Center For Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Robnik, Marko, E-mail: robnik@uni-mb.si [CAMTP, Center For Applied Mathematics and Theoretical Physics, University of Maribor, Krekova 2, SI-2000 Maribor (Slovenia); Leonel, Edson D., E-mail: edleonel@rc.unesp.br [Departamento de Estatistica, Matematica Aplicada e Computacao, UNESP, Univ Estadual Paulista, Av. 24A, 1515-Bela Vista, 13506-900 Rio Claro, SP (Brazil)

    2011-10-15

    Highlights: ► Acceleration of particles in a wave packet. ► The location of the first invariant spanning curve which borders the chaotic sea. ► Scaling to characterise the transition from integrability to non-integrability. ► The property of area preservation is broken and attractors emerge. ► After a tiny increase of the dissipation the system experiences a boundary crisis. - Abstract: Some dynamical properties present in a problem concerning the acceleration of particles in a wave packet are studied. The dynamics of the model is described in terms of a two-dimensional area preserving map. We show that the phase space is mixed in the sense that there are regular and chaotic regions coexisting. We use a connection with the standard map in order to find the position of the first invariant spanning curve which borders the chaotic sea. We find that the position of the first invariant spanning curve increases as a power of the control parameter with the exponent 2/3. The standard deviation of the kinetic energy of an ensemble of initial conditions obeys a power law as a function of time, and saturates after some crossover. Scaling formalism is used in order to characterise the chaotic region close to the transition from integrability to non-integrability, and a relationship between the power law exponents is derived. The formalism can be applied to many different systems with mixed phase space. Then, dissipation is introduced into the model and therefore the property of area preservation is broken, and consequently attractors are observed. We show that after a small change of the dissipation, the chaotic attractor as well as its basin of attraction are destroyed, thus leading the system to experience a boundary crisis. The transient after the crisis follows a power law with exponent -2.

  9. Mitigation of Power frequency Magnetic Fields. Using Scale Invariant and Shape Optimization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Salinas, Ener; Yueqiang Liu; Daalder, Jaap; Cruz, Pedro; Antunez de Souza, Paulo Roberto Jr; Atalaya, Juan Carlos; Paula Marciano, Fabianna de; Eskinasy, Alexandre

    2006-10-15

    The present report describes the development and application of two novel methods for implementing mitigation techniques for magnetic fields at power frequencies. The first method makes use of scaling rules for electromagnetic quantities, while the second one applies a 2D shape optimization algorithm based on gradient methods. Before this project, the first method had already been successfully applied (by some of the authors of this report) to electromagnetic designs involving purely conductive materials (e.g. copper, aluminium), which implied a linear formulation. Here we went beyond this approach and developed a formulation involving ferromagnetic (i.e. non-linear) materials. Surprisingly, we obtained good equivalent replacements for test transformers by varying the input current. Although the validity of this equivalence is constrained to regions not too close to the source, the results can still be considered useful, as most field mitigation techniques are precisely developed for reducing the magnetic field in regions relatively far from the sources. The shape optimization method was applied in this project to calculate the optimal geometry of a purely conductive plate to mitigate the magnetic field originating from underground cables. The objective function was a weighted combination of the magnetic energy in the region of interest and the dissipated heat in the shielding material. To our surprise, shapes of complex structure, difficult to interpret (and probably even harder to anticipate), resulted from the applied process. However, the practical implementation (using approximations of these shapes) gave excellent experimental mitigation factors.

  10. Scale invariant SURF detector and automatic clustering segmentation for infrared small targets detection

    Science.gov (United States)

    Zhang, Haiying; Bai, Jiaojiao; Li, Zhengjie; Liu, Yan; Liu, Kunhong

    2017-06-01

    The detection and discrimination of small, dim infrared targets is a challenge in automatic target recognition (ATR), because there is no salient information of size, shape or texture. Many researchers focus on mining more discriminative temporal-spatial information about the targets. However, such information may not be available as imaging environments change, and target size and intensity keep changing with imaging distance. In this paper, we therefore propose a novel scheme using density-based clustering and a backtracking strategy. In this scheme, the speeded up robust feature (SURF) detector is first applied to capture candidate targets in single frames. These points are then mapped into one frame, so that target traces form a local aggregation pattern. In order to isolate the targets from noise, a recently proposed density-based clustering algorithm, fast search and find of density peaks (FSFDP for short), is employed to cluster targets by their spatially intensive distribution. Two important factors of the algorithm, percent and γ, are fully exploited to determine the clustering scale automatically, so as to extract the trace with the highest clutter suppression ratio. In the final step, a backtracking algorithm is designed to detect and discriminate target traces as well as to eliminate clutter. The consistency and continuity of the short-time target trajectory in the temporal-spatial domain is incorporated into the bounding function to speed up the pruning. Compared with several state-of-the-art methods, our algorithm is more effective for dim targets with a lower signal-to-clutter ratio (SCR). Furthermore, it avoids constructing the candidate-target trajectory search space, so its time complexity is limited to a polynomial level. Extensive experimental results show that it has superior performance in probability of detection (Pd) and false alarm suppression rate against a variety of complex backgrounds.
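
    The core FSFDP computation the scheme relies on - the local density rho_i, the distance delta_i to the nearest point of higher density, and the decision quantity γ_i = rho_i · delta_i - can be sketched directly. The Gaussian kernel, the 2% cutoff rule and the toy points below are common illustrative choices, not the paper's settings.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform

        def fsfdp_centers(points, percent=2.0, n_centers=2):
            D = squareform(pdist(points))
            dc = np.percentile(D[D > 0], percent)            # cutoff distance
            rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0   # Gaussian local density
            delta = np.empty(len(points))
            for i in range(len(points)):
                higher = np.where(rho > rho[i])[0]           # denser points
                delta[i] = D[i, higher].min() if len(higher) else D[i].max()
            gamma = rho * delta                              # decision quantity
            return np.argsort(gamma)[::-1][:n_centers]       # cluster-center indices

        rng = np.random.default_rng(4)
        pts = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 5.0])
        print(fsfdp_centers(pts))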

  11. From Mathematical Monsters to Generalized Scale Invariance in Geophysics: Highlights of the Multifractal Saga

    Science.gov (United States)

    Schertzer, D. J.; Tchiguirinskaia, I.; Lovejoy, S.

    2013-12-01

    Fractals and multifractals are very illustrative of the profound synergies between mathematics and geophysics. The book 'Fractal Geometry of Nature' (Mandelbrot, 1982) brilliantly demonstrated the genericity in geophysics of geometric forms like the Cantor set, Peano curve and Koch snowflake, which were once considered mathematical monsters. However, to tame the geophysical monsters (e.g. extreme weather, floods, earthquakes), it was necessary to go beyond geometry and a unique fractal dimension. The concept of the multifractal was coined in the course of rather theoretical debates on intermittency in hydrodynamic turbulence, sometimes with direct links to atmospheric dynamics. The latter required a generalized notion of scale in order to deal both with scale symmetries and with strong anisotropies (e.g. time vs. space, vertical vs. horizontal). It was thus possible to show that the consequences of intermittency are of first order, not just 'corrections' with respect to classical non-intermittent modeling. This was in fact a radical paradigm shift for geophysics: the extreme variability of geophysical fields over wide ranges of scale, which had long been so often acknowledged and deplored, suddenly became handy. Recent illustrations are the possibility to track down in large data sets the Higgs boson of intermittency, i.e. a first-order multifractal phase transition leading to self-organized criticality, and to simulate intermittent vector fields with the help of Lie cascades, based for instance on random Clifford algebras. It is rather significant that this revolution is no longer limited to fundamental and theoretical problems of geophysics, but now touches many applications including environmental management, in particular urban management and resilience. These applications are particularly stimulating when taken in their full complexity.

  12. Finite-Time Boundedness of Markov Jump System with Piecewise-Constant Transition Probabilities via Dynamic Output Feedback Control

    Directory of Open Access Journals (Sweden)

    Bin Yan

    2015-01-01

    Full Text Available This paper first investigates the problem of finite-time boundedness of a Markovian jump system with piecewise-constant transition probabilities via dynamic output feedback control, which leads to both stochastic jumps and deterministic switches. Based on a stochastic Lyapunov functional, the concept of finite-time boundedness, the average dwell time, and the coupling relationship among time delays, several sufficient conditions are established for finite-time boundedness and H∞ filtering finite-time boundedness, ensuring that the system trajectory stays within a prescribed bound. Finally, an example is given to illustrate the efficiency of the proposed method.

  13. Modeling spatial variability of sand-lenses in clay till settings using transition probability and multiple-point geostatistics

    DEFF Research Database (Denmark)

    Kessler, Timo Christian; Nilsson, Bertel; Klint, Knud Erik

    2010-01-01

    The construction of detailed geological models for heterogeneous settings such as clay till is important to describe transport processes, particularly with regard to potential contamination pathways. In low-permeability clay matrices transport is controlled by diffusion, but fractures and sand...... of sand-lenses in clay till. Sand-lenses mainly account for horizontal transport and are prioritised in this study. Based on field observations, the distribution has been modeled using two different geostatistical approaches. One method uses a Markov chain model calculating the transition probabilities...

  14. NDVI, scale invariance and the modifiable areal unit problem: An assessment of vegetation in the Adelaide Parklands.

    Science.gov (United States)

    Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela; Jarchow, Christopher J; Roberts, Dar A

    2017-04-15

    This research addresses the question as to whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has been long recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculation of a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of these differences for images taken from two sensors, a 1.24 m resolution WorldView-3 and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure.
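
    The comparison at the heart of the study reduces to two aggregation orders: the mean of per-pixel NDVI values versus NDVI computed from the mean red and mean near-infrared. The toy reflectances below stand in for pure-vegetation pixels; for such pixels the two means stay close, which is the scale-invariance result reported above.

        import numpy as np

        rng = np.random.default_rng(5)
        red = rng.uniform(0.03, 0.10, size=1000)   # synthetic red reflectances
        nir = rng.uniform(0.40, 0.60, size=1000)   # synthetic NIR reflectances

        method1 = ((nir - red) / (nir + red)).mean()                      # mean of NDVI
        method2 = (nir.mean() - red.mean()) / (nir.mean() + red.mean())   # NDVI of means
        print(round(method1, 4), round(method2, 4), round(method1 - method2, 5))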

  15. A novel and robust rotation and scale invariant structuring elements based descriptor for pedestrian classification in infrared images

    Science.gov (United States)

    Soundrapandiyan, Rajkumar; Chandra Mouli, P. V. S. S. R.

    2016-09-01

    In this paper, a novel and robust rotation and scale invariant structuring elements based descriptor (RSSED) for pedestrian classification in infrared (IR) images is proposed. In addition, a segmentation method using difference of Gaussian (DoG) and horizontal intensity projection is proposed. The three major steps are moving object segmentation, feature extraction, and classification of objects as pedestrian or non-pedestrian. The segmentation result is used to extract the RSSED feature descriptor. To extract features, the segmentation result is encoded using the local directional pattern (LDP). This helps in the identification of local textural patterns. The LDP-encoded image is further quantized adaptively to four levels. Finally, the proposed RSSED is used to formalize the descriptor from the quantized image. A support vector machine (SVM) is employed to classify the moving objects in a given IR image into pedestrian and non-pedestrian classes. The segmentation results show the robustness of the method in extracting moving objects, and the classification results obtained from the SVM classifier show the efficacy of the proposed method.

  16. Content-based image retrieval using scale invariant feature transform and gray level co-occurrence matrix

    Science.gov (United States)

    Srivastava, Prashant; Khare, Manish; Khare, Ashish

    2017-06-01

    The rapid growth of different types of images has posed a great challenge to the scientific community. As the number of images increases every day, it is becoming a challenging task to organize them for efficient and easy access. The field of image retrieval attempts to solve this problem through various techniques. This paper proposes a novel technique of image retrieval by combining the Scale Invariant Feature Transform (SIFT) and a co-occurrence matrix. For construction of the feature vector, SIFT descriptors of gray scale images are computed and normalized using z-score normalization, followed by construction of a Gray-Level Co-occurrence Matrix (GLCM) of the normalized SIFT keypoints. The constructed feature vector is matched with those of the images in the database to retrieve visually similar images. The proposed method is tested on the Corel-1K dataset and the performance is measured in terms of precision and recall. The experimental results demonstrate that the proposed method outperforms some of the other state-of-the-art methods.
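
    The feature construction can be sketched compactly. OpenCV's SIFT and scikit-image's graycomatrix stand in for the authors' implementations, and the quantization depth and GLCM distance/angle are illustrative choices rather than the paper's exact settings.

        import cv2
        import numpy as np
        from skimage.feature import graycomatrix

        def sift_glcm_feature(gray_img, levels=8):
            # SIFT descriptors -> z-score normalization -> quantization ->
            # co-occurrence matrix over the quantized descriptor array.
            _, desc = cv2.SIFT_create().detectAndCompute(gray_img, None)
            if desc is None:                       # no keypoints found
                return np.zeros(levels * levels)
            z = (desc - desc.mean()) / (desc.std() + 1e-9)
            q = ((z - z.min()) / (np.ptp(z) + 1e-9) * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                                symmetric=True, normed=True)
            return glcm.ravel()                    # feature vector for matching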

  17. Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication

    Science.gov (United States)

    Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido

    2013-01-01

    Adaptive radiation therapy (ART) aims at compensating for anatomic and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as state-of-the-art methods to automate the process, but the lack of DIR quality control methods hinders their introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties under shape-preserving and deformable transformations were studied on a computational phantom, yielding residual matching errors below the voxel dimension. Then a clinical dataset composed of 19 head and neck ART patients was used to quantify the performance in ART treatments. For goal (1), the results demonstrated SIFT's potential as an operator-independent DIR quality assessment metric. We measured DIR group systematic residual errors up to 0.66 mm against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting to the presence of anatomical deformations. The correct automated identification of 18 patients who might benefit from ART out of the total 22 cases using SIFT demonstrated its capabilities toward achieving goal (2).

  18. NDVI, scale invariance and the modifiable areal unit problem: An assessment of vegetation in the Adelaide Parklands

    Science.gov (United States)

    Nouri, Hamideh; Anderson, Sharolyn; Sutton, Paul; Beecham, Simon; Nagler, Pamela L.; Jarchow, Christopher J.; Roberts, Dar A.

    2017-01-01

    This research addresses the question as to whether or not the Normalised Difference Vegetation Index (NDVI) is scale invariant (i.e. constant over spatial aggregation) for pure pixels of urban vegetation. It has been long recognized that there are issues related to the modifiable areal unit problem (MAUP) pertaining to indices such as NDVI and images at varying spatial resolutions. These issues are relevant to using NDVI values in spatial analyses. We compare two different methods of calculation of a mean NDVI: 1) using pixel values of NDVI within feature/object boundaries and 2) first calculating the mean red and mean near-infrared across all feature pixels and then calculating NDVI. We explore the nature and magnitude of these differences for images taken from two sensors, a 1.24 m resolution WorldView-3 and a 0.1 m resolution digital aerial image. We apply these methods over an urban park located in the Adelaide Parklands of South Australia. We demonstrate that the MAUP is not an issue for calculation of NDVI within a sensor for pure urban vegetation pixels. This may prove useful for future rule-based monitoring of the ecosystem functioning of green infrastructure.

  19. ENHANCED MAGNETIC COMPRESSIBILITY AND ISOTROPIC SCALE INVARIANCE AT SUB-ION LARMOR SCALES IN SOLAR WIND TURBULENCE

    Energy Technology Data Exchange (ETDEWEB)

    Kiyani, K. H.; Fauvarque, O. [Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ (United Kingdom); Chapman, S. C.; Hnat, B. [Centre for Fusion, Space and Astrophysics, University of Warwick, Coventry CV4 7AL (United Kingdom); Sahraoui, F. [Laboratoire de Physique des Plasmas, Observatoire de Saint-Maur, F-94107 Saint-Maur-Des-Fosses (France); Khotyaintsev, Yu. V., E-mail: k.kiyani@imperial.ac.uk [Swedish Institute of Space Physics, SE-75121 Uppsala (Sweden)

    2013-01-20

    The anisotropic nature of solar wind magnetic turbulence fluctuations is investigated scale by scale using high cadence in situ magnetic field measurements from the Cluster and ACE spacecraft missions. The data span five decades in scales from the inertial range to the electron Larmor radius. In contrast to the inertial range, there is a successive increase toward isotropy between parallel and transverse power at scales below the ion Larmor radius, with isotropy being achieved at the electron Larmor radius. In the context of wave-mediated theories of turbulence, we show that this enhancement in magnetic fluctuations parallel to the local mean background field is qualitatively consistent with the magnetic compressibility signature of kinetic Alfven wave solutions of the linearized Vlasov equation. More generally, we discuss how these results may arise naturally due to the prominent role of the Hall term at sub-ion Larmor scales. Furthermore, computing higher-order statistics, we show that the full statistical signature of the fluctuations at scales below the ion Larmor radius is that of a single isotropic globally scale-invariant process distinct from the anisotropic statistics of the inertial range.

  20. Enhanced Magnetic Compressibility and Isotropic Scale Invariance at Sub-ion Larmor Scales in Solar Wind Turbulence

    Science.gov (United States)

    Kiyani, K. H.; Chapman, S. C.; Sahraoui, F.; Hnat, B.; Fauvarque, O.; Khotyaintsev, Yu. V.

    2013-01-01

    The anisotropic nature of solar wind magnetic turbulence fluctuations is investigated scale by scale using high cadence in situ magnetic field measurements from the Cluster and ACE spacecraft missions. The data span five decades in scales from the inertial range to the electron Larmor radius. In contrast to the inertial range, there is a successive increase toward isotropy between parallel and transverse power at scales below the ion Larmor radius, with isotropy being achieved at the electron Larmor radius. In the context of wave-mediated theories of turbulence, we show that this enhancement in magnetic fluctuations parallel to the local mean background field is qualitatively consistent with the magnetic compressibility signature of kinetic Alfvén wave solutions of the linearized Vlasov equation. More generally, we discuss how these results may arise naturally due to the prominent role of the Hall term at sub-ion Larmor scales. Furthermore, computing higher-order statistics, we show that the full statistical signature of the fluctuations at scales below the ion Larmor radius is that of a single isotropic globally scale-invariant process distinct from the anisotropic statistics of the inertial range.

  1. Optical trapping of ultracold dysprosium atoms: transition probabilities, dynamic dipole polarizabilities and van der Waals $C_6$ coefficients

    CERN Document Server

    Li, Hui; Dulieu, Olivier; Nascimbene, Sylvain; Lepers, Maxence

    2016-01-01

    The efficiency of optical trapping of ultracold atoms depends on the atomic dynamic dipole polarizability governing the atom-field interaction. In this article, we have calculated the real and imaginary parts of the dynamic dipole polarizability of dysprosium in the ground and first excited level. Due to the high electronic angular momentum of those two states, the polarizabilities possess scalar, vector and tensor contributions, which we have computed, on a wide range of trapping wavelengths, using the sum-over-states formula. Using the same formalism, we have also calculated the $C_6$ coefficients characterizing the van der Waals interaction between two dysprosium atoms in the two lowest levels. We have computed the energies of excited states and the transition probabilities appearing in the sums using a combination of ab initio and least-squares-fitting techniques provided by the Cowan codes and extended in our group. Regarding the real part of the polarizability, for field frequencies far from atomic...
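
    The sum-over-states formula itself is short once transition energies and dipole matrix elements are known. The sketch below evaluates the scalar part of the real polarizability in atomic units, with angular-momentum prefactors omitted for brevity; the numerical values are made-up placeholders, not dysprosium data.

        import numpy as np

        dE = np.array([0.06, 0.09, 0.15])   # excitation energies (a.u., hypothetical)
        d2 = np.array([4.0, 1.5, 0.7])      # squared dipole matrix elements (a.u.)

        def alpha(omega):
            # Scalar sum-over-states polarizability (prefactors omitted):
            # alpha(omega) = sum_k 2 * dE_k * |d_k|^2 / (dE_k^2 - omega^2)
            return np.sum(2.0 * dE * d2 / (dE ** 2 - omega ** 2))

        for w in (0.0, 0.03, 0.057):        # static limit and near-resonance
            print(w, round(alpha(w), 2))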

  2. Relativistic Many-body Moller-Plesset Perturbation Theory Calculations of the Energy Levels and Transition Probabilities in Na- to P-like Xe Ions

    Energy Technology Data Exchange (ETDEWEB)

    Vilkas, M J; Ishikawa, Y; Trabert, E

    2007-03-27

    Relativistic multireference many-body perturbation theory calculations have been performed on Xe43+-Xe39+ ions, resulting in energy levels, electric dipole transition probabilities, and level lifetimes. The second-order many-body perturbation theory calculation of energy levels included mass shifts, the frequency-dependent Breit correction and Lamb shifts. The calculated transition energies and E1 transition rates are used to present synthetic spectra in the extreme ultraviolet range for some of the Xe ions.

  3. Fracture Scale-Invariance in Antarctic Shelf Ice: Wing and Comb Crevasses along Shear Faults within the Minna Bluff Region

    Science.gov (United States)

    Arcone, S. A.

    2016-12-01

    Wing and comb crevasses at the 0.1-10 km scale are associated with three of five large rifts presently off Minna Bluff on the western side of the Ross Ice Shelf, Antarctica. Their similarity to millimeter-scale parent-wing structures that grow from random fractures in biaxially compressed polycrystalline ice specimens demonstrates fracture scale-invariance for these phenomena, as previously shown for sea ice at multi-km scale. Historical WorldView and Landsat images show that these rifts, at least partially filled with marine ice, initiate in a small parent-double wing structure near the Bluff. The tip of the east wing then grows to multi-km lengths eastward into the shelf as it is wedged open by sea water and marine ice to form a rift. The northern edge of each rift is now a right lateral transform fault, with motion caused by expansion rather than by compression in the crystallographic case. RADARSAT imagery differentiates these shear faults from true crevasses. Because of this shear the north edge becomes a new parent. On its relatively faster north side, these new parents have acutely angled stick-slip crevasses. 25 m of movement along the fault relative to the south side occurred over a 20 month period from 2010 to 2011. On the relatively slower south side, as in the crystallographic case the shear has generated multi-km-long curvilinear wings starting at the fault tips, curvilinear wing mouth crevasses that eventually converge far to the east, and comb crevasses (known as teeth) that parallel the wings, all starting more nearly orthogonally to the fault direction. Wings and combs can be as long as parents. Wings are also characterized by a shear fault from which new combs grow. Such evidence for shear along wings has not been seen in SEM crystallographic images, so that the Minna Bluff scale appears to have revealed this new phenomenon. By late 2015 shear crevasses beneath the north parent edge of this one particular rift had virtually closed, which reflects

  4. The FERRUM project: Experimental lifetimes and transition probabilities from highly excited even 4d levels in Fe ii

    Science.gov (United States)

    Hartman, H.; Nilsson, H.; Engström, L.; Lundberg, H.

    2015-12-01

    We report lifetime measurements of the 6 levels in the 3d6(5D)4d e6G term in Fe ii at an energy of 10.4 eV, and f-values for 14 transitions from the investigated levels. The lifetimes were measured using time-resolved laser-induced fluorescence on ions in a laser-produced plasma. The high excitation energy, and the fact that the levels have the same parity as the low-lying states directly populated in the plasma, necessitated the use of a two-photon excitation scheme. The probability for this process is greatly enhanced by the presence of the 3d6(5D)4p z6F levels at roughly half the energy difference. The f-values are obtained by combining the experimental lifetimes with branching fractions derived using relative intensities from a hollow cathode discharge lamp recorded with a Fourier transform spectrometer. The data are important for benchmarking atomic calculations of astrophysically important quantities and useful for spectroscopy of hot stars.

  5. Geologic heterogeneity and a comparison of two geostatistical models: Sequential Gaussian and transition probability-based geostatistical simulation

    Science.gov (United States)

    Lee, Si-Yong; Carle, Steven F.; Fogg, Graham E.

    2007-09-01

    A covariance-based model-fitting approach is often considered valid to represent field spatial variability of hydraulic properties. This study examines the representation of geologic heterogeneity in two types of geostatistical models under the same mean and spatial covariance structure, and subsequently its effect on the hydraulic response to a pumping test, based on 3D high-resolution numerical simulation and field data. Two geostatistical simulation methods, sequential Gaussian simulation (SGS) and transition probability indicator simulation (TPROGS), were applied to create conditional realizations of alluvial fan aquifer systems in the Lawrence Livermore National Laboratory (LLNL) area. The simulated K fields were then used in a numerical groundwater flow model to simulate a pumping test performed at the LLNL site. Spatial connectivity measures of high-K materials (channel facies) captured the connectivity characteristics of each geostatistical model and revealed that the TPROGS model created an aquifer (channel) network having greater lateral connectivity. SGS realizations neglected important geologic structures associated with channel and overbank (levee) facies, even though the covariance model used to create these realizations provided excellent fits to sample covariances computed from exhaustive samplings of TPROGS realizations. The observed drawdown response in monitoring wells during a pumping test and its numerical simulation show that, in an aquifer system with a strongly connected network of high-K materials, the Gaussian approach could not reproduce the drawdown behavior found in the TPROGS case. Overall, the simulated drawdown responses demonstrate significant disagreement between TPROGS and SGS realizations. This study showed that important geologic characteristics may not be captured by a spatial covariance model, even if that model is exhaustively determined and closely fits the exponential function.
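
    The transition-probability idea underlying TPROGS-style simulation can be illustrated with a one-dimensional Markov chain over facies. The three facies and the transition matrix below are invented placeholders, not the LLNL parameters.

        import numpy as np

        FACIES = ['channel', 'levee', 'floodplain']
        T = np.array([[0.80, 0.15, 0.05],   # row: facies of the current cell
                      [0.20, 0.60, 0.20],   # column: facies of the next cell
                      [0.05, 0.15, 0.80]])

        def simulate_column(n_cells=40, start=2, seed=6):
            rng = np.random.default_rng(seed)
            states = [start]
            for _ in range(n_cells - 1):
                states.append(int(rng.choice(3, p=T[states[-1]])))
            return [FACIES[s] for s in states]

        print(' '.join(f[0] for f in simulate_column()))   # e.g. f f l c c c ...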

  6. Scale invariance of the η-deformed AdS5×S5 superstring, T-duality and modified type II equations

    Directory of Open Access Journals (Sweden)

    G. Arutyunov

    2016-02-01

    Full Text Available We consider the ABF background underlying the η-deformed AdS5×S5 sigma model. This background fails to satisfy the standard IIB supergravity equations which indicates that the corresponding sigma model is not Weyl invariant, i.e. does not define a critical string theory in the usual sense. We argue that the ABF background should still define a UV finite theory on a flat 2d world-sheet implying that the η-deformed model is scale invariant. This property follows from the formal relation via T-duality between the η-deformed model and the one defined by an exact type IIB supergravity solution that has 6 isometries albeit broken by a linear dilaton. We find that the ABF background satisfies candidate type IIB scale invariance conditions which for the R–R field strengths are of the second order in derivatives. Surprisingly, we also find that the ABF background obeys an interesting modification of the standard IIB supergravity equations that are first order in derivatives of R–R fields. These modified equations explicitly depend on Killing vectors of the ABF background and, although not universal, they imply the universal scale invariance conditions. Moreover, we show that it is precisely the non-isometric dilaton of the T-dual solution that leads, after T-duality, to modification of type II equations from their standard form. We conjecture that the modified equations should follow from κ-symmetry of the η-deformed model. All our observations apply also to η-deformations of AdS3×S3×T4 and AdS2×S2×T6 models.

  7. Transition probabilities between levels of K and K+; Probabilidades de transicion entre niveles en el atomo de potasio y en el ion K+

    Energy Technology Data Exchange (ETDEWEB)

    Campos, J.; Martin, A.

    1984-07-01

    In this work, transition probabilities between levels with n < 11 for K and for the known levels of K+ are calculated. Two computer programs based on the Coulomb approximation and the most suitable coupling schemes have been used. Lifetimes of all these levels are also calculated. (Author)

  8. Estimating the transitional probabilities of smoking stages with cross-sectional data and 10-year projection for smoking behavior in Iranian adolescents

    Directory of Open Access Journals (Sweden)

    Ahmad Khosravi

    2016-01-01

    Conclusions: The present study showed a moderate but concerning prevalence of current smoking among Iranian adolescents and introduced a novel method for estimating transitional probabilities from a cross-sectional study. The increasing trend of cigarette use among adolescents indicates the necessity of paying more attention to this group.

  9. The sticking probability for H-2 on some transition metals at a hydrogen pressure of 1 bar

    DEFF Research Database (Denmark)

    Johansson, Martin; Lytken, Ole; Chorkendorff, Ib

    2008-01-01

    The sticking probability for hydrogen on films of Co, Ni, Cu, Ru, Rh, Pd, Ir, and Pt supported on graphite has been measured at a hydrogen pressure of 1 bar in the temperature range 40–200 °C. The sticking probability is found to increase in the order Ni, Co, Ir, Pd, Pt, Rh, and Ru at temperatures...... below 150 °C, whereas at higher temperatures, the sticking probability for Pd is higher than for Pt. The sticking probability for Cu is below the detection limit of the measurement. The measured sticking probabilities are slightly lower than those obtained at high hydrogen coverage under ultrahigh...... vacuum conditions. This could be a consequence of the higher hydrogen pressure used here. The apparent desorption energies extracted from the steady-state desorption rate are found to agree reasonably well with published values for the heat of adsorption at high coverage. However, the sticking...

  10. Expressions for Neutrino Wave Functions and Transition Probabilities at Three-Neutrino Oscillations in Vacuum and Some of Their Applications

    CERN Document Server

    Beshtoev, K M

    2006-01-01

    I have considered three-neutrino vacuum transitions and oscillations in the general case and obtained expressions for neutrino wave functions in three cases: with CP violation, without CP violation and in the case when direct

  11. Method for measurement of transition probabilities by laser-induced breakdown spectroscopy based on CSigma graphs - Application to Ca II spectral lines

    Science.gov (United States)

    Aguilera, J. A.; Aragón, C.; Manrique, J.

    2015-07-01

    We propose a method for determination of transition probabilities by laser-induced breakdown spectroscopy that avoids the error due to self-absorption. The method relies on CSigma graphs, a generalization of curves of growth which allows including several lines of various elements in the same ionization state. CSigma graphs are constructed including reference lines of an emitting species with well-known transition probabilities, together with the lines of interest, both in the same ionization state. The samples are fused glass disks prepared from small concentrations of compounds. When the method is applied, the concentration of the element of interest in the sample must be controlled to avoid the failure of the homogeneous plasma model. To test the method, the transition probabilities of 9 Ca II lines arising from the 4d, 5s, 5d and 6s configurations are measured using Fe II reference lines. The data for 5 of the studied lines, mainly from the 5d and 6s configurations, had not been measured previously.

  12. Probability model of solid to liquid-like transition of a fluid suspension after a shear flow onset

    Czech Academy of Sciences Publication Activity Database

    Nouar, C.; Říha, Pavel

    2008-01-01

    Roč. 34, č. 5 (2008), s. 477-483 ISSN 0301-9322 R&D Projects: GA AV ČR IAA200600803 Institutional research plan: CEZ:AV0Z20600510 Keywords : laminar suspension flow * liquid-liquid interface * probability model Subject RIV: BK - Fluid Dynamics Impact factor: 1.497, year: 2008

  13. Estimating the Transitional Probabilities of Smoking Stages with Cross-sectional Data and 10-Year Projection for Smoking Behavior in Iranian Adolescents.

    Science.gov (United States)

    Khosravi, Ahmad; Mansournia, Mohammad Ali; Mahmoodi, Mahmood; Pouyan, Ali Akbar; Holakouie-Naieni, Kourosh

    2016-01-01

    Cigarette smoking is one of the most important health-related risk factors in terms of morbidity and mortality. In this study, we introduce a new method for deriving the transitional probabilities of smoking stages from a cross-sectional study and simulate long-term smoking behavior in adolescents. In this 2010 study, a total of 4853 high school students were randomly selected and completed a self-administered questionnaire about cigarette smoking. We used the smoothed age- and sex-specific prevalence of smoking stages in a probabilistic discrete event system to estimate the transitional probabilities. A nonhomogeneous discrete-time Markov chain analysis was used to model the progression of smoking over the 10 years ahead in the same population. The mean age of the students was 15.69 ± 0.73 years (range: 14-19). The smoothed prevalence proportion of current smoking varies between 3.58 and 26.14%. The age-adjusted odds of initiation in boys are 8.9 (95% confidence interval [CI]: 7.9-10.0) times the odds of initiation of smoking in girls. Our study predicted that the prevalence proportion of current smokers would increase from 7.55% in 2010 to 20.31% (95% CI: 19.44-21.37) by 2019. The present study showed a moderate but concerning prevalence of current smoking in Iranian adolescents and introduced a novel method for estimating transitional probabilities from a cross-sectional study. The increasing trend of cigarette use among adolescents indicates the necessity of paying more attention to this group.
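
    The projection step is a repeated matrix-vector product with year-dependent transition matrices. In the sketch below the states, the drift in the initiation probability and the starting distribution are all invented placeholders, not the estimates of the study.

        import numpy as np

        STATES = ['never', 'experimenter', 'current']

        def T(year):
            # Nonhomogeneous chain: the initiation probability drifts upward
            # with calendar year (an assumed trend for illustration).
            init = 0.05 + 0.002 * year
            return np.array([[1 - init, init, 0.00],
                             [0.00, 0.85, 0.15],
                             [0.00, 0.05, 0.95]])

        p = np.array([0.80, 0.12, 0.08])   # assumed initial state distribution
        for year in range(10):
            p = p @ T(year)
        print(dict(zip(STATES, p.round(3))))   # projected prevalence after 10 years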

  14. HMI Data Driven Magnetohydrodynamic Model Predicted Active Region Photospheric Heating Rates: Their Scale Invariant, Flare Like Power Law Distributions, and Their Possible Association With Flares

    Science.gov (United States)

    Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.

    2017-01-01

    A data driven, near photospheric, 3D, non-force-free magnetohydrodynamic model predicts time series of the complete current density, and the resistive heating rate Q at the photosphere in neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of the magnetic field B observed by the Helioseismic and Magnetic Imager on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series for B in every AR pixel; errors in B due to these periods can be significant. The number of occurrences N(q) of values of Q >= q for each AR time series is found to be a scale invariant power law distribution, N(q) ∝ q^(-s), above an AR dependent threshold value of q, where 0.39 ≲ s ≲ 0.52, with little variation among ARs. The number of occurrences N(E) of values of the corresponding NLR heating energy >= E obeys the same type of distribution, N(E) ∝ E^(-S), above an AR dependent threshold value of E, with 0.38 ≲ S ≲ 0.60, also with little variation among ARs. Within error margins the ranges of s and S are nearly identical. This strong similarity between N(q) and N(E) suggests a fundamental connection between the process that drives coronal flares and the process that drives photospheric NLR heating rates in ARs. In addition, the results suggest it is plausible that spikes in Q, several orders of magnitude above background values, are correlated with times of the subsequent occurrence of M or X flares.
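
    Estimating such a tail exponent from a time series is a standard exercise. The sketch below draws synthetic Pareto data with a known exponent, standing in for the model's heating-rate values, and recovers s with the Hill (maximum-likelihood) estimator for the complementary cumulative distribution N(q) ∝ q^(-s).

        import numpy as np

        rng = np.random.default_rng(7)
        s_true, q_min = 0.45, 1.0
        Q = q_min * rng.pareto(s_true, size=20000) + q_min   # CCDF ~ (q/q_min)^(-s_true)

        tail = Q[Q >= q_min]
        s_hat = len(tail) / np.log(tail / q_min).sum()       # Hill estimator
        print(round(s_hat, 3))                               # close to 0.45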

  15. The neolithic demographic transition in Europe: correlation with juvenility index supports interpretation of the summed calibrated radiocarbon date probability distribution (SCDPD) as a valid demographic proxy.

    Directory of Open Access Journals (Sweden)

    Sean S Downey

    Full Text Available Analysis of the proportion of immature skeletons recovered from European prehistoric cemeteries has shown that the transition to agriculture after 9000 BP triggered a long-term increase in human fertility. Here we compare the largest analysis of European cemeteries to date with an independent line of evidence, the summed calibrated date probability distribution of radiocarbon dates (SCDPD) from archaeological sites. Our cemetery reanalysis confirms increased growth rates after the introduction of agriculture; the radiocarbon analysis also shows this pattern, and a significant correlation between both lines of evidence confirms the demographic validity of SCDPDs. We analyze the areal extent of Neolithic enclosures and demographic data from ethnographically known farming and foraging societies and we estimate differences in population levels at individual sites. We find little effect on the overall shape and precision of the SCDPD and we observe a small increase in the correlation with the cemetery trends. The SCDPD analysis supports the hypothesis that the transition to agriculture dramatically increased demographic growth, but it was followed within centuries by a general pattern of collapse even after accounting for higher settlement densities during the Neolithic. The study supports the unique contribution of SCDPDs as a valid demographic proxy for the demographic patterns associated with early agriculture.
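
    Constructing an SCDPD amounts to summing one normalized probability density per radiocarbon date over a calendar-year grid. In the sketch below Gaussian densities stand in for real calibration-curve output, and the dates, errors and grid are synthetic.

        import numpy as np

        years = np.arange(6000, 10001)              # calendar years BP (grid)
        rng = np.random.default_rng(8)
        dates = rng.uniform(6500, 9500, size=300)   # hypothetical site dates
        errors = rng.uniform(40, 120, size=300)     # hypothetical 1-sigma errors

        scdpd = np.zeros(years.size)
        for mu, sd in zip(dates, errors):
            pdf = np.exp(-0.5 * ((years - mu) / sd) ** 2)
            scdpd += pdf / pdf.sum()                # each date contributes mass 1
        print(years[scdpd.argmax()], round(scdpd.max(), 4))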

  16. The Neuronal Transition Probability (NTP) Model for the Dynamic Progression of Non-REM Sleep EEG: The Role of the Suprachiasmatic Nucleus

    CERN Document Server

    Merica, H

    2011-01-01

    Little attention has gone into linking to its neuronal substrates the dynamic structure of non-rapid-eye-movement (NREM) sleep, defined as the pattern of time-course power in all frequency bands across an entire episode. Using the spectral power time-courses in the sleep electroencephalogram (EEG), we showed in the typical first episode, several moves towards-and-away from deep sleep, each having an identical pattern linking the major frequency bands beta, sigma and delta. The neuronal transition probability model (NTP) - in fitting the data well - successfully explained the pattern as resulting from stochastic transitions of the firing-rates of the thalamically-projecting brainstem-activating neurons, alternating between two steady dynamic-states (towards-and-away from deep sleep) each initiated by a so-far unidentified flip-flop. The aims here are to identify this flip-flop and to demonstrate that the model fits well all NREM episodes, not just the first. Using published data on suprachiasmatic nucleus (SCN...

  17. The neuronal transition probability (NTP) model for the dynamic progression of non-REM sleep EEG: the role of the suprachiasmatic nucleus.

    Directory of Open Access Journals (Sweden)

    Helli Merica

    Full Text Available Little attention has gone into linking to its neuronal substrates the dynamic structure of non-rapid-eye-movement (NREM) sleep, defined as the pattern of time-course power in all frequency bands across an entire episode. Using the spectral power time-courses in the sleep electroencephalogram (EEG), we showed in the typical first episode, several moves towards-and-away from deep sleep, each having an identical pattern linking the major frequency bands beta, sigma and delta. The neuronal transition probability model (NTP) - in fitting the data well - successfully explained the pattern as resulting from stochastic transitions of the firing-rates of the thalamically-projecting brainstem-activating neurons, alternating between two steady dynamic-states (towards-and-away from deep sleep) each initiated by a so-far unidentified flip-flop. The aims here are to identify this flip-flop and to demonstrate that the model fits well all NREM episodes, not just the first. Using published data on suprachiasmatic nucleus (SCN) activity we show that the SCN has the information required to provide a threshold-triggered flip-flop for TIMING the towards-and-away alternations, information provided by sleep-relevant feedback to the SCN. NTP then determines the PATTERN of spectral power within each dynamic-state. NTP was fitted to individual NREM episodes 1-4, using data from 30 healthy subjects aged 20-30 years, and the quality of fit for each NREM measured. We show that the model fits well all NREM episodes and the best-fit probability-set is found to be effectively the same in fitting all subject data. The significant model-data agreement, the constant probability parameter and the proposed role of the SCN add considerable strength to the model. With it we link for the first time findings at cellular level and detailed time-course data at EEG level, to give a coherent picture of NREM dynamics over the entire night and over hierarchic brain levels all the way from the SCN

  18. Self-Organized Bistability Associated with First-Order Phase Transitions

    Science.gov (United States)

    di Santo, Serena; Burioni, Raffaella; Vezzani, Alessandro; Muñoz, Miguel A.

    2016-06-01

    Self-organized criticality elucidates the conditions under which physical and biological systems tune themselves to the edge of a second-order phase transition, with scale invariance. Motivated by the empirical observation of bimodal distributions of activity in neuroscience and other fields, we propose and analyze a theory for the self-organization to the point of phase coexistence in systems exhibiting a first-order phase transition. It explains the emergence of regular avalanches with attributes of scale invariance that coexist with huge anomalous ones, with realizations in many fields.

  19. New Analysis Method Application in Metallographic Images through the Construction of Mosaics Via Speeded Up Robust Features and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Pedro Pedrosa Rebouças Filho

    2015-06-01

    results and expediting the decision making process. Two different methods are proposed: one using the Scale Invariant Feature Transform (SIFT), and the second using the Speeded Up Robust Features (SURF) extractor. Although slower, the SIFT method is more stable and performs better than the SURF method, and can be applied in real applications. The best results were obtained using SIFT, with Peak Signal-to-Noise Ratio = 61.38, mean squared error = 0.048 and mean structural similarity = 0.999, and a processing time of 4.91 seconds for mosaic building. The proposed methodology shows promise in aiding specialists during the analysis of metallographic images.

  20. Handbook of probability

    CERN Document Server

    Florescu, Ionut

    2013-01-01

    THE COMPLETE COLLECTION NECESSARY FOR A CONCRETE UNDERSTANDING OF PROBABILITY Written in a clear, accessible, and comprehensive manner, the Handbook of Probability presents the fundamentals of probability with an emphasis on the balance of theory, application, and methodology. Utilizing basic examples throughout, the handbook expertly transitions between concepts and practice to allow readers an inclusive introduction to the field of probability. The book provides a useful format with self-contained chapters, allowing the reader easy and quick reference. Each chapter includes an introduction

  1. High-Polarity Solvents Decreasing the Two-Photon Transition Probability of Through-Space Charge-Transfer Systems - A Surprising In Silico Observation.

    Science.gov (United States)

    Alam, Md Mehboob; Chattopadhyaya, Mausumi; Chakrabarti, Swapan; Ruud, Kenneth

    2012-04-19

    In this Letter, we address the question as to why larger two-photon absorption cross sections are observed in nonpolar than in polar solvents for through-space charge-transfer (TSCT) systems such as [2.2]-paracyclophane derivatives. In order to answer this question, we have performed ab initio calculations on two well-known TSCT systems, namely, a [2.2]-paracyclophane derivative and a molecular tweezer-trinitrofluorenone complex, and found that the two-photon transition probability values of these systems decrease with increasing solvent polarity. To rationalize this result, we have analyzed the role of the different optical channels associated with the two-photon process and noticed that, in TSCTs, the interference between the optical channels is mostly destructive and that its magnitude increases with increasing solvent polarity. Moreover, it is also found that a destructive interference may sometimes even become a constructive one in a nonpolar solvent, making the two-photon activity of TSCTs in polar solvents less than that in nonpolar solvents.

  2. No Magic Bullet: A Theory-Based Meta-Analysis of Markov Transition Probabilities in Studies of Service Systems for Persons With Mental Disabilities.

    Science.gov (United States)

    Leff, Hugh Stephen; Chow, Clifton M; Graves, Stephen C

    2017-03-01

    A random-effects meta-analysis of studies that used Markov transition probabilities (TPs) to describe outcomes for mental health service systems of differing quality for persons with serious mental illness was implemented to improve the scientific understanding of systems performance, for use in planning simulations that project service system costs and outcomes over time, and to test a theory of how outcomes differ across systems varying in quality. Nineteen systems described in 12 studies were coded as basic (B), maintenance (M), and recovery oriented (R) on the basis of descriptions of services provided. TPs for studies were aligned with a common functional-level framework, converted to a one-month time period, synthesized, and compared with theory-based expectations. Meta-regression was employed to explore associations between TPs and characteristics of service recipients and studies. R systems performed better than M and B systems. However, M systems did not perform better than B systems. All systems showed negative as well as positive TPs. For approximately one-third of synthesized TPs, substantial interstudy heterogeneity was noted. Associations were found between TPs and service recipient and study variables. Conclusions: Conceptualizing systems as B, M, and R has potential for improving scientific understanding and systems planning. R systems appear more effective than B and M systems, although there is no "magic bullet" system for all service recipients. Interstudy heterogeneity indicates the need for common approaches to reporting service recipient states, time periods for TPs, service recipient attributes, and service system characteristics. The TPs found should be used in Markov simulations to project system effectiveness and costs over time.
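
    The synthesis step - random-effects pooling of TPs across studies - can be sketched with the DerSimonian-Laird estimator on logit-transformed probabilities. The per-study TPs and sample sizes below are hypothetical placeholders.

        import numpy as np

        tp = np.array([0.22, 0.31, 0.18, 0.27])   # observed TPs per study (toy)
        n = np.array([120, 85, 200, 60])          # study sample sizes (toy)

        y = np.log(tp / (1 - tp))                 # logit transform
        v = 1.0 / (n * tp * (1 - tp))             # approximate variance of the logit
        w = 1.0 / v
        Q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
        tau2 = max(0.0, (Q - (len(y) - 1)) /
                   (w.sum() - np.sum(w ** 2) / w.sum()))   # between-study variance
        w_star = 1.0 / (v + tau2)
        pooled = np.sum(w_star * y) / w_star.sum()
        print(round(1 / (1 + np.exp(-pooled)), 3))         # pooled TP, probability scale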

  3. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially...

  4. Ruin probabilities

    DEFF Research Database (Denmark)

    Asmussen, Søren; Albrecher, Hansjörg

    The book gives a comprehensive treatment of the classical and modern ruin probability theory. Some of the topics are Lundberg's inequality, the Cramér-Lundberg approximation, exact solutions, other approximations (e.g., for heavy-tailed claim size distributions), finite horizon ruin probabilities, extensions of the classical compound Poisson model to allow for reserve-dependent premiums, Markov-modulation, periodicity, change of measure techniques, phase-type distributions as a computational vehicle and the connection to other applied probability areas, like queueing theory. In this substantially updated and extended second version, new topics include stochastic control, fluctuation theory for Lévy processes, Gerber–Shiu functions and dependence.
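
    The classical compound Poisson model that the book starts from lends itself to a direct Monte Carlo illustration. The sketch below estimates a finite-horizon ruin probability under hypothetical parameter values; it is a minimal illustration, not material from the book.

```python
# Monte Carlo sketch of a finite-horizon ruin probability in the
# classical compound Poisson (Cramér-Lundberg) model.  All parameter
# values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

u, c = 10.0, 1.2             # initial reserve, premium rate
lam, claim_mean = 1.0, 1.0   # Poisson claim rate, mean (exponential) claim size
T, n_paths = 100.0, 5_000    # time horizon, number of simulated paths

ruined = 0
for _ in range(n_paths):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1 / lam)          # next claim arrival
        if t > T:
            break
        claims += rng.exponential(claim_mean)  # claim size
        if u + c * t - claims < 0:             # ruin can only occur at claims
            ruined += 1
            break

print("estimated ruin probability before T:", ruined / n_paths)
```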

  5. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  6. Ignition Probability

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — USFS, State Forestry, BLM, and DOI fire occurrence point locations from 1987 to 2008 were combined and converted into a fire occurrence probability or density grid...

  7. Risk Probabilities

    DEFF Research Database (Denmark)

    Rojas-Nandayapa, Leonardo

    Tail probabilities of sums of heavy-tailed random variables are of major importance in various branches of Applied Probability, such as Risk Theory, Queueing Theory, Financial Management, and are subject to intense research nowadays. To understand their relevance one just needs to think... By doing so, we will obtain a deeper insight into how events involving large values of sums of heavy-tailed random variables are likely to occur.
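
    The way such tail events are dominated by a single large summand (the "single big jump" heuristic for subexponential distributions) can be seen in a few lines of simulation. The Pareto parameters below are illustrative assumptions.

```python
# Sketch: tail probability of a sum of heavy-tailed (Pareto) variables.
# For large x, P(S_n > x) is driven by the single largest summand, and
# both approach n * P(X_1 > x).  Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, alpha, x = 10, 1.5, 100.0

# Pareto(alpha) with scale 1: tail P(X > t) = t**(-alpha) for t >= 1
samples = rng.pareto(alpha, size=(200_000, n)) + 1.0

S = samples.sum(axis=1)
M = samples.max(axis=1)

print("P(S_n > x)     ~", (S > x).mean())
print("P(max > x)     ~", (M > x).mean())
print("n * P(X_1 > x) =", n * x ** -alpha)
```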

  8. Quantum Probabilities as Behavioral Probabilities

    Directory of Open Access Journals (Sweden)

    Vyacheslav I. Yukalov

    2017-03-01

    We demonstrate that behavioral probabilities of human decision makers share many common features with quantum probabilities. This does not imply that humans are some quantum objects, but just shows that the mathematics of quantum theory is applicable to the description of human decision making. The applicability of quantum rules for describing decision making is connected with the nontrivial process of making decisions in the case of composite prospects under uncertainty. Such a process involves deliberations of a decision maker when making a choice. In addition to the evaluation of the utilities of considered prospects, real decision makers also appreciate their respective attractiveness. Therefore, human choice is not based solely on the utility of prospects, but includes the necessity of resolving the utility-attraction duality. In order to justify that human consciousness really functions similarly to the rules of quantum theory, we develop an approach defining human behavioral probabilities as the probabilities determined by quantum rules. We show that quantum behavioral probabilities of humans do not merely explain qualitatively how human decisions are made, but they predict quantitative values of the behavioral probabilities. Analyzing a large set of empirical data, we find good quantitative agreement between theoretical predictions and observed experimental data.

  9. Probability theory

    CERN Document Server

    S Varadhan, S R

    2001-01-01

    This volume presents topics in probability theory covered during a first-year graduate course given at the Courant Institute of Mathematical Sciences. The necessary background material in measure theory is developed, including the standard topics, such as extension theorem, construction of measures, integration, product spaces, Radon-Nikodym theorem, and conditional expectation. In the first part of the book, characteristic functions are introduced, followed by the study of weak convergence of probability distributions. Then both the weak and strong limit theorems for sums of independent random variables...

  10. A random matrix/transition state theory for the probability distribution of state-specific unimolecular decay rates: Generalization to include total angular momentum conservation and other dynamical symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, R.; Miller, W.H.; Moore, C.B. (Department of Chemistry, University of California, and Chemical Sciences Division, Lawrence Berkeley Laboratory, Berkeley, California 94720 (United States)); Polik, W.F. (Department of Chemistry, Hope College, Holland, Michigan 49423 (United States))

    1993-07-15

    A previously developed random matrix/transition state theory (RM/TST) model for the probability distribution of state-specific unimolecular decay rates has been generalized to incorporate total angular momentum conservation and other dynamical symmetries. The model is made into a predictive theory by using a semiclassical method to determine the transmission probabilities of a nonseparable rovibrational Hamiltonian at the transition state. The overall theory gives a good description of the state-specific rates for the D₂CO→D₂+CO unimolecular decay; in particular, it describes the dependence of the distribution of rates on total angular momentum J. Comparison of the experimental values with results of the RM/TST theory suggests that there is mixing among the rovibrational states.

  11. Phase transitions triggered by quantum fluctuations in the inflationary universe

    Science.gov (United States)

    Nagasawa, Michiyasu; Yokoyama, Junichi

    1991-01-01

    The dynamics of a second-order phase transition during inflation, which is induced by time-variation of spacetime curvature, is studied as a natural mechanism to produce topological defects of typical grand unification scales such as cosmic strings or global textures. It is shown that their distribution is almost scale-invariant with small- and large-scale cutoffs. Also discussed is how these cutoffs are given.

  12. The sticking probability for H-2 in presence of CO on some transition metals at a hydrogen pressure of 1 bar

    DEFF Research Database (Denmark)

    Johansson, Martin; Lytken, Ole; Chorkendorff, Ib

    2008-01-01

    The sticking probability for H-2 on Ni, Co, Cu, Rh, Ru, Pd, Ir, and Pt metal films supported on graphite has been investigated in a gas mixture consisting of 10 ppm carbon monoxide in hydrogen at a total pressure of 1 bar in the temperature range 40-200 degrees C. Carbon monoxide inhibits...

  13. Transition probabilities for lines of Cr II, Na II and Sb I by laser-produced plasma atomic emission spectroscopy; Probabilidades de transicion de algunos niveles de Cr II, Na II y Sb I mediante espectroscopia de plasmas producidos por laser

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, A. M.; Ortiz, M.; Campos, J.

    1995-07-01

    Absolute transition probabilities for lines of Cr II, Na II and Sb I were determined by emission spectroscopy of laser-induced plasmas. The plasma was produced by focusing the emission of a pulsed Nd:YAG laser on solid samples containing the atom under study. The light arising from the plasma region was collected by a spectrometer. The detector used was a time-resolved optical multichannel analyzer (OMA III EG&G). The wavelengths of the measured transitions range from 2000 to 4100 Å. The spectral resolution of the system was 0.2 Å. The method can be used on insulating materials such as NaCl crystals and on metallic samples such as Al-Cr and Sn-Sb alloys. To avoid self-absorption effects the alloys were made with low Sb or Cr content. Relative transition probabilities have been determined from measurements of emission-line intensities and were placed on an absolute scale by using, where possible, accurate experimental lifetime values from the literature or theoretical data. From these measurements, values for plasma temperature (8000-24000 K), electron densities (≈10¹⁶ cm⁻³) and self-absorption coefficients have been obtained. (Author) 56 refs.
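
    The last step described above, relative transition probabilities from emission-line intensities, follows the standard LTE relation I ∝ (hc/λ) A g_u exp(-E_u/kT). The sketch below inverts it for relative A values; all line data and the temperature are hypothetical, not the paper's measurements.

```python
# Sketch: relative transition probabilities from measured line
# intensities under LTE.  With I proportional to (hc/lambda)*A*g_u*exp(-E_u/kT),
# relative A values follow A ~ I * lambda * exp(E_u / kT) / g_u.
# All line data and the temperature below are hypothetical.
import numpy as np

k_eV = 8.617e-5    # Boltzmann constant in eV/K
T = 12_000.0       # assumed plasma temperature, K

# (wavelength nm, upper-level energy eV, upper-level degeneracy, intensity)
lines = [
    (283.6, 6.1, 8, 1.00),
    (312.0, 5.8, 6, 0.75),
    (357.9, 5.3, 4, 0.40),
]

A_rel = np.array([I * lam * np.exp(E / (k_eV * T)) / g
                  for lam, E, g, I in lines])
A_rel /= A_rel[0]  # normalized to the first line; an absolute scale would
                   # come from a known lifetime, as the authors describe
print(np.round(A_rel, 3))
```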

  14. Multiconfiguration Dirac-Hartree-Fock energy levels, oscillator strengths, transition probabilities, hyperfine constants and Landé g-factor of intermediate Rydberg series in neutral argon atom

    Science.gov (United States)

    Salah, Wa'el; Hassouneh, Ola

    2017-04-01

    We computed the energy levels, oscillator strengths f_{ij}, the radiative transition rates A_{ij}, the Landé g-factor, the magnetic dipole moment and the electric quadrupole hyperfine constants of the intermediate Rydberg series ns [k]J (4 ≤ n ≤ 6), nd [k]J (3 ≤ n ≤ 4), np [k]J (4 ≤ n ≤ 5) relative to the ground state 3p⁶ ¹S₀ for neutral argon atom spectra. The values are obtained in the framework of the multiconfiguration Dirac-Hartree-Fock (MCDHF) approach. In this approach, Breit interaction, leading quantum electrodynamics (QED) effects and self-energy correction are taken into account. Moreover, these spectroscopic parameters have been calculated for many levels belonging to the configurations 3p⁵4s, 3p⁵5s, 3p⁵6s, 3p⁵3d, 3p⁵4d, 3p⁵4p, 3p⁵5p as well as for transitions between levels 3p⁵4s-3p⁵4p, 3p⁵4p-3p⁵3d, 3p⁵4p-3p⁵5s, 3p⁵5s-3p⁵5p and 3p⁵5p-3p⁵6s. The large majority of the lines from the 4p-5s, 4p-3d, 5s-5p and 5p-6s transition arrays have been observed and the calculations are consistent with the J-file sum rule. The obtained theoretical values are compared with previous experimental and theoretical data available in the literature. An overall satisfactory agreement is noticed, allowing us to assess the reliability of our data.

  15. Energy levels, oscillator strengths, line strengths, and transition probabilities in Si-like ions of La XLIII, Er LIV, Tm LV, and Yb LVI

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zhan-Bin, E-mail: chenzb008@qq.com [College of Science, National University of Defense Technology, Changsha, Hunan 410073 (China); Ma, Kun [School of Information Engineering, Huangshan University, Huangshan 245041 (China); Wang, Hong-Jian [Chongqing Key Laboratory for Design and Control of Manufacturing Equipment, Chongqing Technology and Business University, Chongqing 40067 (China); Wang, Kai, E-mail: wangkai@hbu.edu.cn [Hebei Key Lab of Optic-electronic Information and Materials, The College of Physics Science and Technology, Hebei University, Baoding 071002 (China); Liu, Xiao-Bin [Department of Physics, Tianshui Normal University, Tianshui 741001 (China); Zeng, Jiao-Long [College of Science, National University of Defense Technology, Changsha, Hunan 410073 (China)

    2017-01-15

    Detailed calculations using the multi-configuration Dirac–Fock (MCDF) method are carried out for the lowest 64 fine-structure levels of the 3s²3p², 3s²3p3d, 3s3p³, 3s3p²3d, 3s²3d², and 3p⁴ configurations in Si-like ions of La XLIII, Er LIV, Tm LV, and Yb LVI. Energies, oscillator strengths, wavelengths, line strengths, and radiative electric dipole transition rates are given for all ions. A parallel calculation using the many-body perturbation theory (MBPT) method is also carried out to assess the accuracy of the present energy levels. Comparisons are performed between these two sets of energy levels, as well as with other available results, showing that they are in good agreement with each other within 0.5%. These high-accuracy results can be used in the modeling and interpretation of astrophysical objects and fusion plasmas. - Highlights: • Energy levels and E1 transition rates of Si-like ions are presented. • Breit interaction and Quantum Electrodynamics effects are discussed. • Present results should be useful in astrophysical applications and plasma modeling.

  16. Scale Invariant Jets: From Blazars to Microquasars

    Science.gov (United States)

    Liodakis, Ioannis; Pavlidou, Vasiliki; Papadakis, Iossif; Angelakis, Emmanouil; Marchili, Nicola; Zensus, Johann A.; Fuhrmann, Lars; Karamanavis, Vassilis; Myserlis, Ioannis; Nestoras, Ioannis; Palaiologou, Efthymios; Readhead, Anthony C. S.

    2017-12-01

    Black holes, anywhere in the stellar-mass to supermassive range, are often associated with relativistic jets. Models suggest that jet production may be a universal process common in all black hole systems regardless of their mass. Although in many cases observations support such hypotheses for microquasars and Seyfert galaxies, little is known regarding whether boosted blazar jets also comply with such universal scaling laws. We use uniquely rich multi-wavelength radio light curves from the F-GAMMA program and the most accurate Doppler factors available to date to probe blazar jets in their emission rest frame with unprecedented accuracy. We identify for the first time a strong correlation between the blazar intrinsic broadband radio luminosity and black hole mass, which extends over ∼9 orders of magnitude down to microquasar scales. Our results reveal the presence of a universal scaling law that bridges the observing and emission rest frames in beamed sources and allows us to effectively constrain jet models. They consequently provide an independent method for estimating the Doppler factor and for predicting expected radio luminosities of boosted jets operating in systems of intermediate or tens of solar mass black holes, which are immediately applicable to cases such as those recently observed by LIGO.

  17. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probability reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  18. Stretching Probability Explorations with Geoboards

    Science.gov (United States)

    Wheeler, Ann; Champion, Joe

    2016-01-01

    Students are faced with many transitions in their middle school mathematics classes. To build knowledge, skills, and confidence in the key areas of algebra and geometry, students often need to practice using numbers and polygons in a variety of contexts. Teachers also want students to explore ideas from probability and statistics. Teachers know…

  19. Scaling Qualitative Probability

    OpenAIRE

    Burgin, Mark

    2017-01-01

    There are different approaches to qualitative probability, which include subjective probability. We developed a representation of qualitative probability based on relational systems, which allows modeling uncertainty by probability structures and is more coherent than existing approaches. This setting makes it possible to prove that any comparative probability is induced by some probability structure (Theorem 2.1), that classical probability is a probability structure (Theorem 2.2) and that i...

  20. Infants Segment Continuous Events Using Transitional Probabilities

    Science.gov (United States)

    Stahl, Aimee E.; Romberg, Alexa R.; Roseberry, Sarah; Golinkoff, Roberta Michnick; Hirsh-Pasek, Kathryn

    2014-01-01

    Throughout their 1st year, infants adeptly detect statistical structure in their environment. However, little is known about whether statistical learning is a primary mechanism for event segmentation. This study directly tests whether statistical learning alone is sufficient to segment continuous events. Twenty-eight 7- to 9-month-old infants…
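
    The statistic at issue, the forward transitional probability TP(y|x) = count(x,y)/count(x), is easy to compute over a syllable stream. The toy "words" below are illustrative assumptions in the spirit of such experiments, not the study's actual stimuli.

```python
# Sketch: forward transitional probabilities TP(y|x) = count(x,y) / count(x)
# over a syllable stream -- the statistic statistical-learning accounts
# assume infants track.  The three "words" are made up for illustration.
from collections import Counter

stream = "bidaku" * 3 + "padoti" * 3 + "golabu" * 3
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

# Within-word TPs come out high (1.0); across word boundaries they drop.
for (x, y), c in sorted(pair_counts.items()):
    print(f"TP({y} | {x}) = {c / first_counts[x]:.2f}")
```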

  1. Forbidden Transition Probabilities of Astrophysical Interest among ...

    Indian Academy of Sciences (India)

    ...tions, lifetime data and selected weighted oscillator strengths are also reported. ... Pasternak & Goldschmidt (1972), and some oscillator strengths were determined by Roberts (1973), who also ... are useful indicators of plasma densities, indicating a need for both experimental and theoretical data for such ...

  2. Contributions to quantum probability

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, Tobias

    2010-06-25

    ...finite set can occur as the outcome distribution of a quantum-mechanical von Neumann measurement with postselection, given that the scalar product between the initial and the final state is known as well as the success probability of the postselection. An intermediate von Neumann measurement can enhance transition probabilities between states such that the error probability shrinks by a factor of up to 2. Chapter 4: A presentation of the category of stochastic matrices. This chapter gives generators and relations for the strict monoidal category of probabilistic maps on finite cardinals (i.e., stochastic matrices). Chapter 5: Convex Spaces: Definition and Examples. We try to promote convex spaces as an abstract concept of convexity which was introduced by Stone as "barycentric calculus". A convex space is a set where one can take convex combinations in a consistent way. By identifying the corresponding Lawvere theory as the category from chapter 4 and using the results obtained there, we give a different proof of a result of Swirszcz which shows that convex spaces can be identified with algebras of a finitary version of the Giry monad. After giving an extensive list of examples of convex sets as they appear throughout mathematics and theoretical physics, we note that there also exist convex spaces that cannot be embedded into a vector space: semilattices are a class of examples of purely combinatorial type. In an information-theoretic interpretation, convex subsets of vector spaces are probabilistic, while semilattices are possibilistic. Convex spaces unify these two concepts. (orig.)

  3. Scale Invariant Power Laws Capture the 3-D Coupling Between Water, Energy and Carbon Budgets Across River Basins of Increasing Horton-Strahler Orders in the Andes-Amazon System

    Science.gov (United States)

    Poveda, G.; Zapata, A. F.

    2016-12-01

    The Andes-Amazon system exhibits complex interactions and feedbacks between hydrological, ecological, biogeochemical and climatic factors over a broad range of temporal and spatial scales. We aim to understand the coupling existing between water, energy and carbon budgets in the Andes-Amazon system, by performing a systematic study of the system for river basins of increasing Horton-Strahler orders, from the headwaters of the Amazon River basin along the Andes (order ω=1 river sub-basins) to the low-lying larger river sub-basins (order ω=10). To that end, this work introduces a 3-D generalization of the Budyko framework that aims to link the water, energy, and carbon budgets in river basins. The newly proposed 3-D non-dimensional space is defined by: (1) the ratio between long-term mean values of Actual Evapotranspiration (AET) and Precipitation (P), α=AET/P, representing the water balance; (2) the ratio between AET and Potential Evapotranspiration (PET), β=AET/PET, representing the energy balance; and (3) the ratio between AET and Aboveground Net Primary Productivity (ANPP), δ=AET/ANPP, representing the carbon budget. We use a 3" Digital Elevation Model (DEM), which allows defining river basins with Horton-Strahler orders from 1 to 10. The long-term water, energy, and carbon budgets are estimated for increasing values of the Horton-Strahler orders during the period 1987-2007. Data sets pertaining to the water balance come from ORE-HYBAM, and potential evapotranspiration (PET) from GLEAM (Global Land-surface Evaporation: the Amsterdam Methodology). Data for the energy budget are from the Surface Radiation Budget (SRB). Data for the carbon budget (annual mean net primary productivity, ANPP, gross primary productivity, GPP, and respiration rates, Rr) come from AMAZALERT and ORCHIDEE (Organizing Carbon and Hydrology In Dynamic EcosystEms), as well as from Flux Tower Data and the LBA project. Our results show that scale invariant power-laws emerge to capture the three 2-D...
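
    The three proposed indices are simple ratios per basin, and the claimed scale invariance amounts to power-law behavior across Horton-Strahler orders. A minimal sketch, with made-up basin values standing in for the real data sets:

```python
# Sketch of the proposed 3-D Budyko-type indices per basin:
#   alpha = AET/P (water), beta = AET/PET (energy), delta = AET/ANPP (carbon),
# plus a crude log-log slope across Horton-Strahler orders as a
# power-law check.  All numbers below are made up for illustration.
import numpy as np

# columns: Horton-Strahler order, AET, P, PET, ANPP (consistent units assumed)
basins = np.array([
    [1.0,   600.0, 1400.0,  900.0, 1000.0],
    [4.0,   750.0, 1700.0, 1000.0, 1300.0],
    [7.0,   900.0, 2100.0, 1100.0, 1600.0],
    [10.0, 1050.0, 2400.0, 1200.0, 1900.0],
])

order, aet, p, pet, anpp = basins.T
alpha, beta, delta = aet / p, aet / pet, aet / anpp

# slope of log(alpha) vs log(order): constant slope = power-law scaling
slope = np.polyfit(np.log(order), np.log(alpha), 1)[0]
print("alpha:", np.round(alpha, 2), " log-log slope:", round(slope, 3))
```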

  4. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  5. Probability 1/e

    Science.gov (United States)

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
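
    A standard instance is the derangement problem: the probability that a random permutation leaves no item in its original place tends to 1/e. A short simulation sketch of this one example (our illustration; the article's three problems are not reproduced here):

```python
# Sketch: the derangement problem, a classic source of 1/e.  The
# probability that a random permutation of n items has no fixed point
# tends to 1/e as n grows.
import math
import random

n, trials = 52, 100_000
hits = 0
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)
    if all(perm[i] != i for i in range(n)):  # no fixed point anywhere
        hits += 1

print("simulated:", hits / trials, "  1/e =", 1 / math.e)
```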

  6. Quantum probability measures and tomographic probability densities

    NARCIS (Netherlands)

    Amosov, GG; Man'ko, [No Value

    2004-01-01

    Using a simple relation of the Dirac delta-function to the generalized theta-function, the relationship between the tomographic probability approach and the quantum probability measure approach to the description of quantum states is discussed. The quantum state tomogram expressed in terms of the...

  7. Agreeing Probability Measures for Comparative Probability Structures

    NARCIS (Netherlands)

    P.P. Wakker (Peter)

    1981-01-01

    It is proved that fine and tight comparative probability structures (where the set of events is assumed to be an algebra, not necessarily a σ-algebra) have agreeing probability measures. Although this was often claimed in the literature, all proofs the author encountered are not valid.

  8. Stationary algorithmic probability

    National Research Council Canada - National Science Library

    Müller, Markus

    2010-01-01

    ..., since their actual values depend on the choice of the universal reference computer. In this paper, we analyze a natural approach to eliminate this machine-dependence. Our method is to assign algorithmic probabilities to the different...

  9. Factual and cognitive probability

    OpenAIRE

    Chuaqui, Rolando

    2012-01-01

    This modification separates the two aspects of probability: probability as a part of physical theories (factual), and as a basis for statistical inference (cognitive). Factual probability is represented by probability structures as in the earlier papers, but now built independently of the language. Cognitive probability is interpreted as a form of "partial truth". The paper also contains a discussion of the Principle of Insufficient Reason and of Bayesian and classical statistical methods, in...

  10. Evaluating probability forecasts

    OpenAIRE

    Lai, Tze Leung; Gross, Shulamith T.; Shen, David Bo

    2011-01-01

    Probability forecasts of events are routinely used in climate predictions, in forecasting default probabilities on bank loans or in estimating the probability of a patient's positive response to treatment. Scoring rules have long been used to assess the efficacy of the forecast probabilities after observing the occurrence, or nonoccurrence, of the predicted events. We develop herein a statistical theory for scoring rules and propose an alternative approach to the evaluation of probability for...

  11. Test of the X(5) symmetry in ¹⁵⁶Dy and ¹⁷⁸Os by measurement of electromagnetic transition probabilities; Test der X(5)-Symmetrie in ¹⁵⁶Dy und ¹⁷⁸Os durch Messung elektromagnetischer Uebergangswahrscheinlichkeiten

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, O.

    2005-07-01

    This work reports on results from two Recoil-Distance-Doppler-Shift lifetime measurements of excited states in ¹⁵⁶Dy and ¹⁷⁸Os. The experiments were carried out at the GASP spectrometer of the Laboratori Nazionali di Legnaro in combination with the Cologne plunger apparatus. The main purpose of the performed experiments was to test the predictions of the X(5) critical point symmetry in these two nuclei. In ¹⁵⁶Dy and ¹⁷⁸Os, 29 lifetimes of excited states were derived using the Differential-Decay-Curve method. In weaker reaction channels the nuclei ¹⁵⁵Dy, ¹⁵⁷Dy and ¹⁷⁷Os were populated. In these nuclei 32 additional lifetimes were measured, most of them for the first time. In order to calculate absolute transition probabilities from the measured lifetimes of the first excited band in ¹⁵⁶Dy, essential branching ratios were derived from the measured data with a very small systematic error (<5%). The most important results can be summarized as follows: lifetimes measured in the first excited band confirm that this nucleus can be located close to the critical point X(5). With model calculations, special criteria of the X(5) model were found that can be used to identify other X(5)-like nuclei. Using these criteria, a new region of X(5)-like nuclei could be suggested within the osmium isotopes in the A=180 mass region. The measured lifetimes in ¹⁷⁸Os confirm the consistency of an X(5) description in these nuclei. A comparison with the well-established X(5)-like nuclei in the N=90 isotones shows agreement with the X(5) description of at least the same quality. (orig.)

  12. What Are Probability Surveys?

    Science.gov (United States)

    The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.

  13. Efficient probability sequence

    OpenAIRE

    Regnier, Eva

    2014-01-01

    A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These results suggest tests for efficiency and ...

  14. Efficient probability sequences

    OpenAIRE

    Regnier, Eva

    2014-01-01

    DRMI working paper A probability sequence is an ordered set of probability forecasts for the same event. Although single-period probabilistic forecasts and methods for evaluating them have been extensively analyzed, we are not aware of any prior work on evaluating probability sequences. This paper proposes an efficiency condition for probability sequences and shows properties of efficient forecasting systems, including memorylessness and increasing discrimination. These res...

  15. Philosophical theories of probability

    CERN Document Server

    Gillies, Donald

    2000-01-01

    The Twentieth Century has seen a dramatic rise in the use of probability and statistics in almost all fields of research. This has stimulated many new philosophical ideas on probability. Philosophical Theories of Probability is the first book to present a clear, comprehensive and systematic account of these various theories and to explain how they relate to one another. Gillies also offers a distinctive version of the propensity theory of probability, and the intersubjective interpretation, which develops the subjective theory.

  16. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    Subjective probabilities play a central role in many economic decisions, and act as an immediate confound of inferences about behavior, unless controlled for. Several procedures to recover subjective probabilities have been proposed, but in order to recover the correct latent probability one must...

  17. Estimating Subjective Probabilities

    DEFF Research Database (Denmark)

    Andersen, Steffen; Fountain, John; Harrison, Glenn W.

    2014-01-01

    Subjective probabilities play a central role in many economic decisions and act as an immediate confound of inferences about behavior, unless controlled for. Several procedures to recover subjective probabilities have been proposed, but in order to recover the correct latent probability one must ...

  18. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned probability zero...

  19. Interpretations of probability

    CERN Document Server

    Khrennikov, Andrei

    2009-01-01

    This is the first fundamental book devoted to non-Kolmogorov probability models. It provides a mathematical theory of negative probabilities, with numerous applications to quantum physics, information theory, complexity, biology and psychology. The book also presents an interesting model of cognitive information reality with flows of information probabilities, describing the process of thinking, social, and psychological phenomena.

  20. Oxygen boundary crossing probabilities.

    Science.gov (United States)

    Busch, N A; Silver, I A

    1987-01-01

    The probability that an oxygen particle will reach a time dependent boundary is required in oxygen transport studies involving solution methods based on probability considerations. A Volterra integral equation is presented, the solution of which gives directly the boundary crossing probability density function. The boundary crossing probability is the probability that the oxygen particle will reach a boundary within a specified time interval. When the motion of the oxygen particle may be described as strongly Markovian, then the Volterra integral equation can be rewritten as a generalized Abel equation, the solution of which has been widely studied.
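
    A Monte Carlo sketch makes the quantity concrete: for a strongly Markovian particle, the boundary crossing probability can be estimated by simulating Brownian paths against a time-dependent boundary. The drift-free dynamics, diffusion constant and linear boundary below are illustrative assumptions, not the paper's solution method (which is the integral equation itself).

```python
# Monte Carlo sketch of a boundary crossing probability: the chance that
# a Brownian particle reaches a time-dependent boundary b(t) = b0 + v*t
# before time T.  Diffusion constant, boundary, and horizon are assumptions.
import numpy as np

rng = np.random.default_rng(2)
T, dt, n_paths = 1.0, 1e-3, 50_000
sigma, b0, v = 1.0, 2.0, 0.5

steps = int(T / dt)
x = np.zeros(n_paths)
crossed = np.zeros(n_paths, dtype=bool)
for k in range(steps):
    x += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    crossed |= x >= b0 + v * (k + 1) * dt   # boundary at the current time

print("P(crossing before T) ~", crossed.mean())
```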

  1. Philosophy and probability

    CERN Document Server

    Childers, Timothy

    2013-01-01

    Probability is increasingly important for our understanding of the world. What is probability? How do we model it, and how do we use it? Timothy Childers presents a lively introduction to the foundations of probability and to philosophical issues it raises. He keeps technicalities to a minimum, and assumes no prior knowledge of the subject. He explains the main interpretations of probability-frequentist, propensity, classical, Bayesian, and objective Bayesian-and uses stimulating examples to bring the subject to life. All students of philosophy will benefit from an understanding of probability...

  2. In All Probability, Probability is not All

    Science.gov (United States)

    Helman, Danny

    2004-01-01

    The national lottery is often portrayed as a game of pure chance with no room for strategy. This misperception seems to stem from the application of probability instead of expectancy considerations, and can be utilized to introduce the statistical concept of expectation.

  3. Nuclear structure of tellurium 133 via beta decay and shell model calculations in the doubly magic tin 132 region. [J, π, transition probabilities, neutron and proton separation, g factors]

    Energy Technology Data Exchange (ETDEWEB)

    Lane, S.M.

    1979-08-01

    An experimental investigation of the level structure of ¹³³Te was performed by spectroscopy of gamma-rays following the beta-decay of 2.7 min ¹³³Sb. Multiscaled gamma-ray singles spectra and 2.5 × 10⁷ gamma-gamma coincidence events were used in the assignment of 105 of the approximately 400 observed gamma-rays to ¹³³Sb decay and in the construction of the ¹³³Te level scheme with 29 excited levels. One hundred twenty-two gamma-rays were identified as originating in the decay of other isotopes of Sb or their daughter products. The remaining gamma-rays were associated with the decay of impurity atoms or have as yet not been identified. A new computer program based on the Lanczos tridiagonalization algorithm using an uncoupled m-scheme basis and vector manipulations was written. It was used to calculate energy levels, parities, spins, model wavefunctions, neutron and proton separation energies, and some electromagnetic transition probabilities for the following nuclei in the ¹³²Sn region: ¹²⁸Sn, ¹²⁹Sn, ¹³⁰Sn, ¹³¹Sn, ¹³⁰Sb, ¹³¹Sb, ¹³²Sb, ¹³³Sb, ¹³²Te, ¹³³Te, ¹³⁴Te, ¹³⁴I, ¹³⁵I, ¹³⁵Xe, and ¹³⁶Xe. The results are compared with experiment and the agreement is generally good. For non-magic nuclei, the 1g7/2, 2d5/2, 2d3/2, 1h11/2, and 3s1/2 orbitals are available to valence protons and the 2d5/2, 2d3/2, 1h11/2, and 3s1/2 orbitals are available to valence neutron holes. The present CDC7600 computer code can accommodate 59 single particle states and vectors comprised of 30,000 Slater determinants. The effective interaction used was that of Petrovich, McManus, and Madsen, a modification of the Kallio-Kolltveit realistic force. Single particle energies, effective charges and effective g-factors were determined from experimental data for nuclei in the ¹³²Sn region. 116 references.

  4. Choice Probability Generating Functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel L; Bierlaire, Michel

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications.
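
    For the multinomial logit special case, the CPGF reduces to the log-sum-exp function and its gradient recovers the familiar softmax choice probabilities. A small numeric check of this gradient property (our illustration, with arbitrary utilities):

```python
# Sketch: in the multinomial logit case the CPGF is
#   G(u) = log(sum_j exp(u_j)),
# and its gradient gives the choice probabilities (softmax).
# Finite-difference check with arbitrary utilities.
import numpy as np

def G(u):
    return np.log(np.sum(np.exp(u)))

u = np.array([1.0, 0.5, -0.2])
softmax = np.exp(u) / np.exp(u).sum()   # analytic gradient of G

eps = 1e-6
grad = np.array([(G(u + eps * e) - G(u - eps * e)) / (2 * eps)
                 for e in np.eye(len(u))])

print("softmax:", np.round(softmax, 6))
print("grad G :", np.round(grad, 6))
```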

  5. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2013-01-01

    This paper considers discrete choice, with choice probabilities coming from maximization of preferences from a random utility field perturbed by additive location shifters (ARUM). Any ARUM can be characterized by a choice-probability generating function (CPGF) whose gradient gives the choice probabilities, and every CPGF is consistent with an ARUM. We relate CPGF to multivariate extreme value distributions, and review and extend methods for constructing CPGF for applications. The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended...

  6. Statistics and Probability

    Directory of Open Access Journals (Sweden)

    Laktineh Imad

    2010-04-01

    This course constitutes a brief introduction to probability applications in high energy physics. First the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then shown and commented. The central limit theorem and its consequences are analysed. Finally some numerical methods used to produce different kinds of probability distribution are presented. The full article (17 p.), corresponding to this lecture, is written in French and is provided in the proceedings of the book SOS 2008.
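
    One of the standard numerical methods alluded to is inverse-transform sampling: draw U uniform on (0,1) and apply the inverse CDF of the target distribution. A minimal sketch for an exponential distribution with an arbitrary rate:

```python
# Sketch: inverse-transform sampling, one standard way to produce samples
# from a target distribution.  Here Exp(lam) from uniforms; lam is arbitrary.
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0

u = rng.uniform(size=100_000)
x = -np.log(1.0 - u) / lam     # inverse CDF of the exponential

print("sample mean:", x.mean(), " expected:", 1 / lam)
```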

  7. Real analysis and probability

    CERN Document Server

    Ash, Robert B; Lukacs, E

    1972-01-01

    Real Analysis and Probability provides the background in real analysis needed for the study of probability. Topics covered range from measure and integration theory to functional analysis and basic concepts of probability. The interplay between measure theory and topology is also discussed, along with conditional probability and expectation, the central limit theorem, and strong laws of large numbers with respect to martingale theory.Comprised of eight chapters, this volume begins with an overview of the basic concepts of the theory of measure and integration, followed by a presentation of var

  8. On Probability Domains IV

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2017-12-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  9. Difficulties related to Probabilities

    OpenAIRE

    Rosinger, Elemer Elad

    2010-01-01

    Probability theory is often used as if it had the same ontological status as, for instance, Euclidean geometry or Peano arithmetic. In this regard, several highly questionable aspects of probability theory are mentioned which have earlier been presented in two arXiv papers.

  10. On Randomness and Probability

    Indian Academy of Sciences (India)

    casinos and gambling houses? How does one interpret a statement like "there is a 30 per cent chance of rain tonight" - a statement we often hear on the news? Such questions arise in the mind of every student when she/he is taught probability as part of mathematics. Many students who go on to study probability and ...

  11. Dynamic update with probabilities

    NARCIS (Netherlands)

    Van Benthem, Johan; Gerbrandy, Jelle; Kooi, Barteld

    2009-01-01

    Current dynamic-epistemic logics model different types of information change in multi-agent scenarios. We generalize these logics to a probabilistic setting, obtaining a calculus for multi-agent update with three natural slots: prior probability on states, occurrence probabilities in the relevant...

  12. Elements of quantum probability

    NARCIS (Netherlands)

    Kummerer, B.; Maassen, H.

    1996-01-01

    This is an introductory article presenting some basic ideas of quantum probability. From a discussion of simple experiments with polarized light and a card game we deduce the necessity of extending the body of classical probability theory. For a class of systems, containing classical systems with finitely many states, a probabilistic model is developed...

  13. Introduction to probability

    CERN Document Server

    Freund, John E

    1993-01-01

    Thorough, lucid coverage of permutations and factorials, probabilities and odds, frequency interpretation, mathematical expectation, decision making, postulates of probability, rule of elimination, binomial distribution, geometric distribution, standard deviation, law of large numbers, and much more. Exercises with some solutions. Summary. Bibliography. Includes 42 black-and-white illustrations. 1973 edition.

  14. On Probability Domains IV

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2017-06-01

    Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.

  15. Janus-faced probability

    CERN Document Server

    Rocchi, Paolo

    2014-01-01

    The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.

  16. Multifractals embedded in short time series: An unbiased estimation of probability moment

    Science.gov (United States)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as the multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract exactly multifractal behaviors from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series a few hundred points long, a comparison with well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can evaluate correctly the scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scale invariance that is the focus of the present paper, the proposed factorial moment of continuous order has various other uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.
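
    The core identity behind such estimators, at least for integer order, is that factorial moments of binomial box counts are unbiased for powers of the underlying probability. The sketch below contrasts this with the biased plug-in estimator; the paper's continuous-order extension is not reproduced here.

```python
# Sketch of the key identity (integer order only): for n ~ Binomial(N, p),
#   E[n (n-1) ... (n-q+1)] = N (N-1) ... (N-q+1) * p**q,
# so factorial moments of box counts estimate p**q without bias,
# unlike the plug-in estimator E[(n/N)**q].
import numpy as np

rng = np.random.default_rng(4)
N, p, q, reps = 300, 0.1, 3, 100_000

n = rng.binomial(N, p, size=reps).astype(np.int64)

falling_n = n * (n - 1) * (n - 2)     # q = 3 falling factorial
falling_N = N * (N - 1) * (N - 2)

print("factorial-moment estimate:", falling_n.mean() / falling_N)
print("plug-in estimate         :", np.mean((n / N) ** q))
print("true p**q                :", p ** q)
```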

  17. Probability and Measure

    CERN Document Server

    Billingsley, Patrick

    2012-01-01

    Praise for the Third Edition "It is, as far as I'm concerned, among the best books in math ever written....if you are a mathematician and want to have the top reference in probability, this is it." (Amazon.com, January 2006) A complete and comprehensive classic in probability and measure theory Probability and Measure, Anniversary Edition by Patrick Billingsley celebrates the achievements and advancements that have made this book a classic in its field for the past 35 years. Now re-issued in a new style and format, but with the reliable content that the third edition was revered for, this...

  18. Probabilities in physics

    CERN Document Server

    Hartmann, Stephan

    2011-01-01

    Many results of modern physics--those of quantum mechanics, for instance--come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fie...

  19. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

    What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton‘s laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world‘s foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their  explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive. 

  20. Probability an introduction

    CERN Document Server

    Grimmett, Geoffrey

    2014-01-01

    Probability is an area of mathematics of tremendous contemporary importance across all aspects of human endeavour. This book is a compact account of the basic features of probability and random processes at the level of first and second year mathematics undergraduates and Masters' students in cognate fields. It is suitable for a first course in probability, plus a follow-up course in random processes including Markov chains. A special feature is the authors' attention to rigorous mathematics: not everything is rigorous, but the need for rigour is explained at difficult junctures. The text is enriched by simple exercises, together with problems (with very brief hints) many of which are taken from final examinations at Cambridge and Oxford. The first eight chapters form a course in basic probability, being an account of events, random variables, and distributions - discrete and continuous random variables are treated separately - together with simple versions of the law of large numbers and the central limit th...

  1. Probability for statisticians

    CERN Document Server

    Shorack, Galen R

    2017-01-01

    This 2nd edition textbook offers a rigorous introduction to measure theoretic probability with particular attention to topics of interest to mathematical statisticians—a textbook for courses in probability for students in mathematical statistics. It is recommended to anyone interested in the probability underlying modern statistics, providing a solid grounding in the probabilistic tools and techniques necessary to do theoretical research in statistics. For the teaching of probability theory to post graduate statistics students, this is one of the most attractive books available. Of particular interest is a presentation of the major central limit theorems via Stein's method either prior to or alternative to a characteristic function presentation. Additionally, there is considerable emphasis placed on the quantile function as well as the distribution function. The bootstrap and trimming are both presented. Martingale coverage includes coverage of censored data martingales. The text includes measure theoretic...

  2. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

    This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics" which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985 the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum ranging from foundations of probability across psychological aspects of formulating subjective probability statements, abstract measure-theoretical considerations, contributions to theoretical statistics an...

  3. Concepts of probability theory

    CERN Document Server

    Pfeiffer, Paul E

    1979-01-01

    Using the Kolmogorov model, this intermediate-level text discusses random variables, probability distributions, mathematical expectation, random processes, and more. For advanced undergraduate students of science, engineering, or mathematics. Includes problems with answers and six appendixes. 1965 edition.

  4. Quantum computing and probability.

    Science.gov (United States)

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  5. Elements of quantum probability

    OpenAIRE

    Kummerer, B.; Maassen, Hans

    1996-01-01

    This is an introductory article presenting some basic ideas of quantum probability. From a discussion of simple experiments with polarized light and a card game we deduce the necessity of extending the body of classical probability theory. For a class of systems, containing classical systems with finitely many states, a probabilistic model is developed. It can describe, in particular, the polarization experiments. Some examples of ‘quantum coin tosses’ are discussed, closely related to V.F.R....

  6. Probability in quantum mechanics

    Directory of Open Access Journals (Sweden)

    J. G. Gilson

    1982-01-01

    By using a fluid theory which is an alternative to quantum theory but from which the latter can be deduced exactly, the long-standing problem of how quantum mechanics is related to stochastic processes is studied. It can be seen how the Schrödinger probability density has a relationship to time spent on small sections of an orbit, just as the probability density has in some classical contexts.

  7. The perception of probability.

    Science.gov (United States)

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
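
    To make the stepwise-updating idea concrete, the sketch below implements a naive threshold-stepping observer of a stepwise Bernoulli source. Note that this is precisely the kind of simple threshold model whose adequacy the authors argue against (point b); it is offered only as a baseline illustration under made-up parameters, not as their model.

```python
# Naive threshold-stepping observer of a stepwise Bernoulli source:
# a Beta-posterior mean is tracked, but the reported estimate only
# steps when the posterior mean has drifted far enough.  This is a
# baseline of the kind the authors reject, shown for concreteness.
import numpy as np

rng = np.random.default_rng(5)

p_true = np.r_[np.full(300, 0.25), np.full(300, 0.70)]   # hidden parameter
outcomes = rng.random(p_true.size) < p_true

estimate, a, b = 0.5, 1.0, 1.0        # Beta(1,1) prior counts
steps = 0
for o in outcomes:
    a, b = a + o, b + (1 - o)
    mean = a / (a + b)
    if abs(mean - estimate) > 0.1:    # step only on a large enough shift
        estimate, steps = mean, steps + 1
        # soft reset: small pseudo-count centered at the new estimate
        a, b = 5.0 * mean, 5.0 * (1.0 - mean)

print("final estimate:", round(estimate, 3),
      " true p:", p_true[-1], " steps:", steps)
```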

  8. Experimental Probability in Elementary School

    Science.gov (United States)

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  9. The pleasures of probability

    CERN Document Server

    Isaac, Richard

    1995-01-01

    The ideas of probability are all around us. Lotteries, casino gambling, the almost non-stop polling which seems to mold public policy more and more: these are a few of the areas where principles of probability impinge in a direct way on the lives and fortunes of the general public. At a more removed level there is modern science which uses probability and its offshoots like statistics and the theory of random processes to build mathematical descriptions of the real world. In fact, twentieth-century physics, in embracing quantum mechanics, has a world view that is at its core probabilistic in nature, contrary to the deterministic one of classical physics. In addition to all this muscular evidence of the importance of probability ideas it should also be said that probability can be lots of fun. It is a subject where you can start thinking about amusing, interesting, and often difficult problems with very little mathematical background. In this book, I wanted to introduce a reader with at least a fairl...

  10. Improving Ranking Using Quantum Probability

    OpenAIRE

    Melucci, Massimo

    2011-01-01

    The paper shows that ranking information units by quantum probability differs from ranking them by classical probability, provided the same data are used for parameter estimation. As probability of detection (also known as recall or power) and probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields higher probability of detection than ranking by classical probability provided a given probability of ...

  11. Collision Probability Analysis

    DEFF Research Database (Denmark)

    Hansen, Peter Friis; Pedersen, Preben Terndrup

    1998-01-01

    It is the purpose of this report to apply a rational model for prediction of ship-ship collision probabilities as a function of the ship and crew characteristics and the navigational environment for MS Dextra sailing on a route between Cadiz and the Canary Islands. The most important ship and crew...... characteristics are: ship speed, ship manoeuvrability, the layout of the navigational bridge, the radar system, the number and the training of navigators, the presence of a look-out etc. The main parameters affecting the navigational environment are ship traffic density, probability distributions of wind speeds...... probability, i.e. a study of the navigator's role in resolving critical situations, a causation factor is derived as a second step. The report documents the first step in a probabilistic collision damage analysis. Future work will include calculation of energy released for crushing of structures giving...

  12. Choice probability generating functions

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; McFadden, Daniel; Bierlaire, Michel

    2010-01-01

    This paper establishes that every random utility discrete choice model (RUM) has a representation that can be characterized by a choice-probability generating function (CPGF) with specific properties, and that every function with these specific properties is consistent with a RUM. The choice...... probabilities from the RUM are obtained from the gradient of the CPGF. Mixtures of RUM are characterized by logarithmic mixtures of their associated CPGF. The paper relates CPGF to multivariate extreme value distributions, and reviews and extends methods for constructing generating functions for applications....... The choice probabilities of any ARUM may be approximated by a cross-nested logit model. The results for ARUM are extended to competing risk survival models....

  13. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic Notions; Sample Space and Events; Probabilities; Counting Techniques; Independence and Conditional Probability; Independence; Conditioning; The Borel-Cantelli Theorem; Discrete Random Variables; Random Variables and Vectors; Expected Value; Variance and Other Moments; Inequalities for Deviations; Some Basic Distributions; Convergence of Random Variables; The Law of Large Numbers; Conditional Expectation; Generating Functions; Branching Processes; Random Walk Revisited; Branching Processes; Generating Functions; Branching Processes Revisited; More on Random Walk; Markov Chains; Definitions and Examples; Probability Distributions of Markov Chains; The First Step Analysis; Passage Times; Variables Defined on a Markov Chain; Ergodicity and Stationary Distributions; A Classification of States and Ergodicity; Continuous Random Variables; Continuous Distributions; Some Basic Distributions; Continuous Multivariate Distributions; Sums of Independent Random Variables; Conditional Distributions and Expectations; Distributions in the General Case; Simulation; Distribution F...

  14. Classic Problems of Probability

    CERN Document Server

    Gorroochurn, Prakash

    2012-01-01

    "A great book, one that I will certainly add to my personal library."—Paul J. Nahin, Professor Emeritus of Electrical Engineering, University of New Hampshire Classic Problems of Probability presents a lively account of the most intriguing aspects of statistics. The book features a large collection of more than thirty classic probability problems which have been carefully selected for their interesting history, the way they have shaped the field, and their counterintuitive nature. From Cardano's 1564 Games of Chance to Jacob Bernoulli's 1713 Golden Theorem to Parrondo's 1996 Perplexin

  15. Estimating tail probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.

  16. Introduction to imprecise probabilities

    CERN Document Server

    Augustin, Thomas; de Cooman, Gert; Troffaes, Matthias C M

    2014-01-01

    In recent years, the theory has become widely accepted and has been further developed, but a detailed introduction is needed in order to make the material available and accessible to a wide audience. This will be the first book providing such an introduction, covering core theory and recent developments which can be applied to many application areas. All authors of individual chapters are leading researchers on the specific topics, assuring high quality and up-to-date contents. An Introduction to Imprecise Probabilities provides a comprehensive introduction to imprecise probabilities, includin

  17. Epistemology and Probability

    CERN Document Server

    Plotnitsky, Arkady

    2010-01-01

    Offers an exploration of the relationships between epistemology and probability in the work of Niels Bohr, Werner Heisenberg, and Erwin Schrodinger; in quantum mechanics; and in modern physics. This book considers the implications of these relationships and of quantum theory for our understanding of the nature of thinking and knowledge in general

  18. Huygens' foundations of probability

    NARCIS (Netherlands)

    Freudenthal, Hans

    It is generally accepted that Huygens based probability on expectation. The term “expectation,” however, stems from Van Schooten's Latin translation of Huygens' treatise. A literal translation of Huygens' Dutch text shows more clearly what Huygens actually meant and how he proceeded.

  19. Counterexamples in probability

    CERN Document Server

    Stoyanov, Jordan M

    2013-01-01

    While most mathematical examples illustrate the truth of a statement, counterexamples demonstrate a statement's falsity. Enjoyable topics of study, counterexamples are valuable tools for teaching and learning. The definitive book on the subject in regards to probability, this third edition features the author's revisions and corrections plus a substantial new appendix.

  20. Probably Almost Bayes Decisions

    DEFF Research Database (Denmark)

    Anoulova, S.; Fischer, Paul; Poelt, S.

    1996-01-01

    In this paper, we investigate the problem of classifying objects which are given by feature vectors with Boolean entries. Our aim is to "(efficiently) learn probably almost optimal classifications" from examples. A classical approach in pattern recognition uses empirical estimations of the Bayesian...

  1. Univariate Probability Distributions

    Science.gov (United States)

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  2. The Theory of Probability

    Indian Academy of Sciences (India)

    The Theory of Probability. Andrei Nikolaevich Kolmogorov. Resonance – Journal of Science Education, Classics, Volume 3, Issue 4, April 1998, pp. 103-112. Permanent link: http://www.ias.ac.in/article/fulltext/reso/003/04/0103-0112

  3. Probability Theory Without Tears!

    Indian Academy of Sciences (India)

    Probability Theory Without Tears! S Ramasubramanian. Resonance – Journal of Science Education, Book Review, Volume 1, Issue 2, February 1996, pp. 115-116. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/02/0115-0116

  4. probably mostly white

    African Journals Online (AJOL)

    Willem Scholtz

    internet – the (probably mostly white) public's interest in the so-called Border War is ostensibly at an all-time high. By far most of the publications are written by ex- ... understanding of this very important episode in the history of Southern Africa. It was, therefore, with some anticipation that one waited for this book, which.

  5. the theory of probability

    Indian Academy of Sciences (India)

    important practical applications in statistical quality control. Of a similar kind are the laws of probability for the scattering of missiles, which are basic in the ..... deviations for different ranges for each type of gun and of shell are found empirically in firing practice on an artillery range. But the subsequent solution of all possible ...

  6. On Randomness and Probability

    Indian Academy of Sciences (India)

    On Randomness and Probability: How to Mathematically Model Uncertain Events ... Rajeeva L Karandikar, Statistics and Mathematics Unit, Indian Statistical Institute, 7 S J S Sansanwal Marg, New Delhi 110 016, India. Resonance – Journal of Science Education, Volume 1, Issue 2.

  7. On Probability Domains

    Science.gov (United States)

    Frič, Roman; Papčo, Martin

    2010-12-01

    Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex S_n of n-tuples of possible rewards; the sum of the rewards is a number from [0,1]. For n=1 we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently, effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For n=2 we get IF-events, i.e., pairs (μ, ν) of fuzzy sets μ, ν ∈ [0,1]^X such that μ(x) + ν(x) ≤ 1 for all x ∈ X, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where (μ₁, ν₁) ≤ (μ₂, ν₂) whenever μ₁ ≤ μ₂ and ν₂ ≤ ν₁) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by I = [0,1] (objects of ID are subobjects of powers I^X), has nice properties, and basic probabilistic notions and constructions are categorical. For example, states are morphisms. We introduce the category S_nD cogenerated by $S_n = \{(x_1, x_2, \ldots, x_n) \in I^n : \sum_{i=1}^{n} x_i \leq 1\}$ carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within S_nD.

  8. Negative probability in the framework of combined probability

    OpenAIRE

    Burgin, Mark

    2013-01-01

    Negative probability has found diverse applications in theoretical physics. Thus, construction of sound and rigorous mathematical foundations for negative probability is important for physics. There are different axiomatizations of conventional probability. So, it is natural that negative probability also has different axiomatic frameworks. In the previous publications (Burgin, 2009; 2010), negative probability was mathematically formalized and rigorously interpreted in the context of extende...

  9. [On Atomic Nuclear Fusion Processes at Low-Temperatures. An Enhancement of the Probability of Transition through a Potential Barrier Due to the So-Called Barrier Anti-Zeno Effect].

    Science.gov (United States)

    Namiot, V A

    2016-01-01

    It is known that in quantum mechanics the act of observation can, in some cases, affect the outcome of an experiment. In particular, this happens under the so-called Zeno effect. In this work it is shown that, in contrast to the "standard" Zeno effect, where observing a process reduces the probability of its occurrence, the opposite situation can arise when a particle passes through a potential barrier (a so-called barrier anti-Zeno effect): observing the particle substantially increases the probability of its transmission through the barrier. The possibility of using the barrier anti-Zeno effect is discussed to explain the paradoxical results of experiments on "cold nuclear fusion" observed in various systems, including biological ones. (According to the researchers who performed these observations, energy generation that cannot be explained by any chemical process, as well as changes in the isotope and even element composition of the studied object, may occur in these systems.)

  10. Paradoxes in probability theory

    CERN Document Server

    Eckhardt, William

    2013-01-01

    Paradoxes provide a vehicle for exposing misinterpretations and misapplications of accepted principles. This book discusses seven paradoxes surrounding probability theory.  Some remain the focus of controversy; others have allegedly been solved, however the accepted solutions are demonstrably incorrect. Each paradox is shown to rest on one or more fallacies.  Instead of the esoteric, idiosyncratic, and untested methods that have been brought to bear on these problems, the book invokes uncontroversial probability principles, acceptable both to frequentists and subjectivists. The philosophical disputation inspired by these paradoxes is shown to be misguided and unnecessary; for instance, startling claims concerning human destiny and the nature of reality are directly related to fallacious reasoning in a betting paradox, and a problem analyzed in philosophy journals is resolved by means of a computer program.

  11. Probability, Nondeterminism and Concurrency

    DEFF Research Database (Denmark)

    Varacca, Daniele

    Nondeterminism is modelled in domain theory by the notion of a powerdomain, while probability is modelled by that of the probabilistic powerdomain. Some problems arise when we want to combine them in order to model computation in which both nondeterminism and probability are present. In particular...... there is no categorical distributive law between them. We introduce the powerdomain of indexed valuations which modifies the usual probabilistic powerdomain to take more detailed account of where probabilistic choices are made. We show the existence of a distributive law between the powerdomain of indexed valuations...... reveals the computational intuition lying behind the mathematics. In the second part of the thesis we provide an operational reading of continuous valuations on certain domains (the distributive concrete domains of Kahn and Plotkin) through the model of probabilistic event structures. Event structures......

  12. Waste Package Misload Probability

    Energy Technology Data Exchange (ETDEWEB)

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to determine the probability of occurrence of fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and of FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize the fuel-handling events that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in each event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.

  13. Measurement uncertainty and probability

    CERN Document Server

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  14. Probability theory and applications

    CERN Document Server

    Hsu, Elton P

    1999-01-01

    This volume, with contributions by leading experts in the field, is a collection of lecture notes of the six minicourses given at the IAS/Park City Summer Mathematics Institute. It introduces advanced graduates and researchers in probability theory to several of the currently active research areas in the field. Each course is self-contained with references and contains basic materials and recent results. Topics include interacting particle systems, percolation theory, analysis on path and loop spaces, and mathematical finance. The volume gives a balanced overview of the current status of probability theory. An extensive bibliography for further study and research is included. This unique collection presents several important areas of current research and a valuable survey reflecting the diversity of the field.

  15. Superpositions of probability distributions.

    Science.gov (United States)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ² play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much more easily than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
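
    One standard instance of such a superposition (a sketch with our own parameter choices, not an example from the paper): smearing the variance of a zero-mean Gaussian with an inverse-gamma density produces a Student-t distribution, which the simulation below checks against the exact density.

```python
import numpy as np
from scipy import stats

# With v ~ InvGamma(nu/2, nu/2) and x|v ~ N(0, v), the marginal of x is
# Student-t with nu degrees of freedom. nu and sample size are illustrative.
rng = np.random.default_rng(0)
nu = 3.0
v = 1.0 / rng.gamma(nu / 2.0, 2.0 / nu, size=100_000)   # inverse-gamma draws
x = rng.normal(0.0, np.sqrt(v))                         # Gaussian superposition

grid = np.linspace(-6.0, 6.0, 13)
hist, _ = np.histogram(x, bins=grid, density=True)
centers = 0.5 * (grid[:-1] + grid[1:])
print(np.max(np.abs(hist - stats.t.pdf(centers, df=nu))))   # small deviation
```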

  16. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  17. Measurement uncertainty and probability

    National Research Council Canada - National Science Library

    Willink, Robin

    2013-01-01

    ... and probability models; 3.4 Inference and confidence; 3.5 Two central limit theorems; 3.6 The Monte Carlo method and process simulation; 4 The randomization of systematic errors; 4.1 The Working Group of 1980; 4.2 From classical repetition to practica...

  18. Structural Minimax Probability Machine.

    Science.gov (United States)

    Gu, Bin; Sun, Xingming; Sheng, Victor S

    2017-07-01

    Minimax probability machine (MPM) is an interesting discriminative classifier based on generative prior knowledge. It can directly estimate the probabilistic accuracy bound by minimizing the maximum probability of misclassification. The structural information of data is an effective way to represent prior knowledge, and has been found to be vital for designing classifiers in real-world problems. However, MPM only considers the prior probability distribution of each class with a given mean and covariance matrix, which does not efficiently exploit the structural information of data. In this paper, we use two finite mixture models to capture the structural information of the data from binary classification. For each subdistribution in a finite mixture model, only its mean and covariance matrix are assumed to be known. Based on the finite mixture models, we propose a structural MPM (SMPM). SMPM can be solved effectively by a sequence of the second-order cone programming problems. Moreover, we extend a linear model of SMPM to a nonlinear model by exploiting kernelization techniques. We also show that the SMPM can be interpreted as a large margin classifier and can be transformed to support vector machine and maxi-min margin machine under certain special conditions. Experimental results on both synthetic and real-world data sets demonstrate the effectiveness of SMPM.

  19. Probability via expectation

    CERN Document Server

    Whittle, Peter

    1992-01-01

    This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...

  20. Oral-facial-digital syndrome with mesoaxial polysyndactyly, common AV canal, hirschsprung disease and sacral dysgenesis: Probably a transitional type between II, VI, variant of type VI or a new type

    Directory of Open Access Journals (Sweden)

    Rabah M. Shawky

    2014-07-01

    Full Text Available We report a 4-month-old male infant, the first child of healthy first-cousin consanguineous parents, who has many typical features of oral-facial-digital syndrome type VI (OFDS VI), including hypertelorism, bilateral convergent squint, depressed nasal bridge, wide upturned nares, low-set posteriorly rotated ears, long philtrum, gum hyperplasia with notches of the alveolar borders, high arched palate, and hyperplastic oral frenula. He has mesoaxial and postaxial polysyndactyly, which is the specific feature of OFDS VI; however, the cerebellum is normal on brain MRI. He also has some rare congenital anomalies, including common atrioventricular canal, Hirschsprung disease, and sacral dysgenesis. This patient may have a transitional type between II and VI, a variant of type VI, or a new type.

  1. Estudio de la invarianza de escala mediante el método de cálculo integral en la medición de la calidad percibida de los servicios deportivos. (Analysing scale invariance through integral calculus when measuring perceived quality in sports services.

    Directory of Open Access Journals (Sweden)

    José Antonio Martínez García

    2009-04-01

    Full Text Available Resumen (translated from Spanish): This research presents a new method for the study of scale invariance that complements existing methods, contributing to an eclectic, multifocal analysis of an important problem in marketing research, and in particular in research on sports services. The method is based on integral calculus and has a simple geometric interpretation. Several procedures for testing scale invariance are described and compared, and the study of Martínez and Martínez (2008b) on consumers' perceptions of the quality of sports services is re-analysed. The results show that the conclusions differ from those originally drawn by these authors: the seven-option response scales are indeed invariant, whereas the five-option scales are not. Finally, the strengths and limitations of the integral method are discussed, and statistical triangulation is advocated to lend robustness to the empirical results. Abstract: This research introduces a new method to analyse scale invariance, which overcomes some shortcomings of other procedures. Under an eclectic perspective, this method should help to provide insights in the marketing research discipline, and specifically in sports service management. The method is grounded on the use of definite integrals to compute the area between two functions. In addition, several procedures for testing scale invariance are depicted and compared. An empirical application is achieved by re-analysing the study of Martínez & Martínez (2008b) on perceived quality in sports services. Results show that misleading conclusions were derived from the original study of those authors. Finally, advantages and shortcomings of the new method are discussed.
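
    The core computation the abstract describes, the area between two functions as a definite integral, takes only a few lines; the functions f and g and the interval below are hypothetical stand-ins, not the response scales of the original study.

```python
import numpy as np
from scipy.integrate import quad

# Discrepancy between two (hypothetical) response functions measured as
# the area between them: the integral of |f - g| over the range [a, b].
f = lambda x: 1.0 / (1.0 + np.exp(-x))    # illustrative curve A
g = lambda x: 0.5 + x / 8.0               # illustrative curve B
area, err = quad(lambda x: abs(f(x) - g(x)), -2.0, 2.0)
print(f"area between curves: {area:.4f} (quadrature error ~ {err:.1e})")
```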

  2. Probabilities for Solar Siblings

    Science.gov (United States)

    Valtonen, Mauri; Bajkova, A. T.; Bobylev, V. V.; Mylläri, A.

    2015-02-01

    We have shown previously (Bobylev et al. Astron Lett 37:550-562, 2011) that some of the stars in the solar neighborhood today may have originated in the same star cluster as the Sun, and could thus be called Solar Siblings. In this work we investigate the sensitivity of this result to galactic models and to parameters of these models, and also extend the sample of orbits. There are a number of good candidates for the sibling category, but due to the long period of orbit evolution since the break-up of the birth cluster of the Sun, one can only attach probabilities of membership. We find that up to 10 % (but more likely around 1 %) of the members of the Sun's birth cluster could be still found within 100 pc from the Sun today.

  3. Measure, integral and probability

    CERN Document Server

    Capiński, Marek

    2004-01-01

    Measure, Integral and Probability is a gentle introduction that makes measure and integration theory accessible to the average third-year undergraduate student. The ideas are developed at an easy pace in a form that is suitable for self-study, with an emphasis on clear explanations and concrete examples rather than abstract theory. For this second edition, the text has been thoroughly revised and expanded. New features include: · a substantial new chapter, featuring a constructive proof of the Radon-Nikodym theorem, an analysis of the structure of Lebesgue-Stieltjes measures, the Hahn-Jordan decomposition, and a brief introduction to martingales · key aspects of financial modelling, including the Black-Scholes formula, discussed briefly from a measure-theoretical perspective to help the reader understand the underlying mathematical framework. In addition, further exercises and examples are provided to encourage the reader to become directly involved with the material.

  4. A seismic probability map

    Directory of Open Access Journals (Sweden)

    J. M. MUNUERA

    1964-06-01

    Full Text Available The material included in two former papers (SB and EF), which sums 3307 shocks corresponding to 2360 years, up to 1960, was reduced to a 50-year period by means of a weight obtained for each epoch. The weighting factor is the ratio of 50 to the number of years in each epoch. The frequency has been referred to base VII of the international seismic intensity scale, for all cases in which the earthquakes are equal to or greater than VI and up to IX. The sum of the products of frequency and the parameters previously described is the probable frequency expected for the 50-year period. For each active small square we have made the corresponding computation, and so we have drawn Map No. 1, in percentages. The epicenters with intensity from X to XI are plotted in Map No. 2, in order to present complementary information. A table shows the return periods obtained for all data (VII to XI), and after checking them against others computed from the first to the last shock, a list includes the approximate probable return periods estimated for the area. The solution we suggest is an appropriate form in which to express the seismic contingency, and it improves on the conventional maps showing the equal-intensity curves corresponding to the maximal values of a given site.

  5. Transposition and Time-Scale Invariant Geometric Music Retrieval

    Science.gov (United States)

    Lemström, Kjell

    This paper considers how to adapt geometric algorithms, developed for content-based music retrieval of symbolically encoded music, to be robust against the time deformations required by real-world applications. In this setting, music is represented by sets of points in the plane. A matching, pertinent to the application, involves two such sets of points and invariances under translations and time scalings. We give an algorithm for finding exact occurrences, under such a setting, of a given query point set, of size m, within a database point set, of size n, with running time O(mn² log n); partial occurrences are found in O(m²n² log n) time. The algorithms resemble the sweepline algorithm introduced in [1].
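
    A brute-force version of this matching problem is easy to state in code. The sketch below (ours, illustrating the problem setting rather than the paper's O(mn² log n) sweepline algorithm) fixes candidate images of the first two query points, derives the implied time scale and translation, and verifies the remaining points.

```python
from typing import List, Tuple

Point = Tuple[float, float]   # (onset time, pitch)

def exact_occurrences(query: List[Point], db: List[Point]) -> List[Tuple[float, float]]:
    """Find (scale, time offset) pairs mapping all of `query` into `db`
    under a positive time scaling and a translation. Brute force;
    assumes query onsets are strictly increasing."""
    db_set = set(db)
    q0, q1 = query[0], query[1]
    found = []
    for a in db:                       # candidate image of the first query point
        for b in db:                   # candidate image of the second query point
            if b[1] - a[1] != q1[1] - q0[1] or b[0] <= a[0]:
                continue               # pitch interval must match; time must advance
            s = (b[0] - a[0]) / (q1[0] - q0[0])      # time-scale factor
            dt, dp = a[0] - s * q0[0], a[1] - q0[1]  # translation
            if all((s * t + dt, p + dp) in db_set for (t, p) in query):
                found.append((s, dt))
    return found

# Toy example: the query occurs twice as slow, transposed up 2 semitones
# and shifted by 10 time units.
query = [(0.0, 60.0), (1.0, 62.0), (2.0, 64.0)]
db = [(10.0, 62.0), (12.0, 64.0), (14.0, 66.0), (3.0, 50.0)]
print(exact_occurrences(query, db))   # [(2.0, 10.0)]
```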

  6. Scale invariance properties of rainfall in AMMA-CATCH observatory ...

    African Journals Online (AJOL)

    Abstract (translated from Arabic): This study was carried out on fixed-interval rainfall data for the period 1999-2012 from thirty stations of the AMMA-CATCH Benin observatory. There is a strong relationship between the statistical moments of the rainfall data and the accumulation duration. The statistical moments exhibit scale-invariant behaviour across the rainfall durations considered, with a scale-invariant exponent satisfying the inequality 0.5 < n < 1. The statistical properties of rainfall...

  7. Analysis and modeling of scale-invariance in plankton abundance

    CERN Document Server

    Pelletier, J D

    1996-01-01

    The power spectrum, $S$, of horizontal transects of plankton abundance is often observed to have a power-law dependence on wavenumber, $k$, with exponent close to $-2$: $S(k)\propto k^{-2}$ over a wide range of scales. I present power spectral analyses of aircraft lidar measurements of phytoplankton abundance from scales of 1 to 100 km. A power spectrum $S(k)\propto k^{-2}$ is obtained. As a model for this observation, I consider a stochastic growth equation where the rate of change of plankton abundance is determined by turbulent mixing, modeled as a diffusion process in two dimensions, and exponential growth with a stochastically variable net growth rate representing a fluctuating environment. The model predicts a lognormal distribution of abundance and a power spectrum of horizontal transects $S(k)\propto k^{-1.8}$, close to the observed spectrum. The model equation predicts that the power spectrum of variations in abundance in time at a point in space is $S(f)\propto f^{-1.5}$ (where $f$ is the frequency...
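
    Estimating such a spectral exponent from a transect is a short computation. The sketch below is ours: the synthetic series merely stands in for lidar data and is constructed to have $S(k)\propto k^{-2}$, which the log-log fit then recovers.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2**14
k = np.fft.rfftfreq(n, d=1.0)[1:]            # positive wavenumbers
phases = np.exp(2j * np.pi * rng.random(k.size))
amps = k**-1.0                                # |F| ~ k^-1  =>  S = |F|^2 ~ k^-2
signal = np.fft.irfft(np.concatenate(([0.0], amps * phases)), n)

spectrum = np.abs(np.fft.rfft(signal))[1:]**2
slope, _ = np.polyfit(np.log(k), np.log(spectrum), 1)
print(f"estimated spectral exponent: {slope:.2f}")   # close to -2
```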

  8. A Scale Invariant Distribution of the Prime Numbers

    Directory of Open Access Journals (Sweden)

    Wayne S. Kendal

    2015-10-01

    Full Text Available The irregular distribution of prime numbers amongst the integers has found multiple uses, from engineering applications of cryptography to quantum theory. The degree to which this distribution can be predicted thus has become a subject of current interest. Here, we present a computational analysis of the deviations between the actual positions of the prime numbers and their predicted positions from Riemann’s counting formula, focused on the variance function of these deviations from sequential enumerative bins. We show empirically that these deviations can be described by a class of probabilistic models known as the Tweedie exponential dispersion models that are characterized by a power law relationship between the variance and the mean, known by biologists as Taylor’s power law and by engineers as fluctuation scaling. This power law behavior of the prime number deviations is remarkable in that the same behavior has been found within the distribution of genes and single nucleotide polymorphisms (SNPs within the human genome, the distribution of animals and plants within their habitats, as well as within many other biological and physical processes. We explain the common features of this behavior through a statistical convergence effect related to the central limit theorem that also generates 1/f noise.
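
    A rough version of this variance-function analysis can be sketched directly. The code below uses the logarithmic integral li(x) as a stand-in for Riemann's counting formula, and the range and bin widths are our illustrative choices, so it only approximates the paper's procedure.

```python
import numpy as np
from scipy.special import expi

N = 2_000_000
sieve = np.ones(N, dtype=bool)
sieve[:2] = False
for p in range(2, int(N**0.5) + 1):          # simple sieve of Eratosthenes
    if sieve[p]:
        sieve[p*p::p] = False
is_prime = sieve.astype(np.int64)

li = lambda x: expi(np.log(x))               # li(x) approximates pi(x)

means, variances = [], []
for width in (2_000, 5_000, 10_000, 20_000, 50_000):
    edges = np.arange(2, N, width)
    counts = np.add.reduceat(is_prime, edges)[:-1]   # primes per bin
    dev = counts - np.diff(li(edges))                # deviation from li
    means.append(np.mean(counts))
    variances.append(np.var(dev))

# Fluctuation scaling (Taylor's power law): Var ~ mean^b on a log-log plot.
b, _ = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fluctuation-scaling exponent b ~ {b:.2f}")
```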

  9. Scale invariance properties of rainfall in AMMA-CATCH observatory ...

    African Journals Online (AJOL)

    on real rainfall data and the number of such investigations is still rather low. Recent investigations have mostly dealt with either radar data (e.g. [6-8,10-13]), or rainfall time series (e.g. [1,14,15]). In spite of recent advances in the investigation of the scaling properties of hydrological fields, very few studies from different ...

  10. Scale invariance in the 2003-2005 Iraq conflict

    Science.gov (United States)

    Alvarez-Ramirez, Jose; Rodriguez, Eduardo; Urrea, Rafael

    2007-04-01

    The number of reported social systems that apparently display power-law correlations (i.e., scale-free patterns) has increased dramatically in recent years, ranging from city growth and economics to global terrorism. Using the set of violence events in the 2003-2005 Iraq stabilization phase (i.e., from May 1, 2005), existence of scale-free patterns in event fatalities is shown. This property is also present in the tail of distributions of events divided into groups based on the type of used weapon. Lognormal distribution description was also tried, showing the superiority of the power-law function to describe the behavior of heavy tails. Time series for civilian and military fatalities were studied using the so-called detrended fluctuation analysis. Civilian fatalities showed uncorrelated behavior, implying a lack of memory effects on the evolution of daily civilian fatalities. In contrast, military fatalities displayed long-range correlated behavior.
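
    Detrended fluctuation analysis itself is compact enough to sketch. The implementation below is a generic first-order DFA applied to synthetic uncorrelated noise, for which the expected exponent is about 0.5, the behaviour the study reports for civilian fatalities; it does not use the conflict data set.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis of a 1-D series."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t))**2)
               for seg in segs]                   # detrend each window linearly
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                         # uncorrelated series
scales = np.array([8, 16, 32, 64, 128, 256])
alpha, _ = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)
print(f"DFA exponent alpha ~ {alpha:.2f}")        # ~0.5: no long-range memory
```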

  11. Duality and scale invariant magnetic fields from bouncing universes

    DEFF Research Database (Denmark)

    Chowdhury, Debika; Sriramkumar, L.; Jain, Rajeev Kumar

    2016-01-01

    of such models. We illustrate that, for cosmological scales which have wave numbers much smaller than the wave number associated with the bounce, the shape of the spectrum is preserved across the bounce. Using the analytic solutions obtained, we also illustrate that the problem of backreaction is severe...

  12. Optimal Weather Conditions, Economic Growth, and Political Transitions

    National Research Council Canada - National Science Library

    Cáceres, Neila; Malone, Samuel W

    2015-01-01

    .... Previous studies significantly overestimate the increase in the probability of democratic transitions resulting from negative growth shocks, although we find leadership transition frequencies rise...

  13. A Tale of Two Probabilities

    Science.gov (United States)

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  14. The stationary probability density of a class of bounded Markov processes

    OpenAIRE

    Ramli, Muhamad Azfar; Leng, Gerard

    2010-01-01

    In this paper we generalize a bounded Markov process, described by Stoyanov and Pacheco-González, to a class of transition probability functions. A recursive integral equation for the probability density of these bounded Markov processes is derived, and the stationary probability density is obtained by solving an equivalent differential equation. Examples of stationary densities for different transition probability functions are given, along with an application for designing a roboti...

  15. Calculation of fractional electron capture probabilities

    CERN Document Server

    Schoenfeld, E

    1998-01-01

    A 'Table of Radionuclides' is being prepared which will supersede the 'Table de Radionucléides' formerly issued by the LMRI/LPRI (France). In this effort it is desirable to have a uniform basis for calculating theoretical values of fractional electron capture probabilities. A table has been compiled which allows one to calculate conveniently and quickly the fractional probabilities P_K, P_L, P_M, P_N and P_O, their ratios and the assigned uncertainties for allowed and non-unique first-forbidden electron capture transitions of known transition energy for radionuclides with atomic numbers from Z=3 to 102. These results have been applied to a total of 28 transitions of 14 radionuclides (7Be, 22Na, 51Cr, 54Mn, 55Fe, 68Ge, 68Ga, 75Se, 109Cd, 125I, 139Ce, 169Yb, 197Hg, 202Tl). The values are in reasonable agreement with measure...

  16. Transition path time distributions

    Science.gov (United States)

    Laleman, M.; Carlon, E.; Orland, H.

    2017-12-01

    Biomolecular folding, at least in simple systems, can be described as a two state transition in a free energy landscape with two deep wells separated by a high barrier. Transition paths are the short part of the trajectories that cross the barrier. Average transition path times and, recently, their full probability distribution have been measured for several biomolecular systems, e.g., in the folding of nucleic acids or proteins. Motivated by these experiments, we have calculated the full transition path time distribution for a single stochastic particle crossing a parabolic barrier, including inertial terms which were neglected in previous studies. These terms influence the short time scale dynamics of a stochastic system and can be of experimental relevance in view of the short duration of transition paths. We derive the full transition path time distribution as well as the average transition path times and discuss the similarities and differences with the high friction limit.
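
    A minimal simulation of transition path times in the overdamped limit (our sketch; it omits the inertial terms that are the paper's main addition) launches trajectories at one boundary of an inverted parabolic barrier and keeps the durations of those that cross without first returning.

```python
import numpy as np

# Overdamped Langevin dynamics on U(x) = -0.5*k*x**2 between boundaries
# at -a and +a; units with mobility = k_B*T = 1. Parameters illustrative.
rng = np.random.default_rng(3)
k, D, dt, a = 2.0, 1.0, 1e-3, 1.0
noise = np.sqrt(2.0 * D * dt)

def one_transition_path_time():
    while True:                                   # shoot until a path crosses
        x, t = -a, 0.0
        while True:
            x += k * x * dt + noise * rng.standard_normal()  # force = -dU/dx = k*x
            t += dt
            if x <= -a:                           # recrossed the start: discard
                break
            if x >= a:                            # reached the far side
                return t

times = np.array([one_transition_path_time() for _ in range(200)])
print(f"mean transition path time: {times.mean():.3f} (n = {times.size})")
```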

  17. Probability workshop to be better in probability topic

    Science.gov (United States)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students at the higher education level have an effect on their performance. 62 fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to their anxiety level; that is, a higher level of statistics anxiety does not lead to a lower score in probability-topic performance. The study also revealed that motivated students who gained from the probability workshop showed a positive improvement in their probability-topic performance compared with before the workshop. In addition, there is a significant difference in performance between genders, with better achievement among female students than among male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  18. SLAP: Specification logic of actions with probability

    CSIR Research Space (South Africa)

    Rens, G

    2014-06-01

    Full Text Available Specification Logic of Actions with Probability. Gavin Rens (Centre for Artificial Intelligence Research, University of KwaZulu-Natal, and CSIR Meraka, South Africa), Thomas Meyer (Centre for Artificial Intelligence Research, University of KwaZulu-Natal, and CSIR Meraka, South Africa), Gerhard Lakemeyer (RWTH Aachen University, Germany). Abstract: A logic for specifying probabilistic transition... containing (x,⊥) at the end of the applicable branch. A naïve solution might be to add a tableau rule which deals with this case. However, there are many subtle cases and designing rules to cover all cases is very difficult. And proving that the tableau...

  19. Transition Probabilities and Different Levels of Prominence in Segmentation

    Science.gov (United States)

    Ordin, Mikhail; Nespor, Marina

    2013-01-01

    A large body of empirical research demonstrates that people exploit a wide variety of cues for the segmentation of continuous speech in artificial languages, including rhythmic properties, phrase boundary cues, and statistical regularities. However, less is known regarding how the different cues interact. In this study we addressed the question of…

  20. Anomalous behaviour of transition probabilities in75Kr

    Science.gov (United States)

    Skoda, S.; Wood, J. L.; Eberth, J.; Busch, J.; Liebchen, M.; Mylaeus, T.; Schmal, N.; Sefzig, R.; Teichert, W.; Wiosna, M.

    1990-12-01

    Two collective bands of 75Kr have been extended up to spin 21/2 using the compound reactions 64Zn(14N, p2n)75Kr and 50Cr(28Si, 2pn)75Kr. Spins and parities were assigned from neutron-gated γ-ray angular distributions and excitation functions using the OSIRIS anti-Compton spectrometer. The bands are interpreted to be built on the well-deformed Nilsson states [442] 5/2 and [301] 3/2. Energies for both bands and the order of magnitude of the mixing ratios in the f5/2 band can be reproduced within the single-particle-plus-rotor model, while the experimental Q(I→I-1)/Q(I→I-2) ratios, deduced from mixing ratios and branching ratios, exhibit large deviations, by a factor of 4 to 6, from the theoretical values (which are around one). An explanation of this effect may be found by treating the two rotational bands each as a result of mixing between rotational bands of oblate and prolate states, thus explaining the large difference between B(E2, I→I-1) and B(E2, I→I-2) in the bands of 75Kr.

  1. Anomalous behaviour of transition probabilities in sup 75 Kr

    Energy Technology Data Exchange (ETDEWEB)

    Skoda, S.; Eberth, J.; Busch, J.; Liebchen, M.; Mylaeus, T.; Schmal, N.; Sefzig, R.; Teichert, W.; Wiosna, M. (Koeln Univ. (Germany, F.R.). Inst. fuer Kernphysik); Wood, J.L. (Georgia Inst. of Tech., Atlanta (USA). School of Physics)

    1990-08-01

    Two collective bands of 75Kr have been extended up to spin 21/2 using the compound reactions 64Zn(14N, p2n)75Kr and 50Cr(28Si, 2pn)75Kr. Spins and parities were assigned from neutron-gated γ-ray angular distributions and excitation functions using the OSIRIS anti-Compton spectrometer. The bands are interpreted to be built on the well-deformed Nilsson states [442] 5/2 and [301] 3/2. Energies for both bands and the order of magnitude of the mixing ratios in the f5/2 band can be reproduced within the single-particle-plus-rotor model, while the experimental Q(I→I-1)/Q(I→I-2) ratios, deduced from mixing ratios and branching ratios, exhibit large deviations, by a factor of 4 to 6, from the theoretical values (which are around one). An explanation of this effect may be found by treating the two rotational bands each as a result of mixing between rotational bands of oblate and prolate states, thus explaining the large difference between B(E2, I→I-1) and B(E2, I→I-2) in the bands of 75Kr. (orig.)

  2. Applied probability and stochastic processes

    CERN Document Server

    Sumita, Ushio

    1999-01-01

    Applied Probability and Stochastic Processes is an edited work written in honor of Julien Keilson. This volume has attracted a host of scholars in applied probability, who have made major contributions to the field, and have written survey and state-of-the-art papers on a variety of applied probability topics, including, but not limited to: perturbation method, time reversible Markov chains, Poisson processes, Brownian techniques, Bayesian probability, optimal quality control, Markov decision processes, random matrices, queueing theory and a variety of applications of stochastic processes. The book has a mixture of theoretical, algorithmic, and application chapters providing examples of the cutting-edge work that Professor Keilson has done or influenced over the course of his highly-productive and energetic career in applied probability and stochastic processes. The book will be of interest to academic researchers, students, and industrial practitioners who seek to use the mathematics of applied probability i...

  3. Probability and statistics: selected problems

    OpenAIRE

    Machado, J.A. Tenreiro; Pinto, Carla M. A.

    2014-01-01

    Probability and Statistics—Selected Problems is a unique book for senior undergraduate and graduate students to quickly review basic material in probability and statistics. Descriptive statistics are presented first, and probability is reviewed second. Discrete and continuous distributions are presented. Sampling and estimation with hypothesis testing are presented in the last two chapters. The solutions to the proposed exercises are listed for the reader's reference.

  4. Sharp transition towards shared vocabularies in multi-agent systems

    Science.gov (United States)

    Baronchelli, Andrea; Felici, Maddalena; Loreto, Vittorio; Caglioti, Emanuele; Steels, Luc

    2006-06-01

    What processes can explain how very large populations are able to converge on the use of a particular word or grammatical construction without global coordination? Answering this question helps to understand why new language constructs usually propagate along an S-shaped curve with a rather sudden transition towards global agreement. It also helps to analyse and design new technologies that support or orchestrate self-organizing communication systems, such as recent social tagging systems for the web. The article introduces and studies a microscopic model of communicating autonomous agents performing language games without any central control. We show that the system undergoes a disorder/order transition, going through a sharp symmetry breaking process to reach a shared set of conventions. Before the transition, the system builds up non-trivial scale-invariant correlations, for instance in the distribution of competing synonyms, which display a Zipf-like law. These correlations make the system ready for the transition towards shared conventions, which, observed on the timescale of collective behaviours, becomes sharper and sharper with system size. This surprising result not only explains why human language can scale up to very large populations but also suggests ways to optimize artificial semiotic dynamics.
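
    The model referred to here is commonly known as the naming game; a minimal version (our sketch, with an illustrative population size and update rule) already converges to a single shared convention without any central control.

```python
import random

random.seed(7)
N = 200
agents = [set() for _ in range(N)]     # each agent's inventory of words
fresh = iter(range(10**9))             # supply of newly invented words
games = 0
while any(len(a) != 1 for a in agents) or len(set.union(*agents)) > 1:
    s, h = random.sample(range(N), 2)  # pick a speaker and a hearer
    if not agents[s]:
        agents[s].add(next(fresh))     # speaker invents a word if needed
    word = random.choice(tuple(agents[s]))
    if word in agents[h]:              # success: both collapse to the word
        agents[s] = {word}
        agents[h] = {word}
    else:                              # failure: the hearer records it
        agents[h].add(word)
    games += 1
print(f"converged to one shared word after {games} games")
```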

  5. Training Teachers to Teach Probability

    Science.gov (United States)

    Batanero, Carmen; Godino, Juan D.; Roa, Rafael

    2004-01-01

    In this paper we analyze the reasons why the teaching of probability is difficult for mathematics teachers, describe the contents needed in the didactical preparation of teachers to teach probability and analyze some examples of activities to carry out this training. These activities take into account the experience at the University of Granada,…

  6. Expected utility with lower probabilities

    DEFF Research Database (Denmark)

    Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1994-01-01

    An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...

  7. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete Probability; The Cast of Characters; Properties of Probability; Simulation; Random Sampling; Conditional Probability; Independence; Discrete Distributions; Discrete Random Variables, Distributions, and Expectations; Bernoulli and Binomial Random Variables; Geometric and Negative Binomial Random Variables; Poisson Distribution; Joint, Marginal, and Conditional Distributions; More on Expectation; Continuous Probability; From the Finite to the (Very) Infinite; Continuous Random Variables and Distributions; Continuous Expectation; Continuous Distributions; The Normal Distribution; Bivariate Normal Distribution; New Random Variables from Old; Order Statistics; Gamma Distributions; Chi-Square, Student's t, and F-Distributions; Transformations of Normal Random Variables; Asymptotic Theory; Strong and Weak Laws of Large Numbers; Central Limit Theorem; Stochastic Processes and Applications; Markov Chains; Poisson Processes; Queues; Brownian Motion; Financial Mathematics; Appendix; Introduction to Mathematica; Glossary of Mathematica Commands for Probability; Short Answers...

  8. Probability machines: consistent probability estimation using nonparametric learning machines.

    Science.gov (United States)

    Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A

    2012-01-01

    Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
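
    The random forest variant of this recipe is easy to try: fit a regression forest to the 0/1 response, and the fitted conditional mean is the probability estimate. The sketch below uses synthetic data and assumes scikit-learn is available; it illustrates the general idea rather than the paper's exact algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
p_true = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))   # true P(Y=1|X)
y = rng.binomial(1, p_true)                                  # binary response

# Regression on 0/1 labels: the forest's prediction estimates P(Y=1|X).
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=20, random_state=0)
rf.fit(X, y.astype(float))
p_hat = rf.predict(X)
print(f"mean absolute error of probability estimates: "
      f"{np.mean(np.abs(p_hat - p_true)):.3f}")
```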

  9. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary. Background: Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives: The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods: Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results: Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions: Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433

  10. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
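
    The article's central effect can be checked by simulation. In the sketch below (our illustrative parameters), the threshold is set at the estimated 99% quantile of a lognormal risk factor from a small sample; averaging the true exceedance probability over many samples shows the expected failure frequency rising above the nominal 1%.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 1.0, 50          # true lognormal parameters, sample size
z99 = norm.ppf(0.99)                 # standard normal 99% quantile

true_exceedance = []
for _ in range(20_000):
    logs = np.log(rng.lognormal(mu, sigma, size=n))
    threshold_log = logs.mean() + z99 * logs.std(ddof=1)   # plug-in quantile
    # true probability that the risk factor exceeds the chosen threshold
    true_exceedance.append(norm.sf((threshold_log - mu) / sigma))

print(f"nominal failure probability: 0.010, expected under parameter "
      f"uncertainty: {np.mean(true_exceedance):.3f}")
```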

  11. Probability with applications and R

    CERN Document Server

    Dobrow, Robert P

    2013-01-01

    An introduction to probability at the undergraduate level Chance and randomness are encountered on a daily basis. Authored by a highly qualified professor in the field, Probability: With Applications and R delves into the theories and applications essential to obtaining a thorough understanding of probability. With real-life examples and thoughtful exercises from fields as diverse as biology, computer science, cryptology, ecology, public health, and sports, the book is accessible for a variety of readers. The book's emphasis on simulation through the use of the popular R software language c

  12. A philosophical essay on probabilities

    CERN Document Server

    Laplace, Marquis de

    1996-01-01

    A classic of science, this famous essay by "the Newton of France" introduces lay readers to the concepts and uses of probability theory. It is of especial interest today as an application of mathematical techniques to problems in social and biological sciences. Generally recognized as the founder of the modern phase of probability theory, Laplace here applies the principles and general results of his theory "to the most important questions of life, which are, in effect, for the most part, problems in probability." Thus, without the use of higher mathematics, he demonstrates the application

  13. Introduction to probability and measure

    CERN Document Server

    Parthasarathy, K R

    2005-01-01

    According to a remark attributed to Mark Kac 'Probability Theory is a measure theory with a soul'. This book with its choice of proofs, remarks, examples and exercises has been prepared taking both these aesthetic and practical aspects into account.

  14. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  15. Considerations on a posteriori probability

    Directory of Open Access Journals (Sweden)

    Corrado Gini

    2015-06-01

    In this first paper of 1911 relating to the sex ratio at birth, Gini repurposed Laplace's succession rule in a Bayesian version. Gini's intuition consisted in assuming a Beta-type distribution for the prior probability and introducing the "method of results" (direct and indirect) for the determination of prior probabilities according to the statistical frequencies obtained from the data.

  16. DECOFF Probabilities of Failed Operations

    DEFF Research Database (Denmark)

    Gintautas, Tomas

    A statistical procedure for estimating Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. The influence of the safety factor is also investigated. The DECOFF statistical method is benchmarked against the standard Alpha-factor method defined by (DNV, 2011) and model performance is evaluated. Also, the effect that weather forecast uncertainty has on the output Probabilities of Failure is analysed and reported.

  17. Lost in Transit

    DEFF Research Database (Denmark)

    Lange, Ida Sofie Gøtzsche; Laursen, Lea Louise Holst; Lassen, Claus

    Thinking of Transit Places, the first sites that come to mind will probably be airports, train stations and motorways. Such places are overall mono-functional, with the embedded rationale of people's desire to move (themselves or goods) from one place to another. Often different service functions...

  18. Probably not future prediction using probability and statistical inference

    CERN Document Server

    Dworsky, Lawrence N

    2008-01-01

    An engaging, entertaining, and informative introduction to probability and prediction in our everyday lives Although Probably Not deals with probability and statistics, it is not heavily mathematical and is not filled with complex derivations, proofs, and theoretical problem sets. This book unveils the world of statistics through questions such as what is known based upon the information at hand and what can be expected to happen. While learning essential concepts including "the confidence factor" and "random walks," readers will be entertained and intrigued as they move from chapter to chapter. Moreover, the author provides a foundation of basic principles to guide decision making in almost all facets of life including playing games, developing winning business strategies, and managing personal finances. Much of the book is organized around easy-to-follow examples that address common, everyday issues such as: How travel time is affected by congestion, driving speed, and traffic lights Why different gambling ...

  19. Mixing Up Nondeterminism and Probability: A Preliminary Report

    NARCIS (Netherlands)

    den Hartog, Jeremy; De Vink, E.P.

    For a process language with both nondeterministic and probabilistic choice, and a form of failure, a transition system is given from which, in a modular way, various operational models corresponding to various interpretations of nondeterminism and probability can be obtained. The effect of failure of

  20. Normal probability plots with confidence.

    Science.gov (United States)

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
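
    A sketch of the underlying construction, assuming SciPy. It uses pointwise intervals from the Beta(i, n-i+1) law of uniform order statistics; the paper's contribution is the harder simultaneous 1-α version, which this does not reproduce, and the bands are approximate because location and scale are estimated.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        x = np.sort(rng.normal(size=50))
        n = len(x)
        i = np.arange(1, n + 1)
        # Interval endpoints for each order statistic on the normal quantile scale.
        q_lo = stats.norm.ppf(stats.beta.ppf(0.025, i, n - i + 1))
        q_hi = stats.norm.ppf(stats.beta.ppf(0.975, i, n - i + 1))
        z = (x - x.mean()) / x.std(ddof=1)        # standardized sample
        inside = (q_lo <= z) & (z <= q_hi)
        print(f"{inside.mean():.0%} of points inside their pointwise bands")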

  1. Probability theory a foundational course

    CERN Document Server

    Pakshirajan, R P

    2013-01-01

    This book shares the dictum of J. L. Doob in treating Probability Theory as a branch of Measure Theory and establishes this relation early. Probability measures in product spaces are introduced right at the start by way of laying the ground work to later claim the existence of stochastic processes with prescribed finite dimensional distributions. Other topics analysed in the book include supports of probability measures, zero-one laws in product measure spaces, Erdos-Kac invariance principle, functional central limit theorem and functional law of the iterated logarithm for independent variables, Skorohod embedding, and the use of analytic functions of a complex variable in the study of geometric ergodicity in Markov chains. This book is offered as a text book for students pursuing graduate programs in Mathematics and/or Statistics. The book aims to help the teacher present the theory with ease, and to help the student sustain his interest and joy in learning the subject.

  2. Approximation methods in probability theory

    CERN Document Server

    Čekanavičius, Vydas

    2016-01-01

    This book presents a wide range of well-known and less common methods used for estimating the accuracy of probabilistic approximations, including the Esseen type inversion formulas, the Stein method as well as the methods of convolutions and triangle function. Emphasising the correct usage of the methods presented, each step required for the proofs is examined in detail. As a result, this textbook provides valuable tools for proving approximation theorems. While Approximation Methods in Probability Theory will appeal to everyone interested in limit theorems of probability theory, the book is particularly aimed at graduate students who have completed a standard intermediate course in probability theory. Furthermore, experienced researchers wanting to enlarge their toolkit will also find this book useful.

  3. VIBRATION ISOLATION SYSTEM PROBABILITY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Smirnov Vladimir Alexandrovich

    2012-10-01

    The article deals with the probability analysis for a vibration isolation system of high-precision equipment, which is extremely sensitive to low-frequency oscillations even of submicron amplitude. The external sources of low-frequency vibrations may include the natural city background or internal low-frequency sources inside buildings (pedestrian activity, HVAC). Taking the Gauss distribution into account, the author estimates the probability that the relative displacement of the isolated mass remains below the vibration criteria. The problem is solved in the three-dimensional space spanned by the system parameters, including damping and natural frequency. According to this probability distribution, the chance of exceeding the vibration criteria for a vibration isolation system is evaluated. Optimal system parameters (damping and natural frequency) are derived such that the probability of exceeding vibration criteria VC-E and VC-D stays below 0.04.
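
    A minimal sketch of the Gaussian exceedance step, assuming SciPy; the RMS displacement and criterion values below are illustrative, not taken from the article.

        from scipy.stats import norm

        sigma = 0.08   # RMS relative displacement, um (assumed)
        vc = 0.20      # vibration criterion amplitude, um (assumed)
        # Two-sided probability that a zero-mean Gaussian displacement exceeds VC.
        p_exceed = 2 * norm.sf(vc, loc=0.0, scale=sigma)
        print(f"P(|displacement| > VC) = {p_exceed:.4f}")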

  4. Probability on real Lie algebras

    CERN Document Server

    Franz, Uwe

    2016-01-01

    This monograph is a progressive introduction to non-commutativity in probability theory, summarizing and synthesizing recent results about classical and quantum stochastic processes on Lie algebras. In the early chapters, focus is placed on concrete examples of the links between algebraic relations and the moments of probability distributions. The subsequent chapters are more advanced and deal with Wigner densities for non-commutative couples of random variables, non-commutative stochastic processes with independent increments (quantum Lévy processes), and the quantum Malliavin calculus. This book will appeal to advanced undergraduate and graduate students interested in the relations between algebra, probability, and quantum theory. It also addresses a more advanced audience by covering other topics related to non-commutativity in stochastic calculus, Lévy processes, and the Malliavin calculus.

  5. Flood hazard probability mapping method

    Science.gov (United States)

    Kalantari, Zahra; Lyon, Steve; Folkeson, Lennart

    2015-04-01

    In Sweden, spatially explicit approaches have been applied in various disciplines such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale whereas just a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. The aim of the present study was to develop methodology for predicting the spatial probability of flooding on a general large scale. Factors such as topography, land use, soil data and other PCDs were analysed in terms of their relative importance for flood generation. The specific objective was to test the methodology using statistical methods to identify factors having a significant role in controlling flooding. A second objective was to generate an index quantifying the flood probability value for each cell, based on different weighted factors, in order to provide a more accurate analysis of potential high flood hazards than can be obtained using just a single variable. The ability of indicator covariance to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to extract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features on the landscape. By using PCD data, realistic representations of high-probability flood regions were made, regardless of the magnitude of rain events. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
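
    A toy version of the weighted-factor index idea, assuming NumPy; the factor names and weights are illustrative assumptions, not the study's values.

        import numpy as np

        rng = np.random.default_rng(3)
        cells = 10
        factors = {                      # one value per cell, normalized to [0, 1]
            "topographic_wetness": rng.random(cells),
            "land_use_runoff":     rng.random(cells),
            "soil_infiltration":   1.0 - rng.random(cells),  # low infiltration -> wetter
        }
        weights = {"topographic_wetness": 0.5, "land_use_runoff": 0.3,
                   "soil_infiltration": 0.2}
        index = sum(w * factors[name] for name, w in weights.items())
        print(np.round(index, 2))        # relative flood probability index per cell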

  6. Incompatible Stochastic Processes and Complex Probabilities

    Science.gov (United States)

    Zak, Michail

    1997-01-01

    The definition of conditional probabilities is based upon the existence of a joint probability. However, a reconstruction of the joint probability from given conditional probabilities imposes certain constraints upon the latter, so that if several conditional probabilities are chosen arbitrarily, the corresponding joint probability may not exist.
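
    A small sketch of the constraint for two binary events A and B, under the stated setup: three conditionals pin down P(B) via Bayes' rule, so a fourth chosen freely is usually incompatible with any joint distribution (degenerate zero-denominator cases are ignored in this sketch).

        def joint_exists(p_a_b, p_a_nb, p_b_a, p_b_na, tol=1e-9):
            # Solve P(B) = q implied by P(A|B), P(A|~B), P(B|A) via Bayes' rule.
            q = (p_b_a * p_a_nb) / (p_a_b * (1 - p_b_a) + p_b_a * p_a_nb)
            p_a = p_a_b * q + p_a_nb * (1 - q)
            # A joint exists only if the implied P(B|~A) matches the given one.
            implied_p_b_na = (1 - p_a_b) * q / (1 - p_a)
            return abs(implied_p_b_na - p_b_na) < tol

        print(joint_exists(0.9, 0.2, 0.6, 0.4))                  # arbitrary: False
        print(joint_exists(0.9, 0.2, 0.45 / 0.55, 0.05 / 0.45))  # consistent: True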

  7. Probability measures on metric spaces

    CERN Document Server

    Parthasarathy, K R

    2005-01-01

    In this book, the author gives a cohesive account of the theory of probability measures on complete metric spaces (which is viewed as an alternative approach to the general theory of stochastic processes). After a general description of the basics of topology on the set of measures, the author discusses regularity, tightness, and perfectness of measures, properties of sampling distributions, and metrizability and compactness theorems. Next, he describes arithmetic properties of probability measures on metric groups and locally compact abelian groups. Covered in detail are notions such as decom

  8. Probability and Statistics: 5 Questions

    DEFF Research Database (Denmark)

    Probability and Statistics: 5 Questions is a collection of short interviews based on 5 questions presented to some of the most influential and prominent scholars in probability and statistics. We hear their views on the fields, aims, scopes, the future direction of research and how their work fits in these respects. Interviews with Nick Bingham, Luc Bovens, Terrence L. Fine, Haim Gaifman, Donald Gillies, James Hawthorne, Carl Hoefer, James M. Joyce, Joseph B. Kadane, Isaac Levi, D.H. Mellor, Patrick Suppes, Jan von Plato, Carl Wagner, Sandy Zabell...

  9. Knowledge typology for imprecise probabilities.

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G. D. (Gregory D.); Zucker, L. J. (Lauren J.)

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

  10. Probability, Statistics, and Stochastic Processes

    CERN Document Server

    Olofsson, Peter

    2011-01-01

    A mathematical and intuitive approach to probability, statistics, and stochastic processes This textbook provides a unique, balanced approach to probability, statistics, and stochastic processes. Readers gain a solid foundation in all three fields that serves as a stepping stone to more advanced investigations into each area. This text combines a rigorous, calculus-based development of theory with a more intuitive approach that appeals to readers' sense of reason and logic, an approach developed through the author's many years of classroom experience. The text begins with three chapters that d

  11. Probability, statistics, and queueing theory

    CERN Document Server

    Allen, Arnold O

    1990-01-01

    This is a textbook on applied probability and statistics with computer science applications for students at the upper undergraduate level. It may also be used as a self study book for the practicing computer science professional. The successful first edition of this book proved extremely useful to students who need to use probability, statistics and queueing theory to solve problems in other fields, such as engineering, physics, operations research, and management science. The book has also been successfully used for courses in queueing theory for operations research students. This second edit

  12. Exact Probability Distribution versus Entropy

    Directory of Open Access Journals (Sweden)

    Kerstin Andersson

    2014-10-01

    The problem addressed concerns the determination of the average number of successive attempts of guessing a word of a certain length consisting of letters with given probabilities of occurrence. Both first- and second-order approximations to a natural language are considered. The guessing strategy used is guessing words in decreasing order of probability. When word and alphabet sizes are large, approximations are necessary in order to estimate the number of guesses. Several kinds of approximations are discussed demonstrating moderate requirements regarding both memory and central processing unit (CPU) time. When considering realistic sizes of alphabets and words (100), the number of guesses can be estimated within minutes with reasonable accuracy (a few percent) and may therefore constitute an alternative to, e.g., various entropy expressions. For many probability distributions, the density of the logarithm of probability products is close to a normal distribution. For those cases, it is possible to derive an analytical expression for the average number of guesses. The proportion of guesses needed on average compared to the total number decreases almost exponentially with the word length. The leading term in an asymptotic expansion can be used to estimate the number of guesses for large word lengths. Comparisons with analytical lower bounds and entropy expressions are also provided.
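
    For a small candidate set the quantity is computable directly: guessing in decreasing order of probability, the expected number of guesses is the probability-weighted rank. A toy distribution, assuming NumPy:

        import numpy as np

        probs = np.array([0.4, 0.25, 0.15, 0.1, 0.06, 0.04])  # toy word probabilities
        order = np.sort(probs)[::-1]                           # decreasing order
        expected_guesses = np.sum(np.arange(1, len(order) + 1) * order)
        print(expected_guesses)   # 2.29 for this toy distribution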

  13. Probability inequalities for decomposition integrals

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mesiar, Radko

    2017-01-01

    Vol. 315, No. 1 (2017), pp. 240-248, ISSN 0377-0427. Institutional support: RVO:67985556. Keywords: Decomposition integral * Superdecomposition integral * Probability inequalities. Subject RIV: BA - General Mathematics. Impact factor: 1.357, year: 2016. http://library.utia.cas.cz/separaty/2017/E/mesiar-0470959.pdf

  14. A Novel Approach to Probability

    CERN Document Server

    Kafri, Oded

    2016-01-01

    When P indistinguishable balls are randomly distributed among L distinguishable boxes, and considering the dense system in which P is much greater than L, our natural intuition tells us that the box with the average number of balls has the highest probability and that none of the boxes are empty; in reality, however, the probability of the empty box is always the highest. This fact stands in contradistinction to the sparse system, in which the number of balls is smaller than the number of boxes (e.g., the energy distribution in a gas) and the average value has the highest probability. Here we show that when we postulate the requirement that all possible configurations of balls in the boxes have equal probabilities, a realistic "long tail" distribution is obtained. This formalism, when applied to sparse systems, converges to distributions in which the average is preferred. We calculate some of the distributions resulting from this postulate and obtain most of the known distributions in nature, namely, Zipf law, Benford law, part...
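
    A quick check of the claim under the equal-configuration postulate, assuming it corresponds to the standard stars-and-bars count: the occupancy n of a given box then has probability C(P-n+L-2, L-2)/C(P+L-1, L-1), which is maximal at n = 0.

        from math import comb

        P, L = 100, 10                           # dense regime: P much greater than L
        total = comb(P + L - 1, L - 1)           # number of equally likely configurations
        pmf = [comb(P - n + L - 2, L - 2) / total for n in range(P + 1)]
        # The empty box beats the "average" occupancy P/L.
        print(round(pmf[0], 3), round(pmf[P // L], 3))   # e.g. 0.083 vs ~0.037
        print(pmf[0] > pmf[P // L])                      # True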

  15. Probability and statistics: A reminder

    Science.gov (United States)

    Clément, Benoit

    2013-07-01

    The main purpose of these lectures is to provide the reader with the tools needed for data analysis in the framework of physics experiments. Basic concepts are introduced together with examples of application in experimental physics. The lecture is divided into two parts: probability and statistics. It is built on the introduction from "data analysis in experimental sciences" given in [1

  16. GPS: Geometry, Probability, and Statistics

    Science.gov (United States)

    Field, Mike

    2012-01-01

    It might be said that for most occupations there is now less of a need for mathematics than there was say fifty years ago. But, the author argues, geometry, probability, and statistics constitute essential knowledge for everyone. Maybe not the geometry of Euclid, but certainly geometrical ways of thinking that might enable us to describe the world…

  17. Probability and statistics: A reminder

    OpenAIRE

    Clément Benoit

    2013-01-01

    The main purpose of these lectures is to provide the reader with the tools needed for data analysis in the framework of physics experiments. Basic concepts are introduced together with examples of application in experimental physics. The lecture is divided into two parts: probability and statistics. It is built on the introduction from “data analysis in experimental sciences” given in [1

  18. Estimating Probabilities in Recommendation Systems

    OpenAIRE

    Sun, Mingxuan; Lebanon, Guy; Kidwell, Paul

    2010-01-01

    Recommendation systems are emerging as an important business application with significant economic impact. Currently popular systems include Amazon's book recommendations, Netflix's movie recommendations, and Pandora's music recommendations. In this paper we address the problem of estimating probabilities associated with recommendation system data using non-parametric kernel smoothing. In our estimation we interpret missing items as randomly censored observations and obtain efficient computat...

  19. Steered transition path sampling.

    Science.gov (United States)

    Guttenberg, Nicholas; Dinner, Aaron R; Weare, Jonathan

    2012-06-21

    We introduce a path sampling method for obtaining statistical properties of an arbitrary stochastic dynamics. The method works by decomposing a trajectory in time, estimating the probability of satisfying a progress constraint, modifying the dynamics based on that probability, and then reweighting to calculate averages. Because the progress constraint can be formulated in terms of occurrences of events within time intervals, the method is particularly well suited for controlling the sampling of currents of dynamic events. We demonstrate the method for calculating transition probabilities in barrier crossing problems and survival probabilities in strongly diffusive systems with absorbing states, which are difficult to treat by shooting. We discuss the relation of the algorithm to other methods.

  20. Large deviations and idempotent probability

    CERN Document Server

    Puhalskii, Anatolii

    2001-01-01

    In the view of many probabilists, author Anatolii Puhalskii's research results stand among the most significant achievements in the modern theory of large deviations. In fact, his work marked a turning point in the depth of our understanding of the connections between the large deviation principle (LDP) and well-known methods for establishing weak convergence results. Large Deviations and Idempotent Probability expounds upon the recent methodology of building large deviation theory along the lines of weak convergence theory. The author develops an idempotent (or maxitive) probability theory, introduces idempotent analogues of martingales (maxingales), Wiener and Poisson processes, and Ito differential equations, and studies their properties. The large deviation principle for stochastic processes is formulated as a certain type of convergence of stochastic processes to idempotent processes. The author calls this large deviation convergence. The approach to establishing large deviation convergence uses novel com...

  1. Probability, Information and Statistical Physics

    Science.gov (United States)

    Kuzemsky, A. L.

    2016-03-01

    In this short survey review we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the inter-relations between theories. The basic aim is tutorial, i.e. to provide a basic introduction to the analysis and applications of probabilistic concepts in the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information and statistical description with regard to basic notions of statistical mechanics of complex systems. It also includes a synthesis of past and present research and a survey of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, so as to make these ideas easier to understand and to apply.

  2. Probability on compact Lie groups

    CERN Document Server

    Applebaum, David

    2014-01-01

    Probability theory on compact Lie groups deals with the interaction between “chance” and “symmetry,” a beautiful area of mathematics of great interest for its own sake but which is now also finding increasing applications in statistics and engineering (particularly with respect to signal processing). The author gives a comprehensive introduction to some of the principle areas of study, with an emphasis on applicability. The most important topics presented are: the study of measures via the non-commutative Fourier transform, existence and regularity of densities, properties of random walks and convolution semigroups of measures, and the statistical problem of deconvolution. The emphasis on compact (rather than general) Lie groups helps readers to get acquainted with what is widely seen as a difficult field but which is also justified by the wealth of interesting results at this level and the importance of these groups for applications. The book is primarily aimed at researchers working in probability, s...

  3. Comments on quantum probability theory.

    Science.gov (United States)

    Sloman, Steven

    2014-01-01

    Quantum probability theory (QP) is the best formal representation available of the most common form of judgment involving attribute comparison (inside judgment). People are capable, however, of judgments that involve proportions over sets of instances (outside judgment). Here, the theory does not do so well. I discuss the theory both in terms of descriptive adequacy and normative appropriateness. Copyright © 2013 Cognitive Science Society, Inc.

  4. Probability Measures on Groups IX

    CERN Document Server

    1989-01-01

    The latest in this series of Oberwolfach conferences focussed on the interplay between structural probability theory and various other areas of pure and applied mathematics such as Tauberian theory, infinite-dimensional rotation groups, central limit theorems, harmonizable processes, and spherical data. Thus it was attended by mathematicians whose research interests range from number theory to quantum physics in conjunction with structural properties of probabilistic phenomena. This volume contains 5 survey articles submitted on special invitation and 25 original research papers.

  5. The Probabilities of Unique Events

    Science.gov (United States)

    2012-08-30

    compensation (a $10 lottery) on Amazon Mechanical Turk, an online platform hosted on Amazon.com [31]. All of the participants stated that they were native...

  6. Probability and statistics: A reminder

    Directory of Open Access Journals (Sweden)

    Clément Benoit

    2013-07-01

    The main purpose of these lectures is to provide the reader with the tools needed for data analysis in the framework of physics experiments. Basic concepts are introduced together with examples of application in experimental physics. The lecture is divided into two parts: probability and statistics. It is built on the introduction from “data analysis in experimental sciences” given in [1

  7. Probability, Statistics, and Computational Science

    OpenAIRE

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient infe...

  8. Superconductor-insulator transition on annealed complex networks

    Science.gov (United States)

    Bianconi, Ginestra

    2012-06-01

    Cuprates show multiphase and multiscale complexity that has hindered physicists' search for the mechanism of high Tc for many years. Recently, attention has turned to a possible optimum inhomogeneity of dopants, defects, and interstitials, and the structural scale invariance of dopants detected by scanning micro-x-ray diffraction has been reported to promote the critical temperature. In order to shed light on critical phenomena in granular materials, here we propose a stylized model capturing the essential characteristics of the superconducting-insulator transition of a highly dynamical, heterogeneous granular material: the random transverse Ising model (RTIM) on annealed complex networks. We show that when the networks encode for high heterogeneity of the expected degrees, described by a power-law distribution, the critical temperature for the onset of the superconducting phase diverges to infinity when the power-law exponent γ of the expected degree distribution is less than 3, i.e., γ < 3. […] The electronic background is triggered by an external parameter g that determines an exponential cutoff in the power-law expected degree distribution characterized by an exponent γ. We find that for g = gc the critical temperature for the superconducting-insulator transition has a maximum if γ > 3 and diverges if γ < 3.

  9. Entropy in probability and statistics

    Energy Technology Data Exchange (ETDEWEB)

    Rolke, W.A.

    1992-01-01

    The author develops a theory of entropy, where entropy is defined as the Legendre-Fenchel transform of the logarithmic moment generating function of a probability measure on a Banach space. A variety of properties relating the probability measure and its entropy are proven. It is shown that the entropy of a large class of stochastic processes can be approximated by the entropies of the finite-dimensional distributions of the process. For several types of measures the author finds explicit formulas for the entropy, for example for stochastic processes with independent increments and for Gaussian processes. For the entropy of Markov chains, evaluated at the observations of the process, the author proves a central limit theorem. Theorems relating weak convergence of probability measures on a finite dimensional space and pointwise convergence of their entropies are developed and then used to give a new proof of Donsker's theorem. Finally the use of entropy in statistics is discussed. The author shows the connection between entropy and Kullback's minimum discrimination information. A central limit theorem yields a test for the independence of a sequence of observations.
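
    The definition can be stated compactly. In the finite-dimensional case (a standard form; the work itself is set on a general Banach space), the entropy of the law of X is

        \[
          I(x) \;=\; \sup_{\theta \in \mathbb{R}^d} \Bigl\{ \langle \theta, x \rangle \;-\; \log \mathbb{E}\, e^{\langle \theta, X \rangle} \Bigr\},
        \]

    the Legendre-Fenchel transform of the logarithmic moment generating function \(\Lambda(\theta) = \log \mathbb{E}\, e^{\langle \theta, X \rangle}\).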

  10. Frequentist probability and frequentist statistics

    Energy Technology Data Exchange (ETDEWEB)

    Neyman, J.

    1977-01-01

    A brief, nontechnical outline is given of the author's views on the ''frequentist'' theory of probability and the ''frequentist'' theory of statistics; their applications are illustrated in a few domains of study of nature. The phenomenon of apparently stable relative frequencies as the source of the frequentist theories of probability and statistics is taken up first. Three steps are set out: empirical establishment of apparently stable long-run relative frequencies of events judged interesting, as they develop in nature; guessing and then verifying the chance mechanism, the repeated operation of which produced the observed frequencies--this is a problem of frequentist probability theory; using the hypothetical chance mechanism of the phenomenon studied to deduce rules of adjusting our actions to the observations to ensure the highest ''measure'' of ''success''. Illustrations of the three steps are given. The theory of testing statistical hypotheses is sketched: basic concepts, simple and composite hypotheses, hypothesis tested, importance of the power of the test used, practical applications of the theory of testing statistical hypotheses. Basic ideas and an example of the randomization of experiments are discussed, and an ''embarrassing'' example is given. The problem of statistical estimation is sketched: example of an isolated problem, example of connected problems treated routinely, empirical Bayes theory, point estimation. The theory of confidence intervals is outlined: basic concepts, anticipated misunderstandings, construction of confidence intervals: regions of acceptance. Finally, the theory of estimation by confidence intervals or regions is considered briefly. 4 figures. (RWR)

  11. Probability of Detection Demonstration Transferability

    Science.gov (United States)

    Parker, Bradford H.

    2008-01-01

    The ongoing Mars Science Laboratory (MSL) Propellant Tank Penetrant Nondestructive Evaluation (NDE) Probability of Detection (POD) Assessment (NESC activity) has surfaced several issues associated with liquid penetrant POD demonstration testing. This presentation lists factors that may influence the transferability of POD demonstration tests. Initial testing will address the liquid penetrant inspection technique. Some of the factors to be considered in this task are crack aspect ratio, the extent of the crack opening, the material and the distance between the inspection surface and the inspector's eye.

  12. Probability, statistics, and computational science.

    Science.gov (United States)

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.

  13. Nash equilibrium with lower probabilities

    DEFF Research Database (Denmark)

    Groes, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte

    1998-01-01

    We generalize the concept of Nash equilibrium in mixed strategies for strategic form games to allow for ambiguity in the players' expectations. In contrast to other contributions, we model ambiguity by means of so-called lower probability measures or belief functions, which makes it possible to distinguish between a player's assessment of ambiguity and his attitude towards ambiguity. We also generalize the concept of trembling hand perfect equilibrium. Finally, we demonstrate that for certain attitudes towards ambiguity it is possible to explain cooperation in the one-shot Prisoner's Dilemma...

  14. Lectures on probability and statistics

    Energy Technology Data Exchange (ETDEWEB)

    Yost, G.P.

    1984-09-01

    These notes are based on a set of statistics lectures delivered at Imperial College to the first-year postgraduate students in High Energy Physics. They are designed for the professional experimental scientist. We begin with the fundamentals of probability theory, in which one makes statements about the set of possible outcomes of an experiment, based upon a complete a priori understanding of the experiment. For example, in a roll of a set of (fair) dice, one understands a priori that any given side of each die is equally likely to turn up. From that, we can calculate the probability of any specified outcome. We finish with the inverse problem, statistics. Here, one begins with a set of actual data (e.g., the outcomes of a number of rolls of the dice), and attempts to make inferences about the state of nature which gave those data (e.g., the likelihood of seeing any given side of any given die turn up). This is a much more difficult problem, of course, and one's solutions often turn out to be unsatisfactory in one respect or another.

  15. Future southcentral US wildfire probability due to climate change

    Science.gov (United States)

    Stambaugh, Michael C.; Guyette, Richard P.; Stroh, Esther D.; Struckhoff, Matthew A.; Whittier, Joanna B.

    2018-01-01

    Globally, changing fire regimes due to climate is one of the greatest threats to ecosystems and society. In this paper, we present projections of future fire probability for the southcentral USA using downscaled climate projections and the Physical Chemistry Fire Frequency Model (PC2FM). Future fire probability is projected to both increase and decrease across the study region of Oklahoma, New Mexico, and Texas. Among all end-of-century projections, changes in fire probability (CFPs) range from −51% to +240%. Greatest absolute increases in fire probability are shown for areas within the range of approximately 75 to 160 cm mean annual precipitation (MAP), regardless of climate model. Although fire is likely to become more frequent across the southcentral USA, spatial patterns may remain similar unless significant increases in precipitation occur, whereby more extensive areas with increased fire probability are predicted. Perhaps one of the most important results is illumination of climate changes where fire probability response (+, −) may deviate (i.e., tipping points). Fire regimes of southcentral US ecosystems occur in a geographic transition zone from reactant- to reaction-limited conditions, potentially making them uniquely responsive to different scenarios of temperature and precipitation changes. Identification and description of these conditions may help anticipate fire regime changes that will affect human health, agriculture, species conservation, and nutrient and water cycling.

  16. Transitional Justice

    DEFF Research Database (Denmark)

    Gissel, Line Engbo

    This presentation builds on an earlier published article, 'Contemporary Transitional Justice: Normalising a Politics of Exception'. It argues that the field of transitional justice has undergone a shift in conceptualisation and hence practice. Transitional justice is presently understood to be th...

  17. Probability theory a comprehensive course

    CERN Document Server

    Klenke, Achim

    2014-01-01

    This second edition of the popular textbook contains a comprehensive course in modern probability theory. Overall, probabilistic concepts play an increasingly important role in mathematics, physics, biology, financial engineering and computer science. They help us in understanding magnetism, amorphous media, genetic diversity and the perils of random developments at financial markets, and they guide us in constructing more efficient algorithms.   To address these concepts, the title covers a wide variety of topics, many of which are not usually found in introductory textbooks, such as:   • limit theorems for sums of random variables • martingales • percolation • Markov chains and electrical networks • construction of stochastic processes • Poisson point process and infinite divisibility • large deviation principles and statistical physics • Brownian motion • stochastic integral and stochastic differential equations. The theory is developed rigorously and in a self-contained way, with the c...

  18. Lévy laws in free probability

    OpenAIRE

    Barndorff-Nielsen, Ole E.; Thorbjørnsen, Steen

    2002-01-01

    This article and its sequel outline recent developments in the theory of infinite divisibility and Lévy processes in free probability, a subject area belonging to noncommutative (or quantum) probability. The present paper discusses the classes of infinitely divisible probability measures in classical and free probability, respectively, via a study of the Bercovici–Pata bijection between these classes.

  19. Do Induced Abortions Affect the First Birth Probability?

    DEFF Research Database (Denmark)

    Hansen, Marie-Louise H; Stage, Louise; Knudsen, Lisbeth B.

    Objective: The focus of this paper is to study, on a national basis, how the event of an induced abortion modifies the transition to first birth for Danish women aged 20-39 years in the period 1982-2001, taking into account also educational level, family situation, and urbanisation. Data and methods: The data are obtained by linking several national public registers in Denmark, using the unique personal identification number. Initially, a logistic regression analysis is employed in order to model the first birth probability in a given year. Secondly, the long-term effect of an induced abortion is examined by cumulative first birth probabilities, derived from a life table analysis. Main findings and conclusion: Previous abortions increased the first birth probability, though this effect was almost entirely confined to single women. For cohabiting and married women, previous abortions had...

  20. Probability theory and mathematical statistics for engineers

    CERN Document Server

    Pugachev, V S

    1984-01-01

    Probability Theory and Mathematical Statistics for Engineers focuses on the concepts of probability theory and mathematical statistics for finite-dimensional random variables.The publication first underscores the probabilities of events, random variables, and numerical characteristics of random variables. Discussions focus on canonical expansions of random vectors, second-order moments of random vectors, generalization of the density concept, entropy of a distribution, direct evaluation of probabilities, and conditional probabilities. The text then examines projections of random vector

  1. A practical overview on probability distributions

    OpenAIRE

    Viti, Andrea; Terzi, Alberto; Bertolaccini, Luca

    2015-01-01

    The aim of this paper is to give a general definition of probability, of its main mathematical features, and of the features it presents under particular circumstances. The behavior of probability is linked to the features of the phenomenon we would like to predict. This link is called a probability distribution. Given the characteristics of the phenomena (which we can also call variables), corresponding probability distributions are defined. For categorical (or discrete) variables, the probability can be described by a bino...

  2. Introduction to probability theory with contemporary applications

    CERN Document Server

    Helms, Lester L

    2010-01-01

    This introduction to probability theory transforms a highly abstract subject into a series of coherent concepts. Its extensive discussions and clear examples, written in plain language, expose students to the rules and methods of probability. Suitable for an introductory probability course, this volume requires abstract and conceptual thinking skills and a background in calculus.Topics include classical probability, set theory, axioms, probability functions, random and independent random variables, expected values, and covariance and correlations. Additional subjects include stochastic process

  3. Geomagnetic polarity transitions

    Science.gov (United States)

    Merrill, Ronald T.; McFadden, Phillip L.

    1999-05-01

    …reasonable to draw the following conclusions with varying degrees of confidence. There appears to be a substantial decrease in the mean intensity of the dipole field during a transition, to ~25% of its usual value. The duration of an average geomagnetic polarity transition is not well known but probably lies between 1000 and 8000 years. Values outside these bounds have been reported, but we give reasons as to why such outliers are likely to be artifacts. The reversal process is probably longer than the manifestation of the reversal at Earth's surface as recorded in paleomagnetic directional data. Convection hiatus during a geomagnetic polarity transition seems unlikely, and free-decay models for reversals appear to be generally incompatible with the data. This implies that certain theorems in dynamo theory, such as Cowling's theorem, should not be invoked to explain the origin of reversals. Unfortunately, the detailed description of directional changes during transitions remains controversial. Contrary to common belief, certain low-degree nondipole fields can produce significant longitudinal confinement of virtual geomagnetic poles (VGP) during a transition. The data are currently inadequate to refute or verify claims of longitudinal dipole confinement, VGP clustering, or other systematics during polarity transitions.

  4. Controlled quantum evolutions and transitions

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Nicola Cufaro [INFN Sezione di Bari, INFM Unità di Bari and Dipartimento Interateneo di Fisica dell'Università e del Politecnico di Bari, Bari (Italy)]; De Martino, Salvatore; De Siena, Silvio; Illuminati, Fabrizio [INFM Unità di Salerno, INFN Sezione di Napoli - Gruppo collegato di Salerno and Dipartimento di Fisica dell'Università di Salerno, Baronissi, Salerno (Italy)]

    1999-10-29

    We study the nonstationary solutions of Fokker-Planck equations associated to either stationary or nonstationary quantum states. In particular, we discuss the stationary states of quantum systems with singular velocity fields. We introduce a technique that allows arbitrary evolutions ruled by these equations to account for controlled quantum transitions. As a first significant application we present a detailed treatment of the transition probabilities and of the controlling time-dependent potentials associated with the transitions between the stationary, the coherent, and the squeezed states of the harmonic oscillator. (author)

  5. Analytic Neutrino Oscillation Probabilities in Matter: Revisited

    Energy Technology Data Exchange (ETDEWEB)

    Parke, Stephen J. [Fermilab; Denton, Peter B. [Copenhagen U.; Minakata, Hisakazu [Madrid, IFT

    2018-01-02

    We summarize our recent paper on neutrino oscillation probabilities in matter, explaining the importance, relevance and need for simple, highly accurate approximations to the neutrino oscillation probabilities in matter.

  6. The Probability Distribution for a Biased Spinner

    Science.gov (United States)

    Foster, Colin

    2012-01-01

    This article advocates biased spinners as an engaging context for statistics students. Calculating the probability of a biased spinner landing on a particular side makes valuable connections between probability and other areas of mathematics. (Contains 2 figures and 1 table.)

  7. Pre-Aggregation with Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    2006-01-01

    Motivated by the increasing need to analyze complex, uncertain multidimensional data this paper proposes probabilistic OLAP queries that are computed using probability distributions rather than atomic values. The paper describes how to create probability distributions from base data, and how...

  8. Probability output modeling for support vector machines

    Science.gov (United States)

    Zhang, Xiang; Xiao, Xiaoling; Tian, Jinwen; Liu, Jian

    2007-11-01

    In this paper we propose an approach to model the posterior probability output of multi-class SVMs. The sigmoid function is used to estimate the posterior probability output in binary classification. The posterior probability outputs of multi-class SVMs are then obtained by directly solving the equations that arise from combining the probability outputs of the binary classifiers using Bayes's rule. The differences among the two-class SVM classifiers, and correspondingly different weights based on their posterior probabilities, are taken into account when combining the probability outputs of these two-class SVM classifiers. The comparative experiment results show that our method achieves better classification precision and a better distribution of the posterior probability than the pairwise coupling method and Hastie's optimization method.
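
    A sketch of the binary building block only (the sigmoid, i.e., Platt-style, fit on SVM decision values), assuming scikit-learn; the paper's multi-class combination via Bayes's rule is a separate step not reproduced here.

        from sklearn.calibration import CalibratedClassifierCV
        from sklearn.datasets import make_classification
        from sklearn.svm import LinearSVC

        X, y = make_classification(n_samples=500, random_state=0)
        # Sigmoid calibration maps SVM decision values to posterior probabilities.
        clf = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
        clf.fit(X, y)
        print(clf.predict_proba(X[:3]))   # posterior probability estimates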

  9. Conditional Probability Modulates Visual Search Efficiency

    Directory of Open Access Journals (Sweden)

    Bryan eCort

    2013-10-01

    We investigated the effects of probability on visual search. Previous work has shown that people can utilize spatial and sequential probability information to improve target detection. We hypothesized that performance improvements from probability information would extend to the efficiency of visual search. Our task was a simple visual search in which the target was always present among a field of distractors, and could take one of two colors. The absolute probability of the target being either color was 0.5; however, the conditional probability – the likelihood of a particular color given a particular combination of two cues – varied from 0.1 to 0.9. We found that participants searched more efficiently for high conditional probability targets and less efficiently for low conditional probability targets, but only when they were explicitly informed of the probability relationship between cues and target color.

  10. An approximation method for solving the steady-state probability distribution of probabilistic Boolean networks.

    Science.gov (United States)

    Ching, Wai-Ki; Zhang, Shuqin; Ng, Michael K; Akutsu, Tatsuya

    2007-06-15

    Probabilistic Boolean networks (PBNs) have been proposed to model genetic regulatory interactions. The steady-state probability distribution of a PBN gives important information about the captured genetic network. The computation of the steady-state probability distribution usually includes construction of the transition probability matrix and computation of the steady-state probability distribution. The size of the transition probability matrix is 2^n-by-2^n, where n is the number of genes in the genetic network. Therefore, the computational costs of these two steps are very expensive and it is essential to develop a fast approximation method. In this article, we propose an approximation method for computing the steady-state probability distribution of a PBN based on neglecting some Boolean networks (BNs) with very small probabilities during the construction of the transition probability matrix. An error analysis of this approximation method is given and a theoretical result on the distribution of BNs in a PBN with at most two Boolean functions for one gene is also presented. These give a foundation and support for the approximation method. Numerical experiments based on a genetic network are given to demonstrate the efficiency of the proposed method.
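
    A toy version of the steady-state step, assuming NumPy and a made-up 2-gene (4-state) transition matrix in place of the full 2^n-by-2^n one: repeated multiplication by the row-stochastic matrix converges to the stationary distribution.

        import numpy as np

        T = np.array([[0.5, 0.2, 0.2, 0.1],   # row-stochastic transition matrix
                      [0.1, 0.6, 0.1, 0.2],
                      [0.3, 0.1, 0.4, 0.2],
                      [0.2, 0.2, 0.2, 0.4]])
        pi = np.full(4, 0.25)                 # start from the uniform distribution
        for _ in range(200):                  # power iteration
            pi = pi @ T
        print(np.round(pi, 4), np.allclose(pi, pi @ T))   # stationary: pi = pi T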

  11. Observational biases for transiting planets

    Science.gov (United States)

    Kipping, David M.; Sandford, Emily

    2016-12-01

    Observational biases distort our view of nature, such that the patterns we see within a surveyed population of interest are often unrepresentative of the truth we seek. Transiting planets currently represent the most informative data set on the ensemble properties of exoplanets within 1 au of their star. However, the transit method is inherently biased due to both geometric and detection-driven effects. In this work, we derive the overall observational biases affecting the most basic transit parameters from first principles. By assuming a trapezoidal transit and using conditional probability, we infer the expected distribution of these terms both as a joint distribution and in a marginalized form. These general analytic results provide a baseline against which to compare trends predicted by mission-tailored injection/recovery simulations and offer a simple way to correct for observational bias. Our results explain why the observed population of transiting planets displays a non-uniform impact parameter distribution, with a bias towards near-equatorial geometries. We also find that the geometric bias towards observed planets transiting near periastron is attenuated by the longer durations which occur near apoastron. Finally, we predict that the observational bias with respect to ratio-of-radii is super-quadratic, scaling as (RP/R⋆)^(5/2), driven by an enhanced geometric transit probability and modestly longer durations.
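
    A Monte Carlo sketch of the duration-driven part of the impact parameter bias, assuming NumPy; among transiting circular orbits b is uniform, but weighting detection by the relative duration sqrt(1 - b^2) tilts the observed sample toward near-equatorial (low-b) transits. The detection model is a deliberately crude toy.

        import numpy as np

        rng = np.random.default_rng(4)
        b = rng.random(100_000)                 # b ~ Uniform(0,1) for transiting orbits
        duration = np.sqrt(1.0 - b**2)          # relative transit duration
        keep = rng.random(b.size) < duration    # toy: detection chance ~ duration
        print(f"mean b, geometric sample:  {b.mean():.3f}")        # ~0.50
        print(f"mean b, 'detected' sample: {b[keep].mean():.3f}")  # ~0.42 < 0.50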

  12. Scoring Rules for Subjective Probability Distributions

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    …report the true subjective probability of a binary event, even under Subjective Expected Utility. To address this one can “calibrate” inferences about true subjective probabilities from elicited subjective probabilities over binary events, recognizing the incentives that risk-averse agents have...
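
    A sketch of the kind of scoring rule at issue: the quadratic (Brier) rule pays 1 - (report - outcome)^2, so a risk-neutral agent maximizes expected score by reporting the true subjective probability; the paper's point is that risk aversion distorts this, which the toy below does not model. Stakes are illustrative.

        def quadratic_score(report: float, outcome: int) -> float:
            return 1.0 - (report - outcome) ** 2

        def expected_score(report: float, true_p: float) -> float:
            return (true_p * quadratic_score(report, 1)
                    + (1 - true_p) * quadratic_score(report, 0))

        true_p = 0.7
        reports = [0.5, 0.6, 0.7, 0.8]
        # Expected score is maximized by truthful reporting (at 0.7).
        print([round(expected_score(r, true_p), 3) for r in reports])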

  13. Probability of flooding: An uncertainty analysis

    NARCIS (Netherlands)

    Slijkhuis, K.A.H.; Frijters, M.P.C.; Cooke, R.M.; Vrouwenvelder, A.C.W.M.

    1998-01-01

    In the Netherlands a new safety approach concerning the flood defences will probably be implemented in the near future. Therefore, an uncertainty analysis is currently being carried out to determine the uncertainty in the probability of flooding. The uncertainty of the probability of flooding could

  14. Lévy processes in free probability

    OpenAIRE

    Barndorff-Nielsen, Ole E.; Thorbjørnsen, Steen

    2002-01-01

    This is the continuation of a previous article that studied the relationship between the classes of infinitely divisible probability measures in classical and free probability, respectively, via the Bercovici–Pata bijection. Drawing on the results of the preceding article, the present paper outlines recent developments in the theory of Lévy processes in free probability.

  15. The trajectory of the target probability effect.

    Science.gov (United States)

    Hon, Nicholas; Yap, Melvin J; Jabar, Syaheed B

    2013-05-01

    The effect of target probability on detection times is well-established: Even when detection accuracy is high, lower probability targets are detected more slowly than higher probability ones. Although this target probability effect on detection times has been well-studied, one aspect of it has remained largely unexamined: How the effect develops over the span of an experiment. Here, we investigated this issue with two detection experiments that assessed different target probability ratios. Conventional block segment analysis and linear mixed-effects modeling converged on two key findings. First, we found that the magnitude of the target probability effect increases as one progresses through a block of trials. Second, we found, by examining the trajectories of the low- and high-probability targets, that this increase in effect magnitude was driven by the low-probability targets. Specifically, we found that low-probability targets were detected more slowly as a block of trials progressed. Performance to high-probability targets, on the other hand, was largely invariant across the block. The latter finding is of particular interest because it cannot be reconciled with accounts that propose that the target probability effect is driven by the high-probability targets.

  16. 47 CFR 1.1623 - Probability calculation.

    Science.gov (United States)

    2010-10-01

    Title 47 Telecommunication, Mass Media Services General Procedures, § 1.1623 Probability calculation. (a) All calculations shall be computed to no less than three significant digits. Probabilities will be truncated to the number of...

  17. Using Playing Cards to Differentiate Probability Interpretations

    Science.gov (United States)

    López Puga, Jorge

    2014-01-01

    The aprioristic (classical, naïve and symmetric) and frequentist interpretations of probability are commonly known. Bayesian or subjective interpretation of probability is receiving increasing attention. This paper describes an activity to help students differentiate between the three types of probability interpretations.

  18. Pre-Service Teachers' Conceptions of Probability

    Science.gov (United States)

    Odafe, Victor U.

    2011-01-01

    Probability knowledge and skills are needed in science and in making daily decisions that are sometimes made under uncertain conditions. Hence, there is the need to ensure that the pre-service teachers of our children are well prepared to teach probability. Pre-service teachers' conceptions of probability are identified, and ways of helping them…

  19. Transitional Care

    Science.gov (United States)

    Naylor, Mary; Keating, Stacen A.

    2008-01-01

    Transitional care encompasses a broad range of services and environments designed to promote the safe and timely passage of patients between levels of health care and across care settings. High-quality transitional care is especially important for older adults with multiple chronic conditions and complex therapeutic regimens, as well as for their…

  20. An Objective Theory of Probability (Routledge Revivals)

    CERN Document Server

    Gillies, Donald

    2012-01-01

    This reissue of D. A. Gillies' highly influential work, first published in 1973, is a philosophical theory of probability which seeks to develop von Mises' views on the subject. In agreement with von Mises, the author regards probability theory as a mathematical science like mechanics or electrodynamics, and probability as an objective, measurable concept like force, mass or charge. On the other hand, Dr Gillies rejects von Mises' definition of probability in terms of limiting frequency and claims that probability should be taken as a primitive or undefined term in accordance with modern axioma

  1. Paraconsistent Probabilities: Consistency, Contradictions and Bayes’ Theorem

    Directory of Open Access Journals (Sweden)

    Juliana Bueno-Soler

    2016-09-01

    Full Text Available This paper represents the first steps towards constructing a paraconsistent theory of probability based on the Logics of Formal Inconsistency (LFIs). We show that LFIs encode very naturally an extension of the notion of probability able to express sophisticated probabilistic reasoning under contradictions employing appropriate notions of conditional probability and paraconsistent updating, via a version of Bayes’ theorem for conditionalization. We argue that the dissimilarity between the notions of inconsistency and contradiction, one of the pillars of LFIs, plays a central role in our extended notion of probability. Some critical historical and conceptual points about probability theory are also reviewed.

  2. Adolescents' misinterpretation of health risk probability expressions.

    Science.gov (United States)

    Cohn, L D; Schydlower, M; Foley, J; Copeland, R L

    1995-05-01

    Objective: To determine if differences exist between adolescents and physicians in their numerical translation of 13 commonly used probability expressions (eg, possibly, might). Design: Cross-sectional. Setting: Adolescent medicine and pediatric orthopedic outpatient units. Participants: 150 adolescents and 51 pediatricians, pediatric orthopedic surgeons, and nurses. Main outcome measures: Numerical ratings of the degree of certainty implied by 13 probability expressions (eg, possibly, probably). Results: Adolescents were significantly more likely than physicians to display comprehension errors, reversing or equating the meaning of terms such as probably/possibly and likely/possibly. Numerical expressions of uncertainty (eg, 30% chance) elicited less variability in ratings than lexical expressions of uncertainty (eg, possibly). Conclusions: Physicians should avoid using probability expressions such as probably, possibly, and likely when communicating health risks to children and adolescents. Numerical expressions of uncertainty may be more effective for conveying the likelihood of an illness than lexical expressions of uncertainty (eg, probably).

  3. A practical overview on probability distributions.

    Science.gov (United States)

    Viti, Andrea; Terzi, Alberto; Bertolaccini, Luca

    2015-03-01

    The aim of this paper is to give a general definition of probability, of its main mathematical features, and of the features it presents under particular circumstances. The behavior of probability is linked to the features of the phenomenon we would like to predict; this link is called a probability distribution. Given the characteristics of the phenomena (which we can also call variables), specific probability distributions are defined. For categorical (or discrete) variables, the probability can be described by a binomial or Poisson distribution in the majority of cases. For continuous variables, the probability can be described by the most important distribution in statistics, the normal distribution. Probability distributions are briefly described, together with some examples of their possible applications.

  4. Integrated statistical modelling of spatial landslide probability

    Science.gov (United States)

    Mergili, M.; Chu, H.-J.

    2015-09-01

    Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e. the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km2 study area in southern Taiwan, using an inventory of 1399 landslides triggered by the typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
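
    Step (v) of this procedure reduces to a one-line combination rule per pixel. A minimal Python sketch (the toy raster values are invented; only the max/product rule comes from the abstract):

        import numpy as np

        def integrated_landslide_probability(p_release, p_impact, p_zonal):
            # Step (v): maximum of the release probability and the product
            # of the impact probability and the zonal release probability.
            return np.maximum(p_release, p_impact * p_zonal)

        p_rel = np.array([[0.02, 0.10], [0.30, 0.05]])  # pixel release prob.
        p_imp = np.array([[0.80, 0.40], [0.10, 0.90]])  # pixel impact prob.
        p_zon = 0.25                                    # zonal release prob.
        print(integrated_landslide_probability(p_rel, p_imp, p_zon))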

  5. Approximating the Probability of Mortality Due to Protracted Radiation Exposures

    Science.gov (United States)

    2016-06-01

    Radiological weapons (“dirty bombs”) will in most cases disperse radionuclides whose half-life is long enough that... Under the current Nuclear Survivability and Forensics contract, HDTRA1-14-D-0003; 0005, Dr. Paul Blake of DTRA/NTPR has supported the transition of... present approximate methods for estimating the probability of mortality due to radiological environments from nuclear weapon detonations or from a

  6. Continuation of probability density functions using a generalized Lyapunov approach

    Energy Technology Data Exchange (ETDEWEB)

    Baars, S., E-mail: s.baars@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Viebahn, J.P., E-mail: viebahn@cwi.nl [Centrum Wiskunde & Informatica (CWI), P.O. Box 94079, 1090 GB, Amsterdam (Netherlands); Mulder, T.E., E-mail: t.e.mulder@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); Kuehn, C., E-mail: ckuehn@ma.tum.de [Technical University of Munich, Faculty of Mathematics, Boltzmannstr. 3, 85748 Garching bei München (Germany); Wubs, F.W., E-mail: f.w.wubs@rug.nl [Johann Bernoulli Institute for Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen (Netherlands); Dijkstra, H.A., E-mail: h.a.dijkstra@uu.nl [Institute for Marine and Atmospheric research Utrecht, Department of Physics and Astronomy, Utrecht University, Princetonplein 5, 3584 CC Utrecht (Netherlands); School of Chemical and Biomolecular Engineering, Cornell University, Ithaca, NY (United States)

    2017-05-01

    Techniques from numerical bifurcation theory are very useful for studying transitions between steady fluid flow patterns and the instabilities involved. Here, we provide computational methodology to use parameter continuation in determining probability density functions of systems of stochastic partial differential equations near fixed points, under a small noise approximation. The key innovation is the efficient solution of a generalized Lyapunov equation using an iterative method involving low-rank approximations. We apply and illustrate the capabilities of the method using a problem in physical oceanography, i.e., the occurrence of multiple steady states of the Atlantic Ocean circulation.
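
    For orientation, the dense small-scale analogue of this computation fits in a few lines: near a stable fixed point with Jacobian A and noise matrix B, the small-noise stationary covariance C solves the Lyapunov equation AC + CA^T + BB^T = 0, and the approximate density is Gaussian with covariance C. The Python sketch below uses an invented 2x2 system and a dense solver; the paper's contribution is precisely to replace this step by low-rank iterations for a generalized Lyapunov equation.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        A = np.array([[-1.0, 0.5],      # stable Jacobian at the fixed point
                      [0.0, -2.0]])     # (toy values, assumed)
        B = np.array([[0.1], [0.2]])    # noise forcing matrix (assumed)

        # Solve A C + C A^T = -B B^T; the approximate stationary PDF near
        # the fixed point is Gaussian with covariance C.
        C = solve_continuous_lyapunov(A, -B @ B.T)
        print(C)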

  7. Working directly with probabilities in quantum field theory

    Science.gov (United States)

    Dickinson, R.; Forshaw, J.; Millington, P.

    2017-08-01

    We present a novel approach to computing transition probabilities in quantum field theory, which allows them to be written directly in terms of expectation values of nested commutators and anti-commutators of field operators, rather than squared matrix elements. We show that this leads to a diagrammatic expansion in which the retarded propagator plays a dominant role. As a result, one is able to see clearly how faster-than-light signalling is prevented between sources and detectors. Finally, we comment on potential implications of this approach for dealing with infra-red divergences.

  8. Transitional determinacies.

    Science.gov (United States)

    Luelsdorff, P A

    1992-01-01

    In classic generative grammar a distinction is drawn between linguistic 'competence' and linguistic 'performance', the former referring to linguistic knowledge, the latter to how linguistic knowledge is used. However, this controversial differentiation obscures the additional dichotomy between linguistic knowledge for production and linguistic knowledge for recognition. In this article it is shown that production and recognition differ, that recognition is not simply the inverse of production, and that the derivation of production from recognition and recognition from production require a small set of generalizable 'transitional determinacies'. Secondly, it is shown that transitional determinacies explain the difference between 'overt' and 'covert' recognition recently observed in prosopagnosics, patients unable to recognize familiar faces. Prosopagnosics and normals are found to differ in their transitional determinacies, such that prosopagnosics require more binders (precisors) for covert recognition than normals. In general, it is concluded that transitional determinacies are as necessary to the theory of grammar as determinacies themselves.

  9. Probability concepts in quality risk management.

    Science.gov (United States)

    Claycamp, H Gregg

    2012-01-01

    Essentially any concept of risk is built on fundamental concepts of chance, likelihood, or probability. Although risk is generally a probability of loss of something of value, given that a risk-generating event will occur or has occurred, it is ironic that the quality risk management literature and guidelines on quality risk management tools are relatively silent on the meaning and uses of "probability." The probability concept is typically applied by risk managers as a combination of frequency-based calculation and a "degree of belief" meaning of probability. Probability as a concept that is crucial for understanding and managing risk is discussed through examples, from the most general scenario-defining and ranking tools that use probability implicitly to more specific probabilistic tools in risk management. A rich history of probability in risk management applied to other fields suggests that high-quality risk management decisions benefit from the implementation of more thoughtful probability concepts in both risk modeling and risk management. Pharmaceutical manufacturers are expanding their use of quality risk management to identify and manage risks to the patient that might occur in phases of the pharmaceutical life cycle from drug development to manufacture, marketing to product discontinuation.

  10. Experience Matters: Information Acquisition Optimizes Probability Gain

    Science.gov (United States)

    Nelson, Jonathan D.; McKenzie, Craig R.M.; Cottrell, Garrison W.; Sejnowski, Terrence J.

    2010-01-01

    Deciding which piece of information to acquire or attend to is fundamental to perception, categorization, medical diagnosis, and scientific inference. Four statistical theories of the value of information—information gain, Kullback-Leibler distance, probability gain (error minimization), and impact—are equally consistent with extant data on human information acquisition. Three experiments, designed via computer optimization to be maximally informative, tested which of these theories best describes human information search. Experiment 1, which used natural sampling and experience-based learning to convey environmental probabilities, found that probability gain explained subjects’ information search better than the other statistical theories or the probability-of-certainty heuristic. Experiments 1 and 2 found that subjects behaved differently when the standard method of verbally presented summary statistics (rather than experience-based learning) was used to convey environmental probabilities. Experiment 3 found that subjects’ preference for probability gain is robust, suggesting that the other models contribute little to subjects’ search behavior. PMID:20525915

  11. Experience matters: information acquisition optimizes probability gain.

    Science.gov (United States)

    Nelson, Jonathan D; McKenzie, Craig R M; Cottrell, Garrison W; Sejnowski, Terrence J

    2010-07-01

    Deciding which piece of information to acquire or attend to is fundamental to perception, categorization, medical diagnosis, and scientific inference. Four statistical theories of the value of information—information gain, Kullback-Leibler distance, probability gain (error minimization), and impact—are equally consistent with extant data on human information acquisition. Three experiments, designed via computer optimization to be maximally informative, tested which of these theories best describes human information search. Experiment 1, which used natural sampling and experience-based learning to convey environmental probabilities, found that probability gain explained subjects' information search better than the other statistical theories or the probability-of-certainty heuristic. Experiments 1 and 2 found that subjects behaved differently when the standard method of verbally presented summary statistics (rather than experience-based learning) was used to convey environmental probabilities. Experiment 3 found that subjects' preference for probability gain is robust, suggesting that the other models contribute little to subjects' search behavior.

  12. UT Biomedical Informatics Lab (BMIL) Probability Wheel.

    Science.gov (United States)

    Huang, Sheng-Cheng; Lee, Sara; Wang, Allen; Cantor, Scott B; Sun, Clement; Fan, Kaili; Reece, Gregory P; Kim, Min Soon; Markey, Mia K

    2016-01-01

    A probability wheel app is intended to facilitate communication between two people, an "investigator" and a "participant," about uncertainties inherent in decision-making. Traditionally, a probability wheel is a mechanical prop with two colored slices. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile. We provide a motivating example from medical decision-making, but the tool is widely applicable for researchers in the decision sciences.

  13. Fundamentals of applied probability and random processes

    CERN Document Server

    Ibe, Oliver

    2005-01-01

    This book is based on the premise that engineers use probability as a modeling tool, and that probability can be applied to the solution of engineering problems. Engineers and students studying probability and random processes also need to analyze data, and thus need some knowledge of statistics. This book is designed to provide students with a thorough grounding in probability and stochastic processes, demonstrate their applicability to real-world problems, and introduce the basics of statistics. The book's clear writing style and homework problems make it ideal for the classroom or for self-study. * Good and solid introduction to probability theory and stochastic processes * Logically organized; writing is presented in a clear manner * Choice of topics is comprehensive within the area of probability * Ample homework problems are organized into chapter sections

  14. A probability space for quantum models

    Science.gov (United States)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows one to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by choosing the propositions, choosing the constraints, and making the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
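
    A minimal numerical illustration of the maximum entropy assignment (the outcome values and the target mean below are invented): with a single average-value constraint, the entropy-maximizing probabilities take the Gibbs form p_i ∝ exp(-λ E_i), and λ is fixed by the constraint.

        import numpy as np
        from scipy.optimize import brentq

        E = np.array([0.0, 1.0, 2.0, 3.0])   # outcome "energies" (assumed)
        target_mean = 1.2                    # constraint on the average

        def excess_mean(lam):
            w = np.exp(-lam * E)
            p = w / w.sum()
            return p @ E - target_mean

        lam = brentq(excess_mean, -50.0, 50.0)   # root gives the multiplier
        p = np.exp(-lam * E)
        p /= p.sum()
        print(lam, p)   # maximum-entropy distribution meeting the constraint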

  15. Fracture probability along a fatigue crack path

    Energy Technology Data Exchange (ETDEWEB)

    Makris, P. [Technical Univ., Athens (Greece)

    1995-03-01

    Long experience has shown that the strength of materials under fatigue load has a stochastic behavior, which can be expressed through the fracture probability. This paper deals with a new analytically derived law for the distribution of the fracture probability along a fatigue crack path. Knowledge of the distribution of the fatigue fracture probability along the crack path helps connect stress conditions to the expected fatigue life of a structure under stochastically varying loads. (orig.)

  16. Probability and statistics: models for research

    National Research Council Canada - National Science Library

    Bailey, Daniel Edgar

    1971-01-01

    This book is an interpretative presentation of the mathematical and logical basis of probability and statistics, indulging in some mathematics, but concentrating on the logical and scientific meaning...

  17. Advantages of the probability amplitude over the probability density in quantum mechanics

    OpenAIRE

    Kurihara, Yoshimasa; Quach, Nhi My Uyen

    2013-01-01

    We discuss reasons why a probability amplitude, which becomes a probability density after squaring, is considered as one of the most basic ingredients of quantum mechanics. First, the Heisenberg/Schrodinger equation, an equation of motion in quantum mechanics, describes a time evolution of the probability amplitude rather than of a probability density. There may be reasons why dynamics of a physical system are described by amplitude. In order to investigate one role of the probability amplitu...

  18. Method of self-consistent evaluation of absolute emission probabilities of particles and gamma rays

    Science.gov (United States)

    Badikov, Sergei; Chechev, Valery

    2017-09-01

    Under the assumption of a well-established decay scheme, the method provides (a) exact balance relationships, (b) lower uncertainties of the recommended absolute emission probabilities of particles and gamma rays (compared to the traditional techniques), and (c) evaluation of correlations between the recommended emission probabilities (for the same and different decay modes). Application of the method to the decay data evaluation for even curium isotopes led to paradoxical results. The multidimensional confidence regions for the probabilities of the most intensive alpha transitions constructed on the basis of the present and the ENDF/B-VII.1, JEFF-3.1, and DDEP evaluations are inconsistent, whereas the confidence intervals for the evaluated probabilities of single transitions agree with each other.

  19. Method of self-consistent evaluation of absolute emission probabilities of particles and gamma rays

    Directory of Open Access Journals (Sweden)

    Badikov Sergei

    2017-01-01

    Full Text Available Under the assumption of a well-established decay scheme, the method provides (a) exact balance relationships, (b) lower uncertainties of the recommended absolute emission probabilities of particles and gamma rays (compared to the traditional techniques), and (c) evaluation of correlations between the recommended emission probabilities (for the same and different decay modes). Application of the method to the decay data evaluation for even curium isotopes led to paradoxical results. The multidimensional confidence regions for the probabilities of the most intensive alpha transitions constructed on the basis of the present and the ENDF/B-VII.1, JEFF-3.1, and DDEP evaluations are inconsistent, whereas the confidence intervals for the evaluated probabilities of single transitions agree with each other.

  20. Probability of Grounding and Collision Events

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    1996-01-01

    To quantify the risks involved in ship traffic, rational criteria for collision and grounding accidents have to be developed. This implies that probabilities as well as inherent consequences have to be analyzed and assessed. The present notes outline a method for evaluation of the probability...

  1. Probability of Grounding and Collision Events

    DEFF Research Database (Denmark)

    Pedersen, Preben Terndrup

    1996-01-01

    To quantify the risks involved in ship traffic, rational criteria for collision and grounding accidents are developed. This implies that probabilities as well as inherent consequences can be analysed and assessed. The present paper outlines a method for evaluation of the probability of ship...

  2. Simulations of Probabilities for Quantum Computing

    Science.gov (United States)

    Zak, M.

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular, a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed.

  3. Selected papers on probability and statistics

    CERN Document Server

    2009-01-01

    This volume contains translations of papers that originally appeared in the Japanese journal Sūgaku. The papers range over a variety of topics in probability theory, statistics, and applications. This volume is suitable for graduate students and research mathematicians interested in probability and statistics.

  4. Analytical Study of Thermonuclear Reaction Probability Integrals

    OpenAIRE

    Chaudhry, M.A.; Haubold, H. J.; Mathai, A. M.

    2000-01-01

    An analytic study of the reaction probability integrals corresponding to the various forms of the slowly varying cross-section factor $S(E)$ is attempted. Exact expressions for the reaction probability integrals are given in terms of extended gamma functions.

  5. Examples of Neutrosophic Probability in Physics

    Directory of Open Access Journals (Sweden)

    Fu Yuhua

    2015-01-01

    Full Text Available This paper re-discusses the problems of the so-called “law of nonconservation of parity” and “accelerating expansion of the universe”, and presents the examples of determining Neutrosophic Probability of the experiment of Chien-Shiung Wu et al in 1957, and determining Neutrosophic Probability of accelerating expansion of the partial universe.

  6. Probability of Failure in Random Vibration

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard

    1988-01-01

    Close approximations to the first-passage probability of failure in random vibration can be obtained by integral equation methods. A simple relation exists between the first-passage probability density function and the distribution function for the time interval spent below a barrier before out...

  7. Probability numeracy and health insurance purchase

    NARCIS (Netherlands)

    Dillingh, Rik; Kooreman, Peter; Potters, Jan

    2016-01-01

    This paper provides new field evidence on the role of probability numeracy in health insurance purchase. Our regression results, based on rich survey panel data, indicate that the expenditure on two out of three measures of health insurance first rises with probability numeracy and then falls again.

  8. Teaching Probability: A Socio-Constructivist Perspective

    Science.gov (United States)

    Sharma, Sashi

    2015-01-01

    There is a considerable and rich literature on students' misconceptions in probability. However, less attention has been paid to the development of students' probabilistic thinking in the classroom. This paper offers a teaching sequence for probability, grounded in a socio-constructivist perspective.

  9. Probability: A Matter of Life and Death

    Science.gov (United States)

    Hassani, Mehdi; Kippen, Rebecca; Mills, Terence

    2016-01-01

    Life tables are mathematical tables that document probabilities of dying and life expectancies at different ages in a society. Thus, the life table contains some essential features of the health of a population. Probability is often regarded as a difficult branch of mathematics. Life tables provide an interesting approach to introducing concepts…

  10. Stimulus Probability Effects in Absolute Identification

    Science.gov (United States)

    Kent, Christopher; Lamberts, Koen

    2016-01-01

    This study investigated the effect of stimulus presentation probability on accuracy and response times in an absolute identification task. Three schedules of presentation were used to investigate the interaction between presentation probability and stimulus position within the set. Data from individual participants indicated strong effects of…

  11. prep misestimates the probability of replication

    NARCIS (Netherlands)

    Iverson, G.; Lee, M.D.; Wagenmakers, E.-J.

    2009-01-01

    The probability of "replication," prep, has been proposed as a means of identifying replicable and reliable effects in the psychological sciences. We conduct a basic test of prep that reveals that it misestimates the true probability of replication, especially for small effects. We show how these

  12. An introduction to probability and stochastic processes

    CERN Document Server

    Melsa, James L

    2013-01-01

    Geared toward college seniors and first-year graduate students, this text is designed for a one-semester course in probability and stochastic processes. Topics covered in detail include probability theory, random variables and their functions, stochastic processes, linear system response to stochastic processes, Gaussian and Markov processes, and stochastic differential equations. 1973 edition.

  13. Recent Developments in Applied Probability and Statistics

    CERN Document Server

    Devroye, Luc; Kohler, Michael; Korn, Ralf

    2010-01-01

    This book presents surveys on recent developments in applied probability and statistics. The contributions include topics such as nonparametric regression and density estimation, option pricing, probabilistic methods for multivariate interpolation, robust graphical modelling and stochastic differential equations. Due to its broad coverage of different topics the book offers an excellent overview of recent developments in applied probability and statistics.

  14. Exact probability distribution for the Bernoulli-Malthus-Verhulst model driven by a multiplicative colored noise

    Science.gov (United States)

    Calisto, H.; Bologna, M.

    2007-05-01

    We report an exact result for the calculation of the probability distribution of the Bernoulli-Malthus-Verhulst model driven by a multiplicative colored noise. We study the conditions under which the probability distribution of the Malthus-Verhulst model can exhibit a transition from a unimodal to a bimodal distribution, depending on the value of a critical parameter. We also show that the mean value of x(t) in the latter model always asymptotically approaches the value 1.

  15. Exact numerical calculation of fixation probability and time on graphs.

    Science.gov (United States)

    Hindersin, Laura; Möller, Marius; Traulsen, Arne; Bauer, Benedikt

    2016-12-01

    The Moran process on graphs is a popular model to study the dynamics of evolution in a spatially structured population. Exact analytical solutions for the fixation probability and time of a new mutant have been found for only a few classes of graphs so far. Simulations are computationally expensive and many realizations are necessary, as the variance of the fixation times is high. We present an algorithm that numerically computes these quantities for arbitrary small graphs by an approach based on the transition matrix. The advantage over simulations is that the calculation has to be executed only once. Building the transition matrix is automated by our algorithm. This enables a fast and interactive study of different graph structures and their effect on fixation probability and time. We provide a fast implementation in C with this note (Hindersin et al., 2016). Our code is very flexible, as it can handle two different update mechanisms (Birth-death or death-Birth), as well as arbitrary directed or undirected graphs.
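
    The transition-matrix idea can be sketched compactly in Python (the authors provide a C implementation; this toy version assumes Birth-death updating and a complete graph as the test case). The point is that one linear solve replaces many stochastic simulations.

        import numpy as np

        def fixation_probability(adj, r=2.0):
            # Exact fixation probability of a single mutant of fitness r,
            # via the full 2^n state space and the transition matrix.
            n = len(adj)
            n_states = 2 ** n
            T = np.zeros((n_states, n_states))
            for s in range(n_states):
                k = bin(s).count("1")
                if k == 0 or k == n:
                    T[s, s] = 1.0            # absorbing states
                    continue
                fit = np.array([r if s >> i & 1 else 1.0 for i in range(n)])
                F = fit.sum()
                for i in range(n):
                    nbrs = [j for j in range(n) if adj[i][j]]
                    for j in nbrs:
                        # i reproduces (prob fit[i]/F); offspring replaces a
                        # uniformly chosen neighbour j, which copies i's type
                        t = s | (1 << j) if s >> i & 1 else s & ~(1 << j)
                        T[s, t] += fit[i] / F / len(nbrs)
            trans = [s for s in range(n_states) if 0 < bin(s).count("1") < n]
            Q = T[np.ix_(trans, trans)]
            b = T[np.ix_(trans, [n_states - 1])]   # one step into fixation
            x = np.linalg.solve(np.eye(len(trans)) - Q, b).ravel()
            phi = dict(zip(trans, x))
            return np.mean([phi[1 << i] for i in range(n)])

        K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
        print(fixation_probability(K4, r=2.0))   # Moran formula gives ~0.533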

  16. Optimizing Probability of Detection Point Estimate Demonstration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are intended to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper provides a discussion on optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
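
    The binomial arithmetic behind such a demonstration is easy to reproduce. In the sketch below (Python), the 29-flaw, zero-miss criterion is the commonly cited point-estimate case, and the example POD values are assumptions:

        from math import comb

        def prob_pass_demo(true_pod, n=29, max_misses=0):
            # Chance of passing an n-flaw demonstration that tolerates up to
            # max_misses missed detections, given the true POD of the procedure.
            return sum(comb(n, k) * (1 - true_pod) ** k * true_pod ** (n - k)
                       for k in range(max_misses + 1))

        print(prob_pass_demo(0.90))   # ~0.047: a 90%-POD procedure rarely passes
        print(prob_pass_demo(0.98))   # ~0.56: PPD for a very good procedure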

  17. Multinomial mixture model with heterogeneous classification probabilities

    Science.gov (United States)

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data.

  18. Echoes of inflationary first-order phase transitions in the CMB

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Hongliang, E-mail: hjiangag@connect.ust.hk [Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong Special Administrative Region (Hong Kong); Liu, Tao, E-mail: taoliu@ust.hk [Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong Special Administrative Region (Hong Kong); Sun, Sichun, E-mail: sichun@uw.edu [Jockey Club Institute for Advanced Study, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong Special Administrative Region (Hong Kong); Wang, Yi, E-mail: phyw@ust.hk [Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong Special Administrative Region (Hong Kong)

    2017-02-10

    Cosmological phase transitions (CPTs), such as the Grand Unified Theory (GUT) and the electroweak (EW) ones, play a significant role in both particle physics and cosmology. In this letter, we propose to probe first-order CPTs by detecting gravitational waves (GWs) which are generated during the phase transitions, through the cosmic microwave background (CMB). If they happened around the inflation era, the first-order CPTs may yield low-frequency GWs due to bubble dynamics, leaving imprints on the CMB. In contrast to the nearly scale-invariant primordial GWs caused by vacuum fluctuation, these bubble-generated GWs are scale dependent and have non-trivial B-mode spectra. If decoupled from the inflaton, the EWPT during inflation may serve as a probe for the one after reheating, where the baryon asymmetry could be generated via EW baryogenesis (EWBG). The CMB thus provides a potential way to test the feasibility of the EWBG, complementary to the collider measurements of the Higgs potential and the direct detection of GWs generated during the EWPT.

  19. Echoes of inflationary first-order phase transitions in the CMB

    Directory of Open Access Journals (Sweden)

    Hongliang Jiang

    2017-02-01

    Full Text Available Cosmological phase transitions (CPTs), such as the Grand Unified Theory (GUT) and the electroweak (EW) ones, play a significant role in both particle physics and cosmology. In this letter, we propose to probe first-order CPTs by detecting gravitational waves (GWs) which are generated during the phase transitions, through the cosmic microwave background (CMB). If they happened around the inflation era, the first-order CPTs may yield low-frequency GWs due to bubble dynamics, leaving imprints on the CMB. In contrast to the nearly scale-invariant primordial GWs caused by vacuum fluctuation, these bubble-generated GWs are scale dependent and have non-trivial B-mode spectra. If decoupled from the inflaton, the EWPT during inflation may serve as a probe for the one after reheating, where the baryon asymmetry could be generated via EW baryogenesis (EWBG). The CMB thus provides a potential way to test the feasibility of the EWBG, complementary to the collider measurements of the Higgs potential and the direct detection of GWs generated during the EWPT.

  20. Upgrading Probability via Fractions of Events

    Directory of Open Access Journals (Sweden)

    Frič Roman

    2016-08-01

    Full Text Available The influence of “Grundbegriffe” by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory “calling for” an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) basic maps (probability measures and observables, the dual maps to random variables) have very different “mathematical nature”. Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events and elementary category theory, covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing “fractions” of classical random events, and we upgrade the notions of probability measure and random variable.
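
    A toy rendering of the upgrade (Python; the measure and event values are invented): events become [0, 1]-valued functions on the outcome set, probability is their expectation, and the Boolean connectives are replaced by Łukasiewicz operations.

        import numpy as np

        p = np.array([0.2, 0.5, 0.3])          # measure on three outcomes

        def prob(event):                       # probability = expectation
            return float(p @ event)

        def luk_and(a, b): return np.maximum(0.0, a + b - 1.0)
        def luk_or(a, b):  return np.minimum(1.0, a + b)
        def luk_not(a):    return 1.0 - a

        A = np.array([1.0, 0.0, 1.0])          # classical (Boolean) event
        F = np.array([0.5, 0.25, 0.0])         # a "fraction" of an event
        print(prob(A), prob(F), prob(luk_or(A, F)), prob(luk_not(F)))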

  1. Robust Model-Free Multiclass Probability Estimation

    Science.gov (United States)

    Wu, Yichao; Zhang, Hao Helen; Liu, Yufeng

    2010-01-01

    Classical statistical approaches for multiclass probability estimation are typically based on regression techniques such as multiple logistic regression, or density estimation approaches such as linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). These methods often make certain assumptions on the form of probability functions or on the underlying distributions of subclasses. In this article, we develop a model-free procedure to estimate multiclass probabilities based on large-margin classifiers. In particular, the new estimation scheme is employed by solving a series of weighted large-margin classifiers and then systematically extracting the probability information from these multiple classification rules. A main advantage of the proposed probability estimation technique is that it does not impose any strong parametric assumption on the underlying distribution and can be applied for a wide range of large-margin classification methods. A general computational algorithm is developed for class probability estimation. Furthermore, we establish asymptotic consistency of the probability estimates. Both simulated and real data examples are presented to illustrate competitive performance of the new approach and compare it with several other existing methods. PMID:21113386

  2. Uncertainty about probability: a decision analysis perspective

    Energy Technology Data Exchange (ETDEWEB)

    Howard, R.A.

    1988-03-01

    The issue of how to think about uncertainty about probability is framed and analyzed from the viewpoint of a decision analyst. The failure of nuclear power plants is used as an example. The key idea is to think of probability as describing a state of information on an uncertain event, and to pose the issue of uncertainty in this quantity as uncertainty about a number that would be definitive: it has the property that you would assign it as the probability if you knew it. Logical consistency requires that the probability to assign to a single occurrence in the absence of further information be the mean of the distribution of this definitive number, not the median as is sometimes suggested. Any decision that must be made without the benefit of further information must also be made using the mean of the definitive number's distribution. With this formulation, we find further that the probability of r occurrences in n exchangeable trials will depend on the first n moments of the definitive number's distribution. In making decisions, the expected value of clairvoyance on the occurrence of the event must be at least as great as that on the definitive number. If one of the events in question occurs, then the increase in probability of another such event is readily computed. This means, in terms of coin tossing, that unless one is absolutely sure of the fairness of a coin, seeing a head must increase the probability of heads, in distinction to usual thought. A numerical example for nuclear power shows that the failure of one plant of a group with a low probability of failure can significantly increase the probability that must be assigned to failure of a second plant in the group.
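
    The closing nuclear-plant point follows from a two-line moment calculation. In the Python sketch below, a Beta density for the definitive number is an assumption chosen for convenience; the probability of a second failure given a first is E[p^2]/E[p], which always exceeds the prior mean E[p] unless p is known exactly.

        # Beta(a, b) prior over the "definitive" failure probability p
        a, b = 1.0, 99.0                     # prior mean failure chance 0.01
        mean_p = a / (a + b)                 # assign this to a single event
        second_moment = a * (a + 1) / ((a + b) * (a + b + 1))
        p_second_given_first = second_moment / mean_p
        print(mean_p)                        # 0.01
        print(p_second_given_first)          # ~0.0198 -- nearly doubled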

  3. Confinement/deconfinement transition from symmetry breaking in gauge/gravity duality

    Energy Technology Data Exchange (ETDEWEB)

    Čubrović, Mihailo [Institute for Theoretical Physics, University of Cologne,Zülpicher Strasse 77, D-50937, Cologne (Germany)

    2016-10-19

    We study the confinement/deconfinement transition in a strongly coupled system triggered by an independent symmetry-breaking quantum phase transition in gauge/gravity duality. The gravity dual is an Einstein-scalar-dilaton system with AdS near-boundary behavior and soft wall interior at zero scalar condensate. We study the cases of neutral and charged condensate separately. In the former case the condensation breaks the discrete ℤ₂ symmetry while a charged condensate breaks the continuous U(1) symmetry. After the condensation of the order parameter, the non-zero vacuum expectation value of the scalar couples to the dilaton, changing the soft wall geometry into a non-confining and anisotropically scale-invariant infrared metric. In other words, the formation of long-range order is immediately followed by the deconfinement transition and the two critical points coincide. The confined phase has a scale — the confinement scale (energy gap) which vanishes in the deconfined case. Therefore, the breaking of the symmetry of the scalar (ℤ₂ or U(1)) in turn restores the scaling symmetry in the system and neither phase has a higher overall symmetry than the other. When the scalar is charged the phase transition is continuous which goes against the Ginzburg-Landau theory where such transitions generically only occur discontinuously. This phenomenon has some commonalities with the scenario of deconfined criticality. The mechanism we have found has applications mainly in effective field theories such as quantum magnetic systems. We briefly discuss these applications and the relation to real-world systems.

  4. The Theoretical Transition Probabilities Between the $B^3\Pi_g$ and the $A^3\Sigma_u^+$, $W^3\Delta_u$, $B'^3\Sigma_u^-$ States of N$_2$

    Science.gov (United States)

    Thuemmel, Helmar T.; Partridge, Harry; Huo, Winifred M.; Langhoff, Stephen (Technical Monitor)

    1995-01-01

    The electronic transition moment functions between the $B^3\Pi_g$ and the $A^3\Sigma_u^+$, $W^3\Delta_u$, $B'^3\Sigma_u^-$ states of N$_2$ are studied using the internally contracted multireference configuration interaction (ICMRCI) method based upon complete active space SCF (CASSCF) reference wavefunctions. The dependence of the moments on both the one- and n-particle basis sets has been investigated in detail. The calculated radiative lifetimes for the vibrational levels of $B^3\Pi_g$ are in excellent agreement with the most recent measurement of Euler and Pipkin (1983).

  5. A Course on Elementary Probability Theory

    OpenAIRE

    Lo, Gane Samb

    2017-01-01

    This book introduces the theory of probability from the beginning. Assuming that the reader possesses the normal mathematical level acquired at the end of secondary school, we aim to equip the reader with a solid basis in probability theory. The theory is preceded by a general chapter on counting methods. Then, the theory of probability is presented in a discrete framework. Two objectives are sought. The first is to give the reader the ability to solve a large number of problems related t...

  6. Comparing linear probability model coefficients across groups

    DEFF Research Database (Denmark)

    Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt

    2015-01-01

    This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.

  7. Pre-aggregation for Probability Distributions

    DEFF Research Database (Denmark)

    Timko, Igor; Dyreson, Curtis E.; Pedersen, Torben Bach

    Motivated by the increasing need to analyze complex uncertain multidimensional data (e.g., in order to optimize and personalize location-based services), this paper proposes novel types of probabilistic OLAP queries that operate on aggregate values that are probability distributions, and the techniques to process these queries. The paper also presents the methods for computing the probability distributions, which enables pre-aggregation, and for using the pre-aggregated distributions for further aggregation. In order to achieve good time and space efficiency, the methods perform approximate ... multidimensional data analysis that is considered in this paper (i.e., approximate processing of probabilistic OLAP queries over probability distributions).

  8. Concept of probability in statistical physics

    CERN Document Server

    Guttmann, Y M

    1999-01-01

    Foundational issues in statistical mechanics and the more general question of how probability is to be understood in the context of physical theories are both areas that have been neglected by philosophers of physics. This book fills an important gap in the literature by providing a most systematic study of how to interpret probabilistic assertions in the context of statistical mechanics. The book explores both subjectivist and objectivist accounts of probability, and takes full measure of work in the foundations of probability theory, in statistical mechanics, and in mathematical theory. It will be of particular interest to philosophers of science, physicists and mathematicians interested in foundational issues, and also to historians of science.

  9. Handbook of probability theory and applications

    CERN Document Server

    Rudas, Tamas

    2008-01-01

    ""This is a valuable reference guide for readers interested in gaining a basic understanding of probability theory or its applications in problem solving in the other disciplines.""-CHOICEProviding cutting-edge perspectives and real-world insights into the greater utility of probability and its applications, the Handbook of Probability offers an equal balance of theory and direct applications in a non-technical, yet comprehensive, format. Editor Tamás Rudas and the internationally-known contributors present the material in a manner so that researchers of vari

  10. Computation of the Complex Probability Function

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

    The complex probability function is important in many areas of physics and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of the Gauss-Hermite quadrature for the complex probability function.
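
    The quadrature idea is concrete: for Im(z) > 0 the complex probability (Faddeeva) function is w(z) = (i/pi) * integral of exp(-t^2)/(z - t) dt, so Gauss-Hermite nodes x_k and weights w_k give w(z) ≈ (i/pi) * sum of w_k/(z - x_k). A short Python check against SciPy's reference routine (the node count n=40 is an arbitrary choice):

        import numpy as np
        from numpy.polynomial.hermite import hermgauss
        from scipy.special import wofz   # reference Faddeeva implementation

        def w_gauss_hermite(z, n=40):
            # Gauss-Hermite approximation of the complex probability function,
            # valid for Im(z) > 0 and degrading as z approaches the real axis.
            x, wts = hermgauss(n)
            return 1j / np.pi * np.sum(wts / (z - x))

        z = 1.5 + 0.8j
        print(w_gauss_hermite(z))
        print(wofz(z))                   # should agree to several digits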

  11. Probability an introduction with statistical applications

    CERN Document Server

    Kinney, John J

    2014-01-01

    Praise for the First Edition: "This is a well-written and impressively presented introduction to probability and statistics. The text throughout is highly readable, and the author makes liberal use of graphs and diagrams to clarify the theory." - The Statistician. Thoroughly updated, Probability: An Introduction with Statistical Applications, Second Edition features a comprehensive exploration of statistical data analysis as an application of probability. The new edition provides an introduction to statistics with accessible coverage of reliability, acceptance sampling, confidence intervals, h

  12. Eliciting Subjective Probabilities with Binary Lotteries

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    2014-01-01

    We evaluate a binary lottery procedure for inducing risk neutral behavior in a subjective belief elicitation task. Prior research has shown this procedure to robustly induce risk neutrality when subjects are given a single risk task defined over objective probabilities. Drawing a sample from the same subject population, we find evidence that the binary lottery procedure also induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct revelation of subjective probabilities in subjects...

  13. Encounter Probability of Individual Wave Height

    DEFF Research Database (Denmark)

    Liu, Z.; Burcharth, H. F.

    1998-01-01

    wave height corresponding to a certain exceedence probability within a structure lifetime (encounter probability), based on the statistical analysis of long-term extreme significant wave height. Then the design individual wave height is calculated as the expected maximum individual wave height associated with the design significant wave height, with the assumption that the individual wave heights follow the Rayleigh distribution. However, the exceedence probability of such a design individual wave height within the structure lifetime is unknown. The paper presents a method for the determination of the design individual wave height corresponding to an exceedence probability within the structure lifetime, given the long-term extreme significant wave height. The method can also be applied for estimation of the number of relatively large waves for fatigue analysis of constructions.
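
    Under the stated Rayleigh assumption the key quantities are one-liners: a single wave satisfies P(H > h) = exp(-2 (h/Hs)^2), so the height exceeded at least once in n waves with encounter probability p_enc follows by inverting 1 - (1 - P)^n. A Python sketch with invented inputs:

        import numpy as np

        def height_for_encounter_probability(Hs, n_waves, p_enc):
            # Individual wave height exceeded at least once in n_waves waves
            # with probability p_enc, assuming i.i.d. Rayleigh heights.
            p_single = 1.0 - (1.0 - p_enc) ** (1.0 / n_waves)
            return Hs * np.sqrt(-np.log(p_single) / 2.0)

        def expected_max_height(Hs, n_waves):
            # Common first-order estimate of the expected maximum height.
            return Hs * np.sqrt(np.log(n_waves) / 2.0)

        Hs = 8.0   # design significant wave height in metres (assumed)
        print(expected_max_height(Hs, 1000))                    # ~14.9 m
        print(height_for_encounter_probability(Hs, 1000, 0.1))  # ~17.1 m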

  14. Modelling the probability of building fires

    Directory of Open Access Journals (Sweden)

    Vojtěch Barták

    2014-12-01

    Full Text Available Systematic spatial risk analysis plays a crucial role in preventing emergencies. In the Czech Republic, risk mapping is currently based on the risk accumulation principle, area vulnerability, and preparedness levels of Integrated Rescue System components. Expert estimates are used to determine risk levels for individual hazard types, while statistical modelling based on data from actual incidents and their possible causes is not used. Our model study, conducted in cooperation with the Fire Rescue Service of the Czech Republic as a model within the Liberec and Hradec Králové regions, presents an analytical procedure leading to the creation of building fire probability maps based on recent incidents in the studied areas and on building parameters. In order to estimate the probability of building fires, a prediction model based on logistic regression was used. Probability of fire calculated by means of model parameters and attributes of specific buildings can subsequently be visualized in probability maps.
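
    A minimal version of the fitting step (Python; all covariates and coefficients are synthetic stand-ins, whereas the study fitted real incident data and building parameters):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))          # e.g. age, floor area, usage
        logit = -2.0 + 1.2 * X[:, 0] + 0.5 * X[:, 1]
        y = rng.random(500) < 1.0 / (1.0 + np.exp(-logit))  # fire occurred?

        model = LogisticRegression().fit(X, y)
        p_fire = model.predict_proba(X)[:, 1]  # per-building probabilities,
        print(model.coef_, p_fire[:5])         # ready to map spatially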

  15. Probability Analysis of a Quantum Computer

    OpenAIRE

    Einarsson, Göran

    2003-01-01

    The quantum computer algorithm by Peter Shor for factorization of integers is studied. The quantum nature of a QC makes its outcome random. The output probability distribution is investigated and the chances of a successful operation are determined.

  16. Nanoformulations and Clinical Trial Candidates as Probably ...

    African Journals Online (AJOL)

    Nanoformulations and Clinical Trial Candidates as Probably Effective and Safe Therapy for Tuberculosis. Madeeha Laghari, Yusrida Darwis, Abdul Hakeem Memon, Arshad Ali Khan, Ibrahim Mohammed Tayeb Abdulbaqi, Reem Abou Assi ...

  17. Probability and statistics with integrated software routines

    CERN Document Server

    Deep, Ronald

    2005-01-01

    Probability & Statistics with Integrated Software Routines is a calculus-based treatment of probability concurrent with and integrated with statistics through interactive, tailored software applications designed to enhance the phenomena of probability and statistics. The software programs make the book unique.The book comes with a CD containing the interactive software leading to the Statistical Genie. The student can issue commands repeatedly while making parameter changes to observe the effects. Computer programming is an excellent skill for problem solvers, involving design, prototyping, data gathering, testing, redesign, validating, etc, all wrapped up in the scientific method.See also: CD to accompany Probability and Stats with Integrated Software Routines (0123694698)* Incorporates more than 1,000 engaging problems with answers* Includes more than 300 solved examples* Uses varied problem solving methods

  18. Sampling, Probability Models and Statistical Reasoning Statistical ...

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning Statistical Inference. Mohan Delampady V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...

  19. Zika Probably Not Spread Through Saliva: Study

    Science.gov (United States)

    (HealthDay News) -- Scientists have some interesting news about Zika: You're unlikely to get the virus from ...

  20. Liquefaction Probability Curves for Surficial Geologic Units

    Science.gov (United States)

    Holzer, T. L.; Noce, T. E.; Bennett, M. J.

    2009-12-01

    Liquefaction probability curves that predict the probability of surface manifestations of earthquake-induced liquefaction are developed for 14 different surficial geologic deposits. The geologic units include alluvial fan, beach ridge, river delta, eolian dune, point bar, floodbasin, natural river levee, abandoned river channel, deep-water lake, lagoonal, sandy artificial fill, and valley train deposits. Probability is conditioned on earthquake magnitude and peak ground acceleration. Curves are developed for water table depths of 1.5 and 5.0 m. Probabilities were derived from complementary cumulative frequency distributions of the liquefaction potential index (LPI) that were computed from 935 cone penetration tests. Most of the curves can be fit with a 3-parameter logistic function, which facilitates computations of probability. For natural deposits with a water table at 1.5 m depth and subjected to an M7.5 earthquake with a PGA = 0.25 g, probabilities range up to 0.5 for fluvial point bar, barrier island beach ridge, and deltaic deposits. Retrospective predictions of liquefaction during historical earthquakes based on the curves compare favorably to post-earthquake observations. We also have used the curves to assign ranges of liquefaction probabilities to the susceptibility categories proposed by Youd and Perkins (1978) for different geologic deposits. For the earthquake loading and conditions described above, probabilities range from 0-0.08 for low, 0.09-0.30 for moderate, 0.31-0.62 for high, to 0.63-1.00 for very high susceptibility. Liquefaction probability curves have two primary practical applications. First, the curves can be combined with seismic source characterizations to transform surficial geologic maps into probabilistic liquefaction hazard maps. Geographically specific curves are clearly desirable, but in the absence of such information, generic liquefaction probability curves provide a first approximation of liquefaction hazard. Such maps are useful both ...
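
    The 3-parameter logistic fit mentioned above makes the probability computation trivial. The sketch below assumes the generic form P(x) = a / (1 + exp(-(x - b)/c)); the parameter values are invented for illustration and are not the published fits.

        import math

        def liquefaction_probability(pga, a=0.85, b=0.30, c=0.08):
            """3-parameter logistic curve: ceiling a, midpoint b, width c.
            Parameter values are illustrative, not from the study."""
            return a / (1.0 + math.exp(-(pga - b) / c))

        for pga in (0.10, 0.25, 0.40):
            print(f"PGA = {pga:.2f} g -> P = {liquefaction_probability(pga):.2f}")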

  1. Representing Uncertainty by Probability and Possibility

    DEFF Research Database (Denmark)

    Uncertain parameters in modeling are usually represented by probability distributions reflecting either the objective uncertainty of the parameters or the subjective belief held by the model builder. This approach is particularly suited for representing the statistical nature or variance of uncertain ...

  2. On $\\varphi$-families of probability distributions

    OpenAIRE

    Vigelis, Rui F.; Cavalcante, Charles C.

    2011-01-01

    We generalize the exponential family of probability distributions. In our approach, the exponential function is replaced by a $\\varphi$-function, resulting in a $\\varphi$-family of probability distributions. We show how $\\varphi$-families are constructed. In a $\\varphi$-family, the analogue of the cumulant-generating function is a normalizing function. We define the $\\varphi$-divergence as the Bregman divergence associated to the normalizing function, providing a generalization of the Kullbac...
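
    Schematically, in notation of my own choosing since the abstract is truncated, the generalization replaces the exponential in

        p_\theta(x) = \exp\bigl(\langle\theta, T(x)\rangle - \psi(\theta)\bigr)\, p_0(x)

    by a $\varphi$-function, giving

        p_\theta(x) = \varphi\bigl(\langle\theta, T(x)\rangle - \psi_\varphi(\theta)\bigr),

    where ψ is the cumulant-generating function and the normalizing function ψ_φ is chosen so that p_θ integrates to one; the φ-divergence is then the Bregman divergence associated with ψ_φ.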

  3. An Illustrative Problem in Computational Probability.

    Science.gov (United States)

    1980-06-01

    easily evaluated. In general, the probabilities φ_j^(n)(t) may be computed by the numerical solution of the simple differential equations ... algorithmically tractable solutions to problems in probability adds an interesting new dimension to their analysis. In the construction of efficient ... significance. This serves to illustrate our first point. Mathematically equivalent solutions may be vastly different in their suitability for ...

  4. Imprecise Probability Methods for Weapons UQ

    Energy Technology Data Exchange (ETDEWEB)

    Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-13

    Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.

  5. Probability and statistics for computer science

    CERN Document Server

    Johnson, James L

    2011-01-01

    Comprehensive and thorough development of both probability and statistics for serious computer scientists; goal-oriented: "to present the mathematical analysis underlying probability results". Special emphases on simulation and discrete decision theory. Mathematically rich, but self-contained text, at a gentle pace. Review of calculus and linear algebra in an appendix. Mathematical interludes (in each chapter) which examine mathematical techniques in the context of probabilistic or statistical importance. Numerous section exercises, summaries, historical notes, and Further Readings for reinforcement.

  6. Presidential Transitions

    Science.gov (United States)

    2006-06-09

    done to facilitate the transition. ... David T. Stanley, Changing Administrations (Washington: Brookings Institution, 1965), p. 6. ... Conference of Mayors; Sharleen Hirsch, an educational administrator; and Jule Sugarman, a public administrator. Staff members were assigned to task forces ... "... Issues," Washington Post, Nov. 13, 1980, p. A1. ... David Hoffman, "Bush Names Baker Secretary of State," Washington Post, Nov. 10, 1988, pp. A1 and ...

  7. Improving Transit Predictions of Known Exoplanets with TERMS

    Directory of Open Access Journals (Sweden)

    Mahadevan S.

    2011-02-01

    Full Text Available Transiting planet discoveries have largely been restricted to the short-period or low-periastron distance regimes due to the bias inherent in the geometric transit probability. Through the refinement of planetary orbital parameters, and hence reducing the size of transit windows, long-period planets become feasible targets for photometric follow-up. Here we describe the TERMS project that is monitoring these host stars at predicted transit times.
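
    The geometric bias at issue follows from the standard transit-probability estimate for a circular orbit, P ≈ R*/a, so detectability falls off directly with orbital distance. A quick sketch using generic values (not TERMS targets):

        R_SUN_AU = 0.00465  # solar radius in astronomical units

        def transit_probability(a_au, r_star_rsun=1.0):
            """Geometric transit probability, circular orbit: P ~ R_star / a."""
            return r_star_rsun * R_SUN_AU / a_au

        for a in (0.05, 0.5, 5.0):  # hot Jupiter through long-period regime
            print(f"a = {a:4.2f} AU -> P(transit) = {transit_probability(a):.4f}")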

  8. Estimating the Probabilities of Low-Weight Differential and Linear Approximations on PRESENT-like Ciphers

    DEFF Research Database (Denmark)

    Abdelraheem, Mohamed Ahmed

    2012-01-01

    We use large but sparse correlation and transition-difference-probability submatrices to find the best linear and differential approximations respectively on PRESENT-like ciphers. This outperforms the branch and bound algorithm when the number of low-weight differential and linear characteristics...
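
    A schematic of the general technique (not the author's code): restrict each round's transition-difference-probability matrix to low-weight differences, then chain rounds with sparse matrix-vector products; summing over trails approximates the differential probability.

        import numpy as np
        import scipy.sparse as sp

        # Toy stand-in for a per-round submatrix over low-weight differences;
        # entries and size are made up for illustration.
        r = 1000
        M = sp.random(r, r, density=0.01, random_state=1, format="csr")
        M = M / M.sum()    # scale entries into a probability-like range

        state = np.zeros(r)
        state[0] = 1.0     # one-hot vector: the chosen input difference
        for _ in range(4):  # propagate through 4 rounds
            state = M.T @ state
        print("largest output-difference probability:", state.max())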

  9. Hillslopes to Hollows to Channels: Identifying Process Transitions and Domains using Characteristic Scaling Relations

    Science.gov (United States)

    Williams, K.; Locke, W. W.

    2011-12-01

    Headwater catchments are partitioned into hillslopes, unchanneled valleys (hollows), and channels. Low order (less than or equal to two) channels comprise most of the stream length in the drainage network so defining where hillslopes end and hollows begin, and where hollows end and channels begin, is important for calibration and verification of hydrologic runoff and sediment production modeling. We test the use of landscape scaling relations to detect flow regimes characteristic of diffusive, concentrated, and incisive runoff, and use these flow regimes as proxies for hillslope, hollow, and channeled landforms. We use LiDAR-derived digital elevation models (DEMs) of two pairs of headwater catchments in southwest and north-central Montana to develop scaling relations of flowpath length, total stream power, and contributing area. The catchment pairs contrast low versus high drainage density and north versus south aspect. Inflections in scaling relations of contributing area and flowpath length in a single basin (modified Hack's law) and contributing area and total stream power were used to identify hillslope and fluvial process domain transitions. In the modified Hack's law, inflections in the slope of the log-log power law are hypothesized to correspond to changes in flow regime used as proxies for hillslope, hollow, and channeled landforms. Similarly, rate of change of total stream power with contributing area is hypothesized to become constant and then decrease at the hillslope to fluvial domain transition. Power law scaling of frequency-magnitude plots of curvature and an aspect-related parameter were also tested as an indicator of the transition from scale-dependent hillslope length to the scale invariant fluvial domain. Curvature and aspect were calculated at each cell in spectrally filtered DEMs. Spectral filtering by fast Fourier and wavelet transforms enhances detection of fine-scale fluvial features by removing long wavelength topography. Using the
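
    The inflection-detection step can be sketched as follows: estimate the local slope of the log-log length-area (modified Hack's law) relation and flag where the exponent shifts. The synthetic data below are illustrative only.

        import numpy as np

        # Synthetic flowpath lengths whose power-law exponent drops from 0.8
        # (hillslope-like) to 0.5 (fluvial-like) at A = 1e3 m^2.
        area = np.logspace(1, 6, 200)
        length = np.where(area < 1e3, area**0.8, 10**(0.8 * 3) * (area / 1e3)**0.5)

        log_a, log_l = np.log10(area), np.log10(length)
        slope = np.gradient(log_l, log_a)          # local power-law exponent

        i = np.argmax(np.abs(np.gradient(slope, log_a)))  # sharpest exponent change
        print(f"candidate domain transition near A = {area[i]:.0f} m^2")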

  10. Probability of pregnancy in beef heifers

    Directory of Open Access Journals (Sweden)

    D.P. Faria

    2014-12-01

    Full Text Available This study aimed to evaluate the influence of initial weight, initial age, average daily gain in initial weight, average daily gain in total weight and genetic group on the probability of pregnancy in primiparous females of the Nellore, 1/2 Simmental + 1/2 Nellore, and 3/4 Nellore + 1/4 Simmental genetic groups. Data were collected from the livestock file of the Farpal Farm, located in the municipality of Jaíba, Minas Gerais State, Brazil. The pregnancy diagnosis results (success = 1 and failure = 0) were used to determine the probability of pregnancy, which was modeled using logistic regression by the Proc Logistic procedure available in SAS (Statistical..., 2004) software, from the regressor variables initial weight, average daily gain in initial weight, average daily gain in total weight, and genetic group. Initial weight (IW) was the most important variable in the probability of pregnancy in heifers, and 1-kg increments in IW allowed for increases of 5.8, 9.8 and 3.4% in the probability of pregnancy in Nellore, 1/2 Simmental + 1/2 Nellore and 3/4 Nellore + 1/4 Simmental heifers, respectively. The initial age influenced the probability of pregnancy in Nellore heifers. From the estimates of the effects of each variable it was possible to determine the minimum initial weights for each genetic group. This information can be used to monitor the development of heifers until the breeding season and increase the pregnancy rate.
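
    If the reported per-kilogram percentages are interpreted as multiplicative effects on the odds, the standard reading of logistic-regression coefficients (an assumption on my part, since the abstract says "probability"), each corresponds to a coefficient beta = ln(1 + effect):

        import math

        # Per-kg effects from the abstract, read here as odds-ratio effects.
        effects = {
            "Nellore": 0.058,
            "1/2 Simmental + 1/2 Nellore": 0.098,
            "3/4 Nellore + 1/4 Simmental": 0.034,
        }

        for group, eff in effects.items():
            beta = math.log(1 + eff)        # logistic coefficient per kg of IW
            odds_x20 = math.exp(beta * 20)  # effect of +20 kg initial weight
            print(f"{group}: beta = {beta:.4f}/kg, +20 kg -> odds x {odds_x20:.2f}")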

  11. Laboratory-tutorial activities for teaching probability

    Directory of Open Access Journals (Sweden)

    Michael C. Wittmann

    2006-08-01

    Full Text Available We report on the development of students’ ideas of probability and probability density in a University of Maine laboratory-based general education physics course called Intuitive Quantum Physics. Students in the course are generally math phobic with unfavorable expectations about the nature of physics and their ability to do it. We describe a set of activities used to teach concepts of probability and probability density. Rudimentary knowledge of mechanics is needed for one activity, but otherwise the material requires no additional preparation. Extensions of the activities include relating probability density to potential energy graphs for certain “touchstone” examples. Students have difficulties learning the target concepts, such as comparing the ratio of time in a region to total time in all regions. Instead, they often focus on edge effects, pattern match to previously studied situations, reason about necessary but incomplete macroscopic elements of the system, use the gambler’s fallacy, and use expectations about ensemble results rather than expectation values to predict future events. We map the development of their thinking to provide examples of problems rather than evidence of a curriculum’s success.
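
    The target concept, probability as the ratio of time spent in a region to total time, can be made concrete with a classical oscillator (my example, not an activity from the curriculum):

        import numpy as np

        # Classical oscillator x(t) = A sin(t): P(region) = time in region / period.
        A = 1.0
        t = np.linspace(0.0, 2 * np.pi, 1_000_000, endpoint=False)  # one period
        x = A * np.sin(t)

        print("P(x > 0.8A)     =", (x > 0.8 * A).mean())              # ~0.20: slow near the turning point
        print("P(0 < x < 0.2A) =", ((x > 0) & (x < 0.2 * A)).mean())  # ~0.06: fast through the center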

  12. Failure probability of regional flood defences

    Directory of Open Access Journals (Sweden)

    Lendering Kasper

    2016-01-01

    Full Text Available Polders in the Netherlands are protected from flooding by primary and regional flood defence systems. During the last decade, scientific research in flood risk focused on the development of a probabilistic approach to quantify the probability of flooding of the primary flood defence system. This paper proposed a methodology to quantify the probability of flooding of regional flood defence systems, which required several additions to the methodology used for the primary flood defence system. These additions focused on a method to account for regulation of regional water levels, the possibility of (reduced) intrusion resistance due to maintenance dredging in regional waters, the probability of traffic loads and the influence of dependence between regional water levels and the phreatic surface of a regional flood defence. In addition, reliability updating is used to demonstrate the potential for updating the probability of failure of regional flood defences with performance observations. The results demonstrated that the proposed methodology can be used to determine the probability of flooding of a regional flood defence system. In doing so, the methodology contributes to improving flood risk management in these systems.
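
    The reliability-updating step works by Bayes' rule: observing that a defence survived a past load removes the weakest resistance realizations from the prior. A generic Monte Carlo sketch with an invented resistance model, not the paper's:

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative lognormal prior on dike resistance (arbitrary units).
        resistance = rng.lognormal(mean=2.0, sigma=0.4, size=1_000_000)
        design_load = 3.5

        p_prior = (resistance < design_load).mean()

        # Performance observation: the defence survived a load of 3.0 (hypothetical).
        survived = resistance >= 3.0
        p_post = (resistance[survived] < design_load).mean()

        print(f"P(failure) prior {p_prior:.3f} -> updated {p_post:.3f}")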

  13. The role of probabilities in physics.

    Science.gov (United States)

    Le Bellac, Michel

    2012-09-01

    Although modern physics was born in the XVIIth century as a fully deterministic theory in the form of Newtonian mechanics, the use of probabilistic arguments turned out later on to be unavoidable. Three main situations can be distinguished. (1) When the number of degrees of freedom is very large, on the order of Avogadro's number, a detailed dynamical description is not possible, and in fact not useful: we do not care about the velocity of a particular molecule in a gas, all we need is the probability distribution of the velocities. This statistical description introduced by Maxwell and Boltzmann allows us to recover equilibrium thermodynamics, gives a microscopic interpretation of entropy and underlies our understanding of irreversibility. (2) Even when the number of degrees of freedom is small (but larger than three) sensitivity to initial conditions of chaotic dynamics makes determinism irrelevant in practice, because we cannot control the initial conditions with infinite accuracy. Although die tossing is in principle predictable, the approach to chaotic dynamics in some limit implies that our ignorance of initial conditions is translated into a probabilistic description: each face comes up with probability 1/6. (3) As is well-known, quantum mechanics is incompatible with determinism. However, quantum probabilities differ in an essential way from the probabilities introduced previously: it has been shown from the work of John Bell that quantum probabilities are intrinsic and cannot be given an ignorance interpretation based on a hypothetical deeper level of description.

  14. On the Possibility of Assigning Probabilities to Singular Cases, or: Probability Is Subjective Too!

    Directory of Open Access Journals (Sweden)

    Mark R. Crovelli

    2009-06-01

    Full Text Available Both Ludwig von Mises and Richard von Mises claimed that numerical probability could not be legitimately applied to singular cases. This paper challenges this aspect of the von Mises brothers’ theory of probability. It is argued that their denial that numerical probability could be applied to singular cases was based solely upon Richard von Mises’ exceptionally restrictive definition of probability. This paper challenges Richard von Mises’ definition of probability by arguing that the definition of probability necessarily depends upon whether the world is governed by time-invariant causal laws. It is argued that if the world is governed by time-invariant causal laws, a subjective definition of probability must be adopted. It is further argued that both the nature of human action and the relative frequency method for calculating numerical probabilities presuppose that the world is indeed governed by time-invariant causal laws. It is finally argued that the subjective definition of probability undercuts the von Mises claim that numerical probability cannot legitimately be applied to singular, non-replicable cases.

  15. Rapid transitions

    Energy Technology Data Exchange (ETDEWEB)

    Hamrin, J.G.

    1980-01-01

    Solar energy programs are entering a critical transitional period as we move from the initial marketing of solar technologies into a phase of widespread commercialization. We face the dual challenge of trying to get enough solar systems in place fast enough to prove solar is a viable alternative, while trying to ensure the systems are designed and installed properly, proving the energy savings as promised. This is a period of both great opportunity and high risk as the field becomes crowded with new solar cheerleaders and supporters but seldom enough competent players. The status of existing and proposed programs for the accelerated commercialization of solar energy in California is described.

  16. Transit space

    DEFF Research Database (Denmark)

    Raahauge, Kirsten Marie

    2008-01-01

    This article deals with representations of one specific city, Århus, Denmark, especially its central district. The analysis is based on anthropological fieldwork conducted in Skåde Bakker and Fedet, two well-off neighborhoods. The overall purpose of the project is to study perceptions of space and the interaction of cultural, social, and spatial organizations, as seen from the point of view of people living in Skåde Bakker and Fedet. The focus is on the city dwellers’ representations of the central district of Århus with specific reference to the concept of transit space. When applied to various Århusian ...

  17. Comparing coefficients of nested nonlinear probability models

    DEFF Research Database (Denmark)

    Kohler, Ulrich; Karlson, Kristian Bernt; Holm, Anders

    2011-01-01

    In a series of recent articles, Karlson, Holm and Breen have developed a method for comparing the estimated coefficients of two nested nonlinear probability models. This article describes this method and the user-written program khb that implements the method. The KHB-method is a general decomposition method that is unaffected by the rescaling or attenuation bias that arises in cross-model comparisons in nonlinear models. It recovers the degree to which a control variable, Z, mediates or explains the relationship between X and a latent outcome variable, Y*, underlying the nonlinear probability ...
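
    The core of the KHB trick, sketched here in Python rather than the Stata program khb: refit the reduced model with Z replaced by its residual from a regression on X, so both models share the same scale and the coefficient difference isolates mediation. Data are simulated.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 5000
        x = rng.normal(size=n)
        z = 0.5 * x + rng.normal(size=n)                    # mediator
        y = (x + z + rng.logistic(size=n) > 0).astype(int)  # latent-index logit

        fit = lambda X: sm.Logit(y, sm.add_constant(X)).fit(disp=0)

        full = fit(np.column_stack([x, z]))
        naive = fit(x)                        # rescaled; not comparable to full
        z_resid = z - np.polyval(np.polyfit(x, z, 1), x)
        khb = fit(np.column_stack([x, z_resid]))            # same scale as full

        print("naive total effect of x:", round(naive.params[1], 3))
        print("KHB total effect of x:  ", round(khb.params[1], 3))
        print("direct effect of x:     ", round(full.params[1], 3))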

  18. Classicality versus quantumness in Born's probability

    Science.gov (United States)

    Luo, Shunlong

    2017-11-01

    Born's rule, which postulates the probability of a measurement outcome in a quantum state, is pivotal to interpretations and applications of quantum mechanics. By exploiting the departure of the product of two Hermitian operators in Born's rule from Hermiticity, we prescribe an intrinsic and natural scheme to decompose Born's probability into a classical part and a quantum part, which have significant implications in quantum information theory. The classical part constitutes the information compatible with the associated measurement operator, while the quantum part represents the quantum coherence of the state with respect to the measurement operator. Fundamental properties of the decomposition are revealed. As applications, we establish several trade-off relations for the classicality and quantumness in Born's probability, which may be interpreted as alternative realizations of Heisenberg's uncertainty principle. The results shed physical lights on related issues concerning quantification of complementarity, coherence, and uncertainty, as well as the classical-quantum interplay.
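
    For reference, Born's rule in its density-operator form, the starting point of the decomposition (the split itself is specific to the paper and not reproduced here):

        p(k \mid \rho) = \operatorname{Tr}(\rho \Pi_k), \qquad \Pi_k \ge 0, \qquad \sum_k \Pi_k = I,

    where ρ is the state and {Π_k} the measurement operators. The product ρΠ_k is in general non-Hermitian even though ρ and Π_k are each Hermitian, and this departure is what the classical-quantum decomposition exploits.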

  19. Uncertainty the soul of modeling, probability & statistics

    CERN Document Server

    Briggs, William

    2016-01-01

    This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...

  20. 7th High Dimensional Probability Meeting

    CERN Document Server

    Mason, David; Reynaud-Bouret, Patricia; Rosinski, Jan

    2016-01-01

    This volume collects selected papers from the 7th High Dimensional Probability meeting held at the Institut d'Études Scientifiques de Cargèse (IESC) in Corsica, France. High Dimensional Probability (HDP) is an area of mathematics that includes the study of probability distributions and limit theorems in infinite-dimensional spaces such as Hilbert spaces and Banach spaces. The most remarkable feature of this area is that it has resulted in the creation of powerful new tools and perspectives, whose range of application has led to interactions with other subfields of mathematics, statistics, and computer science. These include random matrices, nonparametric statistics, empirical processes, statistical learning theory, concentration of measure phenomena, strong and weak approximations, functional estimation, combinatorial optimization, and random graphs. The contributions in this volume show that HDP theory continues to thrive and develop new tools, methods, techniques and perspectives to analyze random phenome...