WorldWideScience

Sample records for large threshold corrections

  1. Phenomenology of threshold corrections for inclusive jet production at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, M.C. [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik; Moch, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik

    2013-09-15

    We study one-jet inclusive hadro-production and compute the QCD threshold corrections for large transverse momentum of the jet in the soft-gluon resummation formalism at next-to-leading logarithmic accuracy. We use the resummed result to generate approximate QCD corrections at next-to-next-to-leading order, compare with results in the literature, and present rapidity-integrated distributions of the jet's transverse momentum for the Tevatron and the LHC. For the threshold approximation we investigate its kinematical range of validity as well as its dependence on the jet's cone size and kinematics.

  2. Gauge threshold corrections for local string models

    International Nuclear Information System (INIS)

    Conlon, Joseph P.

    2009-01-01

    We study gauge threshold corrections for local brane models embedded in a large compact space. A large bulk volume gives important contributions to the Konishi and super-Weyl anomalies, and the effective field theory analysis implies the unification scale should be enhanced in a model-independent way from M_s to RM_s. For local D3/D3 models this result is supported by explicit string computations. In this case the scale RM_s comes from the necessity of global cancellation of RR tadpoles sourced by the local model. We also study D3/D7 models and discuss discrepancies with the effective field theory analysis. We comment on phenomenological implications for gauge coupling unification and for the GUT scale.

  3. Superstring threshold corrections to Yukawa couplings

    International Nuclear Information System (INIS)

    Antoniadis, I.; Taylor, T.R.

    1992-12-01

    A general method of computing string corrections to the Kaehler metric and Yukawa couplings is developed at the one-loop level for a general compactification of the heterotic superstring theory. It also provides a direct determination of the so-called Green-Schwarz term. The matter metric has an infrared divergent part which reproduces the field-theoretical anomalous dimensions, and a moduli-dependent part which gives rise to threshold corrections in the physical Yukawa couplings. Explicit expressions are derived for symmetric orbifold compactifications. (author). 20 refs

  4. Gauge threshold corrections for local orientifolds

    International Nuclear Information System (INIS)

    Conlon, Joseph P.; Palti, Eran

    2009-01-01

    We study gauge threshold corrections for systems of fractional branes at local orientifold singularities and compare with the general Kaplunovsky-Louis expression for locally supersymmetric N = 1 gauge theories. We focus on branes at orientifolds of the C^3/Z_4, C^3/Z_6 and C^3/Z_6' singularities. We provide a CFT construction of these theories and compute the threshold corrections. Gauge coupling running undergoes two phases: one phase running from the bulk winding scale to the string scale, and a second phase running from the string scale to the infrared. The first phase is associated with the contribution of N = 2 sectors to the IR β functions and the second phase with the contribution of both N = 1 and N = 2 sectors. In contrast, naive application of the Kaplunovsky-Louis formula gives a single running from the bulk winding mode scale. The discrepancy is resolved through one-loop non-universality of the holomorphic gauge couplings at the singularity, induced by a one-loop redefinition of the twisted blow-up moduli, which couple differently to different gauge nodes. We also study the physics of anomalous and non-anomalous U(1)s and give a CFT description of how masses for non-anomalous U(1)s depend on the global properties of cycles.

  5. Threshold corrections and gauge symmetry in twisted superstring models

    International Nuclear Information System (INIS)

    Pierce, D.M.

    1994-01-01

    Threshold corrections to the running of gauge couplings are calculated for superstring models with free complex world sheet fermions. For two N=1 SU(2)×U(1)^5 models, the threshold corrections lead to a small increase in the unification scale. Examples are given to illustrate how a given particle spectrum can be described by models with different boundary conditions on the internal fermions. We also discuss how complex twisted fermions can enhance the symmetry group of an N=4 SU(3)×U(1)×U(1) model to the gauge group SU(3)×SU(2)×U(1). It is then shown how a mixing angle analogous to the Weinberg angle depends on the boundary conditions of the internal fermions.

  6. Quark and lepton masses at the GUT scale including supersymmetric threshold corrections

    International Nuclear Information System (INIS)

    Antusch, S.; Spinrath, M.

    2008-01-01

    We investigate the effect of supersymmetric (SUSY) threshold corrections on the values of the running quark and charged lepton masses at the grand unified theory (GUT) scale within the large tanβ regime of the minimal supersymmetric standard model. In addition to the typically dominant SUSY QCD contributions for the quarks, we also include the electroweak contributions for quarks and leptons and show that they can have significant effects. We provide the GUT scale ranges of quark and charged lepton Yukawa couplings as well as of the ratios m_μ/m_s, m_e/m_d, y_τ/y_b and y_t/y_b for three example ranges of SUSY parameters. We discuss how the enlarged ranges due to threshold effects might open up new possibilities for constructing GUT models of fermion masses and mixings.
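    The size of these corrections is dominated by tanβ-enhanced terms, which follow the standard resummed pattern y_f → y_f/(1 + ε_f tanβ). A minimal numerical sketch (the values of ε_b and tanβ below are illustrative assumptions, not numbers from the paper):

```python
def yukawa_correction(eps_b: float, tan_beta: float) -> float:
    """Multiplicative rescaling of the GUT-scale bottom Yukawa coupling
    from the tan(beta)-enhanced SUSY threshold correction,
    y_b -> y_b / (1 + eps_b * tan(beta))."""
    return 1.0 / (1.0 + eps_b * tan_beta)

# Illustrative: eps_b of a few permille (its size depends on the SUSY
# spectrum) and tan(beta) = 50 in the large-tan(beta) regime.
for eps_b in (-0.005, 0.0, 0.005):
    factor = yukawa_correction(eps_b, tan_beta=50.0)
    print(f"eps_b = {eps_b:+.3f}  ->  y_b rescaled by {factor:.3f}")
```

    Even a permille-level ε_b shifts the GUT-scale bottom Yukawa by tens of percent at tanβ = 50, which is why these corrections matter for quark-lepton mass relations.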

  7. Reduced modular symmetries of threshold corrections and gauge coupling unification

    Energy Technology Data Exchange (ETDEWEB)

    Bailin, David; Love, Alex [Department of Physics & Astronomy, University of Sussex,Brighton, BN1 9QH (United Kingdom)

    2015-04-01

    We revisit the question of gauge coupling unification at the string scale in orbifold compactifications of the heterotic string for the supersymmetric Standard Model. In the presence of discrete Wilson lines, threshold corrections arise whose modular symmetry is a subgroup of the full modular group. We find that reduced modular symmetries not previously reported are possible. We conjecture that the effects of such threshold corrections can be simulated using sums of terms built from Dedekind eta functions to obtain the appropriate modular symmetry. For the cases of the Z_8-I orbifold and the Z_3×Z_6 orbifold it is easily possible to obtain gauge coupling unification at the “observed” scale with Kähler moduli T of approximately one.

  8. String Loop Threshold Corrections for N=1 Generalized Coxeter Orbifolds

    OpenAIRE

    Kokorelis, Christos

    2000-01-01

    We discuss the calculation of threshold corrections to gauge coupling constants for the only non-decomposable class of abelian (2, 2) symmetric N=1 four-dimensional heterotic orbifold models, where the internal twist is realized as a generalized Coxeter automorphism. The latter orbifold was singled out in earlier work as the only N=1 heterotic $Z_N$ orbifold that satisfies the phenomenological criteria of correct minimal gauge coupling unification and cancellation of target space modular anom...

  9. String Threshold corrections in models with spontaneously broken supersymmetry

    CERN Document Server

    Kiritsis, Elias B; Petropoulos, P M; Rizos, J

    1999-01-01

    We analyse a class of four-dimensional heterotic ground states with N=2 space-time supersymmetry. From the ten-dimensional perspective, such models can be viewed as compactifications on a six-dimensional manifold with SU(2) holonomy, which is locally but not globally K3 x T^2. The maximal N=4 supersymmetry is spontaneously broken to N=2. The masses of the two massive gravitinos depend on the (T,U) moduli of T^2. We evaluate the one-loop threshold corrections of gauge and R^2 couplings and we show that they fall in several universality classes, in contrast to what happens in usual K3 x T^2 compactifications, where the N=4 supersymmetry is explicitly broken to N=2, and where a single universality class appears. These universality properties follow from the structure of the elliptic genus. The behaviour of the threshold corrections as functions of the moduli is analysed in detail: it is singular across several rational lines of the T^2 moduli because of the appearance of extra massless states, and suffers only f...

  10. Threshold corrections to dimension-six proton decay operators in non-minimal SUSY SU(5) GUTs

    Directory of Open Access Journals (Sweden)

    Borut Bajc

    2016-09-01

    We calculate the high- and low-scale threshold corrections to the D=6 proton decay mode in supersymmetric SU(5) grand unified theories with higher-dimensional representation Higgs multiplets. In particular, we focus on a missing-partner model in which the grand unified group is spontaneously broken by the 75-dimensional Higgs multiplet and the doublet–triplet splitting problem is solved. We find that in the missing-partner model the D=6 proton decay rate gets suppressed by about 60%, mainly due to the threshold effect at the GUT scale, while the SUSY-scale threshold corrections are found to be less prominent when sfermions are heavy.

  11. Nonperturbative correction to the threshold production of t anti-t pairs

    International Nuclear Information System (INIS)

    Fadin, V.S.; Yakovlev, O.I.

    1991-01-01

    We calculate the nonperturbative correction, connected with the existence of a gluon condensate, to the cross section for t anti-t pair production near threshold in e⁺e⁻ annihilation. The calculation is performed in a constant chromoelectric field approximation. 15 refs

  12. GUT scale threshold corrections in a complete supersymmetric SO(10) model: α_s(M_Z) versus proton lifetime

    International Nuclear Information System (INIS)

    Lucas, V.; Raby, S.

    1996-01-01

    We show that one-loop GUT scale threshold corrections to gauge couplings are a significant constraint on the GUT symmetry-breaking sector of the theory. The one-loop threshold corrections relate the prediction for α_s(M_Z) to the proton lifetime. We have calculated these corrections in a new complete SO(10) SUSY GUT. The results are consistent with the low-energy measurement of α_s(M_Z). We have also calculated the proton lifetime and branching ratios in this model. We show that proton decay rates provide a powerful test for theories of fermion masses. copyright 1996 The American Physical Society

  13. On next-to-eikonal corrections to threshold resummation for the Drell-Yan and DIS cross sections

    International Nuclear Information System (INIS)

    Laenen, Eric; Magnea, Lorenzo; Stavenga, Gerben

    2008-01-01

    We study corrections suppressed by one power of the soft gluon energy to the resummation of threshold logarithms for the Drell-Yan cross section and for Deep Inelastic structure functions. While no general factorization theorem is known for these next-to-eikonal (NE) corrections, it is conjectured that at least a subset will exponentiate, along with the logarithms arising at leading power. Here we develop some general tools to study NE logarithms, and we construct an ansatz for threshold resummation that includes various sources of NE corrections, implementing in this context the improved collinear evolution recently proposed by Dokshitzer, Marchesini and Salam (DMS). We compare our ansatz to existing exact results at two and three loops, finding evidence for the exponentiation of leading NE logarithms and confirming the predictivity of DMS evolution

  14. Coulomb Force Correction to the Decay b→cc̄s in the Threshold Region (Particles and Fields)

    OpenAIRE

    Kouhei, HASEGAWA; Department of Physics, University of Alberta

    2007-01-01

    We study the physical origins of the O(α_s) and O(α_s²) corrections to the c-s current in the decay b→cc̄s in the threshold region δ=(M_b−2m_c)/2M_b ≪ 1. We obtain the corrections which are produced by the Coulomb force between the anti-charm and strange quarks. The Coulomb corrections C_F π² at O(α_s) and −C_F² π² ln δ at O(α_s²) account for 300% and 120% of the corresponding terms in the Abelian-type perturbative corrections respectively. The differences between the Coulomb and perturbative...

  15. Large nondipole correlation effects near atomic photoionization thresholds

    International Nuclear Information System (INIS)

    Amusia, M.Y.; Felfli, Z.; Msezane, A.Z.; Baltenkov, A.S.

    1999-01-01

    The parameter that determines the nondipole correction to the angular distribution is calculated for Ar 1s and 3s subshells in the Hartree-Fock (HF) approximation and taking account of the multielectron correlations, using the random-phase approximation with exchange. In the photoelectron energy range 0–100 eV the parameter, which for s subshells is nonzero at threshold, is found for Ar 3s to be strongly affected by multielectron correlations. Results are also presented for He and Be in the HF approximation. copyright 1999 The American Physical Society

  16. A large multi-cell threshold gas Cerenkov counter

    International Nuclear Information System (INIS)

    Declais, Y.; Aubert, J.J.; Bassompierre, G.; Payre, P.; Thenard, J.M.; Urban, L.

    1980-08-01

    A large multi-cell threshold gas Cerenkov counter consisting of 78 cells has been built for use in a high energy muon scattering experiment at CERN (European Muon Collaboration). It is used with neon, nitrogen or a mixture of those two gases, allowing the pion threshold to be varied between 6 and 20 GeV/c. The sensitive region of the counter has a length of 4.0 m and entrance and exit windows of 1.1 × 2.4 m² and 2.4 × 5.0 m², respectively.
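    The quoted 6–20 GeV/c window follows from the Cherenkov condition βn > 1, which gives a threshold momentum p_th = m/√(n²−1). A small sketch (the refractive indices are approximate book values at atmospheric pressure, assumed here rather than taken from the record):

```python
import math

PION_MASS = 0.13957  # GeV/c^2, charged pion

def threshold_momentum(mass_gev: float, n: float) -> float:
    """Cherenkov threshold momentum p_th = m / sqrt(n^2 - 1):
    below this momentum the particle emits no Cherenkov light."""
    return mass_gev / math.sqrt(n * n - 1.0)

# Assumed refractive indices at atmospheric pressure (approximate):
for gas, n in (("nitrogen", 1.000298), ("neon", 1.000067)):
    print(f"pion threshold in {gas}: {threshold_momentum(PION_MASS, n):.1f} GeV/c")
```

    Mixing the two gases, or adjusting the pressure, interpolates the refractive index between these values and moves the pion threshold across the quoted range.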

  17. Gravitational threshold corrections in non-supersymmetric heterotic strings

    Directory of Open Access Journals (Sweden)

    Ioannis Florakis

    2017-03-01

    We compute one-loop quantum corrections to gravitational couplings in the effective action of four-dimensional heterotic strings where supersymmetry is spontaneously broken by Scherk–Schwarz fluxes. We show that in both heterotic and type II theories of this class, no moduli-dependent corrections to the Planck mass are generated. We explicitly compute the one-loop corrections to the R² coupling and find that, despite the absence of supersymmetry, its contributions may still be organised into representations of subgroups of the modular group, and admit a universal form, determined uniquely by the multiplicities of the ground states of the theory. Moreover, similarly to the case of gauge couplings, the gravitational sector may also become strongly coupled in models which dynamically induce a large volume for the extra dimensions.

  18. Large nondipole correlation effects near atomic photoionization thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Amusia, M.Y.; Felfli, Z.; Msezane, A.Z. [Department of Physics and Center for Theoretical Studies of Physical Systems, Clark Atlanta University, Atlanta, Georgia 30314 (United States); Amusia, M.Y. [The Racah Institute of Physics, Hebrew University, Jerusalem 91904 (Israel); Amusia, M.Y. [A. F. Ioffe Physical-Technical Institute, St. Petersburg 194021 (Russia); Baltenkov, A.S. [Arifov Institute of Electronics, Akademgorodok, 700143 Tashkent, Republic of (Uzbekistan)

    1999-04-01

    The parameter that determines the nondipole correction to the angular distribution is calculated for Ar 1s and 3s subshells in the Hartree-Fock (HF) approximation and taking account of the multielectron correlations, using the random-phase approximation with exchange. In the photoelectron energy range 0–100 eV the parameter, which for s subshells is nonzero at threshold, is found for Ar 3s to be strongly affected by multielectron correlations. Results are also presented for He and Be in the HF approximation. copyright 1999 The American Physical Society

  19. Lowered threshold energy for femtosecond laser induced optical breakdown in a water based eye model by aberration correction with adaptive optics.

    Science.gov (United States)

    Hansen, Anja; Géneaux, Romain; Günther, Axel; Krüger, Alexander; Ripken, Tammo

    2013-06-01

    In femtosecond laser ophthalmic surgery, tissue dissection is achieved by photodisruption based on laser-induced optical breakdown. In order to minimize collateral damage to the eye, laser surgery systems should be optimized towards the lowest possible energy threshold for photodisruption. However, optical aberrations of the eye and the laser system distort the irradiance distribution from an ideal profile, which causes a rise in breakdown threshold energy even if great care is taken to minimize the aberrations of the system during design and alignment. In this study we used a water chamber with an achromatic focusing lens and a scattering sample as an eye model and determined the breakdown threshold in single-pulse plasma transmission loss measurements. Due to aberrations, the precise lower limit for the breakdown threshold irradiance in water is still unknown. Here we show that the threshold energy can be substantially reduced when using adaptive optics to improve the irradiance distribution by spatial beam shaping. We found that for initial aberrations with a root-mean-square wavefront error of only one third of the wavelength, the threshold energy can still be reduced by a factor of three if the aberrations are corrected to the diffraction limit by adaptive optics. The transmitted pulse energy is reduced by 17% at twice the threshold. Furthermore, the gas bubble motions after breakdown for pulse trains at a 5 kHz repetition rate show a more transverse direction in the corrected case, compared to the more spherical distribution without correction. Our results demonstrate how both the applied and the transmitted pulse energy could be reduced during ophthalmic surgery when correcting for aberrations. As a consequence, the risk of retinal damage by transmitted energy and the extent of collateral damage to the focal volume could be minimized accordingly when using adaptive optics in fs-laser surgery.

  20. Non-abelian factorisation for next-to-leading-power threshold logarithms

    NARCIS (Netherlands)

    Bonocore, D.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C.D.

    2016-01-01

    Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections.

  1. Corrections to scaling in random resistor networks and diluted continuous spin models near the percolation threshold.

    Science.gov (United States)

    Janssen, Hans-Karl; Stenull, Olaf

    2004-02-01

    We investigate corrections to scaling induced by irrelevant operators in randomly diluted systems near the percolation threshold. The specific systems that we consider are the random resistor network and a class of continuous spin systems, such as the x-y model. We focus on a family of least irrelevant operators and determine the corrections to scaling that originate from this family. Our field theoretic analysis carefully takes into account that irrelevant operators mix under renormalization. It turns out that long-standing results on corrections to scaling are respectively incorrect (random resistor networks) or incomplete (continuous spin systems).

  2. Detector correction in large container inspection systems

    CERN Document Server

    Kang Ke Jun; Chen Zhi Qiang

    2002-01-01

    In large container inspection systems, the image is constructed by parallel scanning with a one-dimensional detector array with a linac used as the X-ray source. The linear nonuniformity and nonlinearity of multiple detectors and the nonuniform intensity distribution of the X-ray sector beam result in horizontal striations in the scan image. This greatly impairs the image quality, so the image needs to be corrected. The correction parameters are determined experimentally by scaling the detector responses at multiple points with logarithm interpolation of the results. The horizontal striations are eliminated by modifying the original image data with the correction parameters. This method has proven to be effective and applicable in large container inspection systems
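    The correction described above can be sketched as a per-detector lookup built from a few calibration measurements and applied by interpolating in the logarithmic domain, matching the exponential attenuation law. The function name and calibration values below are illustrative assumptions, not from the paper:

```python
import numpy as np

def build_correction(cal_raw, cal_ref):
    """Return a per-detector correction function from calibration pairs.
    cal_raw: responses of this detector at the calibration points.
    cal_ref: the reference (ideal) responses at the same points.
    Interpolation is done on log(response), matching the exponential
    attenuation law I = I0 * exp(-mu * t)."""
    log_raw = np.log(np.asarray(cal_raw, dtype=float))
    log_ref = np.log(np.asarray(cal_ref, dtype=float))
    order = np.argsort(log_raw)  # np.interp needs ascending abscissae
    def correct(raw):
        return np.exp(np.interp(np.log(raw), log_raw[order], log_ref[order]))
    return correct

# Illustrative calibration: this detector reads high with a slight nonlinearity.
reference = [1000.0, 100.0, 10.0]
measured  = [1120.0, 108.0, 10.5]
correct = build_correction(measured, reference)
row = np.array([1120.0, 108.0, 10.5])
print(correct(row))  # -> approximately [1000., 100., 10.]
```

    Applying each detector's own correction to its row of the scan image equalizes the responses and removes the horizontal striations.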

  3. Threshold Learning Dynamics in Social Networks

    Science.gov (United States)

    González-Avella, Juan Carlos; Eguíluz, Victor M.; Marsili, Matteo; Vega-Redondo, Fernando; San Miguel, Maxi

    2011-01-01

    Social learning is defined as the ability of a population to aggregate information, a process which must crucially depend on the mechanisms of social interaction. Consumers choosing which product to buy, or voters deciding which option to take with respect to an important issue, typically confront external signals to the information gathered from their contacts. Economic models typically predict that correct social learning occurs in large populations unless some individuals display unbounded influence. We challenge this conclusion by showing that an intuitive threshold process of individual adjustment does not always lead to such social learning. We find, specifically, that three generic regimes exist, separated by sharp discontinuous transitions, and only in one of them, where the threshold is within a suitable intermediate range, does the population learn the correct information. In the other two, where the threshold is either too high or too low, the system either freezes or enters into persistent flux, respectively. These regimes are generally observed in different social networks (both complex and regular), but limited interaction is found to promote correct learning by enlarging the parameter region where it occurs. PMID:21637714
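    A minimal toy version of such a threshold adjustment process can be sketched on a ring lattice (this is a simplified illustration, not the authors' exact model): each agent starts from a noisy private signal and flips its binary opinion only when the fraction of disagreeing neighbours exceeds a threshold η.

```python
import random

def simulate(eta, n=400, k=4, p_signal=0.6, steps=50, seed=1):
    """Synchronous threshold dynamics on a ring where each agent sees its
    k nearest neighbours.  An agent flips its binary opinion only if the
    fraction of disagreeing neighbours exceeds eta.  Returns the final
    fraction holding the 'correct' opinion (encoded as 1)."""
    rng = random.Random(seed)
    # Private signals: correct (1) with probability p_signal.
    state = [1 if rng.random() < p_signal else 0 for _ in range(n)]
    offsets = [d for d in range(-(k // 2), k // 2 + 1) if d != 0]
    for _ in range(steps):
        nxt = list(state)
        for i in range(n):
            disagree = sum(state[(i + d) % n] != state[i] for d in offsets)
            if disagree / len(offsets) > eta:
                nxt[i] = 1 - state[i]
        state = nxt
    return sum(state) / n

for eta in (0.1, 0.5, 1.0):
    print(f"eta = {eta:.1f}: final fraction correct = {simulate(eta):.2f}")
```

    Sweeping η shows the qualitative regimes described above: a threshold at or above 1 freezes the initial signal-dominated configuration, an intermediate threshold lets local majorities consolidate, and a very low threshold keeps the population in persistent flux.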

  4. Predicting the sparticle spectrum from GUTs via SUSY threshold corrections with SusyTC

    Energy Technology Data Exchange (ETDEWEB)

    Antusch, Stefan [Department of Physics, University of Basel,Klingelbergstr. 82, CH-4056 Basel (Switzerland); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 München (Germany); Sluka, Constantin [Department of Physics, University of Basel,Klingelbergstr. 82, CH-4056 Basel (Switzerland)

    2016-07-21

    Grand Unified Theories (GUTs) can feature predictions for the ratios of quark and lepton Yukawa couplings at high energy, which can be tested against the increasingly precise results for the fermion masses, given at low energies. To perform such tests, the renormalization group (RG) running has to be performed with sufficient accuracy. In supersymmetric (SUSY) theories, the one-loop threshold corrections (TC) are of particular importance and, since they affect the quark-lepton mass relations, link a given GUT flavour model to the sparticle spectrum. To accurately study such predictions, we extend and generalize various formulas in the literature which are needed for a precision analysis of SUSY flavour GUT models. We introduce the new software tool SusyTC, a major extension to the Mathematica package REAP http://dx.doi.org/10.1088/1126-6708/2005/03/024, where these formulas are implemented. SusyTC extends the functionality of REAP by a full inclusion of the (complex) MSSM SUSY sector and a careful calculation of the one-loop SUSY threshold corrections for the full down-type quark, up-type quark and charged lepton Yukawa coupling matrices in the electroweak-unbroken phase. Among other useful features, SusyTC calculates the one-loop corrected pole mass of the charged (or the CP-odd) Higgs boson as well as provides output in SLHA conventions, i.e. the necessary input for external software, e.g. for performing a two-loop Higgs mass calculation. We apply SusyTC to study the predictions for the parameters of the CMSSM (mSUGRA) SUSY scenario from the set of GUT scale Yukawa relations y_e/y_d = −1/2, y_μ/y_s = 6, and y_τ/y_b = −3/2, which has been proposed recently in the context of SUSY GUT flavour models.

  5. Third-order QCD corrections to heavy quark pair production near threshold

    Energy Technology Data Exchange (ETDEWEB)

    Schuller, Kurt

    2008-11-07

    The measurement of the top quark mass is an important task at the future International Linear Collider. The most promising process is top quark pair production in the threshold region. In this region the top quarks behave non-relativistically and a perturbative treatment using effective field theories is possible. Current second-order theoretical predictions in a fixed-order approach show an uncertainty which is bigger than the expected experimental errors. Therefore, an improvement of the cross section calculation is desirable. There are two ways to incorporate higher-order effects: one is to calculate the full next order in the fixed-order approach; another possibility is to resum large logarithms. In this work, the fixed-order calculation has been extended to the third order in perturbation theory for the QCD corrections. The result is a strongly improved scale behavior and a better understanding of heavy quarkonium systems. The Green function result is given in a semi-analytic form. The energy levels and wave functions for heavy quarkonium states have been calculated from the poles of the Green function and are presented for arbitrary quantum number n. The results have been implemented in a Mathematica program which makes the data easily accessible. Once some missing matching coefficients are calculated, and a complete electroweak calculation is available, the results of this work can be used to improve the precision of the top quark mass measurement to an uncertainty of less than 50 MeV. The inclusion of initial-state radiation and beam effects is essential for a realistic observable. In the future, the results obtained could be used for a third-order resummation of large logarithms. A further application is the extraction of the bottom quark mass with sum rules. (orig.)

  6. Non-abelian factorisation for next-to-leading-power threshold logarithms

    International Nuclear Information System (INIS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C.D.

    2016-01-01

    Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections. We define a non-abelian radiative jet function, organising collinear enhancements at NLP, and compute it for quark jets at one loop. We discuss in detail the issue of double counting between soft and collinear regions. Finally, we verify our prescription by reproducing all NLP logarithms in Drell-Yan production up to NNLO, including those associated with double real emission. Our results constitute an important step in the development of a fully general resummation formalism for NLP threshold effects.

  7. Non-abelian factorisation for next-to-leading-power threshold logarithms

    Energy Technology Data Exchange (ETDEWEB)

    Bonocore, D. [Nikhef, Science Park 105, NL-1098 XG Amsterdam (Netherlands); Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University, Sommerfeldstr. 16, 52074 Aachen (Germany); Laenen, E. [Nikhef, Science Park 105, NL-1098 XG Amsterdam (Netherlands); ITFA, University of Amsterdam, Science Park 904, Amsterdam (Netherlands); ITF, Utrecht University, Leuvenlaan 4, Utrecht (Netherlands); Kavli Institute for Theoretical Physics, University of California, Santa Barbara, CA 93106-4030 (United States); Magnea, L. [Dipartimento di Fisica, Università di Torino and INFN, Sezione di Torino, Via P. Giuria 1, I-10125 Torino (Italy); Vernazza, L. [Higgs Centre for Theoretical Physics, School of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); White, C.D. [Centre for Research in String Theory, School of Physics and Astronomy, Queen Mary University of London, 327 Mile End Road, London E1 4NS (United Kingdom)

    2016-12-22

    Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections. We define a non-abelian radiative jet function, organising collinear enhancements at NLP, and compute it for quark jets at one loop. We discuss in detail the issue of double counting between soft and collinear regions. Finally, we verify our prescription by reproducing all NLP logarithms in Drell-Yan production up to NNLO, including those associated with double real emission. Our results constitute an important step in the development of a fully general resummation formalism for NLP threshold effects.

  8. A project of X-ray hardening correction in large ICT

    International Nuclear Information System (INIS)

    Fang Min; Liu Yinong; Ni Jianping

    2005-01-01

    This paper presents a method of polychromatic X-ray beam hardening correction for large Industrial Computed Tomography (ICT), using a standard function to transform the polychromatic projection into a monochromatic projection. Several parameters were defined and optimized to verify the validity of the hardening correction in large ICT. Simulated experiments showed that, without prior knowledge of the composition of the scanned object, the correction method combined with a monochromatic reconstruction algorithm can remove beam hardening artifacts to a large extent. (authors)
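    The "standard function" idea can be sketched as a calibration-based linearization: measure polychromatic projections through known thicknesses of a reference material, fit a function mapping them onto the linear monochromatic scale, and apply it to every measured projection before reconstruction. All numbers below are illustrative assumptions:

```python
import numpy as np

# Calibration: known path lengths t through a reference material and the
# measured polychromatic projections p_poly = -ln(I/I0).  Beam hardening
# makes p_poly grow sub-linearly with t (values below are illustrative).
t_cal      = np.array([0.0, 1.0, 2.0, 4.0, 8.0])      # cm
p_poly_cal = np.array([0.0, 0.48, 0.92, 1.70, 3.00])  # measured projections
mu_mono    = 0.5                                      # 1/cm at reference energy
p_mono_cal = mu_mono * t_cal                          # ideal linear projections

# The "standard function": a polynomial through the calibration points
# mapping polychromatic onto monochromatic projection values.
coeffs = np.polyfit(p_poly_cal, p_mono_cal, deg=len(t_cal) - 1)

def harden_correct(p_poly):
    """Map measured polychromatic projections onto the linear
    monochromatic scale expected by the reconstruction algorithm."""
    return np.polyval(coeffs, p_poly)

corrected = harden_correct(p_poly_cal)
print(np.round(corrected, 3))  # should track mu_mono * t_cal
```

    After this mapping the projections are linear in material thickness, so a standard monochromatic reconstruction no longer produces cupping artifacts.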

  9. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor

    2012-03-01

    We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
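    The screening rule stated in the abstract translates directly into code: threshold the absolute off-diagonal entries of the sample covariance at λ, take connected components of the resulting graph, and solve the graphical lasso independently on each block. A sketch using scipy (function and variable names are my own):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def glasso_blocks(S, lam):
    """Exact covariance thresholding screen for the graphical lasso:
    variables i, j are connected iff |S_ij| > lam (i != j).  The
    connected components of this graph are exactly the blocks of the
    graphical-lasso solution at regularisation lam, so each block can
    be solved as a separate, smaller problem."""
    adj = np.abs(S) > lam
    np.fill_diagonal(adj, False)
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    return [np.flatnonzero(labels == c) for c in range(n_comp)]

# Toy sample covariance: two strongly coupled pairs plus a weak cross term.
S = np.array([[1.0, 0.8, 0.0, 0.1],
              [0.8, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.7],
              [0.1, 0.0, 0.7, 1.0]])
print(glasso_blocks(S, lam=0.2))  # the 0.1 cross term is screened out
```

    Lowering λ below the weak 0.1 cross term merges everything into a single component, recovering the full problem; above it, the problem splits into two independent 2×2 graphical lassos.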

  10. Absence of log correction in entropy of large black holes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, A., E-mail: amit.ghosh@saha.ac.in; Mitra, P., E-mail: parthasarathi.mitra@saha.ac.in

    2014-06-27

    Earlier calculations of black hole entropy in loop quantum gravity led to a dominant term proportional to the area, but there was a correction involving the logarithm of the area, the Chern–Simons level being assumed to be large. We find that the calculations yield an entropy proportional to the area eigenvalue with no such correction if the Chern–Simons level is finite, so that the area eigenvalue can be relatively large.

  11. Radiative corrections in neutrino-deuterium disintegration

    International Nuclear Information System (INIS)

    Kurylov, A.; Ramsey-Musolf, M.J.; Vogel, P.

    2002-01-01

    The radiative corrections of order α for the charged- and neutral-current neutrino-deuterium disintegration for energies relevant to the SNO experiment are evaluated. Particular attention is paid to the issue of the bremsstrahlung detection threshold. It is shown that the radiative corrections to the total cross section for the charged current reaction are independent of that threshold, as they must be for consistency, and amount to a slowly decreasing function of the neutrino energy E_ν, varying from about 4% at low energies to 3% at the end of the ⁸B spectrum. The differential cross section corrections, on the other hand, do depend on the bremsstrahlung detection threshold. Various choices of the threshold are discussed. It is shown that for a realistic choice of the threshold and for the actual electron energy threshold of the SNO detector, the deduced ⁸B ν_e flux should be decreased by about 2%. The radiative corrections to the neutral-current reaction are also evaluated.

  12. Large-aperture, high-damage-threshold optics for beamlet

    International Nuclear Information System (INIS)

    Campbell, J.H.; Atherton, L.J.; DeYoreo, J.J.; Kozlowski, M.R.; Maney, R.T.; Montesanti, R.C.; Sheehan, L.M.; Barker, C.E.

    1995-01-01

    Beamlet serves as a test bed for the proposed NIF laser design and components. Therefore, its optics are similar in size and quality to those proposed for the NIF. In general, the optics in the main laser cavity and transport section of Beamlet are larger and have higher damage thresholds than the optics manufactured for any of our previous laser systems. In addition, the quality of the Beamlet optical materials is higher, leading to better wavefront quality, higher optical transmission, and lower-intensity modulation of the output laser beam than, for example, that typically achieved on Nova. In this article, we discuss the properties and characteristics of the large-aperture optics used on Beamlet

  13. Damage threshold from large retinal spot size repetitive-pulse laser exposures.

    Science.gov (United States)

    Lund, Brian J; Lund, David J; Edsall, Peter R

    2014-10-01

    The retinal damage thresholds for large spot size, multiple-pulse exposures to a Q-switched, frequency doubled Nd:YAG laser (532 nm wavelength, 7 ns pulses) have been measured for 100 μm and 500 μm retinal irradiance diameters. The ED50, expressed as energy per pulse, varies only weakly with the number of pulses, n, for these extended spot sizes. The previously reported threshold for a multiple-pulse exposure for a 900 μm retinal spot size also shows the same weak dependence on the number of pulses. The multiple-pulse ED50 for an extended spot-size exposure does not follow the n dependence exhibited by small spot size exposures produced by a collimated beam. Curves derived by using probability-summation models provide a better fit to the data.

  14. A rule based method for context sensitive threshold segmentation in SPECT using simulation

    International Nuclear Information System (INIS)

    Fleming, John S.; Alaamer, Abdulaziz S.

    1998-01-01

    Robust techniques for automatic or semi-automatic segmentation of objects in single photon emission computed tomography (SPECT) are still the subject of development. This paper describes a threshold based method which uses empirical rules derived from analysis of computer simulated images of a large number of objects. The use of simulation allowed the factors affecting the threshold which correctly segmented objects to be investigated systematically. Rules could then be derived from these data to define the threshold in any particular context. The technique operated iteratively and calculated local context sensitive thresholds along radial profiles from the centre of gravity of the object. It was evaluated in a further series of simulated objects and in human studies, and compared to the use of a global fixed threshold. The method was capable of improving accuracy of segmentation and volume assessment compared to the global threshold technique. The improvements were greater for small volumes, shapes with large surface area to volume ratio, variable surrounding activity and non-uniform distributions. The method was applied successfully to simulated objects and human studies and is considered to be a significant advance on global fixed threshold techniques. (author)
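The paper's rules are empirical, derived from simulation, so they cannot be reproduced here; but the basic step of a context-sensitive threshold along one radial profile can be sketched. The fraction `frac` and the background handling below are hypothetical placeholders for the simulation-derived rules:

```python
def profile_edge(profile, frac=0.5, background=0.0):
    """Locate the object edge along one radial profile taken from the
    object's centre of gravity: threshold at the local background plus
    a context-sensitive fraction of the peak-to-background contrast.
    In the paper the fraction is chosen by empirical rules derived
    from simulations; here it is a fixed, illustrative parameter."""
    peak = max(profile)
    t = background + frac * (peak - background)
    for r, value in enumerate(profile):
        if value < t:
            return r   # first radius falling below the local threshold
    return len(profile)

# Counts along a radial profile of a simulated hot object
edge = profile_edge([100, 90, 80, 20, 10], frac=0.5, background=10.0)
print(edge)  # → 3
```

Iterating this over all radial profiles, and updating the background estimate between passes, gives the kind of iterative, locally adaptive segmentation the abstract describes.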

  15. From a Proven Correct Microkernel to Trustworthy Large Systems

    Science.gov (United States)

    Andronick, June

    The seL4 microkernel was the world's first general-purpose operating system kernel with a formal, machine-checked proof of correctness. The next big step in the challenge of building truly trustworthy systems is to provide a framework for developing secure systems on top of seL4. This paper first gives an overview of seL4's correctness proof, together with its main implications and assumptions, and then describes our approach to provide formal security guarantees for large, complex systems.

  16. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    Science.gov (United States)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross-sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross-section suffers from the unphysical behavior of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert-Sommerfeld factor to compensate for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross-section similar to that obtained from the more time-consuming Coulomb-Born approximation. We compare this new approximation with other, often employed correction procedures. We also examine some further modifications to our Born-Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb-Born approximation for singly charged ions, which more accurately approximates convergent close-coupling calculations.
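The correction amounts to multiplying the plane-wave Born cross-section by a ratio of Coulomb normalization factors. A sketch in atomic units, using one common form of the Elwert factor (the precise form and the Born cross-section used in the paper may differ; the `born_sigma` below is a toy stand-in chosen only to show the threshold behavior):

```python
import math

def elwert_factor(Z, k_i, k_f):
    """One common form of the Elwert-Sommerfeld factor, with
    Sommerfeld parameters eta = Z/k (atomic units) for the initial
    and final electron momenta k_i, k_f. Multiplying a plane-wave
    Born cross-section by this factor mimics the use of Coulomb
    waves and keeps the corrected cross-section finite at threshold."""
    eta_i, eta_f = Z / k_i, Z / k_f
    return ((eta_f / eta_i)
            * -math.expm1(-2 * math.pi * eta_i)
            / -math.expm1(-2 * math.pi * eta_f))

def born_sigma(k_i, k_f):
    """Toy Born cross-section vanishing like k_f near threshold."""
    return k_f / k_i

# Approaching threshold (k_f -> 0), the corrected value stays finite
k_i = 1.0
for k_f in (0.1, 0.01, 0.001):
    sigma = born_sigma(k_i, k_f) * elwert_factor(1.0, k_i, k_f)
```

The factor diverges like 1/k_f exactly where the Born cross-section vanishes like k_f, so their product tends to a finite threshold value, which is the behavior the Coulomb-Born approximation gives for ions.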

  17. Correction factors for clinical dosemeters used in large field dosimetry

    International Nuclear Information System (INIS)

    Campos, L.L.; Caldas, L.

    1989-08-01

    The determination of the absorbed dose in high-energy photon and electron beams by the user is carried out as a two-step procedure. First the ionization chamber is calibrated at a reference quality at a standards laboratory, and then the chamber is used to determine the absorbed dose in the user's beam. A number of conversion and correction factors have to be applied. Different sets of factors are needed depending on the physical quantity the calibration refers to, the calibration geometry and the chamber design. Another correction factor to be introduced for absorbed dose determination in large fields accounts for radiation effects on the stem, cable and sometimes connectors. A simple method, to be followed by hospital physicists during large-field dosimetry, was developed to evaluate the radiation effects on cables and connectors and to determine correction factors for each system or geometry. (author)

  18. Potts glass reflection of the decoding threshold for qudit quantum error correcting codes

    Science.gov (United States)

    Jiang, Yi; Kovalev, Alexey A.; Pryadko, Leonid P.

    We map the maximum likelihood decoding threshold for qudit quantum error correcting codes to the multicritical point in generalized Potts gauge glass models, extending the map constructed previously for qubit codes. An n-qudit quantum LDPC code, where a qudit can be involved in up to m stabilizer generators, corresponds to a ℤ_d Potts model with n interaction terms which can couple up to m spins each. We analyze general properties of the phase diagram of the constructed model, give several bounds on the location of the transitions, bounds on the energy density of extended defects (non-local analogs of domain walls), and discuss the correlation functions which can be used to distinguish different phases in the original and the dual models. This research was supported in part by the Grants: NSF PHY-1415600 (AAK), NSF PHY-1416578 (LPP), and ARO W911NF-14-1-0272 (LPP).

  19. Comparison of memory thresholds for planar qudit geometries

    Science.gov (United States)

    Marks, Jacob; Jochym-O'Connor, Tomas; Gheorghiu, Vlad

    2017-11-01

    We introduce and analyze a new type of decoding algorithm called general color clustering, based on renormalization group methods, to be used in qudit color codes. The performance of this decoder is analyzed under a generalized bit-flip error model, and is used to obtain the first memory threshold estimates for qudit 6-6-6 color codes. The proposed decoder is compared with similar decoding schemes for qudit surface codes as well as the current leading qubit decoders for both sets of codes. We find that, as with surface codes, clustering performs sub-optimally for qubit color codes, giving a threshold of 5.6% compared to the 8.0% obtained through surface projection decoding methods. However, the threshold rate increases by up to 112% for large qudit dimensions, plateauing around 11.9%. All the analysis is performed using QTop, a new open-source software for simulating and visualizing topological quantum error correcting codes.

  20. Ultrahigh Error Threshold for Surface Codes with Biased Noise

    Science.gov (United States)

    Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.

    2018-02-01

    We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.

  1. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  2. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    Science.gov (United States)

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to
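The regression approach in this paper can be sketched with hypothetical data; in practice the NHANES median better-ear thresholds would be substituted for the made-up values below:

```python
import numpy as np

# Hypothetical median better-ear thresholds (dB HL) at one audiometric
# frequency, by age; the NHANES medians would be used in practice.
ages = np.array([20, 30, 40, 50, 60, 70, 75], dtype=float)
thresholds = np.array([2.0, 4.0, 8.0, 14.0, 22.0, 32.0, 38.0])

# Fit a simple polynomial to the medians, as in the paper's approach
fit = np.poly1d(np.polyfit(ages, thresholds, deg=2))

def age_correction(age, ref_age=20):
    """Age-correction value in dB: fitted median threshold at `age`
    minus the fitted median threshold at the reference age."""
    return fit(age) - fit(ref_age)
```

The correction applied to a worker's audiogram would then be the difference between the values at current age and at baseline age, now defined out to age 75 rather than stopping at 60.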

  3. Hard decoding algorithm for optimizing thresholds under general Markovian noise

    Science.gov (United States)

    Chamberland, Christopher; Wallman, Joel; Beale, Stefanie; Laflamme, Raymond

    2017-04-01

    Quantum error correction is instrumental in protecting quantum systems from noise in quantum computing and communication settings. Pauli channels can be efficiently simulated and threshold values for Pauli error rates under a variety of error-correcting codes have been obtained. However, realistic quantum systems can undergo noise processes that differ significantly from Pauli noise. In this paper, we present an efficient hard decoding algorithm for optimizing thresholds and lowering failure rates of an error-correcting code under general completely positive and trace-preserving (i.e., Markovian) noise. We use our hard decoding algorithm to study the performance of several error-correcting codes under various non-Pauli noise models by computing threshold values and failure rates for these codes. We compare the performance of our hard decoding algorithm to decoders optimized for depolarizing noise and show improvements in thresholds and reductions in failure rates by several orders of magnitude. Our hard decoding algorithm can also be adapted to take advantage of a code's non-Pauli transversal gates to further suppress noise. For example, we show that using the transversal gates of the 5-qubit code allows arbitrary rotations around certain axes to be perfectly corrected. Furthermore, we show that Pauli twirling can increase or decrease the threshold depending upon the code properties. Lastly, we show that even if the physical noise model differs slightly from the hypothesized noise model used to determine an optimized decoder, failure rates can still be reduced by applying our hard decoding algorithm.

  4. Non-perturbative treatment of relativistic quantum corrections in large Z atoms

    International Nuclear Information System (INIS)

    Dietz, K.; Weymans, G.

    1983-09-01

    Renormalised g-Hartree-Dirac equations incorporating Dirac sea contributions are derived. Their implications for the non-perturbative, selfconsistent calculation of quantum corrections in large Z atoms are discussed. (orig.)

  5. Kuramoto model with uniformly spaced frequencies: Finite-N asymptotics of the locking threshold.

    Science.gov (United States)

    Ottino-Löffler, Bertrand; Strogatz, Steven H

    2016-06-01

    We study phase locking in the Kuramoto model of coupled oscillators in the special case where the number of oscillators, N, is large but finite, and the oscillators' natural frequencies are evenly spaced on a given interval. In this case, stable phase-locked solutions are known to exist if and only if the frequency interval is narrower than a certain critical width, called the locking threshold. For infinite N, the exact value of the locking threshold was calculated 30 years ago; however, the leading corrections to it for finite N have remained unsolved analytically. Here we derive an asymptotic formula for the locking threshold when N≫1. The leading correction to the infinite-N result scales like either N^{-3/2} or N^{-1}, depending on whether the frequencies are evenly spaced according to a midpoint rule or an end-point rule. These scaling laws agree with numerical results obtained by Pazó [D. Pazó, Phys. Rev. E 72, 046211 (2005)]. Moreover, our analysis yields the exact prefactors in the scaling laws, which also match the numerics.
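The locking question is easy to probe numerically. A brute-force sketch, using midpoint-rule evenly spaced frequencies as in the paper (the Euler integrator, run time, and tolerance are illustrative choices, not the authors'):

```python
import math

def kuramoto_locks(width, N=11, K=1.0, T=200.0, dt=0.05):
    """Euler-integrate the Kuramoto model with N natural frequencies
    evenly spaced (midpoint rule) on [-width/2, width/2] and coupling
    K/N, then report whether the population has phase-locked, i.e.
    whether all instantaneous frequencies agree at the end of the run."""
    omega = [-width / 2 + width * (i + 0.5) / N for i in range(N)]
    theta = [0.0] * N

    def rates(th):
        return [omega[i]
                + K * sum(math.sin(th[j] - th[i]) for j in range(N)) / N
                for i in range(N)]

    for _ in range(int(T / dt)):
        dth = rates(theta)
        theta = [t + dt * d for t, d in zip(theta, dth)]

    r = rates(theta)
    return max(r) - min(r) < 1e-3
```

For a uniform frequency interval the infinite-N locking threshold is width = Kπ/2 ≈ 1.57, so a run well below that width should lock and a run well above it should drift; the finite-N corrections studied in the paper shift this boundary by O(N^{-3/2}) for the midpoint rule.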

  6. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced

  7. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs

    Science.gov (United States)

    Liu, Kuan-Yu; Herbert, John M.

    2017-10-01

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.

  8. Understanding the many-body expansion for large systems. III. Critical role of four-body terms, counterpoise corrections, and cutoffs.

    Science.gov (United States)

    Liu, Kuan-Yu; Herbert, John M

    2017-10-28

    Papers I and II in this series [R. M. Richard et al., J. Chem. Phys. 141, 014108 (2014); K. U. Lao et al., ibid. 144, 164105 (2016)] have attempted to shed light on precision and accuracy issues affecting the many-body expansion (MBE), which only manifest in larger systems and thus have received scant attention in the literature. Many-body counterpoise (CP) corrections are shown to accelerate convergence of the MBE, which otherwise suffers from a mismatch between how basis-set superposition error affects subsystem versus supersystem calculations. In water clusters ranging in size up to (H2O)37, four-body terms prove necessary to achieve accurate results for both total interaction energies and relative isomer energies, but the sheer number of tetramers makes the use of cutoff schemes essential. To predict relative energies of (H2O)20 isomers, two approximations based on a lower level of theory are introduced and an ONIOM-type procedure is found to be very well converged with respect to the appropriate MBE benchmark, namely, a CP-corrected supersystem calculation at the same level of theory. Results using an energy-based cutoff scheme suggest that if reasonable approximations to the subsystem energies are available (based on classical multipoles, say), then the number of requisite subsystem calculations can be reduced even more dramatically than when distance-based thresholds are employed. The end result is several accurate four-body methods that do not require charge embedding, and which are stable in large basis sets such as aug-cc-pVTZ that have sometimes proven problematic for fragment-based quantum chemistry methods. Even with aggressive thresholding, however, the four-body approach at the self-consistent field level still requires roughly ten times more processors to outmatch the performance of the corresponding supersystem calculation, in test cases involving 1500-1800 basis functions.
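The role of a distance-based cutoff in the MBE can be illustrated with a toy two-body expansion; the pair "energy" below is a cheap placeholder for a real electronic-structure call, and the geometry is invented:

```python
import math
from itertools import combinations

def mbe2(points, pair_energy, cutoff=float("inf")):
    """Toy two-body many-body expansion over fragment centroids:
    accumulate pairwise corrections, skipping pairs whose centroids
    are farther apart than `cutoff`. Returns the energy and the
    number of pair calculations actually performed."""
    E, n_pairs = 0.0, 0
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        d = math.dist(p, q)
        if d <= cutoff:
            E += pair_energy(d)   # stand-in for a subsystem calculation
            n_pairs += 1
    return E, n_pairs

# Four fragments on a line; a rapidly decaying pair interaction
points = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
decay = lambda d: -1.0 / d**6

E_full, n_full = mbe2(points, decay)             # all 6 pairs
E_cut, n_cut = mbe2(points, decay, cutoff=1.5)   # nearest neighbours only
```

Here the cutoff halves the number of subsystem calculations while changing the two-body energy by under 2%; at the four-body level the combinatorial savings are far larger, which is the paper's point about tetramers.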

  9. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    OpenAIRE

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved....

  10. Large Covariance Estimation by Thresholding Principal Orthogonal Complements.

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2013-09-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented.
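The POET construction itself is compact; a sketch with numpy, treating the number of factors K and the threshold tau as given (in the paper K is chosen data-adaptively and the thresholding may be entry-adaptive rather than the simple hard rule used here):

```python
import numpy as np

def poet(S, K, tau):
    """POET covariance estimator (sketch): keep the K leading
    principal components of the sample covariance S and hard-threshold
    the off-diagonal entries of the residual (the principal orthogonal
    complement) at tau; the residual's diagonal is kept as-is."""
    vals, vecs = np.linalg.eigh(S)             # eigenvalues ascending
    top = np.argsort(vals)[::-1][:K]           # K leading components
    factor = (vecs[:, top] * vals[top]) @ vecs[:, top].T
    R = S - factor                             # orthogonal complement
    R_t = np.where(np.abs(R) >= tau, R, 0.0)   # hard thresholding
    np.fill_diagonal(R_t, np.diag(R))
    return factor + R_t
```

With tau = 0 the estimator returns the sample covariance itself, and with K = 0 it reduces to a pure thresholding estimator, reflecting the special cases listed in the abstract.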

  11. Generalized threshold resummation in inclusive DIS and semi-inclusive electron-positron annihilation

    International Nuclear Information System (INIS)

    Almasy, A.A.; Lo Presti, N.A.; Vogt, A.

    2015-11-01

    We present analytic all-order results for the highest three threshold logarithms of the space-like and time-like off-diagonal splitting functions and the corresponding coefficient functions for inclusive deep-inelastic scattering (DIS) and semi-inclusive e + e - annihilation. All these results, obtained through an order-by-order analysis of the structure of the corresponding unfactorized quantities in dimensional regularization, can be expressed in terms of the Bernoulli functions introduced by one of us and leading-logarithmic soft-gluon exponentials. The resulting numerical corrections are small for the splitting functions but large for the coefficient functions. In both cases more terms in the threshold expansion need to be determined in order to arrive at quantitatively reliable results.

  12. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses

    Directory of Open Access Journals (Sweden)

    Kazunori Hirasawa

    2017-10-01

    AIM: To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CLs) in healthy young participants. METHODS: Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocols under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). RESULTS: The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB. CONCLUSION: Despite the induced mydriasis and the optical design of the multifocal lens used in this study, our results indicate that, when a dome-shaped visual field test is performed on eyes with large pupils wearing refractive multifocal CLs, distance correction without additional near correction is recommended.

  13. Large Covariance Estimation by Thresholding Principal Orthogonal Complements

    Science.gov (United States)

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    This paper deals with the estimation of a high-dimensional covariance with a conditional sparsity structure and fast-diverging eigenvalues. By assuming sparse error covariance matrix in an approximate factor model, we allow for the presence of some cross-sectional correlation even after taking out common but unobservable factors. We introduce the Principal Orthogonal complEment Thresholding (POET) method to explore such an approximate factor structure with sparsity. The POET estimator includes the sample covariance matrix, the factor-based covariance matrix (Fan, Fan, and Lv, 2008), the thresholding estimator (Bickel and Levina, 2008) and the adaptive thresholding estimator (Cai and Liu, 2011) as specific examples. We provide mathematical insights when the factor analysis is approximately the same as the principal component analysis for high-dimensional data. The rates of convergence of the sparse residual covariance matrix and the conditional sparse covariance matrix are studied under various norms. It is shown that the impact of estimating the unknown factors vanishes as the dimensionality increases. The uniform rates of convergence for the unobserved factors and their factor loadings are derived. The asymptotic results are also verified by extensive simulation studies. Finally, a real data application on portfolio allocation is presented. PMID:24348088

  14. Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image

    Science.gov (United States)

    Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.

    2017-12-01

    Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For detection of defects, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and large in size, segmentation of a gradient image based on percentile thresholding can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above some specific values of gray level in the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
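A minimal numpy sketch of the idea: threshold the gradient magnitude at a percentile that adapts to how many strong-gradient pixels the image contains. The base percentile, gray-level cut and adaptation scale below are hypothetical parameters for illustration, not the values used in the paper.

```python
import numpy as np

def adaptive_percentile_threshold(img, base_pct=99.0, gray_cut=50.0, scale=0.5):
    """Segment candidate defects via a percentile threshold on the gradient
    magnitude; the percentile is lowered when many strong-gradient pixels
    are present so that large defects are not under-segmented."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)                     # gradient magnitude image
    frac_strong = np.mean(grad > gray_cut)      # fraction of strong-gradient pixels
    pct = base_pct - scale * 100.0 * frac_strong
    pct = np.clip(pct, 90.0, 99.9)              # keep the percentile sane
    thresh = np.percentile(grad, pct)
    return grad > thresh                        # boolean defect mask
```

On a defect-free strip frac_strong is near zero and the threshold stays high; a large defect raises frac_strong, lowers the percentile, and so keeps the whole defective region above threshold.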

  15. NEUTRON SPECTRUM MEASUREMENTS USING MULTIPLE THRESHOLD DETECTORS

    Energy Technology Data Exchange (ETDEWEB)

    Gerken, William W.; Duffey, Dick

    1963-11-15

    From American Nuclear Society Meeting, New York, Nov. 1963. The use of threshold detectors, which simultaneously undergo reactions with thermal neutrons and two or more fast neutron threshold reactions, was applied to measurements of the neutron spectrum in a reactor. A number of different materials were irradiated to determine the most practical ones for use as multiple threshold detectors. These results, as well as counting techniques and corrections, are presented. Some materials used include aluminum, alloys of Al-Ni, aluminum-nickel oxides, and magnesium orthophosphates. (auth)

  16. Quantum Corrections to the 'Atomistic' MOSFET Simulations

    Science.gov (United States)

    Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.

    2000-01-01

    We have introduced in a simple and efficient manner quantum mechanical corrections in our 3D 'atomistic' MOSFET simulator using the density gradient formalism. We have studied in comparison with classical simulations the effect of the quantum mechanical corrections on the simulation of random dopant induced threshold voltage fluctuations, the effect of the single charge trapping on interface states and the effect of the oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not affect significantly the amplitude of the random telegraph noise associated with single carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.

  17. Modeling jointly low, moderate, and heavy rainfall intensities without a threshold selection

    KAUST Repository

    Naveau, Philippe

    2016-04-09

    In statistics, extreme events are often defined as excesses above a given large threshold. This definition allows hydrologists and flood planners to apply Extreme-Value Theory (EVT) to their time series of interest. Even in the stationary univariate context, this approach has at least two main drawbacks. First, working with excesses implies that a lot of observations (those below the chosen threshold) are completely disregarded. The range of precipitation is artificially chopped into two pieces, namely large intensities and the rest, which necessarily imposes different statistical models for each piece. Second, this strategy raises a nontrivial and very practical difficulty: how to choose the optimal threshold which correctly discriminates between low and heavy rainfall intensities. To address these issues, we propose a statistical model in which EVT results apply not only to heavy, but also to low precipitation amounts (zeros excluded). Our model is in compliance with EVT on both ends of the spectrum and allows a smooth transition between the two tails, while keeping a low number of parameters. In terms of inference, we have implemented and tested two classical methods of estimation: likelihood maximization and probability weighted moments. Last but not least, there is no need to choose a threshold to define low and high excesses. The performance and flexibility of this approach are illustrated on simulated data and hourly precipitation recorded in Lyon, France.
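A sketch of the kind of threshold-free construction the abstract describes, assuming the simplest "extended generalized Pareto" form from that literature: a GPD upper tail combined with a power-law modification of the lower tail via F(x) = [H_xi(x/sigma)]**kappa, where H_xi is the unit generalized Pareto cdf. The parameterisation is an assumption here, not quoted from the paper.

```python
import numpy as np

def egp_cdf(x, kappa, sigma, xi):
    """CDF of a threshold-free rainfall-intensity model: GPD-compliant
    upper tail, power-law (kappa-controlled) lower tail, smooth in between.
    x >= 0; sigma > 0 is a scale; xi the GPD shape; kappa > 0 the lower-tail
    exponent. (Illustrative parameterisation.)"""
    z = np.asarray(x, dtype=float) / sigma
    if abs(xi) < 1e-12:
        H = 1.0 - np.exp(-z)                              # xi -> 0 limit
    else:
        H = 1.0 - np.maximum(1.0 + xi * z, 0.0) ** (-1.0 / xi)
    return H ** kappa
```

All positive intensities contribute to the fit through this single cdf, so no low/high threshold ever has to be selected.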

  18. Theory of threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2002-01-01

    Theory of Threshold Phenomena in Quantum Scattering is developed in terms of the Reduced Scattering Matrix. Relationships of different types of threshold anomalies both to nuclear reaction mechanisms and to nuclear reaction models are established. The magnitude of a threshold effect is related to the spectroscopic factor of the zero-energy neutron state. The Theory of Threshold Phenomena, based on the Reduced Scattering Matrix, establishes relationships between different types of threshold effects and nuclear reaction mechanisms: the cusp and non-resonant potential scattering, the s-wave threshold anomaly and compound nucleus resonant scattering, the p-wave anomaly and quasi-resonant scattering. A threshold anomaly related to resonant or quasi-resonant scattering is enhanced provided the neutron threshold state has a large spectroscopic amplitude. The Theory contains, as limit cases, Cusp Theories and also results of different nuclear reaction models such as Charge Exchange, Weak Coupling, Bohr and Hauser-Feshbach models. (author)

  19. Large radiative corrections to the effective potential and the gauge hierarchy problem

    International Nuclear Information System (INIS)

    Sachrajda, C.T.C.

    1982-01-01

    We study the higher order corrections to the effective potential in a simple toy model and in the SU(5) grand unified theory, with a view to seeing what their effects are on the stability equations, and hence on the gauge hierarchy problem for these theories. These corrections contain powers of log(v²/h²), where v and h are the large and small vacuum expectation values respectively, and hence cannot a priori be neglected. Nevertheless, after summing these large logarithms we find that the stability equations always contain two equations for v (i.e. these equations are independent of h) and hence can only be satisfied by a special (and hence unnatural) choice of parameters. This, we claim, is the precise statement of the gauge hierarchy problem. (orig.)

  20. Threshold Law For Positron Impact Ionization of Atoms

    International Nuclear Information System (INIS)

    Ihra, W.; Mota-Furtado, F.; OMahony, P.F.; Macek, J.H.; Macek, J.H.

    1997-01-01

    We demonstrate that recent experiments for positron impact ionization of He and H₂ can be interpreted by extending Wannier theory to higher energies. Anharmonicities in the expansion of the three-particle potential around the Wannier configuration give rise to corrections in the threshold behavior of the breakup cross section. These corrections are taken into account perturbatively by employing the hidden crossing theory. The resulting threshold law is σ(E) ∝ E^2.640 exp[−0.73√E]. The actual energy range for which the Wannier law is valid is found to be smaller for positron impact ionization than for electron impact ionization. copyright 1997 The American Physical Society
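The modified threshold law quoted in the abstract is straightforward to evaluate; a small helper makes the energy dependence explicit (c is an arbitrary normalization constant, since only the functional form is given):

```python
import math

def positron_ionization_sigma(E, c=1.0):
    """Modified Wannier threshold law from the abstract:
    sigma(E) = c * E**2.640 * exp(-0.73 * sqrt(E)),
    with E the excess energy above threshold (same units as in the
    exponential's fit) and c an arbitrary normalization."""
    return c * E ** 2.640 * math.exp(-0.73 * math.sqrt(E))
```

The exponential factor is what distinguishes this law from the pure power-law Wannier behavior: it suppresses the cross section relative to E^2.640 as the excess energy grows, which is why the strict Wannier regime is narrower for positrons.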

  1. Building rainfall thresholds for large-scales landslides by extracting occurrence time of landslides from seismic records

    Science.gov (United States)

    Yen, Hsin-Yi; Lin, Guan-Wei

    2017-04-01

    Understanding the rainfall conditions which trigger mass movement on hillslopes is the key to forecasting rainfall-induced slope hazards, and the exact time of landslide occurrence is one of the basic inputs for rainfall statistics. In this study, we focused on large-scale landslides (LSLs) with disturbed areas larger than 10 ha and conducted a series of analyses including the recognition of landslide-induced ground motions and the evaluation of different forms of rainfall thresholds. More than 10 heavy typhoons during the period 2005-2014 in Taiwan induced hundreds of LSLs and provided the opportunity to characterize the rainfall conditions which trigger LSLs. A total of 101 landslide-induced seismic signals were identified from the records of the Taiwan seismic network. These signals revealed the occurrence times of the landslides, allowing the triggering rainfall conditions to be assessed. Rainfall analyses showed that LSLs occurred when cumulative rainfall exceeded 500 mm. The results of rainfall-threshold analyses revealed that it is difficult to distinguish LSLs from small-scale landslides (SSLs) by the I-D and R-D methods, but the I-R method can achieve the discrimination. In addition, an enhanced three-factor threshold considering deep water content was proposed as the rainfall threshold for LSLs.

  2. Attenuation correction for the large non-human primate brain imaging using microPET

    International Nuclear Information System (INIS)

    Naidoo-Variawa, S; Lehnert, W; Kassiou, M; Banati, R; Meikle, S R

    2010-01-01

    Assessment of the biodistribution and pharmacokinetics of radiopharmaceuticals in vivo is often performed on animal models of human disease prior to their use in humans. The baboon brain is physiologically and neuro-anatomically similar to the human brain and is therefore a suitable model for evaluating novel CNS radioligands. We previously demonstrated the feasibility of performing baboon brain imaging on a dedicated small animal PET scanner provided that the data are accurately corrected for degrading physical effects such as photon attenuation in the body. In this study, we investigated factors affecting the accuracy and reliability of alternative attenuation correction strategies when imaging the brain of a large non-human primate (papio hamadryas) using the microPET Focus 220 animal scanner. For measured attenuation correction, the best bias versus noise performance was achieved using a 57Co transmission point source with a 4% energy window. The optimal energy window for a 68Ge transmission source operating in singles acquisition mode was 20%, independent of the source strength, providing bias-noise performance almost as good as for 57Co. For both transmission sources, doubling the acquisition time had minimal impact on the bias-noise trade-off for corrected emission images, despite observable improvements in reconstructed attenuation values. In a [18F]FDG brain scan of a female baboon, both measured attenuation correction strategies achieved good results and similar SNR, while segmented attenuation correction (based on uncorrected emission images) resulted in appreciable regional bias in deep grey matter structures and the skull. We conclude that measured attenuation correction using a single pass 57Co (4% energy window) or 68Ge (20% window) transmission scan achieves an excellent trade-off between bias and propagation of noise when imaging the large non-human primate brain with a microPET scanner.

  3. Reconciling threshold and subthreshold expansions for pion-nucleon scattering

    Science.gov (United States)

    Siemens, D.; Ruiz de Elvira, J.; Epelbaum, E.; Hoferichter, M.; Krebs, H.; Kubis, B.; Meißner, U.-G.

    2017-07-01

    Heavy-baryon chiral perturbation theory (ChPT) at one loop fails in relating the pion-nucleon amplitude in the physical region and for subthreshold kinematics due to loop effects enhanced by large low-energy constants. Studying the chiral convergence of threshold and subthreshold parameters up to fourth order in the small-scale expansion, we address the question to what extent this tension can be mitigated by including the Δ(1232) as an explicit degree of freedom and/or using a covariant formulation of baryon ChPT. We find that the inclusion of the Δ indeed reduces the low-energy constants to more natural values and thereby improves consistency between threshold and subthreshold kinematics. In addition, even in the Δ-less theory the resummation of 1/mN corrections in the covariant scheme improves the results markedly over the heavy-baryon formulation, in line with previous observations in the single-baryon sector of ChPT that so far have evaded a profound theoretical explanation.

  4. Reconciling threshold and subthreshold expansions for pion–nucleon scattering

    Directory of Open Access Journals (Sweden)

    D. Siemens

    2017-07-01

    Full Text Available Heavy-baryon chiral perturbation theory (ChPT) at one loop fails in relating the pion–nucleon amplitude in the physical region and for subthreshold kinematics due to loop effects enhanced by large low-energy constants. Studying the chiral convergence of threshold and subthreshold parameters up to fourth order in the small-scale expansion, we address the question to what extent this tension can be mitigated by including the Δ(1232) as an explicit degree of freedom and/or using a covariant formulation of baryon ChPT. We find that the inclusion of the Δ indeed reduces the low-energy constants to more natural values and thereby improves consistency between threshold and subthreshold kinematics. In addition, even in the Δ-less theory the resummation of 1/mN corrections in the covariant scheme improves the results markedly over the heavy-baryon formulation, in line with previous observations in the single-baryon sector of ChPT that so far have evaded a profound theoretical explanation.

  5. Heavy-ion fusion: Channel-coupling effects, the barrier penetration model, and the threshold anomaly for heavy-ion potentials

    International Nuclear Information System (INIS)

    Satchler, G.R.; Nagarajan, M.A.; Lilley, J.S.; Thompson, I.J.

    1987-01-01

    We study the formal structure of the influence of channel coupling on near- and sub-barrier fusion. The reduction to a one-channel description is studied, with emphasis on the channel-coupling effects being manifest primarily as an energy dependence (the "threshold anomaly") of the real nuclear potential. The relation to the barrier penetration model is examined critically. The results of large-scale coupled-channel calculations are used as "data" to illustrate the discussion. Particular emphasis is placed on the importance of reproducing correctly the partial-wave (or "spin") distributions. The simple barrier penetration model is found to be adequate to exhibit the strong enhancements due to channel couplings when the threshold anomaly is taken into account, although there may be important corrections due to the long-ranged peripheral absorption, especially from Coulomb excitation. copyright 1987 Academic Press, Inc

  6. Combined threshold and transverse momentum resummation for inclusive observables

    International Nuclear Information System (INIS)

    Muselli, Claudio; Forte, Stefano; Ridolfi, Giovanni

    2017-01-01

    We present a combined resummation for the transverse momentum distribution of a colorless final state in perturbative QCD, expressed as a function of transverse momentum p_T and the scaling variable x. Its expression satisfies three requirements: it reduces to standard transverse momentum resummation to any desired logarithmic order in the limit p_T → 0 for fixed x, up to power suppressed corrections in p_T; it reduces to threshold resummation to any desired logarithmic order in the limit x → 1 for fixed p_T, up to power suppressed corrections in 1−x; upon integration over transverse momentum it reproduces the resummation of the total cross section at any given logarithmic order in the threshold x → 1 limit, up to power suppressed corrections in 1−x. Its main ingredient, and our main new result, is a modified form of transverse momentum resummation, which leads to threshold resummation upon integration over p_T, and for which we provide a simple closed-form analytic expression in Fourier-Mellin (b,N) space. We give explicit coefficients up to NNLL order for the specific case of Higgs production in gluon fusion in the effective field theory limit. Our result allows for a systematic improvement of the transverse momentum distribution through threshold resummation which holds for all p_T, and elucidates the relation between transverse momentum resummation and threshold resummation at the inclusive level, specifically by providing within perturbative QCD a simple derivation of the main consequence of the so-called collinear anomaly of SCET.

  7. Combined threshold and transverse momentum resummation for inclusive observables

    Energy Technology Data Exchange (ETDEWEB)

    Muselli, Claudio; Forte, Stefano [Tif Lab, Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano,Via Celoria 16, I-20133 Milano (Italy); Ridolfi, Giovanni [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova,Via Dodecaneso 33, I-16146 Genova (Italy)

    2017-03-21

    We present a combined resummation for the transverse momentum distribution of a colorless final state in perturbative QCD, expressed as a function of transverse momentum p{sub T} and the scaling variable x. Its expression satisfies three requirements: it reduces to standard transverse momentum resummation to any desired logarithmic order in the limit p{sub T}→0 for fixed x, up to power suppressed corrections in p{sub T}; it reduces to threshold resummation to any desired logarithmic order in the limit x→1 for fixed p{sub T}, up to power suppressed corrections in 1−x; upon integration over transverse momentum it reproduces the resummation of the total cross section at any given logarithmic order in the threshold x→1 limit, up to power suppressed corrections in 1−x. Its main ingredient, and our main new result, is a modified form of transverse momentum resummation, which leads to threshold resummation upon integration over p{sub T}, and for which we provide a simple closed-form analytic expression in Fourier-Mellin (b,N) space. We give explicit coefficients up to NNLL order for the specific case of Higgs production in gluon fusion in the effective field theory limit. Our result allows for a systematic improvement of the transverse momentum distribution through threshold resummation which holds for all p{sub T}, and elucidates the relation between transverse momentum resummation and threshold resummation at the inclusive level, specifically by providing within perturbative QCD a simple derivation of the main consequence of the so-called collinear anomaly of SCET.

  8. Integration of community structure data reveals observable effects below sediment guideline thresholds in a large estuary

    KAUST Repository

    Tremblay, Louis A.

    2017-04-07

    The sustainable management of estuarine and coastal ecosystems requires robust frameworks due to the presence of multiple physical and chemical stressors. In this study, we assessed whether ecological health decline, based on community structure composition changes along a pollution gradient, occurred at levels below guideline threshold values for copper, zinc and lead. Canonical analysis of principal coordinates (CAP) was used to characterise benthic communities along a metal contamination gradient. The analysis revealed changes in benthic community distribution at levels below the individual guideline values for the three metals. These results suggest that field-based measures of ecological health analysed with multivariate tools can provide additional information to single metal guideline threshold values to monitor large systems exposed to multiple stressors.

  9. Calculation of abort thresholds for the Beam Loss Monitoring System of the Large Hadron Collider at CERN

    CERN Document Server

    Nemcic, Martin; Dehning, Bernd

    The Beam Loss Monitoring (BLM) System is one of the most critical machine protection systems for the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), Switzerland. Its main purpose is to protect the superconducting magnets from quenches and other equipment from damage by requesting a beam abort when the measured losses exceed any of the predefined threshold levels. The system consists of circa 4000 ionization chambers installed around the 27-kilometre LHC ring. This study aims to choose a technical platform and produce a system that addresses all of the limitations of the current system used for the calculation of the LHC BLM abort threshold values. To achieve this, a comparison and benchmarking of the Java and .NET technical platforms is performed in order to establish the most suitable solution. To establish which technical platform is a successful replacement of the current abort threshold calculator, comparable prototype systems in Java and .NET we...

  10. Adiabatic theory of Wannier threshold laws and ionization cross sections

    International Nuclear Information System (INIS)

    Macek, J.H.; Ovchinnikov, S.Y.

    1994-01-01

    Adiabatic energy eigenvalues of H₂⁺ are computed for complex values of the internuclear distance R. The infinite number of bound-state eigenenergies are represented by a function ε(R) that is single valued on a multisheeted Riemann surface. A region is found where ε(R) and the corresponding eigenfunctions exhibit harmonic-oscillator structure characteristic of electron motion on a potential saddle. The Schroedinger equation is solved in the adiabatic approximation along a path in the complex R plane to compute ionization cross sections. The cross section thus obtained joins the Wannier threshold region with the keV energy region, but the exponent near the ionization threshold disagrees with well-accepted values. Accepted values are obtained when a lowest-order diabatic correction is employed, indicating that adiabatic approximations do not give the correct zero velocity limit for ionization cross sections. Semiclassical eigenvalues for general top-of-barrier motion are given and the theory is applied to the ionization of atomic hydrogen by electron impact. The theory with a first diabatic correction gives the Wannier threshold law even for this case

  11. Inflation from M-theory with fourth-order corrections and large extra dimensions

    International Nuclear Information System (INIS)

    Maeda, Kei-ichi; Ohta, Nobuyoshi

    2004-01-01

    We study inflationary solutions in M-theory. Including the fourth-order curvature correction terms, we find three generalized de Sitter solutions, in which our 3-space expands exponentially. Taking one of the solutions, we propose an inflationary scenario for the early universe. This provides a natural explanation for large extra dimensions in a brane world, and suggests a connection between the 60 e-folding expansion of inflation and TeV gravity based on large extra dimensions

  12. Radar rainfall estimation for the identification of debris-flow precipitation thresholds

    Science.gov (United States)

    Marra, Francesco; Nikolopoulos, Efthymios I.; Creutin, Jean-Dominique; Borga, Marco

    2014-05-01

    Identification of rainfall thresholds for the prediction of debris-flow occurrence is a common approach in warning procedures. Traditionally, the debris-flow triggering rainfall is derived from the closest available raingauge. However, the spatial and temporal variability of intense rainfall in mountainous areas, where debris flows take place, may lead to large uncertainty in point-based estimates. Nikolopoulos et al. (2014) have shown that this uncertainty translates into a systematic underestimation of the rainfall thresholds, leading to a steep degradation of the performance of the rainfall threshold for identification of debris-flow occurrence under operational conditions. A potential solution to this limitation lies in the use of rainfall estimates from weather radar. Thanks to their high spatial and temporal resolutions, these estimates offer the advantage of providing rainfall information over the actual debris-flow location. The aim of this study is to analyze the value of radar precipitation estimates for the identification of debris-flow precipitation thresholds. Seven rainfall events that triggered debris flows in the Adige river basin (Eastern Italian Alps) are analyzed using data from a dense raingauge network and a C-band weather radar. Radar data are processed using a set of correction algorithms specifically developed for weather radar rainfall application in mountainous areas. Rainfall thresholds for the triggering of debris flows are identified in the form of average intensity-duration power law curves using a frequentist approach applied to both radar rainfall estimates and raingauge data. The sampling uncertainty associated with the derivation of the thresholds is assessed using a bootstrap technique (Peruccacci et al. 2012). Results show that radar-based rainfall thresholds largely exceed those obtained by using raingauge data. 
Moreover, the differences between the two thresholds may be related to the spatial characteristics (i.e., spatial
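The frequentist intensity-duration threshold with bootstrap uncertainty can be sketched as follows: fit log I = log(a) + b·log(D) by least squares, shift the intercept so that only a small fraction of triggering events falls below the curve, then bootstrap the events. The percentile and resampling details here are illustrative assumptions, not the exact procedure of the paper.

```python
import numpy as np

def id_threshold(durations, intensities, pct=5.0, n_boot=200, seed=0):
    """Fit an I-D rainfall threshold I = a * D**b such that `pct` percent
    of triggering events lie below the curve, with bootstrap standard
    errors for (a, b) (after the frequentist approach with bootstrap
    uncertainty cited in the abstract; parameters are illustrative)."""
    logD = np.log(np.asarray(durations, dtype=float))
    logI = np.log(np.asarray(intensities, dtype=float))
    rng = np.random.default_rng(seed)

    def fit(ld, li):
        b, loga = np.polyfit(ld, li, 1)                 # least-squares line
        loga += np.percentile(li - (loga + b * ld), pct)  # shift to pct-th percentile
        return np.exp(loga), b

    a, b = fit(logD, logI)
    n = len(logD)
    boots = np.array([fit(logD[i], logI[i])
                      for i in (rng.integers(0, n, n) for _ in range(n_boot))])
    return (a, b), boots.std(axis=0)   # point estimates and bootstrap std errors
```

Running this separately on radar-derived and raingauge-derived triggering rainfall gives the two threshold curves whose difference the study quantifies.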

  13. Tornado risk analysis at Savannah River Plant using windspeed damage thresholds and single building strike frequencies

    International Nuclear Information System (INIS)

    Taylor, D.H.; McDonald, J.R.; Twisdale, L.A.

    1985-01-01

    Tornado risk analysis at the Savannah River Plant has taken a two-pronged approach: (1) developing a catalogue of damage thresholds as a function of windspeed for processing buildings and other representative site structures; (2) developing a method of estimating, for each building, the probability of a tornado exceeding each damage threshold. Wind resistance of building construction at SRP varies widely depending on the function of the structure. It was recognized that all tornadoes do not necessarily seriously damage buildings, but the damage thresholds were unknown. In order to evaluate the safety of existing structures and properly design new structures, an analysis of tornado resistance was conducted by J.R. McDonald on each process building at SRP and on other buildings by type. Damage estimates were catalogued for each Fujita-class windspeed interval, and windspeeds were catalogued as a function of increased levels of damage. Tornado single-point and structure-specific strike probabilities for the SRP site were determined by L.A. Twisdale using the TORRISK computer code. To calculate the structure-specific strike probability, a correction factor is determined from a set of curves using building area and aspect ratio (length/width relative to north) as parameters. The structure-specific probability is then the product of the correction factor and the point probability. The correction factor increases as a function of building size and windspeed. For large buildings (10⁵ ft²) and very intense storms (250 mph), the correction factor is equal to or greater than 4. The cumulative probability of a tornado striking any building type (process, personnel, etc.) was also calculated

  14. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  15. Winner's Curse Correction and Variable Thresholding Improve Performance of Polygenic Risk Modeling Based on Genome-Wide Association Study Summary-Level Data.

    Directory of Open Access Journals (Sweden)

    Jianxin Shi

    2016-12-01

    Full Text Available Recent heritability analyses have indicated that genome-wide association studies (GWAS) have the potential to improve genetic risk prediction for complex diseases based on the polygenic risk score (PRS), a simple modelling technique that can be implemented using summary-level data from the discovery samples. We herein propose modifications to improve the performance of PRS. We introduce threshold-dependent winner's-curse adjustments for marginal association coefficients that are used to weight the single-nucleotide polymorphisms (SNPs) in PRS. Further, as a way to incorporate external functional/annotation knowledge that could identify subsets of SNPs highly enriched for associations, we propose variable thresholds for SNP selection. We applied our methods to GWAS summary-level data of 14 complex diseases. Across all diseases, a simple winner's curse correction uniformly led to enhancement of performance of the models, whereas incorporation of functional SNPs was beneficial only for selected diseases. Compared to the standard PRS algorithm, the proposed methods in combination led to notable gain in efficiency (25-50% increase in the prediction R²) for 5 of 14 diseases. As an example, for GWAS of type 2 diabetes, winner's curse correction improved prediction R² from 2.29% based on the standard PRS to 3.10% (P = 0.0017), and incorporating functional annotation data further improved R² to 3.53% (P = 2×10⁻⁵). Our simulation studies illustrate why differential treatment of certain categories of functional SNPs, even when shown to be highly enriched for GWAS heritability, does not lead to proportionate improvement in genetic risk prediction because of non-uniform linkage disequilibrium structure.
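A stdlib-only sketch of a winner's-curse adjustment of the general kind described above: given a z-score that was only retained because it passed a selection cut |z| > c, find the mean mu of a N(mu, 1) whose truncated expectation E[Z | |Z| > c] matches the observed z. This is a simplified conditional-expectation correction for illustration, not the paper's exact threshold-dependent estimator.

```python
import math

def _phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def _sf(x):   # standard normal survival function P(Z > x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def debias_z(z, c, tol=1e-8):
    """Shrink a selected z-score toward zero by inverting the truncated
    expectation: solve E[Z | Z ~ N(mu, 1), |Z| > c] = z for mu."""
    s, z = math.copysign(1.0, z), abs(z)

    def expected(mu):  # E[Z | Z ~ N(mu, 1), |Z| > c]
        den = _sf(c - mu) + _sf(c + mu)
        num = mu * den + _phi(c - mu) - _phi(c + mu)
        return num / den

    lo, hi = 0.0, z + 6.0          # expected() increases with mu on [0, inf)
    if expected(lo) >= z:
        return 0.0                 # z barely passed the cut: shrink fully
    while hi - lo > tol:           # bisection for expected(mu) = z
        mid = 0.5 * (lo + hi)
        if expected(mid) < z:
            lo = mid
        else:
            hi = mid
    return s * lo
```

The adjusted effect (debiased z times the standard error) would then replace the raw marginal coefficient as the SNP weight in the PRS; z-scores far above the cut are barely changed, while those just past it are shrunk strongly, which is the qualitative behavior a winner's-curse correction should have.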

  16. Ballistic deficit correction methods for large Ge detectors-high counting rate study

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.

    1995-01-01

    This study presents different ballistic deficit correction methods versus input count rate (from 3 to 50 kcounts/s) using four large Ge detectors of about 70% relative efficiency. It turns out that the Tennelec TC245 linear amplifier in the BDC mode (Hinshaw method) is the best compromise for energy resolution throughout. All correction methods lead to narrow sum-peaks indistinguishable from single γ lines. The full energy peak throughput is found representative of the pile-up inspection dead time of the corrector circuits. This work also presents a new and simple representation, plotting simultaneously energy resolution and throughput versus input count rate. (TEC). 12 refs., 11 figs

  17. Disaggregated energy consumption and GDP in Taiwan: A threshold co-integration analysis

    International Nuclear Information System (INIS)

    Hu, J.-L.; Lin, C.-H.

    2008-01-01

    Energy consumption growth is much higher than economic growth in Taiwan in recent years, worsening its energy efficiency. This paper provides a solid explanation by examining the equilibrium relationship between GDP and disaggregated energy consumption under a non-linear framework. The threshold co-integration test with asymmetric dynamic adjustment processes proposed by Hansen and Seo [Hansen, B.E., Seo, B., 2002. Testing for two-regime threshold cointegration in vector error-correction models. Journal of Econometrics 110, 293-318.] is applied. Non-linear co-integration between GDP and disaggregated energy consumption is confirmed except for oil consumption. The two-regime vector error-correction models (VECM) show that the adjustment process of energy consumption toward equilibrium is highly persistent when an appropriate threshold is reached. There is mean-reverting behavior when the threshold is reached, making aggregate and disaggregated energy consumption grow faster than GDP in Taiwan

  18. Group music performance causes elevated pain thresholds and social bonding in small and large groups of singers

    Science.gov (United States)

    Weinstein, Daniel; Launay, Jacques; Pearce, Eiluned; Dunbar, Robin I. M.; Stewart, Lauren

    2016-01-01

Over our evolutionary history, humans have faced the problem of how to create and maintain social bonds in progressively larger groups compared to those of our primate ancestors. Evidence from historical and anthropological records suggests that group music-making might act as a mechanism by which this large-scale social bonding could occur. While previous research has shown effects of music-making on social bonds in small group contexts, the question of whether this effect 'scales up' to larger groups is particularly important when considering the potential role of music for large-scale social bonding. The current study recruited individuals from a community choir that met in both small (n = 20-80) and large (a 'megachoir' combining individuals from the smaller subchoirs, n = 232) group contexts. Participants gave self-report measures (via a survey) of social bonding and had pain threshold measurements taken (as a proxy for endorphin release) before and after 90 minutes of singing. Results showed that feelings of inclusion, connectivity, positive affect, and measures of endorphin release all increased across singing rehearsals and that the influence of group singing on pain thresholds was comparable in the large and small group contexts. Levels of social closeness were greater at both pre- and post-singing levels in the small choir condition; however, the large choir condition showed a greater change in social closeness than the small condition. The finding that singing together fosters social closeness - even in large contexts where individuals are not known to each other - is consistent with evolutionary accounts that emphasize the role of music in social bonding, particularly in the context of creating larger cohesive groups than other primates are able to manage. PMID:27158219

  19. A critical experimental study of the classical tactile threshold theory

    Directory of Open Access Journals (Sweden)

    Medina Leonel E

    2010-06-01

Full Text Available — Background: The tactile sense is being used in a variety of applications involving tactile human-machine interfaces. In a significant number of publications the classical threshold concept plays a central role in modelling and explaining psychophysical experimental results, such as stochastic resonance (SR) phenomena. In SR, noise enhances detection of sub-threshold stimuli, and the phenomenon is explained by stating that the amplitude required to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. We designed an experiment to test the validity of the classical vibrotactile threshold. Using a second-choice experiment, we show that individuals can order sensorial events below the level known as the classical threshold. If the observer's sensorial system were not activated by stimuli below the threshold, then a second choice could not be above the chance level; nevertheless, our experimental results are above that chance level, contradicting the definition of the classical tactile threshold. Results: We performed a three-alternative forced-choice detection experiment on 6 subjects, asking them for first and second choices. In each trial, only one of the intervals contained a stimulus and the others contained only noise. According to the classical threshold assumptions, a correct second-choice response corresponds to a guess attempt with a statistical frequency of 50%. Results show an average of 67.35% (STD = 1.41%) for the second-choice response, which is not explained by the classical threshold definition. Additionally, for low stimulus amplitudes, second-choice correct detection is above chance level for any detectability level. Conclusions: Using a second-choice experiment, we show that individuals can order sensorial events below the level known as a classical threshold. If the observer's sensorial system were not activated by stimuli below the threshold, then a second choice could not be above the chance level.

  20. Adiabatic theory of Wannier threshold laws and ionization cross sections

    International Nuclear Information System (INIS)

    Macek, J.H.; Ovchinnikov, S.Yu.

    1994-01-01

    The Wannier threshold law for three-particle fragmentation is reviewed. By integrating the Schroedinger equation along a path where the reaction coordinate R is complex, anharmonic corrections to the simple power law are obtained. These corrections are found to be non-analytic in the energy E, in contrast to the expected analytic dependence upon E
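The "simple power law" referred to in the abstract is the well-known Wannier result for double escape; for reference, its standard statement for two electrons escaping a residual ion of net charge Z is:

```latex
% Wannier threshold law: just above the fragmentation threshold (E -> 0+),
% the ionization cross section rises as a non-integer power of the excess energy E.
\sigma(E) \;\propto\; E^{\,m}, \qquad
m \;=\; \frac{1}{4}\left(\sqrt{\frac{100Z-9}{4Z-1}}\;-\;1\right),
\qquad m\big|_{Z=1} \simeq 1.127 .
```

The anharmonic corrections discussed in the abstract modify this leading power-law behavior and, as stated there, turn out to be non-analytic in E.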

  1. On R4 threshold corrections in type IIB string theory and (p,q)-string instantons

    International Nuclear Information System (INIS)

    Kiritsis, E.; Pioline, B.

    1997-01-01

    We obtain the exact non-perturbative thresholds of R 4 terms in type IIB string theory compactified to eight and seven dimensions. These thresholds are given by the perturbative tree-level and one-loop results together with the contribution of the D-instantons and of the (p,q)-string instantons. The invariance under U-duality is made manifest by rewriting the sum as a non-holomorphic-invariant modular function of the corresponding discrete U-duality group. In the eight-dimensional case, the threshold is the sum of an order-1 Eisenstein series for SL(2,Z) and an order-3/2 Eisenstein series for SL(3,Z). The seven-dimensional result is given by the order-3/2 Eisenstein series for SL(5,Z). We also conjecture formulae for the non-perturbative thresholds in lower-dimensional compactifications and discuss the relation with M-theory. (orig.)

  2. QCD threshold corrections for gluino pair production at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Langenfeld, Ulrich [Wuerzburg Univ. (Germany); Moch, Sven-Olaf; Pfoh, Torsten [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-11-15

    We present the complete threshold enhanced predictions in QCD for the total cross section of gluino pair production at hadron colliders at next-to-next-to-leading order. Thanks to the computation of the required one-loop hard matching coefficients our results are accurate to the next-to-next-to-leading logarithm. In a brief phenomenological study we provide predictions for the total hadronic cross sections at the LHC and we discuss the uncertainties arising from scale variations and the parton distribution functions.

  3. Resummations in QCD hard-scattering at large and small x

    CERN Document Server

    Kidonakis, Nikolaos; Stephens, Philip

    2008-01-01

    We discuss different resummations of large logarithms that arise in hard-scattering cross sections of quarks and gluons in regions of large and small x. The large-x logarithms are typically dominant near threshold for the production of a specified final state. These soft and collinear gluon corrections produce large enhancements of the cross section for many processes, notably top quark and Higgs production, and typically the higher-order corrections reduce the factorization and renormalization scale dependence of the cross section. The small-x logarithms are dominant in the regime where the momentum transfer of the hard sub-process is much smaller than the total collision energy. These logarithms are important to describe multijet final states in deep inelastic scattering and hadron colliders, and in the study of parton distribution functions. The resummations at small and large x are linked by the eikonal approximation and are dominated by soft gluon anomalous dimensions. We will review their role in both c...

  4. Threshold resummation for slepton-pair production at hadron colliders

    International Nuclear Information System (INIS)

    Bozzi, Giuseppe; Fuks, Benjamin; Klasen, Michael

    2007-01-01

    We present a first and extensive study of threshold resummation effects for supersymmetric (SUSY) particle production at hadron colliders, focusing on Drell-Yan like slepton-pair and slepton-sneutrino associated production. After confirming the known next-to-leading order (NLO) QCD corrections and generalizing the NLO SUSY-QCD corrections to the case of mixing squarks in the virtual loop contributions, we employ the usual Mellin N-space resummation formalism with the minimal prescription for the inverse Mellin-transform and improve it by resumming 1/N-suppressed and a class of N-independent universal contributions. Numerically, our results increase the theoretical cross sections by 5 to 15% with respect to the NLO predictions and stabilize them by reducing the scale dependence from up to 20% at NLO to less than 10% with threshold resummation

  6. Adiabatic theory of Wannier threshold laws and ionization cross sections

    International Nuclear Information System (INIS)

    Macek, J.H.; Ovchinnikov, S.Y.

    1995-01-01

    The Wannier threshold law for three-particle fragmentation is reviewed. By integrating the Schroedinger equation along a path where the reaction coordinate R is complex, anharmonic corrections to the simple power law are obtained. These corrections are found to be non-analytic in the energy E, in contrast to the expected analytic dependence upon E. copyright 1995 American Institute of Physics

  7. Small-threshold behaviour of two-loop self-energy diagrams: two-particle thresholds

    International Nuclear Information System (INIS)

    Berends, F.A.; Davydychev, A.I.; Moskovskij Gosudarstvennyj Univ., Moscow; Smirnov, V.A.; Moskovskij Gosudarstvennyj Univ., Moscow

    1996-01-01

    The behaviour of two-loop two-point diagrams at non-zero thresholds corresponding to two-particle cuts is analyzed. The masses involved in a cut and the external momentum are assumed to be small as compared to some of the other masses of the diagram. By employing general formulae of asymptotic expansions of Feynman diagrams in momenta and masses, we construct an algorithm to derive analytic approximations to the diagrams. In such a way, we calculate several first coefficients of the expansion. Since no conditions on relative values of the small masses and the external momentum are imposed, the threshold irregularities are described analytically. Numerical examples, using diagrams occurring in the standard model, illustrate the convergence of the expansion below the first large threshold. (orig.)

  8. Threshold Assessment of Gear Diagnostic Tools on Flight and Test Rig Data

    Science.gov (United States)

    Dempsey, Paula J.; Mosher, Marianne; Huff, Edward M.

    2003-01-01

A method was developed for defining thresholds for vibration-based algorithms that provides the minimum number of false alarms while maintaining sensitivity to gear damage. The analysis focused on two vibration-based gear-damage detection algorithms, FM4 and MSA. The method was developed using vibration data collected during surface fatigue tests performed in a spur gearbox rig, with thresholds defined from the damage progression during tests with damage. The thresholds' false-alarm rates were then evaluated on spur gear tests without damage. Next, the same thresholds were applied to flight data from an OH-58 helicopter transmission. Results showed that thresholds defined in test rigs can be used to define thresholds in flight that correctly classify the transmission operation as normal.
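The FM4 metric named in the abstract has a standard definition in the gear-diagnostics literature: the normalized kurtosis of the "difference signal" (the vibration signal with the regular gear-mesh components removed). A minimal sketch, with synthetic data standing in for the rig and flight recordings, might look like this; the threshold value used here is an arbitrary example, not one from the study.

```python
import numpy as np

def fm4(difference_signal):
    """FM4 metric: normalized fourth moment (kurtosis) of the difference
    signal. It is ~3 for a healthy gear (near-Gaussian residual) and rises
    when localized tooth damage injects impulsive content."""
    d = np.asarray(difference_signal, dtype=float)
    d = d - d.mean()
    n = len(d)
    return n * np.sum(d**4) / np.sum(d**2) ** 2

# Synthetic illustration: a healthy residual is near-Gaussian; damage adds
# a localized impact once per "revolution" of 512 samples.
rng = np.random.default_rng(1)
healthy = rng.standard_normal(4096)
damaged = healthy.copy()
damaged[::512] += 8.0          # periodic impacts from a damaged tooth

THRESHOLD = 4.5                # example alarm level; a real threshold would
                               # be set from baseline data, as in the study
print(fm4(healthy) < THRESHOLD, fm4(damaged) > THRESHOLD)
```

Setting THRESHOLD from damage-free baseline runs, then checking its false-alarm rate on independent healthy data, mirrors the procedure the abstract describes.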

  9. Generalised universality of gauge thresholds in heterotic vacua with and without supersymmetry

    CERN Document Server

    Angelantonj, Carlo; Tsulaia, Mirian

    2015-01-01

    We study one-loop quantum corrections to gauge couplings in heterotic vacua with spontaneous supersymmetry breaking. Although in non-supersymmetric constructions these corrections are not protected and are typically model dependent, we show how a universal behaviour of threshold differences, typical of supersymmetric vacua, may still persist. We formulate specific conditions on the way supersymmetry should be broken for this to occur. Our analysis implies a generalised notion of threshold universality even in the case of unbroken supersymmetry, whenever extra charged massless states appear at enhancement points in the bulk of moduli space. Several examples with universality, including non-supersymmetric chiral models in four dimensions, are presented.

  10. Heavy quark threshold dynamics in higher order

    Energy Technology Data Exchange (ETDEWEB)

    Piclum, J.H.

    2007-05-15

    In this work we discuss an important building block for the next-to-next-to-next-to leading order corrections to the pair production of top quarks at threshold. Specifically, we explain the calculation of the third order strong corrections to the matching coefficient of the vector current in non-relativistic Quantum Chromodynamics and provide the result for the fermionic part, containing at least one loop of massless quarks. As a byproduct, we obtain the matching coefficients of the axial-vector, pseudo-scalar and scalar current at the same order. Furthermore, we calculate the three-loop corrections to the quark renormalisation constants in the on-shell scheme in the framework of dimensional regularisation and dimensional reduction. Finally, we compute the third order strong corrections to the chromomagnetic interaction in Heavy Quark Effective Theory. The calculational methods are discussed in detail and results for the master integrals are given. (orig.)

  11. Re: Request for Correction - IRIS Assessment for Trichloroethylene

    Science.gov (United States)

    Letter from Faye Graul providing supplemental information to her Request for Correction for Threshold of Trichloroethylene Contamination of Maternal Drinking Waters submitted under the Information Quality Act.

  12. Low energy response calibration of the BATSE large area detectors onboard the Compton Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Laird, C.E. [Dept. of Physics and Astronomy, Eastern Kentucky University, Moore 351, 521 Lancaster Avenue, Richmond, KY 40475-3124 (United States)]. E-mail: Chris.Laird@eku.edu; Harmon, B.A. [XD12 NASA/Marshall Space Flight Center, Huntsville, AL 35812 (United States); Wilson, Colleen A. [XD12 NASA/Marshall Space Flight Center, Huntsville, AL 35812 (United States); Hunter, David [Dept. of Physics and Astronomy, Eastern Kentucky University, Moore 351, 521 Lancaster Avenue, Richmond, KY 40475-3124 (United States); Isaacs, Jason [Dept. of Physics and Astronomy, Eastern Kentucky University, Moore 351, 521 Lancaster Avenue, Richmond, KY 40475-3124 (United States)

    2006-10-15

The low-energy attenuation of the covering material of the Burst and Transient Source Experiment (BATSE) large area detectors (LADs) on the Compton Gamma Ray Observatory as well as the small-angle response of the LADs have been studied. These effects are shown to be more significant than previously assumed. The LAD entrance window included layers of an aluminum-epoxy composite (hexel) that acted as a collimator for the lowest energy photons entering the detector just above threshold (20-50 keV). Simplifying assumptions made concerning the entrance window materials and the angular response at incident angles near normal to the detector face in the original BATSE response matrix formalism had little effect on γ-ray burst measurements; however, these assumptions created serious errors in measured fluxes of galactic sources, whose emission is strongest near the LAD energy threshold. Careful measurements of the angular and low-energy dependence of the attenuation due to the hexel plates only partially improved the response. A systematic study of Crab Nebula spectra showed the need for additional corrections: an angular-dependent correction for all detectors and an angular-independent correction for each detector. These corrections have been applied as part of an overall energy and angular-dependent correction to the BATSE response matrices.

  13. Sextupole correction for a ring with large chromaticity and the influence of magnetic errors on its parameters

    International Nuclear Information System (INIS)

    Kamiya, Y.; Katoh, M.; Honjo, I.

    1987-01-01

A future ring with low emittance and large circumference, dedicated to a synchrotron light source, will have a large chromaticity, so a sophisticated sextupole correction scheme is as important as the linear-lattice design for obtaining a stable beam. The authors tried a method of sextupole correction for a lattice with a large chromaticity and a small dispersion function. In such a lattice the sextupole magnets must be made strong to compensate the chromaticity, so their nonlinear effects become more serious than their chromatic effects. Furthermore, a ring with strong quadrupole magnets (to obtain a very small emittance) and strong sextupole magnets (to compensate the generated chromaticity) will be very sensitive to magnetic errors. The authors also present simple formulae to evaluate the effects of such errors on the beam parameters. The details will appear in a KEK Report

  14. A Threshold Continuum for Aeolian Sand Transport

    Science.gov (United States)

    Swann, C.; Ewing, R. C.; Sherman, D. J.

    2015-12-01

The threshold of motion for aeolian sand transport marks the initial entrainment of sand particles by the force of the wind. This is typically defined and modeled as a single wind speed for a given grain size and is based on field and laboratory experimental data. However, the definition of threshold varies significantly between these empirical models, largely because the definition is based on visual observations of initial grain movement. For example, in his seminal experiments, Bagnold defined threshold of motion when he observed that 100% of the bed was in motion. Others have used 50% and lesser values. Differences in threshold models, in turn, result in large errors in predicting the fluxes associated with sand and dust transport. Here we use a wind tunnel and novel sediment trap to capture the fractions of sand in creep, reptation and saltation at Earth and Mars pressures and show that the threshold of motion for aeolian sand transport is best defined as a continuum in which grains progress through stages defined by the proportion of grains in creep and saltation. We propose the use of scale-dependent thresholds modeled by distinct probability distribution functions that differentiate the threshold based on micro- to macro-scale applications. For example, a geologic-timescale application corresponds to a threshold at which 100% of the bed is in motion, whereas a sub-second application corresponds to a threshold at which a single particle is set in motion. We provide quantitative measurements (number and mode of particle movement) corresponding to visual observations, percent of bed in motion and degrees of transport intermittency for Earth and Mars. Understanding transport as a continuum provides a basis for re-evaluating sand transport thresholds on Earth, Mars and Titan.

  15. Detection thresholds of macaque otolith afferents.

    Science.gov (United States)

    Yu, Xiong-Jie; Dickman, J David; Angelaki, Dora E

    2012-06-13

The vestibular system is our sixth sense and is important for spatial perception functions, yet the sensory detection and discrimination properties of vestibular neurons remain relatively unexplored. Here we have used signal detection theory to measure detection thresholds of otolith afferents using 1 Hz linear accelerations delivered along three cardinal axes. Direction detection thresholds were measured by comparing mean firing rates centered on response peak and trough (full-cycle thresholds) or by comparing peak/trough firing rates with spontaneous activity (half-cycle thresholds). Thresholds were similar for utricular and saccular afferents, as well as for lateral, fore/aft, and vertical motion directions. When computed along the preferred direction, full-cycle direction detection thresholds were 7.54 and 3.01 cm/s² for regular and irregular firing otolith afferents, respectively. Half-cycle thresholds were approximately double, with excitatory thresholds being half as large as inhibitory thresholds. The variability in threshold among afferents was directly related to neuronal gain and did not depend on spike count variance. The exact threshold values depended on both the time window used for spike count analysis and the filtering method used to calculate mean firing rate, although differences between regular and irregular afferent thresholds were independent of analysis parameters. The fact that minimum thresholds measured in macaque otolith afferents are of the same order of magnitude as human behavioral thresholds suggests that the vestibular periphery might determine the limit on our ability to detect or discriminate small differences in head movement, with little noise added during downstream processing.
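The reported inverse relation between threshold and neuronal gain follows directly from the signal detection framework the abstract cites: for Gaussian firing-rate noise, sensitivity is d' = gain × amplitude / σ, so the amplitude at which a criterion d' is reached scales as σ/gain. The sketch below uses hypothetical gain and noise values, not numbers from the paper.

```python
def detection_threshold(gain, sigma, d_prime=1.0):
    """Stimulus amplitude at which an ideal observer reading one neuron
    reaches sensitivity d', assuming Gaussian firing-rate noise:
    the response shift is gain * amplitude, so d' = gain * a / sigma."""
    return d_prime * sigma / gain

# Hypothetical numbers: a high-gain "irregular" afferent reaches d' = 1 at a
# smaller acceleration than a low-gain "regular" one, mirroring the finding
# that threshold varies inversely with neuronal gain.
regular = detection_threshold(gain=0.5, sigma=4.0)    # spikes/s per cm/s^2
irregular = detection_threshold(gain=2.0, sigma=4.0)
print(regular, irregular)  # 8.0 2.0
```

Note that under this model the threshold depends on gain but not on spike-count variance beyond σ, consistent with the abstract's observation that variability in threshold tracked gain rather than count variance.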

  16. Threshold effects on renormalization group running of neutrino parameters in the low-scale seesaw model

    International Nuclear Information System (INIS)

    Bergstroem, Johannes; Ohlsson, Tommy; Zhang He

    2011-01-01

    We show that, in the low-scale type-I seesaw model, renormalization group running of neutrino parameters may lead to significant modifications of the leptonic mixing angles in view of so-called seesaw threshold effects. Especially, we derive analytical formulas for radiative corrections to neutrino parameters in crossing the different seesaw thresholds, and show that there may exist enhancement factors efficiently boosting the renormalization group running of the leptonic mixing angles. We find that, as a result of the seesaw threshold corrections to the leptonic mixing angles, various flavor symmetric mixing patterns (e.g., bi-maximal and tri-bimaximal mixing patterns) can be easily accommodated at relatively low energy scales, which is well within the reach of running and forthcoming experiments (e.g., the LHC).

  17. A numerical study of threshold states

    International Nuclear Information System (INIS)

    Ata, M.S.; Grama, C.; Grama, N.; Hategan, C.

    1979-01-01

There is some experimental evidence for charged-particle threshold states. On the statistical background of levels, some simple structures were observed in the excitation spectrum. They occur near the Coulomb threshold and have a large reduced width for decay into the threshold channel. These states were identified as charged-cluster threshold states. Such threshold states were observed in 15,16,17,18O, 18,19F, 19,20Ne, 24Mg and 32S. The clusters involved were d, t, 3He, α and even 12C. They were observed as strongly excited levels of the residual nucleus in heavy-ion transfer reactions. Charged-particle threshold states occur as simple structures at high excitation energy. They could be interesting from both the nuclear-structure and the reaction-mechanism points of view, and could be excited as simple structures in both the compound and the residual nucleus. (author)

  18. Proton therapy for prostate cancer treatment employing online image guidance and an action level threshold.

    Science.gov (United States)

    Vargas, Carlos; Falchook, Aaron; Indelicato, Daniel; Yeung, Anamaria; Henderson, Randall; Olivier, Kenneth; Keole, Sameer; Williams, Christopher; Li, Zuofeng; Palta, Jatinder

    2009-04-01

Whether the final prostate position can be confined within a determined action-level threshold for image-guided proton therapy is unclear. Three thousand one hundred ten images for 20 consecutive patients treated in 1 of our 3 proton prostate protocols from February to May of 2007 were analyzed. Daily kV imaging and patient repositioning were performed employing an action-level threshold (ALT) of ≥ 2.5 mm for each beam. Isocentric orthogonal x-rays were obtained, and the prostate position was defined via 3 gold markers for each patient along the 3 axes. To achieve and confirm our action-level threshold, an average of 2 x-ray sets (median 2; range, 0-4) was taken daily for each patient. Based on our ALT, we made no corrections in 8.7% of cases (range, 0%-54%), 1 correction in 82% (41%-98%), and 2 to 3 corrections in 9% (0%-27%). No patient needed 4 or more corrections. All patients were treated with a confirmed error of < 2.5 mm for every beam delivered. After all corrections, the means and standard deviations were: anterior-posterior (z): 0.003 +/- 0.094 cm; superior-inferior (y): 0.028 +/- 0.073 cm; and right-left (x): -0.013 +/- 0.08 cm. It is feasible to limit all final prostate positions to errors of less than 2.5 mm employing an action-level image-guided radiation therapy (IGRT) process. The residual errors after corrections were very small.
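The action-level protocol described above (image, correct if any axis offset meets the action level, re-image to confirm) is naturally expressed as a loop. The sketch below is illustrative only; in particular, the assumption that each couch correction removes ~90% of the measured offset is invented for the example, not a figure from the study.

```python
def align_to_threshold(offsets_mm, alt_mm=2.5, max_corrections=4):
    """Simulate an action-level IGRT setup loop: while any axis offset
    meets/exceeds the action level, apply a correction and re-image.
    Returns (number of corrections applied, final residual offsets)."""
    corrections = 0
    residual = list(offsets_mm)
    while any(abs(o) >= alt_mm for o in residual):
        if corrections == max_corrections:
            raise RuntimeError("setup could not be brought within the ALT")
        # assumption: a couch shift removes ~90% of the measured offset
        residual = [o * 0.1 for o in residual]
        corrections += 1
    return corrections, residual

# Hypothetical daily setup offsets in (x, y, z), millimetres:
n, final = align_to_threshold([4.0, -1.0, 3.2])
print(n, [round(v, 2) for v in final])  # 1 [0.4, -0.1, 0.32]
```

With this model most fractions terminate after a single correction, qualitatively matching the distribution reported in the abstract (one correction in 82% of fractions).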

  19. NNLO QCD corrections to Higgs boson production at large transverse momentum

    Energy Technology Data Exchange (ETDEWEB)

    Chen, X. [Center for High Energy Physics, Peking University,Beijing 100871 (China); Cruz-Martinez, J. [Institute for Particle Physics Phenomenology, Department of Physics, University of Durham,Durham, DH1 3LE (United Kingdom); Gehrmann, T. [Department of Physics, University of Zürich,CH-8057 Zürich (Switzerland); Glover, E.W.N. [Institute for Particle Physics Phenomenology, Department of Physics, University of Durham,Durham, DH1 3LE (United Kingdom); Jaquier, M. [Albert-Ludwigs-Universität Freiburg, Physikalisches Institut,D-79104 Freiburg (Germany)

    2016-10-13

    We derive the second-order QCD corrections to the production of a Higgs boson recoiling against a parton with finite transverse momentum, working in the effective field theory in which the top quark contributions are integrated out. To account for quark mass effects, we supplement the effective field theory result by the full quark mass dependence at leading order. Our calculation is fully differential in the final state kinematics and includes the decay of the Higgs boson to a photon pair. It allows one to make next-to-next-to-leading order (NNLO)-accurate theory predictions for Higgs-plus-jet final states and for the transverse momentum distribution of the Higgs boson, accounting for the experimental definition of the fiducial cross sections. The NNLO QCD corrections are found to be moderate and positive, they lead to a substantial reduction of the theory uncertainty on the predictions. We compare our results to 8 TeV LHC data from ATLAS and CMS. While the shape of the data is well-described for both experiments, we agree on the normalization only for CMS. By normalizing data and theory to the inclusive fiducial cross section for Higgs production, good agreement is found for both experiments, however at the expense of an increased theory uncertainty. We make predictions for Higgs production observables at the 13 TeV LHC, which are in good agreement with recent ATLAS data. At this energy, the leading order mass corrections to the effective field theory prediction become significant at large transverse momenta, and we discuss the resulting uncertainties on the predictions.

  1. Threshold law for the triplet state for electron-impact ionization in the Temkin-Poet model

    International Nuclear Information System (INIS)

    Ihra, W.; Mota-Furtado, F.; OMahony, P.F.; Macek, J.H.

    1997-01-01

    We derive the analytical threshold behavior for the triplet cross section for electron-impact ionization in the Temkin-Poet model. The analytical results indicate that the most recent numerical calculations may fail to reproduce the correct threshold behavior in an energy regime below about E=0.1 a.u. We also present an analytical expression for the energy distribution of the two electrons near threshold. copyright 1997 The American Physical Society

  2. Correction: Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4

    DEFF Research Database (Denmark)

    Jensen, Søren Højgaard; Graves, Christopher R.; Mogensen, Mogens Bjerg

    2017-01-01

Correction for ‘Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4’ by S. H. Jensen et al., Energy Environ. Sci., 2015, 8, 2471–2479.

  3. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

Full Text Available — The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred, in order to negate their effects, has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error-correction methods. It is found that error correction is possible below a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.
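The notion of an error-correction threshold quoted above (a critical physical error rate below which decoding succeeds) can be illustrated with the simplest possible code. The Monte Carlo sketch below uses a classical repetition code with majority voting, not the non-Abelian anyon decoder of the paper; it only demonstrates how a threshold in the physical error probability arises.

```python
import random

def logical_error_rate(p, n_spins=15, trials=20000, seed=42):
    """Toy threshold demonstration: a repetition code with majority-vote
    decoding fails only when more than half the spins flip. Below the
    code's threshold, increasing redundancy suppresses logical errors;
    above it, decoding makes things worse."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n_spins))
        if flips > n_spins // 2:   # majority of spins corrupted
            failures += 1
    return failures / trials

# A physical error rate like the paper's 7% sits far below this toy code's
# threshold, so the logical error rate is tiny; near the threshold it is not.
low = logical_error_rate(0.07)
high = logical_error_rate(0.45)
print(low < 0.01, high > 0.2)
```

For the non-Abelian case the decoding step is far harder (syndrome information must itself be extracted by fusing anyons), but the existence of a threshold plays the same role as in this toy model.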

  4. Evaluation of refractive correction for standard automated perimetry in eyes wearing multifocal contact lenses.

    Science.gov (United States)

    Hirasawa, Kazunori; Ito, Hikaru; Ohori, Yukari; Takano, Yui; Shoji, Nobuyuki

    2017-01-01

    To evaluate the refractive correction for standard automated perimetry (SAP) in eyes with refractive multifocal contact lenses (CL) in healthy young participants. Twenty-nine eyes of 29 participants were included. Accommodation was paralyzed in all participants with 1% cyclopentolate hydrochloride. SAP was performed using the Humphrey SITA-standard 24-2 and 10-2 protocol under three refractive conditions: monofocal CL corrected for near distance (baseline); multifocal CL corrected for distance (mCL-D); and mCL-D corrected for near vision using a spectacle lens (mCL-N). Primary outcome measures were the foveal threshold, mean deviation (MD), and pattern standard deviation (PSD). The foveal threshold of mCL-N with both the 24-2 and 10-2 protocols significantly decreased by 2.2-2.5 dB ( P correction without additional near correction is to be recommended.

  5. Threshold enhancement of diphoton resonances

    CERN Document Server

    Bharucha, Aoife; Goudelis, Andreas

    2016-10-10

    The data collected by the LHC collaborations at an energy of 13 TeV indicates the presence of an excess in the diphoton spectrum that would correspond to a resonance of a 750 GeV mass. The apparently large production cross section is nevertheless very difficult to explain in minimal models. We consider the possibility that the resonance is a pseudoscalar boson $A$ with a two--photon decay mediated by a charged and uncolored fermion having a mass at the $\\frac12 M_A$ threshold and a very small decay width, $\\ll 1$ MeV; one can then generate a large enhancement of the $A\\gamma\\gamma$ amplitude which explains the excess without invoking a large multiplicity of particles propagating in the loop, large electric charges and/or very strong Yukawa couplings. The implications of such a threshold enhancement are discussed in two explicit scenarios: i) the Minimal Supersymmetric Standard Model in which the $A$ state is produced via the top quark mediated gluon fusion process and decays into photons predominantly through...

  6. THRESHOLD PARAMETER OF THE EXPECTED LOSSES

    Directory of Open Access Journals (Sweden)

    Josip Arnerić

    2012-12-01

    Full Text Available The objective of extreme value analysis is to quantify the probabilistic behavior of unusually large losses using only the extreme values above some high threshold, rather than all of the data; this gives a better fit to the tail of the distribution than traditional methods that assume normality. In our case we estimate market risk using daily returns of the CROBEX index at the Zagreb Stock Exchange. It is therefore necessary to model the distribution of excesses above some threshold; the Generalized Pareto Distribution (GPD) is used as much more reliable than the normal distribution because it puts the accent on the extreme values. The parameters of the GPD are estimated by maximum likelihood (MLE). The contribution of this paper is to specify a threshold which is large enough for the GPD approximation to be valid, but low enough that a sufficient number of observations is available for a precise fit.
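    The peaks-over-threshold procedure the abstract describes can be sketched as follows. This is an illustrative Python sketch, not the paper's code: the CROBEX returns are replaced by synthetic heavy-tailed data, and for a dependency-free example the GPD parameters are obtained from moment estimators rather than the paper's MLE fit.

```python
import numpy as np

rng = np.random.default_rng(0)
# heavy-tailed synthetic "daily returns" standing in for the CROBEX data
losses = -rng.standard_t(df=4, size=20000)

u = np.quantile(losses, 0.95)        # high threshold: 95th percentile of losses
excess = losses[losses > u] - u      # exceedances over the threshold

# Method-of-moments GPD estimates (a simple stand-in for MLE):
# for a GPD, mean = beta/(1-xi) and var = mean^2/(1-2*xi)
m, v = excess.mean(), excess.var()
xi = 0.5 * (1.0 - m * m / v)         # shape parameter
beta = m * (1.0 - xi)                # scale parameter

def tail_prob(x):
    # P(loss > x) for x >= u, via the peaks-over-threshold approximation
    p_u = np.mean(losses > u)
    return p_u * (1.0 + xi * (x - u) / beta) ** (-1.0 / xi)
```

    The threshold trade-off the paper studies shows up directly here: raising the 0.95 quantile improves the GPD approximation but shrinks `excess`, making the parameter estimates noisier.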

  7. The threshold contrast thickness evaluated with different CDMAM phantoms and software

    Directory of Open Access Journals (Sweden)

    Fabiszewska Ewa

    2016-03-01

    Full Text Available The image quality in digital mammography is described by specifying the thickness and diameter of disks with threshold visibility. The European Commission recommends the CDMAM phantom as a tool to evaluate threshold contrast visibility in digital mammography [1, 2]. Inaccuracy in the manufacturing process of CDMAM 3.4 phantoms (Artinis Medical Systems BV), as well as differences between the software used to analyze the images, may lead to discrepancies in the evaluation of threshold contrast visibility. The authors of this work used three CDMAM 3.4 phantoms with serial numbers 1669, 1840, and 1841 and two mammography systems of the same manufacturer with identical types of detectors. The images were analyzed with the EUREF software (version 1.5.5 with the CDCOM 1.6.exe file) and the Artinis software (version 1.2 with the CDCOM 1.6.exe file). The differences between the observed thicknesses of the threshold contrast structures, which were caused by differences between the CDMAM 3.4 phantoms, were not reproduced in the same way on two mammography units of the same type. The thickness reported by the Artinis software was generally greater than the one determined by the EUREF software, but the ratio of the results depended on the phantom and on the diameter of the structure. It was not possible to establish correction factors that would compensate for the differences between the results obtained with different CDMAM 3.4 phantoms, or for the differences between the software packages. Great care must be taken when results of tests performed with different CDMAM 3.4 phantoms and different software applications are interpreted.

  8. Thresholds in the response of free-floating plant abundance to variation in hydraulic connectivity, nutrients, and macrophyte abundance in a large floodplain river

    Science.gov (United States)

    Giblin, Shawn M.; Houser, Jeffrey N.; Sullivan, John F.; Langrehr, H.A.; Rogala, James T.; Campbell, Benjamin D.

    2014-01-01

    Duckweed and other free-floating plants (FFP) can form dense surface mats that affect ecosystem condition and processes, and can impair public use of aquatic resources. FFP obtain their nutrients from the water column, and the formation of dense FFP mats can be a consequence and indicator of river eutrophication. We conducted two complementary surveys of diverse aquatic areas of the Upper Mississippi River as an in situ approach for estimating thresholds in the response of FFP abundance to nutrient concentration and physical conditions in a large, floodplain river. Local regression analysis was used to estimate thresholds in the relations between FFP abundance and phosphorus (P) concentration (0.167 mg l−1), nitrogen (N) concentration (0.808 mg l−1), water velocity (0.095 m s−1), and aquatic macrophyte abundance (65 % cover). FFP tissue concentrations suggested P limitation was more likely in spring, N limitation was more likely in late summer, and N limitation was most likely in backwaters with minimal hydraulic connection to the channel. The thresholds estimated here, along with observed patterns in nutrient limitation, provide river scientists and managers with criteria to consider when attempting to modify FFP abundance in off-channel areas of large river systems.

  9. Binding and Pauli principle corrections in subthreshold pion-nucleus scattering

    International Nuclear Information System (INIS)

    Kam, J. de

    1981-01-01

    In this investigation I develop a three-body model for the single scattering optical potential in which the nucleon binding and the Pauli principle are accounted for. A unitarity pole approximation is used for the nucleon-core interaction. Calculations are presented for the π-⁴He elastic scattering cross sections at energies below the inelastic threshold and for the real part of the π-⁴He scattering length by solving the three-body equations. Off-shell kinematics and the Pauli principle are carefully taken into account. The binding correction and the Pauli principle correction each have an important effect on the differential cross sections and the scattering length. However, large cancellations occur between these two effects. I find an increase in the π-⁴He scattering length by 100%, an increase in the cross sections by 20-30%, and a shift of the minimum in π-⁴He scattering to forward angles by 10°. (orig.)

  10. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    Science.gov (United States)

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  11. Towards self-correcting quantum memories

    Science.gov (United States)

    Michnicki, Kamil

    This thesis presents a model of self-correcting quantum memories where quantum states are encoded using topological stabilizer codes and error correction is done using local measurements and local dynamics. Quantum noise poses a practical barrier to developing quantum memories. This thesis explores two types of models for suppressing noise. One model suppresses thermalizing noise energetically by engineering a Hamiltonian with a high energy barrier between code states. Thermalizing dynamics are modeled phenomenologically as a Markovian quantum master equation with only local generators. The second model suppresses stochastic noise with a cellular automaton that performs error correction using syndrome measurements and a local update rule. Several ways of visualizing and thinking about stabilizer codes are presented in order to design ones that have a high energy barrier: the non-local Ising model, the quasi-particle graph and the theory of welded stabilizer codes. I develop the theory of welded stabilizer codes and use it to construct a code with the highest known energy barrier in 3-d for spin Hamiltonians: the welded solid code. Although the welded solid code is not fully self-correcting, it has some self-correcting properties: it has an increased memory lifetime for an increased system size, up to a temperature-dependent maximum. One strategy for increasing the energy barrier is to mediate an interaction with an external system. I prove a no-go theorem for a class of Hamiltonians where the interaction terms are local, of bounded strength, and commute with the stabilizer group. Under these conditions the energy barrier can only be increased by a multiplicative constant. I develop a cellular automaton to perform error correction on a state encoded using the toric code. The numerical evidence indicates that while there is no threshold, the model can extend the memory lifetime significantly. While of less theoretical importance, this could be practical for real

  12. Self-interaction error in density functional theory: a mean-field correction for molecules and large systems

    International Nuclear Information System (INIS)

    Ciofini, Ilaria; Adamo, Carlo; Chermette, Henry

    2005-01-01

    Corrections to the self-interaction error which is rooted in all standard exchange-correlation functionals in density functional theory (DFT) have become the object of increasing interest. After an introduction recalling the origin of the self-interaction error in the DFT formalism, and a brief review of the self-interaction-free approximations, we present a simple, yet effective, self-consistent method to correct this error. The model is based on an average density self-interaction correction (ADSIC), where both exchange and Coulomb contributions are screened by a fraction of the electron density. The ansatz on which the method is built makes it particularly appealing, due to its simplicity and its favorable scaling with the size of the system. We have tested the ADSIC approach on one of the classical pathological problems for density functional theory: the direct estimation of the ionization potential from orbital eigenvalues. A large set of different chemical systems, ranging from simple atoms to large fullerenes, has been considered as test cases. Our results show that the ADSIC approach provides good numerical values for all the molecular systems, the agreement with the experimental values increasing, due to its average ansatz, with the size (conjugation) of the systems.

  13. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem".
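    The baseline the abstract starts from (count kmer frequencies, then flag rare kmers as likely errors) can be sketched in a few lines of Python. This is a minimal illustration of frequency-threshold error detection, not the paper's repeat-aware method; the reads and threshold are made up for the example.

```python
from collections import Counter

def kmer_counts(reads, k):
    # Count every length-k substring across all reads
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(counts, threshold):
    # kmers observed fewer times than the threshold are flagged as likely errors
    return {km for km, c in counts.items() if c < threshold}

reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT", "ACGTACGT"]
counts = kmer_counts(reads, k=4)
errors = suspect_kmers(counts, threshold=2)  # flags the singleton kmer "ACGA"
```

    The abstract's point is precisely where this baseline fails: in a repeat-rich genome an erroneous kmer one mismatch away from a high-copy repeat can exceed any fixed threshold, which is why the paper infers genomic frequencies instead of trusting raw counts.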

  14. The effect of random dopant fluctuation on threshold voltage and drain current variation in junctionless nanotransistors

    International Nuclear Information System (INIS)

    Rezapour, Arash; Rezapour, Pegah

    2015-01-01

    We investigate the effect of random dopant fluctuation on the threshold voltage and drain current variation in a two-gate nanoscale transistor, using quantum-corrected technology computer-aided design (TCAD) simulations (10000 randomizations). With these simulations, we could study the effects of varying the dimensions (length and width), the oxide thickness, and the dopant factor of a transistor on the threshold voltage and drain current in the subthreshold (off) and overthreshold (on) regions. It was found that in the subthreshold region the variability of the drain current and threshold voltage is relatively fixed, while in the overthreshold region the variability of the threshold voltage and drain current decreases remarkably, despite the slight reduction of gate voltage diffusion (compared with that of the subthreshold region). These results have been interpreted by using previously reported models for threshold current variability, load displacement, and simple analytical calculations. Scaling analysis shows that the variability of the characteristics of this semiconductor increases as the short-channel effects increase. Therefore, with a slight increase of length and a reduction of width, oxide thickness, and dopant factor, we could correct for the effect of the short channel. (paper)

  15. Simultaneous correction of large low-order and high-order aberrations with a new deformable mirror technology

    Science.gov (United States)

    Rooms, F.; Camet, S.; Curis, J. F.

    2010-02-01

    A new technology of deformable mirror will be presented. Based on magnetic actuators, these deformable mirrors feature record strokes (more than +/- 45μm of astigmatism and focus correction) with an optimized temporal behavior. Furthermore, the development has been made in order to have a large density of actuators within a small clear aperture (typically 52 actuators within a diameter of 9.0mm). We will present the key benefits of this technology for vision science: simultaneous correction of low and high order aberrations, AO-SLO image without artifacts due to the membrane vibration, optimized control, etc. Using recent papers published by Doble, Thibos and Miller, we show the performances that can be achieved by various configurations using statistical approach. The typical distribution of wavefront aberrations (both the low order aberration (LOA) and high order aberration (HOA)) have been computed and the correction applied by the mirror. We compare two configurations of deformable mirrors (52 and 97 actuators) and highlight the influence of the number of actuators on the fitting error, the photon noise error and the effective bandwidth of correction.

  16. Re: Supplement to Request for Correction - IRIS Assessment of Trichloroethylene

    Science.gov (United States)

    Letter from Faye Graul providing supplemental information to her Request for Correction for Threshold of Trichloroethylene Contamination of Maternal Drinking Waters submitted under the Information Quality Act.

  17. Thresholds of ion turbulence in tokamaks

    International Nuclear Information System (INIS)

    Garbet, X.; Laurent, L.; Mourgues, F.; Roubin, J.P.; Samain, A.; Zou, X.L.

    1991-01-01

    The linear thresholds of ionic turbulence are numerically calculated for the tokamaks JET and TORE SUPRA. It is proved that the stability domain at η_i > 0 is determined by trapped ion modes and is characterized by η_i ≥ 1 and a threshold L_Ti/R of order (0.2-0.3)/(1+T_i/T_e). The latter value is significantly smaller than what has been previously predicted. Experimental temperature profiles in heated discharges are usually marginal with respect to this criterion. It is also shown that the eigenmodes are low-frequency, low-wavenumber ballooned modes, which may produce a very large transport once the threshold ion temperature gradient is reached.

  18. Erratum: Correction to Table 3, in: Equivalent threshold sound pressure levels (ETSPL) for Sennheiser HDA 280 supra-aural audiometric earphones in the frequency range 125 Hz to 8000 Hz (International Journal of Audiology (2009) 48 (271-276))

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2014-01-01

    The main results in Poulsen & Oakley (2009) are given as the equivalent threshold sound pressure level, ETSPL, measured in an acoustic coupler specified in IEC 60318-3. These results are all correct. The ETSPL values for the ear simulator specified in IEC 60318-1 were calculated from acoustic

  19. Large-Scale Corrections to the CMB Anisotropy from Asymptotic de Sitter Mode

    Science.gov (United States)

    Sojasi, A.

    2018-01-01

    In this study, large-scale effects of an asymptotic de Sitter mode on the CMB anisotropy are investigated. Besides the slow variation of the Hubble parameter at the onset of the last stage of inflation, the recent observational constraints from Planck and WMAP on the spectral index confirm that the geometry of the universe cannot be pure de Sitter in this era. Motivated by this evidence, we use this mode to calculate the power spectrum of the CMB anisotropy on large scales. It is found that the CMB spectrum depends on the index ν of the Hankel function; in the de Sitter limit ν → 3/2, the power spectrum reduces to the scale-invariant result. The result also shows that the anisotropy spectrum depends on the angular scale and the slow-roll parameter, and these additional corrections are swept away by a cutoff scale parameter H ≪ M_* < M_P.

  20. The O(α_s²) heavy quark corrections to charged current deep-inelastic scattering at large virtualities

    Energy Technology Data Exchange (ETDEWEB)

    Blümlein, Johannes, E-mail: Johannes.Bluemlein@desy.de [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Hasselhuhn, Alexander [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany); Research Institute for Symbolic Computation (RISC), Johannes Kepler University, Altenbergerstraße 69, A-4040 Linz (Austria); Pfoh, Torsten [Deutsches Elektronen–Synchrotron, DESY, Platanenallee 6, D-15738 Zeuthen (Germany)

    2014-04-15

    We calculate the O(α_s²) heavy flavor corrections to charged current deep-inelastic scattering at large scales Q² ≫ m². The contributing Wilson coefficients are given as convolutions between massive operator matrix elements and massless Wilson coefficients. Foregoing results in the literature are extended and corrected. Numerical results are presented for the kinematic region of the HERA data.

  1. Generalized radiative corrections for hadronic targets

    International Nuclear Information System (INIS)

    Calan, C. de; Navelet, H.; Picard, J.

    1990-02-01

    Besides the theory of radiative corrections at order α² for reactions involving an arbitrary number of particles, this report gives the complete formula for the correction factor δ in dσ = dσ_Born(1 + δ). The only approximation made here - unavoidable in this formulation - is to assume that the Born amplitude can be factorized. This calculation is valid for spin-zero bosons. In the spin-1/2 fermion case, an extra contribution appears which has been analytically computed using a minor approximation. Special care has been devoted to the 1/v divergence of the amplitude near thresholds.

  2. Dental age estimation: the role of probability estimates at the 10 year threshold.

    Science.gov (United States)

    Lucas, Victoria S; McDonald, Fraser; Neil, Monica; Roberts, Graham

    2014-08-01

    The use of probability at the 18 year threshold has simplified the reporting of dental age estimates for emerging adults. The availability of simple-to-use, widely available software has enabled the development of the probability threshold for individual teeth in growing children. Tooth development stage data from a previous study at the 10 year threshold were reused to estimate the probability of developing teeth being above or below the 10 year threshold using the NORMDIST function in Microsoft Excel. The probabilities within an individual subject are averaged to give a single probability that a subject is above or below 10 years old. To test the validity of this approach, dental panoramic radiographs of 50 female and 50 male children within 2 years of the chronological age were assessed with the chronological age masked. Once the whole validation set of 100 radiographs had been assessed, the masking was removed and the chronological age and dental age compared. The dental age was compared with chronological age to determine whether the dental age correctly or incorrectly identified a validation subject as above or below the 10 year threshold. The probability estimates correctly identified children as above or below on 94% of occasions. Only 2% of the validation group with a chronological age of less than 10 years were assigned to the over 10 year group. This study indicates the very high accuracy of assignment at the 10 year threshold. Further work at other legally important age thresholds is needed to explore the value of this approach to the technique of age estimation. Copyright © 2014. Published by Elsevier Ltd.
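    The averaging-of-probabilities step can be sketched as follows. This is an illustrative Python sketch of what Excel's NORMDIST computation does here, assuming a normal age-at-stage distribution per tooth; the per-tooth means and standard deviations below are made-up numbers, not the study's reference data.

```python
from math import erf, sqrt

def p_above_threshold(mean_age, sd_age, threshold=10.0):
    # P(subject's age > threshold) for one tooth, assuming the age at the
    # observed development stage is normal (what NORMDIST evaluates)
    z = (threshold - mean_age) / sd_age
    normal_cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return 1.0 - normal_cdf

def classify(tooth_estimates, threshold=10.0):
    # Average the per-tooth probabilities into one subject-level probability,
    # then assign the subject above or below the threshold
    probs = [p_above_threshold(m, s, threshold) for m, s in tooth_estimates]
    p = sum(probs) / len(probs)
    return p, ("above" if p >= 0.5 else "below")
```

    For example, a subject whose assessed teeth carry (mean, sd) pairs of (10.8, 0.9) and (11.2, 1.1) years is classified "above", while (8.5, 0.8) alone gives "below".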

  3. Large tan β in gauge-mediated SUSY-breaking models

    International Nuclear Information System (INIS)

    Rattazzi, R.

    1997-01-01

    We explore some topics in the phenomenology of gauge-mediated SUSY-breaking scenarios having a large hierarchy of Higgs VEVs, v_U/v_D = tan β ≫ 1. Some motivation for this scenario is first presented. We then use a systematic, analytic expansion (including some threshold corrections) to calculate the μ-parameter needed for proper electroweak breaking and the radiative corrections to the B-parameter, which fortuitously cancel at leading order. If B = 0 at the messenger scale then tan β is naturally large and calculable; we calculate it. We then confront this prediction with classical and quantum vacuum stability constraints arising from the Higgs-slepton potential, and indicate the preferred values of the top quark mass and messenger scale(s). The possibility of vacuum instability in a different direction yields an upper bound on the messenger mass scale complementary to the familiar bound from gravitino relic abundance. Next, we calculate the rate for b→sγ and show the possibility of large deviations (in the direction currently favored by experiment) from standard-model and small tan β predictions. Finally, we discuss the implications of these findings and their applicability to future, broader and more detailed investigations. (orig.)

  4. Correction for polychromatic aberration in computed tomography images

    International Nuclear Information System (INIS)

    Naparstek, A.

    1979-01-01

    A method and apparatus for correcting a computed tomography image for polychromatic aberration caused by the non-linear interaction (i.e. the energy dependent attenuation characteristics) of different body constituents, such as bone and soft tissue, with a polychromatic X-ray beam are described in detail. An initial image is conventionally computed from path measurements made as source and detector assembly scan a body section. In the improvement, each image element of the initial computed image representing attenuation is recorded in a store and is compared with two thresholds, one representing bone and the other soft tissue. Depending on the element value relative to the thresholds, a proportion of the respective constituent is allocated to that element location and corresponding bone and soft tissue projections are determined and stored. An error projection generator calculates projections of polychromatic aberration errors in the raw image data from recalled bone and tissue projections using a multidimensional polynomial function which approximates the non-linear interaction involved. After filtering, these are supplied to an image reconstruction computer to compute image element correction values which are subtracted from raw image element values to provide a corrected reconstructed image for display. (author)

  5. Adaptive optics for reduced threshold energy in femtosecond laser induced optical breakdown in water based eye model

    Science.gov (United States)

    Hansen, Anja; Krueger, Alexander; Ripken, Tammo

    2013-03-01

    In ophthalmic microsurgery tissue dissection is achieved using femtosecond laser pulses to create an optical breakdown. For vitreo-retinal applications the irradiance distribution in the focal volume is distorted by the anterior components of the eye causing a raised threshold energy for breakdown. In this work, an adaptive optics system enables spatial beam shaping for compensation of aberrations and investigation of wave front influence on optical breakdown. An eye model was designed to allow for aberration correction as well as detection of optical breakdown. The eye model consists of an achromatic lens for modeling the eye's refractive power, a water chamber for modeling the tissue properties, and a PTFE sample for modeling the retina's scattering properties. Aberration correction was performed using a deformable mirror in combination with a Hartmann-Shack-sensor. The influence of an adaptive optics aberration correction on the pulse energy required for photodisruption was investigated using transmission measurements for determination of the breakdown threshold and video imaging of the focal region for study of the gas bubble dynamics. The threshold energy is considerably reduced when correcting for the aberrations of the system and the model eye. Also, a raise in irradiance at constant pulse energy was shown for the aberration corrected case. The reduced pulse energy lowers the potential risk of collateral damage which is especially important for retinal safety. This offers new possibilities for vitreo-retinal surgery using femtosecond laser pulses.

  6. Optimal threshold estimation for binary classifiers using game theory.

    Science.gov (United States)

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.
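    The proposed operating point is easy to compute from scored examples: sweep candidate thresholds and pick the one where the empirical ROC curve crosses the descending diagonal, i.e. where sensitivity ≈ 1 − FPR (equivalently, sensitivity equals specificity). A minimal Python sketch, assuming scores and binary labels as NumPy arrays (the function name is ours, not the paper's):

```python
import numpy as np

def minimax_threshold(scores, labels):
    # Pick the threshold where the ROC curve crosses the descending
    # diagonal: sensitivity (TPR) closest to 1 - FPR.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    best_t, best_gap = None, np.inf
    for t in np.unique(scores):
        tpr = np.mean(pos >= t)      # sensitivity
        fpr = np.mean(neg >= t)      # 1 - specificity
        gap = abs(tpr - (1.0 - fpr))
        if gap < best_gap:
            best_t, best_gap = t, gap
    return best_t
```

    On two well-separated symmetric score distributions, the threshold lands at the point of equal error rates, mirroring the "specificity equals sensitivity" rule the abstract singles out.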

  7. Spike-threshold adaptation predicted by membrane potential dynamics in vivo.

    Directory of Open Access Journals (Sweden)

    Bertrand Fontaine

    2014-04-01

    Full Text Available Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neurons responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.

  8. Possibly Large Corrections to the Inflationary Observables

    CERN Document Server

    Bartolo, N

    2008-01-01

    We point out that the theoretical predictions for the inflationary observables may be generically altered by the presence of fields which are heavier than the Hubble rate during inflation and whose dynamics is usually neglected. They introduce corrections which may be easily larger than both the second-order contributions in the slow-roll parameters and the accuracy expected in the forthcoming experiments.

  9. Features and performance of a large gas Cherenkov detector with threshold regulation

    Energy Technology Data Exchange (ETDEWEB)

    Alberdi, J.; Alvarez-Taviel, J.; Asenjo, L.; Colino, N.; Diez-Hedo, F.; Duran, I.; Gonzalez, J.; Hernandez, J.J.; Ladron de Guevara, P.; Marquina, M.A.

    1988-01-15

    We present here the development, main features and calibration procedures for a new type of gas Cherenkov detector, based upon the ability to control its threshold by regulating the temperature of the gas used as radiator. We also include the performance of this detector in particle identification.

  10. Coloring geographical threshold graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Percus, Allon [Los Alamos National Laboratory; Muller, Tobias [EINDHOVEN UNIV. OF TECH

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = ln n / ln ln n (1 + o(1)). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)², and specify the constant C.
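
    To make the GTG construction concrete, here is a small self-contained sketch. The edge rule used, (w_u + w_v)/d² ≥ θ, and the exponential weights are assumptions chosen for illustration; the paper's exact model and coloring algorithm may differ in detail. A simple greedy coloring then produces a proper coloring whose size upper-bounds the chromatic number:

```python
import itertools, random

def geographical_threshold_graph(n, theta, seed=0):
    """One common GTG variant (an assumption, not necessarily the paper's
    exact model): uniform positions in the unit square, exponential node
    weights, and an edge when (w_u + w_v) / dist(u, v)**2 >= theta."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    w = [rng.expovariate(1.0) for _ in range(n)]
    adj = {u: set() for u in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        d2 = (pos[u][0] - pos[v][0]) ** 2 + (pos[u][1] - pos[v][1]) ** 2
        if d2 > 0 and (w[u] + w[v]) / d2 >= theta:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def greedy_coloring(adj):
    """Color vertices in decreasing-degree order with the smallest free color."""
    color = {}
    for u in sorted(adj, key=lambda u: -len(adj[u])):
        used = {color[v] for v in adj[u] if v in color}
        c = 0
        while c in used:
            c += 1
        color[u] = c
    return color

adj = geographical_threshold_graph(60, theta=200.0)
color = greedy_coloring(adj)
# Proper coloring: no edge joins two same-colored vertices.
assert all(color[u] != color[v] for u in adj for v in adj[u])
print(max(color.values()) + 1, "colors")
```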

  11. Finite temperature QCD corrections to lepton-pair formation in a quark-gluon plasma

    International Nuclear Information System (INIS)

    Altherr, T.

    1989-02-01

    We discuss the O(α_s) corrections to lepton-pair production in a quark-gluon plasma in equilibrium. The corrections are found to be very small in the domain of interest for ultrarelativistic heavy-ion collisions. Interesting effects, however, appear at the annihilation threshold of the thermalized quarks.

  12. The H-mode power threshold in JET

    Energy Technology Data Exchange (ETDEWEB)

    Start, D F.H.; Bhatnagar, V P; Campbell, D J; Cordey, J G; Esch, H P.L. de; Gormezano, C; Hawkes, N; Horton, L; Jones, T T.C.; Lomas, P J; Lowry, C; Righi, E; Rimini, F G; Saibene, G; Sartori, R; Sips, G; Stork, D; Thomas, P; Thomsen, K; Tubbing, B J.D.; Von Hellermann, M; Ward, D J [Commission of the European Communities, Abingdon (United Kingdom). JET Joint Undertaking

    1994-07-01

    New H-mode threshold data over a range of toroidal field and density values have been obtained from the present campaign. The scaling with n_e B_t is almost identical to that of the 1991/92 period for the same discharge conditions. The scaling with toroidal field alone gives somewhat higher thresholds than the older data. The 1991/92 database shows a scaling of P_th (power threshold) with n_e B_t which is approximately linear and agrees well with that observed on other tokamaks. For NBI and carbon target tiles the threshold power is a factor of two higher with the ion ∇B drift directed away from the target than with the drift towards the target. The combination of ICRH and beryllium tiles appears to be beneficial for reducing P_th. The power threshold is largely insensitive to plasma current, X-point height and the distance between the last closed flux surface and the limiter, at least for values greater than 2 cm. (authors). 3 refs., 6 figs.

  13. Validation and evaluation of epistemic uncertainty in rainfall thresholds for regional scale landslide forecasting

    Science.gov (United States)

    Gariano, Stefano Luigi; Brunetti, Maria Teresa; Iovine, Giulio; Melillo, Massimo; Peruccacci, Silvia; Terranova, Oreste Giuseppe; Vennari, Carmela; Guzzetti, Fausto

    2015-04-01

    Prediction of rainfall-induced landslides can rely on empirical rainfall thresholds. These are obtained from the analysis of past rainfall events that have (or have not) resulted in slope failures. Accurate prediction requires reliable thresholds, which need to be validated before their use in operational landslide warning systems. Despite the clear relevance of validation, only a few studies have addressed the problem and have proposed and tested robust validation procedures. We propose a validation procedure that allows for the definition of optimal thresholds for early warning purposes. The validation is based on contingency tables, skill scores, and receiver operating characteristic (ROC) analysis. To establish the optimal threshold, which maximizes the correct landslide predictions and minimizes the incorrect predictions, we propose an index that results from the linear combination of three weighted skill scores. Selection of the optimal threshold depends on the scope and the operational characteristics of the early warning system. The choice is made by selecting the weights appropriately and by searching for the optimal (maximum) value of the index. We discuss weaknesses in the validation procedure caused by the inherent lack of information (epistemic uncertainty) on landslide occurrence that is typical of large study areas. When working at the regional scale, landslides may have occurred and may not have been reported. This results in biases and variations in the contingencies and the skill scores. We introduce two parameters to represent the unknown proportion of rainfall events (above and below the threshold) for which landslides occurred and went unreported. We show that even a very small underestimation in the number of landslides can result in a significant decrease in the performance of a threshold as measured by the skill scores, and that the variations in the skill scores differ depending on whether the uncertainty affects events above or below the threshold.
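
    As an illustration of the validation machinery, the sketch below computes standard contingency-table skill scores and a weighted index of the kind described. The particular three scores and the weights are assumptions for illustration, not the combination adopted in the study:

```python
def skill_scores(hits, false_alarms, misses, correct_negatives):
    """Standard contingency-table skill scores used to validate
    rainfall thresholds against reported landslide occurrences."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    pod = a / (a + c)               # probability of detection
    pofd = b / (b + d)              # probability of false detection
    far = b / (a + b)               # false-alarm ratio
    hk = pod - pofd                 # Hanssen-Kuipers skill score
    return pod, far, hk

def weighted_index(pod, far, hk, weights=(0.5, 0.25, 0.25)):
    """Illustrative linear combination (assumed weights): higher is better,
    rewarding detections and HK skill while penalising false alarms."""
    w1, w2, w3 = weights
    return w1 * pod + w2 * (1.0 - far) + w3 * hk

# Example: a threshold that caught 40 of 50 landslide-triggering rainfall
# events, with 20 false alarms over 200 events without landslides.
pod, far, hk = skill_scores(hits=40, false_alarms=20, misses=10,
                            correct_negatives=180)
print(round(weighted_index(pod, far, hk), 3))  # -> 0.742
```

    Unreported landslides move events from the "correct negative" and "false alarm" cells into "hit" and "miss" cells, which is exactly the epistemic-uncertainty effect the abstract quantifies.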

  14. Universality in radiative corrections for non-supersymmetric heterotic vacua

    CERN Document Server

    Angelantonj, C; Tsulaia, Mirian

    2016-01-01

    Properties of moduli-dependent gauge threshold corrections in non-supersymmetric heterotic vacua are reviewed. In the absence of space-time supersymmetry these amplitudes are no longer protected and receive contributions from the whole tower of string states, BPS and not. Nevertheless, the difference of gauge thresholds for non-Abelian gauge groups displays a remarkable universality property, even when supersymmetry is absent. We present a simple heterotic construction that shares this universal behaviour and expose the necessary conditions on the supersymmetry breaking mechanism for universality to occur.

  15. Ballistic deficit correction

    International Nuclear Information System (INIS)

    Duchene, G.; Moszynski, M.; Curien, D.

    1991-01-01

    The EUROGAM data-acquisition has to handle a large number of events/s. Typical in-beam experiments using heavy-ion fusion reactions assume the production of about 50 000 compound nuclei per second deexciting via particle and γ-ray emissions. The very powerful γ-ray detection of EUROGAM is expected to produce high-fold event rates as large as 10⁴ events/s. Such high count rates introduce, in a common dead time mode, large dead times for the whole system associated with the processing of the pulse, its digitization and its readout (from the preamplifier pulse up to the readout of the information). In order to minimize the dead time, the shaping time constant τ, usually about 3 μs for large-volume Ge detectors, has to be reduced. Smaller shaping times, however, will adversely affect the energy resolution due to ballistic deficit. One possible solution is to operate the linear amplifier, with a somewhat smaller shaping time constant (in the present case we chose τ = 1.5 μs), in combination with a ballistic deficit compensator. The ballistic deficit can be corrected in different ways using a gated integrator, a hardware correction or even a software correction. In this paper we present a comparative study of the software and hardware corrections as well as gated integration.

  16. Threshold resummation and higher order effects in QCD

    International Nuclear Information System (INIS)

    Ringer, Felix Maximilian

    2015-01-01

    Quantum chromodynamics (QCD) is a quantum field theory that describes the strong interactions between quarks and gluons, the building blocks of all hadrons. Thanks to the experimental progress over the past decades, there has been an ever-growing need for QCD precision calculations for scattering processes involving hadrons. For processes at large momentum transfer, perturbative QCD offers a systematic approach for obtaining precise predictions. This approach relies on two key concepts: the asymptotic freedom of QCD and factorization. In a perturbative calculation at higher orders, the infrared cancellation between virtual and real emission diagrams generally leaves behind logarithmic contributions. In many observables relevant for hadronic scattering these logarithms are associated with a kinematic threshold and are hence known as "threshold logarithms". They become large when the available phase space for real gluon emission shrinks. In order to obtain a reliable prediction from QCD, the threshold logarithms need to be taken into account to all orders in the strong coupling constant, a procedure known as "threshold resummation". The main focus of my PhD thesis is on studies of QCD threshold resummation effects beyond the next-to-leading logarithmic order. Here we primarily consider the production of hadron pairs in hadronic collisions as an example. In addition, we also consider hadronic jet production, which is particularly interesting for the phenomenology at the LHC. For both processes, we fully take into account the non-trivial QCD color structure of the underlying partonic hard-scattering cross sections. We find that threshold resummation leads to sizable numerical effects in the kinematic regimes relevant for comparisons to experimental data.

  17. Quantum Error Correction and Fault Tolerant Quantum Computing

    CERN Document Server

    Gaitan, Frank

    2008-01-01

    It was once widely believed that quantum computation would never become a reality. However, the discovery of quantum error correction and the proof of the accuracy threshold theorem nearly ten years ago gave rise to extensive development and research aimed at creating a working, scalable quantum computer. Over a decade has passed since this monumental accomplishment yet no book-length pedagogical presentation of this important theory exists. Quantum Error Correction and Fault Tolerant Quantum Computing offers the first full-length exposition on the realization of a theory once thought impossible.

  18. On the effect of the t anti t threshold on electroweak parameters

    International Nuclear Information System (INIS)

    Kniehl, B.A.; Sirlin, A.

    1992-09-01

    Threshold effects in e⁺e⁻ → t t̄ induce contributions to key electroweak parameters such as Δρ, Δr, and sin²θ_W beyond the scope of perturbative calculations of O(α) and O(αα_s). We quantitatively analyze these effects using once-subtracted dispersion relations which manifestly satisfy the relevant Ward identities. The derivation and properties of the dispersion relations are discussed at some length. We find that the threshold effects enhance the familiar perturbative O(αα_s) corrections by between 25% and 40%, depending on the t-quark mass. The shift in the predicted value of the W-boson mass due to the threshold effects ranges from -8 MeV at m_t = 91 GeV to -45 MeV at m_t = 250 GeV. (orig.)

  19. QCD corrections, virtual heavy quark effects and electroweak precision measurements

    International Nuclear Information System (INIS)

    Kniehl, B.A.; Kuehn, J.H.; Stuart, R.G.

    1988-01-01

    QCD corrections to virtual heavy quark effects on electroweak parameters are calculated, which may affect planned precision measurements at SLC and LEP. The influence of toponium and T b resonances is incorporated as well as the proper threshold behaviour of the imaginary part of the vacuum polarization function. The shift of the W-boson mass from these corrections and their influence on the polarization asymmetry are calculated and compared to the envisaged experimental precision. (orig.)

  20. 78 FR 6272 - Rules Relating to Additional Medicare Tax; Correction

    Science.gov (United States)

    2013-01-30

    ... Rules Relating to Additional Medicare Tax; Correction AGENCY: Internal Revenue Service (IRS), Treasury... regulations are relating to Additional Hospital Insurance Tax on income above threshold amounts (``Additional Medicare Tax''), as added by the Affordable Care Act. Specifically, these proposed regulations provide...

  1. Threshold-improved predictions for charm production in deep-inelastic scattering

    International Nuclear Information System (INIS)

    Lo Presti, N.A.; Kawamura, H.; Vogt, A.

    2010-08-01

    We have extended previous results on the threshold expansion of the gluon coefficient function for the charm contribution to the deep-inelastic structure function F₂ by deriving all threshold-enhanced contributions at the next-to-next-to-leading order. The size of these corrections is briefly illustrated, and a first step towards extending this improvement to more differential charm-production cross sections is presented. (orig.)

  2. Corrections to the large-angle scattering amplitude

    International Nuclear Information System (INIS)

    Goloskokov, S.V.; Kudinov, A.V.; Kuleshov, S.P.

    1979-01-01

    The high-energy behaviour of scattering amplitudes is considered within the framework of the Logunov-Tavchelidze quasipotential approach. A representation of the scattering amplitude of two scalar particles, convenient for the study of its asymptotic properties, is given. Corrections to the leading term of the scattering amplitude are obtained to first and second order in 1/p, where p is the momentum of the colliding particles in the centre-of-inertia system. An example of the use of the obtained formulas for a concrete quasipotential is given

  3. Radiative corrections for associated ZH production at future e+e- colliders

    International Nuclear Information System (INIS)

    Kniehl, B.A.

    1991-11-01

    The ZHf f̄ four-point function is calculated in the one-loop approximation of the Standard Model and full analytic results are presented. The loop contributions due to both light and new heavy fermions are inspected in detail. The dominant mechanisms of Higgs-boson production from fermions are compared. The effect of radiative corrections on the cross section of f f̄ → ZH including bremsstrahlung is studied. The spectrum of hard bremsstrahlung is integrated analytically. The implications for Higgs-boson searches at future e⁺e⁻ colliders in the energy range 200 GeV ≤ √s ≤ 1.5 TeV, which includes both LEP 2 and the Next Linear Collider, are analyzed. At √s = 500 GeV, for instance, weak corrections in the modified on-mass-shell scheme vary between -2% and +7%, depending on the actual values of the Higgs-boson and top-quark masses. Electromagnetic corrections strongly reduce the cross section close to the ZH-production threshold, while they may considerably enhance it far above threshold. (orig.)

  4. Low-Threshold Active Teaching Methods for Mathematic Instruction

    Science.gov (United States)

    Marotta, Sebastian M.; Hargis, Jace

    2011-01-01

    In this article, we present a large list of low-threshold active teaching methods categorized so the instructor can efficiently access and target the deployment of conceptually based lessons. The categories include teaching strategies for lecture on large and small class sizes; student action individually, in pairs, and groups; games; interaction…

  5. Threshold corrections, generalised prepotentials and Eichler integrals

    CERN Document Server

    Angelantonj, Carlo; Pioline, Boris

    2015-06-12

    We continue our study of one-loop integrals associated to BPS-saturated amplitudes in $\mathcal{N}=2$ heterotic vacua. We compute their large-volume behaviour, and express them as Fourier series in the complexified volume, with Fourier coefficients given in terms of Niebur-Poincaré series in the complex structure modulus. The closure of Niebur-Poincaré series under modular derivatives implies that such integrals derive from holomorphic prepotentials $f_n$, generalising the familiar prepotential of $\mathcal{N}=2$ supergravity. These holomorphic prepotentials transform anomalously under T-duality, in a way characteristic of Eichler integrals. We use this observation to compute their quantum monodromies under the duality group. We extend the analysis to modular integrals with respect to Hecke congruence subgroups, which naturally arise in compactifications on non-factorisable tori and freely-acting orbifolds. In this case, we derive new explicit results including closed-form expressions for integrals involv...

  6. Threshold quantum cryptography

    International Nuclear Information System (INIS)

    Tokunaga, Yuuki; Okamoto, Tatsuaki; Imoto, Nobuyuki

    2005-01-01

    We present the concept of threshold collaborative unitary transformation or threshold quantum cryptography, which is a kind of quantum version of threshold cryptography. Threshold quantum cryptography states that classical shared secrets are distributed to several parties and a subset of them, whose number is greater than a threshold, collaborates to compute a quantum cryptographic function, while keeping each share secretly inside each party. The shared secrets are reusable if no cheating is detected. As a concrete example of this concept, we show a distributed protocol (with threshold) of conjugate coding
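
    The classical ingredient referenced here, threshold secret sharing, can be sketched with Shamir's (t, n) scheme: the secret is the constant term of a random polynomial of degree t-1 over a prime field, and any t shares recover it by Lagrange interpolation at zero. This is a minimal classical illustration, not the quantum protocol itself:

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def share(secret, t, n, rng=random.Random(42)):
    """Shamir (t, n) sharing: any t shares reconstruct the secret,
    while fewer than t reveal nothing about it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:5]) == 123456789
```

    The quantum version described in the abstract replaces the reconstructed function evaluation with a collaboratively applied unitary, while each classical share stays secret inside its party.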

  7. A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric

    International Nuclear Information System (INIS)

    Liu Chaowen; Xu Jingping; Liu Lu; Lu Hanhan; Huang Yuan

    2016-01-01

    A threshold-voltage model for a stacked high-k gate dielectric GaAs MOSFET is established by solving a two-dimensional Poisson's equation in the channel and considering the short-channel, DIBL and quantum effects. The simulated results are in good agreement with the Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, the impacts of the structural and physical parameters of the stacked high-k gate dielectric on the threshold-voltage shift and the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold and temperature characteristics; on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. (paper)

  8. An algorithm to correct saturated mass spectrometry ion abundances for enhanced quantitation and mass accuracy in omic studies

    Energy Technology Data Exchange (ETDEWEB)

    Bilbao, Aivett; Gibbons, Bryson C.; Slysz, Gordon W.; Crowell, Kevin L.; Monroe, Matthew E.; Ibrahim, Yehia M.; Smith, Richard D.; Payne, Samuel H.; Baker, Erin S.

    2018-04-01

    The mass accuracy and peak intensity of ions detected by mass spectrometry (MS) measurements are essential to facilitate compound identification and quantitation. However, high concentration species can easily cause problems if their ion intensities reach beyond the limits of the detection system, leading to distorted and non-ideal detector response (e.g. saturation), and largely precluding the calculation of accurate m/z and intensity values. Here we present an open source computational method to correct peaks above a defined intensity (saturated) threshold determined by the MS instrumentation such as the analog-to-digital converters or time-to-digital converters used in conjunction with time-of-flight MS. In this method, the isotopic envelope for each observed ion above the saturation threshold is compared to its expected theoretical isotopic distribution. The most intense isotopic peak for which saturation does not occur is then utilized to re-calculate the precursor m/z and correct the intensity, resulting in both higher mass accuracy and greater dynamic range. The benefits of this approach were evaluated with proteomic and lipidomic datasets of varying complexities. After correcting the high concentration species, reduced mass errors and enhanced dynamic range were observed for both simple and complex omic samples. Specifically, the mass error dropped by more than 50% in most cases with highly saturated species and dynamic range increased by 1-2 orders of magnitude for peptides in a blood serum sample.

  9. Development of the heated length to diameter correction factor on critical heat flux using the artificial neural networks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Ho; Baek, Won Pil; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Chun, Tae Hyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    Using artificial neural networks (ANNs), an analytical study of the heated length effect on critical heat flux (CHF) has been carried out to improve the accuracy of CHF prediction based on local-condition correlations or tables. The study suggests a feasible criterion for the threshold length-to-diameter (L/D) value at which heated length begins to affect CHF; within this criterion, an L/D correction factor has been developed through conventional regression. To validate the developed L/D correction factor, CHF experiments for various heated lengths have been carried out under low and intermediate pressure conditions. The developed threshold L/D correlation provides a new feasible criterion for the L/D threshold value. The developed correction factor gives reasonable accuracy for the original database, with an average error of -2.18% and an RMS error of 27.75%, and promising results for new experimental data. 7 refs., 12 figs., 1 tab. (Author)

  11. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample

  12. An investigation of the effect of load ratio on near-threshold fatigue crack propagation in a Ni-Base superalloy

    International Nuclear Information System (INIS)

    Schooling, J.M.; Reed, P.A.S.

    1995-01-01

    The near-threshold fatigue crack growth behavior of Waspaloy has been investigated to elucidate important parameters relevant to the development of a modelling program for fatigue behavior in Ni-base superalloys. At low values of load ratio, R, threshold stress intensity values are found to be highly sensitive to R. This behavior is rationalized in terms of roughness-induced crack closure. At high load ratios there is less sensitivity to R, and stage II behavior appears to persist to threshold. The threshold stress intensity at high R ratios is lower than that for closure-corrected stage I (low load ratio) threshold behavior, indicating the existence of two intrinsic threshold values. This difference appears to be due not only to crack branching and deflection in stage I, but also to an intrinsic difference in near-threshold resistance between the two growth modes. (author)

  13. Correction of population stratification in large multi-ethnic association studies.

    Directory of Open Access Journals (Sweden)

    David Serre

    2008-01-01

    Full Text Available The vast majority of genetic risk factors for complex diseases have, taken individually, a small effect on the end phenotype. Population-based association studies therefore need very large sample sizes to detect significant differences between affected and non-affected individuals. Including thousands of affected individuals in a study requires recruitment in numerous centers, possibly from different geographic regions. Unfortunately such a recruitment strategy is likely to complicate the study design and to generate concerns regarding population stratification. We analyzed 9,751 individuals representing three main ethnic groups - Europeans, Arabs and South Asians - that had been enrolled from 154 centers involving 52 countries for a global case/control study of acute myocardial infarction. All individuals were genotyped at 103 candidate genes using 1,536 SNPs selected with a tagging strategy that captures most of the genetic diversity in different populations. We show that relying solely on self-reported ethnicity is not sufficient to exclude population stratification and we present additional methods to identify and correct for stratification. Our results highlight the importance of carefully addressing population stratification and of carefully "cleaning" the sample prior to analyses to obtain stronger signals of association and to avoid spurious results.

  14. Dealing with ocular artifacts on lateralized ERPs in studies of visual-spatial attention and memory: ICA correction versus epoch rejection.

    Science.gov (United States)

    Drisdelle, Brandi Lee; Aubin, Sébrina; Jolicoeur, Pierre

    2017-01-01

    The objective of the present study was to assess the robustness and reliability of independent component analysis (ICA) as a method for ocular artifact correction in electrophysiological studies of visual-spatial attention and memory. The N2pc and sustained posterior contralateral negativity (SPCN), electrophysiological markers of visual-spatial attention and memory, respectively, are lateralized posterior ERPs typically observed following the presentation of lateral stimuli (targets and distractors) along with instructions to maintain fixation on the center of the visual search display for the entire trial. Traditionally, trials in which subjects may have displaced their gaze are rejected based on a cutoff threshold, minimizing electrophysiological contamination by saccades. Given the loss of data resulting from rejection, we examined ocular correction by comparing results using standard fixation instructions against a condition where subjects were instructed to shift their gaze toward possible targets. Both conditions were analyzed using a rejection threshold and ICA correction for saccade activity management. Results demonstrate that ICA conserves data that would otherwise have been removed and leaves the underlying neural activity intact, as demonstrated by experimental manipulations previously shown to modulate the N2pc and the SPCN. Not only did ICA salvage data without distorting it, but even large eye movements had only subtle effects. Overall, the findings provide convincing evidence for ICA correction, not only for special cases (e.g., subjects who did not follow fixation instructions) but also as a candidate for standard ocular artifact management in electrophysiological studies of visual-spatial attention and memory. © 2016 Society for Psychophysiological Research.

  15. Three-Dimensional Color Code Thresholds via Statistical-Mechanical Mapping

    Science.gov (United States)

    Kubica, Aleksander; Beverland, Michael E.; Brandão, Fernando; Preskill, John; Svore, Krysta M.

    2018-05-01

    Three-dimensional (3D) color codes have advantages for fault-tolerant quantum computing, such as protected quantum gates with relatively low overhead and robustness against imperfect measurement of error syndromes. Here we investigate the storage threshold error rates for bit-flip and phase-flip noise in the 3D color code (3DCC) on the body-centered cubic lattice, assuming perfect syndrome measurements. In particular, by exploiting a connection between error correction and statistical mechanics, we estimate the thresholds for 1D stringlike and 2D sheetlike logical operators to be p_3DCC^(1) ≃ 1.9% and p_3DCC^(2) ≃ 27.6%. We obtain these results by using parallel tempering Monte Carlo simulations to study the disorder-temperature phase diagrams of two new 3D statistical-mechanical models: the four- and six-body random coupling Ising models.

  16. 'TrueCoinc' software utility for calculation of the true coincidence correction

    International Nuclear Information System (INIS)

    Sudar, S.

    2002-01-01

    The true coincidence correction plays an important role in the overall accuracy of γ-ray spectrometry, especially with present-day large-volume detectors. Calculating true coincidence corrections requires detailed nuclear structure information. These data are now available in computerized form from the Nuclear Data Centers through the Internet or on the CD-ROM of the Table of Isotopes. The aim has been to develop software for this calculation, using available databases for the level data, so that the user has to supply only the parameters of the detector to be used. The new computer program runs under the Windows 95/98 operating system. In the framework of the project, a new formula was derived for calculating the summing-out correction and the intensity of alias lines (sum peaks). A file converter for reading ENSDF-2 type files was completed, and reading and converting the original ENSDF format was added to the program. A computer-accessible database of X-ray energies and intensities was created, and X-ray emissions are taken into account in the summing-out calculation. The true coincidence summing-in correction is also calculated; the output shows the two types of corrections independently and computes the final correction as their product. A minimal intensity threshold can be set so that the final list shows only the strongest lines, while the calculation itself takes all transitions into account, independently of the threshold. The program calculates the intensities of X rays (K, L lines), the true coincidence corrections for X rays, and the intensities of the alias γ lines. (author)

  17. Towards a unifying basis of auditory thresholds: binaural summation.

    Science.gov (United States)

    Heil, Peter

    2014-04-01

    Absolute auditory threshold decreases with increasing sound duration, a phenomenon explainable by the assumptions that the sound evokes neural events whose probabilities of occurrence are proportional to the sound's amplitude raised to an exponent of about 3 and that a constant number of events are required for threshold (Heil and Neubauer, Proc Natl Acad Sci USA 100:6151-6156, 2003). Based on this probabilistic model and on the assumption of perfect binaural summation, an equation is derived here that provides an explicit expression of the binaural threshold as a function of the two monaural thresholds, irrespective of whether they are equal or unequal, and of the exponent in the model. For exponents >0, the predicted binaural advantage is largest when the two monaural thresholds are equal and decreases towards zero as the monaural threshold difference increases. This equation is tested and the exponent derived by comparing binaural thresholds with those predicted on the basis of the two monaural thresholds for different values of the exponent. The thresholds, measured in a large sample of human subjects with equal and unequal monaural thresholds and for stimuli with different temporal envelopes, are compatible only with an exponent close to 3. An exponent of 3 predicts a binaural advantage of 2 dB when the two ears are equally sensitive. Thus, listening with two (equally sensitive) ears rather than one has the same effect on absolute threshold as doubling duration. The data suggest that perfect binaural summation occurs at threshold and that peripheral neural signals are governed by an exponent close to 3. They might also shed new light on mechanisms underlying binaural summation of loudness.
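Under the model's stated assumptions (event probability proportional to amplitude raised to an exponent p, a fixed number of events required at threshold, perfect binaural summation), the binaural threshold has a closed form in terms of the two monaural thresholds: T_b = -(20/p)·log10(10^(-p·T_L/20) + 10^(-p·T_R/20)). A sketch of this formula (my transcription of the model, not the paper's code):

```python
import math

def binaural_threshold(t_left, t_right, p=3.0):
    """Binaural threshold (dB) predicted from two monaural thresholds (dB),
    assuming neural event rate ~ amplitude**p, a fixed event count at
    threshold, and perfect binaural summation."""
    s = 10 ** (-p * t_left / 20) + 10 ** (-p * t_right / 20)
    return -(20 / p) * math.log10(s)

# Equal monaural thresholds: predicted binaural advantage is ~2 dB for p = 3,
# i.e. (20/3) * log10(2) ~ 2.01 dB.
advantage = 10.0 - binaural_threshold(10.0, 10.0)
print(round(advantage, 2))  # 2.01

# Very unequal ears: the advantage over the better ear shrinks toward zero.
print(round(0.0 - binaural_threshold(0.0, 40.0), 4))  # ~0.0
```

This reproduces the two qualitative claims of the abstract: a 2 dB advantage for equally sensitive ears and a vanishing advantage as the monaural threshold difference grows.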

  18. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.

    Science.gov (United States)

    Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely.
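The schemes above build on classic Shamir (t, n) secret sharing, where a degree t-1 polynomial over a finite field is evaluated at n points and Lagrange interpolation at x = 0 recovers the secret. A minimal sketch of that underlying primitive (plain Shamir only; the paper's threshold-changeable and two-variable one-way-function machinery is not reproduced here):

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def make_shares(secret, t, n, seed=0):
    """Split `secret` into n shadows; any t of them recover it (Shamir)."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = make_shares(123456789, t=3, n=5)
print(recover(shares[:3]))   # any 3 shares suffice
print(recover(shares[2:5]))
```

Threshold-changeable variants like those discussed above then add mechanisms for updating shadows so the reconstruction threshold t can be raised without redistributing a fresh secret.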

  19. The issue of threshold states

    International Nuclear Information System (INIS)

    Luck, L.

    1994-01-01

    The states which have not joined the Non-proliferation Treaty nor have undertaken any other internationally binding commitment not to develop or otherwise acquire nuclear weapons are considered a threshold states. Their nuclear status is rendered opaque as a conscious policy. Nuclear threshold status remains a key disarmament issue. For those few states, as India, Pakistan, Israel, who have put themselves in this position, the security returns have been transitory and largely illusory. The cost to them, and to the international community committed to the norm of non-proliferation, has been huge. The decisions which could lead to recovery from the situation in which they find themselves are essentially at their own hands. Whatever assistance the rest of international community is able to extend, it will need to be accompanied by a vital political signal

  20. Threshold Theory Tested in an Organizational Setting

    DEFF Research Database (Denmark)

    Christensen, Bo T.; Hartmann, Peter V. W.; Hedegaard Rasmussen, Thomas

    2017-01-01

    A large sample of leaders (N = 4257) was used to test the link between leader innovativeness and intelligence. The threshold theory of the link between creativity and intelligence assumes that below a certain IQ level (approximately IQ 120), there is some correlation between IQ and creative potential, but above this cutoff point, there is no correlation. Support for the threshold theory of creativity was found, in that the correlation between IQ and innovativeness was positive and significant below a cutoff point of IQ 120. Above the cutoff, no significant relation was identified, and the two correlations differed significantly. The finding was stable across distinct parts of the sample, providing support for the theory, although the correlations in all subsamples were small. The findings lend support to the existence of threshold effects using perceptual measures of behavior in real...
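Operationally, a threshold test of this kind amounts to splitting the sample at the hypothesized cutoff (IQ 120) and comparing the correlations in the two subsamples. A sketch on synthetic data (illustrative only; not the study's data or exact statistical test):

```python
import random

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rng = random.Random(42)
iq = [rng.uniform(90, 150) for _ in range(400)]
# Synthetic threshold pattern: the outcome tracks IQ up to 120, then plateaus.
outcome = [min(x, 120.0) + rng.gauss(0, 5) for x in iq]

below = [(x, y) for x, y in zip(iq, outcome) if x < 120]
above = [(x, y) for x, y in zip(iq, outcome) if x >= 120]
r_below = pearson(*zip(*below))
r_above = pearson(*zip(*above))
print(round(r_below, 2), round(r_above, 2))  # strong below the cutoff, ~0 above
```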

  1. A Threshold Cointegration Analysis of Asymmetric Adjustment of OPEC and non-OPEC Monthly Crude Oil Prices

    OpenAIRE

    Ghassan, Hassan B.; Banerjee, Prashanta K.

    2013-01-01

    The purpose of this paper is to analyze the dynamics of crude oil prices of OPEC and non-OPEC countries using threshold cointegration. To capture the long run asymmetric price transmission mechanism, we develop an error correction model within a threshold cointegration and CGARCH errors framework. The empirical contribution of our paper specifies the cointegrating relation between OPEC price and non-OPEC prices and estimates how and to what extent the respective prices adjust to eliminate dis...

  2. Threshold corrections, generalised prepotentials and Eichler integrals

    Directory of Open Access Journals (Sweden)

    Carlo Angelantonj

    2015-08-01

    We continue our study of one-loop integrals associated to BPS-saturated amplitudes in N=2 heterotic vacua. We compute their large-volume behaviour, and express them as Fourier series in the complexified volume, with Fourier coefficients given in terms of Niebur–Poincaré series in the complex structure modulus. The closure of Niebur–Poincaré series under modular derivatives implies that such integrals derive from holomorphic prepotentials fn, generalising the familiar prepotential of N=2 supergravity. These holomorphic prepotentials transform anomalously under T-duality, in a way characteristic of Eichler integrals. We use this observation to compute their quantum monodromies under the duality group. We extend the analysis to modular integrals with respect to Hecke congruence subgroups, which naturally arise in compactifications on non-factorisable tori and freely-acting orbifolds. In this case, we derive new explicit results, including closed-form expressions for integrals involving the Γ0(N) Hauptmodul, a full characterisation of holomorphic prepotentials including their quantum monodromies, as well as concrete formulæ for holomorphic Yukawa couplings.

  3. A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric

    Science.gov (United States)

    Chaowen, Liu; Jingping, Xu; Lu, Liu; Hanhan, Lu; Yuan, Huang

    2016-02-01

    A threshold-voltage model for a stacked high-k gate dielectric GaAs MOSFET is established by solving the two-dimensional Poisson's equation in the channel and accounting for short-channel, DIBL and quantum effects. The simulated results are in good agreement with Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, the impacts of structural and physical parameters of the stacked high-k gate dielectric on the threshold-voltage shift and the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold and temperature characteristics; on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. Project supported by the National Natural Science Foundation of China (No. 61176100).

  4. Percolation bounds for decoding thresholds with correlated erasures in quantum LDPC codes

    Science.gov (United States)

    Hamilton, Kathleen; Pryadko, Leonid

    Correlations between errors can dramatically affect decoding thresholds, in some cases eliminating the threshold altogether. We analyze the existence of a threshold for quantum low-density parity-check (LDPC) codes in the case of correlated erasures. When erasures are positively correlated, the corresponding multi-variate Bernoulli distribution can be modeled in terms of cluster errors, where qubits in clusters of various sizes can be marked all at once. In a code family with distance scaling as a power law of the code length, erasures can always be corrected below percolation on a qubit adjacency graph associated with the code. We bound this correlated percolation transition by weighted (uncorrelated) percolation on a specially constructed cluster connectivity graph, and apply our recent results to construct several bounds for the latter. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-14-1-0272.
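The connection exploited above is that erasure correctability changes sharply at a percolation transition on a graph associated with the code. As a toy illustration of that transition (square-lattice site percolation with threshold ≈ 0.593, not the paper's cluster-connectivity graph), a Monte Carlo estimate of the spanning probability jumps from ~0 to ~1 across the threshold:

```python
import random

def spans(grid):
    """True if occupied sites connect top row to bottom row (4-neighbor DFS)."""
    n = len(grid)
    stack = [(0, c) for c in range(n) if grid[0][c]]
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

def spanning_fraction(p, n=20, trials=200, seed=1):
    """Fraction of random n x n site configurations (occupation prob. p)
    that contain a spanning cluster."""
    rng = random.Random(seed)
    hits = sum(
        spans([[rng.random() < p for _ in range(n)] for _ in range(n)])
        for _ in range(trials)
    )
    return hits / trials

f_sub = spanning_fraction(0.40)   # well below the ~0.593 threshold
f_super = spanning_fraction(0.75) # well above it
print(f_sub, f_super)  # ~0 vs ~1
```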

  5. Efficiency calibration and coincidence summing correction for large arrays of NaI(Tl) detectors in soccer-ball and castle geometries

    International Nuclear Information System (INIS)

    Anil Kumar, G.; Mazumdar, I.; Gothe, D.A.

    2009-01-01

    Efficiency calibration and coincidence summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations: a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry, and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both of these arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays while determining the energy dependence of the efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radio-nuclides that decay with a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely, 60Co, 46Sc, 94Nb and 24Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of 137Cs and 60Co. The simulated and the experimental results are found to be in good agreement. This demonstrates the reliability of the correction method for efficiency calibration of two large arrays in very different configurations.

  6. Efficiency calibration and coincidence summing correction for large arrays of NaI(Tl) detectors in soccer-ball and castle geometries

    Energy Technology Data Exchange (ETDEWEB)

    Anil Kumar, G., E-mail: anilg@tifr.res.i [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India); Mazumdar, I.; Gothe, D.A. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005 (India)

    2009-11-21

    Efficiency calibration and coincidence summing correction have been performed for two large arrays of NaI(Tl) detectors in two different configurations: a compact array of 32 conical detectors of pentagonal and hexagonal shapes in soccer-ball geometry, and an array of 14 straight hexagonal NaI(Tl) detectors in castle geometry. Both of these arrays provide a large solid angle of detection, leading to considerable coincidence summing of gamma rays. The present work aims to understand the effect of coincidence summing of gamma rays while determining the energy dependence of the efficiencies of these two arrays. We have carried out extensive GEANT4 simulations with radio-nuclides that decay with a two-step cascade, considering both arrays in their realistic geometries. The absolute efficiencies have been simulated for gamma energies from 700 to 2800 keV using four different double-photon emitters, namely, 60Co, 46Sc, 94Nb and 24Na. The efficiencies so obtained have been corrected for coincidence summing using the method proposed by Vidmar et al. The simulations have also been carried out for the same energies assuming mono-energetic point sources, for comparison. Experimental measurements have also been carried out using calibrated point sources of 137Cs and 60Co. The simulated and the experimental results are found to be in good agreement. This demonstrates the reliability of the correction method for efficiency calibration of two large arrays in very different configurations.

  7. Room Temperature Ultralow Threshold GaN Nanowire Polariton Laser

    KAUST Repository

    Das, Ayan

    2011-08-01

    We report ultralow threshold polariton lasing from a single GaN nanowire strongly coupled to a large-area dielectric microcavity. The threshold carrier density is 3 orders of magnitude lower than that of photon lasing observed in the same device, and 2 orders of magnitude lower than any existing room-temperature polariton devices. Spectral, polarization, and coherence properties of the emission were measured to confirm polariton lasing. © 2011 American Physical Society.

  8. Radiative corrections in K→3π decays

    International Nuclear Information System (INIS)

    Bissegger, M.; Fuhrer, A.; Gasser, J.; Kubis, B.; Rusetsky, A.

    2009-01-01

    We investigate radiative corrections to K→3π decays. In particular, we extend the non-relativistic framework developed recently to include real and virtual photons and show that, in a well-defined power counting scheme, the results reproduce corrections obtained in the relativistic calculation. Real photons are included exactly, beyond the soft-photon approximation, and we compare the result with the latter. The singularities generated by pionium near threshold are investigated, and a region is identified where standard perturbation theory in the fine structure constant α may be applied. We expect that the formulae provided allow one to extract S-wave ππ scattering lengths from the cusp effect in these decays with high precision

  9. High-frequency (8 to 16 kHz) reference thresholds and intrasubject threshold variability relative to ototoxicity criteria using a Sennheiser HDA 200 earphone.

    Science.gov (United States)

    Frank, T

    2001-04-01

    The first purpose of this study was to determine high-frequency (8 to 16 kHz) thresholds for standardizing reference equivalent threshold sound pressure levels (RETSPLs) for a Sennheiser HDA 200 earphone. The second and perhaps more important purpose of this study was to determine whether repeated high-frequency thresholds using a Sennheiser HDA 200 earphone had a lower intrasubject threshold variability than the ASHA 1994 significant threshold shift criteria for ototoxicity. High-frequency thresholds (8 to 16 kHz) were obtained for 100 (50 male, 50 female) normally hearing (0.25 to 8 kHz) young adults (mean age of 21.2 yr) in four separate test sessions using a Sennheiser HDA 200 earphone. The mean and median high-frequency thresholds were similar for each test session and increased as frequency increased. At each frequency, the high-frequency thresholds were not significantly (p > 0.05) different for gender, test ear, or test session. The median thresholds at each frequency were similar to the 1998 interim ISO RETSPLs; however, large standard deviations and wide threshold distributions indicated very high intersubject threshold variability, especially at 14 and 16 kHz. Threshold repeatability was determined by finding the threshold differences between each possible test session comparison (N = 6). About 98% of all of the threshold differences were within a clinically acceptable range of ±10 dB from 8 to 14 kHz. The threshold differences between each subject's second, third, and fourth minus their first test session were also found to determine whether intrasubject threshold variability was less than the ASHA 1994 criteria for determining a significant threshold shift due to ototoxicity. The results indicated a false-positive rate of 0% for a threshold shift ≥20 dB at any frequency and a false-positive rate of 2% for a threshold shift >10 dB at two consecutive frequencies.
This study verified that the output of high-frequency audiometers at 0 dB HL using
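The two shift criteria quoted from ASHA 1994 (a shift of at least 20 dB at any single frequency, or more than 10 dB at two consecutive test frequencies) translate directly into code. A sketch using only those two rules as stated in the abstract (the full ASHA guideline contains additional criteria not shown here), with hypothetical threshold values:

```python
def significant_shift_asha(baseline, followup):
    """Flag a significant ototoxic threshold shift using the two criteria
    quoted in the abstract: >= 20 dB at any one frequency, or > 10 dB at two
    consecutive test frequencies. Inputs are dB HL thresholds in ascending
    test-frequency order (e.g. 8, 10, 12, 14, 16 kHz)."""
    shifts = [f - b for b, f in zip(baseline, followup)]
    if any(s >= 20 for s in shifts):
        return True
    return any(a > 10 and b > 10 for a, b in zip(shifts, shifts[1:]))

# Hypothetical 8-16 kHz thresholds (dB HL):
base = [10, 15, 20, 25, 40]
print(significant_shift_asha(base, [15, 20, 25, 30, 45]))  # False: uniform +5
print(significant_shift_asha(base, [10, 15, 45, 25, 40]))  # True: +25 at one frequency
print(significant_shift_asha(base, [10, 30, 35, 25, 40]))  # True: +15 at two consecutive
```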

  10. Handling of BLM abort thresholds in the LHC

    CERN Document Server

    Nebot Del Busto, E; Holzer, EB; Zamantzas, C; Kruk, G; Nordt, A; Sapinski, M; Nemcic, M; Orecka, A; Jackson, S; Roderick, C; Skaugen, A

    2011-01-01

    The Beam Loss Monitoring system (BLM) for the LHC consists of about 3600 Ionization Chambers (IC) located around the ring. Its main purpose is to request a beam abort when the measured losses exceed a certain threshold. The BLM detectors integrate the measured signals in 12 different time intervals (running from 40 µs to 83.8 s), enabling a different set of abort thresholds depending on the duration of the beam loss. Furthermore, 32 energy levels running from 450 GeV to 7 TeV account for the fact that the energy density of a particle shower increases with the energy of the primary particle, i.e. the beam energy. Thus, a set of 3600 × 12 × 32 ≈ 1.4 × 10^6 thresholds must be handled. These thresholds are highly critical for the safety of the machine and depend to a large part on human judgment, which cannot be replaced by automatic test procedures. The BLM team has defined well established procedures to compute, set and check new BLM thresholds, in order to avoid and/or find non-conformities due to manipulat...
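The quoted dimensionality (3600 monitors × 12 integration windows × 32 energy levels) makes the bookkeeping concrete: each abort decision is a lookup keyed by monitor, loss duration, and beam energy. A structural sketch of such a table (indexing only; the threshold values themselves are placeholders, not LHC settings):

```python
# Flat threshold table indexed by (monitor, integration window, energy level).
# Placeholder values only; real BLM thresholds are set and validated per detector.
N_MONITORS, N_WINDOWS, N_ENERGIES = 3600, 12, 32

table = [0.0] * (N_MONITORS * N_WINDOWS * N_ENERGIES)

def index(monitor, window, energy):
    """Row-major flat index for (monitor, window, energy)."""
    return (monitor * N_WINDOWS + window) * N_ENERGIES + energy

def abort_requested(monitor, window, energy, measured_loss):
    """A beam abort is requested when the measured loss exceeds the threshold
    for this monitor, integration window, and beam-energy level."""
    return measured_loss > table[index(monitor, window, energy)]

print(len(table))  # 1382400 thresholds to manage
```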

  11. Can adaptive threshold-based metabolic tumor volume (MTV) and lean body mass corrected standard uptake value (SUL) predict prognosis in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy?

    Science.gov (United States)

    Akagunduz, Ozlem Ozkaya; Savas, Recep; Yalman, Deniz; Kocacelebi, Kenan; Esassolak, Mustafa

    2015-11-01

    To evaluate the predictive value of adaptive threshold-based metabolic tumor volume (MTV), maximum standardized uptake value (SUVmax) and maximum lean body mass corrected SUV (SULmax) measured on pretreatment positron emission tomography and computed tomography (PET/CT) imaging in head and neck cancer patients treated with definitive radiotherapy/chemoradiotherapy. Pretreatment PET/CT of the 62 patients with locally advanced head and neck cancer who were treated consecutively between May 2010 and February 2013 were reviewed retrospectively. The maximum FDG uptake of the primary tumor was defined according to SUVmax and SULmax. Multiple threshold levels between 60% and 10% of the SUVmax and SULmax were tested with intervals of 5% to 10% in order to define the most suitable threshold value for the metabolic activity of each patient's tumor (adaptive threshold). MTV was calculated according to this value. We evaluated the relationship of mean values of MTV, SUVmax and SULmax with treatment response, local recurrence, distant metastasis and disease-related death. Receiver-operating characteristic (ROC) curve analysis was done to obtain optimal predictive cut-off values for MTV and SULmax which were found to have a predictive value. Local recurrence-free (LRFS), disease-free (DFS) and overall survival (OS) were examined according to these cut-offs. Forty six patients had complete response, 15 had partial response, and 1 had stable disease 6 weeks after the completion of treatment. Median follow-up of the entire cohort was 18 months. Of 46 complete responders 10 had local recurrence, and of 16 partial or no responders 10 had local progression. Eighteen patients died. Adaptive threshold-based MTV had significant predictive value for treatment response (p=0.011), local recurrence/progression (p=0.050), and disease-related death (p=0.024). SULmax had a predictive value for local recurrence/progression (p=0.030). 
ROC curve analysis revealed a cut-off value of 14.00 mL for

  12. Rainfall thresholds for the possible occurrence of landslides in Italy

    Directory of Open Access Journals (Sweden)

    M. T. Brunetti

    2010-03-01

    In Italy, rainfall is the primary trigger of landslides that frequently cause fatalities and large economic damage. Using a variety of information sources, we have compiled a catalogue listing 753 rainfall events that have resulted in landslides in Italy. For each event in the catalogue, the exact or approximate location of the landslide and the time or period of initiation of the slope failure are known, together with information on the rainfall duration D and the mean rainfall intensity I that resulted in the slope failure. The catalogue represents the single largest collection of information on rainfall-induced landslides in Italy, and was exploited to determine the minimum rainfall conditions necessary for landslide occurrence in Italy, and in the Abruzzo Region, central Italy. For this purpose, new national rainfall thresholds for Italy and new regional rainfall thresholds for the Abruzzo Region were established, using two independent statistical methods: a Bayesian inference method and a new Frequentist approach. The two methods proved complementary, with the Bayesian method more suited to analyzing small data sets, and the Frequentist method performing better when applied to large data sets. The new regional thresholds for the Abruzzo Region are lower than the new national thresholds for Italy, and lower than the regional thresholds proposed in the literature for the Piedmont and Lombardy Regions in northern Italy, and for the Campania Region in southern Italy. This is important, because it shows that landslides in Italy can be triggered by less severe rainfall conditions than previously recognized. The Frequentist method used in this work allows for the definition of multiple minimum rainfall thresholds, each based on a different exceedance probability level. This makes the thresholds suited for the design of probabilistic schemes for the prediction of rainfall-induced landslides. A scheme based on four
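Rainfall intensity-duration thresholds are conventionally written as a power law, I = α·D^β. A frequentist-flavoured sketch of deriving one from triggering events: fit the power law in log space, then lower the intercept so a chosen fraction of events plots at or above the curve. This is an illustrative simplification, not the paper's exact Bayesian or Frequentist procedure:

```python
import math
import random

def fit_id_threshold(durations, intensities, exceedance=0.95):
    """Fit log10(I) = a + b*log10(D) by least squares, then shift the
    intercept down so ~`exceedance` of the triggering events lie at or above
    the curve. Returns (alpha, beta) for the threshold I = alpha * D**beta."""
    xs = [math.log10(d) for d in durations]
    ys = [math.log10(i) for i in intensities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    residuals = sorted(y - (a + b * x) for x, y in zip(xs, ys))
    shift = residuals[int((1 - exceedance) * n)]  # lower-envelope offset
    return 10 ** (a + shift), b

# Synthetic triggering events scattered above a "true" threshold I = 10 * D**-0.6:
rng = random.Random(7)
D = [10 ** rng.uniform(0, 2) for _ in range(300)]          # durations, 1-100 h
I = [10 * d ** -0.6 * 10 ** rng.uniform(0, 0.5) for d in D]
alpha, beta = fit_id_threshold(D, I)
print(round(alpha, 2), round(beta, 2))  # beta should come out near -0.6
```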

  13. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  14. CARA Risk Assessment Thresholds

    Science.gov (United States)

    Hejduk, M. D.

    2016-01-01

    Warning remediation threshold (Red threshold): Pc level at which warnings are issued, and active remediation considered and usually executed. Analysis threshold (Green to Yellow threshold): Pc level at which analysis of event is indicated, including seeking additional information if warranted. Post-remediation threshold: Pc level to which remediation maneuvers are sized in order to achieve event remediation and obviate any need for immediate follow-up maneuvers. Maneuver screening threshold: Pc compliance level for routine maneuver screenings (more demanding than regular Red threshold due to additional maneuver uncertainty).
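The four thresholds above define an ordered response ladder on the collision probability Pc. A sketch of such a ladder; the roles follow the record, but the numeric Pc levels are hypothetical placeholders (the record does not give values):

```python
# CARA-style response ladder on collision probability Pc. Roles follow the
# record; all numeric Pc levels here are hypothetical placeholders.
RED_PC = 1e-4                 # warning issued; remediation considered/executed
YELLOW_PC = 1e-5              # analysis of the event indicated
POST_REMEDIATION_PC = 1e-6    # level to which remediation maneuvers are sized
MANEUVER_SCREENING_PC = 1e-5  # stricter than red, for routine maneuver screening

def assess(pc):
    """Map a collision probability to the indicated action level."""
    if pc >= RED_PC:
        return "red: issue warning, consider and usually execute remediation"
    if pc >= YELLOW_PC:
        return "yellow: analyze event, seek additional information if warranted"
    return "green: no action required"

print(assess(3e-4))  # red
print(assess(3e-5))  # yellow
print(assess(3e-7))  # green
```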

  15. Validity and reliability of in-situ air conduction thresholds measured through hearing aids coupled to closed and open instant-fit tips.

    Science.gov (United States)

    O'Brien, Anna; Keidser, Gitte; Yeend, Ingrid; Hartley, Lisa; Dillon, Harvey

    2010-12-01

    Audiometric measurements through a hearing aid ('in-situ') may facilitate provision of hearing services where these are limited. This study investigated the validity and reliability of in-situ air conduction hearing thresholds measured with closed and open domes relative to thresholds measured with insert earphones, and explored sources of variability in the measures. Twenty-four adults with sensorineural hearing impairment attended two sessions in which thresholds and real-ear-to-dial-difference (REDD) values were measured. Without correction, significantly higher low-frequency thresholds in dB HL were measured in-situ than with insert earphones. Differences were due predominantly to differences in ear canal SPL, as measured with the REDD, which were attributed to leaking low-frequency energy. Test-retest data yielded higher variability with the closed dome coupling due to inconsistent seals achieved with this tip. For all three conditions, inter-participant variability in the REDD values was greater than intra-participant variability. Overall, in-situ audiometry is as valid and reliable as conventional audiometry provided appropriate REDD corrections are made and ambient sound in the test environment is controlled.

  16. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  17. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    Science.gov (United States)

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
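The core idea of an SVD-based background correction is to estimate the dominant background spectral component from a blank run and subtract its projection from each sample spectrum. A minimal rank-1, pure-Python sketch of that idea (power iteration stands in for a full SVD; the published SVD-BC method differs in detail):

```python
def top_right_singular_vector(A, iters=100):
    """Dominant right singular vector of matrix A (rows = time points,
    columns = wavelengths) via power iteration on A^T A."""
    ncols = len(A[0])
    v = [1.0] * ncols
    for _ in range(iters):
        Av = [sum(row[j] * v[j] for j in range(ncols)) for row in A]
        w = [sum(A[i][j] * Av[i] for i in range(len(A))) for j in range(ncols)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def remove_background(sample, blank):
    """Subtract each sample spectrum's projection onto the blank's dominant
    spectral direction (a rank-1 SVD-style background correction)."""
    v = top_right_singular_vector(blank)
    out = []
    for row in sample:
        proj = sum(r * x for r, x in zip(row, v))
        out.append([r - proj * x for r, x in zip(row, v)])
    return out

# Toy data: the blank is one background spectrum b scaled over time; a
# background-only sample row should be corrected to ~zero.
b = [1.0, 2.0, 3.0]
blank = [[0.5 * x for x in b], [1.0 * x for x in b], [2.0 * x for x in b]]
sample = [[4.0 * x for x in b]]
corrected = remove_background(sample, blank)
print([round(x, 6) for x in corrected[0]])  # ~[0, 0, 0]
```

A real implementation would keep several background components and handle peaks overlapping the background direction, which is where the noise amplification noted in the abstract can arise.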

  18. Thresholds in Xeric Hydrology and Biogeochemistry

    Science.gov (United States)

    Meixner, T.; Brooks, P. D.; Simpson, S. C.; Soto, C. D.; Yuan, F.; Turner, D.; Richter, H.

    2011-12-01

    Due to water limitation, thresholds in hydrologic and biogeochemical processes are common in arid and semi-arid systems. Some of these thresholds, such as those governing rainfall-runoff relationships, have been well studied. However, to gain a full picture of the role that thresholds play in driving the hydrology and biogeochemistry of xeric systems, a full view of the entire array of processes at work is needed. Here we walk through the landscape of xeric systems, illustrating the powerful role of hydrologic thresholds in xeric system biogeochemistry. To understand xeric hydro-biogeochemistry, two key ideas need to be kept in focus. First, it is important to start from a framework of reaction and transport. Second, an understanding of the temporal and spatial components of the thresholds that have a large impact on hydrologic and biogeochemical fluxes is needed. In the uplands themselves, episodic rewetting and drying of soils permits accelerated biogeochemical processing but also more gradual drainage of water through the subsurface than expected in simple conceptions of biogeochemical processes. Hydrologic thresholds (water content above hygroscopic) result in a stop-start nutrient spiral of material across the landscape, since runoff connecting uplands to xeric perennial riparian zones is episodic and often only transports materials a short distance (100's of m). This episodic movement results in important and counter-intuitive nutrient inputs to riparian zones but also significant processing and uptake of nutrients. The floods that transport these biogeochemicals also result in significant input to riparian groundwater and may be key to sustaining these critical ecosystems. Importantly, the flood-driven recharge process is itself a threshold process dependent on flood characteristics (floods greater than 100 cubic meters per second) and antecedent conditions (losing to near-neutral gradients). Floods also appear to influence where arid and semi

  19. Classification error of the thresholded independence rule

    DEFF Research Database (Denmark)

    Bak, Britta Anker; Fenger-Grøn, Morten; Jensen, Jens Ledet

    We consider classification in the situation of two groups with normally distributed data in the ‘large p small n’ framework. To counterbalance the high number of variables we consider the thresholded independence rule. An upper bound on the classification error is established which is tailored...

  20. Application of habitat thresholds in conservation: Considerations, limitations, and future directions

    Directory of Open Access Journals (Sweden)

    Yntze van der Hoek

    2015-01-01

    Full Text Available Habitat thresholds are often interpreted as the minimum required area of habitat, and subsequently promoted as conservation targets in natural resource policies and planning. Unfortunately, several recent reviews and messages of caution on the application of habitat thresholds in conservation have largely fallen on deaf ears, leading to a dangerous oversimplification and generalization of the concept. We highlight the prevalence of oversimplification and over-generalization of results from habitat threshold studies in policy documentation, the consequences of such over-generalization, and directions for habitat threshold studies that have conservation applications without risking overgeneralization. We argue that in order to steer away from misapplication of habitat thresholds in conservation, we should not focus on generalized nominal habitat values (i.e., amounts or percentages of habitat), but on the use of habitat threshold modeling for comparative exercises of area-sensitivity or the identification of environmental dangers. In addition, we should remain focused on understanding the processes and mechanisms underlying species responses to habitat change. Finally, studies that focus on deriving nominal threshold values should do so only if the thresholds are detailed, species-specific, and translated into conservation targets particular to the study area only.

  1. Rainfall threshold definition using an entropy decision approach and radar data

    Directory of Open Access Journals (Sweden)

    V. Montesarchio

    2011-07-01

    Full Text Available Flash flood events are floods characterised by a very rapid response of basins to storms, often resulting in loss of life and property damage. Due to the specific space-time scale of this type of flood, the lead time available for triggering civil protection measures is typically short. Rainfall threshold values specify the amount of precipitation for a given duration that generates a critical discharge in a given river cross section; if the threshold values are exceeded, a critical situation can develop in river sites exposed to alluvial risk. It is therefore possible to directly compare the observed or forecasted precipitation with the critical reference values, without running online real-time forecasting systems. The focus of this study is the Mignone River basin, located in Central Italy. The critical rainfall threshold values are evaluated by minimising a utility function based on the informative entropy concept and by using a simulation approach based on radar data. The study concludes with a system performance analysis in terms of correctly issued warnings, false alarms and missed alarms.

  2. Automatic Semiconductor Wafer Image Segmentation for Defect Detection Using Multilevel Thresholding

    Directory of Open Access Journals (Sweden)

    Saad N.H.

    2016-01-01

    Full Text Available Quality control is one of the important processes in semiconductor manufacturing. Many issues remain to be solved in the semiconductor manufacturing industry regarding the rate of production with respect to time. In most semiconductor assemblies, a lot of wafers from various processes in semiconductor wafer manufacturing need to be inspected manually by human experts, a procedure that requires the full concentration of the operators. This human inspection procedure, however, is time consuming and highly subjective. In order to overcome this problem, implementation of machine vision is the best solution. This paper presents automatic defect segmentation of semiconductor wafer images based on a multilevel thresholding algorithm which can be further adopted in a machine vision system. In this work, the defect image, initially in RGB, is first converted to a gray scale image. Median filtering is then applied to enhance the gray scale image. The modified multilevel thresholding algorithm is then performed on the enhanced image. The algorithm works in three main stages: determination of the peak locations of the histogram, segmentation of the histogram between the peaks, and determination of the first global minimum of the histogram that corresponds to the threshold value of the image. The proposed approach is evaluated using defective wafer images. The experimental results show that it can segment the defects correctly and that it outperforms other thresholding techniques such as Otsu and iterative thresholding.
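    The peak/valley idea behind the three-stage algorithm can be sketched on a synthetic image. This is an illustrative simplification (dominant-peak search plus a valley minimum between them), not the authors' published implementation; the `valley_threshold` helper and all image parameters are invented:

    ```python
    import numpy as np

    def valley_threshold(gray, min_sep=30):
        """Simplified peak/valley multilevel idea: locate the two dominant
        histogram peaks, then threshold at the minimum between them."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # light smoothing
        p1 = int(np.argmax(smooth))                                # tallest peak
        # second peak: tallest bin at least `min_sep` gray levels away
        far = [i for i in range(256) if abs(i - p1) >= min_sep]
        p2 = max(far, key=lambda i: smooth[i])
        lo, hi = sorted((p1, p2))
        return lo + int(np.argmin(smooth[lo:hi + 1]))              # valley minimum

    # synthetic "wafer": dark background with one brighter defect patch
    rng = np.random.default_rng(0)
    img = rng.normal(60, 8, (128, 128))
    img[40:60, 40:60] = rng.normal(180, 8, (20, 20))
    gray = np.clip(img, 0, 255).astype(np.uint8)

    t = valley_threshold(gray)
    defect_mask = gray > t
    ```

    On this toy image the threshold lands in the valley between the background and defect modes, so a simple comparison segments the defect patch.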

  3. Thresholds of Toxicological Concern - Setting a threshold for testing below which there is little concern.

    Science.gov (United States)

    Hartung, Thomas

    2017-01-01

    Low dose, low risk; very low dose, no real risk. Setting a pragmatic threshold below which concerns become negligible is the purpose of thresholds of toxicological concern (TTC). The idea is that such threshold values do not need to be established for each and every chemical based on experimental data, but that by analyzing the distribution of lowest or no-effect doses of many chemicals, a TTC can be defined - typically using the 5th percentile of this distribution and lowering it by an uncertainty factor of, e.g., 100. In doing so, TTC aims to compare exposure information (dose) with a threshold below which any hazard manifestation is very unlikely to occur. The history and current developments of this concept are reviewed and the application of TTC for different regulated products and their hazards is discussed. TTC lends itself as a pragmatic filter to deprioritize testing needs whenever real-life exposures are much lower than levels where hazard manifestation would be expected, a situation that is called "negligible exposure" in the REACH legislation, though the TTC concept has not been fully incorporated in its implementation (yet). Other areas and regulations - especially in the food sector and for pharmaceutical impurities - are more proactive. Large, curated databases on toxic effects of chemicals provide us with the opportunity to set TTC for many hazards and substance classes and thus offer a precautionary second tier for risk assessments if hazard cannot be excluded. This allows focusing testing efforts better on relevant exposures to chemicals.
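    The numerical core of the TTC recipe described above is small enough to state directly. The dose values below are invented for illustration only; the 5th-percentile and factor-of-100 choices follow the text:

    ```python
    import numpy as np

    # hypothetical no-effect doses (mg per kg body weight per day) for a
    # substance class -- invented numbers, for illustration only
    noel = np.array([0.5, 1.2, 3.0, 0.8, 10.0, 2.5, 0.3, 7.5, 4.4, 1.9,
                     0.7, 5.1, 12.0, 0.9, 2.2, 6.3, 0.4, 3.8, 1.1, 8.7])

    p5 = np.percentile(noel, 5)   # 5th percentile of the no-effect distribution
    ttc = p5 / 100.0              # lowered by an uncertainty factor of 100

    def negligible_exposure(dose, ttc):
        """Tier-one screen: exposures below the TTC are deprioritized."""
        return dose < ttc
    ```

    The resulting TTC sits well below every individual no-effect dose, which is what makes it usable as a precautionary filter when substance-specific data are missing.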

  4. Preoperative thresholds for pulmonary valve replacement in patients with corrected tetralogy of Fallot using cardiovascular magnetic resonance.

    NARCIS (Netherlands)

    Oosterhof, T.; Straten, A. van; Vliegen, H.W.; Meijboom, F.J.; Dijk, A.P.J. van; Spijkerboer, A.M.; Bouma, B.J.; Zwinderman, A.H.; Hazekamp, M.G.; Roos, A.; Mulder, B.J.M.

    2007-01-01

    BACKGROUND: To facilitate the optimal timing of pulmonary valve replacement, we analyzed preoperative thresholds of right ventricular (RV) volumes above which no decrease or normalization of RV size takes place after surgery. METHODS AND RESULTS: Between 1993 and 2006, 71 adult patients with

  5. Preoperative thresholds for pulmonary valve replacement in patients with corrected tetralogy of Fallot using cardiovascular magnetic resonance

    NARCIS (Netherlands)

    Oosterhof, Thomas; van Straten, Alexander; Vliegen, Hubert W.; Meijboom, Folkert J.; van Dijk, Arie P. J.; Spijkerboer, Anje M.; Bouma, Berto J.; Zwinderman, Aeilko H.; Hazekamp, Mark G.; de Roos, Albert; Mulder, Barbara J. M.

    2007-01-01

    Background - To facilitate the optimal timing of pulmonary valve replacement, we analyzed preoperative thresholds of right ventricular ( RV) volumes above which no decrease or normalization of RV size takes place after surgery. Methods and Results - Between 1993 and 2006, 71 adult patients with

  6. Data-Driven Jump Detection Thresholds for Application in Jump Regressions

    Directory of Open Access Journals (Sweden)

    Robert Davies

    2018-03-01

    Full Text Available This paper develops a method to select the threshold in threshold-based jump detection methods. The method is motivated by an analysis of threshold-based jump detection in the context of jump-diffusion models. We show that, over the range of sampling frequencies a researcher is most likely to encounter, the usual in-fill asymptotics provide a poor guide for selecting the jump threshold. Because of this, we develop a sample-based method. Our method estimates the number of jumps over a grid of thresholds and selects the optimal threshold at what we term the ‘take-off’ point in the estimated number of jumps. We show that this method consistently estimates the jumps and their indices as the sampling interval goes to zero. In several Monte Carlo studies we evaluate the performance of our method based on its ability to accurately locate jumps and its ability to distinguish between true jumps and large diffusive moves. In one of these Monte Carlo studies we evaluate the performance of our method in a jump regression context. Finally, we apply our method in two empirical studies. In one we estimate the number of jumps and report the jump threshold our method selects for three commonly used market indices. In the other empirical application we perform a series of jump regressions using our method to select the jump threshold.
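    One way to picture the ‘take-off’ idea on simulated data: scanning thresholds from high to low, the estimated jump count sits on a flat plateau (true jumps only) until the threshold enters the diffusive range and the count takes off. The plateau-edge rule below is a stand-in for the paper's formal definition, and all numbers are simulated:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    dx = rng.normal(0.0, 0.01, n)                  # diffusive increments
    jump_idx = rng.choice(n, 5, replace=False)     # plant 5 genuine jumps
    dx[jump_idx] += rng.choice([-1.0, 1.0], 5) * 0.15

    grid = np.linspace(0.02, 0.10, 41)             # candidate thresholds
    counts = np.array([(np.abs(dx) > c).sum() for c in grid])

    # plateau at the right end = estimated number of true jumps;
    # 'take-off' = where the count first rises above it as c is lowered
    plateau = counts[-1]
    k = np.flatnonzero(counts > plateau).max() + 1
    threshold = grid[k]
    jumps = np.flatnonzero(np.abs(dx) > threshold)
    ```

    With jumps this large relative to the diffusive scale, the selected threshold recovers exactly the planted jump indices.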

  7. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    International Nuclear Information System (INIS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Its powerful nondestructive capability is attracting more and more research into computed tomography (CT) for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty, caused by many factors among which the beam hardening (BH) effect plays a vital role, severely limit the further utilization of CT for dimensional metrology. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a punishment term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to cases where there is a large difference in gray value between material and background. Spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally feasible. (paper)
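    The cost function driving the correction, histogram gray entropy, is easy to illustrate in isolation. The snippet below shows only why minimizing entropy favors a crisply separated two-material volume; it does not reproduce the exponential projection model or the punishment term, and all data are synthetic:

    ```python
    import numpy as np

    def gray_entropy(vol, bins=256):
        """Shannon entropy of the gray-value histogram: a crisp two-material
        volume concentrates its histogram into few bins and scores low."""
        hist, _ = np.histogram(vol, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(2)
    material = rng.random(10000) < 0.3
    clean = np.where(material, 100.0, 0.0) + rng.normal(0.0, 1.0, 10000)
    # beam hardening smears gray values between the two modes (crude model)
    hardened = clean + 30.0 * rng.random(10000)
    ```

    On this toy data the clean volume scores lower, which is the direction an optimizer over the correction parameters exploits.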

  8. Cochlear neuropathy and the coding of supra-threshold sound.

    Science.gov (United States)

    Bharadwaj, Hari M; Verhulst, Sarah; Shaheen, Luke; Liberman, M Charles; Shinn-Cunningham, Barbara G

    2014-01-01

    Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses (SSSRs) in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds (NHTs), paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation (FM), reveal individual differences that correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers (ANFs) without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in SSSRs in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  9. Cochlear Neuropathy and the Coding of Supra-threshold Sound

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Many listeners with hearing thresholds within the clinically normal range nonetheless complain of difficulty hearing in everyday settings and understanding speech in noise. Converging evidence from human and animal studies points to one potential source of such difficulties: differences in the fidelity with which supra-threshold sound is encoded in the early portions of the auditory pathway. Measures of auditory subcortical steady-state responses in humans and animals support the idea that the temporal precision of the early auditory representation can be poor even when hearing thresholds are normal. In humans with normal hearing thresholds, behavioral ability in paradigms that require listeners to make use of the detailed spectro-temporal structure of supra-threshold sound, such as selective attention and discrimination of frequency modulation, correlate with subcortical temporal coding precision. Animal studies show that noise exposure and aging can cause a loss of a large percentage of auditory nerve fibers without any significant change in measured audiograms. Here, we argue that cochlear neuropathy may reduce encoding precision of supra-threshold sound, and that this manifests both behaviorally and in subcortical steady-state responses in humans. Furthermore, recent studies suggest that noise-induced neuropathy may be selective for higher-threshold, lower-spontaneous-rate nerve fibers. Based on our hypothesis, we suggest some approaches that may yield particularly sensitive, objective measures of supra-threshold coding deficits that arise due to neuropathy. Finally, we comment on the potential clinical significance of these ideas and identify areas for future investigation.

  10. Radiative corrections in K{yields}3{pi} decays

    Energy Technology Data Exchange (ETDEWEB)

    Bissegger, M. [Institute for Theoretical Physics, University of Bern, Sidlerstr. 5, CH-3012 Bern (Switzerland); Fuhrer, A. [Institute for Theoretical Physics, University of Bern, Sidlerstr. 5, CH-3012 Bern (Switzerland); Physics Department, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0319 (United States); Gasser, J. [Institute for Theoretical Physics, University of Bern, Sidlerstr. 5, CH-3012 Bern (Switzerland); Kubis, B. [Helmholtz-Institut fuer Strahlen-und Kernphysik, Universitaet Bonn, Nussallee 14-16, D-53115 Bonn (Germany)], E-mail: kubis@itkp.uni-bonn.de; Rusetsky, A. [Helmholtz-Institut fuer Strahlen-und Kernphysik, Universitaet Bonn, Nussallee 14-16, D-53115 Bonn (Germany)

    2009-01-01

    We investigate radiative corrections to K{yields}3{pi} decays. In particular, we extend the non-relativistic framework developed recently to include real and virtual photons and show that, in a well-defined power counting scheme, the results reproduce corrections obtained in the relativistic calculation. Real photons are included exactly, beyond the soft-photon approximation, and we compare the result with the latter. The singularities generated by pionium near threshold are investigated, and a region is identified where standard perturbation theory in the fine structure constant {alpha} may be applied. We expect that the formulae provided allow one to extract S-wave {pi}{pi} scattering lengths from the cusp effect in these decays with high precision.

  11. Empirical assessment of a threshold model for sylvatic plague

    DEFF Research Database (Denmark)

    Davis, Stephen; Leirs, Herwig; Viljugrein, H.

    2007-01-01

    Plague surveillance programmes established in Kazakhstan, Central Asia, during the previous century, have generated large plague archives that have been used to parameterize an abundance threshold model for sylvatic plague in great gerbil (Rhombomys opimus) populations. Here, we assess the model...... examine six hypotheses that could explain the resulting false positive predictions, namely (i) including end-of-outbreak data erroneously lowers the estimated threshold, (ii) too few gerbils were tested, (iii) plague becomes locally extinct, (iv) the abundance of fleas was too low, (v) the climate...

  12. Particles near threshold

    International Nuclear Information System (INIS)

    Bhattacharya, T.; Willenbrock, S.

    1993-01-01

    We propose returning to the definition of the width of a particle in terms of the pole in the particle's propagator. Away from thresholds, this definition of width is equivalent to the standard perturbative definition, up to next-to-leading order; however, near a threshold, the two definitions differ significantly. The width as defined by the pole position provides more information in the threshold region than the standard perturbative definition and, in contrast with the perturbative definition, does not vanish when a two-particle s-wave threshold is approached from below

  13. Gamma ray auto absorption correction evaluation methodology

    International Nuclear Information System (INIS)

    Gugiu, Daniela; Roth, Csaba; Ghinescu, Alecse

    2010-01-01

    Neutron activation analysis (NAA) is a well established nuclear technique, suited to investigating microstructural or elemental composition, and can be applied to studies of a large variety of samples. Work with large samples involves, besides the development of large irradiation devices with well-known neutron field characteristics, knowledge of perturbing phenomena and adequate evaluation of correction factors such as neutron self-shielding, extended-source correction, and gamma ray auto absorption. The objective of the work presented in this paper is to validate an appropriate methodology for evaluating the gamma ray auto absorption correction for large inhomogeneous samples. For this purpose a benchmark experiment has been defined: a simple gamma ray transmission experiment, easy to reproduce. The gamma ray attenuation in pottery samples has been measured and computed using the MCNP5 code. The results show good agreement between the computed and measured values, proving that the proposed methodology is able to evaluate the correction factors. (authors)

  14. Approximation of the cross-sections for charged-particle emission reactions near the threshold

    International Nuclear Information System (INIS)

    Badikov, S.A.; Pashchenko, A.B.

    1990-01-01

    We perform an analytical approximation of the energy dependence of the cross-sections for the (n,p) and (n,α) reactions from the BOSPOR library, correcting them for the latest differential and integral experimental data by exploiting the common features characteristic of the energy dependence of threshold reaction cross-sections and making some physical assumptions. 19 refs, 1 fig., 1 tab

  15. Near-threshold infrared photodetachment of Al-: A determination of the electron affinity of aluminum and the range of validity of the Wigner law

    International Nuclear Information System (INIS)

    Calabrese, D.; Covington, A.M.; Thompson, J.S.; Marawar, R.W.; Farley, J.W.

    1996-01-01

    The relative photodetachment cross section of Al⁻ has been measured in the wavelength range 2420–2820 nm (0.440–0.512 eV), using a coaxial ion-laser beams apparatus in which a 2.98-keV Al⁻ beam is merged with a beam from an F-center laser. The cross-section data near the ³P₀,₁,₂ → ²P₁/₂,₃/₂ photodetachment threshold have been fitted to the Wigner threshold law and to the zero-core-contribution theory of photodetachment. The electron affinity of aluminum was determined to be 0.44094(+0.00066/−0.00048) eV, after correcting the experimental threshold for unresolved fine structure in the ground states of Al⁻ and Al. The new measurement is in agreement with the best previous measurement (0.441±0.010 eV) and is 20 times more precise. The Wigner law agrees with experiment to within a few percent for photon energies within 3% of threshold. A proposed leading correction to the Wigner law is discussed. copyright 1996 The American Physical Society
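    Just above threshold, a fit of this kind reduces to the Wigner s-wave form σ(E) = a·(E − E₀)^½. The sketch below fits synthetic cross-section data by grid-searching E₀ and solving for the amplitude in closed form; the threshold value echoes the quoted affinity, but the data, noise level, and grid are invented:

    ```python
    import numpy as np

    # Wigner s-wave threshold law: sigma = a * (E - E0)^(1/2) for E > E0
    E0_true, a_true = 0.4409, 3.0
    rng = np.random.default_rng(3)
    E = np.linspace(0.442, 0.455, 40)        # photon energies (eV), above threshold
    sigma = a_true * np.sqrt(E - E0_true) * (1 + rng.normal(0, 0.01, E.size))

    # grid-search the threshold; amplitude has a closed-form least-squares solution
    best = None
    for E0 in np.linspace(0.435, 0.4425, 1501):
        x = np.sqrt(np.clip(E - E0, 0.0, None))
        a = (x @ sigma) / (x @ x)            # least-squares amplitude at this E0
        sse = ((sigma - a * x) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, E0, a)
    _, E0_fit, a_fit = best
    ```

    Because the square-root law is steepest right at threshold, the points closest to E₀ pin down the fitted threshold tightly even with percent-level noise.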

  16. High-Damage-Threshold Pinhole for Glass Fusion Laser Applications

    International Nuclear Information System (INIS)

    Kumit, N.A.; Letzring, S.A.; Johnson, R.P.

    1998-01-01

    We are investigating methods to fabricate high-damage-threshold spatial-filter pinholes that might not be susceptible to plasma closure for relatively high energies and long pulses. These are based on the observation that grazing-incidence reflection from glass can withstand in excess of 5 kJ/cm² (normal to the beam) without plasma formation. The high damage threshold results from both the cos θ spreading of the energy across the surface and the reflection of a large fraction of the energy from the surface, thereby greatly reducing the field strength within the medium

  17. Social Thresholds and their Translation into Social-ecological Management Practices

    Directory of Open Access Journals (Sweden)

    Lisa Christensen

    2012-03-01

    Full Text Available The objective of this paper is to provide a preliminary discussion of how to improve our conceptualization of social thresholds using (1) a more sociological analysis of social resilience, and (2) results from research carried out in collaboration with the Champagne and Aishihik First Nations of the Yukon Territory, Canada. Our sociological analysis of the concept of resilience begins with a review of the literature, followed by placement of the concept in the domain of sociological theory to gain insight into its strengths and limitations. A new notion of social thresholds is proposed and case study research discussed to support the proposition. Our findings suggest that rather than viewing social thresholds as breakpoints between two regimes, as thresholds are typically conceived in the resilience literature, they should be viewed as collectively recognized points that signify new experiences. Some examples of thresholds identified in our case study include power in decision making, level of healing from historical events, and a preference for small-scale development over large capital-intensive projects.

  18. Optical breakdown threshold investigation of 1064 nm laser induced air plasmas

    International Nuclear Information System (INIS)

    Thiyagarajan, Magesh; Thompson, Shane

    2012-01-01

    classical microwave breakdown theory after correcting for the multiphoton ionization process for different pressures and good agreement, regarding both pressure dependence and breakdown threshold electric fields, is obtained. The effect of the presence of submicron particles on the 1064 nm breakdown threshold was also investigated. The measurements show that higher breakdown field is required, especially at lower pressures, and in close agreement with classical microwave breakdown theory and measurements in air.

  19. Interactive thresholded volumetry of abdominal fat using breath-hold T1-weighted magnetic resonance imaging

    International Nuclear Information System (INIS)

    Wittsack, H.J.; Cohnen, M.; Jung, G.; Moedder, U.; Poll, L.; Kapitza, C.; Heinemann, L.

    2006-01-01

    Purpose: development of a feasible and reliable method for determining abdominal fat using breath-hold T1-weighted magnetic resonance imaging. Materials and methods: the high image contrast of T1-weighted gradient echo MR sequences makes it possible to differentiate between abdominal fat and non-fat tissue. To obtain a high signal-to-noise ratio, the measurements are usually performed using phased array surface coils. Inhomogeneity of the coil sensitivity leads to inhomogeneity of the image intensities. Therefore, to examine the volume of abdominal fat, an automatic algorithm for intensity correction must be implemented. The analysis of the image histogram results in a threshold to separate fat from other tissue. Automatic segmentation using this threshold results directly in the fat volumes. The separation of intraabdominal and subcutaneous fat is performed by interactive selection in a last step. Results: the described correction of inhomogeneity allows for the segmentation of the images using a global threshold. The use of semiautomatic interactive volumetry makes the analysis more subjective. The variance of volumetry between observers was 4.6%. The mean time for image analysis of a T1-weighted investigation lasted less than 6 minutes. Conclusion: the described method facilitates reliable determination of abdominal fat within a reasonable period of time. Using breath-hold MR sequences, the time of examination is less than 5 minutes per patient. (orig.)

  20. Text Induced Spelling Correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from a very large corpus of raw text, without supervision, and contains word

  1. Music effect on pain threshold evaluated with current perception threshold

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    AIM: Music relieves anxiety and psychotic tension. This effect of music is applied to surgical operations in hospitals and dental offices. It is still unclear whether this effect of music is limited to the psychological aspect or extends to the physical aspect, and whether it is influenced by the mood or emotion of the listener. To elucidate these issues, we evaluated the effect of music on pain threshold using the current perception threshold (CPT) and the Profile of Mood States (POMS) test. METHODS: 30 healthy subjects (12 men, 18 women, 25-49 years old, mean age 34.9) were tested. (1) After the POMS test, all subjects' pain thresholds were evaluated with CPT by Neurometer (Radionics, USA) under 6 conditions: silence, and listening to slow-tempo classical music, nursery music, hard rock music, classical piano music, and relaxation music, with 30-second intervals. (2) After the Stroop color-word test as the stressor, pain threshold was evaluated with CPT under 2 conditions: silence and listening to slow-tempo classical music. RESULTS: While listening to music, CPT scores increased, especially at the 2 000 Hz level, which is related to compression, warmth and pain sensation. Type of music, preference for the music, and stress also affected the CPT score. CONCLUSION: The present study demonstrated that concentration on music raises the pain threshold and that stress and mood influence the effect of music on pain threshold.

  2. Method for determining correction factors induced by irradiation of ionization chamber cables in large radiation field

    International Nuclear Information System (INIS)

    Rodrigues, L.L.C.

    1988-01-01

    A simple method, intended to be suggested to hospital physicists for use during large-radiation-field dosimetry, was developed to evaluate the effects of irradiating cables, connectors and extension cables and to determine correction factors for each system or geometry. All quality control tests were performed according to the International Electrotechnical Commission for three clinical dosimeters. Photon and electron irradiation effects on cables, connectors and extension cables were investigated under different experimental conditions by means of measurements of chamber sensitivity to a standard ⁹⁰Sr radiation source. The radiation-induced leakage current was also measured for cables, connectors and extension cables irradiated by photons and electrons. All measurements were performed at standard dosimetry conditions. Finally, measurements were performed in large fields. Cable factors and leakage factors were determined from the ratio of chamber responses for irradiated and unirradiated cables. (author)

  3. Study of the p + p → π+ + d reaction close to threshold

    International Nuclear Information System (INIS)

    Drochner, M.; Kemmerling, G.; Zwoll, K.; Frekers, D.; Garske, W.; Klimala, W.; Kolev, D.; Tsenov, R.; Kutsarova, T.

    1996-01-01

    The p + p → π⁺ + d reaction has been studied at excess energies between 0.275 MeV and 3.86 MeV. The experiments were performed with the external proton beam of the COoler SYnchrotron (COSY) in Jülich. Differential and total cross sections were measured employing a high-resolution magnetic spectrometer with nearly 4π acceptance in the centre-of-mass system. The values of the total cross sections are, when corrected for Coulomb effects, in agreement with the results obtained from the time-reversed reaction as well as from isospin-related reactions. The measured anisotropies between 0.008 and 0.29 indicate that the p-wave is not negligible even this close to threshold. The s-wave and p-wave contributions at threshold are deduced. (author)

  4. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
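    The power logic can be sketched with a parametric bootstrap from a gamma model. The fixed coefficient of variation, the z-style criterion, and the specific numbers below are stand-ins for illustration, not the paper's fitted mean-to-variance relationship:

    ```python
    import numpy as np

    def detection_power(n_fish, threshold, true_mean, cv=0.3,
                        nboot=2000, crit=1.645, seed=0):
        """Fraction of bootstrap samples whose mean exceeds `threshold` by a
        one-sided z-style margin; gamma model with an assumed coefficient of
        variation `cv` (a stand-in for the paper's mean-variance fit)."""
        rng = np.random.default_rng(seed)
        shape = 1.0 / cv**2            # gamma parameterized by mean and CV
        scale = true_mean / shape
        hits = 0
        for _ in range(nboot):
            x = rng.gamma(shape, scale, n_fish)
            sem = x.std(ddof=1) / np.sqrt(n_fish)
            hits += (x.mean() - threshold) / sem > crit
        return hits / nboot

    # same +1 mg/kg increase, 8 fish: easier to detect over a low threshold
    p_low = detection_power(8, threshold=4.0, true_mean=5.0)
    p_high = detection_power(8, threshold=8.0, true_mean=9.0)
    ```

    With a fixed CV the population is more dispersed at higher means, reproducing the qualitative finding that higher management thresholds require larger samples for the same power.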

  5. Stress relaxation insensitive designs for metal compliant mechanism threshold accelerometers

    Directory of Open Access Journals (Sweden)

    Carlos Vilorio

    2015-12-01

    We present two designs for metal compliant mechanisms for use as threshold accelerometers which require zero external power. Both designs rely on long, thin flexures positioned orthogonally to a flat body. The first design involves cutting or stamping a thin spring-steel sheet and then bending elements to form the necessary thin flexures. The second design uses precut spring-steel flexure elements mounted into a mold which is then filled with molten tin to form a bimetallic device. Accelerations necessary to switch the devices between bistable states were measured using a centrifuge. Both designs showed very little variation in threshold acceleration due to stress relaxation over a period of several weeks. Relatively large variations in threshold acceleration were observed for devices of the same design, most likely due to variations in the angle of the flexure elements relative to the main body of the devices. Keywords: Structural health monitoring, Sensor, Accelerometer, Zero power, Shock, Threshold


  6. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
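A minimal sketch of the Threshold Model dynamics with a degree-rank initiator strategy, under assumed parameters (an Erdős–Rényi contact network and a uniform threshold of 0.4); the paper's other strategies, such as PageRank or greedy selection, would slot into the same cascade routine.

```python
import random

def threshold_cascade(adj, initiators, theta=0.4):
    """Threshold Model: a node adopts the new opinion once the fraction
    of its neighbors holding it exceeds the node's threshold `theta`."""
    active = set(initiators)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node in active or not nbrs:
                continue
            if sum(n in active for n in nbrs) / len(nbrs) > theta:
                active.add(node)
                changed = True
    return active

def er_graph(n, p, seed=1):
    """Erdős–Rényi random graph as an adjacency dict (illustrative network)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

adj = er_graph(200, 0.05)
# Degree-rank strategy: choose the highest-degree nodes as initiators.
by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
cascade = threshold_cascade(adj, by_degree[:40])
```

Because the dynamics are monotone, seeding a superset of initiators can only enlarge the final cascade, which is the baseline against which selection strategies are compared.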

  7. Evaluation of liver fat in the presence of iron with MRI using T2* correction: a clinical approach.

    Science.gov (United States)

    Henninger, Benjamin; Kremser, Christian; Rauch, Stefan; Eder, Robert; Judmaier, Werner; Zoller, Heinz; Michaely, Henrik; Schocke, Michael

    2013-06-01

    To assess magnetic resonance imaging (MRI) with conventional chemical shift-based sequences with and without T2* correction for the evaluation of steatosis hepatitis (SH) in the presence of iron. Thirty-one patients who underwent MRI and liver biopsy because of clinically suspected diffuse liver disease were retrospectively analysed. The signal intensity (SI) was calculated in co-localised regions of interest (ROIs) using conventional spoiled gradient-echo T1 FLASH in-phase and opposed-phase (IP/OP) imaging. T2* relaxation time was recorded with a fat-saturated multi-echo gradient-echo sequence. The fat fraction (FF) was calculated with non-corrected and T2*-corrected SIs. Results were correlated with liver biopsy. There was a significant difference between uncorrected and T2*-corrected FF in patients with SH and concomitant hepatic iron overload (HIO). Using 5 % as a threshold resulted in eight false negative results with uncorrected FF, whereas T2*-corrected FF led to true positive results in 5/8 patients. ROC analysis yielded three threshold values (8.97 %, 5.3 % and 3.92 %) for T2*-corrected FF with accuracy 84 %, sensitivity 83-91 % and specificity 63-88 %. FF with T2* correction is accurate for the diagnosis of hepatic fat in the presence of HIO. The findings of our study suggest the use of IP/OP imaging in combination with T2* correction. • Magnetic resonance helps quantify both iron and fat content within the liver • T2* correction helps to predict the correct diagnosis of steatosis hepatitis • "Fat fraction" from T2*-corrected chemical shift-based sequences accurately quantifies hepatic fat • "Fat fraction" without T2* correction underestimates hepatic fat with iron overload.
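The quantities involved can be illustrated with the standard two-point Dixon fat-fraction formula; the exponential T2* compensation shown here is a generic sketch (the echo times and T2* value are hypothetical), not the exact pipeline of the study.

```python
import math

def fat_fraction(s_ip, s_op, te_ip=None, te_op=None, t2star=None):
    """Two-point Dixon fat fraction FF = (S_IP - S_OP) / (2 * S_IP).

    If a T2* estimate is supplied, each signal is first corrected for
    T2* decay at its echo time: S_corrected = S * exp(TE / T2*).
    """
    if t2star is not None:
        s_ip = s_ip * math.exp(te_ip / t2star)
        s_op = s_op * math.exp(te_op / t2star)
    return (s_ip - s_op) / (2.0 * s_ip)

# Hypothetical echo times (ms) and a short T2* as seen with iron overload:
ff_uncorrected = fat_fraction(100.0, 80.0)
ff_corrected = fat_fraction(100.0, 80.0, te_ip=4.8, te_op=2.4, t2star=10.0)
```

Because the in-phase echo is acquired later, rapid T2* decay from iron suppresses it more, so the corrected FF exceeds the uncorrected estimate, consistent with the abstract's point that uncorrected FF underestimates fat in iron overload.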

  8. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    Directory of Open Access Journals (Sweden)

    Mark Driscoll

    2013-01-01

    A large spectrum of medical devices exists that aims to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated with patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices.

  9. Thresholds of parametric instabilities near the lower hybrid frequency

    International Nuclear Information System (INIS)

    Berger, R.L.; Perkins, F.W.

    1975-06-01

    Resonant decay instabilities of a pump wave with frequency ω₀ near the lower-hybrid frequency ω_LH are analyzed with respect to the wavenumber k of the decay waves and the ratio ω₀/ω_LH to determine the decay process with the minimum threshold. It was found that the lowest thresholds are for decay into an electron plasma (lower hybrid) wave plus either a backward ion-cyclotron wave, an ion Bernstein wave, or a low frequency sound wave. For ω₀ less than 2^(1/2) ω_LH, it was found that these decay processes can occur and have faster growth than ion quasimodes provided the drift velocity (cE₀/B₀) is much less than the sound speed. In many cases of interest, electromagnetic corrections to the lower-hybrid wave rule out decay into all but short wavelength (kρᵢ greater than 1) waves. The experimental results are consistent with the linear theory of parametric instabilities in a homogeneous plasma. (U.S.)

  10. Use of erythropoietin is associated with threshold retinopathy of prematurity (ROP) in preterm ELBW neonates: a retrospective, cohort study from two large tertiary NICUs in Italy.

    Science.gov (United States)

    Manzoni, Paolo; Memo, Luigi; Mostert, Michael; Gallo, Elena; Guardione, Roberta; Maestri, Andrea; Saia, Onofrio Sergio; Opramolla, Anna; Calabrese, Sara; Tavella, Elena; Luparia, Martina; Farina, Daniele

    2014-09-01

    Retinopathy of prematurity (ROP) is a multifactorial disease with evidence of many associated risk factors. Erythropoietin has been reported to be associated with this disorder in a murine model, as well as in humans in some single-center reports. We reviewed the data from two large tertiary NICUs in Italy to test the hypothesis that the use of erythropoietin may be associated with the development of the most severe stages of ROP in extremely low birth weight (ELBW) neonates. Retrospective study by review of patient charts and eye examination index cards on infants with birth weight below 1000 g admitted to two large tertiary NICUs in Northern Italy (Sant'Anna Hospital NICU in Torino, and Ca' Foncello Hospital Neonatology in Treviso) in the years 2005 to 2007. The standard protocol of administration of EPO in the two NICUs consisted of 250 IU/kg three times a week in 6-week courses (4-week in 1001-1500 g infants). Univariate analysis was performed to assess whether the use of EPO was associated with severe (threshold) ROP. A control, multivariate statistical analysis was performed by entering into a logistic regression model a number of neonatal and perinatal variables that - in univariate analysis - had been associated with threshold ROP. During the study period, 211 ELBW infants were born at the two facilities and survived to discharge. Complete data were obtained for 197 of them. Threshold retinopathy of prematurity occurred in 26.9% (29 of 108) of ELBW infants who received erythropoietin therapy, as compared with 13.5% (12 of 89) of those who did not receive erythropoietin (OR 2.35; 95% CI 1.121-4.949; p=0.02 in univariate analysis, and p=0.04 at multivariate logistic regression after controlling for the following variables: birth weight, gestational age, days on supplemental oxygen, systemic fungal infection, vaginal delivery).
Use of erythropoietin was not significantly associated with other major sequelae of prematurity (intraventricular hemorrhage, bronchopulmonary dysplasia, necrotizing
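The reported association can be reproduced from the 2×2 counts with a standard Woolf (log-normal) confidence interval; this is a generic epidemiological calculation, not code from the study.

```python
import math

def odds_ratio_ci(events_exp, no_events_exp, events_unexp, no_events_unexp,
                  z=1.96):
    """Odds ratio with a Woolf 95% confidence interval for a 2x2 table."""
    or_hat = (events_exp * no_events_unexp) / (no_events_exp * events_unexp)
    se_log = math.sqrt(1 / events_exp + 1 / no_events_exp
                       + 1 / events_unexp + 1 / no_events_unexp)
    lower = math.exp(math.log(or_hat) - z * se_log)
    upper = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lower, upper

# Threshold ROP: 29/108 in the EPO group vs 12/89 without EPO.
or_hat, lower, upper = odds_ratio_ci(29, 108 - 29, 12, 89 - 12)
# Reproduces the reported OR 2.35 (95% CI 1.121-4.949).
```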

  11. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  12. Threshold-voltage modulated phase change heterojunction for application of high density memory

    International Nuclear Information System (INIS)

    Yan, Baihan; Tong, Hao; Qian, Hang; Miao, Xiangshui

    2015-01-01

    Phase change random access memory is one of the most important candidates for the next generation non-volatile memory technology. However, the ability to reduce its memory size is compromised by the fundamental limitations inherent in the CMOS technology. While 0T1R configuration without any additional access transistor shows great advantages in improving the storage density, the leakage current and small operation window limit its application in large-scale arrays. In this work, phase change heterojunction based on GeTe and n-Si is fabricated to address those problems. The relationship between threshold voltage and doping concentration is investigated, and energy band diagrams and X-ray photoelectron spectroscopy measurements are provided to explain the results. The threshold voltage is modulated to provide a large operational window based on this relationship. The switching performance of the heterojunction is also tested, showing a good reverse characteristic, which could effectively decrease the leakage current. Furthermore, a reliable read-write-erase function is achieved during the tests. Phase change heterojunction is proposed for high-density memory, showing some notable advantages, such as modulated threshold voltage, large operational window, and low leakage current

  13. Threshold-voltage modulated phase change heterojunction for application of high density memory

    Science.gov (United States)

    Yan, Baihan; Tong, Hao; Qian, Hang; Miao, Xiangshui

    2015-09-01

    Phase change random access memory is one of the most important candidates for the next generation non-volatile memory technology. However, the ability to reduce its memory size is compromised by the fundamental limitations inherent in the CMOS technology. While 0T1R configuration without any additional access transistor shows great advantages in improving the storage density, the leakage current and small operation window limit its application in large-scale arrays. In this work, phase change heterojunction based on GeTe and n-Si is fabricated to address those problems. The relationship between threshold voltage and doping concentration is investigated, and energy band diagrams and X-ray photoelectron spectroscopy measurements are provided to explain the results. The threshold voltage is modulated to provide a large operational window based on this relationship. The switching performance of the heterojunction is also tested, showing a good reverse characteristic, which could effectively decrease the leakage current. Furthermore, a reliable read-write-erase function is achieved during the tests. Phase change heterojunction is proposed for high-density memory, showing some notable advantages, such as modulated threshold voltage, large operational window, and low leakage current.

  14. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations.

    Science.gov (United States)

    Lubow, Bruce C; Ransom, Jason I

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.
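The simultaneous double-observer component can be illustrated with the classic Petersen-type estimator, which infers each observer's detection probability from the overlap in their sightings; the study's full hybrid model additionally fits sightability covariates, which this sketch omits, and the counts below are hypothetical.

```python
def double_observer_estimate(only_obs1, only_obs2, both):
    """Petersen-type abundance estimate from a simultaneous double count.

    `only_obs1`/`only_obs2`: groups seen by exactly one observer;
    `both`: groups seen by both. Detection probabilities come from overlap.
    """
    seen1 = only_obs1 + both
    seen2 = only_obs2 + both
    p1 = both / seen2           # P(observer 1 detects | observer 2 detected)
    p2 = both / seen1
    n_hat = seen1 * seen2 / both
    return n_hat, p1, p2

# Hypothetical survey: 20 groups seen only by observer 1,
# 10 only by observer 2, and 40 by both.
n_hat, p1, p2 = double_observer_estimate(20, 10, 40)
```

The estimate exceeds the raw count of 70 distinct groups because some groups were presumably missed by both observers.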

  15. On ambiguities in the exponentiation of large QCD perturbative corrections

    International Nuclear Information System (INIS)

    Chyla, Jiri

    1986-01-01

    Ambiguities and some practical questions connected with the exponentiation of higher-order QCD perturbative corrections are discussed for the case of deep inelastic lepton-hadron scattering in the non-singlet channel. The importance of still higher-order calculations for resolving these ambiguities is stressed. (author)

  16. QCD NLO with POWHEG matching and top threshold matching in WHIZARD

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, Juergen; Nejad, Bijan Chokoufe [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Bach, Fabian [European Commission, Luxembourg (Luxembourg); Kilian, Wolfgang [Siegen Univ. (Germany); Stahlhofen, Maximilian [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Mainz Univ. (Germany). PRISMA Cluster of Excellence; Weiss, Christian [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Siegen Univ. (Germany)

    2016-01-15

    We present the status of the automation of NLO processes within the event generator WHIZARD. The program provides an automated FKS subtraction and phase space integration over the FKS regions, while the (QCD) NLO matrix element is accessed via the Binoth Les Houches Interface from an externally linked one-loop program. Massless and massive test cases and validation are shown for several e+e- processes. Furthermore, we discuss work in progress and future plans. The second part covers the matching of the NRQCD prediction with NLL threshold resummation to the NLO continuum top pair production at lepton colliders. Both the S-wave and P-wave production of the top pair are taken into account in the resummation. The inclusion in WHIZARD allows the study of more exclusive observables than just the total cross section, and automatically accounts for important electroweak and relativistic corrections in the threshold region.

  17. QCD NLO with POWHEG matching and top threshold matching in WHIZARD

    International Nuclear Information System (INIS)

    Reuter, Juergen; Nejad, Bijan Chokoufe; Kilian, Wolfgang; Stahlhofen, Maximilian

    2016-01-01

    We present the status of the automation of NLO processes within the event generator WHIZARD. The program provides an automated FKS subtraction and phase space integration over the FKS regions, while the (QCD) NLO matrix element is accessed via the Binoth Les Houches Interface from an externally linked one-loop program. Massless and massive test cases and validation are shown for several e+e- processes. Furthermore, we discuss work in progress and future plans. The second part covers the matching of the NRQCD prediction with NLL threshold resummation to the NLO continuum top pair production at lepton colliders. Both the S-wave and P-wave production of the top pair are taken into account in the resummation. The inclusion in WHIZARD allows the study of more exclusive observables than just the total cross section, and automatically accounts for important electroweak and relativistic corrections in the threshold region.

  18. Large area damage testing of optics

    International Nuclear Information System (INIS)

    Sheehan, L.; Kozlowski, M.; Stolz, C.

    1996-01-01

    The damage threshold specifications for the National Ignition Facility will include a mixture of standard small-area tests and new large-area tests. During our studies of laser damage and conditioning processes of various materials we have found that some damage morphologies are fairly small and this damage does not grow with further illumination. This type of damage might not be detrimental to the laser performance. We should therefore assume that some damage can be allowed on the optics, but decide on a maximum allowance of damage. A new damage threshold specification, termed "functional damage threshold", was derived. Further correlation of damage size and type to system performance must be determined in order to use this measurement, but it is clear that it will be a large factor in the optics performance specifications. Large-area tests have verified that small-area testing is not always sufficient when the optic in question has defect-initiated damage. This was evident, for example, on sputtered polarizer and mirror coatings where the defect density was low enough that the features could be missed by standard small-area testing. For some materials, the scale-length at which damage non-uniformities occur will affect the comparison of small-area and large-area tests. An example of this was the sub-aperture tests on KD*P crystals on the Beamlet test station. The tests verified the large-area damage threshold to be similar to that found when testing a small area, implying that for this KD*P material the dominant damage mechanism is of sufficiently small scale-length that small-area testing is capable of determining the threshold. The Beamlet test station experiments also demonstrated the use of on-line laser conditioning to increase the crystal's damage threshold

  19. Analysis and optimization of surface profile correcting mechanism of the pitch lap in large-aperture annular polishing

    Science.gov (United States)

    Zhang, Huifang; Yang, Minghong; Xu, Xueke; Wu, Lunzhe; Yang, Weiguang; Shao, Jianda

    2017-10-01

    The surface figure control of the conventional annular polishing system is ordinarily realized by the interaction between the conditioner and the lap. The surface profile of the pitch lap corrected by the marble conditioner has been measured and analyzed as a function of kinematics, loading conditions, and polishing time. Surface profile measuring equipment for the large lap, based on laser alignment, was developed with an accuracy of about 1 μm. The conditioning mechanism of the conditioner is simply determined by the kinematics and the fully fitting principle, but unexpected surface profile deviations of the lap emerge frequently due to numerous influencing factors, including the geometrical relationship and the pressure distribution at the conditioner/lap interface. Both factors are quantitatively evaluated and described, and have been combined to develop a spatial and temporal model to simulate the surface profile evolution of the pitch lap. The simulations are consistent with the experiments. This study is an important step toward deterministic full-aperture annular polishing, providing beneficial guidance for the surface profile correction of the pitch lap.

  20. Attenuation correction with region growing method used in the positron emission mammography imaging system

    Science.gov (United States)

    Gu, Xiao-Yue; Li, Lin; Yin, Peng-Fei; Yun, Ming-Kai; Chai, Pei; Huang, Xian-Chao; Sun, Xiao-Li; Wei, Long

    2015-10-01

    The Positron Emission Mammography imaging system (PEMi) provides a novel nuclear diagnosis method dedicated to breast imaging. With a better resolution than whole-body PET, PEMi can detect millimeter-sized breast tumors. To address the requirement of semi-quantitative analysis with a radiotracer concentration map of the breast, a new attenuation correction method based on a three-dimensional seeded region growing image segmentation (3DSRG-AC) method has been developed. The method gives a 3D connected region as the segmentation result instead of image slices. The continuity property of the segmentation result makes this new method free of activity variation of breast tissues. The choice of threshold value is the key step in the segmentation method. The first valley in the grey level histogram of the reconstruction image is set as the lower threshold, which works well in clinical application. Results show that attenuation correction for PEMi improves the image quality and the quantitative accuracy of radioactivity distribution determination. Attenuation correction also improves the probability of detecting small and early breast tumors. Supported by Knowledge Innovation Project of The Chinese Academy of Sciences (KJCX2-EW-N06)
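The 3D seeded region growing step can be sketched as a 6-connected flood fill above the chosen threshold (taken in the paper from the first valley of the image histogram); the array shapes and intensity values below are illustrative.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, threshold):
    """Grow a single 6-connected region from `seed`, keeping voxels whose
    intensity is at least `threshold`. Returns a boolean mask."""
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] < threshold:
        return mask
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in offsets:
            nxt = (x + dx, y + dy, z + dz)
            if (all(0 <= nxt[i] < volume.shape[i] for i in range(3))
                    and not mask[nxt] and volume[nxt] >= threshold):
                mask[nxt] = True
                queue.append(nxt)
    return mask

# Toy phantom: a bright 3x3x3 region inside a dark background.
phantom = np.zeros((5, 5, 5))
phantom[1:4, 1:4, 1:4] = 10.0
breast_mask = region_grow_3d(phantom, (2, 2, 2), threshold=5.0)
```

Because the result is one connected component rather than per-slice masks, the segmentation is insensitive to activity variation across slices, which is the continuity property the abstract highlights.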

  1. The main postulates of adaptive correction of distortions of the wave front in large-size optical systems

    Directory of Open Access Journals (Sweden)

    V. V. Sychev

    2014-01-01

    medium on the transmitted radiation WF; • the lack of a reference source at the wavelength of transmitted laser radiation, which is required to implement methods for adaptive correction of the distorted WF; • the additional distorting factors, unique to laser systems, present in transmission systems. These distorting factors include: • the length of the optical path, due to the need for spatial separation of the high-power laser source, with a large number of matching optical elements; • thermal self-action of the high-power laser radiation in the transport path before its entry into the forming optical system; • instability of the spatio-temporal characteristics of the laser radiation source itself, which worsens the conditions of radiation transmission both inside the optical path and in the free atmosphere; • thermal irregularities and thermal deformation. It is shown that adaptive systems differ from active optics in that the radiation wave front distortion is corrected in real time for the totality of distorting factors (not only the effect of the atmosphere), at a speed ten times exceeding that of the distortion itself. Here, the correction quality is estimated by the criterion of primary image quality. In this case, the correction continuously takes into account data about optical system parameters such as current spatial position, temperature, time, and adjustment, thereby supporting high image quality under the action of distorting factors. The paper formulates and proposes the basic postulates of adaptive correction. The postulates are a set of statements and assertions that allow effective means of adaptive correction of distortions to be implemented. The paper also shows the real capabilities that the methods and means of adaptive optics offer for efficient use of laser radiation power, and possible ways to solve these tasks. First of all, these are: - forming a system of assumptions and minimization of distortions in the optical path, which includes a

  2. Power corrections to exclusive processes in QCD

    Energy Technology Data Exchange (ETDEWEB)

    Mankiewicz, Lech

    2002-02-01

    In practice, the applicability of the twist expansion crucially depends on the magnitude of power corrections to the leading-twist amplitude. I illustrate this point by considering explicit examples of two hard exclusive processes in QCD. In the case of the γ*γ → ππ amplitude, power corrections are small enough that it should be possible to describe current experimental data by the leading-twist QCD prediction. The photon helicity-flip amplitude in DVCS on a nucleon receives large kinematical power corrections which screen the leading-twist prediction up to large values of the hard photon virtuality.

  3. Stringy instanton corrections to N=2 gauge couplings

    CERN Document Server

    Billo', Marco; Fucito, Francesco; Lerda, Alberto; Morales, Jose F; Poghosyan, Rubik

    2010-01-01

    We discuss a string model where a conformal four-dimensional N=2 gauge theory receives corrections to its gauge kinetic functions from "stringy" instantons. These contributions are explicitly evaluated by exploiting the localization properties of the integral over the stringy instanton moduli space. The model we consider corresponds to a setup with D7/D3-branes in type I' theory compactified on T4/Z2 x T2, and possesses a perturbatively computable heterotic dual. On the heterotic side the corrections to the quadratic gauge couplings are provided by a 1-loop threshold computation and, under the duality map, match precisely the first few stringy instanton effects in the type I' setup. This agreement represents a very non-trivial test of our approach to the exotic instanton calculus.

  4. Effect of infrared radiation on the threshold behavior of scattering (and decay) processes

    International Nuclear Information System (INIS)

    Mohanty, A.K.; Rosenberg, L.; Spruch, L.

    1988-01-01

    An analysis is given of the effect of radiative corrections on the threshold behavior of the cross section for the inelastic scattering of a light charged particle by a neutral composite system. Explicit results are obtained for a model problem where the target consists of a proton and antiproton bound under their mutual Coulomb interaction and excited to a 2p state from its 1s ground state by electron impact, but the conclusions drawn are applicable, qualitatively, to a wide range of problems. It is found that when the energy resolution Δε_c of the electron detector is small compared with the kinetic energy K' of the electron in the final state, the more careful treatment given here, which properly accounts for the rapid variation of the cross section for scattering energies near threshold, leads to only small modifications in the standard form of the radiative correction factor δ. For sufficiently high resolution in energy of a (high-energy) incident beam, the modification could be significant if Δε_c is comparable with K'. The above considerations are applicable not only to scattering cross sections but to endpoints of the energy spectrum of the charged particle in a decay process in which only one charged particle is emitted

  5. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-01

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.

  6. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes.

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-21

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.
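The headline result can be made concrete with the heterogeneous mean-field SIS threshold λ_c = ⟨k⟩/⟨k²⟩ for networks of identical nodes: broadening the degree distribution at fixed mean degree lowers the threshold. The two degree sequences below are illustrative.

```python
def epidemic_threshold(degrees):
    """Heterogeneous mean-field SIS epidemic threshold: <k> / <k^2>."""
    ks = [float(k) for k in degrees]
    mean_k = sum(ks) / len(ks)
    mean_k2 = sum(k * k for k in ks) / len(ks)
    return mean_k / mean_k2

homogeneous = [8] * 100               # every node has degree 8
heterogeneous = [4] * 90 + [44] * 10  # same mean degree, broad distribution

# Both sequences have mean degree 8, but the broad sequence has a much
# larger second moment and hence a lower epidemic threshold.
```

This is the baseline the paper goes beyond: once nodes themselves are heterogeneous in susceptibility, the correlation between degree and intrinsic risk can reverse this ordering.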

  7. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.

  8. Threshold Signature Schemes Application

    Directory of Open Access Journals (Sweden)

    Anastasiya Victorovna Beresneva

    2015-10-01

    Full Text Available This work is devoted to an investigation of threshold signature schemes. The threshold signature schemes were systematized, and cryptographic constructions based on Lagrange interpolation polynomials, elliptic curves and bilinear pairings were examined. Different methods of generating and verifying threshold signatures were explored, and the practical applicability of threshold schemes to mobile agents, Internet banking and e-currency was shown. Topics for further investigation, which could reduce the level of counterfeit electronic documents signed by a group of users, are given.
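
    The Lagrange-interpolation ingredient mentioned in the abstract is the basis of Shamir-style (t, n) threshold schemes: a secret (for instance a signing-key share) is the constant term of a random degree-(t-1) polynomial over a prime field, and any t shares recover it. A toy sketch; the modulus, parameters, and helper names are illustrative, not taken from the paper:

```python
import random

P = 2**31 - 1  # a Mersenne prime; toy field modulus for illustration

def make_shares(secret, t, n, rng=random.Random(42)):
    """Split `secret` into n shares such that any t of them reconstruct it:
    evaluate a random degree t-1 polynomial with f(0) = secret."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        # Modular inverse of den via Fermat's little theorem.
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(123456789, t=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```
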

  9. Thresholds in radiobiology

    International Nuclear Information System (INIS)

    Katz, R.; Hofmann, W.

    1982-01-01

    Interpretations of biological radiation effects frequently use the word 'threshold'. The meaning of this word is explored together with its relationship to the fundamental character of radiation effects and to the question of perception. It is emphasised that although the existence of either a dose or an LET threshold can never be settled by experimental radiobiological investigations, it may be argued on fundamental statistical grounds that for all statistical processes, and especially where the number of observed events is small, the concept of a threshold is logically invalid. (U.K.)

  10. Follow-up of hearing thresholds among forge hammering workers

    Energy Technology Data Exchange (ETDEWEB)

    Kamal, A.A.; Mikael, R.A.; Faris, R. (Ain Shams Univ., Abbasia, Cairo (Egypt))

    1989-01-01

    Hearing threshold was reexamined in a group of forge hammering workers investigated 8 years ago with consideration of the age effect and of auditory symptoms. Workers were exposed to impact noise that ranged from 112 to 139 dB(A)--at an irregular rate of 20 to 50 drop/minute--and a continuous background noise that ranged from 90 to 94 dB(A). Similar to what was observed 8 years ago, the present permanent threshold shift (PTS) showed a maximum notch at the frequency of 6 kHz and considerable elevations at the frequencies of 0.25-1 kHz. The age-corrected PTS and the postexposure hearing threshold were significantly higher than the corresponding previous values at the frequencies 0.25, 0.5, 1, and 8 kHz only. The rise was more evident at the low than at the high frequencies. Temporary threshold shift (TTS) values were significantly less than those 8 years ago. Contrary to the previous TTS, the present TTS were higher at low than at high frequencies. Although progression of PTS at the frequencies 0.25 and 0.5 kHz was continuous throughout the observed durations of exposure, progression at higher frequencies occurred essentially in the first 10 to 15 years of exposure. Thereafter, it followed a much slower rate. Tinnitus was significantly associated with difficulty in hearing the human voice and with elevation of PTS at all the tested frequencies, while acoustic after-image was significantly associated with increment of PTS at the frequencies 0.25-2 kHz. No relation between PTS and smoking was found. PTS at low frequencies may provide an indication of progression of hearing damage when the sensitivity at 6 and 4 kHz diminishes after prolonged years of exposure. Tinnitus and acoustic after-image are related to the auditory effect of forge hammering noise.

  11. Mass corrections in deep-inelastic scattering

    International Nuclear Information System (INIS)

    Gross, D.J.; Treiman, S.B.; Wilczek, F.A.

    1977-01-01

    The moment sum rules for deep-inelastic lepton scattering are expected for asymptotically free field theories to display a characteristic pattern of logarithmic departures from scaling at large enough Q². In the large-Q² limit these patterns do not depend on hadron or quark masses m. For modest values of Q² one expects corrections at the level of powers of m²/Q². We discuss the question whether these mass effects are accessible in perturbation theory, as applied to the twist-2 Wilson coefficients and more generally. Our conclusion is that some part of the mass effects must arise from a nonperturbative origin. We also discuss the corrections which arise from higher orders in perturbation theory for very large Q², where mass effects can perhaps be ignored. The emphasis here is on a characterization of the (Q², x) domain where higher-order corrections are likely to be unimportant

  12. Attenuation correction method for single photon emission CT

    Energy Technology Data Exchange (ETDEWEB)

    Morozumi, Tatsuru; Nakajima, Masato [Keio Univ., Yokohama (Japan). Faculty of Science and Technology; Ogawa, Koichi; Yuta, Shinichi

    1983-10-01

    A correction method (Modified Correction Matrix method) is proposed that implements iterative correction by exactly measuring the attenuation-constant distribution in a test body, calculating a correction factor for every picture element, and then multiplying the image by these factors. Computer simulations comparing the results showed that the proposed method is more effective than the conventional correction-matrix method, specifically for test bodies in which the attenuation constant changes rapidly. Since actual measurement data always contain quantum noise, the noise was taken into account in the simulation; the correction effect remained large even in its presence. To verify its clinical effectiveness, an experiment using an acrylic phantom was also carried out. As a result, the recovery of image quality in regions with a small attenuation constant was remarkable compared with the conventional method.
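
    The idea of a per-pixel multiplicative correction factor can be illustrated with a first-order Chang-type correction, where each pixel's factor is the attenuation survival probability exp(-∫μ dl) averaged over ray directions out to the body edge. This is a generic textbook scheme, not the authors' Modified Correction Matrix method, and the ray marching below is deliberately crude:

```python
import numpy as np

def attenuation_factor(mu, y, x, n_angles=32, step=1.0):
    """Average over directions of exp(-line integral of mu) from pixel
    (y, x) out to the image edge (nearest-neighbour ray marching)."""
    h, w = mu.shape
    factors = []
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        dy, dx = np.sin(t), np.cos(t)
        integral, yy, xx = 0.0, float(y), float(x)
        while 0 <= yy < h and 0 <= xx < w:
            integral += mu[int(yy), int(xx)] * step
            yy += dy * step
            xx += dx * step
        factors.append(np.exp(-integral))
    return float(np.mean(factors))

# Uniform attenuation map (illustrative mu per pixel length).
mu = np.full((64, 64), 0.01)
f_centre = attenuation_factor(mu, 32, 32)
# Correction: divide the reconstructed pixel value by its factor,
# so deep (strongly attenuated) pixels are boosted the most.
corrected_gain = 1.0 / f_centre
```
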

  13. Hearing Threshold Level in Workers of Meybod Tile Factory

    Directory of Open Access Journals (Sweden)

    F Nourani

    2008-04-01

    Full Text Available Introduction: Occupational exposure to excessive noise is commonly encountered in a large number of industries in Iran. This study evaluated the hearing threshold and hearing loss in Meybod tile factory workers. Methods: This cross-sectional study included 371 tile factory workers during the summer and autumn of 2005. Current noise exposure was estimated using a sound level meter, and a specially formatted questionnaire was used. Otoscopic examination and air-conduction audiometry were used to assess the hearing loss of each subject. Finally, data were analyzed using SPSS version 11.5. Results: Occupational noise increased the mean hearing threshold at all frequencies, significantly so at 3 and 4 kHz in both ears (p<0.05). The prevalence of hearing impairment at high and low frequencies was 39.2% and 46.5%, respectively. The prevalence of occupational NIHL was 12.9%, and the odds of NIHL significantly increased with noise exposure of more than 10 years. The hearing threshold was worse in both ears of workers with tinnitus. Conclusion: The high prevalence of hearing loss and NIHL emphasizes the necessity of hearing conservation programs for tile factory workers.

  14. Cognitive Abilities, Monitoring Confidence, and Control Thresholds Explain Individual Differences in Heuristics and Biases.

    Science.gov (United States)

    Jackson, Simon A; Kleitman, Sabina; Howie, Pauline; Stankov, Lazar

    2016-01-01

    In this paper, we investigate whether individual differences in performance on heuristic and biases tasks can be explained by cognitive abilities, monitoring confidence, and control thresholds. Current theories explain individual differences in these tasks by the ability to detect errors and override automatic but biased judgments, and deliberative cognitive abilities that help to construct the correct response. Here we retain cognitive abilities but disentangle error detection, proposing that lower monitoring confidence and higher control thresholds promote error checking. Participants ( N = 250) completed tasks assessing their fluid reasoning abilities, stable monitoring confidence levels, and the control threshold they impose on their decisions. They also completed seven typical heuristic and biases tasks such as the cognitive reflection test and Resistance to Framing. Using structural equation modeling, we found that individuals with higher reasoning abilities, lower monitoring confidence, and higher control threshold performed significantly and, at times, substantially better on the heuristic and biases tasks. Individuals with higher control thresholds also showed lower preferences for risky alternatives in a gambling task. Furthermore, residual correlations among the heuristic and biases tasks were reduced to null, indicating that cognitive abilities, monitoring confidence, and control thresholds accounted for their shared variance. Implications include the proposal that the capacity to detect errors does not differ between individuals. Rather, individuals might adopt varied strategies that promote error checking to different degrees, regardless of whether they have made a mistake or not. The results support growing evidence that decision-making involves cognitive abilities that construct actions and monitoring and control processes that manage their initiation.

  15. The Threshold of a Stochastic SIRS Model with Vertical Transmission and Saturated Incidence

    Directory of Open Access Journals (Sweden)

    Chunjuan Zhu

    2017-01-01

    Full Text Available The threshold of a stochastic SIRS model with vertical transmission and saturated incidence is investigated. If the noise is small, it is shown that the threshold of the stochastic system determines the extinction and persistence of the epidemic. In addition, we find that if the noise is large, the epidemic still prevails. Finally, numerical simulations are given to illustrate the results.
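
    The deterministic skeleton of such models already shows the threshold behaviour: for a classic SIRS system (without the paper's vertical transmission, saturated incidence, or noise terms; all parameter values below are illustrative), the basic reproduction number R0 = β/γ separates extinction from persistence. A minimal forward-Euler sketch:

```python
def simulate_sirs(beta, gamma, xi, i0=0.01, dt=0.01, steps=100_000):
    """Forward-Euler integration of the classic deterministic SIRS model:
    S -> I at rate beta*S*I, I -> R at rate gamma*I, R -> S at rate xi*R.
    Returns the infected fraction at the end of the run."""
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(steps):
        new_inf = beta * s * i
        rec = gamma * i
        waning = xi * r
        s += dt * (waning - new_inf)
        i += dt * (new_inf - rec)
        r += dt * (rec - waning)
    return i

# The threshold is R0 = beta/gamma:
low = simulate_sirs(beta=0.5, gamma=1.0, xi=0.1)   # R0 = 0.5 -> dies out
high = simulate_sirs(beta=2.0, gamma=1.0, xi=0.1)  # R0 = 2.0 -> endemic
```

With R0 = 2 the run settles near the endemic equilibrium i* = (1 - γ/β)·ξ/(ξ + γ) = 1/22.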

  16. Recirculating beam-breakup thresholds for polarized higher-order modes with optical coupling

    Directory of Open Access Journals (Sweden)

    Georg H. Hoffstaetter

    2007-04-01

    Full Text Available Here we will derive the general theory of the beam-breakup (BBU) instability in recirculating linear accelerators with coupled beam optics and with polarized higher-order dipole modes. The bunches do not have to be at the same radio-frequency phase during each recirculation turn. This is important for the description of energy recovery linacs (ERLs), where beam currents become very large and coupled optics are used on purpose to increase the threshold current. This theory can be used for the analysis of phase errors of recirculated bunches, and of errors in the optical coupling arrangement. It is shown how the threshold current for a given linac can be computed, and a remarkable agreement with tracking data is demonstrated. General formulas are then analyzed for several analytically solvable problems: (a) Why can different higher-order modes (HOMs) in one cavity couple, and why can they then not be considered individually, even when their frequencies are separated by much more than the resonance widths of the HOMs? For the Cornell ERL as an example, it is noted that optimum advantage is taken of coupled optics when the cavities are designed with an x-y HOM frequency splitting of above 50 MHz. The simulated threshold current is then far above the design current of this accelerator. To justify that the simulation can represent an actual accelerator, we simulate cavities with 1 to 8 modes and show that using a limited number of modes is reasonable. (b) How does the x-y coupling in the particle optics determine when modes can be considered separately? (c) How much of an increase in threshold current can be obtained by coupled optics, and why does the threshold current for polarized modes diminish roughly with the square root of the HOMs' quality factors? Because of this square root scaling, polarized modes with coupled optics increase the threshold current more effectively for cavities that have rather large HOM quality factors, e.g. those without very

  17. Isochronicity correction in the CR storage ring

    International Nuclear Information System (INIS)

    Litvinov, S.; Toprek, D.; Weick, H.; Dolinskii, A.

    2013-01-01

    A challenge for nuclear physics is to measure masses of exotic nuclei up to the limits of nuclear existence, which are characterized by low production cross-sections and short half-lives. The large acceptance Collector Ring (CR) [1] at FAIR [2], tuned in the isochronous ion-optical mode, offers unique possibilities for measuring short-lived and very exotic nuclides. However, in a ring designed for maximal acceptance, many factors limit the resolution. One point is a limit in time resolution inversely proportional to the transverse emittance. But most of the time aberrations can be corrected, and others become small for a large number of turns. We show the relations of the time correction to the corresponding transverse focusing, and that the main correction for large emittance corresponds directly to the chromaticity correction for transverse focusing of the beam. With the help of Monte-Carlo simulations for the full acceptance we demonstrate how to correct the revolution times so that in principle resolutions of Δm/m = 10⁻⁶ can be achieved. In these calculations the influence of magnet inhomogeneities and extended fringe fields is considered, and a calibration scheme also for ions with different mass-to-charge ratio is presented.

  18. Shifts in the relationship between motor unit recruitment thresholds versus derecruitment thresholds during fatigue.

    Science.gov (United States)

    Stock, Matt S; Mota, Jacob A

    2017-12-01

    Muscle fatigue is associated with diminished twitch force amplitude. We examined changes in the motor unit recruitment versus derecruitment threshold relationship during fatigue. Nine men (mean age = 26 years) performed repeated isometric contractions at 50% maximal voluntary contraction (MVC) knee extensor force until exhaustion. Surface electromyographic signals were detected from the vastus lateralis, and were decomposed into their constituent motor unit action potential trains. Motor unit recruitment and derecruitment thresholds and firing rates at recruitment and derecruitment were evaluated at the beginning, middle, and end of the protocol. On average, 15 motor units were studied per contraction. For the initial contraction, three subjects showed greater recruitment thresholds than derecruitment thresholds for all motor units. Five subjects showed greater recruitment thresholds than derecruitment thresholds for only low-threshold motor units at the beginning, with a mean cross-over of 31.6% MVC. As the muscle fatigued, many motor units were derecruited at progressively higher forces. In turn, decreased slopes and increased y-intercepts were observed. These shifts were complemented by increased firing rates at derecruitment relative to recruitment. As the vastus lateralis fatigued, the central nervous system's compensatory adjustments resulted in a shift of the regression line of the recruitment versus derecruitment threshold relationship. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
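
    The slope and intercept shifts described above come from an ordinary least-squares line fitted to derecruitment threshold as a function of recruitment threshold. With hypothetical data (illustrative values only, not the study's measurements), the reported effect of fatigue can be sketched as:

```python
import numpy as np

# Hypothetical recruitment (x) and derecruitment (y) thresholds in %MVC
# for five motor units, fresh versus fatigued; values are made up to
# mimic the reported pattern (derecruitment at higher forces with fatigue).
recruit         = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
derecruit_fresh = np.array([ 8.0, 17.0, 28.0, 38.0, 47.0])
derecruit_fatig = np.array([18.0, 24.0, 31.0, 36.0, 42.0])

# Degree-1 least-squares fits: polyfit returns (slope, intercept).
slope_f, icpt_f = np.polyfit(recruit, derecruit_fresh, 1)
slope_t, icpt_t = np.polyfit(recruit, derecruit_fatig, 1)
# Fatigue: decreased slope, increased y-intercept of the regression line.
```
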

  19. A threshold-based fixed predictor for JPEG-LS image compression

    Science.gov (United States)

    Deng, Lihua; Huang, Zhenghua; Yao, Shoukui

    2018-03-01

    In JPEG-LS, the fixed predictor based on the median edge detector (MED) detects only horizontal and vertical edges, and thus produces large prediction errors in the vicinity of diagonal edges. In this paper, we propose a threshold-based edge-detection scheme for the fixed predictor. The proposed scheme can detect not only horizontal and vertical edges but also diagonal edges. For certain thresholds, the proposed scheme reduces to other existing schemes, so it can also be regarded as an integration of these schemes. For a suitable threshold, the accuracy of horizontal and vertical edge detection is higher than that of the existing median edge detection in JPEG-LS. Thus, the proposed fixed predictor outperforms the existing JPEG-LS predictors for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
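
    For reference, the standard JPEG-LS fixed predictor (MED) that the proposed scheme extends picks min(a, b) or max(a, b) at detected edges and falls back to the planar prediction a + b - c in smooth regions:

```python
def med_predict(a, b, c):
    """JPEG-LS fixed predictor (median edge detector).
    a: left neighbour, b: above neighbour, c: above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)      # edge detected: predict the smaller neighbour
    if c <= min(a, b):
        return max(a, b)      # edge detected: predict the larger neighbour
    return a + b - c          # smooth region: planar prediction

# Horizontal edge above the current pixel: predictor picks the left neighbour.
pred_edge = med_predict(a=20, b=200, c=200)
# Smooth gradient: planar prediction interpolates between the neighbours.
pred_smooth = med_predict(a=100, b=110, c=105)
```
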

  20. Effects of threshold on the topology of gene co-expression networks.

    Science.gov (United States)

    Couto, Cynthia Martins Villar; Comin, César Henrique; Costa, Luciano da Fontoura

    2017-09-26

    Several developments regarding the analysis of gene co-expression profiles using complex network theory have been reported recently. Such approaches usually start with the construction of an unweighted gene co-expression network, therefore requiring the selection of a suitable threshold defining which pairs of vertices will be connected. We aimed at addressing such an important problem by suggesting and comparing five different approaches for threshold selection. Each of the methods considers a respective biologically-motivated criterion for electing a potentially suitable threshold. A set of 21 microarray experiments from different biological groups was used to investigate the effect of applying the five proposed criteria to several biological situations. For each experiment, we used the Pearson correlation coefficient to measure the relationship between each gene pair, and the resulting weight matrices were thresholded considering several values, generating respective adjacency matrices (co-expression networks). Each of the five proposed criteria was then applied in order to select the respective threshold value. The effects of these thresholding approaches on the topology of the resulting networks were compared by using several measurements, and we verified that, depending on the database, the impact on the topological properties can be large. However, a group of databases was verified to be similarly affected by most of the considered criteria. Based on such results, it can be suggested that when the generated networks present similar measurements, the thresholding method can be chosen with greater freedom. If the generated networks are markedly different, the thresholding method that better suits the interests of each specific research study represents a reasonable choice.
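
    The pipeline described — compute pairwise Pearson correlations, then threshold the weight matrix into an unweighted adjacency matrix — can be sketched as follows on toy data; choosing the threshold τ is exactly what the paper's five criteria address:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix: 6 genes x 20 samples; genes 0-2 are co-regulated.
base = rng.normal(size=20)
expr = np.vstack([base + 0.1 * rng.normal(size=20) for _ in range(3)] +
                 [rng.normal(size=20) for _ in range(3)])

weights = np.corrcoef(expr)        # Pearson correlation, gene x gene
np.fill_diagonal(weights, 0.0)     # no self-loops

def co_expression_network(weights, tau):
    """Unweighted co-expression network: connect gene pairs with |r| >= tau."""
    return (np.abs(weights) >= tau).astype(int)

adj = co_expression_network(weights, tau=0.8)
```
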

  1. Rcorrector: efficient and accurate error correction for Illumina RNA-seq reads.

    Science.gov (United States)

    Song, Li; Florea, Liliana

    2015-01-01

    Next-generation sequencing of cellular RNA (RNA-seq) is rapidly becoming the cornerstone of transcriptomic analysis. However, sequencing errors in the already short RNA-seq reads complicate bioinformatics analyses, in particular alignment and assembly. Error correction methods have been highly effective for whole-genome sequencing (WGS) reads, but are unsuitable for RNA-seq reads, owing to the variation in gene expression levels and alternative splicing. We developed a k-mer based method, Rcorrector, to correct random sequencing errors in Illumina RNA-seq reads. Rcorrector uses a De Bruijn graph to compactly represent all trusted k-mers in the input reads. Unlike WGS read correctors, which use a global threshold to determine trusted k-mers, Rcorrector computes a local threshold at every position in a read. Rcorrector has an accuracy higher than or comparable to existing methods, including the only other method (SEECER) designed for RNA-seq reads, and is more time and memory efficient. With a 5 GB memory footprint for 100 million reads, it can be run on virtually any desktop or server. The software is available free of charge under the GNU General Public License from https://github.com/mourisl/Rcorrector/.
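
    The trusted-k-mer idea can be illustrated with a tiny corrector that uses a single global count threshold (Rcorrector itself walks a De Bruijn graph and computes a local, per-position threshold; the sequences and helper names below are made up for illustration):

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every k-mer occurring in the read set."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, threshold=3):
    """On hitting an untrusted k-mer (count < threshold), try every
    single-base substitution inside it and accept the one that makes
    ALL k-mers of the read trusted."""
    for i in range(len(read) - k + 1):
        if counts[read[i:i + k]] >= threshold:
            continue
        for pos in range(i, i + k):
            for base in "ACGT":
                if base == read[pos]:
                    continue
                fixed = read[:pos] + base + read[pos + 1:]
                if all(counts[fixed[j:j + k]] >= threshold
                       for j in range(len(fixed) - k + 1)):
                    return fixed
        return read      # no single-base fix found
    return read          # every k-mer already trusted

truth = "ACGTTGCAGGTA"
reads = [truth] * 20 + ["ACGTTGAAGGTA"]   # one read with a C->A error
counts = kmer_counts(reads, 4)
```
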

  2. Space Active Optics: toward optimized correcting mirrors for future large spaceborne observatories

    Science.gov (United States)

    Laslandes, Marie; Hugot, Emmanuel; Ferrari, Marc; Lemaitre, Gérard; Liotard, Arnaud

    2011-10-01

    Wave-front correction in optical instruments is often needed, whether to compensate for optical path differences, off-axis aberrations or mirror deformations. Active optics techniques are developed to allow efficient corrections with deformable mirrors. In this paper we present the conception of particular deformation systems which could be used in space telescopes and instruments in order to improve their performance while relaxing the specifications on global system stability. A first section is dedicated to the design and performance analysis of an active mirror specifically designed to compensate for aberrations that might appear in future 3m-class space telescopes, due to lightweight primary mirrors, thermal variations or weightless conditions. A second section is dedicated to a new design of active mirror, able to compensate for given combinations of aberrations with a single actuator. If the aberrations to be corrected in an instrument and their evolution are known in advance, an optimal system geometry can be determined thanks to elasticity theory and Finite Element Analysis.

  3. Dynamic correction of the laser beam coordinate in fabrication of large-sized diffractive elements for testing aspherical mirrors

    Science.gov (United States)

    Shimansky, R. V.; Poleshchuk, A. G.; Korolkov, V. P.; Cherkashin, V. V.

    2017-05-01

    This paper presents a method of improving the accuracy of a circular laser writing system in the fabrication of large-diameter diffractive optical elements by means of a polar coordinate system, together with the results of its use. An algorithm for correcting positioning errors of a circular laser writing system developed at the Institute of Automation and Electrometry, SB RAS, is proposed and tested. High-precision synthesized holograms fabricated by this method, and the results of using these elements for testing the 6.5 m diameter aspheric mirror of the James Webb Space Telescope (JWST), are described.

  4. Threshold factorization redux

    Science.gov (United States)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  5. Rainfall thresholds as a landslide indicator for engineered slopes on the Irish Rail network

    Science.gov (United States)

    Martinović, Karlo; Gavin, Kenneth; Reale, Cormac; Mangan, Cathal

    2018-04-01

    Rainfall thresholds express the minimum levels of rainfall that need to be reached or exceeded in order for landslides to occur in a particular area. They are a common tool for expressing the temporal portion of landslide hazard analysis. Numerous rainfall thresholds have been developed for different areas worldwide; however, none of these focus on landslides occurring on the engineered slopes of transport infrastructure networks. This paper uses an empirical method to develop rainfall thresholds for landslides on Irish Rail network earthworks. For comparison, rainfall thresholds are also developed for natural terrain in Ireland. The results show that thresholds involving relatively low rainfall intensities are applicable to Ireland, owing to its specific climate. Furthermore, the comparison shows that rainfall thresholds for engineered slopes are lower than those for landslides occurring on natural terrain. This has severe implications, as it indicates that there is a significant risk involved in using generic weather alerts (developed largely for natural terrain) for infrastructure management, and it showcases the need for developing railway- and road-specific rainfall thresholds for landslides.

  6. Comparison between intensity- duration thresholds and cumulative rainfall thresholds for the forecasting of landslide

    Science.gov (United States)

    Lagomarsino, Daniela; Rosi, Ascanio; Rossi, Guglielmo; Segoni, Samuele; Catani, Filippo

    2014-05-01

    This work makes a quantitative comparison between the results of landslide forecasting obtained using two different rainfall threshold models, one using intensity-duration thresholds and the other based on cumulative rainfall thresholds, in a 116 km² area of northern Tuscany. The first methodology identifies rainfall intensity-duration thresholds by means of a software tool called MaCumBA (Massive CUMulative Brisk Analyzer) that analyzes rain-gauge records, extracts the intensities (I) and durations (D) of the rainstorms associated with the initiation of landslides, plots these values on a diagram, and identifies thresholds that define the lower bounds of the I-D values. A back analysis using data from past events can be used to identify the threshold conditions associated with the least amount of false alarms. The second method (SIGMA) is based on the hypothesis that anomalous or extreme values of rainfall are responsible for landslide triggering: the statistical distribution of the rainfall series is analyzed, and multiples of the standard deviation (σ) are used as thresholds to discriminate between ordinary and extraordinary rainfall events. The name of the model, SIGMA, reflects the central role of the standard deviations in the proposed methodology. The definition of intensity-duration rainfall thresholds requires the combined use of rainfall measurements and an inventory of dated landslides, whereas the SIGMA model can be implemented using only rainfall data. These two methodologies were applied in an area of 116 km² where a database of 1200 landslides was available for the period 2000-2012. The results obtained are compared and discussed. Although several examples of visual comparisons between different intensity-duration rainfall thresholds are reported in the international literature, a quantitative comparison between thresholds obtained in the same area using different techniques and approaches is a relatively undebated research topic.
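
    The core of the SIGMA idea — flag rainfall that is anomalous relative to the station's own statistics — reduces to a mean-plus-kσ exceedance test. A one-day toy version (the real model works on cumulative rainfall over multiple durations; the synthetic record below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Ten years of synthetic daily rainfall (mm): mostly light, a few storms.
rainfall = rng.gamma(shape=0.4, scale=8.0, size=3650)

def sigma_threshold(series, n_sigma=3.0):
    """SIGMA-style alert level: mean + n_sigma standard deviations
    of the station's own rainfall record."""
    return series.mean() + n_sigma * series.std()

tau = sigma_threshold(rainfall)
alarm_days = rainfall > tau          # days that would trigger an alert
```

Only the rare extreme days exceed the 3σ level, which is the discrimination between ordinary and extraordinary rainfall the abstract describes.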

  7. The absolute threshold of colour vision in the horse.

    Directory of Open Access Journals (Sweden)

    Lina S V Roth

    Full Text Available Arrhythmic mammals are active both during day and night if they are allowed. The arrhythmic horses are in possession of some of the largest terrestrial animal eyes, and the purpose of this study is to reveal whether their eye is sensitive enough to see colours at night. During the day horses are known to have dichromatic colour vision. To disclose whether they can discriminate colours in dim light, a behavioural dual-choice experiment was performed. We started the training and testing at daylight intensities, and the horses continued to choose correctly at a high frequency down to light intensities corresponding to moonlight. One Shetland pony mare was able to discriminate colours at 0.08 cd/m², while a half-blood gelding still discriminated colours at 0.02 cd/m². For comparison, the colour vision limit for several human subjects tested in the very same experiment was also 0.02 cd/m². Hence, the threshold of colour vision for the horse that performed best was similar to that of the humans. The behavioural results are in line with calculations of the sensitivity of cone vision, where the horse eye and human eye again are similar. The advantage of the large eye of the horse lies not in colour vision at night, but probably instead in achromatic tasks where presumably signal summation enhances sensitivity.

  8. Aeolian Erosion on Mars - a New Threshold for Saltation

    Science.gov (United States)

    Teiser, J.; Musiolik, G.; Kruss, M.; Demirci, T.; Schrinski, B.; Daerden, F.; Smith, M. D.; Neary, L.; Wurm, G.

    2017-12-01

    The Martian atmosphere shows a large variety of dust activity, ranging from local dust devils to global dust storms. Sand motion has also been observed in the form of moving dunes. The entrainment of dust into the Martian atmosphere is not well understood, owing to the small atmospheric pressure of only a few mbar. Laboratory experiments on Earth and numerical models have been developed to understand the processes leading to dust lifting and saltation. Experiments so far suggested that large wind velocities are needed to reach the threshold shear velocity and to entrain dust into the atmosphere. In global circulation models this threshold shear velocity is typically reduced artificially to reproduce the observed dust activity. Although preceding experiments were designed to simulate Martian conditions, no experiment so far could scale all parameters to Martian conditions, as either the atmospheric or the gravitational conditions were not scaled. In this work, a first experimental study of saltation under Martian conditions is presented. Martian gravity is reached by a centrifuge on a parabolic flight, while pressure (6 mbar) and atmospheric composition (95% CO2, 5% air) are adjusted to Martian levels. A sample of JSC 1A (grain sizes from 10 - 100 µm) was used to simulate Martian regolith. The experiments showed that the reduced gravity (0.38 g) not only affects the weight of the dust particles, but also influences the packing density within the soil and therefore also the cohesive forces. The measured threshold shear velocity of 0.82 m/s is significantly lower than the value measured at 1 g in ground experiments (1.01 m/s). Feeding the measured value into a global circulation model showed that no artificial reduction of the threshold shear velocity may be needed to reproduce the global dust distribution in the Martian atmosphere.

  9. Correction between B and H, and the analysis of the magnetization into uniaxial superconductor in the limit at large values of B

    International Nuclear Information System (INIS)

    Oliveira, I.G. de.

    1994-04-01

    Using the London theory, a correction is obtained between the direction of the magnetic induction B and the applied magnetic field H in superconductors with uniaxial anisotropy when the Ginzburg-Landau constant is not very large. An analysis of the magnetization as a function of the angle α is made. (author). 5 refs, 2 figs

  10. Correction between B and H, and the analysis of the magnetization into uniaxial superconductor in the limit at large values of B

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, I.G. de

    1994-04-01

    Using the London theory, a correction is obtained between the direction of the magnetic induction B and the applied magnetic field H in superconductors with uniaxial anisotropy when the Ginzburg-Landau constant is not very large. An analysis of the magnetization as a function of the angle α is made. (author). 5 refs, 2 figs.

  11. Renormalization group evolution of neutrino parameters in presence of seesaw threshold effects and Majorana phases

    Directory of Open Access Journals (Sweden)

    Shivani Gupta

    2015-04-01

    Full Text Available We examine the renormalization group evolution (RGE) for different mixing scenarios in the presence of seesaw threshold effects from the high energy (GUT) scale to the low electroweak (EW) scale in the Standard Model (SM) and Minimal Supersymmetric Standard Model (MSSM). We consider four mixing scenarios, namely Tri–Bimaximal Mixing, Bimaximal Mixing, Hexagonal Mixing and Golden Ratio Mixing, which come from different flavor symmetries at the GUT scale. We find that the Majorana phases play an important role in the RGE running of these mixing patterns along with the seesaw threshold corrections. We present a comparative study of the RGE of all these mixing scenarios both with and without Majorana CP phases when seesaw threshold corrections are taken into consideration. We find that in the absence of these Majorana phases both the RGE running and seesaw effects may lead to θ13 < 5° at low energies both in the SM and MSSM. However, if the Majorana phases are incorporated into the mixing matrix the running can be enhanced both in the SM and MSSM. Even by incorporating non-zero Majorana CP phases in the SM, we do not get θ13 in its present 3σ range. The current values of the two mass squared differences and mixing angles including θ13 can be produced in the MSSM case with tan β = 10 and non-zero Majorana CP phases at low energy. We also calculate the order of the effective Majorana mass and the Jarlskog invariant for each scenario under consideration.

  12. An AMOLED AC-Biased Pixel Design Compensating the Threshold Voltage and I-R Drop

    Directory of Open Access Journals (Sweden)

    Ching-Lin Fan

    2011-01-01

    Full Text Available We propose a novel pixel design and an AC-bias driving method for active-matrix organic light-emitting diode (AM-OLED) displays using low-temperature polycrystalline silicon thin-film transistors (LTPS-TFTs). The proposed threshold voltage and I-R drop compensation circuit, which comprises three transistors and one capacitor, has been verified to supply uniform output current by simulation work using the Automatic Integrated Circuit Modeling Simulation Program with Integrated Circuit Emphasis (AIM-SPICE) simulator. The simulated results demonstrate excellent properties, such as a low error rate of OLED anode voltage variation (<0.7%) and a low voltage drop on the VDD power line. The proposed pixel circuit effectively enables threshold-voltage-deviation correction of the driving TFT and compensates for the voltage drop of the VDD power line using an AC bias on the OLED cathode.

  13. Threshold guidance update

    International Nuclear Information System (INIS)

    Wickham, L.E.

    1986-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. The previous year's activities (1984) included the development of a threshold guidance dose, the development of threshold concentrations corresponding to the guidance dose, the development of supporting documentation, review by a technical peer review committee, and review by the DOE community. As a result of the comments, areas have been identified for more extensive analysis, including an alternative basis for selection of the guidance dose and the development of quality assurance guidelines. Development of quality assurance guidelines will provide a reasonable basis for determining that a given waste stream qualifies as a threshold waste stream, which can then be the basis for a more extensive cost-benefit analysis. The threshold guidance and supporting documentation will be revised, based on the comments received. The revised documents will be provided to DOE by early November. DOE-HQ has indicated that the revised documents will be available for review by DOE field offices and their contractors.

  14. Topographies/topologies of the camp: Auschwitz as a spatial threshold

    NARCIS (Netherlands)

    Giaccaria, P.; Minca, C.

    2011-01-01

    This paper, largely inspired by Giorgio Agamben’s conceptualization of the camp, reflects on the relationship between the ‘topographical’ and the ‘topological’ in reference to Auschwitz–Birkenau and its spatialities. After having discussed the concept of soglia (threshold), we briefly introduce the

  15. QED radiative corrections in exclusive ρ0 leptoproduction

    International Nuclear Information System (INIS)

    Kurek, K.

    1996-09-01

    A semi-analytical approach to the model-independent calculation of radiative corrections for exclusive ρ0 meson leptoproduction (i.e. electron and muon scattering experiments) is presented. The corrections to ρ0 production at large Q2 as well as to ρ0 photoproduction are studied in detail. The numerical results are calculated for two different experimental analyses: NMC (muoproduction at large Q2) and ZEUS at HERA (quasi-real photoproduction). It is shown that the corrections are 2-5% for NMC and below 2% for the ZEUS measurement. The application of the presented approach to other vector meson production is straightforward. (orig.)

  16. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    Science.gov (United States)

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce diagnostic accuracy and hinder the physician's correct decisions about patients. The dual tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning on this method for noise removal from the ECG signal has not been investigated yet. In this work, we provide a comprehensive study of the impact of the choice of threshold algorithm, threshold value, and wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm's response. The evaluation on synthetic ECG signals corrupted by various types of noise showed that the modified unified threshold and the wavelet hyperbolic threshold de-noising methods perform better under realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results show that the proposed method achieves higher performance than the ordinary dual tree wavelet transform for all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust for all kinds of noise at varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.

  17. Measurement of inclusive eta production in e+e- interactions near charm threshold

    International Nuclear Information System (INIS)

    Partridge, R.; Peck, C.; Porter, F.C.; Gu, Y.F.; Kollmann, W.; Richardson, M.; Strauch, K.; Wacker, K.; Aschman, D.; Bagger, J.; Burnett, T.; Cavalli-Sforza, M.; Coyne, D.; Joy, M.; Sadrozinski, H.F.W.; Hofstadter, R.; Horisberger, R.; Kirkbride, I.; Kolanoski, H.; Koenigsmann, K.; Liberman, A.; O'Reilly, J.; Osterheld, A.; Tompkins, J.; Bloom, E.; Bulos, F.; Chestnut, R.; Gaiser, J.; Godfrey, G.; Kiesling, C.; Lockman, W.; Oreglia, M.

    1981-01-01

    We have measured the inclusive cross section for eta production in e+e- interactions near charm threshold using the Crystal Ball detector. No pronounced structure in the energy dependence is observed. By comparing cross sections above and below charm threshold we obtain the limits (90% confidence limit): R(e+e-→FF-bar X)·Br(F→eta X) < 0.15-0.32 (for E(c.m.) from 4.0 to 4.5 GeV), and Br(D→eta X) < 0.13 [averaged over the charged and neutral D components of the psi''(3770) decays]. Our results are inconsistent with a previous report of a large energy dependence of the eta cross section ascribed to the crossing of the FF* and F*F* production thresholds.

  18. Workplace violence in a large correctional health service in New South Wales, Australia: a retrospective review of incident management records

    Science.gov (United States)

    2012-01-01

    Background Little is known about workplace violence among correctional health professionals. This study aimed to describe the patterns, severity and outcomes of incidents of workplace violence among employees of a large correctional health service, and to explore the help-seeking behaviours of staff following an incident. Methods The study setting was Justice Health, a statutory health corporation established to provide health care to people who come into contact with the criminal justice system in New South Wales, Australia. We reviewed incident management records describing workplace violence among Justice Health staff. The three-year study period was 1/7/2007-30/6/2010. Results During the period under review, 208 incidents of workplace violence were recorded. Verbal abuse (71%) was more common than physical abuse (29%). The largest share (44%) of incidents of workplace violence (including both verbal and physical abuse) occurred in adult male prisons, although half (50%) of the incidents of physical abuse occurred in a forensic hospital. Most (90%) of the victims were nurses and two-thirds were females. Younger employees and males were the most likely to be victims of physical abuse. Preparing or dispensing medication and attempting to calm and/or restrain an aggressive patient were identified as ‘high risk’ work duties for verbal abuse and physical abuse, respectively. Most (93%) of the incidents of workplace violence were initiated by a prisoner/patient. Almost all of the incidents received either a medium (46%) or low (52%) Severity Assessment Code. Few victims of workplace violence incurred a serious physical injury – there were no workplace deaths during the study period. However, mental stress was common, especially among the victims of verbal abuse (85%). Few (6%) victims of verbal abuse sought help from a health professional. Conclusions Among employees of a large correctional health service, verbal abuse in the workplace was substantially more common than physical abuse.

  20. Intermediate structure and threshold phenomena

    International Nuclear Information System (INIS)

    Hategan, Cornel

    2004-01-01

    The Intermediate Structure, evidenced through microstructures of the neutron strength function, is reflected in open reaction channels as fluctuations in the excitation functions of nuclear threshold effects. The intermediate state supporting both the neutron strength function and the nuclear threshold effect is a micro-giant neutron threshold state. (author)

  1. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
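As a rough illustration of the setting (not the paper's Equation-Method itself, and with made-up parameters), the snippet below builds one OFDM symbol by IFFT, clips its peaks, and measures the in-band distortion the receiver's FFT would see:

```python
import numpy as np

rng = np.random.default_rng(0)

# One hypothetical OFDM symbol: 64 QPSK-modulated subcarriers.
N = 64
X = np.exp(1j * np.pi / 4) * 1j ** rng.integers(0, 4, N)
x = np.fft.ifft(X) * np.sqrt(N)          # time-domain signal, unit mean power

# Clip amplitudes above threshold A before the HPA.
A = 1.2
x_clipped = np.where(np.abs(x) > A, A * x / np.abs(x), x)

# The receiver's FFT now sees distorted constellation symbols; the
# Equation-Method would solve for the original peak amplitudes from here.
Y = np.fft.fft(x_clipped) / np.sqrt(N)
evm = np.sqrt(np.mean(np.abs(Y - X) ** 2))   # in-band error due to clipping
```

A nonzero `evm` is exactly the in-band distortion the abstract refers to: clipping in the time domain spreads error across all subcarriers.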

  2. Threshold resummation at N3LL accuracy and soft-virtual cross sections at N3LO

    International Nuclear Information System (INIS)

    Catani, Stefano; Cieri, Leandro; Florian, Daniel de; Ferrera, Giancarlo; Grazzini, Massimiliano

    2014-01-01

    We consider QCD radiative corrections to the production of colorless high-mass systems in hadron collisions. We show that the recent computation of the soft-virtual corrections to Higgs boson production at N3LO [1] together with the universality structure of soft-gluon emission can be exploited to extract the general expression of the hard-virtual coefficient that contributes to threshold resummation at N3LL accuracy. The hard-virtual coefficient is directly related to the process-dependent virtual amplitude through a universal (process-independent) factorization formula that we explicitly evaluate up to three-loop order. As an application, we present the explicit expression of the soft-virtual N3LO corrections for the production of an arbitrary colorless system. In the case of the Drell–Yan process, we confirm the recent result of Ref. [2].

  4. Nuclear threshold effects and neutron strength function

    International Nuclear Information System (INIS)

    Hategan, Cornel; Comisel, Horia

    2003-01-01

    One proves that a Nuclear Threshold Effect is dependent, via Neutron Strength Function, on Spectroscopy of Ancestral Neutron Threshold State. The magnitude of the Nuclear Threshold Effect is proportional to the Neutron Strength Function. Evidence for relation of Nuclear Threshold Effects to Neutron Strength Functions is obtained from Isotopic Threshold Effect and Deuteron Stripping Threshold Anomaly. The empirical and computational analysis of the Isotopic Threshold Effect and of the Deuteron Stripping Threshold Anomaly demonstrate their close relationship to Neutron Strength Functions. It was established that the Nuclear Threshold Effects depend, in addition to genuine Nuclear Reaction Mechanisms, on Spectroscopy of (Ancestral) Neutron Threshold State. The magnitude of the effect is proportional to the Neutron Strength Function, in their dependence on mass number. This result constitutes also a proof that the origins of these threshold effects are Neutron Single Particle States at zero energy. (author)

  5. Z-correction, a method for achieving ultraprecise self-calibration on large area coordinate measurement machines for photomasks

    Science.gov (United States)

    Ekberg, Peter; Stiblert, Lars; Mattsson, Lars

    2014-05-01

    High-quality photomasks are a prerequisite for the production of flat panel TVs, tablets and other kinds of high-resolution displays. During the past years, the resolution demand has become more and more accelerated, and today, the high-definition standard HD, 1920 × 1080 pixels², is well established, and already the next-generation so-called ultra-high-definition UHD or 4K display is entering the market. Highly advanced mask writers are used to produce the photomasks needed for the production of such displays. The dimensional tolerance in X and Y on absolute pattern placement on these photomasks, with sizes of square meters, has been in the range of 200-300 nm (3σ), but is now on the way to be <150 nm (3σ). To verify these photomasks, 2D ultra-precision coordinate measurement machines are used with even tighter tolerance requirements. The metrology tool MMS15000 is today the world standard tool used for the verification of large area photomasks. This paper will present a method called Z-correction that has been developed for the purpose of improving the absolute X, Y placement accuracy of features on the photomask in the writing process. However, Z-correction is also a prerequisite for achieving X and Y uncertainty levels <90 nm (3σ) in the self-calibration process of the MMS15000 stage area of 1.4 × 1.5 m². For uncertainty specifications below 200 nm (3σ) over such a large area, the calibration object used, here an 8-16 mm thick quartz plate of size approximately a square meter, cannot be treated as a rigid body. The reason for this is that the absolute shape of the plate will be affected by gravity and will therefore not be the same at different places on the measurement machine stage when it is used in the self-calibration process. This mechanical deformation will stretch or compress the top surface (i.e. the image side) of the plate where the pattern resides, and therefore spatially deform the mask pattern in the X- and Y-directions. Errors due

  7. Threshold Dynamics of a Stochastic Chemostat Model with Two Nutrients and One Microorganism

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2017-01-01

    Full Text Available A new stochastic chemostat model with two substitutable nutrients and one microorganism is proposed and investigated. Firstly, for the corresponding deterministic model, the threshold for extinction and permanence of the microorganism is obtained by analyzing the stability of the equilibria. Then, the threshold of the stochastic chemostat for extinction and permanence of the microorganism is explored. The difference between the thresholds of the deterministic and stochastic models shows that a large stochastic disturbance can affect the persistence of the microorganism and is harmful to its cultivation. To illustrate this phenomenon, we give computer simulations with different intensities of stochastic noise disturbance.
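The two-nutrient model itself is not spelled out in the abstract, but the kind of extinction/permanence threshold it refers to can be illustrated on the classical single-nutrient deterministic chemostat (a stand-in, with assumed Monod kinetics and parameters): the microorganism persists when its growth rate at the inflow nutrient level exceeds the dilution rate, and washes out otherwise.

```python
def chemostat(D, s0=2.0, mu_max=1.0, k=1.0, dt=0.01, T=200.0):
    """Euler-integrate the single-nutrient chemostat (yield set to 1):
    ds/dt = D*(s0 - s) - mu(s)*x,   dx/dt = (mu(s) - D)*x,
    with Monod uptake mu(s) = mu_max * s / (k + s)."""
    s, x = s0, 0.1
    for _ in range(int(T / dt)):
        mu = mu_max * s / (k + s)
        s += dt * (D * (s0 - s) - mu * x)
        x += dt * (mu - D) * x
    return x

mu_at_s0 = 1.0 * 2.0 / (1.0 + 2.0)   # break-even threshold: mu(s0) = 2/3

x_persist = chemostat(D=0.3)   # D < mu(s0): microorganism persists
x_washout = chemostat(D=0.9)   # D > mu(s0): washout (extinction)
```

Crossing the threshold D = mu(s0) flips the long-run outcome from a positive equilibrium biomass to extinction, which is the deterministic analogue of the thresholds compared in this record.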

  8. Self-consistency corrections in effective-interaction calculations

    International Nuclear Information System (INIS)

    Starkand, Y.; Kirson, M.W.

    1975-01-01

    Large-matrix extended-shell-model calculations are used to compute self-consistency corrections to the effective interaction and to the linked-cluster effective interaction. The corrections are found to be numerically significant and to affect the rate of convergence of the corresponding perturbation series. The influence of various partial corrections is tested. It is concluded that self-consistency is an important effect in determining the effective interaction and improving the rate of convergence. (author)

  9. Swallowing thresholds of mandibular implant-retained overdentures with variable portion sizes

    NARCIS (Netherlands)

    Fontijn-Tekamp, F.A.; Slagter, A.P.; Van der Bilt, A.; Van't Hof, M.A.; Kalk, W.; Jansen, J.A.

    2004-01-01

    We analysed the effect of three portion sizes of Optocal Plus (small, medium and large) on swallowing thresholds in subjects with either conventional complete dentures or mandibular implant-retained overdentures (transmandibular and permucosal cylindric implants). Tests were carried out in 52 women and

  10. Seven benzimidazole pesticides combined at sub-threshold levels induce micronuclei in vitro

    Science.gov (United States)

    Ermler, Sibylle; Scholze, Martin; Kortenkamp, Andreas

    2013-01-01

    Benzimidazoles act by disrupting microtubule polymerisation and are capable of inducing the formation of micronuclei. Considering the similarities in their mechanisms of action (inhibition of microtubule assembly by binding to the colchicine-binding site on tubulin monomers), combination effects according to the principles of concentration addition might occur. If so, it is to be expected that several benzimidazoles contribute to micronucleus formation even when each single one is present at or below threshold levels. This would have profound implications for risk assessment, but the idea has never been tested rigorously. To fill this gap, we analysed micronucleus frequencies for seven benzimidazoles, including the fungicide benomyl, its metabolite carbendazim, the anthelmintics albendazole, albendazole oxide, flubendazole, mebendazole and oxibendazole. Thiabendazole was also tested but was inactive. We used the cytochalasin-blocked micronucleus assay with CHO-K1 cells according to OECD guidelines, and employed an automated micronucleus scoring system based on image analysis to establish quantitative concentration–response relationships for the seven active benzimidazoles. Based on this information, we predicted additive combination effects for a mixture of the seven benzimidazoles by using the concepts of concentration addition and independent action. The observed effects of the mixture agreed very well with those predicted by concentration addition. Independent action underestimated the observed combined effects by a large margin. With a mixture that combined all benzimidazoles at their estimated threshold concentrations for micronucleus induction, micronucleus frequencies of ~15.5% were observed, correctly anticipated by concentration addition. On the basis of independent action, this mixture was expected to produce no effects. 
Our data provide convincing evidence that concentration addition is applicable to combinations of benzimidazoles that form micronuclei.
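The two mixture predictions compared in this record can be sketched numerically. Everything below (Hill-shaped curves, potencies, slopes, mixture ratios) is hypothetical illustration, not the paper's data; it only shows how concentration addition (CA) and independent action (IA) are computed and why IA can predict far smaller combined effects:

```python
import numpy as np

def hill(c, ec50, slope):
    """Hypothetical concentration-response curve, effect in [0, 1)."""
    return c**slope / (c**slope + ec50**slope)

def ec(effect, ec50, slope):
    """Concentration at which one component alone produces `effect`."""
    return ec50 * (effect / (1.0 - effect)) ** (1.0 / slope)

ec50s  = np.array([1.0, 2.0, 4.0, 8.0])  # assumed potencies
slopes = np.full(4, 2.0)                 # assumed Hill slopes
conc   = 0.2 * ec50s                     # each component at 20% of its EC50

# Independent action (IA): effects combine multiplicatively.
e_ia = 1.0 - np.prod(1.0 - hill(conc, ec50s, slopes))

# Concentration addition (CA): find the effect x with
# sum_i conc_i / EC_x,i = 1 (bisection; the sum decreases in x).
lo, hi = 1e-9, 1.0 - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if np.sum(conc / ec(mid, ec50s, slopes)) > 1.0:
        lo = mid
    else:
        hi = mid
e_ca = 0.5 * (lo + hi)
```

With equal slopes the CA solution here has the closed form e/(1-e) = 0.8², i.e. e = 16/41 ≈ 0.39, while IA predicts only about 0.15, mirroring the underestimation by independent action reported above.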

  11. Calculating the dim light melatonin onset: the impact of threshold and sampling rate.

    Science.gov (United States)

    Molina, Thomas A; Burgess, Helen J

    2011-10-01

    The dim light melatonin onset (DLMO) is the most reliable circadian phase marker in humans, but the cost of assaying samples is relatively high. Therefore, the authors examined differences between DLMOs calculated from hourly versus half-hourly sampling and differences between DLMOs calculated with two recommended thresholds (a fixed threshold of 3 pg/mL and a variable "3k" threshold equal to the mean plus two standard deviations of the first three low daytime points). The authors calculated these DLMOs from salivary dim light melatonin profiles collected from 122 individuals (64 women) at baseline. DLMOs derived from hourly sampling occurred on average only 6-8 min earlier than the DLMOs derived from half-hourly saliva sampling, and they were highly correlated with each other (r ≥ 0.89, p < .001), although in some cases they deviated by > 30 min from the DLMO derived from half-hourly sampling. The 3 pg/mL threshold produced significantly less variable DLMOs than the 3k threshold. However, the 3k threshold was significantly lower than the 3 pg/mL threshold (p < .001). The DLMOs calculated with the 3k method were significantly earlier (by 22-24 min) than the DLMOs calculated with the 3 pg/mL threshold, regardless of sampling rate. These results suggest that in large research studies and clinical settings, the more affordable and practical option of hourly sampling is adequate for a reasonable estimate of circadian phase. Although the 3 pg/mL fixed threshold is less variable than the 3k threshold, it produces estimates of the DLMO that are further from the initial rise of melatonin.
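A minimal sketch of the onset computation described here, using linear interpolation between samples and both threshold rules; the melatonin profile below is invented for illustration:

```python
import numpy as np

def dlmo(times_h, melatonin_pgml, threshold):
    """Return the linearly interpolated clock time (hours) at which
    melatonin first rises to `threshold` (the dim light melatonin onset)."""
    t = np.asarray(times_h, float)
    m = np.asarray(melatonin_pgml, float)
    i = np.nonzero(m >= threshold)[0][0]   # first sample at/above threshold
    if i == 0:
        return t[0]
    f = (threshold - m[i - 1]) / (m[i] - m[i - 1])   # interpolation fraction
    return t[i - 1] + f * (t[i] - t[i - 1])

# Hypothetical half-hourly evening profile (pg/mL), 19:00 to 23:00.
times = np.arange(19.0, 23.5, 0.5)
mel = np.array([0.8, 1.1, 0.9, 1.4, 2.1, 3.6, 6.0, 10.2, 15.0])

fixed = 3.0                                     # fixed 3 pg/mL threshold
k3 = mel[:3].mean() + 2 * mel[:3].std(ddof=1)   # variable "3k" threshold

dlmo_fixed = dlmo(times, mel, fixed)
dlmo_3k = dlmo(times, mel, k3)
```

On this made-up profile the 3k threshold is lower than 3 pg/mL and yields an earlier onset, matching the direction of the differences reported in the abstract.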

  12. A method of camera calibration with adaptive thresholding

    Science.gov (United States)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the parameters of the camera correctly, we must figure out the accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points of high curvature lying at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to select the gray-difference threshold adaptively. That makes it possible to pick up the correct chessboard inner corners under all kinds of gray contrast. Experimental results based on this method proved it to be feasible.
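The USAN idea behind SUSAN can be sketched in a few lines. The adaptive rule shown (a fixed fraction of the image's contrast range) is an assumption for illustration, not necessarily the paper's retrieval method:

```python
import numpy as np

def usan_area(img, y, x, t, r=2):
    """Count pixels in a (2r+1) x (2r+1) neighbourhood whose gray value is
    within t of the nucleus pixel: the USAN area of the SUSAN detector."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1]
    return int(np.sum(np.abs(patch.astype(float) - float(img[y, x])) <= t))

# Synthetic image: bright square on a dark background, corner at (8, 8).
img = np.zeros((16, 16), dtype=np.uint8)
img[8:, 8:] = 200

# Adaptive gray-difference threshold (assumed rule): a fraction of the
# image's contrast range instead of a fixed constant.
t = 0.1 * (float(img.max()) - float(img.min()))

flat = usan_area(img, 12, 12, t)    # interior of the bright region
corner = usan_area(img, 8, 8, t)    # corner of the bright region
```

A flat region gives the maximal USAN area (the full 5 × 5 window), while a corner gives roughly a quarter of it; thresholding that area is what flags corners.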

  13. Threshold quantum secret sharing based on single qubit

    Science.gov (United States)

    Lu, Changbin; Miao, Fuyou; Meng, Keju; Yu, Yue

    2018-03-01

    Based on a unitary phase shift operation on a single qubit in association with Shamir's (t, n) secret sharing, a (t, n) threshold quantum secret sharing scheme (or (t, n)-QSS) is proposed to share both classical information and quantum states. The scheme uses decoy photons to prevent eavesdropping and employs the secret in Shamir's scheme as the private value to guarantee the correctness of secret reconstruction. Analyses show it is resistant to the typical intercept-and-resend attack, the entangle-and-measure attack and participant attacks such as the entanglement swapping attack. Moreover, it is easier to realize physically and more practical in applications than related schemes. By the method in our scheme, new (t, n)-QSS schemes can be easily constructed using other classical (t, n) secret sharing schemes.
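The single-qubit protocol itself cannot be reproduced in a few lines, but the classical Shamir (t, n) layer it builds on can; below is a standard sketch over a prime field (the modulus choice is arbitrary):

```python
import random

P = 2**61 - 1   # a Mersenne prime used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the polynomial at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
shares = share(secret, t=3, n=5)
recovered = reconstruct(shares[:3])   # any 3 of the 5 shares suffice
```

Any subset of t = 3 shares recovers the secret exactly, while t − 1 shares reveal nothing; the quantum scheme uses this secret as its private value.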

  14. Ecological thresholds: The key to successful environmental management or an important concept with no practical application?

    Science.gov (United States)

    Groffman, P.M.; Baron, Jill S.; Blett, T.; Gold, A.J.; Goodman, I.; Gunderson, L.H.; Levinson, B.M.; Palmer, Margaret A.; Paerl, H.W.; Peterson, G.D.; Poff, N.L.; Rejeski, D.W.; Reynolds, J.F.; Turner, M.G.; Weathers, K.C.; Wiens, J.

    2006-01-01

    An ecological threshold is the point at which there is an abrupt change in an ecosystem quality, property or phenomenon, or where small changes in an environmental driver produce large responses in the ecosystem. Analysis of thresholds is complicated by nonlinear dynamics and by multiple factor controls that operate at diverse spatial and temporal scales. These complexities have challenged the use and utility of threshold concepts in environmental management despite great concern about preventing dramatic state changes in valued ecosystems, the need for determining critical pollutant loads and the ubiquity of other threshold-based environmental problems. In this paper we define the scope of the thresholds concept in ecological science and discuss methods for identifying and investigating thresholds using a variety of examples from terrestrial and aquatic environments, at ecosystem, landscape and regional scales. We end with a discussion of key research needs in this area.

  15. A New Wavelet Threshold Function and Denoising Application

    Directory of Open Access Journals (Sweden)

    Lu Jing-yi

    2016-01-01

    Full Text Available In order to improve denoising performance, this paper introduces the basic principles of wavelet threshold denoising and the traditional threshold function structures, and proposes an improved wavelet threshold function together with an improved fixed-threshold formula. First, the paper examines the problems of the traditional wavelet threshold functions and introduces adjustment factors to construct a new threshold function based on the soft threshold function. Then, it studies the fixed threshold and introduces a logarithmic function of the number of wavelet decomposition layers to design a new fixed-threshold formula. Finally, the hard, soft, Garrote, and improved threshold functions are used to denoise different signals, and the signal-to-noise ratios (SNR) and mean square errors (MSE) after denoising are computed for each. Theoretical analysis and experimental results showed that the proposed approach overcomes the constant-deviation problem of the soft threshold function and the discontinuity problem of the hard threshold function, addresses the problem of applying the same threshold value at different decomposition scales, effectively filters the noise in the signals, and improves the SNR while reducing the MSE of the output signals.
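The standard hard and soft threshold functions that the paper takes as its starting point, together with the SNR and MSE figures of merit, can be sketched as follows. This is a minimal illustration in Python; it does not reproduce the paper's improved threshold function or its fixed-threshold formula.

```python
import numpy as np

def hard_threshold(w, thr):
    """Keep coefficients whose magnitude exceeds thr; zero the rest.
    Discontinuous at |w| = thr, which can cause artifacts."""
    return np.where(np.abs(w) > thr, w, 0.0)

def soft_threshold(w, thr):
    """Shrink all coefficients toward zero by thr.
    Continuous, but biases every kept coefficient by a constant thr."""
    return np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)

def snr_db(clean, estimate):
    """Output signal-to-noise ratio in dB."""
    return 10.0 * np.log10(np.sum(clean**2) / np.sum((clean - estimate) ** 2))

def mse(clean, estimate):
    """Mean square error between the clean signal and the estimate."""
    return np.mean((clean - estimate) ** 2)

# In wavelet denoising these functions are applied to the detail
# coefficients of each decomposition level, not to the raw signal.
coeffs = np.array([3.2, -0.1, 0.05, -2.7, 0.4])
kept_hard = hard_threshold(coeffs, 1.0)  # large coefficients pass unchanged
kept_soft = soft_threshold(coeffs, 1.0)  # large coefficients shrink by 1.0
```

The two shortcomings the paper targets are visible directly in the definitions: `soft_threshold` shifts every surviving coefficient by the threshold (constant deviation), while `hard_threshold` jumps abruptly at |w| = thr (discontinuity).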

  16. Bias correction method for climate change impact assessment at a basin scale

    Science.gov (United States)

    Nyunt, C.; Jaranilla-sanchez, P. A.; Yamamoto, A.; Nemoto, T.; Kitsuregawa, M.; Koike, T.

    2012-12-01

    Climate change impact studies are mainly based on general circulation models (GCM), and such studies play an important role in defining suitable adaptation strategies for a resilient environment in basin-scale management. For this purpose, this study summarizes how to select appropriate GCMs so as to reduce the uncertainty in the analysis. The approach was applied to the Pampanga, Angat and Kaliwa rivers in Luzon Island, the main island of the Philippines; these three river basins play important roles in irrigation water supply and as municipal water sources for Metro Manila. Based on GCM scores for the seasonal evolution of the Asian summer monsoon and for the spatial correlation and root-mean-squared error of atmospheric variables over the region, six GCMs were finally chosen. Next, we develop a complete, efficient and comprehensive statistical bias-correction scheme covering extreme events, normal rainfall and the frequency of dry periods. Owing to the coarse resolution and parameterization schemes of GCMs, underestimation of extreme rainfall, too many rain days with low intensity, and poor representation of local seasonality are well-known GCM biases. Extreme rainfall has unusual characteristics and should be treated specifically: estimated maximum extreme rainfall is crucial for the planning and design of infrastructure in a river basin, and developing countries, with their limited technical, financial and management resources for implementing adaptation measures, need detailed information on drought and flood for the near future. Traditionally, extremes have been analysed using an annual maximum series (AMS) fitted to a Gumbel or lognormal distribution; the drawback is the loss of the second-, third-, etc., largest rainfall events. Another approach is the partial duration series (PDS), constructed from the values above a selected threshold, which permits more than one event per year. The generalized Pareto distribution (GPD) has been used to model the PDS, the series of excesses over a threshold
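The peaks-over-threshold idea described above can be made concrete. The following Python fragment fits a GPD to the excesses over a high threshold using the method of moments; the synthetic rainfall series, the 98th-percentile threshold choice, and the moment estimator (rather than the maximum-likelihood fit more usual in practice) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic 20-year daily rainfall record (mm/day); stands in for observations.
daily_rain = rng.gamma(shape=0.5, scale=10.0, size=20 * 365)

threshold = np.quantile(daily_rain, 0.98)  # a common pragmatic choice
excesses = daily_rain[daily_rain > threshold] - threshold

# Method-of-moments GPD estimates: mean m and variance v of the excesses
# give shape xi = (1 - m^2/v) / 2 and scale sigma = m * (1 - xi).
m, v = excesses.mean(), excesses.var()
xi = 0.5 * (1.0 - m**2 / v)
sigma = m * (1.0 - xi)

# T-year return level: the value exceeded on average once every T years,
# with lam exceedances per year above the threshold.
lam = excesses.size / 20.0
T = 100
return_level = threshold + sigma / xi * ((lam * T) ** xi - 1.0)
```

Unlike an annual-maximum analysis, every exceedance of the threshold contributes to the fit, so the second- and third-largest events of a wet year are not discarded.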

  17. ISR corrections to associated HZ production at future Higgs factories

    Directory of Open Access Journals (Sweden)

    Mario Greco

    2018-02-01

    Full Text Available We evaluate the QED corrections due to initial state radiation (ISR to associated Higgs boson production in electron–positron (e+e− annihilation at typical energies of interest for the measurement of the Higgs properties at future e+e− colliders, such as CEPC and FCC–ee. We apply the QED Structure Function approach to the four-fermion production process e+e− → μ+μ− b b̄, including both signal and background contributions. We emphasize the relevance of the ISR corrections particularly near threshold and show that finite third order collinear contributions are mandatory to meet the expected experimental accuracy. We analyze in turn the rôle played by a full four-fermion calculation and beam energy spread in precision calculations for Higgs physics at future e+e− colliders.

  18. ISR corrections to associated HZ production at future Higgs factories

    Science.gov (United States)

    Greco, Mario; Montagna, Guido; Nicrosini, Oreste; Piccinini, Fulvio; Volpi, Gabriele

    2018-02-01

    We evaluate the QED corrections due to initial state radiation (ISR) to associated Higgs boson production in electron-positron (e+e−) annihilation at typical energies of interest for the measurement of the Higgs properties at future e+e− colliders, such as CEPC and FCC-ee. We apply the QED Structure Function approach to the four-fermion production process e+e− → μ+μ− b b̄, including both signal and background contributions. We emphasize the relevance of the ISR corrections particularly near threshold and show that finite third order collinear contributions are mandatory to meet the expected experimental accuracy. We analyze in turn the rôle played by a full four-fermion calculation and beam energy spread in precision calculations for Higgs physics at future e+e− colliders.

  19. Very long spatial and temporal spontaneous coherence of 2D polariton condensates across the parametric threshold

    DEFF Research Database (Denmark)

    Spano, R.; Cuadra, J.; Lingg, C.

    2011-01-01

    , and a relatively large beam area (∅~50 μm) to obtain a true 2D condensate. Its coherence properties are measured with a Michelson interferometer. A finite correlation length is measured at an energy δE=-0.19 meV from the parametric threshold, as shown in Fig. 1(A). Once the threshold is reached, by changing...

  20. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 on a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the author's website.

  1. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems.

    Science.gov (United States)

    Kruse, Holger; Grimme, Stefan

    2012-04-21

    chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 on a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the author's website.

  2. Threshold behavior in electron-atom scattering

    International Nuclear Information System (INIS)

    Sadeghpour, H.R.; Greene, C.H.

    1996-01-01

    Ever since the classic work of Wannier in 1953, the process of treating two threshold electrons in the continuum of a positively charged ion has been an active field of study. The authors have developed a treatment motivated by the physics below the double ionization threshold. By modeling the double ionization as a series of Landau-Zener transitions, they obtain an analytical formulation of the absolute threshold probability which has a leading power-law behavior akin to Wannier's law. Noteworthy aspects of this derivation are that it can be conveniently continued below threshold, giving rise to a "cusp" at threshold, and that on both sides of the threshold absolute values of the cross sections are obtained

  3. Incorporation of QCD effects in basic corrections of the electroweak theory

    CERN Document Server

    Fanchiotti, Sergio; Sirlin, Alberto; Fanchiotti, Sergio; Kniehl, Bernd; Sirlin, Alberto

    1993-01-01

    We study the incorporation of QCD effects in the basic electroweak corrections Δr̂, Δr̂_W, and Δr. They include perturbative O(αα_s) contributions and t t̄ threshold effects. The latter are studied in the resonance and Green-function approaches, in the framework of dispersion relations that automatically satisfy relevant Ward identities. Refinements in the treatment of the electroweak corrections, in both the MS-bar and the on-shell schemes of renormalization, are introduced, including the decoupling of the top quark in certain amplitudes, its effect on ê²(m_Z) and sin²θ̂_W(m_Z), the incorporation of recent results on the leading irreducible O(α²) corrections, and simple expressions for the residual, i.e. "non-electromagnetic", parts of Δr̂, Δr̂_W, and Δr. The results are used to obtain accurate values for m_W and sin²θ̂_W(m_Z), as functions of m_t and m_H. The higher-order effects induce shifts in these parameters comparable to the expected experimental accuracy, a...

  4. Stability thresholds of a disk-shaped Migma

    International Nuclear Information System (INIS)

    Wong, H.V.; Rosenbluth, M.N.; Berk, H.L.

    1988-08-01

    The stability of a Migma disc is re-examined to determine the threshold for the interchange instability. It is shown that a previous calculation, which assumes a rigid-mode eigenfunction, is inaccurate at the predicted particle number for marginal stability. As a result, the integral equation for the system must be solved. A variational method of solution is developed and is shown to give good agreement with a direct numerical solution. The threshold for instability is found to be sensitive to the details of the distribution function. For highly focused systems, where all ions pass close to the axis, the threshold particle number (N_u1) for instability is substantially below that predicted by rigid-mode theory (N_rigid) (by a factor of approximately 8ε², where ε = r₁/r_L, with r₁ the spread in the distance of closest approach to the axis and r_L the ion Larmor radius). At higher density a second band of stability appears that again destabilizes at a yet higher particle number (N_u2). If ε ≪ 1, N_u2 is substantially below the rigid-mode prediction, while for 0.2 < ε < 0.3, N_u2 is comparable to the rigid-mode prediction. At moderate values of ε (ε ∼ 0.3-0.4) the second stability band disappears, and the instability particle-number threshold varies from about 0.4ε, when ε = 0.4, to 0.7ε when ε is about unity. The stability criteria would be consistent with the observed particle storage number obtained in experimental configurations if the spread in ε is sufficiently large. 11 refs., 6 figs., 6 tabs

  5. Method for measuring multiple scattering corrections between liquid scintillators

    Energy Technology Data Exchange (ETDEWEB)

    Verbeke, J.M., E-mail: verbeke2@llnl.gov; Glenn, A.M., E-mail: glenn22@llnl.gov; Keefer, G.J., E-mail: keefer1@llnl.gov; Wurtz, R.E., E-mail: wurtz1@llnl.gov

    2016-07-21

    A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.

  6. The adaptive value of gluttony: predators mediate the life history trade-offs of satiation threshold.

    Science.gov (United States)

    Pruitt, J N; Krauel, J J

    2010-10-01

    Animals vary greatly in their tendency to consume large meals. Yet, whether or how meal size influences fitness in wild populations is infrequently considered. Using a predator exclusion, mark-recapture experiment, we estimated selection on the amount of food accepted during an ad libitum feeding bout (hereafter termed 'satiation threshold') in the wolf spider Schizocosa ocreata. Individually marked, size-matched females of known satiation threshold were assigned to predator exclusion and predator inclusion treatments and tracked for a 40-day period. We also estimated the narrow-sense heritability of satiation threshold using dam-on-female-offspring regression. In the absence of predation, high satiation threshold was positively associated with larger and faster egg case production. However, these selective advantages were lost when predators were present. We estimated the heritability of satiation threshold to be 0.56. Taken together, our results suggest that satiation threshold can respond to selection and begets a life history trade-off in this system: high satiation threshold individuals tend to produce larger egg cases but also suffer increased susceptibility to predation. © 2010 The Authors. Journal Compilation © 2010 European Society For Evolutionary Biology.

  7. Double Photoionization Near Threshold

    Science.gov (United States)

    Wehlitz, Ralf

    2007-01-01

    The threshold region of the double-photoionization cross section is of particular interest because both ejected electrons move slowly in the Coulomb field of the residual ion. Near threshold both electrons have time to interact with each other and with the residual ion. Also, different theoretical models compete to describe the double-photoionization cross section in the threshold region. We have investigated that cross section for lithium and beryllium and have analyzed our data with respect to the latest results in the Coulomb-dipole theory. We find that our data support the idea of a Coulomb-dipole interaction.

  8. Two-dimensional threshold voltage analytical model of DMG strained-silicon-on-insulator MOSFETs

    International Nuclear Information System (INIS)

    Li Jin; Liu Hongxia; Li Bin; Cao Lei; Yuan Bo

    2010-01-01

    For the first time, a simple and accurate two-dimensional analytical model for the surface potential variation along the channel in fully depleted dual-material gate strained-Si-on-insulator (DMG SSOI) MOSFETs is developed. We investigate the improved short channel effect (SCE), hot carrier effect (HCE), drain-induced barrier-lowering (DIBL) and carrier transport efficiency for the novel structure MOSFET. The analytical model takes into account the effects of different metal gate lengths, work functions, the drain bias and Ge mole fraction in the relaxed SiGe buffer. The surface potential in the channel region exhibits a step potential, which can suppress SCE, HCE and DIBL. Also, strained-Si and SOI structure can improve the carrier transport efficiency, with strained-Si being particularly effective. Further, the threshold voltage model correctly predicts a 'rollup' in threshold voltage with decreasing channel length ratios or Ge mole fraction in the relaxed SiGe buffer. The validity of the two-dimensional analytical model is verified using numerical simulations. (semiconductor devices)

  9. Fabrication of Pt nanowires with a diffraction-unlimited feature size by high-threshold lithography

    International Nuclear Information System (INIS)

    Li, Li; Zhang, Ziang; Yu, Miao; Song, Zhengxun; Weng, Zhankun; Wang, Zuobin; Li, Wenjun; Wang, Dapeng; Zhao, Le; Peng, Kuiqing

    2015-01-01

    Although the nanoscale world can already be observed at a diffraction-unlimited resolution using far-field optical microscopy, to make the step from microscopy to lithography still requires a suitable photoresist material system. In this letter, we consider the threshold to be a region with a width characterized by the extreme feature size obtained using a Gaussian beam spot. By narrowing such a region through improvement of the threshold sensitization to intensity in a high-threshold material system, the minimal feature size becomes smaller. By using platinum as the negative photoresist, we demonstrate that high-threshold lithography can be used to fabricate nanowire arrays with a scalable resolution along the axial direction of the linewidth from the micro- to the nanoscale using a nanosecond-pulsed laser source with a wavelength λ₀ = 1064 nm. The minimal feature size is only several nanometers (sub-λ₀/100). Compared with conventional polymer resist lithography, the advantages of high-threshold lithography are sharper pinpoints of laser intensity triggering the threshold response and also higher robustness allowing for large area exposure by a less-expensive nanosecond-pulsed laser

  10. Corrective response times in a coordinated eye-head-arm countermanding task.

    Science.gov (United States)

    Tao, Gordon; Khan, Aarlenne Z; Blohm, Gunnar

    2018-06-01

    Inhibition of motor responses has been described as a race between two competing decision processes of motor initiation and inhibition, which manifest as the reaction time (RT) and the stop signal reaction time (SSRT); in the case where motor initiation wins out over inhibition, an erroneous movement occurs that usually needs to be corrected, leading to corrective response times (CRTs). Here we used a combined eye-head-arm movement countermanding task to investigate the mechanisms governing multiple effector coordination and the timing of corrective responses. We found a high degree of correlation between effector response times for RT, SSRT, and CRT, suggesting that decision processes are strongly dependent across effectors. To gain further insight into the mechanisms underlying CRTs, we tested multiple models to describe the distribution of RTs, SSRTs, and CRTs. The best-ranked model (according to 3 information criteria) extends the LATER race model governing RTs and SSRTs, whereby a second motor initiation process triggers the corrective response (CRT) only after the inhibition process completes in an expedited fashion. Our model suggests that the neural processing underpinning a failed decision has a residual effect on subsequent actions. NEW & NOTEWORTHY Failure to inhibit erroneous movements typically results in corrective movements. For coordinated eye-head-hand movements we show that corrective movements are only initiated after the erroneous movement cancellation signal has reached a decision threshold in an accelerated fashion.
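The race logic described in the abstract can be illustrated with a toy Monte Carlo of a LATER-style countermanding trial. All rates, delays, and the clipping of non-positive rise rates below are invented for illustration; this is not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def later_times(mu, sigma, n, threshold=1.0):
    """LATER: a decision signal rises linearly at a rate r ~ N(mu, sigma)
    and fires when it crosses `threshold`, so the finish time is threshold/r."""
    rates = np.clip(rng.normal(mu, sigma, n), 1e-3, None)  # keep rates positive
    return threshold / rates

n_trials = 10_000
ssd = 0.15                                                 # stop-signal delay (s), assumed
go = later_times(mu=5.0, sigma=1.0, n=n_trials)            # go (initiation) process
stop = ssd + later_times(mu=10.0, sigma=1.5, n=n_trials)   # stop (inhibition) process

failed_stop = go < stop        # go finished first: an erroneous movement occurs
p_error = failed_stop.mean()

# Corrective responses start only after the inhibition process completes,
# followed by a second (re-initiation) LATER stage, as in the winning model.
n_fail = int(failed_stop.sum())
crt = stop[failed_stop] + later_times(mu=8.0, sigma=1.0, n=n_fail)
```

The key structural feature of the best-ranked model is visible in the last line: the corrective response time is gated by the completion of the stop process rather than starting independently.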

  11. Predissociation of the D ¹Πᵤ state of H₂ near threshold

    International Nuclear Information System (INIS)

    Borondo, F.; Eguiagaray, L.R.; Riera, A.

    1982-01-01

    A recent calculation of Komarov and Ostrovsky (J. Phys. B; 12:2485 (1979)) seemed to have settled a controversy regarding the different experimental values of the H(2s)/H(2p) sharing ratio in the predissociation of the D ¹Πᵤ state of H₂ near threshold. This calculation was based on a correct physical picture of the dissociation process, but the dynamical treatment rests on invalid assumptions. In the present work, a more rigorous quantum mechanical treatment is presented, and a branching ratio of 0.70 is obtained. (author)

  12. Coherent π0 electroproduction on the deuteron at threshold

    International Nuclear Information System (INIS)

    Jeon, B.K.; Sato, T.; Ohtsubo, H.

    1989-01-01

    We studied the effect of the exchange current on the longitudinal form factor of neutral pion electroproduction. As a result, we obtained a large effect of the exchange current on pion production at threshold with a momentum transfer of 2-3 fm⁻¹. This reaction may serve as a clear test of the exchange current, which is related to the exchange axial charge. (orig.)

  13. Non-Gaussian Halo Bias Re-examined: Mass-dependent Amplitude from the Peak-Background Split and Thresholding

    International Nuclear Information System (INIS)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2011-01-01

    Recent results of N-body simulations have shown that current theoretical models are not able to correctly predict the amplitude of the scale-dependent halo bias induced by primordial non-Gaussianity, for models going beyond the simplest, local quadratic case. Motivated by these discrepancies, we carefully examine three theoretical approaches based on (1) the statistics of thresholded regions, (2) a peak-background split method based on separation of scales, and (3) a peak-background split method using the conditional mass function. We first demonstrate that the statistics of thresholded regions, which is shown to be equivalent at leading order to a local bias expansion, cannot explain the mass-dependent deviation between theory and N-body simulations. In the two formulations of the peak-background split on the other hand, we identify an important, but previously overlooked, correction to the non-Gaussian bias that strongly depends on halo mass. This new term is in general significant for any primordial non-Gaussianity going beyond the simplest local f_NL model. In a separate paper (to be published in PRD rapid communication), the authors compare these new theoretical predictions with N-body simulations, showing good agreement for all simulated types of non-Gaussianity.

  14. Vanishing corrections on intermediate scale and implications for unification of forces

    International Nuclear Information System (INIS)

    Parida, M.K.

    1996-02-01

    In two-step breakings of a class of grand unified theories including SO(10), we prove a theorem showing that the scale (M_I) where the Pati-Salam gauge symmetry with parity breaks down to the standard gauge group has vanishing corrections due to all sources emerging from higher scales (μ > M_I), such as one-loop and all higher-loop effects, GUT-threshold, gravitational smearing, and string threshold effects. Implications of such a scale for the unification of gauge couplings with small Majorana neutrino masses are discussed. In string-inspired SO(10), we show that M_I ≅ 5 × 10¹² GeV, needed for neutrino masses, with the GUT scale M_U ≅ M_str, can be realized provided certain particle states in the predicted spectrum are light. (author). 28 refs, 1 tab

  15. No evidence of a threshold in traffic volume affecting road-kill mortality at a large spatio-temporal scale

    Energy Technology Data Exchange (ETDEWEB)

    Grilo, Clara, E-mail: clarabentesgrilo@gmail.com [Departamento de Biología de la Conservación, Estación Biológica de Doñana (EBD-CSIC), Calle Américo Vespucio s/n, E-41092 Sevilla (Spain); Centro Brasileiro de Estudos em Ecologia de Estradas, Departamento de Biologia, Universidade Federal de Lavras, Campus Universitário, 37200-000 Lavras, Minas Gerais (Brazil); Ferreira, Flavio Zanchetta; Revilla, Eloy [Departamento de Biología de la Conservación, Estación Biológica de Doñana (EBD-CSIC), Calle Américo Vespucio s/n, E-41092 Sevilla (Spain)

    2015-11-15

    Previous studies have found that the relationship between wildlife road mortality and traffic volume follows a threshold effect on low traffic volume roads. We aimed at evaluating the response of several species to increasing traffic intensity on highways over a large geographic area and temporal period. We used data of four terrestrial vertebrate species with different biological and ecological features known by their high road-kill rates: the barn owl (Tyto alba), hedgehog (Erinaceus europaeus), red fox (Vulpes vulpes) and European rabbit (Oryctolagus cuniculus). Additionally, we checked whether road-kill likelihood varies when traffic patterns depart from the average. We used annual average daily traffic (AADT) and road-kill records observed along 1000 km of highways in Portugal over seven consecutive years (2003–2009). We fitted candidate models using Generalized Linear Models with a binomial distribution through a sample unit of 1 km segments to describe the effect of traffic on the probability of finding at least one victim in each segment during the study. We also assigned for each road-kill record the traffic of that day and the AADT on that year to test for differences using Paired Student's t-test. Mortality risk declined significantly with traffic volume but varied among species: the probability of finding road-killed red foxes and rabbits occurs up to moderate traffic volumes (< 20,000 AADT) whereas barn owls and hedgehogs occurred up to higher traffic volumes (40,000 AADT). Perception of risk may explain differences in responses towards high traffic highway segments. Road-kill rates did not vary significantly when traffic intensity departed from the average. In summary, we did not find evidence of traffic thresholds for the analysed species and traffic intensities. We suggest mitigation measures to reduce mortality be applied in particular on low traffic roads (< 5000 AADT) while additional measures to reduce barrier effects should take into

  16. No evidence of a threshold in traffic volume affecting road-kill mortality at a large spatio-temporal scale

    International Nuclear Information System (INIS)

    Grilo, Clara; Ferreira, Flavio Zanchetta; Revilla, Eloy

    2015-01-01

    Previous studies have found that the relationship between wildlife road mortality and traffic volume follows a threshold effect on low traffic volume roads. We aimed at evaluating the response of several species to increasing traffic intensity on highways over a large geographic area and temporal period. We used data of four terrestrial vertebrate species with different biological and ecological features known by their high road-kill rates: the barn owl (Tyto alba), hedgehog (Erinaceus europaeus), red fox (Vulpes vulpes) and European rabbit (Oryctolagus cuniculus). Additionally, we checked whether road-kill likelihood varies when traffic patterns depart from the average. We used annual average daily traffic (AADT) and road-kill records observed along 1000 km of highways in Portugal over seven consecutive years (2003–2009). We fitted candidate models using Generalized Linear Models with a binomial distribution through a sample unit of 1 km segments to describe the effect of traffic on the probability of finding at least one victim in each segment during the study. We also assigned for each road-kill record the traffic of that day and the AADT on that year to test for differences using Paired Student's t-test. Mortality risk declined significantly with traffic volume but varied among species: the probability of finding road-killed red foxes and rabbits occurs up to moderate traffic volumes (< 20,000 AADT) whereas barn owls and hedgehogs occurred up to higher traffic volumes (40,000 AADT). Perception of risk may explain differences in responses towards high traffic highway segments. Road-kill rates did not vary significantly when traffic intensity departed from the average. In summary, we did not find evidence of traffic thresholds for the analysed species and traffic intensities. We suggest mitigation measures to reduce mortality be applied in particular on low traffic roads (< 5000 AADT) while additional measures to reduce barrier effects should take into

  17. Establishment of the BOSPOR-80 machine library of evaluated threshold reaction cross-sections and its testing by means of integral experiments

    International Nuclear Information System (INIS)

    Bychkov, V.M.; Zolotarev, K.I.; Pashchenko, A.B.; Plyaskin, V.I.

    1982-08-01

    A paper was published in 1979 containing a compilation of experimental data on the cross-sections of (n,p), (n,α) and (n,2n) threshold reactions, together with recommended excitation functions. A further paper considered the development of evaluation methods based on theoretical model calculations, an increase in the number of recommended excitation functions, correction of the recommended cross-sections on the basis of integral experiments, and allowance for recent experimental data. To serve a wide range of users, BOSPOR-80, a machine library of evaluated threshold-reaction cross-sections, was set up

  18. Impact of Thresholds and Load Patterns when Executing HPC Applications with Cloud Elasticity

    Directory of Open Access Journals (Sweden)

    Vinicius Facco Rodrigues

    2016-04-01

    Full Text Available Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds. In this approach, maximum and minimum limits are used to drive resource allocation and deallocation actions, leading to the following problem statements: How can cloud users set the threshold values to enable elasticity in their cloud applications? And what is the impact of the application's load pattern on the elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive and PaaS-based elasticity model called AutoElastic and employed it over a private cloud to execute a numerical integration application. Here, we present an analysis of best practices and possible optimizations regarding the combination of elasticity and HPC. Considering the results, we observed that the maximum threshold influences the application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration when earlier activation could be pertinent for reducing the application runtime.
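The reactive threshold mechanism described above can be sketched in a few lines. The threshold values, the load trace, and the one-VM-per-interval step are illustrative assumptions, not AutoElastic's actual policy.

```python
def elasticity_action(cpu_load, n_vms, upper=0.8, lower=0.3, min_vms=1):
    """Return the VM count after one monitoring interval's scaling decision."""
    if cpu_load > upper:
        return n_vms + 1          # scale out: load above the maximum threshold
    if cpu_load < lower and n_vms > min_vms:
        return n_vms - 1          # scale in: load below the minimum threshold
    return n_vms                  # within the band: no reconfiguration

# A synthetic load pattern: ramp up, sustain, ramp down.
trace = [0.2, 0.5, 0.9, 0.95, 0.85, 0.6, 0.25, 0.1]
vms = 1
history = []
for load in trace:
    vms = elasticity_action(load, vms)
    history.append(vms)
```

Lowering `upper` below values close to 100% CPU triggers scale-out earlier, which matches the paper's observation that a maximum threshold near 100% weakens reactivity and postpones reconfiguration.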

  19. Nonlinear threshold behavior during the loss of Arctic sea ice.

    Science.gov (United States)

    Eisenman, I; Wettlaufer, J S

    2009-01-06

    In light of the rapid recent retreat of Arctic sea ice, a number of studies have discussed the possibility of a critical threshold (or "tipping point") beyond which the ice-albedo feedback causes the ice cover to melt away in an irreversible process. The focus has typically been centered on the annual minimum (September) ice cover, which is often seen as particularly susceptible to destabilization by the ice-albedo feedback. Here, we examine the central physical processes associated with the transition from ice-covered to ice-free Arctic Ocean conditions. We show that although the ice-albedo feedback promotes the existence of multiple ice-cover states, the stabilizing thermodynamic effects of sea ice mitigate this when the Arctic Ocean is ice covered during a sufficiently large fraction of the year. These results suggest that critical threshold behavior is unlikely during the approach from current perennial sea-ice conditions to seasonally ice-free conditions. In a further warmed climate, however, we find that a critical threshold associated with the sudden loss of the remaining wintertime-only sea ice cover may be likely.

  20. Threshold velocity for environmentally-assisted cracking in low alloy steels

    International Nuclear Information System (INIS)

    Wire, G.L.; Kandra, J.T.

    1997-01-01

    Environmentally Assisted Cracking (EAC) in low alloy steels, an increase of the fatigue crack growth rate to as much as 40 to 100 times the rate in air that occurs in high-temperature LWR environments, is generally believed to be activated by dissolution of MnS inclusions at the crack tip. A steady-state theory developed by Combrade suggested that EAC will initiate only above a critical crack velocity and cease below this same velocity. A range of about a factor of twenty in critical crack-tip velocities was invoked by Combrade et al. to describe the data available at that time. This range was attributed to exposure of additional sulfides above and below the crack plane. However, direct measurements of exposed sulfide densities on cracked specimens were performed herein, and the results rule out significant additional sulfide exposure as a plausible explanation. Alternatively, it is proposed herein that localized EAC starting at large sulfide clusters reduces the calculated threshold velocity from the value predicted for a uniform distribution of sulfides. Calculations are compared with experimental results where the threshold velocity has been measured, and the predicted wide range of threshold values for steels of similar sulfur content but varying sulfide morphology is observed. The threshold velocity decreases with increasing maximum sulfide particle size, qualitatively consistent with the theory. The calculation provides a basis for a conservative minimum velocity threshold tied directly to the steel sulfur level in cases where no details of the sulfide distribution are known.

  1. SOA thresholds for the perception of discrete/continuous tactile stimulation

    DEFF Research Database (Denmark)

    Eid, Mohamad; Korres, Georgios; Jensen, Camilla Birgitte Falk

    In this paper we present an experiment to measure the upper and lower thresholds of the Stimulus Onset Asynchrony (SOA) for continuous/discrete apparent haptic motion. We focus on three stimulation parameters: the burst duration, the SOA time, and the inter-actuator distance (between successive......-discrete boundary at lower SOA. Furthermore, the larger the inter-actuator distance, the more linear the relationship between the burst duration and the SOA timing. Finally, the large range between lower and upper thresholds for SOA can be utilized to create continuous movement stimulation on the skin at “varying...... speeds”. The results are discussed in reference to designing a tactile interface for providing continuous haptic motion with a desired speed of continuous tactile stimulation....

  2. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (256 intensity levels, 0-255) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
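The colour-thresholding idea, selecting pixels whose colour falls within a per-channel window rather than a single grey-level window, can be sketched as follows. This is a hypothetical minimal example, not the authors' actual true-colour video microscopy system:

```python
def colour_threshold(pixels, lo, hi):
    """Per-channel colour thresholding: a pixel is selected only if each of
    its (R, G, B) components lies within the corresponding [lo, hi] window.
    Monochrome densitometry is the special case of a single window applied
    to one intensity channel."""
    return [all(l <= c <= h for c, l, h in zip(px, lo, hi)) for px in pixels]
```

Two stains with the same overall intensity but different hue are separable by such per-channel windows, whereas a single grey-level threshold cannot distinguish them.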

  3. Modeling direction discrimination thresholds for yaw rotations around an earth-vertical axis for arbitrary motion profiles.

    Science.gov (United States)

    Soyka, Florian; Giordano, Paolo Robuffo; Barnett-Cowan, Michael; Bülthoff, Heinrich H

    2012-07-01

    Understanding the dynamics of vestibular perception is important, for example, for improving the realism of motion simulation and virtual reality environments or for diagnosing patients suffering from vestibular problems. Previous research has found a dependence of direction discrimination thresholds for rotational motions on the period length (inverse frequency) of a transient (single cycle) sinusoidal acceleration stimulus. However, self-motion is seldom purely sinusoidal, and up to now, no models have been proposed that take into account non-sinusoidal stimuli for rotational motions. In this work, the influence of both the period length and the specific time course of an inertial stimulus is investigated. Thresholds for three acceleration profile shapes (triangular, sinusoidal, and trapezoidal) were measured for three period lengths (0.3, 1.4, and 6.7 s) in ten participants. A two-alternative forced-choice discrimination task was used where participants had to judge if a yaw rotation around an earth-vertical axis was leftward or rightward. The peak velocity of the stimulus was varied, and the threshold was defined as the stimulus yielding 75 % correct answers. In accordance with previous research, thresholds decreased with shortening period length (from ~2 deg/s for 6.7 s to ~0.8 deg/s for 0.3 s). The peak velocity was the determining factor for discrimination: Different profiles with the same period length have similar velocity thresholds. These measurements were used to fit a novel model based on a description of the firing rate of semi-circular canal neurons. In accordance with previous research, the estimates of the model parameters suggest that velocity storage does not influence perceptual thresholds.
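The threshold definition used above, the stimulus yielding 75% correct responses in the two-alternative task, can be illustrated with a simple linear interpolation across measured performance levels. This is a sketch only; the study fit a full psychometric model to the response data:

```python
def threshold_75(levels, p_correct):
    """Return the stimulus level (e.g. peak velocity in deg/s) at which the
    proportion of correct 2AFC responses crosses 0.75, by linear
    interpolation between adjacent measured points."""
    pts = sorted(zip(levels, p_correct))
    for (x0, p0), (x1, p1) in zip(pts, pts[1:]):
        if p0 <= 0.75 <= p1:
            return x0 + (0.75 - p0) * (x1 - x0) / (p1 - p0)
    raise ValueError("75% point is not bracketed by the measurements")
```

For example, measured proportions of 0.70 at 1.0 deg/s and 0.90 at 2.0 deg/s place the 75% threshold at 1.25 deg/s.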

  4. Photoproduction of vector mesons off nucleons near threshold

    International Nuclear Information System (INIS)

    Friman, B.

    1995-01-01

    A simple meson-exchange model is proposed for the photoproduction of ρ- and ω-mesons off protons near threshold. This model provides a good description of the available data and implies a large ρ-nucleon interaction in the scalar channel (σ-exchange). This phenomenological interaction is applied to estimate the leading contribution to the self-energy of ρ-mesons in matter. The implications of our calculation for experimental studies of the ρ-meson mass in nuclei are discussed. (author)

  5. Threshold law for electron-atom impact ionization

    International Nuclear Information System (INIS)

    Temkin, A.

    1982-01-01

    The threshold law for electron-atom ionization is derived on the basis of the Coulomb-dipole theory. The result is a modulated quasilinear law for the yield: Q ∝ E (ln E)⁻² [1 + C sin(α ln E + μ)]. The derivation depends on a more accurate description of the dipole moment seen by the outer electron as a function of the distance of the inner electron from the nucleus. The derivation also implies C ≈ α⁻¹, and it also suggests that α is large. The same law also applies to positron-atom impact ionization.

  6. Segmentation by Large Scale Hypothesis Testing - Segmentation as Outlier Detection

    DEFF Research Database (Denmark)

    Darkner, Sune; Dahl, Anders Lindbjerg; Larsen, Rasmus

    2010-01-01

    We propose a novel and efficient way of performing local image segmentation. For many applications a threshold of pixel intensities is sufficient, but determining the appropriate threshold value can be difficult. In cases with large global intensity variation the threshold value has to be adapted locally. We propose a method based on large scale hypothesis testing with a consistent method for selecting an appropriate threshold for the given data. By estimating the background distribution we characterize the segment of interest as a set of outliers with a certain probability based on the estimated distribution. We demonstrate the method on images recorded with a microscope and show how it can handle transparent particles with a significant glare point. The method generalizes to other problems, as illustrated by applying it to camera calibration images and MRI of the midsagittal plane for grey and white matter separation and segmentation.
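The core idea, estimate the background distribution and treat sufficiently improbable pixels as the segment of interest, reduces in the simplest Gaussian case to a z-score test. This is an illustrative sketch only; the paper's method performs large-scale hypothesis testing with a principled, data-driven threshold:

```python
import statistics

def segment_outliers(intensities, z_crit=3.0):
    """Segmentation as outlier detection: fit a Gaussian background
    (mean, std) to all pixel intensities and flag pixels whose z-score
    exceeds z_crit as belonging to the segment of interest."""
    mu = statistics.mean(intensities)
    sigma = statistics.pstdev(intensities)
    return [abs(x - mu) / sigma > z_crit for x in intensities]
```

Unlike a fixed global intensity threshold, the cut-off here adapts to whatever background distribution the data exhibits.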

  7. Photoproduction of the φ(1020) near threshold in CLAS

    International Nuclear Information System (INIS)

    Tedeschi, D.J.

    2002-01-01

    The differential cross section for the photoproduction of the φ (1020) near threshold (Eγ = 1.57 GeV) is predicted to be sensitive to production mechanisms other than diffraction. However, the existing low energy data are of limited statistics and kinematical coverage. Complete measurements of φ meson production on the proton have been performed at The Thomas Jefferson National Accelerator Facility using a liquid hydrogen target and the CEBAF Large Acceptance Spectrometer (CLAS). The φ was identified by missing mass using a proton and positive kaon detected by CLAS in coincidence with an electron in the photon tagger. The energy of the tagged bremsstrahlung photons ranged from φ-threshold to 2.4 GeV. A description of the data set and the differential cross section at Eγ = 2.0 GeV will be presented and compared with present theoretical calculations. (author)

  8. Radiative corrections to chargino production in electron-positron collisions with polarized beams

    International Nuclear Information System (INIS)

    Diaz, Marco A.; King, Stephen F.; Ross, Douglas A.

    2001-01-01

    We study radiative corrections to chargino production at linear colliders with polarized electron beams. We calculate the one-loop corrected cross sections for polarized electron beams due to three families of quarks and squarks, working in the MS-bar scheme, extending our previous calculation of the unpolarized cross section with one-loop corrections due to the third family of quarks and squarks. In some cases we find rather large corrections to the tree-level cross sections. For example, for the case of right-handed polarized electrons and large tanβ the corrections can be of order 30%, allowing sensitivity to the squark mass parameters.

  9. Error Correcting Codes

    Indian Academy of Sciences (India)

    information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training ...

  10. Hyper-arousal decreases human visual thresholds.

    Directory of Open Access Journals (Sweden)

    Adam J Woods

    Full Text Available Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight individuals took part in two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0-2 °C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room-temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

  11. Magnetic monopoles near the black hole threshold

    International Nuclear Information System (INIS)

    Lue, A.; Weinberg, E.J.

    1999-01-01

    We present new analytic and numerical results for self-gravitating SU(2)-Higgs magnetic monopoles approaching the black hole threshold. Our investigation extends to large Higgs self-coupling, λ, a regime heretofore unexplored. When λ is small, the critical solution where a horizon first appears is extremal Reissner-Nordstroem outside the horizon but has a nonsingular interior. When λ is large, the critical solution is an extremal black hole with non-Abelian hair and a mass less than the extremal Reissner-Nordstroem value. The transition between these two regimes is reminiscent of a first-order phase transition. We analyze in detail the approach to these critical solutions as the Higgs expectation value is varied, and compare this analysis with the numerical results. copyright 1999 The American Physical Society

  12. Simulations of charge summing and threshold dispersion effects in Medipix3

    International Nuclear Information System (INIS)

    Pennicard, D.; Ballabriga, R.; Llopart, X.; Campbell, M.; Graafsma, H.

    2011-01-01

    A novel feature of the Medipix3 photon-counting pixel readout chip is inter-pixel communication. By summing together the signals from neighbouring pixels at a series of 'summing nodes', and assigning each hit to the node with the highest signal, the chip can compensate for charge-sharing effects. However, previous experimental tests have demonstrated that the node-to-node variation in the detector's response is very large. Using computer simulations, it is shown that this variation is due to threshold dispersion, which results in many hits being assigned to whichever summing node in the vicinity has the lowest threshold level. A reduction in threshold variation would attenuate but not solve this issue. A new charge summing and hit assignment process is proposed, where the signals in individual pixels are used to determine the hit location, and then signals from neighbouring pixels are summed to determine whether the total photon energy is above threshold. In simulation, this new mode accurately assigns each hit to the pixel with the highest pulse height without any losses or double counting. - Research highlights: → Medipix3 readout chip compensates charge sharing using inter-pixel communication. → In initial production run, the flat-field response is unexpectedly nonuniform. → This effect is reproduced in simulation, and is caused by threshold dispersion. → A new inter-pixel communication process is proposed. → Simulations demonstrate the new process should give much better uniformity.
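The proposed assignment rule, locate the hit at the individual pixel with the highest pulse height and only then sum the neighbourhood signals to test the total photon energy against the threshold, can be sketched as follows. This is hypothetical illustrative code, not the chip's actual logic:

```python
def assign_hit(charges, threshold):
    """Proposed hit assignment: find the pixel with the highest individual
    pulse height, then sum the 3x3 neighbourhood of charges around it to
    decide whether the reconstructed photon energy is above threshold.
    Returns the (row, col) of the assigned hit, or None if below threshold."""
    rows, cols = len(charges), len(charges[0])
    r, c = max(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda ij: charges[ij[0]][ij[1]])
    total = sum(charges[i][j]
                for i in range(max(0, r - 1), min(rows, r + 2))
                for j in range(max(0, c - 1), min(cols, c + 2)))
    return (r, c) if total >= threshold else None
```

Because the hit location is fixed by the per-pixel maximum before any summing, a node with an anomalously low threshold level can no longer attract hits from its neighbours, which is why this mode sidesteps the threshold-dispersion problem in simulation.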

  13. Novel threshold pressure sensors based on nonlinear dynamics of MEMS resonators

    Science.gov (United States)

    Hasan, Mohammad H.; Alsaleem, Fadi M.; Ouakad, Hassen M.

    2018-06-01

    Triggering an alarm in a car for low air pressure in the tire or tripping an HVAC compressor if the refrigerant pressure is lower than a threshold value are examples of applications where measuring the amount of pressure is not as important as determining whether the pressure has crossed a threshold value for an action to occur. Unfortunately, current technology still relies on analog pressure sensors to perform this functionality by adding a complex interface (extra circuitry, controllers, and/or decision units). In this paper, we demonstrate two new smart tunable-threshold pressure switch concepts that can reduce the complexity of a threshold pressure sensor. The first concept is based on the nonlinear subharmonic resonance of a straight double cantilever microbeam with a proof mass and the other concept is based on the snap-through bi-stability of a clamped-clamped MEMS shallow arch. In both designs, the sensor operation concept is simple. Any actuation performed at a certain pressure lower than a threshold value will activate a nonlinear dynamic behavior (subharmonic resonance or snap-through bi-stability) yielding a large output that would be interpreted as a logic value of ONE, or ON. Once the pressure exceeds the threshold value, the nonlinear response ceases to exist, yielding a small output that would be interpreted as a logic value of ZERO, or OFF. A lumped, single degree of freedom model for the double cantilever beam, validated using experimental data, and a continuous beam model for the arch beam are used to simulate the operation range of the proposed sensors by identifying the relationship between the excitation signal and the critical cut-off pressure.

  14. Sub-threshold Post Traumatic Stress Disorder in the WHO World Mental Health Surveys

    Science.gov (United States)

    McLaughlin, Katie A.; Koenen, Karestan C.; Friedman, Matthew J.; Ruscio, Ayelet Meron; Karam, Elie G.; Shahly, Victoria; Stein, Dan J.; Hill, Eric D.; Petukhova, Maria; Alonso, Jordi; Andrade, Laura Helena; Angermeyer, Matthias C.; Borges, Guilherme; de Girolamo, Giovanni; de Graaf, Ron; Demyttenaere, Koen; Florescu, Silvia E.; Mladenova, Maya; Posada-Villa, Jose; Scott, Kate M.; Takeshima, Tadashi; Kessler, Ronald C.

    2014-01-01

    Background Although only a minority of people exposed to a traumatic event (TE) develops PTSD, symptoms not meeting full PTSD criteria are common and often clinically significant. Individuals with these symptoms have sometimes been characterized as having sub-threshold PTSD, but no consensus exists on the optimal definition of this term. Data from a large cross-national epidemiological survey are used to provide a principled basis for such a definition. Methods The WHO World Mental Health (WMH) Surveys administered fully-structured psychiatric diagnostic interviews to community samples in 13 countries containing assessments of PTSD associated with randomly selected TEs. Focusing on the 23,936 respondents reporting lifetime TE exposure, associations of approximated DSM-5 PTSD symptom profiles with six outcomes (distress-impairment, suicidality, comorbid fear-distress disorders, PTSD symptom duration) were examined to investigate implications of different sub-threshold definitions. Results Although the highest levels of distress-impairment, suicidality, comorbidity, and symptom duration were consistently observed among the 3.0% of respondents with DSM-5 PTSD, the additional 3.6% of respondents meeting two or three of DSM-5 Criteria B-E also had significantly elevated scores for most outcomes. The proportion of cases with threshold versus sub-threshold PTSD varied depending on TE type, with threshold PTSD more common following interpersonal violence and sub-threshold PTSD more common following events happening to loved ones. Conclusions Sub-threshold DSM-5 PTSD is most usefully defined as meeting two or three of the DSM-5 Criteria B-E. Use of a consistent definition is critical to advance understanding of the prevalence, predictors, and clinical significance of sub-threshold PTSD. PMID:24842116

  15. Threshold resummation at N³LL accuracy and soft-virtual cross sections at N³LO

    Energy Technology Data Exchange (ETDEWEB)

    Catani, Stefano [INFN, Sezione di Firenze and Dipartimento di Fisica e Astronomia, Università di Firenze, I-50019 Sesto Fiorentino, Florence (Italy); Cieri, Leandro [Dipartimento di Fisica, Università di Roma “La Sapienza” and INFN, Sezione di Roma, I-00185 Rome (Italy); Florian, Daniel de [Departamento de Física, FCEYN, Universidad de Buenos Aires, (1428) Pabellón 1 Ciudad Universitaria, Capital Federal (Argentina); Ferrera, Giancarlo [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, I-20133 Milan (Italy); Grazzini, Massimiliano [Physik-Institut, Universität Zürich, CH-8057 Zürich (Switzerland)

    2014-11-15

    We consider QCD radiative corrections to the production of colorless high-mass systems in hadron collisions. We show that the recent computation of the soft-virtual corrections to Higgs boson production at N³LO [1] together with the universality structure of soft-gluon emission can be exploited to extract the general expression of the hard-virtual coefficient that contributes to threshold resummation at N³LL accuracy. The hard-virtual coefficient is directly related to the process-dependent virtual amplitude through a universal (process-independent) factorization formula that we explicitly evaluate up to three-loop order. As an application, we present the explicit expression of the soft-virtual N³LO corrections for the production of an arbitrary colorless system. In the case of the Drell–Yan process, we confirm the recent result of Ref. [2].

  16. Evaluation of supra-threshold hearing following an event of recreational acoustic exposure

    DEFF Research Database (Denmark)

    Smits, Bertrand; Holtegaard, Pernille; Jeong, Cheol-Ho

    2018-01-01

    Studies with small rodents have exhibited physiological evidence of noise-induced cochlear synaptopathy prior to outer-hair-cell loss following noise-induced large temporary threshold shifts (TTS). The auditory system may thus not fully recover after a TTS. If this noise-induced damage also occurs...

  17. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    International Nuclear Information System (INIS)

    King, Stephen F.; Zhang, Jue; Zhou, Shun

    2016-01-01

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  18. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    Energy Technology Data Exchange (ETDEWEB)

    King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)

    2016-12-06

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  19. Plutonium Assay in Soil at the BRC Threshold

    International Nuclear Information System (INIS)

    Miller, T.

    2003-01-01

    The Atomic Weapons Establishment (AWE) at Aldermaston has investigated the performance of low and high-resolution gamma-ray detectors for plutonium (Pu) assay in soil at the UK Below Regulatory Concern (BRC) threshold (0.4 Bq/g above the natural background activity level). The goal was a rapid and economical technique for sorting large volumes of lightly contaminated soils into above-BRC and BRC fractions. The strategy involved utilizing the relatively high yield 60 keV emission from Am-241 ingrowth (the Pu-241 daughter) and known isotopic ratios. This paper covers the determination of detector response factors for an Am-241 source positioned at various locations within a circular tray of soil. These factors were weighted, according to the relative volumes that they represent, in order to derive a uniform response factor and quantify the systematic error for non-uniform activity distributions. Detection limits and random errors were also derived from the counting data. The high-resolution detector was shown to have the best detection levels and lowest systematic and random errors. However, uncertainties for non-uniform distributions of contamination were relatively large. Hence, analyzing soils at the BRC threshold would only be feasible if contamination was well distributed throughout the soil sample being monitored. Fortunately, contaminated land at AWE is generally homogeneous and so the technique has wide applicability.

  20. Estimating economic thresholds for site-specific weed control using manual weed counts and sensor technology: an example based on three winter wheat trials.

    Science.gov (United States)

    Keller, Martina; Gutjahr, Christoph; Möhring, Jens; Weis, Martin; Sökefeld, Markus; Gerhards, Roland

    2014-02-01

    Precision experimental design uses the natural heterogeneity of agricultural fields and combines sensor technology with linear mixed models to estimate the effect of weeds, soil properties and herbicide on yield. These estimates can be used to derive economic thresholds. Three field trials are presented using the precision experimental design in winter wheat. Weed densities were determined by manual sampling and bi-spectral cameras; yield and soil properties were mapped. Galium aparine, other broad-leaved weeds and Alopecurus myosuroides reduced yield by 17.5, 1.2 and 12.4 kg ha⁻¹ plant⁻¹ m² in one trial. The determined thresholds for site-specific weed control with independently applied herbicides were 4, 48 and 12 plants m⁻², respectively. Spring drought reduced yield effects of weeds considerably in one trial, since water became yield-limiting. A negative herbicide effect on the crop was negligible, except in one trial, in which the herbicide mixture tended to reduce yield by 0.6 t ha⁻¹. Bi-spectral cameras for weed counting were of limited use and still need improvement. Nevertheless, large weed patches were correctly identified. The current paper presents a new approach to conducting field trials and deriving decision rules for weed control in farmers' fields. © 2013 Society of Chemical Industry.
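An economic threshold of this kind follows from equating the value of the avoided yield loss with the cost of control. The sketch below reproduces the Galium aparine figure of 4 plants per m² under assumed prices; the 200-per-tonne grain price and 14-per-hectare herbicide cost are illustrative placeholders, not values reported in the trials:

```python
def economic_threshold(loss_kg_ha_per_plant_m2, grain_price_per_t, control_cost_per_ha):
    """Weed density (plants/m^2) at which the monetary value of the yield
    saved by spraying just equals the herbicide cost per hectare; below
    this density, control does not pay."""
    # value of the yield lost to one weed plant per m^2, per hectare
    value_per_plant = loss_kg_ha_per_plant_m2 / 1000.0 * grain_price_per_t
    return control_cost_per_ha / value_per_plant

# With a yield loss of 17.5 kg/ha per plant/m^2 (G. aparine), an assumed
# grain price of 200 per tonne and an assumed control cost of 14 per ha:
# economic_threshold(17.5, 200, 14) -> 4.0 plants/m^2
```

The same formula applied to the smaller per-plant loss of the other broad-leaved weeds immediately yields a much higher threshold density, as the trials found.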

  1. Conceptions of nuclear threshold status

    International Nuclear Information System (INIS)

    Quester, G.H.

    1991-01-01

    This paper reviews some alternative definitions of nuclear threshold status. Each of them is important, and major analytical confusion would result if one sense of the term were mistaken for another. The motives for nations entering into such threshold status are a blend of civilian and military gains, and of national interests versus parochial or bureaucratic interests. A portion of the rationale for threshold status emerges inevitably from the pursuit of economic goals, and another portion is made more attractive by the drives of the domestic political process. Yet the impact on international security cannot be dismissed, especially where conflicts among the states remain real. Among the military or national security motives are basic deterrence, psychological warfare, war-fighting and, more generally, national prestige. In the end, as the threshold phenomenon is assayed for lessons concerning the role of nuclear weapons more generally in international relations and security, one might conclude that threshold status and outright proliferation converge to a degree in the motives of the states involved and in the advantages attained. As this paper has illustrated, nuclear threshold status is more subtle and more ambiguous than outright proliferation, and it takes considerable time to sort out the complexities. Yet the world has now had a substantial amount of time to deal with this ambiguous status, and this may tempt more states to exploit it.

  2. Efficient error correction for next-generation sequencing of viral amplicons.

    Science.gov (United States)

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
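The intuition behind a frequency-threshold error filter such as KEC or ET can be sketched in a few lines: k-mers supported by many reads are trusted, while rare k-mers are treated as likely sequencing errors. This is a toy illustration only; the published algorithms additionally calibrate for homopolymers, position in the read, and amplicon length:

```python
from collections import Counter

def rare_kmers(reads, k=4, min_count=2):
    """Count every k-mer across all reads and return the set of k-mers
    observed fewer than min_count times, i.e. the candidate sequencing
    errors under a simple frequency-threshold model."""
    counts = Counter(read[i:i + k]
                     for read in reads
                     for i in range(len(read) - k + 1))
    return {kmer for kmer, n in counts.items() if n < min_count}
```

A single-base error near the end of one read out of six produces a k-mer seen only once, which the filter flags while leaving all abundant k-mers untouched.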

  3. Quantum secret sharing based on quantum error-correcting codes

    International Nuclear Information System (INIS)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure for sharing classical or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of the classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into the QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels. (general)

  4. Legislating thresholds for drug trafficking: a policy development case study from New South Wales, Australia.

    Science.gov (United States)

    Hughes, Caitlin Elizabeth; Ritter, Alison; Cowdery, Nicholas

    2014-09-01

    Legal thresholds are used in many parts of the world to define the quantity of illicit drugs over which possession is deemed "trafficking" as opposed to "possession for personal use". There is limited knowledge about why or how such laws were developed. In this study we analyse the policy processes underpinning the introduction and expansion of the drug trafficking legal threshold system in New South Wales (NSW), Australia. A critical legal and historical analysis was undertaken sourcing data from legislation, Parliamentary Hansard debates, government inquiries, police reports and research. A timeline of policy developments was constructed from 1970 until 2013 outlining key steps including threshold introduction (1970), expansion (1985), and wholesale revision (1988). We then critically analysed the drivers of each step and the roles played by formal policy actors, public opinion, research/data and the drug trafficking problem. We find evidence that, while the thresholds were justified as a necessary tool for effective law enforcement against drug trafficking, their introduction largely preceded overt police calls for reform or actual increases in drug trafficking. Moreover, while the expansion from one to four thresholds had the intent of differentiating small from large scale traffickers, the quantities employed were based on government assumptions, which led to "manifest problems" and the revision in 1988 of over 100 different quantities. Despite the revisions, there has been no further formal review, and new quantities for "legal highs" continue to be added based on assumption and an uncertain evidence base. The development of legal thresholds for drug trafficking in NSW has been arbitrary and messy. That the arbitrariness persists from 1970 until the present day makes it hard to conclude the thresholds have been well designed. Our narrative provides a platform for future policy reform. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Surgical correction of postoperative astigmatism

    Directory of Open Access Journals (Sweden)

    Lindstrom Richard

    1990-01-01

    Full Text Available The photokeratoscope has increased our understanding of the aspheric nature of the cornea and of normal corneal topography. This has significantly affected the development of newer and more predictable models of surgical astigmatic correction. Relaxing incisions effectively flatten the steeper meridian an equivalent amount as they steepen the flatter meridian. The net change in spherical equivalent is, therefore, negligible. Poor predictability is the major limitation of relaxing incisions. Wedge resection can correct large degrees of postkeratoplasty astigmatism. Resection of 0.10 mm of tissue results in approximately 2 diopters of astigmatic correction. Prolonged postoperative rehabilitation and induced irregular astigmatism are limitations of the procedure. Transverse incisions flatten the steeper meridian an equivalent amount as they steepen the flatter meridian. Semiradial incisions result in two times the amount of flattening in the meridian of the incision compared to the meridian 90 degrees away. Combination of transverse incisions with semiradial incisions describes the trapezoidal astigmatic keratotomy. This procedure may correct from 5.5 to 11.0 diopters dependent upon the age of the patient. The use of the surgical keratometer is helpful in assessing a proper endpoint during surgical correction of astigmatism.

  6. Threshold Concepts in Finance: Student Perspectives

    Science.gov (United States)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-01-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by…

  7. Exploring light mediators with low-threshold direct detection experiments

    Energy Technology Data Exchange (ETDEWEB)

    Kahlhoefer, Felix [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); RWTH Aachen Univ. (Germany). Inst. for Theoretical Particle Physics and Cosmology; Kulkarni, Suchita [Oesterreichische Akademie der Wissenschaften, Vienna (Austria). Inst. fuer Hochenergiephysik; Wild, Sebastian [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2017-11-15

    We explore the potential of future cryogenic direct detection experiments to determine the properties of the mediator that communicates the interactions between dark matter and nuclei. Due to their low thresholds and large exposures, experiments like CRESST-III, SuperCDMS SNOLAB and EDELWEISS-III will have excellent capability to reconstruct mediator masses in the MeV range for a large class of models. Combining the information from several experiments further improves the parameter reconstruction, even when taking into account additional nuisance parameters related to background uncertainties and the dark matter velocity distribution. These observations may offer the intriguing possibility of studying dark matter self-interactions with direct detection experiments.

  8. Exploring light mediators with low-threshold direct detection experiments

    International Nuclear Information System (INIS)

    Kahlhoefer, Felix

    2017-11-01

    We explore the potential of future cryogenic direct detection experiments to determine the properties of the mediator that communicates the interactions between dark matter and nuclei. Due to their low thresholds and large exposures, experiments like CRESST-III, SuperCDMS SNOLAB and EDELWEISS-III will have excellent capability to reconstruct mediator masses in the MeV range for a large class of models. Combining the information from several experiments further improves the parameter reconstruction, even when taking into account additional nuisance parameters related to background uncertainties and the dark matter velocity distribution. These observations may offer the intriguing possibility of studying dark matter self-interactions with direct detection experiments.

  9. Thresholding magnetic resonance images of human brain

    Institute of Scientific and Technical Information of China (English)

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine low and high thresholds to segment out gray matter and white matter in MR images of different pulse sequences of the human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. Unsupervised fuzzy c-means clustering is employed to determine the threshold for obtaining the head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods can yield segmentation robust to noise and intensity inhomogeneity. Qualitatively, the proposed methods work well with real clinical data.
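The clustering step lends itself to a compact illustration. The sketch below is our own toy code, not the paper's implementation: it runs one-dimensional fuzzy c-means on raw intensities and places a threshold midway between the two cluster centers (the midpoint rule and all names are assumptions).

```python
def fuzzy_cmeans_1d(data, c=2, m=2.0, iters=100):
    """Plain 1-D fuzzy c-means with deterministic, evenly spaced
    initial centers; returns the sorted cluster centers."""
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * (i + 0.5) / c for i in range(c)]
    for _ in range(iters):
        memb = []
        for x in data:
            d = [abs(x - ck) + 1e-12 for ck in centers]   # avoid divide-by-zero
            memb.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                   for j in range(c)) for k in range(c)])
        centers = [sum(u[k] ** m * x for u, x in zip(memb, data)) /
                   sum(u[k] ** m for u in memb) for k in range(c)]
    return sorted(centers)

def fcm_threshold(data):
    """Midpoint between the two fuzzy cluster centers as a simple threshold."""
    c_low, c_high = fuzzy_cmeans_1d(data, c=2)
    return 0.5 * (c_low + c_high)
```

On a bimodal intensity sample such as `[10] * 50 + [200] * 50` the centers settle near the two modes and the threshold falls between them; the paper's method additionally restricts the computation to a brain region of interest.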

  10. Near threshold fatigue testing

    Science.gov (United States)

    Freeman, D. C.; Strum, M. J.

    1993-01-01

    Measurement of the near-threshold fatigue crack growth rate (FCGR) behavior provides a basis for the design and evaluation of components subjected to high cycle fatigue. Typically, the near-threshold fatigue regime describes crack growth rates below approximately 10^-5 mm/cycle (4 x 10^-7 inch/cycle). One such evaluation was recently performed for the binary alloy U-6Nb. The procedures developed for this evaluation are described in detail to provide a general test method for near-threshold FCGR testing. In particular, techniques for high-resolution measurements of crack length performed in-situ through a direct current, potential drop (DCPD) apparatus, and a method which eliminates crack closure effects through the use of loading cycles with constant maximum stress intensity are described.

  11. Solving large instances of the quadratic cost of partition problem on dense graphs by data correcting algorithms

    NARCIS (Netherlands)

    Goldengorin, Boris; Vink, Marius de

    1999-01-01

    The Data-Correcting Algorithm (DCA) corrects the data of a hard problem instance in such a way that we obtain an instance of a well solvable special case. For a given prescribed accuracy of the solution, the DCA uses a branch and bound scheme to make sure that the solution of the corrected instance

  12. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  13. Multilingual text induced spelling correction

    NARCIS (Netherlands)

    Reynaert, M.W.C.

    2004-01-01

    We present TISC, a multilingual, language-independent and context-sensitive spelling checking and correction system designed to facilitate the automatic removal of non-word spelling errors in large corpora. Its lexicon is derived from raw text corpora, without supervision, and contains word unigrams

  14. Genetic variation in threshold reaction norms for alternative reproductive tactics in male Atlantic salmon, Salmo salar.

    Science.gov (United States)

    Piché, Jacinthe; Hutchings, Jeffrey A; Blanchard, Wade

    2008-07-07

    Alternative reproductive tactics may be a product of adaptive phenotypic plasticity, such that discontinuous variation in life history depends on both the genotype and the environment. Phenotypes that fall below a genetically determined threshold adopt one tactic, while those exceeding the threshold adopt the alternative tactic. We report evidence of genetic variability in maturation thresholds for male Atlantic salmon (Salmo salar) that mature either as large (more than 1 kg) anadromous males or as small (10-150 g) parr. Using a common-garden experimental protocol, we find that the growth rate at which the sneaker parr phenotype is expressed differs among pure- and mixed-population crosses. Maturation thresholds of hybrids were intermediate to those of pure crosses, consistent with the hypothesis that the life-history switch points are heritable. Our work provides evidence, for a vertebrate, that thresholds for alternative reproductive tactics differ genetically among populations and can be modelled as discontinuous reaction norms for age and size at maturity.

  15. Large N Scalars

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2016-01-01

    We construct effective Lagrangians, and corresponding counting schemes, valid to describe the dynamics of the lowest lying large N stable massive composite state emerging in strongly coupled theories. The large N counting rules can now be employed when computing quantum corrections via an effective...

  16. An ultra-low-energy/frame multi-standard JPEG co-processor in 65nm CMOS with sub/near-threshold power supply.

    NARCIS (Netherlands)

    Pu, Yu; Pineda de Gyvez, J.; Corporaal, H.; Ha, Y.

    2009-01-01

    Many digital ICs can benefit from sub/near threshold operations that provide ultra-low-energy/operation for long battery lifetime. In addition, sub/near threshold operation largely mitigates the transient current hence lowering the ground bounce noise. This also helps to improve the performance of

  17. On the Physical Significance of Infra-red Corrections to Inflationary Observables

    CERN Document Server

    Bartolo, N; Pietroni, M; Riotto, Antonio; Seery, D

    2008-01-01

    Inflationary observables, like the power spectrum, computed at one- and higher-order loop level seem to be plagued by large infra-red corrections. In this short note, we point out that these large infra-red corrections appear only in quantities which are not directly observable. This is in agreement with general expectations concerning infra-red effects.

  18. Multiple testing corrections in quantitative proteomics: A useful but blunt tool.

    Science.gov (United States)

    Pascovici, Dana; Handler, David C L; Wu, Jemma X; Haynes, Paul A

    2016-09-01

    Multiple testing corrections are a useful tool for restricting the false discovery rate (FDR), but can be blunt in the context of low power, as we demonstrate by a series of simple simulations. Unfortunately, in proteomics experiments low power can be common, driven by proteomics-specific issues like small effects due to ratio compression, and few replicates due to high reagent cost, instrument time availability and other issues; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low power, medium scale situation, other methods such as effect size considerations or peptide-level calculations may be a more effective option, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to the standard multiple testing correction methods, which should be employed as a useful tool but not be regarded as a required rubber stamp. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
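The "blunt in low power" point can be reproduced in a few lines (a sketch of our own; the p-values are invented for illustration): the Benjamini-Hochberg step-up procedure at a conventional FDR of 0.05 rejects nothing when a handful of modest true effects sit among many tests.

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up: return indices of rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank              # largest rank passing the step-up bound
    return sorted(order[:k])

# Four modest "true" effects (nominally significant) among 96 null tests
true_p = [0.010, 0.020, 0.030, 0.040]
null_p = [0.05 + 0.0095 * i for i in range(96)]
rejected = benjamini_hochberg(true_p + null_p)

print(len(rejected))   # 0: nothing survives correction despite 4 nominal hits
```

With only four tests in play the same p-values would all survive; it is the large number of underpowered companions that drives the correction's threshold below reach.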

  19. At-Risk-of-Poverty Threshold

    Directory of Open Access Journals (Sweden)

    Táňa Dvornáková

    2012-06-01

    Full Text Available European Statistics on Income and Living Conditions (EU-SILC) is a survey on households’ living conditions. The main aim of the survey is to get long-term comparable data on the social and economic situation of households. Data collected in the survey are used mainly in connection with the evaluation of income poverty and determination of the at-risk-of-poverty rate. This article deals with the calculation of the at-risk-of-poverty threshold based on data from EU-SILC 2009. The main task is to compare two approaches to the computation of the at-risk-of-poverty threshold. The first approach is based on the calculation of the threshold for each country separately, while the second one is based on the calculation of the threshold for all states together. The introduction summarizes common attributes in the calculation of the at-risk-of-poverty threshold, such as disposable household income and equivalised household income. Further, different approaches to both calculations are introduced and advantages and disadvantages of these approaches are stated. Finally, the at-risk-of-poverty rate calculation is described and a comparison of the at-risk-of-poverty rates based on these two different approaches is made.
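The quantities involved can be made concrete with a small sketch of the standard EU-SILC conventions (the helper names are ours): equivalised income uses the modified OECD scale, and the at-risk-of-poverty threshold is conventionally set at 60% of the median equivalised disposable income.

```python
def equivalised_income(household_income, n_adults, n_children):
    """Modified OECD scale: 1.0 for the first adult, 0.5 for each further
    household member aged 14 or over, 0.3 for each child under 14."""
    scale = 1.0 + 0.5 * (n_adults - 1) + 0.3 * n_children
    return household_income / scale

def at_risk_of_poverty_threshold(equivalised_incomes):
    """EU convention: 60% of the median equivalised disposable income."""
    xs = sorted(equivalised_incomes)
    n = len(xs)
    median = xs[n // 2] if n % 2 else 0.5 * (xs[n // 2 - 1] + xs[n // 2])
    return 0.6 * median

# A couple with one child sharing 18 000 of disposable income
per_head = equivalised_income(18000, n_adults=2, n_children=1)  # 18000 / 1.8
```

In EU-SILC practice each person is assigned their household's equivalised income and the median is taken over persons with survey weights; the unweighted sketch above omits that step.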

  20. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    Science.gov (United States)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN approximately 10^-10 m/cycle).

  1. Ellipticity of near-threshold harmonics from stretched molecules.

    Science.gov (United States)

    Li, Weiyan; Dong, Fulong; Yu, Shujuan; Wang, Shang; Yang, Shiping; Chen, Yanjun

    2015-11-30

    We study the ellipticity of near-threshold harmonics (NTH) from aligned molecules with large internuclear distances numerically and analytically. The calculated harmonic spectra show a broad plateau for NTH which is several orders of magnitude higher than that for high-order harmonics. In particular, the NTH plateau shows high ellipticity at small and intermediate orientation angles. Our analyses reveal that the main contributions to the NTH plateau come from the transition of the electron from continuum states to these two lowest bound states of the system, which are strongly coupled together by the laser field. Besides continuum states, higher excited states also play a role in the NTH plateau, resulting in a large phase difference between parallel and perpendicular harmonics and accordingly high ellipticity of the NTH plateau. The NTH plateau with high intensity and large ellipticity provides a promising manner for generating strong elliptically-polarized extreme-ultraviolet (EUV) pulses.

  2. Quantum gravitational corrections for spinning particles

    International Nuclear Information System (INIS)

    Fröb, Markus B.

    2016-01-01

    We calculate the quantum corrections to the gauge-invariant gravitational potentials of spinning particles in flat space, induced by loops of both massive and massless matter fields of various types. While the corrections to the Newtonian potential induced by massless conformal matter for spinless particles are well known, and the same corrections due to massless minimally coupled scalars http://dx.doi.org/10.1088/0264-9381/27/24/245008, massless non-conformal scalars http://dx.doi.org/10.1103/PhysRevD.87.104027 and massive scalars, fermions and vector bosons http://dx.doi.org/10.1103/PhysRevD.91.064047 have been recently derived, spinning particles receive additional corrections which are the subject of the present work. We give both fully analytic results valid for all distances from the particle, and present numerical results as well as asymptotic expansions. At large distances from the particle, the corrections due to massive fields are exponentially suppressed in comparison to the corrections from massless fields, as one would expect. However, a surprising result of our analysis is that close to the particle itself, on distances comparable to the Compton wavelength of the massive fields running in the loops, these corrections can be enhanced with respect to the massless case.

  3. Segment and fit thresholding: a new method for image analysis applied to microarray and immunofluorescence data.

    Science.gov (United States)

    Ensink, Elliot; Sinha, Jessica; Sinha, Arkadeep; Tang, Huiyuan; Calderone, Heather M; Hostetter, Galen; Winter, Jordan; Cherba, David; Brand, Randall E; Allen, Peter J; Sempere, Lorenzo F; Haab, Brian B

    2015-10-06

    Experiments involving the high-throughput quantification of image data require algorithms for automation. A challenge in the development of such algorithms is to properly interpret signals over a broad range of image characteristics, without the need for manual adjustment of parameters. Here we present a new approach for locating signals in image data, called Segment and Fit Thresholding (SFT). The method assesses statistical characteristics of small segments of the image and determines the best-fit trends between the statistics. Based on the relationships, SFT identifies segments belonging to background regions; analyzes the background to determine optimal thresholds; and analyzes all segments to identify signal pixels. We optimized the initial settings for locating background and signal in antibody microarray and immunofluorescence data and found that SFT performed well over multiple, diverse image characteristics without readjustment of settings. When used for the automated analysis of multicolor, tissue-microarray images, SFT correctly found the overlap of markers with known subcellular localization, and it performed better than a fixed threshold and Otsu's method for selected images. SFT promises to advance the goal of full automation in image analysis.
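The segment-statistics idea can be caricatured in a few lines. The code below is a loose sketch of our own, not the published SFT algorithm: the "low-mean half of the blocks is background" rule and the mean-plus-k-sigma threshold are both assumptions standing in for SFT's fitted trends.

```python
import statistics

def segment_stats(image, seg=4):
    """Split a 2-D image (list of rows) into seg x seg blocks and
    return the (mean, standard deviation) of each block."""
    h, w = len(image), len(image[0])
    stats = []
    for r in range(0, h, seg):
        for c in range(0, w, seg):
            block = [image[i][j]
                     for i in range(r, min(r + seg, h))
                     for j in range(c, min(c + seg, w))]
            stats.append((sum(block) / len(block), statistics.pstdev(block)))
    return stats

def sft_threshold(image, seg=4, k=3.0):
    """Crude stand-in for SFT: treat the low-mean half of the blocks as
    background and set the signal threshold at bg_mean + k * bg_std."""
    stats = segment_stats(image, seg)
    means = sorted(m for m, _ in stats)
    cutoff = means[len(means) // 2]
    bg = [(m, s) for m, s in stats if m <= cutoff]
    bg_mean = sum(m for m, _ in bg) / len(bg)
    bg_std = sum(s for _, s in bg) / len(bg)
    return bg_mean + k * bg_std
```

Because the threshold is derived from block statistics rather than a single global histogram, it adapts to the image at hand, which is the property the paper exploits to avoid per-image parameter tuning.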

  4. Summary of DOE threshold limits efforts

    International Nuclear Information System (INIS)

    Wickham, L.E.; Smith, C.F.; Cohen, J.J.

    1987-01-01

    The Department of Energy (DOE) has been developing the concept of threshold quantities for use in determining which waste materials may be disposed of as nonradioactive waste in DOE sanitary landfills. Waste above a threshold level could be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. After extensive review of a draft threshold guidance document in 1985, a second draft threshold background document was produced in March 1986. The second draft included a preliminary cost-benefit analysis and quality assurance considerations. The review of the second draft has been completed. Final changes to be incorporated include an in-depth cost-benefit analysis of two example sites and recommendations of how to further pursue (i.e. employ) the concept of threshold quantities within the DOE. 3 references

  5. Importance of Lorentz structure in the parton model: Target mass corrections, transverse momentum dependence, positivity bounds

    International Nuclear Information System (INIS)

    D'Alesio, U.; Leader, E.; Murgia, F.

    2010-01-01

    We show that respecting the underlying Lorentz structure in the parton model has very strong consequences. Failure to insist on the correct Lorentz covariance is responsible for the existence of contradictory results in the literature for the polarized structure function g 2 (x), whereas with the correct imposition we are able to derive the Wandzura-Wilczek relation for g 2 (x) and the target-mass corrections for polarized deep inelastic scattering without recourse to the operator product expansion. We comment briefly on the problem of threshold behavior in the presence of target-mass corrections. Careful attention to the Lorentz structure has also profound implications for the structure of the transverse momentum dependent parton densities often used in parton model treatments of hadron production, allowing the k T dependence to be derived explicitly. It also leads to stronger positivity and Soffer-type bounds than usually utilized for the collinear densities.
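For context, the Wandzura-Wilczek relation referred to above expresses the twist-2 part of the polarized structure function g2 in terms of g1 (standard notation):

```latex
g_2^{\mathrm{WW}}(x) = -\, g_1(x) + \int_x^1 \frac{\mathrm{d}y}{y}\, g_1(y)
```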

  6. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    International Nuclear Information System (INIS)

    Calabrese, Edward J.

    2017-01-01

    This paper reveals that, nearly 25 years after Russell's dose-rate data were used to support the adoption of the linear-no-threshold (LNT) dose-response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  7. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, Edward J., E-mail: edwardc@schoolph.umass.edu [Department of Environmental Health Sciences, School of Public Health and Health Sciences, Morrill I, N344, University of Massachusetts, Amherst, MA 01003 (United States)

    2017-04-15

    This paper reveals that, nearly 25 years after Russell's dose-rate data were used to support the adoption of the linear-no-threshold (LNT) dose-response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  8. Quantum corrections to Drell-Yan production of Z bosons

    Energy Technology Data Exchange (ETDEWEB)

    Shcherbakova, Elena S.

    2011-08-15

    In this thesis, we present higher-order corrections to inclusive Z-boson hadroproduction via the Drell-Yan mechanism, h_1 + h_2 → Z + X, at large transverse momentum (q_T). Specifically, we include the QED, QCD and electroweak corrections of orders O(α_s α), O(α_s^2 α) and O(α_s α^2). We work in the framework of the Standard Model and adopt the MS scheme of renormalization and factorization. The cross section of Z-boson production has been precisely measured at various hadron-hadron colliders, including the Tevatron and the LHC. Our calculations will help to calibrate and monitor the luminosity and to estimate backgrounds of the hadron-hadron interactions more reliably. Besides the total cross section, we study the distributions in the transverse momentum and the rapidity (y) of the Z boson, appropriate for Tevatron and LHC experimental conditions. Investigating the relative sizes of the various types of corrections by means of the factor K = σ_tot / σ_Born, we find that the QCD corrections of order α_s^2 α are largest in general and that the electroweak corrections of order α_s α^2 play an important role at large values of q_T, while the QED corrections at the same order are small, of order 2% or below. We also compare our results with the existing literature. We correct a few misprints in the original calculation of the QCD corrections, and find the published electroweak correction to be incomplete. Our results for the QED corrections are new. (orig.)

  9. Comparisons between detection threshold and loudness perception for individual cochlear implant channels

    Science.gov (United States)

    Bierer, Julie Arenberg; Nye, Amberly D

    2014-01-01

    thresholds had the narrowest dynamic ranges (for σ ≥ 0.5) and steepest growth of loudness functions for all electrode configurations. Conclusions: Together with previous studies using focused stimulation, the results suggest that auditory responses to electrical stimuli at both threshold and suprathreshold current levels are not uniform across the electrode array of individual cochlear implant listeners. Specifically, the steeper growth of loudness and thus smaller dynamic ranges observed for high-threshold channels are consistent with a degraded electrode-neuron interface, which could stem from lower numbers of functioning auditory neurons or a relatively large distance between the neurons and electrodes. These findings may have potential implications for how stimulation levels are set during the clinical mapping procedure, particularly for speech-processing strategies that use focused electrical fields. PMID:25036146

  10. Temporal impulse and step responses of the human eye obtained psychophysically by means of a drift-correcting perturbation technique

    NARCIS (Netherlands)

    Roufs, J.A.J.; Blommaert, F.J.J.

    1981-01-01

    Internal impulse and step responses are derived from the thresholds of short probe flashes by means of a drift-correcting perturbation technique. The approach is based on only two postulated systems properties: quasi-linearity and peak detection. A special feature of the technique is its strong

  11. Parton distributions with threshold resummation

    CERN Document Server

    Bonvini, Marco; Rojo, Juan; Rottoli, Luca; Ubiali, Maria; Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Hartland, Nathan P.

    2015-01-01

    We construct a set of parton distribution functions (PDFs) in which fixed-order NLO and NNLO calculations are supplemented with soft-gluon (threshold) resummation up to NLL and NNLL accuracy respectively, suitable for use in conjunction with any QCD calculation in which threshold resummation is included at the level of partonic cross sections. These resummed PDF sets, based on the NNPDF3.0 analysis, are extracted from deep-inelastic scattering, Drell-Yan, and top quark pair production data, for which resummed calculations can be consistently used. We find that, close to threshold, the inclusion of resummed PDFs can partially compensate the enhancement in resummed matrix elements, leading to resummed hadronic cross-sections closer to the fixed-order calculation. On the other hand, far from threshold, resummed PDFs reduce to their fixed-order counterparts. Our results demonstrate the need for a consistent use of resummed PDFs in resummed calculations.

  12. Effect of threshold quantization in opportunistic splitting algorithm

    KAUST Repository

    Nam, Haewoon

    2011-12-01

    This paper discusses algorithms to find the optimal threshold and also investigates the impact of threshold quantization on the scheduling outage performance of the opportunistic splitting scheduling algorithm. Since this algorithm aims at finding the user with the highest channel quality within the minimal number of mini-slots by adjusting the threshold every mini-slot, optimizing the threshold is of paramount importance. Hence, in this paper we first discuss how to compute the optimal threshold along with two tight approximations for the optimal threshold. Closed-form expressions are provided for these approximations to simplify calculations. Then, we consider linear quantization of the threshold to account for the limited number of bits available for signaling messages in practical systems. Due to the limited granularity of the quantized threshold value, an irreducible scheduling outage floor is observed. The numerical results show that the two approximations offer lower scheduling outage probability floors compared to the conventional algorithm when the threshold is quantized. © 2006 IEEE.
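A toy simulation reproduces the qualitative effect described above. This is our own simplified variant, not the paper's algorithm: exponential channel gains, a halving threshold update, the quantization range, and all parameter values are assumptions. Quantizing the threshold to a coarse grid visibly raises the scheduling outage probability.

```python
import math
import random

def splitting_outage(n_users=10, n_minislots=5, bits=None,
                     trials=20000, seed=1):
    """Fraction of trials in which no mini-slot isolates exactly one user."""
    rng = random.Random(seed)
    x_max = 8.0                       # assumed range for threshold quantization
    outages = 0
    for _ in range(trials):
        gains = [rng.expovariate(1.0) for _ in range(n_users)]
        lo, hi = 0.0, x_max
        th = math.log(n_users)        # P(gain > th) = 1/n_users for Exp(1)
        success = False
        for _ in range(n_minislots):
            q = th
            if bits is not None:      # linear quantization of the threshold
                step = x_max / (2 ** bits)
                q = round(th / step) * step
            above = sum(1 for g in gains if g > q)
            if above == 1:
                success = True
                break
            if above == 0:            # threshold too high: split downwards
                hi, th = th, (lo + th) / 2
            else:                     # collision: split upwards
                lo, th = th, (th + hi) / 2
        if not success:
            outages += 1
    return outages / trials

# Coarse 1-bit quantization leaves a markedly higher outage probability
print(splitting_outage(bits=None), splitting_outage(bits=1))
```

With a coarse grid the binary-search updates keep snapping to the same few quantized levels, so later mini-slots add little new information; that is the mechanism behind the irreducible outage floor.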

  13. Updated LPI Thresholds for the Nike Laser*

    Science.gov (United States)

    Weaver, J. L.; Oh, J.; Afeyan, B.; Phillips, L.; Seely, J.; Kehne, D.; Brown, C.; Obenschain, S. P.; Serlin, V.; Schmitt, A. J.; Feldman, U.; Holland, G.; Manka, C.; Lehmberg, R. H.; McLean, E.

    2009-11-01

    Advanced implosion designs for direct drive inertial confinement fusion use high laser intensities (10^15-10^16 W/cm^2) to achieve gain (g>100) with a reduction in total laser energy. The Nike laser at NRL is an attractive choice due to its combination of short wavelength (248 nm), large bandwidth (1-2 THz), and beam smoothing by induced spatial incoherence, but the potential threat from laser-plasma instabilities (LPI) needs to be assessed. The 2008 LPI campaign at Nike yielded threshold intensities above 10^15 W/cm^2 for the two-plasmon instability, a value higher than reported for 351 nm glass lasers. The experiments used a planar geometry, solid polystyrene targets, and a subset of beams (E<200 J) with a reduced focal spot (d<125 μm). The 2009 campaign extended the shot parameters to higher laser energies (E<1 kJ) and larger spot sizes (d<300 μm). Spectrally-resolved and time-resolved measurements of x-rays and of emission near the ω0/2 and 3ω0/2 harmonics of the laser frequency show threshold intensities consistent with the 2008 results. *Work supported by DoE/NNSA

  14. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create a large bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled count rate (counts per second, CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
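
    A minimal numerical sketch of the backward-extrapolation idea (a toy reconstruction under our own assumptions, not the authors' implementation): impose a series of artificial non-paralyzing dead times on a measured timestamp stream, record the surviving count rate for each, and extrapolate the trend back to zero imposed dead time:

```python
import numpy as np

def impose_dead_time(timestamps, tau):
    """Keep only events arriving at least tau after the last accepted event
    (a non-paralyzing dead-time filter)."""
    accepted, last = [], -np.inf
    for t in timestamps:
        if t - last >= tau:
            accepted.append(t)
            last = t
    return np.array(accepted)

def backward_extrapolated_rate(timestamps, duration, taus):
    """Count rate under each artificially imposed dead time, extrapolated
    back to tau = 0 with a quadratic fit in tau."""
    rates = [impose_dead_time(timestamps, tau).size / duration for tau in taus]
    return np.polyval(np.polyfit(taus, rates, 2), 0.0)

# toy check: a Poisson source at 2000 cps seen through a 10 us detector dead time
rng = np.random.default_rng(0)
duration, rate_true = 50.0, 2000.0
arrivals = np.cumsum(rng.exponential(1.0 / rate_true, int(rate_true * duration * 1.2)))
arrivals = arrivals[arrivals < duration]
measured = impose_dead_time(arrivals, 10e-6)   # what the acquisition records
taus = np.linspace(20e-6, 80e-6, 7)            # artificially imposed dead times
est = backward_extrapolated_rate(measured, duration, taus)
```

    Because each imposed dead time exceeds the intrinsic one, the imposed filter dominates the losses, and the extrapolated rate at tau = 0 approaches the loss-free count rate rather than the measured one.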

  15. Risk thresholds for alcohol consumption

    DEFF Research Database (Denmark)

    Wood, Angela M; Kaptoge, Stephen; Butterworth, Adam S

    2018-01-01

    BACKGROUND: Low-risk limits recommended for alcohol consumption vary substantially across different national guidelines. To define thresholds associated with lowest risk for all-cause mortality and cardiovascular disease, we studied individual-participant data from 599 912 current drinkers without previous cardiovascular disease. METHODS: We did a combined analysis of individual-participant data from three large-scale data sources in 19 high-income countries (the Emerging Risk Factors Collaboration, EPIC-CVD, and the UK Biobank). We characterised dose-response associations and calculated hazard … ·4 million person-years of follow-up. For all-cause mortality, we recorded a positive and curvilinear association with the level of alcohol consumption, with the minimum mortality risk around or below 100 g per week. Alcohol consumption was roughly linearly associated with a higher risk of stroke (HR per 100 …

  16. Log canonical thresholds of smooth Fano threefolds

    International Nuclear Information System (INIS)

    Cheltsov, Ivan A; Shramov, Konstantin A

    2008-01-01

    The complex singularity exponent is a local invariant of a holomorphic function determined by the integrability of fractional powers of the function. The log canonical thresholds of effective Q-divisors on normal algebraic varieties are algebraic counterparts of complex singularity exponents. For a Fano variety, these invariants have global analogues. In the former case, it is the so-called α-invariant of Tian; in the latter case, it is the global log canonical threshold of the Fano variety, which is the infimum of log canonical thresholds of all effective Q-divisors numerically equivalent to the anticanonical divisor. An appendix to this paper contains a proof that the global log canonical threshold of a smooth Fano variety coincides with its α-invariant of Tian. The purpose of the paper is to compute the global log canonical thresholds of smooth Fano threefolds (altogether, there are 105 deformation families of such threefolds). The global log canonical thresholds are computed for every smooth threefold in 64 deformation families, and the global log canonical thresholds are computed for a general threefold in 20 deformation families. Some bounds for the global log canonical thresholds are computed for 14 deformation families. Appendix A is due to J.-P. Demailly.

  17. Detection Thresholds of Falling Snow From Satellite-Borne Active and Passive Sensors

    Science.gov (United States)

    Skofronick-Jackson, Gail M.; Johnson, Benjamin T.; Munchak, S. Joseph

    2013-01-01

    There is an increased interest in detecting and estimating the amount of falling snow reaching the Earth's surface in order to fully capture the global atmospheric water cycle. An initial step toward global spaceborne falling snow algorithms for current and future missions includes determining the thresholds of detection for various active and passive sensor channel configurations and falling snow events over land surfaces and lakes. In this paper, cloud resolving model simulations of lake effect and synoptic snow events were used to determine the minimum amount of snow (threshold) that could be detected by the following instruments: the W-band radar of CloudSat, the Global Precipitation Measurement (GPM) Dual-Frequency Precipitation Radar (DPR) Ku- and Ka-bands, and the GPM Microwave Imager. Eleven different nonspherical snowflake shapes were used in the analysis. Notable results include the following: 1) the W-band radar has detection thresholds more than an order of magnitude lower than the future GPM radars; 2) the cloud structure macrophysics influences the thresholds of detection for passive channels (e.g., snow events with larger ice water paths and thicker clouds are easier to detect); 3) the snowflake microphysics (mainly shape and density) plays a large role in the detection threshold for active and passive instruments; 4) with reasonable assumptions, the passive 166-GHz channel has detection threshold values comparable to those of the GPM DPR Ku- and Ka-band radars, with approximately 0.05 g/m^3 detected at the surface, or an approximately 0.5-1.0 mm/h melted snow rate. This paper provides information on the light snowfall events missed by the sensors and not captured in global estimates.

  18. Threshold concepts as barriers to understanding climate science

    Science.gov (United States)

    Walton, P.

    2013-12-01

    Whilst the scientific case for current climate change is compelling, the consequences of climate change have largely failed to permeate through to individuals. This lack of public awareness of the science and the potential impacts could be considered a key obstacle to action. The possible reasons for such limited success centre on the issue that climate change is a complex subject, and that a wide-ranging academic, political and social research literature on the science and wider implications of climate change has failed to communicate the key issues in an accessible way. These failures to adequately communicate both the science and the social science of climate change at a number of levels result in 'communication gaps' that act as fundamental barriers to both understanding and engagement with the issue. Meyer and Land (2003) suggest that learners can find certain ideas and concepts within a discipline difficult to understand, and these act as a barrier to deeper understanding of a subject. To move beyond these threshold concepts, they suggest that the expert needs to support the learner through a range of learning experiences that allows the development of learning strategies particular to the individual. Meyer and Land's research into these threshold concepts has been situated within Economics, but has been suggested to be more widely applicable, though there has been no attempt to either define or evaluate threshold concepts in climate change science. By identifying whether common threshold concepts exist specifically in climate science for cohorts of either formal or informal learners, scientists will be better able to support the public in understanding these concepts by changing how the knowledge is communicated to help overcome these barriers to learning. This paper reports on the findings of a study that examined the role of threshold concepts as barriers to understanding climate science in a UK University and considers its implications for wider…

  19. SEMICONDUCTOR DEVICES: Two-dimensional threshold voltage analytical model of DMG strained-silicon-on-insulator MOSFETs

    Science.gov (United States)

    Jin, Li; Hongxia, Liu; Bin, Li; Lei, Cao; Bo, Yuan

    2010-08-01

    For the first time, a simple and accurate two-dimensional analytical model for the surface potential variation along the channel in fully depleted dual-material gate strained-Si-on-insulator (DMG SSOI) MOSFETs is developed. We investigate the improved short channel effect (SCE), hot carrier effect (HCE), drain-induced barrier-lowering (DIBL) and carrier transport efficiency for the novel structure MOSFET. The analytical model takes into account the effects of different metal gate lengths, work functions, the drain bias and Ge mole fraction in the relaxed SiGe buffer. The surface potential in the channel region exhibits a step potential, which can suppress SCE, HCE and DIBL. Also, strained-Si and SOI structure can improve the carrier transport efficiency, with strained-Si being particularly effective. Further, the threshold voltage model correctly predicts a "rollup" in threshold voltage with decreasing channel length ratios or Ge mole fraction in the relaxed SiGe buffer. The validity of the two-dimensional analytical model is verified using numerical simulations.

  20. Quantum corrections for spinning particles in de Sitter

    Energy Technology Data Exchange (ETDEWEB)

    Fröb, Markus B. [Department of Mathematics, University of York, Heslington, York, YO10 5DD (United Kingdom); Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu [Departament de Física Quàntica i Astrofísica, Institut de Ciències del Cosmos (ICC), Universitat de Barcelona (UB), C/ Martí i Franquès 1, 08028 Barcelona (Spain)

    2017-04-01

    We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.

  1. The intensity threshold of colour vision in a passerine bird, the blue tit (Cyanistes caeruleus).

    Science.gov (United States)

    Gomez, Doris; Grégoire, Arnaud; Del Rey Granado, Maria; Bassoul, Marine; Degueldre, David; Perret, Philippe; Doutrelant, Claire

    2014-11-01

    Many vertebrates use colour vision for vital behaviour but their visual performance in dim light is largely unknown. The light intensity threshold of colour vision is known only for humans, horses and two parrot species. Here, we first explore this threshold in a passerine bird, the blue tit (Cyanistes caeruleus). Using classic conditioning of colour cues to food rewards in three individuals, we find a threshold ranging from 0.05 to 0.2 cd/m^2. Results are comparable to the two previously tested bird species. For tits, nest light conditions probably exceed that threshold, at least after sunrise. These results shed new light on the lively debate questioning the visual performance of cavity nesters and the evolutionary significance of egg and chick coloration. Although this needs further investigation, it is possible that blue tits exploit both colour and brightness cues when viewing their eggs, chicks or conspecifics in their nests. © 2014. Published by The Company of Biologists Ltd.

  2. Photoproduction of vector mesons off nucleons near threshold

    International Nuclear Information System (INIS)

    Friman, B.; Soyeur, M.

    1995-11-01

    We propose a simple meson-exchange model of the photoproduction of ρ- and ω-mesons off protons near threshold (Eγ ≲ 2 GeV). We show that this model provides a good description of the available data and implies a large ρ-nucleon interaction in the scalar channel (σ-exchange). We use this phenomenological interaction to estimate the leading contribution to the self-energy of ρ-mesons in matter. We discuss the implications of our calculation for experimental studies of the ρ-meson mass in nuclei. (orig.)

  3. The liability threshold model for censored twin data

    DEFF Research Database (Denmark)

    Holst, Klaus K.; Scheike, Thomas; Hjelmborg, Jacob B.

    2016-01-01

    …studies of diseases, as a way of quantifying such genetic contribution. The endpoint in these studies is typically defined as occurrence of a disease versus death without the disease. However, a large fraction of the subjects may still be alive at the time of follow-up without having experienced the disease, thus still being at risk. Ignoring this right-censoring can lead to severely biased estimates. The classical liability threshold model can be extended with inverse probability of censoring weighting of complete observations. This leads to a flexible way of modelling twin concordance and obtaining…

  4. Threshold Concepts and Information Literacy

    Science.gov (United States)

    Townsend, Lori; Brunetti, Korey; Hofer, Amy R.

    2011-01-01

    What do we teach when we teach information literacy in higher education? This paper describes a pedagogical approach to information literacy that helps instructors focus content around transformative learning thresholds. The threshold concept framework holds promise for librarians because it grounds the instructor in the big ideas and underlying…

  5. Predissociation of the D ¹Πᵤ state of H₂ near threshold

    Energy Technology Data Exchange (ETDEWEB)

    Borondo, F.; Eguiagaray, L.R.; Riera, A. (Universidad Autonoma de Madrid (Spain). Dept. de Quimica Fisica y Quimica Cuantica)

    1982-03-28

    A recent calculation of Komarov and Ostrovsky (J. Phys. B 12, 2485 (1979)) seemed to have settled a controversy regarding the different experimental values of the H(2s)/H(2p) sharing ratio in the predissociation of the D ¹Πᵤ state of H₂ near threshold. This calculation was based on a correct physical picture of the dissociation process, but the dynamical treatment rests on invalid assumptions. In the present work, a more rigorous quantum mechanical treatment is presented, and a branching ratio of 0.70 is obtained.

  6. Cool, warm, and heat-pain detection thresholds: testing methods and inferences about anatomic distribution of receptors.

    Science.gov (United States)

    Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C

    1993-08-01

    We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm²) than with a small (2.7 cm²) thermode. (ABSTRACT TRUNCATED AT 250 WORDS)
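
    The 4-2-1 stepping idea can be sketched as a simple staircase; the stopping rule, starting level and 25-step ladder below are our assumptions for illustration, not the exact clinical protocol:

```python
def stepping_threshold(felt, start=12, step=4, lo=1, hi=25, max_trials=30):
    """4-2-1 staircase on a 25-step stimulus ladder.
    `felt(level)` -> True if the stimulus at that level is perceived.
    Intensity moves down after 'felt' and up after 'not felt'; the step
    halves at each response reversal, and the run ends after a reversal
    occurring at step 1."""
    level, prev = start, None
    for _ in range(max_trials):
        r = felt(level)
        if prev is not None and r != prev:
            if step == 1:
                # threshold lies between the last two levels tested
                return level if r else level + 1
            step //= 2
        prev = r
        level = max(lo, min(hi, level + (-step if r else step)))
    return level  # did not converge (e.g. threshold off the ladder)

# a simulated subject who feels every stimulus at level 7 or above:
est = stepping_threshold(lambda lvl: lvl >= 7)  # est == 7
```

    For a deterministic observer the staircase homes in on the lowest perceived level in a handful of trials; real responses are noisy, which is why clinical protocols average several runs.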

  7. Iran: the next nuclear threshold state?

    OpenAIRE

    Maurer, Christopher L.

    2014-01-01

    Approved for public release; distribution is unlimited. A nuclear threshold state is one that could quickly operationalize its peaceful nuclear program into one capable of producing a nuclear weapon. This thesis compares two known threshold states, Japan and Brazil, with Iran to determine if the Islamic Republic could also be labeled a threshold state. Furthermore, it highlights the implications such a status could have on U.S. nonproliferation policy. Although Iran's nuclear program is mir…

  8. Photoionization dynamics of excited Ne, Ar, Kr and Xe atoms near threshold

    International Nuclear Information System (INIS)

    Sukhorukov, V L; Petrov, I D; Schäfer, M; Merkt, F; Ruf, M-W; Hotop, H

    2012-01-01

    A review of experimental and theoretical studies of the threshold photoionization of the heavier rare-gas atoms is presented, with particular emphasis on the autoionization resonances in the spectral region between the lowest two ionization thresholds ²P₃/₂ and ²P₁/₂, accessed from the ground or excited states. Observed trends in the positions, widths and shapes of the autoionization resonances depending on the atomic number, the principal quantum number n, the orbital angular momentum quantum number ℓ and further quantum numbers specifying the fine- and hyperfine-structure levels are summarized and discussed in the light of ab initio and multichannel quantum defect theory calculations. The dependence of the photoionization spectra on the initially prepared neutral state is also discussed, including results on the photoionization cross sections and photoelectron angular distributions of polarized excited states. The effects of various approximations in the theoretical treatment of photoionization in these systems are analysed. The very large diversity of observed phenomena and the numerous anomalies in spectral structures associated with the threshold ionization of the rare-gas atoms can be described in terms of a limited set of interactions and dynamical processes. Examples are provided illustrating characteristic aspects of the photoionization, and sets of recommended parameters describing the energy-level structure and photoionization dynamics of the rare-gas atoms are presented, which were extracted in a critical analysis of the very large body of experimental and theoretical data available on these systems in the literature. (topical review)

  9. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  10. Threshold Games and Cooperation on Multiplayer Graphs.

    Directory of Open Access Journals (Sweden)

    Kaare B Mikkelsen

    Full Text Available The study investigates the effect on cooperation in multiplayer games when the population from which all individuals are drawn is structured, i.e. when a given individual is only competing with a small subset of the entire population. To optimize the focus on multiplayer effects, a class of games was chosen for which the payoff depends nonlinearly on the number of cooperators; this ensures that the game cannot be represented as a sum of pair-wise interactions, and increases the likelihood of observing behaviour different from that seen in two-player games. The chosen class of games is named "threshold games", defined by a threshold, M > 0, which describes the minimal number of cooperators in a given match required for all the participants to receive a benefit. The model was studied primarily through numerical simulations of large populations of individuals, each with an interaction neighbourhood described by various classes of networks. When comparing the level of cooperation in a structured population to the mean-field model, we find that most types of structure lead to a decrease in cooperation. This is both interesting and novel, simply due to the generality and breadth of relevance of the model; it is likely that any model with a similar payoff structure exhibits related behaviour. More importantly, we find that the details of the behaviour depend to a large extent on the size of the immediate neighbourhoods of the individuals, as dictated by the network structure. In effect, the players behave as if they are part of a much smaller, fully mixed population, for which we suggest an expression.
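
    For intuition about the nonlinear payoff, the expected payoffs of a focal cooperator and a focal defector in one well-mixed group of n players can be written in closed form; the benefit and cost values below are hypothetical, chosen only to illustrate the threshold structure:

```python
import math

def threshold_game_payoffs(p, n, M, benefit=1.0, cost=0.3):
    """Expected payoff of a cooperator and a defector in a group of n
    players drawn from a population that cooperates with probability p.
    Everyone receives `benefit` if at least M group members cooperate;
    cooperators additionally pay `cost`."""
    def p_at_least(k, trials):
        # P(at least k cooperators among `trials` binomial draws)
        return sum(math.comb(trials, j) * p ** j * (1 - p) ** (trials - j)
                   for j in range(k, trials + 1))
    coop = benefit * p_at_least(M - 1, n - 1) - cost   # focal player cooperates
    defect = benefit * p_at_least(M, n - 1)            # focal player defects
    return coop, defect
```

    Near the threshold a single cooperator can be pivotal (with p = 0.5, n = 5, M = 3 the cooperator's expected payoff exceeds the defector's), which is exactly the multiplayer effect that a sum of pairwise interactions cannot capture.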

  11. Hydrometeorological threshold conditions for debris flow initiation in Norway

    Directory of Open Access Journals (Sweden)

    N. K. Meyer

    2012-10-01

    Full Text Available Debris flows, triggered by extreme precipitation events and rapid snow melt, cause considerable damage to the Norwegian infrastructure every year. To define intensity-duration (ID) thresholds for debris flow initiation, critical water supply conditions arising from intensive rainfall or snow melt were assessed on the basis of daily hydro-meteorological information for 502 documented debris flow events. Two threshold types were computed: one based on absolute ID relationships and one using ID relationships normalized by the local precipitation day normal (PDN). For each threshold type, minimum, medium and maximum threshold values were defined by fitting power law curves along the 10th, 50th and 90th percentiles of the data population. Depending on the duration of the event, the absolute threshold intensities needed for debris flow initiation vary between 15 and 107 mm day−1. Since the PDN changes locally, the normalized thresholds show spatial variations. Depending on location, duration and threshold level, the normalized threshold intensities vary between 6 and 250 mm day−1. The thresholds obtained were used for a frequency analysis of over-threshold events, giving an estimate of the exceedance probability and thus the potential for debris flow events in different parts of Norway. The absolute thresholds are most often exceeded along the west coast, while the normalized thresholds are most frequently exceeded on the west-facing slopes of the Norwegian mountain ranges. The minimum thresholds derived in this study are in the range of other thresholds obtained for regions with a climate comparable to Norway's. Statistics reveal that the normalized threshold is more reliable than the absolute threshold, as the former shows no spatial clustering of debris flows related to water supply events captured by the threshold.
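
    Fitting a power-law ID threshold along a percentile can be sketched as follows; binning by log-duration and fitting a straight line in log-log space is our simple stand-in for the authors' procedure, and the synthetic data are purely illustrative:

```python
import numpy as np

def fit_id_threshold(durations, intensities, q):
    """Fit I = a * D**b along the q-th percentile of intensity:
    bin events by log-duration, take the percentile in each bin,
    and fit a straight line in log-log space."""
    logD, logI = np.log10(durations), np.log10(intensities)
    edges = np.linspace(logD.min(), logD.max() + 1e-9, 7)
    which = np.digitize(logD, edges) - 1
    centers, levels = [], []
    for k in range(len(edges) - 1):
        in_bin = logI[which == k]
        if in_bin.size >= 5:  # skip sparse bins
            centers.append(0.5 * (edges[k] + edges[k + 1]))
            levels.append(np.percentile(in_bin, q))
    b, log_a = np.polyfit(centers, levels, 1)
    return 10.0 ** log_a, b

# synthetic events following I = 50 * D**-0.6 with lognormal scatter
rng = np.random.default_rng(1)
D = 10.0 ** rng.uniform(0.0, 1.5, 600)
I = 50.0 * D ** -0.6 * 10.0 ** rng.normal(0.0, 0.1, 600)
a, b = fit_id_threshold(D, I, q=50)  # roughly recovers a ~ 50, b ~ -0.6
```

    Re-running with q=10 and q=90 yields the minimum and maximum threshold curves; with scatter-free data all three collapse onto the generating power law.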

  12. 11 CFR 9036.1 - Threshold submission.

    Science.gov (United States)

    2010-01-01

    ... credit or debit card, including one made over the Internet, the candidate shall provide sufficient... section shall not count toward the threshold amount. (c) Threshold certification by Commission. (1) After...

  13. Influence of the angular scattering of electrons on the runaway threshold in air

    Science.gov (United States)

    Chanrion, O.; Bonaventura, Z.; Bourdon, A.; Neubert, T.

    2016-04-01

    The runaway electron mechanism is of great importance for understanding the generation of x- and gamma rays in atmospheric discharges. In 1991, terrestrial gamma-ray flashes (TGFs) were discovered by the Compton Gamma-Ray Observatory. These emissions are bremsstrahlung from high energy electrons that run away in electric fields associated with thunderstorms. In this paper, we discuss the definition of the runaway threshold, with particular interest in the influence of angular scattering for electron energies close to the threshold. In order to understand the runaway mechanism, we compare the outcomes of different Fokker-Planck and Monte Carlo models with increasing complexity in the description of the scattering. The results show that including the stochastic nature of collisions smooths the probability to run away around the threshold. Furthermore, we observe that a significant number of electrons diffuse out of the runaway regime when we take into account the diffusion in angle due to scattering. These results suggest using a runaway threshold energy based on the Fokker-Planck model assuming angular equilibrium that is 1.6 to 1.8 times higher than the one proposed by [1, 2], depending on the magnitude of the ambient electric field. The threshold is also found to be 5 to 26 times higher than the one assuming forward scattering. We give a fitted formula for the threshold field valid over a large range of electric fields. Furthermore, we show that the assumption of forward scattering is not valid below 1 MeV, where the runaway threshold usually is defined. These results are important for the thermal runaway and runaway electron avalanche discharge mechanisms suggested to participate in TGF generation.

  14. Applications of molecules as high-resolution, high-sensitivity threshold electron detectors

    International Nuclear Information System (INIS)

    Chutjian, A.

    1991-01-01

    The goal of the work under the contract entitled "Applications of Molecules as High-Resolution, High-Sensitivity Threshold Electron Detectors" (DoE IAA No. DE-AI01-83ER13093 Mod. A006) was to explore the electron attachment properties of a variety of molecules at electron energies not accessible by other experimental techniques. As a result of this work, not only was a large body of basic data measured on attachment cross sections and rate constants, but extensive theoretical calculations were also carried out to verify the underlying phenomenon of s-wave attachment. Important outgrowths of this work were also realized in other areas of research. The basic data have applications in fields such as combustion, soot reduction, rocket-exhaust modification, threshold photoelectron spectroscopy, and trace species detection.

  15. Assessing the Electrode-Neuron Interface with the Electrically Evoked Compound Action Potential, Electrode Position, and Behavioral Thresholds.

    Science.gov (United States)

    DeVries, Lindsay; Scheperle, Rachel; Bierer, Julie Arenberg

    2016-06-01

    Variability in speech perception scores among cochlear implant listeners may largely reflect the variable efficacy of implant electrodes to convey stimulus information to the auditory nerve. In the present study, three metrics were applied to assess the quality of the electrode-neuron interface of individual cochlear implant channels: the electrically evoked compound action potential (ECAP), the estimation of electrode position using computerized tomography (CT), and behavioral thresholds using focused stimulation. The primary motivation of this approach is to evaluate the ECAP as a site-specific measure of the electrode-neuron interface in the context of two peripheral factors that likely contribute to degraded perception: large electrode-to-modiolus distance and reduced neural density. Ten unilaterally implanted adults with Advanced Bionics HiRes90k devices participated. ECAPs were elicited with monopolar stimulation within a forward-masking paradigm to construct channel interaction functions (CIF), behavioral thresholds were obtained with quadrupolar (sQP) stimulation, and data from imaging provided estimates of electrode-to-modiolus distance and scalar location (scala tympani (ST), intermediate, or scala vestibuli (SV)) for each electrode. The width of the ECAP CIF was positively correlated with electrode-to-modiolus distance; both of these measures were also influenced by scalar position. The ECAP peak amplitude was negatively correlated with behavioral thresholds. Moreover, subjects with low behavioral thresholds and large ECAP amplitudes, averaged across electrodes, tended to have higher speech perception scores. These results suggest a potential clinical role for the ECAP in the objective assessment of individual cochlear implant channels, with the potential to improve speech perception outcomes.

  16. The large density electron beam-plasma Buneman instability

    International Nuclear Information System (INIS)

    Mantei, T.D.; Doveil, F.; Gresillon, D.

    1976-01-01

    The threshold conditions and growth rate of the Buneman (electron beam-stationary ion) instability are calculated with kinetic theory, including a stationary electron population. A criterion on the sign of the wave energy is used to separate the Buneman hydrodynamic instability from the ion-acoustic kinetic instability. The stationary electron population raises the instability threshold and, for large beam velocities, yields a maximum growth rate oblique to the beam. (author)

  17. Thermotactile perception thresholds measurement conditions.

    Science.gov (United States)

    Maeda, Setsuo; Sakakibara, Hisataka

    2002-10-01

    The purpose of this paper is to investigate the effects of posture, push force and rate of temperature change on thermotactile thresholds and to clarify suitable measuring conditions for Japanese people. Thermotactile (warm and cold) thresholds on the right middle finger were measured with an HVLab thermal aesthesiometer. Subjects were eight healthy male Japanese students. The effects of posture in measurement were examined in the posture of a straight hand and forearm placed on a support, the same posture without a support, and the fingers and hand flexed at the wrist with the elbow placed on a desk. The finger push force applied to the applicator of the thermal aesthesiometer was controlled at 0.5, 1.0, 2.0 and 3.0 N. The applicator temperature was changed at rates of 0.5, 1.0, 1.5, 2.0 and 2.5 degrees C/s. After each measurement, subjects were asked about comfort under the measuring conditions. Three series of experiments were conducted on different days to evaluate repeatability. Repeated-measures ANOVA showed that warm thresholds were affected by the push force and the rate of temperature change, and that cold thresholds were influenced by posture and push force. The comfort assessment indicated that the measurement posture with the straight hand and forearm laid on a support was the most comfortable for the subjects. Relatively high repeatability was obtained under measurement conditions of a 1 degree C/s temperature change rate and a 0.5 N push force. Measurement posture, push force and rate of temperature change can all affect the thermal threshold. Judging from the repeatability, a push force of 0.5 N and a temperature change rate of 1.0 degrees C/s, in the posture with the straight hand and forearm laid on a support, are recommended for warm and cold threshold measurements.

  18. DOE approach to threshold quantities

    International Nuclear Information System (INIS)

    Wickham, L.E.; Kluk, A.F.; Department of Energy, Washington, DC)

    1985-01-01

    The Department of Energy (DOE) is developing the concept of threshold quantities for use in determining which waste materials must be handled as radioactive waste and which may be disposed of as nonradioactive waste at its sites. Waste above this concentration level would be managed as radioactive or mixed waste (if hazardous chemicals are present); waste below this level would be handled as sanitary waste. Ideally, the threshold must be set high enough to significantly reduce the amount of waste requiring special handling. It must also be low enough so that waste at the threshold quantity poses a very small health risk and multiple exposures to such waste would still constitute a small health risk. It should also be practical to segregate waste above or below the threshold quantity using available instrumentation. Guidance is being prepared to aid DOE sites in establishing threshold quantity values based on pathways analysis using site-specific parameters (waste stream characteristics, maximum exposed individual, population considerations, and local conditions such as rainfall, etc.). A guidance dose of 0.001 to 1.0 mSv/y (0.1 to 100 mrem/y) was recommended, with 0.3 mSv/y (30 mrem/y) selected as the guidance dose upon which to base calculations. Several tasks were identified, beginning with the selection of a suitable pathway model for relating dose to the concentration of radioactivity in the waste. Threshold concentrations corresponding to the guidance dose were determined for waste disposal sites at a selected humid and arid site. Finally, cost-benefit considerations at the example sites were addressed. The results of the various tasks are summarized, and the relationship of this effort to related developments at other agencies is discussed.
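
    The pathway-based calculation above amounts to a simple division: the threshold concentration is the guidance dose divided by the dose rate per unit waste concentration produced by the site-specific pathway model. A minimal sketch; the guidance dose is from the abstract, but the dose-factor value is an invented placeholder, not a figure from the DOE guidance.

```python
# Illustrative sketch of the pathway-based threshold calculation.
# The dose factor (dose rate per unit waste concentration, obtained from a
# site-specific pathway model) is a made-up value for demonstration only.

GUIDANCE_DOSE_MSV_PER_Y = 0.3  # 30 mrem/y, the selected guidance dose

def threshold_concentration(dose_factor_msv_per_bq_g: float) -> float:
    """Waste concentration (Bq/g) at which the pathway model predicts
    exactly the guidance dose."""
    return GUIDANCE_DOSE_MSV_PER_Y / dose_factor_msv_per_bq_g
```

    For example, a hypothetical pathway dose factor of 0.006 mSv/y per Bq/g would place the threshold near 50 Bq/g; a more conservative (larger) dose factor lowers the threshold proportionally.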

  19. Tapping in synchrony with a perturbed metronome: the phase correction response to small and large phase shifts as a function of tempo.

    Science.gov (United States)

    Repp, Bruno H

    2011-01-01

    When tapping is paced by an auditory sequence containing small phase shift (PS) perturbations, the phase correction response (PCR) of the tap following a PS increases with the baseline interonset interval (IOI), leading eventually to overcorrection (B. H. Repp, 2008). Experiment 1 shows that this holds even for fixed-size PSs that become imperceptible as the IOI increases (here, from 400 to 1200 ms). Earlier research has also shown (but only for IOI=500 ms) that the PCR is proportionally smaller for large than for small PSs (B. H. Repp, 2002a, 2002b). Experiment 2 introduced large PSs and found smaller PCRs than in Experiment 1, at all of the same IOIs. In Experiments 3A and 3B, the author investigated whether the change in slope of the sigmoid function relating PCR and PS magnitudes occurs at a fixed absolute or relative PS magnitude across different IOIs (600, 1000, 1400 ms). The results suggest no clear answer; the exact shape of the function may depend on the range of PSs used in an experiment. Experiment 4 examined the PCR in the IOI range from 1000 to 2000 ms and found overcorrection throughout, but with the PCR increasing much more gradually than in Experiment 1. These results provide important new information about the phase correction process and pose challenges for models of sensorimotor synchronization, which presently cannot explain nonlinear PCR functions and overcorrection. Copyright © Taylor & Francis Group, LLC
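
    The phase correction process studied above is commonly described by a proportional error-correction model. A minimal sketch, assuming the standard linear phase-correction model of sensorimotor synchronization (not Repp's own analysis code): the correction gain alpha is a free parameter, with alpha = 1 giving full correction on the next tap and alpha > 1 reproducing the overcorrection reported at long interonset intervals.

```python
# Linear phase-correction model (an assumed textbook model, not the paper's
# code). Each tap corrects a fraction alpha of the preceding asynchrony; a
# phase-shifted metronome onset perturbs the asynchrony at one tap.

def simulate_asynchronies(n_taps, alpha, shift_at=5, ps=50.0):
    """Tap-metronome asynchronies (ms) around a single phase shift of -ps."""
    asyn = [0.0]
    for n in range(1, n_taps):
        corrected = (1.0 - alpha) * asyn[-1]  # proportional phase correction
        if n == shift_at:
            corrected -= ps  # the shifted onset perturbs this tap's asynchrony
        asyn.append(corrected)
    return asyn
```

    With alpha = 1 the perturbation is erased in one tap; with alpha > 1 the next asynchrony overshoots past zero, which is the overcorrection pattern, though this linear model cannot capture the nonlinear PCR functions the experiments reveal.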

  20. Coulomb corrections for interferometry analysis of expanding hadron systems

    Energy Technology Data Exchange (ETDEWEB)

    Sinyukov, Yu.M.; Lednicky, R.; Pluta, J.; Erazmus, B. [Centre National de la Recherche Scientifique, 44 - Nantes (France). Lab. de Physique Subatomique et des Technologies Associees; Akkelin, S.V. [ITP, Kiev (Ukraine)

    1997-09-01

    The problem of the Coulomb corrections to the two-boson correlation functions for the systems formed in ultra-relativistic heavy ion collisions is considered for large effective system volumes. The modification of the standard zero-distance correction (the so-called Gamow or Coulomb factor) has been proposed for such systems. For the π⁺π⁺ and K⁺K⁺ correlation functions the analytical calculations of the Coulomb correction are compared with the exact numerical results. (author). 20 refs.
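
    The standard zero-distance (Gamow) correction mentioned above has a closed form for a like-sign pair. A hedged sketch, assuming the textbook Gamow factor and a two-particle Bohr radius of 387.5 fm for a π⁺π⁺ pair; both are assumed standard values, not numbers taken from the paper.

```python
import math

# Textbook zero-distance Coulomb ("Gamow") factor for a repulsive pair.
# The default Bohr radius (387.5 fm, pi+pi+) is an assumed standard value.

def gamow_factor(k_inv_fm: float, bohr_radius_fm: float = 387.5) -> float:
    """Suppression factor for a like-sign pair at relative momentum k (1/fm)."""
    eta = 1.0 / (k_inv_fm * bohr_radius_fm)  # Sommerfeld parameter
    x = 2.0 * math.pi * eta
    return x / math.expm1(x)  # 2*pi*eta / (exp(2*pi*eta) - 1), always < 1
```

    Repulsion suppresses the correlation most strongly at small relative momentum; the paper's point is that for large, expanding sources this point-like factor is inadequate and must be modified.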

  1. Threshold Based Opportunistic Scheduling of Secondary Users in Underlay Cognitive Radio Networks

    KAUST Repository

    Song, Yao

    2011-12-01

    In underlay cognitive radio networks, secondary users can share the spectrum with primary users as long as the interference caused by the secondary users to primary users is below a certain predetermined threshold. It is reasonable to assume that there is always a large pool of secondary users trying to access the channel, which can be occupied by only one secondary user at a given time. As a result, a multi-user scheduling problem arises among the secondary users. In this thesis, by manipulating basic schemes based on selective multi-user diversity, normalized thresholding, transmission power control, and opportunistic round robin, we propose and analyze eight scheduling schemes of secondary users in an underlay cognitive radio set-up. The system performance of these schemes is quantified by using various performance metrics such as the average system capacity, normalized average feedback load, scheduling outage probability, and system fairness of access. In our proposed schemes, the best user out of all the secondary users in the system is picked to transmit at each given time slot in order to maximize the average system capacity. Two thresholds are used in the two rounds of the selection process to determine the best user. The first threshold is imposed by the power constraint from the primary user. The second threshold, which can be freely adjusted, is introduced to reduce the feedback load. The overall system performance is therefore dependent on the choice of these two thresholds and the number of users in the system, given the channel conditions for all the users. In this thesis, by deriving analytical formulas and presenting numerical examples, we try to provide insights into the relationship between the performance metrics and the involved parameters, including the two selection thresholds and the number of active users in the system, in an effort to maximize the average system capacity as well as satisfy the requirements of scheduling outage probability and
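
    The two-round, two-threshold selection described above can be sketched as follows. The function and parameter names, the channel model, and the values are illustrative assumptions; the thesis treats the thresholds analytically rather than in code.

```python
# Hedged sketch of two-round secondary-user selection in an underlay set-up.
# Round 1 enforces the primary user's interference constraint; round 2 uses
# a tunable feedback threshold so that only strong users report back.

def select_best_user(channel_gains, interference_to_primary,
                     interference_th, feedback_th):
    """Return the scheduled secondary user's index, or None (scheduling outage)."""
    # Round 1: users whose interference to the primary is acceptable
    eligible = [i for i, h in enumerate(interference_to_primary)
                if h <= interference_th]
    # Round 2: of those, only users above the feedback threshold report back
    reporting = [i for i in eligible if channel_gains[i] >= feedback_th]
    if not reporting:
        return None  # scheduling outage: no user cleared both thresholds
    return max(reporting, key=lambda i: channel_gains[i])
```

    Raising the feedback threshold cuts feedback load but increases the chance that no user reports, which is the capacity/outage trade-off the thesis quantifies.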

  2. Doubler system quench detection threshold

    International Nuclear Information System (INIS)

    Kuepke, K.; Kuchnir, M.; Martin, P.

    1983-01-01

    The experimental study leading to the determination of the sensitivity needed for protecting the Fermilab Doubler from damage during quenches is presented. The quench voltage thresholds involved were obtained from measurements on Doubler cable of resistance versus temperature and voltage versus time during quenches at several currents, and from data collected during operation of the Doubler Quench Protection System as implemented in the B-12 string of 20 magnets. At 4 kA, a quench voltage threshold in excess of 5.0 V will limit the peak Doubler cable temperature to 452 K for quenches originating in the magnet coils, whereas a threshold of 0.5 V is required for quenches originating outside of coils

  3. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements.

    Science.gov (United States)

    Heil, Peter; Matysiak, Artur; Neubauer, Heinrich

    2017-09-01

    Thresholds for detecting sounds in quiet decrease with increasing sound duration in every species studied. The neural mechanisms underlying this trade-off, often referred to as temporal integration, are not fully understood. Here, we probe the human auditory system with a large set of tone stimuli differing in duration, shape of the temporal amplitude envelope, duration of silent gaps between bursts, and frequency. Duration was varied by varying the plateau duration of plateau-burst (PB) stimuli, the duration of the onsets and offsets of onset-offset (OO) stimuli, and the number of identical bursts of multiple-burst (MB) stimuli. Absolute thresholds for a large number of ears (>230) were measured using a 3-interval-3-alternative forced choice (3I-3AFC) procedure. Thresholds decreased with increasing sound duration in a manner that depended on the temporal envelope. Most commonly, thresholds for MB stimuli were highest followed by thresholds for OO and PB stimuli of corresponding durations. Differences in the thresholds for MB and OO stimuli and in the thresholds for MB and PB stimuli, however, varied widely across ears, were negative in some ears, and were tightly correlated. We show that the variation and correlation of MB-OO and MB-PB threshold differences are linked to threshold microstructure, which affects the relative detectability of the sidebands of the MB stimuli and affects estimates of the bandwidth of auditory filters. We also found that thresholds for MB stimuli increased with increasing duration of the silent gaps between bursts. We propose a new model and show that it accurately accounts for our results and does so considerably better than a leaky-integrator-of-intensity model and a probabilistic model proposed by others. Our model is based on the assumption that sensory events are generated by a Poisson point process with a low rate in the absence of stimulation and higher, time-varying rates in the presence of stimulation. A subject in a 3I-3AFC
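
    The model's central ingredient can be illustrated directly: for a Poisson point process, the probability of observing at least one sensory event is one minus the probability of none, so longer stimuli, which accumulate a larger expected event count, are detectable at lower intensities. This reproduces the threshold-duration trade-off qualitatively. The rates used below are illustrative, not the paper's fitted values.

```python
import math

# For a Poisson point process with expected count rate*duration,
# P(at least one event) = 1 - exp(-rate*duration).

def p_at_least_one_event(rate_hz: float, duration_s: float) -> float:
    """Probability of one or more Poisson events in the observation window."""
    return -math.expm1(-rate_hz * duration_s)  # numerically stable 1 - exp(-x)
```

    Doubling the duration at a fixed event rate raises the detection probability, so a criterion level of performance is reached at a lower stimulation rate (lower intensity) for longer sounds.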

  4. Ecosystem impacts of hypoxia: thresholds of hypoxia and pathways to recovery

    International Nuclear Information System (INIS)

    Steckbauer, A; Duarte, C M; Vaquer-Sunyer, R; Carstensen, J; Conley, D J

    2011-01-01

    Coastal hypoxia is increasing in the global coastal zone, where it is recognized as a major threat to biota. Managerial efforts to prevent hypoxia and achieve recovery of ecosystems already affected by hypoxia are largely based on nutrient reduction plans. However, these managerial efforts need to be informed by predictions on the thresholds of hypoxia (i.e. the oxygen levels required to conserve biodiversity) as well as the timescales for the recovery of ecosystems already affected by hypoxia. The thresholds for hypoxia in coastal ecosystems are higher than previously thought and are not static, but regulated by local and global processes, being particularly sensitive to warming. The examination of recovery processes in a number of coastal areas managed for reducing nutrient inputs and, thus, hypoxia (Northern Adriatic; Black Sea; Baltic Sea; Delaware Bay; and Danish Coastal Areas) reveals that recovery timescales following the return to normal oxygen conditions are much longer than those of loss following the onset of hypoxia, and typically involve decadal timescales. The extended lag time for ecosystem recovery from hypoxia results in non-linear pathways of recovery due to hysteresis and the shift in baselines, affecting the oxygen thresholds for hypoxia through time.

  5. Reaction thresholds in doubly special relativity

    International Nuclear Information System (INIS)

    Heyman, Daniel; Major, Seth; Hinteleitner, Franz

    2004-01-01

    Two theories of special relativity with an additional invariant scale, 'doubly special relativity', are tested with calculations of particle process kinematics. Using the Judes-Visser modified conservation laws, thresholds are studied in both theories. In contrast with some linear approximations, which allow for particle processes forbidden in special relativity, both the Amelino-Camelia and Magueijo-Smolin frameworks allow no additional processes. To first order, the Amelino-Camelia framework thresholds are lowered and the Magueijo-Smolin framework thresholds may be raised or lowered

  6. Development of a landslide EWS based on rainfall thresholds for Tuscany Region, Italy

    Science.gov (United States)

    Rosi, Ascanio; Segoni, Samuele; Battistini, Alessandro; Rossi, Guglielmo; Catani, Filippo; Casagli, Nicola

    2017-04-01

    We present the set-up of a landslide EWS based on rainfall thresholds for the Tuscany region (central Italy), which shows a heterogeneous distribution of reliefs and precipitation. The work started with the definition of a single set of thresholds for the whole region, but this proved unsuitable for EWS purposes because of the heterogeneity of the Tuscan territory and the non-repeatability of the analyses, which were affected by a high degree of subjectivity. To overcome these problems, we implemented a software tool capable of objectively defining the rainfall thresholds, since the main issues with such thresholds are the subjectivity of the analysis and therefore its non-repeatability. This software, named MaCumBA, is largely automated and can analyze, in a short time, a large number of rainfall events to define several parameters of the threshold, such as the intensity (I) and the duration (D) of the rainfall event, the no-rain time gap (NRG: how many hours without rain are needed to consider two events as separate) and the equation describing the threshold. The possibility of quickly performing several analyses led to the decision to divide the territory into 25 homogeneous areas (named alert zones, AZ), so that a single threshold could be defined for each AZ. For the definition of the thresholds, two independent datasets of joint rainfall-landslide occurrences were used: a calibration dataset (data from 2000 to 2007) and a validation dataset (2008-2009). Once the thresholds were defined, a WebGIS-based EWS was implemented. In this system it is possible to focus both on the monitoring of real-time data and on forecasting at different lead times up to 48 h; forecasting data are collected from LAMI (Limited Area Model Italy) rainfall forecasts. The EWS operates on the basis of the threshold parameters defined by MaCumBA (I, D, NRG). An important feature of the warning system is that the visualization of the thresholds in the Web
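
    Rainfall thresholds of the kind MaCumBA fits are commonly expressed as an intensity-duration power law, I = a * D^b, with one (a, b) pair per alert zone. A minimal sketch of the exceedance check an EWS performs; the coefficients below are purely hypothetical, not the Tuscany values.

```python
# Intensity-duration threshold check (power-law form, assumed here; the
# coefficients a and b are placeholders, fitted per alert zone in practice).

def exceeds_threshold(intensity_mm_h: float, duration_h: float,
                      a: float = 10.0, b: float = -0.5) -> bool:
    """True if a rainfall event lies above the alert-zone threshold curve."""
    return intensity_mm_h > a * duration_h ** b
```

    Because b is negative, long-lasting events trigger a warning at much lower intensities than short bursts, which is the usual shape of empirical rainfall thresholds for landslides.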

  7. Threshold Dynamics of a Stochastic SIR Model with Vertical Transmission and Vaccination

    OpenAIRE

    Miao, Anqi; Zhang, Jian; Zhang, Tongqian; Pradeep, B. G. Sampath Aruna

    2017-01-01

    A stochastic SIR model with vertical transmission and vaccination is proposed and investigated in this paper. The threshold dynamics are explored when the noise is small. The conditions for the extinction or persistence of infectious diseases are deduced. Our results show that large noise can lead to the extinction of infectious diseases, which is conducive to the control of epidemic diseases.

  8. Tunable femtosecond lasers with low pump thresholds

    Science.gov (United States)

    Oppo, Karen

    The work in this thesis is concerned with the development of tunable, femtosecond laser systems exhibiting low pump threshold powers. The main motive for this work was the development of a low-threshold, self-modelocked Ti:Al2O3 laser in order to replace the conventional large-frame argon-ion pump laser with a more compact and efficient all-solid-state alternative. Results are also presented for an all-solid-state, self-modelocked Cr:LiSAF laser; however, most of this work is concerned with self-modelocked Ti:Al2O3 laser systems. In chapter 2, the operation of a regeneratively-initiated and a hard-aperture self-modelocked Ti:Al2O3 laser, pumped by an argon-ion laser, is discussed. Continuous-wave oscillation thresholds as low as 160 mW have been demonstrated, along with self-modelocked threshold powers as low as 500 mW. The measurement and suppression of phase noise on modelocked lasers is discussed in chapter 3. This is followed by a comparison of the phase noise characteristics of the regeneratively-initiated and hard-aperture self-modelocked Ti:Al2O3 lasers. The use of a synchronously-operating, high-resolution electron-optical streak camera in the evaluation of timing jitter is also presented. In chapter 4, the construction and self-modelocked operation of an all-solid-state Ti:Al2O3 laser is described. The all-solid-state alternative to the conventional argon-ion pump laser was a continuous-wave, intracavity-frequency-doubled, diode-laser-pumped Nd:YLF ring laser. At a total diode-laser pump power of 10 W, this minilaser was capable of producing a single-frequency output of 1 W, at 523.5 nm, in a TEM00 beam. The remainder of this thesis looks at the operation of a self-modelocked Ti:Al2O3 laser generating ultrashort pulses at wavelengths as long as 1053 nm. The motive for this work was the development of an all-solid-state, self-modelocked Ti:Al2O3 laser operating at 1053 nm, for use as a master oscillator in a Nd:glass power chain.

  9. Thresholds in chemical respiratory sensitisation.

    Science.gov (United States)

    Cochrane, Stella A; Arts, Josje H E; Ehnes, Colin; Hindle, Stuart; Hollnagel, Heli M; Poole, Alan; Suto, Hidenori; Kimber, Ian

    2015-07-03

    There is a continuing interest in determining whether it is possible to identify thresholds for chemical allergy. Here allergic sensitisation of the respiratory tract by chemicals is considered in this context. This is an important occupational health problem, being associated with rhinitis and asthma, and in addition provides toxicologists and risk assessors with a number of challenges. In common with all forms of allergic disease, chemical respiratory allergy develops in two phases. In the first (induction) phase, exposure to a chemical allergen (by an appropriate route of exposure) causes immunological priming and sensitisation of the respiratory tract. The second (elicitation) phase is triggered if a sensitised subject is exposed subsequently to the same chemical allergen via inhalation. A secondary immune response will be provoked in the respiratory tract, resulting in inflammation and the signs and symptoms of a respiratory hypersensitivity reaction. In this article, attention has focused on the identification of threshold values during the acquisition of sensitisation. Current mechanistic understanding of allergy is such that it can be assumed that the development of sensitisation (and also the elicitation of an allergic reaction) is a threshold phenomenon; there will be levels of exposure below which sensitisation will not be acquired. That is, all immune responses, including allergic sensitisation, have a threshold requirement for the availability of antigen/allergen, below which a response will fail to develop. The issue addressed here is whether there are methods available or clinical/epidemiological data that permit the identification of such thresholds. This document reviews briefly relevant human studies of occupational asthma, and experimental models that have been developed (or are being developed) for the identification and characterisation of chemical respiratory allergens. The main conclusion drawn is that although there is evidence that the

  10. On Gluonic Corrections to the Mass Spectrum in a Relativistic Charmonium Model

    OpenAIRE

    Hitoshi, ITO; Department of Physics, Faculty of Science and Technology Kinki University

    1984-01-01

    It is shown that the gluonic correction in the innermost region is abnormally large in the ^1S_0 state, and that a cutoff parameter which suppresses this correction should be introduced. The retardation effect is estimated under this restriction on the gluonic correction. The correction due to pair creation is shown to be small except for the ^1S_0 and ^3P_0 states.

  11. Delays in using chromatic and luminance information to correct rapid reaches.

    Science.gov (United States)

    Kane, Adam; Wade, Alex; Ma-Wyatt, Anna

    2011-09-07

    People can use feedback to make online corrections to movements but only if there is sufficient time to integrate the new information and make the correction. A key variable in this process is therefore the speed at which the new information about the target location is coded. Conduction velocities for chromatic signals are lower than for achromatic signals so it may take longer to correct reaches to chromatic stimuli. In addition to this delay, the sensorimotor system may prefer achromatic information over the chromatic information as delayed information may be less valuable when movements are made under time pressure. A down-weighting of chromatic information may result in additional latencies for chromatically directed reaches. In our study, participants made online corrections to reaches to achromatic, (L-M)-cone, and S-cone stimuli. Our chromatic stimuli were carefully adjusted to minimize stimulation of achromatic pathways, and we equated stimuli both in terms of detection thresholds and also by their estimated neural responses. Similar stimuli were used throughout the subjective adjustments and final reaching experiment. Using this paradigm, we found that responses to achromatic stimuli were only slightly faster than responses to (L-M)-cone and S-cone stimuli. We conclude that the sensorimotor system treats chromatic and achromatic information similarly and that the delayed chromatic responses primarily reflect early conduction delays.

  12. Ablation by ultrashort laser pulses: Atomistic and thermodynamic analysis of the processes at the ablation threshold

    International Nuclear Information System (INIS)

    Upadhyay, Arun K.; Inogamov, Nail A.; Rethfeld, Baerbel; Urbassek, Herbert M.

    2008-01-01

    Ultrafast laser irradiation of solids may ablate material off the surface. We study this process for thin films using molecular-dynamics simulation and thermodynamic analysis. Both metals and Lennard-Jones (LJ) materials are studied. We find that despite the large difference in thermodynamical properties between these two classes of materials (e.g., for aluminum versus LJ the ratio T_c/T_tr of critical to triple-point temperature differs by more than a factor of 4), the values of the ablation threshold energy E_abl normalized to the cohesion energy, ε_abl = E_abl/E_coh, are surprisingly universal: all are near 0.3 with ±30% scattering. The difference in the ratio T_c/T_tr means that for metals the melting threshold ε_m is low, ε_m < ε_abl, while for LJ it is high, ε_m > ε_abl. This thermodynamical consideration gives a simple explanation for the difference between metals and LJ. It explains why, despite the universality in ε_abl, metals thermomechanically ablate always from the liquid state. This is opposite to LJ materials, which (near threshold) ablate from the solid state. Furthermore, we find that immediately below the ablation threshold, the formation of large voids (cavitation) in the irradiated material leads to a strong temporary expansion on a very slow time scale. This feature is easily distinguished from the acoustic oscillations governing the material response at smaller intensities, on the one hand, and the ablation occurring at larger intensities, on the other hand. This finding allows us to explain the puzzle of huge surface excursions found in experiments at near-threshold laser irradiation

  13. PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI.

    Science.gov (United States)

    Alansary, Amir; Rajchl, Martin; McDonagh, Steven G; Murgasova, Maria; Damodaram, Mellisa; Lloyd, David F A; Davidson, Alice; Rutherford, Mary; Hajnal, Joseph V; Rueckert, Daniel; Kainz, Bernhard

    2017-10-01

    In this paper, we present a novel method for the correction of motion artifacts that are present in fetal magnetic resonance imaging (MRI) scans of the whole uterus. Contrary to current slice-to-volume registration (SVR) methods, requiring an inflexible anatomical enclosure of a single investigated organ, the proposed patch-to-volume reconstruction (PVR) approach is able to reconstruct a large field of view of non-rigidly deforming structures. It relaxes rigid motion assumptions by introducing a specific amount of redundant information that is exploited with parallelized patchwise optimization, super-resolution, and automatic outlier rejection. We further describe and provide an efficient parallel implementation of PVR allowing its execution within reasonable time on commercially available graphics processing units, enabling its use in the clinical practice. We evaluate PVR's computational overhead compared with standard methods and observe improved reconstruction accuracy in the presence of affine motion artifacts compared with conventional SVR in synthetic experiments. Furthermore, we have evaluated our method qualitatively and quantitatively on real fetal MRI data subject to maternal breathing and sudden fetal movements. We evaluate peak-signal-to-noise ratio, structural similarity index, and cross correlation with respect to the originally acquired data and provide a method for visual inspection of reconstruction uncertainty. We further evaluate the distance error for selected anatomical landmarks in the fetal head, as well as calculating the mean and maximum displacements resulting from automatic non-rigid registration to a motion-free ground truth image. These experiments demonstrate a successful application of PVR motion compensation to the whole fetal body, uterus, and placenta.
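
    Of the evaluation metrics listed, peak signal-to-noise ratio is the simplest to reproduce. A sketch using the standard PSNR definition on flattened intensity lists; the paper's own implementation may differ in details such as the peak value used.

```python
import math

# Standard PSNR (dB) between a reference image and a reconstruction, both
# flattened to equal-length lists of intensities in [0, max_value].

def psnr(reference, reconstruction, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio; infinite for a perfect reconstruction."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0.0:
        return float("inf")
    return 10.0 * math.log10(max_value ** 2 / mse)
```

    Higher values indicate a reconstruction closer to the originally acquired data; the paper reports it alongside structural similarity and cross correlation precisely because PSNR alone is insensitive to structured artifacts.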

  14. An odor-specific threshold deficit implicates abnormal cAMP signaling in youths at clinical risk for psychosis.

    Science.gov (United States)

    Kamath, Vidyulata; Moberg, Paul J; Calkins, Monica E; Borgmann-Winter, Karin; Conroy, Catherine G; Gur, Raquel E; Kohler, Christian G; Turetsky, Bruce I

    2012-07-01

    While olfactory deficits have been reported in schizophrenia and youths at-risk for psychosis, few studies have linked these deficits to current pathophysiological models of the illness. There is evidence that disrupted cyclic adenosine 3',5'-monophosphate (cAMP) signaling may contribute to schizophrenia pathology. As cAMP mediates olfactory signal transduction, the degree to which this disruption could manifest in olfactory impairment was ascertained. Odor-detection thresholds to two odorants that differ in the degree to which they activate intracellular cAMP were assessed in clinical risk and low-risk participants. Birhinal assessments of odor-detection threshold sensitivity to lyral and citralva were acquired in youths experiencing prodromal symptoms (n=17) and controls at low risk for developing psychosis (n=15). Citralva and lyral are odorants that differ in cAMP activation; citralva is a strong cAMP activator and lyral is a weak cAMP activator. The overall group-by-odor interaction was statistically significant. At-risk youths showed significantly reduced odor detection thresholds for lyral, but showed intact detection thresholds for citralva. This odor-specific threshold deficit was uncorrelated with deficits in odor identification or discrimination, which were also present. ROC curve analysis revealed that olfactory performance correctly classified at-risk and low-risk youths with greater than 97% accuracy. This study extends prior findings of an odor-specific hyposmia implicating cAMP-mediated signal transduction in schizophrenia and unaffected first-degree relatives to include youths at clinical risk for developing the disorder. These results suggest that dysregulation of cAMP signaling may be present during the psychosis prodrome. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Compositional threshold for Nuclear Waste Glass Durability

    International Nuclear Information System (INIS)

    Kruger, Albert A.; Farooqi, Rahmatullah; Hrma, Pavel R.

    2013-01-01

    Within the composition space of glasses, a distinct threshold appears to exist that separates 'good' glasses, i.e., those which are sufficiently durable, from 'bad' glasses of a low durability. The objective of our research is to clarify the origin of this threshold by exploring the relationship between glass composition, glass structure and chemical durability around the threshold region

  16. Unitarity corrections and high field strengths in high energy hard collisions

    International Nuclear Information System (INIS)

    Kovchegov, Y.V.; Mueller, A.H.

    1997-01-01

    Unitarity corrections to the BFKL description of high energy hard scattering are viewed in large-N_c QCD in light-cone quantization. In a center of mass frame unitarity corrections to high energy hard scattering are manifestly perturbatively calculable and unrelated to questions of parton saturation. In a frame where one of the hadrons is initially at rest unitarity corrections are related to parton saturation effects and involve potential strengths A_μ ∝ 1/g. In such a frame we describe the high energy scattering in terms of the expectation value of a Wilson loop. The large potentials A_μ ∝ 1/g are shown to be pure gauge terms allowing perturbation theory to again describe unitarity corrections and parton saturation effects. Genuine nonperturbative effects only come in at energies well beyond those energies where unitarity constraints first become important. (orig.)

  17. Optimizing Systems of Threshold Detection Sensors

    National Research Council Canada - National Science Library

    Banschbach, David C

    2008-01-01

    Below the threshold all signals are ignored. We develop a mathematical model for setting individual sensor thresholds to obtain optimal probability of detecting a significant event, given a limit on the total number of false positives allowed...
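
    The optimization described can be sketched as a search over per-sensor operating points: each sensor's threshold is represented by its false-positive rate f, with detection probability d(f) given by an ROC model, and the system detects an event unless every sensor misses it. The ROC model d(f) = sqrt(f) and the budget value are assumptions for illustration, not the report's model.

```python
from itertools import product

# Toy threshold-allocation search: maximize system detection probability
# subject to a cap on the summed per-sensor false-positive rates.

def best_allocation(roc_curves, fp_budget, grid):
    """Exhaustive search over per-sensor false-positive operating points.

    roc_curves: one callable d(f) per sensor mapping false-positive rate to
    detection probability. Returns (detection probability, allocation)."""
    best = (-1.0, None)
    for alloc in product(grid, repeat=len(roc_curves)):
        if sum(alloc) > fp_budget:
            continue  # violates the total false-positive limit
        miss = 1.0
        for d, f in zip(roc_curves, alloc):
            miss *= 1.0 - d(f)  # event is missed only if every sensor misses
        best = max(best, (1.0 - miss, alloc))
    return best
```

    A real system would replace the grid search with a constrained optimizer, but the structure (trade per-sensor false alarms against joint detection) is the same.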

  18. Finite-Q^2 Corrections to Parity-Violating DIS

    International Nuclear Information System (INIS)

    T. Hobbs; W. Melnitchouk

    2008-01-01

    Parity-violating deep inelastic scattering (PVDIS) has been proposed as an important new tool to extract the flavor and isospin dependence of parton distributions in the nucleon. We discuss finite-Q^2 effects in PVDIS asymmetries arising from subleading kinematical corrections and longitudinal contributions to the γZ interference. For the proton, these need to be accounted for when extracting the d/u ratio at large x. For the deuteron, the finite-Q^2 corrections can distort the effects of charge symmetry violation in parton distributions, or signals for physics beyond the standard model. We further explore the dependence of PVDIS asymmetries for polarized targets on the u and d helicity distributions at large x

  19. Bone sarcoma in humans induced by radium: A threshold response?

    International Nuclear Information System (INIS)

    Rowland, R.E.

    1996-01-01

    Radium-226 and radium-228 have induced malignancies in the skeleton (primarily bone sarcomas) of humans. They have also induced carcinomas in the paranasal sinuses and mastoid air cells. There is no evidence that any leukemias or any other solid cancers have been induced by internally deposited radium. This paper discusses a study conducted on the dial painter population. This study made a concerted effort to verify, for each of the measured radium cases, the published values of the skeletal dose and the initial intake of radium. These were derived from body content measurements made some 40 years after the radium intake. Corrections to the assumed radium retention function resulted in a considerable number of dose changes, which altered the shape of the dose-response function. It now appears that the induction of bone sarcomas is a threshold process.

  20. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    Directory of Open Access Journals (Sweden)

    Lori Townsend

    2016-06-01

    This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fifty potential threshold concepts, finally settling on six information literacy threshold concepts.

  1. Multiuser switched diversity scheduling systems with per-user threshold

    KAUST Repository

    Nam, Haewoon

    2010-05-01

    A multiuser switched diversity scheduling scheme with per-user feedback threshold is proposed and analyzed in this paper. The conventional multiuser switched diversity scheduling scheme uses a single feedback threshold for every user, where the threshold is a function of the average signal-to-noise ratios (SNRs) of the users as well as the number of users involved in the scheduling process. The proposed scheme, however, constructs a sequence of feedback thresholds instead of a single feedback threshold, such that each user compares its channel quality with the corresponding feedback threshold in the sequence. Numerical and simulation results show that, thanks to the flexibility of threshold selection, where a potentially different threshold can be used for each user, the proposed scheme provides a higher system capacity than the conventional scheme. © 2006 IEEE.
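A minimal simulation of the idea: probe users in sequence and schedule the first whose SNR clears its own threshold. The fading model (exponential SNRs, i.e. Rayleigh fading), the four-user setup, the fallback-to-best rule, and the particular threshold values are all assumptions for illustration, not the paper's optimized design.

```python
import random

def switched_scheduler(snrs, thresholds):
    """Probe users in order; pick the first whose SNR clears its own
    threshold, falling back to the best user if none does."""
    for user, (snr, th) in enumerate(zip(snrs, thresholds)):
        if snr >= th:
            return user
    return max(range(len(snrs)), key=lambda u: snrs[u])

random.seed(1)
n_users, n_slots = 4, 20000
avg_snr = 5.0   # assumed mean SNR, identical for all users

def run(thresholds):
    """Average SNR of the scheduled user over many fading realizations."""
    total = 0.0
    for _ in range(n_slots):
        snrs = [random.expovariate(1 / avg_snr) for _ in range(n_users)]
        total += snrs[switched_scheduler(snrs, thresholds)]
    return total / n_slots

single = run([avg_snr * 1.5] * n_users)                              # one common threshold
sequence = run([avg_snr * (2.2 - 0.4 * i) for i in range(n_users)])  # per-user threshold sequence
```

Both schemes harvest multiuser diversity (the scheduled SNR beats the unconditioned average); comparing `single` and `sequence` across threshold choices reproduces the kind of study the paper performs analytically.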

  2. A threshold model for Australian Stock Exchange equities

    Science.gov (United States)

    Bertram, William K.

    2005-02-01

    In this paper, we present a threshold model to describe the phenomenon of zero-return enhancement that is present in Australian Stock Exchange (ASX) data. We examine the intraday behaviour of the ASX data and construct a new measure of market activity using principal component analysis. We use this measure to create a business time scale that keeps the level of zero-return enhancement constant throughout trading hours. Operating in this new time scale, we fit the model to data at small and large time scales and find that the model affords an excellent approximation of the distribution of stock returns.

  3. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  4. A nondispersive X-ray spectrometer with dead time correction of great accuracy

    International Nuclear Information System (INIS)

    Guillon, H.; Friant, A.

    1976-01-01

    Processing the analog signals from an energy-dispersive X-ray spectrometer requires assembling a great number of functions. Instead of using function modules, it was decided to build a unit that delivers digital input data to the mini-computer from the signals of the Si(Li) detector. The unit contains six cards providing the following functions: main amplifier, threshold-level stabilizer and pile-up detector, amplitude encoder, pulse generator and fast amplifier, and a chronometer with dead-time correction and high-voltage polarization.

  5. Practical Atmospheric Correction Algorithms for a Multi-Spectral Sensor From the Visible Through the Thermal Spectral Regions

    Energy Technology Data Exchange (ETDEWEB)

    Borel, C.C.; Villeneuve, P.V.; Clodius, W.B.; Szymanski, J.J.; Davis, A.B.

    1999-04-04

    Deriving information about the Earth's surface requires atmospheric correction of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by scattering and attenuation. In practice the atmosphere is usually not well known, and it is thus necessary to use more practical methods. The authors describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. The authors also describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
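The dark-surface and thresholding steps can be sketched on toy data. The band values and the 0.4 / 0.2 cutoffs below are invented for illustration; the paper's calibrated thresholds and its columnar-water-vapor test are not reproduced here.

```python
import numpy as np

# Toy top-of-atmosphere reflectances for red and NIR bands (values assumed).
red = np.array([[0.08, 0.45, 0.04], [0.30, 0.05, 0.42]])
nir = np.array([[0.30, 0.50, 0.25], [0.32, 0.28, 0.48]])

# NDVI separates vegetation (high) from clouds and bare bright targets (low).
ndvi = (nir - red) / (nir + red)

# Thick-cloud mask: bright in the red band AND spectrally flat (low NDVI).
# The 0.4 and 0.2 cutoffs are illustrative placeholders.
cloud = (red > 0.4) & (ndvi < 0.2)

# Dark-object subtraction: the darkest cloud-free red pixel approximates the
# path radiance the atmosphere adds to every pixel.
path = red[~cloud].min()
red_corrected = np.clip(red - path, 0.0, None)
```

On real imagery the dark-target search, cloud screening, and path-radiance estimate would each use the sensor's calibrated bands rather than these toy numbers.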

  6. NxRepair: error correction in de novo sequence assembly using Nextera mate pairs

    Directory of Open Access Journals (Sweden)

    Rebecca R. Murphy

    2015-06-01

    Scaffolding errors and incorrect repeat disambiguation during de novo assembly can result in large-scale misassemblies in draft genomes. Nextera mate pair sequencing data provide additional information to resolve assembly ambiguities during scaffolding. Here, we introduce NxRepair, an open source toolkit for error correction in de novo assemblies that uses Nextera mate pair libraries to identify and correct large-scale errors. We show that NxRepair can identify and correct large scaffolding errors, without use of a reference sequence, resulting in quantitative improvements in assembly quality. NxRepair can be downloaded from GitHub or PyPI, the Python Package Index; a tutorial and user documentation are also available.

  7. Dual-functional Memory and Threshold Resistive Switching Based on the Push-Pull Mechanism of Oxygen Ions

    KAUST Repository

    Huang, Yi-Jen

    2016-04-07

    The combination of nonvolatile memory switching and volatile threshold switching functions of transition metal oxides in crossbar memory arrays is of great potential for replacing charge-based flash memory in very-large-scale integration. Here, we show that the resistive switching material structure, (amorphous TiOx)/(Ag nanoparticles)/(polycrystalline TiOx), fabricated on the textured-FTO substrate with ITO as the top electrode exhibits both the memory switching and threshold switching functions. When the device is used for resistive switching, it is forming-free for resistive memory applications with low operation voltage (<±1 V) and self-compliance to current up to 50 μA. When it is used for threshold switching, the low threshold current is beneficial for improving the device selectivity. The variation of oxygen distribution measured by energy dispersive X-ray spectroscopy and scanning transmission electron microscopy indicates the formation or rupture of conducting filaments in the device at different resistance states. It is therefore suggested that the push and pull actions of oxygen ions in the amorphous TiOx and polycrystalline TiOx films during the voltage sweep account for the memory switching and threshold switching properties in the device.

  8. Temporal Gain Correction for X-Ray Calorimeter Spectrometers

    Science.gov (United States)

    Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.

    2016-01-01

    Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions can have a profound effect on the gain, including the heat sink temperature, the local radiation temperature, the bias, and the temperature of the readout electronics. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear, and the event analysis (shaping, optimal filtering, etc.) often adds additional non-linearity. Thus, for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10^4 over the entire spectral band to above 12 keV, even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
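The interpolation idea can be sketched as follows: keep two calibrated, non-linear energy-scale functions, track where a fiducial calibration line currently falls, and interpolate between the full curves instead of applying a linear stretch. The quadratic gain shapes and the 5.9 keV line below are assumptions for illustration, not SXS calibration data.

```python
import numpy as np

# Two calibrated, non-linear energy-scale functions E(ph) in keV, measured at
# two reference gain states (curve shapes assumed for illustration).
ph = np.linspace(0.0, 10.0, 201)
E_a = 1.30 * ph - 0.012 * ph**2   # gain state A
E_b = 1.24 * ph - 0.010 * ph**2   # gain state B

E_LINE = 5.9   # Mn K-alpha-like calibration line, keV (assumed fiducial)

def ph_of_line(E_curve, e_line):
    """Pulse height at which a line of energy e_line appears on this curve."""
    return np.interp(e_line, E_curve, ph)   # E_curve is monotonic here

def correct(ph_event, ph_line_now):
    """Assign an energy by interpolating between the two full energy-scale
    functions, keyed to the current position of the calibration line."""
    w = (ph_line_now - ph_of_line(E_a, E_LINE)) / (
        ph_of_line(E_b, E_LINE) - ph_of_line(E_a, E_LINE))
    E_interp = (1 - w) * E_a + w * E_b      # interpolated energy scale
    return np.interp(ph_event, ph, E_interp)
```

When the gain has drifted exactly to state B, the calibration line is recovered at E_LINE by construction; intermediate drifts get a smoothly interpolated non-linear scale.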

  9. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  10. Fat fraction bias correction using T1 estimates and flip angle mapping.

    Science.gov (United States)

    Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A

    2014-01-01

    To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure the precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar PDFF errors for small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation of the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions. Copyright © 2013 Wiley Periodicals, Inc.

  11. Pain thresholds, supra-threshold pain and lidocaine sensitivity in patients with erythromelalgia, including the I848T mutation in NaV1.7.

    Science.gov (United States)

    Helås, T; Sagafos, D; Kleggetveit, I P; Quiding, H; Jönsson, B; Segerdahl, M; Zhang, Z; Salter, H; Schmelz, M; Jørum, E

    2017-09-01

    Nociceptive thresholds and supra-threshold pain ratings, as well as their reduction upon local injection of lidocaine, were compared between healthy subjects and patients with erythromelalgia (EM). Lidocaine (0.25, 0.50, 1.0 or 10 mg/mL) or placebo (saline) was injected intradermally in non-painful areas of the lower arm, in a randomized, double-blind manner, to test the effect on dynamic and static mechanical sensitivity, mechanical pain sensitivity, thermal thresholds and supra-threshold heat pain sensitivity. Heat pain thresholds and pain ratings to supra-threshold heat stimulation did not differ between EM patients (n = 27) and controls (n = 25), and neither did the dose-response curves for lidocaine. Only the subgroup of EM patients with mutations in the sodium channel subunits NaV1.7, 1.8 or 1.9 (n = 8) had increased lidocaine sensitivity for supra-threshold heat stimuli, contrasting with lower sensitivity to strong mechanical stimuli. This pattern was particularly clear in the two patients carrying the NaV1.7 I848T mutation, in whom lidocaine's hyperalgesic effect on mechanical pain sensitivity contrasted with more effective heat analgesia. Heat pain thresholds are not sensitized in EM patients, even in those with gain-of-function mutations in NaV1.7. Differential lidocaine sensitivity was overt only for noxious stimuli in the supra-threshold range, suggesting that sensitized supra-threshold encoding is important for the clinical pain phenotype in EM in addition to a lower activation threshold. Intracutaneous lidocaine dose-dependently blocked nociceptive sensations, but we did not identify EM patients with particularly high lidocaine sensitivity that could have provided valuable therapeutic guidance. Acute pain thresholds and supra-threshold heat pain in controls and patients with erythromelalgia do not differ and have the same lidocaine sensitivity. Acute heat pain thresholds even in EM patients with the NaV1.7 I848T mutation are normal and only nociceptor

  12. When do price thresholds matter in retail categories?

    OpenAIRE

    Pauwels, Koen; Srinivasan, Shuba; Franses, Philip Hans

    2007-01-01

    Marketing literature has long recognized that brand price elasticity need not be monotonic and symmetric, but has yet to provide generalizable market-level insights on threshold-based price elasticity, asymmetric thresholds, and the sign and magnitude of elasticity transitions. This paper introduces smooth transition regression models to study threshold-based price elasticity of the top 4 brands across 20 fast-moving consumer good categories. Threshold-based price elasticity is found for 76% ...

  13. Wall attenuation and scatter corrections for ion chambers: measurements versus calculations

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, D W.O.; Bielajew, A F [National Research Council of Canada, Ottawa, ON (Canada). Div. of Physics

    1990-08-01

    In precision ion chamber dosimetry in air, wall attenuation and scatter are corrected for by the factor A_wall (K_att in IAEA terminology, K_w^-1 in standards laboratory terminology). Using the EGS4 system, the authors show that Monte Carlo calculated A_wall factors predict relative variations in detector response with wall thickness which agree with all available experimental data within a statistical uncertainty of less than 0.1%. The calculated correction factors for use in exposure and air kerma standards differ by up to 1% from those obtained by extrapolating these same measurements. Using calculated correction factors would imply increases of 0.7-1.0% in the exposure and air kerma standards based on spherical and large diameter, large length cylindrical chambers and decreases of 0.3-0.5% for standards based on large diameter pancake chambers. (author).
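A toy calculation shows why a linear extrapolation to zero wall thickness can disagree with a calculated correction when the true response is non-linear in thickness. The response model and its attenuation and scatter coefficients are invented for illustration, not EGS4 results.

```python
import math

# Assumed toy model: attenuation (exponential) plus scatter build-up (linear).
mu = 0.04   # attenuation per mm of wall (assumed)
s = 0.01    # scatter build-up per mm of wall (assumed)

def response(t):
    """Chamber response vs wall thickness t (mm); non-linear in t."""
    return math.exp(-mu * t) * (1 + s * t)

# "Measurements" at four wall thicknesses.
thick = [2.0, 3.0, 4.0, 5.0]
resp = [response(t) for t in thick]

# Least-squares straight line through the points, extrapolated to t = 0.
n = len(thick)
tbar = sum(thick) / n
rbar = sum(resp) / n
slope = (sum((t - tbar) * (r - rbar) for t, r in zip(thick, resp))
         / sum((t - tbar) ** 2 for t in thick))
r0_extrapolated = rbar - slope * tbar

A_wall_extrapolated = r0_extrapolated / response(3.0)  # correction for a 3 mm wall
A_wall_true = response(0.0) / response(3.0)            # exact ("calculated") correction
```

Because the true response curves, the straight-line intercept undershoots the zero-thickness response, so the extrapolated correction sits a fraction of a percent below the exact one, the same order as the ~1% discrepancies the abstract reports.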

  14. Determining the precipitable water vapor thresholds under different rainfall strengths in Taiwan

    Science.gov (United States)

    Yeh, Ta-Kang; Shih, Hsuan-Chang; Wang, Chuan-Sheng; Choy, Suelynn; Chen, Chieh-Hung; Hong, Jing-Shan

    2018-02-01

    Precipitable Water Vapor (PWV) plays an important role in weather forecasting: observing the distribution of water vapor helps in evaluating changes in the weather system. The ability to calculate PWV from Global Positioning System (GPS) signals is useful for understanding special weather phenomena. In this study, 95 ground-based GPS and rainfall stations in Taiwan were used from 2006 to 2012 to analyze the relationship between PWV and rainfall. The PWV data were classified into four classes (no, light, moderate and heavy rainfall), the vertical gradients of the PWV were obtained, and the variations of the PWV were analyzed. The results indicated that for every 100 m increase in GPS elevation, the PWV values decreased by 9.5 mm, 11.0 mm, 12.2 mm and 12.3 mm under the no, light, moderate and heavy rainfall conditions, respectively. After applying a correction using the vertical gradients mentioned above, the average PWV thresholds were 41.8 mm, 52.9 mm, 62.5 mm and 64.4 mm under the no, light, moderate and heavy rainfall conditions, respectively. This study offers an empirical threshold to assist rainfall prediction and can be used to distinguish the rainfall features of different areas in Taiwan.
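Applying the reported gradients and thresholds is simple arithmetic. The sketch below uses the numbers from the abstract; the station elevation, the 40 mm reading, and the reduce-to-reference-level convention are assumptions for illustration.

```python
# Per-class vertical gradients (mm of PWV per 100 m of elevation) and
# height-corrected PWV thresholds (mm), as reported in the abstract.
GRADIENT = {"no": 9.5, "light": 11.0, "moderate": 12.2, "heavy": 12.3}
THRESHOLD = {"no": 41.8, "light": 52.9, "moderate": 62.5, "heavy": 64.4}

def corrected_pwv(pwv_mm, elevation_m, rain_class):
    """Reduce a station reading to the reference level by adding back the
    decrease accumulated over the station's elevation."""
    return pwv_mm + GRADIENT[rain_class] * elevation_m / 100.0

def exceeds_threshold(pwv_mm, elevation_m, rain_class):
    """Does the height-corrected reading reach the class threshold?"""
    return corrected_pwv(pwv_mm, elevation_m, rain_class) >= THRESHOLD[rain_class]

# Hypothetical station at 250 m elevation reading 40 mm of PWV:
pwv0 = corrected_pwv(40.0, 250.0, "heavy")   # 40 + 12.3 * 2.5 = 70.75 mm
```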

  15. Radiative corrections to the masses of supersymmetric Higgs bosons

    International Nuclear Information System (INIS)

    Ellis, J.; Zwirner, F.

    1991-01-01

    The lightest neutral Higgs boson in the minimal supersymmetric extension of the standard model has a tree-level mass less than that of the Z0. We calculate radiative corrections to its mass and to that of the heavier CP-even neutral Higgs boson. We find large corrections that increase with the top quark and squark masses, and vary with the ratio of vacuum expectation values v2/v1. These radiative corrections can be as large as O(100) GeV, and have the effect of (i) invalidating lower bounds on v2/v1 inferred from unsuccessful Higgs searches at LEP I, (ii) in many cases, increasing the mass of the lighter CP-even Higgs boson beyond mZ, (iii) often, increasing the mass of the heavier CP-even Higgs boson beyond the LEP reach, into a range more accessible to the LHC or SSC. (orig.)

  16. Threshold Dynamics of a Stochastic SIR Model with Vertical Transmission and Vaccination

    Directory of Open Access Journals (Sweden)

    Anqi Miao

    2017-01-01

    A stochastic SIR model with vertical transmission and vaccination is proposed and investigated in this paper. The threshold dynamics are explored when the noise is small. The conditions for the extinction or persistence of infectious diseases are deduced. Our results show that large noise can lead to the extinction of infectious diseases, which is conducive to epidemic disease control.
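The noise-driven extinction effect can be reproduced with a small Euler-Maruyama simulation. The model below is a generic stochastic SIR with multiplicative noise on the transmission term and invented parameters, not the paper's vertical-transmission-and-vaccination system.

```python
import math
import random

def simulate(sigma, seed=7, T=200.0, dt=0.01):
    """Euler-Maruyama run of a normalized SIR model with noise intensity
    sigma on the transmission term; returns the final infected fraction."""
    rng = random.Random(seed)
    beta, gamma, mu = 0.8, 0.3, 0.1    # assumed rates; R0 = beta/(gamma+mu) = 2
    S, I = 0.9, 0.1
    for _ in range(int(T / dt)):
        dW = rng.gauss(0.0, math.sqrt(dt))
        inc = beta * S * I * dt + sigma * S * I * dW   # stochastic incidence
        S += (mu - mu * S) * dt - inc                  # births replace deaths
        I += inc - (gamma + mu) * I * dt
        S = min(max(S, 0.0), 1.0)                      # keep fractions in [0, 1]
        I = min(max(I, 0.0), 1.0)
    return I

I_small = simulate(sigma=0.1)   # weak noise: infection persists near its endemic level
I_large = simulate(sigma=1.5)   # strong noise: infection is driven to extinction
```

With these parameters the deterministic model is endemic (R0 = 2), yet a large sigma makes the effective growth rate of the infected fraction negative, matching the paper's qualitative conclusion that large noise extinguishes the disease.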

  17. High-order above-threshold dissociation of molecules

    Science.gov (United States)

    Lu, Peifen; Wang, Junping; Li, Hui; Lin, Kang; Gong, Xiaochun; Song, Qiying; Ji, Qinying; Zhang, Wenbin; Ma, Junyang; Li, Hanxiao; Zeng, Heping; He, Feng; Wu, Jian

    2018-03-01

    Electrons bound to atoms or molecules can simultaneously absorb multiple photons via above-threshold ionization, which features discrete peaks in the photoelectron spectrum owing to the quantized nature of light energy. Analogously, above-threshold dissociation of molecules has been proposed to address multiple-photon energy deposition in the nuclei of molecules. In this case, nuclear energy spectra consisting of photon-energy-spaced peaks exceeding the binding energy of the molecular bond are predicted. Although the observation of such phenomena is difficult, this scenario is nevertheless logical and is based on fundamental laws. Here, we report conclusive experimental observation of high-order above-threshold dissociation of H2 in strong laser fields, where the tunneling-ionized electron transfers the absorbed multiphoton energy, in excess of the ionization threshold, to the nuclei via field-driven inelastic rescattering. Our results provide unambiguous evidence that the electron and nuclei of a molecule absorb multiple photons as a whole, and thus above-threshold ionization and above-threshold dissociation must appear simultaneously, a cornerstone of present-day strong-field molecular physics.

  18. Altruism in multiplayer snowdrift games with threshold and punishment

    Science.gov (United States)

    Zhang, Chunyan; Liu, Zhongxin; Sun, Qinglin; Chen, Zengqiang

    2015-09-01

    The puzzle of cooperation attracts broad attention in the scientific community. Here we adopt an extra mechanism of punishment in the framework of a threshold multi-player snowdrift game employed as the scenario for the cooperation problem. Two scenarios are considered: defectors suffer punishment regardless of the game results, or defectors incur punishment only when the game fails. We show by analysis that, under this assumption, punishing free riders can significantly influence the evolutionary outcomes, and the results are driven by the specific components of the punishing rule. In particular, punishing defectors always, not only when the game fails, can be more effective for maintaining public cooperation in multi-player systems. Intriguingly, larger thresholds of the game provide a more favorable scenario for the coexistence of cooperators and defectors over a broad range of parameter values. Further, cooperators are best supported by large punishment of defectors, and then dominate and stabilize in the population, under the premise that defectors always incur punishment regardless of whether the game ends successfully or not.

  19. Below-threshold harmonic generation from strong non-uniform fields

    Science.gov (United States)

    Yavuz, I.

    2017-10-01

    Strong-field photoemission below the ionization threshold is a rich and complex region where atomic emission and harmonic generation may coexist. We studied the mechanism of below-threshold harmonics (BTH) from spatially non-uniform local fields near metallic nanostructures. Discrete harmonics are generated due to the broken inversion symmetry, suggesting enriched coherent emission in the vuv frequency range. Through the numerical solution of the time-dependent Schrödinger equation, we investigate the wavelength and intensity dependence of BTH. The wavelength dependence identifies counter-regular resonances: individual contributions from multi-photon emission and channel-closing effects due to quantum path interferences. In order to understand the underlying mechanism of BTH, we devised a generalized semi-classical model including the influence of the Coulomb and non-uniform field interactions. As in uniform fields, the Coulomb potential is the determinant of BTH in non-uniform fields; we observed that the generation of BTH is due to returning trajectories with negative energies. Because the non-uniformity remains effective at large distances, only long trajectories are noticeably affected.

  20. Characterization of Mode 1 and Mode 2 delamination growth and thresholds in graphite/peek composites

    Science.gov (United States)

    Martin, Roderick H.; Murri, Gretchen B.

    1988-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic matrix composite, were characterized for mode 1 and mode 2 loadings, using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to the strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be adequately used as a life prediction tool: a small error in the estimated applied loads could lead to large errors in the delamination growth rates. Hence, strain energy release rate thresholds, G_th, below which no delamination would occur, were also measured. Mode 1 and mode 2 threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred after at least 1,000,000 cycles was taken as the threshold strain energy release rate. Comments are given on how testing effects, facial interference or delamination front damage, may invalidate the experimental determination of the constants in the expression.

  1. Determinants of Change in the Cost-effectiveness Threshold.

    Science.gov (United States)

    Paulden, Mike; O'Mahony, James; McCabe, Christopher

    2017-02-01

    The cost-effectiveness threshold in health care systems with a constrained budget should be determined by the cost-effectiveness of displacing health care services to fund new interventions. Using comparative statics, we review some potential determinants of the threshold, including the budget for health care, the demand for existing health care interventions, the technical efficiency of existing interventions, and the development of new health technologies. We consider the anticipated direction of impact that would affect the threshold following a change in each of these determinants. Where the health care system is technically efficient, an increase in the health care budget unambiguously raises the threshold, whereas an increase in the demand for existing, non-marginal health interventions unambiguously lowers the threshold. Improvements in the technical efficiency of existing interventions may raise or lower the threshold, depending on the cause of the improvement in efficiency, whether the intervention is already funded, and, if so, whether it is marginal. New technologies may also raise or lower the threshold, depending on whether the new technology is a substitute for an existing technology and, again, whether the existing technology is marginal. Our analysis permits health economists and decision makers to assess if and in what direction the threshold may change over time. This matters, as threshold changes impact the cost-effectiveness of interventions that require decisions now but have costs and effects that fall in future periods.

  2. Low heat pain thresholds in migraineurs between attacks.

    Science.gov (United States)

    Schwedt, Todd J; Zuniga, Leslie; Chong, Catherine D

    2015-06-01

    Between attacks, migraine is associated with hypersensitivities to sensory stimuli. The objective of this study was to investigate hypersensitivity to pain in migraineurs between attacks. Cutaneous heat pain thresholds were measured in 112 migraineurs, migraine free for ≥ 48 hours, and 75 healthy controls. Pain thresholds at the head and at the arm were compared between migraineurs and controls using two-tailed t-tests. Among migraineurs, correlations between heat pain thresholds and headache frequency, allodynia symptom severity, and time interval until next headache were calculated. Migraineurs had lower pain thresholds than controls at the head (43.9 ℃ ± 3.2 ℃ vs. 45.1 ℃ ± 3.0 ℃, p = 0.015) and arm (43.2 ℃ ± 3.4 ℃ vs. 44.8 ℃ ± 3.3 ℃). There were no significant correlations between pain thresholds and headache frequency or allodynia symptom severity. For the 41 migraineurs for whom time to next headache was known, there were positive correlations between time to next headache and pain thresholds at the head (r = 0.352, p = 0.024) and arm (r = 0.312, p = 0.047). This study provides evidence that migraineurs have low heat pain thresholds between migraine attacks. Mechanisms underlying these lower pain thresholds could also predispose migraineurs to their next migraine attack, a hypothesis supported by the positive correlations between pain thresholds and time to next migraine attack. © International Headache Society 2014.
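The head-threshold comparison can be re-derived from the reported summary statistics (43.9 ℃ ± 3.2 ℃, n = 112 vs. 45.1 ℃ ± 3.0 ℃, n = 75) with a Welch-type t statistic. The normal approximation to the two-tailed p-value below is a rough check that lands in the same range as the reported p = 0.015, not a reproduction of the paper's exact test.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic from summary statistics (Welch form:
    unpooled standard error)."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m1 - m2) / se

# Head heat-pain thresholds: migraineurs vs. controls, as reported above.
t_head = welch_t(43.9, 3.2, 112, 45.1, 3.0, 75)

# Normal approximation to the two-tailed p-value; adequate for ~185 subjects,
# where the t distribution is close to normal.
p_head = math.erfc(abs(t_head) / math.sqrt(2))
```

With these sample sizes the choice between pooled, Welch, and exact-df p-values moves the result only slightly; all agree the head difference is significant at the 5% level.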

  3. Dual-functional Memory and Threshold Resistive Switching Based on the Push-Pull Mechanism of Oxygen Ions

    KAUST Repository

    Huang, Yi-Jen; Chao, Shih-Chun; Lien, Der-Hsien; Wen, Cheng-Yen; He, Jr-Hau; Lee, Si-Chen

    2016-01-01

    The combination of nonvolatile memory switching and volatile threshold switching functions of transition metal oxides in crossbar memory arrays is of great potential for replacing charge-based flash memory in very-large-scale integration. Here, we

  4. Power corrections to the asymptotics of the pion electromagnetic formfactor

    International Nuclear Information System (INIS)

    Gorsky, A.S.

    1984-01-01

    The first power correction to the pion electromagnetic form factor is derived. A few asymptotic wave functions corresponding to the different series of operators and matrix elements of four-particle operators in the pion have been found. The large scale of the first power correction, approximately 10^2 GeV^2/Q^2, where Q^2 is the momentum transfer, indicates that at low energies the whole series of power corrections has to be taken into account.

  5. Some considerations regarding the creep crack growth threshold

    International Nuclear Information System (INIS)

    Thouless, M.D.; Evans, A.G.

    1984-01-01

    The preceding analysis reveals that the existence of a threshold determined by the sintering stress does not influence the post-threshold crack velocity. Considerations of the sintering stress can thus be conveniently excluded from analyses of the post-threshold crack velocity. The presence of a crack growth threshold has been predicted, based on the existence of cavity-nucleation-controlled crack growth. A preliminary analysis of cavity nucleation rates within the damage zone reveals that this threshold is relatively abrupt, in accord with experimental observations. Consequently, at stress intensities below K_th, growth becomes nucleation limited and crack blunting occurs in preference to crack growth.

  6. Phase correction of MR perfusion/diffusion images

    International Nuclear Information System (INIS)

    Chenevert, T.L.; Pipe, J.G.; Brunberg, J.A.; Yeung, H.N.

    1989-01-01

    Apparent diffusion coefficient (ADC) and perfusion MR sequences are exceptionally sensitive to minute motion and, therefore, are prone to bulk motions that hamper ADC/perfusion quantification. The authors have developed a phase correction algorithm to substantially reduce this error. The algorithm uses a diffusion-insensitive data set to correct data that are diffusion sensitive but phase corrupt. An assumption of the algorithm is that bulk motion phase shifts are uniform in one dimension, although they may be arbitrarily large and variable from acquisition to acquisition. This is facilitated by orthogonal section selection. The correction is applied after one Fourier transform of a two-dimensional Fourier transform reconstruction. Imaging experiments on rat and human brain demonstrate significant artifact reduction in ADC and perfusion measurements
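
    The correction described above can be illustrated with a small sketch. This is not the authors' implementation; it is a hypothetical numpy version of the general idea (function and variable names are invented): after the first of the two Fourier transforms, each line's bulk-motion phase is estimated against the diffusion-insensitive reference and removed.

```python
import numpy as np

def phase_correct(corrupt_ksp, reference_ksp):
    """Remove bulk-motion phase shifts from diffusion-weighted k-space data.

    Hypothetical sketch: after one Fourier transform of the 2DFT
    reconstruction, each line is dephased by the phase difference between
    the diffusion-sensitive data and a diffusion-insensitive reference,
    assuming the motion-induced phase is uniform along one dimension.
    """
    # First of the two Fourier transforms (along the readout axis).
    corrupt = np.fft.ifft(corrupt_ksp, axis=1)
    reference = np.fft.ifft(reference_ksp, axis=1)
    # Per-line phase error relative to the reference.
    phase_err = np.angle(np.sum(corrupt * np.conj(reference), axis=1))
    # Remove the phase error line by line, then finish the reconstruction.
    corrected = corrupt * np.exp(-1j * phase_err)[:, None]
    return np.fft.ifft(corrected, axis=0)
```

    The per-line estimate works because a bulk motion between excitations multiplies an entire acquired line by a single phase factor, which survives the first transform.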

  7. When Do Price Thresholds Matter in Retail Categories?

    OpenAIRE

    Koen Pauwels; Shuba Srinivasan; Philip Hans Franses

    2007-01-01

    Marketing literature has long recognized that brand price elasticity need not be monotonic and symmetric, but has yet to provide generalizable market-level insights on threshold-based price elasticity, asymmetric thresholds, and the sign and magnitude of elasticity transitions. This paper introduces smooth transition regression models to study threshold-based price elasticity of the top 4 brands across 20 fast-moving consumer good categories. Threshold-based price elasticity is found for 76% ...

  8. Estimating the Threshold Level of Inflation for Thailand

    OpenAIRE

    Jiranyakul, Komain

    2017-01-01

    Abstract. This paper analyzes the relationship between inflation and economic growth in Thailand using an annual dataset covering 1990 to 2015. The threshold model is estimated for different levels of the threshold inflation rate. The results suggest that the threshold level of inflation above which inflation significantly slows growth is estimated at 3 percent. The negative relationship between inflation and growth is apparent above this threshold level of inflation. In other words, the inflation rat...
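
    The kind of threshold regression described above can be sketched as a grid search over candidate thresholds. This is an illustrative reconstruction, not the paper's exact specification (names are invented):

```python
import numpy as np

def estimate_inflation_threshold(inflation, growth, candidates):
    """Grid-search a piecewise-linear threshold regression.

    Illustrative sketch: for each candidate threshold tau, fit
        growth = b0 + b1*inflation + b2*max(inflation - tau, 0)
    by least squares and keep the tau minimising the residual sum of
    squares; b1 + b2 is the inflation slope above the threshold.
    """
    best = None
    for tau in candidates:
        X = np.column_stack([
            np.ones_like(inflation),
            inflation,
            np.maximum(inflation - tau, 0.0),
        ])
        beta, *_ = np.linalg.lstsq(X, growth, rcond=None)
        ssr = np.sum((growth - X @ beta) ** 2)
        if best is None or ssr < best[0]:
            best = (ssr, tau, beta)
    return best[1], best[2]
```

    With noiseless data kinked at 3 percent, the search recovers the kink exactly; with real data one would add inference on the threshold (e.g. a bootstrap).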

  9. Time-efficient multidimensional threshold tracking method

    DEFF Research Database (Denmark)

    Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten

    2015-01-01

    Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of anothe...

  10. A light-powered sub-threshold microprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Liu Ming; Chen Hong; Zhang Chun; Li Changmeng; Wang Zhihua, E-mail: lium02@mails.tsinghua.edu.cn [Institute of Microelectronics, Tsinghua University, Beijing 100084 (China)

    2010-11-15

    This paper presents an 8-bit sub-threshold microprocessor which can be powered by an integrated photosensitive diode. With a custom-designed sub-threshold standard cell library and a 1 kbit sub-threshold SRAM design, a leakage power of 58 nW, a dynamic power of 385 nW at 165 kHz, an EDP of 13 pJ/inst and an operating voltage of 350 mV are achieved. Under a light of about 150 klux, the microprocessor can run at rates of up to 500 kHz. The microprocessor can be used for wireless-sensor-network nodes.

  11. The reaction np → ppπ⁻ from threshold up to 570 MeV

    International Nuclear Information System (INIS)

    Daum, M.; Finger, M.; Slunecka, M.; Finger, M. Jr.; Janata, A.; Franz, J.; Heinsius, F.H.; Koenigsmann, K.; Lacker, H.; Schmitt, H.; Schweiger, W.; Sereni, P.

    2002-01-01

    The reaction np → ppπ⁻ has been studied in a kinematically complete measurement with a large-acceptance time-of-flight spectrometer for incident neutron energies between threshold and 570 MeV. The proton-proton invariant mass distributions show a strong enhancement due to the pp(¹S₀) final-state interaction. A large anisotropy was found in the pion angular distributions, in contrast to the reaction pp → ppπ⁰. At small energies, a large forward/backward asymmetry has been observed. From the measured integrated cross section σ(np → ppπ⁻), the isoscalar cross section σ₀₁ has been extracted. Its energy dependence indicates that mainly partial waves with Sp final states contribute. (orig.)

  12. Threshold concepts in finance: student perspectives

    Science.gov (United States)

    Hoadley, Susan; Kyng, Tim; Tickle, Leonie; Wood, Leigh N.

    2015-10-01

    Finance threshold concepts are the essential conceptual knowledge that underpin well-developed financial capabilities and are central to the mastery of finance. In this paper we investigate threshold concepts in finance from the point of view of students, by establishing the extent to which students are aware of threshold concepts identified by finance academics. In addition, we investigate the potential of a framework of different types of knowledge to differentiate the delivery of the finance curriculum and the role of modelling in finance. Our purpose is to identify ways to improve curriculum design and delivery, leading to better student outcomes. Whilst we find that there is significant overlap between what students identify as important in finance and the threshold concepts identified by academics, much of this overlap is expressed by indirect reference to the concepts. Further, whilst different types of knowledge are apparent in the student data, there is evidence that students do not necessarily distinguish conceptual from other types of knowledge. As well as investigating the finance curriculum, the research demonstrates the use of threshold concepts to compare and contrast student and academic perceptions of a discipline and, as such, is of interest to researchers in education and other disciplines.

  13. Adhoc: an R package to calculate ad hoc distance thresholds for DNA barcoding identification

    Directory of Open Access Journals (Sweden)

    Gontran Sonet

    2013-12-01

    Identification by DNA barcoding is more likely to be erroneous when it is based on a large distance between the query (the barcode sequence of the specimen to identify) and its best match in a reference barcode library. The number of such false positive identifications can be decreased by setting a distance threshold above which identification has to be rejected. To this end, we proposed recently to use an ad hoc distance threshold producing identifications with an estimated relative error probability that can be fixed by the user (e.g. 5%). Here we introduce two R functions that automate the calculation of ad hoc distance thresholds for reference libraries of DNA barcodes. The scripts of both functions, a user manual and an example file are available on the JEMU website (http://jemu.myspecies.info/computer-programs) as well as on the Comprehensive R Archive Network (CRAN, http://cran.r-project.org).
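
    The idea of a distance threshold tied to a user-fixed relative error probability can be sketched as follows. This is not the code of the R package; it is a simplified Python illustration with invented names:

```python
import numpy as np

def ad_hoc_threshold(dist, correct, target_error=0.05):
    """Pick a distance threshold with a target relative error probability.

    Simplified sketch: `dist` holds best-match distances against the
    reference library and `correct` flags whether each best match is the
    right species. Return the largest observed distance for which the
    fraction of wrong identifications among accepted queries stays at or
    below `target_error`; None if no threshold achieves it.
    """
    order = np.argsort(dist)
    dist, correct = np.asarray(dist)[order], np.asarray(correct)[order]
    errors = np.cumsum(~correct)              # wrong matches accepted so far
    accepted = np.arange(1, dist.size + 1)    # total matches accepted so far
    ok = np.nonzero(errors / accepted <= target_error)[0]
    return dist[ok[-1]] if ok.size else None
```

    Queries whose best-match distance exceeds the returned threshold would then be left unidentified rather than risk a false positive.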

  14. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    Science.gov (United States)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds for short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near cloud, suggesting the physics of this model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend our model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, or large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  15. Revisiting instanton corrections to the Konishi multiplet

    Energy Technology Data Exchange (ETDEWEB)

    Alday, Luis F. [Mathematical Institute, University of Oxford,Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom); Korchemsky, Gregory P. [Institut de Physique Théorique, Université Paris Saclay, CNRS, CEA,F-91191 Gif-sur-Yvette (France)

    2016-12-01

    We revisit the calculation of instanton effects in correlation functions in N=4 SYM involving the Konishi operator and operators of twist two. Previous studies revealed that the scaling dimensions and the OPE coefficients of these operators do not receive instanton corrections in the semiclassical approximation. We go beyond this approximation and demonstrate that, while operators belonging to the same N=4 supermultiplet ought to have the same conformal data, the evaluation of quantum instanton corrections for one operator can be mapped into a semiclassical computation for another operator in the same supermultiplet. This observation allows us to compute explicitly the leading instanton correction to the scaling dimension of operators in the Konishi supermultiplet as well as to their structure constants in the OPE of two half-BPS scalar operators. We then use these results, together with crossing symmetry, to determine instanton corrections to scaling dimensions of twist-four operators with large spin.

  16. Model-dependence of the CO2 threshold for melting the hard Snowball Earth

    Directory of Open Access Journals (Sweden)

    W. R. Peltier

    2011-01-01

    One of the critical issues of the Snowball Earth hypothesis is the CO2 threshold for triggering the deglaciation. Using the Community Atmospheric Model version 3.0 (CAM3), we study the problem of the CO2 threshold. Our simulations show large differences from previous results (e.g. Pierrehumbert, 2004, 2005; Le Hir et al., 2007). At 0.2 bars of CO2, the January maximum near-surface temperature is about 268 K, about 13 K higher than that in Pierrehumbert (2004, 2005), but lower than the value of 270 K for 0.1 bar of CO2 in Le Hir et al. (2007). It is found that the difference in simulation results is mainly due to model sensitivity of the greenhouse effect and longwave cloud forcing to increasing CO2. At 0.2 bars of CO2, CAM3 yields 117 W m⁻² of clear-sky greenhouse effect and 32 W m⁻² of longwave cloud forcing, versus only about 77 W m⁻² and 10.5 W m⁻² in Pierrehumbert (2004, 2005), respectively. CAM3 has a clear-sky greenhouse effect comparable to that in Le Hir et al. (2007), but lower longwave cloud forcing. CAM3 also produces much stronger Hadley cells than Pierrehumbert (2005). The effects of pressure broadening and collision-induced absorption are also studied using a radiative-convective model and CAM3. Both effects substantially increase the surface temperature and thus lower the CO2 threshold. The radiative-convective model yields a CO2 threshold of about 0.21 bars with a surface albedo of 0.663. Without considering the effects of pressure broadening and collision-induced absorption, CAM3 yields an approximate CO2 threshold of about 1.0 bar for a surface albedo of about 0.6. However, the threshold is lowered to 0.38 bars when both effects are considered.

  17. Bedding material affects mechanical thresholds, heat thresholds and texture preference

    Science.gov (United States)

    Moehring, Francie; O’Hara, Crystal L.; Stucky, Cheryl L.

    2015-01-01

    It has long been known that the bedding type animals are housed on can affect breeding behavior and cage environment. Yet little is known about its effects on evoked behavior responses or non-reflexive behaviors. C57BL/6 mice were housed for two weeks on one of five bedding types: Aspen Sani Chips® (standard bedding for our institute), ALPHA-Dri®, Cellu-Dri™, Pure-o’Cel™ or TEK-Fresh. Mice housed on Aspen exhibited the lowest (most sensitive) mechanical thresholds while those on TEK-Fresh exhibited 3-fold higher thresholds. While bedding type had no effect on responses to punctate or dynamic light touch stimuli, TEK-Fresh housed animals exhibited greater responsiveness in a noxious needle assay, than those housed on the other bedding types. Heat sensitivity was also affected by bedding as animals housed on Aspen exhibited the shortest (most sensitive) latencies to withdrawal whereas those housed on TEK-Fresh had the longest (least sensitive) latencies to response. Slight differences between bedding types were also seen in a moderate cold temperature preference assay. A modified tactile conditioned place preference chamber assay revealed that animals preferred TEK-Fresh to Aspen bedding. Bedding type had no effect in a non-reflexive wheel running assay. In both acute (two day) and chronic (5 week) inflammation induced by injection of Complete Freund’s Adjuvant in the hindpaw, mechanical thresholds were reduced in all groups regardless of bedding type, but TEK-Fresh and Pure-o’Cel™ groups exhibited a greater dynamic range between controls and inflamed cohorts than Aspen housed mice. PMID:26456764

  18. 40 CFR 68.115 - Threshold determination.

    Science.gov (United States)

    2010-07-01

    ... (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Regulated Substances for Accidental Release Prevention... process exceeds the threshold. (b) For the purposes of determining whether more than a threshold quantity... portion of the process is less than 10 millimeters of mercury (mm Hg), the amount of the substance in the...

  19. Approach to DOE threshold guidance limits

    International Nuclear Information System (INIS)

    Shuman, R.D.; Wickham, L.E.

    1984-01-01

    The need for less restrictive criteria governing disposal of extremely low-level radioactive waste has long been recognized. The Low-Level Waste Management Program has been directed by the Department of Energy (DOE) to aid in the development of a threshold guidance limit for DOE low-level waste facilities. Project objectives are concerned with the definition of a threshold limit dose and pathway analysis of radionuclide transport within selected exposure scenarios at DOE sites. Results of the pathway analysis will be used to determine waste radionuclide concentration guidelines that meet the defined threshold limit dose. Methods of measurement and verification of concentration limits round out the project's goals. Work on defining a threshold limit dose is nearing completion. Pathway analysis of sanitary landfill operations at the Savannah River Plant and the Idaho National Engineering Laboratory is in progress using the DOSTOMAN computer code. Concentration limit calculations and determination of implementation procedures shall follow completion of the pathways work. 4 references

  20. Sub-LET Threshold SEE Cross Section Dependency with Ion Energy

    CERN Document Server

    Garcia Alia, Ruben; Brandenburg, Sytze; Brugger, Markus; Daly, Eamonn; Ferlet-Cavrois, Veronique; Gaillard, Remi; Hoeffgen, Stefan; Menicucci, Alessandra; Metzger, Stefan; Zadeh, Ali; Muschitiello, Michele; Noordeh, Emil; Santin, Giovanni; CERN. Geneva. ATS Department

    2015-01-01

    This study uses a set of experimental data to examine the dependence of the heavy-ion SEE cross section on ion species and energy in the sub-LET-threshold region. In addition, a Monte Carlo based model is introduced and applied, showing good agreement with the data in the several hundred MeV/n range while evidencing large discrepancies with the measurements in the 10-30 MeV/n interval, notably for the Ne ion. These discrepancies are carefully analysed and discussed.

  1. Quantitative Evaluation of 2 Scatter-Correction Techniques for 18F-FDG Brain PET/MRI in Regard to MR-Based Attenuation Correction.

    Science.gov (United States)

    Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika

    2017-10-01

    In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC on scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: Implementations of the tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight ¹⁸F-FDG PET

  2. Height drift correction in non-raster atomic force microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Travis R. [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ziegler, Dominik [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Brune, Christoph [Institute for Computational and Applied Mathematics, University of Münster (Germany); Chen, Alex [Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, NC 27709 (United States); Farnham, Rodrigo; Huynh, Nen; Chang, Jen-Mei [Department of Mathematics and Statistics, California State University Long Beach, Long Beach, CA 90840 (United States); Bertozzi, Andrea L., E-mail: bertozzi@math.ucla.edu [Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095 (United States); Ashby, Paul D., E-mail: pdashby@lbl.gov [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2014-02-01

    We propose a novel method to detect and correct drift in non-raster scanning probe microscopy. In conventional raster scanning drift is usually corrected by subtracting a fitted polynomial from each scan line, but sample tilt or large topographic features can result in severe artifacts. Our method uses self-intersecting scan paths to distinguish drift from topographic features. Observing the height differences when passing the same position at different times enables the reconstruction of a continuous function of drift. We show that a small number of self-intersections is adequate for automatic and reliable drift correction. Additionally, we introduce a fitness function which provides a quantitative measure of drift correctability for any arbitrary scan shape. - Highlights: • We propose a novel height drift correction method for non-raster SPM. • Self-intersecting scans enable the distinction of drift from topographic features. • Unlike conventional techniques our method is unsupervised and tilt-invariant. • We introduce a fitness measure to quantify correctability for general scan paths.
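
    The reconstruction of drift from self-intersections can be sketched as a least-squares fit. This is a simplified illustration of the idea, not the authors' algorithm (they reconstruct a continuous drift function; here the drift is modeled as a polynomial, and all names are invented):

```python
import numpy as np

def fit_drift(t1, t2, dh, degree=3):
    """Estimate a smooth drift function from self-intersection data.

    Each self-intersection visited at times t1[i] < t2[i] shows a height
    difference dh[i] = d(t2[i]) - d(t1[i]), where d(t) is the drift.
    Model d(t) as a polynomial with d(0) = 0 and solve for its
    coefficients by least squares.
    """
    t1, t2, dh = map(np.asarray, (t1, t2, dh))
    # Column k holds t2^k - t1^k for k = 1..degree (constant term cancels).
    powers = np.arange(1, degree + 1)
    A = t2[:, None] ** powers - t1[:, None] ** powers
    coeffs, *_ = np.linalg.lstsq(A, dh, rcond=None)
    return lambda t: np.asarray(t)[..., None] ** powers @ coeffs
```

    Subtracting the fitted d(t) from the measured heights then removes the drift while leaving genuine topography, which appears identically at both visit times, untouched.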

  3. A Review of Target Mass Corrections

    Energy Technology Data Exchange (ETDEWEB)

    I. Schienbein; V. Radescu; G. Zeller; M. E. Christy; C. E. Keppel; K. S. McFarland; W. Melnitchouk; F. I. Olness; M. H. Reno; F. Steffens; J.-Y. Yu

    2007-09-06

    With recent advances in the precision of inclusive lepton-nuclear scattering experiments, it has become apparent that comparable improvements are needed in the accuracy of the theoretical analysis tools. In particular, when extracting parton distribution functions in the large-x region, it is crucial to correct the data for effects associated with the nonzero mass of the target. We present here a comprehensive review of these target mass corrections (TMC) to structure functions data, summarizing the relevant formulas for TMCs in electromagnetic and weak processes. We include a full analysis of both hadronic and partonic masses, and trace how these effects appear in the operator product expansion and the factorized parton model formalism, as well as their limitations when applied to data in the x -> 1 limit. We evaluate the numerical effects of TMCs on various structure functions, and compare fits to data with and without these corrections.

  4. Statistical Algorithm for the Adaptation of Detection Thresholds

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    Many event detection mechanisms in spark ignition automotive engines are based on the comparison of the engine signals to the detection threshold values. Different signal qualities for new and aged engines necessitate the development of an adaptation algorithm for the detection thresholds...... remains constant regardless of engine age and changing detection threshold values. This, in turn, guarantees the same event detection performance for new and aged engines/sensors. Adaptation of the engine knock detection threshold is given as an example. Udgivelsesdato: 2008...
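
    The abstract is truncated and does not spell out the algorithm, so the following is only a generic sketch of the idea of detection-threshold adaptation (all names and the exponential-forgetting scheme are assumptions, not the paper's method): track the signal statistics recursively and keep the threshold a fixed number of standard deviations above the mean, so detection performance stays roughly constant as the signal quality changes.

```python
import numpy as np

def adapt_threshold(signal, k=3.0, alpha=0.02):
    """Recursively adapt a detection threshold to a changing signal.

    Generic sketch: track the signal's mean and variance with exponential
    forgetting (factor alpha) and place the threshold k standard
    deviations above the mean, keeping the false-alarm rate roughly
    constant as the engine/sensor ages.
    """
    mean, var = signal[0], 0.0
    thresholds = []
    for x in signal:
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
        thresholds.append(mean + k * np.sqrt(var))
    return np.array(thresholds)
```

    A noisier (aged) sensor then automatically raises the threshold, while a clean signal lowers it.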

  5. Perspective: Uses and misuses of thresholds in diagnostic decision making.

    Science.gov (United States)

    Warner, Jeremy L; Najarian, Robert M; Tierney, Lawrence M

    2010-03-01

    The concept of thresholds plays a vital role in decisions involving the initiation, continuation, and completion of diagnostic testing. Much research has focused on the development of explicit thresholds, in the form of practice guidelines and decision analyses. However, these tools are used infrequently; most medical decisions are made at the bedside, using implicit thresholds. Study of these thresholds can lead to a deeper understanding of clinical decision making. The authors examine some factors constituting individual clinicians' implicit thresholds. They propose a model for static thresholds using the concept of situational gravity to explain why some thresholds are high, and some low. Next, they consider the hypothetical effects of incorrect placement of thresholds (miscalibration) and changes to thresholds during diagnosis (manipulation). They demonstrate these concepts using common clinical scenarios. Through analysis of miscalibration of thresholds, the authors demonstrate some common maladaptive clinical behaviors, which are nevertheless internally consistent. They then explain how manipulation of thresholds gives rise to common cognitive heuristics including premature closure and anchoring. They also discuss the case where no threshold has been exceeded despite exhaustive collection of data, which commonly leads to application of the availability or representativeness heuristics. Awareness of implicit thresholds allows for a more effective understanding of the processes of medical decision making and, possibly, to the avoidance of detrimental heuristics and their associated medical errors. Research toward accurately defining these thresholds for individual physicians and toward determining their dynamic properties during the diagnostic process may yield valuable insights.
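
    For contrast with the implicit thresholds discussed above, the classic explicit model is the Pauker-Kassirer treatment threshold: act when the disease probability exceeds the ratio of the harm of treating the non-diseased to the sum of that harm and the benefit of treating the diseased. A minimal sketch (the paper itself does not give this code):

```python
def treatment_threshold(harm, benefit):
    """Pauker-Kassirer treatment threshold.

    Treat when P(disease) > harm / (harm + benefit), where `harm` is the
    net cost of treating a patient without the disease and `benefit` is
    the net gain from treating a patient with it. Equal harm and benefit
    give a threshold of 0.5; a safe, effective treatment pushes it low.
    """
    return harm / (harm + benefit)
```

    Miscalibration, in these terms, is holding a threshold far from the value this trade-off would justify.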

  6. Soft-gluon resummation for high-pT inclusive-hadron production at COMPASS

    International Nuclear Information System (INIS)

    Pfeuffer, Melanie

    2013-01-01

    One of the experiments that may be used to probe the nucleon's gluon distribution is the fixed-target lepton scattering experiment COMPASS at CERN, where charged hadrons with high transverse momentum are observed. An aspect that makes the COMPASS experiment quite challenging for the theoretical calculation in perturbative QCD is its fixed-target regime. The hadron's transverse momentum is relatively large compared to the available center-of-mass energy. Thus the partonic process is close to the threshold, where all available partonic center-of-mass energy is just used to produce the high-transverse momentum parton that subsequently hadronizes into the observed hadron, and its recoiling counterpart. Additional real gluon radiation is strongly suppressed and therefore mostly constrained to the emission of soft and/or collinear gluons. This results in a strong imbalance between real and virtual gluon diagrams and the cancellation of infrared singularities leaves behind large logarithmic corrections to the leading order cross section. These logarithms are not only present in the next-to-leading (NLO) corrections, but appear also in all higher order corrections in its perturbation expansion. They dominate the cross section in the kinematic region close to the threshold and thus have to be taken into account order-by-order. A technique that addresses these logarithms is known as threshold resummation. The main goal of this work is to investigate the relevance of higher-order QCD corrections of the unpolarized photoproduction reaction in fixed-target scattering at COMPASS, where the hadron is produced at large transverse momentum. In particular the large logarithmic threshold corrections to the partonic cross sections are addressed, which are resummed to all orders at next-to-leading logarithmic (NLL) accuracy. 
As a new technical ingredient of the resummation, the rapidity dependence of the cross section in the resummed calculation is fully included in order to account for all

  7. Is action potential threshold lowest in the axon?

    NARCIS (Netherlands)

    Kole, Maarten H. P.; Stuart, Greg J.

    2008-01-01

    Action potential threshold is thought to be lowest in the axon, but when measured using conventional techniques, we found that action potential voltage threshold of rat cortical pyramidal neurons was higher in the axon than at other neuronal locations. In contrast, both current threshold and voltage

  8. Minimum Transendothelial Electrical Resistance Thresholds for the Study of Small and Large Molecule Drug Transport in a Human in Vitro Blood-Brain Barrier Model.

    Science.gov (United States)

    Mantle, Jennifer L; Min, Lie; Lee, Kelvin H

    2016-12-05

    A human cell-based in vitro model that can accurately predict drug penetration into the brain as well as metrics to assess these in vitro models are valuable for the development of new therapeutics. Here, human induced pluripotent stem cells (hPSCs) are differentiated into a polarized monolayer that express blood-brain barrier (BBB)-specific proteins and have transendothelial electrical resistance (TEER) values greater than 2500 Ω·cm². By assessing the permeabilities of several known drugs, a benchmarking system to evaluate brain permeability of drugs was established. Furthermore, relationships between TEER and permeability to both small and large molecules were established, demonstrating that different minimum TEER thresholds must be achieved to study the brain transport of these two classes of drugs. This work demonstrates that this hPSC-derived BBB model exhibits an in vivo-like phenotype, and the benchmarks established here are useful for assessing functionality of other in vitro BBB models.

  9. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the Lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z⁰ partial widths, are discussed in detail. Finally the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies, and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  10. Gain and Threshold Current in Type II In(As)Sb Mid-Infrared Quantum Dot Lasers

    Directory of Open Access Journals (Sweden)

    Qi Lu

    2015-04-01

    In this work, we improved the performance of mid-infrared type II InSb/InAs quantum dot (QD) laser diodes by incorporating a lattice-matched p-InAsSbP cladding layer. The resulting devices exhibited emission around 3.1 µm and operated up to 120 K in pulsed mode, which is the highest working temperature for this type of QD laser. The modal gain was estimated to be 2.9 cm⁻¹ per QD layer. A large blue shift (~150 nm) was observed in the spontaneous emission spectrum below threshold due to charging effects. Because of the QD size distribution, only a small fraction of QDs achieve threshold at the same injection level at 4 K. Carrier leakage from the waveguide into the cladding layers was found to be the main reason for the high threshold current at higher temperatures.

  11. Applying Threshold Concepts to Finance Education

    Science.gov (United States)

    Hoadley, Susan; Wood, Leigh N.; Tickle, Leonie; Kyng, Tim

    2016-01-01

    Purpose: The purpose of this paper is to investigate and identify threshold concepts that are the essential conceptual content of finance programmes. Design/Methodology/Approach: Conducted in three stages with finance academics and students, the study uses threshold concepts as both a theoretical framework and a research methodology. Findings: The…

  12. Defect sizing of post-irradiated nuclear fuels using grayscale thresholding in their radiographic images

    International Nuclear Information System (INIS)

    Chaudhary, Usman Khurshid; Iqbal, Masood; Ahmad, Munir

    2010-01-01

    Quantification of different types of material defects in a number of reference standard post-irradiated nuclear fuel image samples has been carried out by developing a computer program that takes radiographic images of the fuel as input. The program is based on user-adjustable grayscale thresholding in the regime of image segmentation, whereby it selects and counts the pixels having graylevel values less than or equal to the computed threshold. It can size defects due to chipping in nuclear fuel, cracks, voids, melting, deformation, inclusion of foreign materials, heavy isotope accumulation, non-uniformity, etc. The classes of fuel range from those of research and power reactors to fast breeders, and from pellets to annular and vibro-compacted fuel. The program has been validated against ground truth for some locally fabricated metallic plates having drilled holes of known sizes simulated as defects; the results indicate that it either correctly selects and quantifies at least 94% of the actual required regions of interest in a given image or gives a false alarm rate of less than 8.1%. Also, the developed program is independent of image size.
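
    The segmentation step described above (selecting and counting the pixels at or below a gray-level threshold) can be sketched in a few lines. This is a simplified illustration with invented names, not the validated program itself:

```python
import numpy as np

def defect_fraction(image, threshold):
    """Size defects in a radiographic image by grayscale thresholding.

    Minimal sketch of the segmentation step: flag the pixels whose gray
    level is less than or equal to the threshold and report the defect
    area as a fraction of the image, which makes the result independent
    of image size.
    """
    image = np.asarray(image)
    mask = image <= threshold
    return mask, mask.sum() / image.size
```

    The returned mask can then be post-processed (e.g. connected-component labeling) to size individual defects such as cracks or voids.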

  13. Defect sizing of post-irradiated nuclear fuels using grayscale thresholding in their radiographic images

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhary, Usman Khurshid, E-mail: ukhurshid@hotmail.co [Department of Physics and Applied Mathematics, Pakistan Institute of Engineering and Applied Sciences, P.O. Nilore, Islamabad 45650 (Pakistan); Iqbal, Masood, E-mail: masiqbal@hotmail.co [Nuclear Engineering Division, Pakistan Institute of Nuclear Science and Technology, P.O. Nilore, Islamabad 45650 (Pakistan); Ahmad, Munir [Nondestructive Testing Group, Directorate of Technology, Pakistan Institute of Nuclear Science and Technology, P.O. Nilore, Islamabad 45650 (Pakistan)

    2010-10-15

    Quantification of different types of material defects in a number of reference standard post-irradiated nuclear fuel image samples has been carried out by developing a computer program that takes radiographic images of the fuel as input. The program is based on user-adjustable grayscale thresholding in the image-segmentation regime, whereby it selects and counts the pixels with graylevel values less than or equal to the computed threshold. It can size defects due to chipping in nuclear fuel, cracks, voids, melting, deformation, inclusion of foreign materials, heavy isotope accumulation, non-uniformity, etc. The classes of fuel range from those of research and power reactors to fast breeders, and from pellets to annular and vibro-compacted fuel. The program has been validated against ground-truth measurements of locally fabricated metallic plates with drilled holes of known sizes simulated as defects; the results indicate that it either correctly selects and quantifies at least 94% of the actual required regions of interest in a given image or gives a false alarm rate below 8.1%. The developed program is also independent of image size.

  14. Correction of chromatic aberration in electrostatic lens systems containing quadrupoles

    International Nuclear Information System (INIS)

    Baranova, L.A.; Ul'yanova, N.S.; Yavor, S.Ya.

    1991-01-01

    The possibility of chromatic aberration correction in immersion systems consisting of axisymmetric and quadrupole lenses is shown, and concrete examples are presented. A number of new directions in science and technology that use ion beams are currently under intensive development. In these applications an acute need for chromatic aberration correction arises, since a large energy spread in the beam is typically observed.

  15. Empirical gradient threshold technique for automated segmentation across image modalities and cell lines.

    Science.gov (United States)

    Chalfoun, J; Majurski, M; Peskin, A; Breen, C; Bajcsy, P; Brady, M

    2015-10-01

    New microscopy technologies are enabling image acquisition of terabyte-sized data sets consisting of hundreds of thousands of images. In order to retrieve and analyze the biological information in these large data sets, segmentation is needed to detect the regions containing cells or cell colonies. Our work with hundreds of large images (each 21,000×21,000 pixels) requires a segmentation method that: (1) yields high segmentation accuracy, (2) is applicable to multiple cell lines with various densities of cells and cell colonies, and several imaging modalities, (3) can process large data sets in a timely manner, (4) has a low memory footprint and (5) has a small number of user-set parameters that do not require adjustment during the segmentation of large image sets. None of the currently available segmentation methods meet all these requirements. Segmentation based on image gradient thresholding is fast and has a low memory footprint. However, existing techniques that automate the selection of the gradient image threshold do not work across image modalities, multiple cell lines, and a wide range of foreground/background densities (requirement 2) and all failed the requirement for robust parameters that do not require re-adjustment with time (requirement 5). We present a novel and empirically derived image gradient threshold selection method for separating foreground and background pixels in an image that meets all the requirements listed above. We quantify the difference between our approach and existing ones in terms of accuracy, execution speed, memory usage and number of adjustable parameters on a reference data set. This reference data set consists of 501 validation images with manually determined segmentations and image sizes ranging from 0.36 Megapixels to 850 Megapixels. It includes four different cell lines and two image modalities: phase contrast and fluorescent. Our new technique, called Empirical Gradient Threshold (EGT), is derived from this reference
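A rough sketch of gradient-threshold segmentation is shown below. This is not the published EGT model, which fits an empirical function to the gradient histogram; the percentile cutoff here is a stand-in assumption, and the function name and synthetic image are hypothetical.

```python
import numpy as np

def gradient_threshold_segment(image, percentile=99.0):
    """Simplified gradient-threshold segmentation: pixels whose gradient
    magnitude exceeds a chosen percentile of the gradient histogram are
    marked as foreground. (The published EGT method instead derives the
    threshold empirically from the histogram's shape.)"""
    gy, gx = np.gradient(image.astype(float))   # per-axis finite differences
    mag = np.hypot(gx, gy)                      # gradient magnitude
    thresh = np.percentile(mag, percentile)
    return mag > thresh

# Synthetic image: one bright "colony" on a dark background.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
mask = gradient_threshold_segment(img, percentile=99.0)
```

Only the cell boundary has nonzero gradient, so the resulting mask marks a thin edge band rather than the whole foreground; a full pipeline would follow this with morphological filling, as edge-based segmenters typically do.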

  16. The asymmetry of U.S. monetary policy: Evidence from a threshold Taylor rule with time-varying threshold values

    Science.gov (United States)

    Zhu, Yanli; Chen, Haiqiang

    2017-05-01

    In this paper, we revisit the issue of whether U.S. monetary policy is asymmetric by estimating a forward-looking threshold Taylor rule with quarterly data from 1955 to 2015. In order to capture the potential heterogeneity of the regime-shift mechanism under different economic conditions, we modify the threshold model by assuming the threshold value to be a latent variable following an autoregressive (AR) dynamic process. We use the unemployment rate as the threshold variable and separate the sample into two periods: expansion periods and recession periods. Our findings support that U.S. monetary policy operations are asymmetric in these two regimes. More precisely, the monetary authority tends to implement an active Taylor rule with a weaker response to the inflation gap (the deviation of inflation from its target) and a stronger response to the output gap (the deviation of output from its potential level) in recession periods. The threshold value, interpreted as the targeted unemployment rate of the monetary authorities, exhibits significant time-varying properties, confirming the conjecture that policy makers may adjust their reference point for the unemployment rate to reflect their view of the health of the general economy.
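A stylized threshold Taylor rule with a fixed (non-latent) threshold can be written as follows. The coefficient values, the 6% unemployment threshold, and the 2% natural rate are illustrative assumptions, not the paper's estimates, and the paper's threshold is itself time-varying.

```python
def taylor_rate(inflation, inflation_gap, output_gap, unemployment,
                u_threshold=6.0, r_star=2.0):
    """Stylized threshold Taylor rule: the policy-response coefficients
    switch when unemployment crosses a threshold. Above the threshold
    (recession regime) the response to inflation is weaker and the
    response to the output gap is stronger, as the paper finds."""
    if unemployment > u_threshold:     # recession regime
        a_pi, a_y = 0.5, 1.0
    else:                              # expansion regime
        a_pi, a_y = 1.5, 0.5
    return r_star + inflation + a_pi * inflation_gap + a_y * output_gap
```

With 2% inflation, a 1-point inflation gap, and a -2-point output gap, the rule prescribes a much lower rate at 8% unemployment than the same inputs would at 4% unemployment, which is the asymmetry in miniature.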

  17. Model Threshold untuk Pembelajaran Memproduksi Pantun Kelas XI

    Directory of Open Access Journals (Sweden)

    Fitri Nura Murti

    2017-03-01

    Full Text Available Abstract: Pantun teaching in schools has so far given students little opportunity to develop creativity in producing pantun. This is supported by observations of eleventh graders at SMAN 2 Bondowoso, which showed a tendency toward plagiarism in the students' pantun. The general objective of this research and development study is to develop the Threshold Pantun model for teaching eleventh graders to produce pantun. The product is presented as a guidebook for teachers entitled "Pembelajaran Memproduksi Pantun Menggunakan Model Threshold Pantun untuk Kelas XI". The study adapted the research and development procedure of Borg and Gall. The validation results show that the Threshold Pantun model is appropriate for learning to produce pantun. Key Words: Threshold Pantun model, producing pantun

  18. Identification of ecological thresholds from variations in phytoplankton communities among lakes: contribution to the definition of environmental standards.

    Science.gov (United States)

    Roubeix, Vincent; Danis, Pierre-Alain; Feret, Thibaut; Baudoin, Jean-Marc

    2016-04-01

    In aquatic ecosystems, the identification of ecological thresholds may be useful for managers as it can help to diagnose ecosystem health and to identify key levers to enable the success of preservation and restoration measures. A recent statistical method, gradient forest, based on random forests, was used to detect thresholds of phytoplankton community change in lakes along different environmental gradients. It performs exploratory analyses of multivariate biological and environmental data to estimate the location and importance of community thresholds along gradients. The method was applied to a data set of 224 French lakes which were characterized by 29 environmental variables and the mean abundances of 196 phytoplankton species. Results showed the high importance of geographic variables for the prediction of species abundances at the scale of the study. A second analysis was performed on a subset of lakes defined by geographic thresholds and presenting a higher biological homogeneity. Community thresholds were identified for the most important physico-chemical variables, including water transparency, total phosphorus, ammonia, nitrates, and dissolved organic carbon. Gradient forest appeared to be a powerful method, as a first exploratory step, for detecting ecological thresholds at large spatial scales. The thresholds identified here must be reinforced by separate analyses of other aquatic communities and may then be used to set protective environmental standards after consideration of the natural variability among lakes.

  19. QRS Detection Based on Improved Adaptive Threshold

    Directory of Open Access Journals (Sweden)

    Xuanyu Lu

    2018-01-01

    Full Text Available Cardiovascular disease is the first cause of death around the world. Automatic electrocardiogram (ECG) analysis algorithms play an important role in accomplishing quick and accurate diagnosis, and their first step is QRS detection. The threshold algorithm for QRS complex detection is known for its high-speed computation and minimal memory requirements. In this mobile era, threshold algorithms can easily be ported to portable, wearable, and wireless ECG systems. However, the detection rate of the threshold algorithm still calls for improvement. An improved adaptive threshold algorithm for QRS detection is reported in this paper. The main steps of this algorithm are preprocessing, peak finding, and adaptive-threshold QRS detection. The detection rate is 99.41%, the sensitivity (Se) is 99.72%, and the specificity (Sp) is 99.69% on the MIT-BIH Arrhythmia database. A comparison with two other algorithms is also made to demonstrate its superiority. The suspicious abnormal area is indicated at the end of the algorithm, and an RR-Lorenz plot is drawn for doctors and cardiologists to use as an aid for diagnosis.
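The two main steps named above, peak finding plus an adaptive threshold that tracks recent R-peak amplitude, can be sketched as below. This is an illustrative simplification, not the authors' implementation: the 0.875/0.125 smoothing weights and 200 ms refractory period are conventional Pan-Tompkins-style assumptions.

```python
import numpy as np

def detect_qrs(signal, fs, init_frac=0.5):
    """Minimal adaptive-threshold QRS detector (illustrative sketch).
    The detection threshold is a fraction of a running estimate of
    recent R-peak amplitude, updated after each detected beat."""
    signal = np.asarray(signal, dtype=float)
    refractory = int(0.2 * fs)          # 200 ms blanking after each beat
    # Seed the peak estimate from the first two seconds of signal.
    seed = signal[:2 * fs] if signal.size >= 2 * fs else signal
    peak_est = float(np.max(seed))
    threshold = init_frac * peak_est
    peaks, last = [], -refractory
    for i in range(1, signal.size - 1):
        if (i - last > refractory and signal[i] > threshold
                and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]):
            peaks.append(i)
            last = i
            peak_est = 0.875 * peak_est + 0.125 * signal[i]
            threshold = init_frac * peak_est
    return peaks

# Synthetic "ECG": unit impulses once per second at 250 Hz sampling.
fs = 250
sig = np.zeros(5 * fs)
sig[::fs] = 1.0
beats = detect_qrs(sig, fs)
```

On the synthetic impulse train this finds the four interior beats (the sample at index 0 sits on the array boundary and is skipped by design).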

  20. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that the empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
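The Bartlett-correction principle, rescaling the statistic so its mean matches the nominal degrees of freedom, can be illustrated on simulated statistics. The paper estimates the correction from model characteristics (N, p, and the model structure) rather than from raw replications, so this is only a sketch of the principle.

```python
import numpy as np

def empirical_correction(stats, df):
    """Rescale test statistics so their empirical mean equals the
    degrees of freedom of the nominal chi-square reference, the core
    idea of a Bartlett-type mean correction."""
    stats = np.asarray(stats, dtype=float)
    c = df / np.mean(stats)
    return c * stats

rng = np.random.default_rng(0)
df = 10
# Simulated "T_ML" statistics inflated by 30%, so the uncorrected
# statistic rejects the correct model too often.
inflated = rng.chisquare(df, size=5000) * 1.3
corrected = empirical_correction(inflated, df)
```

After correction the empirical mean of the statistic matches df by construction, which pulls its upper quantiles back toward the nominal chi-square quantiles.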

  1. Medium corrections to nucleon-nucleon interactions

    International Nuclear Information System (INIS)

    Dortmans, P.J.; Amos, K.

    1990-01-01

    The Bethe-Goldstone equations have been solved for both negative and positive energies to specify two-nucleon G-matrices fully off the energy shell. Medium-correction effects of Pauli blocking and of the auxiliary potential are included in infinite-matter systems characterized by Fermi momenta in the range 0.5 fm^-1 to 1.8 fm^-1. The Paris interaction is used as the starting potential in most calculations. Medium corrections are shown to be very significant over a large range of energies and densities. On the energy shell, values of the G-matrices vary markedly from those of free two-nucleon (NN) t-matrices obtained by solving the Lippmann-Schwinger equation. Off the energy shell, however, the free and medium-corrected Kowalski-Noyes f-ratios are quite similar, suggesting that a useful model of medium-corrected G-matrices is appropriately scaled free NN t-matrices. The choice of auxiliary potential form is also shown to play a decisive role in the negative-energy regime, especially when the saturation of nuclear matter is considered. 30 refs., 7 tabs., 7 figs

  2. Near-threshold photoionization of hydrogenlike uranium studied in ion-atom collisions via the time-reversed process.

    Science.gov (United States)

    Stöhlker, T; Ma, X; Ludziejewski, T; Beyer, H F; Bosch, F; Brinzanescu, O; Dunford, R W; Eichler, J; Hagmann, S; Ichihara, A; Kozhuharov, C; Krämer, A; Liesen, D; Mokler, P H; Stachura, Z; Swiat, P; Warczak, A

    2001-02-05

    Radiative electron capture, the time-reversed photoionization process occurring in ion-atom collisions, provides presently the only access to photoionization studies for very highly charged ions. By applying the deceleration mode of the ESR storage ring, we studied this process in low-energy collisions of bare uranium ions with low-Z target atoms. This technique allows us to extend the current information about photoionization to much lower energies than those accessible for neutral heavy elements in the direct reaction channel. The results prove that for high-Z systems, higher-order multipole contributions and magnetic corrections persist even at energies close to the threshold.

  3. Characterization of Mode I and Mode II delamination growth and thresholds in AS4/PEEK composites

    Science.gov (United States)

    Martin, Roderick H.; Murri, Gretchen Bostaph

    1990-01-01

    Composite materials often fail by delamination. The onset and growth of delamination in AS4/PEEK, a tough thermoplastic-matrix composite, was characterized for mode I and mode II loadings using the Double Cantilever Beam (DCB) and the End Notched Flexure (ENF) test specimens. Delamination growth per fatigue cycle, da/dN, was related to the strain energy release rate, G, by means of a power law. However, the exponents of these power laws were too large for them to be used reliably as a life-prediction tool: a small error in the estimated applied loads could lead to large errors in the predicted delamination growth rates. Hence, strain energy release rate thresholds, G_th, below which no delamination would occur, were also measured. Mode I and mode II threshold G values for no delamination growth were found by monitoring the number of cycles to delamination onset in the DCB and ENF specimens. The maximum applied G for which no delamination growth had occurred after at least 1,000,000 cycles was considered the threshold strain energy release rate. Comments are given on how testing effects, such as facial interference or delamination-front damage, may invalidate the experimental determination of the constants in the expression.
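The power-law relation da/dN = C·G^n and its sensitivity to load errors can be illustrated with a quick log-log fit. The G values, the exponent of 6, and the coefficient below are hypothetical, not the AS4/PEEK values measured in the study.

```python
import numpy as np

def fit_paris_power_law(G, dadN):
    """Fit da/dN = C * G**n by linear least squares in log-log space,
    the standard way such fatigue power laws are calibrated."""
    n, logC = np.polyfit(np.log(G), np.log(dadN), 1)
    return np.exp(logC), n

# Hypothetical strain energy release rates (J/m^2) and growth rates
# generated from an assumed steep power law with exponent 6.
G = np.array([100.0, 200.0, 400.0, 800.0])
dadN = 1e-12 * G**6.0
C, n = fit_paris_power_law(G, dadN)
```

The steep exponent is the point of the abstract's warning: since G scales with the square of the applied load, a 5% load error is roughly a 10% error in G, which with n = 6 changes the predicted growth rate by a factor of about 1.1^6 ≈ 1.8. A threshold G_th is a far more robust design quantity.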

  4. Electroweak corrections in the hadronic production of heavy quarks; Elektroschwache Korrekturen bei der hadronischen Erzeugung schwerer Quarks

    Energy Technology Data Exchange (ETDEWEB)

    Scharf, Andreas Bernhard

    2008-06-27

    In this thesis the electroweak corrections to top-quark pair production and to the production of bottom-quark jets were studied. In particular, mixed one-loop amplitudes as well as the interferences of electroweak Born amplitudes with one-loop QCD corrections were calculated. These corrections are of great importance for the experimental analyses at the LHC. For both processes compact analytical results for the virtual and real corrections were obtained. For the Tevatron and the LHC the corrections to the total cross section for top-quark pair production were determined. At the Tevatron these corrections are only a few per mille and therefore presumably negligible for the total cross section. For the LHC they amount to a few percent and are thus of the same order of magnitude as the expected next-to-leading-order QCD corrections to the total cross section. For the differential distributions in M_ttbar and p_T the relative corrections lie, depending on the Higgs mass, between +4% and -6%. A comparison between the integrated distributions in p_T and M_ttbar and the estimated statistical error shows that these corrections are presently not important. At the LHC, for the M_ttbar and p_T distributions, large negative corrections of up to -15% and -20%, respectively, were found for M_ttbar = 5 TeV (p_T = 2 TeV), depending on the Higgs mass. The comparison between the integrated distributions and the statistical error shows that the weak O(alpha) corrections at the LHC are phenomenologically relevant. This holds especially for the search for new physics at large M_ttbar. For bottom-jet production the weak O(alpha) corrections to the differential and integrated p_T distributions were calculated for a single and a double b-tag. At the Tevatron the corrections for a single b-tag for the

  5. Environment and host as large-scale controls of ectomycorrhizal fungi.

    Science.gov (United States)

    van der Linde, Sietse; Suz, Laura M; Orme, C David L; Cox, Filipa; Andreae, Henning; Asi, Endla; Atkinson, Bonnie; Benham, Sue; Carroll, Christopher; Cools, Nathalie; De Vos, Bruno; Dietrich, Hans-Peter; Eichhorn, Johannes; Gehrmann, Joachim; Grebenc, Tine; Gweon, Hyun S; Hansen, Karin; Jacob, Frank; Kristöfel, Ferdinand; Lech, Paweł; Manninger, Miklós; Martin, Jan; Meesenburg, Henning; Merilä, Päivi; Nicolas, Manuel; Pavlenda, Pavel; Rautio, Pasi; Schaub, Marcus; Schröck, Hans-Werner; Seidling, Walter; Šrámek, Vít; Thimonier, Anne; Thomsen, Iben Margrete; Titeux, Hugues; Vanguelova, Elena; Verstraeten, Arne; Vesterdal, Lars; Waldner, Peter; Wijk, Sture; Zhang, Yuxin; Žlindra, Daniel; Bidartondo, Martin I

    2018-06-06

    Explaining the large-scale diversity of soil organisms that drive biogeochemical processes, and their responses to environmental change, is critical. However, identifying consistent drivers of belowground diversity and abundance for some soil organisms at large spatial scales remains problematic. Here we investigate a major guild, the ectomycorrhizal fungi, across European forests at a spatial scale and resolution that is, to our knowledge, unprecedented, to explore key biotic and abiotic predictors of ectomycorrhizal diversity and to identify dominant responses and thresholds for change across complex environmental gradients. We show the effect of 38 host, environment, climate and geographical variables on ectomycorrhizal diversity, and define thresholds of community change for key variables. We quantify host specificity and reveal plasticity in functional traits involved in soil foraging across gradients. We conclude that environmental and host factors explain most of the variation in ectomycorrhizal diversity, that the environmental thresholds used as major ecosystem assessment tools need adjustment, and that the importance of belowground specificity and plasticity has previously been underappreciated.

  6. Analysis of ecological thresholds in a temperate forest undergoing dieback.

    Directory of Open Access Journals (Sweden)

    Philip Martin

    Full Text Available Positive feedbacks in drivers of degradation can cause threshold responses in natural ecosystems. Though threshold responses have received much attention in studies of aquatic ecosystems, they have been neglected in terrestrial systems, such as forests, where the long time-scales required for monitoring have impeded research. In this study we explored the role of positive feedbacks in a temperate forest that has been monitored for 50 years and is undergoing dieback, largely as a result of death of the canopy dominant species (Fagus sylvatica, beech). Statistical analyses showed strong non-linear losses in basal area for some plots, while others showed relatively gradual change. Beech seedling density was positively related to canopy openness, but a similar relationship was not observed for saplings, suggesting a feedback whereby mortality in areas with high canopy openness was elevated. We combined this observation with empirical data on size- and growth-mediated mortality of trees to produce an individual-based model of forest dynamics. We used this model to simulate changes in the structure of the forest over 100 years under scenarios with different juvenile and mature mortality probabilities, as well as a positive feedback between seedling and mature tree mortality. This model produced declines in forest basal area when critical juvenile and mature mortality probabilities were exceeded. Feedbacks in juvenile mortality caused a greater reduction in basal area relative to scenarios with no feedback. Non-linear, concave declines of basal area occurred only when mature tree mortality was 3-5 times higher than rates observed in the field. Our results indicate that the longevity of trees may help to buffer forests against environmental change and that the maintenance of old, large trees may aid the resilience of forest stands. 
In addition, our work suggests that dieback of forests may be avoidable providing pressures on mature and juvenile trees do

  7. Mortality on extreme heat days using official thresholds in Spain: a multi-city time series analysis

    Directory of Open Access Journals (Sweden)

    Tobias Aurelio

    2012-02-01

    Full Text Available Abstract Background The 2003 heat wave had a high impact on mortality in Europe, which made it necessary to develop heat health watch warning systems. In Spain this was carried out by the Ministry of Health in 2004, based on the simultaneous exceedance of city-specific thresholds of minimum and maximum daily temperatures. The aim of this study is to assess the effectiveness of the official thresholds established by the Ministry of Health for each provincial capital city, by quantifying and comparing the short-term effects of above-threshold days on total daily mortality. Methods Total daily mortality and minimum and maximum temperatures for the 52 provincial capitals in Spain were collected during the summer months (June to September) of the study period 1995-2004. Data were analysed using GEE for Poisson regression. The Relative Risk (RR) of total daily mortality was quantified for the current day when official thresholds were exceeded. Results The number of days on which the thresholds were exceeded shows great inconsistency, with provinces with a great number of exceeded days adjacent to provinces that never or rarely exceeded them. The average overall excess risk of dying during an extreme heat day was about 25% (RR = 1.24; 95% confidence interval (CI) = [1.19-1.30]). Relative risks showed significant heterogeneity between cities (I2 = 54.9%). Westerly location and low mean summer temperatures were associated with higher relative risks, suggesting thresholds may have been set too high in these areas. Conclusions This study confirmed that extreme heat days have a considerable impact on total daily mortality in Spain. Official thresholds gave consistent relative risks in the large capital cities. However, in some other cities thresholds
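On the Poisson-regression scale, a log-rate coefficient β for the above-threshold indicator gives RR = exp(β), with a Wald interval from its standard error. The β and standard error below are back-calculated to roughly match the reported RR = 1.24 (95% CI 1.19-1.30); they are not taken from the paper.

```python
import math

def relative_risk(beta, se, z=1.96):
    """Convert a Poisson log-rate coefficient for an above-threshold
    indicator into a relative risk with a Wald confidence interval."""
    rr = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return rr, lo, hi

# Illustrative values chosen to reproduce the reported estimate.
rr, lo, hi = relative_risk(beta=0.215, se=0.023)
```

This recovers an excess risk of about 25% on extreme heat days, matching the abstract's headline figure.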

  8. Threshold enhancement of diphoton resonances

    Directory of Open Access Journals (Sweden)

    Aoife Bharucha

    2016-10-01

    Full Text Available We revisit a mechanism to enhance the decay width of (pseudo)scalar resonances to photon pairs when the process is mediated by loops of charged fermions produced near threshold. Motivated by the recent LHC data, indicating the presence of an excess in the diphoton spectrum at approximately 750 GeV, we illustrate this threshold enhancement mechanism in the case of a 750 GeV pseudoscalar boson A with a two-photon decay mediated by a charged and uncolored fermion having a mass near the (1/2)M_A threshold and a small decay width, <1 MeV. The implications of such a threshold enhancement are discussed in two explicit scenarios: (i) the Minimal Supersymmetric Standard Model, in which the A state is produced via the top-quark-mediated gluon fusion process and decays into photons predominantly through loops of charginos with masses close to (1/2)M_A, and (ii) a two-Higgs-doublet model in which A is again produced by gluon fusion but decays into photons through loops of vector-like charged heavy leptons. In both these scenarios, while the mass of the charged fermion has to be adjusted to be extremely close to half of the A resonance mass, the small total widths are naturally obtained if only suppressed three-body decay channels occur. Finally, the implications of some of these scenarios for dark matter are discussed.

  9. Manifold corrections on spinning compact binaries

    International Nuclear Information System (INIS)

    Zhong Shuangying; Wu Xin

    2010-01-01

    This paper deals mainly with a discussion of three new manifold correction methods and three existing ones, which can numerically preserve or correct all integrals in the conservative post-Newtonian Hamiltonian formulation of spinning compact binaries. Two of them are listed here: one is a new momentum-position scaling scheme for complete consistency of both the total energy and the magnitude of the total angular momentum, and the other is Nacozy's approach with least-squares correction of the four integrals, including the total energy and the total angular momentum vector. The post-Newtonian contributions, the spin effects, and the classification of orbits play an important role in the effectiveness of these six manifold corrections. All of them are nearly equivalent in correcting the integrals at the level of machine epsilon for the pure Kepler problem. Once the third-order post-Newtonian contributions are added to the pure orbital part, three of these corrections have only minor effects on controlling the errors of the integrals. When the spin effects are also included, the effectiveness of Nacozy's approach is weakened further, and it even becomes useless in the chaotic case. In all cases tested, the new momentum-position scaling scheme always shows the optimal performance. It requires a small additional computational cost when the spin effects exist and several time-saving techniques are used. As an interesting case, the efficiency of the correction for chaotic eccentric orbits is generally better than that for quasicircular regular orbits. Besides this, the corrected fast Lyapunov indicators and Lyapunov exponents of chaotic eccentric orbits are large compared with the uncorrected counterparts. The amplification is a true expression of the original dynamical behavior. With the aid of both the manifold correction added to a certain low-order integration algorithm as a fast and high-precision device and the fast Lyapunov
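The flavor of a scaling-type manifold correction can be seen in the simplest setting, the pure Kepler problem: rescale the momentum after each integration step so the Hamiltonian regains its initial energy. The paper's momentum-position scheme scales momenta and positions jointly to fix both the energy and the angular-momentum magnitude; this single-integral version is only a sketch.

```python
import numpy as np

def scale_momentum(q, p, E0, mu=1.0):
    """Rescale the momentum so the Kepler Hamiltonian
    H = |p|^2/2 - mu/|q| regains its initial value E0 (a minimal
    analogue of the manifold corrections discussed above)."""
    r = np.linalg.norm(q)
    s = np.sqrt(2.0 * (E0 + mu / r)) / np.linalg.norm(p)
    return s * p

# Circular orbit for mu = 1 has energy -0.5; simulate a small
# integration-induced drift in the momentum and correct it.
q = np.array([1.0, 0.0])
p = np.array([0.0, 1.0]) * 1.001      # drifted momentum
E0 = -0.5
p_corr = scale_momentum(q, p, E0)
H = 0.5 * p_corr @ p_corr - 1.0 / np.linalg.norm(q)
```

After the correction the energy error is at the level of machine epsilon, mirroring the behavior reported for the Kepler case.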

  10. Software-controlled, highly automated intrafraction prostate motion correction with intrafraction stereographic targeting: System description and clinical results

    International Nuclear Information System (INIS)

    Mutanga, Theodore F.; Boer, Hans C. J. de; Rajan, Vinayakrishnan; Dirkx, Maarten L. P.; Os, Marjolein J. H. van; Incrocci, Luca; Heijmen, Ben J. M.

    2012-01-01

    Purpose: A new system for software-controlled, highly automated correction of intrafraction prostate motion, ''intrafraction stereographic targeting'' (iSGT), is described and evaluated. Methods: At our institute, daily prostate positioning is routinely performed at the start of treatment using stereographic targeting (SGT). iSGT was implemented by extending the SGT software to facilitate fast and accurate intrafraction motion corrections with minimal user interaction. iSGT entails megavoltage (MV) image acquisitions with the first segment of selected IMRT beams, automatic registration of implanted markers, followed by remote couch repositioning to correct for intrafraction motion above a predefined threshold, prior to delivery of the remaining segments. For a group of 120 patients, iSGT with corrections for two nearly lateral beams was evaluated in terms of workload and impact on effective intrafraction displacements in the sagittal plane. Results: The SDs of systematic (Σ) and random (σ) displacements relative to the planning CT were measured directly after the initial SGT setup correction; with iSGT, the effective SDs (Σ_eff, σ_eff) were < 0.7 mm, requiring corrections in 82.4% of the fractions. Because iSGT is highly automated, the extra time added by iSGT is <30 s if a correction is required. Conclusions: Without increasing imaging dose, iSGT successfully reduces intrafraction prostate motion with minimal workload and increase in fraction time. An action level of 2 mm is recommended.
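The correction decision in such a workflow reduces to a threshold test on the registered marker displacement. This sketch uses the 2 mm action level recommended above; the function name and the sign convention for the couch shift are hypothetical.

```python
import numpy as np

def couch_correction(marker_shift_mm, action_level_mm=2.0):
    """Decide the remote couch shift for intrafraction motion:
    correct only when the detected marker displacement exceeds the
    action level, otherwise leave the couch where it is."""
    shift = np.asarray(marker_shift_mm, dtype=float)
    if np.linalg.norm(shift) > action_level_mm:
        return -shift                  # move couch to cancel the motion
    return np.zeros_like(shift)

# A 2.3 mm displacement exceeds the 2 mm action level -> correct.
corr = couch_correction([1.8, -1.5])
```

Sub-threshold displacements produce no couch motion, which is what keeps the added fraction time small when no correction is needed.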

  11. ENTRIA workshop. Determine threshold values in radiation protection

    International Nuclear Information System (INIS)

    Diener, Lisa

    2015-01-01

    Threshold values affect our daily lives. Whether it concerns traffic or noise regulations, we all experience thresholds on a regular basis. But how are such values generated? The conference ''Determine Threshold Values in Radiation Protection'', taking place on January 27th 2015 in Braunschweig, focused on this question. The conference was undertaken in the context of the BMBF-funded interdisciplinary research project ''ENTRIA - Disposal Options for Radioactive Residues''. It aimed to stimulate a cross-disciplinary discussion. Speakers from different disciplinary backgrounds talked about topics like procedures of setting threshold values, standards for evaluating dosages, and public participation in the standardization of threshold values. Two major theses emerged: first, setting threshold values always requires considering contexts and protection targets; second, existing uncertainties must be communicated in and with the public. Altogether, the conference offered much input and many issues for discussion. In addition, it raised interesting and important questions for further and ongoing work in the research project ENTRIA.

  12. Effect of dissipation on dynamical fusion thresholds

    International Nuclear Information System (INIS)

    Sierk, A.J.

    1986-01-01

    The existence of dynamical thresholds to fusion in heavy nuclei (A greater than or equal to 200) due to the nature of the potential-energy surface is shown. These thresholds exist even in the absence of dissipative forces, due to the coupling between the various collective deformation degrees of freedom. Using a macroscopic model of nuclear shape dynamics, it is shown how three different suggested dissipation mechanisms increase, by varying amounts, the excitation energy over the one-dimensional barrier required to cause compound-nucleus formation. The recently introduced surface-plus-window dissipation may give a reasonable representation of experimental data on fusion thresholds, in addition to properly describing fission-fragment kinetic energies and isoscalar giant multipole widths. Scaling of threshold results to asymmetric systems is discussed. 48 refs., 10 figs

  13. Forward-central jet correlations at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Deak, M.; Hautmann, F.; Jung, H.; Antwerpen Univ.; Kutak, K.

    2010-12-01

    For high-p_T forward processes at the Large Hadron Collider (LHC), QCD logarithmic corrections in the hard transverse momentum and in the large rapidity interval may both be quantitatively significant. The theoretical framework to resum consistently both kinds of logarithmic corrections to higher orders in perturbation theory is based on QCD high-energy factorization. We present numerical Monte Carlo applications of this method to final-state observables associated with production of one forward and one central jet. By computing jet correlations in rapidity and azimuth, we analyze the role of corrections to the parton-showering chain from large-angle gluon radiation, and discuss this in relation to Monte Carlo results modeling interactions due to multiple parton chains. (orig.)

  14. Smartphone threshold audiometry in underserved primary health-care contexts.

    Science.gov (United States)

    Sandström, Josefin; Swanepoel, De Wet; Carel Myburgh, Hermanus; Laurent, Claude

    2016-01-01

    To validate a calibrated smartphone-based hearing test in a sound booth environment and in primary health-care clinics. A repeated-measures within-subject study design was employed whereby air-conduction hearing thresholds determined by smartphone-based audiometry were compared to conventional audiometry in a sound booth and a primary health-care clinic environment. A total of 94 subjects (mean age 41 years ± 17.6 SD and range 18-88; 64% female) were assessed, of whom 64 were tested in the sound booth and 30 within primary health-care clinics without a booth. In the sound booth, 63.4% of conventional and smartphone thresholds indicated normal hearing (≤15 dB HL). Conventional thresholds exceeding 15 dB HL corresponded to smartphone thresholds within ≤10 dB in 80.6% of cases with an average threshold difference of -1.6 dB ± 9.9 SD. In primary health-care clinics, 13.7% of conventional and smartphone thresholds indicated normal hearing (≤15 dB HL). Conventional thresholds exceeding 15 dB HL corresponded to smartphone thresholds within ≤10 dB in 92.9% of cases with an average threshold difference of -1.0 dB ± 7.1 SD. Accurate air-conduction audiometry can be conducted in a sound booth and without a sound booth in an underserved community health-care clinic using a smartphone.
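
    The headline validation figures above (share of thresholds agreeing within ≤10 dB, plus the mean difference) are simple paired statistics. A minimal sketch, with hypothetical paired thresholds standing in for the study's raw data:

```python
from statistics import mean

def threshold_agreement(conventional, smartphone, tolerance_db=10):
    """Fraction of paired thresholds agreeing within tolerance_db, plus mean bias.

    Mirrors the reported validation statistics: the share of smartphone
    thresholds within <=10 dB of conventional ones, and the average
    smartphone-minus-conventional difference.
    """
    diffs = [s - c for c, s in zip(conventional, smartphone)]
    within = sum(1 for d in diffs if abs(d) <= tolerance_db) / len(diffs)
    return within, mean(diffs)

# Hypothetical paired thresholds (dB HL); the study's raw data are not given here.
conv = [20, 35, 40, 25, 55, 30]
phone = [20, 30, 45, 25, 50, 45]
frac_within, bias_db = threshold_agreement(conv, phone)
```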

  15. Is the diagnostic threshold for bulimia nervosa clinically meaningful?

    Science.gov (United States)

    Chapa, Danielle A N; Bohrer, Brittany K; Forbush, Kelsie T

    2018-01-01

    The DSM-5 differentiates full- and sub-threshold bulimia nervosa (BN) according to average weekly frequencies of binge eating and inappropriate compensatory behaviors. This study was the first to evaluate the modified frequency criterion for BN published in the DSM-5. The purpose of this study was to test whether community-recruited adults (N=125; 83.2% women) with current full-threshold (n=77) or sub-threshold BN (n=48) differed in comorbid psychopathology and eating disorder (ED) illness duration, symptom severity, and clinical impairment. Participants completed the Clinical Impairment Assessment and participated in semi-structured clinical interviews of ED- and non-ED psychopathology. Differences between the sub- and full-threshold BN groups were assessed using MANOVA and Chi-square analyses. ED illness duration, age-of-onset, body mass index (BMI), alcohol and drug misuse, and the presence of current and lifetime mood or anxiety disorders did not differ between participants with sub- and full-threshold BN. Participants with full-threshold BN had higher levels of clinical impairment and weight concern than those with sub-threshold BN. However, minimal clinically important difference analyses suggested that statistically significant differences between participants with sub- and full-threshold BN on clinical impairment and weight concern were not clinically significant. In conclusion, sub-threshold BN did not differ from full-threshold BN in clinically meaningful ways. Future studies are needed to identify an improved frequency criterion for BN that better distinguishes individuals in ways that will more validly inform prognosis and effective treatment planning for BN. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Time over threshold based multi-channel LuAG-APD PET detector

    International Nuclear Information System (INIS)

    Shimazoe, Kenji; Orita, Tadashi; Nakamura, Yasuaki; Takahashi, Hiroyuki

    2013-01-01

    To achieve efficient signal processing, several time-based positron emission tomography (PET) systems using a large number of granulated gamma-ray detectors have recently been proposed. In the work described here, a 144-channel Pr:LuAG avalanche photodiode (APD) PET detector that uses time over threshold (ToT) and pulse train methods was designed and fabricated. The detector is composed of 12×12 Pr:LuAG crystals, each a 2 mm×2 mm×10 mm pixel individually coupled to a 12×12 APD array, which in turn is connected pixel-by-pixel with one channel of a time over threshold based application-specific integrated circuit (ToT-ASIC) that was designed and fabricated using a 0.25 μm 3.3 V Taiwan Semiconductor Company complementary metal oxide semiconductor (TSMC CMOS) process. The ToT outputs are connected through a field-programmable gate array (FPGA) to a data acquisition (DAQ) system. Three front-end ASIC boards—each incorporating a ToT-ASIC chip, threshold control digital-to-analog converters (DACs), and connectors, and dissipating power at about 230 mW per board—are used to read from the 144-channel LuAG-APD detector. All three boards are connected through an FPGA board that is programmed to calibrate the individual thresholds of the ToT circuits to allow digital multiplexing to form an integrated PET module with a measured timing resolution of 4.2 ns. Images transmitted by this PET system can be successfully acquired through collimation masks. As a further implementation of this technology, an animal PET system consisting of eight gamma pixel modules forming a ring is planned.
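
    The time-over-threshold idea itself is compact: each channel records how long its pulse stays above a comparator threshold, and that width, rather than the pulse height, encodes the deposited energy. A minimal software sketch with a hypothetical sampled pulse (the real ASIC does this with comparators and an FPGA, not code):

```python
def time_over_threshold(samples, threshold, dt):
    """Total time a sampled pulse spends above the comparator threshold.

    In a ToT front end this width, not the pulse amplitude, encodes the
    deposited energy, so each channel needs only a comparator output.
    """
    return sum(dt for v in samples if v > threshold)

# Hypothetical pulse sampled every 1 ns, threshold at 0.3 (arbitrary units).
pulse = [0.0, 0.2, 0.5, 0.9, 1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
tot = time_over_threshold(pulse, threshold=0.3, dt=1.0)  # pulse width in ns
```

Raising the threshold shortens the measured width, which is why the per-channel threshold calibration mentioned above matters for digital multiplexing.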

  17. Two-loop corrections to the triple Higgs boson production cross section

    Energy Technology Data Exchange (ETDEWEB)

    Florian, Daniel de [International Center for Advanced Studies (ICAS), ECyT-UNSAM, Campus Miguelete, 25 de Mayo y Francia (1650) Buenos Aires (Argentina); Mazzitelli, Javier [International Center for Advanced Studies (ICAS), ECyT-UNSAM, Campus Miguelete, 25 de Mayo y Francia (1650) Buenos Aires (Argentina); Physik-Institut, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich (Switzerland)

    2017-02-22

    In this paper we compute the QCD corrections for the triple Higgs boson production cross section via gluon fusion, within the heavy-top approximation. We present, for the first time, analytical results for the next-to-leading order corrections, and also compute the soft and virtual contributions of the next-to-next-to-leading order cross section. We provide predictions for the total cross section and the triple Higgs invariant mass distribution. We find that the QCD corrections are large at both perturbative orders, and that the scale uncertainty is substantially reduced when the second order perturbative corrections are included.

  18. Cost-effectiveness thresholds: pros and cons.

    Science.gov (United States)

    Bertram, Melanie Y; Lauer, Jeremy A; De Joncheere, Kees; Edejer, Tessa; Hutubessy, Raymond; Kieny, Marie-Paule; Hill, Suzanne R

    2016-12-01

    Cost-effectiveness analysis is used to compare the costs and outcomes of alternative policy options. Each resulting cost-effectiveness ratio represents the magnitude of additional health gained per additional unit of resources spent. Cost-effectiveness thresholds allow cost-effectiveness ratios that represent good or very good value for money to be identified. In 2001, the World Health Organization's Commission on Macroeconomics in Health suggested cost-effectiveness thresholds based on multiples of a country's per-capita gross domestic product (GDP). In some contexts, in choosing which health interventions to fund and which not to fund, these thresholds have been used as decision rules. However, experience with the use of such GDP-based thresholds in decision-making processes at country level shows them to lack country specificity and this - in addition to uncertainty in the modelled cost-effectiveness ratios - can lead to the wrong decision on how to spend health-care resources. Cost-effectiveness information should be used alongside other considerations - e.g. budget impact and feasibility considerations - in a transparent decision-making process, rather than in isolation based on a single threshold value. Although cost-effectiveness ratios are undoubtedly informative in assessing value for money, countries should be encouraged to develop a context-specific process for decision-making that is supported by legislation, has stakeholder buy-in, for example the involvement of civil society organizations and patient groups, and is transparent, consistent and fair.
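
    The GDP-based decision rule the article cautions against is easy to state in code. A sketch, assuming the common reading of the 2001 suggestion (below 1× GDP per capita per DALY averted counts as highly cost-effective, below 3× as cost-effective); the intervention figures are hypothetical:

```python
def icer(delta_cost, delta_daly_averted):
    """Incremental cost-effectiveness ratio: extra cost per extra DALY averted."""
    return delta_cost / delta_daly_averted

def gdp_based_verdict(ratio, gdp_per_capita):
    """Classify an ICER against the GDP-multiple thresholds suggested in 2001.

    This is the single-threshold decision rule the article argues should not
    be used in isolation, without budget-impact and feasibility considerations.
    """
    if ratio < gdp_per_capita:
        return "highly cost-effective"
    if ratio < 3 * gdp_per_capita:
        return "cost-effective"
    return "not cost-effective"

# Hypothetical intervention: $400,000 extra cost averting 50 DALYs,
# in a country with GDP per capita of $6,000.
ratio = icer(400_000, 50)                        # cost per DALY averted
verdict = gdp_based_verdict(ratio, gdp_per_capita=6_000)
```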

  20. Investigation of the radiative efficiency and threshold in InGaN laser diodes under the influence of efficiency droop

    International Nuclear Information System (INIS)

    Ryu, Han-Youl

    2012-01-01

    Based on the rate equation model of semiconductor lasers, the radiative efficiency and threshold current density of InGaN-based blue laser diodes (LDs) are theoretically investigated, including the effect of efficiency droop in the InGaN quantum wells. The peak point of the radiative efficiency versus current density relation is used as the parameter of the rate equation analysis. The threshold current density of InGaN blue LDs is found to depend strongly on the maximum radiative efficiency at low current density, implying that improving the maximum efficiency is important to maintain a high radiative efficiency at a large current density and to achieve a low-threshold lasing action under the influence of efficiency droop.
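
    The abstract does not reproduce the rate-equation analysis; as an illustration of a droop-shaped radiative efficiency with a well-defined peak, the widely used ABC recombination model can stand in. The coefficients below are illustrative only, not fitted to any device, and the paper's own model may parametrize the droop differently:

```python
import math

def radiative_efficiency(n, A, B, C):
    """ABC-model internal quantum efficiency at carrier density n.

    A: Shockley-Read-Hall, B: radiative, C: Auger coefficient. The ABC model
    is a common stand-in for efficiency droop, used here as an assumption.
    """
    return B * n / (A + B * n + C * n * n)

def peak_efficiency(A, B, C):
    """Closed-form maximum of the ABC efficiency, reached at n = sqrt(A/C)."""
    return B / (B + 2.0 * math.sqrt(A * C))

# Illustrative coefficients only (s^-1, cm^3/s, cm^6/s).
A, B, C = 1e7, 1e-11, 1e-30
n_peak = math.sqrt(A / C)            # carrier density at maximum efficiency
eta_max = peak_efficiency(A, B, C)   # efficiency then droops above n_peak
```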

  1. Evaluation of the most suitable threshold value for modelling snow glacier melt through the T-index approach: the case study of Forni Glacier (Italian Alps)

    Science.gov (United States)

    Senese, Antonella; Maugeri, Maurizio; Vuillermoz, Elisa; Smiraglia, Claudio; Diolaiuti, Guglielmina

    2014-05-01

    Glacier melt occurs whenever the surface temperature is at the melting point (273.15 K) and the net energy budget is positive. These conditions can be assessed by analyzing meteorological and energy data acquired by a supraglacial Automatic Weather Station (AWS). When such a station is not present at the glacier surface, assessing actual melting conditions and evaluating the melt amount is difficult, and degree-day (also named T-index) models are applied instead. These approaches require the choice of a correct temperature threshold. In fact, melt does not necessarily occur at daily air temperatures higher than 273.15 K, since it is determined by the energy budget, which in turn is only indirectly affected by air temperature. This is the case in the late spring period, when ablation processes start at the glacier surface, progressively reducing snow thickness. In this study, to detect the air temperature threshold most indicative of melt conditions in the April-June period, we analyzed air temperature data recorded from 2006 to 2012 by a supraglacial AWS (at 2631 m a.s.l.) on the ablation tongue of the Forni Glacier (Italy), and by a weather station located near the studied glacier (at Bormio, 1225 m a.s.l.). Moreover, we evaluated the glacier energy budget (which gives the actual melt, Senese et al., 2012) and the snow water equivalent values during this time frame. The ablation amount was then estimated both from the surface energy balance (MEB, from supraglacial AWS data) and from the degree-day method (MT-INDEX, in this latter case applying the mean tropospheric lapse rate to the temperature data acquired at Bormio while varying the air temperature threshold), and the results were compared. We found that the mean tropospheric lapse rate permits a good and reliable reconstruction of daily glacier air temperature conditions and that the major uncertainty in the computation of snow melt from degree-day models is driven by the choice of an appropriate air temperature threshold.
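
    The degree-day (T-index) scheme discussed above reduces to a short computation: extrapolate the station temperature to the glacier elevation with a lapse rate, then accumulate melt in proportion to degrees above the chosen threshold. A sketch with a hypothetical degree-day factor (the study's calibrated values may differ); the elevation difference matches the Bormio station (1225 m) and the Forni AWS (2631 m):

```python
def degree_day_melt(temps_c, ddf=4.5, threshold_c=0.0, lapse_rate=-0.0065, dz=0.0):
    """Total melt (mm w.e.) from a T-index model over a run of daily temperatures.

    temps_c: daily mean air temperatures at a reference station (deg C);
    dz: elevation difference (m) between glacier site and station, bridged with
    a mean tropospheric lapse rate (deg C per m). The degree-day factor ddf is
    a hypothetical value, not the study's calibrated one.
    """
    melt = 0.0
    for t in temps_c:
        t_glacier = t + lapse_rate * dz            # temperature at the glacier site
        melt += ddf * max(t_glacier - threshold_c, 0.0)
    return melt

temps = [8.0, 10.5, 6.0, 12.0]                     # hypothetical station temperatures
melt_mm = degree_day_melt(temps, dz=2631.0 - 1225.0)
```

Raising `threshold_c` suppresses melt on marginal days, which is exactly the sensitivity the study identifies as the dominant uncertainty.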

  2. Cavitation and non-cavitation regime for large-scale ultrasonic standing wave particle separation systems--In situ gentle cavitation threshold determination and free radical related oxidation.

    Science.gov (United States)

    Johansson, Linda; Singh, Tanoj; Leong, Thomas; Mawson, Raymond; McArthur, Sally; Manasseh, Richard; Juliano, Pablo

    2016-01-01

    Here we suggest a novel and straightforward approach for liter-scale ultrasound standing wave particle manipulation systems to guide system design in terms of frequency and acoustic power for operating in either cavitation or non-cavitation regimes, using the sonochemiluminescent chemical luminol. We show that this method offers a simple way of in situ determination of the cavitation threshold for a selected separation vessel geometry. Since the pressure field is system specific, the cavitation threshold is also system specific (for the threshold parameter range). In this study we discuss cavitation effects and also measure one implication of cavitation for the application of milk fat separation, namely the degree of milk fat lipid oxidation, by headspace volatile measurements. For the evaluated vessel, 2 MHz as opposed to 1 MHz operation enabled operation in non-cavitation or low-cavitation conditions as measured by the luminol intensity threshold method. In all cases the lipid oxidation derived volatiles were below the human sensory detection level. Ultrasound treatment did not significantly influence the oxidative changes in milk for either 1 MHz (dose of 46 kJ/L and 464 kJ/L) or 2 MHz (dose of 37 kJ/L and 373 kJ/L) operation. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Summary report of a workshop on establishing cumulative effects thresholds : a suggested approach for establishing cumulative effects thresholds in a Yukon context

    International Nuclear Information System (INIS)

    2003-01-01

    Increasingly, thresholds are being used as a land and cumulative effects assessment and management tool. To assist in the management of wildlife species such as woodland caribou, the Department of Indian and Northern Affairs (DIAND) Environment Directorate, Yukon sponsored a workshop to develop and use cumulative thresholds in the Yukon. The approximately 30 participants reviewed recent initiatives in the Yukon and other jurisdictions. The workshop is expected to help formulate a strategic vision for implementing cumulative effects thresholds in the Yukon. The key to success resides in building relationships with Umbrella Final Agreement (UFA) Boards, the Development Assessment Process (DAP), and the Yukon Environmental and Socio-Economic Assessment Act (YESAA). Broad support is required within an integrated resource management framework. The workshop featured discussions on current science and theory of cumulative effects thresholds. Potential data and implementation issues were also discussed. It was concluded that thresholds are useful and scientifically defensible. The threshold research results obtained in Alberta, British Columbia and the Northwest Territories are applicable to the Yukon. One of the best tools for establishing and tracking thresholds is habitat effectiveness. Effects must be monitored and tracked. Biologists must share their information with decision makers. Interagency coordination and assistance should be facilitated through the establishment of working groups. Regional land use plans should include thresholds. 7 refs.

  4. Frequency threshold for ion beam formation in expanding RF plasma

    Science.gov (United States)

    Chakraborty Thakur, Saikat; Harvey, Zane; Biloiu, Ioana; Hansen, Alex; Hardin, Robert; Przybysz, William; Scime, Earl

    2008-11-01

    We observe a threshold frequency for ion beam formation in expanding, low pressure, argon helicon plasma. Mutually consistent measurements of ion beam energy and density relative to the background ion density obtained with a retarding field energy analyzer and laser induced fluorescence indicate that a stable ion beam of 15 eV appears for source frequencies above 11.5 MHz. Reducing the frequency increases the upstream beam amplitude. Downstream of the expansion region, a clear ion beam is seen only for the higher frequencies. At lower frequencies, large electrostatic instabilities appear and an ion beam is not observed. The upstream plasma density increases sharply at the same threshold frequency that leads to the appearance of a stable double layer. The observations are consistent with the theoretical prediction that downstream electrons accelerated into the source by the double layer lead to increased ionization, thus balancing the higher loss rates upstream [1]. 1. M. A. Lieberman, C. Charles and R. W. Boswell, J. Phys. D: Appl. Phys. 39 (2006) 3294-3304

  5. ‘Soglitude’- introducing a method of thinking thresholds

    Directory of Open Access Journals (Sweden)

    Tatjana Barazon

    2010-04-01

    Full Text Available ‘Soglitude’ is an invitation to acknowledge the existence of thresholds in thought. A threshold in thought designates the indetermination, the passage, the evolution of every state the world is in. The creation we add to it, and the objectivity we suppose, on the border of those two ideas lies our perceptive threshold. No state will ever be permanent, and in order to stress the temporary, fluent character of the world and our perception of it, we want to introduce a new suitable method to think change and transformation, when we acknowledge our own threshold nature. The contributions gathered in this special issue come from various disciplines: anthropology, philosophy, critical theory, film studies, political science, literature and history. The variety of these insights shows the resonance of the idea of threshold in every category of thought. We hope to enlarge the notion in further issues on physics and chemistry, as well as mathematics. The articles in this issue introduce the method of threshold thinking by showing the importance of the in-between, of the changing of perspective in their respective domain. The ‘Documents’ section named INTERSTICES, includes a selection of poems, two essays, a philosophical-artistic project called ‘infraphysique’, a performance on thresholds in the soul, and a dialogue with Israel Rosenfield. This issue presents a kaleidoscope of possible threshold thinking and hopes to initiate new ways of looking at things.For every change that occurs in reality there is a subjective counterpart in our perception and this needs to be acknowledged as such. What we name objective is reflected in our own personal perception in its own personal manner, in such a way that the objectivity of an event might altogether be questioned. The absolute point of view, the view from “nowhere”, could well be the projection that causes dogmatism. By introducing the method of thinking thresholds into a system, be it

  6. Identifying Threshold Concepts for Information Literacy: A Delphi Study

    OpenAIRE

    Lori Townsend; Amy R. Hofer; Silvia Lin Hanick; Korey Brunetti

    2016-01-01

    This study used the Delphi method to engage expert practitioners on the topic of threshold concepts for information literacy. A panel of experts considered two questions. First, is the threshold concept approach useful for information literacy instruction? The panel unanimously agreed that the threshold concept approach holds potential for information literacy instruction. Second, what are the threshold concepts for information literacy instruction? The panel proposed and discussed over fift...

  7. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... out more. Corrective Jaw Surgery Orthognathic surgery is performed to correct the misalignment of jaws ...

  8. Melanin microcavitation threshold in the near infrared

    Science.gov (United States)

    Schmidt, Morgan S.; Kennedy, Paul K.; Vincelette, Rebecca L.; Schuster, Kurt J.; Noojin, Gary D.; Wharmby, Andrew W.; Thomas, Robert J.; Rockwell, Benjamin A.

    2014-02-01

    Thresholds for microcavitation of isolated bovine and porcine melanosomes were determined using single nanosecond (ns) laser pulses in the NIR (1000 - 1319 nm) wavelength regime. Average fluence thresholds for microcavitation increased non-linearly with increasing wavelength. Average fluence thresholds were also measured for 10-ns pulses at 532 nm, and found to be comparable to visible ns pulse values published in previous reports. Fluence thresholds were used to calculate melanosome absorption coefficients, which decreased with increasing wavelength. This trend was found to be comparable to the decrease in retinal pigmented epithelial (RPE) layer absorption coefficients reported over the same wavelength region. Estimated corneal total intraocular energy (TIE) values were determined and compared to the current and proposed maximum permissible exposure (MPE) safe exposure levels. Results from this study support the proposed changes to the MPE levels.

  9. Resonances, cusp effects and a virtual state in e⁻-He scattering near the n = 3 thresholds. [Variational methods, resonance, threshold structures]

    Energy Technology Data Exchange (ETDEWEB)

    Nesbet, R K [International Business Machines Corp., San Jose, Calif. (USA). Research Lab.

    1978-01-14

    Variational calculations locate and identify resonances and new threshold structures in electron impact excitation of He metastable states, in the region of the 3³S and 3¹S excitation thresholds. A virtual state is found at the 3³S threshold.

  10. LHC Orbit Correction Reproducibility and Related Machine Protection

    CERN Document Server

    Baer, T; Schmidt, R; Wenninger, J

    2012-01-01

    The Large Hadron Collider (LHC) has an unprecedented nominal stored beam energy of up to 362 MJ per beam. In order to ensure an adequate machine protection by the collimation system, a high reproducibility of the beam position at collimators and special elements like the final focus quadrupoles is essential. This is realized by a combination of manual orbit corrections, feed forward and real time feedback. In order to protect the LHC against inconsistent orbit corrections, which could put the machine in a vulnerable state, a novel software-based interlock system for orbit corrector currents was developed. In this paper, the principle of the new interlock system is described and the reproducibility of the LHC orbit correction is discussed against the background of this system.

  11. Diagrammatic Approach to Meson Production in Proton-Proton Collisions near Threshold

    International Nuclear Information System (INIS)

    Kaiser, Norbert

    2000-01-01

    We evaluate the threshold T-matrices for the reactions pp → ppπ⁰, pnπ⁺, ppη, ppω, pΛK⁺, and pn → pnη in a relativistic Feynman diagram approach. We employ an effective range approximation to take care of the strong S-wave pN and pΛ final-state interaction. We stress that the heavy baryon formalism is not applicable in the NN-system above the π-production threshold due to the large external momentum, |p| ≅ √(M m_π). The magnitudes of the experimental threshold amplitudes extracted from total cross section data, 𝒜 = (2.7 − 0.3i) fm⁴, ℬ = (2.8 − 1.5i) fm⁴, |𝒞| = 1.32 fm⁴, |Ω| = 0.53 fm⁴, 𝒦 = √(2|K_s|² + |K_t|²) = 0.38 fm⁴, and |𝒟| = 2.3 fm⁴, can be reproduced by (long-range) one-pion exchange and short-range vector meson exchanges, with the latter giving the largest contributions. Pion loop effects in pp → ppπ⁰ appear to be small. The presented diagrammatic approach requires further tests via studies of angular distributions and polarization observables.

  12. A two-dimensional matrix correction for off-axis portal dose prediction errors

    International Nuclear Information System (INIS)

    Bailey, Daniel W.; Kumaraswamy, Lalith; Bakhtiari, Mohammad; Podgorsak, Matthew B.

    2013-01-01

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. [“An effective correction algorithm for off-axis portal dosimetry errors,” Med. Phys. 36, 4089–4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Applying the matrix correction to the off-axis test fields and clinical fields, predicted vs measured portal dose agreement improves by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone.
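
    The core of the method, deriving a per-pixel correction from predicted and measured calibration images and applying it element-wise to subsequent images, can be sketched as follows. The 2×2 arrays are toy stand-ins for full portal images, and the ratio-based derivation is an assumption about the comparison the authors describe:

```python
def derive_correction(predicted, measured):
    """Per-pixel ratio of predicted to measured calibration images.

    The study builds its matrix from image pairs spanning the whole detecting
    surface; a single hypothetical pair stands in for that comparison here.
    """
    return [[p / m for p, m in zip(prow, mrow)]
            for prow, mrow in zip(predicted, measured)]

def apply_correction(image, matrix):
    """Apply the 2D correction element-wise to a newly acquired portal image."""
    return [[v * c for v, c in zip(irow, crow)]
            for irow, crow in zip(image, matrix)]

# Toy 2x2 "images"; real portal images are far larger.
pred = [[1.00, 1.02], [0.98, 0.90]]
meas = [[1.00, 1.10], [0.95, 1.00]]   # off-axis pixels disagree with prediction
corr = derive_correction(pred, meas)
fixed = apply_correction(meas, corr)  # corrected image matches the prediction
```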

  13. Effects of pulse duration on magnetostimulation thresholds

    Energy Technology Data Exchange (ETDEWEB)

    Saritas, Emine U., E-mail: saritas@ee.bilkent.edu.tr [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of Electrical and Electronics Engineering, Bilkent University, Bilkent, Ankara 06800 (Turkey); National Magnetic Resonance Research Center (UMRAM), Bilkent University, Bilkent, Ankara 06800 (Turkey); Goodwill, Patrick W. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Conolly, Steven M. [Department of Bioengineering, University of California, Berkeley, Berkeley, California 94720-1762 (United States); Department of EECS, University of California, Berkeley, California 94720-1762 (United States)

    2015-06-15

    Purpose: Medical imaging techniques such as magnetic resonance imaging and magnetic particle imaging (MPI) utilize time-varying magnetic fields that are subject to magnetostimulation limits, which often limit the speed of the imaging process. Various human-subject experiments have studied the amplitude and frequency dependence of these thresholds for gradient or homogeneous magnetic fields. Another contributing factor was shown to be number of cycles in a magnetic pulse, where the thresholds decreased with longer pulses. The latter result was demonstrated on two subjects only, at a single frequency of 1.27 kHz. Hence, whether the observed effect was due to the number of cycles or due to the pulse duration was not specified. In addition, a gradient-type field was utilized; hence, whether the same phenomenon applies to homogeneous magnetic fields remained unknown. Here, the authors investigate the pulse duration dependence of magnetostimulation limits for a 20-fold range of frequencies using homogeneous magnetic fields, such as the ones used for the drive field in MPI. Methods: Magnetostimulation thresholds were measured in the arms of six healthy subjects (age: 27 ± 5 yr). Each experiment comprised testing the thresholds at eight different pulse durations between 2 and 125 ms at a single frequency, which took approximately 30–40 min/subject. A total of 34 experiments were performed at three different frequencies: 1.2, 5.7, and 25.5 kHz. A solenoid coil providing homogeneous magnetic field was used to induce stimulation, and the field amplitude was measured in real time. A pre-emphasis based pulse shaping method was employed to accurately control the pulse durations. Subjects reported stimulation via a mouse click whenever they felt a twitching/tingling sensation. A sigmoid function was fitted to the subject responses to find the threshold at a specific frequency and duration, and the whole procedure was repeated at all relevant frequencies and pulse durations
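
    The threshold-extraction step, fitting a sigmoid to binary subject responses and reading off its midpoint, can be sketched with a crude fixed-slope grid search. The study's actual fitting procedure is not specified beyond "a sigmoid function was fitted", and the amplitudes and response fractions below are hypothetical:

```python
import math

def sigmoid(b, b50, slope):
    """Logistic psychometric function: probability of perceiving stimulation."""
    return 1.0 / (1.0 + math.exp(-(b - b50) / slope))

def fit_threshold(amplitudes, responses, slope=0.5, step=0.01):
    """Grid-search the 50%-response point of a fixed-slope sigmoid.

    amplitudes: tested field amplitudes (e.g. mT); responses: observed
    stimulation fractions at each amplitude. Returns the fitted midpoint,
    taken as the magnetostimulation threshold.
    """
    lo, hi = min(amplitudes), max(amplitudes)
    best_b50, best_err = lo, float("inf")
    for i in range(int(round((hi - lo) / step)) + 1):
        b = lo + i * step
        err = sum((sigmoid(a, b, slope) - r) ** 2
                  for a, r in zip(amplitudes, responses))
        if err < best_err:
            best_b50, best_err = b, err
    return best_b50

# Hypothetical response fractions at increasing field amplitudes (mT).
amps = [4.0, 5.0, 6.0, 7.0, 8.0]
resp = [0.0, 0.1, 0.5, 0.9, 1.0]
threshold = fit_threshold(amps, resp)
```

Repeating such a fit at every frequency and pulse duration yields the threshold curves the study reports.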

  15. Regional rainfall thresholds for landslide occurrence using a centenary database

    Science.gov (United States)

    Vaz, Teresa; Luís Zêzere, José; Pereira, Susana; Cruz Oliveira, Sérgio; Garcia, Ricardo A. C.; Quaresma, Ivânia

    2018-04-01

    This work proposes a comprehensive method to assess rainfall thresholds for landslide initiation using a centenary landslide database combined with a single centenary daily rainfall data set. The method is applied to the Lisbon region and includes a rainfall return-period analysis used to identify the critical rainfall combination (cumulated rainfall-duration) related to each landslide event. The spatial representativeness of the reference rain gauge is evaluated, and the rainfall thresholds are assessed and calibrated using receiver operating characteristic (ROC) metrics. Results show that landslide events located up to 10 km from the rain gauge can be used to compute the rainfall thresholds in the study area; these thresholds may nevertheless be used with acceptable confidence up to 50 km from the gauge. The rainfall thresholds obtained using linear and potential regression perform well under ROC metrics. However, the intermediate thresholds based on the probability of landslide events, established in the zone between the lower-limit and the upper-limit thresholds, are much more informative, as they indicate the probability of a landslide event given rainfall exceeding the threshold. This information can easily be included in landslide early warning systems, especially when combined with the probability of rainfall exceeding each threshold.
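
    The ROC-based calibration described above can be sketched in a few lines: for each candidate threshold, classify days as alarm/no-alarm, compute the true- and false-positive rates against observed landslide events, and keep the threshold that maximizes a score such as Youden's J (TPR − FPR). The rainfall values and event labels below are invented for illustration; the paper's actual thresholds combine cumulated rainfall and duration.

    ```python
    import numpy as np

    # hypothetical daily records: cumulated rainfall (mm) and landslide occurrence
    rain = np.array([10, 80, 35, 120, 55, 15, 95, 40, 130, 25], dtype=float)
    event = np.array([0, 1, 0, 1, 0, 0, 1, 0, 1, 0])

    best = None
    for thr in np.unique(rain):
        pred = rain >= thr                                   # alarm issued?
        tpr = (pred & (event == 1)).sum() / (event == 1).sum()  # hit rate
        fpr = (pred & (event == 0)).sum() / (event == 0).sum()  # false alarm rate
        youden = tpr - fpr
        if best is None or youden > best[0]:
            best = (youden, thr, tpr, fpr)

    print(f"threshold = {best[1]:.0f} mm, TPR = {best[2]:.2f}, FPR = {best[3]:.2f}")
    ```

    With real data the ROC curve would not reach a perfect corner, and one would also report intermediate thresholds with their associated event probabilities, as the abstract recommends.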

  16. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the same type as that of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
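
    The quadratic enhancement rests on the Zeno regime of Hamiltonian (non-Markovian) evolution: over a short interval, coherent leakage grows quadratically in time, so checking more often suppresses the accumulated error. The toy calculation below illustrates that scaling with repeated projective checks; it is our simplification for illustration, not the paper's continuous bit-flip-code model.

    ```python
    import numpy as np

    # A qubit coherently leaking out of |0> at coupling g is projectively
    # "checked" n times over total time T. Per-interval leakage is
    # sin^2(gT/n) ~ (gT/n)^2, so the total leakage is ~ (gT)^2 / n: it falls
    # off as 1/n. For Markovian (exponential) decay the per-interval loss is
    # linear in time, and frequent checking gains nothing.
    g, T = 1.0, 1.0
    leak = {}
    for n in (1, 10, 100):
        per_step = np.sin(g * T / n) ** 2      # leakage in one interval
        leak[n] = 1 - (1 - per_step) ** n      # total leakage after n checks
    print(leak)
    ```

    Increasing the check rate tenfold reduces the residual leakage roughly tenfold, the discrete analogue of the Zeno-regime suppression the abstract describes.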

  17. The gradual nature of threshold switching

    International Nuclear Information System (INIS)

    Wimmer, M; Salinga, M

    2014-01-01

    The recent commercialization of electronic memories based on phase change materials proved the usability of this peculiar family of materials for application purposes. More advanced data storage and computing concepts, however, demand a deeper understanding, especially of the electrical properties of the amorphous phase and the switching behaviour. In this work, we investigate the temporal evolution of the current through the amorphous state of the prototypical phase change material Ge2Sb2Te5 under constant voltage. A custom-made electrical tester allows the measurement of delay times over five orders of magnitude, as well as of the transient states of electrical excitation prior to the actual threshold switching. We identify a continuous current increase over time, prior to the actual threshold-switching event, as a good measure of the electrical excitation. A clear correlation between a significant rise in the pre-switching current and the later occurrence of threshold switching can be observed. In this way, we found experimental evidence for the existence of an absolute minimum for the threshold voltage (or electric field, respectively) that holds also for time scales far beyond the measurement range.

  18. Fringe order correction for the absolute phase recovered by two selected spatial frequency fringe projections in fringe projection profilometry.

    Science.gov (United States)

    Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun

    2017-08-01

    The performance of two-selected-spatial-frequency phase unwrapping methods is limited by a phase error bound; beyond it, fringe-order errors occur, leading to significant errors in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of the two selected spatial frequency phase unwrapping methods, and a strategy to detect and correct the wrong fringe orders is described. Compared with existing methods, we do not need to estimate a threshold on the absolute phase values to determine fringe order errors, which makes the method more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
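
    In the noise-free case, the fringe order of the high-frequency wrapped phase follows from the low-frequency absolute phase by scaling and rounding; the errors the paper addresses arise when phase noise pushes that rounding across an integer boundary. A minimal sketch of the noise-free recovery, with the fringe frequencies and sampling invented for illustration:

    ```python
    import numpy as np

    f1, f2 = 1, 8                        # hypothetical fringe counts across the field
    x = np.linspace(0.0, 1.0, 500)
    phi1_abs = 2 * np.pi * f1 * x        # absolute phase of the coarse pattern
    phi2_abs = 2 * np.pi * f2 * x        # ground-truth fine absolute phase

    wrap = lambda p: np.angle(np.exp(1j * p))   # wrap to (-pi, pi]
    phi2 = wrap(phi2_abs)                # what is actually measured (wrapped)

    # fringe order: scale the coarse absolute phase up to f2 and round
    k = np.round((phi1_abs * f2 / f1 - phi2) / (2 * np.pi))
    recovered = phi2 + 2 * np.pi * k     # unwrapped absolute phase
    print("max error:", np.abs(recovered - phi2_abs).max())
    ```

    Once noise on phi1 or phi2 approaches half a fringe period, the rounding can land on the wrong integer, which is exactly the fringe-order error the proposed constraints are meant to detect and correct.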

  19. English Learners’ Perception of Lecturers’ Corrective Feedback

    Directory of Open Access Journals (Sweden)

    Titien Fatmawaty Mohammad

    2016-04-01

    The importance of written corrective feedback (CF) has been the subject of substantial debate in the literature, and this controversy has led recent studies to draw on foreign language acquisition (FLA) research to better understand the complexities of the issue, particularly how students and teachers perceive the effectiveness of written corrective feedback. This research focuses largely on students’ perceptions of lecturers’ corrective feedback, the usefulness they attribute to different types of corrective feedback, and the reasons for their preferences. Qualitative data were collected from 40 EFL students in the 6th semester by means of written questionnaires, interviews, and observation. Four feedback strategies were employed, and students rated each statement on a five-point Likert scale. Findings showed that almost all students (81.43%) want correction or feedback from lecturers on the mistakes in their writing. Regarding the type of written corrective feedback, students prefer that lecturers mark their mistakes and comment on their work: 93% of students found that clues or comments about how to fix errors can improve their writing ability, 76.69% found error identification the most useful type of feedback, and 57.50% viewed correction accompanied by comments positively. These percentages are supported by students’ explanations in an open-ended question of the questionnaire. Pedagogical implications of the study are also discussed.

  20. Optimization Problems on Threshold Graphs

    Directory of Open Access Journals (Sweden)

    Elena Nechita

    2010-06-01

    During the last three decades, different types of decompositions have been studied in the field of graph theory. Among these we mention: decompositions based on the additivity of some characteristic of the graph, decompositions where the adjacency law between the subsets of the partition is known, decompositions where the subgraph induced by every subset of the partition must have predetermined properties, as well as combinations of such decompositions. In this paper we characterize threshold graphs using the weakly decomposition, and determine the density and stability number, the Wiener index, and the Wiener polynomial for threshold graphs.
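
    Threshold graphs are commonly handled via their creation sequence: each vertex is added either isolated or dominating (adjacent to all earlier vertices). The sketch below builds a small threshold graph this way and computes its degree sequence and Wiener index, exploiting the fact that a connected threshold graph has diameter at most 2. This uses the standard creation-sequence characterization, not the paper's weakly-decomposition machinery; the example sequence is invented.

    ```python
    def build_threshold(creation):
        """Adjacency sets for a creation sequence such as 'iidd'
        ('i' = isolated vertex, 'd' = dominating vertex)."""
        adj = [set() for _ in creation]
        for j, c in enumerate(creation):
            if c == 'd':                 # dominating: join to every earlier vertex
                for k in range(j):
                    adj[j].add(k)
                    adj[k].add(j)
        return adj

    adj = build_threshold("iidd")
    n = len(adj)
    degrees = sorted(len(a) for a in adj)
    pairs = n * (n - 1) // 2
    edges = sum(degrees) // 2
    # the last vertex is dominating, so the graph is connected with diameter <= 2:
    # every pair is at distance 1 (adjacent) or 2 (non-adjacent)
    wiener = edges + 2 * (pairs - edges)
    print(degrees, wiener)
    ```

    For 'iidd' this yields degree sequence [2, 2, 3, 3] and Wiener index 7; the same diameter-2 shortcut underlies closed-form expressions for the Wiener polynomial of threshold graphs.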