WorldWideScience

Sample records for modelling non-perturbative corrections

  1. Non-perturbative power corrections to ghost and gluon propagators

    International Nuclear Information System (INIS)

    Boucaud, Philippe; Leroy, Jean-Pierre; Le Yaouanc, Alain; Lokhov, Alexey; Micheli, Jacques; Pene, Olivier; Rodríguez-Quintero, Jose; Roiesnel, Claude

    2006-01-01

    We study the dominant non-perturbative power corrections to the ghost and gluon propagators in Landau-gauge pure Yang-Mills theory using the OPE and lattice simulations. The leading-order Wilson coefficients are proven to be the same for both propagators. The ratio of the ghost and gluon propagators is thus free from this dominant power correction. Indeed, a purely perturbative fit of this ratio gives a smaller value (≅ 270 MeV) of Λ_MS-bar than the one obtained from the propagators separately (≅ 320 MeV). This argues in favour of significant non-perturbative ∼ 1/q² power corrections in the ghost and gluon propagators. We check the self-consistency of the method
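The bias described above can be illustrated numerically: fitting data that contains a 1/q² power correction with a purely perturbative form shifts the extracted Λ upward. The one-loop running, coefficient values, and momentum window below are toy assumptions for illustration, not the paper's lattice data.

```python
import numpy as np
from scipy.optimize import curve_fit

def alpha_pert(q, lam):
    """Toy one-loop perturbative running (beta-function constant absorbed into 0.7)."""
    return 1.0 / (0.7 * np.log(q**2 / lam**2))

def alpha_full(q, lam, c):
    """Perturbative running dressed with a non-perturbative ~1/q^2 power term."""
    return alpha_pert(q, lam) * (1.0 + c / q**2)

q = np.linspace(2.0, 10.0, 40)            # GeV, illustrative momentum window
data = alpha_full(q, 0.270, 1.0)          # "true" Lambda = 270 MeV, c > 0

# Purely perturbative fit: the unmodelled power term is absorbed into Lambda,
# biasing it upward; the full fit recovers the true parameters.
(lam_pert,), _ = curve_fit(alpha_pert, q, data, p0=[0.3])
(lam_full, c_fit), _ = curve_fit(alpha_full, q, data, p0=[0.3, 0.5])
```

Here `lam_pert` comes out larger than the true 0.270 GeV, mimicking the 320 MeV vs. 270 MeV discrepancy the abstract attributes to the 1/q² correction.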

  2. Non-perturbative aspects of nonlinear sigma models

    Energy Technology Data Exchange (ETDEWEB)

    Flore, Raphael

    2012-12-07

    The aim of this thesis was the study and further development of non-perturbative methods of quantum field theory by means of their application to nonlinear sigma models. While a large part of the physical phenomena of quantum field theory can be successfully predicted by perturbation theory, some aspects in the region of large coupling strengths are not definitively understood and require suitable non-perturbative methods for their analysis. This thesis concentrates on two approaches: the numerical treatment of field theories on discrete space-time lattices, and the functional renormalization group (FRG) as a description of the renormalization flow of effective actions. Considerations of the nonlinear O(N) models have shown that for a correct analysis of the critical properties in the framework of the FRG, an approach must be chosen which contains fourth-derivative orders. For this, a covariant formalism was developed, based on a background-field expansion and a heat-kernel expansion. Apart from one destabilizing coupling, the results suggest a nontrivial fixed point and thereby a non-perturbative renormalizability of these models. The resulting flow diagrams were then compared with the results of a numerical analysis of the renormalization flow by means of the Monte Carlo renormalization group, and qualitative agreement was found. Furthermore, an alternative formulation of the FRG in phase-space coordinates was studied and its consistency tested on simple examples. Beyond this, an alternative expansion of the effective action in orders of the canonical momenta was applied to the nonlinear O(N) models, with the result of a stable non-trivial fixed point whose critical properties, however, do not show the expected N-dependence. Finally, the renormalization of topological operators was studied by means of the FRG, using the winding number of the O(3) ≅ CP¹ model. By the generalization of the topological

  3. Non-perturbative aspects of nonlinear sigma models

    International Nuclear Information System (INIS)

    Flore, Raphael

    2012-01-01

    The aim of this thesis was the study and further development of non-perturbative methods of quantum field theory by means of their application to nonlinear sigma models. While a large part of the physical phenomena of quantum field theory can be successfully predicted by perturbation theory, some aspects in the region of large coupling strengths are not definitively understood and require suitable non-perturbative methods for their analysis. This thesis concentrates on two approaches: the numerical treatment of field theories on discrete space-time lattices, and the functional renormalization group (FRG) as a description of the renormalization flow of effective actions. Considerations of the nonlinear O(N) models have shown that for a correct analysis of the critical properties in the framework of the FRG, an approach must be chosen which contains fourth-derivative orders. For this, a covariant formalism was developed, based on a background-field expansion and a heat-kernel expansion. Apart from one destabilizing coupling, the results suggest a nontrivial fixed point and thereby a non-perturbative renormalizability of these models. The resulting flow diagrams were then compared with the results of a numerical analysis of the renormalization flow by means of the Monte Carlo renormalization group, and qualitative agreement was found. Furthermore, an alternative formulation of the FRG in phase-space coordinates was studied and its consistency tested on simple examples. Beyond this, an alternative expansion of the effective action in orders of the canonical momenta was applied to the nonlinear O(N) models, with the result of a stable non-trivial fixed point whose critical properties, however, do not show the expected N-dependence. Finally, the renormalization of topological operators was studied by means of the FRG, using the winding number of the O(3) ≅ CP¹ model. By the generalization of the topological operator and the

  4. Non-perturbative treatment of relativistic quantum corrections in large Z atoms

    International Nuclear Information System (INIS)

    Dietz, K.; Weymans, G.

    1983-09-01

    Renormalised g-Hartree-Dirac equations incorporating Dirac-sea contributions are derived. Their implications for the non-perturbative, self-consistent calculation of quantum corrections in large-Z atoms are discussed. (orig.)

  5. Non-perturbative chiral corrections for lattice QCD

    International Nuclear Information System (INIS)

    Thomas, A.W.; Leinweber, D.B.; Lu, D.H.

    2002-01-01

    We explore the chiral aspects of the extrapolation of observables calculated within lattice QCD, using the nucleon magnetic moments as an example. Our analysis shows that the biggest effects of chiral dynamics occur for quark masses corresponding to a pion mass below 600 MeV. In this limited range chiral perturbation theory is not rapidly convergent, but we can develop some understanding of the behaviour through chiral quark models. This model-dependent analysis leads us to a simple Padé approximant which builds in both the limits m_π → 0 and m_π → ∞ correctly and permits a consistent, model-independent extrapolation to the physical pion mass which should be extremely reliable. (author)
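An extrapolation of this kind can be sketched with a schematic Padé form that is finite in the chiral limit and falls off at large pion mass; the specific functional form, synthetic "lattice" points, and parameter values below are illustrative assumptions, not the authors' actual fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade_moment(m_pi, mu0, a, b):
    """Schematic Pade form for a magnetic moment:
    tends to mu0 as m_pi -> 0 and falls off ~ 1/m_pi^2 as m_pi -> infinity."""
    return mu0 / (1.0 + a * m_pi + b * m_pi**2)

# Hypothetical lattice points at large pion masses (GeV), generated
# from an assumed "true" parameter set (3.0, 1.2, 0.7) for the demo.
m_lat = np.array([0.6, 0.7, 0.8, 0.9, 1.0])
mu_lat = pade_moment(m_lat, 3.0, 1.2, 0.7)

popt, _ = curve_fit(pade_moment, m_lat, mu_lat, p0=[2.0, 1.0, 1.0])
mu_physical = pade_moment(0.140, *popt)   # extrapolate to the physical pion mass
```

The point of the Padé form is that both asymptotic limits constrain the interpolation, so the extrapolation from heavy-quark lattice data down to m_π ≈ 140 MeV is stabilized.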

  6. Insights on non-perturbative aspects of TMDs from models

    Energy Technology Data Exchange (ETDEWEB)

    H. Avakian, A. Efremov, P. Schweitzer, O. Teryaev, F. Yuan, P. Zavada

    2009-12-01

    Transverse momentum dependent parton distribution functions are a key ingredient in the description of spin and azimuthal asymmetries in deep-inelastic scattering processes. Recent results from non-perturbative calculations in effective approaches are reviewed, with focus on relations among different parton distribution functions in QCD and models.

  7. Non perturbative method for radiative corrections applied to lepton-proton scattering

    International Nuclear Information System (INIS)

    Chahine, C.

    1979-01-01

    We present a new, non-perturbative method to effect radiative corrections in lepton (electron or muon)-nucleon scattering, useful for existing or planned experiments. This method relies on a spectral function derived in a previous paper, which takes into account both real soft photons and virtual ones and hence is free from infrared divergence. Hard effects are computed perturbatively and then included in the form of 'hard factors' in the non-perturbative soft formulas. Practical computations are effected using the Gauss-Jacobi integration method, which reduces the relevant integrals to a rapidly converging sequence. For the simple problem of the radiative quasi-elastic peak, we get an exponentiated form conjectured by Schwinger and found by Yennie, Frautschi and Suura. We also compare our results with the peaking approximation, which we derive independently, and with the exact one-photon emission formula of Mo and Tsai. Applications of our method to the continuous spectrum include the radiative tail of the Δ₃₃ resonance in e + p scattering and radiative corrections to the Feynman scale-invariant F₂ structure function for the kinematics of two recent high-energy muon experiments
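The Gauss-Jacobi quadrature mentioned above absorbs an endpoint weight (1−x)^α (1+x)^β into the nodes and weights, so a smooth remaining integrand converges with very few points. A minimal sketch using SciPy follows; the integrand is a toy stand-in, not the paper's radiative-correction integral.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

# Gauss-Jacobi quadrature evaluates  int_{-1}^{1} (1-x)^a (1+x)^b f(x) dx
# exactly for polynomial f up to degree 2n-1, hence the rapid convergence.
a, b = 0.5, 0.0
f = lambda x: x**2 + 1.0                  # toy smooth integrand

nodes, weights = roots_jacobi(5, a, b)    # 5-point rule
approx = np.sum(weights * f(nodes))

# Reference value from adaptive quadrature on the full weighted integrand.
exact, _ = quad(lambda x: (1 - x)**a * (1 + x)**b * f(x), -1, 1)
```

For this degree-2 integrand the 5-point rule is already exact to machine precision, which is the "rapidly converging sequence" behaviour exploited in the paper.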

  8. Non-perturbative effective interactions in the standard model

    CERN Document Server

    Arbuzov, Boris A

    2014-01-01

    This monograph is devoted to the nonperturbative dynamics in the Standard Model (SM), the basic theory of all fundamental interactions in nature except gravity. The Standard Model is divided into two parts: quantum chromodynamics (QCD) and the electro-weak theory (EWT) are well-defined renormalizable theories in which perturbation theory is valid. However, for an adequate description of the real physics, nonperturbative effects are inevitable. This book describes how these nonperturbative effects may be obtained in the framework of spontaneous generation of effective interactions. A well-known example of such an effective interaction is provided by the famous Nambu-Jona-Lasinio effective interaction. A spontaneous generation of this interaction in the framework of QCD is also described and applied to the method for other effective interactions in QCD and EWT. The method is based on N.N. Bogoliubov's conception of compensation equations. As a result we then describe the principal features of the Standard...

  9. Non-perturbative effective interactions in the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Arbuzov, Boris A. [Moscow Lomonosov State Univ. (Russian Federation). Skobeltsyn Inst. of Nuclear Physics

    2014-07-01

    This monograph is devoted to the nonperturbative dynamics in the Standard Model (SM), the basic theory of all fundamental interactions in nature except gravity. The Standard Model is divided into two parts: the quantum chromodynamics (QCD) and the electro-weak theory (EWT) are well-defined renormalizable theories in which the perturbation theory is valid. However, for the adequate description of the real physics nonperturbative effects are inevitable. This book describes how these nonperturbative effects may be obtained in the framework of spontaneous generation of effective interactions. The well-known example of such effective interaction is provided by the famous Nambu-Jona-Lasinio effective interaction. Also a spontaneous generation of this interaction in the framework of QCD is described and applied to the method for other effective interactions in QCD and EWT. The method is based on N.N. Bogolyubov's conception of compensation equations. As a result we then describe the principal features of the Standard Model, e.g. Higgs sector, and significant nonperturbative effects including recent results obtained at LHC and TEVATRON.

  10. Ab initio approach to the non-perturbative scalar Yukawa model

    OpenAIRE

    Li, Yang (Department of Physics and Astronomy, Iowa State University, Ames, IA, 50011, USA); Karmanov, V.A. (Lebedev Physical Institute, Leninsky Prospekt 53, Moscow, 119991, Russia); Maris, P. (Department of Physics and Astronomy, Iowa State University, Ames, IA, 50011, USA); Vary, J.P. (Department of Physics and Astronomy, Iowa State University, Ames, IA, 50011, USA)

    2015-01-01

    We report on the first non-perturbative calculation of the scalar Yukawa model in the single-nucleon sector up to four-body Fock sector truncation (one "scalar nucleon" and three "scalar pions"). The light-front Hamiltonian approach with a systematic non-perturbative renormalization is applied. We study the n-body norms and the electromagnetic form factor. We find that the one- and two-body contributions dominate up to coupling α ≈ 1.7. As we approach the coupling α ≈...

  11. Physics beyond the standard model in the non-perturbative unification scheme

    International Nuclear Information System (INIS)

    Kapetanakis, D.; Zoupanos, G.

    1990-01-01

    The non-perturbative unification scenario predicts reasonably well the low-energy gauge couplings of the standard model. Agreement with the measured low-energy couplings is obtained by assuming a certain kind of physics beyond the standard model. A number of possibilities for physics beyond the standard model are examined. The best candidates so far are the standard model with eight fermionic families and a similar number of Higgs doublets, and the supersymmetric standard model with five families. (author)

  12. Two-dimensional sigma models: modelling non-perturbative effects of gauge theories

    International Nuclear Information System (INIS)

    Novikov, V.A.; Shifman, M.A.; Vainshtein, A.I.; Zakharov, V.I.

    1984-01-01

    The review is devoted to a discussion of non-perturbative effects in gauge theories and two-dimensional sigma models. The main emphasis is put on the supersymmetric O(3) sigma model. The instanton-based method for calculating the exact Gell-Mann-Low function and the bifermionic condensate is considered in detail. All aspects of the method are discussed under simplifying conditions. The basic points are: the instanton measure from purely classical analysis; a non-renormalization theorem in self-dual external fields; the existence of vacuum condensates and their compatibility with supersymmetry

  13. Quasilocal quark models as effective theory of non-perturbative QCD

    International Nuclear Information System (INIS)

    Andrianov, A.A.

    2006-01-01

    We consider the Quasilocal Quark Model of NJL type (QNJLM) as an effective theory of non-perturbative QCD, including scalar (S), pseudo-scalar (P), vector (V) and axial-vector (A) four-fermion interactions with derivatives. In the presence of a strong attraction in the scalar channel the chiral symmetry is spontaneously broken and, as a consequence, composite meson states are generated in all channels. With the help of the Operator Product Expansion, the appropriate set of Chiral Symmetry Restoration (CSR) sum rules in these channels is imposed as matching rules to QCD at intermediate energies. The mass spectrum and some decay constants for ground and excited meson states are calculated

  14. Non-Perturbative Renormalization

    CERN Document Server

    Mastropietro, Vieri

    2008-01-01

    The notion of renormalization is at the core of several spectacular achievements of contemporary physics, and in recent years powerful techniques have been developed that allow renormalization to be put on a firm mathematical basis. This book provides a self-consistent and accessible introduction to the sophisticated tools used in the modern theory of non-perturbative renormalization, allowing a unified and rigorous treatment of Quantum Field Theory, Statistical Physics and Condensed Matter models. In particular the first part of this book is devoted to Constructive Quantum Field Theory, providi

  15. Towards a non-perturbative study of the strongly coupled standard model

    International Nuclear Information System (INIS)

    Dagotto, E.; Kogut, J.

    1988-01-01

    The strongly coupled standard model of Abbott and Farhi can be a good alternative to the standard model if it has a phase where chiral symmetry is not broken, the SU(2) sector confines and the scalar field is in the symmetric regime. To look for such a phase we did a numerical analysis in the context of lattice gauge theory. To simplify the model we studied a U(1) gauge theory with Higgs fields and four species of dynamical fermions. In this toy model we did not find a phase with the correct properties required by the strongly coupled standard model. We also speculate about a possible solution to this problem using a new phase of the SU(2) gauge theory with a large number of flavors. (orig.)

  16. Non-perturbative models of intermittency in drift-wave turbulence: towards a probabilistic theory of anomalous transport

    International Nuclear Information System (INIS)

    Kim, Eun-jin; Diamond, P.H.; Malkov, M.

    2003-01-01

    Two examples of non-perturbative models of intermittency in drift-wave (DW) turbulence are presented. The first is a calculation of the probability distribution function (PDF) of ion heat flux due to structures in ion temperature gradient turbulence. The instanton calculus predicts the PDF to be a stretched exponential. The second is a derivation of a bi-variate Burgers equation for the evolution of the DW population density in the presence of radially extended streamer flows. The PDF of fluctuation intensity avalanches is determined. The relation of this to turbulence spreading, observed in simulations, is discussed. (author)

  17. A non-perturbative approach to jet cross-sections and a new model for hadron-hadron interactions

    International Nuclear Information System (INIS)

    Andersson, B.

    1986-01-01

    The author discusses two subjects in this work. The first is a description of a non-perturbative approach to calculating the probabilities of obtaining a particular state of the confined force field in a hard interaction like e⁺e⁻ annihilation. This approach has been discussed previously by the author. There are at this time many more results from the program, in particular some rather puzzling and disturbing ones as compared to the results obtained in perturbative QCD. The second subject is a new approach to hadron-hadron inelastic scattering. A model for these interactions based upon multiple perturbative parton interactions and subsequent string-stretching and breaking has been formulated by others in earlier works

  18. Non perturbative analysis of an N=2 Landau-Ginsburg model

    International Nuclear Information System (INIS)

    Leaf Herrmann, W.A.

    1993-01-01

    We analyze the topological sector of an N=2 Landau-Ginsburg model using nonperturbative methods. In particular, we study the renormalization group flow between two superconformal minimal models, numerically compute the correlation functions along this trajectory, and compare the results to semi-classical calculations. We also study some aspects of arbitrary supersymmetric perturbations of the Landau-Ginsburg model. 20 refs, 4 figs

  19. Non-perturbative effects in two-dimensional lattice O(N) models

    International Nuclear Information System (INIS)

    Ogilvie, M.C.; Maryland Univ., College Park

    1981-01-01

    Non-abelian analogues of Kosterlitz-Thouless vortices may have important effects in two-dimensional lattice spin systems with O(N) symmetries. Renormalization group equations which include these effects are developed in two ways. The first set of equations extends the renormalization group equations of Kosterlitz to O(N) spin systems, in a form suggested by Cardy and Hamber. The second is derived from a Villain-type O(N) model using Migdal's recursion relations. Using these equations, the part played by topological excitations in the crossover from weak- to strong-coupling behavior is studied. Another effect which influences crossover behavior is also discussed: irrelevant operators which occur naturally in lattice theories can make important contributions to the renormalization group flow in the crossover region. When combined with conventional perturbative results, these two effects may explain the observed crossover behavior of these models. (orig.)
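As a concrete illustration of the kind of vortex-driven flow being extended here, the standard O(2) Kosterlitz renormalization group equations can be integrated numerically; the initial conditions and integration range below are illustrative choices, not taken from the paper.

```python
import numpy as np

def kt_flow(K0, y0, l_max=2.0, dl=1e-3):
    """Euler-integrate the standard O(2) Kosterlitz RG equations
         d(1/K)/dl = 4*pi^3 * y^2,    dy/dl = (2 - pi*K) * y,
    where K is the spin stiffness and y the vortex fugacity."""
    K_inv, y = 1.0 / K0, y0
    for _ in range(int(l_max / dl)):
        K = 1.0 / K_inv
        K_inv += 4 * np.pi**3 * y**2 * dl
        y += (2 - np.pi * K) * y * dl
    return 1.0 / K_inv, y

# Low temperature (pi*K > 2): vortices are irrelevant, fugacity flows to zero.
_, y_ordered = kt_flow(K0=1.0, y0=0.05)
# High temperature (pi*K < 2): fugacity grows, signalling the disordered phase.
_, y_disordered = kt_flow(K0=0.5, y0=0.05)
```

The two trajectories sit on opposite sides of the Kosterlitz-Thouless separatrix: in one the fugacity decays, in the other it runs away, which is the crossover structure the O(N) generalizations above are designed to capture.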

  20. Perturbative and non-perturbative approaches to string sigma-models in AdS/CFT

    Energy Technology Data Exchange (ETDEWEB)

    Vescovi, Edoardo

    2016-10-05

    This thesis discusses quantum aspects of type II superstring theories in AdS₅ × S⁵ and AdS₄ × CP³ backgrounds relevant for the AdS/CFT correspondence, using perturbative methods at large string tension and lattice field theory techniques inspired by a work of Roiban and McKeown. We review the construction of the supercoset sigma-model for strings in the AdS₅ × S⁵ background, whereas the general quantum dynamics of the superstring in AdS₄ × CP³ is described by a double dimensional reduction of the supermembrane action in AdS₄ × S⁷. We present a manifestly covariant formalism for the semiclassical quantization of strings around arbitrary minimal-area surfaces in AdS₅ × S⁵, expressing the fluctuation operators in terms of intrinsic and extrinsic invariants of the background geometry. We exactly solve the spectral problem for a fourth-order generalization of the Lamé differential equation with doubly periodic coefficients in a complex variable. This calculates the one-loop energy of the (J₁,J₂)-string in the SU(2) sector in the limit described by a quantum Landau-Lifshitz model and the bosonic contribution to the energy of the (S,J)-string rotating in AdS₅ and S⁵. Similar techniques calculate the 1/4-BPS latitude Wilson loops in N=4 SYM theory at one loop, normalized to the 1/2-BPS circular loop. Our regularization scheme reproduces the next-to-leading order predicted by supersymmetric localization, up to a remainder function that we discuss. We also study the AdS₄ × CP³ string action expanded around the null cusp background and compute the cusp anomaly up to two loops. This agrees with an all-loop conjectured expression of the ABJM interpolating function. We finally discretize the AdS₅ × S⁵ superstring theory in the AdS light-cone gauge and perform lattice simulations at finite coupling with a Monte Carlo algorithm. We measure the string action

  1. c-function and central charge of the sine-Gordon model from the non-perturbative renormalization group flow

    Directory of Open Access Journals (Sweden)

    V. Bacsó

    2015-12-01

    In this paper we study the c-function of the sine-Gordon model, taking explicitly into account the periodicity of the interaction potential. The integration of the c-function along trajectories of the non-perturbative renormalization group flow gives access to the central charges of the model at the fixed points. The results at vanishing frequency β², where the periodicity does not play a role, are retrieved, and the independence of the cutoff regulator for small frequencies is discussed. Our findings show that the central charge obtained by integrating the trajectories starting from the repulsive low-frequency fixed points (β² < 8π) to the infra-red limit is in good quantitative agreement with the expected Δc = 1 result. The behavior of the c-function in the other parts of the flow diagram is also discussed. Finally, we point out that including higher harmonics in the renormalization group treatment at the level of the local potential approximation is not sufficient to give reasonable results, even if the periodicity is taken into account. Rather, incorporating the wave-function renormalization (i.e. going beyond the local potential approximation) is crucial to get sensible results even when a single frequency is used.

  2. Non-perturbative Heavy Quark Effective Theory

    DEFF Research Database (Denmark)

    Della Morte, Michele; Heitger, Jochen; Simma, Hubert

    2015-01-01

    We review a lattice strategy for non-perturbatively determining the coefficients in the HQET expansion of all components of the heavy-light axial and vector currents, including 1/m_h corrections. We also discuss recent preliminary results on the form factors parameterizing semi-leptonic B...

  3. Non-Perturbative Quantum Geometry III

    CERN Document Server

    Krefl, Daniel

    2016-08-02

    The Nekrasov-Shatashvili limit of the refined topological string on toric Calabi-Yau manifolds and the resulting quantum geometry is studied from a non-perturbative perspective. The quantum differential and thus the quantum periods exhibit Stokes phenomena over the combined string coupling and quantized Kähler moduli space. We outline that the underlying formalism of exact quantization is generally applicable to points in moduli space featuring massless hypermultiplets, leading to non-perturbative band splitting. Our prime example is local P¹ × P¹ near a conifold point in moduli space. In particular, we will present numerical evidence that in a Stokes chamber of interest the string-based quantum geometry reproduces the non-perturbative corrections for the Nekrasov-Shatashvili limit of 4d supersymmetric SU(2) gauge theory at strong coupling found in the previous part of this series. A preliminary discussion of local P² near the conifold point in moduli space is also provided.

  4. Non-perturbative aspects of string theory from elliptic curves

    International Nuclear Information System (INIS)

    Reuter, Jonas

    2015-08-01

    We consider two examples of non-perturbative aspects of string theory involving elliptic curves. First, we discuss F-theory on genus-one fibered Calabi-Yau manifolds with the fiber being a hypersurface in a toric Fano variety. We discuss in detail the fiber geometry in order to find the gauge groups, matter content and Yukawa couplings of the corresponding supergravity theories for the four examples leading to gauge groups SU(3) × SU(2) × U(1), SU(4) × SU(2) × SU(2)/Z₂, U(1) and Z₃. The theories are connected by Higgsings on the field theory side and conifold transitions on the geometry side. We extend the discussion to the network of Higgsings relating all theories stemming from the 16 hypersurface fibrations. For the models leading to gauge groups SU(3) × SU(2) × U(1), SU(4) × SU(2) × SU(2)/Z₂ and U(1) we discuss the construction of vertical G₄ fluxes. Via the D3-brane tadpole cancellation condition we can restrict the minimal number of families in the first two of these models to be at least three. As a second example of non-perturbative aspects of string theory we discuss a proposal for a non-perturbative completion of topological string theory on local B-model geometries. We discuss in detail the computation of quantum periods for the examples of local F₁, local F₂ and the resolution of C³/Z₅. The quantum corrections are calculated order by order using second-order differential operators acting on the classical periods. Using quantum geometry we calculate the refined free energies in the Nekrasov-Shatashvili limit. Finally we check the non-perturbative completion of topological string theory for the geometry of local F₂ against numerical calculations.

  5. Non-perturbative effects in 2d CPN-1 model and 4d YM theory. A ZN toron approach

    International Nuclear Information System (INIS)

    Zhitnitsky, A.R.

    1992-01-01

    A number of different problems, such as fractional topological charge, torons, Z_N symmetry, θ-dependence, confinement, the U(1) problem and so on, are discussed in the 2d CP^{N-1} model and 4d gluodynamics. A comprehensive topological classification of torons (a toron is a self-dual solution with topological number Q = 1/N) is formulated and their interaction is found in the quasiclassical approximation. It turns out that the number of different kinds of torons is equal to N, and that they are classified by the weights μ of the fundamental representation of the group SU(N). Moreover, the interaction of these torons is Coulomb-like, ∝ Σ_{ij} μ_i·μ_j ln(x_i − x_j)², and this gas can be expressed as a field theory of the Toda type. The expectation values of different quantities (the vacuum energy, the topological density, the Wilson loop operator) are calculated using this effective field theory. All results (confinement, correct dependence on θ, and so on) are precisely what is well known from different considerations. The disorder parameter M is introduced and the corresponding vacuum expectation value is calculated, ∝ exp(2πik/N), in agreement with 't Hooft's conjecture about its properties in the confinement phase. The hypothesis of abelian dominance and the corresponding Weyl Z_N symmetry is realized in this approach in an automatic way. (orig.)

  6. Large N non-perturbative effects in N=4 superconformal Chern-Simons theories

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki; Honda, Masazumi; Okuyama, Kazumi

    2015-07-01

    We investigate the large N instanton effects of partition functions in a class of N = 4 circular quiver Chern-Simons theories on a three-sphere. Our analysis is based on the supersymmetry localization and the Fermi-gas formalism. The resulting matrix model can be regarded as a two-parameter deformation of the ABJM matrix model, and has richer non-perturbative structures. Based on a systematic semi-classical analysis, we find analytic expressions of membrane instanton corrections. We also exactly compute the partition function for various cases and find some exact forms of worldsheet instanton corrections, which appear as quantum mechanical non-perturbative corrections in the Fermi-gas system.

  7. Non-perturbative effects and the refined topological string

    Energy Technology Data Exchange (ETDEWEB)

    Hatsuda, Yasuyuki [DESY Hamburg (Germany). Theory Group; Tokyo Institute of Technology (Japan). Dept. of Physics]; Marino, Marcos [Geneve Univ. (Switzerland). Dept. de Physique Theorique et Section de Mathematiques]; Moriyama, Sanefumi [Nagoya Univ. (Japan). Kobayashi Maskawa Inst.; Nagoya Univ. (Japan). Graduate School of Mathematics]; Okuyama, Kazumi [Shinshu Univ., Matsumoto, Nagano (Japan). Dept. of Physics]

    2013-06-15

    The partition function of ABJM theory on the three-sphere has non-perturbative corrections due to membrane instantons in the M-theory dual. We show that the full series of membrane instanton corrections is completely determined by the refined topological string on the Calabi-Yau manifold known as local P¹ × P¹, in the Nekrasov-Shatashvili limit. Our result can be interpreted as a first-principles derivation of the full series of non-perturbative effects for the closed topological string on this Calabi-Yau background. Based on this, we make a proposal for the non-perturbative free energy of topological strings on general, local Calabi-Yau manifolds.

  8. Non-perturbative description of quantum systems

    CERN Document Server

    Feranchuk, Ilya; Le, Van-Hoang; Ulyanenkov, Alexander

    2015-01-01

    This book systematically introduces the operator method for the solution of the Schrödinger equation. This method makes it possible to describe the states of quantum systems over the entire range of parameters of the Hamiltonian with a predefined accuracy. The operator method is unique among non-perturbative methods in its ability to deliver, in zeroth approximation, a uniformly suitable estimate for both ground and excited states of a quantum system. The method has been generalized for application to quantum statistics and quantum field theory. In this book, numerous applications of the operator method to various physical systems are demonstrated. Simple models are used to illustrate the basic principles of the method, which are then used for the solution of complex problems of quantum theory for many-particle systems. The results obtained are supplemented by numerical calculations, presented as tables and figures.

  9. Non-perturbative effects in supersymmetry

    International Nuclear Information System (INIS)

    Veneziano, G.

    1987-01-01

    Some non-perturbative aspects of globally supersymmetric (SUSY) gauge theories are discussed. These share with their non-supersymmetric analogues interesting non-perturbative features, such as the spontaneous breaking of chiral symmetries via condensates. What is peculiar about supersymmetric theories, however, is that one is able to say a lot about non-perturbative effects even without resorting to elaborate numerical calculations: general arguments, supersymmetric and chiral Ward identities, and analytic dynamical calculations turn out to effectively determine most of the supersymmetric vacuum properties. 28 references, 5 figures

  10. Non-perturbative topological strings and conformal blocks

    NARCIS (Netherlands)

    Cheng, M.C.N.; Dijkgraaf, R.; Vafa, C.

    2011-01-01

    We give a non-perturbative completion of a class of closed topological string theories in terms of building blocks of dual open strings. In the specific case where the open string is given by a matrix model these blocks correspond to a choice of integration contour. We then apply this definition to

  11. New Methods in Non-Perturbative QCD

    Energy Technology Data Exchange (ETDEWEB)

    Unsal, Mithat [North Carolina State Univ., Raleigh, NC (United States)

    2017-01-31

    In this work, we investigate the properties of quantum chromodynamics (QCD) using newly developed mathematical and physical formalisms. Almost all of the mass in the visible universe emerges from quantum chromodynamics (QCD), which has a completely negligible microscopic mass content. An intimately related issue in QCD is the quark confinement problem. Answers to non-perturbative questions in QCD have remained largely elusive despite much effort over the years. It is also believed that the usual perturbation theory is inadequate to address these kinds of problems. Perturbation theory gives a divergent asymptotic series (even when the theory is properly renormalized), and there are non-perturbative phenomena which never appear at any order in perturbation theory. Recently, a fascinating bridge between perturbation theory and non-perturbative effects has been found: a formalism called resurgence theory in mathematics tells us that perturbative data and non-perturbative data are intimately related. Translating this to the language of quantum field theory, it turns out that non-perturbative information is present in a coded form in perturbation theory and can be decoded. We take advantage of this feature, which is particularly useful for understanding some unresolved mysteries of QCD from first principles. In particular, we use: a) circle compactifications, which provide a semi-classical window to study confinement and mass gap problems, and calculable prototypes of the deconfinement phase transition; b) resurgence theory and transseries, which provide a unified framework for perturbative and non-perturbative expansions; c) analytic continuation of path integrals and Lefschetz thimbles, which may be useful to address the sign problem in QCD at finite density.
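
    The bridge between perturbative and non-perturbative data described above is usually phrased in terms of a transseries. As a hedged sketch (generic notation, not taken from this report):

    ```latex
    % Generic resurgent transseries: the perturbative series is completed by
    % exponentially small (instanton) sectors, each with its own fluctuation series.
    f(g^2) \sim \sum_{n\ge 0} a_n\, g^{2n}
           \;+\; \sum_{k\ge 1} \mathrm{e}^{-k S/g^{2}}\, g^{2\beta_k}
                 \sum_{n\ge 0} a^{(k)}_n\, g^{2n}
    ```

    Resurgence relates the factorial growth of the perturbative coefficients a_n to the low-order coefficients a_n^(k) of the exponentially suppressed sectors; this is the sense in which non-perturbative information is "coded" in perturbation theory.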

  12. Non-perturbative QCD and hadron physics

    International Nuclear Information System (INIS)

    Cobos-Martínez, J J

    2016-01-01

    A brief exposition of contemporary non-perturbative methods based on the Schwinger-Dyson (SDE) and Bethe-Salpeter equations (BSE) of Quantum Chromodynamics (QCD) and their application to hadron physics is given. These equations provide a non-perturbative continuum formulation of QCD and are a powerful and promising tool for the study of hadron physics. Results for some hadron properties based on this approach, with particular attention to the pion distribution amplitude and the elastic and transition electromagnetic form factors, are presented and compared to experimental data. (paper)

  13. Elliptic CY3folds and non-perturbative modular transformation

    International Nuclear Information System (INIS)

    Iqbal, Amer; Shabbir, Khurram

    2016-01-01

    We study the refined topological string partition function of a class of toric elliptically fibered Calabi-Yau threefolds. These Calabi-Yau threefolds give rise to five-dimensional quiver gauge theories and are dual to configurations of M5-M2-branes. We determine the Gopakumar-Vafa invariants for these threefolds and show that the genus g free energy is given by the weight 2g Eisenstein series. We also show that although the free energies at all genera are modular invariant, the full partition function satisfies the non-perturbative modular transformation property discussed by Lockhart and Vafa in arXiv:1210.5909, so the modularity of the free energy holds only up to non-perturbative corrections. (orig.)

  15. Non-perturbative Debye mass in finite-T QCD

    CERN Document Server

    Kajantie, Keijo; Peisa, J; Rajantie, A; Rummukainen, K; Shaposhnikov, Mikhail E

    1997-01-01

    Employing a non-perturbative gauge invariant definition of the Debye screening mass m_D in the effective field theory approach to finite T QCD, we use 3d lattice simulations to determine the leading O(g^2) and to estimate the next-to-leading O(g^3) corrections to m_D in the high temperature region. The O(g^2) correction is large and modifies qualitatively the standard power-counting hierarchy picture of correlation lengths in high temperature QCD.

  16. On the non-perturbative effects

    International Nuclear Information System (INIS)

    Manjavidze, J.; Voronyuk, V.

    2004-01-01

    The quantum correspondence principle based on time reversibility is adopted to take into account the non-Abelian symmetry constraints. The main properties of the new strong-coupling perturbation theory, which takes non-perturbative effects into account, are described. (author)

  17. A non-perturbative definition of 2D quantum gravity by the fifth time action

    International Nuclear Information System (INIS)

    Ambjoern, J.; Greensite, J.; Varsted, S.

    1990-07-01

    The general formalism for stabilizing bottomless Euclidean field theories (the 'fifth-time' action) provides a natural non-perturbative definition of matrix models corresponding to 2d quantum gravity. The formalism allows, in principle, the use of lattice Monte Carlo techniques for non-perturbative computation of correlation functions. (orig.)

  18. Non-perturbative inputs for gluon distributions in the hadrons

    Energy Technology Data Exchange (ETDEWEB)

    Ermolaev, B.I. [Ioffe Physico-Technical Institute, Saint Petersburg (Russian Federation); Troyan, S.I. [St. Petersburg Institute of Nuclear Physics, Gatchina (Russian Federation)

    2017-03-15

    Description of hadronic reactions at high energies is conventionally done in the framework of QCD factorization. All factorization convolutions comprise non-perturbative inputs mimicking non-perturbative contributions and perturbative evolution of those inputs. We construct inputs for the gluon-hadron scattering amplitudes in the forward kinematics and, using the optical theorem, convert them into inputs for gluon distributions in the hadrons, embracing the cases of polarized and unpolarized hadrons. In the first place, we formulate mathematical criteria which any model for the inputs should obey, and then suggest a model satisfying those criteria. This model is based on simple reasoning: after an active parton is emitted off the hadron, the remaining set of spectators becomes unstable and can therefore be described through factors of the resonance type; we accordingly call it the resonance model. We use it to obtain non-perturbative inputs for gluon distributions in unpolarized and polarized hadrons for all available types of QCD factorization: basic, K_T and collinear factorizations. (orig.)

  20. A non-perturbative operator product expansion

    International Nuclear Information System (INIS)

    Bietenholz, W.; Cundy, N.; Goeckeler, M.

    2009-10-01

    Nucleon structure functions can be observed in Deep Inelastic Scattering experiments, but it is an outstanding challenge to confront them with fully non-perturbative QCD results. For this purpose we investigate the product of electromagnetic currents (with large photon momenta) between quark states (of low momenta). By means of an Operator Product Expansion the structure function can be decomposed into matrix elements of local operators and Wilson coefficients. For consistency both have to be computed non-perturbatively. Here we present precision results for a set of Wilson coefficients. They are evaluated from propagators for numerous quark momenta on the lattice, where the use of chiral fermions suppresses undesired operator mixing. This overdetermines the Wilson coefficients, but reliable results can be extracted by means of a Singular Value Decomposition. (orig.)
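
    The decomposition described in this record can be sketched as follows (generic OPE notation, assumed here rather than quoted from the paper):

    ```latex
    % OPE of two electromagnetic currents with large photon momentum q between
    % quark states of low momentum p: short-distance physics goes into the
    % Wilson coefficients C^{(n)}(q), long-distance physics into the matrix
    % elements of local operators O_n.
    \langle \psi(p)\,|\, J_{\mu}(q)\, J_{\nu}(-q) \,|\, \psi(p) \rangle
      \;\simeq\; \sum_{n} C^{(n)}_{\mu\nu}(q)\;
      \langle \psi(p)\,|\, \mathcal{O}_{n} \,|\, \psi(p) \rangle
    ```

    Computing the left-hand side and the operator matrix elements for many quark momenta overdetermines the coefficients C^(n), which is why a Singular Value Decomposition can be used to extract them reliably.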

  1. Non-perturbative quark mass renormalization

    CERN Document Server

    Capitani, S.; Luescher, M.; Sint, S.; Sommer, R.; Weisz, P.; Wittig, H.

    1998-01-01

    We show that the renormalization factor relating the renormalization group invariant quark masses to the bare quark masses computed in lattice QCD can be determined non-perturbatively. The calculation is based on an extension of a finite-size technique previously employed to compute the running coupling in quenched QCD. As a by-product we obtain the $\Lambda$-parameter in this theory with completely controlled errors.

  2. Non-perturbative materialization of ghosts

    International Nuclear Information System (INIS)

    Emparan, Roberto; Garriga, Jaume

    2006-01-01

    In theories with a hidden ghost sector that couples to visible matter through gravity only, empty space can decay into ghosts and ordinary matter by graviton exchange. Perturbatively, such processes can be very slow provided that the gravity sector violates Lorentz invariance above some cut-off scale. Here, we investigate non-perturbative decay processes involving ghosts, such as the spontaneous creation of self-gravitating lumps of ghost matter, as well as pairs of Bondi dipoles (i.e. lumps of ghost matter chasing after positive energy objects). We find the corresponding instantons and calculate their Euclidean action. In some cases, the instantons induce topology change or have negative Euclidean action. To shed some light on the meaning of such peculiarities, we also consider the nucleation of concentrical domain walls of ordinary and ghost matter, where the Euclidean calculation can be compared with the canonical (Lorentzian) description of tunneling. We conclude that non-perturbative ghost nucleation processes can be safely suppressed in phenomenological scenarios

  3. Non-perturbative Aspects of QCD and Parameterized Quark Propagator

    Institute of Scientific and Technical Information of China (English)

    HAN Ding-An; ZHOU Li-Juan; ZENG Ya-Guang; GU Yun-Ting; CAO Hui; MA Wei-Xing; MENG Cheng-Ju; PAN Ji-Huan

    2008-01-01

    Based on the Global Color Symmetry Model, the non-perturbative QCD vacuum is investigated with a parameterized, fully dressed quark propagator. Our theoretical predictions for various quantities characterizing the QCD vacuum agree with those of many other phenomenological QCD-inspired models. These successful predictions indicate the broad validity of the parameterized quark propagator used here. The arbitrariness in determining the integration cut-off parameter in calculating the QCD vacuum condensates is discussed in detail, and a method that avoids the dependence of the results on this cut-off parameter is recommended.

  4. Introduction to non-perturbative heavy quark effective theory

    International Nuclear Information System (INIS)

    Sommer, R.

    2010-08-01

    My lectures on the effective field theory for heavy quarks, an expansion around the static limit, concentrate on the motivation and formulation of HQET, its renormalization and discretization. This provides the basis for understanding that, and how, this effective theory can be formulated fully non-perturbatively in the QCD coupling, while by the very nature of an effective field theory, it is perturbative in the expansion parameter 1/m. After the couplings in the effective theory have been determined, the result at a certain order in 1/m is unique up to higher order terms in 1/m. In particular the continuum limit of the lattice regularized theory exists and leaves no trace of how it was regularized. In other words, the theory yields an asymptotic expansion of the QCD observables in 1/m - as usual in a quantum field theory modified by powers of logarithms. None of these properties has been shown rigorously (e.g. to all orders in perturbation theory) but perturbative computations and recently also non-perturbative lattice results give strong support to this ''standard wisdom''. A subtle issue is that a theoretically consistent formulation of the theory is only possible through a non-perturbative matching of its parameters with QCD at finite values of 1/m. As a consequence one finds immediately that the splitting of a result for a certain observable into, for example, lowest order and first order is ambiguous. Depending on how the matching between effective theory and QCD is done, a first order contribution may vanish and appear instead in the lowest order. For example, the often cited phenomenological HQET parameters Λ̄ and λ_1 lack a unique non-perturbative definition. But this does not affect the precision of the asymptotic expansion in 1/m. The final result for an observable is correct up to order (1/m)^(n+1) if the theory was treated including (1/m)^n terms. Clearly, the weakest point of HQET is that it intrinsically is an expansion. In practice, carrying it
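
    The asymptotic expansion discussed in these lectures can be written schematically (generic notation, assumed here rather than quoted from the lectures):

    ```latex
    % HQET expansion of a QCD observable in the inverse heavy-quark mass m,
    % treated to order (1/m)^n; the coefficients depend on m only through
    % powers of logarithms.
    O_{\mathrm{QCD}}(m) \;=\; O^{\mathrm{stat}}
      \;+\; \frac{1}{m}\, O^{(1)} \;+\; \dots \;+\; \frac{1}{m^{n}}\, O^{(n)}
      \;+\; \mathrm{O}\!\left(m^{-(n+1)}\right)
    ```

    The ambiguity emphasized in the abstract is that the split between O^stat and the 1/m terms depends on the non-perturbative matching condition, while the sum is unambiguous at the stated order.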

  6. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluation is obtained together with good approximations on the reliability. Methods...

  7. Thoughts on non-perturbative thermalization and jet quenching in heavy ion collisions

    International Nuclear Information System (INIS)

    Kovchegov, Yuri V.

    2006-01-01

    We start by presenting physical arguments for the impossibility of perturbative thermalization leading to a (non-viscous) Bjorken hydrodynamic description of heavy ion collisions. These arguments are complementary to our more formal argument presented in [Yu.V. Kovchegov, hep-ph/0503038]. We argue that the success of hydrodynamic models in describing the quark-gluon system produced in heavy ion collisions could only be due to non-perturbative strong coupling effects. We continue by studying non-perturbative effects in heavy ion collisions at high energies. We model non-perturbative phenomena by an instanton ensemble. We show that non-perturbative instanton vacuum fields may significantly contribute to jet quenching in nuclear collisions. At the same time, the instanton ensemble contribution to thermalization is likely to be rather weak, leading to a non-perturbative thermalization time comparable to the time of hadronization. This example illustrates that jet quenching is not necessarily a signal of a thermalized medium. Indeed, since the instanton models do not capture all the effects of the QCD vacuum (e.g., they do not account for confinement), there may be other non-perturbative effects facilitating thermalization of the system

  8. Probing non-perturbative effects in M-theory

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki; Okuyama, Kazumi

    2014-07-01

    The AdS/CFT correspondence enables us to probe M-theory on various backgrounds from the corresponding dual gauge theories. Here we investigate in detail a three-dimensional U(N) N=4 super Yang-Mills theory coupled to one adjoint hypermultiplet and N_f fundamental hypermultiplets, which is large N dual to M-theory on AdS_4 x S^7/Z_{N_f}. Using the localization and the Fermi-gas formulation, we explore non-perturbative corrections to the partition function. As in the ABJM theory, we find that there exists a non-trivial pole cancellation mechanism, which guarantees the theory to be well-defined, between worldsheet instantons and membrane instantons for all rational (in particular, physical or integral) values of N_f.

  9. Non-perturbative QCD correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Cyrol, Anton Konrad

    2017-11-27

    Functional methods provide access to the non-perturbative regime of quantum chromodynamics. Hence, they allow investigating confinement and chiral symmetry breaking. In this dissertation, correlation functions of Yang-Mills theory and unquenched two-flavor QCD are computed from the functional renormalization group. Employing a self-consistent vertex expansion of the effective action, Yang-Mills correlation functions are obtained in four as well as in three spacetime dimensions. To this end, confinement and Slavnov-Taylor identities are discussed. Our numerical results show very good agreement with corresponding lattice results. Next, unquenched two-flavor QCD is considered, where it is shown that the unquenched two-flavor gluon propagator is insensitive to the pion mass. Furthermore, the necessity for consistent truncations is emphasized. Finally, correlation functions of finite-temperature Yang-Mills theory are computed in a truncation that includes the splitting of the gluon field into directions that are transverse and longitudinal to the heat bath. In particular, it includes the splitting of the three- and four-gluon vertices. The obtained gluon propagator allows one to extract a Debye screening mass that coincides with the hard thermal loop screening mass at high temperatures, but remains meaningful also at temperatures below the phase transition temperature.

  10. Testing QCD in the non-perturbative regime

    Energy Technology Data Exchange (ETDEWEB)

    A.W. Thomas

    2007-01-01

    This is an exciting time for strong interaction physics. We have a candidate for a fundamental theory, namely QCD, which has passed all the tests thrown at it in the perturbative regime. In the non-perturbative regime it has also produced some promising results and recently a few triumphs but the next decade will see enormous progress in our ability to unambiguously calculate the consequences of non-perturbative QCD and to test those predictions experimentally. Amongst the new experimental facilities being constructed, the hadronic machines at JPARC and GSI-FAIR and the 12 GeV Upgrade at Jefferson Lab, the major new electromagnetic facility worldwide, present a beautifully complementary network aimed at producing precise new measurements which will advance our knowledge of nuclear systems and push our ability to calculate the consequences of QCD to the limit. We will first outline the plans at Jefferson Lab for doubling the energy of CEBAF. The new facility presents some wonderful opportunities for discovery in strong interaction physics, as well as beyond the standard model. Then we turn to the theoretical developments aimed at extracting precise results for physical hadron properties from lattice QCD simulations. This discussion will begin with classical examples, such as the mass of the nucleon and ?, before dealing with a very recent and spectacular success involving information extracted from modern parity violating electron scattering.

  11. Non-perturbative heavy quark effective theory. An application to semi-leptonic B-decays

    International Nuclear Information System (INIS)

    Della Morte, Michele; Heitger, Jochen; Simma, Hubert; Sommer, Rainer; Humboldt-Universitaet, Berlin

    2015-01-01

    We review a lattice strategy for non-perturbatively determining the coefficients in the HQET expansion of all components of the heavy-light axial and vector currents, including 1/m_h corrections. We also discuss recent preliminary results on the form factors parameterizing semi-leptonic B-decays at leading order in 1/m_h.

  12. Non-perturbative subtractions in the heavy quark effective field theory

    International Nuclear Information System (INIS)

    Maiani, L.; Martinelli, G.; Sachrajda, C.T.

    1992-01-01

    We demonstrate the presence of ultraviolet power divergences in the O(1/m_h) corrections to matrix elements of hadronic operators containing a heavy quark field (where m_h is the mass of the heavy quark). These power divergences must be subtracted non-perturbatively. The implications for lattice computations are discussed in detail. (orig.)

  13. A non-perturbative study of massive gauge theories

    DEFF Research Database (Denmark)

    Della Morte, Michele; Hernandez, Pilar

    2013-01-01

    We consider a non-perturbative formulation of an SU(2) massive gauge theory on a space-time lattice, which is also a discretised gauged non-linear chiral model. The lattice model is shown to have an exactly conserved global SU(2) symmetry. If a scaling region for the lattice model exists and the lightest degrees of freedom are spin one vector particles with the same quantum numbers as the conserved current, we argue that the most general effective theory describing their low-energy dynamics must be a massive gauge theory. We present results of an exploratory numerical simulation of the model and find indications for the presence of a scaling region where both a triplet vector and a scalar remain light.

  14. Spectral zeta function and non-perturbative effects in ABJM Fermi-gas

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki

    2015-03-01

    The exact partition function of ABJM theory on the three-sphere can be regarded as a canonical partition function of a non-interacting Fermi-gas with an unconventional Hamiltonian. All the information on the partition function is encoded in the discrete spectrum of this Hamiltonian. We explain how (quantum mechanical) non-perturbative corrections in the Fermi-gas system appear from a spectral consideration. Basic tools in our analysis are a Mellin-Barnes type integral representation and a spectral zeta function. From consistency with known results, we conjecture that the spectral zeta function in the ABJM Fermi-gas has an infinite number of ''non-perturbative'' poles, which are invisible in the semi-classical expansion in the Planck constant. We observe that these poles indeed appear after summing up the perturbative corrections. As a consequence, the perturbative resummation of the spectral zeta function induces non-perturbative corrections to the grand canonical partition function. We also present another example associated with a spectral problem in topological string theory. A conjectured non-perturbative free energy on the resolved conifold is successfully reproduced in this framework.
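
    In the Fermi-gas language the objects referred to above can be sketched as follows (standard definitions, assumed here rather than quoted from the record):

    ```latex
    % The grand canonical partition function is the Fredholm determinant of the
    % one-particle density matrix rho = e^{-H}, whose discrete spectrum is E_n;
    % the spectral zeta function encodes the same spectrum.
    \Xi(\kappa) \;=\; \det\left(1+\kappa\,\rho\right)
      \;=\; \prod_{n\ge 0}\left(1+\kappa\,\mathrm{e}^{-E_{n}}\right),
    \qquad
    Z(s) \;=\; \mathrm{Tr}\,\rho^{\,s} \;=\; \sum_{n\ge 0} \mathrm{e}^{-s E_{n}}
    ```

    "Non-perturbative" poles of Z(s) in this sense are poles invisible at any order of the semi-classical expansion but present in the resummed function.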

  15. Non-perturbative renormalization on the lattice

    International Nuclear Information System (INIS)

    Koerner, Daniel

    2014-01-01

    Strongly-interacting theories lie at the heart of elementary particle physics. Their distinct behaviour shapes our world sui generis. We are interested in lattice simulations of supersymmetric models, but every discretization of space-time inevitably breaks supersymmetry and allows renormalization of relevant susy-breaking operators. To understand the role of such operators, we study renormalization group trajectories of the nonlinear O(N) Sigma model (NLSM). Similar to quantum gravity, it is believed to adhere to the asymptotic safety scenario. By combining the demon method with blockspin transformations, we compute the global flow diagram. In two dimensions, we reproduce asymptotic freedom and in three dimensions, asymptotic safety is demonstrated. Essential for these results is the application of a novel optimization scheme to treat truncation errors. We proceed with a lattice simulation of the supersymmetric nonlinear O(3) Sigma model. Using an original discretization that requires to fine tune only a single operator, we argue that the continuum limit successfully leads to the correct continuum physics. Unfortunately, for large lattices, a sign problem challenges the applicability of Monte Carlo methods. Consequently, the last chapter of this thesis is spent on an assessment of the fermion-bag method. We find that sign fluctuations are thereby significantly reduced for the susy NLSM. The proposed discretization finally promises a direct confirmation of supersymmetry restoration in the continuum limit. For a complementary analysis, we study the one-flavor Gross-Neveu model which has a complex phase problem. However, phase fluctuations for Wilson fermions are very small and no conclusion can be drawn regarding the potency of the fermion-bag approach for this model.

  16. Non-perturbative construction of the Luttinger-Ward functional

    Directory of Open Access Journals (Sweden)

    M.Potthoff

    2006-01-01

    For a system of correlated electrons, the Luttinger-Ward functional provides a link between static thermodynamic quantities on the one hand and single-particle excitations on the other. The functional is useful in deriving several general properties of the system as well as in formulating thermodynamically consistent approximations. Its original construction, however, is perturbative, as it is based on the weak-coupling skeleton-diagram expansion. Here, it is shown that the Luttinger-Ward functional can be derived within a general functional-integral approach. This alternative and non-perturbative approach stresses the fact that the Luttinger-Ward functional is universal for a large class of models.

  17. A hybrid model for coupling kinetic corrections of fusion reactivity to hydrodynamic implosion simulations

    Science.gov (United States)

    Tang, Xian-Zhu; McDevitt, C. J.; Guo, Zehua; Berk, H. L.

    2014-03-01

    Inertial confinement fusion requires an imploded target in which a central hot spot is surrounded by a cold and dense pusher. The hot spot/pusher interface can take a complicated shape in three dimensions due to hydrodynamic mix. It is also a transition region where the Knudsen and inverse Knudsen layer effects can significantly modify the fusion reactivity in comparison with the commonly used value evaluated with background Maxwellians. Here, we describe a hybrid model that couples the kinetic correction of the fusion reactivity to global hydrodynamic implosion simulations. The key ingredient is a non-perturbative treatment of the tail ions in the interface region, where the Gamow ion Knudsen number approaches or surpasses order unity. The accuracy of the coupling scheme is controlled by precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space.
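
    The matching region mentioned above is delimited by a Knudsen number; a hedged sketch of the criterion (notation assumed, not taken from the paper):

    ```latex
    % Gamow ion Knudsen number: ratio of the mean free path of tail ions near
    % the Gamow peak to the hydrodynamic gradient scale length L.  The
    % non-perturbative kinetic treatment is applied where N_K approaches or
    % exceeds order unity; elsewhere the perturbative solution is used.
    N_{K} \;=\; \frac{\lambda_{\mathrm{Gamow}}}{L}
    ```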

  18. Non-perturbative heavy quark effective theory. Introduction and status

    International Nuclear Information System (INIS)

    Sommer, Rainer; Humboldt-Universitaet, Berlin

    2015-01-01

    We give an introduction to Heavy Quark Effective Theory (HQET). Our emphasis is on its formulation non-perturbative in the strong coupling, including the non-perturbative determination of the parameters in the HQET Lagrangian. In a second part we review the present status of HQET on the lattice, largely based on work of the ALPHA collaboration in the last few years. We finally discuss opportunities and challenges.

  19. Non-perturbative renormalization of HQET and QCD

    International Nuclear Information System (INIS)

    Sommer, Rainer

    2003-01-01

    We discuss the necessity of non-perturbative renormalization in QCD and HQET and explain the general strategy for solving this problem. A few selected topics are discussed in some detail, namely the importance of off-shell improvement in the MOM scheme on the lattice and recent progress in the implementation of finite volume schemes; particular emphasis is then put on the recent idea to carry out a non-perturbative renormalization of the Heavy Quark Effective Theory (HQET).

  20. Topological string theory, modularity and non-perturbative physics

    Energy Technology Data Exchange (ETDEWEB)

    Rauch, Marco

    2011-09-15

    In this thesis the holomorphic anomaly of correlators in topological string theory, matrix models and supersymmetric gauge theories is investigated. In the first part it is shown how the techniques of direct integration known from topological string theory can be used to solve the closed amplitudes of Hermitian multi-cut matrix models with polynomial potentials. In the case of the cubic matrix model, explicit expressions for the ring of non-holomorphic modular forms that are needed to express all closed matrix model amplitudes are given. This allows one to integrate the holomorphic anomaly equation up to holomorphic modular terms that are fixed by the gap condition up to genus four. There is a one-dimensional submanifold of the moduli space on which the spectral curve becomes the Seiberg-Witten curve and the ring reduces to the non-holomorphic modular ring of the group Γ(2). On that submanifold, the gap conditions completely fix the holomorphic ambiguity and the model can be solved explicitly to very high genus. Using these results it is possible to make precision tests of the connection between the large-order behavior of the 1/N expansion and non-perturbative effects due to instantons. Finally, it is argued that a full understanding of the large-genus asymptotics in the multi-cut case requires a new class of non-perturbative sectors in the matrix model. In the second part a holomorphic anomaly equation for the modified elliptic genus of two M5-branes wrapping a rigid divisor inside a Calabi-Yau manifold is derived using wall-crossing formulae and the theory of mock modular forms. The anomaly originates from restoring modularity of an indefinite theta-function capturing the wall-crossing of BPS invariants associated to D4-D2-D0 brane systems. The compatibility of this equation with anomaly equations previously observed in the context of N=4 topological Yang-Mills theory on P² and E-strings obtained from wrapping M5-branes on a del Pezzo surface which in

  1. Topological string theory, modularity and non-perturbative physics

    International Nuclear Information System (INIS)

    Rauch, Marco

    2011-09-01

    In this thesis the holomorphic anomaly of correlators in topological string theory, matrix models and supersymmetric gauge theories is investigated. In the first part it is shown how the techniques of direct integration known from topological string theory can be used to solve the closed amplitudes of Hermitian multi-cut matrix models with polynomial potentials. In the case of the cubic matrix model, explicit expressions for the ring of non-holomorphic modular forms that are needed to express all closed matrix model amplitudes are given. This allows one to integrate the holomorphic anomaly equation up to holomorphic modular terms that are fixed by the gap condition up to genus four. There is a one-dimensional submanifold of the moduli space on which the spectral curve becomes the Seiberg-Witten curve and the ring reduces to the non-holomorphic modular ring of the group Γ(2). On that submanifold, the gap conditions completely fix the holomorphic ambiguity and the model can be solved explicitly to very high genus. Using these results it is possible to make precision tests of the connection between the large-order behavior of the 1/N expansion and non-perturbative effects due to instantons. Finally, it is argued that a full understanding of the large-genus asymptotics in the multi-cut case requires a new class of non-perturbative sectors in the matrix model. In the second part a holomorphic anomaly equation for the modified elliptic genus of two M5-branes wrapping a rigid divisor inside a Calabi-Yau manifold is derived using wall-crossing formulae and the theory of mock modular forms. The anomaly originates from restoring modularity of an indefinite theta-function capturing the wall-crossing of BPS invariants associated to D4-D2-D0 brane systems. The compatibility of this equation with anomaly equations previously observed in the context of N=4 topological Yang-Mills theory on P² and E-strings obtained from wrapping M5-branes on a del Pezzo surface which in turn is

  2. Single hadron spectrum in γγ collisions: The QCD contribution to order α_s and the non perturbative background

    International Nuclear Information System (INIS)

    Aurenche, P.; Douiri, A.; Baier, R.; Fontannaz, M.; Schiff, D.

    1985-01-01

    We calculate the corrections of order α_s to the process γγ → HX where both initial photons are real. The analytic expressions are given and a detailed discussion of the variation of the corrections with p_T and rapidity is presented. The dependence on the factorization prescription and scale is also discussed. Using the equivalent photon approximation the cross-section for e⁺e⁻ → e⁺e⁻HX is calculated both in the PEP/PETRA and LEP energy range. Based on the vector meson dominance model the non perturbative background is estimated and its importance for present and future experiments is emphasized. (orig.)
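For orientation, the equivalent photon (Weizsäcker-Williams) approximation used above folds a photon flux against the two-photon subprocess cross-section; the standard leading-logarithmic form is sketched below (the precise choice of Q² limits is convention-dependent and not specified by the abstract):

```latex
% Weizsacker-Williams photon spectrum carried by an electron:
f_{\gamma/e}(x) = \frac{\alpha}{2\pi}\,
  \frac{1+(1-x)^2}{x}\,
  \ln\!\frac{Q^2_{\max}}{Q^2_{\min}}
% folded twice to obtain the e+e- cross-section:
\qquad
\sigma(e^+e^- \to e^+e^-\,HX)
  = \int_0^1\! dx_1 \int_0^1\! dx_2\;
    f_{\gamma/e}(x_1)\, f_{\gamma/e}(x_2)\;
    \hat\sigma_{\gamma\gamma \to HX}(x_1 x_2 s)
```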

  3. Non perturbative aspects of strongly correlated electron systems

    International Nuclear Information System (INIS)

    Controzzi, D.

    2000-01-01

    In this thesis we report some selected works on strongly correlated electron systems. A common ingredient of these works is the use of non-perturbative techniques available in low dimensions. In the first part we use the Bethe Ansatz to study some properties of two families of integrable models introduced by Fateev. We calculate the thermodynamics of the models and show how they can be interpreted as effective Landau-Ginzburg theories for coupled two-dimensional superconductors interacting with an insulating substrate. This allows us to study exactly the dependence of the critical temperature on the thickness of the insulating layer, and on the interaction between the order parameters of two different superconducting planes. In the second part of the thesis we study the optical conductivity of the sine-Gordon model using the Form Factor method and Conformal Perturbation Theory. This allows us to develop, for the first time, a complete theory of the optical conductivity of one-dimensional Mott insulators, in the Quantum Field Theory limit. (author)

  4. A non-perturbative approach to strings

    International Nuclear Information System (INIS)

    Orland, P.

    1986-03-01

    After briefly reviewing the theory of strings in the light-cone gauge, a lattice regularized path integral for the amplitudes is discussed. The emphasis is put on a toy string model; the U(N) Veneziano model in the limit N → ∞ with g₀²N fixed. The lattice methods of Giles and Thorn are used extensively, but are found to require modification beyond perturbation theory. The twenty-six-dimensional toy string model is recast as a two-dimensional spin system. (orig.)

  5. A non-perturbative approach to strings

    International Nuclear Information System (INIS)

    Orland, P.

    1986-01-01

    After briefly reviewing the theory of strings in the light-cone gauge, a lattice regularized path integral for the amplitudes is discussed. The emphasis is put on a toy string model; the U(N) Veneziano model in the limit N → ∞ with g₀²N fixed. The lattice methods of Giles and Thorn are used extensively, but are found to require modification beyond perturbation theory. The twenty-six-dimensional toy string model is recast as a two-dimensional spin system.

  6. Non-perturbative string theories and singular surfaces

    International Nuclear Information System (INIS)

    Bochicchio, M.

    1990-01-01

    Singular surfaces are shown to be dense in the Teichmüller space of all Riemann surfaces and in the Grassmannian. This happens because a regular surface of genus h, obtained by identifying 2h disks in pairs, can be approximated by a very-large-genus singular surface with punctures dense in the 2h disks. A scale ε is introduced and the approximate genus is defined as half the number of connected regions covered by punctures of radius ε. The non-perturbative partition function is proposed to be a scaling limit of the partition function on such infinite-genus singular surfaces with a weight which is the coupling constant g raised to the approximate genus. For a Gaussian model in any space-time dimension the regularized partition function on singular surfaces of infinite genus is the partition function of a two-dimensional lattice gas of charges and monopoles. It is shown that modular invariance of the partition function implies a version of the Dirac quantization condition for the values of the e/m charges. Before the scaling limit the phases of the lattice gas may be classified according to the 't Hooft criteria for the condensation of e/m operators. (orig.)
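The "version of the Dirac quantization condition" referred to above generalizes the textbook statement relating electric and magnetic charges; in its standard form (conventions differ by factors of 2, and the abstract's lattice-gas version need not coincide exactly):

```latex
% Dirac quantization condition, natural units (\hbar = c = 1):
e\,g = 2\pi n, \qquad n \in \mathbb{Z}
```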

  7. Variational techniques in non-perturbative QCD

    CERN Document Server

    Kovner, Alex; Kovner, Alex

    2004-01-01

    We review attempts to apply the variational principle to understand the vacuum of non-abelian gauge theories. In particular, we focus on the method explored by Ian Kogan and collaborators, which imposes exact gauge invariance on the trial Gaussian wave functional prior to the minimization of energy. We describe the application of the method to a toy model -- confining compact QED in 2+1 dimensions -- where it works wonderfully and reproduces all known non-trivial results. We then follow its applications to pure Yang-Mills theory in 3+1 dimensions at zero and finite temperature. Among the results of the variational calculation are dynamical mass generation and the analytic description of the deconfinement phase transition.

  8. Introduction to non-perturbative quantum chromodynamics

    International Nuclear Information System (INIS)

    Pene, O.

    1995-01-01

    Quantum chromodynamics is considered to be the theory of the strong interaction. The main peculiarity of this theory is that its asymptotic states (hadrons) are different from its elementary fields (quarks and gluons). This property plays a great part in any physical process involving small momentum-energy transfers. In such a range perturbative methods are no longer applicable. This work focuses on other tools such as QCD symmetry, the quark model, Green functions and the sum rules. To obtain hadron characteristics numerically, QCD on lattices is used, but only in the case of simple processes involving no more than one hadron in the initial and final states, because of the complexity of the Green function. Some examples using a Monte-Carlo simulation are given. (A.C.)

  9. Alien calculus and non perturbative effects in Quantum Field Theory

    Science.gov (United States)

    Bellon, Marc P.

    2016-12-01

    In many domains of physics, methods for dealing with non-perturbative aspects are required. Here, I want to argue that a good approach for this is to work on the Borel transforms of the quantities of interest, the singularities of which give non-perturbative contributions. These singularities in many cases can be largely determined by using the alien calculus developed by Jean Écalle. My main example will be the two point function of a massless theory given as a solution of a renormalization group equation.
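The Borel-transform strategy described above can be made concrete with the standard definitions (a textbook sketch, not specific to the paper):

```latex
% Borel transform of an asymptotic series f(x) ~ \sum_n a_n x^{n+1}:
\mathcal{B}[f](\zeta) = \sum_{n=0}^{\infty} \frac{a_n}{n!}\,\zeta^{n}
% Laplace--Borel resummation, valid when the integral converges:
\qquad
f(x) = \int_0^{\infty} e^{-\zeta/x}\, \mathcal{B}[f](\zeta)\, d\zeta
% A singularity of \mathcal{B}[f] at \zeta_0 > 0 obstructs the integral
% and signals a non-perturbative contribution of order e^{-\zeta_0/x}.
```

Alien calculus provides derivations acting on such transforms that isolate the singular behavior at each ζ₀, which is how the non-perturbative contributions mentioned in the abstract are organized.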

  10. Non-perturbative methods applied to multiphoton ionization

    International Nuclear Information System (INIS)

    Brandi, H.S.; Davidovich, L.; Zagury, N.

    1982-09-01

    The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, where multiphoton ionization and tunnel auto-ionization occur for high intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of applicability of these non-perturbative schemes is suggested. A brief comparison between the ionization rate of atoms in the presence of linearly and circularly polarized light is presented. (Author)

  11. An efficiency correction model

    NARCIS (Netherlands)

    Francke, M.K.; de Vos, A.F.

    2009-01-01

    We analyze a dataset containing costs and outputs of 67 American local exchange carriers in a period of 11 years. This data has been used to judge the efficiency of BT and KPN using static stochastic frontier models. We show that these models are dynamically misspecified. As an alternative we

  12. The FLUKA Monte Carlo, Non-Perturbative QCD and Cosmic Ray Cascades

    International Nuclear Information System (INIS)

    Battistoni, G.

    2005-01-01

    The FLUKA Monte Carlo code, presently used in cosmic ray physics, contains packages to sample soft hadronic processes which are built according to the Dual Parton Model. This is a phenomenological model capable of reproducing many of the features of hadronic collisions in the non perturbative QCD regime. The basic principles of the model are summarized and, as an example, the associated Lambda-K production is discussed. This is a process which has some relevance for the calculation of atmospheric neutrino fluxes

  13. Non-perturbative approach to 2D-supergravity and super-Virasoro constraints

    CERN Document Server

    Becker, M

    1994-01-01

    The coupling of N=1 SCFT of type (4m,2) to two-dimensional supergravity can be formulated non-perturbatively in terms of a discrete super-eigenvalue model proposed by Álvarez-Gaumé et al. We derive the superloop equations that describe, in the double scaling limit, the non-perturbative solution of this model. These equations are equivalent to the double-scaled super-Virasoro constraints satisfied by the partition function. They are formulated in terms of a ĉ=1 theory, with a Z₂-twisted scalar field and a Weyl-Majorana fermion in the Ramond sector. We have solved the superloop equations to all orders in the genus expansion and obtained the explicit expressions for the correlation functions of gravitationally dressed scaling operators in the NS and R sectors. In the double scaling limit, we obtain a formulation of the model in terms of a new supersymmetric extension of the KdV hierarchy.

  14. Non-perturbative QCD Effects and the Top Mass at the Tevatron

    CERN Document Server

    Wicke, Daniel

    2008-01-01

    The modelling of non-perturbative effects is an important part of modern collider physics simulations. In hadron collisions there is some indication that the modelling of the interactions of the beam remnants, the underlying event, may require non-trivial colour reconnection effects to be present. We recently introduced a universally applicable toy model of such reconnections, based on hadronising strings. This model, which has one free parameter, has been implemented in the Pythia event generator. We then considered several parameter sets ('tunes'), constrained by fits to Tevatron minimum-bias data, and determined the sensitivity of a simplified top mass analysis to these effects, in exclusive semi-leptonic top events at the Tevatron. A first attempt at isolating the genuine non-perturbative effects gave an estimate of order ±0.5 GeV from non-perturbative uncertainties. The results presented here are an update to the original study and include recent bug fixes of Pythia that influenced the tunings investigat...

  15. Non-Perturbative Formulation of Time-Dependent String Solutions

    CERN Document Server

    Alexandre, J; Mavromatos, Nikolaos E; Alexandre, Jean; Ellis, John; Mavromatos, Nikolaos E.

    2006-01-01

    We formulate here a new world-sheet renormalization-group technique for the bosonic string, which is non-perturbative in the Regge slope α' and based on a functional method for controlling the quantum fluctuations, whose magnitudes are scaled by the value of α'. Using this technique we exhibit, in addition to the well-known linear-dilaton cosmology, a new, non-perturbative time-dependent background solution. Using the reparametrization invariance of the string S-matrix, we demonstrate that this solution is conformally invariant to O(α'), and we give a heuristic inductive argument that conformal invariance can be maintained to all orders in α'. This new time-dependent string solution may be applicable to primordial cosmology or to the exit from linear-dilaton cosmology at large times.

  16. Non-perturbative Green functions in quantum gauge theories

    International Nuclear Information System (INIS)

    Shabanov, S.V.

    1991-01-01

    Non-perturbative Green functions for gauge-invariant variables are considered. The Green functions are found to be modified as compared with the usual ones in a definite gauge because of a physical configuration space (PCS) reduction. In the Yang-Mills theory with fermions this phenomenon follows from the Singer theorem about the absence of a global gauge condition for fields tending to zero at spatial infinity. 20 refs

  17. Non-perturbative versus perturbative renormalization of lattice operators

    International Nuclear Information System (INIS)

    Goeckeler, M.; Technische Hochschule Aachen; Horsley, R.; Ilgenfritz, E.M.; Oelrich, H.; Forschungszentrum Juelich GmbH; Schierholz, G.; Forschungszentrum Juelich GmbH; Perlt, H.; Schiller, A.; Rakow, P.

    1995-09-01

    Our objective is to compute the moments of the deep-inelastic structure functions of the nucleon on the lattice. A major source of uncertainty is the renormalization of the lattice operators that enter the calculation. In this talk we compare the renormalization constants of the most relevant twist-two bilinear quark operators which we have computed non-perturbatively and perturbatively to one loop order. Furthermore, we discuss the use of tadpole improved perturbation theory. (orig.)

  18. Non-perturbative particle dynamics in (2+1)-gravity

    CERN Document Server

    Bellini, A; Valtancoli, P

    1995-01-01

    We construct a non-perturbative, single-valued solution for the metric and the motion of two interacting particles in (2+1)-gravity, by using a Coulomb gauge of conformal type. The method provides the mapping from multivalued (Minkowskian) coordinates to single-valued ones, which solves the non-abelian monodromies due to the particles' momenta and can be applied also to the general N-body case.

  19. Calibrated geometries and non perturbative superpotentials in M-theory

    International Nuclear Information System (INIS)

    Hernandez, R.

    1999-12-01

    We consider non perturbative effects in M-theory compactifications on a seven-manifold of G_2 holonomy arising from membranes wrapped on supersymmetric three-cycles. When membranes are wrapped on associative submanifolds they induce a superpotential that can be calculated using calibrated geometry. This superpotential is also derived from compactification on a seven-manifold, to four-dimensional anti-de Sitter spacetime, of eleven-dimensional supergravity with non-vanishing expectation value of the four-form field strength. (author)

  20. Non-perturbative plaquette in 3d pure SU(3)

    CERN Document Server

    Hietanen, A; Laine, Mikko; Rummukainen, K; Schröder, Y

    2005-01-01

    We present a determination of the elementary plaquette and, after the subsequent ultraviolet subtractions, of the finite part of the gluon condensate, in lattice regularization in three-dimensional pure SU(3) gauge theory. Through a change of regularization scheme to MSbar and a matching back to full four-dimensional QCD, this result determines the first non-perturbative contribution in the weak-coupling expansion of hot QCD pressure.

  1. Non-perturbative scalar potential inspired by type IIA strings on rigid CY

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrov, Sergei [Laboratoire Charles Coulomb (L2C), UMR 5221, CNRS-Université de Montpellier,F-34095, Montpellier (France); Ketov, Sergei V. [Department of Physics, Tokyo Metropolitan University,1-1 Minami-ohsawa, Hachioji-shi, Tokyo 192-0397 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (IPMU), The University of Tokyo,Chiba 277-8568 (Japan); Institute of Physics and Technology, Tomsk Polytechnic University,30 Lenin Ave., Tomsk 634050 (Russian Federation); Wakimoto, Yuki [Department of Physics, Tokyo Metropolitan University,1-1 Minami-ohsawa, Hachioji-shi, Tokyo 192-0397 (Japan)

    2016-11-10

    Motivated by a class of flux compactifications of type IIA strings on rigid Calabi-Yau manifolds, preserving N=2 local supersymmetry in four dimensions, we derive a non-perturbative potential of all scalar fields from the exact D-instanton corrected metric on the hypermultiplet moduli space. Applying this potential to moduli stabilization, we find a discrete set of exact vacua for axions. At these critical points, the stability problem is decoupled into two subspaces spanned by the axions and the other fields (dilaton and Kähler moduli), respectively. Whereas the stability of the axions is easily achieved, numerical analysis shows instabilities in the second subspace.

  2. Non-perturbative field theory/field theory on a lattice

    International Nuclear Information System (INIS)

    Ambjorn, J.

    1988-01-01

    The connection between the theory of critical phenomena in statistical mechanics and the renormalization of field theory is briefly outlined. It is described how this connection can be used to obtain information about non-perturbative quantities in QCD and about more intelligent ways of doing Monte Carlo (MC) simulations. The MC method is shown to be a viable one in high energy physics, but it is not a good substitute for an analytic understanding. MC methods will be very valuable both for getting out hard numbers and for testing the correctness of new ideas.

  3. Introduction and overview to some topics in perturbative QCD and their relationship to non perturbative effects

    International Nuclear Information System (INIS)

    West, G.

    1990-01-01

    The main thrust of this talk is to review and discuss various topics in both perturbative and non-perturbative QCD that are, by and large, model independent. This inevitably means that we shall rely heavily on the renormalization group and asymptotic freedom. Although this usually means that one has to concentrate on high energy phenomena, there are some physical processes, even involving bound states, which are certainly highly non-perturbative, where one can make some progress without becoming overly model dependent. Experience with the EMC effect, where there are about as many "explanations" as authors, has surely taught us that it may well be worth returning to "basics" and thinking about general properties of QCD rather than guessing, essentially arbitrarily, what we think is its low energy structure. No doubt we shall have to await further numerical progress or some inspired theoretical insight before we can, with confidence, attack these extremely difficult problems. So, with this in mind, I shall review a smattering of problems which do have a non-perturbative component and where some rather modest progress can actually be made; I emphasize the adjective "modest"!

  4. Correlations in double parton distributions: perturbative and non-perturbative effects

    Energy Technology Data Exchange (ETDEWEB)

    Rinaldi, Matteo; Scopetta, Sergio [Dipartimento di Fisica e Geologia, Università degli Studi di Perugia andIstituto Nazionale di Fisica Nucleare, Sezione di Perugia, via A. Pascoli, I-06123 Perugia (Italy); Traini, Marco [Institut de Physique Théorique CEA-Saclay, F-91191 Gif-sur-Yvette (France); INFN - TIFPA, Dipartimento di Fisica, Università degli Studi di Trento,Via Sommarive 14, I-38123 Povo (Trento) (Italy); Vento, Vicente [Departament de Física Teòrica, Universitat de València and Institut de Física Corpuscular,Consejo Superior de Investigaciones Científicas, 46100 Carrer del Dr. Moliner 50 València (Spain)

    2016-10-12

    The correct description of Double Parton Scattering (DPS), which represents a background in several channels for the search of new Physics at the LHC, requires the knowledge of double parton distribution functions (dPDFs). These quantities also represent a novel tool for the study of the three-dimensional nucleon structure, complementary to the possibilities offered by electromagnetic probes. In this paper we analyze dPDFs using Poincaré-covariant predictions obtained with a Light-Front constituent quark model proposed in a recent paper, and QCD evolution. We study to what extent factorized expressions for dPDFs, which neglect, at least in part, two-parton correlations, can be used. We show that they fail in reproducing the calculated dPDFs, in particular in the valence region. Measurable processes at existing facilities actually occur at low longitudinal momenta of the interacting partons; to make contact with these processes we have analyzed correlations between pairs of partons of different kinds, finding that in some cases they are strongly suppressed at low longitudinal momenta, while for other distributions they can be sizeable. For example, the effect of gluon-gluon correlations can be as large as 20%. We have shown that these behaviors can be understood in terms of a delicate interference of non-perturbative correlations, generated by the dynamics of the model, and perturbative ones, generated by the model-independent evolution procedure. Our analysis shows that at LHC kinematics two-parton correlations can be relevant in DPS, and we therefore address the possibility of studying them experimentally.
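The factorized ansatz the paper tests, and the "pocket formula" for DPS cross-sections that is usually built on it, have the standard schematic forms below (the θ-function is one common choice of kinematic constraint; σ_eff is a phenomenological parameter, and m = 1 or 2 for identical or distinct subprocesses):

```latex
% Fully factorized dPDF, neglecting two-parton correlations:
D_{ij}(x_1, x_2; \mu) \approx
  f_i(x_1;\mu)\, f_j(x_2;\mu)\, \theta(1 - x_1 - x_2)
% DPS "pocket formula" built on this approximation:
\qquad
\sigma^{\rm DPS}_{AB} = \frac{m}{2}\,
  \frac{\sigma_A\,\sigma_B}{\sigma_{\rm eff}}
```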

  5. Experimental investigations of strong interaction in the non-perturbative QCD region

    International Nuclear Information System (INIS)

    Lindenbaum, S.J.; Samuel, S.

    1993-09-01

    A critical investigation of non-perturbative QCD requires investigating glueballs, searching for a Quark Gluon Plasma (QGP), and searching for strangelets. In the glueball area the data obtained (E-881) at 8 GeV/c were analyzed for π⁻p → φφn (OZI forbidden), φK⁺K⁻n (OZI allowed), K⁻p → φφ(ΛΣ) (OZI allowed), and p̄p → φφ → φφπ⁰ (OZI forbidden), φK⁺K⁻π⁰ (OZI allowed). By comparing the OZI-forbidden (glueball filter) reactions with the OZI-allowed ones and the previous 22 GeV/c π⁻p → φφn and φK⁺K⁻n data, a further critical test will be made of the so far unsuccessfully challenged hypothesis that the g_T(2010), g_T'(2300) and g_T''(2340), all with I^G J^PC = 0⁺2⁺⁺, are produced by 1-3 2⁺⁺ glueballs. In the QGP search with a large-solid-angle TPC a good Ξ signal was observed. The ratio of Ξ to single-strange-quark particles such as Λ is a better indication of strangeness enhancement in QGP formation. The data indicate enhancement by a factor of ∼2 over cascade-model (corrected to observed strangeness) predictions, but this is definitely far from conclusive at this stage since the result is model dependent. Double-Λ topologies of the type needed to discover light strangelets in the nanosecond lifetime region were found. In addition, research has been accomplished in three main areas: bosonic technicolor and strings, buckminsterfullerene C₆₀, and neutrino oscillations in a dense neutrino gas

  6. Non-perturbative O(a) improvement of lattice QCD

    CERN Document Server

    Lüscher, Martin; Sommer, Rainer; Weisz, P; Wolff, U; Luescher, Martin; Sint, Stefan; Sommer, Rainer; Weisz, Peter; Wolff, Ulli

    1997-01-01

    The coefficients multiplying the counterterms required for O($a$) improvement of the action and the isovector axial current in lattice QCD are computed non-perturbatively, in the quenched approximation and for bare gauge couplings $g_0$ in the range $0 \\leq g_0 \\leq 1$. A finite-size method based on the Schrödinger functional is employed, which enables us to perform all calculations at zero or nearly zero quark mass. As a by-product the critical hopping parameter $\\kappa_c$ is obtained at all couplings considered.
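The counterterms referred to above have the standard Sheikholeslami-Wohlert form for the action and the usual O(a) improvement of the axial current; schematically (textbook notation, with F̂_μν the clover-leaf lattice field strength and ∂̃_μ the symmetric lattice derivative):

```latex
% Clover (Sheikholeslami-Wohlert) counterterm added to the Wilson action:
S_{\rm impr} = S_{\rm Wilson}
  + a^5 \sum_x c_{\rm sw}(g_0)\;
    \bar\psi(x)\, \tfrac{i}{4}\,\sigma_{\mu\nu}\widehat F_{\mu\nu}(x)\, \psi(x)
% O(a)-improved isovector axial current:
\qquad
(A_{\rm I})^a_\mu = A^a_\mu + a\, c_A(g_0)\, \tilde\partial_\mu P^a
```

The non-perturbative computation described in the abstract determines c_sw(g₀) and c_A(g₀) by imposing improvement conditions in the Schrödinger functional.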

  7. Tests of perturbative and non perturbative structure of moments of hadronic event shapes using experiments JADE and OPAL

    International Nuclear Information System (INIS)

    Pahl, Christoph Johannes

    2008-01-01

    In hadron production data of the e⁺e⁻ annihilation experiments JADE and OPAL we measure the first five moments of twelve hadronic-event-shape variables at c.m. energies from 14 to 207 GeV. From the comparison of the QCD NLO prediction with the data corrected for hadronization by means of MC models we obtain the reference value of the strong coupling α_s(M_Z⁰) = 0.1254 ± 0.0007(stat.) ± 0.0010(exp.) +0.0009/-0.0023(had.) +0.0069/-0.0053(theo.). For some, especially higher, moments, systematic deficiencies in the QCD NLO prediction are recognizable. Simultaneous fits to two moments under the assumption of identical renormalization scales yield scale values from x_μ = 0.057 to x_μ = 0.196. We check predictions of different non-perturbative models. From the single-dressed-gluon approximation a perturbative prediction in O(α_s⁵) results, with negligible energy power correction, which describes the thrust average on hadron level well with α_s(M_Z⁰) = 0.1186 ± 0.0017(exp.) +0.0033/-0.0028(theo.). The variance of the event-shape variables is measured and compared with models as well as predictions.

  8. Non-perturbative effects in the transverse momentum distribution of electroweak bosons at the LHC

    CERN Document Server

    Siodmok, Andrzej; Seymour, Michael H

    2009-01-01

    The transverse momentum of electroweak bosons in a Drell-Yan process is an important quantity for the experimental program at the LHC. The new model of non-perturbative gluon emission in an initial state parton shower presented in this note gives a good description of this quantity for the data taken in previous experiments over a wide range of CM energy. The model's prediction for the transverse momentum distribution of Z bosons for the LHC is presented and used for a comparison with other approaches.

  9. Non-perturbative supersymmetry anomaly in supersymmetric QCD

    International Nuclear Information System (INIS)

    Shamir, Y.

    1991-03-01

    The zero modes of the Dirac operator in an instanton and other topologically non-trivial backgrounds are unstable in a large class of massless or partially massless supersymmetric gauge theories. We show that under a generic perturbation of the scalar fields all zero modes become resonances, and discuss the ensuing breakdown of conventional perturbation theory. As a result, despite the presence of massless fermions, the field-theoretic tunneling amplitude is not suppressed. In massless supersymmetric QCD with N_c ≤ N_f the effective potential is found to be negative and monotonically increasing in the weak coupling regime for scalar VEVs which lie on the perturbatively flat directions. Consequently, massless supersymmetric QCD with N_c ≤ N_f exhibits a non-perturbative supersymmetry anomaly and exists in a strongly interacting phase which closely resembles ordinary QCD. The same conclusions apply if small masses are added to the lagrangian and the massless limit is smooth. (author). 21 refs, 5 figs

  10. A non-perturbative analysis in finite volume gauge theory

    International Nuclear Information System (INIS)

    Koller, J.; State Univ. of New York, Stony Brook; Van Baal, P.; State Univ. of New York, Stony Brook

    1988-01-01

    We discuss SU(2) gauge theory on a three-torus using a finite volume expansion. Our discovery of natural coordinates allows us to obtain continuum results in a region where Monte Carlo data are also available. The obtained results agree well with the perturbative and semiclassical analysis for small volumes, and there is fair agreement with the Monte Carlo results in intermediate volumes. The simple picture which emerges for the approximate low-energy dynamics is that of three interacting particles enclosed in a sphere, with zero total 'angular momentum'. The validity of an adiabatic approximation is investigated. The fundamentally new understanding gained is that non-perturbative dynamics can be incorporated by imposing boundary conditions which arise through the nontrivial topology of configuration space. (orig.)

  11. Multiphoton transitions in semiconductors in the non-perturbative approach

    International Nuclear Information System (INIS)

    Iqbal, M.Z.; Hassan, A.R.

    1987-09-01

    Transition rates for multiphoton absorption via direct band-to-band excitation have been calculated using a non-perturbative approach due to Jones and Reiss, based on Volkov-type final-state wave functions. Both cases of parabolic and non-parabolic energy bands have been included in our calculations. Absorption coefficients have been obtained for the cases of plane-polarized and circularly polarized light. In particular, two-photon absorption coefficients are derived for the two cases of polarization in the parabolic-band approximation as well as for non-parabolic bands, and compared with results based on perturbation theory. Numerical estimates of the two-photon absorption coefficients resulting from our calculations are also provided. (author). 10 refs, 1 tab

  12. Non-perturbative renormalization of three-quark operators

    Energy Technology Data Exchange (ETDEWEB)

    Goeckeler, Meinulf [Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Horsley, Roger [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Kaltenbrunner, Thomas [Regensburg Univ. (DE). Inst. fuer Theoretische Physik] (and others)

    2008-10-15

    High luminosity accelerators have greatly increased the interest in semi-exclusive and exclusive reactions involving nucleons. The relevant theoretical information is contained in the nucleon wavefunction and can be parametrized by moments of the nucleon distribution amplitudes, which in turn are linked to matrix elements of local three-quark operators. These can be calculated from first principles in lattice QCD. Defining an RI-MOM renormalization scheme, we renormalize three-quark operators corresponding to low moments non-perturbatively and take special care of the operator mixing. After performing a scheme matching and a conversion of the renormalization scale we quote our final results in the MS scheme at {mu}=2 GeV. (orig.)

  13. Non-perturbative approach for laser radiation interactions with solids

    International Nuclear Information System (INIS)

    Jalbert, G.

    1985-01-01

    Multiphoton transitions in direct-gap crystals are studied using non-perturbative approaches. Two methods currently used for atoms and molecules are revised, generalized and applied to solids. In the first, we construct an S-matrix which incorporates the electromagnetic field to all orders in an approximate way, leading to analytical solutions for the multiphoton transition rates. In the second, the transition probability is calculated within the Bloch-Floquet formalism applied to the specific case of solids. This formalism is interpreted as a classical approximation to the quantum treatment of the field. In the weak-field limit, we compare our results with the usual perturbation calculations. We also incorporate, in the first approach, the non-homogeneity and multimode effects of a real laser. (author)

  14. World-Line Formalism: Non-Perturbative Applications

    Directory of Open Access Journals (Sweden)

    Dmitry Antonov

    2016-11-01

    This review addresses the impact that confinement of virtual quarks and gluons has on various physical observables, at the level of one-loop QCD diagrams. These observables include the quark condensate for various heavy flavors, the Yang-Mills running coupling with an infrared-stable fixed point, and the correlation lengths of the stochastic Yang-Mills fields. Other non-perturbative applications of the world-line formalism presented in the review are devoted to the determination of the electroweak phase-transition critical temperature, to the derivation of a semi-classical analogue of the relation between the chiral and the gluon QCD condensates, and to the calculation of the free energy of the gluon plasma in the high-temperature limit. As a complementary result, we demonstrate Casimir scaling of k-string tensions in the Gaussian ensemble of the stochastic Yang-Mills fields.

  15. New Results in N = 2 Theories from Non-perturbative String

    Science.gov (United States)

    Bonelli, Giulio; Grassi, Alba; Tanzini, Alessandro

    2018-03-01

    We describe the magnetic phase of SU(N) N = 2 Super Yang-Mills theories in the self-dual Omega background in terms of a new class of multi-cut matrix models. These arise from a non-perturbative completion of topological strings in the dual four-dimensional limit which engineers the gauge theory in the strongly coupled magnetic frame. The corresponding spectral determinants provide natural candidates for the tau functions of isomonodromy problems for flat spectral connections associated to the Seiberg-Witten geometry.

  16. 1/4 BPS States and Non-Perturbative Couplings in N=4 String Theories

    CERN Document Server

    Lerche, W.

    1999-01-01

    We compute certain 2K+4-point one-loop couplings in the type IIA string compactified on K3 x T^2, which are related to a topological index on this manifold. Their special feature is that they are sensitive only to short and intermediate BPS multiplets. The couplings derive from underlying prepotentials of the form G(T,U) = ∂_V^{2K} ln[χ_10(T,U,V)], where χ_10(T,U,V) is the helicity partition function of 1/4 BPS states. In the dual heterotic string on T^6, the amplitudes describe non-perturbative gravitational corrections due to bound states of fivebrane instantons with heterotic world-sheet instantons. We argue, as a consequence, that our results give information about instanton configurations in six-dimensional Sp(2K) gauge theories on T^6.

  17. A non-perturbative argument for the non-abelian Higgs mechanism

    Energy Technology Data Exchange (ETDEWEB)

    De Palma, G. [Scuola Normale Superiore, Pisa (Italy); INFN, Sezione di Pisa, Pisa (Italy); Strocchi, F., E-mail: franco.strocchi@sns.it [INFN, Sezione di Pisa, Pisa (Italy)

    2013-09-15

    The evasion of massless Goldstone bosons by the non-abelian Higgs mechanism is proved by a non-perturbative argument in the local BRST gauge. -- Highlights: •The perturbative explanation of the Higgs mechanism (HM) is not under mathematical control. •We offer a non-perturbative proof of the absence of Goldstone bosons from the non-abelian HM. •Our non-perturbative proof in the BRST gauge avoids a mean field ansatz and expansion.

  18. A non-perturbative argument for the non-abelian Higgs mechanism

    International Nuclear Information System (INIS)

    De Palma, G.; Strocchi, F.

    2013-01-01

    The evasion of massless Goldstone bosons by the non-abelian Higgs mechanism is proved by a non-perturbative argument in the local BRST gauge. -- Highlights: •The perturbative explanation of the Higgs mechanism (HM) is not under mathematical control. •We offer a non-perturbative proof of the absence of Goldstone bosons from the non-abelian HM. •Our non-perturbative proof in the BRST gauge avoids a mean field ansatz and expansion

  19. Holomorphic couplings in non-perturbative string compactifications

    Energy Technology Data Exchange (ETDEWEB)

    Klevers, Denis Marco

    2011-06-15

    In this thesis we present an analysis of several aspects of four-dimensional, non-perturbative N = 1 compactifications of string theory. Our focus is on the study of brane dynamics and their effective physics as encoded in the holomorphic couplings of the low-energy N = 1 effective action, most prominently the superpotential W. The thesis is divided into three parts. In part one we derive the effective action of a spacetime-filling D5-brane in generic Type IIB Calabi-Yau orientifold compactifications. In the second part we invoke tools from string dualities, namely from F-theory, heterotic/F-theory duality and mirror symmetry, for a more elaborate study of the dynamics of (p, q) 7-branes and heterotic five-branes. In this context we demonstrate exact computations of the complete perturbative effective superpotential, both due to branes and background fluxes. Finally, in the third part we present a novel geometric description of five-branes in Type IIB and heterotic M-theory Calabi-Yau compactifications via a non-Calabi-Yau threefold Ẑ_3, which is canonically constructed from the original five-brane and the Calabi-Yau threefold Z_3 via a blow-up. We exploit the blow-up threefold Ẑ_3 as a tool to derive open-closed Picard-Fuchs differential equations, which govern the complete effective brane and flux superpotential. In addition, we present first evidence to interpret Ẑ_3 as a flux compactification dual to the original five-brane by defining an SU(3)-structure on Ẑ_3, which is generated dynamically by the five-brane backreaction. (orig.)

  20. Holomorphic couplings in non-perturbative string compactifications

    International Nuclear Information System (INIS)

    Klevers, Denis Marco

    2011-06-01

    In this thesis we present an analysis of several aspects of four-dimensional, non-perturbative N = 1 compactifications of string theory. Our focus is on the study of brane dynamics and their effective physics as encoded in the holomorphic couplings of the low-energy N = 1 effective action, most prominently the superpotential W. The thesis is divided into three parts. In part one we derive the effective action of a spacetime-filling D5-brane in generic Type IIB Calabi-Yau orientifold compactifications. In the second part we invoke tools from string dualities, namely from F-theory, heterotic/F-theory duality and mirror symmetry, for a more elaborate study of the dynamics of (p, q) 7-branes and heterotic five-branes. In this context we demonstrate exact computations of the complete perturbative effective superpotential, both due to branes and background fluxes. Finally, in the third part we present a novel geometric description of five-branes in Type IIB and heterotic M-theory Calabi-Yau compactifications via a non-Calabi-Yau threefold Ẑ_3, which is canonically constructed from the original five-brane and the Calabi-Yau threefold Z_3 via a blow-up. We exploit the blow-up threefold Ẑ_3 as a tool to derive open-closed Picard-Fuchs differential equations, which govern the complete effective brane and flux superpotential. In addition, we present first evidence to interpret Ẑ_3 as a flux compactification dual to the original five-brane by defining an SU(3)-structure on Ẑ_3, which is generated dynamically by the five-brane backreaction. (orig.)

  1. From Faddeev-Kulish to LSZ. Towards a non-perturbative description of colliding electrons

    Science.gov (United States)

    Dybalski, Wojciech

    2017-12-01

    In a low-energy approximation of the massless Yukawa theory (Nelson model) we derive a Faddeev-Kulish type formula for the scattering matrix of N electrons and reformulate it in LSZ terms. To this end, we perform a decomposition of the infrared-finite Dollard modifier into clouds of real and virtual photons, whose infrared divergences mutually cancel. We point out that in the original work of Faddeev and Kulish the clouds of real photons are omitted, and consequently their wave operators are ill-defined on the Fock space of free electrons. To support our observations, we compare our final LSZ expression for N = 1 with a rigorous non-perturbative construction due to Pizzo. While our discussion contains some heuristic steps, they can be formulated as clear-cut mathematical conjectures.

  2. Non-perturbative QCD Effect on K-Factor of Drell-Yan Process

    International Nuclear Information System (INIS)

    Hou Zhaoyu; Zhi Haisu; Chen Junxiao

    2006-01-01

    By using a non-perturbative quark propagator with the lowest-dimensional condensate contributions from the QCD vacuum, the non-perturbative effect on the K-factor of the Drell-Yan process is numerically investigated for 12C-12C collisions at the center-of-mass energies √s = 200 GeV and 630 GeV. The calculated results show that the non-perturbative QCD effect has only a weak influence on the K-factor in both cases.

  3. PREFACE: Loops 11: Non-Perturbative / Background Independent Quantum Gravity

    Science.gov (United States)

    Mena Marugán, Guillermo A.; Barbero G, J. Fernando; Garay, Luis J.; Villaseñor, Eduardo J. S.; Olmedo, Javier

    2012-05-01

    Loops 11 The international conference LOOPS'11 took place in Madrid from 23-28 May 2011. It was hosted by the Instituto de Estructura de la Materia (IEM), which belongs to the Consejo Superior de Investigaciones Científicas (CSIC). Like previous editions of the LOOPS meetings, it dealt with a wealth of state-of-the-art topics in Quantum Gravity, with special emphasis on non-perturbative background-independent approaches to spacetime quantization. The main topics addressed at the conference ranged from the foundations of Quantum Gravity to its phenomenological aspects. They encompassed different approaches to Loop Quantum Gravity and Cosmology, Polymer Quantization, Quantum Field Theory, Black Holes, and discrete approaches such as Dynamical Triangulations, amongst others. In addition, this edition celebrated the 25th anniversary of the introduction of the now well-known Ashtekar variables, and the Wednesday morning session was devoted to this silver jubilee. The structure of the conference was designed to reflect the current state and future prospects of research on the different topics mentioned above. Plenary lectures that provided general background and the 'big picture' took place during the mornings, and the more specialised talks were distributed in parallel sessions during the evenings. To be more specific, Monday evening was devoted to Shape Dynamics and Phenomenology Derived from Quantum Gravity in Parallel Session A, and to Covariant Loop Quantum Gravity and Spin Foams in Parallel Session B. Tuesday's three Parallel Sessions dealt with Black Hole Physics and Dynamical Triangulations (Session A), the continuation of Monday's session on Covariant Loop Quantum Gravity and Spin Foams (Session B) and Foundations of Quantum Gravity (Session C). Finally, Thursday and Friday evenings were devoted to Loop Quantum Cosmology (Session A) and to Hamiltonian Loop Quantum Gravity (Session B). The result of the conference was very satisfactory and enlightening.

  4. The non-perturbative QCD Debye mass from a Wilson line operator

    CERN Document Server

    Laine, Mikko

    1999-01-01

    According to a proposal by Arnold and Yaffe, the non-perturbative g^2T-contribution to the Debye mass in the deconfined QCD plasma phase can be determined from a single Wilson line operator in the three-dimensional pure SU(3) gauge theory. We extend a previous SU(2) measurement of this quantity to the physical SU(3) case. We find a numerical coefficient which is more accurate and smaller than that obtained previously with another method, but still very large compared with the naive expectation: the correction is larger than the leading term up to T ~ 10^7 T_c, corresponding to g^2 ~ 0.4. At moderate temperatures T ~ 2 T_c, a consistent picture emerges where the Debye mass is m_D ~ 6T, the lightest gauge invariant screening mass in the system is ~ 3T, and the purely magnetic operators couple dominantly to a scale ~ 6T. Electric (~ gT) and magnetic (~ g^2T) scales are therefore strongly overlapping close to the phase transition, and the colour-electric fields play an essential role in the dynamics.

  5. Random surfaces: A non-perturbative regularization of strings?

    International Nuclear Information System (INIS)

    Ambjoern, J.

    1989-12-01

    I review the basic properties of the theory of random surfaces. While it is by now well known that the theory of (discretized) random surfaces correctly describes the (perturbative) aspects of non-critical strings in d < 1, in these lectures I intend to show that the theory of dynamically triangulated random surfaces provides us with a lot of information about the dynamics of both the bosonic string and the superstring even for d > 1. I also briefly review recent attempts to define a string field theory (sum over all genus) in this approach. (orig.)

  6. Infinite-degree-corrected stochastic block model

    DEFF Research Database (Denmark)

    Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten

    2014-01-01

    In stochastic block models, which are among the most prominent statistical models for cluster analysis of complex networks, clusters are defined as groups of nodes with statistically similar link probabilities within and between groups. A recent extension by Karrer and Newman [Karrer and Newman...... corrected stochastic block model as a nonparametric Bayesian model, incorporating a parameter to control the amount of degree correction that can then be inferred from data. Additionally, our formulation yields principled ways of inferring the number of groups as well as predicting missing links...
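The degree-corrected construction referred to above can be illustrated with a small generative sampler. A minimal sketch in the Karrer-Newman parametrization, assuming two groups, per-node degree propensities θ_i, and Poisson edge counts with mean θ_i θ_j ω_{g_i g_j} (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200
g = rng.integers(0, 2, size=n)                # group labels
theta = rng.gamma(2.0, 1.0, size=n)           # degree-correction propensities
for k in (0, 1):                              # common identifiability convention:
    theta[g == k] /= theta[g == k].sum()      # theta sums to 1 within each group
omega = np.array([[40.0, 4.0],                # expected edge counts between groups
                  [4.0, 40.0]])

# Poisson adjacency matrix: mean theta_i * theta_j * omega[g_i, g_j]
lam = theta[:, None] * theta[None, :] * omega[g][:, g]
A = rng.poisson(lam)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops
print("edges:", A.sum() // 2)
```

Without the θ factors this reduces to the ordinary stochastic block model; the degree correction lets nodes in the same group have very different expected degrees, which is the failure mode of the uncorrected model on real networks.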

  7. Classical Electron Model with QED Corrections

    OpenAIRE

    Lenk, Ron

    2010-01-01

    In this article we build a metric for a classical general relativistic electron model with QED corrections. We calculate the stress-energy tensor for the radiative corrections to the Coulomb potential in both the near-field and far-field approximations. We solve the three field equations in both cases by using a perturbative expansion to first order in alpha (the fine-structure constant) while insisting that the usual (+, +, -, -) structure of the stress-energy tensor is maintained. The resul...

  8. Turbulent mixing of a critical fluid: The non-perturbative renormalization

    Directory of Open Access Journals (Sweden)

    M. Hnatič

    2018-01-01

    The non-perturbative renormalization group (NPRG) technique is applied to a stochastic model of a non-conserved scalar order parameter near its critical point, subject to turbulent advection. The compressible advecting flow is modeled by a random Gaussian velocity field with zero mean and correlation function ⟨υ_j υ_i⟩ ∼ (P^⊥_ji + α P^∥_ji)/k^{d+ζ}. Depending on the relations between the parameters ζ, α and the space dimensionality d, the model reveals several types of scaling regimes. Some of them are well known (model A of equilibrium critical dynamics, and a linear passive scalar field advected by a random turbulent flow), but there is a new nonequilibrium regime (universality class) associated with new nontrivial fixed points of the renormalization-group equations. We have obtained the phase diagram (d, ζ) of possible scaling regimes in the system. The physical point d = 3, ζ = 4/3, corresponding to three-dimensional fully developed Kolmogorov turbulence, where critical fluctuations are irrelevant, is stable for α ≲ 2.26. Otherwise, in the case of "strong compressibility" α ≳ 2.26, the critical fluctuations of the order parameter become relevant for three-dimensional turbulence. Estimates of critical exponents for each scaling regime are presented.

  9. Bias-correction in vector autoregressive models

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    2014-01-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study......, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable...... improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find...
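The bootstrap bias-correction that the record compares against can be illustrated on a univariate AR(1), the simplest special case of a VAR. A minimal sketch with synthetic data (the analytical bias formulas discussed in the record are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

def ols_ar1(x):
    # OLS slope of x_t on x_{t-1} (no intercept, for simplicity)
    return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

def simulate(rho, T):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

# Bootstrap bias correction: estimate, re-simulate at the estimate,
# then subtract the average bootstrap bias from the point estimate.
T, rho_true = 60, 0.9
x = simulate(rho_true, T)
rho_hat = ols_ar1(x)
boot = np.array([ols_ar1(simulate(rho_hat, T)) for _ in range(500)])
rho_bc = rho_hat - (boot.mean() - rho_hat)   # bias-corrected estimate
print(f"OLS: {rho_hat:.3f}  bias-corrected: {rho_bc:.3f}")
```

Since the OLS estimate of a persistent autoregressive root is biased downward in small samples, the correction moves the estimate up, toward (and possibly past) the true value; the record's warning about pushing a stationary model into the non-stationary region corresponds to rho_bc landing at or above one.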

  10. A Blast Wave Model With Viscous Corrections

    Science.gov (United States)

    Yang, Z.; Fries, R. J.

    2017-04-01

    Hadronic observables in the final stage of heavy-ion collisions can be described well by fluid dynamics or blast-wave parameterizations. We improve existing blast-wave models by adding shear viscous corrections to the particle distributions in the Navier-Stokes approximation. The specific shear viscosity η/s of a hadron gas at the freeze-out temperature is a new parameter in this model. We extract the blast-wave parameters with viscous corrections from experimental data, which leads to constraints on the specific shear viscosity at kinetic freeze-out. Preliminary results show η/s is rather small.
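The ideal (non-viscous) part of such a blast-wave spectrum is commonly written in the Schnedermann-Sollfrank-Heinz form, dN/(m_T dm_T) ∝ ∫ r dr m_T I_0(p_T sinh ρ / T) K_1(m_T cosh ρ / T) with flow rapidity ρ = atanh(β_s (r/R)^n). A minimal sketch with illustrative parameter values (the viscous correction itself is not included):

```python
import numpy as np
from scipy.special import i0, k1   # modified Bessel functions I_0, K_1
from scipy.integrate import quad

# Illustrative (not fitted) blast-wave parameters for pions
T_kin = 0.12     # kinetic freeze-out temperature [GeV]
beta_s = 0.7     # surface transverse flow velocity
n_flow = 1.0     # flow profile exponent
m = 0.140        # pion mass [GeV]

def spectrum(pT):
    """Unnormalized dN/(m_T dm_T) at transverse momentum pT [GeV]."""
    mT = np.hypot(pT, m)
    def integrand(r):  # r is the scaled radius r/R in [0, 1]
        rho = np.arctanh(beta_s * r**n_flow)
        return r * mT * i0(pT * np.sinh(rho) / T_kin) * k1(mT * np.cosh(rho) / T_kin)
    return quad(integrand, 0.0, 1.0)[0]

pts = [0.2, 0.5, 1.0, 2.0]
vals = [spectrum(p) for p in pts]
print(vals)
```

Fitting parameters like T_kin and β_s to measured spectra is the extraction step the record describes; the viscous version adds a shear correction δf to the thermal distribution, bringing in η/s as an extra parameter.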

  11. A Blast Wave Model With Viscous Corrections

    International Nuclear Information System (INIS)

    Yang, Z; Fries, R J

    2017-01-01

    Hadronic observables in the final stage of heavy-ion collisions can be described well by fluid dynamics or blast-wave parameterizations. We improve existing blast-wave models by adding shear viscous corrections to the particle distributions in the Navier-Stokes approximation. The specific shear viscosity η/s of a hadron gas at the freeze-out temperature is a new parameter in this model. We extract the blast-wave parameters with viscous corrections from experimental data, which leads to constraints on the specific shear viscosity at kinetic freeze-out. Preliminary results show η/s is rather small. (paper)

  12. QCD non-perturbative study in radiative and pure-leptonic decays of Bc by wave function

    International Nuclear Information System (INIS)

    Guo Peng; Hou Zhaoyu; Zhi Haisu

    2012-01-01

    The radiative and pure-leptonic decays of B_c mesons suffer from hadronic uncertainties in theoretical calculations. Using three types of B_c meson wave functions which describe the non-perturbative QCD characteristics, and by varying the parameters in them, the uncertainties of B_c meson decays caused by the hadronic decay model are studied in detail. The theoretical results show that the branching ratios are (1.81981∼3.18961) × 10^-5, which are sensitive to the type of wave function. (authors)

  13. Non-perturbative embedding of local defects in crystalline materials

    International Nuclear Information System (INIS)

    Cances, Eric; Deleurence, Amelie; Lewin, Mathieu

    2008-01-01

    We present a new variational model for computing the electronic first-order density matrix of a crystalline material in the presence of a local defect. A natural way to obtain variational discretizations of this model is to expand the difference Q between the density matrix of the defective crystal and the density matrix of the perfect crystal, in a basis of precomputed maximally localized Wannier functions of the reference perfect crystal. This approach can be used within any semi-empirical or density functional theory framework

  14. R 2 inflation to probe non-perturbative quantum gravity

    Science.gov (United States)

    Koshelev, Alexey S.; Sravan Kumar, K.; Starobinsky, Alexei A.

    2018-03-01

    It is natural to expect a consistent inflationary model of the very early Universe to be an effective theory of quantum gravity, at least at energies much less than the Planck one. For the moment, R + R^2, or simply R^2, inflation is the most successful in accounting for the latest CMB data from the PLANCK satellite and other experiments. Moreover, recently it was shown to be ultraviolet (UV) complete via an embedding into an analytic infinite derivative (AID) non-local gravity. In this paper, we derive a most general theory of gravity that contributes to perturbed linear equations of motion around maximally symmetric space-times. We show that such a theory is quadratic in the Ricci scalar and the Weyl tensor with AID operators, along with the Einstein-Hilbert term and possibly a cosmological constant. We explicitly demonstrate that introduction of the Ricci tensor squared term is redundant. Working in this quadratic AID gravity framework without a cosmological term, we prove that for a specified class of space-homogeneous space-times, the space of solutions to the equations of motion is identical to the space of backgrounds in a local R^2 model. We further compute the full second-order perturbed action around any background belonging to that class. We proceed by extracting the key inflationary parameters of our model, such as the spectral index (n_s), the tensor-to-scalar ratio (r) and the tensor tilt (n_t). It appears that n_s remains the same as in local R^2 inflation in the leading slow-roll approximation, while r and n_t get modified due to the modification of the tensor power spectrum. This class of models allows for a range of values of r, making UV-complete R^2 gravity a natural target for future CMB probes.
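For orientation, the leading slow-roll predictions of the local R^2 (Starobinsky) model, which this class of AID models reproduces for n_s while modifying r and n_t, follow from the standard formulas n_s ≈ 1 - 2/N and r ≈ 12/N^2 in the number of e-folds N (the AID-modified r and n_t are not computed here):

```python
def r2_inflation(N):
    """Leading slow-roll predictions of local R^2 (Starobinsky) inflation."""
    n_s = 1.0 - 2.0 / N      # scalar spectral index
    r = 12.0 / N**2          # tensor-to-scalar ratio
    n_t = -r / 8.0           # single-field consistency relation
    return n_s, r, n_t

for N in (50, 60):
    n_s, r, n_t = r2_inflation(N)
    print(f"N={N}: n_s={n_s:.4f}, r={r:.4f}, n_t={n_t:.5f}")
```

For N = 55-60 this gives n_s ≈ 0.96-0.97 and r of a few times 10^-3, the baseline against which the modified tensor spectrum of the AID model is compared.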

  15. Non-Perturbative QCD Coupling and Beta Function from Light Front Holography

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins; de Teramond, Guy F.; /Costa Rica U.; Deur, Alexandre; /Jefferson Lab

    2010-05-26

    The light-front holographic mapping of classical gravity in AdS space, modified by a positive-sign dilaton background, leads to a non-perturbative effective coupling α_s^AdS(Q^2). It agrees with hadron physics data extracted from different observables, such as the effective charge defined by the Bjorken sum rule, as well as with the predictions of models with built-in confinement and lattice simulations. It also displays a transition from perturbative to nonperturbative conformal regimes at a momentum scale ∼ 1 GeV. The resulting β-function appears to capture the essential characteristics of the full β-function of QCD, thus giving further support to the application of the gauge/gravity duality to the confining dynamics of strongly coupled QCD. Commensurate scale relations relate observables to each other without scheme or scale ambiguity. In this paper we extrapolate these relations to the nonperturbative domain, thus extending the range of predictions based on α_s^AdS(Q^2).

  16. Non-perturbative effective potential: Lower bounds on the Higgs mass and dynamical applications

    International Nuclear Information System (INIS)

    Faivre, H.

    2006-01-01

    The purpose of this work was to assess the benefits of applying non-perturbative methods to phenomenological issues in field theory. Using the exact equations of the Wilson renormalization group (RG) and the effective action, we have computed the energy gap between the first two levels of a double-well potential. We get very good agreement with the exact values obtained from a numerical solution of the Schroedinger equation. The RG equations lead to a convex effective potential, consistent with theory. We have considered the Higgs sector of the standard model. It is commonly acknowledged that the Yukawa coupling between the top quark and the Higgs boson generates an instability of the electroweak vacuum at high energy. We show that this instability does not exist; it is a mere consequence of extrapolating the RG equations beyond their validity range. We have also used the effective potential to describe the time evolution of the mean value of the quantum field. We have defined the conditions under which the dynamics of the mean value can be described, in the local potential approximation, by classical equations of motion in which the effective potential replaces the classical potential. (A.C.)

  17. Non-Perturbative QCD Coupling and Beta Function from Light Front Holography

    International Nuclear Information System (INIS)

    Brodsky, Stanley J.

    2010-01-01

    The light-front holographic mapping of classical gravity in AdS space, modified by a positive-sign dilaton background, leads to a non-perturbative effective coupling α_s^AdS(Q^2). It agrees with hadron physics data extracted from different observables, such as the effective charge defined by the Bjorken sum rule, as well as with the predictions of models with built-in confinement and lattice simulations. It also displays a transition from perturbative to nonperturbative conformal regimes at a momentum scale ∼ 1 GeV. The resulting β-function appears to capture the essential characteristics of the full β-function of QCD, thus giving further support to the application of the gauge/gravity duality to the confining dynamics of strongly coupled QCD. Commensurate scale relations relate observables to each other without scheme or scale ambiguity. In this paper we extrapolate these relations to the nonperturbative domain, thus extending the range of predictions based on α_s^AdS(Q^2).

  18. Exact quantization conditions, toric Calabi-Yau and non-perturbative topological string

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Kaiwen [Department of Mathematics, University of Science and Technology of China,96 Jinzhai Road, Hefei, Anhui 230026 (China); Wang, Xin; Huang, Min-xin [Interdisciplinary Center for Theoretical Study,Department of Modern Physics, University of Science and Technology of China,96 Jinzhai Road, Hefei, Anhui 230026 (China)

    2017-01-16

    We establish the precise relation between the Nekrasov-Shatashvili (NS) quantization scheme and the Grassi-Hatsuda-Mariño conjecture for the mirror curve of an arbitrary toric Calabi-Yau threefold. For a mirror curve of genus g, the NS quantization scheme leads to g quantization conditions for the corresponding integrable system. The exact NS quantization conditions enjoy a self S-duality with respect to the Planck constant ℏ and can be derived from the Lockhart-Vafa partition function of non-perturbative topological string. Based on a recent observation on the correspondence between spectral theory and topological string, another quantization scheme was proposed by Grassi-Hatsuda-Mariño, in which there is a single quantization condition and the spectra are encoded in the vanishing of a quantum Riemann theta function. We demonstrate that there actually exist at least g nonequivalent quantum Riemann theta functions and the intersections of their theta divisors coincide with the spectra determined by the exact NS quantization conditions. This highly nontrivial coincidence between the two quantization schemes requires infinite constraints among the refined Gopakumar-Vafa invariants. The equivalence for mirror curves of genus one has been verified for some local del Pezzo surfaces. In this paper, we generalize the correspondence to higher genus, and analyze in detail the resolved ℂ^3/ℤ_5 orbifold and several SU(N) geometries. We also give a proof for some models at ℏ = 2π/k.

  19. On non-perturbative effects of background fields

    International Nuclear Information System (INIS)

    Hosoda, Masataka; Yamakoshi, Hitoshi; Shimizu, Tadayoshi.

    1986-01-01

    The APS-index of the Abelian Higgs model is first obtained in a bounded domain of a disk with radius R. It is shown that the APS-index depends strongly on the behavior of the background fields and becomes an integer when boundary effects are taken into account. Next, the electric charge of the vacuum is reconsidered in the monopole field coupled to a massive Dirac particle. It is reconfirmed that the monopole ground state has an electric charge θ/π which changes discontinuously to zero when the fermion mass is zero. (author)

  20. Non-perturbative renormalization of static-light four-fermion operators in quenched lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Palombi, F. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Papinutto, M.; Pena, C. [CERN, Geneva (Switzerland). Physics Dept., Theory Div.; Wittig, H. [Mainz Univ. (Germany). Inst. fuer Kernphysik

    2007-06-15

    We perform a non-perturbative study of the scale-dependent renormalization factors of a multiplicatively renormalizable basis of {delta}B=2 parity-odd four-fermion operators in quenched lattice QCD. Heavy quarks are treated in the static approximation with various lattice discretizations of the static action. Light quarks are described by nonperturbatively O(a) improved Wilson-type fermions. The renormalization group running is computed for a family of Schroedinger functional (SF) schemes through finite volume techniques in the continuum limit. We compute non-perturbatively the relation between the renormalization group invariant operators and their counterparts renormalized in the SF at a low energy scale. Furthermore, we provide non-perturbative estimates for the matching between the lattice regularized theory and all the SF schemes considered. (orig.)

  1. Conformal bootstrap: non-perturbative QFT's under siege

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    [Exceptionally in Council Chamber] Originally formulated in the 70's, the conformal bootstrap is the ambitious idea that one can use internal consistency conditions to carve out, and eventually solve, the space of conformal field theories. In this talk I will review recent developments in the field which have boosted this program to a new level. I will present a method to extract quantitative information in strongly-interacting theories, such as the 3D Ising model, the O(N) vector model and even systems without a Lagrangian formulation. I will explain how these techniques have led to the world record determination of several critical exponents. Finally, I will review exact analytical results obtained using bootstrap techniques.

  2. Non-perturbative transitions among intersecting-brane vacua

    CERN Document Server

    Angelantonj, Carlo; Dudas, Emilian; Pradisi, Gianfranco; 10.1007/JHEP07(2011)123

    2011-01-01

    We investigate the transmutation of D-branes into Abelian magnetic backgrounds on the world-volume of higher-dimensional branes, within the framework of global models with compact internal dimensions. The phenomenon, T-dual to brane recombination in the intersecting-brane picture, shares some similarities with inverse small-instanton transitions in non-compact spaces, though in this case the Abelian magnetic background is a consequence of the compactness of the internal manifold, and is not ascribed to a zero-size non-Abelian instanton growing to maximal size. We provide details of the transition in various supersymmetric orientifolds and non-supersymmetric tachyon-free vacua with Brane Supersymmetry Breaking, both from brane recombination and from a field theory Higgs mechanism viewpoint.

  3. Non-perturbative improvement of stout-smeared three flavour clover fermions

    Energy Technology Data Exchange (ETDEWEB)

    Cundy, N.; Goeckeler, M. [Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Horsley, R. [Edinburgh Univ. (GB). School of Physics and Astronomy] (and others)

    2009-01-15

    We discuss a 3-flavour lattice QCD action with clover improvement in which the fermion matrix has single level stout smearing for the hopping terms together with unsmeared links for the clover term. With the (tree-level) Symanzik improved gluon action this constitutes the Stout Link Non-perturbative Clover or SLiNC action. To cancel O(a) terms the clover term coefficient has to be tuned. We present here results of a non-perturbative determination of this coefficient using the Schroedinger functional and as a by-product a determination of the critical hopping parameter. Comparisons of the results are made with lowest order perturbation theory. (orig.)

  4. Controlling quark mass determinations non-perturbatively in three-flavour QCD

    CERN Document Server

    Campos, Isabel

    2017-01-01

    The determination of quark masses from lattice QCD simulations requires a non-perturbative renormalization procedure and subsequent scale evolution to high energies, where a conversion to the commonly used MS-bar scheme can be safely established. We present our results for the non-perturbative running of renormalized quark masses in Nf=3 QCD between the electroweak and a hadronic energy scale, where lattice simulations are at our disposal. Recent theoretical advances in combination with well-established techniques allow us to follow the scale evolution with very high statistical accuracy and full control of systematic effects.

  5. Non-perturbative gravity at different length scales

    International Nuclear Information System (INIS)

    Folkerts, Sarah

    2013-01-01

    In this thesis, we investigate different aspects of gravity as an effective field theory. Building on the arguments of self-completeness of Einstein gravity, we argue that any sensible theory, which does not propagate negative-norm states and reduces to General Relativity in the low energy limit is self-complete. Due to black hole formation in high energy scattering experiments, distances smaller than the Planck scale are shielded from any accessibility. Degrees of freedom with masses larger than the Planck mass are mapped to large classical black holes which are described by the already existing infrared theory. Since high energy (UV) modifications of gravity which are ghost-free can only produce stronger gravitational interactions than Einstein gravity, the black hole shielding is even more efficient in such theories. In this light, we argue that conventional attempts of a Wilsonian UV completion are severely constrained. Furthermore, we investigate the quantum picture for black holes which emerges in the low energy description put forward by Dvali and Gomez in which black holes are described as Bose-Einstein condensates of many weakly coupled gravitons. Specifically, we investigate a non-relativistic toy model which mimics certain aspects of the graviton condensate picture. This toy model describes the collapse of a condensate of attractive bosons which emits particles due to incoherent scattering. We show that it is possible that the evolution of the condensate follows the critical point which is accompanied by the appearance of a light mode. Another aspect of gravitational interactions concerns the question whether quantum gravity breaks global symmetries. Arguments relying on the no hair theorem and wormhole solutions suggest that global symmetries can be violated. In this thesis, we parametrize such effects in terms of an effective field theory description of three-form fields. 
We investigate the possible implications for the axion solution of the strong CP problem.

  6. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
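
    The gap between a coherent rotation and its stochastic (Pauli) approximation is visible already on a single qubit. A minimal numpy sketch (illustrative only, not the paper's calculation; the angle ε is an arbitrary choice):

```python
import numpy as np

# Hypothetical illustration: a coherent X-rotation error on one qubit
# versus its Pauli (stochastic bit-flip) approximation.
eps = 0.05  # coherent rotation angle (illustrative)

X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)
U = np.cos(eps / 2) * I - 1j * np.sin(eps / 2) * X  # exp(-i eps X / 2)

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|

# Coherent channel: rho -> U rho U^dagger
rho_coh = U @ rho @ U.conj().T

# Pauli-twirled channel: flip with probability p = sin^2(eps/2)
p = np.sin(eps / 2) ** 2
rho_pauli = (1 - p) * rho + p * (X @ rho @ X)

# Both channels give the same bit-flip probability ...
print(rho_coh[1, 1].real, rho_pauli[1, 1].real)
# ... but the coherent error also leaves off-diagonal terms (of order eps)
# that the Pauli model sets to zero, which is what accumulates over cycles.
print(abs(rho_coh[0, 1]), abs(rho_pauli[0, 1]))
```

    Under repeated application the off-diagonal (coherent) part adds up in amplitude rather than probability, which is why the Pauli prediction eventually underestimates the logical failure rate.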

  7. Non-perturbative investigation of current correlators in twisted mass lattice QCD

    International Nuclear Information System (INIS)

    Petschlies, Marcus

    2013-01-01

    We present an investigation of hadronic current-current correlators based on the first principles of Quantum Chromodynamics. Specifically we apply the non-perturbative methods of twisted mass lattice QCD with dynamical up and down quarks, taking advantage of its automatic O(a) improvement. As a special application we discuss the calculation of the hadronic leading order contribution to the muon anomalous magnetic moment. The latter is regarded as a promising quantity for the search for physics beyond the standard model. The origin of the strong interest in the muon anomaly lies in the persistent discrepancy between the standard model estimate and its experimental measurement. In the theoretical determination the hadronic leading order part is currently afflicted with the largest uncertainty, and a dedicated lattice investigation of it can be of strong impact on future estimates. We discuss our study of all systematic uncertainties in the lattice calculation, including three lattice volumes, two lattice spacings, pion masses from 650 MeV to 290 MeV and the quark-disconnected contribution. We present a new method for the extrapolation to the physical point that softens the pion mass dependence of a_μ^hlo and allows for a linear extrapolation with small statistical uncertainty at the physical point. We determine the contribution of up and down quarks as a_μ^hlo(N_f=2)=5.69(15)×10^-8. The methods used for the muon are extended to the electron and tau lepton and we find a_e^hlo(N_f=2)=1.512(43)×10^-12 and a_τ^hlo(N_f=2)=2.635(54)×10^-6. We estimate the charm contribution to a_μ^hlo in partially quenched tmLQCD with the result a_μ^hlo(charm)=1.447(24)(30)×10^-9, in very good agreement with a dispersion-relation based result using experimental data for the hadronic R-ratio.

  8. Gauge threshold corrections for local string models

    International Nuclear Information System (INIS)

    Conlon, Joseph P.

    2009-01-01

    We study gauge threshold corrections for local brane models embedded in a large compact space. A large bulk volume gives important contributions to the Konishi and super-Weyl anomalies and the effective field theory analysis implies the unification scale should be enhanced in a model-independent way from M s to RM s . For local D3/D3 models this result is supported by the explicit string computations. In this case the scale RM s comes from the necessity of global cancellation of RR tadpoles sourced by the local model. We also study D3/D7 models and discuss discrepancies with the effective field theory analysis. We comment on phenomenological implications for gauge coupling unification and for the GUT scale.

  9. A Non-Perturbative Treatment of Quantum Impurity Problems in Real Lattices

    Science.gov (United States)

    Allerdt, Andrew C.

    Historically, the RKKY, or indirect exchange, interaction has been accepted as being describable by second order perturbation theory, and a universal expression is usually given in this context. This approach, however, fails to incorporate many-body effects, quantum fluctuations, and other important details. In Chapter 2, a novel numerical approach is developed to tackle these problems in a quasi-exact, non-perturbative manner. Behind the method lies the main concept of exactly mapping an n-dimensional lattice problem onto a 1-dimensional chain. The density matrix renormalization group algorithm is then employed to solve the newly cast Hamiltonian. In the following chapters, it is demonstrated that conventional RKKY theory does not capture the crucial physics. It is found that the Kondo effect, i.e. the screening of an impurity spin, tends to dominate over a ferromagnetic interaction between impurity spins. Furthermore, it is found that the indirect exchange interaction does not decay algebraically. Instead, there is a crossover upon increasing J_K, where impurities favor forming their own independent Kondo states after just a few lattice spacings. This is not a trivial result, as one may naively expect impurities to interact when their conventional Kondo clouds overlap. The spin structure around impurities coupled to the edge of a 2D topological insulator is investigated in Chapter 7. Modeled after materials such as silicene, germanene, and stanene, it is shown with spatial resolution of the lattice that the specific impurity placement plays a key role. Effects of spin-orbit interactions are also discussed. Finally, in the last chapter, transition metal complexes are studied. This really shows the power and versatility of the method developed throughout the work. The spin states of an iron atom in the molecule FeN4C10 are calculated and compared to DFT, showing the importance of inter-orbital Coulomb interactions. Using dynamical DMRG, the
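
    The lattice-to-chain mapping referred to above is, in essence, a Lanczos tridiagonalization: seeding the recursion on the orbital the impurity couples to turns the full lattice Hamiltonian into a 1D chain with on-site energies a_n and hoppings b_n. A toy numpy sketch with assumed parameters (a 6×6 square lattice with hopping t = 1, not the thesis code):

```python
import numpy as np

# Build a 2D nearest-neighbor tight-binding Hamiltonian (open boundaries).
L, t = 6, 1.0
N = L * L
H = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        i = x * L + y
        if x + 1 < L:
            H[i, (x + 1) * L + y] = H[(x + 1) * L + y, i] = -t
        if y + 1 < L:
            H[i, x * L + y + 1] = H[x * L + y + 1, i] = -t

# Lanczos recursion seeded on the site the impurity would couple to.
v = np.zeros(N); v[(L // 2) * L + L // 2] = 1.0
a, b = [], []
v_prev = np.zeros(N)
for n in range(10):
    w = H @ v
    a.append(v @ w)                          # on-site energy of chain site n
    w -= a[-1] * v + (b[-1] * v_prev if b else 0)
    b.append(np.linalg.norm(w))              # hopping to chain site n+1
    v_prev, v = v, w / b[-1]

print(np.round(a, 6))  # all zero here (bipartite lattice, zero on-site terms)
print(np.round(b, 3))  # effective 1D chain hoppings
```

    The resulting tridiagonal (chain) Hamiltonian is exactly equivalent to the lattice one within the Krylov space of the seed orbital, and is in the geometry DMRG handles best.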

  10. AdS/QCD, Light-Front Holography, and the Non-perturbative Running Coupling

    Energy Technology Data Exchange (ETDEWEB)

    Brodsky, Stanley J.; /SLAC; de Teramond, Guy; /Costa Rica U.; Deur, Alexandre; /Jefferson Lab

    2010-04-29

    The combination of Anti-de Sitter space (AdS) methods with light-front (LF) holography provides a remarkably accurate first approximation for the spectra and wavefunctions of meson and baryon light-quark bound states. The resulting bound-state Hamiltonian equation of motion in QCD leads to relativistic light-front wave equations in terms of an invariant impact variable {zeta} which measures the separation of the quark and gluonic constituents within the hadron at equal light-front time. These equations of motion in physical space-time are equivalent to the equations of motion which describe the propagation of spin-J modes in anti-de Sitter (AdS) space. The eigenvalues give the hadronic spectrum, and the eigenmodes represent the probability distributions of the hadronic constituents at a given scale. A positive-sign confining dilaton background modifying AdS space gives a very good account of meson and baryon spectroscopy and form factors. The light-front holographic mapping of this model also leads to a non-perturbative effective coupling {alpha}{sub s}{sup AdS} (Q{sup 2}) which agrees with the effective charge defined by the Bjorken sum rule and lattice simulations. It displays a transition from perturbative to nonperturbative conformal regimes at a momentum scale {approx} 1 GeV. The resulting {beta}-function appears to capture the essential characteristics of the full {beta}-function of QCD, thus giving further support to the application of the gauge/gravity duality to the confining dynamics of strongly coupled QCD.
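
    In the light-front holography literature the coupling described above takes a simple Gaussian form, α_s^AdS(Q²) = α_s(0)·exp(−Q²/4κ²), with κ set by the dilaton profile. A small sketch under that assumption (the values of α_s(0) and κ below are illustrative placeholders, not fitted numbers):

```python
import math

def alpha_ads(Q2, alpha0=1.0, kappa=0.54):
    """alpha_s^AdS(Q^2) = alpha(0) * exp(-Q^2 / (4 kappa^2)), kappa in GeV.

    Gaussian form assumed from the light-front holography literature;
    alpha0 and kappa are illustrative, not fitted, values."""
    return alpha0 * math.exp(-Q2 / (4.0 * kappa ** 2))

def beta_ads(Q2, alpha0=1.0, kappa=0.54):
    """beta(Q^2) = d alpha / d ln Q^2 = -(Q^2 / (4 kappa^2)) * alpha(Q^2)."""
    return -(Q2 / (4.0 * kappa ** 2)) * alpha_ads(Q2, alpha0, kappa)

# The coupling freezes in the infrared and falls off near Q ~ 1 GeV,
# which is where the perturbative-to-nonperturbative transition sits.
for Q in (0.1, 0.5, 1.0, 2.0):  # GeV
    print(Q, round(alpha_ads(Q * Q), 4), round(beta_ads(Q * Q), 4))
```

    The β-function is negative everywhere and has a single extremum near Q ≈ κ·√2, qualitatively matching the transition scale quoted in the abstract.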

  11. The coherence lifetime-borrowing effect in vibronically coupled molecular aggregates under non-perturbative system-environment interactions.

    Science.gov (United States)

    Yeh, Shu-Hao; Engel, Gregory S.; Kais, Sabre

    Recently it has been suggested that the long-lived coherences in some photosynthetic pigment-protein systems, such as the Fenna-Matthews-Olson complex, could be attributed to the mixing of the pigments' electronic and vibrational degrees of freedom. In order to verify whether this is the case and to understand its underlying mechanism, a theoretical model capable of including both the electronic excitations and intramolecular vibrational modes of the pigments is necessary. Our model simultaneously considers the electronic and vibrational degrees of freedom, treating the system-environment interactions non-perturbatively by implementing the hierarchical equations of motion approach. Here we report the simulated two-dimensional electronic spectra of vibronically coupled molecular dimers to demonstrate how the electronic coherence lifetimes can be extended by borrowing the lifetime from the vibrational coherences. Funded by Qatar National Research Fund and Qatar Environment and Energy Research Institute.

  12. Non-perturbative Heavy-Flavor Transport at RHIC and LHC

    Energy Technology Data Exchange (ETDEWEB)

    He, Min, E-mail: mhe@comp.tamu.edu; Fries, Rainer J.; Rapp, Ralf

    2013-08-15

    We calculate open heavy-flavor (HF) transport in relativistic heavy-ion collisions by applying a strong-coupling treatment in both macro- and microscopic dynamics (hydrodynamics and non-perturbative diffusion interactions). The hydrodynamic medium evolution is quantitatively constrained by bulk and multi-strange hadron spectra and elliptic flow. The heavy quark transport coefficient is evaluated from a non-perturbative T-matrix approach in the Quark–Gluon Plasma which, close to the critical temperature, leads to resonance formation and feeds into the recombination of heavy quarks on a hydrodynamic hypersurface. In the hadronic phase, the diffusion of HF mesons is obtained from effective hadronic theory. We compute observables at RHIC and LHC for non-photonic electrons and HF mesons, respectively.

  13. Non-perturbative BRST quantization of Euclidean Yang-Mills theories in Curci-Ferrari gauges

    Science.gov (United States)

    Pereira, A. D.; Sobreiro, R. F.; Sorella, S. P.

    2016-10-01

    In this paper we address the issue of the non-perturbative quantization of Euclidean Yang-Mills theories in the Curci-Ferrari gauge. In particular, we construct a refined Gribov-Zwanziger action for this gauge, which takes into account the presence of gauge copies as well as the dynamical formation of dimension-two condensates. This action enjoys a non-perturbative BRST symmetry recently proposed in Capri et al. (Phys. Rev. D 92(4), 045039. doi: 10.1103/PhysRevD.92.045039 arXiv:1506.06995 [hep-th], 2015). Finally, we pay attention to the gluon propagator in different space-time dimensions.

  14. Non-perturbative BRST quantization of Euclidean Yang-Mills theories in Curci-Ferrari gauges

    International Nuclear Information System (INIS)

    Pereira, A.D.; Sobreiro, R.F.; Sorella, S.P.

    2016-01-01

    In this paper we address the issue of the non-perturbative quantization of Euclidean Yang-Mills theories in the Curci-Ferrari gauge. In particular, we construct a refined Gribov-Zwanziger action for this gauge, which takes into account the presence of gauge copies as well as the dynamical formation of dimension-two condensates. This action enjoys a non-perturbative BRST symmetry recently proposed in Capri et al. (Phys. Rev. D 92(4), 045039. doi:10.1103/PhysRevD.92.045039. arXiv:1506.06995 [hep-th], 2015). Finally, we pay attention to the gluon propagator in different space-time dimensions. (orig.)

  15. HQET at order 1/m. Pt. 1. Non-perturbative parameters in the quenched approximation

    Energy Technology Data Exchange (ETDEWEB)

    Blossier, Benoit [Paris XI Univ., 91 - Orsay (France). Lab. de Physique Theorique; Della Morte, Michele [Mainz Univ. (Germany). Inst. fuer Kernphysik; Garron, Nicolas [Universidad Autonoma de Madrid (Spain). Dept. Fisica Teorica y Inst. de Fisica Teorica UAM/CSIC; Edinburgh Univ. (United Kingdom). School of Physics and Astronomy - SUPA; Sommer, Rainer [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2010-01-15

    We determine non-perturbatively the parameters of the lattice HQET Lagrangian and those of heavy-light axial-vector and vector currents in the quenched approximation. The HQET expansion includes terms of order 1/m{sub b}. Our results allow one to compute, for example, the heavy-light spectrum and B-meson decay constants in the static approximation and to order 1/m{sub b} in HQET. The determination of the parameters is separated into universal and non-universal parts. The universal results can be used to determine the parameters for various discretizations. The computation reported in this paper uses the plaquette gauge action and the ''HYP1/2'' action for the b-quark described by HQET. The parameters of the currents also depend on the light-quark action, for which we choose non-perturbatively O(a)-improved Wilson fermions. (orig.)

  16. HQET at order 1/m. Pt. 1. Non-perturbative parameters in the quenched approximation

    International Nuclear Information System (INIS)

    Blossier, Benoit; Della Morte, Michele; Garron, Nicolas; Edinburgh Univ.; Sommer, Rainer

    2010-01-01

    We determine non-perturbatively the parameters of the lattice HQET Lagrangian and those of heavy-light axial-vector and vector currents in the quenched approximation. The HQET expansion includes terms of order 1/m_b. Our results allow one to compute, for example, the heavy-light spectrum and B-meson decay constants in the static approximation and to order 1/m_b in HQET. The determination of the parameters is separated into universal and non-universal parts. The universal results can be used to determine the parameters for various discretizations. The computation reported in this paper uses the plaquette gauge action and the ''HYP1/2'' action for the b-quark described by HQET. The parameters of the currents also depend on the light-quark action, for which we choose non-perturbatively O(a)-improved Wilson fermions. (orig.)

  17. Non-perturbative BRST quantization of Euclidean Yang-Mills theories in Curci-Ferrari gauges

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, A.D. [UFF, Universidade Federal Fluminense, Instituto de Fisica, Campus da Praia Vermelha, Niteroi, RJ (Brazil); Max Planck Institute for Gravitational Physics, Albert Einstein Institute, Potsdam (Germany); UERJ, Universidade do Estado do Rio de Janeiro, Departamento de Fisica Teorica, Rio de Janeiro (Brazil); Sobreiro, R.F. [UFF, Universidade Federal Fluminense, Instituto de Fisica, Campus da Praia Vermelha, Niteroi, RJ (Brazil); Sorella, S.P. [UERJ, Universidade do Estado do Rio de Janeiro, Departamento de Fisica Teorica, Rio de Janeiro (Brazil)

    2016-10-15

    In this paper we address the issue of the non-perturbative quantization of Euclidean Yang-Mills theories in the Curci-Ferrari gauge. In particular, we construct a refined Gribov-Zwanziger action for this gauge, which takes into account the presence of gauge copies as well as the dynamical formation of dimension-two condensates. This action enjoys a non-perturbative BRST symmetry recently proposed in Capri et al. (Phys. Rev. D 92(4), 045039. doi:10.1103/PhysRevD.92.045039. arXiv:1506.06995 [hep-th], 2015). Finally, we pay attention to the gluon propagator in different space-time dimensions. (orig.)

  18. Four-fluxes and non-perturbative superpotentials in two dimensions

    International Nuclear Information System (INIS)

    Lerche, W.

    1998-01-01

    We show how certain non-perturbative superpotentials W(Σ), which are the two-dimensional analogs of the Seiberg-Witten prepotential in 4d, can be computed via geometric engineering from 4-folds. We analyze an explicit example for which the relevant compact geometry of the 4-fold is given by P 1 fibered over P 2 . In the field theory limit, this gives an effective U(1) gauge theory with N=(2,2) supersymmetry in two dimensions. We find that the analog of the SW curve is a K3 surface, and that the complex FI coupling is given by the modular parameter of this surface. The FI potential itself coincides with the middle period of a meromorphic differential. However, it only shows up in the effective action if a certain 4-flux is switched on, and then supersymmetry appears to be non-perturbatively broken. (orig.)

  19. Simple liquid models with corrected dielectric constants

    Science.gov (United States)

    Fennell, Christopher J.; Li, Libo; Dill, Ken A.

    2012-01-01

    Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577
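
    Fitting a model's dielectric constant typically means estimating it from total-dipole fluctuations of a simulation trajectory via the standard fluctuation formula ε_r = 1 + (⟨M²⟩ − ⟨M⟩²)/(3 ε₀ V k_B T). A sketch of that estimator with synthetic dipole samples standing in for a real trajectory (box size and dipole scale are illustrative assumptions, not the paper's data):

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
T = 298.15               # K
V = (2.5e-9) ** 3        # m^3, illustrative simulation-box volume

# Synthetic total-dipole samples M (in C*m), one row per trajectory frame.
rng = np.random.default_rng(0)
M = rng.normal(0.0, 2.0e-28, size=(10000, 3))

# Fluctuation formula: eps_r = 1 + (<|M|^2> - |<M>|^2) / (3 eps0 V kB T)
fluct = (M ** 2).sum(axis=1).mean() - np.square(M.mean(axis=0)).sum()
eps_r = 1.0 + fluct / (3.0 * eps0 * V * kB * T)
print(round(eps_r, 1))
```

    In a real workflow the samples M come from the simulation itself, and the model's charges are adjusted until eps_r matches the experimental permittivity of the neat liquid.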

  20. Non-perturbative analytical solutions of the space- and time-fractional Burgers equations

    International Nuclear Information System (INIS)

    Momani, Shaher

    2006-01-01

    Non-perturbative analytical solutions for the generalized Burgers equation with time- and space-fractional derivatives of order α and β, 0 < α, β ≤ 1, are derived using the Adomian decomposition method. The fractional derivatives are considered in the Caputo sense. The solutions are given in the form of series with easily computable terms. Numerical solutions are calculated for the fractional Burgers equation to show the nature of the solution as the fractional derivative parameters are changed.
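
    The decomposition idea behind these series solutions can be illustrated on a much simpler nonlinear problem than the fractional Burgers equation: u' = u², u(0) = 1, whose exact solution is 1/(1 − t). The recursion u_{n+1} = ∫₀ᵗ A_n dt', with A_n the Adomian polynomials of the nonlinearity N(u) = u², rebuilds the Taylor series term by term. A sympy sketch (illustrative, not the paper's computation):

```python
import sympy as sp

t, lam = sp.symbols('t lam')

u = [sp.Integer(1)]  # u_0 = initial condition u(0) = 1
for n in range(5):
    # Adomian polynomial A_n for N(u) = u^2:
    # A_n = (1/n!) d^n/dlam^n [ (sum_k lam^k u_k)^2 ] at lam = 0
    series_u = sum(lam**k * u[k] for k in range(n + 1))
    A_n = sp.diff(series_u**2, lam, n).subs(lam, 0) / sp.factorial(n)
    u.append(sp.integrate(A_n, (t, 0, t)))  # u_{n+1} = \int_0^t A_n dt'

approx = sp.expand(sum(u))
print(approx)  # 1 + t + t**2 + t**3 + t**4 + t**5, the series of 1/(1 - t)
```

    In the fractional case the same recursion applies with the integral replaced by the Riemann-Liouville integral operator inverse to the Caputo derivative, which is what produces the fractional-power series of the abstract.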

  1. Comments on exact quantization conditions and non-perturbative topological strings

    International Nuclear Information System (INIS)

    Hatsuda, Yasuyuki

    2015-12-01

    We give some remarks on exact quantization conditions associated with quantized mirror curves of local Calabi-Yau threefolds, conjectured in arXiv:1410.3382. It is shown that they characterize a non-perturbative completion of the refined topological strings in the Nekrasov-Shatashvili limit. We find that the quantization conditions enjoy an exact S-dual invariance. We also discuss Borel summability of the semi-classical spectrum.

  2. Large x Behaviour and the Non-Perturbative Structure of Hadronic Systems

    Energy Technology Data Exchange (ETDEWEB)

    Anthony W. Thomas

    2005-02-01

    While the traditional interest in structure functions has been the confirmation of the predictions of perturbative QCD, this data also contains a wealth of information on how QCD works in the infrared, or confinement, region. As the challenge of the strong force now turns to the study of QCD in the non-perturbative region, such information is extremely valuable. We outline some of the key issues for both nucleon and nuclear structure functions.

  3. Non-perturbative renormalisation of left-left four-fermion operators with Neuberger fermions

    International Nuclear Information System (INIS)

    Dimopoulos, P.; Giusti, L.; Hernandez, P.; Palombi, F.; Pena, C.; Vladikas, A.; Wennekers, J.; Wittig, H.

    2006-01-01

    We outline a general strategy for the non-perturbative renormalisation of composite operators in discretisations based on Neuberger fermions, via a matching to results obtained with Wilson-type fermions. As an application, we consider the renormalisation of the four-quark operators entering the ΔS=1 and ΔS=2 effective Hamiltonians. Our results are an essential ingredient for the determination of the low-energy constants governing non-leptonic kaon decays

  4. A non-perturbative approach to the Coleman-Weinberg mechanism in massless scalar QED

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Nogueira, F.S.; Svaiter, N.F.

    1995-08-01

    We rederive non-perturbatively the Coleman-Weinberg expression for the effective potential of massless scalar QED. Our result is not restricted to small values of the coupling constants. This shows that the Coleman-Weinberg result can be established beyond the range of perturbation theory. Also, we derive it in a manifestly renormalization group invariant way. It is shown that with this derivation no Landau ghost singularity arises. The finite temperature case is discussed. (author). 13 refs

  5. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.][This corrects the article DOI: 10.1371/journal.ppat.1005740.][This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  6. Non-perturbative phenomena in QCD vacuum, hadrons, and quark-gluon plasma

    International Nuclear Information System (INIS)

    Shuryak, E.V.

    1983-01-01

    These lectures provide a brief review of recent progress in non-perturbative quantum chromodynamics (QCD). They are intended for non-specialists, mainly experimentalists. The main object of discussion, the QCD vacuum, is a rather complicated medium. It may be studied either by infinitesimal probes producing microscopic excitations (=hadrons), or by finite excitations (say, heating some volume to a given temperature T). In the latter case, some qualitative changes (phase transitions) should take place. A summary is given of the extent to which such phenomena can be observed in the laboratory in proton-proton, proton-nucleus, and nucleus-nucleus collisions. (orig.)

  7. Non-perturbative computation of the strong coupling constant on the lattice

    International Nuclear Information System (INIS)

    Sommer, Rainer; Humboldt-Universitaet, Berlin; Wolff, Ulli

    2015-01-01

    We review the long-term project of the ALPHA collaboration to compute in QCD the running coupling constant and quark masses at high energy scales in terms of low energy hadronic quantities. The techniques required to carry out this multiscale non-perturbative calculation numerically are summarized, with special emphasis on the control of systematic errors. The complete results in the two dynamical flavor approximation are reviewed and an outlook is given on the ongoing three flavor extension of the programme with improved target precision.

  8. Super Toeplitz operators and non-perturbative deformation quantization of supermanifolds

    Energy Technology Data Exchange (ETDEWEB)

    Borthwick, D.; Lesniewski, A.; Rinaldi, M. (Harvard Univ., Cambridge, MA (United States). Lyman Lab. of Physics); Klimek, S. (IUPUI, Indianapolis, IN (United States). Dept. of Mathematics)

    1993-04-01

    The purpose of this paper is to construct non-perturbative deformation quantizations of the algebras of smooth functions on Poisson supermanifolds. For the examples U^{1|1} and C^{m|n}, algebras of super Toeplitz operators are defined with respect to certain Hilbert spaces of superholomorphic functions. Generators and relations for these algebras are given. The algebras can be thought of as algebras of 'quantized functions', and deformation conditions are proven which demonstrate the recovery of the super Poisson structures in a semi-classical limit. (orig.).

  9. Coulomb versus nuclear break-up of 11Be halo nucleus in a non perturbative framework

    International Nuclear Information System (INIS)

    Fallot, M.; Scarpaci, J.A.; Margueron, J.; Lacroix, D.; Chomaz, Ph.

    2000-01-01

    The 11Be break-up is calculated using a non-perturbative time-dependent quantum calculation. The evolution of the neutron halo wave function shows emission of neutrons at large angles for grazing impact parameters and at forward angles for large impact parameters. The neutron angular distribution is deduced for the different targets and compared to experimental data. We emphasize the diversity of diffraction mechanisms; in particular we discuss the interplay of nuclear effects, such as the towing mode, with the Coulomb break-up. Good agreement is found with experimental data. (authors)

  10. Analyzing B_s - anti-B_s mixing. Non-perturbative contributions to bag parameters from sum rules

    International Nuclear Information System (INIS)

    Mannel, T.; Pivovarov, A.A.; Russian Academy of Sciences, Moscow

    2007-03-01

    We use QCD sum rules to compute matrix elements of the ΔB=2 operators appearing in the heavy-quark expansion of the width difference of the B_s mass eigenstates. Our analysis includes the leading-order operators Q and Q_S, as well as the subleading operators R_2 and R_3, which appear at next-to-leading order in the 1/m_b expansion. We conclude that the violation of the factorization approximation for these matrix elements due to non-perturbative vacuum condensates is as low as 1-2%. (orig.)

  11. Orbital classical solutions, non-perturbative phenomena and singularity at the zero coupling constant point

    International Nuclear Information System (INIS)

    Vourdas, A.

    1982-01-01

    We try to extend previous arguments on orbital classical solutions in non-relativistic quantum mechanics to the (1/4)λ|φ|⁴ complex relativistic field theory. The single-valuedness of the Green function in the semiclassical (ℏ → 0) limit leads to a Bohr-Sommerfeld quantization. A path integral formalism for the Green functions analogous to that in non-relativistic quantum mechanics is employed, and a semiclassical approach which uses our classical solutions indicates non-perturbative effects. They reflect an e^{1/λ} singularity at the zero coupling constant point. (orig.)

  12. Non-perturbative renormalization of left-left four-fermion operators in quenched lattice QCD

    CERN Document Server

    Guagnelli, M; Peña, C; Sint, S; Vladikas, A

    2006-01-01

    We define a family of Schroedinger Functional renormalization schemes for the four-quark multiplicatively renormalizable operators of the $\\Delta F = 1$ and $\\Delta F = 2$ effective weak Hamiltonians. Using the lattice regularization with quenched Wilson quarks, we compute non-perturbatively the renormalization group running of these operators in the continuum limit in a large range of renormalization scales. Continuum limit extrapolations are well controlled thanks to the implementation of two fermionic actions (Wilson and Clover). The ratio of the renormalization group invariant operator to its renormalized counterpart at a low energy scale, as well as the renormalization constant at this scale, is obtained for all schemes.

  13. Non-Perturbative Asymptotic Improvement of Perturbation Theory and Mellin-Barnes Representation

    Directory of Open Access Journals (Sweden)

    Samuel Friot

    2010-10-01

    Using a method mixing Mellin-Barnes representation and Borel resummation, we show how to obtain hyperasymptotic expansions from the (divergent) formal power series which follow from the perturbative evaluation of arbitrary ''N-point'' functions for the simple case of zero-dimensional φ⁴ field theory. This hyperasymptotic improvement appears from an iterative procedure, based on inverse factorial expansions, and gives birth to interwoven non-perturbative partial sums whose coefficients are related to the perturbative ones by an interesting resurgence phenomenon. It is a non-perturbative improvement in the sense that, for some optimal truncations of the partial sums, the remainder at a given hyperasymptotic level is exponentially suppressed compared to the remainder at the preceding hyperasymptotic level. The Mellin-Barnes representation allows our results to be automatically valid for a wide range of the phase of the complex coupling constant, including Stokes lines. A numerical analysis is performed to emphasize the improved accuracy that this method allows one to reach compared to the usual perturbative approach, and the importance of hyperasymptotic optimal truncation schemes.
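
    The divergent-series phenomenon underlying this record can be seen in a few lines. The sketch below is an illustrative toy (my own normalization and coupling value, not the authors' Mellin-Barnes procedure): it compares the exact zero-dimensional partition function Z(g) = ∫ dφ exp(-φ²/2 - gφ⁴)/√(2π) with its truncated perturbative expansion, whose error first shrinks and then grows with the truncation order, the minimum lying near N* ≈ 1/(16g).

```python
import math

def z_exact(g, half_width=10.0, n=20001):
    """Exact Z(g) = int dphi exp(-phi^2/2 - g*phi^4)/sqrt(2*pi), by trapezoid rule."""
    h = 2.0 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_width + i * h
        w = 0.5 * h if i in (0, n - 1) else h
        total += w * math.exp(-0.5 * x * x - g * x**4)
    return total / math.sqrt(2.0 * math.pi)

def z_series(g, order):
    """Truncated perturbative sum: c_n = (-g)^n (4n-1)!!/n!, a divergent asymptotic series."""
    total = 0.0
    for n in range(order + 1):
        dfact = 1.0
        for k in range(1, 4 * n, 2):   # (4n-1)!! = 1*3*...*(4n-1)
            dfact *= k
        total += (-g) ** n * dfact / math.factorial(n)
    return total

g = 0.01
exact = z_exact(g)
errors = [abs(z_series(g, N) - exact) for N in range(15)]
# the truncation error decreases down to the optimal order near N* ~ 1/(16 g),
# then grows again: the hallmark of an asymptotic series
```

The hyperasymptotic schemes of the paper go beyond this naive optimal truncation by resumming the exponentially small remainder itself.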

  14. Fundamental parameters of QCD from non-perturbative methods for two and four flavors

    International Nuclear Information System (INIS)

    Marinkovic, Marina

    2013-01-01

    The non-perturbative formulation of Quantum Chromodynamics (QCD) on a four-dimensional Euclidean space-time lattice, together with finite-size techniques, enables us to perform the renormalization of the QCD parameters non-perturbatively. In order to obtain precise predictions from lattice QCD, one needs to include dynamical fermions in lattice QCD simulations. We consider QCD with two and four mass-degenerate flavors of O(a) improved Wilson quarks. In this thesis, we improve the existing determinations of the fundamental parameters of two and four flavor QCD. In the four flavor theory, we compute a precise value of the Λ parameter in units of the scale L_max defined in the hadronic regime. We also give a precise determination of the Schroedinger functional running coupling in the four flavour theory and compare it to perturbative results. The Monte Carlo simulations of lattice QCD within the Schroedinger Functional framework were performed with a platform-independent program package, Schroedinger Funktional Mass Preconditioned Hybrid Monte Carlo (SF-MP-HMC), developed as a part of this project. Finally, we compute the strange quark mass and the Λ parameter in the two flavour theory, performing a well-controlled continuum limit and chiral extrapolation. To achieve this, we developed a universal program package for simulating two flavours of Wilson fermions, Mass Preconditioned Hybrid Monte Carlo (MP-HMC), which we used to run large-scale simulations at small lattice spacings and at pion masses close to the physical value.

  15. Wall Correction Model for Wind Tunnels with Open Test Section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2004-01-01

    ... the corrections from the model are in very good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections. Keywords: Wind tunnel correction, momentum theory...

  16. Diffusion corrections to the hard pomeron

    CERN Document Server

    Ciafaloni, Marcello; Müller, A.H.; Taiuti, Martina

    2001-01-01

    The high-energy behaviour of two-scale hard processes is investigated in the framework of small-x models with running coupling, having the Airy diffusion model as prototype. We show that, in some intermediate high-energy regime, the perturbative hard Pomeron exponent determines the energy dependence, and we prove that diffusion corrections have the form hinted at before in particular cases. We also discuss the breakdown of such regime at very large energies, and the onset of the non-perturbative Pomeron behaviour.

  17. Paralegals in Corrections: A Proposed Model.

    Science.gov (United States)

    McShane, Marilyn D.

    1987-01-01

    Describes the legal assistance program currently offered by the Texas Department of Corrections, which demonstrates the wide range of questions and problems that the paralegal can address. Reviews paralegals' functions in the prison setting and the services they can provide in assisting prisoners to maintain their rights. (Author/ABB)

  18. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in the sciences. In this study, we investigate such a problem using the classic Lorenz (1963) equation as the prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.
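
    The twin-experiment setup described in this record can be sketched in a few lines (illustrative parameter values and forcing amplitude of my own choosing, not those of the paper): the classic Lorenz-63 system plays the imperfect model, while the same system with an added periodic term stands in for "reality"; the model error is then the drift between the two trajectories.

```python
import math

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # classic Lorenz-63 parameters

def lorenz(state, t, forcing=0.0):
    """Right-hand side; `forcing` adds a periodic term standing in for model error."""
    x, y, z = state
    return (SIGMA * (y - x) + forcing * math.sin(t),
            x * (RHO - z) - y,
            x * y - BETA * z)

def rk4_step(f, state, t, dt, **kw):
    """One classical Runge-Kutta-4 step for a tuple-valued ODE right-hand side."""
    k1 = f(state, t, **kw)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)), t + 0.5 * dt, **kw)
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)), t + 0.5 * dt, **kw)
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)), t + dt, **kw)
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def trajectory(forcing, steps=2000, dt=0.01, start=(1.0, 1.0, 1.0)):
    state, t, out = start, 0.0, []
    for _ in range(steps):
        state = rk4_step(lorenz, state, t, dt, forcing=forcing)
        t += dt
        out.append(state)
    return out

truth = trajectory(forcing=2.0)   # "observations": reality with a periodic error term
model = trajectory(forcing=0.0)   # the imperfect prediction model
```

The EM approach of the paper would then evolve candidate correction terms until the model trajectory reproduces the "observations"; here only the data-generation half of that experiment is shown.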

  19. B-physics from non-perturbatively renormalized HQET in two-flavour lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Bernardoni, Fabio; Simma, Hubert [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Blossier, Benoit; Gerardin, Antoine [Paris-11 Univ., 91 - Orsay (France). Lab. de Physique Theorique; CNRS, Orsay (France); Bulava, John [CERN, Geneva (Switzerland). Physics Department; Della Morte, Michele; Hippel, Georg M. von [Mainz Univ. (Germany). Inst. fuer Kernphysik; Fritzsch, Patrick [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Garron, Nicolas [Trinity College, Dublin (Ireland). School of Mathematics; Heitger, Jochen [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1; Collaboration: ALPHA Collaboration

    2012-10-15

    We report on the ALPHA Collaboration's lattice B-physics programme based on N_f = 2 O(a) improved Wilson fermions and HQET, including all NLO effects in the inverse heavy quark mass, as well as non-perturbative renormalization and matching, to fix the parameters of the effective theory. Our simulations in large physical volume cover 3 lattice spacings a ≈ 0.05-0.08 fm and pion masses down to 190 MeV to control continuum and chiral extrapolations. We present the status of results for the b-quark mass and the B_(s)-meson decay constants, f_B and f_{B_s}.

  20. Phase control of the probability of electronic transitions in the non-perturbative laser field intensity

    International Nuclear Information System (INIS)

    Yokoyama, Keiichi; Sugita, Akihiro; Yamada, Hidetaka; Teranishi, Yoshiaki; Yokoyama, Atsushi

    2007-01-01

    A preparatory study on the quantum control of the selective transition K(4S_{1/2}) → K(4P_J) (J = 1/2, 3/2) in an intense laser field is reported. To generate high average power femtosecond laser pulses with sufficient field intensity, a Ti:Sapphire regenerative amplifier system with a repetition rate of 1 kHz is constructed. The bandwidth and pulse energy are shown to meet the values required for a completely selective transition with 100% population inversion. A preliminary experiment on the selective excitation shows that the fringe pattern formed by a phase-related pulse pair depends on the laser intensity, indicating that the perturbative behavior of the excitation probabilities is no longer valid and the laser intensity reaches the non-perturbative region. (author)

  1. A Non-Perturbative, Finite Particle Number Approach to Relativistic Scattering Theory

    Energy Technology Data Exchange (ETDEWEB)

    Lindesay, James V

    2001-05-11

    We present integral equations for the scattering amplitudes of three scalar particles, using the Faddeev channel decomposition, which can be readily extended to any finite number of particles of any helicity. The solution of these equations, which have been demonstrated to be calculable, provides a non-perturbative way of obtaining relativistic scattering amplitudes for any finite number of particles that are Lorentz invariant, unitary, cluster decomposable and reduce unambiguously in the non-relativistic limit to the non-relativistic Faddeev equations. The aim of this program is to develop equations which explicitly depend upon physically observable input variables and do not require ''renormalization'' or ''dressing'' of these parameters to connect them to the boundary states.

  2. Non-perturbative running of quark masses in three-flavour QCD

    CERN Document Server

    Campos, Isabel; Pena, Carlos; Preti, David; Ramos, Alberto; Vladikas, Anastassios

    2016-01-01

    We present our preliminary results for the computation of the non-perturbative running of renormalized quark masses in $N_f = 3$ QCD, between the electroweak and hadronic scales, using standard finite-size scaling techniques. The computation is carried out to very high precision, using massless $\\mathcal{O}(a)$-improved Wilson quarks. Following the strategy adopted by the ALPHA Collaboration for the running coupling, different schemes are used above and below a scale $\\mu_0 \\sim m_b$, which differ by using either the Schr\\"odinger Functional or Gradient Flow renormalized coupling. We discuss our results for the running in both regions, and the procedure to match the two schemes.
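
    The finite-size (step-scaling) recursion behind such computations can be caricatured with a purely perturbative stand-in. In the sketch below, the two-loop beta function replaces the non-perturbatively determined step-scaling function σ(u), which maps the squared coupling u = ḡ²(1/L) at box size L to its value at box size 2L (scale μ/2); all numerical values are illustrative assumptions, not results of the paper.

```python
import math

NF = 3                                        # three-flavour QCD
B0 = (11 - 2 * NF / 3) / (4 * math.pi) ** 2   # universal one-loop coefficient
B1 = (102 - 38 * NF / 3) / (4 * math.pi) ** 4 # universal two-loop coefficient

def du_dlnmu(u):
    """Two-loop RG: d u / d ln(mu) = -2 u^2 (B0 + B1 u), with u = gbar^2."""
    return -2.0 * u * u * (B0 + B1 * u)

def sigma(u, nsteps=1000):
    """Step-scaling function: coupling at scale mu/2 (box size 2L) given u at mu."""
    h = -math.log(2.0) / nsteps               # integrate ln(mu) downward by ln 2
    for _ in range(nsteps):                   # RK4 in t = ln(mu)
        k1 = du_dlnmu(u)
        k2 = du_dlnmu(u + 0.5 * h * k1)
        k3 = du_dlnmu(u + 0.5 * h * k2)
        k4 = du_dlnmu(u + h * k3)
        u += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

# iterate u_{k+1} = sigma(u_k): each step halves the scale, connecting the
# electroweak regime to the hadronic regime in a handful of doublings
u = 1.0
couplings = [u]
for _ in range(8):
    u = sigma(u)
    couplings.append(u)
```

In the actual computation σ(u) is measured on the lattice at several resolutions and extrapolated to the continuum before the recursion is applied; the mass running uses an analogous recursion for the renormalization factor.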

  3. Non-perturbative QCD. Renormalization, O(a)-improvement and matching to heavy quark effective theory

    International Nuclear Information System (INIS)

    Sommer, R.

    2006-11-01

    We give an introduction to three topics in lattice gauge theory: I. The Schroedinger Functional and O(a) improvement. O(a) improvement has been reviewed several times. Here we focus on explaining the basic ideas in detail and then proceed directly to an overview of the literature and our personal assessment of what has been achieved and what is missing. II. The computation of the running coupling, running quark masses and the extraction of the renormalization group invariants. We focus on the basic strategy and on the large effort that has been invested in understanding the continuum limit. We point out what remains to be done. III. Non-perturbative Heavy Quark Effective Theory. Since the literature on this subject is still rather sparse, we go beyond the basic ideas and discuss in some detail how the theory works in principle and in practice. (orig.)

  4. Non-perturbative QCD. Renormalization, O(a)-improvement and matching to heavy quark effective theory

    Energy Technology Data Exchange (ETDEWEB)

    Sommer, R.

    2006-11-15

    We give an introduction to three topics in lattice gauge theory: I. The Schroedinger Functional and O(a) improvement. O(a) improvement has been reviewed several times. Here we focus on explaining the basic ideas in detail and then proceed directly to an overview of the literature and our personal assessment of what has been achieved and what is missing. II. The computation of the running coupling, running quark masses and the extraction of the renormalization group invariants. We focus on the basic strategy and on the large effort that has been invested in understanding the continuum limit. We point out what remains to be done. III. Non-perturbative Heavy Quark Effective Theory. Since the literature on this subject is still rather sparse, we go beyond the basic ideas and discuss in some detail how the theory works in principle and in practice. (orig.)

  5. Holographic p-wave superconductor models with Weyl corrections

    Directory of Open Access Journals (Sweden)

    Lu Zhang

    2015-04-01

    We study the effect of Weyl corrections on holographic p-wave dual models in the backgrounds of an AdS soliton and an AdS black hole via a Maxwell complex vector field model, using numerical and analytical methods. We find that, in the soliton background, the Weyl corrections do not influence the properties of the holographic p-wave insulator/superconductor phase transition, which differs from the case of Yang-Mills theory. However, in the black hole background, we observe that, similarly to the Weyl correction effects in Yang-Mills theory, higher Weyl corrections make it easier for the p-wave metal/superconductor phase transition to be triggered, which shows that these two p-wave models with Weyl corrections share some similar features for the condensation of the vector operator.

  6. Wall correction model for wind tunnels with open test section

    DEFF Research Database (Denmark)

    Sørensen, Jens Nørkær; Shen, Wen Zhong; Mikkelsen, Robert Flemming

    2006-01-01

    In the paper we present a correction model for wall interference on rotors of wind turbines or propellers in wind tunnels. The model, which is based on a one-dimensional momentum approach, is validated against results from CFD computations using a generalized actuator disc principle. In the model ... good agreement with the CFD computations, demonstrating that one-dimensional momentum theory is a reliable way of predicting corrections for wall interference in wind tunnels with closed as well as open cross sections.
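
    For orientation, the free-stream one-dimensional momentum (actuator-disc) relations on which such correction models build fit in a few lines; this is the standard textbook material only, not the paper's tunnel-interference terms.

```python
import math

def thrust_coefficient(a):
    """C_T = 4a(1-a): axial momentum balance for an actuator disc with induction a."""
    return 4.0 * a * (1.0 - a)

def power_coefficient(a):
    """C_P = 4a(1-a)^2, maximal at a = 1/3 (Betz limit, C_P = 16/27)."""
    return 4.0 * a * (1.0 - a) ** 2

def induction_from_thrust(ct):
    """Invert C_T = 4a(1-a) on the momentum-theory branch a <= 1/2 (needs C_T <= 1)."""
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))
```

In correction procedures of this type, a measured thrust coefficient is typically converted to an induction factor via such an inversion, after which the tunnel geometry enters through an equivalent free-stream velocity.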

  7. Mass corrections to Green functions in instanton vacuum model

    International Nuclear Information System (INIS)

    Esaibegyan, S.V.; Tamaryan, S.N.

    1987-01-01

    The first nonvanishing mass corrections to the effective Green functions are calculated in a model of the instanton-based vacuum consisting of a superposition of instanton-antiinstanton fluctuations. The meson current correlators are calculated taking these corrections into account; the mass spectrum of the pseudoscalar octet as well as the value of the kaon axial constant are found. 7 refs

  8. Quantum fields in the non-perturbative regime. Yang-Mills theory and gravity

    Energy Technology Data Exchange (ETDEWEB)

    Eichhorn, Astrid

    2011-09-06

    In this thesis we study candidates for fundamental quantum field theories, namely non-Abelian gauge theories and asymptotically safe quantum gravity. Whereas the first have a strongly interacting low-energy limit, the second enters a non-perturbative regime at high energies. Thus, we apply a tool suited to the study of quantum field theories beyond the perturbative regime, namely the Functional Renormalisation Group. In a first part, we concentrate on the physical properties of non-Abelian gauge theories at low energies. Focussing on the vacuum properties of the theory, we present an evaluation of the full effective potential for the field strength invariant F_μν F^μν from non-perturbative gauge correlation functions and find a non-trivial minimum corresponding to the existence of a dimension-four gluon condensate in the vacuum. We also relate the infrared asymptotic form of the β function of the running background-gauge coupling to the asymptotic behavior of Landau-gauge gluon and ghost propagators and derive an upper bound on their scaling exponents. We then consider the theory at finite temperature and study the nature of the confinement phase transition in d = 3+1 dimensions in various non-Abelian gauge theories. For SU(N) with N = 3,...,12 and Sp(2) we find a first-order phase transition in agreement with general expectations. Moreover our study suggests that the phase transition in E(7) Yang-Mills theory is also of first order. Our studies shed light on the question of which property of a gauge group determines the order of the phase transition. In a second part we consider asymptotically safe quantum gravity. Here, we focus on the Faddeev-Popov ghost sector of the theory, to study its properties in the context of an interacting UV regime. We investigate several truncations, which all lend support to the conjecture that gravity may be asymptotically safe. In a first truncation, we study the ghost anomalous dimension

  9. Quantum fields in the non-perturbative regime. Yang-Mills theory and gravity

    International Nuclear Information System (INIS)

    Eichhorn, Astrid

    2011-01-01

    In this thesis we study candidates for fundamental quantum field theories, namely non-Abelian gauge theories and asymptotically safe quantum gravity. Whereas the first have a strongly interacting low-energy limit, the second enters a non-perturbative regime at high energies. Thus, we apply a tool suited to the study of quantum field theories beyond the perturbative regime, namely the Functional Renormalisation Group. In a first part, we concentrate on the physical properties of non-Abelian gauge theories at low energies. Focussing on the vacuum properties of the theory, we present an evaluation of the full effective potential for the field strength invariant F_μν F^μν from non-perturbative gauge correlation functions and find a non-trivial minimum corresponding to the existence of a dimension-four gluon condensate in the vacuum. We also relate the infrared asymptotic form of the β function of the running background-gauge coupling to the asymptotic behavior of Landau-gauge gluon and ghost propagators and derive an upper bound on their scaling exponents. We then consider the theory at finite temperature and study the nature of the confinement phase transition in d = 3+1 dimensions in various non-Abelian gauge theories. For SU(N) with N = 3,...,12 and Sp(2) we find a first-order phase transition in agreement with general expectations. Moreover our study suggests that the phase transition in E(7) Yang-Mills theory is also of first order. Our studies shed light on the question of which property of a gauge group determines the order of the phase transition. In a second part we consider asymptotically safe quantum gravity. Here, we focus on the Faddeev-Popov ghost sector of the theory, to study its properties in the context of an interacting UV regime. We investigate several truncations, which all lend support to the conjecture that gravity may be asymptotically safe. In a first truncation, we study the ghost anomalous dimension which we find to be negative at the

  10. Correction

    CERN Multimedia

    2002-01-01

    Tile Calorimeter modules stored at CERN. The larger modules belong to the Barrel, whereas the smaller ones are for the two Extended Barrels. (The article was about the completion of the 64 modules for one of the latter.) The photo on the first page of the Bulletin n°26/2002, from 24 July 2002, illustrating the article «The ATLAS Tile Calorimeter gets into shape» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.

  11. Planck-scale corrections to axion models

    International Nuclear Information System (INIS)

    Barr, S.M.; Seckel, D.

    1992-01-01

    It has been argued that quantum gravitational effects will violate all nonlocal symmetries. Peccei-Quinn symmetries must therefore be an ''accidental'' or automatic consequence of local gauge symmetry. Moreover, higher-dimensional operators suppressed by powers of M_Pl are expected to explicitly violate the Peccei-Quinn symmetry. Unless these operators are of dimension d ≥ 10, axion models do not solve the strong CP problem in a natural fashion. A small gravitationally induced contribution to the axion mass has little if any effect on the density of relic axions. If d = 10, 11, or 12 these operators can solve the axion domain-wall problem, and we describe a simple class of Kim-Shifman-Vainshtein-Zakharov axion models where this occurs. We also study the astrophysics and cosmology of ''heavy axions'' in models where 5 ≤ d ≤ 10.

  12. Non-perturbative Approach to Equation of State and Collective Modes of the QGP

    Directory of Open Access Journals (Sweden)

    Y.F. Liu Shuai

    2018-01-01

    We discuss a non-perturbative T-matrix approach to investigate the microscopic structure of the quark-gluon plasma (QGP), utilizing an effective Hamiltonian which includes both light- and heavy-parton degrees of freedom. The basic two-body interaction includes color-Coulomb and confining contributions in all available color channels, and is constrained by lattice-QCD data for the heavy-quark free energy. The in-medium T-matrices and parton spectral functions are computed self-consistently with full account of off-shell properties encoded in large scattering widths. We apply the T-matrices to calculate the equation of state (EoS) of the QGP, including a ladder resummation of the Luttinger-Ward functional using a matrix-log technique to account for the dynamical formation of bound states. It turns out that the latter become the dominant degrees of freedom in the EoS at low QGP temperatures, indicating a transition from parton to hadron degrees of freedom. The calculated spectral properties of one- and two-body states confirm this picture, where large parton scattering rates dissolve the parton quasiparticle structures while broad resonances start to form as the pseudocritical temperature is approached from above. Further calculations of transport coefficients reveal a small viscosity and a small heavy-quark diffusion coefficient.

  13. G-fluxes and non-perturbative stabilisation of heterotic M-theory

    International Nuclear Information System (INIS)

    Curio, Gottfried; Krause, Axel

    2002-01-01

    We examine heterotic M-theory compactified on a Calabi-Yau manifold with an additional parallel M5-brane. The dominant non-perturbative effect stems from open membrane instantons connecting the M5 with the boundaries. We derive the four-dimensional low-energy supergravity potential for this situation, including subleading contributions, as it turns out that the leading term vanishes after minimisation. At the minimum of the potential the M5 gets stabilised at the middle of the orbifold interval, while the vacuum energy is shown to be manifestly positive. Moreover, induced by the non-trivial running of the Calabi-Yau volume along the orbifold, which is driven by the G-fluxes, we find that the orbifold length and the Calabi-Yau volume modulus are stabilised at values which are related by the G-flux of the visible boundary. Finally we determine the supersymmetry-breaking scale and the gravitino mass for this open membrane vacuum.

  14. α_s from the non-perturbatively renormalised lattice three-gluon vertex

    Energy Technology Data Exchange (ETDEWEB)

    Alles, B. [Pisa Univ. (Italy). Dipt. di Fisica; Henty, D.S. [Department of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3JZ (United Kingdom); Panagopoulos, H. [Department of Natural Sciences, University of Cyprus, CY-1678 Nicosia (Cyprus); Parrinello, C. [Department of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX (United Kingdom); Pittori, C. [L.P.T.H.E., Universite de Paris Sud, Centre d`Orsay, 91405 Orsay (France); Richards, D.G. [Department of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3JZ (United Kingdom)]|[Fermilab, P.O. Box 500, Batavia, IL 60510 (United States)

    1997-09-29

    We compute the running QCD coupling on the lattice by evaluating two-point and three-point off-shell gluon Green's functions in a fixed gauge and imposing non-perturbative renormalisation conditions on them. Our exploratory study is performed in the quenched approximation at β = 6.0 on 16^4 and 24^4 lattices. We show that, for momenta in the range 1.8-2.3 GeV, our coupling runs according to the two-loop asymptotic formula, allowing a precise determination of the corresponding Λ parameter. The role of lattice artifacts and finite-volume effects is carefully analysed, and these appear to be under control in the momentum range of interest. Our renormalisation procedure corresponds to a momentum subtraction scheme in continuum field theory, and therefore lattice perturbation theory is not needed in order to match our results to the MS-bar scheme, thus eliminating a major source of uncertainty in the determination of α_MS-bar. Our method can be applied directly to the unquenched case. (orig.). 20 refs.
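
    Extracting Λ from a measured coupling amounts to inverting the two-loop asymptotic formula α(q) = [1 - (b1/b0²) ln t / t] / (b0 t), t = ln(q²/Λ²). The sketch below (quenched n_f = 0 coefficients, GeV units, and a simple bisection rather than the paper's fitting procedure; the numerical values are illustrative) shows the inversion:

```python
import math

NF = 0                                        # quenched approximation
B0 = (33 - 2 * NF) / (12 * math.pi)           # one-loop coefficient b0
B1 = (153 - 19 * NF) / (24 * math.pi ** 2)    # two-loop coefficient b1

def alpha_two_loop(q, lam):
    """Two-loop asymptotic running coupling, t = ln(q^2/Lambda^2) (q, lam in GeV)."""
    t = math.log(q * q / (lam * lam))
    return (1.0 / (B0 * t)) * (1.0 - (B1 / B0 ** 2) * math.log(t) / t)

def lambda_from_alpha(q, alpha, lo=0.01, hi=1.0):
    """Solve alpha_two_loop(q, lam) = alpha for lam by bisection.

    Within this bracket alpha grows monotonically with lam (t shrinks).
    """
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if alpha_two_loop(q, mid) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A fit of the measured momentum dependence over 1.8-2.3 GeV, as in the paper, constrains Λ more robustly than a single-point inversion like this one.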

  15. Infrared behavior of the effective coupling in quantum chromodynamics: A non-perturbative approach

    International Nuclear Information System (INIS)

    Bar-Gadda, U.

    1980-01-01

    In this paper we examine a different viewpoint, based on a self-consistent approach. This means that rather than attempting to identify any particular physical mechanism as dominating the QCD vacuum state, we use the non-perturbative Schwinger-Dyson equations and Slavnov-Taylor identities of QCD, as well as the renormalization group equation, to obtain the self-consistent behavior of the effective coupling in the infrared region. We show that the infrared effective coupling behavior ḡ(q²/μ², g_R(μ)) = (μ²/q²)^{λ/2} g_R(μ) in the infrared limit q²/μ² → 0, where μ² is the Euclidean subtraction point and λ = (d - 2)/2, with d the space-time dimension, is the preferred solution if a sufficient self-consistency condition is satisfied. Finally we briefly discuss the nature of the dynamical mass Λ and the 1/N expansion as well as an effective bound state equation. (orig.)

  16. A complete non-perturbative renormalization prescription for quasi-PDFs

    Energy Technology Data Exchange (ETDEWEB)

    Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus); Cichy, Krzysztof [Frankfurt Univ. (Germany). Inst. fuer Theoretische Physik; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Constantinou, Martha [Temple Univ., Philadelphia, PA (United States). Dept. of Physics; Hadjiyiannakou, Kyriakos [The Cyprus Institute, Nicosia (Cyprus); Jansen, Karl; Steffens, Fernanda [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Panagopoulos, Haralambos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Collaboration: European Twisted Mass Collaboration

    2017-06-15

    In this work we present, for the first time, the non-perturbative renormalization of the unpolarized, helicity and transversity quasi-PDFs, in an RI′ scheme. The proposed prescription addresses simultaneously all aspects of renormalization: logarithmic divergences, finite renormalization, as well as the linear divergence which is present in the matrix elements of fermion operators with Wilson lines. Furthermore, for the case of the unpolarized quasi-PDF, we describe how to eliminate the unwanted mixing with the twist-3 scalar operator. We utilize perturbation theory for the one-loop conversion factor that brings the renormalization functions to the MS-scheme at a scale of 2 GeV. We also explain how to improve the estimates of the renormalization functions by eliminating lattice artifacts. The latter can be computed in one-loop perturbation theory and to all orders in the lattice spacing. We apply the renormalization methodology to an ensemble of twisted mass fermions with N_f = 2+1+1 dynamical quarks and a pion mass of around 375 MeV.

  17. Non-perturbative measurement of low-intensity charged particle beams

    Science.gov (United States)

    Fernandes, M.; Geithner, R.; Golm, J.; Neubert, R.; Schwickert, M.; Stöhlker, T.; Tan, J.; Welsch, C. P.

    2017-01-01

    Non-perturbative measurements of low-intensity charged particle beams are particularly challenging for beam diagnostics due to the low amplitude of the induced electromagnetic fields. In the low-energy antiproton decelerator (AD) and the future extra low energy antiproton rings at CERN, an absolute measurement of the beam intensity is essential to monitor the operation efficiency. Superconducting quantum interference device (SQUID) based cryogenic current comparators (CCCs) have been used for measuring slow charged beams in the nA range, showing very good current resolution, but these were unable to measure fast bunched beams, due to the slew-rate limitation of SQUID devices, and presented a strong susceptibility to external perturbations. Here, we present a CCC system developed for the AD machine, which was optimised in terms of its current resolution, system stability, ability to cope with short bunched beams, and immunity to mechanical vibrations. This paper presents the monitor design and the first results from measurements with a low-energy antiproton beam obtained in the AD in 2015. These are the first CCC beam current measurements ever performed in a synchrotron machine with both coasting and short bunched beams. It is shown that the system is able to stably measure the AD beam throughout the entire cycle, with a current resolution of 30 nA.

  18. Towards a non-perturbative matching of HQET and QCD with dynamical light quarks

    International Nuclear Information System (INIS)

    Della Morte, M.; Simma, H.; Sommer, R.

    2007-10-01

    We explain how the strategy of solving renormalization problems in HQET non-perturbatively by a matching to QCD in finite volume can be implemented to include dynamical fermions. As a primary application, some elements of an HQET computation of the mass of the b-quark beyond the leading order with N_f = 2 are outlined. In particular, the matching of HQET and QCD requires relativistic QCD simulations in a volume with L ∼ 0.5 fm, which will serve to quantitatively determine the heavy quark mass dependence of heavy-light meson observables in the continuum limit of finite-volume two-flavour lattice QCD. As a preparation for the latter, we report on our determination of the renormalization constants and improvement coefficients relating the renormalized current and subtracted bare quark mass in the relevant weak coupling region. The calculation of these coefficients employs a constant physics condition in the Schrödinger functional scheme, where the box size L is fixed by working at a prescribed value of the renormalized coupling. (orig.)

  19. Necessary and sufficient conditions for non-perturbative equivalences of large Nc orbifold gauge theories

    International Nuclear Information System (INIS)

    Kovtun, Pavel; Uensal, Mithat; Yaffe, Laurence G.

    2005-01-01

    Large N_c coherent state methods are used to study the relation between U(N_c) gauge theories containing adjoint representation matter fields and their orbifold projections. The classical dynamical systems which reproduce the large N_c limits of the quantum dynamics in parent and daughter orbifold theories are compared. We demonstrate that the large N_c dynamics of the parent theory, restricted to the subspace invariant under the orbifold projection symmetry, and the large N_c dynamics of the daughter theory, restricted to the untwisted sector invariant under 'theory space' permutations, coincide. This implies equality, in the large N_c limit, between appropriately identified connected correlation functions in parent and daughter theories, provided the orbifold projection symmetry is not spontaneously broken in the parent theory and the theory space permutation symmetry is not spontaneously broken in the daughter. The necessity of these symmetry realization conditions for the validity of the large N_c equivalence is unsurprising, but demonstrating the sufficiency of these conditions is new. This work extends an earlier proof of non-perturbative large N_c equivalence which was only valid in the phase of the (lattice regularized) theories continuously connected to large mass and strong coupling.

  20. Towards a non-perturbative matching of HQET and QCD with dynamical light quarks

    Energy Technology Data Exchange (ETDEWEB)

    Della Morte, M. [CERN, Geneva (Switzerland). Physics Dept.]; Fritzsch, P.; Heitger, J. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1]; Meyer, H.B. [Massachusetts Institute of Technology, Center for Theoretical Physics, Cambridge, MA (United States)]; Simma, H.; Sommer, R. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]

    2007-10-15

    We explain how the strategy of solving renormalization problems in HQET non-perturbatively by a matching to QCD in finite volume can be implemented to include dynamical fermions. As a primary application, some elements of an HQET computation of the mass of the b-quark beyond the leading order with N_f = 2 are outlined. In particular, the matching of HQET and QCD requires relativistic QCD simulations in a volume with L ≈ 0.5 fm, which will serve to quantitatively determine the heavy quark mass dependence of heavy-light meson observables in the continuum limit of finite-volume two-flavour lattice QCD. As a preparation for the latter, we report on our determination of the renormalization constants and improvement coefficients relating the renormalized current and subtracted bare quark mass in the relevant weak coupling region. The calculation of these coefficients employs a constant physics condition in the Schrödinger functional scheme, where the box size L is fixed by working at a prescribed value of the renormalized coupling. (orig.)

  1. Inflation via logarithmic entropy-corrected holographic dark energy model

    Energy Technology Data Exchange (ETDEWEB)

    Darabi, F.; Felegary, F. [Azarbaijan Shahid Madani University, Department of Physics, Tabriz (Iran, Islamic Republic of); Setare, M.R. [University of Kurdistan, Department of Science, Bijar (Iran, Islamic Republic of)

    2016-12-15

    We study inflation in the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained for inflation in the holographic dark energy (HDE) model. In this comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than in the HDE model. Moreover, consistency with the observational data constrains the reheating temperature and Hubble parameter of the LECHDE inflation model through one parameter of the holographic dark energy and two new parameters of the logarithmic corrections. (orig.)

  2. Inflation via logarithmic entropy-corrected holographic dark energy model

    International Nuclear Information System (INIS)

    Darabi, F.; Felegary, F.; Setare, M.R.

    2016-01-01

    We study inflation in the logarithmic entropy-corrected holographic dark energy (LECHDE) model with future event horizon, particle horizon, and Hubble horizon cut-offs, and we compare the results with those obtained for inflation in the holographic dark energy (HDE) model. In this comparison, the primordial scalar power spectrum in the LECHDE model becomes redder than in the HDE model. Moreover, consistency with the observational data constrains the reheating temperature and Hubble parameter of the LECHDE inflation model through one parameter of the holographic dark energy and two new parameters of the logarithmic corrections. (orig.)

  3. Correction

    Directory of Open Access Journals (Sweden)

    2012-01-01

    Full Text Available Regarding Gorelik, G., & Shackelford, T. K. (2011). Human sexual conflict from molecules to culture. Evolutionary Psychology, 9, 564–587: The authors wish to correct an omission in citation to the existing literature. In the final paragraph on p. 570, we neglected to cite Burch and Gallup (2006) [Burch, R. L., & Gallup, G. G., Jr. (2006). The psychobiology of human semen. In S. M. Platek & T. K. Shackelford (Eds.), Female infidelity and paternal uncertainty (pp. 141–172). New York: Cambridge University Press.]. Burch and Gallup (2006) reviewed the relevant literature on FSH and LH discussed in this paragraph, and should have been cited accordingly. In addition, Burch and Gallup (2006) should have been cited as the originators of the hypothesis regarding the role of FSH and LH in the semen of rapists. The authors apologize for this oversight.

  4. Correction

    CERN Multimedia

    2002-01-01

    The photo on the second page of the Bulletin n°48/2002, from 25 November 2002, illustrating the article «Spanish Visit to CERN» was published with a wrong caption. We would like to apologise for this mistake and so publish it again with the correct caption.   The Spanish delegation, accompanied by Spanish scientists at CERN, also visited the LHC superconducting magnet test hall (photo). From left to right: Felix Rodriguez Mateos of CERN LHC Division, Josep Piqué i Camps, Spanish Minister of Science and Technology, César Dopazo, Director-General of CIEMAT (Spanish Research Centre for Energy, Environment and Technology), Juan Antonio Rubio, ETT Division Leader at CERN, Manuel Aguilar-Benitez, Spanish Delegate to Council, Manuel Delfino, IT Division Leader at CERN, and Gonzalo León, Secretary-General of Scientific Policy to the Minister.

  5. Correction

    Directory of Open Access Journals (Sweden)

    2014-01-01

    Regarding Tagler, M. J., and Jeffers, H. M. (2013). Sex differences in attitudes toward partner infidelity. Evolutionary Psychology, 11, 821–832: The authors wish to correct values in the originally published manuscript. Specifically, incorrect 95% confidence intervals around the Cohen's d values were reported on page 826 of the manuscript, where we reported the within-sex simple effects for the significant Participant Sex × Infidelity Type interaction (first paragraph) and for attitudes toward partner infidelity (second paragraph). Corrected values are presented in bold below. The authors would like to thank Dr. Bernard Beins at Ithaca College for bringing these errors to our attention. Men rated sexual infidelity significantly more distressing (M = 4.69, SD = 0.74) than they rated emotional infidelity (M = 4.32, SD = 0.92), F(1, 322) = 23.96, p < .001, d = 0.44, 95% CI [0.23, 0.65], but there was little difference between women's ratings of sexual (M = 4.80, SD = 0.48) and emotional infidelity (M = 4.76, SD = 0.57), F(1, 322) = 0.48, p = .29, d = 0.08, 95% CI [−0.10, 0.26]. As expected, men rated sexual infidelity (M = 1.44, SD = 0.70) more negatively than they rated emotional infidelity (M = 2.66, SD = 1.37), F(1, 322) = 120.00, p < .001, d = 1.12, 95% CI [0.85, 1.39]. Although women also rated sexual infidelity (M = 1.40, SD = 0.62) more negatively than they rated emotional infidelity (M = 2.09, SD = 1.10), this difference was not as large, and thus was in the direction supportive of evolutionary theory, F(1, 322) = 72.03, p < .001, d = 0.77, 95% CI [0.60, 0.94].
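The corrected effect sizes above can be checked directly from the reported means and standard deviations. A minimal sketch, assuming the equal-n pooled-SD form of Cohen's d (which reproduces the reported d = 0.44 for men's distress ratings):

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d with the simple pooled SD (equal group sizes assumed)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / pooled_sd

# Men's distress ratings: sexual vs. emotional infidelity (values from the erratum)
d = cohens_d(4.69, 0.74, 4.32, 0.92)
print(round(d, 2))  # → 0.44, matching the corrected value
```

The same formula also reproduces the women's d = 0.08 from (M = 4.80, SD = 0.48) vs. (M = 4.76, SD = 0.57).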

  6. Bias-Correction in Vector Autoregressive Models: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tom Engsted

    2014-03-01

    We analyze the properties of various methods for bias-correcting parameter estimates in both stationary and non-stationary vector autoregressive models. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that when the model is stationary this simple bias formula compares very favorably to bootstrap bias-correction, both in terms of bias and mean squared error. In non-stationary models, the analytical bias formula performs noticeably worse than bootstrapping. Both methods yield a notable improvement over ordinary least squares. We pay special attention to the risk of pushing an otherwise stationary model into the non-stationary region of the parameter space when correcting for bias. Finally, we consider a recently proposed reduced-bias weighted least squares estimator, and we find that it compares very favorably in non-stationary models.
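The bootstrap bias-correction being compared can be sketched in the univariate AR(1) special case of a VAR. This is an illustrative sketch only; the estimator names and simulation setup are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_ar1(y):
    """OLS slope of y_t on y_{t-1} (demeaned, no intercept)."""
    y = y - y.mean()
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

def simulate_ar1(rho, T, rng):
    """Generate T observations of y_t = rho*y_{t-1} + e_t, e_t ~ N(0,1)."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

def bootstrap_corrected(y, B=500):
    """Bootstrap bias-correction: rho_bc = 2*rho_hat - mean(bootstrap rho).
    An analytical alternative is Kendall's approximation,
    bias ≈ -(1 + 3*rho)/T, added back to rho_hat."""
    rho_hat = ols_ar1(y)
    boot = [ols_ar1(simulate_ar1(rho_hat, len(y), rng)) for _ in range(B)]
    return 2 * rho_hat - np.mean(boot), rho_hat

y = simulate_ar1(0.9, 100, rng)
rho_bc, rho_hat = bootstrap_corrected(y)
# OLS is biased toward zero in small samples; the corrected estimate moves back up.
```

A practical caveat mirrored in the abstract: for a highly persistent series the corrected estimate can exceed 1, pushing a stationary model into the non-stationary region, so implementations typically truncate at the stationarity boundary.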

  7. A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration

    Science.gov (United States)

    Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer

    2018-05-01

    Using continuum-extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space-time volume, so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through ḡ²(L_0) = 2.012, we quote L_0 Λ^(N_f=3)_MS-bar = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to the infinite renormalization scale from different scales 2^n/L_0 for n = 0, 1, ..., 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al., Phys. Rev. Lett. 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ^(N_f=3)_MS-bar, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = ḡ²/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.
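The perturbative side of such a scale connection can be sketched by integrating the two-loop beta function across the factor-of-2 steps. A toy illustration using the standard universal coefficients for N_f = 3 (note this is not the SF-scheme running actually computed non-perturbatively in the paper):

```python
import math

# Universal two-loop beta-function coefficients for Nf = 3
NF = 3
B0 = (11 - 2 * NF / 3) / (4 * math.pi) ** 2
B1 = (102 - 38 * NF / 3) / (4 * math.pi) ** 4

def beta(g2):
    """d g^2 / d ln(mu) at two loops: -2*b0*g^4 - 2*b1*g^6 (in g^2 variable)."""
    return -2 * B0 * g2 ** 2 - 2 * B1 * g2 ** 3

def run(g2, n_doublings, steps=1000):
    """Run the coupling from scale mu to 2^n * mu with RK4 steps in ln(mu)."""
    h = n_doublings * math.log(2) / steps
    for _ in range(steps):
        k1 = beta(g2)
        k2 = beta(g2 + h * k1 / 2)
        k3 = beta(g2 + h * k2 / 2)
        k4 = beta(g2 + h * k3)
        g2 += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return g2

g2_ref = 2.012            # the abstract's reference value g^2(L_0) at ~4 GeV
g2_high = run(g2_ref, 5)  # five doublings: ~4 GeV -> ~128 GeV
# alpha_s = g^2/(4*pi) decreases toward roughly 0.1 at the high scale
```

The asymptotic-freedom trend (coupling shrinking as the scale grows by 2^5 = 32) matches the range α_s ≈ 0.16 down to ≈ 0.1 quoted in the abstract, even though the scheme differs.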

  8. Multisite bias correction of precipitation data from regional climate models

    Czech Academy of Sciences Publication Activity Database

    Hnilica, Jan; Hanel, M.; Puš, V.

    2017-01-01

    Vol. 37, No. 6 (2017), pp. 2934-2946. ISSN 0899-8418. R&D Projects: GA ČR GA16-05665S. Grant - others: Grantová agentura ČR - GA ČR (CZ) 16-16549S. Institutional support: RVO:67985874. Keywords: bias correction; regional climate model; correlation; covariance; multivariate data; multisite correction; principal components; precipitation. Subject RIV: DA - Hydrology; Limnology. OECD field: Climatic research. Impact factor: 3.760, year: 2016

  9. Diffusion coefficient adaptive correction in Lagrangian puff model

    International Nuclear Information System (INIS)

    Tan Wenji; Wang Dezhong; Ma Yuanwei; Ji Zhilong

    2014-01-01

    The Lagrangian puff model is widely used in decision support systems for nuclear emergency management, and the diffusion coefficient is one of its key parameters. An adaptive method is proposed in this paper to correct the diffusion coefficient in the Lagrangian puff model, with the aim of improving the accuracy of the calculated nuclide concentration distribution. The method uses detected concentration data, meteorological data and source release data to estimate the actual diffusion coefficient with a least squares method. The adaptive correction method was evaluated with the Kincaid data set of the MVK and compared with the traditional Pasquill-Gifford (P-G) diffusion scheme. The results indicate that this diffusion coefficient adaptive correction method can improve the accuracy of the Lagrangian puff model. (authors)
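The least-squares estimation step can be sketched for a single isotropic Gaussian puff. The puff formula is the standard one, but the detector layout, noise level, and parameter values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def puff_conc(r, sigma, Q=1.0):
    """Isotropic Gaussian puff: concentration at distance r (m) from the
    puff centre, for release Q and diffusion parameter sigma (m)."""
    return Q / ((2 * np.pi) ** 1.5 * sigma ** 3) * np.exp(-r ** 2 / (2 * sigma ** 2))

# Synthetic "detected" concentrations from a true sigma of 120 m, with 5% noise
rng = np.random.default_rng(1)
r_obs = np.linspace(50, 500, 12)
c_obs = puff_conc(r_obs, 120.0) * (1 + 0.05 * rng.standard_normal(12))

# Least-squares estimate of the diffusion parameter from the detections
sigma_fit, _ = curve_fit(lambda r, s: puff_conc(r, s), r_obs, c_obs, p0=[100.0])
# sigma_fit[0] recovers roughly the true 120 m
```

In the adaptive scheme described, such a fit would replace the fixed P-G stability-class value with the coefficient that best reproduces the actual detector readings.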

  10. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The

  11. Loop Corrections to Standard Model fields in inflation

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics,60 Garden Street, Cambridge, MA 02138 (United States); Department of Physics, The University of Texas at Dallas,800 W Campbell Rd, Richardson, TX 75080 (United States); Wang, Yi [Department of Physics, The Hong Kong University of Science and Technology,Clear Water Bay, Kowloon, Hong Kong (China); Xianyu, Zhong-Zhi [Center of Mathematical Sciences and Applications, Harvard University,20 Garden Street, Cambridge, MA 02138 (United States)

    2016-08-08

    We calculate 1-loop corrections to the Schwinger-Keldysh propagators of Standard-Model-like fields of spin-0, 1/2, and 1, with all renormalizable interactions during inflation. We pay special attention to the late-time divergences of loop corrections, and show that the divergences can be resummed into finite results in the late-time limit using the dynamical renormalization group method. This is our first step toward studying both the Standard Model and new physics in the primordial universe.

  12. Corrected Four-Sphere Head Model for EEG Signals.

    Science.gov (United States)

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K

    2017-01-01

    The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals and for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model, which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
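The building block of any such forward model is the potential of a current dipole in a volume conductor. A minimal sketch of the unbounded homogeneous-medium limit (this is only the simplest ingredient, not the corrected four-sphere formulas derived in the paper; the conductivity and geometry values are illustrative):

```python
import numpy as np

def dipole_potential(r, r_dip, p, sigma=0.3):
    """Potential (V) of a current dipole p (A*m) located at r_dip, observed
    at r, in an infinite homogeneous conductor of conductivity sigma (S/m):
        V = p . (r - r_dip) / (4*pi*sigma*|r - r_dip|^3)
    """
    d = np.asarray(r, dtype=float) - np.asarray(r_dip, dtype=float)
    return p @ d / (4 * np.pi * sigma * np.linalg.norm(d) ** 3)

# Radial 1 nA*m dipole at 8 cm from the head centre, "electrode" at 9 cm
v = dipole_potential([0, 0, 0.09], [0, 0, 0.08], np.array([0, 0, 1e-9]))
# v is a few microvolts, a plausible EEG-scale magnitude
```

The four-sphere model corrects this expression for the conductivity jumps at the brain/CSF/skull/scalp boundaries, which is where the cumbersome series expansions (and the literature errors the paper fixes) enter.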

  13. Corrected Four-Sphere Head Model for EEG Signals

    Directory of Open Access Journals (Sweden)

    Solveig Næss

    2017-10-01

    The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals and for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model, which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.

  14. A strategy for implementing non-perturbative renormalisation of heavy-light four-quark operators in the static approximation

    Energy Technology Data Exchange (ETDEWEB)

    Palombi, F. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Gruppe Theorie]; Papinutto, M. [Istituto Nazionale di Fisica Nucleare, Rome (Italy)]; Pena, C. [European Organization for Nuclear Research, Geneva (Switzerland). Theoretical Physics Div.]; Wittig, H. [Mainz Univ. (Germany). Inst. fuer Kernphysik]

    2006-04-15

    We discuss the renormalisation properties of the complete set of ΔB=2 four-quark operators with the heavy quark treated in the static approximation. We elucidate the role of heavy quark symmetry and other symmetry transformations in constraining their mixing under renormalisation. By employing the Schrödinger functional, a set of non-perturbative renormalisation conditions can be defined in terms of suitable correlation functions. As a first step in a fully non-perturbative determination of the scale-dependent renormalisation factors, we evaluate these conditions in lattice perturbation theory at one loop. Thereby we verify the expected mixing patterns and determine the anomalous dimensions of the operators at NLO in the Schrödinger functional scheme. Finally, by employing twisted-mass QCD it is shown how finite subtractions arising from explicit chiral symmetry breaking can be avoided completely. (orig.)

  15. A strategy for implementing non-perturbative renormalisation of heavy-light four-quark operators in the static approximation

    International Nuclear Information System (INIS)

    Palombi, F.; Pena, C.; Wittig, H.

    2006-04-01

    We discuss the renormalisation properties of the complete set of ΔB=2 four-quark operators with the heavy quark treated in the static approximation. We elucidate the role of heavy quark symmetry and other symmetry transformations in constraining their mixing under renormalisation. By employing the Schrödinger functional, a set of non-perturbative renormalisation conditions can be defined in terms of suitable correlation functions. As a first step in a fully non-perturbative determination of the scale-dependent renormalisation factors, we evaluate these conditions in lattice perturbation theory at one loop. Thereby we verify the expected mixing patterns and determine the anomalous dimensions of the operators at NLO in the Schrödinger functional scheme. Finally, by employing twisted-mass QCD it is shown how finite subtractions arising from explicit chiral symmetry breaking can be avoided completely. (orig.)

  16. Non-perturbative renormalization of the static vector current and its O(a)-improvement in quenched QCD

    Energy Technology Data Exchange (ETDEWEB)

    Palombi, F.

    2007-06-15

    We carry out the renormalization and the Symanzik O(a)-improvement programme for the static vector current in quenched lattice QCD. The scale-independent ratio of the renormalization constants of the static vector and axial currents is obtained non-perturbatively from an axial Ward identity with Wilson-type light quarks and various lattice discretizations of the static action. The improvement coefficients c_V^stat and b_V^stat are obtained up to O(g_0^4) terms by enforcing improvement conditions respectively on the axial Ward identity and a three-point correlator of the static vector current. A comparison between the non-perturbative estimates and the corresponding one-loop results shows a non-negligible effect of the O(g_0^4) terms on the improvement coefficients but a good accuracy of the perturbative description of the ratio of the renormalization constants. (orig.)

  17. Effect of Hydrotherapy on Static and Dynamic Balance in Older Adults: Comparison of Perturbed and Non-Perturbed Programs

    Directory of Open Access Journals (Sweden)

    Elham Azimzadeh

    2013-01-01

    Objectives: Falling is a major cause of mortality in the elderly. Balance training exercises can help to prevent falls in older adults. According to the principle of training specificity, perturbation-based training is more similar to real-world conditions, so such programs can improve balance in the elderly. Furthermore, exercising in an aquatic environment reduces the limitations on balance training compared with a non-aquatic one. The aim of this study is to compare the effectiveness of perturbed and non-perturbed balance training programs in water on static and dynamic balance in this population. Methods & Materials: 37 older women (aged 65-80) were randomized to the following groups: perturbation-based training (n=12), non-perturbation-based training (n=12) and control (n=13). Static and dynamic balance were tested before and after the eight weeks of training with the postural stability test of the Biodex balance system, using the dynamic (level 4) and static platforms. The data were analyzed by one-sample paired t-test, independent t-test and ANOVA. Results: There was a significant improvement in all indexes of static and dynamic balance in the perturbation-based training group (P<0.05). In the non-perturbed group, all indexes improved except ML (P<0.05). ANOVA showed that perturbed training was more effective than non-perturbed training on both static and dynamic balance. Conclusion: The findings confirm the specificity principle of training. Although balance training in general can improve balance abilities, non-specific training does less to target balance-related neuromuscular activities. Perturbation-based training can activate postural compensatory responses and reduce falling risk. According to the results, we conclude that hydrotherapy, especially with perturbation-based programs, is useful for rehabilitation interventions in the elderly.

  18. Scale-invariant scalar metric fluctuations during inflation: non-perturbative formalism from a 5D vacuum

    International Nuclear Information System (INIS)

    Anabitarte, M.; Bellini, M.; Madriz Aguilar, Jose Edgar

    2010-01-01

    We extend to 5D an approach of a 4D non-perturbative formalism to study scalar metric fluctuations of a 5D Riemann-flat de Sitter background metric. In contrast with the results obtained in 4D, the spectrum of cosmological scalar metric fluctuations during inflation can be scale invariant and the background inflaton field can take sub-Planckian values. (orig.)

  19. Non-perturbative treatment of excitation and ionization in U^92+ + U^91+ collisions at 1 GeV/amu

    International Nuclear Information System (INIS)

    Becker, U.; Gruen, N.; Scheid, W.; Soff, G.

    1986-01-01

    Inner-shell excitation and ionization processes in relativistic collisions of very heavy ions are treated by a non-perturbative method for the first time. The time-dependent Dirac equation is solved by a finite difference method for the scattering of U^92+ on U^91+ at E_lab = 1 GeV/amu and zero impact parameter. The K-shell ionization probabilities are compared with those resulting from first-order perturbation theory. (orig.)

  20. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  1. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  2. Non-perturbative aspects of quantum field theory. From the quark-gluon plasma to quantum gravity

    International Nuclear Information System (INIS)

    Christiansen, Nicolai

    2015-01-01

    In this dissertation we investigate several aspects of non-perturbative quantum field theory. Two main parts of the thesis are concerned with non-perturbative renormalization of quantum gravity within the asymptotic safety scenario. This framework is based on a non-Gaussian ultraviolet fixed point and provides a well-defined theory of quantized gravity. We employ functional renormalization group (FRG) techniques that allow for the study of quantum fields even in strongly coupled regimes. We construct a setup for the computation of graviton correlation functions and analyze the ultraviolet completion of quantum gravity in terms of the properties of the two- and three-point functions of the graviton. Moreover, the coupling of gravity to Yang-Mills theories is discussed. In particular, we study the effects of graviton-induced interactions on asymptotic freedom on the one hand, and the role of gluonic fluctuations in the gravity sector on the other hand. The last subject of this thesis is the physics of the quark-gluon plasma. We set up a general non-perturbative strategy for the computation of transport coefficients in non-Abelian gauge theories. We determine the viscosity over entropy ratio η/s in SU(3) Yang-Mills theory as a function of temperature and estimate its behavior in full quantum chromodynamics (QCD).

  3. A theoretical Markov chain model for evaluating correctional ...

    African Journals Online (AJOL)

    In this paper a stochastic method is applied in the study of the long time effect of confinement in a correctional institution on the behaviour of a person with criminal tendencies. The approach used is Markov chain, which uses past history to predict the state of a system in the future. A model is developed for comparing the ...
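The Markov chain machinery described (using past history to predict the future state of a system) can be sketched as follows. The three states and transition probabilities below are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical 3-state model of a person's status after confinement:
# states = (reformed, at-risk, re-offending); rows sum to 1.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.30, 0.50, 0.20],
    [0.10, 0.30, 0.60],
])

def long_run(P, p0, n=200):
    """Propagate the state distribution: p_{t+1} = p_t P, for n steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n):
        p = p @ P
    return p

p_inf = long_run(P, [0.0, 0.0, 1.0])  # everyone starts in the re-offending state
# p_inf approximates the stationary distribution pi, satisfying pi = pi P,
# i.e. the long-time effect of the institution regardless of the initial state
```

The long-run distribution is what such a model would use to compare correctional regimes: a regime is "effective" if its transition matrix concentrates the stationary distribution on the reformed state.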

  4. Next-to-leading order corrections to the valon model

    Indian Academy of Sciences (India)

    Next-to-leading order corrections to the valon model. G R BOROUN. ∗ and E ESFANDYARI. Physics Department, Razi University, Kermanshah 67149, Iran. ∗. Corresponding author. E-mail: grboroun@gmail.com; boroun@razi.ac.ir. MS received 17 January 2014; revised 31 October 2014; accepted 21 November 2014.

  5. Standard Model-like corrections to Dilatonic Dynamics

    DEFF Research Database (Denmark)

    Antipin, Oleg; Krog, Jens; Mølgaard, Esben

    2013-01-01

    the same non-abelian global symmetries as a technicolor-like theory with matter in a complex representation of the gauge group. We then embed the electroweak gauge group within the global flavor structure and add also ordinary quark-like states to mimic the effects of the top. We find that the standard...... model-like induced corrections modify the original phase diagram and the details of the dilatonic spectrum. In particular, we show that the corrected theory exhibits near-conformal behavior for a smaller range of flavors and colors. For this range of values, however, our results suggest that near...

  6. Threshold corrections and gauge symmetry in twisted superstring models

    International Nuclear Information System (INIS)

    Pierce, D.M.

    1994-01-01

    Threshold corrections to the running of gauge couplings are calculated for superstring models with free complex world-sheet fermions. For two N=1 SU(2)×U(1)^5 models, the threshold corrections lead to a small increase in the unification scale. Examples are given to illustrate how a given particle spectrum can be described by models with different boundary conditions on the internal fermions. We also discuss how complex twisted fermions can enhance the symmetry group of an N=4, SU(3)×U(1)×U(1) model to the gauge group SU(3)×SU(2)×U(1). It is then shown how a mixing angle analogous to the Weinberg angle depends on the boundary conditions of the internal fermions.

  7. Tracer kinetic modelling of receptor data with mathematical metabolite correction

    International Nuclear Information System (INIS)

    Burger, C.; Buck, A.

    1996-01-01

    Quantitation of metabolic processes with dynamic positron emission tomography (PET) and tracer kinetic modelling relies on the time course of authentic ligand in plasma, i.e. the input curve. The determination of the latter often requires the measurement of labelled metabolites, a laborious procedure. In this study we examined the possibility of mathematical metabolite correction, which might obviate the need for actual metabolite measurements. Mathematical metabolite correction was implemented by estimating the input curve together with kinetic tissue parameters. The general feasibility of the approach was evaluated in a Monte Carlo simulation using a two-tissue compartment model. The method was then applied to a series of five human carbon-11 iomazenil PET studies. The measured cerebral tissue time-activity curves were fitted with a single-tissue compartment model. For mathematical metabolite correction the input curve following the peak was approximated by a sum of three decaying exponentials, the amplitudes and characteristic half-times of which were then estimated by the fitting routine. In the simulation study the parameters used to generate synthetic tissue time-activity curves (K_1-k_4) were refitted with reasonable identifiability when using mathematical metabolite correction. Absolute quantitation of distribution volumes was found to be possible provided that the metabolite and the kinetic models are adequate. If the kinetic model is oversimplified, the linearity of the correlation between true and estimated distribution volumes is still maintained, although the linear regression becomes dependent on the input curve. These simulation results were confirmed when applying mathematical metabolite correction to the 11C iomazenil studies. Estimates of the distribution volume calculated with a measured input curve were linearly related to the estimates calculated using mathematical metabolite correction, with correlation coefficients >0.990. (orig./MG)
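
    The post-peak input-curve parameterization described above (a sum of three decaying exponentials) can be sketched as a nonlinear least-squares fit; the data, parameter values and starting guesses below are synthetic, not from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def input_curve(t, a1, l1, a2, l2, a3, l3):
    """Post-peak input curve modelled as a sum of three decaying
    exponentials, as described in the abstract."""
    return a1*np.exp(-l1*t) + a2*np.exp(-l2*t) + a3*np.exp(-l3*t)

# Synthetic "measured" plasma data (illustrative values only)
t = np.linspace(0.0, 60.0, 121)              # minutes after the peak
true_params = (5.0, 0.5, 2.0, 0.05, 1.0, 0.005)
rng = np.random.default_rng(0)
y = input_curve(t, *true_params) + 0.01*rng.normal(size=t.size)

# Estimate amplitudes and decay rates; positivity bounds keep the
# exponentials decaying during the search.
popt, _ = curve_fit(input_curve, t, y,
                    p0=(4, 1, 1, 0.1, 1, 0.01),
                    bounds=(0, np.inf))
```

    In the paper's approach these six parameters are estimated jointly with the kinetic tissue parameters rather than fitted to measured plasma data; the sketch only illustrates the functional form.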

  8. Bias-correction in vector autoregressive models: A simulation study

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard

    We analyze and compare the properties of various methods for bias-correcting parameter estimates in vector autoregressions. First, we show that two analytical bias formulas from the existing literature are in fact identical. Next, based on a detailed simulation study, we show that this simple...... and easy-to-use analytical bias formula compares very favorably to the more standard but also more computer intensive bootstrap bias-correction method, both in terms of bias and mean squared error. Both methods yield a notable improvement over both OLS and a recently proposed WLS estimator. We also...... of pushing an otherwise stationary model into the non-stationary region of the parameter space during the process of correcting for bias....
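
    Bootstrap bias-correction of an autoregressive estimator can be sketched for a univariate AR(1) (the vector case is analogous); the model and numbers below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_ar1(rho, T, rng):
    """Simulate a zero-mean AR(1) process y_t = rho*y_{t-1} + e_t."""
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho*y[t-1] + rng.normal()
    return y

def ols_ar1(y):
    """OLS estimate of the AR(1) coefficient (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

y = simulate_ar1(rho=0.9, T=200, rng=rng)
rho_hat = ols_ar1(y)                  # biased downward in small samples

# Bootstrap bias correction: re-simulate from the fitted model to
# estimate the estimator's bias, then subtract it. (An analytical
# small-sample bias formula could be used instead, as the abstract
# discusses for the VAR case.)
boot = [ols_ar1(simulate_ar1(rho_hat, len(y), rng)) for _ in range(500)]
bias = np.mean(boot) - rho_hat        # negative for positive rho
rho_bc = rho_hat - bias               # bias-corrected estimate
```

    As the abstract warns for the VAR case, such a correction can push a near-unit-root estimate into the non-stationary region, so implementations typically truncate the corrected estimate below one.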

  9. An alternative ionospheric correction model for global navigation satellite systems

    Science.gov (United States)

    Hoque, M. M.; Jakowski, N.

    2015-04-01

    The ionosphere is recognized as a major error source for single-frequency operations of global navigation satellite systems (GNSS). To enhance single-frequency operations the global positioning system (GPS) uses an ionospheric correction algorithm (ICA) driven by 8 coefficients broadcast in the navigation message every 24 h. Similarly, the global navigation satellite system Galileo uses the electron density NeQuick model for ionospheric correction. The Galileo satellite vehicles (SVs) transmit 3 ionospheric correction coefficients as driver parameters of the NeQuick model. In the present work, we propose an alternative ionospheric correction algorithm called Neustrelitz TEC broadcast model NTCM-BC that is also applicable for global satellite navigation systems. Like the GPS ICA or Galileo NeQuick, the NTCM-BC can be optimized on a daily basis by utilizing GNSS data obtained at the previous day at monitor stations. To drive the NTCM-BC, 9 ionospheric correction coefficients need to be uploaded to the SVs for broadcasting in the navigation message. Our investigation using GPS data of about 200 worldwide ground stations shows that the 24-h-ahead prediction performance of the NTCM-BC is better than the GPS ICA and comparable to the Galileo NeQuick model. We have found that the 95 percentiles of the prediction error are about 16.1, 16.1 and 13.4 TECU for the GPS ICA, Galileo NeQuick and NTCM-BC, respectively, during a selected quiet ionospheric period, whereas the corresponding numbers are found about 40.5, 28.2 and 26.5 TECU during a selected geomagnetic perturbed period. However, in terms of complexity the NTCM-BC is easier to handle than the Galileo NeQuick and in this respect comparable to the GPS ICA.

  10. Topological quantum error correction in the Kitaev honeycomb model

    Science.gov (United States)

    Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.

    2017-08-01

    The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.

  11. Parton distribution functions with QED corrections in the valon model

    Science.gov (United States)

    Mottaghizadeh, Marzieh; Taghavi Shahri, Fatemeh; Eslami, Parvin

    2017-10-01

    The parton distribution functions (PDFs) with QED corrections are obtained by solving the QCD⊗QED DGLAP evolution equations in the framework of the "valon" model at the next-to-leading-order QCD and the leading-order QED approximations. Our results for the PDFs with QED corrections in this phenomenological model are in good agreement with the newly released CT14QED global fit code [Phys. Rev. D 93, 114015 (2016), 10.1103/PhysRevD.93.114015] and the APFEL (NNPDF2.3QED) program [Comput. Phys. Commun. 185, 1647 (2014), 10.1016/j.cpc.2014.03.007] in the wide range x = [10^-5, 1] and Q^2 = [0.283, 10^8] GeV^2. The model calculations agree rather well with those codes. In the latter, we propose a new method for studying the symmetry breaking of the sea quark distribution functions inside the proton.

  12. Logarithmic corrections to scaling in the XY2-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

    We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))

  13. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording of just 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency of systematic deformation in Pléiades tri-stereo combinations with small base length. The small base length magnifies small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may prevent optimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  14. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id = redeem". 
We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors.
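
    The k-mer frequency-thresholding step described above (the simple validation baseline, not the repeat-aware model itself) can be sketched as:

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count observed k-mer frequencies across all reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i+k]] += 1
    return counts

def flag_suspect(counts, threshold):
    """k-mers observed fewer than `threshold` times are flagged as
    likely sequencing errors (the simple validation baseline)."""
    return {km for km, c in counts.items() if c < threshold}

# Toy reads (illustrative): the middle read carries a single error
reads = ["ACGTACGT", "ACGTACGA", "ACGTACGT"]
counts = kmer_counts(reads, k=4)
suspect = flag_suspect(counts, threshold=2)   # {"ACGA"}
```

    The paper's contribution is precisely where this baseline fails: in repeat-rich genomes an erroneous k-mer can exceed the threshold, so the authors instead infer genomic k-mer frequencies from the misread relationships among observed k-mers.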

  15. M_b and f_B from non-perturbatively renormalized HQET with N_f = 2 light quarks

    Energy Technology Data Exchange (ETDEWEB)

    Blossier, Benoit [CNRS et Univ. Paris-Sud XI, Orsay (France). Lab. de Physique Theorique; Bulava, John [CERN, Geneva (Switzerland). Physics Dept.; Della Morte, Michele; Hippel, Georg von [Mainz Univ. (Germany). Inst. fuer Kernphysik; Donnellan, Michael; Simma, Hubert; Sommer, Rainer [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). NIC; Fritzsch, Patrick [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Garron, Nicolas [Edinburgh Univ. (United Kingdom). Tait Inst.; Heitger, Jochen [Muenster Univ. (Germany). Inst. fuer Theoretische Physik 1

    2011-12-15

    We present an updated analysis of the non-perturbatively renormalized b-quark mass and B meson decay constant based on CLS lattices with two dynamical non-perturbatively improved Wilson quarks. This update incorporates additional light quark masses and lattice spacings in large physical volume to improve the chiral extrapolations and to reach the continuum limit. We use Heavy Quark Effective Theory (HQET) including 1/m_b terms with non-perturbative coefficients based on the matching of QCD and HQET developed by the ALPHA collaboration during the past years. (orig.)

  16. Using modeling to develop and evaluate a corrective action system

    International Nuclear Information System (INIS)

    Rodgers, L.

    1995-01-01

    At a former trucking facility in EPA Region 4, a corrective action system was installed to remediate groundwater and soil contaminated with gasoline and fuel oil products released from several underground storage tanks (USTs). Groundwater modeling was used to develop the corrective action plan and later used with soil vapor modeling to evaluate the system's effectiveness. Groundwater modeling was used to determine the effects of a groundwater recovery system on the water table at the site. Information gathered during the assessment phase was used to develop a three-dimensional depiction of the subsurface at the site. Different groundwater recovery schemes were then modeled to determine the most effective method for recovering contaminated groundwater. Based on the modeling and calculations, a corrective action system combining soil vapor extraction (SVE) and groundwater recovery was designed. The system included seven recovery wells, to extract both soil vapor and groundwater, and a groundwater treatment system. Operation and maintenance of the system included monthly system sampling and inspections and quarterly groundwater sampling. After one year of operation the effectiveness of the system was evaluated. A subsurface soil gas model was used to evaluate the effects of the SVE system on the site contamination as well as its effects on the water table and groundwater recovery operations. Groundwater modeling was used in evaluating the effectiveness of the groundwater recovery system. Plume migration and capture were modeled to ensure that the groundwater recovery system at the site was effectively capturing the contaminant plume. The two models were then combined to determine the effects of the two systems, acting together, on the remediation process.

  17. Corrected Statistical Energy Analysis Model for Car Interior Noise

    Directory of Open Access Journals (Sweden)

    A. Putra

    2015-01-01

    Full Text Available Statistical energy analysis (SEA) is a well-known method to analyze the flow of acoustic and vibration energy in a complex structure. For an acoustic space where significant absorptive materials are present, the direct field component from the sound source dominates the total sound field rather than a reverberant field, where the latter is the basis for constructing the conventional SEA model. Such an environment can be found in a car interior, and thus a corrected SEA model is proposed here to handle this situation. The model is developed by eliminating the direct field component from the total sound field, so that only the power after the first reflection is considered. A test car cabin was divided into two subsystems and, using a loudspeaker as a sound source, the power injection method in SEA was employed to obtain the corrected coupling loss factor and the damping loss factor from the corrected SEA model. These parameters were then used to predict the sound pressure level in the interior cabin from the injected input power from the engine. The results show satisfactory agreement with the directly measured SPL.

  18. A model of diffraction scattering with unitary corrections

    International Nuclear Information System (INIS)

    Etim, E.; Malecki, A.; Satta, L.

    1989-01-01

    The inability of the multiple scattering model of Glauber and of similar geometrical-picture models to fit data at Collider energies, to fit low-energy data at large momentum transfers, and to explain the absence of multiple diffraction dips in the data is noted. It is argued and shown that a unitary correction to the multiple scattering amplitude gives rise to a better model and allows one to fit all available data on nucleon-nucleon and nucleus-nucleus collisions at all energies and all momentum transfers. There are no multiple diffraction dips.

  19. A correction on coastal heads for groundwater flow models.

    Science.gov (United States)

    Lu, Chunhui; Werner, Adrian D; Simmons, Craig T; Luo, Jian

    2015-01-01

    We introduce a simple correction to coastal heads for constant-density groundwater flow models that contain a coastal boundary, based on previous analytical solutions for interface flow. The results demonstrate that accurate discharge to the sea in confined aquifers can be obtained by direct application of Darcy's law (for constant-density flow) if the coastal heads are corrected to ((α + 1)/α)h_s − B/2α, in which h_s is the mean sea level above the aquifer base, B is the aquifer thickness, and α is the density factor. For unconfined aquifers, the coastal head should be assigned the value h_s√((1 + α)/α). The accuracy of using these corrections is demonstrated by consistency between the constant-density Darcy solution and variable-density flow numerical simulations. The errors introduced by adopting two previous approaches (i.e., no correction and using the equivalent fresh water head at the middle position of the aquifer to represent the hydraulic head at the coastal boundary) are evaluated. Sensitivity analysis shows that errors in discharge to the sea could be larger than 100% for typical coastal aquifer parameter ranges. The location of observation wells relative to the toe is a key factor controlling the estimation error, as it determines the relative aquifer length of constant-density flow relative to variable-density flow. The coastal head correction method introduced in this study facilitates the rapid and accurate estimation of the fresh water flux from a given hydraulic head measurement and allows for an improved representation of the coastal boundary condition in regional constant-density groundwater flow models. © 2014, National Ground Water Association.
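
    A minimal sketch of applying these head corrections, assuming a seawater density factor α = ρ_f/(ρ_s − ρ_f) ≈ 40 and reading the garbled unconfined expression in the abstract as h_s·√((α + 1)/α):

```python
import math

def coastal_head_confined(hs, B, alpha=40.0):
    # Confined-aquifer correction quoted in the abstract:
    # h_c = ((alpha + 1)/alpha) * hs - B/(2*alpha)
    return (alpha + 1.0) / alpha * hs - B / (2.0 * alpha)

def coastal_head_unconfined(hs, alpha=40.0):
    # Unconfined-aquifer correction, assuming the abstract's
    # expression is h_c = hs * sqrt((alpha + 1)/alpha)
    return hs * math.sqrt((alpha + 1.0) / alpha)

# Example: sea level 10 m above the aquifer base, 20 m thick aquifer.
# alpha ~ 40 for seawater is an assumed, typical value.
h_conf = coastal_head_confined(hs=10.0, B=20.0)   # 10.25 - 0.25 = 10.0 m
h_unconf = coastal_head_unconfined(hs=10.0)       # ~10.12 m
```

    The corrected heads would then be assigned to the coastal boundary cells of a constant-density model in place of the raw sea-level value.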

  20. Recoil corrected bag model calculations for semileptonic weak decays

    International Nuclear Information System (INIS)

    Lie-Svendsen, Oe.; Hoegaasen, H.

    1987-02-01

    Recoil corrections to various model results for strangeness-changing weak decay amplitudes have been developed. It is shown that the spurious reference-frame dependence of earlier calculations is reduced. The second-class currents are generally less important than obtained by calculations in the static approximation. Theoretical results are compared to observations. The agreement is quite good, although the values for the Cabibbo angle obtained by fits to the decay rates are somewhat too large.

  1. Radiative corrections to the Higgs couplings in the triplet model

    International Nuclear Information System (INIS)

    KIKUCHI, M.

    2014-01-01

    The features of extended Higgs models can appear in the pattern of deviations from the Standard Model (SM) predictions in the coupling constants of the SM-like Higgs boson (h). We can thus discriminate between extended Higgs models by precisely measuring the pattern of deviations in the coupling constants of h, even when extra bosons are not found directly. In order to compare the theoretical predictions to the future precision data at the ILC, we must evaluate the theoretical predictions with radiative corrections in various extended Higgs models. In this paper, we give a comprehensive study of radiative corrections to various couplings of h in the minimal Higgs triplet model (HTM). First, we define renormalization conditions in the model, and we calculate the Higgs couplings hγγ, hWW, hZZ and hhh at the one-loop level. We then evaluate deviations in the coupling constants of the SM-like Higgs boson from the predictions in the SM. We find that one-loop contributions to these couplings are substantial compared to their expected measurement accuracies at the ILC. Therefore the HTM has the possibility to be distinguished from other models by comparing the pattern of deviations in the Higgs boson couplings.

  2. Non-perturbative improvement of the axial current with three dynamical flavors and the Iwasaki gauge action

    Energy Technology Data Exchange (ETDEWEB)

    Kaneko, T.; Hashimoto, S. [High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki (Japan)]|[Graduate Univ. for Advanced Studies, Tsukuba, Ibaraki (Japan); Aoki, S. [Tsukuba Univ., Ibaraki (Japan). Graduate School of Pure and Applied Sciences]|[Brookhaven National Laboratory, Upton, NY (United States). Riken BNL Research Center; Della Morte, M. [CERN, Physics Dept., Geneva (Switzerland); Hoffmann, R. [Colorado Univ., Boulder, CO (United States). Dept. of Physics; Sommer, R. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2007-03-15

    We perform a non-perturbative determination of the improvement coefficient c_A to remove O(a) discretization errors in the axial vector current in three-flavor lattice QCD with the Iwasaki gauge action and the standard O(a)-improved Wilson quark action. An improvement condition with a good sensitivity to c_A is imposed at constant physics. Combining our results with the perturbative expansion, c_A is now known rather precisely for a^{-1} ≳ 1.6 GeV. (orig.)

  3. Non-perturbative improvement of the axial current with three dynamical flavors and the Iwasaki gauge action

    International Nuclear Information System (INIS)

    Kaneko, T.; Hashimoto, S.; Aoki, S.; Hoffmann, R.

    2007-03-01

    We perform a non-perturbative determination of the improvement coefficient c_A to remove O(a) discretization errors in the axial vector current in three-flavor lattice QCD with the Iwasaki gauge action and the standard O(a)-improved Wilson quark action. An improvement condition with a good sensitivity to c_A is imposed at constant physics. Combining our results with the perturbative expansion, c_A is now known rather precisely for a^{-1} ≳ 1.6 GeV. (orig.)

  4. Kaon semileptonic decay form factors from Nf = 2 non-perturbatively O(a)-improved Wilson fermions

    International Nuclear Information System (INIS)

    Broemmel, D.; Nakamura, Y.; Pleiter, D.

    2007-10-01

    We present first results from the QCDSF collaboration for the kaon semileptonic decay form factors at zero momentum transfer, using two flavours of non-perturbatively O(a)-improved Wilson quarks. A lattice determination of these form factors is of particular interest for improving the accuracy of the CKM matrix element |V_us|. Calculations are performed on lattices with a lattice spacing of about 0.08 fm with different values of the light and strange quark masses, which allows us to extrapolate to the chiral limit. Employing double ratio techniques, we are able to obtain small statistical errors. (orig.)

  5. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  6. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  7. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under...... the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires the development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full...... asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates the implementation of tests, motivating the introduction of bootstrap...

  8. Emergence of spacetime dynamics in entropy corrected and braneworld models

    International Nuclear Information System (INIS)

    Sheykhi, A.; Dehghani, M.H.; Hosseini, S.E.

    2013-01-01

    A very interesting new proposal on the origin of the cosmic expansion was recently suggested by Padmanabhan [arXiv:1206.4916]. He argued that the difference between the surface degrees of freedom and the bulk degrees of freedom in a region of space drives the accelerated expansion of the universe, as well as the standard Friedmann equation, through the relation ΔV = Δt(N_sur − N_bulk). In this paper, we first present the general expression for the number of degrees of freedom on the holographic surface, N_sur, using the general entropy-corrected formula S = A/(4L_p^2) + s(A). Then, as two examples, by applying Padmanabhan's idea we extract the corresponding Friedmann equations in the presence of power-law and logarithmic correction terms in the entropy. We also extend the study to the RS II and DGP braneworld models and successfully derive the correct form of the Friedmann equations in these theories. Our study further supports the viability of Padmanabhan's proposal.

  9. ITER Side Correction Coil Quench model and analysis

    Science.gov (United States)

    Nicollet, S.; Bessette, D.; Ciazynski, D.; Duchateau, J. L.; Gauthier, F.; Lacroix, B.

    2016-12-01

    Previous thermohydraulic studies performed for the ITER TF, CS and PF magnet systems have brought some important information on the detection and consequences of a quench as a function of the initial conditions (deposited energy, heated length). Even if the temperature margin of the Correction Coils is high, their behavior during a quench should also be studied since a quench is likely to be triggered by potential anomalies in joints, ground fault on the instrumentation wires, etc. A model has been developed with the SuperMagnet Code (Bagnasco et al., 2010) for a Side Correction Coil (SCC2) with four pancakes cooled in parallel, each of them represented by a Thea module (with the proper Cable In Conduit Conductor characteristics). All the other coils of the PF cooling loop are hydraulically connected in parallel (top/bottom correction coils and six Poloidal Field Coils) are modeled by Flower modules with equivalent hydraulics properties. The model and the analysis results are presented for five quench initiation cases with/without fast discharge: two quenches initiated by a heat input to the innermost turn of one pancake (case 1 and case 2) and two other quenches initiated at the innermost turns of four pancakes (case 3 and case 4). In the 5th case, the quench is initiated at the middle turn of one pancake. The impact on the cooling circuit, e.g. the exceedance of the opening pressure of the quench relief valves, is detailed in case of an undetected quench (i.e. no discharge of the magnet). Particular attention is also paid to a possible secondary quench detection system based on measured thermohydraulic signals (pressure, temperature and/or helium mass flow rate). The maximum cable temperature achieved in case of a fast current discharge (primary detection by voltage) is compared to the design hot spot criterion of 150 K, which includes the contribution of helium and jacket.

  10. PENDEKATAN ERROR CORRECTION MODEL SEBAGAI PENENTU HARGA SAHAM

    Directory of Open Access Journals (Sweden)

    David Kaluge

    2017-03-01

    Full Text Available This research was to find the effect of profitability, the rate of interest, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI rate (1 month) was used to represent the interest rate. This research found that all variables simultaneously affected stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices both in the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
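
    An error correction model of this kind can be sketched via the two-step Engle-Granger procedure on synthetic data (the variables and numbers below are illustrative, not the paper's Indonesian series):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative cointegrated pair: a "stock price" p tied to a
# random-walk "fundamental" f (both series are synthetic).
T = 500
f = np.cumsum(rng.normal(size=T))
p = 2.0 + 1.0*f + rng.normal(scale=0.5, size=T)

# Step 1: long-run (cointegrating) regression p_t = a + b*f_t + u_t
X = np.column_stack([np.ones(T), f])
a, b = np.linalg.lstsq(X, p, rcond=None)[0]
u = p - (a + b*f)                     # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term,
# dp_t = c + gamma*u_{t-1} + d*df_t + e_t;  gamma < 0 pulls the
# price back towards its long-run relation with the fundamental.
dp, df, u_lag = np.diff(p), np.diff(f), u[:-1]
Z = np.column_stack([np.ones(T-1), u_lag, df])
c, gamma, d = np.linalg.lstsq(Z, dp, rcond=None)[0]
```

    The split between Step 1 and Step 2 is what lets such a model distinguish long-run effects (the cointegrating coefficients) from short-run effects (the difference terms), as the abstract does for EPS, PER and the exchange rate versus ROI.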

  11. Hierarchy generation in compactified supersymmetric models

    International Nuclear Information System (INIS)

    Ross, G.G.

    1988-01-01

    The problem of generating a large hierarchy in compactified supersymmetric models is re-examined. It is shown how, even for the class of models for which Str M² is non-vanishing, a combination of non-perturbative effects and radiative corrections may lead to an exponentially large hierarchy. A corollary is that the couplings of the effective field theory in the visible sector should be small, i.e., perturbation theory should be applicable. (orig.)

  12. Simulation of QCD with N_f=2+1 flavors of non-perturbatively improved Wilson fermions

    International Nuclear Information System (INIS)

    Bruno, Mattia; Djukanovic, Dalibor; Engel, Georg P.; Francis, Anthony; Herdoiza, Gregorio; Horch, Hanno; Korcyl, Piotr; Korzec, Tomasz; Papinutto, Mauro; Schaefer, Stefan; Scholz, Enno E.; Simeth, Jakob; Simma, Hubert; Söldner, Wolfgang

    2015-01-01

    We describe a new set of gauge configurations generated within the CLS effort. These ensembles have N_f=2+1 flavors of non-perturbatively improved Wilson fermions in the sea with the Lüscher-Weisz action used for the gluons. Open boundary conditions in time are used to address the problem of topological freezing at small lattice spacings and twisted-mass reweighting for improved stability of the simulations. We give the bare parameters at which the ensembles have been generated and how these parameters have been chosen. Details of the algorithmic setup and its performance are presented as well as measurements of the pion and kaon masses alongside the scale parameter t_0.

  13. Non-perturbative renormalisation of ΔF=2 four-fermion operators in two-flavour QCD

    Energy Technology Data Exchange (ETDEWEB)

    Dimopoulos, P.; Vladikas, A. [INFN, Sezione di Roma II (Italy)]|[Rome-3 Univ. (Italy). Dipt. di Fisica; Herdoiza, G. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Palombi, F.; Papinutto, M. [CERN, Geneva (Switzerland). Physics Dept., TH Division; Pena, C. [Universidad Autonoma de Madrid (Spain). Dept. de Fisica Teorica C-XI]|[Univ. Autonoma de Madrid (Spain). Inst. de Fisica Teorica UAM/CSIC C-XVI; Wittig, H. [Mainz Univ. (Germany). Inst. fuer Kernphysik

    2007-12-15

    Using Schroedinger Functional methods, we compute the non-perturbative renormalisation and renormalisation group running of several four-fermion operators, in the framework of lattice simulations with two dynamical Wilson quarks. Two classes of operators have been targeted: (i) those with left-left current structure and four propagating quark fields; (ii) all operators containing two static quarks. In both cases, only the parity-odd contributions have been considered, these being the ones that renormalise multiplicatively. Our results, once combined with future simulations of the corresponding lattice hadronic matrix elements, may be used for the computation of phenomenological quantities of interest, such as B_K and B_B (the latter also in the static limit). (orig.)

  14. String Threshold corrections in models with spontaneously broken supersymmetry

    CERN Document Server

    Kiritsis, Elias B; Petropoulos, P M; Rizos, J

    1999-01-01

    We analyse a class of four-dimensional heterotic ground states with N=2 space-time supersymmetry. From the ten-dimensional perspective, such models can be viewed as compactifications on a six-dimensional manifold with SU(2) holonomy, which is locally but not globally K3 x T^2. The maximal N=4 supersymmetry is spontaneously broken to N=2. The masses of the two massive gravitinos depend on the (T,U) moduli of T^2. We evaluate the one-loop threshold corrections of gauge and R^2 couplings and we show that they fall in several universality classes, in contrast to what happens in usual K3 x T^2 compactifications, where the N=4 supersymmetry is explicitly broken to N=2, and where a single universality class appears. These universality properties follow from the structure of the elliptic genus. The behaviour of the threshold corrections as functions of the moduli is analysed in detail: it is singular across several rational lines of the T^2 moduli because of the appearance of extra massless states, and suffers only f...

  15. The Innsbruck/ESO sky models and telluric correction tools

    Directory of Open Access Journals (Sweden)

    Kimeswenger S.

    2015-01-01

    While ground-based astronomical observatories only have to correct for the line-of-sight integral of these atmospheric effects, Čerenkov telescopes use the atmosphere itself as the primary detector. The measured radiation originates at lower altitudes and does not pass through the entire atmosphere. Thus, good knowledge of the atmospheric profile at any given time is required; the latter cannot be achieved by photometric measurements of stellar sources. We show here the capabilities of our sky background model and data reduction tools for ground-based optical/infrared telescopes. Furthermore, we discuss the feasibility of monitoring the atmosphere above any observing site, and thus the possible application of the method to Čerenkov telescopes.

  16. Physical correction model for automatic correction of intensity non-uniformity in magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Stefan Leger

    2017-10-01

    Conclusion: The proposed PCM algorithm led to significantly improved image quality compared to the originally acquired images, suggesting that it is applicable to the correction of MRI data. Thus, it may help to reduce intensity non-uniformity, which is an important step for advanced image analysis.

  17. Non-perturbational surface-wave inversion: A Dix-type relation for surface waves

    Science.gov (United States)

    Haney, Matt; Tsai, Victor C.

    2015-01-01

    We extend the approach underlying the well-known Dix equation in reflection seismology to surface waves. Within the context of surface wave inversion, the Dix-type relation we derive for surface waves allows accurate depth profiles of shear-wave velocity to be constructed directly from phase velocity data, in contrast to perturbational methods. The depth profiles can subsequently be used as an initial model for nonlinear inversion. We provide examples of the Dix-type relation for under-parameterized and over-parameterized cases. In the under-parameterized case, we use the theory to estimate crustal thickness, crustal shear-wave velocity, and mantle shear-wave velocity across the Western U.S. from phase velocity maps measured at 8-, 20-, and 40-s periods. By adopting a thin-layer formalism and an over-parameterized model, we show how a regularized inversion based on the Dix-type relation yields smooth depth profiles of shear-wave velocity. In the process, we quantitatively demonstrate the depth sensitivity of surface-wave phase velocity as a function of frequency and the accuracy of the Dix-type relation. We apply the over-parameterized approach to a near-surface data set within the frequency band from 5 to 40 Hz and find overall agreement between the inverted model and the result of full nonlinear inversion.
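For orientation, the reflection-seismology Dix equation that the authors generalize converts RMS velocities measured at two zero-offset travel times into the interval velocity of the layer between them. A minimal sketch of that classical relation (the numerical values are illustrative, not taken from the paper):

```python
import math

def dix_interval_velocity(v_rms1, t1, v_rms2, t2):
    """Classical Dix equation: interval velocity between two reflectors,
    given RMS velocities v_rms1 at zero-offset time t1 and v_rms2 at t2 > t1."""
    if t2 <= t1:
        raise ValueError("t2 must exceed t1")
    return math.sqrt((v_rms2**2 * t2 - v_rms1**2 * t1) / (t2 - t1))

# Example: 2000 m/s RMS at 1.0 s and 2200 m/s RMS at 1.5 s.
v_int = dix_interval_velocity(2000.0, 1.0, 2200.0, 1.5)
print(f"interval velocity ≈ {v_int:.0f} m/s")
```

The surface-wave analogue replaces travel times with frequency-dependent phase velocities, but the layer-stripping logic is the same.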

  18. Non-perturbative RPA-method implemented in the Coulomb gauge QCD Hamiltonian: From quarks and gluons to baryons and mesons

    Science.gov (United States)

    Yepez-Martinez, Tochtli; Civitarese, Osvaldo; Hess, Peter O.

    2018-02-01

    Starting from an algebraic model based on the QCD Hamiltonian and previously applied to study meson states, we have developed an extension of it in order to explore the structure of baryon states. In developing our approach we have adapted concepts from group theory and non-perturbative many-body methods to describe states built from effective quark and antiquark degrees of freedom. As a Hamiltonian we have used the QCD Hamiltonian written in the Coulomb gauge, expressed in terms of effective quark-antiquark, di-quark and di-antiquark excitations. To gain insight into the relevant interactions of quarks in hadronic states, the Hamiltonian was approximately diagonalized by mapping quark-antiquark pairs and di-quarks (di-antiquarks) onto phonon states. In dealing with the structure of the vacuum of the theory, color-scalar and color-vector states are introduced to account for ground-state correlations. While the use of a purely color-scalar ground state is an obvious choice, so that colorless hadrons contain at least three quarks, the presence of coupled color-vector pairs in the ground state allows for colorless excitations resulting from the action of color objects upon it.

  19. Non-perturbative solution of a quantum mechanical oscillator interacting with a specific environment

    International Nuclear Information System (INIS)

    Badralexe, E.; Gupta, R.K.; Scheid, W.

    1984-01-01

    A quantum mechanical model of an oscillator interacting linearly with an environment is treated by the method of perturbation series expansion. For a special class of environments and interactions, the series is summed up to all orders. An integral equation for the time dependence of the coordinate operator of the oscillator is obtained, which is solved analytically by the method of Laplace transformations. General conditions are stated for a dissipative behaviour of the special class of environments considered. An example, which is widely applicable, is discussed. (author)

  20. Structure of Nonlocal quark vacuum condensate in non-perturbative QCD vacuum

    International Nuclear Information System (INIS)

    Xiang Qianfei; Ma Weixing; Zhou Lijuan; Jiang Weizhou

    2014-01-01

    Based on the Dyson-Schwinger equations (DSEs) in the rainbow truncation and the operator product expansion, the structure of the nonlocal quark vacuum condensate in QCD, described by the quark self-energy functions A_f and B_f usually given by the solutions of the DSEs for the quark propagator, is predicted numerically. We also calculate the local quark vacuum condensate, the quark-gluon mixed local vacuum condensate, and the quark virtuality. The self-energy functions A_f and B_f are obtained from the parameterized quark propagator functions σ_v^f(p²) and σ_s^f(p²) of Roberts and Williams, instead of the numerical solutions of the DSEs. Our calculated results are in reasonable agreement with those of QCD sum rules, lattice QCD calculations, and instanton model predictions, although the resulting local quark vacuum condensates for the light quarks u, d, s are slightly larger than those of the above theoretical predictions. We attribute the differences to model dependence. The strange-quark vacuum condensate is larger than those of the u and d quarks because the s-quark mass is much larger than the u- and d-quark masses. Of course, the Roberts-Williams parameterized quark propagator is an empirical formulation that only approximately describes quark propagation. (authors)

  1. Construction of Non-Perturbative, Unitary Particle-Antiparticle Amplitudes for Finite Particle Number Scattering Formalisms

    International Nuclear Information System (INIS)

    Lindesay, James V

    2002-01-01

    Starting from a unitary, Lorentz invariant two-particle scattering amplitude, we show how to use an identification and replacement process to construct a unique, unitary particle-antiparticle amplitude. This process differs from conventional on-shell Mandelstam s,t,u crossing in that the input and constructed amplitudes can be off-diagonal and off-energy shell. Further, amplitudes are constructed using the invariant parameters which are appropriate to use as driving terms in the multi-particle, multichannel, non-perturbative, cluster-decomposable, relativistic scattering equations of the Faddeev-type integral equations recently presented by Alfred, Kwizera, Lindesay and Noyes. It is therefore anticipated that, when so employed, the resulting multi-channel solutions will also be unitary. The process preserves the usual particle-antiparticle symmetries. To illustrate this process, we construct a J=0 scattering length model chosen for simplicity. We also exhibit a class of physical models which contain a finite quantum mass parameter and are Lorentz invariant. These are constructed to reduce, in the appropriate limits and with the proper choice of value and sign of the interaction parameter, to the asymptotic solution of the nonrelativistic Coulomb problem, including the forward scattering singularity, the essential singularity in the phase, and the Bohr bound-state spectrum.

  2. Non-perturbative analysis of some simple field theories on a momentum space lattice

    International Nuclear Information System (INIS)

    Brooks, E.D. III.

    1984-01-01

    In this work, a new technique is developed for the numerical study of quantum field theory. The procedure, borrowed from nonrelativistic quantum mechanics, is that of finding the eigenvalues of a finite Hamiltonian matrix. The matrix is created by evaluating the matrix elements of the Hamiltonian operator on a finite basis of states. The eigenvalues and eigenvectors of the finite-dimensional matrix become an accurate approximation to those of the physical system as the finite basis of states is extended to become more complete. A model of scalars coupled to fermions in 0+1 dimensions is studied as a simple field theory in the course of developing the technique. Having developed the numerical and analytical techniques, a Fermi field coupled to a Bose field in 1+1 dimensions with the Yukawa coupling lambda anti-psi phi psi is considered. The large-coupling-limit basis of the 0+1 dimensional model is extended to this case using a Bogoliubov transformation on the fermions, which provides a handle on the behavior of the system in the large coupling limit. The effects of renormalization and the generation of bound states are considered.

  3. The two-component non-perturbative pomeron and the G-Universality

    Energy Technology Data Exchange (ETDEWEB)

    Nicolescu, Basarab E-mail: nicolesc@in2p3.fr

    2001-04-01

    In this communication we present a generalization of the Donnachie-Landshoff model inspired by the recent discovery of a two-component Pomeron in LLA-QCD by Bartels, Lipatov and Vacca. In particular, we explore a new property, not present in the usual Regge theory, the G-universality, which signifies the independence of one of the Pomeron components from the nature of the initial and final hadrons. The best description of the p̄p, pp, π±p, K±p, γγ and γp forward data is obtained when G-universality is imposed. Moreover, the ln²s behaviour of the hadron amplitude, first established by Heisenberg, is clearly favoured by the data.

  4. Introduction to non-perturbative quantum chromodynamics; Introduction à QCD non perturbatif

    Energy Technology Data Exchange (ETDEWEB)

    Pene, O. [Paris-11 Univ., 91 - Orsay (France). Lab. de Physique Theorique et Hautes Energies

    1995-12-31

    Quantum chromodynamics is considered to be the theory of the strong interaction. The main peculiarity of this theory is that its asymptotic states (hadrons) differ from its elementary fields (quarks and gluons). This property plays a great part in any physical process involving small energy-momentum transfers, a range in which perturbative methods are no longer applicable. This work focuses on other tools, such as QCD symmetries, the quark model, Green functions and the sum rules. To obtain hadron characteristics numerically, lattice QCD is used, but only for simple processes involving no more than one hadron in the initial and final states, because of the complexity of the Green functions. Some examples using a Monte-Carlo simulation are given. (A.C.) 39 refs.

  5. Model Consistent Pseudo-Observations of Precipitation and Their Use for Bias Correcting Regional Climate Models

    Directory of Open Access Journals (Sweden)

    Peter Berg

    2015-01-01

    Full Text Available Lack of suitable observational data makes bias correction of high space- and time-resolution regional climate models (RCMs) problematic. We present a method to construct pseudo-observational precipitation data by merging a large-scale-constrained RCM reanalysis downscaling simulation with coarse time- and space-resolution observations. The large-scale constraint synchronizes the inner-domain solution to the driving reanalysis model, such that the simulated weather is similar to observations on a monthly time scale. Monthly biases for each single month are corrected to the corresponding month of the observational data, and applied at the finer temporal resolution of the RCM. A low-pass filter is applied to the correction factors to retain the small-spatial-scale information of the RCM. The method is applied to a 12.5 km RCM simulation and proven successful in producing a reliable pseudo-observational data set. Furthermore, the constructed data set is applied as reference in a quantile-mapping bias correction, and is proven skillful in retaining small-scale information of the RCM, while still correcting the large-scale spatial bias. The proposed method allows bias correction of high-resolution model simulations without changing the fine-scale spatial features, i.e., retaining the very information required by many impact models.
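The monthly correction step described above can be sketched roughly as follows; this is a simplified 1-D illustration assuming multiplicative correction factors and a running-mean low-pass filter, not the authors' implementation:

```python
import numpy as np

def monthly_bias_correct(model_precip, obs_monthly, model_monthly, smooth=3):
    """Scale model precipitation so monthly totals match coarse observations.
    The per-cell correction factors are low-pass filtered (here a simple
    running mean over `smooth` cells along a 1-D spatial dimension) so the
    model's fine-scale spatial pattern is retained.
    model_precip: (time, ncell) daily fields for one month;
    obs_monthly, model_monthly: (ncell,) monthly totals."""
    factors = obs_monthly / np.maximum(model_monthly, 1e-6)
    kernel = np.ones(smooth) / smooth
    factors = np.convolve(factors, kernel, mode="same")  # low-pass filter
    return model_precip * factors[None, :]

# Toy example: 30 daily fields on 10 cells, model uniformly 20% too dry.
rng = np.random.default_rng(1)
model = rng.gamma(2.0, 1.0, size=(30, 10))
obs_m = 1.2 * model.sum(axis=0)
corrected = monthly_bias_correct(model, obs_m, model.sum(axis=0))
```

With uniform factors, interior cells of the corrected field reproduce the observed monthly totals exactly, while the daily-scale variability of the RCM is untouched.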

  6. Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-Abelian bosonization to truncated spectrum methods.

    Science.gov (United States)

    James, Andrew J A; Konik, Robert M; Lecheminant, Philippe; Robinson, Neil J; Tsvelik, Alexei M

    2018-02-26

    We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb-Liniger model, 1  +  1D quantum chromodynamics, as well as Landau-Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.

  7. Non-perturbative methodologies for low-dimensional strongly-correlated systems: From non-Abelian bosonization to truncated spectrum methods

    Science.gov (United States)

    James, Andrew J. A.; Konik, Robert M.; Lecheminant, Philippe; Robinson, Neil J.; Tsvelik, Alexei M.

    2018-04-01

    We review two important non-perturbative approaches for extracting the physics of low-dimensional strongly correlated quantum systems. Firstly, we start by providing a comprehensive review of non-Abelian bosonization. This includes an introduction to the basic elements of conformal field theory as applied to systems with a current algebra, and we orient the reader by presenting a number of applications of non-Abelian bosonization to models with large symmetries. We then tie this technique into recent advances in the ability of cold atomic systems to realize complex symmetries. Secondly, we discuss truncated spectrum methods for the numerical study of systems in one and two dimensions. For one-dimensional systems we provide the reader with considerable insight into the methodology by reviewing canonical applications of the technique to the Ising model (and its variants) and the sine-Gordon model. Following this we review recent work on the development of renormalization groups, both numerical and analytical, that alleviate the effects of truncating the spectrum. Using these technologies, we consider a number of applications to one-dimensional systems: properties of carbon nanotubes, quenches in the Lieb–Liniger model, 1  +  1D quantum chromodynamics, as well as Landau–Ginzburg theories. In the final part we move our attention to consider truncated spectrum methods applied to two-dimensional systems. This involves combining truncated spectrum methods with matrix product state algorithms. We describe applications of this method to two-dimensional systems of free fermions and the quantum Ising model, including their non-equilibrium dynamics.

  8. Coaching, Not Correcting: An Alternative Model for Minority Students

    Science.gov (United States)

    Dresser, Rocío; Asato, Jolynn

    2014-01-01

    The debate on the role of oral corrective feedback or "repair" in English instruction settings has been going on for over 30 years. Some educators believe that oral grammar correction is effective because they have noticed that students who learned a set of grammar rules were more likely to use them in real life communication (Krashen,…

  9. Loop Corrections in Very Special Relativity Standard Model

    Science.gov (United States)

    Alfaro, Jorge

    2018-01-01

    In this talk we study one-loop corrections in the VSRSM. In particular, we use the new Sim(2)-invariant dimensional regularization to compute one-loop corrections to the effective action in the subsector of the VSRSM that describes the interaction of photons with charged leptons. New stringent bounds for the masses of ν_e and ν_μ are obtained.

  10. Correction tool for Active Shape Model based lumbar muscle segmentation.

    Science.gov (United States)

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results. Therefore, these tools must provide fast corrections with a low number of interactions, and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated for the task of lumbar muscle segmentation from magnetic resonance images. Experimental results show that full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result with an average Dice coefficient of 0.92±0.03.
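The Dice coefficient quoted above as the quality measure is a standard overlap score between two binary masks; a minimal implementation (the toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty masks are identical by convention
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy 2D masks differing in one pixel.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 foreground pixels
b = a.copy(); b[1, 1] = False                         # 3 foreground pixels
print(dice_coefficient(a, b))  # 2*3/(4+3) ≈ 0.857
```

A Dice score of 1.0 means perfect overlap, so the reported 0.92±0.03 indicates the corrections preserved segmentation quality.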

  11. Numerical investigation of non-perturbative kinetic effects of energetic particles on toroidicity-induced Alfvén eigenmodes in tokamaks and stellarators

    International Nuclear Information System (INIS)

    Slaby, Christoph; Könies, Axel; Kleiber, Ralf

    2016-01-01

    The resonant interaction of shear Alfvén waves with energetic particles is investigated numerically in tokamak and stellarator geometry using a non-perturbative MHD-kinetic hybrid approach. The focus lies on toroidicity-induced Alfvén eigenmodes (TAEs), which are most easily destabilized by a fast-particle population in fusion plasmas. While the background plasma is treated within the framework of an ideal-MHD theory, the drive of the fast particles, as well as Landau damping of the background plasma, is modelled using the drift-kinetic Vlasov equation without collisions. Building on analytical theory, a fast numerical tool, STAE-K, has been developed to solve the resulting eigenvalue problem using a Riccati shooting method. The code, which can be used for parameter scans, is applied to tokamaks and the stellarator Wendelstein 7-X. High energetic-ion pressure leads to large growth rates of the TAEs and to their conversion into kinetically modified TAEs and kinetic Alfvén waves via continuum interaction. To better understand the physics of this conversion mechanism, the connections between TAEs and the shear Alfvén wave continuum are examined. It is shown that, when energetic particles are present, the continuum deforms substantially and the TAE frequency can leave the continuum gap. The interaction of the TAE with the continuum leads to singularities in the eigenfunctions. To further advance the physical model and also to eliminate the MHD continuum together with the singularities in the eigenfunctions, a fourth-order term connected to radiative damping has been included. The radiative damping term is connected to non-ideal effects of the bulk plasma and introduces higher-order derivatives to the model. Thus, it has the potential to substantially change the nature of the solution. For the first time, the fast-particle drive, Landau damping, continuum damping, and radiative damping have been modelled together in tokamak- as well as in stellarator geometry.

  12. Numerical investigation of non-perturbative kinetic effects of energetic particles on toroidicity-induced Alfvén eigenmodes in tokamaks and stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Slaby, Christoph; Könies, Axel; Kleiber, Ralf [Max-Planck-Institut für Plasmaphysik, D-17491 Greifswald (Germany)

    2016-09-15

    The resonant interaction of shear Alfvén waves with energetic particles is investigated numerically in tokamak and stellarator geometry using a non-perturbative MHD-kinetic hybrid approach. The focus lies on toroidicity-induced Alfvén eigenmodes (TAEs), which are most easily destabilized by a fast-particle population in fusion plasmas. While the background plasma is treated within the framework of an ideal-MHD theory, the drive of the fast particles, as well as Landau damping of the background plasma, is modelled using the drift-kinetic Vlasov equation without collisions. Building on analytical theory, a fast numerical tool, STAE-K, has been developed to solve the resulting eigenvalue problem using a Riccati shooting method. The code, which can be used for parameter scans, is applied to tokamaks and the stellarator Wendelstein 7-X. High energetic-ion pressure leads to large growth rates of the TAEs and to their conversion into kinetically modified TAEs and kinetic Alfvén waves via continuum interaction. To better understand the physics of this conversion mechanism, the connections between TAEs and the shear Alfvén wave continuum are examined. It is shown that, when energetic particles are present, the continuum deforms substantially and the TAE frequency can leave the continuum gap. The interaction of the TAE with the continuum leads to singularities in the eigenfunctions. To further advance the physical model and also to eliminate the MHD continuum together with the singularities in the eigenfunctions, a fourth-order term connected to radiative damping has been included. The radiative damping term is connected to non-ideal effects of the bulk plasma and introduces higher-order derivatives to the model. Thus, it has the potential to substantially change the nature of the solution. For the first time, the fast-particle drive, Landau damping, continuum damping, and radiative damping have been modelled together in tokamak- as well as in stellarator geometry.

  13. A sun-crown-sensor model and adapted C-correction logic for topographic correction of high resolution forest imagery

    Science.gov (United States)

    Fan, Yuanchao; Koukal, Tatjana; Weisberg, Peter J.

    2014-10-01

    Canopy shadowing mediated by topography is an important source of radiometric distortion in remote sensing images of rugged terrain. Topographic corrections based on the sun-canopy-sensor (SCS) model significantly improve on those based on the sun-terrain-sensor (STS) model for surfaces with high forest canopy cover, because the SCS model considers and preserves the geotropic nature of trees. The SCS model accounts for sub-pixel canopy shadowing effects and normalizes the sunlit canopy area within a pixel. However, it does not account for mutual shadowing between neighboring pixels. Pixel-to-pixel shadowing is especially apparent in fine-resolution satellite images in which individual tree crowns are resolved. This paper proposes a new topographic correction model, the sun-crown-sensor (SCnS) model, based on high-resolution satellite imagery (IKONOS) and a high-precision LiDAR digital elevation model. An improvement on the C-correction logic with a radiance partitioning method to address the effects of diffuse irradiance is also introduced (SCnS + C). In addition, we incorporate a weighting variable, based on pixel shadow fraction, on the direct and diffuse radiance portions to enhance the retrieval of at-sensor radiance and reflectance of highly shadowed tree pixels, forming another variant of the SCnS model (SCnS + W). Model evaluation with IKONOS test data showed that the new SCnS model outperformed the STS and SCS models in quantifying the correlation between the terrain-regulated illumination factor and at-sensor radiance. Our adapted C-correction logic based on the sun-crown-sensor geometry and radiance partitioning better represented the general additive effects of diffuse radiation than C parameters derived from the STS or SCS models. The weighting factor Wt also significantly enhanced correction results by reducing within-class standard deviation and balancing the mean pixel radiance between sunlit and shaded slopes. We analyzed these improvements with model
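The standard C-correction that the authors adapt rescales radiance by (cos θ_z + c)/(cos i + c), where i is the local incidence angle, θ_z the solar zenith angle, and c is estimated as intercept/slope from a scene-wide linear regression of radiance on cos i. A minimal numpy sketch of that standard form on synthetic data (not the SCnS + C variant itself):

```python
import numpy as np

def fit_c_parameter(radiance, cos_i):
    """Estimate c = intercept/slope from a linear fit of radiance vs cos(i)."""
    slope, intercept = np.polyfit(cos_i, radiance, 1)
    return intercept / slope

def c_correction(radiance, cos_i, cos_sz, c):
    """Standard C-correction: L_corr = L * (cos θ_z + c) / (cos i + c)."""
    return radiance * (cos_sz + c) / (cos_i + c)

# Synthetic scene: radiance linear in the illumination cosine plus noise.
rng = np.random.default_rng(2)
cos_i = rng.uniform(0.2, 1.0, size=500)
radiance = 80.0 * cos_i + 20.0 + rng.normal(0.0, 1.0, size=500)

c = fit_c_parameter(radiance, cos_i)  # expect ~20/80 = 0.25
corrected = c_correction(radiance, cos_i, np.cos(np.radians(30.0)), c)
```

After correction, the residual dependence of radiance on the illumination cosine is largely removed, which is the effect the C parameter is designed to achieve.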

  14. Non-perturbative construction of 2D and 4D supersymmetric Yang-Mills theories with 8 supercharges

    International Nuclear Information System (INIS)

    Hanada, Masanori; Matsuura, So; Sugino, Fumihiko

    2012-01-01

    In this paper, we consider two-dimensional N=(4,4) supersymmetric Yang-Mills (SYM) theory and deform it by a mass parameter M while keeping all supercharges. We further add another mass parameter m in a manner that respects two of the eight supercharges and put the deformed theory on a two-dimensional square lattice, on which the two supercharges are exactly preserved. The flat directions of scalar fields are stabilized by the mass deformations, which gives discrete minima representing fuzzy spheres. We show in perturbation theory that the lattice continuum limit can be taken without any fine tuning. Around the trivial minimum, this lattice theory serves as a non-perturbative definition of two-dimensional N=(4,4) SYM theory. We also discuss that the same lattice theory realizes four-dimensional N=2 U(k) SYM on R²×(Fuzzy R²) around the minimum of k coincident fuzzy spheres.

  15. Publisher Correction: Studying light-harvesting models with superconducting circuits.

    Science.gov (United States)

    Potočnik, Anton; Bargerbos, Arno; Schröder, Florian A Y N; Khan, Saeed A; Collodo, Michele C; Gasparinetti, Simone; Salathé, Yves; Creatore, Celestino; Eichler, Christopher; Türeci, Hakan E; Chin, Alex W; Wallraff, Andreas

    2018-06-08

    The original HTML version of this Article contained an error in the second mathematical expression in the fourth sentence of the fourth paragraph of the 'Excitation transfer with uniform white noise' section of the Results. This has been corrected in the HTML version of the Article.The original PDF version of this Article incorrectly stated that 'Correspondence and requests for materials should be addressed to A. Pčn.', instead of the correct 'Correspondence and requests for materials should be addressed to A. Potočnik'. This has been corrected in the PDF version of the Article.

  16. The b-quark mass from non-perturbative $N_f=2$ Heavy Quark Effective Theory at $O(1/m_h)$

    DEFF Research Database (Denmark)

    Bernardoni, F.; Blossier, B.; Bulava, J.

    2014-01-01

    We report our final estimate of the b-quark mass from $N_f=2$ lattice QCD simulations using Heavy Quark Effective Theory non-perturbatively matched to QCD at $O(1/m_h)$. Treating systematic and statistical errors in a conservative manner, we obtain $\\overline{m}_{\\rm b}^{\\overline{\\rm MS}}(2 {\\rm...

  17. Bartlett correction in the stable AR(1) model with intercept and trend

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2004-01-01

    The Bartlett correction is derived for testing hypotheses about the autoregressive parameter ρ in the stable: (i) AR(1) model; (ii) AR(1) model with intercept; (iii) AR(1) model with intercept and linear trend. The correction is found explicitly as a function of ρ. In the models with deterministic

  18. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    Science.gov (United States)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed

  19. Heat transfer corrected isothermal model for devolatilization of thermally-thick biomass particles

    DEFF Research Database (Denmark)

    Luo, Hao; Wu, Hao; Lin, Weigang

    The isothermal model used in current computational fluid dynamics (CFD) models neglects internal heat transfer during biomass devolatilization. This assumption is not reasonable for thermally-thick particles. To solve this issue, a heat transfer corrected isothermal model is introduced. In this model......, two heat transfer correction coefficients, HT-correction of heat transfer and HR-correction of reaction, are defined to cover the effects of internal heat transfer. A series of single-particle biomass devolatilization cases have been modeled to validate this model; the results show that the devolatilization behaviors...... of both thermally-thick and thermally-thin particles are predicted reasonably by the heat transfer corrected model, whereas the isothermal model overestimates the devolatilization rate and heating rate for thermally-thick particles. This model probably has better performance than the isothermal model when it is coupled...

  20. Learning versus correct models: influence of model type on the learning of a free-weight squat lift.

    Science.gov (United States)

    McCullagh, P; Meyer, K N

    1997-03-01

    It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.

  1. Effect of Inhomogeneity correction for lung volume model in TPS

    International Nuclear Information System (INIS)

    Chung, Se Young; Lee, Sang Rok; Kim, Young Bum; Kwon, Young Ho

    2004-01-01

    A phantom including high-density materials such as steel was custom-made, fixing lung and bone, in order to evaluate inhomogeneity correction when conducting radiation therapy to treat lung cancer. Using this phantom, values resulting from the inhomogeneity correction algorithms of 2D and 3D radiation therapy planning systems are compared. Moreover, the change in dose calculation due to inhomogeneity was evaluated by comparison with actual measurement. For image acquisition, the custom-made inhomogeneity correction phantom (pig's vertebra, steel (8.21 g/cm³), cork (0.23 g/cm³)) and a CT scanner (Volume Zoom, Siemens, Germany) were used. As radiation therapy planning systems, Marks Plan (2D) and XiO (CMS, USA; 3D) were used. For comparison with measured values, a linear accelerator (CL/1800, Varian, USA) and an ion chamber were used. The image obtained from the CT was used to obtain point dose and dose distribution in the region of interest (ROI) on the radiation therapy planning system. After measurements were conducted under the same conditions, the values from the treatment planning system and the measured values were compared and analyzed. Differences between the results with and without the inhomogeneity correction algorithm, and among the various inhomogeneity correction algorithms included in the planning systems, were compared as well. Comparing the measured values in the region of interest within the inhomogeneity correction phantom with the homogeneous and inhomogeneous correction values obtained from the planning systems, the margin of error between the measured value and the inhomogeneous correction value at location 1 of the lung was 0.8% in 2D and 0.5% in 3D. The margin of error at location 1 of the steel was 12% in 2D and 5% in 3D; however, it is possible to

  2. Determining Model Correctness for Situations of Belief Fusion

    Science.gov (United States)

    2013-07-01

    cinema together, it would seem strange to say that one specific movie is more true than another. However, in this case the term truth can be interpreted in...which means that no state is correct. An example is when two persons try to agree on seeing a movie at the cinema. If their preferences include some

  3. NLO QCD Corrections to Drell-Yan in TeV-scale Gravity Models

    International Nuclear Information System (INIS)

    Mathews, Prakash; Ravindran, V.

    2006-01-01

    In TeV-scale gravity models, we present the NLO QCD corrections for the double differential cross sections in the scattering angle for dilepton production at hadron colliders. The quantitative impact of QCD corrections on extra-dimension searches at the LHC and Tevatron is investigated for both the ADD and RS models through K-factors. We also show how the inclusion of QCD corrections to NLO stabilises the cross section with respect to renormalisation and factorisation scale variations.

  4. Tests of perturbative and non perturbative structure of moments of hadronic event shapes using experiments JADE and OPAL; Untersuchung perturbativer und nichtperturbativer Struktur der Momente hadronischer Ereignisformvariablen mit den Experimenten JADE und OPAL

    Energy Technology Data Exchange (ETDEWEB)

    Pahl, Christoph Johannes

    2008-01-29

    In hadron production data of the e+e- annihilation experiments JADE and OPAL we measure the first five moments of twelve hadronic-event-shape variables at c.m. energies from 14 to 207 GeV. From the comparison of the QCD NLO prediction with the data, corrected for hadronization by means of MC models, we obtain the reference value of the strong coupling α_s(M_Z⁰) = 0.1254 ± 0.0007(stat.) ± 0.0010(exp.) +0.0009/-0.0023(had.) +0.0069/-0.0053(theo.). For some, especially higher, moments, systematic deficiencies in the QCD NLO prediction are apparent. Simultaneous fits to two moments under the assumption of identical renormalization scales yield scale values from x_μ = 0.057 to x_μ = 0.196. We check predictions of different non-perturbative models. From the single-dressed-gluon approximation, a perturbative prediction in O(α_s⁵) results, with negligible energy power correction, which describes the thrust average at hadron level well with α_s(M_Z⁰) = 0.1186 ± 0.0017(exp.) +0.0033/-0.0028(theo.). The variance of the event-shape variables is measured and compared with models as well as predictions.

  5. Bias-corrected estimation in potentially mildly explosive autoregressive models

    DEFF Research Database (Denmark)

    Haufmann, Hendrik; Kruse, Robinson

    This paper provides a comprehensive Monte Carlo comparison of different finite-sample bias-correction methods for autoregressive processes. We consider classic situations where the process is either stationary or exhibits a unit root. Importantly, the case of mildly explosive behaviour is studied...... that the indirect inference approach offers a valuable alternative to other existing techniques. Its performance (measured by its bias and root mean squared error) is balanced and highly competitive across many different settings. A clear advantage is its applicability to mildly explosive processes. In an empirical...

  6. Correctness-preserving configuration of business process models

    NARCIS (Netherlands)

    Aalst, van der W.M.P.; Dumas, M.; Gottschalk, F.; Hofstede, ter A.H.M.; La Rosa, M.; Mendling, J.; Fiadeiro, J.; Inverardi, P.

    2008-01-01

    Reference process models capture recurrent business operations in a given domain such as procurement or logistics. These models are intended to be configured to fit the requirements of specific organizations or projects, leading to individualized process models that are subsequently used for domain

  7. Significance of Bias Correction in Drought Frequency and Scenario Analysis Based on Climate Models

    Science.gov (United States)

    Aryal, Y.; Zhu, J.

    2015-12-01

    Assessment of future drought characteristics is difficult as climate models usually have biases in simulating precipitation frequency and intensity. To overcome this limitation, output from climate models needs to be bias corrected based on the specific purpose of application. In this study, we examine the significance of bias correction in the context of drought frequency and scenario analysis using output from climate models. In particular, we investigate the performance of three widely used bias correction techniques: (1) monthly bias correction (MBC), (2) nested bias correction (NBC), and (3) equidistant quantile mapping (EQM). The effect of bias correction on future scenarios of drought frequency is also analyzed. The characteristics of drought are investigated in terms of frequency and severity at nine representative locations in different climatic regions across the United States using regional climate model (RCM) output from the North American Regional Climate Change Assessment Program (NARCCAP). The Standardized Precipitation Index (SPI) is used as the means to compare and forecast drought characteristics at different timescales. Systematic biases in the RCM precipitation output are corrected against the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) data. The results demonstrate that bias correction significantly decreases the RCM errors in reproducing drought frequency derived from the NARR data. Preserving the mean and standard deviation is essential for climate models in drought frequency analysis. RCM biases have both regional and timescale dependence. Different timescales of input precipitation in the bias corrections show similar results. Drought frequency obtained from the RCM future (2040-2070) scenarios is compared with that from the historical simulations. Changes in drought characteristics occur in all climatic regions. The relative changes in drought frequency in the future scenario in relation to
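As a rough illustration of the SPI referenced above, a minimal gamma-based SPI can be sketched as follows. This is an editorial sketch assuming SciPy is available, not code from the study; fixing the gamma location at zero and ignoring the zero-precipitation mass are simplifications of the full SPI procedure.

```python
import numpy as np
from scipy import stats

def spi(precip, timescale=3):
    # Aggregate precipitation over the chosen timescale (rolling sum)
    p = np.convolve(precip, np.ones(timescale), mode="valid")
    # Fit a gamma distribution to the aggregated totals (location fixed at 0)
    shape, _, scale = stats.gamma.fit(p[p > 0], floc=0)
    # Cumulative probability of each total under the fitted climatology
    cdf = stats.gamma.cdf(p, shape, loc=0, scale=scale)
    # Map cumulative probability to a standard normal deviate (the SPI value)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))
```

By construction the SPI values are approximately standard normal, so values below about -1.5 flag severe drought at the chosen timescale.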

  8. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Rodolfo H. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)]. E-mail: rhromero@exa.unne.edu.ar; Gomez, Sergio S. [Facultad de Ciencias Exactas, Universidad Nacional del Nordeste, Avenida Libertad 5500 (3400), Corrientes (Argentina)

    2006-04-24

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown.

  9. Bias Correction in a Stable AD (1,1) Model: Weak versus Strong Exogeneity

    NARCIS (Netherlands)

    van Giersbergen, N.P.A.

    2001-01-01

    This paper compares the behaviour of a bias-corrected estimator assuming strongly exogenous regressors to the behaviour of a bias-corrected estimator assuming weakly exogenous regressors, when in fact the marginal model contains a feedback mechanism. To this end, the effects of a feedback mechanism

  10. Bias-Corrected Estimation of Noncentrality Parameters of Covariance Structure Models

    Science.gov (United States)

    Raykov, Tenko

    2005-01-01

    A bias-corrected estimator of noncentrality parameters of covariance structure models is discussed. The approach represents an application of the bootstrap methodology for purposes of bias correction, and utilizes the relation between average of resample conventional noncentrality parameter estimates and their sample counterpart. The…
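The bootstrap bias-correction principle described in the abstract, applied here to a generic estimator rather than to noncentrality parameters specifically, can be sketched as follows (a hypothetical helper, not code from the paper):

```python
import numpy as np

def bootstrap_bias_corrected(data, estimator, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta_hat = estimator(data)
    # Re-estimate on bootstrap resamples of the data
    boot = np.array([estimator(rng.choice(data, size=len(data), replace=True))
                     for _ in range(n_boot)])
    # Bias estimate: mean of resample estimates minus the sample estimate;
    # subtracting it gives theta_bc = 2*theta_hat - mean(boot)
    return 2.0 * theta_hat - boot.mean()
```

For a downward-biased estimator such as the plug-in variance, the corrected value is pushed upward, toward the unbiased target, mirroring the relation between resample averages and the sample counterpart used in the paper.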

  11. Analytical model for relativistic corrections to the nuclear magnetic shielding constant in atoms

    International Nuclear Information System (INIS)

    Romero, Rodolfo H.; Gomez, Sergio S.

    2006-01-01

    We present a simple analytical model for calculating and rationalizing the main relativistic corrections to the nuclear magnetic shielding constant in atoms. It provides good estimates for those corrections and their trends, in reasonable agreement with accurate four-component calculations and perturbation methods. The origin of the effects in deep core atomic orbitals is manifestly shown

  12. Radiative corrections for semileptonic decays of hyperons: the 'model independent' part

    International Nuclear Information System (INIS)

    Toth, K.; Szegoe, K.; Margaritis, T.

    1984-04-01

    The 'model independent' part of the order α radiative correction due to virtual photon exchanges and inner bremsstrahlung is studied for semileptonic decays of hyperons. Numerical results of high accuracy are given for the relative correction to the branching ratio, the electron energy spectrum and the (Esub(e),Esub(f)) Dalitz distribution in the case of four different decays. (author)

  13. Mathematical models for correction of images, obtained at radioisotope scan

    International Nuclear Information System (INIS)

    Glaz, A.; Lubans, A.

    2002-01-01

    The images obtained in radioisotope scintigraphy contain distortions. Distortions appear as a result of the absorption of radiation by the tissues of the patient's body. Two mathematical models for reducing such distortions are proposed. The image obtained by only one gamma camera is used in the first mathematical model. Unfortunately, this model allows processing of the images only in the case when it can be assumed that the investigated organ has a symmetric form. The images obtained by two gamma cameras are used in the second model. This makes it possible to assume that the investigated organ has a non-symmetric form and to acquire more precise results. (authors)

  14. Next-to-leading order corrections to the valon model

    Indian Academy of Sciences (India)

    A seminumerical solution to the valon model at next-to-leading order (NLO) in the Laguerre polynomials is presented. We used the valon model to generate the structure of proton with respect to the Laguerre polynomials method. The results are compared with H1 data and other parametrizations.

  15. MODEL PERMINTAAN UANG DI INDONESIA DENGAN PENDEKATAN VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    imam mukhlis

    2016-09-01

    Full Text Available This research aims to estimate the demand-for-money model in Indonesia for 2005.2-2015.12. The variables used in this research are: demand for money, interest rate, inflation, and exchange rate (IDR/US$). The stationarity test with ADF is used to test for a unit root in the data. A cointegration test is applied to estimate the long-run relationship between variables. This research employs the Vector Error Correction Model (VECM) to estimate the money demand model in Indonesia. The results show that all the data are stationary at the difference level (1%). There is a long-run relationship between interest rate, inflation and exchange rate and the demand for money in Indonesia. The VECM could not explain the interaction between the explanatory variables and the dependent variable. In the short run, there is no relationship between interest rate, inflation and exchange rate and the demand for money in Indonesia for 2005.2-2015.12
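The pipeline the abstract outlines (unit-root testing, cointegration, error correction) can be illustrated with a minimal two-step Engle-Granger sketch in NumPy. This is a generic single-equation error correction model, not the paper's VECM, and all names are hypothetical.

```python
import numpy as np

def engle_granger_ecm(y, x):
    """Two-step Engle-Granger estimation of a single-equation ECM (sketch)."""
    # Step 1: long-run (cointegrating) regression  y_t = a + b*x_t + u_t
    X = np.column_stack([np.ones_like(x), x])
    a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - a - b * x
    # Step 2: short-run dynamics  dy_t = c + g*dx_t + alpha*u_{t-1} + e_t,
    # where alpha < 0 measures the speed of adjustment to the long-run relation
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones_like(dx), dx, u[:-1]])
    c, g, alpha = np.linalg.lstsq(Z, dy, rcond=None)[0]
    return {"long_run": (a, b), "short_run": (c, g), "adjustment": alpha}
```

A significantly negative adjustment coefficient is what ties the short-run dynamics back to the long-run money demand relation; a full VECM generalizes this to a system of equations.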

  16. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automation of generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents and for new models new rules for off-shell currents emerge which are derived from the Feynman rules. My work relies on the UFO format which can be obtained by a suited model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  17. Integrated model-based retargeting and optical proximity correction

    Science.gov (United States)

    Agarwal, Kanak B.; Banerjee, Shayak

    2011-04-01

    Conventional resolution enhancement techniques (RET) are becoming increasingly inadequate at addressing the challenges of subwavelength lithography. In particular, features show high sensitivity to process variation in low-k1 lithography. Process-variation-aware RETs such as process-window OPC (PWOPC) are becoming increasingly important to guarantee high lithographic yield, but such techniques suffer from high runtime impact. An alternative to PWOPC is to perform retargeting, which is a rule-assisted modification of target layout shapes to improve their process window. However, rule-based retargeting is not a scalable technique since rules cannot cover the entire search space of two-dimensional shape configurations, especially with technology scaling. In this paper, we propose to integrate the processes of retargeting and optical proximity correction (OPC). We utilize the normalized image log slope (NILS) metric, which is available at no extra computational cost during OPC. We use NILS to guide dynamic target modification between iterations of OPC. We utilize the NILS tagging capabilities of Calibre TCL scripting to identify fragments with low NILS. We then perform NILS binning to assign different magnitudes of retargeting to different NILS bins. NILS is determined both for width, to identify regions of pinching, and for space, to locate regions of potential bridging. We develop an integrated flow for 1x metal lines (M1) which exhibits fewer lithographic hotspots compared to a flow with just OPC and no retargeting. We also observe cases where hotspots that existed in the rule-based retargeting flow are fixed using our methodology. We finally also demonstrate that such a retargeting methodology does not significantly alter design properties by electrically simulating a latch layout before and after retargeting. We observe less than 1% impact on latch Clk-Q and D-Q delays post-retargeting, which makes this methodology an attractive one for use in improving shape process windows

  18. Modeling Financial Liquidity Of Construction Companies Using Error Correction Mechanism

    Directory of Open Access Journals (Sweden)

    Tomasz Stryjewski

    2017-03-01

    Full Text Available Financial liquidity is one of the most important economic categories in the functioning of a company. There are many methods for assessing a company in this field, ranging from ratio analysis to advanced models of financial flows. In this paper, an econometric model of financial income is presented, which was used to analyze the liquidity of three construction companies. This analysis was made against the background of indicator methods.

  19. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    Science.gov (United States)

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  20. Pion-cloud corrections to the relativistic S + V harmonic potential model

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1988-01-01

    Pionic corrections to the mass spectrum of low-lying s-wave baryons are incorporated in a relativistic independent quark model with equally mixed Lorentz scalar and vector harmonic potentials. (M.W.O.) [pt

  1. Lipid correction model of carbon stable isotopes for a cosmopolitan predator, spiny dogfish Squalus acanthias.

    Science.gov (United States)

    Reum, J C P

    2011-12-01

    Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AIC(c) ), and predicted similar lipid corrections to δ(13) C that were up to 2.8 ‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values compared to that of white muscle, all three models performed poorly and lipid-corrected δ(13) C values were best approximated by simply adding 5.74 ‰ to bulk δ(13) C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
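As a sketch of how such corrections are applied, the widely used multispecies regression of Post et al. (2007) and the constant liver offset reported in the abstract can be written as follows. This is illustrative only; the study's fitted muscle models for S. acanthias are not reproduced here, and the function names are hypothetical.

```python
def lipid_correct_post2007(d13c_bulk, cn_ratio):
    # Multispecies regression of Post et al. (2007):
    # d13C_norm = d13C_bulk - 3.32 + 0.99 * C:N
    return d13c_bulk - 3.32 + 0.99 * cn_ratio

def lipid_correct_liver(d13c_bulk):
    # Constant offset (+5.74 per mil) reported in the abstract for
    # lipid-rich S. acanthias liver, where C:N-based models performed poorly
    return d13c_bulk + 5.74
```

The contrast between the two functions mirrors the abstract's finding: C:N-based models work for muscle, while lipid-rich liver is better approximated by a fixed additive shift.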

  2. Error Correction Model of the Demand for Money in Pakistan

    OpenAIRE

    Qayyum, Abdul

    1998-01-01

    The paper estimates a dynamic demand-for-money (currency) function for Pakistan. It is concluded that in the long run money demand depends on income, the rate of inflation and the bond rate. The rate of inflation and the rate of interest on deposits emerge as important determinants of money demand in the short run. Moreover, the dynamic model remains stable throughout the study period.

  3. Do Lumped-Parameter Models Provide the Correct Geometrical Damping?

    DEFF Research Database (Denmark)

    Andersen, Lars

    response during excitation and the geometrical damping related to free vibrations of a hexagonal footing. The optimal order of a lumped-parameter model is determined for each degree of freedom, i.e. horizontal and vertical translation as well as torsion and rocking. In particular, the necessity of coupling...... between horizontal sliding and rocking is discussed....

  4. Radiative Corrections for W → eν̄ Decay in the Weinberg-Salam Model

    Science.gov (United States)

    Inoue, K.; Kakuto, A.; Komatsu, H.; Takeshita, S.

    1980-09-01

    The one-loop corrections for the W → eν̄ decay rate are calculated in the Weinberg-Salam model with an arbitrary number of generations. The on-shell renormalization prescription and the 't Hooft-Feynman gauge are employed. Divergences are treated by the dimensional regularization method. Some numerical estimates for the decay rate are given in the three-generation model. It is found that there are significant corrections, mainly owing to fermion-mass singularities.

  5. Investigation of turbulence models with compressibility corrections for hypersonic boundary flows

    Directory of Open Access Journals (Sweden)

    Han Tang

    2015-12-01

    Full Text Available The applications of pressure-work, pressure-dilatation, and dilatation-dissipation (Sarkar, Zeman, and Wilcox) models to hypersonic boundary flows are investigated. The flat-plate boundary layer flows at Mach numbers 5-11 and shock wave/boundary layer interactions at compression corners are simulated numerically. For the flat-plate boundary layer flows, the original turbulence models overestimate the heat flux at Mach numbers up to 10, and compressibility corrections applied to the turbulence models lead to a decrease in friction coefficients and heating rates. The pressure-work and pressure-dilatation models yield the better results. Among the three dilatation-dissipation models, the Sarkar and Wilcox corrections present larger deviations from the experimental measurements, while the Zeman correction can achieve acceptable results. For hypersonic compression corner flows, due to the evident increase of the turbulence Mach number in the separation zone, compressibility corrections make the separation areas larger and thus cannot improve the accuracy of the calculated results; it is unreasonable for compressibility corrections to take effect in the separation zone. The density-corrected model by Catris and Aupoix is suitable for shock wave/boundary layer interaction flows: it can improve the simulation accuracy of the peak heating and has little influence on the separation zone.

  6. Analyzing B_s - anti-B_s mixing. Non-perturbative contributions to bag parameters from sum rules

    Energy Technology Data Exchange (ETDEWEB)

    Mannel, T. [Siegen Univ. (Germany). FB 7, Theoretische Physik; Pecjak, B.D. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Pivovarov, A.A. [Siegen Univ. (Germany). FB 7, Theoretische Physik; Russian Academy of Sciences, Moscow (Russian Federation). Inst. for Nuclear Research

    2007-03-15

    We use QCD sum rules to compute matrix elements of the ΔB=2 operators appearing in the heavy-quark expansion of the width difference of the B_s mass eigenstates. Our analysis includes the leading-order operators Q and Q_S, as well as the subleading operators R_2 and R_3, which appear at next-to-leading order in the 1/m_b expansion. We conclude that the violation of the factorization approximation for these matrix elements due to non-perturbative vacuum condensates is as low as 1-2%. (orig.)

  7. Yukawa corrections from PGBs in the OGTC model to the process γγ → bb̄

    International Nuclear Information System (INIS)

    Huang Jinshu; Song Taiping; Song Haizhen; Lu gongru

    2000-01-01

    The Yukawa corrections from the pseudo-Goldstone bosons (PGBs) in the one-generation technicolor (OGTC) model to the process γγ → bb̄ are calculated. We find that the corrections from the PGBs to the cross section of γγ → bb̄ exceed 10% in a certain region of parameter values. The maximum relative correction to the process e+e- → γγ → bb̄ may reach -51% in the laser back-scattering photon mode, and is only -17.9% in the beamstrahlung photon mode. The corrections are considerably larger than the contributions from the relevant particles in the standard model and the supersymmetric model, and can be considered a signature of technicolor at next-generation high-energy photon collisions.

  8. Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island

    Science.gov (United States)

    Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.

    2018-04-01

    Rainfall is an element of climate that is highly influential for the agricultural sector. Rain pattern and distribution largely determine the sustainability of agricultural activities. Therefore, information on rainfall is very useful for the agriculture sector and for farmers in anticipating the possibility of extreme events, which often cause failures of agricultural production. This research aims to identify the biases in seasonal forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) rainfall forecasts and to build a transfer function to correct the distribution biases as a new prediction model, using a quantile mapping approach. We apply this approach to the case of Bali Island and, as a result, the use of bias correction methods to correct systematic biases in the model gives better results. The new prediction model obtained with this approach outperforms the uncorrected forecast. We find that, in general, the bias correction approach performs better during the rainy season than in the dry season.
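The quantile mapping transfer function mentioned above can be sketched with empirical CDFs. This is an editorial sketch, not the study's implementation, and the function and variable names are hypothetical.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: transfer model values onto the observed distribution."""
    # Non-exceedance probability of each value under the historical model climatology
    probs = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    probs = np.clip(probs, 0.0, 1.0)
    # Inverse empirical CDF of the observations evaluated at those probabilities
    return np.quantile(obs_hist, probs)
```

Because each forecast value is replaced by the observed value of the same rank, the corrected series inherits the observed distribution while preserving the forecast's temporal ordering.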

  9. Spherical aberration correction with an in-lens N-fold symmetric line currents model.

    Science.gov (United States)

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji

    2018-04-01

    In our previous works, we have proposed N-SYLC (N-fold symmetric line currents) models for aberration correction. In this paper, we propose "in-lens N-SYLC" model, where N-SYLC overlaps rotationally symmetric lens. Such overlap is possible because N-SYLC is free of magnetic materials. We analytically prove that, if certain parameters of the model are optimized, an in-lens 3-SYLC (N = 3) doublet can correct 3rd order spherical aberration. By computer simulation, we show that the required excitation current for correction is less than 0.25 AT for beam energy 5 keV, and the beam size after correction is smaller than 1 nm at the corrector image plane for initial slope less than 4 mrad. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Empirical Correction to the Likelihood Ratio Statistic for Structural Equation Modeling with Many Variables.

    Science.gov (United States)

    Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu

    2015-06-01

    Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of a SEM model is T ML, a slight modification to the likelihood ratio statistic. Under normality assumption, T ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T ML rejects the correct model too often when p is not too small. Various corrections to T ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T ML, and they control type I errors reasonably well whenever N ≥ max(50,2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T ML as reported in the literature, and they perform well.
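
    The principle described above, rescaling T ML so that its mean matches the nominal degrees of freedom, can be illustrated with a toy Monte Carlo sketch. The paper predicts the correction from N and p rather than from simulated replications, and the 30% inflation factor below is purely illustrative:

    ```python
    import numpy as np

    def empirical_correction_factor(t_values, df):
        """Rescale factor c chosen so that the mean of c * T_ML over
        replications under a correct model equals the nominal degrees of
        freedom, in the spirit of a Bartlett correction."""
        return df / np.mean(t_values)

    # Toy demonstration: a statistic that is chi-square inflated by 30%,
    # mimicking T_ML over-rejecting when p is large relative to N.
    rng = np.random.default_rng(0)
    df = 20
    t_sim = 1.3 * rng.chisquare(df, size=5000)
    c = empirical_correction_factor(t_sim, df)
    corrected_mean = np.mean(c * t_sim)  # equals df by construction
    ```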

  11. Sandmeier model based topographic correction to lunar spectral profiler (SP) data from KAGUYA satellite.

    Science.gov (United States)

    Chen, Sheng-Bo; Wang, Jing-Ran; Guo, Peng-Ju; Wang, Ming-Chang

    2014-09-01

    The Moon may be considered the frontier base for deep space exploration. Spectral analysis is one of the key techniques for determining the rock and mineral compositions of the lunar surface. However, the lunar topographic relief is more pronounced than that of the Earth, so topographic correction of lunar spectral data is necessary before they are used to retrieve compositions. In the present paper, a lunar Sandmeier model is proposed that accounts for the radiance effects of both macro and ambient topographic relief, and a reflectance correction model is derived from it. The Spectral Profiler (SP) data from the KAGUYA satellite over the Sinus Iridum quadrangle are taken as an example, and digital elevation data from the Lunar Orbiter Laser Altimeter are used to calculate the slope, aspect, incidence and emergence angles, and the terrain-viewing factor for the topographic correction. The lunar surface reflectance from the SP data was then corrected by the proposed model after the direct component of irradiance on a horizontal surface was derived. As a result, the high spectral reflectance of slopes facing the sun is decreased and the low spectral reflectance of slopes facing away from the sun is compensated. The statistical histogram of reflectance-corrected pixel numbers presents a Gaussian distribution. The model is therefore robust for correcting the lunar topographic effect and estimating lunar surface reflectance.
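
    As a rough illustration of slope-aspect topographic correction of this kind, the sketch below computes the local solar incidence angle and applies a semi-empirical C-correction. This is a simplified stand-in, not the paper's Sandmeier model with its terrain-viewing factor and ambient-relief terms:

    ```python
    import math

    def cos_incidence(sun_zenith, sun_azimuth, slope, aspect):
        """Cosine of the local solar incidence angle on a tilted surface
        element (all angles in radians)."""
        return (math.cos(sun_zenith) * math.cos(slope)
                + math.sin(sun_zenith) * math.sin(slope)
                * math.cos(sun_azimuth - aspect))

    def c_correction(reflectance, sun_zenith, cos_i, c):
        """Semi-empirical C-correction: darkens sun-facing slopes
        (cos_i > cos(sun_zenith)) and brightens sun-averted ones; c is a
        band-specific empirical constant."""
        return reflectance * (math.cos(sun_zenith) + c) / (cos_i + c)
    ```

    On flat terrain cos_i reduces to cos(sun_zenith) and the correction factor is 1, leaving the reflectance unchanged, which is a useful sanity check for any topographic correction.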

  12. a Semi-Empirical Topographic Correction Model for Multi-Source Satellite Images

    Science.gov (United States)

    Xiao, Sa; Tian, Xinpeng; Liu, Qiang; Wen, Jianguang; Ma, Yushuang; Song, Zhenwei

    2018-04-01

    Topographic correction of surface reflectance in rugged terrain is a prerequisite for the quantitative application of remote sensing in mountainous areas. A physics-based radiative transfer model can be applied to correct the topographic effect and accurately retrieve the reflectance of the slope surface from high-quality satellite images such as Landsat8 OLI. However, as more and more image data become available from various sensors, we sometimes cannot obtain the accurate sensor calibration parameters and atmospheric conditions needed by a physics-based topographic correction model. This paper proposes a semi-empirical atmospheric and topographic correction model for multi-source satellite images that does not require accurate calibration parameters. Based on this model, topographically corrected surface reflectance can be obtained directly from DN data. We tested and verified the model with image data from the Chinese satellites HJ and GF. The results show that for HJ the correlation factor was reduced by almost 85% for the near-infrared bands and the overall classification accuracy increased by 14% after correction. The reflectance difference between slopes facing the sun and slopes facing away from the sun was also reduced after correction.

  13. Testing effort dependent software reliability model for imperfect debugging process considering both detection and correction

    International Nuclear Information System (INIS)

    Peng, R.; Li, Y.F.; Zhang, W.J.; Hu, Q.P.

    2014-01-01

    This paper studies the fault detection process (FDP) and fault correction process (FCP) with the incorporation of testing effort function and imperfect debugging. In order to ensure high reliability, it is essential for software to undergo a testing phase, during which faults can be detected and corrected by debuggers. The testing resource allocation during this phase, which is usually depicted by the testing effort function, considerably influences not only the fault detection rate but also the time to correct a detected fault. In addition, testing is usually far from perfect such that new faults may be introduced. In this paper, we first show how to incorporate testing effort function and fault introduction into FDP and then develop FCP as delayed FDP with a correction effort. Various specific paired FDP and FCP models are obtained based on different assumptions of fault introduction and correction effort. An illustrative example is presented. The optimal release policy under different criteria is also discussed
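
    One minimal concrete pairing of FDP and FCP along these lines can be sketched as follows, assuming an exponential testing-effort curve, a Goel-Okumoto-type detection process, no fault introduction, and a constant correction delay (all simplifications relative to the paper's model family):

    ```python
    import math

    def effort(t, w_max, beta):
        """Cumulative testing effort W(t): an exponential curve that
        saturates at the total available effort w_max."""
        return w_max * (1.0 - math.exp(-beta * t))

    def detected(t, a, b, w_max, beta):
        """Expected number of faults detected by time t: a Goel-Okumoto-type
        FDP driven by the testing effort instead of calendar time."""
        return a * (1.0 - math.exp(-b * effort(t, w_max, beta)))

    def corrected(t, a, b, w_max, beta, delay):
        """FCP modeled as the FDP delayed by a constant correction time."""
        return detected(max(t - delay, 0.0), a, b, w_max, beta)
    ```

    By construction the correction curve trails the detection curve, and because the effort saturates at w_max, the detected count approaches a(1 - e^(-b*w_max)) rather than the full fault content a.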

  14. On the gluonic correction to lepton-pair decays in a relativistic quarkonium model

    International Nuclear Information System (INIS)

    Ito, Hitoshi

    1987-01-01

    The gluonic correction to the leptonic decay of the heavy vector meson is investigated by using perturbation theory to order α s . The on-mass-shell approximation is assumed for the constituent quarks so as to ensure the gauge independence of the correction. The decay rates in the model based on the Bethe-Salpeter equation are also shown, in which the gluonic correction with a high-momentum cutoff is calculated for the off-shell quarks. It is shown that the static approximation to the correction factor (1 - 16α s /3π) is not adequate and that the gluonic correction enhances rather than suppresses the decay rates of the ground states for the c anti c and b anti b systems. (author)

  15. Radiative corrections in a vector-tensor model

    International Nuclear Information System (INIS)

    Chishtie, F.; Gagne-Portelance, M.; Hanif, T.; Homayouni, S.; McKeon, D.G.C.

    2006-01-01

    In a recently proposed model in which a vector non-Abelian gauge field interacts with an antisymmetric tensor field, it has been shown that the tensor field possesses no physical degrees of freedom. This formal demonstration is tested by computing the one-loop contributions of the tensor field to the self-energy of the vector field. It is shown that despite the large number of Feynman diagrams in which the tensor field contributes, the sum of these diagrams vanishes, confirming that it is not physical. Furthermore, if the tensor field were to couple with a spinor field, it is shown at one-loop order that the spinor self-energy is not renormalizable, and hence this coupling must be excluded. In principle though, this tensor field does couple to the gravitational field

  16. An experimental model for the surgical correction of tracheomalacia.

    Science.gov (United States)

    Shaha, A R; Burnett, C; DiMaio, T; Jaffe, B M

    1991-10-01

    Tracheomalacia may result from large intrathoracic goiters. Due to the chronic compression, particularly within the confines of the thoracic inlet, the tracheal wall weakens, with disintegration of some of the cartilaginous rings. Tracheomalacia can cause acute airway distress, particularly during the post-operative period, and may occasionally result in death. The other major cause of tracheomalacia is related to either prolonged endotracheal intubation or over-inflation of the tracheostomy cuff. While various techniques such as internal stenting, external support devices, tracheostomy, and tracheal resection have been used based on individual circumstances, no one method appears to be perfect. To further study this difficult problem, an experimental model of tracheomalacia was created in eight dogs. Six to seven rings of the tracheal cartilages were dissected submucosally. More than half of the circumference of the tracheal rings was resected. The tracheal walls were reconstructed with polytetrafluoroethylene (PTFE) grafts. The grafts strengthened the tracheal wall without causing luminal constriction. Tracheostomy was not performed on any of the dogs. All dogs tolerated the procedure well and were extubated at the conclusion of the experiment. The dogs were followed for 4 to 6 months and then sacrificed so that the tracheal wall could be examined histologically. There was considerable fibrosis leading to stiff neotrachea. The results of this experimental technique for prosthetic reconstruction to counteract problems simulating tracheomalacia are very encouraging.

  17. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    This paper focuses on the problem of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction coefficients optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a fuzzy neural network is presented to overcome the nonlinear mapping problem. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.

  18. Corrections to scaling in random resistor networks and diluted continuous spin models near the percolation threshold.

    Science.gov (United States)

    Janssen, Hans-Karl; Stenull, Olaf

    2004-02-01

    We investigate corrections to scaling induced by irrelevant operators in randomly diluted systems near the percolation threshold. The specific systems that we consider are the random resistor network and a class of continuous spin systems, such as the x-y model. We focus on a family of least irrelevant operators and determine the corrections to scaling that originate from this family. Our field-theoretic analysis carefully takes into account that irrelevant operators mix under renormalization. It turns out that long-standing results on corrections to scaling are incorrect (random resistor networks) or incomplete (continuous spin systems), respectively.

  19. A golden A5 model of leptons with a minimal NLO correction

    International Nuclear Information System (INIS)

    Cooper, Iain K.; King, Stephen F.; Stuart, Alexander J.

    2013-01-01

    We propose a new A 5 model of leptons which corrects the LO predictions of Golden Ratio mixing via a minimal NLO Majorana mass correction which completely breaks the original Klein symmetry of the neutrino mass matrix. The minimal nature of the NLO correction leads to a restricted and correlated range of the mixing angles allowing agreement within the one sigma range of recent global fits following the reactor angle measurement by Daya Bay and RENO. The minimal NLO correction also preserves the LO inverse neutrino mass sum rule leading to a neutrino mass spectrum that extends into the quasi-degenerate region allowing the model to be accessible to the current and future neutrinoless double beta decay experiments

  20. Importance of Lorentz structure in the parton model: Target mass corrections, transverse momentum dependence, positivity bounds

    International Nuclear Information System (INIS)

    D'Alesio, U.; Leader, E.; Murgia, F.

    2010-01-01

    We show that respecting the underlying Lorentz structure in the parton model has very strong consequences. Failure to insist on the correct Lorentz covariance is responsible for the existence of contradictory results in the literature for the polarized structure function g 2 (x), whereas with the correct imposition we are able to derive the Wandzura-Wilczek relation for g 2 (x) and the target-mass corrections for polarized deep inelastic scattering without recourse to the operator product expansion. We comment briefly on the problem of threshold behavior in the presence of target-mass corrections. Careful attention to the Lorentz structure has also profound implications for the structure of the transverse momentum dependent parton densities often used in parton model treatments of hadron production, allowing the k T dependence to be derived explicitly. It also leads to stronger positivity and Soffer-type bounds than usually utilized for the collinear densities.

  1. H{sup +}{sub 2} ionization by ultra-short electromagnetic pulses investigated through a non-perturbative Coulomb-Volkov approach

    Energy Technology Data Exchange (ETDEWEB)

    Rodríguez, V D [Departamento de Física, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires (Argentina); Macri, P [Departamento de Física, FCEyN, Universidad de Buenos Aires, 1428 Buenos Aires (Argentina); Instituto de Astronomía y Física del Espacio, Consejo Nacional de Investigaciones Científicas y Técnicas, 1428 Buenos Aires (Argentina); Gayet, R [CELIA, Centre Lasers Intenses et Applications, UMR 5107, Unité Mixte de Recherche CNRS-CEA-Université Bordeaux 1, Université Bordeaux 1, 351 Cours de la Libération, 33405 Talence Cedex (France)

    2005-08-14

    The sudden Coulomb-Volkov theoretical approximation has been shown to describe well atomic ionization by intense and ultra-short electromagnetic pulses, such as the pulses generated by very fast highly charged ions. This approach is extended here to investigate single ionization of homonuclear diatomic molecules by such pulses within a one-active-electron framework. Under particular conditions, a Young-like interference formula can approximately be factored out. The present calculations show interference effects originating from the molecular two-centre structure. Fivefold differential angular distributions of the ejected electron are studied as a function of the molecular orientation and internuclear distance. Both non-perturbative and perturbative regimes are examined. In the non-perturbative case, an interference pattern is visible but a main lobe, opposite to the electric field polarization direction, dominates the angular distribution. In contrast, under perturbative conditions the structure of the interferences shows analogies to the Young-like interference pattern obtained in ionization of molecules by fast electron impact. Finally, the strong dependence of these Young-like angular distributions on the internuclear distance is addressed.

  2. Non-perturbative renormalization of the chromo-magnetic operator in heavy quark effective theory and the B{sup *} - B mass splitting

    Energy Technology Data Exchange (ETDEWEB)

    Guazzini, D.; Sommer, R. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Meyer, H. [Massachusetts Institute of Technology, Cambridge, MA (United States). Center for Theoretical Physics

    2007-05-15

    We carry out the non-perturbative renormalization of the chromo-magnetic operator in Heavy Quark Effective Theory. At order 1/m of the expansion, the operator is responsible for the mass splitting between the pseudoscalar and vector B mesons. We obtain its two-loop anomalous dimension in a Schroedinger functional scheme by successive one-loop conversions to the lattice MS scheme and the MS scheme. We then compute the scale evolution of the operator non-perturbatively in the N{sub f}=0 theory between {mu} {approx}0.3 GeV and {mu} {approx}100 GeV, where contact is made with perturbation theory. The overall renormalization factor that converts the bare lattice operator to its renormalization group invariant form is given for the Wilson gauge action and two standard discretizations of the heavy-quark action. As an application, we find that this factor brings the previous quenched predictions of the B{sup *}-B mass splitting closer to the experimental value than found with a perturbative renormalization. The same renormalization factor is applicable to the spin-dependent potentials of Eichten and Feinberg. (orig.)

  3. Centre-of-mass corrections for the harmonic S+V potential model

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1986-01-01

    Centre-of-mass corrections to the mass spectrum and static properties of low-lying S-wave baryons are discussed in the context of a relativistic, independent-quark model based on a Dirac equation with an equally mixed scalar and vector confining potential of harmonic type. A more satisfactory fitting of the parameters involved is obtained, as compared with previous treatments in which CM corrections were neglected. (Author) [pt

  4. Radiative corrections to e+e- → W+W- in the Weinberg model

    NARCIS (Netherlands)

    Veltman, M.J.G.; Lemoine, M.

    1980-01-01

    The one-loop radiative corrections to the process e + e - → W + W - are calculated in the Weinberg model. The corrections are computed in a c.m. energy range of 180-1000 GeV. The dependence on the Higgs mass is studied in detail; it is found that variations in the Higgs mass from 10-1000 GeV give rise

  5. Color Mixing Correction for Post-printed Patterns on Colored Background Using Modified Particle Density Model

    OpenAIRE

    Suwa, Misako; Fujimoto, Katsuhito

    2006-01-01

    Color mixing occurs between background and foreground colors when a pattern is post-printed on a colored area, because ink is not completely opaque. This paper proposes a new method for correcting color mixing in line patterns such as characters and stamps by using a modified particle density model. The parameters of the color correction can be calculated from two sets of foreground and background colors. By employing this method, the colors of foreground patterns o...

  6. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    Science.gov (United States)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find that: (a) the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting that the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend the model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  7. An Improved Physics-Based Model for Topographic Correction of Landsat TM Images

    Directory of Open Access Journals (Sweden)

    Ainong Li

    2015-05-01

    Optical remotely sensed images of mountainous areas are subject to radiometric distortions induced by topographic effects, which need to be corrected before quantitative applications. Based on the Li model and the Sandmeier model, this paper proposes an improved physics-based model for the topographic correction of Landsat Thematic Mapper (TM) images. The model employs Normalized Difference Vegetation Index (NDVI) thresholds to approximately divide land targets into eleven groups, owing to NDVI's lower sensitivity to topography and its significant role in indicating land cover type. Within each group of terrestrial targets, corresponding MODIS BRDF (Bidirectional Reflectance Distribution Function) products were used to account for the land surface's BRDF effect, and topographic effects were corrected without the Lambertian assumption. The methodology was tested with two TM scenes of severely rugged mountain areas acquired under different sun elevation angles. Results demonstrated that the reflectance of sun-averted slopes was evidently enhanced and the overall quality of the images improved, with the topographic effect being effectively suppressed. Correlation coefficients between near-infrared band reflectance and illumination condition were reduced almost to zero, and coefficients of variation also showed some reduction. In comparison with two other physics-based models (the Sandmeier model and the Li model), the proposed model showed favorable results on the two tested Landsat scenes. With the almost half-century accumulation of Landsat data and the successive launch and operation of Landsat 8, the improved model can be potentially helpful for the topographic correction of Landsat and Landsat-like data.

  8. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    sample). The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons … In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data …

  10. A Phillips curve interpretation of error-correction models of the wage and price dynamics

    DEFF Research Database (Denmark)

    Harck, Søren H.

    2009-01-01

    This paper presents a model of employment, distribution and inflation in which a modern error-correction specification of the nominal wage and price dynamics (referring to claims on income by workers and firms) occupies a prominent role. It is brought out, explicitly, how this rather typical error-correction setting, which actually seems to capture the wage and price dynamics of many large-scale econometric models quite well, is fully compatible with the notion of an old-fashioned Phillips curve with finite slope. It is shown how the steady-state impact of various shocks to the model can be profitably …

  11. Study on fitness functions of genetic algorithm for dynamically correcting nuclide atmospheric diffusion model

    International Nuclear Information System (INIS)

    Ji Zhilong; Ma Yuanwei; Wang Dezhong

    2014-01-01

    Background: In atmospheric diffusion models for radioactive nuclides, the empirical dispersion coefficients were deduced under certain experimental conditions, and their difference from nuclear accident conditions is a source of deviation. A better estimate of a radioactive nuclide's actual dispersion process can be obtained by correcting the dispersion coefficients with observation data, and a Genetic Algorithm (GA) is an appropriate method for this correction procedure. Purpose: This study analyzes the influence of the fitness function on the correction procedure and on the forecast ability of the diffusion model. Methods: GA, coupled with a Lagrangian dispersion model, was used in a numerical simulation to compare the impact of four fitness functions on the correction result. Results: In the numerical simulation, the fitness function that takes observation deviation into consideration stands out when significant deviation exists in the observed data. After performing the correction procedure on the Kincaid experiment data, a significant boost was observed in the diffusion model's forecast ability. Conclusion: As the results show, in order to improve a dispersion model's forecast ability using GA, observation data should be given different weights in the fitness function corresponding to their errors. (authors)
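
    The conclusion above, weighting residuals by observation error inside the fitness function, can be sketched as an inverse-variance-weighted misfit. This is a generic illustration, not one of the four fitness functions compared in the paper:

    ```python
    def weighted_fitness(predicted, observed, sigma):
        """GA fitness for calibrating dispersion coefficients: the inverse of
        a chi-square-like misfit in which each receptor's residual is scaled
        by its observation error sigma, so noisy measurements constrain the
        correction less than precise ones."""
        misfit = sum(((p - o) / s) ** 2
                     for p, o, s in zip(predicted, observed, sigma))
        return 1.0 / (1.0 + misfit)
    ```

    A perfect match gives fitness 1.0, and the same absolute residual is penalized less at a receptor with a large reported observation error than at a precise one.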

  12. Quantum-corrected drift-diffusion models for transport in semiconductor devices

    International Nuclear Information System (INIS)

    De Falco, Carlo; Gatti, Emilio; Lacaita, Andrea L.; Sacco, Riccardo

    2005-01-01

    In this paper, we propose a unified framework for Quantum-corrected drift-diffusion (QCDD) models in nanoscale semiconductor device simulation. QCDD models are presented as a suitable generalization of the classical drift-diffusion (DD) system, each particular model being identified by the constitutive relation for the quantum-correction to the electric potential. We examine two special, and relevant, examples of QCDD models; the first one is the modified DD model named Schroedinger-Poisson-drift-diffusion, and the second one is the quantum-drift-diffusion (QDD) model. For the decoupled solution of the two models, we introduce a functional iteration technique that extends the classical Gummel algorithm widely used in the iterative solution of the DD system. We discuss the finite element discretization of the various differential subsystems, with special emphasis on their stability properties, and illustrate the performance of the proposed algorithms and models on the numerical simulation of nanoscale devices in two spatial dimensions

  13. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    Science.gov (United States)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Model (DEM) is one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, the currently available global topographic data is confronted with limitations for application in 2-D hydraulic modeling, mainly due to the existence of vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. Firstly, we employ the global vegetation corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM to include both vegetation height and SRTM vegetation signal. Then, a newly released DEM, removing both vegetation bias and random errors (i.e. Multi-Error Removed DEM), is employed to overcome the limitation of height errors. Last, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficiency of spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of Google Earth platform and Remote Sensing imagery; and (c) removing the positive biases of the raised segment in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM utilizes easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in Huifa River Basin (China) is simulated on the original DEM, Bare-Earth DEM, Multi-Error removed DEM, and hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of four different DEMs and favorable results have been obtained on the corrected DEM.
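
    Step (c), removing the positive biases of raised segments along the extracted river networks, can be approximated with a simple running-minimum pass over a stream profile. The paper's actual correction is based on bed slope, so this is only a crude sketch:

    ```python
    def remove_raised_segments(profile):
        """Given DEM elevations sampled from upstream to downstream along an
        extracted stream, lower every raised cell to the running minimum so
        the bed is monotonically non-increasing and flow connectivity along
        the river network is restored."""
        corrected = []
        running_min = float("inf")
        for z in profile:
            running_min = min(running_min, z)
            corrected.append(running_min)
        return corrected
    ```

    For example, a profile of [10, 9, 11, 8, 8] becomes [10, 9, 9, 8, 8]: the spurious 11 m bump that would block the flow path is cut down to the upstream minimum.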

  14. A metapopulation model for the spread of MRSA in correctional facilities

    Directory of Open Access Journals (Sweden)

    Marc Beauparlant

    2016-10-01

    The spread of methicillin-resistant strains of Staphylococcus aureus (MRSA) in health-care settings has become increasingly difficult to control, and MRSA has since been able to spread in the general community. The prevalence of MRSA within the general public has caused outbreaks in groups of people in close quarters such as military barracks, gyms, daycare centres and correctional facilities. Correctional facilities are of particular importance for spreading MRSA, as inmates are often in close proximity and have limited access to hygienic products and clean clothing. Although these conditions are ideal for spreading MRSA, a recent study has suggested that recurrent epidemics are caused by the influx of colonized or infected individuals into the correctional facility. In this paper, we further investigate the effects of community dynamics on the spread of MRSA within the correctional facility and determine whether recidivism has a significant effect on disease dynamics. Using a simplified hotspot model ignoring disease dynamics within the correctional facility, as well as two metapopulation models, we demonstrate that outbreaks in correctional facilities can be driven by community dynamics even when spread between inmates is restricted. We also show that disease dynamics within the correctional facility and their effect on the outlying community may be ignored due to the smaller size of the incarcerated population. This will allow construction of simpler models that consider the effects of many MRSA hotspots interacting with the general community. It is suspected that the cumulative effects of hotspots for MRSA would have a stronger feedback effect in other community settings. Keywords: methicillin-resistant Staphylococcus aureus, hotspots, mathematical model, metapopulation model, Latin Hypercube Sampling
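
    A minimal two-patch SIS-type metapopulation of the kind discussed, one community patch and one facility patch coupled by incarceration and release flows, can be sketched as follows. All parameter values are illustrative assumptions, not fitted to MRSA data:

    ```python
    def simulate(days, dt=0.01):
        """Euler integration of a two-patch SIS model: a community patch and
        a correctional-facility patch coupled by incarceration (mu_in) and
        release (mu_out) flows. Returns (Sc, Ic, Sf, If) after `days`."""
        beta_c, beta_f = 0.08, 0.25   # transmission rates: community, facility
        gamma = 0.05                  # decolonization (recovery) rate
        mu_in, mu_out = 0.001, 0.01   # incarceration and release rates
        Sc, Ic = 9990.0, 10.0         # community: susceptible, colonized
        Sf, If = 500.0, 0.0           # facility: susceptible, colonized
        for _ in range(int(days / dt)):
            Nc, Nf = Sc + Ic, Sf + If
            inf_c = beta_c * Sc * Ic / Nc
            inf_f = beta_f * Sf * If / Nf if Nf > 0 else 0.0
            dSc = -inf_c + gamma * Ic - mu_in * Sc + mu_out * Sf
            dIc = inf_c - gamma * Ic - mu_in * Ic + mu_out * If
            dSf = -inf_f + gamma * If + mu_in * Sc - mu_out * Sf
            dIf = inf_f - gamma * If + mu_in * Ic - mu_out * If
            Sc += dSc * dt
            Ic += dIc * dt
            Sf += dSf * dt
            If += dIf * dt
        return Sc, Ic, Sf, If
    ```

    Even with the facility starting free of colonization, the mu_in inflow of colonized community members seeds a facility outbreak, echoing the paper's point that community dynamics can drive outbreaks inside the facility.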

  15. One loop electro-weak radiative corrections in the standard model

    International Nuclear Information System (INIS)

    Kalyniak, P.; Sundaresan, M.K.

    1987-01-01

    This paper reports on the effect of radiative corrections in the standard model. A sensitive test of the three-gauge-boson vertices is expected to come from the work at LEPII, in which the reaction e⁺e⁻ → W⁺W⁻ can occur. Two calculations of radiative corrections to the reaction e⁺e⁻ → W⁺W⁻ exist at present. The results of the calculations, although very similar, disagree with one another as to the actual magnitude of the correction. Some of the reasons for the disagreement are understood. However, for the reasons mentioned below, another look must be taken at these lengthy calculations to resolve the differences between the two previous calculations; this is what is being done in the present work. There are a number of reasons why we must take another look at the calculation of the radiative corrections. The previous calculations were carried out before the UA1 and UA2 data on W and Z bosons were obtained. Experimental groups require a computer program which can readily calculate the radiative corrections ab initio for various experimental conditions. The normalization of sin²θ_W in the previous calculations was done in a way which is not convenient for use in the experimental work. It would be desirable to have the analytical expressions for the corrections available so that the renormalization-scheme dependence of the corrections could be studied

  16. Evaluation of Ocean Tide Models Used for Jason-2 Altimetry Corrections

    DEFF Research Database (Denmark)

    Fok, H.S.; Baki Iz, H.; Shum, C. K.

    2010-01-01

    It has been more than a decade since the last comprehensive accuracy assessment of global ocean tide models. Here, we conduct an evaluation of the barotropic ocean tide corrections, which were computed using FES2004 and GOT00.2, and other models on the Jason-2 altimetry Geophysical Data Record (G...

  17. A Hierarchical Bayes Error Correction Model to Explain Dynamic Effects of Price Changes

    NARCIS (Netherlands)

    D. Fok (Dennis); R. Paap (Richard); C. Horváth (Csilla); Ph.H.B.F. Franses (Philip Hans)

    2005-01-01

    The authors put forward a sales response model to explain the differences in immediate and dynamic effects of promotional prices and regular prices on sales. The model consists of a vector autoregression rewritten in error-correction format which allows to disentangle the immediate

  18. On the Correctness of Real-Time Modular Computer Systems Modeling with Stopwatch Automata Networks

    Directory of Open Access Journals (Sweden)

    Alevtina B. Glonina

    2018-01-01

    Full Text Available In this paper, we consider a schedulability analysis problem for real-time modular computer systems (RT MCS). A system configuration is called schedulable if all the jobs finish within their deadlines. The authors propose a stopwatch automata-based general model of RT MCS operation. A model instance for a given RT MCS configuration is a network of stopwatch automata (NSA) and it can be built automatically using the general model. A system operation trace, which is necessary for checking the schedulability criterion, can be obtained from the corresponding NSA trace. The paper substantiates the correctness of the proposed approach. A set of correctness requirements to models of system components and to the whole system model was derived from RT MCS specifications. The authors proved that if all models of system components satisfy the corresponding requirements, the whole system model built according to the proposed approach satisfies its correctness requirements and is deterministic (i.e. for a given configuration a trace generated by the corresponding model run is uniquely determined). The model determinism implies that any model run can be used for schedulability analysis. This fact is crucial for the approach efficiency, as the number of possible model runs grows exponentially with the number of jobs in a system. Correctness requirements to models of system components can be checked automatically by a verifier using the observer automata approach. The authors proved by using the UPPAAL verifier that all the developed models of system components satisfy the corresponding requirements. User-defined models of system components can also be used for system modeling if they satisfy the requirements.

  19. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  20. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    Science.gov (United States)

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  1. Bias correction in the realized stochastic volatility model for daily volatility on the Tokyo Stock Exchange

    Science.gov (United States)

    Takaishi, Tetsuya

    2018-06-01

    The realized stochastic volatility model has been introduced to estimate more accurate volatility by using both daily returns and realized volatility. The main advantage of the model is that no special bias-correction factor for the realized volatility is required a priori. Instead, the model introduces a bias-correction parameter responsible for the bias hidden in realized volatility. We empirically investigate the bias-correction parameter for realized volatilities calculated at various sampling frequencies for six stocks on the Tokyo Stock Exchange, and then show that the dynamic behavior of the bias-correction parameter as a function of sampling frequency is qualitatively similar to that of the Hansen-Lunde bias-correction factor although their values are substantially different. Under the stochastic diffusion assumption of the return dynamics, we investigate the accuracy of estimated volatilities by examining the standardized returns. We find that while the moments of the standardized returns from low-frequency realized volatilities are consistent with the expectation from the Gaussian variables, the deviation from the expectation becomes considerably large at high frequencies. This indicates that the realized stochastic volatility model itself cannot completely remove bias at high frequencies.
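
    The frequency dependence of realized-volatility bias discussed above can be illustrated with a toy simulation (not the paper's data or model): market microstructure noise inflates realized variance at high sampling frequencies, which is the kind of bias the model's correction parameter is meant to absorb. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_intraday = 500, 390          # e.g. 390 one-minute returns per day
true_vol = 0.01                        # constant daily volatility for the sketch
# Latent intraday returns plus microstructure noise (the source of the bias):
r = rng.normal(0.0, true_vol / np.sqrt(n_intraday), (n_days, n_intraday))
noise = 0.0005 * rng.normal(size=(n_days, n_intraday + 1))
r_obs = r + np.diff(noise, axis=1)     # i.i.d. noise in prices -> MA(1) in returns

def realized_variance(returns, k):
    """RV from every k-th observation (smaller k = higher sampling frequency)."""
    coarse = returns.reshape(returns.shape[0], -1, k).sum(axis=2)
    return (coarse ** 2).sum(axis=1)

rv_high = realized_variance(r_obs, 1)    # 1-minute sampling: noise-dominated
rv_low = realized_variance(r_obs, 30)    # 30-minute sampling: nearly unbiased
print(rv_high.mean() / true_vol**2, rv_low.mean() / true_vol**2)
```

    The high-frequency ratio is well above one while the low-frequency ratio stays near one, mirroring the abstract's finding that deviations become large only at high sampling frequencies.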

  2. An effort allocation model considering different budgetary constraint on fault detection process and fault correction process

    Directory of Open Access Journals (Sweden)

    Vijay Kumar

    2016-01-01

    Full Text Available Fault detection process (FDP) and fault correction process (FCP) are important phases of the software development life cycle (SDLC). It is essential for software to undergo a testing phase, during which faults are detected and corrected. The main goal of this article is to allocate the testing resources in an optimal manner to minimize the cost during the testing phase using FDP and FCP under a dynamic environment. In this paper, we first assume there is a time lag between fault detection and fault correction; thus, removal of a fault is performed after the fault is detected. In addition, the detection process and correction process are taken to be independent simultaneous activities with different budgetary constraints. A structured optimal policy based on optimal control theory is proposed for software managers to optimize the allocation of the limited resources under the reliability criteria. Furthermore, the release policy for the proposed model is also discussed. A numerical example is given in support of the theoretical results.

  3. Magnetic corrections to π-π scattering lengths in the linear sigma model

    Science.gov (United States)

    Loewe, M.; Monje, L.; Zamora, R.

    2018-03-01

    In this article, we consider the magnetic corrections to π-π scattering lengths in the frame of the linear sigma model. For this, we consider all the one-loop corrections in the s, t, and u channels, associated to the insertion of a Schwinger propagator for charged pions, working in the region of small values of the magnetic field. Our calculation relies on an appropriate expansion for the propagator. It turns out that the leading scattering length, l = 0 in the S channel, increases for an increasing value of the magnetic field in the isospin I = 2 case, whereas the opposite effect is found for the I = 0 case. The isospin symmetry is valid because the insertion of the magnetic field occurs through the absolute value of the electric charges. The channel I = 1 does not receive any corrections. These results, for the channels I = 0 and I = 2, are opposite with respect to the thermal corrections found previously in the literature.

  4. Robust recurrent neural network modeling for software fault detection and correction prediction

    International Nuclear Information System (INIS)

    Hu, Q.P.; Xie, M.; Ng, S.H.; Levitin, G.

    2007-01-01

    Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to model fault detection, and fault correction process is assumed to be a delayed process. On the other hand, the artificial neural networks model, as a data-driven approach, tries to model these two processes together with no assumptions. Specifically, feedforward backpropagation networks have shown their advantages over analytical models in fault number predictions. In this paper, the following approach is explored. First, recurrent neural networks are applied to model these two processes together. Within this framework, a systematic networks configuration approach is developed with genetic algorithm according to the prediction performance. In order to provide robust predictions, an extra factor characterizing the dispersion of prediction repetitions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are developed with respect to a real data set

  5. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling and to whether the measurement error variance is constant or not, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
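
    As a deliberately simplified illustration of the setting above, the sketch below simulates logistic regression with a mismeasured predictor observed in two replicates, applies regression calibration as the correction step, and uses a model-based bootstrap (regenerating responses from the corrected fit) to obtain a standard error. The data-generating values and the choice of regression calibration are assumptions for illustration, not the article's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(x, y, iters=25):
    """Single-predictor logistic regression via Newton/IRLS."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1.0 - p)
        b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return b

# Hypothetical data-generating values (for illustration only)
n, beta0, beta1, sig_u = 2000, -1.0, 1.0, 0.5
x_true = rng.normal(size=n)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x_true)))).astype(float)
w = x_true[:, None] + sig_u * rng.normal(size=(n, 2))   # two replicate measures
wbar = w.mean(axis=1)

# Regression calibration: shrink wbar toward its mean by the reliability ratio
s2u = 0.5 * np.mean((w[:, 0] - w[:, 1]) ** 2)   # estimated error variance
lam = (wbar.var() - s2u / 2.0) / wbar.var()     # reliability of the 2-replicate mean
xhat = wbar.mean() + lam * (wbar - wbar.mean())

b_naive = fit_logistic(wbar, y)   # attenuated slope
b_corr = fit_logistic(xhat, y)    # corrected slope

# Model-based bootstrap: regenerate responses from the corrected fit, refit
boot = []
for _ in range(200):
    p_b = 1.0 / (1.0 + np.exp(-(b_corr[0] + b_corr[1] * xhat)))
    y_b = (rng.random(n) < p_b).astype(float)
    boot.append(fit_logistic(xhat, y_b)[1])
print(b_naive[1], b_corr[1], np.std(boot))
```

    The corrected slope is larger than the naive, attenuated one, and the bootstrap spread gives a standard error that reflects the fitted model rather than simple resampling of observations.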

  6. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of no need for hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions-of-interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-noise-ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
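
    The iterative framework described above (estimate scatter from the current image estimate, subtract it from the measurement, repeat) can be mimicked in one dimension with a toy forward scatter model. The blur-and-scale `scatter_model` below is a stand-in assumption, not the paper's analytically derived physics model:

```python
import numpy as np

def scatter_model(primary, alpha=0.3, width=5):
    """Hypothetical forward scatter model: a scaled moving-average blur."""
    kernel = np.ones(2 * width + 1) / (2 * width + 1)
    return alpha * np.convolve(primary, kernel, mode="same")

rng = np.random.default_rng(2)
true_primary = rng.uniform(1.0, 2.0, 256)
measured = true_primary + scatter_model(true_primary)  # scatter-contaminated data

primary = measured.copy()                              # initial guess: no correction
for it in range(10):
    primary = measured - scatter_model(primary)        # fixed-point iteration
    err = np.max(np.abs(primary - true_primary))
print(err)
```

    Because the scatter operator here is a contraction (alpha < 1), the fixed-point iteration converges quickly toward the scatter-free signal, mirroring the fast convergence reported in the abstract.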

  7. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    International Nuclear Information System (INIS)

    King, Stephen F.; Zhang, Jue; Zhou, Shun

    2016-01-01

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  8. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the Lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained, which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z⁰ partial widths, are discussed in detail. Finally the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  9. An Angular Leakage Correction for Modeling a Hemisphere, Using One-Dimensional Spherical Coordinates

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.; Eberle, C.S.

    2003-01-01

    A radially dependent, angular leakage correction was applied to a one-dimensional, multigroup neutron diffusion theory computer code to accurately model hemispherical geometry. This method allows the analyst to model hemispherical geometry, important in nuclear criticality safety analyses, with one-dimensional computer codes, which execute very quickly. Rapid turnaround times for scoping studies thus may be realized. This method uses an approach analogous to an axial leakage correction in a one-dimensional cylinder calculation. The two-dimensional Laplace operator was preserved in spherical geometry using a leakage correction proportional to 1/r², which was folded into the one-dimensional spherical calculation on a mesh-by-mesh basis. Hemispherical geometry is of interest to criticality safety because of its similarity to piles of spilled fissile material and accumulations of fissile material in process containers. A hemisphere also provides a more realistic calculational model for spilled fissile material than does a sphere
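
    The mesh-by-mesh 1/r² fold-in described above can be sketched for a one-group diffusion k-eigenvalue problem. The cross sections, geometry, and leakage strength below are made-up illustration values; the sketch only shows that the folded-in term enters like an extra absorption and lowers k, not the actual code or data of the paper:

```python
import numpy as np

def solve_k(leak_c, N=200, R=10.0, D=1.0, sig_a=0.02, nu_sig_f=0.025):
    """One-group, 1-D spherical diffusion k-eigenvalue by power iteration,
    with an angular-leakage term proportional to 1/r^2 folded in mesh-by-mesh.
    All cross sections are made-up illustration values (zero-flux edge at r=R)."""
    dr = R / N
    r = (np.arange(N) + 0.5) * dr
    sig_leak = D * leak_c / r**2          # the per-mesh leakage correction
    A = np.zeros((N, N))
    for i in range(N):
        rp, rm = r[i] + dr / 2, r[i] - dr / 2   # cell-edge radii
        if i + 1 < N:
            A[i, i + 1] = -D * rp**2 / (dr**2 * r[i] ** 2)
        if i - 1 >= 0:
            A[i, i - 1] = -D * rm**2 / (dr**2 * r[i] ** 2)
        A[i, i] = D * (rp**2 + rm**2) / (dr**2 * r[i] ** 2) + sig_a + sig_leak[i]
    phi, k = np.ones(N), 1.0
    for _ in range(200):                  # power iteration on A^-1 * F
        psi = np.linalg.solve(A, nu_sig_f * phi)
        k = psi.sum() / phi.sum()
        phi = psi / psi.max()
    return k

print(solve_k(0.0), solve_k(1.0))   # the leakage term lowers k, as expected
```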

  10. Renormalisation group corrections to the littlest seesaw model and maximal atmospheric mixing

    Energy Technology Data Exchange (ETDEWEB)

    King, Stephen F. [School of Physics and Astronomy, University of Southampton,SO17 1BJ Southampton (United Kingdom); Zhang, Jue [Center for High Energy Physics, Peking University,Beijing 100871 (China); Zhou, Shun [Center for High Energy Physics, Peking University,Beijing 100871 (China); Institute of High Energy Physics, Chinese Academy of Sciences,Beijing 100049 (China)

    2016-12-06

    The Littlest Seesaw (LS) model involves two right-handed neutrinos and a very constrained Dirac neutrino mass matrix, involving one texture zero and two independent Dirac masses, leading to a highly predictive scheme in which all neutrino masses and the entire PMNS matrix is successfully predicted in terms of just two real parameters. We calculate the renormalisation group (RG) corrections to the LS predictions, with and without supersymmetry, including also the threshold effects induced by the decoupling of heavy Majorana neutrinos both analytically and numerically. We find that the predictions for neutrino mixing angles and mass ratios are rather stable under RG corrections. For example we find that the LS model with RG corrections predicts close to maximal atmospheric mixing, θ₂₃ = 45° ± 1°, in most considered cases, in tension with the latest NOvA results. The techniques used here apply to other seesaw models with a strong normal mass hierarchy.

  11. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.

  12. The modified version of the centre-of-mass correction to the bag model

    International Nuclear Information System (INIS)

    Bartelski, J.; Tatur, S.

    1986-01-01

    We propose an improvement of the recently considered version of the centre-of-mass correction to the bag model. We identify a nucleon bag with a physical nucleon confined in an external fictitious spherical well potential, with an additional external fictitious pressure characterized by the parameter b. The introduction of such a pressure restores the conservation of the canonical energy-momentum tensor, which was lost in the former model. We propose several methods to determine the numerical value of b. We calculate the Roper resonance mass as well as the static electroweak parameters of a nucleon with centre-of-mass corrections taken into account. 7 refs., 1 tab. (author)

  13. Actuator Disc Model Using a Modified Rhie-Chow/SIMPLE Pressure Correction Algorithm

    DEFF Research Database (Denmark)

    Rethore, Pierre-Elouan; Sørensen, Niels

    2008-01-01

    An actuator disc model for the flow solver EllipSys (2D&3D) is proposed. It is based on a correction of the Rhie-Chow algorithm for using discrete body forces in a collocated-variable finite volume CFD code. It is compared with three cases where an analytical solution is known.

  14. Long-range correlation in synchronization and syncopation tapping: a linear phase correction model.

    Directory of Open Access Journals (Sweden)

    Didier Delignières

    Full Text Available We propose in this paper a model accounting for the increase in long-range correlations observed in asynchrony series in syncopation tapping, as compared with synchronization tapping. Our model is an extension of the linear phase correction model for synchronization tapping. We suppose that the timekeeper represents a fractal source in the system, and that a process of estimation of the half-period of the metronome, obeying a random-walk dynamics, combines with the linear phase correction process. Comparing experimental and simulated series, we show that our model allows accounting for the experimentally observed pattern of serial dependence. This model completes previous modeling solutions proposed for self-paced and synchronization tapping, providing a unifying framework for event-based timing.
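
    The linear phase correction core that the model above extends can be sketched as follows. For brevity the timekeeper here is white noise rather than the fractal source plus half-period estimation process the paper proposes, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha, tau = 2000, 0.3, 500.0      # taps, correction gain, metronome period (ms)
T = tau + 10.0 * rng.normal(size=n)   # timekeeper intervals (white noise here)
M = 5.0 * rng.normal(size=n + 1)      # motor delays
A = np.zeros(n + 1)                   # asynchronies between tap and metronome
for i in range(n):
    # Linear phase correction: a fraction alpha of the asynchrony is corrected
    A[i + 1] = (1.0 - alpha) * A[i] + (T[i] - tau) + (M[i + 1] - M[i])

def lag1(x):
    """Lag-1 autocorrelation."""
    z = x - x.mean()
    return (z[:-1] * z[1:]).mean() / (z * z).mean()

iti = np.diff(A) + tau                # inter-tap intervals
print(lag1(A[1:]), lag1(iti))
```

    With partial correction (0 < alpha < 1) the asynchronies are positively autocorrelated while the inter-tap intervals show the negative lag-1 dependence typical of phase-corrected tapping; the paper's fractal timekeeper is what turns this short-range structure into long-range correlation.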

  15. The Simulation and Correction to the Brain Deformation Based on the Linear Elastic Model in IGS

    Institute of Scientific and Technical Information of China (English)

    MU Xiao-lan; SONG Zhi-jian

    2004-01-01

    Brain deformation is a vital factor affecting the precision of image-guided surgery (IGS), and its simulation and correction have recently become a research focus. The research organizations that first addressed brain deformation with physical models include the Image Processing and Analysis department of Yale University and the Biomedical Modeling Lab of Vanderbilt University. The former uses the linear elastic model; the latter uses the consolidation model. The linear elastic model only needs to be driven by the surface displacement of the exposed brain cortex, which is more convenient to measure in the clinic.

  16. Establishment and correction of an Echelle cross-prism spectrogram reduction model

    Science.gov (United States)

    Zhang, Rui; Bayanheshig; Li, Xiaotian; Cui, Jicheng

    2017-11-01

    The accuracy of an echelle cross-prism spectrometer depends on the degree to which the spectrum reduction model matches the actual state of the spectrometer. However, adjustment errors can change the actual state of the spectrometer so that the reduction model no longer matches it, producing an inaccurate wavelength calibration. Calibration of the spectrogram reduction model is therefore important for the analysis of any echelle cross-prism spectrometer. In this study, the spectrogram reduction model of an echelle cross-prism spectrometer was established. The image positions predicted by the model were simulated as functions of the system parameters, such as the prism refractive index and the focal length, to assess their influence on the calculation results. The model was divided into different wavebands. The iterative method, the least-squares principle, and element lamps with known characteristic wavelengths were used to calibrate the spectral model in each waveband and obtain the actual values of the system parameters. After correction, the deviation between the actual x- and y-coordinates and the coordinates calculated by the model is less than one pixel. The model corrected by this method thus reflects the system parameters in the current spectrometer state and can assist in accurate wavelength extraction. Repeated correction of the model can also guide instrument installation and adjustment, reducing the difficulty of alignment.

  17. Gravity loop corrections to the standard model Higgs in Einstein gravity

    International Nuclear Information System (INIS)

    Yugo Abe; Masaatsu Horikoshi; Takeo Inami

    2016-01-01

    We study one-loop quantum gravity corrections to the standard model Higgs potential V(φ) à la Coleman-Weinberg and examine the stability question of V(φ) in the energy region of the Planck mass scale, μ ≃ M_Pl (M_Pl = 1.22×10¹⁹ GeV). We calculate the gravity one-loop corrections to V(φ) in Einstein gravity by using the momentum cut-off Λ. We have found that even small gravity corrections compete with the standard model term of V(φ) and affect the stability argument of the latter part alone. This is because the latter part is nearly zero in the energy region of M_Pl. (author)

  18. Associated heavy quarks pair production with Higgs as a tool for a search for non-perturbative effects of the electroweak interaction at the LHC

    Directory of Open Access Journals (Sweden)

    B.A. Arbuzov

    2017-09-01

    Full Text Available Assuming the existence of an anomalous triple electroweak-boson interaction defined by a coupling constant λ, we calculate its contribution to interactions of the Higgs with pairs of heavy particles. Bearing in mind the experimental restrictions −0.011 < λ < 0.011, we present results for possible effects in the processes pp → W⁺W⁻H, pp → W⁺ZH, pp → W⁻ZH, pp → tt̄H, and pp → bb̄H. Effects could be significant, for negative sign of λ, in the production of heavy quark (t, b) pairs in association with the Higgs. In the calculations we rely on results of the non-perturbative approach to a spontaneous generation of effective interactions, which defines the form factor of the three-boson anomalous interaction.

  19. Correction of Flow Curves and Constitutive Modelling of a Ti-6Al-4V Alloy

    Directory of Open Access Journals (Sweden)

    Ming Hu

    2018-04-01

    Full Text Available Isothermal uniaxial compressions of a Ti-6Al-4V alloy were carried out in the temperature range of 800–1050 °C and the strain rate range of 0.001–1 s⁻¹. The effects of friction between the specimen and the anvils, as well as the increase in temperature caused by high-strain-rate deformation, were considered, and the flow curves were corrected accordingly. Constitutive models were discussed based on the corrected flow curves. The correlation coefficient and average absolute relative error for the strain-compensated Arrhenius-type constitutive model are 0.986 and 9.168%, respectively, while the values for a modified Johnson-Cook constitutive model are 0.924 and 22.673%, respectively. Therefore, the strain-compensated Arrhenius-type constitutive model has a better prediction capability than the modified Johnson-Cook constitutive model.
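
    Once fitted, an Arrhenius-type (sinh-law) constitutive model of the kind compared above predicts flow stress from strain rate and temperature through the Zener-Hollomon parameter. The constants below are illustrative placeholders, not the fitted Ti-6Al-4V values from the paper:

```python
import numpy as np

R_GAS = 8.314   # gas constant, J/(mol K)
# Illustrative (not fitted) constants: activation energy Q, structure factor A,
# stress multiplier alpha, stress exponent n.
Q, A, alpha, n = 6.0e5, 1.0e22, 0.01, 4.0

def flow_stress(strain_rate, T_kelvin):
    """Sinh-law Arrhenius prediction via the Zener-Hollomon parameter (MPa)."""
    Z = strain_rate * np.exp(Q / (R_GAS * T_kelvin))   # Zener-Hollomon parameter
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

print(flow_stress(0.01, 1173), flow_stress(1.0, 1073))
```

    As expected for hot deformation, the predicted flow stress rises with strain rate and falls with temperature; strain compensation in the paper makes Q, A, alpha, and n polynomial functions of strain rather than constants.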

  20. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

    We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
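
    The local low-rank property the method relies on can be verified numerically: if each photo maps the same local scene colors through its own per-channel affine transform, the stacked observations form a matrix of rank at most four. The sketch below illustrates that property with synthetic colors; it is not the paper's recovery algorithm, which additionally uses robust low-rank matrix recovery to reject gross errors:

```python
import numpy as np

rng = np.random.default_rng(4)
P, K = 1000, 6                        # pixels in a local region, number of photos
C = rng.random((3, P))                # true local scene colors (RGB x pixels)
rows = []
for _ in range(K):                    # each photo applies per-channel gain/offset
    gain = rng.uniform(0.5, 1.5, size=(3, 1))
    offset = rng.uniform(-0.1, 0.1, size=(3, 1))
    rows.append(gain * C + offset)
M = np.vstack(rows)                   # (3K x P) stacked local color observations
s = np.linalg.svd(M, compute_uv=False)
print(s[:6] / s[0])                   # rank <= 4: values beyond the 4th collapse
```

    Each observed row is a combination of the three true color rows plus the all-ones row, so the singular values beyond the fourth vanish to machine precision; gross corruptions break this structure locally, which is what a robust low-rank model exploits to detect and repair them.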

  1. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    Science.gov (United States)

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
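
    The calibration idea, regressing the lidar error on a vegetation index at surveyed points and then removing the predicted error, can be sketched as follows (all data and both helper functions are hypothetical stand-ins for the LEAN workflow, not the authors' code):

    ```python
    import numpy as np

    def fit_lean_correction(lidar_z, ndvi, rtk_z):
        """Fit error = a + b * NDVI at RTK-GPS survey points (toy LEAN-style model)."""
        X = np.column_stack([np.ones_like(ndvi), ndvi])
        coef, *_ = np.linalg.lstsq(X, lidar_z - rtk_z, rcond=None)
        return coef

    def apply_correction(lidar_z, ndvi, coef):
        return lidar_z - (coef[0] + coef[1] * ndvi)

    # Synthetic marsh: lidar overestimates elevation under denser vegetation.
    rng = np.random.default_rng(1)
    true_z = rng.normal(1.0, 0.1, 200)
    ndvi = rng.uniform(0.2, 0.9, 200)
    lidar = true_z + 0.05 + 0.3 * ndvi + rng.normal(0, 0.02, 200)

    coef = fit_lean_correction(lidar, ndvi, true_z)
    corrected = apply_correction(lidar, ndvi, coef)
    rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
    print(rmse(lidar, true_z) > rmse(corrected, true_z))  # True: correction reduces RMSE
    ```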

  2. Friction correction for model ship resistance and propulsion tests in ice at NRC's OCRE-RC

    Directory of Open Access Journals (Sweden)

    Michael Lau

    2018-05-01

    Full Text Available This paper documents the results of a preliminary analysis of the influence of the hull-ice friction coefficient on model resistance and power predictions and their correlation with full-scale measurements. The study is based on previous model-scale/full-scale correlations performed on the National Research Council - Ocean, Coastal, and River Engineering Research Center's (NRC/OCRE-RC) model test data. There are two objectives for the current study: (1) to validate NRC/OCRE-RC's modeling standards regarding its practice of specifying a CFC (correlation friction coefficient) of 0.05 for all its ship models; and (2) to develop a correction methodology for its resistance and propulsion predictions when the model is prepared with an ice friction coefficient that deviates slightly from the CFC of 0.05. The mean CFCs of 0.056 and 0.050 for perfect correlation, as computed from the resistance and power analyses, respectively, justify NRC/OCRE-RC's selection of 0.05 as the CFC for all its models. Furthermore, a procedure for minor friction corrections is developed. Keywords: Model test, Ice resistance, Power, Friction correction, Correlation friction coefficient

  3. Improved Model for Depth Bias Correction in Airborne LiDAR Bathymetry Systems

    Directory of Open Access Journals (Sweden)

    Jianhu Zhao

    2017-07-01

    Full Text Available Airborne LiDAR bathymetry (ALB is efficient and cost effective in obtaining shallow water topography, but often produces a low-accuracy sounding solution due to the effects of ALB measurements and ocean hydrological parameters. In bathymetry estimates, peak shifting of the green bottom return caused by pulse stretching induces depth bias, which is the largest error source in ALB depth measurements. The traditional depth bias model is often applied to reduce the depth bias, but it is insufficient when used with various ALB system parameters and ocean environments. Therefore, an accurate model that considers all of the influencing factors must be established. In this study, an improved depth bias model is developed through stepwise regression in consideration of the water depth, laser beam scanning angle, sensor height, and suspended sediment concentration. The proposed improved model and a traditional one are used in an experiment. The results show that the systematic deviation of depth bias corrected by the traditional and improved models is reduced significantly. Standard deviations of 0.086 and 0.055 m are obtained with the traditional and improved models, respectively. The accuracy of the ALB-derived depth corrected by the improved model is better than that corrected by the traditional model.
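
    A minimal forward stepwise selection over the four candidate predictors can be sketched as follows (synthetic data; the greedy RMSE criterion and stopping threshold are illustrative assumptions, not the authors' exact stepwise procedure):

    ```python
    import numpy as np

    def forward_stepwise(X, y, names, tol=1e-3):
        """Greedy forward selection: repeatedly add the predictor that most reduces RMSE."""
        n = len(y)
        chosen, remaining = [], list(range(X.shape[1]))

        def rmse(cols):
            A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

        best = rmse(chosen)
        while remaining:
            errs = {c: rmse(chosen + [c]) for c in remaining}
            c = min(errs, key=errs.get)
            if best - errs[c] < tol:          # stop when improvement is negligible
                break
            chosen.append(c); remaining.remove(c); best = errs[c]
        return [names[c] for c in chosen], best

    # Synthetic depth-bias data: bias is driven by depth and scan angle only.
    rng = np.random.default_rng(3)
    n = 300
    depth, angle = rng.uniform(2, 20, n), rng.uniform(0, 20, n)
    height, ssc = rng.uniform(200, 500, n), rng.uniform(0, 50, n)
    bias = 0.02 * depth + 0.005 * angle + rng.normal(0, 0.01, n)
    X = np.column_stack([depth, angle, height, ssc])
    sel, err = forward_stepwise(X, bias, ["depth", "angle", "height", "ssc"])
    print(sel)  # only the truly influential predictors survive
    ```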

  4. Hydrological modeling as an evaluation tool of EURO-CORDEX climate projections and bias correction methods

    Science.gov (United States)

    Hakala, Kirsti; Addor, Nans; Seibert, Jan

    2017-04-01

    Streamflow stemming from Switzerland's mountainous landscape will be influenced by climate change, which will pose significant challenges to the water management and policy sector. In climate change impact research, the determination of future streamflow is impeded by different sources of uncertainty, which propagate through the model chain. In this research, we explicitly considered the following sources of uncertainty: (1) climate models, (2) downscaling of the climate projections to the catchment scale, (3) bias correction method and (4) parameterization of the hydrological model. We utilize climate projections at the 0.11° (approx. 12.5 km) resolution from the EURO-CORDEX project, which are the most recent climate projections for the European domain. EURO-CORDEX is comprised of regional climate model (RCM) simulations, which have been downscaled from global climate models (GCMs) from the CMIP5 archive, using both dynamical and statistical techniques. Uncertainties are explored by applying a modeling chain involving 14 GCM-RCMs to ten Swiss catchments. We utilize the rainfall-runoff model HBV Light, which has been widely used in operational hydrological forecasting. The Lindström measure, a combination of model efficiency and volume error, was used as an objective function to calibrate HBV Light. The ten best parameter sets are then obtained by calibrating using the genetic algorithm and Powell optimization (GAP) method. The GAP optimization method is based on the evolution of parameter sets, which works by selecting and recombining high-performing parameter sets with each other. Once HBV is calibrated, we perform a quantitative comparison of the influence of biases inherited from climate model simulations with the biases stemming from the hydrological model. The evaluation is conducted over two time periods: i) 1980-2009 to characterize the simulation realism under the current climate and ii) 2070-2099 to identify the magnitude of the projected change of

  5. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    Science.gov (United States)

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, such as age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GMs), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group in the numerator of this ratio (for example, NHW), these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
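
    The difference between the two corrections can be illustrated on synthetic data (a toy sketch with invented effect sizes, not the study's analysis; the "model-based" step here is one simple variant in which UCR is first modeled from covariates and only its unexplained, hydration-driven part is used for correction):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    male = rng.integers(0, 2, n)                       # 1 = male
    hydration = rng.normal(0.0, 0.3, n)                # dilution factor
    # UCR depends on hydration AND sex; the analyte depends on hydration only,
    # so true exposure is identical for both sexes.
    log_ucr = 0.2 + 0.4 * male - hydration + rng.normal(0, 0.1, n)
    log_analyte = 1.0 - hydration + rng.normal(0, 0.1, n)

    gm_gap = lambda v: float(np.exp(v[male == 1].mean() - v[male == 0].mean()))

    # Ratio-based correction: divide by UCR (subtract logs). The sex effect on
    # creatinine leaks into the "corrected" analyte as a spurious gap.
    ratio_corrected = log_analyte - log_ucr
    print(round(gm_gap(ratio_corrected), 2))           # spurious gap near exp(-0.4)

    # Model-based variant: model UCR from covariates, then correct the analyte
    # with only the unexplained (hydration) part of UCR.
    X = np.column_stack([np.ones(n), male])
    d, *_ = np.linalg.lstsq(X, log_ucr, rcond=None)
    model_corrected = log_analyte - (log_ucr - X @ d)
    print(round(gm_gap(model_corrected), 2))           # gap near 1.0 (none)
    ```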

  6. Model-Based Illumination Correction for Face Images in Uncontrolled Scenarios

    NARCIS (Netherlands)

    Boom, B.J.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. Several illumination correction methods have been proposed, but these are usually tested on illumination conditions created in a laboratory. Our focus is more on uncontrolled conditions. We use the Phong model

  7. Center-of-mass corrections in the S+V potential model

    International Nuclear Information System (INIS)

    Palladino, B.E.

    1987-02-01

    Center-of-mass corrections to the mass spectrum and static properties of low-lying S-wave baryons and mesons are discussed in the context of a relativistic, independent quark model, based on a Dirac equation, with equally mixed scalar (S) and vector (V) confining potential. (author) [pt

  8. Splice-correcting oligonucleotides restore BTK function in X-linked agammaglobulinemia model

    DEFF Research Database (Denmark)

    Bestas, Burcu; Moreno, Pedro M D; Blomberg, K Emelie M

    2014-01-01

    , splice-correcting oligonucleotides (SCOs) targeting mutated BTK transcripts for treating XLA. Both the SCO structural design and chemical properties were optimized using 2'-O-methyl, locked nucleic acid, or phosphorodiamidate morpholino backbones. In order to have access to an animal model of XLA, we...

  9. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  10. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  11. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  12. Correcting Biases in a lower resolution global circulation model with data assimilation

    Science.gov (United States)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim to develop a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim to correct this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields.
We also constrain the stream function along the coasts in order not to have
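
    The stream-function construction guarantees a non-divergent velocity perturbation, which can be checked on a toy grid (plain neighbor-averaging stands in for the Diva smoothing; the zero-divergence identity u = -dψ/dy, v = dψ/dx holds exactly for commuting finite-difference stencils):

    ```python
    import numpy as np

    # Random smooth stream function on a periodic grid (stand-in for the Diva field).
    rng = np.random.default_rng(4)
    psi = rng.normal(size=(64, 64))
    for _ in range(20):                      # crude smoothing by local averaging
        psi = 0.25 * (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
                      + np.roll(psi, 1, 1) + np.roll(psi, -1, 1))

    # Non-divergent velocity perturbation from the stream function:
    # u = -dpsi/dy, v = dpsi/dx.
    dpsi_dy, dpsi_dx = np.gradient(psi)
    u, v = -dpsi_dy, dpsi_dx

    # The resulting flow is discretely divergence-free (up to float roundoff).
    div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
    print(round(float(np.abs(div).max()), 6))  # 0.0
    ```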

  13. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance for a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multi-year Kepler photometry. We explore the internal systematics on the stellar properties, that is, those associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in the solar metallicity mixture. We also assess the systematics arising from (iii) the different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction method of Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely ˜1%, ˜1%, ˜2%, and ˜8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods of Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
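
    Ball & Gizon's two-term surface correction has the form δν = [a₋₁ (ν/ν_ac)^(-1) + a₃ (ν/ν_ac)^3] / I, which is linear in its two coefficients and can therefore be fitted by least squares. A toy sketch with entirely synthetic frequencies, inertias, and coefficient values:

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    nu = np.linspace(1500.0, 3500.0, 20)        # mode frequencies in microHz (invented)
    nu_ac = 5000.0                              # acoustic cutoff frequency (assumed)
    inertia = 1.0 + 0.1 * (nu / nu_ac)          # toy normalized mode inertias

    # Synthetic "observed minus model" frequency differences with noise.
    true_a3, true_am1 = -5.0, 1.0
    delta = (true_a3 * (nu / nu_ac) ** 3 + true_am1 * (nu / nu_ac) ** -1) / inertia
    observed = delta + rng.normal(0, 0.05, nu.size)

    # Least-squares fit of the two surface-correction coefficients.
    X = np.column_stack([(nu / nu_ac) ** 3 / inertia, (nu / nu_ac) ** -1 / inertia])
    (a3, am1), *_ = np.linalg.lstsq(X, observed, rcond=None)
    print(round(float(a3)), round(float(am1)))  # -5 1  (coefficients recovered)
    ```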

  14. Imputation of Housing Rents for Owners Using Models With Heckman Correction

    Directory of Open Access Journals (Sweden)

    Beat Hulliger

    2012-07-01

    Full Text Available The direct income of owners and tenants of dwellings is not comparable, since the owners have a hidden income from the investment in their dwelling. This hidden income is considered a part of the disposable income of owners. It may be predicted with the help of a linear model of the rent. Since such a model must be developed and estimated for tenants with observed market rents, a selection bias may occur. The selection bias can be minimised through a Heckman correction. The paper applies the Heckman correction to data from the Swiss Statistics on Income and Living Conditions. The Heckman method is adapted to the survey context, the modeling process, including the choice of covariates, is explained, and the effect of the prediction using the model is discussed.
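
    The second step of the Heckman correction, augmenting the rent regression on the selected (tenant) sample with the inverse Mills ratio λ(z) = φ(z)/Φ(z), can be sketched on simulated data. For brevity the probit first step is skipped and the true selection index is reused; that shortcut, and all numbers below, are assumptions of the sketch, not of the method:

    ```python
    import numpy as np
    from math import erf, exp, pi, sqrt

    phi = lambda z: exp(-0.5 * z * z) / sqrt(2 * pi)   # standard normal pdf
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal cdf

    # Simulated dwellings: rent depends on size; rent is observed only for
    # tenants, and the tenancy decision is correlated with the rent error.
    rng = np.random.default_rng(8)
    n = 2000
    size = rng.normal(80, 20, n)
    u = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n)
    select = (0.5 * (size - 80) / 20 + u[:, 0]) > 0    # observed (tenant) if True
    rent = 500 + 5 * size + 100 * u[:, 1]

    # Inverse Mills ratio at the (here: known) selection index.
    z = 0.5 * (size - 80) / 20
    mills = np.array([phi(v) / Phi(v) for v in z])

    X_naive = np.column_stack([np.ones(select.sum()), size[select]])
    naive, *_ = np.linalg.lstsq(X_naive, rent[select], rcond=None)
    X_heck = np.column_stack([X_naive, mills[select]])
    heck, *_ = np.linalg.lstsq(X_heck, rent[select], rcond=None)

    # Naive OLS on the selected sample vs selection-corrected slope (true slope 5).
    print(round(float(naive[1]), 2), round(float(heck[1]), 2))
    ```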

  15. A New High-Precision Correction Method of Temperature Distribution in Model Stellar Atmospheres

    Directory of Open Access Journals (Sweden)

    Sapar A.

    2013-06-01

    Full Text Available The main features of the temperature correction methods suggested and used in the modeling of plane-parallel stellar atmospheres are discussed, and the main features of the new method are described. The derivation of the formulae for a version of the Unsöld-Lucy method, used by us in the SMART (Stellar Model Atmospheres and Radiative Transport) software for modeling stellar atmospheres, is presented. The method is based on correcting the model temperature distribution by minimizing the differences of the flux from its accepted constant value and on the requirement that the flux gradient vanish, meaning that the local source and sink terms of radiation must be equal. The final relative flux constancy obtainable by the method with the SMART code turned out to have a precision of the order of 0.5%. Some of the rapidly converging iteration steps can be useful before starting the high-precision model correction. Corrections of both the flux value and its gradient, as in the Unsöld-Lucy method, are unavoidably needed to obtain high-precision flux constancy. A new temperature correction method to obtain high-precision flux constancy for plane-parallel LTE model stellar atmospheres is proposed and studied. The non-linear optimization is carried out by least squares, in which the Levenberg-Marquardt correction method and thereafter an additional correction by a Broyden iteration loop were applied. Small finite differences of temperature (δT/T = 10⁻³) are used in the computations. A single Jacobian step appears to be mostly sufficient to get flux constancy of the order of 10⁻² %. Dual numbers and their generalization, the dual complex numbers (the duplex numbers), automatically provide the derivatives in the nilpotent part of the dual numbers. A version of the SMART software is being refactored to dual and duplex numbers, which makes it possible to get rid of the finite differences, as an additional source of lowering precision of the
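
    The dual-number trick mentioned in the abstract can be illustrated in a few lines: propagating a + b·ε with ε² = 0 through ordinary arithmetic delivers the exact derivative in the ε part, with no finite differences (a minimal sketch, not the SMART implementation):

    ```python
    class Dual:
        """Dual number a + b*eps with eps**2 == 0; the eps part carries d/dx."""
        def __init__(self, re, eps=0.0):
            self.re, self.eps = re, eps
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.re + o.re, self.eps + o.eps)
        __radd__ = __add__
        def __sub__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.re - o.re, self.eps - o.eps)
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            # Product rule falls out of eps**2 == 0.
            return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
        __rmul__ = __mul__

    def f(t):                    # an arbitrary smooth function
        return 3 * t * t + 2 * t + 1

    x = Dual(2.0, 1.0)           # seed the eps part with 1 to get df/dt
    y = f(x)
    print(y.re, y.eps)           # 17.0 14.0  (f(2) = 17, f'(2) = 14)
    ```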

  16. Oblique corrections in a model with neutrino masses and strong C P resolution

    International Nuclear Information System (INIS)

    Natale, A.A.; Rodrigues da Silva, P.S.

    1994-01-01

    Our intention in this work is to determine the order of the limits obtained on the light neutrino masses through the calculation of the oblique corrections and their comparison with the experimental data. The calculation is performed for a specific model, although we expect it to be sufficiently general to give an idea of the limits that can be obtained on neutrino masses in this class of models. (author)

  17. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important in many areas, such as information disclosure, corporate governance, and the protection of investors.

  18. Two-loop corrections for nuclear matter in the Walecka model

    International Nuclear Information System (INIS)

    Furnstahl, R.J.; Perry, R.J.; Serot, B.D.; Department of Physics, The Ohio State University, Columbus, Ohio 43210; Physics Department and Nuclear Theory Center, Indiana University, Bloomington, Indiana 47405)

    1989-01-01

    Two-loop corrections for nuclear matter, including vacuum polarization, are calculated in the Walecka model to study the loop expansion as an approximation scheme for quantum hadrodynamics. Criteria for useful approximation schemes are discussed, and the concepts of strong and weak convergence are introduced. The two-loop corrections are evaluated first with one-loop parameters and mean fields and then by minimizing the total energy density with respect to the scalar field and refitting parameters to empirical nuclear matter saturation properties. The size and nature of the corrections indicate that the loop expansion is not convergent at two-loop order in either the strong or weak sense. Prospects for alternative approximation schemes are discussed

  19. Mathematical model of rhodium self-powered detectors and algorithms for correction of their time delay

    International Nuclear Information System (INIS)

    Bur'yan, V.I.; Kozlova, L.V.; Kuzhil', A.S.; Shikalov, V.F.

    2005-01-01

    The development of algorithms for the correction of self-powered neutron detector (SPND) inertia is motivated by the necessity to increase the fast response of in-core instrumentation systems (ICIS). The increase of ICIS fast response will permit monitoring fast transient processes in the core in real time and, in the future, using the signals of rhodium SPNDs for emergency protection functions based on local parameters. In this paper it is proposed to use a mathematical model of neutron flux measurements by means of SPNDs in integral form for the creation of correction algorithms. This approach is, in this case, the most convenient for the creation of recurrent algorithms for flux estimation. The results of a comparison of neutron flux and reactivity estimates from readings of ionization chambers and from SPND signals corrected by the proposed algorithms are presented [ru

  20. Correction of TRMM 3B42V7 Based on Linear Regression Models over China

    Directory of Open Access Journals (Sweden)

    Shaohua Liu

    2016-01-01

    Full Text Available High temporal-spatial precipitation is necessary for hydrological simulation and water resource management, and remotely sensed precipitation products (RSPPs) play a key role in supporting high temporal-spatial precipitation, especially in sparsely gauged regions. TRMM 3B42V7 data (TRMM precipitation) is an essential RSPP outperforming other RSPPs. Yet the utilization of TRMM precipitation is still limited by its inaccuracy and low spatial resolution at the regional scale. In this paper, linear regression models (LRMs) have been constructed to correct and downscale the TRMM precipitation based on the gauge precipitation at 2257 stations over China from 1998 to 2013. Then, the corrected TRMM precipitation was validated against gauge precipitation at 839 of the 2257 stations in 2014 at the station and grid scales. The results show that both the monthly and annual LRMs obviously improved the accuracy of the corrected TRMM precipitation with acceptable error, and the monthly LRM performs slightly better than the annual LRM in mideastern China. Although the performance of the corrected TRMM precipitation from the LRMs has increased in Northwest China and the Tibetan plateau, the error of the corrected TRMM precipitation is still significant due to the large deviation between TRMM precipitation and the low-density gauge precipitation.
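
    A per-month linear correction of the kind described can be sketched as follows (synthetic gauge/satellite pairs with an invented seasonal bias; the real study calibrates against 2257 stations):

    ```python
    import numpy as np

    def fit_monthly_lrm(trmm, gauge, month):
        """Fit gauge = a_m + b_m * TRMM separately for each calendar month."""
        coefs = {}
        for m in range(1, 13):
            s = month == m
            X = np.column_stack([np.ones(s.sum()), trmm[s]])
            coefs[m], *_ = np.linalg.lstsq(X, gauge[s], rcond=None)
        return coefs

    def correct(trmm, month, coefs):
        a = np.array([coefs[m][0] for m in month])
        b = np.array([coefs[m][1] for m in month])
        return a + b * trmm

    # Synthetic pairs: the satellite product overestimates with a seasonal bias.
    rng = np.random.default_rng(5)
    n = 1200
    month = rng.integers(1, 13, n)
    gauge = rng.gamma(2.0, 30.0, n)
    trmm = 1.2 * gauge + 10.0 * np.sin(2 * np.pi * month / 12) + rng.normal(0, 5, n)

    coefs = fit_monthly_lrm(trmm, gauge, month)
    corrected = correct(trmm, month, coefs)
    rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
    print(rmse(trmm, gauge) > rmse(corrected, gauge))  # True: monthly LRM reduces error
    ```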

  1. One-loop radiative correction to the triple Higgs coupling in the Higgs singlet model

    Directory of Open Access Journals (Sweden)

    Shi-Ping He

    2017-01-01

    Full Text Available Though the 125 GeV Higgs boson is consistent with the standard model (SM) prediction until now, the triple Higgs coupling can deviate from the SM value in physics beyond the SM (BSM). In this paper, the radiative correction to the triple Higgs coupling is calculated in the minimal extension of the SM by adding a real gauge singlet scalar. In this model there are two scalars, h and H, both of which are mixing states of the doublet and singlet. Provided that the mixing angle is set to zero, namely the SM limit, h is the pure left-over of the doublet and its behavior is the same as that of the SM at tree level. However, the loop corrections can alter h-related couplings. In this SM limit case, the effect of the singlet H may show up in the h-related couplings, especially the triple h coupling. Our numerical results show that the deviation is sizable. For λΦS=1 (see text for the parameter definition), the deviation δhhh(1) can be 40%. For λΦS=1.5, δhhh(1) can reach 140%. The sizable radiative correction is mainly caused by three factors: the magnitude of the coupling λΦS, the light mass of the additional scalar, and the threshold enhancement. The radiative corrections for the hVV and hff couplings come from the counter-terms, which are the universal correction in this model and always at O(1%). The hZZ coupling, which can be measured precisely, may be a complement to the triple h coupling in searching for BSM physics. In the optimal case, the triple h coupling is very sensitive to BSM physics, and this model can be tested at future high-luminosity hadron colliders and electron–positron colliders.

  2. The Export Supply Model of Bangladesh: An Application of Cointegration and Vector Error Correction Approaches

    Directory of Open Access Journals (Sweden)

    Mahmudul Mannan Toy

    2011-01-01

    Full Text Available The broad objective of this study is to empirically estimate the export supply model of Bangladesh. The techniques of cointegration, Engle-Granger causality, and vector error correction are applied to estimate the export supply model. The econometric analysis is done using time series data on the variables of interest, collected from various secondary sources. The study has empirically tested the hypotheses of long-run relationships and causality between the variables of the model. The cointegration analysis shows that all the variables of the study are cointegrated at their first differences, meaning that there exists a long-run relationship among the variables. The VECM estimation shows the dynamics of variables in the export supply function and the short-run and long-run elasticities of export supply with respect to each independent variable. The error correction term is found to be negative, which indicates that any short-run disequilibrium will be turned into equilibrium in the long run.
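
    The two-step Engle-Granger/error-correction estimation can be sketched on simulated cointegrated series (synthetic data; all coefficient values are invented):

    ```python
    import numpy as np

    # Two-step Engle-Granger: (1) long-run regression, (2) ECM on differences.
    rng = np.random.default_rng(6)
    n = 400
    x = np.cumsum(rng.normal(size=n))            # I(1) driver series
    y = 2.0 + 0.5 * x + rng.normal(0, 0.3, n)    # cointegrated response series

    # Step 1: long-run relation and its residual (the error-correction term).
    A = np.column_stack([np.ones(n), x])
    (b0, b1), *_ = np.linalg.lstsq(A, y, rcond=None)
    ect = y - (b0 + b1 * x)

    # Step 2: regress the differences on the lagged error-correction term.
    dy, dx = np.diff(y), np.diff(x)
    B = np.column_stack([np.ones(n - 1), dx, ect[:-1]])
    (c0, c1, alpha), *_ = np.linalg.lstsq(B, dy, rcond=None)
    print(alpha < 0)   # True: negative adjustment speed, disequilibrium dies out
    ```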

  3. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Full Text Available Physically based groundwater models (PBMs), such as MODFLOW, are used as groundwater resource evaluation tools on the assumption that the produced differences (residuals or errors) are white noise. In practice, however, these numerical simulations usually show not only random errors but also systematic errors. In this work, a numerical procedure has been developed to deal with PBM systematic errors by studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of CCM to a PBM shows a decrease in local biases, a better distribution of errors, and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology seems an interesting way to update a PBM while avoiding the work and costs of interfering with its internal structure.
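
    The complementary-correction idea, modeling the structured part of the simulation residual externally and subtracting it, can be sketched as follows (a toy seasonal error model with invented numbers, not the CCM's actual formulation):

    ```python
    import numpy as np

    # A "physically based" simulation with a structured (non-white) error:
    # the residual has a seasonal component, so it can be modeled and removed.
    rng = np.random.default_rng(7)
    t = np.arange(600)
    observed = 50.0 + 0.5 * np.sin(2 * np.pi * t / 365)            # true head series
    pbm = observed + 0.3 + 0.2 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.05, 600)

    # Complementary correction: fit the PBM error with a simple external model
    # (intercept plus seasonal harmonics) and subtract the fitted error.
    resid = pbm - observed
    X = np.column_stack([np.ones_like(t, dtype=float),
                         np.sin(2 * np.pi * t / 365),
                         np.cos(2 * np.pi * t / 365)])
    coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
    corrected = pbm - X @ coef

    rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
    print(rmse(pbm - observed) > rmse(corrected - observed))  # True: bias removed
    ```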

  4. The ρ - ω mass difference in a relativistic potential model with pion corrections

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1988-01-01

    The problem of the ρ - ω mass difference is studied in the framework of the relativistic, harmonic, S+V independent quark model implemented with center-of-mass, one-gluon-exchange and pion-cloud corrections stemming from the requirement of chiral symmetry in the (u,d) SU(2) flavour sector of the model. The pionic self-energy corrections with different intermediate energy states are instrumental in the analysis of the problem, which requires an appropriate parametrization of the mesonic sector different from that previously used to calculate the mass spectrum of the S-wave baryons. The right ρ - ω mass splitting is found, together with a satisfactory value for the mass of the pion, calculated as a bound state of a quark-antiquark pair. An analogous discussion based on the cloudy-bag model is also presented. (author) [pt

  5. Power corrections and renormalons in Transverse Momentum Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Scimemi, Ignazio [Departamento de Física Teórica II, Universidad Complutense de Madrid,Ciudad Universitaria, 28040 Madrid (Spain); Vladimirov, Alexey [Institut für Theoretische Physik, Universität Regensburg,D-93040 Regensburg (Germany)

    2017-03-01

    We study the power corrections to Transverse Momentum Distributions (TMDs) by analyzing renormalon divergences of the perturbative series. The renormalon divergences arise independently in two constituents of TMDs: the rapidity evolution kernel and the small-b matching coefficient. The renormalon contributions (and consequently power corrections and non-perturbative corrections to the related cross sections) have a non-trivial dependence on the Bjorken variable and the transverse distance. We discuss the consistency requirements for power corrections for TMDs and suggest inputs for the TMD phenomenology in accordance with this study. Both unpolarized quark TMD parton distribution function and fragmentation function are considered.

  6. Ab initio thermochemistry using optimal-balance models with isodesmic corrections: The ATOMIC protocol

    Science.gov (United States)

    Bakowies, Dirk

    2009-04-01

    A theoretical composite approach, termed ATOMIC for Ab initio Thermochemistry using Optimal-balance Models with Isodesmic Corrections, is introduced for the calculation of molecular atomization energies and enthalpies of formation. Care is taken to achieve optimal balance in accuracy and cost between the various components contributing to high-level estimates of the fully correlated energy at the infinite-basis-set limit. To this end, the energy at the coupled-cluster level of theory including single, double, and quasiperturbational triple excitations is decomposed into Hartree-Fock, low-order correlation (MP2, CCSD), and connected-triples contributions and into valence-shell and core contributions. Statistical analyses for 73 representative neutral closed-shell molecules containing hydrogen and at least three first-row atoms (CNOF) are used to devise basis-set and extrapolation requirements for each of the eight components to maintain a given level of accuracy. Pople's concept of bond-separation reactions is implemented in an ab initio framework, providing for a complete set of high-level precomputed isodesmic corrections which can be used for any molecule for which a valence structure can be drawn. Use of these corrections is shown to lower basis-set requirements dramatically for each of the eight components of the composite model. A hierarchy of three levels is suggested for isodesmically corrected composite models which reproduce atomization energies at the reference level of theory to within 0.1 kcal/mol (A), 0.3 kcal/mol (B), and 1 kcal/mol (C). Large-scale statistical analysis shows that corrections beyond the CCSD(T) reference level of theory, including coupled-cluster theory with fully relaxed connected triple and quadruple excitations, first-order relativistic and diagonal Born-Oppenheimer corrections can normally be dealt with using a greatly simplified model that assumes thermoneutral bond-separation reactions and that reduces the estimate of these

  7. CD-SEM real time bias correction using reference metrology based modeling

    Science.gov (United States)

    Ukraintsev, V.; Banke, W.; Zagorodnev, G.; Archie, C.; Rana, N.; Pavlovsky, V.; Smirnov, V.; Briginas, I.; Katnani, A.; Vaid, A.

    2018-03-01

Accuracy of patterning impacts yield, IC performance, and technology time to market. Accuracy of patterning relies on optical proximity correction (OPC) models built using CD-SEM inputs and on intra-die critical dimension (CD) control based on CD-SEM. Sub-nanometer measurement uncertainty (MU) of CD-SEM is required for current technologies, yet the reported design- and process-related bias variation of CD-SEM is in the range of several nanometers. Reference metrology and numerical modeling are used to correct SEM, but both methods are too slow for real-time bias correction. We report on real-time CD-SEM bias correction using empirical models based on reference metrology (RM) data. A significant amount of currently untapped information (sidewall angle, corner rounding, etc.) is obtainable from SEM waveforms. Using additional RM information provided for a specific technology (design rules, materials, processes), CD extraction algorithms can be pre-built and then used in real time for accurate CD extraction from regular CD-SEM images. The art and challenge of SEM modeling lie in finding a robust correlation between SEM waveform features and CD-SEM bias, as well as in minimizing the RM inputs needed to create a model that is accurate within the design and process space. The new approach was applied to improve the CD-SEM accuracy of 45 nm GATE and 32 nm MET1 OPC 1D models. In both cases the MU of a state-of-the-art CD-SEM was improved by 3x and reduced to the nanometer level. A similar approach can be applied to 2D (end of line, contours, etc.) and 3D (sidewall angle, corner rounding, etc.) cases.
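The regression step this abstract describes can be sketched in a few lines: fit an empirical bias model (SEM CD minus reference-metrology CD) against waveform-derived features, then subtract the predicted bias from new measurements. The features, values, and the purely linear form are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

# Synthetic illustration: CD-SEM bias modeled as a linear function of
# waveform-derived features, fitted against reference metrology (RM).
rng = np.random.default_rng(2)
n = 40
features = rng.normal(size=(n, 3))        # e.g. edge slope, peak width, tail (assumed)
bias_true = np.array([1.2, -0.8, 0.3])    # invented feature-to-bias weights
cd_sem = 45.0 + rng.normal(scale=0.1, size=n)
cd_rm = cd_sem - features @ bias_true     # reference metrology CD (synthetic)

# Fit the bias model: (SEM - RM) ~ intercept + features
X = np.column_stack([np.ones(n), features])
coef = np.linalg.lstsq(X, cd_sem - cd_rm, rcond=None)[0]

def corrected_cd(cd, feat):
    """Apply the fitted bias model to a new measurement in real time."""
    return cd - (coef[0] + feat @ coef[1:])
```

With the model in hand, correction is a single dot product per measurement, which is what makes the real-time use described above plausible.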

  8. Planning corrective osteotomy of the femoral bone using three-dimensional modeling. Part II

    Directory of Open Access Journals (Sweden)

    Vladimir E. Baskov

    2017-10-01

Full Text Available Introduction. Three-dimensional (3D) modeling and prototyping are increasingly being used in various branches of surgery for planning and performing surgical interventions. In orthopedics, this technology was first used in 1990 for performing knee-joint surgery. This was followed by the development of protocols for creating and applying individual patterns for navigation in surgical interventions on various bones. Aim. The study aimed to develop a new 3D method for planning and performing corrective osteotomy of the femoral bone using an individual pattern and to identify the advantages of the proposed method in comparison with the standard method of planning and performing surgical intervention. Materials and methods. A new method for planning and performing corrective osteotomy of the femoral bone in children with various pathologies of the hip joint is presented. The outcomes of planning and performing corrective osteotomy of the femoral bone in 27 patients aged 5 to 18 years (32 hip joints) with congenital and acquired deformity of the femoral bone were analyzed. Conclusion. The use of computer 3D modeling for planning and implementing corrective interventions on the femoral bone improves treatment results owing to an almost perfect performance accuracy, achieved by minimizing possible human error, a reduction in the surgery duration, and a reduction in the radiation exposure for the patient.

  9. Can molecular dynamics simulations help in discriminating correct from erroneous protein 3D models?

    Directory of Open Access Journals (Sweden)

    Gibrat Jean-François

    2008-01-01

Full Text Available Abstract Background Recent approaches for predicting the three-dimensional (3D) structure of proteins, such as de novo or fold recognition methods, mostly rely on simplified energy potential functions and a reduced representation of the polypeptide chain. These simplifications facilitate the exploration of the protein conformational space but do not entirely capture the subtle relationship that exists between the amino acid sequence and its native structure. It has been proposed that physics-based energy functions, together with techniques for sampling the conformational space, e.g., Monte Carlo or molecular dynamics (MD) simulations, are better suited to the task of modelling proteins at higher resolutions than those of models obtained with the former type of methods. In this study we monitor different protein structural properties along MD trajectories to discriminate correct from erroneous models. These models are based on the sequence-structure alignments provided by our fold recognition method, FROST. We define correct models as those built from alignments of sequences with structures similar to their native structures, and erroneous models as those built from alignments of sequences with structures unrelated to their native structures. Results For three test sequences whose native structures belong to the all-α, all-β and αβ classes, we built a set of models intended to cover the whole spectrum: from a perfect model, i.e., the native structure, to a very poor model, i.e., a random alignment of the test sequence with a structure belonging to another structural class, including several intermediate models based on fold recognition alignments. We submitted these models to 11 ns of MD simulations at three different temperatures. We monitored along the corresponding trajectories the mean of the root-mean-square deviations (RMSd) with respect to the initial conformation, the RMSd fluctuations, the number of conformation clusters, the evolution of

  10. Retrospective Correction of Physiological Noise in DTI Using an Extended Tensor Model and Peripheral Measurements

    Science.gov (United States)

    Mohammadi, Siawoosh; Hutton, Chloe; Nagy, Zoltan; Josephs, Oliver; Weiskopf, Nikolaus

    2013-01-01

    Diffusion tensor imaging is widely used in research and clinical applications, but this modality is highly sensitive to artefacts. We developed an easy-to-implement extension of the original diffusion tensor model to account for physiological noise in diffusion tensor imaging using measures of peripheral physiology (pulse and respiration), the so-called extended tensor model. Within the framework of the extended tensor model two types of regressors, which respectively modeled small (linear) and strong (nonlinear) variations in the diffusion signal, were derived from peripheral measures. We tested the performance of four extended tensor models with different physiological noise regressors on nongated and gated diffusion tensor imaging data, and compared it to an established data-driven robust fitting method. In the brainstem and cerebellum the extended tensor models reduced the noise in the tensor-fit by up to 23% in accordance with previous studies on physiological noise. The extended tensor model addresses both large-amplitude outliers and small-amplitude signal-changes. The framework of the extended tensor model also facilitates further investigation into physiological noise in diffusion tensor imaging. The proposed extended tensor model can be readily combined with other artefact correction methods such as robust fitting and eddy current correction. PMID:22936599
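The core idea of the extended tensor model, adding physiological nuisance regressors to the linear tensor fit, can be illustrated on synthetic data; the design matrix, the single cardiac-phase regressor, and all values below are invented for the sketch and do not reproduce the authors' implementation.

```python
import numpy as np

# Synthetic illustration: log diffusion signal follows the usual b-matrix
# design (6 tensor elements) plus a cardiac-phase column derived from a
# peripheral pulse trace. Fitting with the extra column removes the
# physiological component from the tensor estimate.
rng = np.random.default_rng(0)
n = 60
B = rng.normal(size=(n, 6))                       # b-matrix design rows (assumed)
card = np.sin(np.linspace(0.0, 12.0 * np.pi, n))  # cardiac regressor (assumed)
tensor = rng.normal(size=6)                       # ground-truth tensor elements
log_s = B @ tensor + 0.3 * card                   # signal with physiological noise

# Original tensor fit vs. the "extended" fit with the nuisance column
fit_basic = np.linalg.lstsq(B, log_s, rcond=None)[0]
X_ext = np.column_stack([B, card])
fit_ext = np.linalg.lstsq(X_ext, log_s, rcond=None)[0][:6]
```

Because the physiological variation is modelled explicitly, the extended fit recovers the true tensor while the basic fit absorbs part of the cardiac signal into its estimate.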

  11. Hydrological Modeling in Northern Tunisia with Regional Climate Model Outputs: Performance Evaluation and Bias-Correction in Present Climate Conditions

    Directory of Open Access Journals (Sweden)

    Asma Foughali

    2015-07-01

Full Text Available This work aims to evaluate the performance of a hydrological balance model in a watershed located in northern Tunisia (wadi Sejnane, 378 km2) under present climate conditions, using input variables provided by four regional climate models. A modified version (MBBH) of the lumped, single-layer surface model BBH (Bucket with Bottom Hole), in which pedo-transfer parameters estimated from watershed physiographic characteristics are introduced, is adopted to simulate the water balance components. Only two parameters, representing respectively the water retention capacity of the soil and the vegetation resistance to evapotranspiration, are calibrated using rainfall-runoff data. The evaluation criteria for the MBBH model calibration are: relative bias, mean square error, and the ratio of mean actual evapotranspiration to mean potential evapotranspiration. Daily air temperature, rainfall and runoff observations are available from 1960 to 1984. The period 1960–1971 is selected for calibration, while the period 1972–1984 is chosen for validation. Air temperature and precipitation series are provided by four regional climate models (DMI, ARP, SMH and ICT) from the European program ENSEMBLES, forced by two global climate models (GCM: ECHAM and ARPEGE). The regional climate model outputs (precipitation and air temperature) are compared to the observations in terms of statistical distribution; for precipitation, the analysis was performed at the seasonal scale. We found that RCM precipitation must be corrected before being introduced as MBBH input. Thus, a non-parametric quantile-quantile bias correction method together with a dry-day correction is employed. Finally, runoff simulated using corrected precipitation from the regional climate model SMH is found to be the most acceptable, by comparison with runoff simulated using observed precipitation data, at reproducing the temporal variability of mean monthly runoff. The SMH model is the most accurate to
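A minimal sketch of a non-parametric quantile-quantile correction with a dry-day step might look like the following; the wet-day threshold, variable names, and the exact form of the dry-day step are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def qq_correct(obs, model_hist, model_sim, wet_threshold=0.1):
    """Non-parametric quantile-quantile bias correction with a dry-day
    correction (precipitation in mm/day; threshold is an assumption)."""
    # Dry-day correction: match the observed dry-day frequency by zeroing
    # the smallest simulated amounts.
    dry_frac = np.mean(obs < wet_threshold)
    cutoff = np.quantile(model_hist, dry_frac)
    out = np.where(model_sim <= cutoff, 0.0, model_sim)
    # Quantile mapping on the remaining wet days: look up each simulated
    # value's empirical rank in the historical model run, then read the
    # observed value at that rank.
    wet = out > 0
    q = np.searchsorted(np.sort(model_hist), out[wet]) / model_hist.size
    out[wet] = np.quantile(obs, np.clip(q, 0.0, 1.0))
    return out
```

After correction, the simulated series inherits the observed precipitation distribution, which is the property exploited before feeding RCM output into the hydrological model.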

  12. Modeling Approach/Strategy for Corrective Action Unit 97, Yucca Flat and Climax Mine, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Janet Willie

    2003-08-01

    The objectives of the UGTA corrective action strategy are to predict the location of the contaminant boundary for each CAU, develop and implement a corrective action, and close each CAU. The process for achieving this strategy includes modeling to define the maximum extent of contaminant transport within a specified time frame. Modeling is a method of forecasting how the hydrogeologic system, including the underground test cavities, will behave over time with the goal of assessing the migration of radionuclides away from the cavities and chimneys. Use of flow and transport models to achieve the objectives of the corrective action strategy is specified in the FFACO. In the Yucca Flat/Climax Mine system, radionuclide migration will be governed by releases from the cavities and chimneys, and transport in alluvial aquifers, fractured and partially fractured volcanic rock aquifers and aquitards, the carbonate aquifers, and in intrusive units. Additional complexity is associated with multiple faults in Yucca Flat and the need to consider reactive transport mechanisms that both reduce and enhance the mobility of radionuclides. A summary of the data and information that form the technical basis for the model is provided in this document.

  13. A New Global Regression Analysis Method for the Prediction of Wind Tunnel Model Weight Corrections

    Science.gov (United States)

    Ulbrich, Norbert Manfred; Bridge, Thomas M.; Amaya, Max A.

    2014-01-01

A new global regression analysis method is discussed that predicts wind tunnel model weight corrections for strain-gage balance loads during a wind tunnel test. The method determines corrections by combining "wind-on" model attitude measurements with least squares estimates of the model weight and center of gravity coordinates that are obtained from "wind-off" data points. The method treats the least squares fit of the model weight separately from the fit of the center of gravity coordinates. Therefore, it performs two fits of "wind-off" data points and uses the least squares estimator of the model weight as an input for the fit of the center of gravity coordinates. Explicit equations for the least squares estimators of the weight and center of gravity coordinates are derived that simplify the implementation of the method in the data system software of a wind tunnel. In addition, recommendations for sets of "wind-off" data points are made that take typical model support system constraints into account. Explicit equations for the confidence intervals on the model weight and center of gravity coordinates, and two different error analyses of the model weight prediction, are also discussed in the appendices of the paper.
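The two-step fit, a least squares estimate of the model weight from wind-off data followed by a center-of-gravity fit that reuses that estimate, can be illustrated with a simplified pitch-only example; the geometry and all numbers are invented for the sketch and do not reproduce the paper's explicit estimator equations.

```python
import numpy as np

# Simplified pitch-only sketch of the two-step "wind-off" fit.
theta = np.deg2rad(np.array([-10.0, -5.0, 0.0, 5.0, 10.0]))  # wind-off attitude sweep
W_true, x_cg, z_cg = 50.0, 0.2, 0.05                         # assumed test values

# Step 1: weight from normal/axial balance loads,
# N = W cos(theta), A = -W sin(theta), fitted jointly.
N = W_true * np.cos(theta)
A = -W_true * np.sin(theta)
design = np.concatenate([np.cos(theta), -np.sin(theta)])
loads = np.concatenate([N, A])
W_hat = design @ loads / (design @ design)   # closed-form least squares estimate

# Step 2: center-of-gravity arms from the pitching moment
# m = W (x cos(theta) + z sin(theta)), with W fixed at its step-1 estimate.
m = W_true * (x_cg * np.cos(theta) + z_cg * np.sin(theta))
X = W_hat * np.column_stack([np.cos(theta), np.sin(theta)])
(x_hat, z_hat), *_ = np.linalg.lstsq(X, m, rcond=None)
```

Feeding the weight estimate into the second fit is what makes the center-of-gravity problem linear, mirroring the paper's separation of the two fits.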

  14. Software for Generating Troposphere Corrections for InSAR Using GPS and Weather Model Data

    Science.gov (United States)

    Moore, Angelyn W.; Webb, Frank H.; Fishbein, Evan F.; Fielding, Eric J.; Owen, Susan E.; Granger, Stephanie L.; Bjoerndahl, Fredrik; Loefgren, Johan; Fang, Peng; Means, James D.

    2013-01-01

Atmospheric errors due to the troposphere are a limiting error source for spaceborne interferometric synthetic aperture radar (InSAR) imaging. This software generates tropospheric delay maps that can be used to correct atmospheric artifacts in InSAR data. The software automatically acquires all needed GPS (Global Positioning System), weather, and Digital Elevation Map data, and generates a tropospheric correction map using a novel algorithm for combining GPS and weather information while accounting for terrain. Existing JPL software was prototypical in nature, required a MATLAB license, required additional steps to acquire and ingest the needed GPS and weather data, did not account for topography in interpolation, and did not achieve a level of automation suitable for integration in a Web portal. This software overcomes these issues. GPS estimates of tropospheric delay are a source of corrections that can be used to form correction maps to be applied to InSAR data, but the spacing of GPS stations is insufficient to remove short-wavelength tropospheric artifacts. This software combines interpolated GPS delay with weather model precipitable water vapor (PWV) and a digital elevation model to account for terrain, increasing the spatial resolution of the tropospheric correction maps and thus removing short-wavelength tropospheric artifacts to a greater extent. It will be integrated into a Web portal request system, allowing use in a future L-band SAR Earth radar mission data system. This will be a significant contribution to its technology readiness, building on existing investments in in situ space geodetic networks, and improving the timeliness, quality, and science value of the collected data.
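The combination step, letting the weather model supply the gridded short-wavelength delay field while sparse GPS stations constrain its long-wavelength error, can be sketched as below; the inverse-distance interpolation and the additive combination are illustrative stand-ins for the software's actual algorithm, which also handles terrain via the DEM.

```python
import numpy as np

def idw(sx, sy, values, gx, gy, power=2.0):
    """Inverse-distance-weighted interpolation of station values onto a grid."""
    d2 = (gx[..., None] - sx) ** 2 + (gy[..., None] - sy) ** 2
    w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2.0)
    return (w * values).sum(axis=-1) / w.sum(axis=-1)

def correction_map(model_zwd, gx, gy, sx, sy, gps_ztd, model_at_sites):
    """Add the GPS-constrained long-wavelength residual (interpolated from
    the stations) to the gridded weather-model delay field."""
    residual = gps_ztd - model_at_sites   # what the model misses at each site
    return model_zwd + idw(sx, sy, residual, gx, gy)
```

The weather model keeps its fine spatial structure; the interpolated GPS residual only nudges the field toward the station measurements.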

  15. Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report - December 2016.

    Energy Technology Data Exchange (ETDEWEB)

    Copland, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    This Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report (CCM/CME Report) has been prepared by the U.S. Department of Energy (DOE) and Sandia Corporation (Sandia) to meet requirements under the Sandia National Laboratories-New Mexico (SNL/NM) Compliance Order on Consent (the Consent Order). The Consent Order, entered into by the New Mexico Environment Department (NMED), DOE, and Sandia, became effective on April 29, 2004. The Consent Order identified the Tijeras Arroyo Groundwater (TAG) Area of Concern (AOC) as an area of groundwater contamination requiring further characterization and corrective action. This report presents an updated Conceptual Site Model (CSM) of the TAG AOC that describes the contaminant release sites, the geological and hydrogeological setting, and the distribution and migration of contaminants in the subsurface. The dataset used for this report includes the analytical results from groundwater samples collected through December 2015.

  16. Improvement of Klobuchar model for GNSS single-frequency ionospheric delay corrections

    Science.gov (United States)

    Wang, Ningbo; Yuan, Yunbin; Li, Zishen; Huo, Xingliang

    2016-04-01

The broadcast ionospheric model is currently an effective approach to mitigating the ionospheric time delay for real-time Global Navigation Satellite System (GNSS) single-frequency users. Klobuchar coefficients transmitted in the Global Positioning System (GPS) navigation message have been widely used in various GNSS positioning and navigation applications; however, this model can only reduce the ionospheric error by approximately 50% in mid-latitudes. With the emergence of BeiDou and Galileo, as well as the modernization of GPS and GLONASS, more precise ionospheric correction models or algorithms are required by GNSS single-frequency users. Numerical analysis of the initial phase and nighttime term in the Klobuchar algorithm demonstrates that more parameters should be introduced to better describe the variation of nighttime ionospheric total electron content (TEC). In view of this, several schemes are proposed for the improvement of the Klobuchar algorithm. The performance of these improved Klobuchar-like models is validated over continental and oceanic regions during high (2002) and low (2006) levels of solar activity, respectively. Over the continental region, GPS TEC generated from 35 International GNSS Service (IGS) and Crust Movement Observation Network of China (CMONOC) stations is used as the reference. Over the oceanic region, TEC data from the TOPEX/Poseidon and JASON-1 altimeters are used for comparison. A ten-parameter Klobuchar-like model, which describes the nighttime term as a linear function of geomagnetic latitude, is finally proposed for GNSS single-frequency ionospheric corrections. Compared to GPS TEC, while the GPS broadcast model can correct for 55.0% and 49.5% of the ionospheric delay for the years 2002 and 2006, respectively, the proposed ten-parameter Klobuchar-like model can reduce the ionospheric error by 68.4% and 64.7% for the same period. Compared to TOPEX/Poseidon and JASON-1 TEC, the improved ten-parameter Klobuchar-like model can mitigate the ionospheric
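For context, the core of the standard Klobuchar algorithm is a half-cosine daytime bump on top of a constant 5 ns nighttime term, the very term the abstract proposes to replace with a linear function of geomagnetic latitude. The sketch below shows only the vertical delay: pierce-point geometry, the obliquity factor, and the spec's series approximation of the cosine are omitted, so treat it as a simplified illustration rather than the broadcast algorithm verbatim.

```python
import math

def klobuchar_delay(phi_m, t_local, alpha, beta, night=5e-9):
    """Simplified vertical ionospheric delay (seconds), Klobuchar-style.
    phi_m: geomagnetic latitude of the pierce point (semicircles);
    t_local: local time (s); alpha, beta: the four broadcast
    amplitude/period polynomial coefficients."""
    A = sum(a * phi_m**n for n, a in enumerate(alpha))  # cosine amplitude (s)
    P = sum(b * phi_m**n for n, b in enumerate(beta))   # cosine period (s)
    A = max(A, 0.0)
    P = max(P, 72000.0)                                 # floor on the period
    x = 2.0 * math.pi * (t_local - 50400.0) / P         # phase, peak at 14:00
    if abs(x) < math.pi / 2.0:                          # daytime half-cosine
        return night + A * math.cos(x)
    return night                                        # constant nighttime term
```

The ten-parameter variant discussed above would, in effect, replace the fixed `night` argument with a function of `phi_m`.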

  17. NTCP modelling of lung toxicity after SBRT comparing the universal survival curve and the linear quadratic model for fractionation correction

    International Nuclear Information System (INIS)

    Wennberg, Berit M.; Baumann, Pia; Gagliardi, Giovanna

    2011-01-01

Background. In SBRT of lung tumours no established relationship between dose-volume parameters and the incidence of lung toxicity has been found. The aim of this study is to compare the LQ model and the universal survival curve (USC) for calculating biologically equivalent doses in SBRT, to see if this improves knowledge of this relationship. Material and methods. Toxicity data on radiation pneumonitis grade 2 or more (RP2+) from 57 patients were used; 10.5% were diagnosed with RP2+. The lung DVHs were corrected for fractionation (LQ and USC) and analysed with the Lyman-Kutcher-Burman (LKB) model. In the LQ correction α/β = 3 Gy was used, and the USC parameters were: α/β = 3 Gy, D0 = 1.0 Gy, n = 10, α = 0.206 Gy-1 and dT = 5.8 Gy. In order to understand the relative contribution of different dose levels to the calculated NTCP, the concept of fractional NTCP was used. This might give insight into the question of whether 'high doses to small volumes' or 'low doses to large volumes' are most important for lung toxicity. Results and Discussion. NTCP analysis with the LKB model using parameters m = 0.4 and D50 = 30 Gy resulted in a volume-dependence parameter of n = 0.87 with LQ correction and n = 0.71 with USC correction. Using parameters m = 0.3 and D50 = 20 Gy, the results were n = 0.93 with LQ correction and n = 0.83 with USC correction. In SBRT of lung tumours, NTCP modelling of lung toxicity comparing models (LQ, USC) for fractionation correction shows that low doses contribute less and high doses more to the NTCP when the USC model is used. Comparing NTCP modelling of SBRT data with data from breast cancer, lung cancer and whole-lung irradiation implies that the response of the lung is treatment specific. More data are however needed in order to have a more reliable modelling
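The two ingredients named above, the LQ fractionation correction and the Lyman (LKB) probit response, can be written compactly. Parameter defaults follow the values quoted in the abstract (α/β = 3 Gy, m = 0.4, D50 = 30 Gy); the DVH reduction to an effective dose and the USC variant are omitted, so this is a sketch of the modelling pipeline, not the study's code.

```python
import math

def eqd2(dose_per_fx, n_fx, ab=3.0):
    """LQ-equivalent dose in 2 Gy fractions for n_fx fractions of
    dose_per_fx Gy (alpha/beta in Gy)."""
    total = dose_per_fx * n_fx
    return total * (dose_per_fx + ab) / (2.0 + ab)

def lkb_ntcp(eud, d50=30.0, m=0.4):
    """Lyman probit NTCP as a function of the (generalized) EUD:
    NTCP = Phi((EUD - D50) / (m * D50))."""
    t = (eud - d50) / (m * d50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

In the study's workflow, each DVH bin dose would first pass through a fractionation correction like `eqd2` (or its USC analogue) before the LKB reduction and probit step.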

  18. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks' balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.
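A numpy-only caricature of the two building blocks, PCA compression of balance-sheet ratios followed by an error-correction regression, is given below on synthetic data. The paper estimates a full multivariate VECM; this single-equation, Engle-Granger-style sketch only approximates that idea, and all series are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
x = np.cumsum(rng.normal(size=T))                       # latent capital series, I(1)
ratios = x[:, None] + rng.normal(scale=0.3, size=(T, 3))  # noisy balance-sheet ratios

# Economic activity error-corrects toward the latent capital series.
y = np.zeros(T)
for t in range(1, T):
    y[t] = y[t - 1] + 0.5 * (x[t - 1] - y[t - 1]) + 0.1 * rng.normal()

# PCA: first principal component of the standardized ratios = capital factor.
Z = (ratios - ratios.mean(0)) / ratios.std(0)
factor = Z @ np.linalg.svd(Z, full_matrices=False)[2][0]

# Step 1: cointegrating regression; the residual is the error-correction term.
ect = y - np.polyval(np.polyfit(factor, y, 1), factor)

# Step 2: regress d(activity) on the lagged ECT; a negative adjustment
# coefficient alpha means deviations from the long-run relation die out.
X = np.column_stack([np.ones(T - 1), ect[:-1]])
alpha = np.linalg.lstsq(X, np.diff(y), rcond=None)[0][1]
```

The sign and size of `alpha` is exactly the kind of quantity a VECM reports for each equation, here recovered from the simulated adjustment speed of 0.5.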

  19. Region of validity of the Thomas–Fermi model with quantum, exchange and shell corrections

    International Nuclear Information System (INIS)

    Dyachkov, S A; Levashov, P R; Minakov, D V

    2016-01-01

A novel approach for calculating thermodynamically consistent shell corrections over a wide range of parameters is used to predict the region of validity of the Thomas-Fermi approach. The calculated thermodynamic functions of electrons at high density are consistent with the more precise density functional theory. This makes it possible to work out a semi-classical model applicable at both low and high density. (paper)

  20. Radiative corrections to e+e- → W+W- in the Weinberg model

    International Nuclear Information System (INIS)

    Lemoine, M.E.

    1979-01-01

    The author summarizes the Weinberg model and then gives the lowest order cross section for e + e - → W + W - . The various radiative corrections are then dealt with and the method used to compute them outlined. Bremsstrahlung and infrared divergences are discussed together with the renormalization procedure. The Ward identities are then summarized. The leading terms of the amplitude in the limit of large Higgs mass are discussed and the results presented. (Auth.)

  1. Applying volumetric weather radar data for rainfall runoff modeling: The importance of error correction.

    Science.gov (United States)

    Hazenberg, P.; Leijnse, H.; Uijlenhoet, R.; Delobbe, L.; Weerts, A.; Reggiani, P.

    2009-04-01

In the current study, half a year of volumetric radar data for the period October 1, 2002 until March 31, 2003 is analyzed, sampled at 5-minute intervals by a C-band Doppler radar situated at an elevation of 600 m in the southern Ardennes region, Belgium. During this winter half-year most of the rainfall has a stratiform character. Though radar and rain gauge will never sample the same amount of rainfall due to differences in sampling strategies, for these stratiform situations the differences between the two measuring devices become even larger due to the occurrence of a bright band (the layer where ice particles start to melt, intensifying the radar reflectivity measurement). In these circumstances the radar overestimates the amount of precipitation, and because in the Ardennes bright bands occur within 1000 m of the surface, their detrimental effects on the performance of the radar can already be observed at relatively close range (e.g. within 50 km). Although the radar is situated at one of the highest points in the region, clutter is a serious problem very close to the radar. As a result, both nearby and farther away, using uncorrected radar data results in serious errors when estimating the amount of precipitation. This study shows the effect of carefully correcting for these radar errors using volumetric radar data, taking into account the vertical reflectivity profile of the atmosphere and the effects of attenuation, and trying to limit the amount of clutter. After applying these correction algorithms, the overall differences between radar and rain gauge are much smaller, which emphasizes the importance of carefully correcting radar rainfall measurements. The next step is to assess the effect of using uncorrected and corrected radar measurements on rainfall-runoff modeling. The 1597 km2 Ourthe catchment lies within 60 km of the radar. Using a lumped hydrological model, serious improvement in simulating observed discharges is found when using corrected radar

  2. Standard model treatment of the radiative corrections to the neutron β-decay

    International Nuclear Information System (INIS)

    Bunatyan, G.G.

    2003-01-01

Starting from the basic Lagrangian of the Standard Model, the radiative corrections to neutron β-decay are derived. The electroweak interactions are consistently taken into consideration according to the Weinberg-Salam theory. The effect of the strong quark-quark interactions on neutron β-decay is parametrized by introducing the nucleon electromagnetic form factors and the weak nucleon transition current specified by the form factors gV, gA, ... The radiative corrections to the total decay probability W and to the asymmetry coefficient of the momentum distribution A are found to constitute δW ∼ 8.7% and δA ∼ -2%. The contribution to the radiative corrections due to allowance for the nucleon form factors and the nucleon excited states amounts to a few per cent of the whole value of the radiative corrections. The ambiguity in the description of the nucleon compositeness is surely what causes the uncertainties of ∼ 0.1% in the evaluation of the neutron β-decay characteristics. For now, this puts bounds on the precision attainable in obtaining the element Vud of the CKM matrix and the gV, gA, ... values from experimental data processing

  3. Beam-hardening correction in CT based on basis image and TV model

    International Nuclear Information System (INIS)

    Li Qingliang; Yan Bin; Li Lei; Sun Hongsheng; Zhang Feng

    2012-01-01

In X-ray computed tomography, beam hardening leads to artifacts and reduces image quality. This paper analyzes how beam hardening influences the original projections and, accordingly, puts forward a new beam-hardening correction method based on basis images and a TV model. Firstly, according to the physical characteristics of beam hardening, a preliminary correction model with adjustable parameters is set up. Secondly, the original projections are processed by the correction model using different parameter values. Thirdly, the projections are reconstructed to obtain a series of basis images. Finally, the linear combination of the basis images is the final reconstructed image. Here, with the total variation of the final reconstructed image as the cost function, the linear combination coefficients for the basis images are determined by an iterative method. To verify the effectiveness of the proposed method, experiments are carried out on a real phantom and an industrial part. The results show that the algorithm significantly inhibits cupping and streak artifacts in the CT image. (authors)
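The final combination step can be illustrated with just two basis images and a grid search standing in for the paper's iterative optimization: choose the combination coefficient that minimizes the total variation of the reconstruction. Both simplifications are assumptions made for the sketch.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2-D image."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

def combine_basis_images(b0, b1, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the convex combination (1-c)*b0 + c*b1 with minimal TV.
    A grid search replaces the paper's iterative scheme, and two basis
    images stand in for the full series."""
    tvs = [total_variation((1 - c) * b0 + c * b1) for c in grid]
    c = float(grid[int(np.argmin(tvs))])
    return c, (1 - c) * b0 + c * b1
```

Intuitively, an under-corrected basis image shows cupping one way and an over-corrected one the other way; the TV-minimal mixture cancels the artifact.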

  4. Discharge simulations performed with a hydrological model using bias corrected regional climate model input

    NARCIS (Netherlands)

    Pelt, van S.C.; Kabat, P.; Maat, ter H.W.; Hurk, van den B.J.J.M.; Weerts, A.H.

    2009-01-01

Studies have demonstrated that precipitation at Northern Hemisphere mid-latitudes has increased in the last decades and that it is likely that this trend will continue. This will have an influence on the discharge of the river Meuse. The use of bias correction methods is important when the effect of

  5. A non-perturbative study of 4d U(1) non-commutative gauge theory - the fate of one-loop instability

    International Nuclear Information System (INIS)

    Bietenholz, Wolfgang; Nishimura, Jun; Susaki, Yoshiaki; Volkholz, Jan

    2006-01-01

    Recent perturbative studies show that in 4d non-commutative spaces, the trivial (classically stable) vacuum of gauge theories becomes unstable at the quantum level, unless one introduces sufficiently many fermionic degrees of freedom. This is due to a negative IR-singular term in the one-loop effective potential, which appears as a result of the UV/IR mixing. We study such a system non-perturbatively in the case of pure U(1) gauge theory in four dimensions, where two directions are non-commutative. Monte Carlo simulations are performed after mapping the regularized theory onto a U(N) lattice gauge theory in d = 2. At intermediate coupling strength, we find a phase in which open Wilson lines acquire non-zero vacuum expectation values, which implies the spontaneous breakdown of translational invariance. In this phase, various physical quantities obey clear scaling behaviors in the continuum limit with a fixed non-commutativity parameter θ, which provides evidence for a possible continuum theory. The extent of the dynamically generated space in the non-commutative directions becomes finite in the above limit, and its dependence on θ is evaluated explicitly. We also study the dispersion relation. In the weak coupling symmetric phase, it involves a negative IR-singular term, which is responsible for the observed phase transition. In the broken phase, it reveals the existence of the Nambu-Goldstone mode associated with the spontaneous symmetry breaking

  7. Advanced Corrections for InSAR Using GPS and Numerical Weather Models

    Science.gov (United States)

    Cossu, F.; Foster, J. H.; Amelung, F.; Varugu, B. K.; Businger, S.; Cherubini, T.

    2017-12-01

We present results from an investigation into the application of numerical weather models for generating tropospheric correction fields for Interferometric Synthetic Aperture Radar (InSAR). We apply the technique to data acquired from a UAVSAR campaign as well as from the COSMO-SkyMed satellites. The complex spatial and temporal changes in the atmospheric propagation delay of the radar signal remain the single biggest factor limiting InSAR's potential for hazard monitoring and mitigation. A new generation of InSAR systems is being built and launched, and optimizing the science and hazard applications of these systems requires advanced methodologies to mitigate tropospheric noise. We use the Weather Research and Forecasting (WRF) model to generate 900 m spatial resolution atmospheric models covering the Big Island of Hawaii and an even higher, 300 m resolution grid over the Mauna Loa and Kilauea volcanoes. By comparing a range of approaches, from the simplest, using reanalyses based on typically available meteorological observations, through to the "kitchen-sink" approach of assimilating all relevant data sets into our custom analyses, we examine the impact of the additional data sets on the atmospheric models and their effectiveness in correcting InSAR data. We focus particularly on the assimilation of information from the more than 60 GPS sites on the island. We ingest zenith tropospheric delay estimates from these sites directly into the WRF analyses, and also perform double-difference tomography using the phase residuals from the GPS processing to robustly incorporate heterogeneous information from the GPS data into the atmospheric models. We assess our performance through comparisons of our atmospheric models with external observations not ingested into the model, and through the effectiveness of the derived phase screens in reducing InSAR variance. Comparison of the InSAR data, our atmospheric analyses, and assessments of the active local and mesoscale

  8. Linear model correction: A method for transferring a near-infrared multivariate calibration model without standard samples

    Science.gov (United States)

    Liu, Yan; Cai, Wensheng; Shao, Xueguang

    2016-12-01

    Calibration transfer is essential for practical applications of near-infrared (NIR) spectroscopy because spectra may be measured on different instruments and the difference between the instruments must be corrected. Most calibration transfer methods require standard samples to construct the transfer model from the spectra of the same samples measured on two instruments, named the master and the slave instrument, respectively. In this work, a method named linear model correction (LMC) is proposed for calibration transfer without standard samples. The method is based on the fact that, for samples with similar physical and chemical properties, the spectra measured on different instruments are linearly correlated. This fact makes the coefficients of the linear models constructed from the spectra measured on different instruments similar in profile. Therefore, using a constrained optimization method, the coefficients of the master model can be transferred into those of the slave model with only a few spectra measured on the slave instrument. Two NIR datasets of corn and plant leaf samples measured on different instruments are used to test the performance of the method. The results show that, for both datasets, the spectra can be correctly predicted using the transferred partial least squares (PLS) models. Because standard samples are not required, the method may be more useful in practical applications.
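
    The abstract leaves the constrained optimization unspecified; as a loose illustration of the underlying idea (our own simplification, not the authors' LMC algorithm), one can rescale the master regression vector so that its predictions fit a few slave-instrument spectra with known reference values:

```python
import numpy as np

def transfer_coefficients(b_master, X_slave, y_slave):
    """Fit a scale a and offset c so that X_slave @ (a * b_master) + c
    matches y_slave in a least-squares sense (toy stand-in for LMC)."""
    p = X_slave @ b_master                      # master model applied to slave spectra
    A = np.column_stack([p, np.ones_like(p)])
    (a, c), *_ = np.linalg.lstsq(A, y_slave, rcond=None)
    return a * b_master, c                      # slave coefficients and intercept

# prediction on new slave spectra: y_hat = X_new @ b_slave + c
```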

  9. Holographic quantum error-correcting codes: toy models for the bulk/boundary correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Pastawski, Fernando; Yoshida, Beni [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States); Harlow, Daniel [Princeton Center for Theoretical Science, Princeton University,400 Jadwin Hall, Princeton NJ 08540 (United States); Preskill, John [Institute for Quantum Information & Matter and Walter Burke Institute for Theoretical Physics,California Institute of Technology,1200 E. California Blvd., Pasadena CA 91125 (United States)

    2015-06-23

    We propose a family of exactly solvable toy models for the AdS/CFT correspondence based on a novel construction of quantum error-correcting codes with a tensor network structure. Our building block is a special type of tensor with maximal entanglement along any bipartition, which gives rise to an isometry from the bulk Hilbert space to the boundary Hilbert space. The entire tensor network is an encoder for a quantum error-correcting code, where the bulk and boundary degrees of freedom may be identified as logical and physical degrees of freedom respectively. These models capture key features of entanglement in the AdS/CFT correspondence; in particular, the Ryu-Takayanagi formula and the negativity of tripartite information are obeyed exactly in many cases. That bulk logical operators can be represented on multiple boundary regions mimics the Rindler-wedge reconstruction of bulk operators from boundary operators, realizing explicitly the quantum error-correcting features of AdS/CFT recently proposed in http://dx.doi.org/10.1007/JHEP04(2015)163.

  10. Planning for corrective osteotomy of the femoral bone using 3D-modeling. Part I

    Directory of Open Access Journals (Sweden)

    Alexey G Baindurashvili

    2016-09-01

    Introduction. In standard planning for corrective hip osteotomy, a surgical intervention scheme is created on a uniplanar paper medium on the basis of X-ray images. However, uniplanar skiagrams cannot render the real spatial configuration of the femoral bone. When combining three-dimensional and uniplanar models of bone, human errors inevitably occur, causing distortion of preset parameters, which may lead to gross errors and, as a result, to repeated operations. Aims. To develop a new three-dimensional method for planning and performing corrective osteotomy of the femoral bone using visualizing computer technologies. Materials and methods. A new method of planning for corrective hip osteotomy in children with various hip joint pathologies was developed. We examined the method in 27 patients aged 5–18 years (32 hip joints) with congenital and acquired femoral bone deformation. The efficiency of the proposed method was assessed in comparison with uniplanar planning using roentgenograms. Conclusions. Computerized operation planning using three-dimensional modeling improves treatment results by minimizing the likelihood of human errors and increasing planning and surgical intervention accuracy.

  11. Reliable software systems via chains of object models with provably correct behavior

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This work addresses specification and design of reliable safety-critical systems, such as nuclear reactor control systems. Reliability concerns are addressed in complementary fashion by different fields. Reliability engineers build software reliability models, etc. Safety engineers focus on prevention of potentially harmful effects of systems on the environment. Software/hardware correctness engineers focus on production of reliable systems on the basis of mathematical proofs. The authors think that correctness may be a crucial guiding issue in the development of reliable safety-critical systems. However, purely formal approaches are not adequate for the task, because they neglect the connection with the informal customer requirements. They alleviate that as follows. First, on the basis of the requirements, they build a model of the system's interactions with the environment, where the system is viewed as a black box. They will provide foundations for automated tools which will (a) demonstrate to the customer that all of the scenarios of system behavior are present in the model, (b) uncover scenarios not present in the requirements, and (c) uncover inconsistent scenarios. The developers will work with the customer until the black-box model no longer possesses scenarios of kinds (b) and (c) above. Second, the authors will build a chain of several increasingly detailed models, where the first model is the black-box model and the last model serves to automatically generate proven executable code. The behavior of each model will be proved to conform to the behavior of the previous one. They build each model as a cluster of interactive concurrent objects, which allows both top-down and bottom-up development

  12. A geometric model of a V-slit Sun sensor correcting for spacecraft wobble

    Science.gov (United States)

    Mcmartin, W. P.; Gambhir, S. S.

    1994-01-01

    A V-Slit sun sensor is body-mounted on a spin-stabilized spacecraft. During injection from a parking or transfer orbit to some final orbit, the spacecraft may not be dynamically balanced. This may result in wobble about the spacecraft spin axis as the spin axis may not be aligned with the spacecraft's axis of symmetry. While the widely used models in Spacecraft Attitude Determination and Control, edited by Wertz, correct for separation, elevation, and azimuthal mounting biases, spacecraft wobble is not taken into consideration. A geometric approach is used to develop a method for measurement of the sun angle which corrects for the magnitude and phase of spacecraft wobble. The algorithm was implemented using a set of standard mathematical routines for spherical geometry on a unit sphere.

  13. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    Science.gov (United States)

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with state-of-the-art approaches and the well-known brain software tools, our model is fast, accurate, and robust to initializations.
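
    A minimal sketch of the second (thresholding) stage, assuming stage one has already recovered the bias field b under a multiplicative image model I = b * u (the names and division-based correction are our simplification, not the paper's exact formulation):

```python
import numpy as np

def threshold_segment(image, bias, threshold):
    """Stage 2: threshold the bias-corrected (true) intensity u = image / bias."""
    u = image / np.maximum(bias, 1e-6)   # recover true intensity, guarding division
    return (u > threshold).astype(np.uint8)
```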

  14. Enabling full-field physics-based optical proximity correction via dynamic model generation

    Science.gov (United States)

    Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas

    2017-07-01

    As extreme ultraviolet lithography comes closer to reality for high-volume production, its peculiar modeling challenges related to both inter- and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field-position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise-constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise-constant model changes can cause unacceptable levels of edge placement error. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.

  15. A concentration correction scheme for Lagrangian particle model and its application in street canyon air dispersion modelling

    Energy Technology Data Exchange (ETDEWEB)

    Jiyang Xia [Shanghai Jiao Tong University, Shanghai (China). Department of Engineering Mechanics; Leung, D.Y.C. [The University of Hong Kong (Hong Kong). Department of Mechanical Engineering

    2001-07-01

    Pollutant dispersion in street canyons with various configurations was simulated by discharging a large number of particles into the computation domain after developing a time-dependent wind field. Trajectories of the released particles were predicted using a Lagrangian particle model developed in an earlier study. A concentration correction scheme, based on the concept of 'visibility', was adopted for the Lagrangian particle model to correct the calculated pollutant concentration field in street canyons. The corrected concentrations compared favourably with those from wind tunnel experiments, and a linear relationship between the computed concentrations and the wind tunnel data was found. The developed model was then applied to four simulations to test the suitability of the correction scheme and to study pollutant distribution in street canyons with different configurations. For those cases with obstacles present in the computation domain, the correction scheme gives more reasonable results than the one without it. Different flow regimes are observed in the street canyons, depending on building configurations. A counter-clockwise rotating vortex may appear in a two-building case with wind flow from left to right, causing lower pollutant concentration at the leeward side of the upstream building and higher concentration at the windward side of the downstream building. On the other hand, a stable clockwise rotating vortex is formed in the street canyon with multiple identical buildings, resulting in poor natural ventilation in the street canyon. Moreover, particles emitted in the downstream canyon formed by buildings with large height-to-width ratios will be transported to upstream canyons. (author)

  16. Correction of electrode modelling errors in multi-frequency EIT imaging.

    Science.gov (United States)

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
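
    The augmented-Jacobian idea can be sketched as one regularized Gauss-Newton step over conductivity and electrode-position perturbations jointly (our schematic with a made-up Tikhonov regularizer, not the authors' reconstruction code):

```python
import numpy as np

def augmented_gauss_newton_step(J_sigma, J_elec, residual, lam=1e-2):
    """One Tikhonov-regularized Gauss-Newton step solving jointly for
    conductivity updates and electrode-position updates."""
    J = np.hstack([J_sigma, J_elec])           # augmented Jacobian [J_sigma | J_elec]
    H = J.T @ J + lam * np.eye(J.shape[1])     # regularized normal matrix
    return np.linalg.solve(H, J.T @ residual)  # stacked [delta_sigma; delta_positions]
```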

  17. 1/J² corrections to BMN energies from the quantum long range Landau-Lifshitz model

    International Nuclear Information System (INIS)

    Minahan, Joseph A.; Tirziu, Alin; Tseytlin, Arkady A.

    2005-01-01

    In a previous paper [hep-th/0509071], it was shown that quantum 1/J corrections to the BMN spectrum in an effective Landau-Lifshitz (LL) model match with the results from the one-loop gauge theory, provided one chooses an appropriate regularization. In this paper we continue this study for the conjectured Bethe ansatz for the long range spin chain representing perturbative large-N N = 4 super Yang-Mills in the SU(2) sector, and the 'quantum string' Bethe ansatz for its string dual. The comparison is carried out for corrections to BMN energies up to order λ̃³ in the effective expansion parameter λ̃ = λ/J². After determining the 'gauge-theory' LL action to order λ̃³, which is accomplished indirectly by fixing the coefficients in the LL action so that the energies of circular strings match with the energies found using the Bethe ansatz, we find perfect agreement. We interpret this as further support for an underlying integrability of the system. We then consider the 'string-theory' LL action, which is a limit of the classical string action representing fast string motion on an S³ subspace of S⁵, and compare the resulting λ̃³/J² corrections to the prediction of the 'string' Bethe ansatz. As in the gauge case, we find precise matching. This indicates that the LL Hamiltonian supplemented with a normal ordering prescription and ζ-function regularization reproduces the full superstring result for the 1/J² corrections, and also signifies that the string Bethe ansatz does describe the quantum BMN string spectrum to order 1/J². We also comment on using the quantum LL approach to determine the non-analytic contributions in λ that are behind the strong to weak coupling interpolation between the string and gauge results.

  18. CORRECTION OF FAULTY LINES IN MUSCLE MODEL, TO BE USED IN 3D BUILDING NETWORK CONSTRUCTION

    Directory of Open Access Journals (Sweden)

    I. R. Karas

    2012-07-01

    This paper describes the usage of the MUSCLE (Multidirectional Scanning for Line Extraction) model for automatic generation of 3D networks in CityGML format from raster floor plans. MUSCLE is a conversion method developed to vectorize straight lines from raster images, including floor plans, maps for GIS, architectural drawings, and machine plans. The model allows the user to define specific criteria which are crucial for the vectorization process. Unlike the traditional vectorization process, this model generates straight lines based on a line-thinning algorithm, without performing line following-chain coding and vector reduction stages. In this method, nearly vertical lines are obtained by scanning the images horizontally, while nearly horizontal lines are obtained by scanning the images vertically. In a case where two or more consecutive lines are nearly horizontal or nearly vertical, the raster data become unmanageable and the process generates wrongly vectorized lines. In this situation, to obtain the precise lines, the image with the wrongly vectorized lines is scanned diagonally. By using the MUSCLE model, the network models are topologically structured in CityGML format. After the generation process, it is possible to perform 3D network analysis based on these models. Then, by using software designed based on the generated models, a geodatabase of the models can be established. This paper presents the correction application in MUSCLE and explains 3D network construction in detail.
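
    The directional-scanning idea can be sketched for one direction (a toy illustration of ours on a binary raster; the vertical and diagonal scans follow the same pattern):

```python
def scan_rows(image):
    """Horizontal scan: record black-pixel runs in each row. Short runs
    stacked over consecutive rows trace nearly vertical lines."""
    segments = []
    for y, row in enumerate(image):
        x = 0
        while x < len(row):
            if row[x]:                      # start of a black run
                start = x
                while x < len(row) and row[x]:
                    x += 1
                segments.append((y, start, x - 1))
            else:
                x += 1
    return segments
```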

  19. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    Science.gov (United States)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). 
We conclude that TSV
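
    Conceptually, a catch-correction factor scales the gauge reading up as a function of wind speed and precipitation phase. A toy sketch (the coefficients below are made up for illustration and are not those of the Danish operational correction):

```python
def catch_correction_factor(wind_speed, temperature):
    """Illustrative gauge catch-correction factor: wind-induced undercatch
    is much larger for solid than for liquid precipitation."""
    if temperature <= 0.0:                  # solid precipitation
        return 1.0 + 0.30 * wind_speed
    return 1.0 + 0.05 * wind_speed          # liquid precipitation

# a 5 mm gauge reading at -2 degC and 4 m/s wind
corrected = 5.0 * catch_correction_factor(4.0, -2.0)
```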

  1. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    Energy Technology Data Exchange (ETDEWEB)

    Karami, K; Khaledian, M S [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of); Jamil, Mubasher, E-mail: KKarami@uok.ac.ir, E-mail: MS.Khaledian@uok.ac.ir, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), Islamabad (Pakistan)

    2011-02-15

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.
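
    For orientation, the entropy-corrected holographic dark energy density used in this class of models takes the form commonly quoted in the literature (conventions may differ from the paper's):

```latex
\rho_\Lambda = 3c^2 M_p^2 L^{-2} + \alpha L^{-4}\ln\!\left(M_p^2 L^2\right) + \beta L^{-4},
```

    where L is the infrared cutoff (here tied to the horizon of the non-flat universe), M_p is the reduced Planck mass, and α, β parameterize the logarithmic entropy correction; setting α = β = 0 recovers ordinary holographic dark energy.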

  2. Weak correction to the muon magnetic moment in a gauge model

    International Nuclear Information System (INIS)

    Darby, D.; Grammer, G. Jr.

    1976-01-01

    The weak correction, a_μ^W, to the anomalous magnetic moment of the muon is calculated in an SU(2) x U(1) x U(1) gauge model of weak and electromagnetic interactions. The R_ξ gauge is used and Ward-Takahashi identities are utilized in eliminating all ξ-dependence before the loop integration is performed. a_μ^(W,expt) places no constraint on the mass of one of the neutral vector mesons, which may be arbitrarily small. (Auth.)

  3. Correcting the bias of empirical frequency parameter estimators in codon models.

    Directory of Open Access Journals (Sweden)

    Sergei Kosakovsky Pond

    2010-07-01

    Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a "corrected" empirical estimator that begins with observed nucleotide counts, but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the standard empirical estimators.
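
    To see the bias being corrected: the de facto standard estimator builds codon frequencies as products of position-specific nucleotide frequencies and simply renormalizes after discarding stop codons, which distorts the remaining frequencies. A minimal sketch of that uncorrected, F3x4-style estimator (our illustration, not the paper's corrected method):

```python
from itertools import product

STOP_CODONS = {"TAA", "TAG", "TGA"}  # universal genetic code

def codon_frequencies(pos_freqs):
    """Product of position-specific nucleotide frequencies, renormalized
    over the 61 sense codons (the biased estimator the paper critiques)."""
    freqs = {}
    for a, b, c in product("ACGT", repeat=3):
        codon = a + b + c
        if codon in STOP_CODONS:
            continue
        freqs[codon] = pos_freqs[0][a] * pos_freqs[1][b] * pos_freqs[2][c]
    total = sum(freqs.values())
    return {codon: f / total for codon, f in freqs.items()}
```

    The corrected estimator instead adjusts the position-specific nucleotide frequencies themselves so that the implied stop-codon mass is properly accounted for, rather than renormalizing after the fact.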

  4. Corrections to the neutrinoless double-β-decay operator in the shell model

    Science.gov (United States)

    Engel, Jonathan; Hagen, Gaute

    2009-06-01

    We use diagrammatic perturbation theory to construct an effective shell-model operator for the neutrinoless double-β decay of Se82. The starting point is the same Bonn-C nucleon-nucleon interaction that is used to generate the Hamiltonian for recent shell-model calculations of double-β decay. After first summing high-energy ladder diagrams that account for short-range correlations and then adding diagrams of low order in the G matrix to account for longer-range correlations, we fold the two-body matrix elements of the resulting effective operator with transition densities from the recent shell-model calculation to obtain the overall nuclear matrix element that governs the decay. Although the high-energy ladder diagrams suppress this matrix element at very short distances as expected, they enhance it at distances between one and two fermis, so that their overall effect is small. The corrections due to longer-range physics are large, but cancel one another so that the fully corrected matrix element is comparable to that produced by the bare operator. This cancellation between large and physically distinct low-order terms indicates the importance of a reliable nonperturbative calculation.

  5. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge count number in the blank scan data is required. Therefore, the latter methods have been proposed to obtain normalization coefficients with high statistical accuracy from a small count number in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct system model accuracy. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)
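
    For a flavor of the component idea (a toy, single-component closed form of ours; the paper's method estimates its two components iteratively against the system model): if blank-scan coincidences factor as counts[i, j] ≈ eff[i] * eff[j], the per-detector efficiencies follow directly from row sums:

```python
import numpy as np

def detector_efficiencies(counts):
    """Row sums give s_i = eff_i * sum(eff), and counts.sum() = (sum(eff))**2,
    so efficiencies are recovered in closed form (up to overall sign)."""
    row_sums = counts.sum(axis=1)
    return row_sums / np.sqrt(counts.sum())
```

    Far fewer blank-scan counts are needed to estimate a handful of per-detector factors than to estimate every line-of-response normalization coefficient individually, which is the statistical advantage of component-based methods.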

  6. Comments and corrections on 3D modeling studies of locomotor muscle moment arms in archosaurs

    Directory of Open Access Journals (Sweden)

    Karl Bates

    2015-10-01

    In a number of recent studies we used computer modeling to investigate the evolution of muscle leverage (moment arms) and function in extant and extinct archosaur lineages (crocodilians, dinosaurs including birds, and pterosaurs). These studies sought to quantify the level of disparity and convergence in muscle moment arms during the evolution of bipedal and quadrupedal posture in various independent archosaur lineages, and in doing so further our understanding of changes in anatomy, locomotion and ecology during the group's >250 million year evolutionary history. Subsequent work by others led us to re-evaluate our models, which revealed a methodological error that affected the abduction–adduction and long-axis rotation moment arms in our published studies. In this paper we present corrected abduction–adduction and long-axis rotation moment arms for all our models, and evaluate the impact of these new data on the conclusions of our previous studies. We find that, in general, our newly corrected data differ only slightly from those previously published, with very few qualitative changes in muscle moments (e.g., muscles originally identified as abductors remained abductors). As a result, the majority of our previous conclusions regarding the functional evolution of key muscles in these archosaur groups are upheld.

  7. Correcting Model Fit Criteria for Small Sample Latent Growth Models with Incomplete Data

    Science.gov (United States)

    McNeish, Daniel; Harring, Jeffrey R.

    2017-01-01

    To date, small sample problems with latent growth models (LGMs) have not received the amount of attention in the literature as related mixed-effect models (MEMs). Although many models can be interchangeably framed as a LGM or a MEM, LGMs uniquely provide criteria to assess global data-model fit. However, previous studies have demonstrated poor…

  8. Modified VMD model with correct analytic properties for describing electromagnetic structure of He4 nucleus

    International Nuclear Information System (INIS)

    Dubnicka, S.; Lucan, L.

    1988-12-01

    A new phenomenological model for electromagnetic (e.m.) form factor (ff) of He 4 nucleus is presented, which is based on a modification of the well proved in e.m. interactions of hadrons vector-meson-dominance (VMD) model by means of an incorporation of correct He 4 ff analytic properties, nonzero vector-meson widths and the right power asymptotic behaviour predicted by the quark model. It reproduces the existing experimental information on He 4 e.m. ff in the space-like region quite well. Furthermore, couplings of all well established isoscalar vector mesons with J pc = 1 -- to He 4 nucleus are evaluated as a result of the analysis and the time-like region behaviour of He 4 e.m. ff is predicted. As a consequence of the latter the total cross section of e + e - → He 4 He-bar 4 process is calculated for the first time. (author). 17 refs, 3 figs

  9. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  10. Generalized second law of thermodynamics for non-canonical scalar field model with corrected-entropy

    International Nuclear Information System (INIS)

    Das, Sudipta; Mamon, Abdulla Al; Debnath, Ujjal

    2015-01-01

    In this work, we have considered a non-canonical scalar field dark energy model in a flat FRW background, assuming that the dark matter sector interacts with the non-canonical dark energy sector through an interaction term. Using the solutions of this interacting non-canonical scalar field dark energy model, we have investigated the validity of the generalized second law (GSL) of thermodynamics in various scenarios using the first law and the area law of thermodynamics. For this purpose, we have considered two types of horizons for the universe, viz. the apparent horizon and the event horizon, and, using the first law of thermodynamics, examined the validity of the GSL on both. Next, we have considered two types of entropy corrections on the apparent and event horizons and, using the modified area law, examined the validity of the GSL on both horizons under some restrictions on the model parameters. (orig.)

  11. Can climate models be tuned to simulate the global mean absolute temperature correctly?

    Science.gov (United States)

    Duan, Q.; Shi, Y.; Gong, W.

    2016-12-01

    The Intergovernmental Panel on Climate Change (IPCC) has already issued five assessment reports (ARs), which include simulations of the past climate and projections of the future climate under various scenarios. The participating models simulate reasonably well the trend in global mean temperature change, especially over the last 150 years. However, there is a large, constant discrepancy in the simulated global mean absolute temperature over this period. This discrepancy remained in the same range between IPCC-AR4 and IPCC-AR5 and amounts to about 3 °C between the coldest and the warmest model. It has great implications for land processes, particularly those related to the cryosphere, and casts doubt on whether land-atmosphere-ocean interactions are correctly represented in those models. This presentation aims to explore whether the discrepancy can be reduced through model tuning. We present an automatic model calibration strategy to tune the parameters of a climate model so that the simulated global mean absolute temperature matches the observed data over the last 150 years. An intermediate-complexity model known as LOVECLIM is used in the study. This presentation will show the preliminary results.

  12. Tax revenue and inflation rate predictions in Banda Aceh using Vector Error Correction Model (VECM)

    Science.gov (United States)

    Maulia, Eva; Miftahuddin; Sofyan, Hizir

    2018-05-01

    A country has some important parameters for achieving economic welfare, such as tax revenue and inflation. One of the largest sources of state budget revenue in Indonesia is the tax sector, and the rate of inflation occurring in a country can serve as one measure of the economic problems that country is facing. Given the importance of tax revenue and inflation-rate control in achieving economic prosperity, it is necessary to analyze the relationship between, and to forecast, tax revenue and the inflation rate. The VECM (Vector Error Correction Model) was chosen as the method for this research because the data used are multivariate time series. This study aims to produce a VECM model with optimal lag and to predict tax revenue and the inflation rate from it. The results show that the best model for the tax revenue and inflation rate data in Banda Aceh City is the VECM with 3rd optimal lag, VECM(3). Of the seven models formed, one is significant: the income tax revenue model. The predictions of tax revenue and the inflation rate in Banda Aceh City for the next 6, 12 and 24 periods (months) obtained using VECM(3) are considered valid, since they have the minimum error value compared to the other models.

  13. Temperature effects on pitfall catches of epigeal arthropods: a model and method for bias correction.

    Science.gov (United States)

    Saska, Pavel; van der Werf, Wopke; Hemerik, Lia; Luff, Martin L; Hatten, Timothy D; Honek, Alois; Pocock, Michael

    2013-02-01

    Carabids and other epigeal arthropods make important contributions to biodiversity, food webs and biocontrol of invertebrate pests and weeds. Pitfall trapping is widely used for sampling carabid populations, but this technique yields biased estimates of abundance ('activity-density') because individual activity, which is affected by climatic factors, affects the rate of catch. To date, the impact of temperature on pitfall catches, while suspected to be large, has not been quantified, and no method is available to account for it. This lack of knowledge and the unavailability of a method for bias correction affect the confidence that can be placed on results of ecological field studies based on pitfall data. Here, we develop a simple model for the effect of temperature, assuming a constant proportional change in the rate of catch per °C change in temperature, r, consistent with an exponential Q10 response to temperature. We fit this model to 38 time series of pitfall catches and accompanying temperature records from the literature, using first differences and other detrending methods to account for seasonality. We use meta-analysis to assess consistency of the estimated parameter r among studies. The mean rate of increase in total catch across data sets was 0.0863 ± 0.0058 per °C of maximum temperature and 0.0497 ± 0.0107 per °C of minimum temperature. Multiple regression analyses of 19 data sets showed that temperature is the key climatic variable affecting total catch. Relationships between temperature and catch were also identified at species level. Correction for temperature bias had substantial effects on seasonal trends of carabid catches. Synthesis and Applications: The effect of temperature on pitfall catches is shown here to be substantial and worthy of consideration when interpreting results of pitfall trapping. The exponential model can be used both for effect estimation and for bias correction of observed data. Correcting for temperature
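The exponential temperature model can be illustrated with a minimal numpy sketch; the temperatures and catch counts below are synthetic, and only the mean rate 0.0863 per °C (maximum temperature) is taken from the abstract:

```python
import numpy as np

# Minimal sketch of the exponential (Q10-type) model: expected catch ~ exp(r * T).
rng = np.random.default_rng(0)
r_true = 0.0863                        # mean rate per deg C of maximum temperature
T = rng.uniform(5.0, 25.0, 200)        # synthetic daily maximum temperatures (deg C)
catch = np.exp(1.0 + r_true * T)       # noiseless expected catch under the model

# Estimate r by regressing log(catch) on temperature.
slope, intercept = np.polyfit(T, np.log(catch), 1)

# Bias-correct observed catches to a common reference temperature.
T_ref = 15.0
catch_corrected = catch * np.exp(-slope * (T - T_ref))
```

With noiseless data the regression recovers r exactly; with real counts one would fit first differences of logs, as the study does, to remove seasonality.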

  14. Case study of atmospheric correction on CCD data of HJ-1 satellite based on 6S model

    International Nuclear Information System (INIS)

    Xue, Xiaojuan; Meng, Qingyan; Xie, Yong; Sun, Zhangli; Wang, Chang; Zhao, Hang

    2014-01-01

    In this study, the atmospheric radiative transfer model 6S was used to simulate the radiative transfer process along the surface-atmosphere-sensor path. An algorithm based on a look-up table (LUT) built with the 6S model was used to correct the HJ-1 CCD image pixel by pixel. The effect of atmospheric correction on the HJ-1 CCD data was then analyzed in terms of the spectral curves and evaluated against the measured reflectance acquired during the HJ-1B satellite overpass; finally, the normalized difference vegetation index (NDVI) before and after atmospheric correction was compared. The results showed: (1) atmospheric correction of the HJ-1 CCD data can reduce the ''increase'' effect of the atmosphere; (2) apparent reflectances are higher than the surface reflectances corrected by the 6S model in bands 1-3 but lower in the near-infrared band, and the corrected surface reflectance values agree well with the measured reflectance values; (3) the NDVI increases significantly after atmospheric correction, which indicates that atmospheric correction can highlight the vegetation information
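A per-pixel correction of this kind is commonly implemented with the three coefficients (conventionally called xa, xb, xc) that 6S reports for a given band, geometry and atmosphere; the sketch below uses illustrative coefficient and radiance values, not output from an actual 6S run:

```python
import numpy as np

def surface_reflectance(L_toa, xa, xb, xc):
    """Invert at-sensor radiance L_toa to surface reflectance using the
    standard 6S output convention: y = xa*L - xb;  rho = y / (1 + xc*y)."""
    y = xa * np.asarray(L_toa, dtype=float) - xb
    return y / (1.0 + xc * y)

# Illustrative coefficients and two at-sensor radiances (assumed values).
rho = surface_reflectance([80.0, 120.0], xa=0.0026, xb=0.12, xc=0.17)
```

In a LUT-based scheme, (xa, xb, xc) would be interpolated per pixel from a table precomputed over aerosol optical depth, water vapour and viewing geometry.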

  15. A correction for Dupuit-Forchheimer interface flow models of seawater intrusion in unconfined coastal aquifers

    Science.gov (United States)

    Koussis, Antonis D.; Mazi, Katerina; Riou, Fabien; Destouni, Georgia

    2015-06-01

    Interface flow models that use the Dupuit-Forchheimer (DF) approximation for assessing the freshwater lens and the seawater intrusion in coastal aquifers lack representation of the gap through which fresh groundwater discharges to the sea. In these models, the interface outcrops unrealistically at the same point as the free surface, is too shallow and intersects the aquifer base too far inland, thus overestimating an intruding seawater front. To correct this shortcoming of DF-type interface solutions for unconfined aquifers, we here adapt the outflow gap estimate of an analytical 2-D interface solution for infinitely thick aquifers to fit the 50%-salinity contour of variable-density solutions for finite-depth aquifers. We further improve the accuracy of the interface toe location predicted with depth-integrated DF interface solutions by ∼20% (relative to the 50%-salinity contour of variable-density solutions) by combining the outflow-gap adjusted aquifer depth at the sea with a transverse-dispersion adjusted density ratio (Pool and Carrera, 2011), appropriately modified for unconfined flow. The effectiveness of the combined correction is exemplified for two regional Mediterranean aquifers, the Israel Coastal and Nile Delta aquifers.

  16. The importance of topographically corrected null models for analyzing ecological point processes.

    Science.gov (United States)

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared.
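A minimal sketch of a topographically corrected null model, assuming the simplest correction (inflating the projected intensity by the local sec(slope) area factor so a pattern homogeneous on the surface looks denser, in projection, where terrain is steeper); the grid, slopes and intensity are illustrative, not from the paper:

```python
import numpy as np

# Simulate a point process that is homogeneous *on the surface* by weighting
# the projected (x-y) intensity with the area-inflation factor 1/cos(slope).
rng = np.random.default_rng(1)
nx = ny = 50
slope = np.deg2rad(rng.uniform(0.0, 40.0, size=(ny, nx)))  # per-cell slope angle
area_factor = 1.0 / np.cos(slope)       # true surface area per unit projected area

lam_surface = 2.0                        # points per unit of surface area
lam_projected = lam_surface * area_factor  # expected count per projected grid cell
counts = rng.poisson(lam_projected)      # one "topographically corrected" null pattern
```

Repeating the last line gives an ensemble of null patterns against which observed counts (or second-order statistics) can be compared.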

  17. Off-the-job training for VATS employing anatomically correct lung models.

    Science.gov (United States)

    Obuchi, Toshiro; Imakiire, Takayuki; Miyahara, Sou; Nakashima, Hiroyasu; Hamanaka, Wakako; Yanagisawa, Jun; Hamatake, Daisuke; Shiraishi, Takeshi; Moriyama, Shigeharu; Iwasaki, Akinori

    2012-02-01

    We evaluated our simulated major lung resection employing anatomically correct lung models as "off-the-job training" for video-assisted thoracic surgery trainees. A total of 76 surgeons voluntarily participated in our study. They performed video-assisted thoracic surgical lobectomy employing anatomically correct lung models, which are made of sponges so that vessels and bronchi can be cut using usual surgical techniques with typical forceps. After the simulation surgery, participants answered questionnaires on a visual analogue scale, in terms of their level of interest and the reality of our training method as off-the-job training for trainees. We considered that the closer a score was to 10, the more useful our method would be for training new surgeons. Regarding the appeal or level of interest in this simulation surgery, the mean score was 8.3 of 10, and regarding reality, it was 7.0. The participants could feel some of the real sensations of the surgery and seemed to be satisfied to perform the simulation lobectomy. Our training method is considered to be suitable as an appropriate type of surgical off-the-job training.

  18. A variable age of onset segregation model for linkage analysis, with correction for ascertainment, applied to glioma

    DEFF Research Database (Denmark)

    Sun, Xiangqing; Vengoechea, Jaime; Elston, Robert

    2012-01-01

    We propose a 2-step model-based approach, with correction for ascertainment, to linkage analysis of a binary trait with variable age of onset and apply it to a set of multiplex pedigrees segregating for adult glioma....

  19. Electroweak radiative corrections in the SU(2) x U(1) standard model

    International Nuclear Information System (INIS)

    Hollik, W.

    1986-01-01

    This paper contains a discussion of the one-loop renormalization of the standard model and applications of the radiative corrections to fermion processes. We restrict the discussion to leptonic processes, since these allow the cleanest access to the more subtle parts of the theory while avoiding theoretical uncertainties as far as possible. Actual measurements of the W±, Z masses and of sin²θ_W already indicate the presence of higher-order effects in electroweak processes between fermions. More accurate measurements at the upcoming colliders LEP and SLC will allow tests of the standard model beyond tree level. At the one-loop level a large amount of work has already been done, with satisfactory agreement between the individual calculations for the standard processes: μ decay, ν scattering, and e⁺e⁻ → μ⁺μ⁻. 38 refs

  20. LHC phenomenology and higher order electroweak corrections in supersymmetric models with and without R-parity

    Energy Technology Data Exchange (ETDEWEB)

    Liebler, Stefan Rainer

    2011-09-15

    The standard model of particle physics suffers from several shortcomings, from the experimental as well as the theoretical point of view: there is no established mechanism for the generation of the masses of the fundamental particles, in particular not for the light but massive neutrinos. In addition, the standard model does not provide an explanation for the observation of dark matter in the universe. Moreover, the gauge couplings of the three forces in the standard model do not unify, implying that a fundamental theory combining all forces cannot be formulated. Within this thesis we address supersymmetric models as answers to these various questions, but instead of focusing on the simplest supersymmetrization of the standard model, we consider basic extensions, namely the next-to-minimal supersymmetric standard model (NMSSM), which contains an additional singlet field, and R-parity violating models. Using lepton number violating terms in the context of bilinear R-parity violation and the μνSSM, we are able to explain neutrino physics in an intrinsically supersymmetric way, since those terms induce a mixing between the neutralinos and the neutrinos. This thesis works out the phenomenology of the supersymmetric models under consideration and points out differences from the well-known features of the simplest supersymmetric realization of the standard model. In the R-parity violating models the decays of the light neutralinos can result in displaced vertices. In combination with a light singlet state these displaced vertices might offer a rich phenomenology, such as non-standard Higgs decays into a pair of singlinos decaying with displaced vertices. Within this thesis we present some calculations at the next order of perturbation theory, since one-loop corrections can provide large contributions to the tree-level masses and decay widths. We use an on-shell renormalization scheme to calculate the masses of neutralinos and charginos, including the neutrinos and

  1. LHC phenomenology and higher order electroweak corrections in supersymmetric models with and without R-parity

    International Nuclear Information System (INIS)

    Liebler, Stefan Rainer

    2011-09-01

    The standard model of particle physics suffers from several shortcomings, from the experimental as well as the theoretical point of view: there is no established mechanism for the generation of the masses of the fundamental particles, in particular not for the light but massive neutrinos. In addition, the standard model does not provide an explanation for the observation of dark matter in the universe. Moreover, the gauge couplings of the three forces in the standard model do not unify, implying that a fundamental theory combining all forces cannot be formulated. Within this thesis we address supersymmetric models as answers to these various questions, but instead of focusing on the simplest supersymmetrization of the standard model, we consider basic extensions, namely the next-to-minimal supersymmetric standard model (NMSSM), which contains an additional singlet field, and R-parity violating models. Using lepton number violating terms in the context of bilinear R-parity violation and the μνSSM, we are able to explain neutrino physics in an intrinsically supersymmetric way, since those terms induce a mixing between the neutralinos and the neutrinos. This thesis works out the phenomenology of the supersymmetric models under consideration and points out differences from the well-known features of the simplest supersymmetric realization of the standard model. In the R-parity violating models the decays of the light neutralinos can result in displaced vertices. In combination with a light singlet state these displaced vertices might offer a rich phenomenology, such as non-standard Higgs decays into a pair of singlinos decaying with displaced vertices. Within this thesis we present some calculations at the next order of perturbation theory, since one-loop corrections can provide large contributions to the tree-level masses and decay widths. We use an on-shell renormalization scheme to calculate the masses of neutralinos and charginos including the neutrinos and leptons in

  2. Correction of the angular dependence of satellite retrieved LST at global scale using parametric models

    Science.gov (United States)

    Ermida, S. L.; Trigo, I. F.; DaCamara, C.; Ghent, D.

    2017-12-01

    Land surface temperature (LST) values retrieved from satellite measurements in the thermal infrared (TIR) may be strongly affected by spatial anisotropy. This effect introduces significant discrepancies among LST estimates from different sensors, overlapping in space and time, that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent the ensemble of directional radiometric temperatures of all surface elements within the FOV. Angular effects on LST are here estimated by means of a parametric model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. Two models are analyzed consistently to evaluate their performance and to assess their respective potential to correct directional effects on LST for a wide range of surface conditions, in terms of tree coverage, vegetation density and surface emissivity. We also propose an optimization of the correction of directional effects through a synergistic use of both models. The models are calibrated using LST data provided by two sensors: MODIS, on board NASA's TERRA and AQUA, and SEVIRI, on board EUMETSAT's MSG. As shown in our previous feasibility studies, the sampling of illumination and view angles has a high impact on the model parameters. This impact may be mitigated when the sampling size is increased by aggregating pixels with similar surface conditions. Here we propose a methodology where the land surface is stratified by means of a cluster analysis using information on land cover type, fraction of vegetation cover and topography. The models are then adjusted to the LST data corresponding to each cluster. It is shown that the quality of the cluster-based models is very close to that of the pixel-based ones. Furthermore, the reduced number of parameters allows improving the model through the incorporation of a

  3. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

    Signal-to-noise ratio and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length; using Geiger-mode avalanche photodiodes (GMAPDs), a longer code is shown to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound (CRLB) on range accuracy is derived to show that a longer code also brings better range accuracy. Combining the SNR model and the CRLB model, it follows that range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the CRLB on range accuracy is shown to converge to previously published theory, and a Gaussian range walk model is introduced for range accuracy; experimental tests also converge to the boundary model presented in this paper. It is shown that the depth error caused by fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. The depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.
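The final calibration step can be sketched as a simple polynomial fit of depth error against photon counting ratio, followed by subtracting the predicted walk error from raw depths; the calibration curve below is assumed for illustration, not measured data:

```python
import numpy as np

# Assumed calibration data: range-walk (depth) error versus photon counting ratio.
ratio = np.linspace(0.05, 0.9, 20)              # detected-photon counting ratio
depth_err_cm = 3.0 * ratio**2 - 0.5 * ratio     # illustrative walk-error curve (cm)

# Fit a quadratic calibration model and predict the walk error at a given ratio.
coeffs = np.polyfit(ratio, depth_err_cm, 2)
correction = np.polyval(coeffs, 0.4)            # predicted error at ratio = 0.4

# Apply the correction to a raw depth measurement.
depth_raw_cm = 150.0
depth_corrected = depth_raw_cm - correction
```

The polynomial degree and the ratio grid are choices one would tune against the measured TPSF drift.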

  4. On-line core monitoring system based on buckling corrected modified one group model

    International Nuclear Information System (INIS)

    Freire, Fernando S.

    2011-01-01

    Nuclear power reactors require core monitoring during plant operation. To provide safe, clean and reliable operation, core conditions must be continuously evaluated. Currently, the reactor core monitoring process is carried out by nuclear code systems that, together with data from plant instrumentation such as thermocouples, ex-core detectors and fixed or movable in-core detectors, can predict and monitor a variety of plant conditions. Typically, standard nodal methods can be found at the heart of such monitoring code systems. However, standard nodal methods require long computer running times compared with standard coarse-mesh finite difference schemes, while classic finite-difference models require a fine-mesh core representation. To overcome this limitation, the classic modified one-group model can be used to account for the main neutronic behavior of the core; in this model a coarse-mesh core representation can be evaluated easily, with a crude treatment of thermal neutron leakage. In this work, an improvement of the classic modified one-group model based on a buckling thermal correction was used to obtain a fast, accurate and reliable core monitoring methodology for future applications, providing a powerful tool for the core monitoring process. (author)
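The modified one-group picture with a leakage (buckling) correction can be illustrated by the textbook relation k_eff = k_inf / (1 + M²B²), with B² the geometric buckling of a bare cylinder; the numbers below are illustrative, not from the paper:

```python
import math

# Illustrative core parameters (assumed, not from the referenced work).
k_inf = 1.30          # infinite-medium multiplication factor
M2 = 60.0             # migration area M^2 (cm^2)
R, H = 180.0, 370.0   # bare-cylinder core radius and height (cm)

# Geometric buckling of a bare finite cylinder: B^2 = (2.405/R)^2 + (pi/H)^2
B2 = (2.405 / R) ** 2 + (math.pi / H) ** 2

# One-group result with leakage correction.
k_eff = k_inf / (1.0 + M2 * B2)
```

The buckling correction in the paper refines the thermal-leakage treatment beyond this simple relation, but the structure of the estimate is the same.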

  5. The Asian Correction Can Be Quantitatively Forecasted Using a Statistical Model of Fusion-Fission Processes.

    Science.gov (United States)

    Teh, Boon Kin; Cheong, Siew Ann

    2016-01-01

    The Global Financial Crisis of 2007-2008 wiped out US$37 trillion across global financial markets, a value equivalent to the combined GDPs of the United States and the European Union in 2014. The defining moment of this crisis was the failure of Lehman Brothers, which precipitated the October 2008 crash and the Asian Correction (March 2009). Had the Federal Reserve seen these crashes coming, it might have bailed out Lehman Brothers and prevented the crashes altogether. In this paper, we show that some of these market crashes (like the Asian Correction) can be predicted, if we assume that a large number of adaptive traders employ competing trading strategies. As the number of adherents of some strategies grows, others decline in the constantly changing strategy space. When a strategy group grows into a giant component, trader actions become increasingly correlated, and this is reflected in the stock price; the fragmentation of this giant component leads to a market crash. We also derive the mean-field market crash forecast equation based on a model of fusions and fissions in the trading strategy space. By fitting the continuous returns of 20 stocks traded on the Singapore Exchange to the market crash forecast equation, we obtain crash predictions ranging from end October 2008 to mid-February 2009, with early warning four to six months prior to the crashes.

  6. Radiative corrections to the triple Higgs coupling in the inert Higgs doublet model

    International Nuclear Information System (INIS)

    Arhrib, Abdesslam; Benbrik, Rachid; Falaki, Jaouad El; Jueid, Adil

    2015-01-01

    We investigate the implications of the discovery of a Higgs-like particle in the first phase of LHC Run 1 for the Inert Higgs Doublet Model (IHDM). The determination of the Higgs couplings to SM particles and of its intrinsic properties will improve during the LHC Run 2 starting this year. Run 2 should also shed some light on the triple Higgs coupling, whose measurement is very important in order to establish the details of the electroweak symmetry breaking mechanism. Given the importance of the Higgs couplings both at the LHC and at e⁺e⁻ Linear Collider machines, accurate theoretical predictions are required. We study the radiative corrections to the triple Higgs coupling hhh and to the hZZ and hWW couplings in the context of the IHDM. Combining several theoretical and experimental constraints on the parameter space, we show that the extra particles can modify the triple Higgs coupling near threshold regions. Finally, we discuss the effect of these corrections on the double Higgs production signal at the e⁺e⁻ LC and show that they can be rather important.

  7. The Asian Correction Can Be Quantitatively Forecasted Using a Statistical Model of Fusion-Fission Processes.

    Directory of Open Access Journals (Sweden)

    Boon Kin Teh

    Full Text Available: The Global Financial Crisis of 2007-2008 wiped out US$37 trillion across global financial markets, a value equivalent to the combined GDPs of the United States and the European Union in 2014. The defining moment of this crisis was the failure of Lehman Brothers, which precipitated the October 2008 crash and the Asian Correction (March 2009). Had the Federal Reserve seen these crashes coming, it might have bailed out Lehman Brothers and prevented the crashes altogether. In this paper, we show that some of these market crashes (like the Asian Correction) can be predicted, if we assume that a large number of adaptive traders employ competing trading strategies. As the number of adherents of some strategies grows, others decline in the constantly changing strategy space. When a strategy group grows into a giant component, trader actions become increasingly correlated, and this is reflected in the stock price; the fragmentation of this giant component leads to a market crash. We also derive the mean-field market crash forecast equation based on a model of fusions and fissions in the trading strategy space. By fitting the continuous returns of 20 stocks traded on the Singapore Exchange to the market crash forecast equation, we obtain crash predictions ranging from end October 2008 to mid-February 2009, with early warning four to six months prior to the crashes.

  8. Revised Tijeras Arroyo Groundwater Current Conceptual Model and Corrective Measures Evaluation Report - February 2018.

    Energy Technology Data Exchange (ETDEWEB)

    Copland, John R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2018-02-01

    The U.S. Department of Energy (DOE) and the management and operating (M&O) contractor for Sandia National Laboratories beginning on May 1, 2017, National Technology & Engineering Solutions of Sandia, LLC (NTESS), hereinafter collectively referred to as DOE/NTESS, prepared this Revised Tijeras Arroyo Groundwater Current Conceptual Model (CCM) and Corrective Measures Evaluation (CME) Report , referred to as the Revised CCM/CME Report, to meet requirements under the Sandia National Laboratories-New Mexico (SNL/NM) Compliance Order on Consent (Consent Order). The Consent Order became effective on April 29, 2004. The Consent Order identifies the Tijeras Arroyo Groundwater (TAG) Area of Concern (AOC) as an area of groundwater contamination requiring further characterization and corrective action. In November 2004, New Mexico Environment Department (NMED) approved the July 2004 CME Work Plan. In April 2005, DOE and the SNL M&O contractor at the time, Sandia Corporation (Sandia), hereinafter collectively referred to as DOE/Sandia, submitted a CME Report, but NMED did not finalize review of that document. In December 2016, DOE/Sandia submitted a combined and updated CCM/CME Report. NMED issued a disapproval letter in May 2017 that included comments on the December 2016 CCM/CME Report. In August 2017, NMED and DOE/NTESS staff held a meeting to discuss and clarify outstanding issues. This Revised CCM/CME Report addresses (1) the issues presented in the NMED May 2017 disapproval letter and (2) findings from the August 2017 meeting.

  9. Correcting electrode modelling errors in EIT on realistic 3D head models.

    Science.gov (United States)

    Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo

    2015-12-01

    Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability against modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously with conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.

  10. On Gluonic Corrections to the Mass Spectrum in a Relativistic Charmonium Model

    OpenAIRE

    Hitoshi, ITO; Department of Physics, Faculty of Science and Technology Kinki University

    1984-01-01

    It is shown that the gluonic correction in the innermost region is abnormally large in the ^1S_0 state, and a cutoff parameter which suppresses this correction should be introduced. The retardation effect is estimated under this restriction on the gluonic correction. The correction due to pair creation is shown to be small except for the ^1S_0 and ^3P_0 states.

  11. Refitting density dependent relativistic model parameters including Center-of-Mass corrections

    International Nuclear Information System (INIS)

    Avancini, Sidney S.; Marinelli, Jose R.; Carlson, Brett Vern

    2011-01-01

    Full text: Relativistic mean field models have become a standard approach for precise nuclear structure calculations. After the seminal work of Serot and Walecka, which introduced a model Lagrangian density where the nucleons interact through the exchange of scalar and vector mesons, several models were obtained through its generalization, including other meson degrees of freedom, non-linear meson interactions, meson-meson interactions, etc. More recently, density dependent coupling constants were incorporated into the Walecka-like models, which are now extensively used. In particular, for these models a connection with density functional theory can be established. Due to the inherent difficulties presented by field theoretical models, only the mean field approximation is used for their solution. In order to calculate finite nuclei properties in the mean field approximation, a reference frame has to be fixed and therefore translational symmetry is violated. It is well known that in such a case spurious effects due to the center-of-mass (COM) motion are present, which are more pronounced for light nuclei. In a previous work we proposed a technique based on the Peierls-Yoccoz projection operator applied to the mean-field relativistic solution, in order to project out spurious COM contributions. In this work we obtain a new fit for the density dependent parameters of a density dependent hadronic model, taking into account the COM corrections. Our fit is based on the charge radii and binding energies of 4He, 16O, 40Ca, 48Ca, 56Ni, 68Ni, 100Sn, 132Sn and 208Pb. We show that the nuclear observables calculated using our fit are of a quality comparable to others that can be found in the literature, with the advantage that now a translationally invariant many-body wave function is at our disposal. (author)

  12. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    Science.gov (United States)

    Arumugam, S.; Libera, D.

    2017-12-01

    Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, making calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast, based on split-sample validation. The proposed approach is a dimension reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation, while individual biases are simultaneously reduced. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to 3 watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of precipitation and temperature (P & T) from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.

  13. Theory and procedures for finding a correct kinetic model for the bacteriorhodopsin photocycle.

    Science.gov (United States)

    Hendler, R W; Shrager, R; Bose, S

    2001-04-26

    In this paper, we present the implementation and results of new methodology based on linear algebra. The theory behind these methods is covered in detail in the Supporting Information, available electronically (Shrager and Hendler). In brief, the methods presented search through all possible forward sequential submodels in order to find candidates that can be used to construct a complete model for the BR photocycle. The methodology is limited to forward sequential models; if no such models are compatible with the experimental data, none will be found. The procedures apply objective tests and filters to eliminate possibilities that cannot be correct, thus cutting the total number of candidate sequences to be considered. In the current application, which uses six exponentials, the total sequences were cut from 1950 to 49. The remaining sequences were further screened using known experimental criteria. The approach led to a solution which consists of a pair of sequences, one with five exponentials showing BR* → L(f) → M(f) → N → O → BR and the other with three exponentials showing BR* → L(s) → M(s) → BR. The deduced complete kinetic model for the BR photocycle is thus either a single photocycle branched at the L intermediate or a pair of two parallel photocycles. Reasons for preferring the parallel photocycles are presented. Synthetic data constructed on the basis of the parallel photocycles were indistinguishable from the experimental data in a number of analytical tests that were applied.

  14. Correcting transport errors during advection of aerosol and cloud moment sequences in eulerian models

    Energy Technology Data Exchange (ETDEWEB)

    McGraw R.

    2012-03-01

    Moment methods are finding increasing usage for simulations of particle population balance in box models and in more complex flows including two-phase flows. These highly efficient methods have nevertheless had little impact to date for multi-moment representation of aerosols and clouds in atmospheric models. There are evidently two reasons for this: First, atmospheric models, especially if the goal is to simulate climate, tend to be extremely complex and take many man-years to develop. Thus there is considerable inertia to the implementation of novel approaches. Second, and more fundamental, the nonlinear transport algorithms designed to reduce numerical diffusion during advection of various species (tracers) from cell to cell, in the typically coarse grid arrays of these models, can and occasionally do fail to preserve correlations between the moments. Other correlated tracers such as isotopic abundances, composition of aerosol mixtures, hydrometeor phase, etc., are subject to this same fate. In the case of moments, this loss of correlation can and occasionally does give rise to unphysical moment sets. When this happens the simulation can come to a halt. Following a brief description and review of moment methods, the goal of this paper is to present two new approaches that both test moment sequences for validity and correct them when they fail. The new approaches work on individual grid cells without requiring stored information from previous time-steps or neighboring cells.
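A standard test of this kind checks that the Hankel matrices built from the moment sequence are positive semi-definite, a necessary condition for the moments to be realizable by a non-negative particle distribution. A minimal sketch of such a validity check (illustrative only; the paper's own tests and correction schemes are not reproduced here):

```python
import numpy as np

def hankel_valid(m):
    """Check that a moment sequence m = [m0, m1, ...] is realizable by a
    non-negative distribution on [0, inf): the Hankel matrices built from
    the moments, with and without an index shift, must be positive
    semi-definite (Stieltjes moment conditions)."""
    m = np.asarray(m, dtype=float)
    n = (len(m) + 1) // 2
    H0 = np.array([[m[i + j] for j in range(n)] for i in range(n)])
    n1 = len(m) // 2
    H1 = np.array([[m[i + j + 1] for j in range(n1)] for i in range(n1)])
    # allow tiny negative eigenvalues from round-off
    return (np.linalg.eigvalsh(H0).min() >= -1e-12
            and np.linalg.eigvalsh(H1).min() >= -1e-12)

# a smoothly growing moment sequence passes the test...
valid = hankel_valid([1.0, 1.0, 1.5, 3.0, 8.0, 30.0])
# ...while one violating m0*m2 >= m1**2 (negative variance) fails
invalid = hankel_valid([1.0, 2.0, 1.0, 3.0, 8.0, 30.0])
```

An advection scheme that mixes moment sets from neighboring cells can run each candidate set through such a check before the microphysics step and repair the set only when the check fails.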

  15. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

    Directory of Open Access Journals (Sweden)

    Daniel Santana-Cedrés

    2016-12-01

    We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated; we then initialize the second distortion parameter to zero, and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting the two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows more points belonging to the distorted lines to be detected, so the Hough transform is iteratively repeated to extract a better set of lines until no further improvement is achieved. We present experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion, as well as a comparison between the polynomial and division models.
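The two model families compared in the paper can be sketched as follows; the coefficient values and the normalized-coordinate convention are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical two-parameter coefficients; coordinates are assumed
# normalized so that image points lie roughly in [-1, 1] x [-1, 1],
# with the center of distortion at the origin.
K1, K2 = -0.15, 0.02

def distort_polynomial(p):
    """Polynomial model: r_d = r_u * (1 + K1*r_u^2 + K2*r_u^4),
    applied componentwise to undistorted points."""
    p = np.asarray(p, dtype=float)
    r2 = (p ** 2).sum(axis=-1, keepdims=True)
    return p * (1.0 + K1 * r2 + K2 * r2 ** 2)

def undistort_division(p):
    """Division model: r_u = r_d / (1 + K1*r_d^2 + K2*r_d^4),
    applied componentwise to distorted points."""
    p = np.asarray(p, dtype=float)
    r2 = (p ** 2).sum(axis=-1, keepdims=True)
    return p / (1.0 + K1 * r2 + K2 * r2 ** 2)
```

In an optimization loop of the kind described above, the residual being minimized would be the distance from edge points, mapped through one of these models, to their fitted straight lines, with K1, K2 and the center coordinates as free parameters.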

  16. Modeling and Performance of Bonus-Malus Systems: Stationarity versus Age-Correction

    Directory of Open Access Journals (Sweden)

    Søren Asmussen

    2014-03-01

    In a bonus-malus system in car insurance, the bonus class of a customer is updated from one year to the next as a function of the current class and the number of claims in the year (assumed Poisson. Thus the sequence of classes of a customer in consecutive years forms a Markov chain, and most of the literature measures performance of the system in terms of the stationary characteristics of this Markov chain. However, the rate of convergence to stationarity may be slow in comparison to the typical sojourn time of a customer in the portfolio. We suggest an age-correction to the stationary distribution and present an extensive numerical study of its effects. An important feature of the modeling is a Bayesian view, where the Poisson rate according to which claims are generated for a customer is the outcome of a random variable specific to the customer.
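The stationary characteristics mentioned above follow directly from the transition matrix of the class process. The sketch below uses a toy six-class system with illustrative transition rules (one class down after a claim-free year, two classes up per claim), not the systems studied in the paper:

```python
import numpy as np
from math import exp, factorial

N, LAM = 6, 0.1  # number of bonus classes (0 = best) and Poisson claim rate

def poisson_pmf(k, lam=LAM):
    return exp(-lam) * lam ** k / factorial(k)

# Build the one-year transition matrix of the bonus class Markov chain.
P = np.zeros((N, N))
for c in range(N):
    for k in range(20):  # truncate the Poisson tail; negligible beyond 20
        nxt = max(0, c - 1) if k == 0 else min(N - 1, c + 2 * k)
        P[c, nxt] += poisson_pmf(k)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi /= pi.sum()
```

The age-correction discussed in the paper would replace `pi` by a mixture of the n-step distributions `p0 @ P**n`, weighted by the distribution of a customer's sojourn time in the portfolio, rather than using the stationary limit alone.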

  17. MODEL FOR THE CORRECTION OF THE SPECIFIC GRAVITY OF BIODIESEL FROM RESIDUAL OIL

    Directory of Open Access Journals (Sweden)

    Tatiana Aparecida Rosa da Silva

    2013-06-01

    Biodiesel is an important fuel with economic, social and environmental benefits. The production cost of biodiesel can be significantly lowered if the raw material is replaced by an alternative feedstock such as residual oil. In this study, the variation of specific gravity with increasing temperature was determined for diesel and for biodiesel produced from residual oil by homogeneous basic catalysis. All properties analyzed for the biodiesel are within the Brazilian specification. The determination of the correction algorithm for specific gravity as a function of temperature is also presented; the slopes of the lines for diesel fuel, methylic biodiesel (BMR) and ethylic biodiesel (BER) from residual oil were -0.7089, -0.7290 and -0.7277, respectively. This demonstrates that the model differs between chemically different fuels, such as diesel and biodiesels from different sources, indicating the importance of determining a specific algorithm for converting volumes to the reference temperature.
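Using the slopes reported in the abstract, the linear correction and the conversion of a measured volume to the 20 °C reference can be sketched as follows (the 880 kg/m³ reference density is a placeholder, not a value from the paper):

```python
# Linear specific-gravity correction rho(T) = rho20 + a*(T - 20), with the
# slopes from the abstract in kg/m^3 per degree C.
SLOPES = {"diesel": -0.7089, "BMR": -0.7290, "BER": -0.7277}

def density_at(fuel, t_c, rho20=880.0):
    """Density (kg/m^3) at temperature t_c (C), from the 20 C reference."""
    return rho20 + SLOPES[fuel] * (t_c - 20.0)

def volume_at_20c(fuel, volume, t_c, rho20=880.0):
    """Convert a volume measured at t_c to the 20 C reference temperature,
    using mass conservation: V20 = V_T * rho(T) / rho(20)."""
    return volume * density_at(fuel, t_c, rho20) / rho20
```

A volume metered above 20 °C thus converts to a smaller reference volume, which is the practical point of the correction in fuel custody transfer.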

  18. Estimating oil product demand in Indonesia using a cointegrating error correction model

    International Nuclear Information System (INIS)

    Dahl, C.

    2001-01-01

    Indonesia's long oil production history and large population mean that Indonesian oil reserves, per capita, are the lowest in OPEC and that, eventually, Indonesia will become a net oil importer. Policy-makers want to forestall this day, since oil revenue comprised around a quarter of both the government budget and foreign exchange revenues for fiscal year 1997/98. To help policy-makers determine how economic growth and oil-pricing policy affect the consumption of oil products, we estimate the demand for six oil products and total petroleum consumption using an error correction-cointegration approach, and compare it with estimates from a lagged endogenous model using data for 1970-95. (author)
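The error correction-cointegration approach can be illustrated with a two-step Engle-Granger sketch on synthetic data (the paper's series are not reproduced here; the data-generating process and variable names are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cointegrated pair: x (e.g. log income) is a random walk,
# y (e.g. log oil demand) tracks x up to stationary noise.
n = 400
x = np.cumsum(rng.normal(size=n))
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)

# Step 1: long-run (cointegrating) regression  y_t = a + b*x_t + u_t
A = np.column_stack([np.ones(n), x])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
u = y - (a + b * x)  # error-correction term (long-run disequilibrium)

# Step 2: short-run dynamics  dy_t = c + g*dx_t + alpha*u_{t-1} + e_t
dy, dx = np.diff(y), np.diff(x)
B = np.column_stack([np.ones(n - 1), dx, u[:-1]])
(c, g, alpha), *_ = np.linalg.lstsq(B, dy, rcond=None)
```

A negative `alpha` indicates that deviations from the long-run relation are corrected over time, which is the defining feature of an error correction model; `b` is the long-run elasticity when the variables are in logs.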

  19. Determinants of Working Capital Credit Growth in Indonesian Banking: An Error Correction Model (ECM) Approach

    Directory of Open Access Journals (Sweden)

    Sasanti Widyawati

    2016-05-01

    Bank loans play an important role in financing the national economy and are a driving force of economic growth. Credit growth must therefore be kept balanced; however, the data show that commercial bank credit growth has slowed. Using the Domowitz-El Badawi error correction model (ECM), the study analyzes the short-term and long-term impact of the independent variables that determine working capital credit growth in the Indonesian financial sector. The results show that, in the short term, only non-performing loans have a significant negative effect on working capital loan growth. In the long term, working capital loan interest rates have a significant negative effect, third-party fund growth has a significant positive effect, and inflation has a significant negative effect.

  20. GPS, BDS and Galileo ionospheric correction models: An evaluation in range delay and position domain

    Science.gov (United States)

    Wang, Ningbo; Li, Zishen; Li, Min; Yuan, Yunbin; Huo, Xingliang

    2018-05-01

    The performance of the GPS Klobuchar (GPSKlob), BDS Klobuchar (BDSKlob) and NeQuick Galileo (NeQuickG) ionospheric correction models is evaluated in the range delay and position domains over China. The post-processed Klobuchar-style (CODKlob) coefficients provided by the Center for Orbit Determination in Europe (CODE) and our own fitted NeQuick coefficients (NeQuickC) are also included for comparison. In the range delay domain, BDS total electron content (TEC) derived from 20 international GNSS Monitoring and Assessment System (iGMAS) stations and GPS TEC obtained from 35 Crust Movement Observation Network of China (CMONC) stations are used as references. Compared to BDS TEC over the short period (doy 010-020, 2015), GPSKlob, BDSKlob and NeQuickG can correct 58.4, 66.7 and 54.7% of the ionospheric delay. Compared to GPS TEC over the long period (doy 001-180, 2015), the three ionospheric models mitigate the ionospheric delay by 64.8, 65.4 and 68.1%, respectively. For the two comparison cases, CODKlob shows the worst performance, reducing only 57.9% of the ionospheric range errors. NeQuickC exhibits the best performance, outperforming GPSKlob, BDSKlob and NeQuickG by 6.7, 2.1 and 6.9%, respectively. In the position domain, single-frequency single point positioning (SPP) was conducted at the selected 35 CMONC sites using the GPS C/A pseudorange with and without ionospheric corrections. The vertical position error of the uncorrected case drops significantly from 10.3 m to 4.8, 4.6, 4.4 and 4.2 m for GPSKlob, CODKlob, BDSKlob and NeQuickG, respectively; the horizontal position error (3.2 m), however, merely decreases to 3.1, 2.7, 2.4 and 2.3 m. NeQuickG outperforms GPSKlob and BDSKlob by 5.8 and 1.9% in the vertical component, and by 25.0 and 3.2% in the horizontal component.
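Klobuchar-style broadcast corrections of the kind evaluated here model the vertical ionospheric delay as a constant night-time level plus a daytime half-cosine bump. A heavily simplified sketch (the amplitude, period and phase below are illustrative constants; the operational models derive amplitude and period from broadcast alpha/beta coefficients and the ionospheric pierce-point latitude, and apply an obliquity factor for slant paths):

```python
from math import cos, pi

# Illustrative constants: 5 ns night-time bias, 30 ns daytime amplitude,
# 20 h period, peak at 14:00 local solar time (all in seconds).
DC, A, P, PHI = 5.0e-9, 30.0e-9, 72000.0, 50400.0

def vertical_delay(t_local):
    """Klobuchar-style vertical ionospheric delay (seconds) at local solar
    time t_local (seconds of day): half-cosine by day, constant by night."""
    x = 2.0 * pi * (t_local - PHI) / P
    return DC + A * cos(x) if abs(x) < pi / 2 else DC
```

Multiplying the delay by the speed of light gives the range error that the SPP experiments above remove from the C/A pseudoranges; 35 ns corresponds to roughly 10 m of range.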

  1. GUT scale threshold corrections in a complete supersymmetric SO(10) model: αs(MZ) versus proton lifetime

    International Nuclear Information System (INIS)

    Lucas, V.; Raby, S.

    1996-01-01

    We show that one-loop GUT scale threshold corrections to gauge couplings are a significant constraint on the GUT symmetry-breaking sector of the theory. The one-loop threshold corrections relate the prediction for α s (M Z ) to the proton lifetime. We have calculated these corrections in a new complete SO(10) SUSY GUT. The results are consistent with the low-energy measurement of α s (M Z ). We have also calculated the proton lifetime and branching ratios in this model. We show that proton decay rates provide a powerful test for theories of fermion masses. copyright 1996 The American Physical Society

  2. Renormalization group flow of scalar models in gravity

    International Nuclear Information System (INIS)

    Guarnieri, Filippo

    2014-01-01

    In this Ph.D. thesis we study the issue of renormalizability of gravitation in the context of the renormalization group (RG), employing both perturbative and non-perturbative techniques. In particular, we focus on different gravitational models and approximations in which a central role is played by a scalar degree of freedom, since their RG flow is easier to analyze. We restrict our interest to two quantum gravity approaches that have gained a lot of attention recently, namely the asymptotic safety scenario for gravity and Horava-Lifshitz quantum gravity. In the so-called asymptotic safety conjecture, the high energy regime of gravity is controlled by a non-Gaussian fixed point which ensures non-perturbative renormalizability and finiteness of the correlation functions. We investigate the existence of such a non-trivial fixed point using the functional renormalization group, a continuum version of Wilson's non-perturbative renormalization group. In particular we quantize the sole conformal degree of freedom, an approximation that has been shown to lead to a qualitatively correct picture. The question of the existence of a non-Gaussian fixed point in an infinite-dimensional parameter space, that is, for a generic f(R) theory, cannot however be studied using such a conformally reduced model. Hence we study it by quantizing a dynamically equivalent scalar-tensor theory, i.e. a generic Brans-Dicke theory with ω=0 in the local potential approximation. Finally, we investigate, using a perturbative RG scheme, the asymptotic freedom of Horava-Lifshitz gravity, an approach based on the emergence of an anisotropy between space and time which lifts Newton's constant to a marginal coupling and explicitly preserves unitarity. In particular we evaluate the one-loop correction in 2+1 dimensions, quantizing only the conformal degree of freedom.

  3. Aspects of quantum corrections in a Lorentz-violating extension of the abelian Higgs Model

    Energy Technology Data Exchange (ETDEWEB)

    Brito, L.C.T.; Fargnoli, H.G. [Universidade Federal de Lavras, MG (Brazil); Scarpelli, A.P. Baeta [Departamento de Policia Federal, Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Full text: We have investigated new aspects related to the four-dimensional abelian gauge-Higgs model with the addition of the Carroll-Field-Jackiw (CFJ) term. We have focused on one-loop quantum corrections to the photon and Higgs sectors and have analyzed what kind of effects are induced at the quantum level by spontaneous gauge symmetry breaking in the presence of the CFJ term. We have shown that new finite and non-ambiguous Lorentz-breaking terms are induced in both sectors at second order in the background vector. Specifically, in the pure gauge sector a CPT-even aether term (free from ambiguities) is induced. A CPT-even term is also induced in the pure Higgs sector. Both terms have been mapped onto the Standard Model Extension. Besides, aspects of the one-loop renormalization of the terms depending on the background vector have been studied. The new divergences due to the presence of the CFJ term were shown to be handled by the renormalization condition which requires the vanishing of the vacuum expectation value of the Higgs field. Thus, at one loop the CFJ term does not spoil the well-known renormalizability of the model without Lorentz symmetry breaking terms. The calculations have been done within dimensional methods and in an arbitrary gauge. (author)

  4. Numerical model and analysis of an energy-based system using microwaves for vision correction

    Science.gov (United States)

    Pertaub, Radha; Ryan, Thomas P.

    2009-02-01

    A treatment system was developed utilizing a microwave-based procedure capable of treating myopia and offering a less invasive alternative to laser vision correction without cutting the eye. Microwave thermal treatment elevates the temperature of the paracentral stroma of the cornea to create a predictable refractive change while preserving the epithelium and deeper structures of the eye. A pattern of shrinkage outside of the optical zone may be sufficient to flatten the central cornea. A numerical model was set up to investigate both the electromagnetic field and the resultant transient temperature distribution. A finite element model of the eye was created and the axisymmetric distribution of temperature calculated to characterize the combination of controlled power deposition combined with surface cooling to spare the epithelium, yet shrink the cornea, in a circularly symmetric fashion. The model variables included microwave power levels and pulse width, cooling timing, dielectric material and thickness, and electrode configuration and gap. Results showed that power is totally contained within the cornea and no significant temperature rise was found outside the anterior cornea, due to the near-field design of the applicator and limited thermal conduction with the short on-time. Target isothermal regions were plotted as a result of common energy parameters along with a variety of electrode shapes and sizes, which were compared. Dose plots showed the relationship between energy and target isothermic regions.

  5. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    Science.gov (United States)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8° spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias-corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For the future climate, bias correction led to a higher level of agreement among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
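The bias-correction component of an SDBC-style workflow is commonly an empirical quantile mapping; a univariate sketch on synthetic data (the study's exact implementation is not reproduced here):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: pass each future model value through the
    transfer function that maps historical model quantiles onto the
    corresponding observed quantiles."""
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    return np.interp(model_fut, model_q, obs_q)

rng = np.random.default_rng(42)
model_hist = rng.normal(1.0, 1.0, size=5000)  # biased model climate
obs_hist = model_hist + 2.0                   # observations: pure +2 offset
corrected = quantile_map(model_hist, obs_hist, np.array([0.5, 1.0, 1.5]))
```

With a purely additive bias, as in this toy example, the mapping reduces to a constant shift; with real data it also corrects differences in variance and in the shape of the distribution, quantile by quantile.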

  6. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    Science.gov (United States)

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation will need to be kept outside the scan room, which requires the length of the tubing between the patient and the detector to increase, thus worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to attempt to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces acquired with 1.5 m tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved for the 1.5 m line. Predictions based on experimental measurements, numerical simulations and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. For our setup, correction of dispersion in arterial sampling tubing up to 3 m by the transmission-dispersion model can therefore be performed.
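A common transmission-dispersion model treats the tubing as a single-exponential smoothing kernel, in which case the undispersed trace can be recovered as c(t) + τ·dc/dt. A sketch on synthetic data (the time constant and bolus shape are illustrative, not values from this study):

```python
import numpy as np

TAU = 5.0  # illustrative dispersion time constant, seconds
t = np.linspace(0.0, 120.0, 1201)
dt = t[1] - t[0]

# synthetic "true" arterial bolus: Gaussian peak at t = 30 s
true_curve = np.exp(-((t - 30.0) ** 2) / (2 * 8.0 ** 2))

# forward model: convolution with the exponential dispersion kernel
kernel = np.exp(-t / TAU) / TAU
measured = np.convolve(true_curve, kernel)[: len(t)] * dt

# correction: add tau times the numerical derivative of the measured trace
corrected = measured + TAU * np.gradient(measured, dt)
```

With noisy data the derivative term amplifies high-frequency noise, which is why longer tubing (larger effective τ, or multiple dispersion components) makes the correction progressively less stable, consistent with the failure at 4.5 m reported above.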

  7. The role of the intervertebral disc in correction of scoliotic curves. A theoretical model of idiopathic scoliosis pathogenesis.

    Science.gov (United States)

    Grivas, T B; Vasiliadis, E S; Rodopoulos, G; Bardakos, N

    2008-01-01

    Wedging of the scoliotic inter-vertebral disc (IVD) was previously reported as a contributory factor for progression of idiopathic scoliotic (IS) curves. The present study introduces a theoretical model of the IVD's role in IS pathogenesis and examines whether, by reversing IVD wedging with conservative treatment (full- and night-time braces and exercises) or fusionless IS surgery with staples, we can correct the deformity of the immature spine. The proposed model invokes the role of the diurnal variation and the asymmetric water distribution in the scoliotic IVD and the subsequent alteration of the mechanical environment of the adjacent vertebral growth plates. Modulation of the IVD by applying corrective forces on the scoliotic curve restores a close-to-normal force application on the vertebral growth plates through the Hueter-Volkmann principle and consequently prevents curve progression. The forces are then transmitted evenly to the growth plate and increase the rate of proliferation of chondrocytes at the corrected pressure side, the concave. Application of appropriately directed forces, ideally opposite to the apex of the deformity, likely leads to optimal correction. The wedging of the elastic IVD in the immature scoliotic spine could be reversed by application of corrective forces on it. Reversal of IVD wedging is thus amended into a "corrective", rather than "progressive", factor of the deformity. Through the proposed model, treatment of progressive IS with braces, exercises and fusionless surgery by anterior stapling could be effective.

  8. Logarithmic correction in the deformed AdS5 model to produce the heavy quark potential and QCD beta function

    International Nuclear Information System (INIS)

    He Song; Huang Mei; Yan Qishu

    2011-01-01

    We study the holographic QCD model, which contains a quadratic term -σz 2 and a logarithmic term -c 0 log[(z IR -z)/z IR ] with an explicit infrared cutoff z IR in the deformed AdS 5 warp factor. We investigate the heavy-quark potential for three cases, i.e., with only a quadratic correction, with both quadratic and logarithmic corrections, and with only a logarithmic correction. We solve for the dilaton field and dilaton potential from the Einstein equation and investigate the corresponding beta function in the Guersoy-Kiritsis-Nitti framework. Our studies show that in the case with only a quadratic correction, a negative σ, i.e. the Andreev-Zakharov model, is favored for fitting the heavy-quark potential and producing the QCD beta function at 2-loop level; however, the dilaton potential is unbounded in the infrared regime. One interesting observation for the case of positive σ is that the corresponding beta function exhibits an infrared fixed point. In the case with only a logarithmic correction, the heavy-quark Cornell potential can be fitted very well, the corresponding beta function agrees with the QCD beta function at 2-loop level reasonably well, and the dilaton potential is bounded from below in the infrared. At the end, we propose a more compact model which has only a logarithmic correction in the deformed warp factor and fewer free parameters.

  9. Bodily tides near the 1:1 spin-orbit resonance: correction to Goldreich's dynamical model

    Science.gov (United States)

    Williams, James G.; Efroimsky, Michael

    2012-12-01

    Spin-orbit coupling is often described in an approach known as "the MacDonald torque", which has long become the textbook standard due to its apparent simplicity. Within this method, a concise expression for the additional tidal potential, derived by MacDonald (Rev Geophys 2:467-541, 1964), is combined with a convenient assumption that the quality factor Q is frequency-independent (or, equivalently, that the geometric lag angle is constant in time). This makes the treatment unphysical because MacDonald's derivation of the said formula was, very implicitly, based on keeping the time lag frequency-independent, which is equivalent to setting Q to scale as the inverse tidal frequency. This contradiction requires the entire MacDonald treatment of both non-resonant and resonant rotation to be rewritten. The non-resonant case was reconsidered by Efroimsky and Williams (Cel Mech Dyn Astron 104:257-289, 2009), in application to spin modes distant from the major commensurabilities. In the current paper, we continue this work by introducing the necessary alterations into the MacDonald-torque-based model of falling into a 1-to-1 resonance. (The original version of this model was offered by Goldreich (Astron J 71:1-7, 1966).) Although the MacDonald torque, both in its original formulation and in its corrected version, is incompatible with realistic rheologies of minerals and mantles, it remains a useful toy model, which enables one to obtain, in some situations, qualitatively meaningful results without resorting to the more rigorous (and complicated) theory of Darwin and Kaula. We first address this simplified model in application to an oblate primary body, with tides raised on it by an orbiting zero-inclination secondary. (Here the role of the tidally-perturbed primary can be played by a satellite, the perturbing secondary being its host planet. A planet may as well be the perturbed primary, its host star acting as the tide-raising secondary.) We then extend the model to a

  10. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    Science.gov (United States)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin

  11. Corrections for hydrostatic atmospheric models: radii and effective temperatures of Wolf Rayet stars

    International Nuclear Information System (INIS)

    Loore, C. de; Hellings, P.; Lamers, H.J.G.L.M.

    1982-01-01

With the assumption of plane-parallel hydrostatic atmospheres, used generally for the computation of evolutionary models, the radii of WR stars are seriously underestimated. The true atmospheres may be very extended, due to the effect of the stellar wind. Instead of these hydrostatic atmospheres the authors consider dynamical atmospheres adopting a velocity law. The equation of the optical depth is integrated outwards using the equation of continuity. The "hydrostatic" radii must be multiplied by a factor of 2 to 8, and the effective temperatures by a factor of 0.8 to 0.35, when Wolf Rayet characteristics for the wind are considered and WR mass loss rates are used. With these corrections the effective temperatures of the theoretical models, which are helium-burning Roche lobe overflow remnants, range between 30,000 K and 50,000 K. Effective temperatures calculated under the hydrostatic hypothesis can be as high as 150,000 K for helium-burning RLOF remnants with WR mass loss rates. (Auth.)
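The correction described above can be caricatured numerically: assume a beta velocity law, obtain the density from mass continuity, integrate the optical depth inward from large radii, and locate the τ = 2/3 surface. Everything below (the arbitrary units, the 0.999 softening near the core, the function name) is my own toy construction, not the authors' calculation:

```python
import numpy as np

def photospheric_radius(r_core, mdot, v_inf, kappa, beta=1.0):
    """Radius where the wind optical depth reaches 2/3, for a beta
    velocity law and density from mass continuity (arbitrary units)."""
    r = np.linspace(r_core, 200.0 * r_core, 400_000)
    v = v_inf * (1.0 - 0.999 * r_core / r) ** beta    # beta velocity law
    rho = mdot / (4.0 * np.pi * r**2 * v)             # continuity equation
    dtau = kappa * rho * (r[1] - r[0])
    tau = dtau[::-1].cumsum()[::-1]                   # tau(r) = integral r..outer edge
    return r[np.argmax(tau < 2.0 / 3.0)]              # returns r_core if wind is thin
```

Raising the mass-loss rate moves the τ = 2/3 surface outward, which is the sense of the factor 2 to 8 radius correction quoted above.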

  12. Two-loop corrections to the ρ parameter in Two-Higgs-Doublet models

    Energy Technology Data Exchange (ETDEWEB)

    Hessenberger, Stephan; Hollik, Wolfgang [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Muenchen (Germany)

    2017-03-15

Models with two scalar doublets are among the simplest extensions of the Standard Model which fulfill the relation ρ = 1 at lowest order for the ρ parameter, as favored by experimental data for electroweak observables, which allow only small deviations from unity. Such small deviations Δρ originate exclusively from quantum effects with special sensitivity to mass splittings between different isospin components of fermions and scalars. In this paper the dominant two-loop electroweak corrections to Δρ are calculated in the CP-conserving THDM, resulting from the top-Yukawa coupling and the self-couplings of the Higgs bosons in the gauge-less limit. The on-shell renormalization scheme is applied. With the assumption that one of the CP-even neutral scalars represents the scalar boson observed by the LHC experiments, with standard properties, the two-loop non-standard contributions in Δρ can be separated from the standard ones. These contributions are of particular interest since they increase with mass splittings between non-standard Higgs bosons and can be additionally enhanced by tanβ and λ{sub 5}, an additional free coefficient of the Higgs potential, and can thus modify the one-loop result substantially. Numerical results are given for the dependence on the various non-standard parameters, and the influence on the calculation of electroweak precision observables is discussed. (orig.)

  13. Reliability Analysis of Offshore Jacket Structures with Wave Load on Deck using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Friis-Hansen, P.; Nielsen, J.S.

    2006-01-01

    failure/collapse of jacket type platforms with wave in deck loads using the so-called Model Correction Factor Method (MCFM). A simple representative model for the RSR measure is developed and used in the MCFM technique. A realistic example is evaluated and it is seen that it is possible to perform...

  14. Digital Elevation Model Correction for the thalweg values of Obion River system, TN

    Science.gov (United States)

    Dullo, T. T.; Bhuyian, M. N. M.; Hawkins, S. A.; Kalyanapu, A. J.

    2016-12-01

The Obion River system is located in northwest Tennessee and discharges into the Mississippi River. To help the US Department of Agriculture (USDA) estimate water availability for agricultural consumption, a one-dimensional HEC-RAS model has been proposed. The model incorporates the major tributaries (north and south) and the main stem of the Obion River, along with a segment of the Mississippi River. A one-meter spatial resolution Light Detection and Ranging (LiDAR) derived Digital Elevation Model (DEM) was used as the primary source of topographic data. LiDAR provides fine-resolution terrain data over a given extent. However, it lacks an accurate representation of river bathymetry due to limited penetration beyond a certain water depth. This reduces the conveyance along the river channel as represented by the DEM and affects hydrodynamic modeling performance. This research focused on proposing a method to overcome this issue and testing the qualitative improvement of the proposed method over an existing technique. The objective of this research is therefore to compare the effectiveness of a HEC-RAS based bathymetry optimization method with an existing hydraulic DEM correction technique (Bhuyian et al., 2014) for the Obion River system in Tennessee. The accuracy of hydrodynamic simulations (upon employing bathymetry from the respective sources) is regarded as the indicator of performance. The river system includes nine major reaches with a total river length of 310 km. The bathymetry of the river was represented via 315 cross sections equally spaced at about one km. This study aimed to select the best practice for treating LiDAR-based terrain data over a complex river system at a sub-watershed scale.

  15. Global embedding of fibre inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Cicoli, Michele [Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); INFN - Sezione di Bologna,viale Berti Pichat 6/2, 40127 Bologna (Italy); Abdus Salam ICTP,Strada Costiera 11, Trieste 34151 (Italy); Muia, Francesco [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Rd., Oxford OX1 3NP (United Kingdom); Shukla, Pramod [Abdus Salam ICTP,Strada Costiera 11, Trieste 34151 (Italy)

    2016-11-30

    We present concrete embeddings of fibre inflation models in globally consistent type IIB Calabi-Yau orientifolds with closed string moduli stabilisation. After performing a systematic search through the existing list of toric Calabi-Yau manifolds, we find several examples that reproduce the minimal setup to embed fibre inflation models. This involves Calabi-Yau manifolds with h{sup 1,1}=3 which are K3 fibrations over a ℙ{sup 1} base with an additional shrinkable rigid divisor. We then provide different consistent choices of the underlying brane set-up which generate a non-perturbative superpotential suitable for moduli stabilisation and string loop corrections with the correct form to drive inflation. For each Calabi-Yau orientifold setting, we also compute the effect of higher derivative contributions and study their influence on the inflationary dynamics.

  16. Large-N limit of the gradient flow in the 2D O(N) nonlinear sigma model

    International Nuclear Information System (INIS)

    Makino, Hiroki; Sugino, Fumihiko; Suzuki, Hiroshi

    2015-01-01

    The gradient flow equation in the 2D O(N) nonlinear sigma model with lattice regularization is solved in the leading order of the 1/N expansion. By using this solution, we analytically compute the thermal expectation value of a lattice energy–momentum tensor defined through the gradient flow. The expectation value reproduces thermodynamic quantities obtained by the standard large-N method. This analysis confirms that the above lattice energy–momentum tensor restores the correct normalization automatically in the continuum limit, in a system with a non-perturbative mass gap

  17. Implementing a Batterer's Intervention Program in a Correctional Setting: A Tertiary Prevention Model

    Science.gov (United States)

    Yorke, Nada J.; Friedman, Bruce D.; Hurt, Pat

    2010-01-01

    This study discusses the pretest and posttest results of a batterer's intervention program (BIP) implemented within a California state prison substance abuse program (SAP), with a recommendation for further programs to be implemented within correctional institutions. The efficacy of utilizing correctional facilities to reach offenders who…

  18. Professional Development: A Capacity-Building Model for Juvenile Correctional Education Systems

    Science.gov (United States)

    Mathur, Sarup R.; Clark, Heather Griller; Schoenfeld, Naomi A.

    2009-01-01

    Youth in correctional facilities experience a broad range of educational, psychological, medical, and social needs. Professional development, a systemic process that improves the likelihood of student success by enhancing educator abilities, is a powerful way to positively affect student outcomes in correctional settings. This article offers a…

  19. Correction to the crack extension direction in numerical modelling of mixed mode crack paths

    DEFF Research Database (Denmark)

    Lucht, Tore; Aliabadi, M.H.

    2007-01-01

    In order to avoid introduction of an error when a local crack-growth criterion is used in an incremental crack growth formulation, each straight crack extension would have to be infinitesimal or have its direction corrected. In this paper a new procedure to correct the crack extension direction...

  20. One-loop corrections to e+e− → e+e− in the Weinberg model

    NARCIS (Netherlands)

    Consoli, M.

    1979-01-01

    Radiative corrections to Bhabha scattering are calculated in the simplest example of non-Abelian gauge theories. A detailed analysis of the higher-order effects is presented and the total differential cross section including weak corrections is evaluated at different angles in an energy range up to

  1. Impact of energy technology patents in China: Evidence from a panel cointegration and error correction model

    International Nuclear Information System (INIS)

    Li, Ke; Lin, Boqiang

    2016-01-01

Enhancing energy technology innovation performance, which is widely measured by energy technology patents through energy technology research and development (R&D) activities, is a fundamental way to implement energy conservation and emission abatement. This study analyzes the effects of R&D investment activities, economic growth, and energy price on energy technology patents in 30 provinces of China over the period 1999–2013. Several unit root tests indicate that all the above variables are generated by panel unit root processes, and a panel cointegration model is confirmed among the variables. In order to ensure the consistency of the estimators, the Fully-Modified OLS (FMOLS) method is adopted, and the results indicate that R&D investment activities and economic growth have positive effects on energy technology patents, while energy price has a negative effect. However, the panel error correction models indicate that the cointegration relationship helps to promote economic growth, but it reduces R&D investment and energy price in the short term. Therefore, market-oriented measures including financial support and technical transformation policies for the development of low-carbon energy technologies, an effective energy price mechanism, and especially the targeted fossil-fuel subsidies and their gradual phase-out, are vital in promoting China's energy technology innovation. - Highlights: • Energy technology patents in China are analyzed. • Relationship between energy patents and funds for R&D activities is analyzed. • China's energy price system hinders energy technology innovation. • Some important implications for China's energy technology policy are discussed. • A panel cointegration model with FMOLS estimator is used.
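The paper's panel FMOLS machinery is beyond a short sketch, but the cointegration and error-correction logic underlying it can be illustrated with a single-series Engle-Granger two-step estimate (a simplified stand-in for the panel methods used; the function name and synthetic data are mine):

```python
import numpy as np

def engle_granger_ecm(y, x):
    """Two-step Engle-Granger: (1) long-run OLS of y on x; (2) regress
    dy on dx and the lagged residual (the error-correction term)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # long-run relationship
    ect = y - X @ beta                               # deviation from equilibrium
    dy, dx = np.diff(y), np.diff(x)
    Z = np.column_stack([np.ones_like(dx), dx, ect[:-1]])
    gamma, *_ = np.linalg.lstsq(Z, dy, rcond=None)   # short-run dynamics
    return beta, gamma   # gamma[2] < 0 => adjustment back toward equilibrium
```

A negative coefficient on the lagged residual is the hallmark of a valid error-correction representation: short-run deviations are pulled back toward the long-run relation.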

  2. Sensory feedback, error correction, and remapping in a multiple oscillator model of place cell activity

    Directory of Open Access Journals (Sweden)

    Joseph D. Monaco

    2011-09-01

Mammals navigate by integrating self-motion signals (‘path integration’) and occasionally fixing on familiar environmental landmarks. The rat hippocampus is a model system of spatial representation in which place cells are thought to integrate both sensory and spatial information from entorhinal cortex. The localized firing fields of hippocampal place cells and entorhinal grid cells demonstrate a phase relationship with the local theta (6–10 Hz) rhythm that may be a temporal signature of path integration. However, encoding self-motion in the phase of theta oscillations requires high temporal precision and is susceptible to idiothetic noise, neuronal variability, and a changing environment. We present a model based on oscillatory interference theory, previously studied in the context of grid cells, in which transient temporal synchronization among a pool of path-integrating theta oscillators produces hippocampal-like place fields. We hypothesize that a spatiotemporally extended sensory interaction with external cues modulates feedback to the theta oscillators. We implement a form of this cue-driven feedback and show that it can retrieve fixed points in the phase code of position. A single cue can smoothly reset oscillator phases to correct for both systematic errors and continuous noise in path integration. Further, simulations in which local and global cues are rotated against each other reveal a phase-code mechanism in which conflicting cue arrangements can reproduce experimentally observed distributions of ‘partial remapping’ responses. This abstract model demonstrates that phase-code feedback can provide stability to the temporal coding of position during navigation and may contribute to the context-dependence of hippocampal spatial representations. While the anatomical substrates of these processes have not been fully characterized, our findings suggest several signatures that can be evaluated in future experiments.
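The core idea of oscillatory interference can be shown with two cosines: a baseline theta oscillator plus one whose frequency is shifted in proportion to running speed, so the beat envelope repeats over travelled distance. This is a textbook toy, not the authors' multi-oscillator model; all parameter values are illustrative:

```python
import numpy as np

def interference(t, f_theta=8.0, v=0.2, beta=1.0):
    """Sum of a baseline theta oscillator and a velocity-modulated
    oscillator (frequency f_theta + beta*v); the slow beat envelope
    sketches a spatially periodic firing field."""
    baseline = np.cos(2 * np.pi * f_theta * t)
    active = np.cos(2 * np.pi * (f_theta + beta * v) * t)
    return baseline + active     # large where the two phases align
```

The summed signal peaks where the two oscillators are in phase and cancels where they are in antiphase, which is the mechanism the pool of path-integrating oscillators exploits.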

  3. A model-based correction for outcome reporting bias in meta-analysis.

    Science.gov (United States)

    Copas, John; Dwan, Kerry; Kirkham, Jamie; Williamson, Paula

    2014-04-01

It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported and so may omit, for example, a study that has considered several outcome measures but only reports those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study (Kirkham and others, 2010, "The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews", British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias has the effect of moving estimated treatment effects toward the null and hence more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed effects approximation that can be used to give an initial estimate of the effect of ORB in practice.
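The fixed-effects machinery on which such a bias adjustment operates is inverse-variance pooling; a minimal sketch (this is the standard pooling step only, not the paper's ORB likelihood model, and the function name is mine):

```python
import math

def fixed_effects_meta(estimates, std_errs):
    """Inverse-variance fixed-effects pooling: weight each study estimate
    by 1/SE^2; the pooled SE is the square root of the inverse total weight."""
    weights = [1.0 / se**2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se
```

An ORB correction effectively re-enters the omitted (likely non-significant) studies into this weighted average, pulling the pooled estimate toward the null.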

  4. Quarkonium spectroscopy in a potential model with vacuum-polarization corrections

    International Nuclear Information System (INIS)

    Barik, N.; Jena, S.N.

    1980-01-01

We consider a potential model taking long-distance vacuum-polarization corrections as suggested by Poggio and Schnitzer, which enables one to interpolate between cc-bar, bb-bar, and tt-bar systems. Taking special care for the accuracy of the numerical integration near the origin, we have developed a numerical method to obtain the heavy-quark--antiquark bound states along with their leptonic widths. We obtain the above flavor-independent potential giving good agreement with the so-called experimental mass splitting of the 1S-2S states of the ψ and Υ families with reasonable values of the quark-gluon coupling constant α{sub s}, which do not deviate very much from the quantum-chromodynamics value. We obtain some of the bound states of the hypothetical tt-bar family and observe that the effect of screening of the potential due to the vacuum-polarization cloud decreases with increasing mass of the heavy quark forming the quarkonium.

  5. Capital productivity in industrialised economies: Evidence from error-correction model and lagrange multiplier tests

    Directory of Open Access Journals (Sweden)

    Trofimov Ivan D.

    2017-01-01

The paper re-examines the “stylized facts” of balanced growth in developed economies, looking specifically at the capital productivity variable. The economic data are obtained from the European Commission AMECO database, spanning the 1961-2014 period. For a sample of 22 OECD economies, the paper applies univariate LM unit root tests with one or two structural breaks, and estimates error-correction and linear trend models with breaks. It is shown that diverse statistical patterns were present across economies, and overall mixed evidence is provided as to the stability of capital productivity and balanced growth in general. Specifically, both upward and downward trends in capital productivity were present, while in several economies mean reversion and random walk patterns were observed. The data and results were largely in line with major theoretical explanations pertaining to capital productivity. With regard to the determinants of capital productivity movements, the structure of the capital stock and the prices of capital goods were likely the most salient.

  6. Assessment of cassava supply response in Nigeria using vector error correction model (VECM

    Directory of Open Access Journals (Sweden)

    Obayelu Oluwakemi Adeola

    2016-12-01

The response of agricultural commodities to changes in price is an important factor in the success of any reform programme in the agricultural sector of Nigeria. The producers of traditional agricultural commodities, such as cassava, face the world market directly. Consequently, the producer price of cassava has become unstable, which is a disincentive for both its production and trade. This study investigated cassava supply response to changes in price. Data collected from FAOSTAT from 1966 to 2010 were analysed using the Vector Error Correction Model (VECM) approach. The results of the VECM for the estimation of short-run adjustment of the variables toward their long-run relationship showed a linear deterministic trend in the data, and that area cultivated and own prices jointly explained 74% and 63% of the variation in Nigerian cassava output in the short run and long run respectively. Cassava prices (P<0.001) and land cultivated (P<0.1) had a positive influence on cassava supply in the short run. The short-run price elasticity was 0.38, indicating that price policies were effective in the short-run promotion of cassava production in Nigeria. However, in the long run cassava supply was not significantly responsive to price incentives. This suggests that price policies are not effective in the long-run promotion of cassava production in the country owing to instability in governance and government policies.

  7. Strange mass corrections to hyperonic semi-leptonic decays in statistical model

    Energy Technology Data Exchange (ETDEWEB)

    Upadhyay, A.; Batra, M. [Thapar University, School of Physics and Material Science, Patiala (India)

    2013-12-15

We study the spin distribution and weak decay coupling constant ratios for the strange baryon octet with SU(3) breaking effects. The baryon is taken as an ensemble of quark-gluon Fock states in the sea, with three valence quarks with definite spin, color and flavor quantum numbers. We apply the statistical model to calculate the probabilities of each Fock state and to analyze the impact of SU(3) breaking in the weak decays. The symmetry breaking effects are studied in terms of a parameter "r" whose best-fit value is obtained from the experimental data on semi-leptonic weak decay coupling constant ratios. We suggest a dominant contribution from H{sub 1}G{sub 8} (sea with spin one and color octet), where symmetry breaking corrections lead to deviations of the axial-vector matrix element ratio F/D from experimental values by 17%. We conclude that symmetry breaking also significantly affects the quark polarization in strange baryons. (orig.)

  8. Corrective Measures Study Modeling Results for the Southwest Plume - Burial Ground Complex/Mixed Waste Management Facility

    International Nuclear Information System (INIS)

    Harris, M.K.

    1999-01-01

Groundwater modeling scenarios were performed to support the Corrective Measures Study and Interim Action Plan for the southwest plume of the Burial Ground Complex/Mixed Waste Management Facility. The modeling scenarios were designed to provide data for an economic analysis of alternatives, and subsequently evaluate the effectiveness of the selected remedial technologies for tritium reduction to Fourmile Branch. Modeling scenarios assessed include no action; vertical barriers; pump, treat, and reinject; and vertical recirculation wells.

  9. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lowering railway track equipment and maintenance costs; this is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio of correlation distance to Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
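Under the Gauss-Markov assumption, the LMMSE estimator has a closed form; a sketch consistent with the abstract's setup (zero-mean true DCs and all names below are my assumptions, not the paper's notation):

```python
import numpy as np

def lmmse_dc(z, dist, corr_dist, sigma_dc, sigma_noise):
    """LMMSE estimate of zero-mean true DCs from noisy measurements z,
    with exponential spatial prior C_ij = sigma_dc^2 * exp(-dist_ij/corr_dist)
    and white measurement noise: x_hat = C (C + N)^{-1} z."""
    C = sigma_dc**2 * np.exp(-dist / corr_dist)     # spatial prior covariance
    R = C + sigma_noise**2 * np.eye(len(z))         # measurement covariance
    return C @ np.linalg.solve(R, z)
```

When the assumed `corr_dist` matches the true decorrelation scale, the estimator averages information across stations; a badly mismatched `corr_dist` is exactly the modeling-error sensitivity the paper studies.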

  10. Quantum Kramers model: Corrections to the linear response theory for continuous bath spectrum

    Science.gov (United States)

    Rips, Ilya

    2017-01-01

Decay of the metastable state is analyzed within the quantum Kramers model in the weak-to-intermediate dissipation regime. The decay kinetics in this regime is determined by energy exchange between the unstable mode and the stable modes of the thermal bath. In our previous paper [Phys. Rev. A 42, 4427 (1990), 10.1103/PhysRevA.42.4427], Grabert's perturbative approach to well dynamics in the case of the discrete bath [Phys. Rev. Lett. 61, 1683 (1988), 10.1103/PhysRevLett.61.1683] has been extended to account for the second order terms in the classical equations of motion (EOM) for the stable modes. Account of the secular terms reduces EOM for the stable modes to those of the forced oscillator with the time-dependent frequency (TDF oscillator). Analytic expression for the characteristic function of energy loss of the unstable mode has been derived in terms of the generating function of the transition probabilities for the quantum forced TDF oscillator. In this paper, the approach is further developed and applied to the case of the continuous frequency spectrum of the bath. The spectral density functions of the bath of stable modes are expressed in terms of the dissipative properties (the friction function) of the original bath. They simplify considerably for the one-dimensional systems, when the density of phonon states is constant. Explicit expressions for the fourth order corrections to the linear response theory result for the characteristic function of the energy loss and its cumulants are obtained for the particular case of the cubic potential with Ohmic (Markovian) dissipation. The range of validity of the perturbative approach in this case is determined in terms of γ/ωb, and the escape rate is evaluated for the quantum and for the classical Kramers models. Results for the classical escape rate are in very good agreement with the numerical simulations for high barriers. The results can serve as an additional proof of the robustness and accuracy of the linear response theory.

  11. Radiative Corrections to e+e-→ Zh at Future Higgs Factory in the Minimal Dilaton Model

    International Nuclear Information System (INIS)

    Heng Zhao-Xia; Li Dong-Wei; Zhou Hai-Jing

    2015-01-01

The minimal dilaton model (MDM) extends the Standard Model by one singlet scalar, called the dilaton, and one top quark partner, called t'. In this work we investigate the t'-induced radiative correction to the Higgs-strahlung production process e{sup +}e{sup −} → Zh at a future Higgs factory. We first present the analytical calculations in detail and show how to handle the ultraviolet divergence. Then we calculate the correction numerically, considering the constraints from precision electroweak data. We find that, for sinθ{sub L} = 0.2 and m{sub t'} = 1200 GeV, the correction is 0.26% and 2.1% for √s = 240 GeV and 1 TeV respectively, and a larger value can be achieved as sinθ{sub L} increases. (physics of elementary particles and fields)

  12. Consequences of the center-of-mass correction in nuclear mean-field models

    International Nuclear Information System (INIS)

    Bender, M.; Rutz, K.; Reinhard, P.G.; Maruhn, J.A.

    2000-01-01

We study the influence of the scheme for the correction for spurious center-of-mass motion on the fit of effective interactions for self-consistent nuclear mean-field calculations. We find that interactions with very simple center-of-mass correction have significantly larger surface coefficients than interactions for which the center-of-mass correction was calculated for the actual many-body state during the fit. The reason is that the effective interaction has to counteract the wrong trend with nucleon number shared by all simplified schemes for the center-of-mass correction, which puts a wrong trend with mass number into the effective interaction itself. The effect becomes clearly visible when looking at the deformation energy of largely deformed systems, e.g. superdeformed states or fission barriers of heavy nuclei. (orig.)
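As a rough illustration of why simplified schemes carry an explicit mass-number trend, the common harmonic-oscillator estimate of the diagonal center-of-mass correction scales as A^(-1/3) (a textbook approximation, not one of the paper's fitted schemes; the function name is mine):

```python
def ecm_diagonal(A, hbar_omega=None):
    """Harmonic-oscillator estimate of the diagonal center-of-mass
    correction, E_cm ~ (3/4) * hbar*omega, with the common empirical
    choice hbar*omega ~ 41 * A**(-1/3) MeV."""
    if hbar_omega is None:
        hbar_omega = 41.0 * A ** (-1.0 / 3.0)
    return 0.75 * hbar_omega
```

The correction per nucleus shrinks with A, so any scheme that approximates it with a fixed functional form injects this trend into the fitted interaction rather than into the correction itself.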

  13. Completion Report for Model Evaluation Well ER-5-5: Corrective Action Unit 98: Frenchman Flat

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Underground Test Area and Boreholes Programs and Operations

    2013-01-18

    Model Evaluation Well ER-5-5 was drilled for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office in support of Nevada Environmental Management Operations at the Nevada National Security Site (formerly known as the Nevada Test Site). The well was drilled in July and August 2012 as part of a model evaluation well program in the Frenchman Flat area of Nye County, Nevada. The primary purpose of the well was to provide detailed geologic, hydrogeologic, chemical, and radiological data that can be used to test and build confidence in the applicability of the Frenchman Flat Corrective Action Unit flow and transport models for their intended purpose. In particular, this well was designed to obtain data to evaluate the uncertainty in model forecasts of contaminant migration from the upgradient underground nuclear test MILK SHAKE, conducted in Emplacement Hole U-5k in 1968, which were considered to be uncertain due to the unknown extent of a basalt lava-flow aquifer present in this area. Well ER-5-5 is expected to provide information to refine the Phase II Frenchman Flat hydrostratigraphic framework model, if necessary, as well as to support future groundwater flow and transport modeling. The 31.1-centimeter (cm) diameter hole was drilled to a total depth of 331.3 meters (m). The completion string, set at the depth of 317.2 m, consists of 16.8-cm stainless-steel casing hanging from 19.4-cm carbon-steel casing. The 16.8-cm stainless-steel casing has one slotted interval open to the basalt lava-flow aquifer and limited intervals of the overlying and underlying alluvial aquifer. A piezometer string was also installed in the annulus between the completion string and the borehole wall. The piezometer is composed of 7.3-cm stainless-steel tubing suspended from 6.0-cm carbon-steel tubing. The piezometer string was landed at 319.2 m, to monitor the basalt lava-flow aquifer. Data collected during and shortly after hole construction include

  14. In-medium effects in K+ scattering versus Glauber model with noneikonal corrections

    International Nuclear Information System (INIS)

    Eliseev, S.M.; Rihan, T.H.

    1996-01-01

The discrepancy between the experimental and the theoretical ratio R of the total cross sections, R = σ(K{sup +}-{sup 12}C)/6σ(K{sup +}-d), at momenta up to 800 MeV/c is discussed in the framework of the Glauber multiple scattering approach. It is shown that various corrections, such as adopting relativistic K{sup +}-N amplitudes as well as noneikonal corrections, seem to fail in reproducing the experimental data, especially at higher momenta. 17 refs., 1 fig.

  15. "The empathy impulse: A multinomial model of intentional and unintentional empathy for pain": Correction.

    Science.gov (United States)

    2018-04-01

    Reports an error in "The empathy impulse: A multinomial model of intentional and unintentional empathy for pain" by C. Daryl Cameron, Victoria L. Spring and Andrew R. Todd ( Emotion , 2017[Apr], Vol 17[3], 395-411). In this article, there was an error in the calculation of some of the effect sizes. The w effect size was manually computed incorrectly. The incorrect number of total observations was used, which affected the final effect size estimates. This computing error does not change any of the results or interpretations about model fit based on the G² statistic, or about significant differences across conditions in process parameters. Therefore, it does not change any of the hypothesis tests or conclusions. The w statistics for overall model fit should be .02 instead of .04 in Study 1, .01 instead of .02 in Study 2, .01 instead of .03 for the OIT in Study 3 (model fit for the PIT remains the same: .00), and .02 instead of .03 in Study 4. The corrected tables can be seen here: http://osf.io/qebku at the Open Science Framework site for the article. (The following abstract of the original article appeared in record 2017-01641-001.) Empathy for pain is often described as automatic. Here, we used implicit measurement and multinomial modeling to formally quantify unintentional empathy for pain: empathy that occurs despite intentions to the contrary. We developed the pain identification task (PIT), a sequential priming task wherein participants judge the painfulness of target experiences while trying to avoid the influence of prime experiences. Using multinomial modeling, we distinguished 3 component processes underlying PIT performance: empathy toward target stimuli (Intentional Empathy), empathy toward prime stimuli (Unintentional Empathy), and bias to judge target stimuli as painful (Response Bias). In Experiment 1, imposing a fast (vs. slow) response deadline uniquely reduced Intentional Empathy. In Experiment 2, inducing imagine-self (vs. imagine
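The correction hinges on using the right total number of observations N when converting the model-fit statistic into an effect size. Under the common convention w = sqrt(χ²/N) (applied here to the G² statistic as an assumption; the function name is mine), an undercounted N inflates w:

```python
import math

def cohens_w(fit_stat, n_obs):
    """Effect size w from a chi-square-distributed fit statistic:
    w = sqrt(stat / N). For fixed stat, w shrinks as the true N grows."""
    return math.sqrt(fit_stat / n_obs)
```

For example, the same fit statistic of 4.0 gives w = 0.10 with N = 400 total observations but w = 0.20 if only 100 observations are (incorrectly) counted, mirroring how the wrong total N changed the published w values without affecting the G² tests themselves.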

  16. NNLO QCD corrections to the Drell-Yan cross section in models of TeV-scale gravity

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Taushif; Banerjee, Pulak; Dhani, Prasanna K.; Rana, Narayan [The Institute of Mathematical Sciences, Chennai, Tamil Nadu (India); Homi Bhabha National Institute, Mumbai (India); Kumar, M.C. [Indian Institute of Technology Guwahati, Department of Physics, Guwahati (India); Mathews, Prakash [Saha Institute of Nuclear Physics, Kolkata, West Bengal (India); Ravindran, V. [The Institute of Mathematical Sciences, Chennai, Tamil Nadu (India)

    2017-01-15

    The first results on the complete next-to-next-to-leading order (NNLO) Quantum Chromodynamic (QCD) corrections to the production of di-leptons at hadron colliders in large extra dimension models with spin-2 particles are reported in this article. In particular, we have computed these corrections to the invariant mass distribution of the di-leptons taking into account all the partonic sub-processes that contribute at NNLO. In these models, spin-2 particles couple through the energy-momentum tensor of the Standard Model with the universal coupling strength. The tensorial nature of the interaction and the presence of both quark annihilation and gluon fusion channels at the Born level make it challenging computationally and interesting phenomenologically. We have demonstrated numerically the importance of our results at Large Hadron Collider energies. The two-loop corrections contribute an additional 10% to the total cross section. We find that the QCD corrections are not only large but also important to make the predictions stable under renormalisation and factorisation scale variations, providing an opportunity to stringently constrain the parameters of the models with a spin-2 particle. (orig.)

  17. Correction: Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS

    Directory of Open Access Journals (Sweden)

    Nicolas Sommet

    2017-12-01

    Full Text Available This article details a correction to the article: Sommet, N., & Morselli, D. (2017). Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS. 'International Review of Social Psychology', 30(1), pp. 203–218. DOI: https://doi.org/10.5334/irsp.90

  18. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  19. One-loop corrections for e+e- annihilation into μ+μ- in the Weinberg model

    NARCIS (Netherlands)

    Veltman, M.J.G.; Passarino, G.

    1979-01-01

    Analytical expressions for the cross section including all the one-loop radiative corrections in the context of the Weinberg model are presented. The systematic calculation of one-loop diagrams has been carried out using a recently proposed scheme. Numerical results are shown in a region from

  20. Non-model-based correction of respiratory motion using beat-to-beat 3D spiral fat-selective imaging.

    Science.gov (United States)

    Keegan, Jennifer; Gatehouse, Peter D; Yang, Guang-Zhong; Firmin, David N

    2007-09-01

    To demonstrate the feasibility of retrospective beat-to-beat correction of respiratory motion, without the need for a respiratory motion model. A high-resolution three-dimensional (3D) spiral black-blood scan of the right coronary artery (RCA) of six healthy volunteers was acquired over 160 cardiac cycles without respiratory gating. One spiral interleaf was acquired per cardiac cycle, prior to each of which a complete low-resolution fat-selective 3D spiral dataset was acquired. The respiratory motion (3D translation) on each cardiac cycle was determined by cross-correlating a region of interest (ROI) in the fat around the artery in the low-resolution datasets with that on a reference end-expiratory dataset. The measured translations were used to correct the raw data of the high-resolution spiral interleaves. Beat-to-beat correction provided consistently good results, with the image quality being better than that obtained with a fixed superior-inferior tracking factor of 0.6 and better than (N = 5) or equal to (N = 1) that achieved using a subject-specific retrospective 3D translation motion model. Non-model-based correction of respiratory motion using 3D spiral fat-selective imaging is feasible, and in this small group of volunteers produced better-quality images than a subject-specific retrospective 3D translation motion model. (c) 2007 Wiley-Liss, Inc.
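The correction step described above hinges on locating the 3D translation that best aligns each beat's low-resolution fat-selective volume with the end-expiratory reference. A generic FFT-based cross-correlation sketch of that alignment is shown below; this is an illustrative reconstruction, not the authors' implementation, and the volume shapes, synthetic data, and integer-shift restriction are all assumptions.

```python
import numpy as np

def estimate_translation(ref, vol):
    """Estimate the integer 3D shift that aligns `vol` to `ref` by locating
    the peak of their circular cross-correlation (equal-shape volumes)."""
    corr = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(vol))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the half-way point correspond to negative shifts
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Synthetic check: displace a random volume by a known amount and recover
# the shift that np.roll must apply to realign it with the reference.
rng = np.random.default_rng(0)
ref = rng.normal(size=(16, 16, 16))
vol = np.roll(ref, shift=(2, -3, 1), axis=(0, 1, 2))
print(estimate_translation(ref, vol))  # → (-2, 3, -1)
```

Sub-voxel precision would require interpolating around the correlation peak; in the study, the measured translations were applied to the raw data of the high-resolution spiral interleaves rather than to reconstructed images.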

  1. Spatial heterogeneity in geothermally-influenced lakes derived from atmospherically corrected Landsat thermal imagery and three-dimensional hydrodynamic modelling

    DEFF Research Database (Denmark)

    Allan, Mathew G; Hamilton, David P.; Trolle, Dennis

    2016-01-01

    Atmospheric correction of Landsat 7 thermal data was carried out for the purpose of retrieval of lake skin water temperature in Rotorua lakes, and Lake Taupo, North Island, New Zealand. The effect of the atmosphere was modelled using four sources of atmospheric profile data as input to the MODera...

  2. Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jiangang; /Fermilab /Michigan U.; Koester, Benjamin P.; /Chicago U.; Mckay, Timothy A.; /Michigan U.; Rykoff, Eli S.; /UC, Santa Barbara; Rozo, Eduardo; /Ohio State U.; Evrard, August; /Michigan U.; Annis, James; /Fermilab; Becker, Matthew; /Chicago U.; Busha, Michael; /KIPAC, Menlo Park /SLAC; Gerdes, David; /Michigan U.; Johnston, David E.; /Northwestern U. /Brookhaven

    2009-07-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red-sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically-based cluster cosmology.

  3. PRECISION MEASUREMENTS OF THE CLUSTER RED SEQUENCE USING AN ERROR-CORRECTED GAUSSIAN MIXTURE MODEL

    International Nuclear Information System (INIS)

    Hao Jiangang; Annis, James; Koester, Benjamin P.; Mckay, Timothy A.; Evrard, August; Gerdes, David; Rykoff, Eli S.; Rozo, Eduardo; Becker, Matthew; Busha, Michael; Wechsler, Risa H.; Johnston, David E.; Sheldon, Erin

    2009-01-01

    The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurement of the slope and scatter of the red sequence are affected both by selection of red sequence galaxies and measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment. These precise measurements serve as an important observational check on simulations and mock galaxy catalogs. The observed trends in the slope and scatter of the red sequence ridgeline with redshift are clues to possible intrinsic evolution of the cluster red sequence itself. Most importantly, the methods presented in this work lay the groundwork for further improvements in optically based cluster cosmology.
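The central claim of this record, that known per-galaxy measurement errors can be separated from the intrinsic red-sequence scatter, can be illustrated with a much simpler moment-matching deconvolution (not the authors' full error-corrected Gaussian Mixture Model): subtract the mean measurement variance from the observed variance. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
intrinsic_sigma = 0.05                       # invented "true" red-sequence scatter
true_color = rng.normal(1.0, intrinsic_sigma, n)
meas_err = rng.uniform(0.02, 0.10, n)        # invented per-galaxy photometric errors
observed = true_color + rng.normal(0.0, meas_err)

# Naive scatter is inflated by measurement error; subtracting the mean
# measurement variance recovers the intrinsic scatter in expectation.
naive = observed.std()
corrected = np.sqrt(observed.var() - np.mean(meas_err ** 2))
print(naive, corrected)
```

A mixture-model version applies the same idea per component, broadening each Gaussian by the individual error before computing responsibilities.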

  4. Completion Report for Model Evaluation Well ER-11-2: Corrective Action Unit 98: Frenchman Flat

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Underground Test Area and Boreholes Programs and Operations

    2013-01-22

    Model Evaluation Well ER-11-2 was drilled for the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office in support of Nevada Environmental Management Operations at the Nevada National Security Site (formerly known as the Nevada Test Site). The well was drilled in August 2012 as part of a model evaluation program in the Frenchman Flat area of Nye County, Nevada. The primary purpose of the well was to provide detailed geologic, hydrogeologic, chemical, and radionuclide data that can be used to test and build confidence in the applicability of the Frenchman Flat Corrective Action Unit flow and transport models for their intended purpose. In particular, this well was designed to provide data to evaluate the uncertainty in model forecasts of contaminant migration from the upgradient underground nuclear test PIN STRIPE, conducted in borehole U-11b in 1966. Well ER-11-2 will provide information that can be used to refine the Phase II Frenchman Flat hydrostratigraphic framework model if necessary, as well as to support future groundwater flow and transport modeling. The main 31.1-centimeter (cm) hole was drilled to a total depth of 399.6 meters (m). A completion casing string was not set in Well ER-11-2. However, a piezometer string was installed in the 31.1-cm open hole. The piezometer is composed of 7.3-cm stainless-steel tubing hung on 6.0-cm carbon-steel tubing via a crossover sub. The piezometer string was landed at 394.5 m, for monitoring the lower tuff confining unit. Data collected during and shortly after hole construction include composite drill cuttings samples collected every 3.0 m, various geophysical logs, water quality (including tritium and other test-related radionuclides) measurements, and water level measurements. The well penetrated 42.7 m of Quaternary and Tertiary alluvium and 356.9 m of Tertiary volcanic rock. 
The water-level measured in the piezometer string on September 25, 2012, was 353.8 m below ground surface. No

  5. Implementing a generic method for bias correction in statistical models using random effects, with spatial and population dynamics examples

    DEFF Research Database (Denmark)

    Thorson, James T.; Kristensen, Kasper

    2016-01-01

    Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques...... configurations of an age-structured population dynamics model. This simulation experiment shows that the epsilon-method and the existing bias-correction method perform equally well in data-rich contexts, but the epsilon-method is slightly less biased in data-poor contexts. We then apply the epsilon......-method to a spatial regression model when estimating an index of population abundance, and compare results with an alternative bias-correction algorithm that involves Markov-chain Monte Carlo sampling. This example shows that the epsilon-method leads to a biologically significant difference in estimates of average...
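The kind of bias the epsilon-method targets can be seen in a toy example: the plug-in estimate f(E[u]) of a nonlinear function of a random effect differs systematically from the desired E[f(u)]. The sketch below is not the epsilon-method itself (which differentiates an augmented Laplace approximation); with invented parameters for a log-link abundance effect, it merely shows the bias a correction must remove.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.5, 0.8            # invented mean and SD of a log-abundance random effect
u = rng.normal(mu, sigma, 1_000_000)

plug_in = np.exp(mu)                      # f(E[u]): ignores random-effect variance
monte_carlo = np.exp(u).mean()            # E[f(u)]: the quantity actually wanted
analytic = np.exp(mu + sigma ** 2 / 2)    # lognormal mean a bias correction targets

print(plug_in, monte_carlo, analytic)
```

The plug-in value understates the target by the factor exp(sigma²/2); for abundance indices this is exactly the "biologically significant difference" that motivates bias correction.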

  6. Electroweak corrections

    International Nuclear Information System (INIS)

    Beenakker, W.J.P.

    1989-01-01

    The prospect of high-accuracy measurements of the weak interactions, expected to take place at the electron-positron storage ring LEP at CERN and the linear collider SLC at SLAC, offers the possibility to study weak quantum effects as well. In order to distinguish whether the measured weak quantum effects lie within the margins set by the standard model or bear traces of new physics, one has to go beyond the lowest order and include electroweak radiative corrections (EWRC) in theoretical calculations. These higher-order corrections also offer the possibility of obtaining information about two particles present in the Glashow-Salam-Weinberg (GSW) model but not discovered up till now: the top quark and the Higgs boson. In ch. 2 the GSW standard model of electroweak interactions is described. In ch. 3 special techniques are described for the determination of integrals which are responsible for numerical instabilities caused by large cancelling terms encountered in the calculation of EWRC effects, along with methods needed to handle the extensive algebra typical of EWRC. In ch. 4 various aspects of EWRC effects are discussed, in particular their dependence on the unknown model parameters: the masses of the top quark and the Higgs boson. The processes discussed are the production of heavy fermions from electron-positron annihilation and the fermionic decay of the Z gauge boson. (H.W.). 106 refs.; 30 figs.; 6 tabs.; schemes

  7. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    Science.gov (United States)

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
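The sampling-error cell of the 2 × 2 taxonomy can be illustrated with a minimal shrinkage estimator of latent group means, assuming the variance components are known (in practice they are estimated). All parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
J, n = 200, 10                  # groups and L1 members per group (invented)
tau2, sigma2 = 0.5, 2.0         # between- and within-group variances (invented)
true_means = rng.normal(0.0, np.sqrt(tau2), J)
y = true_means[:, None] + rng.normal(0.0, np.sqrt(sigma2), (J, n))

obs = y.mean(axis=1)                         # observed group means carry sampling error
lam = tau2 / (tau2 + sigma2 / n)             # reliability of an n-member group mean
latent = lam * obs + (1 - lam) * obs.mean()  # shrink unreliable means to the grand mean

# Corrected estimates should sit closer to the true group means on average
print(np.mean((obs - true_means) ** 2), np.mean((latent - true_means) ** 2))
```

A full correction model would additionally adjust for measurement error in the L1 indicators via a latent-variable measurement model; the abstract's point is that with few groups or low ICC, estimating those extra parameters can cost more than it buys.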

  8. Investigation of attenuation correction in SPECT using textural features, Monte Carlo simulations, and computational anthropomorphic models.

    Science.gov (United States)

    Spirou, Spiridon V; Papadimitroulas, Panagiotis; Liakou, Paraskevi; Georgoulias, Panagiotis; Loudos, George

    2015-09-01

    To present and evaluate a new methodology to investigate the effect of attenuation correction (AC) in single-photon emission computed tomography (SPECT) using textural features analysis, Monte Carlo techniques, and a computational anthropomorphic model. The GATE Monte Carlo toolkit was used to simulate SPECT experiments using the XCAT computational anthropomorphic model, filled with a realistic biodistribution of (99m)Tc-N-DBODC. The simulated gamma camera was the Siemens ECAM Dual-Head, equipped with a parallel hole lead collimator, with an image resolution of 3.54 × 3.54 mm(2). Thirty-six equispaced camera positions, spanning a full 360° arc, were simulated. Projections were calculated after applying a ± 20% energy window or after eliminating all scattered photons. The activity of the radioisotope was reconstructed using the MLEM algorithm. Photon attenuation was accounted for by calculating the radiological pathlength in a perpendicular line from the center of each voxel to the gamma camera. Twenty-two textural features were calculated on each slice, with and without AC, using 16 and 64 gray levels. A mask was used to identify only those pixels that belonged to each organ. Twelve of the 22 features showed almost no dependence on AC, irrespective of the organ involved. In both the heart and the liver, the mean and SD were the features most affected by AC. In the liver, six features were affected by AC only on some slices. Depending on the slice, skewness decreased by 22-34% with AC, kurtosis by 35-50%, long-run emphasis mean by 71-91%, and long-run emphasis range by 62-95%. In contrast, gray-level non-uniformity mean increased by 78-218% compared with the value without AC and run percentage mean by 51-159%. These results were not affected by the number of gray levels (16 vs. 64) or the data used for reconstruction: with the energy window or without scattered photons. The mean and SD were the main features affected by AC. In the heart, no other feature was
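The MLEM reconstruction named above can be sketched generically for a toy linear emission model; the system matrix, sizes, and noise-free data below are illustrative assumptions, not the simulated SPECT geometry.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM for emission tomography, y ~ Poisson(A @ x).
    Multiplicative updates keep the estimate non-negative and monotonically
    increase the Poisson likelihood."""
    x = np.ones(A.shape[1])             # flat non-negative initial estimate
    sens = A.sum(axis=0)                # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                    # forward-project current estimate
        x *= (A.T @ (y / proj)) / sens  # EM update
    return x

rng = np.random.default_rng(4)
A = rng.uniform(0.1, 1.0, (40, 8))      # toy system matrix (detector bins x voxels)
x_true = rng.uniform(1.0, 5.0, 8)
y = A @ x_true                          # noise-free projections
print(mlem(A, y))
```

In the study, attenuation enters through the system matrix: each element is weighted by exp(−∫μ dl) along the radiological pathlength from voxel to camera, which is what the AC/no-AC comparison toggles.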

  9. Digital terrain model evaluation and computation of the terrain correction and indirect effect in South America

    Directory of Open Access Journals (Sweden)

    Denizar Blitzkow

    2009-12-01

    Full Text Available The main objectives of this paper are to compare digital terrain models, to show the generated models for South America and to present two applications. The Shuttle Radar Topography Mission (SRTM) produced the most important and updated height information in the world. This paper addresses comparisons of the following models: SRTM3, DTM2002, GLOBE, GTOPO30, ETOPO2 and ETOPO5, at the common points of the grid. The comparisons are limited by latitudes 60º S and 25º N and longitudes 100º W and 25º W. All these data, after some analysis, have been used to create three models for South America: SAM_1mv1, SAM_1mv2 (both of 1' grid spacing) and SAM_30s (30" grid spacing). Besides this effort, the three models as well as SRTM were evaluated using Bench Marks (BM) in Brazil and Argentina. This paper also shows two important geodesy and geophysics applications using the SAM_1mv1: terrain correction (one of the reductions applied to the gravity acceleration) and indirect effect (a consequence of the reduction of the external mass to the geoid). These are important in the Andes for a precise geoid computation.
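The terrain correction mentioned as the first application can be sketched with the classical first-order (linear) approximation, c = (Gρ/2) ∬ (h − h_P)² / d³ dA, evaluated on a height grid. The grid spacing, density, and topography below are invented, and a real computation (e.g., over the Andes) needs spherical effects and careful inner-zone handling.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
RHO = 2670.0           # conventional crustal density, kg m^-3

def terrain_correction(h, station_ij, cell=90.0):
    """First-order terrain correction at a grid node, in mGal.

    Sums (G*rho/2) * (h - h_P)^2 / d^3 * cell_area over all cells,
    skipping the innermost zone around the station.
    """
    i0, j0 = station_ij
    ny, nx = h.shape
    jj, ii = np.meshgrid(np.arange(nx), np.arange(ny))
    d = cell * np.hypot(ii - i0, jj - j0)    # horizontal distance to each cell
    dh2 = (h - h[i0, j0]) ** 2
    mask = d > cell                          # exclude station cell / inner zone
    c = 0.5 * G * RHO * np.sum(dh2[mask] / d[mask] ** 3) * cell * cell
    return c * 1e5                           # m/s^2 -> mGal

# Toy topography: a 500 m block hill near a station on flat ground
h = np.zeros((50, 50))
h[30:40, 30:40] = 500.0
print(terrain_correction(h, (10, 10)))
```

The correction is non-negative by construction, which matches its role as a reduction always added back to the simple Bouguer anomaly.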

  10. Evaluation of metal artifacts in MVCT systems using a model based correction method

    Energy Technology Data Exchange (ETDEWEB)

    Paudel, M. R.; Mackenzie, M.; Fallone, B. G.; Rathee, S. [Department of Oncology, Medical Physics Division, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Medical Physics, Cross Cancer Institute, 11560 University Avenue, Edmonton, Alberta T6G 1Z2 (Canada); Department of Physics, University of Alberta, 11322-89 Avenue, Edmonton, Alberta T6G 2G7 (Canada)]

    2012-10-15

    Purpose: To evaluate the performance of a model based image reconstruction method in reducing metal artifacts in the megavoltage computed tomography (MVCT) images of a phantom representing bilateral hip prostheses and to compare with the filtered-backprojection (FBP) technique. Methods: An iterative maximum likelihood polychromatic algorithm for CT (IMPACT) is used with an additional model for the pair/triplet production process and the energy dependent response of the detectors. The beam spectra for an in-house bench-top and TomoTherapy™ MVCTs are modeled for use in IMPACT. The empirical energy dependent response of detectors is calculated using a constrained optimization technique that predicts the measured attenuation of the beam by various thicknesses (0-24 cm) of solid water slabs. A cylindrical (19.1 cm diameter) plexiglass phantom containing various cylindrical inserts of relative electron densities 0.295-1.695 positioned between two steel rods (2.7 cm diameter) is scanned in the bench-top MVCT that utilizes the bremsstrahlung radiation from a 6 MeV electron beam passed through 4 cm solid water on the Varian Clinac 2300C and in the imaging beam of the TomoTherapy™ MVCT. The FBP technique in bench-top MVCT reconstructs images from raw signal normalized to air scan and corrected for beam hardening using a uniform plexiglass cylinder (20 cm diameter). The IMPACT starts with a FBP reconstructed seed image and reconstructs the final image in 150 iterations. Results: In both MVCTs, FBP produces visible dark shading in the image connecting the steel rods. In the IMPACT reconstructed images this shading is nearly removed and the uniform background is restored. The average attenuation coefficients of the inserts and the background are very close to the corresponding values in the absence of the steel inserts. In the FBP images of the bench-top MVCT, the shading causes 4%-9.5% underestimation of electron density at the central inserts

  11. Development of a Detailed Volumetric Finite Element Model of the Spine to Simulate Surgical Correction of Spinal Deformities

    Directory of Open Access Journals (Sweden)

    Mark Driscoll

    2013-01-01

    Full Text Available A large spectrum of medical devices exists that aims to correct deformities associated with spinal disorders. The development of a detailed volumetric finite element model of the osteoligamentous spine would serve as a valuable tool to assess, compare, and optimize spinal devices. Thus the purpose of the study was to develop and initiate validation of a detailed osteoligamentous finite element model of the spine with simulated correction from spinal instrumentation. A finite element model of the spine from T1 to L5 was developed using properties and geometry from the published literature and patient data. Spinal instrumentation, consisting of segmental translation of a scoliotic spine, was emulated. Postoperative patient data and relevant published data on intervertebral disc stress, screw/vertebra pullout forces, and spinal profiles were used to evaluate the model's validity. Intervertebral disc and vertebral reaction stresses respected published in vivo, ex vivo, and in silico values. Screw/vertebra reaction forces agreed with accepted pullout threshold values. Cobb angle measurements of spinal deformity following simulated surgical instrumentation corroborated with patient data. This computational biomechanical analysis validated a detailed volumetric spine model. Future studies seek to exploit the model to explore the performance of corrective spinal devices.

  12. A national prediction model for PM2.5 component exposures and measurement error-corrected health effect inference.

    Science.gov (United States)

    Bergen, Silas; Sheppard, Lianne; Sampson, Paul D; Kim, Sun-Young; Richards, Mark; Vedal, Sverre; Kaufman, Joel D; Szpiro, Adam A

    2013-09-01

    Studies estimating health effects of long-term air pollution exposure often use a two-stage approach: building exposure models to assign individual-level exposures, which are then used in regression analyses. This requires accurate exposure modeling and careful treatment of exposure measurement error. To illustrate the importance of accounting for exposure model characteristics in two-stage air pollution studies, we considered a case study based on data from the Multi-Ethnic Study of Atherosclerosis (MESA). We built national spatial exposure models that used partial least squares and universal kriging to estimate annual average concentrations of four PM2.5 components: elemental carbon (EC), organic carbon (OC), silicon (Si), and sulfur (S). We predicted PM2.5 component exposures for the MESA cohort and estimated cross-sectional associations with carotid intima-media thickness (CIMT), adjusting for subject-specific covariates. We corrected for measurement error using recently developed methods that account for the spatial structure of predicted exposures. Our models performed well, with cross-validated R2 values ranging from 0.62 to 0.95. Naïve analyses that did not account for measurement error indicated statistically significant associations between CIMT and exposure to OC, Si, and S. EC and OC exhibited little spatial correlation, and the corrected inference was unchanged from the naïve analysis. The Si and S exposure surfaces displayed notable spatial correlation, resulting in corrected confidence intervals (CIs) that were 50% wider than the naïve CIs, but that were still statistically significant. The impact of correcting for measurement error on health effect inference is concordant with the degree of spatial correlation in the exposure surfaces. Exposure model characteristics must be considered when performing two-stage air pollution epidemiologic analyses because naïve health effect inference may be inappropriate.

  13. N3 Bias Field Correction Explained as a Bayesian Modeling Method

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Iglesias, Juan Eugenio; Van Leemput, Koen

    2014-01-01

    Although N3 is perhaps the most widely used method for MRI bias field correction, its underlying mechanism is in fact not well understood. Specifically, the method relies on a relatively heuristic recipe of alternating iterative steps that does not optimize any particular objective function. In t...

  14. Thickness correction of mammographic images by means of a global parameter model of the compressed breast.

    NARCIS (Netherlands)

    Snoeren, P.R.; Karssemeijer, N.

    2004-01-01

    Peripheral enhancement and tilt correction of unprocessed digital mammograms was achieved with a new reversible algorithm. This method has two major advantages for image visualization. First, the display dynamic range can be relatively small, and second, adjustment of the overall luminance to

  15. Publisher Correction: Probing the strongly driven spin-boson model in a superconducting quantum circuit.

    Science.gov (United States)

    Magazzù, L; Forn-Díaz, P; Belyansky, R; Orgiazzi, J-L; Yurtalan, M A; Otto, M R; Lupascu, A; Wilson, C M; Grifoni, M

    2018-06-07

    The original PDF and HTML versions of this Article omitted the ORCID ID of the authors L. Magazzù and P. Forn-Díaz. (L. Magazzù: 0000-0002-4377-8387; P. Forn-Díaz: 0000-0003-4365-5157). The original PDF version of this Article contained errors in Eqs. (2), (6), (13), (14), (25), (26). These equations were missing all instances of 'Γ' and 'Δ', which are correctly displayed in the HTML version. Similarly, the inline equation in the third sentence of the caption of Fig. 2 was missing the left hand term 'Ω'. The original HTML version of this Article contained errors in Table 1. The correct version of the sixth row of the first column states 'Figure 2' instead of the original, incorrect 'Figure'. And the correct version of the ninth row of the first column states 'Figure 3' instead of the original, incorrect 'Figure'. This has been corrected in both the PDF and HTML versions of the Article.

  16. Coupling constant corrections in a holographic model of heavy ion collisions

    NARCIS (Netherlands)

    Grozdanov, Sašo; Schee, Wilke van der

    2017-01-01

    We initiate a holographic study of coupling-dependent heavy ion collisions by analysing for the first time the effects of leading-order, inverse coupling constant corrections. In the dual description, this amounts to colliding gravitational shock waves in a theory with curvature-squared terms. We

  17. Challenges in modelling the random structure correctly in growth mixture models and the impact this has on model mixtures.

    Science.gov (United States)

    Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E

    2014-06-01

    Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.

  18. Higher-order corrections to the effective potential close to the jamming transition in the perceptron model

    Science.gov (United States)

    Altieri, Ada

    2018-01-01

    In view of the results achieved in a previous, related work [A. Altieri, S. Franz, and G. Parisi, J. Stat. Mech. (2016) 093301], 10.1088/1742-5468/2016/09/093301, regarding a Plefka-like expansion of the free energy up to the second order in the perceptron model, we improve the computation here, focusing on the role of third-order corrections. The perceptron model is a simple example of a constraint satisfaction problem, falling in the same universality class as hard spheres near jamming and hence allowing us to get exact results in high dimensions for more complex settings. Our method enables us to define an effective potential (or Thouless-Anderson-Palmer free energy), namely a coarse-grained functional, which depends on the generalized forces and the effective gaps between particles. The analysis of the third-order corrections to the effective potential reveals that, albeit irrelevant in a mean-field framework in the thermodynamic limit, they might instead play a fundamental role in considering finite-size effects. We also study the typical behavior of generalized forces and we show that two kinds of corrections can occur. The first contribution arises since the system is analyzed at a finite distance from jamming, while the second one is due to finite-size corrections. We nevertheless show that third-order corrections in the perturbative expansion vanish in the jamming limit both for the potential and the generalized forces, in agreement with the isostaticity argument proposed by Wyart and coworkers. Finally, we analyze the relevant scaling solutions emerging close to the jamming line, which define a crossover regime connecting the control parameters of the model to an effective temperature.
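    The basic objects of the perceptron model — the gaps and the harmonic cost of violated constraints — can be made concrete with a toy instance. The sketch below is not the paper's replica/Plefka computation; it just sets up a random instance (hypothetical sizes N = 50, M = 30, sigma = 0, placing it in the SAT phase) and relaxes it by plain gradient descent on the sphere:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, M, sigma = 50, 30, 0.0            # toy sizes; M < N puts us in the SAT phase
    xi = rng.normal(size=(M, N))         # random patterns (a hypothetical instance)
    x = rng.normal(size=N)
    x *= np.sqrt(N) / np.linalg.norm(x)  # spherical constraint |x|^2 = N

    def gaps(x):
        """Gaps h_mu = xi_mu . x / sqrt(N) - sigma; the constraints are h_mu >= 0."""
        return xi @ x / np.sqrt(N) - sigma

    def energy(x):
        """Harmonic cost of violated constraints, the soft-sphere-like cost that
        places the perceptron in the jamming universality class."""
        return 0.5 * np.sum(np.minimum(gaps(x), 0.0) ** 2)

    for _ in range(2000):
        viol = np.minimum(gaps(x), 0.0)       # only negative gaps contribute
        x -= 0.5 * (viol @ xi) / np.sqrt(N)   # gradient step on the cost
        x *= np.sqrt(N) / np.linalg.norm(x)   # project back onto the sphere

    print(f"final energy: {energy(x):.3e}")
    ```

    In the SAT phase the relaxation finds a configuration with all gaps non-negative and essentially zero energy; increasing M/N toward the jamming line is where the forces and finite-size corrections analyzed in the paper become relevant.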

  19. True coincidence summing correction and mathematical efficiency modeling of a well detector

    Energy Technology Data Exchange (ETDEWEB)

    Jäderström, H., E-mail: henrik.jaderstrom@canberra.com [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States); Mueller, W.F. [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States); Atrashkevich, V. [Stroitely St 4-4-52, Moscow (Russian Federation); Adekola, A.S. [CANBERRA Industries Inc., 800 Research Parkway, Meriden, CT 06450 (United States)

    2015-06-01

    True coincidence summing (TCS) occurs when two or more photons are emitted from the same decay of a radioactive nuclide and are detected within the resolving time of the gamma ray detector. TCS changes the net peak areas of the affected full energy peaks in the spectrum, and the nuclide activity is rendered inaccurate if no correction is performed. TCS is independent of the count rate, but it is strongly dependent on the peak and total efficiency, as well as the characteristics of a given nuclear decay. The TCS effects are very prominent for well detectors because of the high efficiencies, making accounting for TCS a necessity. For CANBERRA's recently released Small Anode Germanium (SAGe) well detector, an extension to CANBERRA's mathematical efficiency calibration method (In Situ Object Calibration Software or ISOCS, and Laboratory SOurceless Calibration Software or LabSOCS) has been developed that allows for calculation of peak and total efficiencies for SAGe well detectors. The extension also makes it possible to calculate TCS corrections for well detectors using the standard algorithm provided with CANBERRA's spectroscopy software Genie 2000. The peak and total efficiencies from ISOCS/LabSOCS have been compared to MCNP, with agreement within 3% for peak efficiencies and 10% for total efficiencies for energies above 30 keV. A sample containing Ra-226 daughters has been measured within the well and analyzed with and without TCS correction; applying the correction factor shows significant improvement of the activity determination over the energy range 46–2447 keV. The implementation of ISOCS/LabSOCS for well detectors offers a powerful tool for efficiency calibration for these detectors. The automated algorithm to correct for TCS effects in well detectors makes nuclide specific calibration unnecessary and offers flexibility in carrying out gamma spectral analysis.
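    The core of a summing-out correction can be shown for the simplest case, a two-photon cascade with 100% branching and no angular correlation. This is only the basic idea behind such corrections, not the Genie 2000 algorithm, and the efficiency values used are illustrative, not ISOCS/LabSOCS results:

    ```python
    def tcs_correction(eps_total_coincident):
        """Summing-out correction factor for one peak of a two-photon cascade.

        A full-energy event for gamma_1 is removed from its peak whenever the
        coincident gamma_2 deposits any energy in the detector, which happens
        with probability eps_total(E_2).  Hence
            A_observed = A_true * (1 - eps_total(E_2)),
        so the apparent activity must be multiplied by 1 / (1 - eps_total(E_2)).
        """
        return 1.0 / (1.0 - eps_total_coincident)

    # Well detectors have large total efficiencies, so the correction is large
    # (hypothetical efficiency values for illustration):
    for eps in (0.05, 0.30, 0.60):
        print(f"eps_total = {eps:.2f}  ->  multiply activity by {tcs_correction(eps):.2f}")
    ```

    This also makes the abstract's point that TCS is count-rate independent visible: the correction depends only on efficiencies and decay-scheme data, not on how many decays occur.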

  20. Non-perturbative unitarity constraints on the ratio of shear viscosity to entropy density in UV complete theories with a gravity dual

    CERN Document Server

    Brustein, Ram

    2011-01-01

    We reconsider, from a novel perspective, how unitarity constrains the corrections to the ratio of shear viscosity η to entropy density s. We start with higher-derivative extensions of Einstein gravity in asymptotically anti-de Sitter spacetimes. It is assumed that these theories are derived from string theory and thus have a unitary UV completion that is dual to a unitary, UV-complete boundary gauge theory. We then propose that the gravitational perturbations about a solution of the UV complete theory are described by an effective theory whose linearized equations of motion have at most two time derivatives. Our proposal leads to a concrete prescription for the calculation of η/s for theories of gravity with arbitrary higher-derivative corrections. The resulting ratio can take on values above or below 1/(4π) and is consistent with all the previous calculations, even though our reasoning is substantially different. For the purpose of calculating η/s, our proposal also leads to only two possible cand...
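    A standard benchmark for such higher-derivative corrections, with which the abstract's claim is consistent, is 5d Gauss-Bonnet gravity, where the ratio takes the known closed form η/s = (1 − 4λ_GB)/(4π); this is not the paper's own prescription, just the textbook example of the ratio crossing the KSS value in both directions:

    ```python
    import math

    KSS = 1.0 / (4.0 * math.pi)  # Einstein-gravity (KSS) value of eta/s

    def eta_over_s_gauss_bonnet(lambda_gb):
        """eta/s in 5d Gauss-Bonnet gravity: (1 - 4*lambda_GB) / (4*pi).

        A positive Gauss-Bonnet coupling pushes the ratio below the KSS
        value; a negative coupling raises it above.
        """
        return (1.0 - 4.0 * lambda_gb) / (4.0 * math.pi)

    for lam in (-0.01, 0.0, 0.01):
        print(f"lambda_GB = {lam:+.2f}  ->  eta/s = {eta_over_s_gauss_bonnet(lam):.5f}")
    ```

    The sign-dependent crossing of 1/(4π) is exactly why unitarity (rather than the leading two-derivative theory) is needed to bound the allowed range of couplings.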