WorldWideScience

Sample records for dimensionally regularized polyakov

  1. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Egoryan, E.Sh.

    1982-01-01

A generalized scheme of dimensional regularization which preserves supersymmetry is proposed. The scheme is applicable to all supersymmetric theories. Two models with extended supersymmetry are considered. The naive Slavnov supersymmetric identities are shown to hold at the dimensionally regularized level.

  2. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  3. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

Dimensional regularization is introduced in configuration space by Fourier transforming the perturbative momentum-space Green functions in D dimensions. For this transformation, the Bochner theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs
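The flavor of the x-space functions involved can be seen from a standard D-dimensional Fourier transform (a textbook formula written in our own conventions, not taken from the paper itself):

```latex
% Massless Euclidean propagator Fourier-transformed in \nu dimensions:
% the momentum-space pole 1/k^2 becomes a power-law singularity at the
% origin in x-space, moderated by continuing \nu away from 4.
\int \frac{d^{\nu}k}{(2\pi)^{\nu}}\,\frac{e^{i k\cdot x}}{k^{2}}
  = \frac{\Gamma\!\bigl(\tfrac{\nu}{2}-1\bigr)}{4\,\pi^{\nu/2}}\,
    \bigl(x^{2}\bigr)^{1-\nu/2}
```

For generic ν such powers of x² can be multiplied together and transformed back without ambiguity; the ultraviolet divergences then resurface as poles of the Gamma functions at ν = 4.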

  4. One-dimensional QCD in thimble regularization

    Science.gov (United States)

    Di Renzo, F.; Eruzzi, G.

    2018-01-01

QCD in 0+1 dimensions is numerically solved via thimble regularization. In the context of this toy model, a general formalism is presented for SU(N) theories. The sign problem that the theory displays is a genuine one, stemming from a (quark) chemical potential. Three stationary points are present in the original (real) domain of integration, so that contributions from all the thimbles associated with them are to be taken into account: we show how semiclassical computations can provide hints on the regions of parameter space where this is absolutely crucial. Known analytical results for the chiral condensate and the Polyakov loop are correctly reproduced: this is in particular trivial at high values of the number of flavors Nf. In this regime we notice that the single-thimble-dominance scenario takes place (the dominant thimble is the one associated with the identity). At low values of Nf computations can be more difficult. It is important to stress that this is not at all a consequence of the original sign problem (not even via the residual phase). The latter is always under control, while accidental, delicate cancellations of contributions coming from different thimbles can be in place in (restricted) regions of the parameter space.

  5. Regularization, Renormalization, and Dimensional Analysis: Dimensional Regularization meets Freshman E&M

    OpenAIRE

    Olness, Fredrick; Scalise, Randall

    2008-01-01

We illustrate the dimensional regularization technique using a simple problem from elementary electrostatics. We contrast this approach with the cutoff regularization approach, and demonstrate that dimensional regularization preserves the translational symmetry. We then introduce a Minimal Subtraction (MS) and a Modified Minimal Subtraction (MS-bar) scheme to renormalize the result. Finally, we consider dimensional transmutation as encountered in the case of compact extra dimensions.
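The flavor of such a computation can be sketched with the textbook infinite-line-charge potential (our own worked version of the kind of example the abstract describes; the conventions and the scale μ below are assumptions, not the authors' exact setup):

```latex
% Infinite line charge, with the longitudinal integral continued to
% n = 1 - 2\epsilon dimensions; \mu is an arbitrary inverse-length scale
% inserted to keep V(\rho) at its physical dimension:
V(\rho) = \frac{\lambda}{4\pi\epsilon_0}\,\mu^{-2\epsilon}
          \int d^{\,1-2\epsilon}z\,\frac{1}{\sqrt{z^{2}+\rho^{2}}}
        = \frac{\lambda}{4\pi\epsilon_0}\,
          \Gamma(\epsilon)\,\bigl(\pi\mu^{2}\rho^{2}\bigr)^{-\epsilon}
% Expanding in \epsilon isolates the divergence as a pole:
        = \frac{\lambda}{4\pi\epsilon_0}
          \left[\frac{1}{\epsilon}-\gamma_E
                -\ln\!\bigl(\pi\mu^{2}\rho^{2}\bigr)\right]
          +\mathcal{O}(\epsilon).
% The field E_\rho = -\partial_\rho V = \lambda/(2\pi\epsilon_0\rho) is
% finite and \mu-independent; the pole can be removed by (modified)
% minimal subtraction.
```

Note that, unlike a hard cutoff on the line, the ε-continuation never singles out a point on the line, which is how translational symmetry is preserved.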

  6. Derivation of the Polyakov action

    International Nuclear Information System (INIS)

    Kachkachi, M.

    1999-11-01

We develop another method to obtain the Polyakov action, namely as the solution of the conformal Ward identity on a Riemann surface Σ. We find that this action is the sum of two terms: the first is expressed in terms of the projective connection and produces the diffeomorphism anomaly, while the second is anomaly-free and contains the globally defined zero modes of the Ward identity. The explicit expression of this action is given on the complex plane. (author)

  7. Dimensional regularization and analytical continuation at finite temperature

    International Nuclear Information System (INIS)

    Chen Xiangjun; Liu Lianshou

    1998-01-01

    The relationship between dimensional regularization and analytical continuation of infrared divergent integrals at finite temperature is discussed and a method of regularization of infrared divergent integrals and infrared divergent sums is given

  8. Factorization and regularization by dimensional reduction

    Science.gov (United States)

    Signer, Adrian; Stöckinger, Dominik

    2005-10-01

    Since an old observation by Beenakker et al., the evaluation of QCD processes in dimensional reduction has repeatedly led to terms that seem to violate the QCD factorization theorem. We reconsider the example of the process gg → ttbar and show that the factorization problem can be completely resolved. A natural interpretation of the seemingly non-factorizing terms is found, and they are rewritten in a systematic and factorized form. The key to the solution is that the D- and (4 - D)-dimensional parts of the 4-dimensional gluon have to be regarded as independent partons.

  9. Regularized Discriminant Analysis: A Large Dimensional Study

    KAUST Repository

    Yang, Xiaoke

    2018-04-28

In this thesis, we focus on studying the performance of general regularized discriminant analysis (RDA) classifiers. The data used for analysis are assumed to follow a Gaussian mixture model with different means and covariances. RDA offers a rich class of regularization options, covering as special cases the regularized linear discriminant analysis (RLDA) and the regularized quadratic discriminant analysis (RQDA) classifiers. We analyze RDA under the double asymptotic regime where the data dimension and the training size both increase in a proportional way. This double asymptotic regime allows for the application of fundamental results from random matrix theory. Under the double asymptotic regime and some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the data statistical parameters and dimensions. This result not only reveals some mathematical relations between the misclassification error and the class statistics, but can also be leveraged to select the optimal parameters that minimize the classification error, thus yielding the optimal classifier. Validation results on synthetic data show a good accuracy of our theoretical findings. We also construct a general consistent estimator to approximate the true classification error when the true statistics are unknown. We benchmark the performance of our proposed consistent estimator against classical estimators on synthetic data. The observations demonstrate that the general estimator outperforms the others in terms of mean squared error (MSE).
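A minimal numpy sketch of an RDA-style classifier of the kind analyzed here (the Friedman-style shrinkage and the parameter names `lam`/`gamma` are our illustrative assumptions, not the thesis' exact estimator):

```python
import numpy as np

def rda_fit(X, y, lam=0.5, gamma=0.1):
    # Friedman-style RDA: shrink each class covariance toward the pooled
    # covariance (lam interpolates QDA -> LDA), then toward a scaled
    # identity (gamma). Parameter names are illustrative only.
    classes = np.unique(y)
    n, p = X.shape
    means, covs, priors = {}, {}, {}
    pooled = np.zeros((p, p))
    for c in classes:
        Xc = X[y == c]
        means[c] = Xc.mean(axis=0)
        covs[c] = np.cov(Xc, rowvar=False)
        pooled += (len(Xc) - 1) * covs[c]
        priors[c] = len(Xc) / n
    pooled /= n - len(classes)
    for c in classes:
        S = (1 - lam) * covs[c] + lam * pooled
        covs[c] = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
    return classes, means, covs, priors

def rda_predict(model, X):
    # Quadratic discriminant score per class; predict the argmax.
    classes, means, covs, priors = model
    scores = []
    for c in classes:
        _, logdet = np.linalg.slogdet(covs[c])
        d = X - means[c]
        maha = np.einsum("ij,jk,ik->i", d, np.linalg.inv(covs[c]), d)
        scores.append(-0.5 * logdet - 0.5 * maha + np.log(priors[c]))
    return classes[np.argmax(np.array(scores), axis=0)]

# Two well-separated Gaussian classes as a sanity check.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.5, 1.0, (100, 5))])
y = np.repeat([0, 1], 100)
acc = float((rda_predict(rda_fit(X, y), X) == y).mean())
```

Sweeping `lam` and `gamma` on held-out data is the finite-sample analogue of the optimal-parameter selection that the asymptotic analysis makes deterministic.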

  10. A complexity-regularized quantization approach to nonlinear dimensionality reduction

    OpenAIRE

    Raginsky, Maxim

    2005-01-01

    We consider the problem of nonlinear dimensionality reduction: given a training set of high-dimensional data whose ``intrinsic'' low dimension is assumed known, find a feature extraction map to low-dimensional space, a reconstruction map back to high-dimensional space, and a geometric description of the dimension-reduced data as a smooth manifold. We introduce a complexity-regularized quantization approach for fitting a Gaussian mixture model to the training set via a Lloyd algorithm. Complex...

  11. Regularization by Dimensional Reduction: Consistency, Quantum Action Principle, and Supersymmetry

    Science.gov (United States)

    Stöckinger, Dominik

    2005-03-01

    It is proven by explicit construction that regularization by dimensional reduction can be formulated in a mathematically consistent way. In this formulation the quantum action principle is shown to hold. This provides an intuitive and elegant relation between the D-dimensional lagrangian and Ward or Slavnov-Taylor identities, and it can be used in particular to study to what extent dimensional reduction preserves supersymmetry. We give several examples of previously unchecked cases.

  12. Dimensional regularization of the supersymmetric Yang-Mills model

    International Nuclear Information System (INIS)

Egoryan, E.Sh.

    1982-01-01

A slightly modified scheme of dimensional regularization is proposed for application to models with simple supersymmetry in the Wess-Zumino gauge. It is proved that the supersymmetric Ward identities are fulfilled in the Yang-Mills model with simple supersymmetry, as well as in vector-like models with matter fields having simple supersymmetry.

  13. Dimensional regularization and dimensional reduction in the light cone

    Science.gov (United States)

    Qiu, J.

    2008-06-01

We calculate all 2 → 2 scattering processes in Yang-Mills theory in the light-cone gauge, with the dimensional regulator as the UV regulator. The IR is regulated with a cutoff in q+. This supplements our earlier work, where a Lorentz-noncovariant regulator was used and the final results exhibited some problems in the gauge fixing. Supersymmetry relations among the various amplitudes are checked by using light-cone superfields.

  14. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    Science.gov (United States)

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  16. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    Directory of Open Access Journals (Sweden)

    Shuang Li

    2016-01-01

Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  17. Dimensional regularization and infrared divergences in quantum electrodynamics

    International Nuclear Information System (INIS)

    Marculescu, S.

    1979-01-01

Dimensional continuation was devised as a powerful regularization method for ultraviolet divergences in quantum field theories. Recently it became clear, at least for quantum electrodynamics, that such a method can also be employed for factorizing out infrared divergences from the on-shell S-matrix elements. This provides a renormalization scheme on the electron mass shell without using a gauge-violating ''photon mass''. (author)

  18. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Kai Johannes

    2010-04-15

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  19. Proof of Polyakov conjecture on supercomplex plane

    International Nuclear Information System (INIS)

    Kachkachi, M.; Kouadik, S.

    1994-10-01

Using a Neumann series, we solve the SBE iteratively to arbitrary order. Applying this, we compute the energy-momentum tensor and the n-point functions for generic n, starting from the WZP action on the supercomplex plane. We solve the superconformal Ward identity and we show that the iterative solution to arbitrary order is resummed by the WZP action. This proves the Polyakov conjecture on the supercomplex plane. (author). 8 refs

  20. A Large Dimensional Analysis of Regularized Discriminant Analysis Classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-11-01

This article carries out a large dimensional analysis of standard regularized discriminant analysis classifiers designed on the assumption that data arise from a Gaussian mixture model with different means and covariances. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under mild assumptions, we show that the asymptotic classification error approaches a deterministic quantity that depends only on the means and covariances associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized discriminant analysis in practical large but finite dimensions, and can be used to determine and pre-estimate the optimal regularization parameter that minimizes the misclassification error probability. Despite being theoretically valid only for Gaussian data, our findings are shown to yield a high accuracy in predicting the performance achieved with real data sets drawn from the popular USPS database, thereby making an interesting connection between theory and practice.

  1. Electrostatic self-force in the field of an (n + 1)-dimensional black hole: Dimensional regularization

    International Nuclear Information System (INIS)

    Grats, Yu. V.; Spirin, P. A.

    2016-01-01

    The self-energy of a classical charged particle localized at a relatively large distance outside the event horizon of an (n + 1)-dimensional Schwarzschild–Tangherlini black hole for an arbitrary n ≥ 3 is calculated. An expression for the electrostatic Green function is derived in the first two orders of the perturbation theory. Dimensional regularization is proposed to be used to regularize the corresponding formally divergent expression for the self-energy. The derived expression for the renormalized self-energy is compared with the results of other authors.

  2. Dimensional reduction in numerical relativity: Modified Cartoon formalism and regularization

    Science.gov (United States)

    Cook, William G.; Figueras, Pau; Kunesch, Markus; Sperhake, Ulrich; Tunyasuvunakool, Saran

    2016-06-01

    We present in detail the Einstein equations in the Baumgarte-Shapiro-Shibata-Nakamura formulation for the case of D-dimensional spacetimes with SO(D - d) isometry based on a method originally introduced in Ref. 1. Regularized expressions are given for a numerical implementation of this method on a vertex centered grid including the origin of the quasi-radial coordinate that covers the extra dimensions with rotational symmetry. Axisymmetry, corresponding to the value d = D - 2, represents a special case with fewer constraints on the vanishing of tensor components and is conveniently implemented in a variation of the general method. The robustness of the scheme is demonstrated for the case of a black-hole head-on collision in D = 7 spacetime dimensions with SO(4) symmetry.

  3. Fried-Yennie gauge in dimensionally regularized QED

    International Nuclear Information System (INIS)

    Adkins, G.S.

    1993-01-01

The Fried-Yennie gauge in QED is a covariant gauge with agreeable infrared properties. That is, the mass-shell renormalization scheme can be implemented without introducing artificial infrared divergences, and terms having spuriously low orders in α disappear in certain bound-state calculations. The photon propagator in the Fried-Yennie gauge has the form D^β_μν(k) = (-1/k²)[g_μν + β k_μ k_ν/k²], where β is the gauge parameter. In this work, I show that the Fried-Yennie gauge parameter is β = 2/(1-2ε) when dimensional regularization (with n = 4-2ε dimensions of spacetime) is used to regulate the theory

  4. Manifold-splitting regularization, self-linking, twisting, writhing numbers of space-time ribbons

    International Nuclear Information System (INIS)

    Tze, C.H.

    1988-01-01

The authors present an alternative formulation of Polyakov's regularization of Gauss' integral formula for a single closed Feynman path. A key element in his proof of the D = 3 fermi-bose transmutations induced by topological gauge fields, this regularization is linked here with the existence and properties of a nontrivial topological invariant for a closed space ribbon. This self-linking coefficient, an integer, is the sum of two differential characteristics of the ribbon, its twisting and writhing numbers. These invariants form the basis for a physical interpretation of our regularization. Their connection to Polyakov's spinorization is discussed. The authors further generalize their construction to the self-linking, twisting and writhing of higher-dimensional d = η (odd) submanifolds in D = (2η + 1) space-time

  5. On Regularity Criteria for the Two-Dimensional Generalized Liquid Crystal Model

    Directory of Open Access Journals (Sweden)

    Yanan Wang

    2014-01-01

We establish the regularity criteria for the two-dimensional generalized liquid crystal model. It turns out that the global existence results satisfy our regularity criteria naturally.

  6. Two-loop parameter relations between dimensional regularization and dimensional reduction applied to SUSY-QCD

    Science.gov (United States)

    Mihaila, L.

    2009-10-01

    The two-loop relations between the running gluino-quark-squark coupling, the gluino and the quark mass defined in dimensional regularization (DREG) and dimensional reduction (DRED) in the framework of SUSY-QCD are presented. Furthermore, we verify with the help of these relations that the three-loop β-functions derived in the minimal subtraction scheme combined with DREG or DRED transform into each other. This result confirms the equivalence of the two schemes at the three-loop order, if applied to SUSY-QCD.

  7. The difference between n-dimensional regularization and n-dimensional reduction in QCD

    Science.gov (United States)

    Smith, J.; van Neerven, W. L.

    2005-03-01

We discuss the difference between n-dimensional regularization and n-dimensional reduction for processes in QCD which have an additional mass scale. Examples are heavy-flavor production in hadron-hadron collisions or on-shell photon-hadron collisions, where the scale is represented by the mass m. Another example is electroproduction of heavy flavors, where we have two mass scales given by m and the virtuality of the photon Q = √(-q²). Finally we study the Drell-Yan process, where the additional scale is represented by the virtuality Q = √(q²) of the vector boson (γ*, W, Z). The difference between the two schemes is not accounted for by the usual oversubtractions. There are extra counter terms which multiply the mass-scale-dependent parts of the Born cross sections. In the case of the Drell-Yan process it turns out that the off-shell mass regularization agrees with n-dimensional regularization.

  8. Polyakov-Wiegmann formula and multiplicative gerbes

    International Nuclear Information System (INIS)

    Gawedzki, Krzysztof; Waldorf, Konrad

    2009-01-01

An unambiguous definition of Feynman amplitudes in the Wess-Zumino-Witten sigma model and the Chern-Simons gauge theory with a general Lie group is determined by a certain geometric structure on the group. For the WZW amplitudes, this is a (bundle) gerbe with connection of an appropriate curvature, whereas for the CS amplitudes, the gerbe has to be additionally equipped with a multiplicative structure assuring its compatibility with the group multiplication. We show that for simple compact Lie groups the obstruction to the existence of a multiplicative structure is provided by a 2-cocycle of phases that appears in the Polyakov-Wiegmann formula relating the Wess-Zumino action functional of the product of group-valued fields to the sum of the individual contributions. These phases were computed a long time ago for all compact simple Lie groups. If they are trivial, then the multiplicative structure exists and is unique up to isomorphism.

  9. Effect of the Gribov horizon on the Polyakov loop and vice versa

    Energy Technology Data Exchange (ETDEWEB)

    Canfora, F.E. [Centro de Estudios Cientificos (CECS), Valdivia (Chile); Dudal, D. [KU Leuven Campus Kortrijk, KULAK, Department of Physics, Kortrijk (Belgium); Ghent University, Department of Physics and Astronomy, Gent (Belgium); Justo, I.F. [Ghent University, Department of Physics and Astronomy, Gent (Belgium); UERJ, Universidade do Estado do Rio de Janeiro, Departamento de Fisica Teorica, Instituto de Fisica, Maracana, Rio de Janeiro (Brazil); Pais, P. [Centro de Estudios Cientificos (CECS), Valdivia (Chile); Universite Libre de Bruxelles and International Solvay Institutes, Physique Theorique et Mathematique, Brussels (Belgium); Rosa, L. [Universita di Napoli Federico II, Dipartimento di Fisica, Monte S. Angelo (Italy); INFN, Sezione di Napoli, Monte S. Angelo (Italy); Vercauteren, D. [Duy Tan University, Institute of Research and Development, Da Nang (Viet Nam)

    2015-07-15

    We consider finite-temperature SU(2) gauge theory in the continuum formulation, which necessitates the choice of a gauge fixing. Choosing the Landau gauge, the existing gauge copies are taken into account by means of the Gribov-Zwanziger quantization scheme, which entails the introduction of a dynamical mass scale (Gribov mass) directly influencing the Green functions of the theory. Here, we determine simultaneously the Polyakov loop (vacuum expectation value) and Gribov mass in terms of temperature, by minimizing the vacuum energy w.r.t. the Polyakov-loop parameter and solving the Gribov gap equation. Inspired by the Casimir energy-style of computation, we illustrate the usage of Zeta function regularization in finite-temperature calculations. Our main result is that the Gribov mass directly feels the deconfinement transition, visible from a cusp occurring at the same temperature where the Polyakov loop becomes nonzero. In this exploratory work we mainly restrict ourselves to the original Gribov-Zwanziger quantization procedure in order to illustrate the approach and the potential direct link between the vacuum structure of the theory (dynamical mass scales) and (de)confinement. We also present a first look at the critical temperature obtained from the refined Gribov-Zwanziger approach. Finally, a particular problem for the pressure at low temperatures is reported. (orig.)

  10. (2+1)-dimensional regular black holes with nonlinear electrodynamics sources

    Directory of Open Access Journals (Sweden)

    Yun He

    2017-11-01

On the basis of two requirements, the avoidance of the curvature singularity and the Maxwell theory as the weak-field limit of the nonlinear electrodynamics, we find two restricted conditions on the metric function of a (2+1)-dimensional regular black hole in general relativity coupled with nonlinear electrodynamics sources. By the use of the two conditions, we obtain a general approach to construct (2+1)-dimensional regular black holes. In this manner, we construct four (2+1)-dimensional regular black holes as examples. We also study the thermodynamic properties of the regular black holes and verify the first law of black hole thermodynamics.

  11. A practicable γ5-scheme in dimensional regularization

    International Nuclear Information System (INIS)

    Koerner, J.G.; Kreimer, D.; Schilcher, K.

    1991-08-01

We present a new simple γ5 regularization scheme. We discuss its use in the standard radiative-correction calculations, including the anomaly contributions. The new scheme features an anticommuting γ5, which leads to great simplifications in practical calculations. We carefully discuss the underlying mathematics of our γ5-scheme, which is formulated in terms of simple projection operations. (orig.)

  12. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    Science.gov (United States)

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.

  13. The quantum equivalence of Nambu and Polyakov string actions

    International Nuclear Information System (INIS)

    Morris, T.R.

    1990-01-01

    By integrating out the auxiliary metric in the Polyakov string path integral, we derive a path integral for the Nambu action complete with measure. We show how to gauge fix this Nambu form of the partition function. This involves an intermediate partial gauge-fixing step. Our result is the Polyakov path integral in conformal gauge with the correct measure. The intermediate step may enjoy off-shell BRS symmetry by a generalization of the standard procedures. We show how the Teichmueller parameters arise in the Nambu formalism for general genus. These results allow us to make some observations on the physical characteristics of typical string world-sheets. (orig.)

  14. On the Gribov ambiguity in the Polyakov string

    International Nuclear Information System (INIS)

    Jaskolski, Z.

    1988-01-01

The global aspects of the gauge fixing in the Polyakov path integral for the bosonic string are considered within the Ebin-Fischer-Marsden approach to the geometry of spaces of Riemannian metrics and conformal structures. It is shown that for surfaces of higher genus, the existence of local conformal gauges is sufficient to derive the globally defined integral over the Teichmueller space. The generalized Faddeev-Popov procedure for incomplete gauges is formulated and used to derive the global expression for the Polyakov path integral in the cases of the torus and the sphere. The Gribov ambiguity in the functional integral over surfaces without boundary can be successfully overcome for arbitrary genus.

  15. The Polyakov relation for the sphere and higher genus surfaces

    International Nuclear Information System (INIS)

    Menotti, Pietro

    2016-01-01

    The Polyakov relation, which in the sphere topology gives the changes of the Liouville action under the variation of the position of the sources, is also related in the case of higher genus to the dependence of the action on the moduli of the surface. We write and prove such a relation for genus 1 and for all hyperelliptic surfaces. (paper)

  16. Relation between the pole and the minimally subtracted mass in dimensional regularization and dimensional reduction to three-loop order

    Science.gov (United States)

    Marquard, P.; Mihaila, L.; Piclum, J. H.; Steinhauser, M.

    2007-06-01

We compute the relation between the pole quark mass and the minimally subtracted quark mass in the framework of QCD applying dimensional reduction as a regularization scheme. Special emphasis is put on the evanescent couplings and the renormalization of the ε-scalar mass. As a by-product we obtain the three-loop on-shell renormalization constants Z_m^OS and Z_2^OS in dimensional regularization and thus provide the first independent check of the analytical results computed several years ago.

  17. Seiberg-Witten and 'Polyakov-like' Magnetic Bion Confinements are Continuously Connected

    Energy Technology Data Exchange (ETDEWEB)

    Poppitz, Erich; /Toronto U.; Unsal, Mithat; /SLAC /Stanford U., Phys. Dept.

    2012-06-01

We study four-dimensional N = 2 supersymmetric pure-gauge (Seiberg-Witten) theory and its N = 1 mass perturbation by using compactification on S^1 x R^3. It is well known that on R^4 (or at large S^1 size L) the perturbed theory realizes confinement through monopole or dyon condensation. At small S^1, we demonstrate that confinement is induced by a generalization of Polyakov's three-dimensional instanton mechanism to a locally four-dimensional theory - the magnetic bion mechanism - which also applies to a large class of nonsupersymmetric theories. Using a large- vs. small-L Poisson duality, we show that the two mechanisms of confinement, previously thought to be distinct, are in fact continuously connected.

  18. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    Science.gov (United States)

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net regularization, which is called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to tackle both estimating the gene coefficients and performing gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weight; however, using this weight may not be preferable, for two reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and encourage grouping effects simultaneously. The real-data results indicate that AAElastic is significantly more consistent in selecting genes than the other three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method in the field of high-dimensional cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
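The adaptive elastic net idea can be sketched with a plain proximal-gradient solver. This is an illustrative numpy-only stand-in, not the authors' AAElastic implementation: the initial estimator here is a simple ridge-type logistic fit, and the step size and penalty values are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def adaptive_elastic_net_logistic(X, y, lam1=0.05, lam2=0.05,
                                  gamma=1.0, n_iter=500, lr=0.05):
    """Proximal-gradient sketch of adaptive elastic net logistic regression.

    Step 1: an initial ridge-type logistic fit supplies the adaptive
    L1 weights w_j = 1 / (|beta0_j| + eps)^gamma.
    Step 2: gradient steps on the smooth part (log-loss + L2 penalty),
    each followed by a weighted soft-thresholding (L1 proximal) step."""
    n, p = X.shape
    beta0 = np.zeros(p)
    for _ in range(n_iter):                      # initial estimator
        grad = X.T @ (sigmoid(X @ beta0) - y) / n + lam2 * beta0
        beta0 -= lr * grad
    w = 1.0 / (np.abs(beta0) + 1e-6) ** gamma    # adaptive L1 weights

    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ beta) - y) / n + lam2 * beta
        beta -= lr * grad
        # weighted soft-thresholding shrinks weak coefficients to exact zero
        beta = np.sign(beta) * np.maximum(np.abs(beta) - lr * lam1 * w, 0.0)
    return beta
```

Genes with small initial estimates receive large weights and are shrunk to exactly zero, which is the gene-selection mechanism the abstract describes.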

  19. The light-cone gauge in Polyakov's theory of strings and its relation to the conformal gauge

    International Nuclear Information System (INIS)

    Tzani, R.

    1989-01-01

The author studies string theory as a gauge theory. The analysis includes the formulation of the interacting bosonic string by fixing the Gervais-Sakita light-cone gauge in Polyakov's path-integral formulation of the theory, and the study of the problem of changing gauge in string theory in the context of the functional formulation of the theory. The main results are the following: Mandelstam's picture is obtained from the light-cone gauge fixed Polyakov theory. Due to the off-diagonal nature of the gauge, the calculation of the determinants differs from the usual (conformal gauge) case. The regularization of the functional integrals associated with these determinants is done by using the conformal-invariance principle. He then shows that the conformal anomaly associated with this new gauge fixing is canceled at space-time dimension d = 26. Studying the problem of changing gauge in string theory, he shows the equivalence between the light-cone and conformal gauges in the path-integral formulation of the theory. In particular, by performing a proper change of variables in the commuting and ghost fields in the Polyakov path integral, the string theory in the conformal gauge is obtained from the light-cone gauge fixed expression. Finally, the problem of changing gauge is generalized to higher-genus surfaces. It is shown that the string theory in the conformal gauge is equivalent to the light-cone gauge fixed theory for surfaces with an arbitrary number of handles.

  20. The ξ/ξ2nd ratio as a test for Effective Polyakov Loop Actions

    Science.gov (United States)

    Caselle, Michele; Nada, Alessandro

    2018-03-01

Effective Polyakov line actions are a powerful tool to study the finite temperature behaviour of lattice gauge theories. They are much simpler to simulate than the original (3+1) dimensional LGTs and are affected by a milder sign problem. However, it is not clear to what extent they really capture the rich spectrum of the original theories, a feature which is instead of great importance if one aims to address the sign problem. We propose here a simple way to address this issue based on the so-called second moment correlation length ξ2nd. The ratio ξ/ξ2nd between the exponential correlation length and the second moment one is equal to 1 if only a single mass is present in the spectrum, and becomes larger and larger as the complexity of the spectrum increases. Since both ξexp and ξ2nd are easy to measure on the lattice, this is an economical and effective way to keep track of the spectrum of the theory. In this respect we show, using both numerical simulations and effective string calculations, that this ratio increases dramatically as the temperature decreases. This non-trivial behaviour should be reproduced by the Polyakov loop effective action.
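The two correlation lengths are easy to illustrate on a synthetic correlator. A hedged sketch (one-dimensional second-moment definition, hypothetical parameters) showing that ξ/ξ2nd ≈ 1 for a single exponential and grows once a second, heavier state is added:

```python
import numpy as np

def xi_exp(G, r):
    """Exponential correlation length from the asymptotic log-slope
    of the correlator tail."""
    tail = slice(len(r) // 2, None)
    slope = np.polyfit(r[tail], np.log(G[tail]), 1)[0]
    return -1.0 / slope

def xi_2nd(G, r):
    """1D second-moment correlation length:
    xi_2nd^2 = sum(r^2 G) / (2 sum(G))."""
    return np.sqrt(np.sum(r**2 * G) / (2.0 * np.sum(G)))

r = np.linspace(0.0, 50.0, 2001)
xi = 2.0
single = np.exp(-r / xi)                                # one state only
double = np.exp(-r / xi) + 0.5 * np.exp(-2.0 * r / xi)  # add a heavier state
```

For `single` the ratio ξ_exp/ξ_2nd is 1 up to discretization error; for `double` the heavier state lowers ξ_2nd but not the asymptotic ξ_exp, so the ratio exceeds 1, the diagnostic the paper proposes.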

  1. The ξ/ξ2nd ratio as a test for Effective Polyakov Loop Actions

    Directory of Open Access Journals (Sweden)

    Caselle Michele

    2018-01-01

Full Text Available Effective Polyakov line actions are a powerful tool to study the finite temperature behaviour of lattice gauge theories. They are much simpler to simulate than the original (3+1) dimensional LGTs and are affected by a milder sign problem. However, it is not clear to what extent they really capture the rich spectrum of the original theories, a feature which is instead of great importance if one aims to address the sign problem. We propose here a simple way to address this issue based on the so-called second moment correlation length ξ2nd. The ratio ξ/ξ2nd between the exponential correlation length and the second moment one is equal to 1 if only a single mass is present in the spectrum, and becomes larger and larger as the complexity of the spectrum increases. Since both ξexp and ξ2nd are easy to measure on the lattice, this is an economical and effective way to keep track of the spectrum of the theory. In this respect we show, using both numerical simulations and effective string calculations, that this ratio increases dramatically as the temperature decreases. This non-trivial behaviour should be reproduced by the Polyakov loop effective action.

  2. QCD at Zero Baryon Density and the Polyakov Loop Paradox

    CERN Document Server

    Kratochvila, S; Forcrand, Ph. de

    2006-01-01

We compare the grand canonical partition function at fixed chemical potential mu with the canonical partition function at fixed baryon number B, formally and by numerical simulations at mu=0 and B=0 with four flavours of staggered quarks. We verify that the free energy densities are equal in the thermodynamic limit, and show that they can be well described by the hadron resonance gas at T < T_c. Small differences between the two ensembles, for thermodynamic observables characterising the deconfinement phase transition, vanish with increasing lattice size. These differences are solely caused by contributions of non-zero baryon density sectors, which are exponentially suppressed with increasing volume. The Polyakov loop shows a different behaviour: for all temperatures and volumes, its expectation value is exactly zero in the canonical formulation, whereas it is always non-zero in the commonly used grand-canonical formulation. We clarify this paradoxical difference, and show that the non-vanishing Polyakov loop e...

  3. Form factors and scattering amplitudes in N=4 SYM in dimensional and massive regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Henn, Johannes M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Moch, Sven [California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Naculich, Stephen G. [California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Bowdoin College, Brunswick, ME (United States). Dept. of Physics

    2011-09-15

The IR-divergent scattering amplitudes of N=4 supersymmetric Yang-Mills theory can be regulated in a variety of ways, including dimensional regularization and massive (or Higgs) regularization. The IR-finite part of an amplitude in different regularizations generally differs by an additive constant at each loop order, due to the ambiguity in separating finite and divergent contributions. We give a prescription for defining an unambiguous, regulator-independent finite part of the amplitude by factoring off a product of IR-divergent "wedge" functions. For the cases of dimensional regularization and the common-mass Higgs regulator, we define the wedge function in terms of a form factor, and demonstrate the regularization independence of the n-point amplitude through two loops. We also deduce the form of the wedge function for the more general differential-mass Higgs regulator, although we lack an explicit operator definition in this case. Finally, using extended dual conformal symmetry, we demonstrate the link between the differential-mass wedge function and the anomalous dual conformal Ward identity for the finite part of the scattering amplitude. (orig.)

  4. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data, which is parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.
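The idea of borrowing strength across variables by shrinking variances can be sketched simply. This toy version pools each variable's variance toward the global average rather than toward the paper's similarity-based local clusters, and the shrinkage weight is a hypothetical fixed constant:

```python
import numpy as np

def shrunken_variances(X, lam=0.5):
    """Shrink each variable's sample variance toward the pooled average.

    A minimal stand-in for joint mean-variance regularization: with few
    replicates (rows), per-variable variances are noisy, and pooling
    stabilizes them."""
    s2 = X.var(axis=0, ddof=1)        # per-variable sample variances
    target = s2.mean()                # global pooling target (stand-in)
    return lam * target + (1.0 - lam) * s2

def moderated_t(X, mu0=0.0, lam=0.5):
    """t-like statistic built on the regularized variance estimates."""
    n = X.shape[0]
    s2 = shrunken_variances(X, lam)
    return (X.mean(axis=0) - mu0) / np.sqrt(s2 / n)
```

The shrunken variances have smaller spread than the raw ones, which is what restores power to the variable-wise statistics when degrees of freedom are scarce.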

  5. Global Regularity for the Yang-Mills Equations on High Dimensional Minkowski Space

    OpenAIRE

    Krieger, Joachim; Sterbenz, Jacob

    2005-01-01

This monograph contains a study of the global Cauchy problem for the Yang-Mills equations on (6+1) and higher dimensional Minkowski space, when the initial data sets are small in the critical gauge covariant Sobolev space $\dot{H}^{(n-4)/2}_A$. Regularity is obtained through a certain "microlocal geometric renormalization" of the equations which is implemented via a family of approximate null Cronstrom gauge transformations. The argument is then reduced to controlling some degenerate elli...

  6. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    Science.gov (United States)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear at distinctive values of Tsallis' characteristic real parameter q, on a numerable set of rational numbers of the q-line. These poles are dealt with using dimensional regularization. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.
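The origin of such poles can be made explicit with a standard radial integral identity. A hedged sketch for a D-dimensional quadratic Hamiltonian (normalizations omitted; the specific potential of the paper is not reproduced):

```latex
\int_0^\infty p^{\,D-1}\left[1+(q-1)\,\beta p^2\right]^{-\frac{1}{q-1}}\,dp
  \;=\; \tfrac{1}{2}\,\bigl[(q-1)\beta\bigr]^{-D/2}\,
        B\!\left(\tfrac{D}{2},\,\tfrac{1}{q-1}-\tfrac{D}{2}\right),
\qquad
B(x,y)=\frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)} .
```

The factor $\Gamma\!\left(\frac{1}{q-1}-\frac{D}{2}\right)$ develops poles whenever $\frac{1}{q-1}-\frac{D}{2}=-n$ for non-negative integers $n$, i.e. at $q = 1 + \frac{2}{D-2n}$: for integer $D$ this is a countable set of rational values of $q$, which dimensional regularization in $D$ then handles in the usual way.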

  7. Wavelet-based regularization of the Galerkin truncated three-dimensional incompressible Euler flows.

    Science.gov (United States)

    Farge, Marie; Okamoto, Naoya; Schneider, Kai; Yoshimatsu, Katsunori

    2017-12-01

    We present numerical simulations of the three-dimensional Galerkin truncated incompressible Euler equations that we integrate in time while regularizing the solution by applying a wavelet-based denoising. For this, at each time step, the vorticity field is decomposed into wavelet coefficients, which are split into strong and weak coefficients, before reconstructing them in physical space to obtain the corresponding coherent and incoherent vorticities. Both components are multiscale and orthogonal to each other. Then, by using the Biot-Savart kernel, one obtains the coherent and incoherent velocities. Advancing the coherent flow in time, while filtering out the noiselike incoherent flow, models turbulent dissipation and corresponds to an adaptive regularization. To track the flow evolution in both space and scale, a safety zone is added in wavelet coefficient space to the coherent wavelet coefficients. It is shown that the coherent flow indeed exhibits an intermittent nonlinear dynamics and a k^{-5/3} energy spectrum, where k is the wave number, characteristic of three-dimensional homogeneous isotropic turbulence. Finally, we compare the dynamical and statistical properties of Euler flows subjected to four kinds of regularizations: dissipative (Navier-Stokes), hyperdissipative (iterated Laplacian), dispersive (Euler-Voigt), and wavelet-based regularizations.
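The coherent/incoherent splitting can be illustrated in miniature with a one-level Haar transform on a 1D signal. This is a toy stand-in for the 3D multiscale orthogonal wavelet decomposition used in the paper, and the threshold value is hypothetical:

```python
import numpy as np

def haar_split(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_merge(s, d):
    """Inverse of haar_split."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def coherent_part(x, thresh):
    """Keep only the strong detail coefficients (coherent field);
    x - coherent_part(x, thresh) is the incoherent, noise-like part."""
    s, d = haar_split(x)
    d = np.where(np.abs(d) >= thresh, d, 0.0)
    return haar_merge(s, d)
```

Because the transform is orthonormal, the two parts are orthogonal and thresholding can only remove energy, mimicking the dissipative role the denoising plays in the Euler simulations.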

  8. Heavy quark free energies, potentials and the renormalized Polyakov loop

    International Nuclear Information System (INIS)

    Kaczmarek, O.; Karsch, F.; Petreczky, P.; Zantow, F.

    2004-01-01

We discuss the renormalized free energy of a heavy quark anti-quark pair in the color singlet channel for quenched and full QCD at finite temperature. The temperature and mass dependence, as well as the short-distance behavior, are analyzed. Using the free energies we calculate the heavy quark potential and entropy in quenched QCD. The asymptotic large-distance behavior of the free energy is used to define the non-perturbatively renormalized Polyakov loop, which is well behaved in the continuum limit. String breaking is studied in the color singlet channel in 2-flavor QCD.

  9. The three-point function in split dimensional regularization in the Coulomb gauge

    CERN Document Server

    Leibbrandt, G

    1998-01-01

We use a gauge-invariant regularization procedure, called "split dimensional regularization", to evaluate the quark self-energy $\Sigma(p)$ and quark-quark-gluon vertex function $\Lambda_\mu(p',p)$ in the Coulomb gauge, $\vec{\bigtriangledown}\cdot\vec{A}^a = 0$. The technique of split dimensional regularization was designed to regulate Coulomb-gauge Feynman integrals in non-Abelian theories. The technique, which is based on two complex regulating parameters, $\omega$ and $\sigma$, is shown to generate a well-defined set of Coulomb-gauge integrals. A major component of this project deals with the evaluation of four-propagator and five-propagator Coulomb integrals, some of which are nonlocal. It is further argued that the standard one-loop BRST identity relating $\Sigma$ and $\Lambda_\mu$ should by rights be replaced by a more general BRST identity which contains two additional contributions from ghost vertex diagrams. Despite the appearance of nonlocal Coulomb integrals, both $\Sigma$ and $\Lambda_\...

  10. Sparse Regularization in Fuzzy $c$-Means for High-Dimensional Data Clustering.

    Science.gov (United States)

    Chang, Xiangyu; Wang, Qingnan; Liu, Yuewen; Wang, Yu

    2016-12-01

In high-dimensional data clustering practices, the cluster structure is commonly assumed to be confined to a limited number of relevant features, rather than the entire feature set. However, for high-dimensional data, identifying the relevant features and discovering the cluster structure are still challenging problems. To solve these problems, this paper proposes a novel fuzzy c-means (FCM) model with sparse regularization (ℓ_q regularization, 0 < q ≤ 1), by reformulating the FCM objective function into the weighted between-cluster sum of squares form and imposing the sparse regularization on the weights. An algorithm is also developed to explicitly solve the proposed model. Compared with the existing clustering models, the proposed model can shrink the weights of irrelevant features (noisy features) to exact zero, and also can be efficiently solved in analytic forms when q = 1 or q = 1/2. Experiments on both synthetic and real-world data sets show that the proposed approach outperforms the existing clustering approaches.
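The q = 1 mechanism, soft-thresholding per-feature between-cluster dispersion so that noisy features receive exactly zero weight, can be sketched in isolation (a generic illustration of sparse feature weighting, not the paper's full FCM algorithm; the normalization choice here is an assumption):

```python
import numpy as np

def soft_threshold(a, lam):
    """Elementwise soft-thresholding: the analytic L1 proximal step."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def feature_weights(between_ss, lam):
    """Weight features by their (soft-thresholded) between-cluster
    sum of squares; features with small dispersion get weight exactly 0,
    and the surviving weights are L2-normalized."""
    w = soft_threshold(np.asarray(between_ss, dtype=float), lam)
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w
```

Features whose between-cluster dispersion falls below the regularization level `lam` are eliminated from the clustering criterion entirely, which is the exact-zero shrinkage property the abstract emphasizes.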

  11. The anomaly in the central charge of the supersymmetric kink from dimensional regularization and reduction

    Science.gov (United States)

    Rebhan, A.; van Nieuwenhuizen, P.; Wimmer, R.

    2003-01-01

    We show that the anomalous contribution to the central charge of the (1+1)-dimensional N=1 supersymmetric kink that is required for BPS saturation at the quantum level can be linked to an analogous term in the extra momentum operator of a (2+1)-dimensional kink domain wall with spontaneous parity violation and chiral domain wall fermions. In the quantization of the domain wall, BPS saturation is preserved by nonvanishing quantum corrections to the momentum density in the extra space dimension. Dimensional reduction from 2+1 to 1+1 dimensions preserves the unbroken N=1/2 supersymmetry and turns these parity-violating contributions into the anomaly of the central charge of the supersymmetric kink. On the other hand, standard dimensional regularization by dimensional reduction from 1 to (1- ɛ) spatial dimensions, which also preserves supersymmetry, obtains the anomaly from an evanescent counterterm. We identify the anomaly in the ordinary central charge as an anomalous contribution to the divergence of the conformal central-charge current.

  12. The Polyakov loop and its correlators in higher representations of SU(3) at finite temperature

    International Nuclear Information System (INIS)

    Huebner, K.A.

    2006-09-01

We have calculated the Polyakov loop in representations D=3,6,8,10,15,15',24,27 and diquark and baryonic Polyakov loop correlation functions with fundamental sources in SU(3) pure gauge theory and 2-flavour QCD with staggered quarks and Q anti Q-singlet correlation functions with sources in the fundamental and adjoint representation in SU(3) pure gauge theory. We have tested a new renormalisation procedure for the Polyakov loop and extracted the adjoint Polyakov loop below T_c, binding energy of the gluelump and string breaking distances. Moreover, we could show Casimir scaling for the Polyakov loop in different representations in SU(3) pure gauge theory above T_c. Diquark antitriplet and baryonic singlet free energies are related to the Q anti Q-singlet free energies by the Casimir as well. (orig.)

  13. The Polyakov loop and its correlators in higher representations of SU(3) at finite temperature

    Energy Technology Data Exchange (ETDEWEB)

    Huebner, K.A.

    2006-09-15

    We have calculated the Polyakov loop in representations D=3,6,8,10,15,15',24,27 and diquark and baryonic Polyakov loop correlation functions with fundamental sources in SU(3) pure gauge theory and 2-flavour QCD with staggered quarks and Q anti Q-singlet correlation functions with sources in the fundamental and adjoint representation in SU(3) pure gauge theory. We have tested a new renormalisation procedure for the Polyakov loop and extracted the adjoint Polyakov loop below T{sub c}, binding energy of the gluelump and string breaking distances. Moreover, we could show Casimir scaling for the Polyakov loop in different representations in SU(3) pure gauge theory above T{sub c}. Diquark antitriplet and baryonic singlet free energies are related to the Q anti Q-singlet free energies by the Casimir as well. (orig.)

  14. Fuzzy bags, Polyakov loop and gauge/string duality

    Directory of Open Access Journals (Sweden)

    Zuo Fen

    2014-01-01

Full Text Available Confinement in SU(N) gauge theory is due to the linear potential between colored objects. At short distances, the linear contribution could be considered as the quadratic correction to the leading Coulomb term. Recent lattice data show that such quadratic corrections also appear in the deconfined phase, in both the thermal quantities and the Polyakov loop. These contributions are studied systematically employing the gauge/string duality. "Confinement" in ${\cal N} = 4$ SU(N) Super Yang-Mills (SYM) theory could be achieved kinematically when the theory is defined on a compact space manifold. In the large-N limit, deconfinement of ${\cal N} = 4$ SYM on ${\Bbb S}^3$ at strong coupling is dual to the Hawking-Page phase transition in the global Anti-de Sitter spacetime. Meanwhile, all the thermal quantities and the Polyakov loop acquire significant quadratic contributions. Similar results can also be obtained at weak coupling. However, when confinement is induced dynamically through the local dilaton field in the gravity-dilaton system, these contributions cannot be generated consistently. This is in accordance with the fact that there is no dimension-2 gauge-invariant operator in the boundary gauge theory. Based on these results, we suspect that quadratic corrections, and also confinement, should be due to global or non-local effects in the bulk spacetime.

  15. S-matrix regularities of two-dimensional sigma-models of Stiefel manifolds

    International Nuclear Information System (INIS)

    Flume-Gorczyca, B.

    1980-01-01

    The S-matrices of the two-dimensional nonlinear O(n + m)/O(n) and O(n + m)/O(n) x O(m) sigma-models corresponding to Stiefel and Grassmann manifolds, respectively, are compared in leading order in 1/n. It is shown, that after averaging over O(m) labels of the incoming and outgoing particles, the S-matrices of both models become identical. This result explains why commonly expected regularities of the Grassmann models, in particular absence of particle production, are found, modulo an O(m) average, also in Stiefel models. (orig.)

  16. A Durbin-Levinson Regularized Estimator of High Dimensional Autocovariance Matrices

    DEFF Research Database (Denmark)

    Proietti, Tommaso; Giovannelli, Alessandro

We consider the problem of estimating the high-dimensional autocovariance matrix of a stationary random process, with the purpose of out-of-sample prediction and feature extraction. This problem has received several solutions. In the nonparametric framework, the literature has concentrated... a sample autocovariance sequence which is positive definite. We show that the regularized estimator of the autocovariance matrix is consistent and its convergence rate is established. We then focus on constructing the optimal linear predictor and we assess its properties. The computational complexity...
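The Durbin-Levinson recursion itself, which maps an autocovariance sequence to the coefficients of the best linear predictor, can be sketched as follows (the paper's regularization of the sample autocovariances is not reproduced here; the recursion takes a given autocovariance sequence as input):

```python
import numpy as np

def durbin_levinson(gamma):
    """Durbin-Levinson recursion.

    Input:  gamma[0..n], autocovariances of a stationary process.
    Output: phi, the n coefficients of the best linear predictor of
            x_{t+1} from x_t, ..., x_{t-n+1}, and v, the prediction
            error variance."""
    n = len(gamma) - 1
    phi = np.zeros(n)
    v = gamma[0]
    for k in range(n):
        # partial autocorrelation (reflection coefficient) at lag k+1
        acc = gamma[k + 1] - np.dot(phi[:k], gamma[k:0:-1])
        ref = acc / v
        phi_new = phi.copy()
        phi_new[k] = ref
        phi_new[:k] = phi[:k] - ref * phi[:k][::-1]
        phi = phi_new
        v *= (1.0 - ref**2)
    return phi, v
```

For an AR(1)-type autocovariance gamma(h) = phi^h the recursion recovers a single nonzero coefficient, and the error variance contracts accordingly.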

  17. Computational methodology to determine fluid related parameters of non regular three-dimensional scaffolds.

    Science.gov (United States)

    Acosta Santamaría, Víctor Andrés; Malvè, M; Duizabo, A; Mena Tobar, A; Gallego Ferrer, G; García Aznar, J M; Doblaré, M; Ochoa, I

    2013-11-01

The application of three-dimensional (3D) biomaterials to facilitate the adhesion, proliferation, and differentiation of cells has been widely studied for tissue engineering purposes. The fabrication methods used to improve the mechanical response of the scaffold produce complex and non-regular structures. Apart from the mechanical aspect, the fluid behavior in the inner part of the scaffold should also be considered. Parameters such as permeability (k) or wall shear stress (WSS) are important aspects in the provision of nutrients, the removal of metabolic waste products or the mechanically-induced differentiation of cells attached in the trabecular network of the scaffolds. Experimental measurements of these parameters are not available in all labs. However, fluid parameters should be known prior to other types of experiments. The present work compares an experimental study with a computational fluid dynamics (CFD) methodology to determine the related fluid parameters (k and WSS) of complex non-regular poly(L-lactic acid) scaffolds based only on the treatment of microphotographic images obtained with a microCT (μCT). The CFD analysis shows similar tendencies and results with low relative difference compared to those of the experimental study, for high flow rates. For low flow rates, the accuracy of the prediction is reduced.
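The two fluid parameters mentioned can be computed from standard relations. A minimal sketch assuming Darcy flow through the scaffold and Poiseuille flow in an idealized cylindrical pore, both simplifications of the actual image-based CFD analysis (all numerical inputs are hypothetical):

```python
import math

def permeability(Q, mu, L, A, dP):
    """Darcy's law: k = Q * mu * L / (A * dP).
    Q  volumetric flow rate [m^3/s], mu dynamic viscosity [Pa s],
    L  sample length [m], A cross-sectional area [m^2],
    dP pressure drop [Pa]; returns permeability k in m^2."""
    return Q * mu * L / (A * dP)

def wall_shear_stress(Q, mu, R):
    """Poiseuille flow in a cylindrical pore of radius R [m]:
    tau_w = 4 * mu * Q / (pi * R^3), in Pa."""
    return 4.0 * mu * Q / (math.pi * R**3)
```

In the actual scaffolds the pore network is irregular, which is precisely why the paper resorts to CFD on μCT images; these closed forms only bracket the expected orders of magnitude.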

  18. Hedgehog black holes and the Polyakov loop at strong coupling

    Science.gov (United States)

    Headrick, Matthew

    2008-05-01

    In N=4 super-Yang-Mills theory at large N, large λ, and finite temperature, the value of the Wilson-Maldacena loop wrapping the Euclidean time circle (the Polyakov-Maldacena loop, or PML) is computed by the area of a certain minimal surface in the dual supergravity background. This prescription can be used to calculate the free energy as a function of the PML (averaged over the spatial coordinates), by introducing into the bulk action a Lagrange multiplier term that fixes the (average) area of the appropriate minimal surface. This term, which can also be viewed as a chemical potential for the PML, contributes to the bulk stress tensor like a string stretching from the horizon to the boundary (smeared over the angular directions). We find the corresponding “hedgehog” black hole solutions numerically, within an SO(6)-preserving ansatz, and derive part of the free energy diagram for the PML. As a warm-up problem, we also find exact solutions for hedgehog black holes in pure gravity, and derive the free energy and phase diagrams for that system.

  19. A two-dimensional regularization algorithm for density profile evaluation from broadband reflectometry

    International Nuclear Information System (INIS)

    Nunes, F.; Varela, P.; Silva, A.; Manso, M.; Santos, J.; Nunes, I.; Serra, F.; Kurzan, B.; Suttrop, W.

    1997-01-01

Broadband reflectometry is a current technique that uses the round-trip group delays of reflected frequency-swept waves to measure density profiles of fusion plasmas. The main factor that may limit the accuracy of the reconstructed profiles is the interference of the probing waves with the plasma density fluctuations: plasma turbulence leads to random phase variations and magnetohydrodynamic activity produces mainly strong amplitude and phase modulations. Both effects cause the decrease, and eventually loss, of signal at some frequencies. Several data processing techniques can be applied to filter and/or interpolate noisy group delay data obtained from turbulent plasmas with a single frequency sweep. Here, we propose a more powerful algorithm performing two-dimensional regularization (in space and time) of data provided by multiple consecutive frequency sweeps, which leads to density profiles with improved accuracy. The new method is described and its application to simulated data corrupted by noise and missing data is considered. It is shown that the algorithm improves the identification of slowly varying plasma density perturbations by attenuating the effect of fast fluctuations and noise contained in experimental data. First results obtained with this method in ASDEX Upgrade tokamak are presented. copyright 1997 American Institute of Physics

  20. A two-dimensional regularization algorithm for density profile evaluation from broadband reflectometry

    Science.gov (United States)

    Nunes, F.; Varela, P.; Silva, A.; Manso, M.; Santos, J.; Nunes, I.; Serra, F.; Kurzan, B.; Suttrop, W.

    1997-01-01

Broadband reflectometry is a current technique that uses the round-trip group delays of reflected frequency-swept waves to measure density profiles of fusion plasmas. The main factor that may limit the accuracy of the reconstructed profiles is the interference of the probing waves with the plasma density fluctuations: plasma turbulence leads to random phase variations and magnetohydrodynamic activity produces mainly strong amplitude and phase modulations. Both effects cause the decrease, and eventually loss, of signal at some frequencies. Several data processing techniques can be applied to filter and/or interpolate noisy group delay data obtained from turbulent plasmas with a single frequency sweep. Here, we propose a more powerful algorithm performing two-dimensional regularization (in space and time) of data provided by multiple consecutive frequency sweeps, which leads to density profiles with improved accuracy. The new method is described and its application to simulated data corrupted by noise and missing data is considered. It is shown that the algorithm improves the identification of slowly varying plasma density perturbations by attenuating the effect of fast fluctuations and noise contained in experimental data. First results obtained with this method in ASDEX Upgrade tokamak are presented.

  1. Fabrication and characterization of one- and two-dimensional regular patterns produced employing multiple exposure holographic lithography

    DEFF Research Database (Denmark)

    Tamulevičius, S.; Jurkevičiute, A.; Armakavičius, N.

    2017-01-01

In this paper we describe fabrication and characterization methods of two-dimensional periodic microstructures in photoresist with a pitch of 1.2 μm and lattice constants of 1.2-4.8 μm, formed using a two-beam multiple exposure holographic lithography technique. The regular structures were recorded empl...

  2. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    Science.gov (United States)

    Yao, Bing; Yang, Hui

    2016-12-01

This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice, such as the Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
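The Tikhonov baselines named in the comparison can be sketched compactly. A minimal least-squares version (zero-order when L is the identity, first-order when L is a difference operator), with all inputs hypothetical:

```python
import numpy as np

def tikhonov(A, b, lam, L=None):
    """Solve x = argmin ||A x - b||^2 + lam^2 ||L x||^2 via the
    normal equations. L = identity gives zero-order Tikhonov;
    a difference matrix gives first-order Tikhonov."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)                 # zero-order regularization
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

def first_difference(n):
    """First-order difference operator (rows: x_{i+1} - x_i)."""
    return np.diff(np.eye(n), axis=0)
```

In the inverse ECG setting, A would be the transfer matrix from heart-surface potentials to body-surface measurements; regularization is what makes this severely ill-posed system solvable.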

  3. Shape and Symmetry Determine Two-Dimensional Melting Transitions of Hard Regular Polygons

    Science.gov (United States)

    Anderson, Joshua A.; Antonaglia, James; Millan, Jaime A.; Engel, Michael; Glotzer, Sharon C.

    2017-04-01

The melting transition of two-dimensional systems is a fundamental problem in condensed matter and statistical physics that has advanced significantly through the application of computational resources and algorithms. Two-dimensional systems present the opportunity for novel phases and phase transition scenarios not observed in 3D systems, but these phases depend sensitively on the system and, thus, predicting how any given 2D system will behave remains a challenge. Here, we report a comprehensive simulation study of the phase behavior near the melting transition of all hard regular polygons with 3 ≤ n ≤ 14 vertices using massively parallel Monte Carlo simulations of up to 1 × 10^6 particles. By investigating this family of shapes, we show that the melting transition depends upon both particle shape and symmetry considerations, which together can predict which of three different melting scenarios will occur for a given n. We show that systems of polygons with as few as seven edges behave like hard disks; they melt continuously from a solid to a hexatic fluid and then undergo a first-order transition from the hexatic phase to the isotropic fluid phase. We show that this behavior, which holds for all 7 ≤ n ≤ 14, arises from weak entropic forces among the particles. Strong directional entropic forces align polygons with fewer than seven edges and impose local order in the fluid. These forces can enhance or suppress the discontinuous character of the transition depending on whether the local order in the fluid is compatible with the local order in the solid. As a result, systems of triangles, squares, and hexagons exhibit a Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) predicted continuous transition between isotropic fluid and triatic, tetratic, and hexatic phases, respectively, and a continuous transition from the appropriate x-atic to the solid. In particular, we find that systems of hexagons display continuous two-step KTHNY melting. In contrast, due to

  4. Shape and Symmetry Determine Two-Dimensional Melting Transitions of Hard Regular Polygons

    Directory of Open Access Journals (Sweden)

    Joshua A. Anderson

    2017-04-01

    Full Text Available The melting transition of two-dimensional systems is a fundamental problem in condensed matter and statistical physics that has advanced significantly through the application of computational resources and algorithms. Two-dimensional systems present the opportunity for novel phases and phase transition scenarios not observed in 3D systems, but these phases depend sensitively on the system and, thus, predicting how any given 2D system will behave remains a challenge. Here, we report a comprehensive simulation study of the phase behavior near the melting transition of all hard regular polygons with 3≤n≤14 vertices using massively parallel Monte Carlo simulations of up to 1×10^{6} particles. By investigating this family of shapes, we show that the melting transition depends upon both particle shape and symmetry considerations, which together can predict which of three different melting scenarios will occur for a given n. We show that systems of polygons with as few as seven edges behave like hard disks; they melt continuously from a solid to a hexatic fluid and then undergo a first-order transition from the hexatic phase to the isotropic fluid phase. We show that this behavior, which holds for all 7≤n≤14, arises from weak entropic forces among the particles. Strong directional entropic forces align polygons with fewer than seven edges and impose local order in the fluid. These forces can enhance or suppress the discontinuous character of the transition depending on whether the local order in the fluid is compatible with the local order in the solid. As a result, systems of triangles, squares, and hexagons exhibit a Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) predicted continuous transition between isotropic fluid and triatic, tetratic, and hexatic phases, respectively, and a continuous transition from the appropriate x-atic to the solid. In particular, we find that systems of hexagons display continuous two-step KTHNY melting. In

  5. The effect of the Polyakov loop on the chiral phase transition

    Directory of Open Access Journals (Sweden)

    Szép Zs.

    2011-04-01

    Full Text Available The Polyakov loop is included in the SU(2)_L × SU(2)_R chiral quark-meson model by considering the propagation of the constituent quarks, coupled to the (σ, π) meson multiplet, on the homogeneous background of a temporal gauge field, diagonal in color space. The model is solved at finite temperature and quark baryon chemical potential both in the chiral limit and for the physical value of the pion mass by using an expansion in the number of flavors Nf. Keeping the fermion propagator at its tree level, a resummation on the pion propagator is constructed which resums infinitely many orders in 1/Nf, where O(1/Nf) represents the order at which the fermions start to contribute in the pion propagator. The influence of the Polyakov loop on the tricritical or the critical point in the µq – T phase diagram is studied for various forms of the Polyakov loop potential.
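For orientation, the Polyakov loop variable entering such quark-meson models is the traced thermal Wilson line. This is the textbook definition, not quoted from this record; A_4 denotes the temporal gauge field and β = 1/T:

```latex
\Phi(\vec{x}) \;=\; \frac{1}{N_c}\,\operatorname{Tr}_c\,
\mathcal{P}\exp\!\left( i g \int_0^{\beta} \mathrm{d}\tau\, A_4(\tau,\vec{x}) \right)
```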

  6. Three-dimensional quantitative microwave imaging from measured data with multiplicative smoothing and value picking regularization

    International Nuclear Information System (INIS)

    De Zaeytijd, Jürgen; Franchois, Ann

    2009-01-01

    This paper presents reconstructions of four targets from the 3D Fresnel database. The electromagnetic inverse scattering problem is treated as a nonlinear optimization problem for the complex permittivity in an investigation domain. The goal of this paper is to explore the achievable reconstruction quality when such a quantitative inverse scattering approach is employed on real-world measurements, using only single-frequency data. Two regularization techniques to reduce the ill-posedness of the inverse scattering problem are compared. The first one is a multiplicative smoothing regularization, applied directly to the cost function, which yields smoothed reconstructions of the homogeneous Fresnel targets without much experimentation to determine the regularization parameter. The second technique is the recently proposed value picking (VP) regularization which is particularly suited for the class of piecewise (quasi-)homogeneous targets, such as those of the Fresnel database. In contrast to edge-preserving regularization methods, VP regularization does not operate on the spatial distribution of permittivity values, but it clusters them around some reference values, the VP values, in the complex plane. These VP values are included in the cost function as auxiliary optimization variables and their number can be gradually increased using a stepwise relaxed VP regularization scheme. Both regularization strategies are incorporated in a Gauss–Newton minimization framework with line search. It is shown that the reconstruction quality using single-frequency Fresnel data is good when using multiplicative smoothing and even better when using the VP regularization. In particular, the completely blind reconstruction of the mystery target in the database provides us with a detailed quantitative image of a plausible object.

  7. Regular figures

    CERN Document Server

    Tóth, L Fejes; Ulam, S; Stark, M

    1964-01-01

    Regular Figures concerns the systematology and genetics of regular figures. The first part of the book deals with the classical theory of regular figures, including the description of plane ornaments, spherical arrangements, hyperbolic tessellations, polyhedra, and regular polytopes. Problems in the geometry of the sphere and of two-dimensional hyperbolic space are considered. The classical theory is explained as describing all possible symmetrical groupings in different spaces of constant curvature. The second part deals with the genetics of the regular figures and the inequalities fo

  8. Dimensional regularization and n-wave procedure for scalar fields in multi-dimensional quasi-euclidean spaces

    CERN Document Server

    Pavlov, Y V

    2001-01-01

    Expressions are derived for the vacuum expectation values of the energy-momentum tensor of a scalar field with arbitrary coupling to curvature in N-dimensional quasi-Euclidean space-time. The n-wave procedure is generalized to multidimensional spaces. All counterterms are calculated for N=5, and for a conformal scalar field for N=6, 7. The geometric structure of the first three counterterms is determined for N-dimensional spaces. All subtractions in 4-dimensional space-time, and the first three subtractions in multidimensional spaces, are shown to correspond to a renormalization of the constants of the bare and gravitational Lagrangians

  9. 't Hooft-Polyakov monopoles in an antiferromagnetic Bose-Einstein condensate

    NARCIS (Netherlands)

    Stoof, H.T.C.; Vliegen, E.; Al Khawaja, U.

    2001-01-01

    We show that an antiferromagnetic spin-1 Bose-Einstein condensate, which can for instance be created with ²³Na atoms in an optical trap, has not only singular line-like vortex excitations, but also allows for singular point-like topological excitations, i.e., 't Hooft-Polyakov monopoles. We discuss

  10. A power series solution of the 't Hooft-Polyakov monopole

    International Nuclear Information System (INIS)

    Schaposnik, F.A.

    1976-01-01

    The purpose of this note is to show that, with the aid of conservation laws, one can easily decouple the equations of motion corresponding to the magnetic-monopole solution of 't Hooft-Polyakov, thus obtaining analytical solutions in the form of series expansions and asymptotic behaviours of the vector and scalar fields

  11. Network-based regularization for high dimensional SNP data in the case-control study of Type 2 diabetes.

    Science.gov (United States)

    Ren, Jie; He, Tao; Li, Ye; Liu, Sai; Du, Yinhao; Jiang, Yu; Wu, Cen

    2017-05-16

    Over the past decades, the prevalence of type 2 diabetes mellitus (T2D) has been steadily increasing around the world. Despite large efforts devoted to better understanding the genetic basis of the disease, the identified susceptibility loci can only account for a small portion of the T2D heritability. Some of the existing approaches proposed for the high-dimensional genetic data from T2D case-control studies are limited by analyzing only a small number of SNPs at a time from a large pool, by ignoring the correlations among SNPs, and by adopting inefficient selection techniques. We propose a network-constrained regularization method to select important SNPs by taking linkage disequilibrium into account. To accommodate the case-control study, an iteratively reweighted least squares algorithm has been developed within the coordinate descent framework, where optimization of the regularized logistic loss function is performed with respect to one parameter at a time, iteratively cycling through all the parameters until convergence. In this article, a novel approach is developed to identify important SNPs more effectively through incorporating the interconnections among them in the regularized selection. A coordinate descent based iteratively reweighted least squares (IRLS) algorithm has been proposed. Both the simulation study and the analysis of the Nurses' Health Study, a case-control study of type 2 diabetes with high-dimensional SNP measurements, demonstrate the advantage of the network-based approach over the competing alternatives.
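To make the algorithmic recipe concrete, here is a minimal sketch of IRLS plus cyclic coordinate descent for a logistic loss with a network (graph-Laplacian) penalty. All names (`network_logistic_irls`, the toy Laplacian `Lap`) are hypothetical, and the quadratic penalty β'Lβ is one common way to encode SNP-SNP linkage, not necessarily the exact estimator of the paper:

```python
import numpy as np

def network_logistic_irls(X, y, L, lam=0.1, n_outer=25, n_inner=50, tol=1e-8):
    """Minimize -loglik(beta) + (lam/2) * beta^T L beta, with L a graph
    Laplacian encoding linkage among predictors (hypothetical interface).
    Outer loop: IRLS quadratic approximation of the logistic loss.
    Inner loop: cyclic coordinate descent on the penalized weighted
    least-squares subproblem."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_outer):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))           # fitted probabilities
        w = np.clip(mu * (1.0 - mu), 1e-6, None)  # IRLS weights
        z = eta + (y - mu) / w                    # working response
        beta_old = beta.copy()
        for _ in range(n_inner):
            for j in range(p):
                # partial residual excluding coordinate j
                r_j = z - X @ beta + X[:, j] * beta[j]
                num = np.sum(w * X[:, j] * r_j) - lam * (L[j] @ beta - L[j, j] * beta[j])
                den = np.sum(w * X[:, j] ** 2) + lam * L[j, j]
                beta[j] = num / den
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta

# Toy example: two perfectly correlated "SNPs" tied by one network edge.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
X[:, 1] = X[:, 0]                       # linkage: predictors 0 and 1 identical
y = (X[:, 0] + 0.5 * rng.standard_normal(200) > 0).astype(float)
# Laplacian of the graph with the single edge (0, 1)
Lap = np.array([[1., -1., 0.], [-1., 1., 0.], [0., 0., 0.]])
beta = network_logistic_irls(X, y, Lap, lam=1.0)
print(beta)
```

Because columns 0 and 1 are identical, the Laplacian penalty drives their coefficients together, illustrating how linkage information stabilizes the selection.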

  12. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran

    2016-04-10

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and a sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  13. Arbitrary parameters in implicit regularization and democracy within perturbative description of 2-dimensional gravitational anomalies

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Leonardo A.M. [Federal University of Minas Gerais, Physics Department, ICEx, P.O. Box 702, 30.161-970, Belo Horizonte MG (Brazil)]. E-mail: lamsouza@fisica.ufmg.br; Sampaio, Marcos [Federal University of Minas Gerais, Physics Department, ICEx, P.O. Box 702, 30.161-970, Belo Horizonte MG (Brazil)]. E-mail: msampaio@fisica.ufmg.br; Nemes, M.C. [Federal University of Minas Gerais, Physics Department, ICEx, P.O. Box 702, 30.161-970, Belo Horizonte MG (Brazil)]. E-mail: carolina@fisica.ufmg.br

    2006-01-26

    We show that the Implicit Regularization Technique is useful to display quantum symmetry breaking in a complete regularization independent fashion. Arbitrary parameters are expressed by finite differences between integrals of the same superficial degree of divergence whose value is fixed on physical grounds (symmetry requirements or phenomenology). We study Weyl fermions on a classical gravitational background in two dimensions and show that, assuming Lorentz symmetry, the Weyl and Einstein Ward identities reduce to a set of algebraic equations for the arbitrary parameters which allows us to study the Ward identities on equal footing. We conclude in a renormalization independent way that the axial part of the Einstein Ward identity is always violated. Moreover whereas we can preserve the pure tensor part of the Einstein Ward identity at the expense of violating the Weyl Ward identities we may as well violate the former and preserve the latter.

  14. A closed expression for the UV-divergent parts of one-loop tensor integrals in dimensional regularization

    Science.gov (United States)

    Sulyok, G.

    2017-07-01

    Starting from the general definition of a one-loop tensor N-point function, we use its Feynman parametrization to calculate the ultraviolet (UV-)divergent part of an arbitrary tensor coefficient in the framework of dimensional regularization. In contrast to existing recursion schemes, we are able to present a general analytic result in closed form that enables direct determination of the UV-divergent part of any one-loop tensor N-point coefficient independently of the UV-divergent parts of other one-loop tensor N-point coefficients. Simplified formulas and explicit expressions are presented for A-, B-, C-, D-, E-, and F-functions.
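For reference, the UV-divergent parts of the lowest-rank coefficients that such a closed expression must reproduce are standard textbook one-loop results (Passarino–Veltman conventions, D = 4 − 2ε, with Δ_ε = 1/ε − γ_E + ln 4π; quoted from the standard literature, not from this paper):

```latex
A_0(m)\big|_{\mathrm{UV}} = m^2\,\Delta_\varepsilon,\qquad
B_0\big|_{\mathrm{UV}} = \Delta_\varepsilon,\qquad
B_1\big|_{\mathrm{UV}} = -\tfrac{1}{2}\Delta_\varepsilon,\qquad
B_{11}\big|_{\mathrm{UV}} = \tfrac{1}{3}\Delta_\varepsilon,\qquad
C_{00}\big|_{\mathrm{UV}} = \tfrac{1}{4}\Delta_\varepsilon .
```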

  15. One-loop counterterms for the dimensional regularization of arbitrary Lagrangians

    OpenAIRE

    Pronin, P.; Stepanyantz, K.

    1996-01-01

    We present master formulas for the divergent part of the one-loop effective action for arbitrary (both minimal and nonminimal) operators of any order in 4-dimensional curved space. They can be considered as computer algorithms, because the one-loop calculations are then reduced to the simplest algebraic operations. Some test applications, carried out with the REDUCE analytical calculation system, are considered.

  16. Renormalized Polyakov loop in the deconfined phase of SU(N) gauge theory and gauge-string duality.

    Science.gov (United States)

    Andreev, Oleg

    2009-05-29

    We use gauge-string duality to analytically evaluate the renormalized Polyakov loop in pure Yang-Mills theories. For SU(3), the result is in quite good agreement with lattice simulations for a broad temperature range.

  17. Polyakov loop and spin correlators on finite lattices. A study beyond the mass gap

    International Nuclear Information System (INIS)

    Engels, J.; Neuhaus, T.

    1995-01-01

    We derive an analytic expression for point-to-point correlation functions of the Polyakov loop based on the transfer matrix formalism. For the 2D Ising model we show that the results deduced from point-point spin correlators are coinciding with those from zero momentum correlators. We investigate the contributions from eigenvalues of the transfer matrix beyond the mass gap and discuss the limitations and possibilities of such an analysis. The finite size behaviour of the obtained 2D Ising model matrix elements is examined. The point-to-point correlator formula is then applied to Polyakov loop data in finite temperature SU(2) gauge theory. The leading matrix element shows all expected scaling properties. Just above the critical point we find a Debye screening mass μ_D/T ∼ 4, independent of the volume. ((orig.))

  18. Transport coefficients in the Polyakov quark meson coupling model: A relaxation time approximation

    Science.gov (United States)

    Abhishek, Aman; Mishra, Hiranmaya; Ghosh, Sabyasachi

    2018-01-01

    We compute the transport coefficients, namely, the coefficients of shear and bulk viscosities, as well as thermal conductivity for hot and dense matter. The calculations are performed within the Polyakov quark meson model. The estimation of the transport coefficients is made using the Boltzmann kinetic equation within the relaxation time approximation. The energy-dependent relaxation time is estimated from meson-meson scattering, quark-meson scattering, and quark-quark scattering within the model. In our calculations, the shear viscosity to entropy ratio and the coefficient of thermal conductivity show a minimum at the critical temperature, while the ratio of bulk viscosity to entropy density exhibits a peak at this transition point. The effect of confinement modeled through a Polyakov loop potential plays an important role both below and above the critical temperature.
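Relaxation-time estimates of this kind are typically based on the standard kinetic-theory expression — shown here schematically for the shear viscosity, with Bose/Fermi sign as appropriate; this is the generic RTA formula, not a result specific to this paper:

```latex
\eta \;=\; \frac{1}{15\,T}\sum_a g_a \int \frac{\mathrm{d}^3 p}{(2\pi)^3}\,
\tau_a(E_a)\,\frac{p^4}{E_a^2}\, f_a^0\!\left(1 \pm f_a^0\right),
```

where τ_a(E_a) is the energy-dependent relaxation time, g_a the degeneracy, and f_a^0 the equilibrium distribution of species a.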

  19. Systematic Dimensionality Reduction for Quantum Walks: Optimal Spatial Search and Transport on Non-Regular Graphs

    Science.gov (United States)

    Novo, Leonardo; Chakraborty, Shantanav; Mohseni, Masoud; Neven, Hartmut; Omar, Yasser

    2015-09-01

    Continuous time quantum walks provide an important framework for designing new algorithms and modelling quantum transport and state transfer problems. Often, the graph representing the structure of a problem contains certain symmetries that confine the dynamics to a smaller subspace of the full Hilbert space. In this work, we use invariant subspace methods, that can be computed systematically using the Lanczos algorithm, to obtain the reduced set of states that encompass the dynamics of the problem at hand without the specific knowledge of underlying symmetries. First, we apply this method to obtain new instances of graphs where the spatial quantum search algorithm is optimal: complete graphs with broken links and complete bipartite graphs, in particular, the star graph. These examples show that regularity and high-connectivity are not needed to achieve optimal spatial search. We also show that this method considerably simplifies the calculation of quantum transport efficiencies. Furthermore, we observe improved efficiencies by removing a few links from highly symmetric graphs. Finally, we show that this reduction method also allows us to obtain an upper bound for the fidelity of a single qubit transfer on an XY spin network.
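The invariant-subspace idea can be sketched with a plain Lanczos recursion: starting from the walker's initial state, the recursion closes as soon as an invariant subspace is found, exposing the reduced dimensionality without prior knowledge of the symmetry. The function name and the star-graph example are illustrative, not the paper's code:

```python
import numpy as np

def lanczos_reduce(H, v0, tol=1e-10, max_dim=None):
    """Build the Krylov subspace containing v0 under H via the Lanczos
    recursion; return the reduced tridiagonal Hamiltonian T and the
    orthonormal basis. The recursion stops when the next off-diagonal
    element vanishes, i.e. when an invariant subspace has closed."""
    n = H.shape[0]
    max_dim = max_dim or n
    v = v0 / np.linalg.norm(v0)
    basis = [v]
    alphas, betas = [], []
    w = H @ v
    alphas.append(v @ w)
    w = w - alphas[-1] * v
    for _ in range(max_dim - 1):
        b = np.linalg.norm(w)
        if b < tol:
            break                      # invariant subspace found
        betas.append(b)
        v_new = w / b
        basis.append(v_new)
        w = H @ v_new - b * basis[-2]
        alphas.append(v_new @ w)
        w = w - alphas[-1] * v_new
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return T, np.array(basis)

# Star graph with 1 hub and 99 leaves: adjacency matrix as Hamiltonian
N = 100
H = np.zeros((N, N))
H[0, 1:] = H[1:, 0] = 1.0
v0 = np.zeros(N); v0[0] = 1.0          # walker starts on the hub
T, basis = lanczos_reduce(H, v0)
print(T.shape)
```

For the star graph, the dynamics of a walker launched on the hub lives entirely in the two-dimensional span of the hub state and the uniform superposition of the leaves, regardless of the number of vertices N.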


  1. Critical dimension of bosonic string theory and zeta-function regularization

    International Nuclear Information System (INIS)

    Vanzo, L.; Zerbini, S.; Istituto Nazionale di Fisica Nucleare, Povo

    1988-01-01

    A derivation of the critical dimension of the Polyakov bosonic string is presented. It is based on the use of the anholonomic formalism, a ghost-anti-ghost symmetric action, zeta-function regularization and the Seeley method of pseudo-differential operators. (orig.)

  2. Automated three-dimensional morphology-based clustering of human erythrocytes with regular shapes: stomatocytes, discocytes, and echinocytes

    Science.gov (United States)

    Ahmadzadeh, Ezat; Jaferzadeh, Keyvan; Lee, Jieun; Moon, Inkyu

    2017-07-01

    We present unsupervised clustering methods for the automatic grouping of human red blood cells (RBCs), extracted from RBC quantitative phase images obtained by digital holographic microscopy, into three RBC clusters with regular shapes, including biconcave, stomatocyte, and sphero-echinocyte. We select features related to the RBC profile and morphology, such as RBC average thickness, sphericity coefficient, and mean corpuscular volume, and apply clustering methods, including density-based spatial clustering of applications with noise (DBSCAN), k-medoids, and k-means, to the set of morphological features. The clustering results of RBCs using a set of three-dimensional features are compared against a set of two-dimensional features. Our experimental results indicate that by utilizing the introduced set of features, two groups of biconcave RBCs and old RBCs (suffering from the sphero-echinocyte process) can be perfectly clustered. In addition, by increasing the number of clusters, the three RBC types can be effectively clustered in an automated unsupervised manner with high accuracy. The performance evaluation of the clustering techniques reveals that they can assist hematologists in further diagnosis.
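As a sketch of the unsupervised pipeline, here is plain k-means (Lloyd's algorithm) applied to a toy three-feature table (average thickness, sphericity, mean corpuscular volume). The feature values are invented for illustration and do not come from the paper's data:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain k-means (Lloyd's algorithm) on a feature matrix X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assign each sample to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy morphological features (avg thickness [um], sphericity, MCV [fL]);
# the numbers are illustrative, not measured values.
rng = np.random.default_rng(3)
disco = rng.normal([2.0, 0.5, 90.0], [0.1, 0.03, 4.0], (50, 3))
stoma = rng.normal([2.8, 0.7, 95.0], [0.1, 0.03, 4.0], (50, 3))
echino = rng.normal([3.4, 0.9, 80.0], [0.1, 0.03, 4.0], (50, 3))
X = np.vstack([disco, stoma, echino])
X = (X - X.mean(0)) / X.std(0)          # standardize features before clustering
labels, _ = kmeans(X, k=3)
print(len(set(labels.tolist())))
```

Standardizing the features first matters here, since thickness, sphericity, and MCV live on very different scales.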

  3. Regularity criterion for solutions of the three-dimensional Cahn-Hilliard-Navier-Stokes equations and associated computations.

    Science.gov (United States)

    Gibbon, John D; Pal, Nairita; Gupta, Anupam; Pandit, Rahul

    2016-12-01

    We consider the three-dimensional (3D) Cahn-Hilliard equations coupled to, and driven by, the forced, incompressible 3D Navier-Stokes equations. The combination, known as the Cahn-Hilliard-Navier-Stokes (CHNS) equations, is used in statistical mechanics to model the motion of a binary fluid. The potential development of singularities (blow-up) in the contours of the order parameter ϕ is an open problem. To address this we have proved a theorem that closely mimics the Beale-Kato-Majda theorem for the 3D incompressible Euler equations [J. T. Beale, T. Kato, and A. J. Majda, Commun. Math. Phys. 94, 61 (1984)]. By taking an L^∞ norm of the energy of the full binary system, designated as E_∞, we have shown that ∫_0^t E_∞(τ) dτ governs the regularity of solutions of the full 3D system. Our direct numerical simulations (DNSs) of the 3D CHNS equations for (a) a gravity-driven Rayleigh–Taylor instability and (b) a constant-energy-injection forcing, with 128^3 to 512^3 collocation points and over the duration of our DNSs, confirm that E_∞ remains bounded as far as our computations allow.

  4. Crossing the dividing surface of transition state theory. IV. Dynamical regularity and dimensionality reduction as key features of reactive trajectories.

    Science.gov (United States)

    Lorquet, J C

    2017-04-07

    higher energies, these characteristics persist, but to a lesser degree. Recrossings of the dividing surface then become much more frequent and the phase space volumes of initial conditions that generate recrossing-free trajectories decrease. Altogether, one ends up with an additional illustration of the concept of reactive cylinder (or conduit) in phase space that reactive trajectories must follow. Reactivity is associated with dynamical regularity and dimensionality reduction, whatever the shape of the potential energy surface, no matter how strong its anharmonicity, and whatever the curvature of its reaction path. Both simplifying features persist during the entire reactive process, up to complete separation of fragments. The ergodicity assumption commonly assumed in statistical theories is inappropriate for reactive trajectories.


  6. The NSVZ scheme for N=1 SQED with Nf flavors, regularized by the dimensional reduction, in the three-loop approximation

    Directory of Open Access Journals (Sweden)

    S.S. Aleshin

    2017-01-01

    Full Text Available At the three-loop level we analyze how the NSVZ relation appears for N=1 SQED regularized by dimensional reduction. This is done by a method analogous to the one which was earlier used for theories regularized by higher derivatives. Within the dimensional technique, the loop integrals cannot be written as integrals of double total derivatives. However, similar structures can be written in the considered approximation and are taken as a starting point. Then we demonstrate that, unlike the higher derivative regularization, the NSVZ relation is not valid for the renormalization group functions defined in terms of the bare coupling constant. However, for the renormalization group functions defined in terms of the renormalized coupling constant, it is possible to impose boundary conditions on the renormalization constants giving the NSVZ scheme in the three-loop order. They are similar to the all-loop ones defining the NSVZ scheme obtained with the higher derivative regularization, but are more complicated. The NSVZ schemes constructed with dimensional reduction and with the higher derivative regularization are related by a finite renormalization in the considered approximation.
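For context, the NSVZ relation in question ties the exact β function of N=1 SQED with N_f flavors to the anomalous dimension γ of the matter superfields:

```latex
\beta(\alpha) \;=\; \frac{\alpha^{2} N_f}{\pi}\,\bigl[\,1-\gamma(\alpha)\,\bigr].
```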

  7. The existence and regularity of time-periodic solutions to the three-dimensional Navier–Stokes equations in the whole space

    International Nuclear Information System (INIS)

    Kyed, Mads

    2014-01-01

    The existence, uniqueness and regularity of time-periodic solutions to the Navier–Stokes equations in the three-dimensional whole space are investigated. We consider the Navier–Stokes equations with a non-zero drift term corresponding to the physical model of a fluid flow around a body that moves with a non-zero constant velocity. The existence of a strong time-periodic solution is shown for small time-periodic data. It is further shown that this solution is unique in a large class of weak solutions that can be considered physically reasonable. Finally, we establish regularity properties for any strong solution regardless of its size. (paper)

  8. Finite temperature and the Polyakov loop in the covariant variational approach to Yang-Mills Theory

    Directory of Open Access Journals (Sweden)

    Quandt Markus

    2017-01-01

    Full Text Available We extend the covariant variational approach for Yang-Mills theory in Landau gauge to non-zero temperatures. Numerical solutions for the thermal propagators are presented and compared to high-precision lattice data. To study the deconfinement phase transition, we adapt the formalism to background gauge and compute the effective action of the Polyakov loop for the colour groups SU(2) and SU(3). Using the zero-temperature propagators as input, all parameters are fixed at T = 0 and we find a clear signal for a deconfinement phase transition at finite temperatures, which is second order for SU(2) and first order for SU(3). The critical temperatures obtained are in reasonable agreement with lattice data.

  9. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    International Nuclear Information System (INIS)

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T; Cooper, Benjamin J; Keall, Paul J; Kuncic, Zdenka

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp–Davis–Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating the general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and
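The TV-minimization half of an ASD-POCS-style iteration can be sketched as gradient descent on a smoothed total-variation seminorm of the image. The data-fidelity projection step and AAIR's anatomy-segmentation weighting are omitted here, and all names are illustrative:

```python
import numpy as np

def tv_gradient_descent(img, n_steps=100, step=0.1, eps=1e-8):
    """Descend the (smoothed) total-variation seminorm of a 2-D image.
    This is only the TV step of an ASD-POCS-style iteration; the
    data-fidelity projection is omitted in this sketch."""
    x = img.copy()
    for _ in range(n_steps):
        dx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)          # smoothed gradient norm
        px, py = dx / mag, dy / mag
        # divergence of (px, py) = negative TV gradient (Neumann boundary)
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        x = x + step * div
    return x

# Noisy piecewise-constant phantom: TV descent suppresses the noise
rng = np.random.default_rng(7)
phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
noisy = phantom + 0.2 * rng.standard_normal((64, 64))
smooth = tv_gradient_descent(noisy, n_steps=50, step=0.05)
print(np.std(smooth - phantom), np.std(noisy - phantom))
```

On a piecewise-constant phantom the TV step removes noise while largely preserving the edges, which is exactly the behavior AAIR's segmentation weighting then modulates near anatomical structures.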

  10. Maximal Sobolev regularity for solutions of elliptic equations in infinite dimensional Banach spaces endowed with a weighted Gaussian measure

    Science.gov (United States)

    Cappa, G.; Ferrari, S.

    2016-12-01

    Let X be a separable Banach space endowed with a non-degenerate centered Gaussian measure μ. The associated Cameron-Martin space is denoted by H. Let ν = e^{-U} μ, where U : X → R is a sufficiently regular convex and continuous function. In this paper we are interested in the W^{2,2} regularity of the weak solutions of elliptic equations of the type

  11. Topological Symmetry, Spin Liquids and CFT Duals of Polyakov Model with Massless Fermions

    Energy Technology Data Exchange (ETDEWEB)

    Unsal, Mithat

    2008-04-30

    We prove the absence of a mass gap and confinement in the Polyakov model with massless complex fermions in any representation of the gauge group. A U(1)_* topological shift symmetry protects the masslessness of one dual photon. This symmetry emerges in the IR as a consequence of the Callias index theorem and abelian duality. For matter in the fundamental representation, the infrared limits of this class of theories interpolate between weakly and strongly coupled conformal field theory (CFT) depending on the number of flavors, and provide an infinite class of CFTs in d = 3 dimensions. The long distance physics of the model is the same as that of certain stable spin liquids. Altering the topology of the adjoint Higgs field by turning it into a compact scalar does not change the long distance dynamics in perturbation theory; however, non-perturbative effects lead to a mass gap for the gauge fluctuations. This provides conceptual clarity to many subtle issues about compact QED_3 discussed in the context of quantum magnets, spin liquids and phase fluctuation models in cuprate superconductors. These constructions also provide new insights into zero temperature gauge theory dynamics on R^{2,1} and R^{2,1} × S^1. The confined versus deconfined long distance dynamics is characterized by a discrete versus continuous topological symmetry.

  12. Entropy-based viscous regularization for the multi-dimensional Euler equations in low-Mach and transonic flows

    Energy Technology Data Exchange (ETDEWEB)

    Marc O Delchini; Jean E. Ragusa; Ray A. Berry

    2015-07-01

    We present a new version of the entropy viscosity method, a viscous regularization technique for hyperbolic conservation laws, that is well-suited for low-Mach flows. By means of a low-Mach asymptotic study, new expressions for the entropy viscosity coefficients are derived. These definitions are valid for a wide range of Mach numbers, from subsonic flows (with very low Mach numbers) to supersonic flows, and no longer depend on an analytical expression for the entropy function. In addition, the entropy viscosity method is extended to Euler equations with variable area for nozzle flow problems. The effectiveness of the method is demonstrated using various 1-D and 2-D benchmark tests: flow in a converging–diverging nozzle; Leblanc shock tube; slow moving shock; strong shock for liquid phase; low-Mach flows around a cylinder and over a circular hump; and supersonic flow in a compression corner. Convergence studies are performed for smooth solutions and solutions with shocks present.

  13. Properties of the twisted Polyakov loop coupling and the infrared fixed point in the SU(3) gauge theories

    Science.gov (United States)

    Itou, Etsuko

    2013-08-01

    We report the nonperturbative behavior of the twisted Polyakov loop (TPL) coupling constant for the SU(3) gauge theories, defined by the ratio of Polyakov loop correlators in finite volume with twisted boundary conditions. We reveal the vacuum structures and the phase structure for the lattice gauge theory with the twisted boundary condition. Carrying out numerical simulations, we determine the nonperturbative running coupling constant in this renormalization scheme for quenched QCD and the N_f=12 SU(3) gauge theory. First, we study quenched QCD using the plaquette gauge action. The TPL coupling constant has a fake fixed point in the confinement phase. We discuss this fake fixed point of the TPL scheme and obtain the nonperturbative running coupling constant in the deconfinement phase, where the magnitude of the Polyakov loop takes nonzero values. We also investigate the system coupled to fundamental fermions. Since we use the naive staggered fermion with the twisted boundary condition in our simulation, only multiples of 12 are allowed for the number of flavors. According to the perturbative two-loop analysis, the N_f=12 SU(3) gauge theory might have a conformal fixed point in the infrared region. However, recent lattice studies report conflicting results on the existence of the fixed point. We point out possible problems in previous work, and present our careful study. Finally, we find the infrared fixed point (IRFP) and discuss the robustness of the nontrivial IRFP of a many-flavor system under changes of the analysis method. Some preliminary results were reported in the proceedings [E. Bilgici et al., PoS(Lattice 2009), 063 (2009); Itou et al., PoS(Lattice 2010), 054 (2010)] and the letter paper [T. Aoyama et al., arXiv:1109.5806 [hep-lat]]

  14. Wilson-Polyakov loops for critical strings and superstrings at finite temperature

    International Nuclear Information System (INIS)

    Green, M.B.

    1992-01-01

    An open string with end-points fixed at spatial separation L is a string theory analogue of the static quark-antiquark system in quenched QCD. Following a review of the quantum mechanics of this system in critical bosonic string theory, the partition function at finite β (the inverse temperature) for fixed end-point open strings is discussed. This is related by a conformal transformation ('world-sheet duality') to the correlation function of two closed strings fixed at distinct spatial points (a string theory analogue of two Wilson-Polyakov loops). Temperature duality (β → β' = 4π²/β) relates this correlation function, in turn, to the finite-temperature Green function for a closed string propagating between initial and final states that are at distinct (Euclidean) space-time points. In addition, spatial duality relates the fixed end-point open string to the familiar open string with free end-points. A generalization to fixed end-point superstrings is suggested, in which the superalgebra may be viewed as the spatial dual of the usual open-string superalgebra. At zero temperature world-sheet duality relates the partition function of supersymmetric fixed end-point open strings to the correlation function of point-like closed-string states. These couple to combinations of the scalar and pseudoscalar states of a type-2b superstring superfield. At finite temperature supersymmetry is broken and this correlation function involves the propagation of non-supersymmetric states with non-zero winding numbers (which formally include a tachyon at temperatures above the Hagedorn transition). Temperature duality again relates the partition function to the finite-temperature Green function describing the propagator for point-like closed-string states of the dual theory, in which supersymmetry is broken. The singularity that arises in the critical bosonic theory as L is reduced below L = 2π√α' is absent in the superstring and the static potential is well defined for all

  15. Regular icosahedron

    OpenAIRE

    Mihelak, Veronika

    2016-01-01

    This work collects properties of the regular icosahedron which are useful for students of mathematics or mathematics teachers who can prepare exercises for talented students in elementary or middle school. The initial section describes the basic properties of the regular polyhedra: tetrahedron, cube, dodecahedron, octahedron and, of course, icosahedron. We have proven that there are only five regular (Platonic) solids and have verified Euler's polyhedron formula for them. Then we focused on selected p...
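As a companion to the Euler's-formula claim in this record, the check V − E + F = 2 can be run mechanically for all five Platonic solids; a minimal sketch (the vertex/edge/face counts are standard textbook values, not taken from the thesis):

```python
# Euler's polyhedron formula V - E + F = 2, checked for the five Platonic solids.
# (V, E, F) counts are the standard textbook values.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (v, e, f) in solids.items():
    print(f"{name:12s} V - E + F = {v} - {e} + {f} = {v - e + f}")
    assert v - e + f == 2
```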

  16. Maximal γ-regularity

    NARCIS (Netherlands)

    Van Neerven, J.M.A.M.; Veraar, M.C.; Weis, L.

    2015-01-01

    In this paper, we prove maximal regularity estimates in “square function spaces” which are commonly used in harmonic analysis, spectral theory, and stochastic analysis. In particular, they lead to a new class of maximal regularity results for both deterministic and stochastic equations in L^p

  17. A Three-Dimensional Finite Element Analysis of the Stress Distribution Generated by Splinted and Nonsplinted Prostheses in the Rehabilitation of Various Bony Ridges with Regular or Short Morse Taper Implants.

    Science.gov (United States)

    Toniollo, Marcelo Bighetti; Macedo, Ana Paula; Rodrigues, Renata Cristina; Ribeiro, Ricardo Faria; de Mattos, Maria G

    The aim of this study was to compare the biomechanical performance of splinted or nonsplinted prostheses over short- or regular-length Morse taper implants (5 mm and 11 mm, respectively) in the posterior area of the mandible using finite element analysis. Three-dimensional geometric models of regular implants (Ø 4 × 11 mm) and short implants (Ø 4 × 5 mm) were placed into a simulated model of the left posterior mandible that included the first premolar tooth; all teeth posterior to this tooth had been removed. The four experimental groups were as follows: regular group SP (three regular implants were rehabilitated with splinted prostheses), regular group NSP (three regular implants were rehabilitated with nonsplinted prostheses), short group SP (three short implants were rehabilitated with splinted prostheses), and short group NSP (three short implants were rehabilitated with nonsplinted prostheses). Oblique forces were simulated in molars (365 N) and premolars (200 N). Qualitative and quantitative analyses of the minimum principal stress in bone were performed using ANSYS Workbench software, version 10.0. The use of splinting in the short group reduced the stress to the bone surrounding the implants and tooth. The use of NSP or SP in the regular group resulted in similar stresses. The best indication when there are short implants is to use SP. Use of NSP is feasible only when regular implants are present.

  18. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...

  19. Perturbative Noncommutative Regularization

    CERN Document Server

    Hawkins, E J

    1999-01-01

    I propose a nonperturbative regularization of quantum field theories with contact interactions (primarily, scalar field theories). This is given by the geometric quantization of compact Kähler manifolds and generalizes what has already been proposed by Madore, Grosse, Klimčík, and Prešnajder for the two-sphere. I discuss the perturbation theory derived from this regularized model and propose an approximation technique for evaluating the Feynman diagrams. This amounts to a momentum cutoff combined with phase factors at vertices. To illustrate the exact and approximate calculations, I present, as examples, the simplest diagrams for the λφ⁴ model on the spaces S², S²×S², and CP². This regularization fails for noncompact spaces. I give a brief dimensional analysis argument as to why this is so. I also discuss the relevance of the topology of Feynman diagrams to their ultra-violet and infra-red divergence behavior in this model.

  20. Dimensionality

    International Nuclear Information System (INIS)

    Barrow, J.D.

    1983-01-01

    The role played by the dimensions of space and space-time in determining the form of various physical laws and constants of Nature is examined. Low dimensional manifolds are also seen to possess special mathematical properties. The concept of fractal dimension is introduced and the recent renaissance of Kaluza-Klein theories obtained by dimensional reduction from higher dimensional gravity or supergravity theories is discussed. A formulation of the anthropic principle is suggested. (author)

  1. Constraints and hidden symmetry in two-dimensional gravity

    Energy Technology Data Exchange (ETDEWEB)

    Barcelos-Neto, J. (Instituto de Fisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro, Rio de Janeiro 21945-970 (Brazil))

    1994-01-15

    We study the hidden symmetry of Polyakov two-dimensional gravity by means of first-class constraints. These are obtained from the combination of Fourier mode expansions of the usual (second-class) constraints of the theory. We show that, more than the usual SL(2,[ital R]), there is a hidden Virasoro symmetry in the theory. The results of the above analysis are also confirmed from the point of view of a geometrical symplectic treatment.

  2. Three-Dimensional Finite Element Analysis Surface Stress Distribution on Regular and Short Morse Taper Implants Generated by Splinted and Nonsplinted Prostheses in the Rehabilitation of Various Bony Ridges.

    Science.gov (United States)

    Toniollo, Marcelo Bighetti; Macedo, Ana Paula; Pupim, Denise; Zaparolli, Danilo; de Mattos, Maria da Gloria Chiarello

    2016-05-01

    This study used finite element analysis to compare the biomechanical performance of splinted (SP) and nonsplinted (NSP) prostheses to regular and short length Morse taper implants in the posterior side of the mandible. The authors used 3-dimensional geometric models of regular implants (∅4 × 11 mm) and short implants (∅4 × 5 mm) housed in the corresponding bone edges of the posterior left mandibular hemiarch involving tooth 34. The 8 experimental groups were: the control group SP (3 regular implants rehabilitated with SP), group 1SP (2 regular and 1 short implants rehabilitated with SP), group 2SP (1 regular and 2 short implants rehabilitated with SP), group 3SP (3 short implants rehabilitated with SP), the control group NSP (3 regular implants rehabilitated with NSP), group 1NSP (2 regular and 1 short implants rehabilitated with NSP), group 2NSP (1 regular and 2 short implants rehabilitated with NSP), and group 3NSP (3 short implants rehabilitated with NSP). Oblique forces were simulated in the molars (365 N) and premolars (200 N). Qualitative and quantitative analysis of the distribution of von Mises equivalent stress (implants, components, and infrastructure) was performed using the Ansys Workbench 10.0 software. The results showed that the use of SP provides several advantages and benefits, reducing the stresses placed on the implant surface, on the transmucosal abutment areas and on the interior region of the infrastructure. The use of NSP was advantageous in reducing the stresses on the abutments and in the distal interproximal area of connection between the crowns.

  3. Bose-Einstein condensation in chains with power-law hoppings: Exact mapping on the critical behavior in d -dimensional regular lattices

    Science.gov (United States)

    Dias, W. S.; Bertrand, D.; Lyra, M. L.

    2017-06-01

    Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, a dangerous irrelevant scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d > 4.

  4. Bose-Einstein condensation in chains with power-law hoppings: Exact mapping on the critical behavior in d-dimensional regular lattices.

    Science.gov (United States)

    Dias, W S; Bertrand, D; Lyra, M L

    2017-06-01

    Recent experimental progress on the realization of quantum systems with highly controllable long-range interactions has impelled the study of quantum phase transitions in low-dimensional systems with power-law couplings. Long-range couplings mimic higher-dimensional effects in several physical contexts. Here, we provide the exact relation between the spectral dimension d at the band bottom and the exponent α that tunes the range of power-law hoppings of a one-dimensional ideal lattice Bose gas. We also develop a finite-size scaling analysis to obtain some relevant critical exponents and the critical temperature of the BEC transition. In particular, a dangerous irrelevant scaling field has to be taken into account when the hopping range is sufficiently large to make the effective dimensionality d > 4.

  5. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  6. On the gradient of the Green tensor in two-dimensional elastodynamic problems, and related integrals: Distributional approach and regularization, with application to nonuniformly moving sources

    OpenAIRE

    Pellegrini, Yves-Patrick; Lazar, Markus

    2015-01-01

    The two-dimensional elastodynamic Green tensor is the primary building block of solutions of linear elasticity problems dealing with nonuniformly moving rectilinear line sources, such as dislocations. Elastodynamic solutions for these problems involve derivatives of this Green tensor, which stand as hypersingular kernels. These objects, well defined as distributions, prove cumbersome to handle in practice. This paper, restricted to isotropic media, examines some of their representations in th...

  7. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
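A small Python illustration of the kinds of constructs such a reference catalogs (character classes, quantifiers, named groups, backreferences); the log line is invented for the example:

```python
import re

line = "2016-12-01 ERROR disk /dev/sda1 97% full"

# Named groups pull out structured fields from the line.
m = re.search(r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<level>[A-Z]+)", line)
print(m.group("date"))   # -> 2016-12-01
print(m.group("level"))  # -> ERROR

# A backreference (\1) detects an accidentally doubled word.
assert re.search(r"\b(\w+) \1\b", "the the end")
assert not re.search(r"\b(\w+) \1\b", "the quick end")
```

The same patterns carry over to the other engines the book covers, with minor dialect differences (e.g. named-group syntax in .NET vs. PCRE).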

  8. Regular simplex refinement by regular simplices

    NARCIS (Netherlands)

    Casado, L.G.; Tóth, B.G.; Hendrix, E.M.T.; García, I.

    2014-01-01

    A natural way to define branching in Branch-and-Bound for blending problems is to do bisection. The disadvantage of bisection is that partition sets are in general irregular. A regular simplex with fixed orientation can be determined by its center and size, allowing storage savings in a Branch-and-

  9. θ-regular spaces

    Directory of Open Access Journals (Sweden)

    Dragan S. Janković

    1985-01-01

    In this paper we define a topological space X to be θ-regular if every filterbase in X with a nonempty θ-adherence has a nonempty adherence. It is shown that the class of θ-regular topological spaces includes rim-compact topological spaces and that θ-regular H(i) (Hausdorff) topological spaces are compact (regular). The concept of θ-regularity is used to extend a closed graph theorem of Rose [1]. It is established that an r-subcontinuous closed graph function into a θ-regular topological space is continuous. Another sufficient condition for continuity of functions due to Rose [1] is also extended by introducing the concept of almost weak continuity which is weaker than both weak continuity of Levine and almost continuity of Husain. It is shown that an almost weakly continuous closed graph function into a strongly locally compact topological space is continuous.

  10. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization ...

  11. Supervised scale-regularized linear convolutionary filters

    DEFF Research Database (Denmark)

    Loog, Marco; Lauze, Francois Bernard

    2017-01-01

    We start by demonstrating that an elementary learning task—learning a linear filter from training data by means of regression—can be solved very efficiently for feature spaces of very high dimensionality. In a second step, acknowledging that such high-dimensional learning tasks typically ... filter. In particular, we demonstrate that it clearly outperforms the de facto standard Tikhonov regularization, which is the one employed in ridge regression or Wiener filtering.
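Tikhonov regularization, named in this record as the baseline, has a well-known closed form; a minimal NumPy sketch on synthetic data (the problem sizes and λ are illustrative assumptions, not the authors' setup):

```python
import numpy as np

# Tikhonov (ridge) regularization: minimize ||X w - y||^2 + lam ||w||^2,
# whose closed-form solution is w = (X^T X + lam I)^{-1} X^T y.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = np.zeros(10)
w_true[:3] = [1.0, -2.0, 0.5]
y = X @ w_true + 0.1 * rng.normal(size=50)

lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
print(np.round(w[:3], 2))  # close to the true coefficients [1.0, -2.0, 0.5]
```

The λ‖w‖² term shrinks all coefficients isotropically; the scale-regularized penalty proposed in the record differs precisely in not treating all directions of the filter space equally.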

  12. Regularities of multifractal measures

    Indian Academy of Sciences (India)

    Abstract. First, we prove the decomposition theorem for the regularities of multifractal. Hausdorff measure and packing measure in Rd . This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next ...

  13. Regularities of multifractal measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R R d . This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  14. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R R d . This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  15. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  16. Dynamic stabilization of regular linear systems

    NARCIS (Netherlands)

    Weiss, G; Curtain, RF

    We consider a general class of infinite-dimensional linear systems, called regular linear systems, for which convenient representations are known to exist both in time and in frequency domain, For this class of systems, we investigate the concepts of stabilizability and detectability, in particular,

  17. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We

  18. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
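The robustness argument in this abstract can be illustrated numerically: a Gaussian-kernel correntropy of the prediction errors bounds each sample's influence, while squared loss does not. A minimal sketch (the kernel width sigma and the error values are assumptions for illustration, not values from the paper):

```python
import math

def correntropy(errors, sigma=1.0):
    """Mean Gaussian-kernel similarity between predictions and labels."""
    return sum(math.exp(-e * e / (2 * sigma ** 2)) for e in errors) / len(errors)

def mse(errors):
    """Mean squared error over the same residuals, for comparison."""
    return sum(e * e for e in errors) / len(errors)

clean = [0.1, -0.2, 0.05, 0.0]
noisy = clean + [10.0]  # one grossly mislabeled sample

# The outlier's kernel value is ~exp(-50) ~ 0, so correntropy barely moves...
print(correntropy(clean), correntropy(noisy))
# ...while the same outlier completely dominates the mean squared error.
print(mse(clean), mse(noisy))
```

Maximizing correntropy therefore effectively down-weights outlying labels, which is the intuition behind the MCC framework described above.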

  19. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  20. Regular phantom black holes.

    Science.gov (United States)

    Bronnikov, K A; Fabris, J C

    2006-06-30

    We study self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials (favored by cosmological observations) and single out 16 classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics. Among them are traversable wormholes, bouncing Kantowski-Sachs (KS) cosmologies, and asymptotically flat black holes (BHs). A regular BH has a Schwarzschild-like causal structure, but the singularity is replaced by a de Sitter infinity, giving a hypothetic BH explorer a chance to survive. It also looks possible that our Universe has originated in a phantom-dominated collapse in another universe, with KS expansion and isotropization after crossing the horizon. Explicit examples of regular solutions are built and discussed. Possible generalizations include k-essence type scalar fields (with a potential) and scalar-tensor gravity.

  1. Regularized Structural Equation Modeling

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019

  2. Discretization of variational regularization in Banach spaces

    International Nuclear Information System (INIS)

    Pöschl, Christiane; Resmerita, Elena; Scherzer, Otmar

    2010-01-01

    Consider a nonlinear ill-posed operator equation F(u) = y, where F is defined on a Banach space X. In this paper we analyze finite-dimensional variational regularization, which takes into account operator approximations and noisy data. As shown in the literature, depending on the setting, convergence of the regularized solutions of the finite-dimensional problems can be with respect to the strong or just a weak topology. In this paper our contribution is twofold. First, we derive convergence rates in terms of Bregman distances in the convex regularization setting under appropriate sourcewise representation of a solution of the equation. Secondly, for particular regularization realizations in nonseparable Banach spaces, we discuss the finite-dimensional approximations of the spaces and the type of convergence which is needed for the convergence analysis. These considerations lay the foundation for efficient numerical implementation. In particular, we focus on the space X of functions of finite total variation and analyze in detail the cases when X is the space of functions of finite bounded deformation and the L^∞-space. The latter two settings are of interest in numerous problems arising in optimal control, machine learning and engineering

  3. Regularity of Bound States

    DEFF Research Database (Denmark)

    Faupin, Jeremy; Møller, Jacob Schach; Skibsted, Erik

    2011-01-01

    We study regularity of bound states pertaining to embedded eigenvalues of a self-adjoint operator H, with respect to an auxiliary operator A that is conjugate to H in the sense of Mourre. We work within the framework of singular Mourre theory, which enables us to deal with confined massless Pauli–Fierz models, our primary example, and many-body AC-Stark Hamiltonians. In the simpler context of regular Mourre theory, our results boil down to an improvement of results obtained recently in [8, 9].

  4. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  5. Annotation of Regular Polysemy

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector

    Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...

  6. Sewing Polyakov amplitudes. Pt. 1

    International Nuclear Information System (INIS)

    Carlip, S.; Clements, M.; DellaPietra, S.; DellaPietra, V.

    1990-01-01

    We consider the problem of reconstructing the correlation functions of a conformal field theory on a surface Σ from the correlation functions on a surface Σ' obtained from Σ by cutting along a closed curve. We show that under quite general conditions, the correlation functions on the cut surface can be 'sewn' by integrating over appropriate boundary values of the fields. (orig.)

  7. Regularizing mappings of Lévy measures

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Thorbjørnsen, Steen

    2006-01-01

    In this paper we introduce and study a regularizing one-to-one mapping from the class of one-dimensional Lévy measures into itself. This mapping appeared implicitly in our previous paper [O.E. Barndorff-Nielsen, S. Thorbjørnsen, A connection between free and classical infinite divisibility, Inf. Dim. Anal. Quant. Probab. 7 (2004) 573–590], where we introduced a one-to-one mapping from the class of one-dimensional infinitely divisible probability measures into itself. Based on the investigation in the present paper, we deduce further properties of this mapping. In particular it is proved that it maps...

  8. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
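The score-regularization half of the idea can be sketched with a fixed coefficient matrix standing in for the sparse-coding step (a toy, not the paper's joint optimization; all names are illustrative):

```python
# Sketch: ranking scores y stay close to query relevances q while each
# score is pushed toward the sparse combination of the others,
# y_i ≈ sum_j S_ij y_j, via the closed-form regularized least squares.
import numpy as np

def regularized_scores(q, S, lam=1.0):
    """y = argmin ||y - q||^2 + lam * ||(I - S) y||^2  (closed form)."""
    n = len(q)
    M = np.eye(n) - S
    return np.linalg.solve(np.eye(n) + lam * (M.T @ M), q)

# Three objects: 0 and 1 reconstruct each other, 2 is isolated (toy S).
S = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
y = regularized_scores(np.array([1.0, 0.0, 0.5]), S, lam=5.0)
print(y[0] > y[1])   # mutually similar objects keep their order...
print(y[0] - y[1])   # ...but their scores are pulled close together
```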

  9. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  10. Regularization of divergent integrals

    OpenAIRE

    Felder, Giovanni; Kazhdan, David

    2016-01-01

    We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.

  11. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  12. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
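The stabilizing effect of the L2 regularizer can be illustrated on plain minimum variance with a budget constraint (a toy sketch, not the paper's expected-shortfall/SVR formulation; names and data are illustrative):

```python
# Sketch: w = argmin w'Cw + lam*||w||^2  subject to  sum(w) = 1.
# The closed-form solution is w ∝ (C + lam*I)^{-1} 1; the ridge term
# stabilizes the estimate and spreads weight ("diversification pressure").
import numpy as np

def regularized_min_variance(returns, lam=0.1):
    C = np.cov(returns, rowvar=False)
    A = C + lam * np.eye(C.shape[0])
    w = np.linalg.solve(A, np.ones(C.shape[0]))
    return w / w.sum()

rng = np.random.default_rng(1)
rets = rng.normal(size=(60, 5))          # 60 days, 5 assets (synthetic)
w = regularized_min_variance(rets, lam=0.5)
print(round(w.sum(), 6))                  # 1.0 (budget constraint holds)
```

As lam grows, the solution approaches the equal-weight portfolio, the extreme of diversification.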

  13. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time- and angular-dependent, regular S-brane solutions which correspond to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general Lorentzian symmetry. Several generalizations of this regular solution are derived, including a charged S-brane and an additional dilatonic field. (author)

  14. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  15. Regular black hole in three dimensions

    OpenAIRE

    Myung, Yun Soo; Yoon, Myungseok

    2008-01-01

    We find a new black hole in three-dimensional anti-de Sitter space by introducing an anisotropic perfect fluid inspired by the noncommutative black hole. This is a regular black hole with two horizons. We compare the thermodynamics of this black hole with that of the non-rotating BTZ black hole. The first law of thermodynamics is not compatible with the Bekenstein-Hawking entropy.

  16. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions.

  17. Regularity of Minimal Surfaces

    CERN Document Server

    Dierkes, Ulrich; Tromba, Anthony J; Kuster, Albrecht

    2010-01-01

    "Regularity of Minimal Surfaces" begins with a survey of minimal surfaces with free boundaries. Following this, the basic results concerning the boundary behaviour of minimal surfaces and H-surfaces with fixed or free boundaries are studied. In particular, the asymptotic expansions at interior and boundary branch points are derived, leading to general Gauss-Bonnet formulas. Furthermore, gradient estimates and asymptotic expansions for minimal surfaces with only piecewise smooth boundaries are obtained. One of the main features of free boundary value problems for minimal surfaces is t

  18. Dimensional Reduction and Hadronic Processes

    Science.gov (United States)

    Signer, Adrian; Stöckinger, Dominik

    2008-11-01

    We consider the application of regularization by dimensional reduction to NLO corrections of hadronic processes. The general collinear singularity structure is discussed, the origin of the regularization-scheme dependence is identified and transition rules to other regularization schemes are derived.

  19. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object-oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...
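The kind of problem the thesis addresses can be illustrated independently of the MOORe Tools toolbox; a minimal Tikhonov regularization sketch (NumPy standing in for Matlab, test matrix and parameters chosen for illustration):

```python
# Sketch: for a linear ill-posed problem Ax = b, the Tikhonov solution
# x = argmin ||Ax - b||^2 + lam^2 ||x||^2 damps the noise amplification
# that a naive inverse suffers on an ill-conditioned A.
import numpy as np

def tikhonov(A, b, lam):
    """Solve the normal equations (A'A + lam^2 I) x = A'b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

# Small Hilbert matrix: a classic ill-conditioned test problem.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).normal(size=n)

x_naive = np.linalg.solve(A, b)    # tiny noise is strongly amplified
x_reg = tikhonov(A, b, lam=1e-4)   # damped solution stays near x_true
print(np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true))
```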

  20. Anomaly Structure of Regularized Supergravity

    Science.gov (United States)

    Butter, Daniel; Gaillard, Mary K.

    2015-01-01

    On-shell Pauli-Villars regularization of the one-loop divergences of supergravity theories is used to study the anomaly structure of supergravity and the cancellation of field theory anomalies under a U (1 ) gauge transformation and under the T -duality group of modular transformations in effective supergravity theories with three Kähler moduli Ti obtained from orbifold compactification of the weakly coupled heterotic string. This procedure requires constraints on the chiral matter representations of the gauge group that are consistent with known results from orbifold compactifications. Pauli-Villars (PV) regulator fields allow for the cancellation of all quadratic and logarithmic divergences, as well as most linear divergences. If all linear divergences were canceled, the theory would be anomaly free, with noninvariance of the action arising only from Pauli-Villars masses. However there are linear divergences associated with nonrenormalizable gravitino/gaugino interactions that cannot be canceled by PV fields. The resulting chiral anomaly forms a supermultiplet with the corresponding conformal anomaly, provided the ultraviolet cutoff has the appropriate field dependence, in which case total derivative terms, such as Gauss-Bonnet, do not drop out from the effective action. The anomalies can be partially canceled by the four-dimensional version of the Green-Schwarz mechanism, but additional counterterms, and/or a more elaborate set of Pauli-Villars fields and couplings, are needed to cancel the full anomaly, including D -term contributions to the conformal anomaly that are nonlinear in the parameters of the anomalous transformations.

  1. On fuzzy multiset regular languages

    Directory of Open Access Journals (Sweden)

    B. K. Sharma

    2017-03-01

    Full Text Available The purpose of present work is to study some algebraic aspect of fuzzy multiset regular languages. In between, we show the equivalence of multiset regular language and fuzzy multiset regular language. Finally, we introduce the concept of pumping lemma for fuzzy multiset regular languages, which we use to establish a necessary and sufficient condition for a fuzzy multiset language to be non-constant.

  2. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore...

  3. Ensemble manifold regularization.

    Science.gov (United States)

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

    We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
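The core EMR idea can be sketched with the hyperparameter learning omitted: approximate the intrinsic manifold by a convex combination of candidate graph Laplacians, then use the composite Laplacian to regularize a least-squares semi-supervised learner (the chain graphs and weights below are illustrative, not from the paper):

```python
# Sketch: composite Laplacian L = sum_m mu_m L_m (mu on the simplex),
# then f = argmin sum_labeled (f_i - y_i)^2 + gamma * f'Lf in closed form.
import numpy as np

def composite_laplacian(laplacians, mu):
    mu = np.asarray(mu, dtype=float) / np.sum(mu)   # convex weights
    return sum(m * L for m, L in zip(mu, laplacians))

def ssl_predict(L, y_labeled, labeled_idx, gamma=1.0):
    n = L.shape[0]
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0               # selects labeled points
    y = np.zeros(n)
    y[labeled_idx] = y_labeled
    return np.linalg.solve(J + gamma * L, y)

def chain_laplacian(n, step):
    """Laplacian of a path-like graph linking i and i+step."""
    W = np.zeros((n, n))
    for i in range(n - step):
        W[i, i + step] = W[i + step, i] = 1.0
    return np.diag(W.sum(1)) - W

L = composite_laplacian([chain_laplacian(6, 1), chain_laplacian(6, 2)],
                        mu=[0.7, 0.3])
f = ssl_predict(L, y_labeled=[1.0, -1.0], labeled_idx=[0, 5])
print(f[0] > 0 > f[5])    # labels propagate smoothly along the chain
```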

  4. Chaos regularization of quantum tunneling rates

    International Nuclear Information System (INIS)

    Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-01-01

    Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics, the tunneling rates fluctuate greatly with the eigenenergies of the states, sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  5. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2014 Elsevier Ltd. All rights reserved.
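The graph-regularized term can be sketched in the simpler (non-convex) setting of Cai et al.'s GNMF rather than the paper's CNMF extension; names and the toy graph are illustrative:

```python
# Sketch: minimize ||X - U V'||^2 + lam * tr(V' L V), with L = D - W the
# graph Laplacian on data columns, via standard multiplicative updates
# that preserve nonnegativity.
import numpy as np

def gnmf(X, W, k=2, lam=0.1, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U, V = rng.random((m, k)), rng.random((n, k))
    D = np.diag(W.sum(axis=1))                # degree matrix of the graph
    eps = 1e-9                                # guards against division by zero
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V

X = np.random.default_rng(1).random((6, 8))   # nonnegative data matrix
W = (np.ones((8, 8)) - np.eye(8)) * 0.1       # toy similarity graph on columns
U, V = gnmf(X, W)
err = np.linalg.norm(X - U @ V.T)
print(err < np.linalg.norm(X))                # True: low-rank fit captures X
```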

  6. Bayesian regularization of diffusion tensor images

    DEFF Research Database (Denmark)

    Frandsen, Jesper; Hobolth, Asger; Østergaard, Leif

    2007-01-01

    Diffusion tensor imaging (DTI) is a powerful tool in the study of the course of nerve fibre bundles in the human brain. Using DTI, the local fibre orientation in each image voxel can be described by a diffusion tensor which is constructed from local measurements of diffusion coefficients along...... several directions. The measured diffusion coefficients and thereby the diffusion tensors are subject to noise, leading to possibly flawed representations of the three dimensional fibre bundles. In this paper we develop a Bayesian procedure for regularizing the diffusion tensor field, fully utilizing...

  7. Existence of connected regular and nearly regular graphs

    OpenAIRE

    Ganesan, Ghurumuruhan

    2018-01-01

    For integers $k \geq 2$ and $n \geq k+1$, we prove the following: if $n \cdot k$ is even, there is a connected $k$-regular graph on $n$ vertices; if $n \cdot k$ is odd, there is a connected nearly $k$-regular graph on $n$ vertices.
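The even case admits a simple constructive illustration (a standard circulant-graph construction, not necessarily the one used in the proof):

```python
# Sketch: join vertex i to i±1, ..., i±(k//2), plus the antipodal vertex
# i + n/2 when k is odd (which forces n even, consistent with n*k even).
# The d=1 ring already makes the graph connected.
def circulant_k_regular(n, k):
    assert n >= k + 1 and (n * k) % 2 == 0
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + d) % n)))
        if k % 2 == 1:                        # antipodal chord; n is even here
            edges.add(frozenset((i, (i + n // 2) % n)))
    return edges

edges = circulant_k_regular(10, 3)            # 3-regular graph on 10 vertices
deg = {v: 0 for v in range(10)}
for e in edges:
    for v in e:
        deg[v] += 1
print(sorted(set(deg.values())))              # [3]: every vertex has degree 3
```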

  8. Fluctuations of quantum fields via zeta function regularization

    International Nuclear Information System (INIS)

    Cognola, Guido; Zerbini, Sergio; Elizalde, Emilio

    2002-01-01

    Explicit expressions for the expectation values and the variances of some observables, which are bilinear quantities in the quantum fields on a D-dimensional manifold, are derived making use of zeta function regularization. It is found that the variance, related to the second functional variation of the effective action, requires a further regularization and that the relative regularized variance turns out to be 2/N, where N is the number of the fields, thus being independent of the dimension D. Some illustrating examples are worked through. The issue of the stress tensor is also briefly addressed
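A generic textbook instance of the technique (not taken from the paper, and in assumed notation) may help fix ideas: the spectral zeta function of an operator $H$ with eigenvalues $\lambda_n$ assigns finite values to formally divergent mode sums by analytic continuation.

```latex
% Illustrative zeta-function regularization (assumed notation).
\zeta_H(s) = \sum_{n} \lambda_n^{-s},
\qquad \ln \det H := -\zeta_H'(0),
\qquad \text{e.g.}\quad
\sum_{n=1}^{\infty} n \;\longrightarrow\; \zeta(-1) = -\tfrac{1}{12}.
```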

  9. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.

  10. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.
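In generic notation (the symbols below are illustrative, not copied from the paper), a joint ℓ1 and Laplacian manifold regularization model for reconstruction has the form

```latex
% Illustrative joint sparsity + manifold-smoothness objective (assumed notation).
\min_{x \ge 0}\;
\tfrac{1}{2}\,\| A x - b \|_2^2
\;+\; \lambda_1 \| x \|_1
\;+\; \lambda_2\, x^{\mathsf T} L\, x ,
```

where $A$ is the forward-model system matrix, $b$ the boundary measurements, and $L$ a graph Laplacian over the discretized domain; the ℓ1 term promotes sparse targets while the quadratic Laplacian term penalizes spatially rough solutions.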

  11. Emergent criticality and Friedan scaling in a two-dimensional frustrated Heisenberg antiferromagnet

    Science.gov (United States)

    Orth, Peter P.; Chandra, Premala; Coleman, Piers; Schmalian, Jörg

    2014-03-01

    We study a two-dimensional frustrated Heisenberg antiferromagnet on the windmill lattice consisting of triangular and dual honeycomb lattice sites. In the classical ground state, the spins on different sublattices are decoupled, but quantum and thermal fluctuations drive the system into a coplanar state via an "order from disorder" mechanism. We obtain the finite temperature phase diagram using renormalization group approaches. In the coplanar regime, the relative U(1) phase between the spins on the two sublattices decouples from the remaining degrees of freedom, and is described by a six-state clock model with an emergent critical phase. At lower temperatures, the system enters a Z6 broken phase with long-range phase correlations. We derive these results by two distinct renormalization group approaches to two-dimensional magnetism: Wilson-Polyakov scaling and Friedan's geometric approach to nonlinear sigma models where the scaling of the spin stiffnesses is governed by the Ricci flow of a 4D metric tensor.

  12. Regular homotopy of Hurwitz curves

    International Nuclear Information System (INIS)

    Auroux, D; Kulikov, Vik S; Shevchishin, V V

    2004-01-01

    We prove that any two irreducible cuspidal Hurwitz curves C_0 and C_1 (or, more generally, two curves with A-type singularities) in the Hirzebruch surface F_N with the same homology classes and sets of singularities are regular homotopic. Moreover, they are symplectically regular homotopic if C_0 and C_1 are symplectic with respect to a compatible symplectic form.

  13. A criterion for regular sequences

    Indian Academy of Sciences (India)

    Note that every sequence is a strongly regular as well as a regular sequence on the zero ... following statement given in Chapter II, 6.1 of [4]. ...

  14. Regularized Generalized Canonical Correlation Analysis

    Science.gov (United States)

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  15. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  16. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  17. Variational regularization of 3D data experiments with Matlab

    CERN Document Server

    Montegranario, Hebert

    2014-01-01

    Variational Regularization of 3D Data provides an introduction to variational methods for data modelling and its application in computer vision. In this book, the authors identify interpolation as an inverse problem that can be solved by Tikhonov regularization. The proposed solutions are generalizations of one-dimensional splines, applicable to n-dimensional data and the central idea is that these splines can be obtained by regularization theory using a trade-off between the fidelity of the data and smoothness properties.As a foundation, the authors present a comprehensive guide to the necessary fundamentals of functional analysis and variational calculus, as well as splines. The implementation and numerical experiments are illustrated using MATLAB®. The book also includes the necessary theoretical background for approximation methods and some details of the computer implementation of the algorithms. A working knowledge of multivariable calculus and basic vector and matrix methods should serve as an adequat...

  18. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  19. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  20. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  1. Commuting Π-regular rings

    Directory of Open Access Journals (Sweden)

    Shervin Sahebi

    2014-05-01

    Full Text Available $R$ is called a commuting regular ring (resp. semigroup) if for each $x, y \in R$ there exists $a \in R$ such that $xy = yxayx$. In this paper, we introduce the concept of commuting $\pi$-regular rings (resp. semigroups) and study various properties of them.

  2. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.

  3. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. We evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates that our method has superior performance to the state of the art.
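The regularization step can be sketched as an equality-constrained quadratic program (constraint *detection* is the papers' contribution and is omitted here; names and the toy data are illustrative):

```python
# Sketch: given detected alignment constraints x_i = x_j, move element
# coordinates as little as possible by solving
#   min ||x - x0||^2  s.t.  C x = 0
# via its KKT system  [I C'; C 0] [x; lambda] = [x0; 0].
import numpy as np

def regularize_coords(x0, pairs):
    """Snap coordinates so that x[i] == x[j] for each detected pair (i, j)."""
    n = len(x0)
    C = np.zeros((len(pairs), n))
    for r, (i, j) in enumerate(pairs):
        C[r, i], C[r, j] = 1.0, -1.0          # row encodes x_i - x_j = 0
    K = np.block([[np.eye(n), C.T],
                  [C, np.zeros((len(pairs),) * 2)]])
    rhs = np.concatenate([x0, np.zeros(len(pairs))])
    return np.linalg.solve(K, rhs)[:n]

x0 = np.array([10.0, 10.4, 30.0, 29.2])       # left edges of four elements
x = regularize_coords(x0, [(0, 1), (2, 3)])   # elements 0,1 and 2,3 align
print(x)                                      # pairs snap to their means (10.2, 29.6)
```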

  4. From inactive to regular jogger

    DEFF Research Database (Denmark)

    Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup

    limited in terms of maintaining a behavior change. The purpose of this study was to investigate individual, cognitive, social, and contextual factors influencing the adoption and maintenance of regular self-organized jogging, and how they were manifested among former inactive adults. Methods A qualitative...... to translate intention into regular behavior. TTM: Informants expressed rapid progression from the pre-contemplation to the action stage caused by an early shift in the decisional balance towards advantages overweighing disadvantages. This was followed by a continuous improvement in self-efficacy, which...... jogging-related self-efficacy, and deployment of realistic goal setting was significant in the achievement of regular jogging behavior. Cognitive factors included a positive change in both affective and instrumental beliefs about jogging. Expectations from society and social relations had limited effect...

  5. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  6. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
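    The block-circulant structure the abstract exploits can be illustrated for a single 1-D base sample: ridge regression over all cyclic shifts of the sample reduces to elementwise operations on FFT spectra. The sketch below demonstrates the standard correlation-filter identity (not the paper's semi-supervised tracker) by checking the Fourier-domain solution against the direct one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)      # base sample; its n cyclic shifts form the training set
y = np.zeros(n); y[0] = 1.0     # desired correlation response (peak at zero shift)
lam = 1e-2                      # ridge regularizer

# Direct ridge regression: the rows of C are the cyclic shifts of x.
C = np.stack([np.roll(x, i) for i in range(n)])
w_direct = np.linalg.solve(C.T @ C + lam * np.eye(n), C.T @ y)

# Same solution in O(n log n) via the circulant (Fourier) diagonalization:
# C acts as multiplication by conj(X), C.T as multiplication by X.
X, Y = np.fft.fft(x), np.fft.fft(y)
w_fft = np.real(np.fft.ifft(X * Y / (np.abs(X) ** 2 + lam)))

print(np.max(np.abs(w_direct - w_fft)))  # agrees to numerical precision
```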

  7. Toric Geometry of the Regular Convex Polyhedra

    Directory of Open Access Journals (Sweden)

    Fiammetta Battaglia

    2017-01-01

    We describe symplectic and complex toric spaces associated with the five regular convex polyhedra. The regular tetrahedron and the cube are rational and simple, the regular octahedron is not simple, the regular dodecahedron is not rational, and the regular icosahedron is neither simple nor rational. We remark that the last two cases cannot be treated via standard toric geometry.

  8. Regularization of quantum field theories

    International Nuclear Information System (INIS)

    Rayski, J.

    1985-01-01

    The general idea of regularization and renormalization in quantum field theory is presented. It is postulated that it is possible not to go to infinity with the auxiliary masses of regularization but to attach a certain physical meaning to them, although this is equivalent to a violation of unitarity of the operator of evolution in time. It may be achieved in two different ways: it might simply be assumed that only the direction, but not the length, of the state vector possesses a physical meaning, and that not all possible physical events are predictable. 3 refs., 1 fig. (author)

  9. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  10. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  11. Dynamic transition from Mach to regular reflection of shock waves in a steady flow

    CSIR Research Space (South Africa)

    Naidoo, K

    2014-07-01

    The steady, two-dimensional transition criteria between regular and Mach reflection are well established. Little has been done on the dynamic effect on transition due to a rapidly rotating wedge. Results from experiments and computations done...

  12. Low regularity solutions of the Chern-Simons-Higgs equations in the Lorentz gauge

    Directory of Open Access Journals (Sweden)

    Nikolaos Bournaveas

    2009-09-01

    We prove local well-posedness for the 2+1-dimensional Chern-Simons-Higgs equations in the Lorentz gauge with initial data of low regularity. Our result improves earlier results by Huh [10, 11].

  13. Interval matrices: Regularity generates singularity

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Shary, S.P.

    2018-01-01

    Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  14. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyses found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  15. Regularization in Matrix Relevance Learning

    NARCIS (Netherlands)

    Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael

    In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can

  16. Grouping pursuit through a regularization solution surface.

    Science.gov (United States)

    Shen, Xiaotong; Huang, Hsin-Cheng

    2010-06-01

    Extracting grouping structure or identifying homogeneous subgroups of predictors in regression is crucial for high-dimensional data analysis. One low-dimensional structure in particular, grouping, when captured in a regression model, enables us to enhance predictive performance and to facilitate a model's interpretability. Grouping pursuit extracts homogeneous subgroups of predictors most responsible for outcomes of a response. This is the case in gene network analysis, where grouping reveals gene functionalities with regard to progression of a disease. To address challenges in grouping pursuit, we introduce a novel homotopy method for computing an entire solution surface through regularization involving a piecewise linear penalty. This nonconvex and overcomplete penalty permits adaptive grouping and nearly unbiased estimation, which is treated with a novel concept of grouped subdifferentials and difference convex programming for efficient computation. Finally, the proposed method not only achieves high performance as suggested by numerical analysis, but also has the desired optimality with regard to grouping pursuit and prediction as shown by our theoretical results.

  17. Regular languages, regular grammars and automata in splicing systems

    Science.gov (United States)

    Mohamad Jan, Nurhidaya; Fong, Wan Heng; Sarmin, Nor Haniza

    2013-04-01

    A splicing system is known as a mathematical model that initiates the connection between the study of DNA molecules and formal language theory. In splicing systems, languages called splicing languages refer to the set of double-stranded DNA molecules that may arise from an initial set of DNA molecules in the presence of restriction enzymes and ligase. In this paper, some splicing languages resulting from their respective splicing systems are shown. Since all splicing languages are regular, languages which result from the splicing systems can be further investigated using grammars and automata in the field of formal language theory. The splicing language can be written in the form of regular languages generated by grammars. Besides that, splicing systems can be accepted by automata. In this research, two restriction enzymes are used in splicing systems, namely BfuCI and NcoI.
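    The regularity claim means any such splicing language can be written as a regular expression and decided by a finite automaton. As a toy illustration only (the paper mentions the NcoI enzyme, whose recognition site is CCATGG; the language below is not one of the paper's actual splicing languages), here is the regular language of DNA strings containing that site:

```python
import re

# Regular language over {A, C, G, T}: strings that contain the NcoI
# recognition site CCATGG somewhere. Regular expressions and finite
# automata are equivalent formalisms for such languages.
ncoi_site = re.compile(r"^[ACGT]*CCATGG[ACGT]*$")

print(bool(ncoi_site.match("ATCCATGGTA")))  # True: contains CCATGG
print(bool(ncoi_site.match("ATGCATGCAT")))  # False: no CCATGG substring
```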

  18. Regularity of minimal and almost minimal sets and cones : J. Taylor's theorem for beginners

    OpenAIRE

    David, Guy

    2012-01-01

    Notes for lectures that were given in the Séminaire de Mathématiques Supérieures (on metric spaces and transport), Montréal, 2011. We discuss various settings for the Plateau problem, a proof of J. Taylor's regularity theorem for $2$-dimensional almost minimal sets, some applications, and potential extensions of regularity results to the boundary.

  19. Regular and conformal regular cores for static and rotating solutions

    International Nuclear Information System (INIS)

    Azreg-Aïnou, Mustapha

    2014-01-01

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  20. Regular and conformal regular cores for static and rotating solutions

    Energy Technology Data Exchange (ETDEWEB)

    Azreg-Aïnou, Mustapha

    2014-03-07

    Using a new metric for generating rotating solutions, we derive in a general fashion the solution of an imperfect fluid and that of its conformal homolog. We discuss the conditions that the stress–energy tensors and invariant scalars be regular. On classical physical grounds, it is stressed that conformal fluids used as cores for static or rotating solutions are exempt from any malicious behavior in that they are finite and defined everywhere.

  1. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
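    The quantity being maximized is easy to state empirically: a minimal mutual-information estimator over discretized responses and labels (a generic sketch, not the paper's entropy-estimation objective) looks like this:

```python
import numpy as np

def mutual_information(responses, labels):
    """Empirical mutual information (in nats) between discretized
    classification responses and true class labels -- the quantity the
    regularizer above rewards. Computed from the joint frequency table."""
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    l_vals, l_idx = np.unique(labels, return_inverse=True)
    joint = np.zeros((len(r_vals), len(l_vals)))
    for i, j in zip(r_idx, l_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    pr, pl = joint.sum(axis=1), joint.sum(axis=0)   # marginals
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / np.outer(pr, pl)[mask])))

# A response that perfectly determines the label attains MI = H(label) = log 2.
print(mutual_information([1, 1, 0, 0], [1, 1, 0, 0]))  # ≈ 0.693
```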

  2. Circuit complexity of regular languages

    Czech Academy of Sciences Publication Activity Database

    Koucký, Michal

    2009-01-01

    Roč. 45, č. 4 (2009), s. 865-879 ISSN 1432-4350 R&D Projects: GA ČR GP201/07/P276; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10190503 Keywords : regular languages * circuit complexity * upper and lower bounds Subject RIV: BA - General Mathematics Impact factor: 0.726, year: 2009

  3. CT Image Reconstruction in a Low Dimensional Manifold

    OpenAIRE

    Cong, Wenxiang; Wang, Ge; Yang, Qingsong; Hsieh, Jiang; Li, Jia; Lai, Rongjie

    2017-01-01

    Regularization methods are commonly used in X-ray CT image reconstruction. Different regularization methods reflect the characterization of different prior knowledge of images. In a recent work, a new regularization method called a low-dimensional manifold model (LDMM) is investigated to characterize the low-dimensional patch manifold structure of natural images, where the manifold dimensionality characterizes structural information of an image. In this paper, we propose a CT image reconstruc...

  4. Manifestly scale-invariant regularization and quantum effective operators

    CERN Document Server

    Ghilencea, D.M.

    2016-01-01

    Scale invariant theories are often used to address the hierarchy problem, however the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc) which break this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale invariant regularization in (classical) scale invariant theories. We use a dilaton-dependent subtraction function $\\mu(\\sigma)$ which after spontaneous breaking of scale symmetry generates the usual DR subtraction scale $\\mu(\\langle\\sigma\\rangle)$. One consequence is that "evanescent" interactions generated by scale invariance of the action in $d=4-2\\epsilon$ (but vanishing in $d=4$), give rise to new, finite quantum corrections. We find a (finite) correction $\\Delta U(\\phi,\\sigma)$ to the one-loop scalar potential for $\\phi$ and $\\sigma$, beyond the Coleman-Weinberg term. $\\Delta U$ is due to an evanescent correction ($\\propto\\epsilon$) to the field-dependent masses (of...

  5. Variational analysis of regular mappings theory and applications

    CERN Document Server

    Ioffe, Alexander D

    2017-01-01

    This monograph offers the first systematic account of (metric) regularity theory in variational analysis. It presents new developments alongside classical results and demonstrates the power of the theory through applications to various problems in analysis and optimization theory. The origins of metric regularity theory can be traced back to a series of fundamental ideas and results of nonlinear functional analysis and global analysis centered around problems of existence and stability of solutions of nonlinear equations. In variational analysis, regularity theory goes far beyond the classical setting and is also concerned with non-differentiable and multi-valued operators. The present volume explores all basic aspects of the theory, from the most general problems for mappings between metric spaces to those connected with fairly concrete and important classes of operators acting in Banach and finite dimensional spaces. Written by a leading expert in the field, the book covers new and powerful techniques, whic...

  6. Regularity of large solutions for the compressible magnetohydrodynamic equations

    Directory of Open Access Journals (Sweden)

    Qin Yuming

    2011-01-01

    In this paper, we consider the initial-boundary value problem of one-dimensional compressible magnetohydrodynamic flows. The existence and continuous dependence of global solutions in H1 have been established in Chen and Wang (Z Angew Math Phys 54, 608-632, 2003). We will obtain the regularity of global solutions under certain assumptions on the initial data by deriving some new a priori estimates.

  7. BER analysis of regularized least squares for BPSK recovery

    KAUST Repository

    Ben Atitallah, Ismail

    2017-06-20

    This paper investigates the problem of recovering an n-dimensional BPSK signal x0 ∈ {−1, 1}n from an m-dimensional measurement vector y = Ax0 + z, where A and z are assumed to be Gaussian with iid entries. We consider two variants of decoders based on regularized least squares followed by hard-thresholding: the case where the convex relaxation is from {−1, 1}n to ℝn and the box-constrained case where the relaxation is to [−1, 1]n. For both cases, we derive an exact expression for the bit error probability when n and m grow simultaneously large at a fixed ratio. For the box-constrained case, we show that there exists a critical value of the SNR, above which the optimal regularizer is zero. On the other hand, regularization can further improve the performance of the box relaxation at low to moderate SNR regimes. We also prove that the optimal regularizer in the bit error rate sense for the unboxed case is nothing but the MMSE detector.
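    The unboxed decoder can be sketched directly: solve ridge-regularized least squares, then hard-threshold with the sign function. Dimensions, SNR, and the regularizer value below are arbitrary choices for illustration, not the paper's optimal tuning (the box-constrained variant would clip the ridge estimate to [−1, 1] before thresholding):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, snr = 50, 100, 10.0
x0 = rng.choice([-1.0, 1.0], size=n)          # BPSK signal
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian measurement matrix
z = rng.standard_normal(m) * np.sqrt(1.0 / snr)
y = A @ x0 + z

lam = 0.1  # regularizer; the paper characterizes its bit-error-optimal value
# Unboxed relaxation: ridge estimate over R^n, then hard-threshold to {-1, +1}.
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
x_hat = np.sign(x_ridge)

ber = np.mean(x_hat != x0)  # empirical bit error rate
print(ber)
```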

  8. Graph Regularized Auto-Encoders for Image Representation.

    Science.gov (United States)

    Yiyi Liao; Yue Wang; Yong Liu

    2017-06-01

    Image representation has been intensively explored in the domain of computer vision for its significant influence on related tasks such as image clustering and classification. It is valuable to learn a low-dimensional representation of an image which preserves its inherent information from the original image space. From the perspective of manifold learning, this is implemented with the local invariance idea to capture the intrinsic low-dimensional manifold embedded in the high-dimensional input space. Inspired by the recent successes of deep architectures, we propose a local invariant deep nonlinear mapping algorithm, called graph regularized auto-encoder (GAE). With the graph regularization, the proposed method preserves the local connectivity from the original image space to the representation space, while the stacked auto-encoders provide an explicit encoding model for fast inference and powerful expressive capacity for complex modeling. Theoretical analysis shows that the graph regularizer penalizes the weighted Frobenius norm of the Jacobian matrix of the encoder mapping, where the weight matrix captures the local property in the input space. Furthermore, the underlying effects on the hidden representation space are revealed, providing an insightful explanation of the advantage of the proposed method. Finally, the experimental results on both clustering and classification tasks demonstrate the effectiveness of our GAE as well as the correctness of the proposed theoretical analysis, and also suggest that GAE is superior to current deep representation learning techniques such as variant auto-encoders and existing local invariant methods.
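    The graph regularization term itself is compact to state: for hidden representations H and a symmetric affinity matrix W, the penalty Σ_ij W_ij ||h_i − h_j||² equals 2·tr(Hᵀ L H), with L the combinatorial graph Laplacian. A minimal generic sketch (not the GAE training code):

```python
import numpy as np

def graph_regularizer(H, W):
    """Graph regularization term tr(H^T L H), where L = D - W is the
    combinatorial graph Laplacian. It equals half of the pairwise penalty
    sum_ij W_ij ||h_i - h_j||^2, so it is small when strongly connected
    (locally neighboring) samples get similar representations.
    H : (n, d) hidden representations; W : (n, n) symmetric affinities."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    return float(np.trace(H.T @ L @ H))
```

For W = [[0, 1], [1, 0]] and H = [[0], [1]], the pairwise sum is 2 while the trace form gives 1, matching the factor-of-two identity.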

  9. From inactive to regular jogger

    DEFF Research Database (Denmark)

    Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup

    of Planned Behavior (TPB) and The Transtheoretical Model (TTM). Coding and analysis of interviews were performed using NVivo 10 software. Results TPB: During the behavior change process, the intention to jogging shifted from a focus on weight loss and improved fitness to both physical health, psychological......Title From inactive to regular jogger - a qualitative study of achieved behavioral change among recreational joggers Authors Pernille Lund-Cramer & Vibeke Brinkmann Løite Purpose Despite extensive knowledge of barriers to physical activity, most interventions promoting physical activity have proven...

  10. Regularization methods in Banach spaces

    CERN Document Server

    Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S

    2012-01-01

    Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based rather on conventions than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B

  11. Academic Training Lecture - Regular Programme

    CERN Multimedia

    PH Department

    2011-01-01

    Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )

  12. Regular capacities on metrizable spaces

    Directory of Open Access Journals (Sweden)

    T. M. Cherkovskyi

    2014-07-01

    It is proved that for a (not necessarily compact) metric space: the metrics on the space of capacities in the sense of Zarichnyi and Prokhorov are equal; completeness of the space of capacities is equivalent to completeness of the original space. It is shown that for capacities on metrizable spaces the properties of $\omega$-smoothness and of $\tau$-smoothness are equivalent precisely on the separable spaces, and the properties of $\omega$-smoothness and of regularity w.r.t. some (and then w.r.t. any) admissible metric are equivalent precisely on the compact spaces.

  13. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....

  14. On the MSE Performance and Optimization of Regularized Problems

    KAUST Repository

    Alrashdi, Ayed

    2016-11-01

    The amount of data that has been measured, transmitted/received, and stored in recent years has dramatically increased; today, we are in the world of big data. Fortunately, in many applications, we can take advantage of possible structures and patterns in the data to overcome the curse of dimensionality. The most well-known structures include sparsity, low-rankness, and block sparsity. This covers a wide range of applications such as machine learning, medical imaging, signal processing, social networks, and computer vision. It has also led to a specific interest in recovering signals from noisy compressed measurements (the Compressed Sensing (CS) problem). Such problems are generally ill-posed unless the signal is structured. The structure can be captured by a regularizer function. This gives rise to a potential interest in regularized inverse problems, where the process of reconstructing the structured signal can be modeled as a regularized problem. This thesis particularly focuses on finding the optimal regularization parameter for such problems, such as ridge regression, LASSO, square-root LASSO, and low-rank Generalized LASSO. Our goal is to optimally tune the regularizer to minimize the mean-squared error (MSE) of the solution when the noise variance or structure parameters are unknown. The analysis is based on the framework of the Convex Gaussian Min-max Theorem (CGMT), which has been used recently to precisely predict performance errors.
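    An empirical version of the tuning problem can be traced directly on synthetic data: sweep the ridge regularizer and record the resulting MSE. Sizes and the grid below are arbitrary; the thesis' contribution is predicting this oracle curve via the CGMT without access to the true signal:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 200, 80, 1.0
x0 = rng.standard_normal(p)                     # true signal (oracle only)
A = rng.standard_normal((n, p)) / np.sqrt(n)    # measurement matrix
y = A @ x0 + sigma * rng.standard_normal(n)

lams = np.logspace(-3, 2, 30)
mses = []
for lam in lams:
    # Ridge regression estimate for this regularization level.
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)
    mses.append(np.mean((x_hat - x0) ** 2))     # oracle MSE

best = lams[int(np.argmin(mses))]
print(best)  # the MSE-optimal regularizer on this grid
```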

  15. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.

  16. Sparse High Dimensional Models in Economics.

    Science.gov (United States)

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2011-09-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed.
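    A minimal penalized-least-squares example in the spirit of the review: the LASSO solved by iterative soft-thresholding (ISTA). Problem sizes and the penalty level are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 penalty used in penalized least squares."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, y, lam, n_iter=3000):
    """ISTA for min_x 0.5 ||Ax - y||^2 + lam ||x||_1: a gradient step on the
    quadratic loss followed by soft-thresholding, one of the penalized
    least-squares estimators the review covers."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))           # high-dimensional: p = 100 > n = 40
x_true = np.zeros(100)
x_true[:3] = [2.0, -1.5, 1.0]                # 3-sparse signal
y = A @ x_true                               # noiseless measurements
x_hat = lasso_ista(A, y, lam=0.1)

# Large coefficients of the estimate; should coincide with the true support.
print(np.flatnonzero(np.abs(x_hat) > 0.5))
```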

  17. Wave dynamics of regular and chaotic rays

    International Nuclear Information System (INIS)

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space

  18. Using dimensional reduction for hadronic collisions

    Science.gov (United States)

    Signer, Adrian; Stöckinger, Dominik

    2009-02-01

    We discuss how to apply regularization by dimensional reduction for computing hadronic cross sections at next-to-leading order. We analyze the infrared singularity structure, demonstrate that there are no problems with factorization, and show how to use dimensional reduction in conjunction with standard parton distribution functions. We clarify that different versions of dimensional reduction with different infrared and factorization behaviour have been used in the literature. Finally, we give transition rules for translating the various parts of next-to-leading order cross sections from dimensional reduction to other regularization schemes.

  19. Regularity of Dual Gabor Windows

    Directory of Open Access Journals (Sweden)

    Ole Christensen

    2013-01-01

    We present a construction of dual windows associated with Gabor frames with compactly supported windows. The size of the support of the dual windows is comparable to that of the given window. Under certain conditions, we prove that there exist dual windows with higher regularity than the canonical dual window. On the other hand, there are cases where no differentiable dual window exists, even in the overcomplete case. As a special case of our results, we show that there exists a common smooth dual window for an interesting class of Gabor frames. In particular, for any value of K∈ℕ, there is a smooth function h which simultaneously is a dual window for all B-spline generated Gabor frames {EmbTnBN(x/2)}m,n∈ℕ for B-splines BN of order N=1,…,2K+1 with a fixed and sufficiently small value of b.

  20. EXPRESSÃO REGULAR NUMÉRICA

    Directory of Open Access Journals (Sweden)

    Bruno Vier Hoffmeister

    2014-06-01

    This article presents the formal definition of the programming language Numeric Regular Expression, a language concept inspired by Regular Expression syntax that applies its power and flexibility to numeric strings.
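    The paper's own syntax is not reproduced in this record; as motivation, expressing even a simple numeric range such as 0-255 with a classical regular expression is already verbose:

```python
import re

# Classical regex for the numeric range 0-255 (an IPv4 octet): the range
# must be spelled out digit-class by digit-class, which is exactly the
# verbosity a dedicated numeric-expression language aims to remove.
octet = re.compile(r"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$")

print(bool(octet.match("255")))  # True
print(bool(octet.match("256")))  # False: out of range
```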

  1. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  2. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

    guided corrections. This study compares manual delineations in replanning CT scans of head-and-neck patients to automatic contour propagation using deformable registration with Riemannian regularization. The potential benefit of locally assigned regularization parameters according to tissue type...

  3. BRST gauge fixing and regularization

    International Nuclear Information System (INIS)

    Damgaard, P.H.; Jonghe, F. de; Sollacher, R.

    1995-05-01

    In the presence of consistent regulators, the standard procedure of BRST gauge fixing (or moving from one gauge to another) can require non-trivial modifications. These modifications occur at the quantum level, and gauges exist which are only well-defined when quantum mechanical modifications are correctly taken into account. We illustrate how this phenomenon manifests itself in the solvable case of two-dimensional bosonization in the path-integral formalism. As a by-product, we show how to derive smooth bosonization in Batalin-Vilkovisky Lagrangian BRST quantization. (orig.)

  4. Regular graph construction for semi-supervised learning

    International Nuclear Information System (INIS)

    Vega-Oliveros, Didier A; Berton, Lilian; Eberle, Andre Mantini; Lopes, Alneu de Andrade; Zhao, Liang

    2014-01-01

    Semi-supervised learning (SSL) stands out for using a small amount of labeled points for data clustering and classification. In this scenario, graph-based methods allow the analysis of local and global characteristics of the available data by identifying classes or groups regardless of the data distribution and representing submanifolds in Euclidean space. Most methods used in the literature for SSL classification pay little attention to graph construction. However, regular graphs can obtain better classification accuracy than traditional methods such as k-nearest neighbor (kNN), since kNN favors the generation of hubs and is not appropriate for high-dimensional data. Nevertheless, methods commonly used for generating regular graphs have high computational cost. We tackle this problem by introducing an alternative method for the generation of regular graphs with better runtime performance than the methods usually found in the area. Our technique is based on the preferential selection of vertices according to some topological measures, such as closeness, generating a regular graph at the end of the process. Experiments using the global and local consistency method for label propagation show that our method provides better or equal classification rates in comparison with kNN.
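
The hub problem and the degree-constrained alternative can be illustrated with a toy sketch (our own illustration; the greedy builder below is a simplified stand-in for the paper's preferential-selection method):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))          # 60 random points in 5-D
k = 3

# Directed kNN graph: every vertex has out-degree k, but in-degrees
# vary, so a few central points become hubs.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
np.fill_diagonal(D, np.inf)
knn = np.argsort(D, axis=1)[:, :k]
in_deg = np.bincount(knn.ravel(), minlength=len(X))
print("kNN in-degree range:", in_deg.min(), "to", in_deg.max())

# Degree-bounded alternative: greedily add the shortest edges whose
# endpoints both still have degree < k (a greedy b-matching).
deg = np.zeros(len(X), dtype=int)
iu, ju = np.triu_indices(len(X), 1)
for idx in np.argsort(D[iu, ju]):
    i, j = iu[idx], ju[idx]
    if deg[i] < k and deg[j] < k:
        deg[i] += 1
        deg[j] += 1
print("greedy graph degree range:", deg.min(), "to", deg.max())
```

The degree-bounded graph cannot develop hubs, which is the property the authors exploit; their actual method additionally uses topological measures such as closeness to order the vertex selection.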

  5. REGULARIZED D-BAR METHOD FOR THE INVERSE CONDUCTIVITY PROBLEM

    DEFF Research Database (Denmark)

    Knudsen, Kim; Lassas, Matti; Mueller, Jennifer

    2009-01-01

    A strategy for regularizing the inversion procedure for the two-dimensional D-bar reconstruction algorithm based on the global uniqueness proof of Nachman [Ann. Math. 143 (1996)] for the ill-posed inverse conductivity problem is presented. The strategy utilizes truncation of the boundary integral equation and the scattering transform. It is shown that this leads to a bound on the error in the scattering transform and a stable reconstruction of the conductivity; an explicit rate of convergence in appropriate Banach spaces is derived as well. Numerical results are also included, demonstrating the convergence of the reconstructed conductivity to the true conductivity as the noise level tends to zero. The results provide a link between two traditions of inverse problems research: theory of regularization and inversion methods based on complex geometrical optics. Also, the procedure is a novel...

  6. Manifestly scale-invariant regularization and quantum effective operators

    Science.gov (United States)

    Ghilencea, D. M.

    2016-05-01

    Scale-invariant theories are often used to address the hierarchy problem. However, the regularization of their quantum corrections introduces a dimensionful coupling (dimensional regularization) or scale (Pauli-Villars, etc.) which breaks this symmetry explicitly. We show how to avoid this problem and study the implications of a manifestly scale-invariant regularization in (classical) scale-invariant theories. We use a dilaton-dependent subtraction function μ(σ) which, after spontaneous breaking of the scale symmetry, generates the usual dimensional regularization subtraction scale μ(⟨σ⟩). One consequence is that "evanescent" interactions generated by scale invariance of the action in d = 4 - 2ε (but vanishing in d = 4) give rise to new, finite quantum corrections. We find a (finite) correction ΔU(ϕ, σ) to the one-loop scalar potential for ϕ and σ, beyond the Coleman-Weinberg term. ΔU is due to an evanescent correction (∝ε) to the field-dependent masses (of the states in the loop) which multiplies the pole (∝1/ε) of the momentum integral to give a finite quantum result. ΔU contains a nonpolynomial operator ∼ϕ^6/σ^2 of known coefficient and is independent of the dimensionless subtraction parameter. A more general μ(ϕ, σ) is ruled out since, in their classical decoupling limit, the visible sector (of the Higgs ϕ) and hidden sector (dilaton σ) would still interact at the quantum level; thus, the subtraction function must depend on the dilaton only, μ ∼ σ. The method is useful in models where preserving scale symmetry at the quantum level is important.
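
The way an evanescent piece of a mass term combines with a pole to leave a finite remainder can be sketched schematically (our own illustrative bookkeeping, not the paper's exact expressions): writing the field-dependent mass as m² = m₀² + ε m₁², the divergent one-loop structure behaves as

```latex
\frac{m^4}{\epsilon}
  = \frac{(m_0^2 + \epsilon\, m_1^2)^2}{\epsilon}
  = \frac{m_0^4}{\epsilon} + 2\, m_0^2\, m_1^2 + \mathcal{O}(\epsilon),
```

so the cross term survives the ε → 0 limit as a finite correction; in the model above this mechanism is what generates the operator ∼ϕ^6/σ^2 in ΔU.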

  7. Elementary Particle Spectroscopy in Regular Solid Rewrite

    Science.gov (United States)

    Trell, Erik

    2008-10-01

    The Nilpotent Universal Computer Rewrite System (NUCRS) has operationalized the radical ontological dilemma of Nothing at All versus Anything at All down to the ground recursive syntax and principal mathematical realisation of this categorical dichotomy as such and so governing all its sui generis modalities, leading to fulfilment of their individual terms and compass when the respective choice sequence operations are brought to closure. Focussing on the general grammar, NUCRS by pure logic and its algebraic notations hence bootstraps Quantum Mechanics, aware that it "is the likely keystone of a fundamental computational foundation" also for e.g. physics, molecular biology and neuroscience. The present work deals with classical geometry where morphology is the modality, and ventures that the ancient regular solids are its specific rewrite system, in effect extensively anticipating the detailed elementary particle spectroscopy, and further on to essential structures at large both over the inorganic and organic realms. The geodetic antipode to Nothing is extension, with natural eigenvector the endless straight line which when deployed according to the NUCRS as well as Plotelemeian topographic prescriptions forms a real three-dimensional eigenspace with cubical eigenelements where observed quark-skewed quantum-chromodynamical particle events self-generate as an Aristotelean phase transition between the straight and round extremes of absolute endlessness under the symmetry- and gauge-preserving, canonical coset decomposition SO(3)×O(5) of Lie algebra SU(3). The cubical eigen-space and eigen-elements are the parental state and frame, and the other solids are a range of transition matrix elements and portions adapting to the spherical root vector symmetries and so reproducibly reproducing the elementary particle spectroscopy, including a modular, truncated octahedron nano-composition of the Electron which piecemeal enter into molecular structures or compressed to each

  8. Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning

    OpenAIRE

    Lai, Rongjie; Li, Jia

    2017-01-01

    Low-rank structures play an important role in recent advances in many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structures, the concept of the low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restricted than the global low-rank regu...

  9. The geometric $\\beta$-function in curved space-time under operator regularization

    OpenAIRE

    Agarwala, Susama

    2009-01-01

    In this paper, I compare the generators of the renormalization group flow, or the geometric $\\beta$-functions for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric $\\beta$-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow for a conformal scalar-field theories on the same manifolds. The geometr...

  10. A Regularity Criterion for Positive Part of Radial Component in the Case of Axially Symmetric Navier-Stokes Equations

    Directory of Open Access Journals (Sweden)

    Kubica Adam

    2015-03-01

    Full Text Available We examine the conditional regularity of the solutions of the Navier-Stokes equations in the entire three-dimensional space under the assumption that the data are axially symmetric. We show that if the positive part of the radial component of velocity satisfies a weighted Serrin-type condition, and in addition the angular component satisfies some condition, then the solution is regular.

  11. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
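
In symbols, with regularization function R(θ), data 𝒟, and hyper-parameter α (our paraphrase of the relation stated above), the optimality condition reads

```latex
\mathbb{E}_{p(\theta \mid \mathcal{D}, \alpha^{*})}\!\left[R(\theta)\right]
  \;=\; \mathbb{E}_{p(\theta \mid \alpha^{*})}\!\left[R(\theta)\right],
```

i.e. at the optimal regularization strength α* the posterior expectation of the regularizer matches its prior expectation.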

  12. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recall of regular activities involved only planning for both intermediate and older adults, while recall of irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  13. Harmonic R-matrices for scattering amplitudes and spectral regularization

    Energy Technology Data Exchange (ETDEWEB)

    Ferro, Livia; Plefka, Jan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Lukowski, Tomasz [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Univ. Berlin (Germany). IRIS Adlershof; Meneghelli, Carlo [Hamburg Univ. (Germany). Fachbereich 11 - Mathematik; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Staudacher, Matthias [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Mathematik; Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany)

    2012-12-15

    Planar N=4 super Yang-Mills appears to be integrable. While this allows one to find this theory's exact spectrum, integrability has hitherto been of no direct use for scattering amplitudes. To remedy this, we deform all scattering amplitudes by a spectral parameter. The deformed tree-level four-point function turns out to be essentially the one-loop R-matrix of the integrable N=4 spin chain satisfying the Yang-Baxter equation. Deformed on-shell three-point functions yield novel three-leg R-matrices satisfying bootstrap equations. Finally, we supply initial evidence that the spectral parameter might find its use as a novel symmetry-respecting regulator replacing dimensional regularization. Its physical meaning is a local deformation of particle helicity, a fact which might be useful for a much larger class of non-integrable four-dimensional field theories.

  14. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  15. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  16. Cluster Algebras and Symmetries of Regular Tilings

    OpenAIRE

    Scherlis, Adam

    2015-01-01

    The classification of Grassmannian cluster algebras resembles that of regular polygonal tilings. We conjecture that this resemblance may indicate a deeper connection between these seemingly unrelated structures.

  17. Adaptive L1/2 Shooting Regularization Method for Survival Analysis Using Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Liu

    2013-01-01

    Full Text Available A new adaptive L1/2 shooting regularization method for variable selection, based on Cox's proportional hazards model, is proposed. This adaptive L1/2 shooting algorithm can be easily obtained through the optimization of a reweighted iterative series of L1 penalties and a shooting strategy for the L1/2 penalty. Simulation results based on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
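
The "shooting" building block, coordinate descent with soft-thresholding for an L1 penalty, can be sketched as follows (a minimal sketch; the paper's adaptive L1/2 scheme layers a reweighting of the penalty on top of updates like these):

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def shooting_lasso(X, y, lam, n_sweeps=100):
    """Coordinate-descent ('shooting') solver for
    min_b 0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # Residual with feature j's own contribution removed.
            r_j = y - X @ b + X[:, j] * b[j]
            b[j] = soft(X[:, j] @ r_j, lam) / col_sq[j]
    return b

# Sanity check on an orthonormal design, where the exact Lasso solution
# is coordinate-wise soft-thresholding of y.
b_hat = shooting_lasso(np.eye(4), np.array([3.0, -2.0, 0.5, -0.1]), lam=1.0)
print(b_hat)
```

On the identity design the solver reproduces soft(y, λ) exactly; for correlated designs the sweeps iterate until the coordinates stop changing.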

  18. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  19. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  20. Automating InDesign with Regular Expressions

    CERN Document Server

    Kahrel, Peter

    2006-01-01

    If you need to make automated changes to InDesign documents beyond what basic search and replace can handle, you need regular expressions, and a bit of scripting to make them work. This Short Cut explains both how to write regular expressions, so you can find and replace the right things, and how to use them in InDesign specifically.
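
The kind of pattern-driven find-and-replace the book automates can be illustrated outside InDesign; here is the same idea in plain Python's `re` module (not InDesign's GREP dialect or scripting API):

```python
import re

text = "Figures 3, 12 and 107 appear on pages 4-9."

# Find every standalone run of digits and wrap it in brackets,
# reusing the matched text via the backreference \1.
result = re.sub(r"\b(\d+)\b", r"[\1]", text)
print(result)  # Figures [3], [12] and [107] appear on pages [4]-[9].
```

The capture group and backreference are exactly the mechanism a GREP-based InDesign script relies on for structured replacements.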

  1. Regularization algorithms based on total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; O'Leary, Dianne P.

    1996-01-01

    Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. Classical regularization methods, such as Tikhonov's method or trunc...
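
Classical Tikhonov regularization in its SVD form can be sketched as follows (a generic illustration on a Hilbert-matrix test problem, not the total-least-squares variants the record studies):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution of min ||A x - b||^2 + lam^2 * ||x||^2 via the
    SVD, using filter factors s^2 / (s^2 + lam^2) that damp the
    contributions of small singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

# Ill-conditioned test problem: an 8 x 8 Hilbert matrix.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-6 * rng.normal(size=n)   # slightly noisy data

x_naive = np.linalg.solve(A, b)   # noise amplified by tiny singular values
x_reg = tikhonov(A, b, lam=1e-4)
print("naive error:", np.linalg.norm(x_naive - x_true))
print("regularized error:", np.linalg.norm(x_reg - x_true))
```

Even at a noise level of 10⁻⁶, the unregularized solve is dominated by amplified noise, while the filtered solution stays close to the true vector.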

  2. Regularization of the one-matrix models

    International Nuclear Information System (INIS)

    Jurkiewicz, J.

    1990-04-01

    We analyze the critical properties of the one-matrix model near its critical point, corresponding to the continuum limit. We consider the model with quartic and sixth-order interactions. The latter can be viewed as a regularization of the model. We show that the regularized theory develops a phase structure in which it is impossible to reach the standard continuum limit. (orig.)

  3. On bigraded regularities of Rees algebra

    Indian Academy of Sciences (India)

    Ramakrishna Nanduri

    2017-08-03

    A multigraded analog of Castelnuovo–Mumford regularity in terms of local cohomology was defined in [12] (see also [14]). In their notion, it is a subset of the abelian group considered under the grading, whereas we define the local cohomology regularity in a different way (see Definition 2.3). We also define ...

  4. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  5. On 3-Chromatic Distance-Regular Graphs

    NARCIS (Netherlands)

    Blokhuis, A.; Brouwer, A.E.; Haemers, W.H.

    2006-01-01

    We give some necessary conditions for a graph to be 3-chromatic in terms of the spectrum of the adjacency matrix.For all known distance-regular graphs it is determined whether they are 3-chromatic.A start is made with the classification of 3-chromatic distance-regular graphs, and it is shown that

  6. Regular Event Structures and Finite Petri Nets

    DEFF Research Database (Denmark)

    Nielsen, M.; Thiagarajan, P.S.

    2002-01-01

    We present the notion of regular event structures and conjecture that they correspond exactly to finite 1-safe Petri nets. We show that the conjecture holds for the conflict-free case. Even in this restricted setting, the proof is non-trivial and involves a natural subclass of regular event...

  7. Regularization techniques in realistic Laplacian computation.

    Science.gov (United States)

    Bortel, Radoslav; Sovka, Pavel

    2007-11-01

    This paper explores regularization options for the ill-posed spline coefficient equations in the realistic Laplacian computation. We investigate the use of the Tikhonov regularization, truncated singular value decomposition, and the so-called lambda-correction with the regularization parameter chosen by the L-curve, generalized cross-validation, quasi-optimality, and the discrepancy principle criteria. The provided range of regularization techniques is much wider than in the previous works. The improvement of the realistic Laplacian is investigated by simulations on the three-shell spherical head model. The conclusion is that the best performance is provided by the combination of the Tikhonov regularization and the generalized cross-validation criterion-a combination that has never been suggested for this task before.
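
Of the parameter-choice rules compared above, generalized cross-validation (GCV) is easy to state concretely; below is a generic SVD-based sketch for Tikhonov regularization (our illustration, not the paper's spline-specific setup):

```python
import numpy as np

def gcv_choose(A, b, lambdas):
    """Return the Tikhonov parameter from `lambdas` minimizing the GCV
    function G(lam) = ||A x_lam - b||^2 / trace(I - A_infl(lam))^2,
    where A_infl(lam) = A (A^T A + lam^2 I)^(-1) A^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    extra = max(b @ b - beta @ beta, 0.0)   # residual outside range(U)
    n = len(b)
    best_lam, best_g = None, np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        resid2 = np.sum(((1.0 - f) * beta) ** 2) + extra
        g = resid2 / (n - f.sum()) ** 2
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam

# Toy ill-posed problem: Hilbert matrix with mildly noisy data.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
rng = np.random.default_rng(3)
b = A @ np.ones(n) + 1e-5 * rng.normal(size=n)
lam = gcv_choose(A, b, np.logspace(-8, 0, 50))
print("GCV-selected lambda:", lam)
```

GCV needs no estimate of the noise level, which is why it is a common companion to Tikhonov regularization in settings like the Laplacian computation above.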

  8. Thin-shell wormholes from the regular Hayward black hole

    Energy Technology Data Exchange (ETDEWEB)

    Halilsoy, M.; Ovgun, A.; Mazharimousavi, S.H. [Eastern Mediterranean University, Department of Physics, Mersin 10 (Turkey)

    2014-03-15

    We revisit the regular black hole found by Hayward in 4-dimensional static, spherically symmetric spacetime. To find a possible source for such a spacetime we resort to the nonlinear electrodynamics in general relativity. It is found that a magnetic field within this context gives rise to the regular Hayward black hole. By employing such a regular black hole we construct a thin-shell wormhole for the case of various equations of state on the shell. We abbreviate a general equation of state by p = ψ(σ) where p is the surface pressure which is a function of the mass density (σ). In particular, linear, logarithmic, Chaplygin, etc. forms of equations of state are considered. In each case we study the stability of the thin shell against linear perturbations. We plot the stability regions by tuning the parameters of the theory. It is observed that the role of the Hayward parameter is to make the TSW more stable. Perturbations of the throat with the small velocity condition are also studied. The matter of our TSWs, however, remains exotic. (orig.)

  9. Asymptotic performance of regularized quadratic discriminant analysis based classifiers

    KAUST Repository

    Elkhalil, Khalil

    2017-12-13

    This paper carries out a large dimensional analysis of the standard regularized quadratic discriminant analysis (QDA) classifier designed on the assumption that data arise from a Gaussian mixture model. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the covariances and means associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized QDA and can be used to determine the optimal regularization parameter that minimizes the misclassification error probability. Despite being valid only for Gaussian data, our theoretical findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from popular databases, thereby making an interesting connection between theory and practice.
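
The classifier being analyzed admits a compact sketch (a plain ridge-style shrinkage of each class covariance; illustrative synthetic data, equal class priors assumed):

```python
import numpy as np

def fit_rqda(X0, X1, gamma):
    """Fit regularized QDA: per-class Gaussians whose covariance is
    shrunk as Sigma_k + gamma * I to keep its inverse well-behaved."""
    params = []
    for Xk in (X0, X1):
        mu = Xk.mean(axis=0)
        S = np.cov(Xk, rowvar=False) + gamma * np.eye(Xk.shape[1])
        params.append((mu, np.linalg.inv(S), np.linalg.slogdet(S)[1]))
    return params

def predict(params, X):
    """Assign each row of X to the class with the larger Gaussian
    log-likelihood (up to a shared constant)."""
    scores = []
    for mu, S_inv, logdet in params:
        d = X - mu
        maha = np.einsum('ij,jk,ik->i', d, S_inv, d)  # Mahalanobis terms
        scores.append(-0.5 * (maha + logdet))
    return np.argmax(np.stack(scores), axis=0)

rng = np.random.default_rng(2)
X0 = rng.normal(loc=0.0, scale=1.0, size=(100, 10))
X1 = rng.normal(loc=2.0, scale=1.5, size=(100, 10))
params = fit_rqda(X0, X1, gamma=0.1)
pred = predict(params, np.vstack([X0, X1]))
acc = np.mean(pred == np.repeat([0, 1], 100))
print("training accuracy:", acc)
```

The regularization parameter gamma plays exactly the role whose optimal value the paper's asymptotic analysis characterizes.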

  10. Enhancing Low-Rank Subspace Clustering by Manifold Regularization.

    Science.gov (United States)

    Liu, Junmin; Chen, Yijun; Zhang, JiangShe; Xu, Zongben

    2014-07-25

    Recently, low-rank representation (LRR) method has achieved great success in subspace clustering (SC), which aims to cluster the data points that lie in a union of low-dimensional subspace. Given a set of data points, LRR seeks the lowest rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR only considers the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph has been incorporated into LRR, leading to our proposed Laplacian regularized LRR (LapLRR). An efficient optimization procedure, which is based on alternating direction method of multipliers (ADMM), is developed for LapLRR. Experimental results on synthetic and real data sets are presented to demonstrate that the performance of LRR has been enhanced by using the manifold regularization.

  11. A Variance Minimization Criterion to Feature Selection Using Laplacian Regularization.

    Science.gov (United States)

    He, Xiaofei; Ji, Ming; Zhang, Chiyuan; Bao, Hujun

    2011-10-01

    In many information processing tasks, one is often confronted with very high-dimensional data. Feature selection techniques are designed to find the meaningful feature subset of the original features which can facilitate clustering, classification, and retrieval. In this paper, we consider the feature selection problem in unsupervised learning scenarios, which is particularly difficult due to the absence of class labels that would guide the search for relevant information. Based on Laplacian regularized least squares, which finds a smooth function on the data manifold and minimizes the empirical loss, we propose two novel feature selection algorithms which aim to minimize the expected prediction error of the regularized regression model. Specifically, we select those features such that the size of the parameter covariance matrix of the regularized regression model is minimized. Motivated from experimental design, we use trace and determinant operators to measure the size of the covariance matrix. Efficient computational schemes are also introduced to solve the corresponding optimization problems. Extensive experimental results over various real-life data sets have demonstrated the superiority of the proposed algorithms.

  12. Three-Dimensional Flows

    CERN Document Server

    Araujo, Vitor; Viana, Marcelo

    2010-01-01

    In this book, the authors present the elements of a general theory for flows on three-dimensional compact boundaryless manifolds, encompassing flows with equilibria accumulated by regular orbits. The book aims to provide a global perspective of this theory and make it easier for the reader to digest the growing literature on this subject. This is not the first book on the subject of dynamical systems, but there are distinct aspects which together make this book unique. Firstly, this book treats mostly continuous time dynamical systems, instead of its discrete counterpart, exhaustively treated

  13. Functional MRI using regularized parallel imaging acquisition.

    Science.gov (United States)

    Lin, Fa-Hsuan; Huang, Teng-Yi; Chen, Nan-Kuei; Wang, Fu-Nien; Stufflebeam, Steven M; Belliveau, John W; Wald, Lawrence L; Kwong, Kenneth K

    2005-08-01

    Parallel MRI techniques reconstruct full-FOV images from undersampled k-space data by using the uncorrelated information from RF array coil elements. One disadvantage of parallel MRI is that the image signal-to-noise ratio (SNR) is degraded because of the reduced data samples and the spatially correlated nature of multiple RF receivers. Regularization has been proposed to mitigate the SNR loss originating from the latter cause. Since it is necessary to utilize a static prior for regularization, the dynamic contrast-to-noise ratio (CNR) in parallel MRI will be affected. In this paper we investigate the CNR of regularized sensitivity encoding (SENSE) acquisitions. We propose to implement regularized parallel MRI acquisitions in functional MRI (fMRI) experiments by incorporating the prior from combined segmented echo-planar imaging (EPI) acquisition into SENSE reconstructions. We investigated the impact of regularization on the CNR by performing parametric simulations at various BOLD contrasts, acceleration rates, and sizes of the active brain areas. As quantified by receiver operating characteristic (ROC) analysis, the simulations suggest that the detection power of SENSE fMRI can be improved by regularized reconstructions, compared to unregularized reconstructions. Human motor and visual fMRI data acquired at different field strengths and array coils also demonstrate that regularized SENSE improves the detection of functionally active brain regions. 2005 Wiley-Liss, Inc.
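
Schematically, a regularized SENSE reconstruction augments the least-squares coil-unfolding problem with a prior image x₀ (here, the one derived from the segmented EPI acquisition) in a generic Tikhonov form (our notation, not the paper's):

```latex
\hat{x} = \arg\min_x \ \|E x - y\|^2 + \lambda^2 \|x - x_0\|^2
\quad\Longrightarrow\quad
\hat{x} = x_0 + \left(E^{H}E + \lambda^2 I\right)^{-1} E^{H}\,(y - E x_0),
```

where E is the coil-sensitivity encoding matrix, y the undersampled k-space data, and λ controls how strongly the static prior influences the dynamic frames, which is exactly why the CNR is affected.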

  14. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods.Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods.Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.
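
The core idea, regularizing ranking scores with a convex combination of several graph Laplacians rather than one, can be sketched on toy data (fixed uniform weights for simplicity; MultiG-Rank learns the weights jointly with the scores):

```python
import numpy as np

def laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(axis=1)) - W

# Two toy similarity graphs over the same 5 items (hypothetical data).
W1 = np.array([[0, 1, 1, 0, 0],
               [1, 0, 1, 0, 0],
               [1, 1, 0, 0, 0],
               [0, 0, 0, 0, 1],
               [0, 0, 0, 1, 0]], dtype=float)
W2 = np.array([[0, 1, 0, 0, 0],
               [1, 0, 1, 0, 0],
               [0, 1, 0, 1, 0],
               [0, 0, 1, 0, 1],
               [0, 0, 0, 1, 0]], dtype=float)

y = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # query: item 0 is the reference
alpha = 1.0
w = np.array([0.5, 0.5])  # fixed weights; the paper learns these

# Ranking scores smoothed over the combined manifold approximation:
# min_f ||f - y||^2 + alpha * f^T (sum_g w_g L_g) f.
L = w[0] * laplacian(W1) + w[1] * laplacian(W2)
f = np.linalg.solve(np.eye(5) + alpha * L, y)
print("items ranked by score:", np.argsort(-f))
```

Items close to the query in the combined graphs receive high scores, while items reachable only through long paths are ranked low; learning w then amounts to picking the combination of graphs on which this smoothing works best.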

  15. Regularization of instabilities in gravity theories

    Science.gov (United States)

    Ramazanoǧlu, Fethi M.

    2018-01-01

    We investigate instabilities and their regularization in theories of gravitation. Instabilities can be beneficial since their growth often leads to prominent observable signatures, which makes them especially relevant to relatively low signal-to-noise ratio measurements such as gravitational wave detections. An indefinitely growing instability usually renders a theory unphysical; hence, a desirable instability should also come with underlying physical machinery that stops the growth at finite values, i.e., regularization mechanisms. The prototypical gravity theory that presents such an instability is the spontaneous scalarization phenomena of scalar-tensor theories, which feature a tachyonic instability. We identify the regularization mechanisms in this theory and show that they can be utilized to regularize other instabilities as well. Namely, we present theories in which spontaneous growth is triggered by a ghost rather than a tachyon and numerically calculate stationary solutions of scalarized neutron stars in these theories. We speculate on the possibility of regularizing known divergent instabilities in certain gravity theories using our findings and discuss alternative theories of gravitation in which regularized instabilities may be present. Even though we study many specific examples, our main point is the recognition of regularized instabilities as a common theme and unifying mechanism in a vast array of gravity theories.

  16. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
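A minimal sketch of the alternating scheme described above: ranking scores are smoothed by a convex combination of graph Laplacians, and graph weights are then re-estimated from the smoothness of the scores on each graph. The closed-form score update and the entropic weight update below are illustrative assumptions, not the exact MultiG-Rank objective.

```python
import numpy as np

def laplacian(W):
    # Unnormalized graph Laplacian L = D - W.
    return np.diag(W.sum(axis=1)) - W

def multi_graph_rank(Ws, y, alpha=0.5, beta=1.0, iters=20):
    """Toy multiple-graph regularized ranking: alternate between the
    ranking scores f and the graph weights mu (hypothetical updates)."""
    n, k = len(y), len(Ws)
    Ls = [laplacian(np.asarray(W, float)) for W in Ws]
    mu = np.full(k, 1.0 / k)                        # uniform initial weights
    f = np.zeros(n)
    for _ in range(iters):
        L = sum(m * Lk for m, Lk in zip(mu, Ls))
        f = np.linalg.solve(np.eye(n) + alpha * L, y)   # smooth scores toward y
        r = np.array([f @ Lk @ f for Lk in Ls])         # smoothness on each graph
        mu = np.exp(-r / (2.0 * beta))                  # favor smoother graphs
        mu /= mu.sum()
    return f, mu

# Rank the nodes of a 4-node graph against node 0, combining two candidate graphs.
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
ring = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
scores, weights = multi_graph_rank([path, ring], np.array([1.0, 0, 0, 0]))
```

The query node keeps the highest score, and the learned weights form a distribution over the candidate graphs.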

  17. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang, Jim Jing-Yan

    2012-11-01

    Full Text Available. Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  18. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if $R/J(R)$ is von Neumann regular, where $J(R)$ is the Jacobson radical of $R$. It is proved that if $R$ is J-regular, then (i) $R$ is right $n$-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) $R$ is right FP-injective if and only if $R$ is right $(J, R)$-FP-injective. Some known results are improved.

  19. Completeness and regularity of generalized fuzzy graphs.

    Science.gov (United States)

    Samanta, Sovan; Sarkar, Biswajit; Shin, Dongmin; Pal, Madhumangal

    2016-01-01

    Fuzzy graphs are the backbone of many real systems such as networks, images and scheduling. However, restrictions on edge membership limit the systems that fuzzy graphs can represent; generalized fuzzy graphs avoid such restrictions. In this study, generalized fuzzy graphs and their matrix representation are introduced. Completeness and regularity are two important parameters of graph theory, so regular and complete generalized fuzzy graphs are defined and some of their properties are discussed. Finally, effective regular graphs are exemplified.

  20. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  1. Partial regularity of viscosity solutions for a class of Kolmogorov equations arising from mathematical finance

    Science.gov (United States)

    Rosestolato, M.; Święch, A.

    2017-02-01

    We study value functions which are viscosity solutions of certain Kolmogorov equations. Using PDE techniques we prove that they are C^{1+α} regular on special finite-dimensional subspaces. The problem has origins in hedging derivatives of risky assets in mathematical finance.

  2. Lattice-Valued Convergence Spaces: Weaker Regularity and p-Regularity

    Directory of Open Access Journals (Sweden)

    Lingqiang Li

    2014-01-01

    Full Text Available. By using some lattice-valued Kowalsky dual diagonal conditions, some weaker regularities for Jäger’s generalized stratified L-convergence spaces and those for Boustique et al.’s stratified L-convergence spaces are defined and studied. Here, the lattice L is a complete Heyting algebra. Some characterizations and properties of weaker regularities are presented. For Jäger’s generalized stratified L-convergence spaces, a notion of closures of stratified L-filters is introduced and then a new p-regularity is defined. Finally, the relationships between p-regularities and weaker regularities are established.

  3. Analysis of regularized inversion of data corrupted by white Gaussian noise

    International Nuclear Information System (INIS)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-01-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one if s < 0 is small enough. A modification of Tikhonov regularization theory is presented, covering the case of white Gaussian measurement noise. Furthermore, the convergence of regularized reconstructions to the correct solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed. (paper)
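In the discretized setting, the Tikhonov estimate is the solution of a linear system, u = (AᵀA + αLᵀL)⁻¹Aᵀm. A small numerical sketch, with a Gaussian blur standing in for the pseudodifferential forward map (an illustrative assumption, not the paper's operator) and the first-derivative penalty r = 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward map: a Gaussian blurring matrix A (row-normalized), a hypothetical
# stand-in for the pseudodifferential operator in the abstract.
n = 100
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

u_true = np.sin(2 * np.pi * x)
delta = 0.05
m = A @ u_true + delta * rng.standard_normal(n)   # white-noise measurement

# First-derivative penalty (r = 1): L is a finite-difference operator.
L = np.diff(np.eye(n), axis=0)                    # shape (n-1, n)
alpha = 1e-3
u_hat = np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ m)
```

Without the αLᵀL term the system is numerically singular; with it, the reconstruction stays close to u_true despite the noise.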

  4. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms of parties, intoxication becoming a goal in itself, easier access to drugs, learning to recognise alternative effects of drugs, and experiences of loss of control. The analysis shows that these dimensions are at play not only when young people develop a regular drug use pattern but also when they attempt to extricate themselves from this pattern. Hence, when regular drug users talk about their future, it is not a future characterised by total abstinence from illegal drugs but a future where they have rolled back their drug use career to the recreational drug use pattern they started out with.

  5. Analytic stochastic regularization in fermionic gauge theories

    International Nuclear Information System (INIS)

    Abdalla, E.; Viana, R.L.

    1987-11-01

    We analyse the influence of the analytic stochastic regularization method on gauge symmetry, evaluating the 1-loop photon propagator correction for spinor QED. Consequences in the non-abelian case are discussed. (author)

  6. Weakly supervised object detection with posterior regularization

    OpenAIRE

    Bilen, Hakan; Pedersoli, Marco; Tuytelaars, Tinne

    2014-01-01

    Bilen H., Pedersoli M., Tuytelaars T., ''Weakly supervised object detection with posterior regularization'', 25th British machine vision conference - BMVC 2014, 12 pp., September 1-5, 2014, Nottingham, UK.

  7. Optimal Tikhonov regularization for DEER spectroscopy

    Science.gov (United States)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than the currently utilized L-curve methods.
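Generalized cross validation, one of the well-performing α selection methods mentioned above, can be sketched as follows. The influence-matrix formula is the standard GCV criterion; the grid and the small test problem are illustrative assumptions:

```python
import numpy as np

def gcv_alpha(A, m, L, alphas):
    """Pick the Tikhonov regularization parameter alpha by generalized
    cross validation: minimize n*||(I - H_a) m||^2 / trace(I - H_a)^2,
    where H_a = A (A^T A + a L^T L)^{-1} A^T is the influence matrix."""
    n = len(m)
    best, best_score = None, np.inf
    for a in alphas:
        H = A @ np.linalg.solve(A.T @ A + a * L.T @ L, A.T)
        resid = (np.eye(n) - H) @ m
        score = n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2
        if score < best_score:
            best, best_score = a, score
    return best

# Illustrative test problem: random forward map, identity operator L.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 25))
L = np.eye(25)
m = A @ rng.standard_normal(25) + 0.1 * rng.standard_normal(40)
alphas = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
best_alpha = gcv_alpha(A, m, L, alphas)
```

In practice the grid would be log-spaced over several decades and L would be a first- or second-derivative operator, as in the study.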

  8. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  9. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. Regular-fat dairy foods, such as milk, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted.

  10. Deterministic automata for extended regular expressions

    Directory of Open Access Journals (Sweden)

    Syzdykov Mirzakhmet

    2017-12-01

    Full Text Available. In this work we present algorithms to produce a deterministic finite automaton (DFA) for extended operators in regular expressions such as intersection, subtraction and complement. A method that “overrides” the source NFA (an NFA not defined by the subset-construction rules) is used. Past work described only the algorithm for the AND-operator (intersection of regular languages); in this paper the construction for the MINUS-operator (and for complement) is shown.
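For comparison, the textbook route to the AND-operator is the product construction on DFAs (the standard construction, not the paper's NFA-"overriding" method):

```python
from itertools import product

def dfa_intersection(d1, d2):
    """Product construction for the intersection of two regular languages.
    Each DFA is (states, alphabet, delta, start, accepting), with delta a
    dict mapping (state, symbol) -> state."""
    s1, alpha, t1, q1, f1 = d1
    s2, _, t2, q2, f2 = d2
    states = set(product(s1, s2))
    delta = {((a, b), c): (t1[(a, c)], t2[(b, c)])
             for (a, b) in states for c in alpha}
    accepting = {(a, b) for (a, b) in states if a in f1 and b in f2}
    return states, alpha, delta, (q1, q2), accepting

def accepts(dfa, word):
    _, _, delta, q, acc = dfa
    for c in word:
        q = delta[(q, c)]
    return q in acc

# DFA 1: even number of 'a'.  DFA 2: words ending in 'b'.
d1 = ({0, 1}, {'a', 'b'},
      {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}, 0, {0})
d2 = ({'x', 'y'}, {'a', 'b'},
      {('x', 'a'): 'x', ('x', 'b'): 'y', ('y', 'a'): 'x', ('y', 'b'): 'y'},
      'x', {'y'})
both = dfa_intersection(d1, d2)
```

The product DFA accepts exactly the words in both languages, e.g. "aab" (two 'a's, ends in 'b') but not "ab".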

  11. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches.

  12. Viscoplastic regularization of local damage models: revisited

    Science.gov (United States)

    Niazi, M. S.; Wisselink, H. H.; Meinders, T.

    2013-02-01

    Local damage models are known to produce pathological mesh-dependent results. Regularization techniques are therefore mandatory if local damage models are used for academic research or industrial applications. The viscoplastic framework can be used for regularization of local damage models. Despite the easy implementation of viscoplasticity, this method of regularization did not gain much popularity in comparison to non-local or gradient damage models. This work is an effort to further explore viscoplastic regularization for quasi-static problems, with a focus on ductile materials. Two different types of strain-rate hardening models, i.e. the power law (with a multiplicative strain-rate part) and the simplified Bergström-van Liempt (with an additive strain-rate part) models, are used in this study, together with the modified Lemaitre anisotropic damage model with a strain-rate dependency. It was found that the primary viscoplastic length scale is a function of the hardening and softening (damage) parameters and does not depend upon the prescribed strain rate, whereas the secondary length scale is a function of the strain rate. As damage grows, the effective regularization length gradually decreases. When the effective regularization length gets shorter than the element length, numerical results become mesh dependent again. This loss of objectivity cannot be eliminated, but the effect can be minimized by selecting a very fine mesh or by prescribing high deformation velocities.

  13. Dimensional analysis and group theory in astrophysics

    CERN Document Server

    Kurth, Rudolf

    2013-01-01

    Dimensional Analysis and Group Theory in Astrophysics describes how dimensional analysis, refined by mathematical regularity hypotheses, can be applied to purely qualitative physical assumptions. The book focuses on the continuous spectra of the stars and the mass-luminosity relationship. The text discusses the technique of dimensional analysis, covering both relativistic phenomena and stellar systems. The book also explains the fundamental conclusion of dimensional analysis, wherein the unknown functions shall be given certain specified forms. The Wien and Stefan-Boltzmann Laws can be si

  14. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla

    2017-10-25

    This article addresses improvements on the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that the estimation of the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which force by construction the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable than traditional sample covariance estimates for high-dimensional problems with a limited number of secondary data samples. The motivation behind this work is to understand the effect of ρ and to properly set its value so as to improve estimate conditioning while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by the presence of impulsive noises while inducing little loss when the noise is Gaussian. Based on asymptotic results brought by recent tools from random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under constant asymptotic false alarm rates. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
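The RTE itself is computed by a fixed-point iteration that shrinks a Tyler-type scatter estimate toward the identity. A sketch, where the trace normalization is one common convention and the choice ρ = 0.3 is arbitrary:

```python
import numpy as np

def regularized_tyler(X, rho, iters=50):
    """Fixed-point iteration for a regularized Tyler estimator (RTE):
    Sigma <- (1 - rho) * (p/n) * sum_i x_i x_i^T / (x_i^T Sigma^{-1} x_i)
             + rho * I,
    followed by a trace normalization (one common convention).
    X is an n x p zero-mean data matrix; rho in (0, 1] is the shrinkage."""
    n, p = X.shape
    Sigma = np.eye(p)
    for _ in range(iters):
        inv = np.linalg.inv(Sigma)
        q = np.einsum('ij,jk,ik->i', X, inv, X)     # x_i^T Sigma^{-1} x_i
        S = (X / q[:, None]).T @ X * (p / n)
        Sigma = (1 - rho) * S + rho * np.eye(p)
        Sigma *= p / np.trace(Sigma)                 # fix the overall scale
    return Sigma

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
Sigma = regularized_tyler(X, rho=0.3)
```

The shrinkage term ρI guarantees the iterates stay well conditioned even when n is close to p, which is the regime the article targets.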

  15. Regularities, Natural Patterns and Laws of Nature

    Directory of Open Access Journals (Sweden)

    Stathis Psillos

    2014-02-01

    Full Text Available. The goal of this paper is to sketch an empiricist metaphysics of laws of nature. The key idea is that there are regularities without regularity-enforcers. Differently put, there are natural laws without law-makers of a distinct metaphysical kind. This sketch will rely on the concept of a natural pattern and more significantly on the existence of a network of natural patterns in nature. The relation between a regularity and a pattern will be analysed in terms of mereology. Here is the road map. In section 2, I will briefly discuss the relation between empiricism and metaphysics, aiming to show that an empiricist metaphysics is possible. In section 3, I will offer arguments against stronger metaphysical views of laws. Then, in section 4 I will motivate nomic objectivism. In section 5, I will address the question ‘what is a regularity?’ and will develop a novel answer to it, based on the notion of a natural pattern. In section 6, I will raise the question: ‘what is a law of nature?’, the answer to which will be: a law of nature is a regularity that is characterised by the unity of a natural pattern.

  16. Mining High Utility Itemsets with Regular Occurrence

    Directory of Open Access Journals (Sweden)

    Komate Amphawan

    2016-09-01

    Full Text Available. High utility itemset mining (HUIM) plays an important role in the data mining community and in a wide range of applications. For example, in retail business it is used for finding sets of sold products that give high profit, low cost, etc. These itemsets can help improve marketing strategies, make promotions/advertisements, etc. However, since HUIM considers only the utility values of items/itemsets, it may not be sufficient for observing the product-buying behavior of customers, such as information related to “regular purchases of sets of products having a high profit margin”. To address this issue, the occurrence behavior of itemsets (in terms of regularity) was investigated simultaneously with their utility values. The problem of mining high utility itemsets with regular occurrence (MHUIR), i.e. finding sets of co-occurring items with high utility values and regular occurrence in a database, was then considered. An efficient single-pass algorithm, called MHUIRA, was introduced. A new modified utility-list structure, called NUL, was designed to efficiently maintain utility values and occurrence information and to increase the efficiency of computing the utility of itemsets. Experimental studies on real and synthetic datasets and complexity analyses are provided to show the efficiency of MHUIRA combined with NUL in terms of time and space usage for mining interesting itemsets based on regularity and utility constraints.
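As a toy illustration of the two constraints involved (not the MHUIRA algorithm or its NUL structure), one can compute an itemset's total utility and its regularity, defined here as the largest gap between consecutive occurrences:

```python
def utility_and_regularity(db, itemset):
    """Total utility and regularity of an itemset in a transaction database.
    db: list of transactions, each a dict mapping item -> utility value.
    Regularity is the maximum gap between consecutive occurrences, counting
    the stretches before the first and after the last occurrence."""
    occurrences, total_utility = [], 0
    for tid, tx in enumerate(db, start=1):
        if all(item in tx for item in itemset):
            occurrences.append(tid)
            total_utility += sum(tx[item] for item in itemset)
    if not occurrences:
        return 0, float('inf')
    gaps = [a - b for a, b in zip(occurrences + [len(db) + 1],
                                  [0] + occurrences)]
    return total_utility, max(gaps)

# Toy database: 3 transactions with per-item utilities (e.g. profits).
db = [{'a': 5, 'b': 3},
      {'a': 2},
      {'a': 1, 'b': 4}]
```

An itemset is "interesting" under MHUIR-style constraints when its total utility is high and its regularity (max gap) is below a user threshold.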

  17. Dimensional Analysis

    CERN Document Server

    Tan, Qingming

    2011-01-01

    Dimensional analysis is an essential scientific method and a powerful tool for solving problems in physics and engineering. This book starts by introducing the Pi Theorem, which is the theoretical foundation of dimensional analysis. It also provides ample and detailed examples of how dimensional analysis is applied to solving problems in various branches of mechanics. The book covers the extensive findings on explosion mechanics and impact dynamics contributed by the author's research group over the past forty years at the Chinese Academy of Sciences. The book is intended for advanced undergra

  18. Regularized multivariate regression models with skew-t error distributions

    KAUST Repository

    Chen, Lianfu

    2014-06-01

    We consider regularization of the parameters in multivariate linear regression models with the errors having a multivariate skew-t distribution. An iterative penalized likelihood procedure is proposed for constructing sparse estimators of both the regression coefficient and inverse scale matrices simultaneously. The sparsity is introduced through penalizing the negative log-likelihood by adding L1-penalties on the entries of the two matrices. Taking advantage of the hierarchical representation of skew-t distributions, and using the expectation conditional maximization (ECM) algorithm, we reduce the problem to penalized normal likelihood and develop a procedure to minimize the ensuing objective function. Using a simulation study the performance of the method is assessed, and the methodology is illustrated using a real data set with a 24-dimensional response vector. © 2014 Elsevier B.V.
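The L1-penalized subproblems in such procedures reduce, coordinate-wise, to soft-thresholding. As a minimal, generic illustration (a proximal-gradient solver for penalized least squares, not the paper's ECM algorithm and without the skew-t machinery):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, iters=2000):
    """ISTA minimization of 0.5*||y - X b||^2 + lam*||b||_1, the kind of
    L1-penalized subproblem that produces sparse coefficient estimates."""
    n, p = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant
    b = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ b - y)
        b = soft_threshold(b - step * grad, step * lam)
    return b

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 10))
b_true = np.zeros(10)
b_true[0], b_true[5] = 3.0, -2.0                # sparse ground truth
y = X @ b_true
b_hat = ista(X, y, lam=0.01)
```

With a sparse ground truth and a small penalty, the estimate recovers the nonzero coefficients and zeroes out the rest, which is the point of the L1 penalties in the paper.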

  19. Total Variation Regularization for Functions with Values in a Manifold

    KAUST Repository

    Lellmann, Jan

    2013-12-01

    While total variation is among the most popular regularizers for variational problems, its extension to functions with values in a manifold is an open problem. In this paper, we propose the first algorithm to solve such problems which applies to arbitrary Riemannian manifolds. The key idea is to reformulate the variational problem as a multilabel optimization problem with an infinite number of labels. This leads to a hard optimization problem which can be approximately solved using convex relaxation techniques. The framework can be easily adapted to different manifolds, including spheres and three-dimensional rotations, and allows one to obtain accurate solutions even with a relatively coarse discretization. With numerous examples we demonstrate that the proposed framework can be applied to variational models that incorporate chromaticity values, normal fields, or camera trajectories. © 2013 IEEE.

  20. Regularization in global sound equalization based on effort variation

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Sarris, John; Jacobsen, Finn

    2009-01-01

    Sound equalization in closed spaces can be significantly improved by generating propagating waves that are naturally associated with the geometry, as, for example, plane waves in rectangular enclosures. This paper presents a control approach termed effort variation regularization based on this idea. Effort variation equalization involves modifying the conventional cost function in sound equalization, which is based on minimizing least-squares reproduction errors, by adding a term that is proportional to the squared deviations between complex source strengths, calculated independently for the sources at each of the two walls perpendicular to the direction of propagation. Simulation results in a two-dimensional room of irregular shape and in a rectangular room with sources randomly distributed on two opposite walls demonstrate that the proposed technique leads to smaller global reproduction errors.
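The modified cost function can be sketched as a regularized least-squares problem. The per-wall split and the acoustic transfer matrices of the paper are omitted; penalizing deviations of all source strengths from their common mean is an illustrative simplification:

```python
import numpy as np

def effort_variation_ls(A, d, beta):
    """Source strengths q minimizing ||A q - d||^2 + beta*||q - mean(q)||^2.
    The deviation penalty is expressed with the mean-removing projector
    P = I - (1/M) 1 1^T, giving the closed form q = (A^H A + beta P)^{-1} A^H d.
    A toy stand-in for effort variation regularization, not the per-wall
    formulation of the paper."""
    m = A.shape[1]
    P = np.eye(m) - np.ones((m, m)) / m          # projects out the mean
    return np.linalg.solve(A.conj().T @ A + beta * P, A.conj().T @ d)

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 4))                  # plant matrix (illustrative)
d = rng.standard_normal(8)                       # desired field
q_ls = effort_variation_ls(A, d, beta=0.0)       # ordinary least squares
q_reg = effort_variation_ls(A, d, beta=1e6)      # heavily regularized
```

As β grows, the source strengths are driven toward a common value, which is the mechanism that promotes plane-wave-like excitation.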

  1. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Full Text Available. Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, being applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
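A fractional-order derivative is straightforward to apply in the frequency domain, which is the core of the proposed regularization term. A minimal sketch for a 1-D periodic signal, using a Riesz-type multiplier |ω|^α (the paper's exact multiplier and multidimensional setup may differ):

```python
import numpy as np

def fractional_derivative(f, alpha, L=2 * np.pi):
    """Riesz-type fractional derivative of order alpha for a periodic
    signal sampled on [0, L): multiply each Fourier coefficient by
    |omega|^alpha and transform back."""
    n = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular frequencies
    return np.fft.ifft(np.abs(omega) ** alpha * np.fft.fft(f)).real

x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
f = np.sin(2 * x)
g1 = fractional_derivative(f, 1.0)   # |omega| = 2 on this mode -> 2*sin(2x)
g2 = fractional_derivative(f, 2.0)   # -> 4*sin(2x)
```

Varying α continuously between 1 and 2 interpolates the multiplier between |ω| and |ω|², which is exactly the gradual diffusion-to-curvature transition the regularizer provides.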

  2. Regularization and Migration Policy in Europe

    Directory of Open Access Journals (Sweden)

    Philippe de Bruycker

    2001-05-01

    Full Text Available. The following pages present, in a general way, the contents of Regularization of illegal immigrants in the European Union, which includes a comparative synthesis and statistical information for each of the eight countries involved; a description of actions since the beginning of the year 2000; and a systematic analysis of the different categories of foreigners, the types of regularization carried out, and the rules that have governed these actions. In relation to regularization, the author considers the political coherence of the actions taken by the member states as well as how they relate to two ever more crucial aspects of immigration policy: the integration of legal resident immigrants and the fight against illegal immigration in the context of the control of migratory flows.

  3. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    Full Text Available. We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves a way to the design and analysis of online manifold regularization algorithms.

  4. Representing diffusion MRI in 5-D simplifies regularization and segmentation of white matter tracts.

    Science.gov (United States)

    Jonasson, Lisa; Bresson, Xavier; Thiran, Jean-Philippe; Wedeen, Van J; Hagmann, Patric

    2007-11-01

    We present a new five-dimensional (5-D) space representation of diffusion magnetic resonance imaging (dMRI) of high angular resolution. This 5-D space is basically a non-Euclidean space of position and orientation in which crossing fiber tracts can be clearly disentangled, in a way that is not possible in three-dimensional position space. This new representation provides many possibilities for processing and analysis, since classical methods for scalar images can be extended to higher dimensions even if the spaces are not Euclidean. In this paper, we show examples of how regularization and segmentation of dMRI are simplified with this new representation. The regularization serves not only to denoise but also to facilitate the segmentation task by using several scales, each scale representing a different level of resolution. We implement in five dimensions the Chan-Vese method for active contours without edges for the segmentation and the total variation functional for the regularization. The purpose of this paper is to explore the possibility of segmenting white matter structures directly as entirely separated bundles in this 5-D space. We present results from a synthetic model and results on real data of a human brain acquired with diffusion spectrum MRI, one of the high-angular-resolution dMRI modalities available. These results lead us to the conclusion that this new high-dimensional representation indeed simplifies the problems of segmentation and regularization.

  5. Numerical Regularization of Ill-Posed Problems.

    Science.gov (United States)

    1980-07-09

    CINCINNATI UNIV DEPT OF MATHEMATICAL SCIENCES. Numerical Regularization of Ill-Posed Problems (U), C. W. Groetsch, AFOSR-9-OO9... regularization and projection methods, Proc. Annual Conference of the Association for Computing Machinery (1973), 415-419. [7] A. Sard, Approximations based on... solving incorrectly posed problems, U.S.S.R. Computational Math. and Math. Phys. 14 (1974), 24-33. 4. L. J. Lardy, A series representation of...

  6. Regularity of difference equations on Banach spaces

    CERN Document Server

    Agarwal, Ravi P; Lizama, Carlos

    2014-01-01

    This work introduces readers to the topic of maximal regularity for difference equations. The authors systematically present the method of maximal regularity, outlining basic linear difference equations along with relevant results. They address recent advances in the field, as well as basic semigroup and cosine operator theories in the discrete setting. The authors also identify some open problems that readers may wish to take up for further research. This book is intended for graduate students and researchers in the area of difference equations, particularly those with advance knowledge of and interest in functional analysis.

  7. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S^2 × S^2, embedded in Euclidean space, and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N) ⊗ SU(N), i.e. functions on the manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 × N^2 matrix representations of the 4-algebra (and, as a byproduct, of the 3-algebra, which makes the regularization of S^3 also possible).

  8. Conservative regularization of compressible dissipationless two-fluid plasmas

    Science.gov (United States)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2018-02-01

    This paper extends our earlier approach [cf. A. Thyagaraja, Phys. Plasmas 17, 032503 (2010) and Krishnaswami et al., Phys. Plasmas 23, 022308 (2016)] to obtaining a priori bounds on enstrophy in neutral fluids and ideal magnetohydrodynamics. This results in a far-reaching local, three-dimensional, non-linear, dispersive generalization of a KdV-type regularization to compressible/incompressible dissipationless 2-fluid plasmas and models derived therefrom (quasi-neutral, Hall, and ideal MHD). It involves the introduction of vortical and magnetic "twirl" terms λ_l² (w_l + (q_l/m_l)B) × (∇ × w_l) in the ion/electron velocity equations (l = i, e), where w_l are vorticities. The cut-off lengths λ_l and number densities n_l must satisfy λ_l² n_l = C_l, where C_l are constants. A novel feature is that the "flow" current ∑_l q_l n_l v_l in Ampère's law is augmented by a solenoidal "twirl" current ∑_l ∇ × ∇ × (λ_l² j_flow,l). The resulting equations imply conserved linear and angular momenta and a positive definite swirl energy density E* which includes an enstrophic contribution ∑_l (1/2) λ_l² ρ_l w_l². It is shown that the equations admit a Hamiltonian-Poisson bracket formulation. Furthermore, singularities in ∇ × B are conservatively regularized by adding (λ_B²/2μ_0)(∇ × B)² to E*. Finally, it is proved that among regularizations that admit a Hamiltonian formulation and preserve the continuity equations along with the symmetries of the ideal model, the twirl term is unique and minimal in non-linearity and space derivatives of velocities.

  9. Regular Network Class Features Enhancement Using an Evolutionary Synthesis Algorithm

    Directory of Open Access Journals (Sweden)

    O. G. Monahov

    2014-01-01

    Full Text Available This paper investigates a solution of the optimization problem concerning the construction of diameter-optimal regular networks (graphs). Regular networks are of practical interest as graph-theoretical models of reliable communication networks in parallel supercomputer systems, and as a basis of structure in small-world models of optical and neural networks. It presents a new class of parametrically described regular networks: hypercirculant networks (graphs). An approach that uses evolutionary algorithms for the automatic generation of parametric descriptions of optimal hypercirculant networks is developed. Synthesis of optimal hypercirculant networks is based on optimal circulant networks with a smaller degree of nodes. To construct an optimal hypercirculant network, a template circulant network is taken from the known optimal families of circulant networks with the desired number of nodes and a smaller degree of nodes. Thus, a generating set of the circulant network is used as a generating subset of the hypercirculant network, and the missing generators are synthesized by an evolutionary algorithm that minimizes the diameter (average diameter) of the networks. A comparative analysis of the structural characteristics of hypercirculant, toroidal, and circulant networks is conducted. The advantage of hypercirculant networks in such structural characteristics as diameter, average diameter, and bisection width, at comparable costs in the number of nodes and the number of connections, is demonstrated. Notable is the advantage of hypercirculant networks of dimension three over four-dimensional tori: the optimization of hypercirculant networks of dimension three is more efficient than the introduction of an additional dimension for the corresponding toroidal structures. The paper also notes the better structural parameters of hypercirculant networks in comparison with previously reported iBT-networks.

  10. Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.

    Science.gov (United States)

    Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam

    2017-09-01

    Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, negative impact on manifold structure of the sparse constraint, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships, resulting in more effective manifold information. Furthermore, NMFRC adopts rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.
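    The multiplicative-update scheme underlying most NMF variants can be sketched in a few lines. This is the plain Lee-Seung Frobenius-norm NMF, not the NMFRC algorithm of the abstract (the rank regularizer, label constraints, and geodesic graph are omitted); the data matrix and rank are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy nonnegative data matrix (samples x features) and target rank.
    X = rng.random((30, 20))
    r = 5

    # Random nonnegative initialisation of both factors.
    W = rng.random((30, r))
    H = rng.random((r, 20))

    eps = 1e-9  # guards against division by zero
    err0 = np.linalg.norm(X - W @ H, 'fro') / np.linalg.norm(X, 'fro')

    for _ in range(200):
        # Multiplicative updates keep W and H nonnegative by construction.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)

    err = np.linalg.norm(X - W @ H, 'fro') / np.linalg.norm(X, 'fro')
    ```

    The updates monotonically decrease the Frobenius reconstruction error, which is the base objective that NMFRC augments with its rank constraint.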

  11. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  12. Regularization and the potential of effective field theory in nucleon-nucleon scattering

    International Nuclear Information System (INIS)

    Phillips, D.R.

    1998-04-01

    This paper examines the role that regularization plays in the definition of the potential used in effective field theory (EFT) treatments of the nucleon-nucleon interaction. The author considers NN scattering in S-wave channels at momenta well below the pion mass. In these channels (quasi-)bound states are present at energies well below the scale m_π²/M expected from naturalness arguments. He asks whether, in the presence of such a shallow bound state, there is a regularization scheme which leads to an EFT potential that is both useful and systematic. In general, if a low-lying bound state is present, then cutoff regularization leads to an EFT potential which is useful but not systematic, and dimensional regularization with minimal subtraction leads to one which is systematic but not useful. The recently proposed technique of dimensional regularization with power-law divergence subtraction allows the definition of an EFT potential which is both useful and systematic.

  13. Regularized Data Assimilation and Fusion of non-Gaussian States Exhibiting Sparse Prior in Transform Domains

    Science.gov (United States)

    Ebtehaj, M.; Foufoula, E.

    2012-12-01

    Improved estimation of geophysical state variables in a noisy environment from down-sampled observations and background model forecasts has been the subject of growing research in the past decades. Often the number of degrees of freedom in high-dimensional non-Gaussian natural states is quite small compared to their ambient dimensionality, a property often revealed as a sparse representation in an appropriately chosen domain. Aiming to increase hydrometeorological forecast skill, and motivated by the wavelet-domain sparsity of some land-surface geophysical states, a new framework is presented that recasts the classical variational data assimilation/fusion (DA/DF) problem via L_1 regularization in the wavelet domain. Our results suggest that proper regularization can lead to more accurate recovery of a wide range of smooth/non-smooth geophysical states exhibiting remarkable non-Gaussian features. The promise of the proposed framework is demonstrated in multi-sensor satellite and land-based precipitation data fusion, while the regularized DA is performed on the heat equation in a 4D-VAR context, using sparse regularization in the wavelet domain. Figure: top panel, noisy observations of the linear advection-diffusion equation at five consecutive snapshots; middle panel, classical 4D-VAR; bottom panel, L_1-regularized 4D-VAR with improved results.
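    The core idea here, that a signal sparse in a transform domain can be recovered by L_1-style shrinkage of its transform coefficients, can be sketched with a one-level Haar transform and soft thresholding. This toy sketch is not the paper's 4D-VAR framework; the signal, noise level, and threshold are illustrative assumptions:

    ```python
    import numpy as np

    def haar(x):
        # One-level orthonormal Haar transform of an even-length signal.
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
        return a, d

    def ihaar(a, d):
        # Inverse of the one-level Haar transform.
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        return x

    def soft(v, t):
        # Soft thresholding: the proximal operator of the l1 penalty.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    rng = np.random.default_rng(1)
    s = np.linspace(0.0, 1.0, 256)
    clean = np.where(s < 0.5, 0.0, 1.0)          # piecewise-constant state
    noisy = clean + 0.1 * rng.standard_normal(s.size)

    a, d = haar(noisy)
    denoised = ihaar(a, soft(d, 0.2))
    ```

    Shrinking the detail coefficients suppresses the noise while the (sparse) signal content survives, which is the mechanism the L_1-regularized DA exploits in the wavelet domain.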

  14. A new regularity-based algorithm for characterizing heterogeneities from digitized core image

    Science.gov (United States)

    Gaci, Said; Zaourar, Naima; Hachay, Olga

    2014-05-01

    The two-dimensional multifractional Brownian motion (2D-mBm) is receiving an increasing interest in image processing. However, one difficulty inherent to this fractal model is the estimation of its local Hölderian regularity function. In this paper, we suggest a new estimator of the local Hölder exponent of 2D-mBm paths. The suggested algorithm has been first tested on synthetic 2D-mBm paths, then implemented on digitized image data of a core extracted from an Algerian borehole. The obtained regularity map shows a clear correlation with the geological features observed on the investigated core. These lithological discontinuities are reflected by local variations of the Hölder exponent value. However, no clear relationship can be drawn between regularity and digitized data. To conclude, the suggested algorithm may be a powerful tool for exploring heterogeneities from core images using the regularity exponents. Keywords: core image, two-dimensional multifractional Brownian motion, fractal, regularity.
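    A much-simplified global version of such regularity estimation fits the scaling of increment variances; for ordinary Brownian motion the estimated exponent should be near 0.5. This is a sketch of the general idea only, not the paper's local Hölder estimator for 2D-mBm paths:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Ordinary Brownian motion (Hurst exponent H = 0.5) as a test path.
    path = np.cumsum(rng.standard_normal(4096))

    def hurst_estimate(x):
        # For fBm, var of lag-k increments scales as k^(2H); fit the log-log slope.
        lags = np.array([1, 2, 4, 8, 16])
        v = np.array([np.var(x[l:] - x[:-l]) for l in lags])
        slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
        return slope / 2.0

    H = hurst_estimate(path)
    ```

    A local version of this fit, computed in a sliding window, gives a crude regularity map of the kind the abstract describes.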

  15. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information.

  16. The geometric β-function in curved space-time under operator regularization

    Energy Technology Data Exchange (ETDEWEB)

    Agarwala, Susama [Mathematical Institute, Oxford University, Oxford OX2 6GG (United Kingdom)

    2015-06-15

    In this paper, I compare the generators of the renormalization group flow, or the geometric β-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric β-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow to conformally coupled scalar-field theories on the same manifolds. The geometric β-function in this case is not defined.

  17. The geometric β-function in curved space-time under operator regularization

    International Nuclear Information System (INIS)

    Agarwala, Susama

    2015-01-01

    In this paper, I compare the generators of the renormalization group flow, or the geometric β-functions, for dimensional regularization and operator regularization. I then extend the analysis to show that the geometric β-function for a scalar field theory on a closed compact Riemannian manifold is defined on the entire manifold. I then extend the analysis to find the generator of the renormalization group flow to conformally coupled scalar-field theories on the same manifolds. The geometric β-function in this case is not defined

  18. On regular riesz operators | Raubenheimer | Quaestiones ...

    African Journals Online (AJOL)

    The r-asymptotically quasi finite rank operators on Banach lattices are examples of regular Riesz operators. We characterise Riesz elements in a subalgebra of a Banach algebra in terms of Riesz elements in the Banach algebra. This enables us to characterise r-asymptotically quasi finite rank operators in terms of adjoint ...

  19. The canonical controller and its regularity

    NARCIS (Netherlands)

    Willems, Jan C.; Belur, Madhu N.; Anak Agung Julius, A.A.J.; Trentelman, Harry L.

    2003-01-01

    This paper deals with properties of canonical controllers. We first specify the behavior that they implement. It follows that a canonical controller implements the desired controlled behavior if and only if the desired behavior is implementable. We subsequently investigate the regularity of the

  20. On the regularization procedure in classical electrodynamics

    International Nuclear Information System (INIS)

    Yaremko, Yu

    2003-01-01

    We consider the self-action problem in classical electrodynamics. A strict geometrical sense of commonly used renormalization of mass is made. A regularization procedure is proposed which relies on energy-momentum and angular momentum balance equations. We correct the expression for angular momentum tensor obtained by us in a previous paper (2002 J. Phys. A: Math. Gen. 35 831)

  1. Tikhonov Regularization and Total Least Squares

    DEFF Research Database (Denmark)

    Golub, G. H.; Hansen, Per Christian; O'Leary, D. P.

    2000-01-01

    formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that...
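    For the least squares side of this comparison, the standard-form Tikhonov solution has a well-known SVD filter-factor form that agrees with the regularized normal equations. A minimal sketch, with a random test matrix assumed purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((40, 10))
    b = rng.standard_normal(40)
    lam = 0.5

    # SVD filter-factor form of the Tikhonov solution.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)        # filter factors damping small singular values
    x_svd = Vt.T @ (f * (U.T @ b) / s)

    # Equivalent regularized normal equations: (A^T A + lam^2 I) x = A^T b.
    x_ne = np.linalg.solve(A.T @ A + lam**2 * np.eye(10), A.T @ b)
    ```

    The total least squares variant analyzed in the paper perturbs A as well as b, which this ordinary-least-squares sketch does not capture.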

  2. Deconvolution and Regularization with Toeplitz Matrices

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    2002-01-01

    of these discretized deconvolution problems, with emphasis on methods that take the special structure of the matrix into account. Wherever possible, analogies to classical DFT-based deconvolution problems are drawn. Among other things, we present direct methods for regularization with Toeplitz matrices, and we show...

  3. On Comparison of Adaptive Regularization Methods

    DEFF Research Database (Denmark)

    Sigurdsson, Sigurdur; Larsen, Jan; Hansen, Lars Kai

    2000-01-01

    Modeling with flexible models, such as neural networks, requires careful control of the model complexity and generalization ability of the resulting model which finds expression in the ubiquitous bias-variance dilemma. Regularization is a tool for optimizing the model structure reducing variance ...

  4. Regular and context-free nominal traces

    DEFF Research Database (Denmark)

    Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca

    2017-01-01

    Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...

  5. Complexity in union-free regular languages

    Czech Academy of Sciences Publication Activity Database

    Jirásková, G.; Masopust, Tomáš

    2011-01-01

    Roč. 22, č. 7 (2011), s. 1639-1653 ISSN 0129-0541 Institutional research plan: CEZ:AV0Z10190503 Keywords: Union-free regular language * one-cycle-free-path automaton * descriptional complexity Subject RIV: BA - General Mathematics Impact factor: 0.379, year: 2011 http://www.worldscinet.com/ijfcs/22/2207/S0129054111008933.html

  6. One-loop counterterms for dimensionally reduced quantum gravity

    International Nuclear Information System (INIS)

    Atkatz, D.

    1980-01-01

    The technique of regularization by dimensional reduction is applied to source-free quantum gravity. The one-loop counterterms for the effective gravity-matter system are calculated in the background field formalism. The ersatz matter fields which arise in this regularization scheme are found to have no effect on the renormalizability of the theory. (orig.)

  7. Six-dimensional Yang black holes in dilaton gravity

    International Nuclear Information System (INIS)

    Abbott, Michael C.; Lowe, David A.

    2008-01-01

    We study the six-dimensional dilaton gravity Yang black holes of Bergshoeff, Gibbons and Townsend, which carry (1,-1) charge in SU(2)xSU(2) gauge group. We find what values of the asymptotic parameters (mass and scalar charge) lead to a regular horizon, and show that there are no regular solutions with an extremal horizon

  8. PS-Regular Sets in Topology and Generalized Topology

    Directory of Open Access Journals (Sweden)

    Ankit Gupta

    2014-01-01

    Full Text Available We define and study a new class of regular sets called PS-regular sets. Properties of these sets are investigated for topological spaces and generalized topological spaces. Decompositions of regular open sets and regular closed sets are provided using PS-regular sets. Semiconnectedness is characterized by using PS-regular sets. PS-continuity and almost PS-continuity are introduced and investigated.

  9. Tetravalent one-regular graphs of order 4p2

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p2, where p is a prime, are classified.

  10. Time Traveling Regularization for Inverse Heat Transfer Problems

    Directory of Open Access Journals (Sweden)

    Elisan dos Santos Magalhães

    2018-02-01

    Full Text Available This work presents a technique called Time Traveling Regularization (TTR applied to an optimization technique in order to solve ill-posed problems. This new methodology does not interfere in the minimization technique process. The Golden Section method together with TTR are applied only to the objective function which will be minimized. It consists of finding an ideal timeline that minimizes an objective function in a defined future time step. In order to apply the proposed methodology, inverse heat conduction problems were studied. Controlled experiments were performed on 5052 aluminum and AISI 304 stainless steel samples to validate the proposed technique. One-dimensional and three-dimensional heat input experiments were carried out for the 5052 aluminum and AISI 304 stainless steel samples, respectively. The Sequential Function Specification Method (SFSM was also used to be compared with the results of heat flux obtained by TTR. The estimated heat flux presented a good agreement when compared with experimental values and those estimated by SFSM. Moreover, TTR presented lower residuals than the SFSM.
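    The golden-section search that the abstract pairs with TTR can be sketched generically; the quadratic objective below is a stand-in for illustration, not the inverse heat-conduction functional:

    ```python
    import math

    def golden_section_min(f, a, b, tol=1e-8):
        # Golden-section search for the minimizer of a unimodal f on [a, b].
        invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi, about 0.618
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                # Minimum lies in [a, d]: shrink from the right.
                b, d = d, c
                c = b - invphi * (b - a)
            else:
                # Minimum lies in [c, b]: shrink from the left.
                a, c = c, d
                d = a + invphi * (b - a)
        return (a + b) / 2.0

    x_star = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
    ```

    Each iteration shrinks the bracket by the golden ratio, so only one new function evaluation per step is needed in an evaluation-caching implementation.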

  11. Descriptor Learning via Supervised Manifold Regularization for Multioutput Regression.

    Science.gov (United States)

    Zhen, Xiantong; Yu, Mengyang; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo

    2017-09-01

    Multioutput regression has recently shown great ability to solve challenging problems in both computer vision and medical image analysis. However, due to the huge image variability and ambiguity, it is fundamentally challenging to handle the highly complex input-target relationship of multioutput regression, especially with indiscriminate high-dimensional representations. In this paper, we propose a novel supervised descriptor learning (SDL) algorithm for multioutput regression, which can establish discriminative and compact feature representations to improve the multivariate estimation performance. The SDL is formulated as generalized low-rank approximations of matrices with a supervised manifold regularization. The SDL is able to simultaneously extract discriminative features closely related to multivariate targets and remove irrelevant and redundant information by transforming raw features into a new low-dimensional space aligned to targets. The resulting discriminative yet compact descriptor largely reduces the variability and ambiguity for multioutput regression, which enables more accurate and efficient multivariate estimation. We conduct extensive evaluation of the proposed SDL on both synthetic data and real-world multioutput regression tasks for both computer vision and medical image analysis. Experimental results have shown that the proposed SDL can achieve high multivariate estimation accuracy on all tasks and largely outperforms state-of-the-art algorithms. Our method establishes a novel SDL framework for multioutput regression, which can be widely used to boost the performance in different applications.

  12. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    Full Text Available The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength serves as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for the reconstruction in NAH.

  13. Maximum-likelihood constrained regularized algorithms: an objective criterion for the determination of regularization parameters

    Science.gov (United States)

    Lanteri, Henri; Roche, Muriel; Cuevas, Olga; Aime, Claude

    1999-12-01

    We propose regularized versions of Maximum Likelihood algorithms for Poisson processes with a non-negativity constraint. For such processes, the best-known (non-regularized) algorithm is that of Richardson-Lucy, extensively used for astronomical applications. Regularization is necessary to prevent an amplification of the noise during the iterative reconstruction; this can be done either by limiting the iteration number or by introducing a penalty term. In this Communication, we focus our attention on explicit regularization using Tikhonov (Identity and Laplacian operator) or entropy terms (Kullback-Leibler and Csiszar divergences). The algorithms are established from the Kuhn-Tucker first order optimality conditions for the minimization of the Lagrange function and from the method of successive substitutions. The algorithms may be written in a `product form'. Numerical illustrations are given for simulated images corrupted by photon noise. The effects of the regularization are shown in the Fourier plane. The tests we have made indicate that a noticeable improvement of the results may be obtained for some of these explicitly regularized algorithms. We also show that a comparison with a Wiener filter can give the optimal regularizing conditions (operator and strength).
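    The (non-regularized) Richardson-Lucy iteration the authors start from can be sketched on a small 1-D deconvolution problem; the blur kernel and source are illustrative assumptions, and the penalty terms discussed in the abstract would enter through a modified denominator in the update:

    ```python
    import numpy as np

    # Small 1-D deconvolution problem: A is a circulant blur, y the observation.
    n = 64
    psf = np.array([0.25, 0.5, 0.25])
    A = np.zeros((n, n))
    for i in range(n):
        for k, w in enumerate(psf):
            A[i, (i + k - 1) % n] = w

    x_true = np.zeros(n)
    x_true[20:28] = 1.0          # box-shaped source
    y = A @ x_true               # noise-free blurred observation

    # Richardson-Lucy iteration: multiplicative, so iterates stay nonnegative.
    x = np.ones(n)
    denom = A.T @ np.ones(n)     # column sums of A
    for _ in range(200):
        x *= (A.T @ (y / (A @ x + 1e-12))) / denom
    ```

    Because the update is a ratio of nonnegative quantities, the non-negativity constraint is enforced automatically, which is what makes Richardson-Lucy attractive for photon-limited imaging.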

  14. Testing the Equivalence of Regular Languages

    Directory of Open Access Journals (Sweden)

    Marco Almeida

    2009-07-01

    Full Text Available The minimal deterministic finite automaton is generally used to determine regular language equality. Antimirov and Mosses proposed a rewrite system for deciding the equivalence of regular expressions, of which Almeida et al. presented an improved variant. Hopcroft and Karp proposed an almost linear algorithm for testing the equivalence of two deterministic finite automata that avoids minimisation. In this paper we improve the best-case running time, present an extension of this algorithm to non-deterministic finite automata, and establish a relationship between this algorithm and the one proposed in Almeida et al. We also present some experimental comparative results. All these algorithms are closely related to the recent coalgebraic approach to automata proposed by Rutten.
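    The Hopcroft-Karp idea, merging the two start states and propagating mergers along transitions without ever minimising, can be sketched with a union-find structure. The DFA encoding below is an illustrative assumption, not the paper's data structures:

    ```python
    def equivalent(dfa1, dfa2):
        """Hopcroft-Karp style equivalence test for two complete DFAs.

        A DFA is (start, accepting_set, delta) with delta[state][symbol] -> state.
        The state sets of the two automata are assumed disjoint.
        """
        start1, acc1, d1 = dfa1
        start2, acc2, d2 = dfa2
        accepting = acc1 | acc2
        delta = {**d1, **d2}
        alphabet = set()
        for trans in delta.values():
            alphabet |= set(trans)

        parent = {}                     # union-find over the combined state set
        def find(s):
            parent.setdefault(s, s)
            root = s
            while parent[root] != root:
                root = parent[root]
            while parent[s] != root:    # path compression
                parent[s], s = root, parent[s]
            return root

        stack = [(start1, start2)]
        while stack:
            p, q = stack.pop()
            if (p in accepting) != (q in accepting):
                return False            # a merged pair disagrees on acceptance
            rp, rq = find(p), find(q)
            if rp == rq:
                continue
            parent[rp] = rq             # merge the two classes
            for a in alphabet:
                stack.append((delta[p][a], delta[q][a]))
        return True

    # Two DFAs for "even number of a's" (one redundant) and one for "odd".
    even_a = ('e', {'e'}, {'e': {'a': 'o', 'b': 'e'},
                           'o': {'a': 'e', 'b': 'o'}})
    even_a_big = (1, {1, 3}, {1: {'a': 2, 'b': 3}, 2: {'a': 3, 'b': 4},
                              3: {'a': 4, 'b': 1}, 4: {'a': 1, 'b': 2}})
    odd_a = ('E', {'O'}, {'E': {'a': 'O', 'b': 'E'},
                          'O': {'a': 'E', 'b': 'O'}})
    ```

    Each state pair is processed at most once per symbol, which is the source of the almost-linear running time (with union by rank, inverse-Ackermann overall).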

  15. Modeling Regular Replacement for String Constraint Solving

    Science.gov (United States)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in user input sanitation of software systems often lead to vulnerabilities. Among them many are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FST. A compact representation of FST is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications
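    The different replacement semantics the paper models can be illustrated with Python's re module, where greedy and reluctant quantifiers already yield different substitution results:

    ```python
    import re

    # Greedy 'a+' consumes the whole run of a's in a single match...
    greedy = re.sub(r'a+', 'X', 'baaab')       # one replacement

    # ...while reluctant 'a+?' matches one 'a' at a time.
    reluctant = re.sub(r'a+?', 'X', 'baaab')   # three replacements
    ```

    A string constraint solver such as SUSHI must model each of these semantics precisely, since a sanitizer written with one quantifier can be exploitable where the other would be safe.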

  16. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extremes Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records It emphasizes the core primacy of three topics necessary for understanding extremes the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  17. Neural Classifier Construction using Regularization, Pruning

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan

    1998-01-01

    In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction with optimal brain damage pruning, a test error estimate is used to select the network architecture. The scheme is evaluated on four classification problems.

  18. Basic analysis of regularized series and products

    CERN Document Server

    Jorgenson, Jay A

    1993-01-01

    Analytic number theory and part of the spectral theory of operators (differential, pseudo-differential, elliptic, etc.) are being merged under a more general analytic theory of regularized products of certain sequences satisfying a few basic axioms. The most basic examples consist of the sequence of natural numbers, the sequence of zeros with positive imaginary part of the Riemann zeta function, and the sequence of eigenvalues, say of a positive Laplacian on a compact or certain cases of non-compact manifolds. The resulting theory is applicable to ergodic theory and dynamical systems; to the zeta and L-functions of number theory or representation theory and modular forms; to Selberg-like zeta functions; and to the theory of regularized determinants familiar in physics and other parts of mathematics. Aside from presenting a systematic account of widely scattered results, the theory also provides new results. One part of the theory deals with complex analytic properties, and another part deals with Fourier analys...

  19. Describing chaotic attractors: Regular and perpetual points

    Science.gov (United States)

    Dudkowski, Dawid; Prasad, Awadhesh; Kapitaniak, Tomasz

    2018-03-01

    We study the concepts of regular and perpetual points for describing the behavior of chaotic attractors in dynamical systems. The idea of these points, which have been recently introduced to theoretical investigations, is thoroughly discussed and extended into new types of models. We analyze the correlation between regular and perpetual points, as well as their relation with phase space, showing the potential usefulness of both types of points in the qualitative description of co-existing states. The ability of perpetual points in finding attractors is indicated, along with its potential cause. The location of chaotic trajectories and sets of considered points is investigated and the study on the stability of systems is shown. The statistical analysis of the observing desired states is performed. We focus on various types of dynamical systems, i.e., chaotic flows with self-excited and hidden attractors, forced mechanical models, and semiconductor superlattices, exhibiting the universality of appearance of the observed patterns and relations.

  20. Preconditioners for regularized saddle point matrices

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe

    2011-01-01

    Roč. 19, č. 2 (2011), s. 91-112 ISSN 1570-2820 Institutional research plan: CEZ:AV0Z30860518 Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering Subject RIV: BA - General Mathematics Impact factor: 0.533, year: 2011 http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml

  1. Regularizing and Optimizing LSTM Language Models

    OpenAIRE

    Merity, Stephen; Keskar, Nitish Shirish; Socher, Richard

    2017-01-01

    Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent r...

  2. On bigraded regularities of Rees algebra

    Indian Academy of Sciences (India)

    For any homogeneous ideal I in K[x_1, ..., x_n] of analytic spread ℓ, we show that for the Rees algebra R(I), reg^syz_(0,1)(R(I)) = reg^T_(0,1)(R(I)). We compute a formula for the (0, 1)-regularity of R(I), which is a bigraded analog of Theorem 1.1 of Aramova and Herzog (Am. J. Math. 122(4) ...

  3. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
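As an illustration of the l1-minimization idea above, the following is a minimal iterative shrinkage-thresholding (ISTA) sketch; the paper's SpaRSA solver is a more refined relative of this scheme, and the dictionary, sparsity pattern, and parameters here are synthetic assumptions rather than the paper's experimental setup.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=3000):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the fidelity term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

# Synthetic example: a 3-sparse "force" coefficient vector in a random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 30))
x_true = np.zeros(30)
x_true[[3, 11, 27]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y)
support = set(np.flatnonzero(np.abs(x_hat) > 0.5))
```

The soft-thresholding step is what drives most coefficients exactly to zero, so the number of active basis functions falls out of the optimization instead of being chosen in advance.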

  4. Scattering amplitudes of regularized bosonic strings

    Science.gov (United States)

    Ambjørn, J.; Makeenko, Y.

    2017-10-01

    We compute scattering amplitudes of the regularized bosonic Nambu-Goto string in the mean-field approximation, disregarding fluctuations of the Lagrange multiplier and of an independent metric about their mean values. We use the previously introduced Lilliputian scaling limit to recover the Regge behavior of the amplitudes with the usual linear Regge trajectory in space-time dimensions d > 2. We demonstrate the stability of this minimum of the effective action under fluctuations for d < 26.

  5. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

    We present the results of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods: majority voting with a theory-compliant backoff strategy, and MACE, an unsupervised system to choose the most likely sense from all the annotations.

  6. Regularized robust coding for face recognition.

    Science.gov (United States)

    Yang, Meng; Zhang, Lei; Yang, Jian; Zhang, David

    2013-05-01

    Recently the sparse representation based classification (SRC) has been proposed for robust face recognition (FR). In SRC, the testing image is coded as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be effective enough to describe the coding residual in practical FR systems. Meanwhile, the sparsity constraint on the coding coefficients makes the computational cost of SRC very high. In this paper, we propose a new face coding model, namely regularized robust coding (RRC), which can robustly regress a given signal with regularized regression coefficients. By assuming that the coding residual and the coding coefficient are respectively independent and identically distributed, the RRC seeks a maximum a posteriori solution of the coding problem. An iteratively reweighted regularized robust coding (IR³C) algorithm is proposed to solve the RRC model efficiently. Extensive experiments on representative face databases demonstrate that the RRC is much more effective and efficient than state-of-the-art sparse representation based methods in dealing with face occlusion, corruption, lighting, and expression changes, etc.
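The iteratively reweighted idea behind IR³C can be sketched, in a deliberately simplified and hypothetical form, as iteratively reweighted least squares with Huber weights; the actual RRC weight function comes from its MAP formulation and differs from the one used here.

```python
import numpy as np

def irls_huber(X, y, delta=1.0, n_iter=50):
    """Iteratively reweighted least squares with Huber weights.

    Each pass solves a weighted least-squares problem in which samples
    with large residuals are down-weighted, mimicking the iteratively
    reweighted scheme used by robust coding methods."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Line y = 2x + 1 with a few gross outliers (synthetic data).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(50)
y[:5] += 10.0                                    # corrupt five samples
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # pulled away by the outliers
beta_robust = irls_huber(X, y, delta=0.1)        # essentially ignores them
```

The down-weighting of large residuals is the mechanism by which such schemes tolerate occlusion and corruption: corrupted pixels end up with near-zero influence on the fit.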

  7. Regular handicap tournaments of high degree

    Directory of Open Access Journals (Sweden)

    Dalibor Froncek

    2016-09-01

    Full Text Available A handicap distance antimagic labeling of a graph $G=(V,E)$ with $n$ vertices is a bijection ${f}: V\to \{ 1,2,\ldots ,n\} $ with the property that ${f}(x_i)=i$ and the sequence of the weights $w(x_1),w(x_2),\ldots,w(x_n)$ (where $w(x_i)=\sum\limits_{x_j\in N(x_i)}f(x_j)$) forms an increasing arithmetic progression with difference one. A graph $G$ is a {\em handicap distance antimagic graph} if it allows a handicap distance antimagic labeling. We construct $(n-7)$-regular handicap distance antimagic graphs for every order $n\equiv2\pmod4$ with a few small exceptions. This result complements results by Kov\'a\v{r}, Kov\'a\v{r}ov\'a, and Krajc~[P. Kov\'a\v{r}, T. Kov\'a\v{r}ov\'a, B. Krajc, On handicap labeling of regular graphs, manuscript, personal communication, 2016] who found such graphs with regularities smaller than $n-7$.

  8. Ambiguity and regularization in parallel MRI.

    Science.gov (United States)

    Gol, Derya; Potter, Lee C

    2011-01-01

    In this paper, we formulate parallel magnetic resonance imaging (pMRI) as a multichannel blind deconvolution problem with subsampling. First, the model allows formal characterization of image solutions consistent with data obtained by uniform subsampling of k-space. Second, the model allows analysis of the minimum set of required calibration data. Third, the filter bank formulation provides analysis of the sufficient sizes of interpolation kernels in widely used reconstruction techniques. Fourth, the model suggests principled development of regularization terms to fight ambiguity and ill-conditioning; specifically, subspace regularization is adapted from the blind image super-resolution work of Sroubek et al. [11]. Finally, characterization of the consistent set of image solutions leads to a cautionary comment on L1 regularization for the peculiar class of piecewise-constant images. Thus, it is proposed that the analysis of the subsampled blind deconvolution task provides insight into both the multiply determined nature of the pMRI task and possible design strategies for sampling and reconstruction.

  9. Solution path for manifold regularized semisupervised classification.

    Science.gov (United States)

    Wang, Gang; Wang, Fei; Chen, Tao; Yeung, Dit-Yan; Lochovsky, Frederick H

    2012-04-01

    Traditional learning algorithms use only labeled data for training. However, labeled examples are often difficult or time consuming to obtain since they require substantial human labeling efforts. On the other hand, unlabeled data are often relatively easy to collect. Semisupervised learning addresses this problem by using large quantities of unlabeled data with labeled data to build better learning algorithms. In this paper, we use the manifold regularization approach to formulate the semisupervised learning problem where a regularization framework which balances a tradeoff between loss and penalty is established. We investigate different implementations of the loss function and identify the methods which have the least computational expense. The regularization hyperparameter, which determines the balance between loss and penalty, is crucial to model selection. Accordingly, we derive an algorithm that can fit the entire path of solutions for every value of the hyperparameter. Its computational complexity after preprocessing is quadratic only in the number of labeled examples rather than the total number of labeled and unlabeled examples.

  10. Regularizations: different recipes for identical situations

    International Nuclear Information System (INIS)

    Gambin, E.; Lobo, C.O.; Battistel, O.A.

    2004-03-01

    We present a discussion where the choice of the regularization procedure and the routing of the internal lines momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. These are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are expressed in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation of the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal lines momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)

  11. Pole masses of quarks in dimensional reduction

    International Nuclear Information System (INIS)

    Avdeev, L.V.; Kalmykov, M.Yu.

    1997-01-01

    Pole masses of quarks in quantum chromodynamics are calculated to the two-loop order in the framework of the regularization by dimensional reduction. For the diagram with a light quark loop, the non-Euclidean asymptotic expansion is constructed with the external momentum on the mass shell of a heavy quark

  12. Explicit solutions of one-dimensional total variation problem

    Science.gov (United States)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2015-09-01

    This work deals with denoising of a one-dimensional signal corrupted by additive white Gaussian noise. A common way to solve the problem is to utilize the total variation (TV) method. Basically, the TV regularization minimizes a functional consisting of the sum of fidelity and regularization terms. We derive explicit solutions of the one-dimensional TV regularization problem that help us to restore noisy signals with a direct, non-iterative algorithm. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of noisy signals.
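A minimal sketch of 1-D TV denoising for comparison: this is not the paper's explicit non-iterative solution; it minimizes the same fidelity-plus-TV functional, but with a smoothed TV term and a generic quasi-Newton solver, on synthetic data.

```python
import numpy as np
from scipy.optimize import minimize

def tv(u):
    """Total variation of a 1-D signal: sum of absolute first differences."""
    return np.sum(np.abs(np.diff(u)))

def tv_denoise(y, lam=0.5, eps=1e-6):
    """Minimize 0.5*||u - y||^2 + lam * sum(sqrt(diff(u)^2 + eps)).

    The smoothing eps makes the TV term differentiable, so a standard
    L-BFGS-B solver applies; as eps -> 0 this approaches the TV functional."""
    def objective(u):
        d = np.diff(u)
        return 0.5 * np.sum((u - y) ** 2) + lam * np.sum(np.sqrt(d * d + eps))
    return minimize(objective, y.copy(), method="L-BFGS-B").x

# Piecewise-constant step signal plus white Gaussian noise.
rng = np.random.default_rng(2)
step = np.where(np.arange(100) < 50, 0.0, 1.0)
y = step + 0.1 * rng.standard_normal(100)
u = tv_denoise(y, lam=0.5)
```

TV regularization favors piecewise-constant reconstructions, so the noise between the two plateaus is flattened while the jump itself is largely preserved.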

  13. Scientific data interpolation with low dimensional manifold model

    International Nuclear Information System (INIS)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.

    2017-01-01

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  14. Scientific data interpolation with low dimensional manifold model

    Science.gov (United States)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  15. Dimensional Analysis

    Indian Academy of Sciences (India)

    Dimensional analysis is a useful tool which finds important applications in physics and engineering. It is most effective when there exists a maximal number of dimensionless quantities constructed out of the relevant physical variables. Though a complete theory of dimensional analysis was developed way back in 1914 in a.

  16. Dimensional Analysis

    Indian Academy of Sciences (India)

    to understand and quite straightforward to use. Dimensional analysis is a topic which every student of science encounters in elementary physics courses. The basics of this topic are taught and learnt quite hurriedly (and forgotten fairly quickly thereafter!) It does not generally receive the attention and the respect it deserves ...

  17. Brandteknisk Dimensionering

    DEFF Research Database (Denmark)

    Carlsen, Bent Erik; Jensen, Bjarne Chr.; Olesen, Frits Bolonius

    basis for assessing whether - and if so, how - fire-safety design of load-bearing structures could be introduced into DIF's structural design codes, and it contains an outline proposal for the principles by which this could be done. Beyond that, the committee has, in four data papers (the report's appendices 1...

  18. Boundary Equations and Regularity Theory for Geometric Variational Systems with Neumann Data

    Science.gov (United States)

    Schikorra, Armin

    2018-02-01

    We study boundary regularity of maps from two-dimensional domains into manifolds which are critical with respect to a generic conformally invariant variational functional and which, at the boundary, intersect perpendicularly with a support manifold. For example, harmonic maps, or H-surfaces, with a partially free boundary condition. In the interior it is known, by the celebrated work of Rivière, that these maps satisfy a system with an antisymmetric potential, from which one can derive the interior regularity of the solution. Avoiding a reflection argument, we show that these maps satisfy along the boundary a system of equations which also exhibits a (nonlocal) antisymmetric potential that combines information from the interior potential and the geometric Neumann boundary condition. We then proceed to show boundary regularity for solutions to such systems.

  19. Instabilities of the zeta-function regularization in the presence of symmetries

    International Nuclear Information System (INIS)

    Rasetti, M.

    1980-01-01

    The zeta-function regularization method requires the calculation of the spectrum-generating function zeta sub(M) of a generic real, elliptic, self-adjoint differential operator on a manifold M. An asymptotic expansion for zeta sub(M) is given for the class of all symmetric spaces of rank 1, sufficient to compute its Mellin transform and deduce the regularization of the corresponding quadratic path integrals. The summability properties of the generalized zeta-function introduce physical instabilities in the system as negative specific heat. The technique (and the instability as well) is shown to hold - under the assumed symmetry properties - in any dimension (preserving both the global and local properties of the manifold, as opposed to the dimensional regularization, where one adds extra flat dimensions only). (author)

  20. Regularity for the evolution of p-harmonic maps

    Science.gov (United States)

    Misawa, Masashi

    2018-02-01

    This paper presents our study of regularity for p-harmonic map heat flows. We devise a monotonicity-type formula of scaled energy and establish a criterion for a uniform regularity estimate for regular p-harmonic map heat flows. As application we show the small data global in the time existence of regular p-harmonic map heat flow.

  1. 48 CFR 6302.12 - Regular procedure (Rule 12).

    Science.gov (United States)

    2010-10-01

    ... CONTRACT APPEALS RULES OF PROCEDURE 6302.12 Regular procedure (Rule 12). Under the regular procedure the parties are required to file pleadings with the Board (Rule 13). The regular procedure affords the parties...

  2. Romberg extrapolation for Euler summation-based cubature on regular regions.

    Science.gov (United States)

    Freeden, W; Gerhards, C

    2017-01-01

    Romberg extrapolation is a long-known method to improve the convergence rate of the trapezoidal rule on intervals. For simple regions such as the cube [Formula: see text] it is directly transferable to cubature in q dimensions. In this paper, we formulate Romberg extrapolation for Euler summation-based cubature on arbitrary q -dimensional regular regions [Formula: see text] and derive an explicit representation for the remainder term.
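On an interval, the Romberg scheme underlying this cubature can be sketched as repeated Richardson extrapolation of the trapezoidal rule (a standard one-dimensional illustration, not the paper's Euler summation-based construction on regular regions):

```python
import numpy as np

def romberg(f, a, b, levels=6):
    """Romberg table: trapezoidal estimates refined by Richardson extrapolation."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2.0
        # Trapezoidal rule on 2**i subintervals, reusing previous evaluations:
        # only the new midpoints (odd multiples of h) need to be sampled.
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i, 0] = 0.5 * R[i - 1, 0] + h * new
        for j in range(1, i + 1):
            # Richardson extrapolation cancels the h^(2j) error term.
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4 ** j - 1)
    return R[levels - 1, levels - 1]

# Classic test integral: int_0^1 4/(1+x^2) dx = pi.
approx_pi = romberg(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```

With only six halving levels (32 panels at the finest) the extrapolated value is accurate to many digits, illustrating the convergence-rate improvement the abstract refers to.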

  3. Regular Functions with Values in Ternary Number System on the Complex Clifford Analysis

    Directory of Open Access Journals (Sweden)

    Ji Eun Kim

    2013-01-01

    Full Text Available We define a new modified basis î which is an association of two bases, e1 and e2. We give an expression of the form z = x0 + î z̄0, where x0 is a real number and z̄0 is a complex number on the three-dimensional real skew field. We then study the properties of regular functions with values in the ternary field and in the reduced quaternions by Clifford analysis.

  4. Analysis of regularized inversion of data corrupted by white Gaussian noise

    Science.gov (United States)

    Kekkonen, Hanne; Lassas, Matti; Siltanen, Samuli

    2014-04-01

    Tikhonov regularization is studied in the case of a linear pseudodifferential operator as the forward map and additive white Gaussian noise as the measurement error. The measurement model for an unknown function u(x) is m(x) = Au(x) + δε(x), where δ > 0 is the noise magnitude. If ε were an L²-function, Tikhonov regularization would give the estimate T_α(m) = arg min_{u ∈ H^r} { ‖Au − m‖²_{L²} + α‖u‖²_{H^r} } for u, where α = α(δ) is the regularization parameter. Here penalization of the Sobolev norm ‖u‖_{H^r} covers the cases of standard Tikhonov regularization (r = 0) and first-derivative penalty (r = 1). Realizations of white Gaussian noise are almost never in L², but do belong to H^s with probability one for suitably negative s. Convergence of the regularized solution as δ → 0 is proven in appropriate function spaces using microlocal analysis. The convergence of the related finite-dimensional problems to the infinite-dimensional problem is also analysed.
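A finite-dimensional sketch of the Tikhonov estimate above: the identity penalty stands in for r = 0 and a first-difference penalty for r = 1. The blur matrix, noise level, and α are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tikhonov(A, m, alpha, r=0):
    """Discrete analogue of T_alpha: min ||A u - m||^2 + alpha * ||L u||^2.

    r = 0 penalizes the signal itself (standard Tikhonov); r = 1 penalizes
    the first difference, a discrete stand-in for the H^1 seminorm."""
    n = A.shape[1]
    L = np.eye(n) if r == 0 else np.diff(np.eye(n), axis=0)
    return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ m)

# Ill-conditioned Gaussian blur as the forward map, plus additive noise.
rng = np.random.default_rng(3)
n = 50
x = np.linspace(0.0, 1.0, n)
A = np.exp(-50.0 * (x[:, None] - x[None, :]) ** 2)
u_true = np.sin(2.0 * np.pi * x)
m = A @ u_true + 0.01 * rng.standard_normal(n)
u0 = tikhonov(A, m, alpha=1e-3, r=0)   # standard Tikhonov
u1 = tikhonov(A, m, alpha=1e-3, r=1)   # first-derivative penalty
```

The linear solve is the discrete normal-equations form of the minimization; without the α-term the system is numerically singular and the noise is amplified uncontrollably.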

  5. Optimal convergence rates for Tikhonov regularization in Besov scales

    International Nuclear Information System (INIS)

    Lorenz, D A; Trede, D

    2008-01-01

    In this paper we deal with linear inverse problems and convergence rates for Tikhonov regularization. We consider regularization in a scale of Banach spaces, namely the scale of Besov spaces. We show that regularization in Banach scales differs from regularization in Hilbert scales in the sense that it is possible that stronger source conditions may lead to weaker convergence rates and vice versa. Moreover, we present optimal source conditions for regularization in Besov scales

  6. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  7. Regularity of the Interband Light Absorption Coefficient

    Indian Academy of Sciences (India)

    In this paper we consider the interband light absorption coefficient (ILAC), in a symmetric form, in the case of random operators on the -dimensional lattice. We show that the symmetrized version of ILAC is either continuous or has a component which has the same modulus of continuity as the density of states.

  8. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radicals of many characters have the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  9. Regularized binormal ROC method in disease classification using microarray data

    Directory of Open Access Journals (Sweden)

    Huang Jian

    2006-05-01

    Full Text Available Abstract Background An important application of microarrays is to discover genomic biomarkers, among tens of thousands of genes assayed, for disease diagnosis and prognosis. Thus it is of interest to develop efficient statistical methods that can simultaneously identify important biomarkers from such high-throughput genomic data and construct appropriate classification rules. It is also of interest to develop methods for evaluation of classification performance and ranking of identified biomarkers. Results The ROC (receiver operating characteristic technique has been widely used in disease classification with low dimensional biomarkers. Compared with the empirical ROC approach, the binormal ROC is computationally more affordable and robust in small sample size cases. We propose using the binormal AUC (area under the ROC curve as the objective function for two-sample classification, and the scaled threshold gradient directed regularization method for regularized estimation and biomarker selection. Tuning parameter selection is based on V-fold cross validation. We develop Monte Carlo based methods for evaluating the stability of individual biomarkers and overall prediction performance. Extensive simulation studies show that the proposed approach can generate parsimonious models with excellent classification and prediction performance, under most simulated scenarios including model mis-specification. Application of the method to two cancer studies shows that the identified genes are reasonably stable with satisfactory prediction performance and biologically sound implications. The overall classification performance is satisfactory, with small classification errors and large AUCs. Conclusion In comparison to existing methods, the proposed approach is computationally more affordable without losing the optimality possessed by the standard ROC method.
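A small sketch contrasting the binormal AUC with the empirical (Mann-Whitney) AUC on synthetic scores: the parameterization a = (μ1 − μ0)/σ1, b = σ0/σ1 is the standard binormal one, and the simulated data are assumptions for illustration, not the paper's microarray setup.

```python
import numpy as np
from scipy.stats import norm

def binormal_auc(cases, controls):
    """Binormal AUC: fit a normal to each class and evaluate Phi(a / sqrt(1 + b^2))
    with a = (mu1 - mu0)/s1 and b = s0/s1."""
    mu0, s0 = controls.mean(), controls.std(ddof=1)
    mu1, s1 = cases.mean(), cases.std(ddof=1)
    a = (mu1 - mu0) / s1
    b = s0 / s1
    return norm.cdf(a / np.sqrt(1.0 + b * b))

def empirical_auc(cases, controls):
    """Mann-Whitney form of the empirical AUC: P(case score > control score)."""
    return np.mean(cases[:, None] > controls[None, :])

rng = np.random.default_rng(4)
controls = rng.normal(0.0, 1.0, 2000)   # class 0 scores
cases = rng.normal(1.0, 1.5, 2000)      # class 1 scores
auc_b = binormal_auc(cases, controls)
auc_e = empirical_auc(cases, controls)
```

When the class-conditional scores really are normal, the two estimates agree closely, while the binormal form needs only four fitted parameters, which is what makes it more robust in small-sample settings.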

  10. Regular physical exercise: way to healthy life.

    Science.gov (United States)

    Siddiqui, N I; Nessa, A; Hossain, M A

    2010-01-01

    Any bodily activity or movement that enhances and maintains overall health and physical fitness is called physical exercise. The habit of regular physical exercise has numerous benefits. Exercise is of various types, such as aerobic exercise, anaerobic exercise and flexibility exercise. Aerobic exercise moves the large muscle groups with alternate contraction and relaxation, forces deep breathing, and makes the heart pump more blood with adequate tissue oxygenation. It is also called cardiovascular exercise. Examples of aerobic exercise are walking, running, jogging, swimming etc. In anaerobic exercise, there is forceful contraction of muscle with stretching, usually mechanically aided, which helps to build up muscle strength and muscle bulk. Examples are weight lifting, pulling, pushing, sprinting etc. Flexibility exercise is one type of stretching exercise to improve the movements of muscles, joints and ligaments. Walking is a good example of aerobic exercise: easy to perform, safe, effective, requiring no training or equipment, and with less chance of injury. A regular 30-minute brisk walk in the morning, totaling 150 minutes per week, is good exercise. Regular exercise improves cardiovascular status and reduces the risk of cardiac disease, high blood pressure and cerebrovascular disease. It reduces body weight, improves insulin sensitivity, helps in glycemic control, and prevents obesity and diabetes mellitus. It is helpful for relieving anxiety and stress, and brings a sense of well-being and overall physical fitness. The global trend is toward mechanization and labor saving, leading to an epidemic of long-term chronic diseases such as diabetes mellitus and cardiovascular disease. All efforts should be made to create public awareness promoting physical activity and physically demanding recreational pursuits, and to provide adequate facilities.

  11. Regularization mechanism in blind tip reconstruction procedure

    International Nuclear Information System (INIS)

    Jóźwiak, G.; Henrykowski, A.; Masalska, A.; Gotszalk, T.

    2012-01-01

    In quantitative investigations of mechanical and chemical surface parameters using atomic force microscopy (AFM) techniques, determination of the probe radius and shape is required. Among the most favorable methods of microprobe characterization is the blind tip reconstruction (BTR) method. The BTR, like many other inverse problems, is sensitive to noise and needs a so-called regularization mechanism. In this article we describe and investigate the two most popular regularization schemes, proposed in Villarubia et al. (1997) and Tian et al. (2008). We show that the procedure described in Tian et al. (2008) enables very effective probe shape reconstruction if the statistics of the noise present in the AFM system are known. The increase in effectiveness relative to the procedure described in Villarubia (1997) is so significant that it makes it possible to reconstruct probes with much larger resolution. We have also noticed that probes reconstructed by means of the procedure presented in Tian et al. (2008) have flat apexes for AFM images with a low signal-to-noise ratio (SNR). We propose a procedure which can improve the probe apex reconstruction. It uses the AFM image to estimate the initial shape of the reconstructed probe. This shape may be further improved by the BTR algorithm. We show that this is possible only for the procedure described in Tian et al. (2008). -- Highlights: ► We study the regularization mechanism of blind tip reconstruction. ► We propose a combination of direct probe imaging with BTR to improve the reconstruction of a probe apex. ► The possibility of improving the efficiency of the BTR procedure is presented. ► The possibility of improving the resolution of a reconstructed probe is presented.

  12. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla

    2015-10-26

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem raised by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results concerning the regime of n going to infinity with N fixed exist, even though the investigation of this assumption has usually predated the analysis of the most difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the parameter.
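The fixed-point iteration behind an RTE can be sketched as follows; this is a generic variant that mixes the Tyler data term with a scaled identity (the regularization parameter, p in the abstract, is rho below) and normalizes the trace. Exact normalizations differ across RTE definitions in the literature.

```python
import numpy as np

def regularized_tyler(X, rho=0.1, n_iter=100):
    """Fixed-point iteration for a regularized Tyler scatter estimator.

    X is n x N (n observations of dimension N). Each step combines the
    data-driven Tyler term, in which each sample is normalized by its
    Mahalanobis-type weight x_i^T C^{-1} x_i, with rho * I; trace
    normalization keeps the iterates on a fixed scale."""
    n, N = X.shape
    C = np.eye(N)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        q = np.einsum("ij,jk,ik->i", X, Cinv, X)       # x_i^T C^{-1} x_i
        M = (1.0 - rho) * (N / n) * (X / q[:, None]).T @ X + rho * np.eye(N)
        C = N * M / np.trace(M)                        # normalize trace to N
    return C

rng = np.random.default_rng(5)
true_cov = np.array([[2.0, 0.5], [0.5, 1.0]])
X = rng.standard_normal((500, 2)) @ np.linalg.cholesky(true_cov).T
C = regularized_tyler(X, rho=0.1)
```

The rho * I term is exactly what keeps the eigenvalues away from zero: even with few samples the iterate stays well conditioned, at the price of shrinkage toward the identity.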

  13. Effort variation regularization in sound field reproduction

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis

    2010-01-01

    In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths......), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, improving thus the reproduction accuracy...... in the listening room....

  14. Regularization by truncated total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Fierro, R.D; Golub, G.H

    1997-01-01

    The total least squares (TLS) method is a successful method for noise reduction in linear least squares problems in a number of applications. The TLS method is suited to problems in which both the coefficient matrix and the right-hand side are not precisely known. This paper focuses on the use...... matrix. We express our results in terms of the singular value decomposition (SVD) of the coefficient matrix rather than the augmented matrix. This leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions. In addition, we propose...

  15. Constructing regular graphs with smallest defining number

    OpenAIRE

    Omoomi, Behnaz; Soltankhah, Nasrin

    2008-01-01

    In a given graph $G$, a set $S$ of vertices with an assignment of colors is a defining set of the vertex coloring of $G$ if there exists a unique extension of the colors of $S$ to a $\chi(G)$-coloring of the vertices of $G$. A defining set with minimum cardinality is called a smallest defining set (of vertex coloring) and its cardinality, the defining number, is denoted by $d(G, \chi)$. Let $d(n, r, \chi = k)$ be the smallest defining number of all $r$-regular $k$-chrom...

  16. Indefinite metric and regularization of electrodynamics

    International Nuclear Information System (INIS)

    Gaudin, M.

    1984-06-01

    The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. The consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order.

  17. Regular Advisory Group on Spent Fuel Management

    International Nuclear Information System (INIS)

    1993-01-01

    The Regular Advisory Group on Spent Fuel Management (RAGSFM) was established in accordance with the recommendations of the Expert Group on International Spent Fuel Management in 1982. The Advisory Group consists of nominated experts from countries with considerable experience and/or requirements in such aspects of the back-end of the fuel cycle as storage, safety, transportation and treatment of spent fuel. The RAGSFM activities cover the following main topics: a) Analysis and summary of spent fuel arisings and storage facilities; b) Interface between spent fuel storage and transportation activities; c) Spent fuel storage process and technology and related safety issues; d) Treatment of spent fuel

  18. Green operators for low regularity spacetimes

    Science.gov (United States)

    Sanchez Sanchez, Yafet; Vickers, James

    2018-02-01

    In this paper we define and construct advanced and retarded Green operators for the wave operator on spacetimes with low regularity. In order to do so we require that the spacetime satisfies the condition of generalised hyperbolicity, which is equivalent to well-posedness of the classical inhomogeneous problem with zero initial data, where weak solutions are properly supported. Moreover, we provide an explicit formula for the kernel of the Green operators in terms of an arbitrary eigenbasis of H¹ and a suitable Green matrix that solves a system of second order ODEs.

  19. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

    We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ...

  20. Stream Processing Using Grammars and Regular Expressions

    DEFF Research Database (Denmark)

    Rasmussen, Ulrik Terp

    disambiguation. The first algorithm operates in two passes in a semi-streaming fashion, using a constant amount of working memory and an auxiliary tape storage which is written in the first pass and consumed by the second. The second algorithm is a single-pass and optimally streaming algorithm which outputs...... as much of the parse tree as is semantically possible based on the input prefix read so far, and resorts to buffering as many symbols as is required to resolve the next choice. Optimality is obtained by performing a PSPACE-complete pre-analysis on the regular expression. In the second part we present...

  1. Strategies for regular segmented reductions on GPU

    DEFF Research Database (Denmark)

    Larsen, Rasmus Wriedt; Henriksen, Troels

    2017-01-01

    We present and evaluate an implementation technique for regular segmented reductions on GPUs. Existing techniques tend to be either consistent in performance but relatively inefficient in absolute terms, or optimised for specific workloads and thereby exhibiting bad performance for certain input...... is in the context of the Futhark compiler, the implementation technique is applicable to any library or language that has a need for segmented reductions. We evaluate the technique on four microbenchmarks, two of which we also compare to implementations in the CUB library for GPU programming, as well as on two...

  2. Total variation regularization for fMRI-based prediction of behavior

    Science.gov (United States)

    Michel, Vincent; Gramfort, Alexandre; Varoquaux, Gaël; Eger, Evelyn; Thirion, Bertrand

    2011-01-01

    While medical imaging typically provides massive amounts of data, the extraction of relevant information for predictive diagnosis remains a difficult challenge. Functional MRI (fMRI) data, that provide an indirect measure of task-related or spontaneous neuronal activity, are classically analyzed in a mass-univariate procedure yielding statistical parametric maps. This analysis framework disregards some important principles of brain organization: population coding, distributed and overlapping representations. Multivariate pattern analysis, i.e., the prediction of behavioural variables from brain activation patterns better captures this structure. To cope with the high dimensionality of the data, the learning method has to be regularized. However, the spatial structure of the image is not taken into account in standard regularization methods, so that the extracted features are often hard to interpret. More informative and interpretable results can be obtained with the ℓ1 norm of the image gradient, a.k.a. its Total Variation (TV), as regularization. We apply for the first time this method to fMRI data, and show that TV regularization is well suited to the purpose of brain mapping while being a powerful tool for brain decoding. Moreover, this article presents the first use of TV regularization for classification. PMID:21317080
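    The TV penalty used above is, in its anisotropic discrete form, simply the ℓ1 norm of the image's finite differences: small for piecewise-constant activation maps, large for noisy ones. A minimal sketch (toy arrays are ours, not fMRI data):

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: l1 norm of forward differences
    along both image axes, a common discrete surrogate for the TV
    penalty discussed in the abstract."""
    dx = np.abs(np.diff(img, axis=0)).sum()
    dy = np.abs(np.diff(img, axis=1)).sum()
    return dx + dy

# A piecewise-constant map (one clean edge) has low TV...
flat = np.ones((8, 8))
flat[:, 4:] = 3.0
# ...while a noisy version of the same map has much higher TV,
# which is why TV regularization favors spatially coherent weight maps.
rng = np.random.default_rng(1)
noisy = flat + 0.5 * rng.standard_normal((8, 8))
```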

  3. Information-theoretic semi-supervised metric learning via entropy regularization.

    Science.gov (United States)

    Niu, Gang; Dai, Bo; Yamada, Makoto; Sugiyama, Masashi

    2014-08-01

    We propose a general information-theoretic approach to semi-supervised metric learning called SERAPH (SEmi-supervised metRic leArning Paradigm with Hypersparsity) that does not rely on the manifold assumption. Given the probability parameterized by a Mahalanobis distance, we maximize its entropy on labeled data and minimize its entropy on unlabeled data following entropy regularization. For metric learning, entropy regularization improves manifold regularization by considering the dissimilarity information of unlabeled data in the unsupervised part, and hence it allows the supervised and unsupervised parts to be integrated in a natural and meaningful way. Moreover, we regularize SERAPH by trace-norm regularization to encourage low-dimensional projections associated with the distance metric. The nonconvex optimization problem of SERAPH could be solved efficiently and stably by either a gradient projection algorithm or an EM-like iterative algorithm whose M-step is convex. Experiments demonstrate that SERAPH compares favorably with many well-known metric learning methods, and the learned Mahalanobis distance possesses high discriminability even under noisy environments.

  4. Color normalization of histology slides using graph regularized sparse NMF

    Science.gov (United States)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods such as PCA, ICA, NMF and SNMF fail to consider important information about the sparse manifolds that their pixels occupy, which could potentially result in loss of texture information during color normalization. Manifold learning methods like the graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances using a heat kernel in lαβ space.
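    The graph-regularized NMF machinery that GSNMF extends goes back to Cai et al.'s GNMF, whose multiplicative updates add a Laplacian smoothing term to the standard NMF rules. A hedged sketch with toy data (the authors' sparse, lαβ-space variant is not reproduced here; update rules follow the usual GNMF form for X ≈ U·Vᵀ with affinity A and degree matrix D):

```python
import numpy as np

def gnmf(X, A, k, lam=0.1, n_iter=200, eps=1e-9):
    """Graph-regularized NMF: minimize ||X - U V^T||^2 + lam*tr(V^T L V)
    via multiplicative updates, with L = D - A the graph Laplacian."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((n, k))
    D = np.diag(A.sum(axis=1))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * A @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V

# Toy data: two blocks of identical columns, path-graph affinity on columns
X = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], float)
A = np.eye(4, k=1) + np.eye(4, k=-1)
U, V = gnmf(X, A, k=2)
err = np.linalg.norm(X - U @ V.T)
```

The multiplicative form keeps both factors nonnegative throughout, which is the stain-concentration constraint that motivates NMF here.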

  5. Maze Navigation by Honeybees: Learning Path Regularity

    Science.gov (United States)

    Zhang, Shaowu; Mizutani, Akiko; Srinivasan, Mandyam V.

    2000-01-01

    We investigated the ability of honeybees to learn mazes of four types: constant-turn mazes, in which the appropriate turn is always in the same direction in each decision chamber; zig-zag mazes, in which the appropriate turn is alternately left and right in successive decision chambers; irregular mazes, in which there is no readily apparent pattern to the turns; and variable irregular mazes, in which the bees were trained to learn several irregular mazes simultaneously. The bees were able to learn to navigate all four types of maze. Performance was best in the constant-turn mazes, somewhat poorer in the zig-zag mazes, poorer still in the irregular mazes, and poorest in the variable irregular mazes. These results demonstrate that bees do not navigate such mazes simply by memorizing the entire sequence of appropriate turns. Rather, performance in the various configurations depends on the existence of regularity in the structure of the maze and on the ease with which this regularity is recognized and learned. PMID:11112795

  6. Supporting Regularized Logistic Regression Privately and Efficiently.

    Science.gov (United States)

    Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei

    2016-01-01

    As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, social sciences, information technology, and so on. These domains often involve data of human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work here focuses on safeguarding regularized logistic regression, a widely used statistical model that has nonetheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, social sciences, etc. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications from various disciplines, including genetic and biomedical studies, smart grid, network analysis, etc.
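    The model being protected here, L2-regularized logistic regression, can be sketched in a few lines of plain gradient descent. This is a generic illustration of the statistical model only, not the authors' privacy-preserving protocol (toy data and names are ours):

```python
import numpy as np

def fit_logreg_l2(X, y, lam=0.1, lr=0.5, n_iter=500):
    """L2-regularized logistic regression by gradient descent on
    mean log-loss + (lam/2)*||w||^2 (gradient shown without the 1/2
    since d/dw of (lam/2)||w||^2 is lam*w)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w   # log-loss + ridge gradient
        w -= lr * grad
    return w

# Toy separable data: first column is a bias feature
X = np.array([[1.0, 2.0], [1.0, 1.5], [1.0, -1.0], [1.0, -2.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = fit_logreg_l2(X, y)
pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
```

The regularization term keeps the weights bounded, which matters both statistically and, as the paper's setting suggests, for limiting what individual-level information a shared model can leak.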

  7. From Regular to Strictly Locally Testable Languages

    Directory of Open Access Journals (Sweden)

    Stefano Crespi Reghizzi

    2011-08-01

    Full Text Available A classical result (often credited to Y. Medvedev states that every language recognized by a finite automaton is the homomorphic image of a local language, over a much larger so-called local alphabet, namely the alphabet of the edges of the transition graph. Local languages are characterized by the value k=2 of the sliding window width in the McNaughton and Papert's infinite hierarchy of strictly locally testable languages (k-slt. We generalize Medvedev's result in a new direction, studying the relationship between the width and the alphabetic ratio telling how much larger the local alphabet is. We prove that every regular language is the image of a k-slt language on an alphabet of doubled size, where the width logarithmically depends on the automaton size, and we exhibit regular languages for which any smaller alphabetic ratio is insufficient. More generally, we express the trade-off between alphabetic ratio and width as a mathematical relation derived from a careful encoding of the states. At last we mention some directions for theoretical development and application.
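    The k = 2 case of the hierarchy above, the local languages, admits a very direct membership test: check the first symbol, the last symbol, and every width-2 sliding window against allowed sets. A small sketch (the example language and set names are illustrative, not from the paper):

```python
def in_local_language(word, prefixes, factors, suffixes):
    """Membership test for a local (2-strictly-locally-testable)
    language: first symbol, all width-2 windows, and last symbol
    must each belong to the corresponding allowed set."""
    if not word:
        return False
    return (word[0] in prefixes
            and word[-1] in suffixes
            and all(word[i:i + 2] in factors for i in range(len(word) - 1)))

# Toy local language: one or more a's followed by one or more b's
P, F, S = {'a'}, {'aa', 'ab', 'bb'}, {'b'}
```

Medvedev's theorem says every regular language is a homomorphic image of such a language over a larger alphabet; the paper quantifies how large that alphabet must be for a given window width k.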

  8. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

    Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects the graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.

  9. Multiview Hessian regularization for image annotation.

    Science.gov (United States)

    Liu, Weifeng; Tao, Dacheng

    2013-07-01

    The rapid development of computer hardware and Internet technology makes large scale data dependent models computationally tractable, and opens a bright avenue for annotating images through innovative machine learning algorithms. Semisupervised learning (SSL) therefore received intensive attention in recent years and was successfully deployed in image annotation. One representative work in SSL is Laplacian regularization (LR), which smoothes the conditional distribution for classification along the manifold encoded in the graph Laplacian. However, it is observed that LR biases the classification function toward a constant function, which possibly results in poor generalization. In addition, LR is developed to handle uniformly distributed data (or single-view data), although instances or objects, such as images and videos, are usually represented by multiview features, such as color, shape, and texture. In this paper, we present multiview Hessian regularization (mHR) to address the above two problems in LR-based image annotation. In particular, mHR optimally combines multiple HR, each of which is obtained from a particular view of instances, and steers the classification function so that it varies linearly along the data manifold. We apply mHR to kernel least squares and support vector machines as two examples for image annotation. Extensive experiments on the PASCAL VOC'07 dataset validate the effectiveness of mHR by comparing it with baseline algorithms, including LR and HR.

  10. Hawking fluxes and anomalies in rotating regular black holes with a time-delay

    International Nuclear Information System (INIS)

    Takeuchi, Shingo

    2016-01-01

    Based on the anomaly cancellation method, we compute the Hawking fluxes (the Hawking thermal flux and the total flux of the energy-momentum tensor) from a four-dimensional rotating regular black hole with a time-delay. To this purpose, for the three metrics proposed in [1], we attempt the dimensional reduction that makes the anomaly cancellation method feasible in the near-horizon region of a general scalar field theory. We demonstrate that the dimensional reduction is possible in two of those metrics; for these two we carry out the anomaly cancellation method and compute the Hawking fluxes. Our Hawking fluxes involve three effects: (1) the quantum gravity effect regularizing the core of the black hole, (2) the rotation of the black hole, and (3) the time-delay. For the metric in which the dimensional reduction could not be performed, we argue that the metric itself is problematic and discuss the cause. The Hawking fluxes computed in this study can be considered to correspond to more realistic Hawking fluxes. Moreover, the question of which Hawking fluxes can be obtained from the anomaly cancellation method is interesting in terms of the relation between the consistency of quantum field theories and black hole thermodynamics. (paper)

  11. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, the logistic distribution, and nonlinear electrodynamics, respectively, as well as the Kehagias-Sfetsos asymptotically flat regular black hole. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of the radial velocity, the energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  12. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real

  13. Drug-Target Interaction Prediction with Graph Regularized Matrix Factorization.

    Science.gov (United States)

    Ezzat, Ali; Zhao, Peilin; Wu, Min; Li, Xiao-Li; Kwoh, Chee-Keong

    2017-01-01

    Experimental determination of drug-target interactions is expensive and time-consuming. Therefore, there is a continuous demand for more accurate predictions of interactions using computational techniques. Algorithms have been devised to infer novel interactions on a global scale where the input to these algorithms is a drug-target network (i.e., a bipartite graph where edges connect pairs of drugs and targets that are known to interact). However, these algorithms had difficulty predicting interactions involving new drugs or targets for which there are no known interactions (i.e., "orphan" nodes in the network). Since data usually lie on or near to low-dimensional non-linear manifolds, we propose two matrix factorization methods that use graph regularization in order to learn such manifolds. In addition, considering that many of the non-occurring edges in the network are actually unknown or missing cases, we developed a preprocessing step to enhance predictions in the "new drug" and "new target" cases by adding edges with intermediate interaction likelihood scores. In our cross validation experiments, our methods achieved better results than three other state-of-the-art methods in most cases. Finally, we simulated some "new drug" and "new target" cases and found that GRMF predicted the left-out interactions reasonably well.

  14. Revealing hidden regularities with a general approach to fission

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, Karl-Heinz; Jurado, Beatriz [Chemin du Solarium, CENBG, CNRS/IN2P3, B. P. 120, Gradignan (France)

    2015-12-15

    Selected aspects of a general approach to nuclear fission are described with the focus on the possible benefit of meeting the increasing need of nuclear data for the existing and future emerging nuclear applications. The most prominent features of this approach are the evolution of quantum-mechanical wave functions in systems with complex shape, memory effects in the dynamics of stochastic processes, the influence of the Second Law of thermodynamics on the evolution of open systems in terms of statistical mechanics, and the topological properties of a continuous function in multi-dimensional space. It is demonstrated that this approach allows reproducing the measured fission barriers and the observed properties of the fission fragments and prompt neutrons. Our approach is based on sound physical concepts, as demonstrated by the fact that practically all the parameters have a physical meaning, and reveals a high degree of regularity in the fission observables. Therefore, we expect a good predictive power within the region extending from Po isotopes to Sg isotopes where the model parameters have been adjusted. Our approach can be extended to other regions provided that there is enough empirical information available that allows determining appropriate values of the model parameters. Possibilities for combining this general approach with microscopic models are suggested. These are supposed to enhance the predictive power of the general approach and to help improving or adjusting the microscopic models. This could be a way to overcome the present difficulties for producing evaluations with the required accuracy. (orig.)

  15. Explicit formulas for regularized products and series

    CERN Document Server

    Jorgenson, Jay; Goldfeld, Dorian

    1994-01-01

    The theory of explicit formulas for regularized products and series forms a natural continuation of the analytic theory developed in LNM 1564. These explicit formulas can be used to describe the quantitative behavior of various objects in analytic number theory and spectral theory. The present book deals with other applications arising from Gaussian test functions, leading to theta inversion formulas and corresponding new types of zeta functions which are Gaussian transforms of theta series rather than Mellin transforms, and satisfy additive functional equations. Their wide range of applications includes the spectral theory of a broad class of manifolds and also the theory of zeta functions in number theory and representation theory. Here the hyperbolic 3-manifolds are given as a significant example.

  16. Regularization by truncated total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Fierro, R.D; Golub, G.H

    1997-01-01

    of TLS for solving problems with very ill-conditioned coefficient matrices whose singular values decay gradually (so-called discrete ill-posed problems), where some regularization is necessary to stabilize the computed solution. We filter the solution by truncating the small singular values of the TLS......

  17. Singular tachyon kinks from regular profiles

    International Nuclear Information System (INIS)

    Copeland, E.J.; Saffin, P.M.; Steer, D.A.

    2003-01-01

    We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately

  18. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations for final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
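    The elastic-net penalty that ENLR builds on combines an ℓ1 term (sparsity) with an ℓ2 term (stability). A generic proximal-gradient sketch of that penalty on a plain least-squares problem (this is not the authors' closed-form ENLR updates, and the data are synthetic):

```python
import numpy as np

def elastic_net(X, y, l1=0.1, l2=0.1, n_iter=1000):
    """ISTA for 0.5*||y - Xw||^2/n + l1*||w||_1 + 0.5*l2*||w||^2:
    gradient step on the smooth part, soft-threshold for the l1 part."""
    n, d = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + l2)  # 1 / Lipschitz const
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + l2 * w
        z = w - step * grad
        w = np.sign(z) * np.maximum(np.abs(z) - step * l1, 0.0)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
w_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ w_true + 0.01 * rng.standard_normal(100)
w = elastic_net(X, y)
# Active coefficients are recovered (shrunk toward zero by the penalty),
# inactive ones are driven to or near zero.
```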

  19. Regularization of Instantaneous Frequency Attribute Computations

    Science.gov (United States)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey, "Local seismic attributes," Geophysics 72.3 (2007): A29-A33. Cohen, Leon, Time-Frequency Analysis: Theory and Applications, Prentice Hall (1995). Farquharson, Colin G., and Douglas W. Oldenburg, "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems," Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff, "Complex seismic trace analysis," Geophysics 44.6 (1979): 1041-1063.
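A minimal sketch of the first method, the instantaneous frequency from the derivative of the analytic-signal phase, on a clean synthetic tone. The FFT-based analytic-signal construction, the sampling rate, and the crude edge trimming are illustrative choices, not details from the abstract.

```python
import numpy as np

fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
f0 = 50.0
x = np.sin(2 * np.pi * f0 * t)    # pure tone -> flat instantaneous frequency

# Analytic signal via the FFT: zero the negative frequencies, double the positive.
N = len(x)
Xf = np.fft.fft(x)
h = np.zeros(N)
h[0] = 1.0
h[1:N // 2] = 2.0
h[N // 2] = 1.0
z = np.fft.ifft(Xf * h)           # x + i * Hilbert(x)

# Instantaneous frequency = (1 / 2pi) * d(phase)/dt.
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

# Crude stabilization: discard edge samples where the estimate is least reliable.
est = np.median(inst_freq[50:-50])
```

For a pure tone the phase derivative is constant, so `est` recovers the 50 Hz input; real data needs the regularized division the abstract discusses.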

  20. Regular Generalized Star Star closed sets in Bitopological Spaces

    OpenAIRE

    K. Kannan; D. Narasimhan; K. Chandrasekhara Rao; R. Ravikumar

    2011-01-01

    The aim of this paper is to introduce the concepts of τ₁τ₂-regular generalized star star closed sets and τ₁τ₂-regular generalized star star open sets, and to study their basic properties in bitopological spaces.

  1. Regular Marijuana Users May Have Impaired Brain Reward Centers

    Science.gov (United States)

    Regular marijuana users may have impaired brain reward centers. July 30, 2014. New research shows that regular marijuana users show impairments in the brain’s ability to ...

  2. Vacuum polarization in two-dimensional static spacetimes and dimensional reduction

    Science.gov (United States)

    Balbinot, Roberto; Fabbri, Alessandro; Nicolini, Piero; Sutton, Patrick J.

    2002-07-01

    We obtain an analytic approximation for the effective action of a quantum scalar field in a general static two-dimensional spacetime. We apply this to the dilaton gravity model resulting from the spherical reduction of a massive, non-minimally coupled scalar field in the four-dimensional Schwarzschild geometry. Careful analysis near the event horizon shows the resulting two-dimensional system to be regular in the Hartle-Hawking state for general values of the field mass, coupling, and angular momentum, while at spatial infinity it reduces to a thermal gas at the black-hole temperature.

  3. Regularization by discretization in Banach spaces

    Science.gov (United States)

    Hämarik, Uno; Kaltenbacher, Barbara; Kangro, Urve; Resmerita, Elena

    2016-03-01

    We consider ill-posed linear operator equations with operators acting between Banach spaces. For solution approximation, the methods of choice here are projection methods onto finite dimensional subspaces, thus extending existing results from Hilbert space settings. More precisely, general projection methods, the least squares method and the least error method are analyzed. In order to appropriately choose the dimension of the subspace, we consider a priori and a posteriori choices by the discrepancy principle and by the monotone error rule. Analytical considerations and numerical tests are provided for a collocation method applied to a Volterra integral equation in one space dimension.
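The a posteriori dimension choice by the discrepancy principle can be illustrated on a toy ill-posed problem. This sketch uses truncated SVD in place of the paper's projection methods; the Volterra-type operator, noise level and safety factor τ are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mildly ill-posed toy problem: a discretized integration (Volterra-type) operator.
n = 60
A = np.tril(np.ones((n, n))) / n
x_true = np.sin(np.linspace(0, np.pi, n))
b_exact = A @ x_true

# Add noise with a known relative level delta.
noise = rng.normal(0, 1, n)
delta = 1e-3
b = b_exact + delta * np.linalg.norm(b_exact) * noise / np.linalg.norm(noise)
noise_level = np.linalg.norm(b - b_exact)

# Truncated SVD with the discrepancy principle: pick the smallest subspace
# dimension k whose residual drops to the noise level (times a safety factor).
U, s, Vt = np.linalg.svd(A)
tau = 1.1
for k in range(1, n + 1):
    coeffs = (U.T @ b)[:k] / s[:k]
    x_k = Vt[:k].T @ coeffs
    if np.linalg.norm(A @ x_k - b) <= tau * noise_level:
        break

rel_err = np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true)
```

Stopping at the noise level prevents the high-index, noise-amplifying singular components from entering the solution.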

  4. Adaptation for Regularization Operators in Learning Theory

    Science.gov (United States)

    2006-09-10

    fρ over specific prior classes defined in terms of finiteness of the constants Cr and Ds. The main assumption is the requirement mv ≥ m/ log m. Since...dimensional, the choice R(λ̇) = r̄ fulfills trivially the required conditions. Second, from definition (25), it is clear that if the sequence {a(λi...59-85, February 2005. [7] E. De Vito, L. Rosasco, and A. Caponnetto. Discretization error analysis for Tikhonov regularization. To appear in Analysis

  5. Regular Exercisers Have Stronger Pelvic Floor Muscles than Non-Regular Exercisers at Midpregnancy.

    Science.gov (United States)

    Bø, Kari; Ellstrøm Engh, Marie; Hilde, Gunvor

    2017-12-26

    Today, all healthy pregnant women are encouraged to be physically active throughout pregnancy, with recommendations to participate in at least 30 min of aerobic activity on most days of the week, in addition to performing strength training of the major muscle groups 2-3 days per week, and also pelvic floor muscle training. There is, however, an ongoing debate whether general physical activity enhances or impairs pelvic floor muscle function. The aims were to compare vaginal resting pressure, pelvic floor muscle strength and endurance in regular exercisers (exercise ≥ 30 minutes ≥ 3 times per week) and non-exercisers at mid-pregnancy, and furthermore to assess whether regular general exercise or pelvic floor muscle strength was associated with urinary incontinence. This was a cross-sectional study at mean gestational week 20.9 (± 1.4) including 218 nulliparous pregnant women, mean age 28.6 years (range 19-40) and pre-pregnancy body mass index 23.9 kg/m² (SD 4.0). Vaginal resting pressure, pelvic floor muscle strength and pelvic floor muscle endurance were measured by a high-precision pressure transducer connected to a vaginal balloon. The International Consultation on Incontinence Questionnaire Urinary Incontinence Short Form was used to assess urinary incontinence. Differences between groups were analyzed using the independent-sample t-test. Linear regression analysis was conducted to adjust for pre-pregnancy body mass index, age, smoking during pregnancy and regular pelvic floor muscle training during pregnancy. The p-value was set to ≤ 0.05. Regular exercisers had statistically significantly stronger (mean 6.4 cm H₂O (95% CI: 1.7, 11.2)) and more enduring (mean 39.9 cm H₂O·s (95% CI: 42.2, 75.7)) pelvic floor muscles. Only pelvic floor muscle strength remained statistically significant when adjusting for possible confounders. Pelvic floor muscle strength, and not regular general exercise, was associated with urinary continence (adjusted B: -6.4 (95% CI: -11.5, -1.4)). Regular

  6. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    The study investigated why teachers exclude children with intellectual disability from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data; results revealed that 57.4% of regular teachers could not cope with children with ID ...

  7. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  8. Lattice simulation of a center symmetric three dimensional effective theory for SU(2) Yang-Mills

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Dominik

    2010-11-17

    We present lattice simulations of a center symmetric dimensionally reduced effective field theory for SU(2) Yang-Mills which employs thermal Wilson lines and three-dimensional magnetic fields as fundamental degrees of freedom. The action is composed of a gauge invariant kinetic term, spatial gauge fields and a potential for the Wilson line which includes a "fuzzy" bag term to generate non-perturbative fluctuations between Z(2) degenerate ground states. The model is studied in the limit where the gauge fields are set to zero as well as in the full model with gauge fields. We confirm that, at moderately weak coupling, the "fuzzy" bag term leads to eigenvalue repulsion in a finite region above the deconfining phase transition which shrinks in the extreme weak-coupling limit. A non-trivial Z(N) symmetric vacuum arises in the confined phase. The effective potential for the Polyakov loop in the theory with gauge fields is extracted from the simulations, including all modes of the loop as well as for cooled configurations where the hard modes have been averaged out. The former is found to exhibit a non-analytic contribution while the latter can be described by a mean-field-like ansatz with quadratic and quartic terms, plus a Vandermonde potential which depends upon the location within the phase diagram. Other results include the exact location of the phase boundary in the plane spanned by the coupling parameters, correlation lengths of several operators in the magnetic and electric sectors, and the spatial string tension. We also present results from simulations of the full 4D Yang-Mills theory and attempt a qualitative comparison to the 3D effective theory. (orig.)
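The Polyakov loop itself, the ordered product of temporal links whose effective potential the abstract discusses, is easy to sketch. The following builds random SU(2) links (purely illustrative, not a thermalized lattice configuration) and takes the normalized real trace at one spatial site.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_su2():
    # Uniform SU(2) matrix from a random unit quaternion a0 + i a.sigma.
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[3],  a[2] + 1j * a[1]],
                     [-a[2] + 1j * a[1], a[0] - 1j * a[3]]])

# Polyakov loop at one spatial site: ordered product of the N_t temporal links,
# then the normalized real trace (lies in [-1, 1] for SU(2)).
Nt = 8
links = [random_su2() for _ in range(Nt)]
P = np.eye(2, dtype=complex)
for U in links:
    P = P @ U
loop = 0.5 * np.real(np.trace(P))
```

In a real simulation this quantity, averaged over sites and configurations, serves as the order parameter for the deconfining transition: it vanishes in the Z(2)-symmetric confined phase.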

  9. 2-regularity and 2-normality conditions for systems with impulsive controls

    Directory of Open Access Journals (Sweden)

    Pavlova Natal'ya

    2007-01-01

    Full Text Available In this paper a controlled system with impulsive controls in the neighborhood of an abnormal point is investigated. The set of pairs (u,μ) is considered as a class of admissible controls, where u is a measurable essentially bounded function and μ is a finite-dimensional Borel measure such that for any Borel set B, μ(B) is a subset of the given convex closed pointed cone. In this article the concepts of 2-regularity and 2-normality for the abstract mapping Φ, operating from the given Banach space into a finite-dimensional space, are introduced. The concepts of 2-regularity and 2-normality play an important role in the derivation of the first- and second-order necessary conditions for the optimal control problem, consisting of the minimization of a certain functional on the set of admissible processes. These concepts are also important for obtaining sufficient conditions for the local controllability of nonlinear systems. A convenient criterion for 2-regularity along a prescribed direction and necessary conditions for 2-normality of systems linear in control are introduced in this article as well.

  10. MRI reconstruction with joint global regularization and transform learning.

    Science.gov (United States)

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms into the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach improves MRI reconstruction performance when compared to algorithms which use either the patchwise transform learning or the global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Accreting fluids onto regular black holes via Hamiltonian approach

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); University of Central Punjab, CAMS, UCP Business School, Lahore (Pakistan)

    2017-08-15

    We investigate the accretion of test fluids onto regular black holes such as Kehagias-Sfetsos black holes and regular black holes with a Dagum distribution function. We analyze the accretion process when different test fluids fall onto these regular black holes. The accreting fluid is classified through the equation of state according to the features of the regular black holes. The behavior of the fluid flow and the existence of sonic points are checked for these regular black holes. It is noted that the three-velocity depends on the critical points and the equation of state parameter on the phase space. (orig.)

  12. Does regular repositioning prevent pressure ulcers?

    Science.gov (United States)

    Krapfl, Lee Ann; Gray, Mikel

    2008-01-01

    Prolonged exposure to pressure is the primary etiologic factor of a pressure ulcer (PU) and effective preventive interventions must avoid or minimize this exposure. Therefore, frequent repositioning of the patient has long been recommended as a means of preventing PU. To review the evidence on the efficacy of repositioning as a PU prevention intervention. A systematic review of electronic databases MEDLINE and CINAHL, from January 1960 to July 2008, was undertaken. Studies were limited to prospective randomized clinical trials or quasi-experimental studies that compared repositioning to any other preventive interventions or any study that compared various techniques of repositioning such as turning frequency. Only those studies that measured the primary outcome of interest, PU incidence, were included in our review. Limited evidence suggests that repositioning every 4 hours, when combined with an appropriate pressure redistribution surface, is just as effective for the prevention of facility- acquired PUs as a more frequent (every 2 hour) regimen. There is insufficient evidence to determine whether a 30 degrees lateral position is superior to a 90 degrees lateral position or a semi-Fowler's position. The current regulatory and legal environment has focused increased attention on PU prevention. Pressure redistribution methods and the frequency of application are among the first factors scrutinized when a PU develops. Our clinical experience validates that regular movement of the immobilized patient is important, but evidence defining the optimal frequency of repositioning or optimal positioning is lacking.

  13. Regularities and irregularities in order flow data

    Science.gov (United States)

    Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas

    2017-11-01

    We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.

  14. Regularities of praseodymium oxide dissolution in acids

    International Nuclear Information System (INIS)

    Savin, V.D.; Elyutin, A.V.; Mikhajlova, N.P.; Eremenko, Z.V.; Opolchenova, N.L.

    1989-01-01

    The regularities of Pr₂O₃, Pr₂O₅ and Pr(OH)₃ interaction with inorganic acids are studied. The pH of the solution and the oxidation-reduction potential, recorded at 20±1 °C, are the working parameters of the study. It is found that the amount of each oxide dissolved increases in the series of acids nitric, hydrochloric and sulfuric; for hydrochloric and sulfuric acid it increases in the series of oxides Pr₂O₃, Pr₂O₅ and Pr(OH)₃. It is noted that Pr₂O₅ has a high positive oxidation-reduction potential over the whole dissolution range. A low positive redox potential during dissolution belongs to Pr(OH)₃, and in the case of Pr₂O₃ dissolution the redox potential is negative. Schemes of the dissolution processes, which do not agree with classical assumptions, are presented.

  15. Fast nonlinear susceptibility inversion with variational regularization.

    Science.gov (United States)

    Milovic, Carlos; Bilgic, Berkin; Zhao, Bo; Acosta-Cabronero, Julio; Tejos, Cristian

    2018-01-10

    Quantitative susceptibility mapping can be performed through the minimization of a function consisting of data fidelity and regularization terms. For data consistency, a Gaussian-phase noise distribution is often assumed, which breaks down when the signal-to-noise ratio is low. A previously proposed alternative is to use a nonlinear data fidelity term, which reduces streaking artifacts, mitigates noise amplification, and results in more accurate susceptibility estimates. We hereby present a novel algorithm that solves the nonlinear functional while achieving computation speeds comparable to those for a linear formulation. We developed a nonlinear quantitative susceptibility mapping algorithm (fast nonlinear susceptibility inversion) based on the variable splitting and alternating direction method of multipliers, in which the problem is split into simpler subproblems with closed-form solutions and a decoupled nonlinear inversion hereby solved with a Newton-Raphson iterative procedure. Fast nonlinear susceptibility inversion performance was assessed using numerical phantom and in vivo experiments, and was compared against the nonlinear morphology-enabled dipole inversion method. Fast nonlinear susceptibility inversion achieves similar accuracy to nonlinear morphology-enabled dipole inversion but with significantly improved computational efficiency. The proposed method enables accurate reconstructions in a fraction of the time required by state-of-the-art quantitative susceptibility mapping methods. Magn Reson Med, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
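The variable-splitting-plus-ADMM strategy described above, splitting the problem into subproblems with closed-form solutions, can be illustrated on a generic l1-regularized least-squares toy problem. This is not the susceptibility functional itself; the operator, data and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy problem:  min_x 0.5||Ax - b||^2 + lam*||z||_1  subject to  x = z.
m, n = 40, 20
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:3] = [2.0, -1.5, 1.0]          # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=m)

lam, rho = 0.1, 10.0
x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
AtA, Atb = A.T @ A, A.T @ b
P = np.linalg.inv(AtA + rho * np.eye(n))   # factor reused every iteration

for _ in range(200):
    x = P @ (Atb + rho * (z - u))          # closed-form quadratic subproblem
    z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrinkage
    u = u + x - z                          # scaled dual update

rel_err = np.linalg.norm(z - x_true) / np.linalg.norm(x_true)
```

Each subproblem is cheap and closed-form, which is what makes the splitting competitive with a purely linear formulation; the paper replaces one subproblem with a Newton-Raphson step for its nonlinear fidelity term.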

  16. Regular expressions compiler and some applications

    International Nuclear Information System (INIS)

    Saldana A, H.

    1978-01-01

    We deal with the high-level programming language of a Regular Expressions Compiler (REC). The first chapter is an introduction in which the history of the REC development and the problems related to its numerous applications are described. The syntactic and semantic rules as well as the language features are discussed just after the introduction. Concerning the applications, as examples, an adaptation is given in order to solve numerical problems and another for data manipulation. The last chapter is an exposition of ideas and techniques about the compiler construction. Examples of the adaptation to numerical problems show the applications to education, vector analysis, quantum mechanics, physics, mathematics and other sciences. The rudiments of an operating system for a minicomputer are the examples of the adaptation to symbolic data manipulation. REC is a programming language that could be applied to solve problems in almost any human activity. Handling of computer graphics, control equipment, research on languages, microprocessors and general research are some of the fields in which this programming language can be applied and developed. (author)

  17. Toroidal regularization of the guiding center Lagrangian

    Science.gov (United States)

    Burby, J. W.; Ellison, C. L.

    2017-11-01

    In the Lagrangian theory of guiding center motion, an effective magnetic field B* = B + (m/e) v∥ ∇×b appears prominently in the equations of motion. Because the parallel component of this field can vanish, there is a range of parallel velocities where the Lagrangian guiding center equations of motion are either ill-defined or very badly behaved. Moreover, the velocity dependence of B* greatly complicates the identification of canonical variables and therefore the formulation of symplectic integrators for guiding center dynamics. This letter introduces a simple coordinate transformation that alleviates both these problems simultaneously. In the new coordinates, the Liouville volume element is equal to the toroidal contravariant component of the magnetic field. Consequently, the large-velocity singularity is completely eliminated. Moreover, passing from the new coordinate system to canonical coordinates is extremely simple, even if the magnetic field is devoid of flux surfaces. We demonstrate the utility of this approach in regularizing the guiding center Lagrangian by presenting a new and stable one-step variational integrator for guiding centers moving in arbitrary time-dependent electromagnetic fields.

  18. Regularity and approximability of electronic wave functions

    CERN Document Server

    Yserentant, Harry

    2010-01-01

    The electronic Schrödinger equation describes the motion of N-electrons under Coulomb interaction forces in a field of clamped nuclei. The solutions of this equation, the electronic wave functions, depend on 3N variables, with three spatial dimensions for each electron. Approximating these solutions is thus inordinately challenging, and it is generally believed that a reduction to simplified models, such as those of the Hartree-Fock method or density functional theory, is the only tenable approach. This book seeks to show readers that this conventional wisdom need not be ironclad: the regularity of the solutions, which increases with the number of electrons, the decay behavior of their mixed derivatives, and the antisymmetry enforced by the Pauli principle contribute properties that allow these functions to be approximated with an order of complexity which comes arbitrarily close to that for a system of one or two electrons. The text is accessible to a mathematical audience at the beginning graduate level as...

  19. Regularities development of entrepreneurial structures in regions

    Directory of Open Access Journals (Sweden)

    Julia Semenovna Pinkovetskaya

    2012-12-01

    Full Text Available We consider regularities and tendencies for three types of entrepreneurial structures: small enterprises, medium enterprises and individual entrepreneurs. The aim of the research was to confirm the possibility of describing indicators of aggregate entrepreneurial structures with normal-law distribution functions. The author's methodological approach is presented, together with the results of constructing density distribution functions for the main indicators of various objects: the Russian Federation, regions, and aggregates of entrepreneurial structures specialized in certain forms of economic activity. All the developed functions, as shown by logical and statistical analysis, are of high quality and approximate the original data well. In general, the proposed methodological approach is versatile and can be used in further studies of aggregates of entrepreneurial structures. The results can be applied to a wide range of problems: justifying the need for personnel and financial resources at the federal, regional and municipal levels, as well as forming plans and forecasts for the development of entrepreneurship and the improvement of this sector of the economy.

  20. A regularity-based modeling of oil borehole logs

    Science.gov (United States)

    Gaci, Said; Zaourar, Naima

    2013-04-01

    Multifractional Brownian motions (mBms) have been used successfully to describe the behavior of borehole logs. These local fractal models allow one to investigate the depth evolution of the regularity of the logs, quantified by the Hölder exponent (H). In this study, a regularity analysis is carried out on datasets recorded in Algerian oil boreholes located in different geological settings. The obtained regularity profiles show a clear correlation with lithology. Each lithological discontinuity corresponds to a jump in the H value. Moreover, for a given borehole, all the regularity logs are significantly correlated and lead to similar lithological segmentations. Therefore, the Hölderian regularity is a robust property which can be used to characterize lithological heterogeneities. However, this study does not establish a relation between the recorded physical property and its estimated regularity degree for the analyzed logs. Keywords: well logs, regularity, Hölder exponent, multifractional Brownian motion
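The Hölder/Hurst exponent H that quantifies log regularity can be estimated from the scaling of increments. A minimal sketch, using ordinary Brownian motion (known H = 0.5) in place of real borehole data, and a global rather than the paper's depth-local estimate:

```python
import numpy as np

rng = np.random.default_rng(5)

# Brownian motion has Hoelder/Hurst exponent H = 0.5; estimate it from the
# log-log slope of the second-order structure function (variogram).
n = 20000
x = np.cumsum(rng.normal(size=n))

lags = np.array([1, 2, 4, 8, 16, 32])
v = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# E|x(t+lag) - x(t)|^2 ~ lag^(2H)  =>  slope of the log-log fit is 2H.
slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
H = slope / 2
```

A multifractional model applies the same idea in a sliding window, so H becomes a depth-dependent profile whose jumps mark lithological discontinuities.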

  1. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be considered as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect for object images with edges and to construct a fast reconstruction scheme, the total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well-posedness of this optimization problem is studied. The minimizer is sought approximately using an iteration process for a series of optimization problems with the Bregman distance as a penalty term. This iterative reconstruction scheme is essentially a new regularizing scheme with the coupling parameter in the cost functional and the iteration stopping time as two regularizing parameters. We give a choice strategy for the regularizing parameters in terms of the noise level of the measurement data, which yields the optimal error estimate on the iterative solution. The series of optimization problems is solved by alternating iteration with an explicit exact solution, and therefore the amount of computation is greatly reduced. Numerical implementations are given to support our theoretical analysis of the convergence rate and to show significant reconstruction improvements. (paper)
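A minimal illustration of why a total variation term avoids over-smoothing at edges: gradient descent on a smoothed 1-D TV functional denoises a piecewise-constant signal while keeping its jump. This is not the paper's Bregman iteration; the signal, λ and the smoothing ε are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Piecewise-constant signal plus noise; TV regularization preserves the edge.
n = 200
u_true = np.where(np.arange(n) < n // 2, 0.0, 1.0)
f = u_true + 0.1 * rng.normal(size=n)

# Minimize  0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps)
# by plain gradient descent (eps smooths |.| so the gradient exists).
lam, eps, step = 0.5, 1e-2, 0.05
u = f.copy()
for _ in range(1000):
    du = np.diff(u)
    w = du / np.sqrt(du ** 2 + eps)   # derivative of the smoothed |du|
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= w
    tv_grad[1:] += w
    u -= step * ((u - f) + lam * tv_grad)

err_noisy = np.linalg.norm(f - u_true)
err_tv = np.linalg.norm(u - u_true)
```

Because the TV penalty on a single large jump is bounded, the edge survives denoising, unlike under a quadratic (H1) smoothness penalty that would blur it.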

  2. Manifold regularized multitask learning for semi-supervised multilabel image classification.

    Science.gov (United States)

    Luo, Yong; Tao, Dacheng; Geng, Bo; Xu, Chao; Maybank, Stephen J

    2013-02-01

    It is a significant challenge to classify images with multiple labels by using only a small number of labeled samples. One option is to learn a binary classifier for each label and use manifold regularization to improve the classification performance by exploring the underlying geometric structure of the data distribution. However, such an approach does not perform well in practice when images from multiple concepts are represented by high-dimensional visual features. Thus, manifold regularization is insufficient to control the model complexity. In this paper, we propose a manifold regularized multitask learning (MRMTL) algorithm. MRMTL learns a discriminative subspace shared by multiple classification tasks by exploiting the common structure of these tasks. It effectively controls the model complexity because different tasks limit one another's search volume, and the manifold regularization ensures that the functions in the shared hypothesis space are smooth along the data manifold. We conduct extensive experiments, on the PASCAL VOC'07 dataset with 20 classes and the MIR dataset with 38 classes, by comparing MRMTL with popular image classification algorithms. The results suggest that MRMTL is effective for image classification.
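The manifold regularization idea, penalizing f^T L f with a graph Laplacian L so that the labeling varies smoothly along the data manifold, can be sketched as a tiny semi-supervised least-squares problem; the two-cluster data, kernel width and γ are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two clusters; only one labeled point per cluster.
n_per = 40
X = np.vstack([rng.normal([0.0, 0.0], 0.3, (n_per, 2)),
               rng.normal([3.0, 3.0], 0.3, (n_per, 2))])
y_true = np.repeat([-1.0, 1.0], n_per)
labeled = np.array([0, n_per])            # indices of the two labeled points

# Gaussian-weighted graph Laplacian L = D - W.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Laplacian-regularized least squares:
#   min_f  sum_{i labeled} (f_i - y_i)^2 + gamma * f^T L f
n = len(X)
J = np.zeros((n, n))
J[labeled, labeled] = 1.0
gamma = 0.1
f = np.linalg.solve(J + gamma * L, J @ y_true)

pred = np.sign(f)
accuracy = (pred == y_true).mean()
```

Two labels propagate to all 80 points because the Laplacian term forces nearby points on the manifold to share similar values.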

  3. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.
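Regularizing the eigenvalues of the estimated within-class matrix, the RLDA ingredient mentioned above, amounts to shrinking the scatter toward a scaled identity. A sketch on invented Gaussian data (plain vector LDA, not the paper's two-stage 2-D method):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two Gaussian classes in 10-D with few samples per class, so the
# within-class scatter is poorly conditioned and needs regularization.
n, d = 15, 10
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(1.0, 1.0, (n, d))       # mean shifted by 1 in every coordinate

m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) * (n - 1) + np.cov(X1.T) * (n - 1)   # within-class scatter

# Shrinkage regularization: flatten the eigenvalue spectrum toward its mean.
alpha = 0.1
Sw_reg = (1 - alpha) * Sw + alpha * (np.trace(Sw) / d) * np.eye(d)

w = np.linalg.solve(Sw_reg, m1 - m0)    # Fisher discriminant direction
scores0, scores1 = X0 @ w, X1 @ w
threshold = (scores0.mean() + scores1.mean()) / 2
accuracy = ((scores0 < threshold).mean() + (scores1 > threshold).mean()) / 2
```

The shrinkage keeps the small, noise-dominated eigenvalues of Sw from exploding when inverted, which is exactly the failure mode RLDA targets.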

  4. MR-NTD: Manifold Regularization Nonnegative Tucker Decomposition for Tensor Data Dimension Reduction and Representation.

    Science.gov (United States)

    Li, Xutao; Ng, Michael K; Cong, Gao; Ye, Yunming; Wu, Qingyao

    2017-08-01

    With the advancement of data acquisition techniques, tensor (multidimensional data) objects are increasingly accumulated and generated, for example, multichannel electroencephalographies, multiview images, and videos. In these applications, the tensor objects are usually nonnegative, since the physical signals are recorded. As the dimensionality of tensor objects is often very high, a dimension reduction technique becomes an important research topic of tensor data. From the perspective of geometry, high-dimensional objects often reside in a low-dimensional submanifold of the ambient space. In this paper, we propose a new approach to perform the dimension reduction for nonnegative tensor objects. Our idea is to use nonnegative Tucker decomposition (NTD) to obtain a set of core tensors of smaller sizes by finding a common set of projection matrices for tensor objects. To preserve geometric information in tensor data, we employ a manifold regularization term for the core tensors constructed in the Tucker decomposition. An algorithm called manifold regularization NTD (MR-NTD) is developed to solve the common projection matrices and core tensors in an alternating least squares manner. The convergence of the proposed algorithm is shown, and the computational complexity of the proposed method scales linearly with respect to the number of tensor objects and the size of the tensor objects, respectively. These theoretical results show that the proposed algorithm can be efficient. Extensive experimental results have been provided to further demonstrate the effectiveness and efficiency of the proposed MR-NTD algorithm.

  5. Gene selection for microarray data classification via subspace learning and manifold regularization.

    Science.gov (United States)

    Tang, Chang; Cao, Lijuan; Zheng, Xiao; Wang, Minhui

    2017-12-19

    With the rapid development of DNA microarray technology, large amounts of genomic data have been generated. Classification of these microarray data is a challenging task since gene expression data often have thousands of genes but a small number of samples. In this paper, an effective gene selection method is proposed to select the best subset of genes for microarray data, with the irrelevant and redundant genes removed. Compared with the original data, the selected gene subset can benefit the classification task. We formulate the gene selection task as a manifold regularized subspace learning problem. In detail, a projection matrix is used to project the original high-dimensional microarray data into a lower-dimensional subspace, with the constraint that the original genes can be well represented by the selected genes. Meanwhile, the local manifold structure of the original data is preserved by a Laplacian graph regularization term on the low-dimensional data space. The projection matrix can serve as an importance indicator of different genes. An iterative update algorithm is developed for solving the problem. Experimental results on six publicly available microarray datasets and one clinical dataset demonstrate that the proposed method performs better than other state-of-the-art methods in terms of microarray data classification.

  6. Regularity of the Speed of Biased Random Walk in a One-Dimensional Percolation Model

    Science.gov (United States)

    Gantert, Nina; Meiners, Matthias; Müller, Sebastian

    2018-03-01

    We consider biased random walks on the infinite cluster of a conditional bond percolation model on the infinite ladder graph. Axelson-Fisk and Häggström established for this model a phase transition for the asymptotic linear speed v̄ of the walk. Namely, there exists some critical value λ_c > 0 such that v̄ > 0 if λ ∈ (0, λ_c) and v̄ = 0 if λ ≥ λ_c. We show that the speed v̄ is continuous in λ on (0, ∞) and differentiable on (0, λ_c/2). Moreover, we characterize the derivative as a covariance. For the proof of the differentiability of v̄ on (0, λ_c/2), we require and prove a central limit theorem for the biased random walk. Additionally, we prove that the central limit theorem fails to hold for λ ≥ λ_c/2.

  7. Choice of regularization in adjoint tomography based on two-dimensional synthetic tests

    Czech Academy of Sciences Publication Activity Database

    Valentová, L.; Gallovič, F.; Růžek, Bohuslav; de la Puente, J.; Moczo, P.

    2015-01-01

    Roč. 202, č. 2 (2015), s. 787-799 ISSN 0956-540X Institutional support: RVO:67985530 Keywords: numerical approximations and analysis * seismic tomography * Europe Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 2.484, year: 2015

  8. Ultraviolet asymptotic behavior of the photon propagator in dimensionally regularized quantum electrodynamics

    International Nuclear Information System (INIS)

    Krasnikov, N.V.

    1991-01-01

    Study of the ultraviolet behavior of asymptotically nonfree theories is one of the most important problems of quantum field theory. Unfortunately, not much is known about the ultraviolet properties of asymptotically nonfree theories; the main obstacle is the growth of the effective coupling constant in the ultraviolet region, which renders perturbation theory inapplicable. It is shown that in quantum electrodynamics in n = 4 + 2ε space-time (ε > 0) the photon propagator has the ultraviolet asymptotic behavior D(k²) ∼ (k²)^(−1−ε). In the case ε_R ≤ −3πε + O(ε²)

  9. Determinants of Scanpath Regularity in Reading.

    Science.gov (United States)

    von der Malsburg, Titus; Kliegl, Reinhold; Vasishth, Shravan

    2015-09-01

    Scanpaths have played an important role in classic research on reading behavior. Nevertheless, they have largely been neglected in later research, perhaps due to a lack of suitable analytical tools. Recently, von der Malsburg and Vasishth (2011) proposed a new measure for quantifying differences between scanpaths and demonstrated that this measure can recover effects that were missed with traditional eyetracking measures. However, the sentences used in that study were difficult to process, and the scanpath effects were accordingly strong. The purpose of the present study was to test the validity, sensitivity, and scope of applicability of the scanpath measure, using simple sentences that are typically read from left to right. We derived predictions for the regularity of scanpaths from the literature on oculomotor control, sentence processing, and cognitive aging, and tested these predictions using the scanpath measure and a large database of eye movements. All predictions were confirmed: sentences with short words and syntactically more difficult sentences elicited more irregular scanpaths. Also, older readers produced more irregular scanpaths than younger readers. In addition, we found an effect that was not reported earlier: syntax had a smaller influence on the eye movements of older readers than on those of young readers. We discuss this interaction of syntactic parsing cost with age in terms of shifts in processing strategies and a decline of executive control as readers age. Overall, our results demonstrate the validity and sensitivity of the scanpath measure and thus establish it as a productive and versatile tool for reading research. Copyright © 2014 Cognitive Science Society, Inc.

  10. Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography.

    Science.gov (United States)

    Shaw, Calvin B; Prakash, Jaya; Pramanik, Manojit; Yalavarthy, Phaneendra K

    2013-08-01

    A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov-minimization scheme is developed for photoacoustic imaging. This approach is based on the least squares-QR decomposition which is a well-known dimensionality reduction technique for a large system of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution enabled via finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison.
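For orientation, the baseline the abstract above accelerates, Tikhonov regularization with a swept regularization parameter, can be sketched as follows. This is not the paper's LSQR-based dimensionality-reduction method; the random test problem is a hypothetical stand-in for the photoacoustic system matrix, and picking λ against a known truth mirrors the paper's numerical-phantom setting where the initial pressure is exactly known:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))           # stand-in system matrix
x_true = rng.standard_normal(20)            # "known initial pressure"
b = A @ x_true + 0.05 * rng.standard_normal(40)

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam * ||x||^2 via the augmented system."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

lams = np.logspace(-4, 1, 30)
errs = [np.linalg.norm(tikhonov(A, b, lam) - x_true) for lam in lams]
lam_best = lams[int(np.argmin(errs))]
print(lam_best)
```

Each grid point costs a full solve, which is exactly the expense that a dimensionality-reduction scheme like least squares QR avoids.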

  11. Yang-Mills theories in axial and light-cone gauges, analytic regularization and Ward identities

    International Nuclear Information System (INIS)

    Lee, H.C.

    1984-12-01

    The application of the principles of generalization and analytic continuation to the regularization of divergent Feynman integrals is discussed. The technique, called analytic regularization, is a generalization of dimensional regularization and is used to derive analytic representations for two classes of massless two-point integrals. The first class is based on the principal-value prescription and includes integrals encountered in quantum field theories in the ghost-free axial gauge (n·A = 0), reducing in a special case to integrals in the light-cone gauge (n·A = 0, n² = 0). The second class is based on the Mandelstam prescription devised especially for the light-cone gauge. For some light-cone gauge integrals the two representations are not equivalent. Both classes include as a subclass integrals in the Lorentz-covariant 'zeta-gauges'. The representations are used to compute one-loop corrections to the self-energy and the three-vertex in Yang-Mills theories in the axial and light-cone gauges, showing that the two- and three-point Ward identities are satisfied; to illustrate that ultraviolet and infrared singularities, indistinguishable in dimensional regularization, can be separated analytically; and to show that certain tadpole integrals vanish because of an exact cancellation between ultraviolet and infrared singularities. In the axial gauge, the wavefunction and vertex renormalization constants, Z₃ and Z₁, are identical, so that the β-function can be derived directly from Z₃, the result being the same as that computed in the covariant zeta-gauges. Preliminary results suggest that the light-cone gauge in the Mandelstam prescription, but not in the principal-value prescription, has the same renormalization property as the axial gauge.

  12. Dimensionality Reduction using Similarity-induced Embeddings

    OpenAIRE

    Passalis, Nikolaos; Tefas, Anastasios

    2017-01-01

    The vast majority of Dimensionality Reduction (DR) techniques rely on second-order statistics to define their optimization objective. Even though this provides adequate results in most cases, it comes with several shortcomings. The methods require carefully designed regularizers and they are usually prone to outliers. In this work, a new DR framework, that can directly model the target distribution using the notion of similarity instead of distance, is introduced. The proposed framework, call...

  13. Current redistribution in resistor networks: Fat-tail statistics in regular and small-world networks.

    Science.gov (United States)

    Lehmann, Jörg; Bernasconi, Jakob

    2017-03-01

    The redistribution of electrical currents in resistor networks after single-bond failures is analyzed in terms of current-redistribution factors that are shown to depend only on the topology of the network and on the values of the bond resistances. We investigate the properties of these current-redistribution factors for regular network topologies (e.g., d-dimensional hypercubic lattices) as well as for small-world networks. In particular, we find that the statistics of the current-redistribution factors exhibit a fat-tail behavior, which reflects the long-range nature of the current redistribution as determined by Kirchhoff's circuit laws.
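The Kirchhoff computation underlying these factors, bond currents before and after a single-bond failure, can be illustrated on a toy network. The square topology, unit resistances, and the pseudoinverse solve are implementation choices for this sketch, not details taken from the paper:

```python
import numpy as np

def bond_currents(edges, g, n, src, snk, I=1.0):
    """Node potentials from the weighted graph Laplacian (Kirchhoff's laws),
    then the current on each bond as conductance times potential drop."""
    L = np.zeros((n, n))
    for (a, b), gi in zip(edges, g):
        L[a, a] += gi; L[b, b] += gi
        L[a, b] -= gi; L[b, a] -= gi
    rhs = np.zeros(n); rhs[src] = I; rhs[snk] = -I
    v = np.linalg.pinv(L) @ rhs             # Laplacian is singular: pseudoinverse
    return np.array([gi * (v[a] - v[b]) for (a, b), gi in zip(edges, g)])

# Square 0-1-3-2-0 with unit conductances; inject at node 0, extract at 3.
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
g = [1.0, 1.0, 1.0, 1.0]
i_before = bond_currents(edges, g, 4, src=0, snk=3)

# Fail bond (0, 1): zero its conductance and recompute the flow.
g_failed = [0.0, 1.0, 1.0, 1.0]
i_after = bond_currents(edges, g_failed, 4, src=0, snk=3)
print(i_before, i_after)
```

Before the failure the two parallel paths each carry 0.5; afterwards the full unit current is rerouted through the surviving path, and ratios of such current changes to the failed bond's pre-failure current give redistribution factors.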

  14. Confinement in Polyakov gauge and the QCD phase diagram

    Energy Technology Data Exchange (ETDEWEB)

    Marhauser, Marc Florian

    2009-10-14

    We investigate Quantum Chromodynamics (QCD) in the framework of the functional renormalisation group (fRG), thereby describing the phase transition from the phase with confined quarks into the quark-gluon-plasma phase. We focus on a physical gauge in which the mechanism driving the phase transition is discernible. We find results compatible with lattice QCD data, as well as with functional methods applied in different gauges. The phase transition is of the expected order, and we compute critical exponents. Extensions of the model are discussed. When investigating the QCD phase diagram, we compute the effects of dynamical quarks at finite density on the running of the gauge coupling. Additionally, we calculate how these affect the deconfinement phase transition; dynamical quarks also allow for the inclusion of a finite chemical potential. Concluding the investigation of the phase diagram, we establish a relation between confinement and chiral symmetry breaking, which is tied to the dynamical generation of hadron masses. In these investigations, we often encounter scale-dependent fields, and we investigate a footing on which these can be dealt with in a uniform way. (orig.)

  15. Existence domains for invariant reactions in binary regular solution ...

    Indian Academy of Sciences (India)

    Unknown

    two phases (e.g. a liquid and a solid phase) has been examined using the regular solution model. The necessary conditions for the ... Binary phase diagrams; invariant reactions; regular solution model. 1. Introduction. Using the regular ...... Nb–Ta, Nb–W, Os–Re, Os–Ru, Pd–Pt, Pt–Rh,. Re–Ru, Ta–W, V–W]. R + T MN [Cr–V, ...

  16. Bregman Distance to L1 Regularized Logistic Regression

    OpenAIRE

    Gupta, Mithun Das; Huang, Thomas S.

    2010-01-01

    In this work we investigate the relationship between Bregman distances and regularized Logistic Regression model. We present a detailed study of Bregman Distance minimization, a family of generalized entropy measures associated with convex functions. We convert the L1-regularized logistic regression into this more general framework and propose a primal-dual method based algorithm for learning the parameters. We pose L1-regularized logistic regression into Bregman distance minimization and the...
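The L1-regularized logistic regression model discussed above can be fit with a generic proximal-gradient (ISTA) loop, shown here for orientation only; it is not the Bregman primal-dual algorithm the paper proposes, and the synthetic data and hyperparameters are hypothetical:

```python
import numpy as np

def l1_logreg(X, y, lam=0.05, lr=0.1, iters=500):
    """L1-regularized logistic regression via proximal gradient (ISTA).
    y in {0, 1}; soft-thresholding is the proximal map of the L1 penalty."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted probabilities
        grad = X.T @ (p - y) / len(y)               # gradient of mean log-loss
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox step
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -2.0, 1.5]   # sparse ground truth
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
w = l1_logreg(X, y)
print(np.round(w, 2))
```

The soft-threshold step zeroes small coefficients, so the recovered weight vector concentrates on the three informative features.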

  17. Regularization of plurisubharmonic functions with a net of good points

    OpenAIRE

    Li, Long

    2017-01-01

    The purpose of this article is to present a new regularization technique of quasi-plurisubharmoinc functions on a compact Kaehler manifold. The idea is to regularize the function on local coordinate balls first, and then glue each piece together. Therefore, all the higher order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and all the centers build a delta-net of the manifold eventually.

  18. Dynamic MRI Using SmooThness Regularization on Manifolds (SToRM).

    Science.gov (United States)

    Poddar, Sunrita; Jacob, Mathews

    2016-04-01

    We introduce a novel algorithm to recover real-time dynamic MR images from highly under-sampled k-t space measurements. The proposed scheme models the images in the dynamic dataset as points on a smooth, low-dimensional manifold in high-dimensional space. We propose to exploit the non-linear and non-local redundancies in the dataset by posing its recovery as a manifold-smoothness regularized optimization problem. A navigator acquisition scheme is used to determine the structure of the manifold, or equivalently the associated graph Laplacian matrix. The estimated Laplacian matrix is used to recover the dataset from undersampled measurements. The utility of the proposed scheme is demonstrated by comparisons with state-of-the-art methods in multi-slice real-time cardiac and speech imaging applications.

  19. Zeta-function regularization approach to finite temperature effects in Kaluza-Klein space-times

    International Nuclear Information System (INIS)

    Bytsenko, A.A.; Vanzo, L.; Zerbini, S.

    1992-01-01

    In the framework of the heat-kernel approach to zeta-function regularization, in this paper the one-loop effective potential at finite temperature for scalar and spinor fields on a Kaluza-Klein space-time of the form M_p × M_c^n, where M_p is p-dimensional Minkowski space-time, is evaluated. In particular, when the compact manifold is M_c^n = H^n/Γ, the Selberg trace formula associated with a discrete torsion-free group Γ of the n-dimensional Lobachevsky space H^n is used. An explicit representation for the thermodynamic potential valid for arbitrary temperature is found. As a result a complete high-temperature expansion is presented and the roles of zero modes and topological contributions are discussed.
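The heat-kernel route to zeta-function regularization used above rests on standard textbook identities (not results specific to this paper): for a Laplace-type operator Δ with eigenvalues λ_n,

```latex
% Spectral zeta function via the Mellin transform of the heat kernel:
\zeta_\Delta(s) \;=\; \sum_n \lambda_n^{-s}
  \;=\; \frac{1}{\Gamma(s)} \int_0^\infty dt\, t^{s-1}\,
        \operatorname{Tr} e^{-t\Delta},
\qquad
\log \det \Delta \;=\; -\,\zeta_\Delta'(0).
```

The one-loop effective potential is then read off from −½ζ′_Δ(0) (plus a scale term proportional to ζ_Δ(0) log μ²), which is the quantity the abstract evaluates at finite temperature on M_p × M_c^n.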

  20. Regular Breakfast and Blood Lead Levels among Preschool Children

    Directory of Open Access Journals (Sweden)

    Needleman Herbert

    2011-04-01

    Background: Previous studies have shown that fasting increases lead absorption in the gastrointestinal tract of adults. Regular meals/snacks are recommended as a nutritional intervention for lead poisoning in children, but epidemiological evidence of links between fasting and blood lead levels (B-Pb) is rare. The purpose of this study was to examine the association between eating a regular breakfast and B-Pb among children using data from the China Jintan Child Cohort Study. Methods: Parents completed a questionnaire regarding children's breakfast-eating habit (regular or not), demographics, and food frequency. Whole blood samples were collected from 1,344 children for the measurement of B-Pb and micronutrients (iron, copper, zinc, calcium, and magnesium). B-Pb and other measures were compared between children with and without regular breakfast. Linear regression modeling was used to evaluate the association between regular breakfast and log-transformed B-Pb. The association between regular breakfast and risk of lead poisoning (B-Pb ≥ 10 μg/dL) was examined using logistic regression modeling. Results: Median B-Pb among children who ate breakfast regularly and those who did not were 6.1 μg/dL and 7.2 μg/dL, respectively. Eating breakfast was also associated with greater zinc blood levels. Adjusting for other relevant factors, the linear regression model revealed that eating breakfast regularly was significantly associated with lower B-Pb (beta = -0.10 units of log-transformed B-Pb compared with children who did not eat breakfast regularly, p = 0.02). Conclusion: The present study provides some initial human data supporting the notion that eating a regular breakfast might reduce B-Pb in young children. To our knowledge, this is the first human study exploring the association between breakfast frequency and B-Pb in young children.

  1. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.

  2. Comparison of two three-dimensional cephalometric analysis computer software

    OpenAIRE

    Sawchuk, Dena; Alhadlaq, Adel; Alkhadra, Thamer; Carlyle, Terry D; Kusnoto, Budi; El-Bialy, Tarek

    2014-01-01

    Background: Three-dimensional cephalometric analyses are getting more attraction in orthodontics. The aim of this study was to compare two softwares to evaluate three-dimensional cephalometric analyses of orthodontic treatment outcomes. Materials and Methods: Twenty cone beam computed tomography images were obtained using i-CAT® imaging system from patient's records as part of their regular orthodontic records. The images were analyzed using InVivoDental5.0 (Anatomage Inc.) and 3DCeph™ (Unive...

  3. Validation of the nicotine dependence syndrome scale (NDSS): a criterion-group design contrasting chippers and regular smokers

    OpenAIRE

    Shiffman, Saul; Sayette, Michael A.

    2005-01-01

    The nicotine dependence syndrome scale (NDSS) is a new multi-dimensional measure of nicotine dependence, yielding five scores for different aspects of dependence as well as a total score. In this study, we tested the NDSS in a young adult sample (mean age = 24), using an extreme-groups comparison between non-dependent smokers (chippers, n = 123) and regular smokers (n = 130). Scores on each NDSS subscale strongly discriminated between the groups, with the NDSS-total discriminating them almost...

  4. The Effects of Regular Exercise on the Physical Fitness Levels

    Science.gov (United States)

    Kirandi, Ozlem

    2016-01-01

    The purpose of the present research is to investigate the effects of regular exercise on physical fitness levels among sedentary individuals. A total of 65 sedentary male individuals between the ages of 19-45, who had never exercised regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…

  5. Adaptive Regularization of Neural Networks Using Conjugate Gradient

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.

  6. Pairing renormalization and regularization within the local density approximation

    International Nuclear Information System (INIS)

    Borycki, P.J.; Dobaczewski, J.; Nazarewicz, W.; Stoitsov, M.V.

    2006-01-01

    We discuss methods used in mean-field theories to treat pairing correlations within the local density approximation. Pairing renormalization and regularization procedures are compared in spherical and deformed nuclei. Both prescriptions give fairly similar results, although the theoretical motivation, simplicity, and stability of the regularization procedure make it a method of choice for future applications

  7. On refinement of the unit simplex using regular simplices

    NARCIS (Netherlands)

    G.-Tóth, B.; Hendrix, E.M.T.; Casado, L.G.; García, I.

    2016-01-01

    A natural way to define branching in branch and bound (B&B) for blending problems is bisection. The consequence of using bisection is that partition sets are in general irregular. The question is how to use regular simplices in the refinement of the unit simplex. A regular simplex with fixed

  8. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)

    Home; Journals; Proceedings – Mathematical Sciences; Volume 115; Issue 3 ... A triangulation of a connected closed surface is called degree-regular if each of its vertices have the same degree. ... In [5], Datta and Nilakantan have classified all the degree-regular triangulations of closed surfaces on at most 11 vertices.

  9. Feature extraction using regular expression in detecting proper ...

    African Journals Online (AJOL)

    Feature extraction using regular expression in detecting proper noun for Malay news articles based on KNN algorithm. S Sulaiman, R.A. Wahid, F Morsidi. Abstract. No Abstract. Keywords: data mining; named entity recognition; regular expression; natural language processing. Full Text: EMAIL FREE FULL TEXT EMAIL ...
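A toy version of the regex feature-extraction step is easy to show; the English example sentence and the capitalized-run pattern below are illustrative only, since the actual system targets Malay news text and feeds the features into a KNN classifier:

```python
import re

text = "The minister met Siti Aminah in Kuala Lumpur yesterday."

# Runs of capitalized words are proper-noun *candidates*; dropping a match
# that starts the sentence is a crude filter for ordinary sentence-initial
# capitalization.
candidates = re.findall(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*", text)
proper = [c for c in candidates if not text.startswith(c)]
print(proper)  # ['Siti Aminah', 'Kuala Lumpur']
```

In a full pipeline, each candidate would be turned into a feature vector (capitalization pattern, position, surrounding words) for the classifier to accept or reject.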

  10. Degree-regular triangulations of torus and Klein bottle

    Indian Academy of Sciences (India)


    torial 2-manifolds are the boundaries of the tetrahedron, the octahedron, the icosahedron and the 6-vertex real projective plane [4, 5]. The combinatorial manifolds T3,3,0 and T6,2,2. (in Examples 2 and 3) are combinatorially regular. Schulte and Wills [10, 11] have con- structed two combinatorially regular triangulations of ...

  11. Analysis of regularized Navier-Stokes equations, 2

    Science.gov (United States)

    Ou, Yuh-Roung; Sritharan, S. S.

    1989-01-01

    A practically important regularization of the Navier-Stokes equations was analyzed. As a continuation of the previous work, the structure of the attractors characterizing the solutins was studied. Local as well as global invariant manifolds were found. Regularity properties of these manifolds are analyzed.

  12. Strictly-regular number system and data structures

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Jensen, Claus; Katajainen, Jyrki

    2010-01-01

    We introduce a new number system that we call the strictly-regular system, which efficiently supports the operations: digit-increment, digit-decrement, cut, concatenate, and add. Compared to other number systems, the strictly-regular system has distinguishable properties. It is superior to the re...

  13. Geometric aspects of 2-walk-regular graphs

    NARCIS (Netherlands)

    Camara Vallejo, M.; van Dam, E.R.; Koolen, J.H.; Park, J.

    2013-01-01

    A t-walk-regular graph is a graph for which the number of walks of given length between two vertices depends only on the distance between these two vertices, as long as this distance is at most t. Such graphs generalize distance-regular graphs and t-arc-transitive graphs. In this paper, we will

  14. On the Generating Power of Regularly Controlled Bidirectional Grammars

    NARCIS (Netherlands)

    Asveld, P.R.J.; Hogendorp, J.A.; Hogendorp, J.A.

    1991-01-01

    RCB-grammars or regularly controlled bidirectional grammars are context-free grammars of which the rules can be used in a productive and in a reductive fashion. In addition, the application of these rules is controlled by a regular language. Several modes of derivation can be distinguished for this

  15. On the Generating Power of Regularly Controlled Bidirectional Grammars

    NARCIS (Netherlands)

    Asveld, P.R.J.; Hogendorp, Jan Anne

    1989-01-01

    RCB-grammars or regularly controlled bidirectional grammars are context-free grammars of which the rules can be used in a productive and in a reductive fashion. In addition, the application of these rules is controlled by a regular language. Several modes of derivation can be distinguished for this

  16. Chiral quark soliton model with Pauli-Villars regularization

    Science.gov (United States)

    Kubota, T.; Wakamatsu, M.; Watabe, T.

    1999-07-01

    The Pauli-Villars regularization scheme is often used for evaluating parton distributions within the framework of the chiral quark soliton model with the inclusion of the vacuum polarization effects. Its simplest version with a single subtraction term should, however, be taken with some caution, since it does not fully get rid of divergences contained in the scalar and pseudoscalar quark densities appearing in the soliton equation of motion. To remedy this shortcoming, we propose here its natural extension, i.e., the Pauli-Villars regularization scheme with multisubtraction terms. We also carry out a comparative analysis of the Pauli-Villars regularization scheme and the more popular proper-time one. It turns out that some isovector observables, such as the isovector magnetic moment of the nucleon, are rather sensitive to the choice of regularization scheme. In the process of tracing the origin of this sensitivity, a noticeable difference between the two regularization schemes is revealed.

  17. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  18. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited for handling difficult problems posed by high-dimensional multivariate datasets ( p ≫ n paradigm), such as in 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  19. The Hamiltonian formulation of regular rth-order Lagrangian field theories

    International Nuclear Information System (INIS)

    Shadwick, W.F.

    1982-01-01

    A Hamiltonian formulation of regular rth-order Lagrangian field theories over an m-dimensional manifold is presented in terms of the Hamilton-Cartan formalism. It is demonstrated that a uniquely determined Cartan m-form may be associated to an rth-order Lagrangian by imposing conditions of congruence modulo a suitably defined system of contact m-forms. A geometric regularity condition is given and it is shown that, for a regular Lagrangian, the momenta defined by the Hamilton-Cartan formalism, together with the coordinates on the (r-1)st-order jet bundle, are a minimal set of local coordinates needed to express the Euler-Lagrange equations. When r is greater than one, the number of variables required is strictly less than the dimension of the (2r-1)st order jet bundle. It is shown that, in these coordinates, the Euler-Lagrange equations take the first-order Hamiltonian form given by de Donder. It is also shown that the geometrically natural generalization of the Hamilton-Jacobi procedure for finding extremals is equivalent to de Donder's Hamilton-Jacobi equation. (orig.)

  20. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

    Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by a Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at the previous scale. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.

  1. Two hybrid regularization frameworks for solving the electrocardiography inverse problem

    Energy Technology Data Exchange (ETDEWEB)

    Jiang Mingfeng; Xia Ling; Shou Guofa; Liu Feng [Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027 (China); Crozier, Stuart [School of Information Technology and Electrical Engineering, University of Queensland, St. Lucia, Brisbane, Queensland 4072 (Australia)], E-mail: xialing@zju.edu.cn

    2008-09-21

    In this paper, two hybrid regularization frameworks, LSQR-Tik and Tik-LSQR, which integrate the properties of the direct regularization method (Tikhonov) and the iterative regularization method (LSQR), have been proposed and investigated for solving ECG inverse problems. The LSQR-Tik method is based on the Lanczos process, which yields a sequence of small bidiagonal systems to approximate the original ill-posed problem, and then the Tikhonov regularization method is applied to stabilize the projected problem. The Tik-LSQR method is formulated as an iterative LSQR inverse, augmented with a Tikhonov-like prior information term. The performances of these two hybrid methods are evaluated using a realistic heart-torso model simulation protocol, in which the heart surface source method is employed to calculate the simulated epicardial potentials (EPs) from the action potentials (APs), and then the acquired EPs are used to calculate simulated body surface potentials (BSPs). The results show that the regularized solutions obtained by the LSQR-Tik method are approximate to those of the Tikhonov method; the computational cost of the LSQR-Tik method, however, is much less than that of the Tikhonov method. Moreover, the Tik-LSQR scheme can reconstruct the epicardial potential distribution more accurately, especially for BSPs with large noise. This investigation suggests that hybrid regularization methods may be more effective than separate regularization approaches for ECG inverse problems.

  2. Two hybrid regularization frameworks for solving the electrocardiography inverse problem

    Science.gov (United States)

    Jiang, Mingfeng; Xia, Ling; Shou, Guofa; Liu, Feng; Crozier, Stuart

    2008-09-01

    In this paper, two hybrid regularization frameworks, LSQR-Tik and Tik-LSQR, which integrate the properties of the direct regularization method (Tikhonov) and the iterative regularization method (LSQR), have been proposed and investigated for solving ECG inverse problems. The LSQR-Tik method is based on the Lanczos process, which yields a sequence of small bidiagonal systems to approximate the original ill-posed problem, and then the Tikhonov regularization method is applied to stabilize the projected problem. The Tik-LSQR method is formulated as an iterative LSQR inverse, augmented with a Tikhonov-like prior information term. The performances of these two hybrid methods are evaluated using a realistic heart-torso model simulation protocol, in which the heart surface source method is employed to calculate the simulated epicardial potentials (EPs) from the action potentials (APs), and then the acquired EPs are used to calculate simulated body surface potentials (BSPs). The results show that the regularized solutions obtained by the LSQR-Tik method are close to those of the Tikhonov method; the computational cost of the LSQR-Tik method, however, is much less than that of the Tikhonov method. Moreover, the Tik-LSQR scheme can reconstruct the epicardial potential distribution more accurately, especially in cases where the BSPs are heavily contaminated by noise. This investigation suggests that hybrid regularization methods may be more effective than separate regularization approaches for ECG inverse problems.

  3. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to the data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. We also explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). Both effectively integrate the tangent space intrinsic manifold regularization. The optimization problem of TiSVMs can be solved as a standard quadratic program, while that of TiTSVMs can be solved as a pair of standard quadratic programs. The experimental results on semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
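
    The tangent-space estimation step mentioned above (local PCA) can be sketched directly: for each point, take the SVD of its centred neighbourhood and keep the top right singular vectors. This is a generic illustration with an assumed neighbourhood size, not the authors' implementation.

```python
import numpy as np

# Hedged sketch of local tangent space estimation via local PCA: the top d
# right singular vectors of the centred k-neighbourhood span the estimated
# tangent space at a point.  k and d here are illustrative choices.
def local_tangent_basis(X, idx, k=5, d=1):
    nbrs = np.argsort(np.linalg.norm(X - X[idx], axis=1))[:k]   # k nearest points
    Xc = X[nbrs] - X[nbrs].mean(axis=0)                         # centre the patch
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T                    # columns: orthonormal tangent basis

# Points on the line y = 2x in the plane: the tangent space is the line itself.
t = np.linspace(0, 1, 20)
X = np.column_stack([t, 2 * t])
T = local_tangent_basis(X, idx=10, k=5, d=1)
```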

  4. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
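
    The learning idea can be made concrete with a toy stand-in: given training pairs for a fixed forward operator, pick the Tikhonov parameter minimizing the average reconstruction error. This grid search is a hedged simplification of the empirical Bayes risk minimization described above; the operator, grid, and noise level are all assumptions for illustration.

```python
import numpy as np

# Hedged sketch: learn a single Tikhonov parameter from training pairs
# (b_i, x_i_true) by minimizing the average reconstruction error on a grid.
def learn_lambda(A, B_train, X_true, lams):
    def solve(b, lam):
        return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)
    def avg_err(lam):
        return np.mean([np.linalg.norm(solve(b, lam) - x)
                        for b, x in zip(B_train, X_true)])
    return min(lams, key=avg_err)      # brute-force stand-in for the learned rule

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
X_true = [rng.standard_normal(5) for _ in range(10)]
B_train = [A @ x + 0.1 * rng.standard_normal(20) for x in X_true]   # noisy data
lam_star = learn_lambda(A, B_train, X_true, np.logspace(-3, 1, 20))
```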

  5. Anodic dissolution of alloys during electrochemical dimensional machining of parts

    International Nuclear Information System (INIS)

    Davydov, A.D.

    1980-01-01

    Analysis of the main regularities of anodic dissolution of alloys at high current densities, which is necessary for the explanation and prediction of the results of electrochemical dimensional machining of parts, is carried out. Examples in which the chemical composition has a determining effect on the anodic behaviour and electrochemical treatment of the alloys are analyzed

  6. Regularization theory for ill-posed problems selected topics

    CERN Document Server

    Lu, Shuai

    2013-01-01

    This monograph is a valuable contribution to the highly topical and extremely productive field of regularisation methods for inverse and ill-posed problems. The author is an internationally outstanding and accepted mathematician in this field. In his book he offers a well-balanced mixture of basic and innovative aspects. He demonstrates new, differentiated viewpoints, and important examples for applications. The book demonstrates current developments in the field of regularization theory, such as multiparameter regularization and regularization in learning theory. The book is written for graduate students and PhDs

  7. Chiral anomaly, fermionic determinant and two dimensional models

    International Nuclear Information System (INIS)

    Rego Monteiro, M.A. do.

    1985-01-01

    The chiral anomaly in arbitrary even dimensions is analysed. The anomaly is calculated perturbatively by the dimensional regularization method. A new method for the non-perturbative calculation of the Jacobian of a general chiral transformation, i.e., a finite and non-Abelian one, is developed. This method is used for the non-perturbative calculation of the chiral anomaly, as an alternative to the bosonization of two-dimensional theories of massless fermions, and to study the phenomenon of fermion number fractionalization. The fermionic determinant of two-dimensional quantum chromodynamics is also studied and calculated exactly, both in the decoupling gauge and without reference to a particular gauge. (M.C.K.) [pt

  8. L1-norm locally linear representation regularization multi-source adaptation learning.

    Science.gov (United States)

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore, the success of supervised DAL in this "small sample" regime requires the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, building on this robust graph, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.
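
    The LLE-style graph regularizer underlying such frameworks can be sketched in a few lines. This is the classical L2 version only (the abstract's contribution is precisely the L1 replacement, which is not reproduced here); the weight matrix and toy labels are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: given locally linear reconstruction weights W (row i
# reconstructs point i from its neighbours, rows summing to 1), the LLE-style
# regularizer is M = (I - W)^T (I - W) and the smoothness penalty on labels f
# is f^T M f = ||(I - W) f||^2.
def lle_regularizer(W):
    IW = np.eye(W.shape[0]) - W
    return IW.T @ IW

# Toy weights on a 3-point chain: the middle point is the average of its ends.
W = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
M = lle_regularizer(W)
f_smooth = np.array([1.0, 2.0, 3.0])    # linear along the chain
f_rough = np.array([1.0, 5.0, 1.0])     # violates the local reconstructions
penalty_smooth = f_smooth @ M @ f_smooth
penalty_rough = f_rough @ M @ f_rough
```

    The penalty rewards label assignments consistent with the local linear reconstructions, which is what the graph regularization term enforces.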

  9. Prior knowledge regularization in statistical medical image tasks

    DEFF Research Database (Denmark)

    Crimi, Alessandro; Sporring, Jon; de Bruijne, Marleen

    2009-01-01

    for regularizing the covariance matrix using prior knowledge. Our method is evaluated for reconstructing and modeling vertebra and cartilage shapes from a lower-dimensional representation and a conditional model. For these central problems, the proposed methodology outperforms the traditional MLE method...

  10. The entire regularization path for the support vector domain description

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Larsen, Rasmus

    2006-01-01

    The support vector domain description is a one-class classification method that estimates the shape and extent of the distribution of a data set. This separates the data into outliers, outside the decision boundary, and inliers on the inside. The method bears close resemblance to the two-class support vector machine classifier. Recently, it was shown that the regularization path of the support vector machine is piecewise linear, and that the entire path can be computed efficiently. This paper shows that this property carries over to the support vector domain description. Using our results, the one-class classification problem can be solved for any amount of regularization with roughly the same computational complexity required to solve for a particular value of the regularization parameter. The possibility of evaluating the results for any amount of regularization not only offers more...

  11. BILINGUAL AND REGULAR CLASS STUDENTS’ ATTITUDES TOWARDS ENGLISH

    Directory of Open Access Journals (Sweden)

    Nihta Liando

    2013-01-01

    Full Text Available This article presents the results of a study investigating the relationship between students' attitudes towards English and their English achievement in bilingual and regular classes, and the differences between the two class types. The study was conducted in a junior high school in Manado, with 30 Year VIII students in each bilingual class and each regular class. The results are as follows. First, there is a significant correlation between the students' attitudes towards English and their achievement. Second, there is a significant difference in English achievement between bilingual class students and regular class students. Third, female students have higher English achievement than male students. Overall, bilingual class students have more positive attitudes and higher English learning achievement than regular class students.

  12. A Regularized Algorithm for the Proximal Split Feasibility Problem

    Directory of Open Access Journals (Sweden)

    Zhangsong Yao

    2014-01-01

    Full Text Available The proximal split feasibility problem has been studied. A regularized method has been presented for solving the proximal split feasibility problem. Strong convergence theorem is given.

  13. Mini-Stroke vs. Regular Stroke: What's the Difference?

    Science.gov (United States)

    ... How is a ministroke different from a regular stroke? Answers from Jerry W. Swanson, M.D. When ... brain, spinal cord or retina, which may cause stroke-like symptoms but does not damage brain cells ...

  14. q-regular variation and q-difference equations

    International Nuclear Information System (INIS)

    Řehák, Pavel; Vítovec, Jiří

    2008-01-01

    We introduce the concept of q-regularly varying functions and establish basic properties of such functions. Among other things it is shown that considering regular variation in q-calculus is somehow natural and leads to interesting observations and simplifications compared with classical continuous and discrete theories. The obtained theory is applied to an investigation of asymptotic behavior of solutions to linear second-order q-difference equations
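
    The defining condition can be stated compactly. The following is a common characterization of q-regular variation (hedged: the paper's exact normalization and notation may differ). For a fixed q > 1 and a positive function f on the q-uniform lattice:

```latex
% q-regular variation of index \rho (common characterization; notation assumed):
f \colon q^{\mathbb{N}_0} \to (0,\infty)
\ \text{is $q$-regularly varying of index } \rho
\iff
\lim_{t \to \infty} \frac{f(qt)}{f(t)} = q^{\rho}.
```

    The case ρ = 0 gives q-slowly varying functions; this mirrors the classical Karamata condition lim_{x→∞} f(λx)/f(x) = λ^ρ, with the limit taken only along the lattice.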

  15. Borderline personality disorder and regularly drinking alcohol before sex.

    Science.gov (United States)

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol

  16. Estimation of the global regularity of a multifractional Brownian motion

    DEFF Research Database (Denmark)

    Lebovits, Joachim; Podolskij, Mark

    This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.

  17. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities.

  18. Total Variation Regularization of Matrix-Valued Images

    Directory of Open Access Journals (Sweden)

    Oddvar Christiansen

    2007-01-01

    Blomgren and Chan in 1998. We treat the diffusion matrix D implicitly as the product D=LLT, and work with the elements of L as variables, instead of working directly on the elements of D. This ensures positive definiteness of the tensor during the regularization flow, which is essential when regularizing DTI. We perform numerical experiments on both synthetic data and 3D human brain DTI, and measure the quantitative behavior of the proposed model.
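
    The key property of the D = LL^T parametrization is easy to verify numerically: any such product is symmetric positive semidefinite, so evolving L during the flow can never produce an invalid tensor. A minimal sketch (the random factor is purely illustrative):

```python
import numpy as np

# Hedged sketch of the parametrization above: forming D = L L^T from any
# lower-triangular factor L keeps the diffusion tensor symmetric positive
# semidefinite, with no projection step required during the flow.
def tensor_from_factor(L):
    return L @ L.T

rng = np.random.default_rng(1)
L = np.tril(rng.standard_normal((3, 3)))   # arbitrary lower-triangular factor
D = tensor_from_factor(L)
eigvals = np.linalg.eigvalsh(D)            # all eigenvalues are >= 0
```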

  19. Viscous Regularization of the Euler Equations and Entropy Principles

    KAUST Repository

    Guermond, Jean-Luc

    2014-03-11

    This paper investigates a general class of viscous regularizations of the compressible Euler equations. A unique regularization is identified that is compatible with all the generalized entropies, à la [Harten et al., SIAM J. Numer. Anal., 35 (1998), pp. 2117-2127], and satisfies the minimum entropy principle. A connection with a recently proposed phenomenological model by [H. Brenner, Phys. A, 370 (2006), pp. 190-224] is made. © 2014 Society for Industrial and Applied Mathematics.

  20. q-regular variation and q-difference equations

    Energy Technology Data Exchange (ETDEWEB)

    Řehák, Pavel [Institute of Mathematics, Academy of Sciences of the Czech Republic, Žižkova 22, CZ61662 Brno (Czech Republic); Vítovec, Jiří [Department of Mathematical Analysis, Faculty of Science, Masaryk University Brno, Janáčkovo náměstí 2a, CZ60200 Brno (Czech Republic)], E-mail: rehak@math.muni.cz, E-mail: vitovec@math.muni.cz

    2008-12-12

    We introduce the concept of q-regularly varying functions and establish basic properties of such functions. Among other things it is shown that considering regular variation in q-calculus is somehow natural and leads to interesting observations and simplifications compared with classical continuous and discrete theories. The obtained theory is applied to an investigation of asymptotic behavior of solutions to linear second-order q-difference equations.

  1. Sleep and Student Success: The Role of Regularity vs. Duration

    OpenAIRE

    Luong, Phuc; Lusher, Lester; Yasenov, Vasil

    2017-01-01

    Recent correlational studies and media reports have suggested that sleep regularity – the variation in the amount of sleep one gets across days – is a stronger determinant of student success than sleep duration – the total amount of sleep one receives. We identify the causal impacts of sleep regularity and sleep duration on student success by leveraging over 165,000 student-classroom observations from a large university in Vietnam where incoming freshmen were randomly assigned into course sch...

  2. Estimation of the global regularity of a multifractional Brownian motion

    OpenAIRE

    Lebovits, Joachim; Podolskij, Mark

    2016-01-01

    This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of th...

  3. Iterative regularization methods for nonlinear ill-posed problems

    CERN Document Server

    Scherzer, Otmar; Kaltenbacher, Barbara

    2008-01-01

    Nonlinear inverse problems appear in many applications, and typically they lead to mathematical models that are ill-posed, i.e., they are unstable under data perturbations. Those problems require a regularization, i.e., a special numerical treatment. This book presents regularization schemes which are based on iteration methods, e.g., nonlinear Landweber iteration, level set methods, multilevel methods and Newton type methods.
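
    The simplest member of this family is linear Landweber iteration, where the number of iterations itself plays the role of the regularization parameter. A hedged sketch of the linear case only (for nonlinear problems, Ax is replaced by F(x) and A^T by the adjoint of the Fréchet derivative; the toy system below is an assumption):

```python
import numpy as np

# Hedged sketch of linear Landweber iteration
#   x_{k+1} = x_k + w * A^T (b - A x_k),
# stable for step sizes w below 1 / ||A||_2^2; early stopping regularizes.
def landweber(A, b, steps=200, w=None):
    if w is None:
        w = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + w * (A.T @ (b - A @ x))   # gradient step on ||Ax - b||^2 / 2
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
x = landweber(A, b, steps=500)            # converges towards A^{-1} b = [1, 3]
```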

  4. Regularization of the quantum field theory of charges and monopoles

    International Nuclear Information System (INIS)

    Panagiotakopoulos, C.

    1981-09-01

    A gauge invariant regularization procedure for quantum field theories of electric and magnetic charges based on Zwanziger's local formulation is proposed. The bare regularized full Green's functions of gauge invariant operators are shown to be Lorentz invariant. This would have as a consequence the Lorentz invariance of the finite Green's functions that might result after any reasonable subtraction if such a subtraction can be found. (author)

  5. Label-Informed Non-negative Matrix Factorization with Manifold Regularization for Discriminative Subnetwork Detection.

    Science.gov (United States)

    Watanabe, Takanori; Tunc, Birkan; Parker, Drew; Kim, Junghoon; Verma, Ragini

    2016-10-01

    In this paper, we present a novel method for obtaining a low dimensional representation of a complex brain network that: (1) can be interpreted in a neurobiologically meaningful way, (2) emphasizes group differences by accounting for label information, and (3) captures the variation in disease subtypes/severity by respecting the intrinsic manifold structure underlying the data. Our method is a supervised variant of non-negative matrix factorization (NMF), and achieves dimensionality reduction by extracting an orthogonal set of subnetworks that are interpretable, reconstructive of the original data, and also discriminative at the group level. In addition, the method includes a manifold regularizer that encourages the low dimensional representations to be smooth with respect to the intrinsic geometry of the data, allowing subjects with similar disease severity to share similar network representations. While the method is generalizable to other types of non-negative network data, in this work we have used structural connectomes (SCs) derived from diffusion data to identify the cortical/subcortical connections that have been disrupted in abnormal neurological states. Experiments on a traumatic brain injury (TBI) dataset demonstrate that our method can identify subnetworks that can reliably classify TBI from controls and also reveal insightful connectivity patterns that may be indicative of a biomarker.
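
    The unsupervised core that such supervised variants extend is plain NMF, which is commonly fitted with Lee-Seung multiplicative updates. A hedged sketch of that baseline only (the label and manifold terms from the abstract are not reproduced; the data matrix and rank are assumptions):

```python
import numpy as np

# Hedged sketch of plain NMF via Lee-Seung multiplicative updates: factor a
# non-negative matrix V ~ W H with W, H >= 0.  The small epsilon guards
# against division by zero.
def nmf(V, r, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update basis subnetworks
    return W, H

V = np.random.default_rng(1).random((6, 8)) + 0.1   # toy non-negative data
W, H = nmf(V, r=3)
err = np.linalg.norm(V - W @ H)
```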

  6. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
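
    The eigendecomposition construction can be sketched generically: build a covariance matrix from a correlation model over the (possibly irregular) cell positions, then form an inverse-square-root operator from its eigenpairs so that the penalty ||C^{-1/2} m|| favours models matching the assumed structure. The exponential correlation model and the toy cell positions below are assumptions, not the paper's specific choices.

```python
import numpy as np

# Hedged sketch: geostatistical regularization operator C^{-1/2} obtained by
# eigendecomposition of a covariance matrix built from a correlation model.
def geostat_operator(points, corr_len=1.0):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)                  # exponential correlation model (assumed)
    w, V = np.linalg.eigh(C)                   # C is symmetric positive definite
    return V @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ V.T

pts = np.array([[0.0, 0.0], [0.3, 0.1], [1.5, 0.2], [2.0, 2.0]])   # irregular cells
Wop = geostat_operator(pts, corr_len=0.5)
```

    Unlike a nearest-neighbour smoothness stencil, every pair of cells within the correlation length contributes to the operator.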

  7. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig

    2017-10-18

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the ℓ2 -regularized least squares problem, with the unknown regularizer related to the norm bound of the introduced perturbation through a nonlinear constraint. A procedure is proposed that combines the constraint equation with the mean squared error (MSE) criterion to develop an approximately optimal regularization parameter selection algorithm. Both direct and indirect applications of the proposed method are considered. Comparisons with different Tikhonov regularization parameter selection methods, as well as with other relevant methods, are carried out. Numerical results demonstrate that the proposed method provides significant improvement over state-of-the-art methods.
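
    The l2-regularized least-squares solution that BPR converges to can be written via SVD filter factors, which makes the mechanism visible: regularization damps the contributions of small singular values that would otherwise amplify noise. A generic hedged sketch (not the BPR algorithm itself; the toy matrix is an assumption):

```python
import numpy as np

# Hedged sketch of the l2-regularized least-squares solution via SVD:
# x = sum_i  [sigma_i / (sigma_i^2 + lam)] * (u_i^T b) * v_i,
# where the bracketed filter factors damp the small-sigma directions.
def ridge_svd(A, b, lam):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam)            # Tikhonov filter factors
    return Vt.T @ (filt * (U.T @ b))

A = np.array([[1.0, 0.0], [0.0, 1e-3]])   # one tiny singular value
b = np.array([1.0, 1.0])
x0 = ridge_svd(A, b, 0.0)    # unregularized: blows up along the weak direction
x1 = ridge_svd(A, b, 1e-2)   # regularized: weak direction is damped
```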

  8. The relationship between lifestyle regularity and subjective sleep quality

    Science.gov (United States)

    Monk, Timothy H.; Reynolds, Charles F 3rd; Buysse, Daniel J.; DeGrazia, Jean M.; Kupfer, David J.

    2003-01-01

    In previous work we have developed a diary instrument, the Social Rhythm Metric (SRM), which allows the assessment of lifestyle regularity, and a questionnaire instrument, the Pittsburgh Sleep Quality Index (PSQI), which allows the assessment of subjective sleep quality. The aim of the present study was to explore the relationship between lifestyle regularity and subjective sleep quality. Lifestyle regularity was assessed by both standard (SRM-17) and shortened (SRM-5) metrics; subjective sleep quality was assessed by the PSQI. We hypothesized that high lifestyle regularity would be conducive to better sleep. Both instruments were given to a sample of 100 healthy subjects who were studied as part of a variety of different experiments spanning a 9-yr time frame. Ages ranged from 19 to 49 yr (mean age: 31.2 yr, s.d.: 7.8 yr); there were 48 women and 52 men. SRM scores were derived from a two-week diary. The hypothesis was confirmed. There was a significant negative correlation (rho = -0.4): subjects with higher levels of lifestyle regularity reported fewer sleep problems. This relationship was also supported by a categorical analysis, where the proportion of "poor sleepers" was doubled in the "irregular types" group as compared with the "non-irregular types" group. Thus, there appears to be an association between lifestyle regularity and good sleep, though the direction of causality remains to be tested.

  9. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  10. Recursive support vector machines for dimensionality reduction.

    Science.gov (United States)

    Tao, Qing; Chu, Dejun; Wang, Jue

    2008-01-01

    The usual dimensionality reduction technique in supervised learning is mainly based on linear discriminant analysis (LDA), but it suffers from singularity or undersampled problems. On the other hand, a regular support vector machine (SVM) separates the data only in terms of one single direction of maximum margin, and the classification accuracy may not be good enough. In this letter, a recursive SVM (RSVM) is presented, in which several orthogonal directions that best separate the data with the maximum margin are obtained. Theoretical analysis shows that a completely orthogonal basis can be derived in the feature subspace spanned by the training samples and that the margin is decreasing along the recursive components in linearly separable cases. As a result, a new dimensionality reduction technique based on multilevel maximum margin components, and then a classifier with high accuracy, are achieved. Experiments on synthetic and several real data sets show that RSVM using multilevel maximum margin features can perform efficient dimensionality reduction and outperform regular SVM in binary classification problems.
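
    The recursive step that makes successive directions orthogonal can be sketched in isolation: after one separating direction w is found, the data are projected onto the orthogonal complement of w before the next direction is sought. This deflation sketch is a generic illustration of that idea only, not the RSVM training procedure, and the data and direction below are assumptions.

```python
import numpy as np

# Hedged sketch of the deflation step used between recursive rounds: remove
# each sample's component along the found direction w, so any direction
# obtained from the deflated data is orthogonal to w.
def deflate(X, w):
    w = w / np.linalg.norm(w)
    return X - np.outer(X @ w, w)      # project onto the orthogonal complement of w

rng = np.random.default_rng(3)
X = rng.standard_normal((8, 3))
w = np.array([1.0, 0.0, 0.0])          # first separating direction (assumed)
X1 = deflate(X, w)                     # X1 has no component along w
```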

  11. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  12. Dimensionality reduction methods:

    OpenAIRE

    Amenta, Pietro; D'Ambra, Luigi; Gallo, Michele

    2005-01-01

    In case one or more sets of variables are available, the use of dimensional reduction methods could be necessary. In this context, after a review of the link between shrinkage regression methods and dimensional reduction methods, the authors provide a different multivariate extension of Garthwaite's PLS approach (1994), in which a simple linear regression coefficient framework can be given for several dimensional reduction methods.

  13. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China) and School of Life Sciences and Technology, Xidian University, Xi' an 710071 (China)

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. To address these problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an ℓ2 data-fidelity term and a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires computation of the residual and of the regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used

  14. Dimensional Enhancement via Supersymmetry

    Directory of Open Access Journals (Sweden)

    M. G. Faux

    2011-01-01

    of supersymmetry in one time-like dimension. This is enabled by algebraic criteria, derived, exhibited, and utilized in this paper, which indicate which subset of one-dimensional supersymmetric models describes “shadows” of higher-dimensional models. This formalism delineates that minority of one-dimensional supersymmetric models which can “enhance” to accommodate extra dimensions. As a consistency test, we use our formalism to reproduce well-known conclusions about supersymmetric field theories using one-dimensional reasoning exclusively. And we introduce the notion of “phantoms” which usefully accommodate higher-dimensional gauge invariance in the context of shadow multiplets in supersymmetric quantum mechanics.

  15. Dimensional cosmological principles

    International Nuclear Information System (INIS)

    Chi, L.K.

    1985-01-01

    The dimensional cosmological principles proposed by Wesson require that the density, pressure, and mass of cosmological models be functions of the dimensionless variables which are themselves combinations of the gravitational constant, the speed of light, and the spacetime coordinates. The space coordinate is not the comoving coordinate. In this paper, the dimensional cosmological principle and the dimensional perfect cosmological principle are reformulated by using the comoving coordinate. The dimensional perfect cosmological principle is further modified to allow the possibility that mass creation may occur. Self-similar spacetimes are found to be models obeying the new dimensional cosmological principle

  16. Dimensionality Reduction Algorithms on High Dimensional Datasets

    Directory of Open Access Journals (Sweden)

    Iwan Syarif

    2014-12-01

    Full Text Available Classification problems, especially for high-dimensional datasets, have attracted many researchers seeking efficient approaches to address them. However, the classification problem becomes very complicated, especially when the number of possible combinations of variables is so high. In this research, we evaluate the performance of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) as feature selection algorithms when applied to high-dimensional datasets. Our experiments show that in terms of dimensionality reduction, PSO is much better than GA. PSO successfully reduced the number of attributes of 8 datasets to 13.47% on average, while GA only reached 31.36% on average. In terms of classification performance, GA is slightly better than PSO: GA-reduced datasets perform better than their original versions on 5 of 8 datasets, while PSO-reduced datasets do on only 3 of 8. Keywords: feature selection, dimensionality reduction, Genetic Algorithm (GA), Particle Swarm Optimization (PSO).
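A binary PSO for feature selection can be sketched compactly: each particle holds a real-valued score per feature, a feature is "selected" when its score is positive, and the swarm is driven by a wrapper fitness. The toy data, nearest-centroid fitness, and penalty weight below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy data: only the first 3 of 20 features carry class signal
X = rng.normal(size=(200, 20))
y = rng.integers(0, 2, size=200)
X[y == 1, :3] += 2.0

def fitness(mask):
    """Nearest-centroid accuracy on the selected features,
    minus a small penalty per selected feature."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return (pred == y).mean() - 0.01 * mask.sum()

# binary PSO: real-valued particle positions, thresholded at zero
n_particles, n_feat, iters = 20, X.shape[1], 40
pos = rng.normal(size=(n_particles, n_feat))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p > 0) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p > 0) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("selected features:", np.flatnonzero(gbest > 0))
```

The per-feature penalty is what pushes the swarm toward small feature subsets, mirroring the dimensionality-reduction behavior reported in the abstract.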

  17. Dimensionality Reduction Algorithms on High Dimensional Datasets

    OpenAIRE

    Iwan Syarif

    2014-01-01

    Classification problems, especially for high-dimensional datasets, have attracted many researchers seeking efficient approaches to address them. However, the classification problem becomes very complicated, especially when the number of possible combinations of variables is so high. In this research, we evaluate the performance of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) as feature selection algorithms when applied to high-dimensional datasets. Our experime...

  18. Dimensional-reduction anomaly

    Science.gov (United States)

    Frolov, V.; Sutton, P.; Zelnikov, A.

    2000-01-01

    In a wide class of D-dimensional spacetimes which are direct or semi-direct sums of a (D-n)-dimensional space and an n-dimensional homogeneous ``internal'' space, a field can be decomposed into modes. As a result of this mode decomposition, the main objects which characterize the free quantum field, such as Green functions and heat kernels, can effectively be reduced to objects in a (D-n)-dimensional spacetime with an external dilaton field. We study the problem of the dimensional reduction of the effective action for such spacetimes. While before renormalization the original D-dimensional effective action can be presented as a ``sum over modes'' of (D-n)-dimensional effective actions, this property is violated after renormalization. We calculate the corresponding anomalous terms explicitly, illustrating the effect with some simple examples.

  19. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    Science.gov (United States)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  20. A new method for the regularization of a class of divergent Feynman integrals in covariant and axial gauges

    International Nuclear Information System (INIS)

    Lee, H.C.; Milgram, M.S.

    1984-07-01

    A hybrid of dimensional and analytic regularization is used to regulate and uncover a Meijer's G-function representation for a class of massless, divergent Feynman integrals in an axial gauge. Integrals in the covariant gauge belong to a subclass and those in the light-cone gauge are reached by analytic continuation. The method decouples the physical ultraviolet and infrared singularities from the spurious axial gauge singularity but regulates all three simultaneously. For the axial gauge singularity, the new analytic method is more powerful and elegant than the old principal value prescription, but the two methods yield identical infinite as well as regular parts. It is shown that dimensional and analytic regularization can be made equivalent, implying that the former method is free from spurious γ5-anomalies and the latter preserves gauge invariance. The hybrid method permits the evaluation of integrals containing arbitrary integer powers of logarithms in the integrand by differentiation with respect to exponents. Such 'exponent derivatives' generate the same set of 'polylogs' as that generated in multi-loop integrals in perturbation theories and may be useful for solving equations in nonperturbation theories. The close relation between the method of exponent derivatives and the prescription of 't Hooft and Veltman for treating overlapping divergences is pointed out. It is demonstrated that both methods generate functions that are free from unrecognizable logarithmic infinite parts. Nonperturbation theories expressed in terms of exponent derivatives are thus renormalizable. Some intriguing connections between nonperturbation theories and nonintegral exponents are pointed out
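The "exponent derivative" mechanism mentioned above is elementary to check numerically: differentiating an integrand with respect to its exponent inserts a logarithm, since d/ds ∫₀¹ x^s dx = d/ds 1/(s+1) = -1/(s+1)² = ∫₀¹ x^s ln x dx. A quick quadrature check (the sample value s = 1.5 is an arbitrary choice):

```python
import numpy as np

# Verify numerically that  ∫_0^1 x^s ln(x) dx  =  -1/(s+1)^2,
# i.e. that differentiating 1/(s+1) with respect to the exponent s
# produces the logarithm in the integrand.
s = 1.5
x = np.linspace(1e-8, 1.0, 2_000_001)
f = x**s * np.log(x)
numeric = np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0   # trapezoidal rule
exact = -1.0 / (s + 1.0) ** 2
print(numeric, exact)   # the two agree to several decimal places
```

Repeated exponent derivatives insert higher integer powers of ln x in the same way, which is exactly how the hybrid method generates logarithms and polylogs.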

  1. Efficacy of a Respiratory Training System on the Regularity of Breathing

    International Nuclear Information System (INIS)

    Shin, Eun Hyuk; Park, Hee Chul; Han, Young Yih; Ju, Sang Gyu; Shin, Jung Suk; Ahn, Yong Chan

    2008-01-01

    In order to enhance the efficiency of respiratory-gated 4-dimensional radiation therapy through a more regular and stable respiratory period and amplitude, a respiration training system was designed and its efficacy was evaluated. Materials and Methods: The experiment was designed to measure the difference in respiration regularity following the use of a training system. A total of 11 subjects (9 volunteers and 2 patients) were included in the experiments. Three different breathing signals, including free breathing (free-breathing), guided breathing that followed training software (guided-breathing), and free breathing after the guided-breathing (post guided-breathing), were consecutively recorded in each subject. The peak-to-peak (PTP) period of the breathing signal and its standard deviation (SD), the peak amplitude and its SD, and the area of one cycle of the breathing waveform and its root mean square (RMS) were measured and computed. Results: The temporal regularity was significantly improved in guided-breathing, since the SD of the breathing period was reduced (free-breathing 0.568 vs. guided-breathing 0.344, p=0.0013). The SD of the breathing period in post guided-breathing was also reduced, but the difference was not statistically significant (free-breathing 0.568 vs. post guided-breathing 0.512, p=ns). The SD of the measured amplitude was likewise reduced in guided-breathing (free-breathing 1.317 vs. guided-breathing 1.068, p=0.187), although not significantly. This indicated that the tidal volume of each breath was kept more even during guided-breathing than during free-breathing. There was no change in breathing pattern between free-breathing and guided-breathing. The average area of the breathing waveform and its RMS in post guided-breathing, however, were reduced by 7% and 5.9%, respectively. Conclusion: The guided-breathing was more stable and regular than the other forms of breathing data. Therefore, the developed respiratory training system was effective in improving the temporal
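The study's central metric, the SD of peak-to-peak breathing periods, is straightforward to compute. The sketch below applies it to two synthetic traces — a drifting "free" breath and a metronome-steady "guided" breath — using a naive local-maximum peak detector (an assumption of this sketch; a real training system would use a more robust detector).

```python
import numpy as np

def ptp_period_sd(signal, t):
    """SD of peak-to-peak (PTP) breathing periods; peaks are local maxima
    above the trace mean (a simple stand-in for real peak detection)."""
    m = signal.mean()
    idx = [i for i in range(1, len(signal) - 1)
           if signal[i] > m and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
    return np.diff(t[idx]).std()

t = np.linspace(0.0, 60.0, 6000)
# free breathing: the period drifts over time; guided breathing: steady 4 s period
free = np.sin(2 * np.pi * (t / 4.0 + 0.15 * np.sin(t / 7.0)))
guided = np.sin(2 * np.pi * t / 4.0)
print(ptp_period_sd(free, t), ptp_period_sd(guided, t))
```

On these synthetic traces the guided signal has a much smaller PTP-period SD, the same direction of effect the study reports for guided versus free breathing.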

  2. Rotating Hayward’s regular black hole as particle accelerator

    International Nuclear Information System (INIS)

    Amir, Muhammed; Ghosh, Sushant G.

    2015-01-01

    Recently, Bañados, Silk and West (BSW) demonstrated that the extremal Kerr black hole can act as a particle accelerator with arbitrarily high center-of-mass energy (E_CM) when the collision takes place near the horizon. The rotating Hayward's regular black hole, apart from mass (M) and angular momentum (a), has a new parameter g (g>0 is a constant) that provides a deviation from the Kerr black hole. We demonstrate that for each g, with M=1, there exist critical values a_E and r_H^E which correspond to a regular extremal black hole with degenerate horizons; a_E decreases whereas r_H^E increases with increase in g. For a < a_E, one instead has a regular non-extremal black hole with outer and inner horizons. We apply the BSW process to the rotating Hayward's regular black hole for different g and demonstrate numerically that E_CM diverges in the vicinity of the horizon for the extremal cases, thereby suggesting that a rotating regular black hole can also act as a particle accelerator and thus in turn provide a suitable framework for Planck-scale physics. For the non-extremal case, there always exists a finite upper bound for E_CM, which increases with the deviation parameter g.

  3. Image Super-Resolution via Adaptive Regularization and Sparse Representation.

    Science.gov (United States)

    Cao, Feilong; Cai, Miaomiao; Tan, Yuanpeng; Zhao, Jianwei

    2016-07-01

    Previous studies have shown that image patches can be well represented as a sparse linear combination of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even when the image is of the usual size, so we propose a new approach for SISR recovery based on regularized nonconvex optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice for lp regularization with all p in (0, 1), where we propose a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for estimating the best value of the regularization parameter λ adaptively, and we discuss an alternate iteration method for selecting p and λ. We perform experiments that demonstrate that the proposed regularized nonconvex optimization method can outperform the convex optimization method and generate higher quality images.
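The nonconvex lp penalty (0 < p < 1) is commonly handled by iteratively reweighted ridge regression: each |x_i|^p term is majorized by a quadratic whose weight depends on the current iterate. The sketch below shows that idea on a generic sparse linear system; it is a minimal stand-in under stated assumptions (fixed p and λ, no patch-adaptive selection), not the paper's SISR algorithm.

```python
import numpy as np

def irls_lp(A, b, lam=0.05, p=0.5, iters=50, eps=1e-8):
    """lp-regularized least squares (0<p<1) via iteratively reweighted
    ridge steps: |x|^p is majorized by (p/2)*w*x^2 with
    w = (x^2 + eps)^(p/2 - 1), so each step solves a weighted ridge system."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = (x**2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(A.T @ A + lam * p / 2 * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 60))
x_true = np.zeros(60)
x_true[[5, 17, 33]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = irls_lp(A, b)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # large entries sit on the true support {5, 17, 33}
```

Entries near zero get very large weights and are clamped toward zero, which is why the lp penalty yields sparser solutions than l1, as the abstract notes.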

  4. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular-shaped vessels from a non-contaminated area into a contaminated cell. A passage hole that allows the regular-shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular-shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area below the passage hole, then pushed through and transferred successively into the contaminated cell. As a result, since the passage hole remains substantially closed by the vessels during transfer, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, the workability of the transfer is improved remarkably. In addition, since a sealing member that seals the gap between a vessel passing through the passage hole and the partitioning wall at the bottom is disposed at the passage hole, contaminated materials in the contaminated cell are prevented from escaping through the gap to the non-contaminated area. (N.H.)

  5. Regularization of regional gravity fields from GOCE data

    Science.gov (United States)

    Naeimi, M.; Flury, J.; Brieden, P.

    2014-12-01

    Regional gravity field recovery in spherical radial base functions is a strongly ill-posed problem. The ill-posedness is mainly due to the restriction of the observations, as well as of the base functions, to a specific area. We investigate this ill-posedness as well as the related regularization process. We compare four different methods for the choice of the regularization parameter and discuss their characteristics. Moreover, two different kinds of shape coefficients for the spherical radial base functions are used to assess the impact of the shape coefficients on the regularization process: the Shannon coefficients, with no smoothing properties, and the spline kernel, with smoothing features. As the data set, we use two months of real GOCE ultra-sensitive gravity gradient components, namely Txx, Tyy, Tzz and Txz. The regional solutions are computed over the Amazon area. Our results indicate that the method used for the choice of the regularization parameter is directly influenced by the shape of the spherical radial base functions. In addition, we conclude that the Shannon kernel, along with a proper regularization approach, delivers satisfactory results with physically meaningful coefficients.
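One of the classical parameter-choice rules compared in studies like this is the discrepancy principle: take the smallest regularization parameter whose residual reaches the noise level. The sketch below applies it to Tikhonov regularization of a generic ill-posed system with geometrically decaying singular values (a stand-in problem, not the GOCE processing chain).

```python
import numpy as np

def tikhonov_discrepancy(A, b, noise_norm, lams):
    """Tikhonov regularization with the discrepancy principle: scan an
    increasing grid of parameters and keep the first one whose residual
    reaches the noise level."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    for lam in lams:
        f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)
        if np.linalg.norm(A @ x - b) >= noise_norm:
            break
    return lam, x

# ill-posed toy problem: geometrically decaying singular values, noisy data
rng = np.random.default_rng(7)
n = 50
U0, _ = np.linalg.qr(rng.normal(size=(n, n)))
V0, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U0 @ np.diag(0.9 ** np.arange(n)) @ V0.T
x_true = V0[:, 0] + 0.5 * V0[:, 1]
b = A @ x_true + 1e-3 * rng.normal(size=n)
lam, x_hat = tikhonov_discrepancy(A, b, 1e-3 * np.sqrt(n), np.logspace(-6, 0, 121))
print(lam, np.linalg.norm(x_hat - x_true))
```

The discrepancy principle needs an estimate of the noise level; rules such as the L-curve or generalized cross validation avoid that requirement, which is one axis along which such methods are compared.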

  6. Regularity theory for quasilinear elliptic systems and Monge—Ampère equations in two dimensions

    CERN Document Server

    Schulz, Friedmar

    1990-01-01

    These lecture notes have been written as an introduction to the characteristic theory for two-dimensional Monge-Ampère equations, a theory largely developed by H. Lewy and E. Heinz which has never been presented in book form. An exposition of the Heinz-Lewy theory requires auxiliary material which can be found in various monographs, but which is presented here, in part because the focus is different, and also because these notes have an introductory character. Self-contained introductions to the regularity theory of elliptic systems, the theory of pseudoanalytic functions and the theory of conformal mappings are included. These notes grew out of a seminar given at the University of Kentucky in the fall of 1988 and are intended for graduate students and researchers interested in this area.

  7. Symplectic finite element scheme: application to a driven problem with a regular singularity

    Energy Technology Data Exchange (ETDEWEB)

    Pletzer, A. [Ecole Polytechnique Federale, Lausanne (Switzerland). Centre de Recherche en Physique des Plasma (CRPP)

    1996-02-01

    A new finite element (FE) scheme, based on the decomposition of a second order differential equation into a set of first order symplectic (Hamiltonian) equations, is presented and tested on a one-dimensional, driven Sturm-Liouville problem. Error analysis shows improved cubic convergence in the energy norm for piecewise linear `tent` elements, as compared to quadratic convergence for the standard and hybrid FE methods. The convergence deteriorates in the presence of a regular singular point, but can be recovered by appropriate mesh node packing. Optimal mesh packing exponents are derived to ensure cubic (respectively quadratic) convergence with minimal numerical error. A further suppression of the numerical error, by a factor proportional to the square of the leading exponent of the singular solution, is achieved for a model problem based on determining the nonideal magnetohydrodynamic stability of a fusion plasma. (author) 7 figs., 14 refs.

  8. Regular network model for the sea ice-albedo feedback in the Arctic.

    Science.gov (United States)

    Müller-Stoffels, Marc; Wackerbauer, Renate

    2011-03-01

    The Arctic Ocean and sea ice form a feedback system that plays an important role in the global climate. The complexity of highly parameterized global circulation (climate) models makes it very difficult to assess feedback processes in climate without the concurrent use of simple models where the physics is understood. We introduce a two-dimensional energy-based regular network model to investigate feedback processes in an Arctic ice-ocean layer. The model includes the nonlinear aspect of the ice-water phase transition, a nonlinear diffusive energy transport within a heterogeneous ice-ocean lattice, and spatiotemporal atmospheric and oceanic forcing at the surfaces. First results for a horizontally homogeneous ice-ocean layer show bistability and related hysteresis between perennial ice and perennial open water for varying atmospheric heat influx. Seasonal ice cover exists as a transient phenomenon. We also find that ocean heat fluxes are more efficient than atmospheric heat fluxes to melt Arctic sea ice.
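The bistability reported above can be illustrated with a zero-dimensional caricature of the ice-albedo feedback: a single energy-balance equation with a smooth albedo step, relaxed from an ice-free and an ice-covered initial state under the same forcing. All parameter values below are illustrative textbook-style numbers, not the network model's parameters.

```python
import numpy as np

def equilibrium_temperature(T0, Q, steps=20000, dt=0.01):
    """Relax a zero-dimensional energy-balance caricature of the ice-albedo
    feedback (a drastic simplification of the 2-D network model):
    absorbed = Q*(1 - albedo(T)), emitted = A + B*T, with a smooth
    ice/open-water albedo transition."""
    A, B = 202.0, 1.9                              # linearized outgoing long-wave flux
    T = T0
    for _ in range(steps):
        albedo = 0.5 - 0.2 * np.tanh(T / 4.0)      # high albedo frozen, low albedo open water
        T += dt * (Q * (1.0 - albedo) - (A + B * T))
    return T

Q = 342.0                                          # mean insolation, W/m^2
warm = equilibrium_temperature(20.0, Q)            # ice-free start stays warm
cold = equilibrium_temperature(-60.0, Q)           # ice-covered start stays frozen
print(warm, cold)                                  # two coexisting equilibria: bistability
```

The two runs settle onto different stable states under identical forcing, the single-cell analogue of the perennial-ice versus perennial-open-water hysteresis found in the lattice model.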

  9. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    KAUST Repository

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is a priori assumed. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type was proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions were presented to illustrate the features of the proposed algorithm and were compared with one conventional approach based on smoothness regularization. © 2011 John Wiley & Sons, Ltd.
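The soft-shrinkage iteration at the heart of such algorithms is the classic ISTA scheme: a gradient step on the data-fidelity term followed by componentwise soft thresholding. The sketch below runs it on a random underdetermined linear system (a generic stand-in, not the EIT forward operator):

```python
import numpy as np

def ista(A, b, lam=0.01, iters=2000):
    """Iterative soft-shrinkage (ISTA) for  0.5*||Ax-b||^2 + lam*||x||_1,
    the sparsity-promoting Tikhonov functional: gradient step on the data
    term, then componentwise soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - b) / L              # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft shrinkage
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[7, 42, 77]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = ista(A, b)
print(np.flatnonzero(np.abs(x_hat) > 0.5))         # the recovered sparse support
```

With far fewer measurements than unknowns, the ℓ1 penalty still recovers the few nonzero entries, which is exactly the advantage the sparsity assumption buys in the tomography setting.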

  10. Regularity results for the minimum time function with Hörmander vector fields

    Science.gov (United States)

    Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa

    2018-03-01

    In a bounded domain of Rn with boundary given by a smooth (n - 1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X1 , … ,XN } subject to Hörmander's bracket generating condition. We investigate the regularity of the viscosity solution T of such problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses point-wise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X1 , … ,XN } is a symplectic manifold. We apply our results to several examples.

  11. Breast ultrasound tomography with total-variation regularization

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [Karmanos Cancer Institute]; Duric, Neb [Karmanos Cancer Institute]

    2009-01-01

    Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
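The edge-preserving behavior of TV regularization is easy to see in one dimension. The sketch below minimizes a smoothed TV functional by gradient descent on a noisy piecewise-constant profile; it is a toy analogue of the TV-minimization reconstruction described above, not the authors' tomography solver.

```python
import numpy as np

def tv_denoise(y, lam=0.5, iters=4000, step=0.005, eps=1e-4):
    """1-D total-variation denoising: gradient descent on
    0.5*||x-y||^2 + lam*sum_i sqrt((x_{i+1}-x_i)^2 + eps),
    a smoothed version of the TV functional."""
    x = y.astype(float).copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)                # smoothed gradient of |d|
        grad_tv = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
        x -= step * ((x - y) + lam * grad_tv)
    return x

rng = np.random.default_rng(6)
clean = np.repeat([0.0, 2.0, 1.0, 3.0], 25)        # piecewise-constant sound-speed-like profile
noisy = clean + 0.3 * rng.normal(size=clean.size)
den = tv_denoise(noisy)
print(np.linalg.norm(noisy - clean), np.linalg.norm(den - clean))
```

The TV penalty suppresses small oscillations within the constant segments while leaving the large jumps largely intact, which is why TV reconstructions show sharp abnormality edges where Tikhonov smoothing blurs them.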

  12. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
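The simplest instance of wavelet-domain regularization is one-level Haar shrinkage: soft-threshold the detail coefficients, keep the approximation coefficients. The sketch below is uniform rather than spatially adaptive (the adaptivity is the paper's contribution, not reproduced here), but it shows the edge-preserving mechanism.

```python
import numpy as np

def haar_denoise(y, thresh):
    """One-level orthonormal Haar shrinkage: soft-threshold the detail
    coefficients, keep the smooth (approximation) coefficients."""
    s = (y[0::2] + y[1::2]) / np.sqrt(2)           # approximation coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)           # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft shrinkage
    out = np.empty_like(y)
    out[0::2] = (s + d) / np.sqrt(2)               # inverse Haar transform
    out[1::2] = (s - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 4.0], 64)                  # a single sharp edge
noisy = clean + 0.5 * rng.normal(size=128)
den = haar_denoise(noisy, thresh=1.0)
print(np.linalg.norm(den - clean) < np.linalg.norm(noisy - clean))   # True
```

Because the clean edge contributes little to the detail coefficients away from the jump, thresholding removes mostly noise, illustrating why wavelets suit functions with discontinuities.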

  13. Adiabatic regularization for gauge fields and the conformal anomaly

    Science.gov (United States)

    Chu, Chong-Sun; Koyama, Yoji

    2017-03-01

    Adiabatic regularization for quantum field theory in conformally flat spacetime is known for scalar and Dirac fermion fields. In this paper, we complete the construction by establishing the adiabatic regularization scheme for the gauge field. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined in a similar way using Wentzel-Kramers-Brillouin-type (WKB-type) solutions as the scalar fields. As an application of the adiabatic method, we compute the trace of the energy momentum tensor and reproduce the known result for the conformal anomaly obtained by the other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study various renormalized physical quantities of theories coupled to (non-Abelian) gauge fields in conformally flat spacetime, such as conformal supersymmetric Yang Mills, inflation, and cosmology.

  14. Typed and unambiguous pattern matching on strings using regular expressions

    DEFF Research Database (Denmark)

    Brabrand, Claus; Thomsen, Jakob G.

    2010-01-01

    We show how to achieve typed and unambiguous declarative pattern matching on strings using regular expressions extended with a simple recording operator. We give a characterization of ambiguity of regular expressions that leads to a sound and complete static analysis. The analysis is capable of pinpointing all ambiguities in terms of the structure of the regular expression and reporting shortest ambiguous strings. We also show how pattern matching can be integrated into statically typed programming languages for deconstructing strings and reproducing typed and structured values. We validate our approach by giving a full implementation of the approach presented in this paper. The resulting tool, reg-exp-rec, adds typed and unambiguous pattern matching to Java in a stand-alone and non-intrusive manner. We evaluate the approach using several realistic examples.
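A recording operator can be approximated in everyday code with named capture groups, converting each captured substring to a typed value after an unambiguous full match. This Python sketch conveys the flavor only; the reg-exp-rec tool targets Java and additionally verifies unambiguity statically, which plain regex libraries do not.

```python
import re

# Named groups record substrings; explicit conversions recover the types,
# so a successful match yields a structured, typed value.
DATE = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")

def parse_date(s):
    """Deconstruct a string into a typed (int, int, int) triple,
    failing loudly instead of matching partially."""
    m = DATE.fullmatch(s)
    if m is None:
        raise ValueError(f"not a date: {s!r}")
    return int(m["year"]), int(m["month"]), int(m["day"])

print(parse_date("2010-06-15"))   # (2010, 6, 15)
```

Using `fullmatch` rather than `search` rules out partial matches, the dynamic counterpart of the paper's requirement that a pattern deconstruct the whole input.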

  15. An implementation of N-body chain regularization

    Science.gov (United States)

    Mikkola, Seppo; Aarseth, Sverre J.

    1993-11-01

    The chain regularization method (Mikkola and Aarseth 1990) for high accuracy computation of particle motions in small N-body systems has been reformulated. We discuss the transformation formulas, equations of motion and selection of a chain of interparticle vectors such that the critical interactions requiring regularization are included in the chain. The Kustaanheimo-Stiefel (KS) coordinate transformation and a time transformation are used to regularize the dominant terms of the equations of motion. The method has been implemented for an arbitrary number of bodies, with the option of external perturbations. This formulation has been successfully tested in a general N-body program for strongly interacting subsystems. An easy to use computer program, written in FORTRAN, is available on request.

  16. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
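The closed-form flavor of manifold-regularized factorization can be conveyed in a simplified two-step sketch: first smooth the data along a neighborhood graph via a Laplacian penalty (which has a closed-form solution), then truncate to low rank by SVD. This is a stand-in under stated assumptions, not the paper's jointly optimized model.

```python
import numpy as np

def manifold_smoothed_low_rank(X, W, k, lam=1.0):
    """Two closed-form steps: (1) graph smoothing,
    min_A ||X - A||_F^2 + lam*tr(A L A^T)  =>  A = X (I + lam*L)^{-1},
    where L is the graph Laplacian of affinity W; (2) best rank-k
    approximation of A by truncated SVD."""
    L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian
    A = X @ np.linalg.inv(np.eye(W.shape[0]) + lam * L)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 40)
v = rng.normal(size=10)
X_clean = np.outer(v, t)                           # rank-1: points along a line
X = X_clean + 0.1 * rng.normal(size=(10, 40))      # noisy observations
W = np.zeros((40, 40))                             # chain graph: consecutive points are neighbors
for i in range(39):
    W[i, i + 1] = W[i + 1, i] = 1.0
Xk = manifold_smoothed_low_rank(X, W, k=1, lam=0.5)
print(np.linalg.norm(X - X_clean), np.linalg.norm(Xk - X_clean))
```

The Laplacian term encourages neighboring data points to have similar reconstructions, so the graph structure (here a simple chain) supplies the manifold prior.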

  17. Selecting protein families for environmental features based on manifold regularization.

    Science.gov (United States)

    Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong

    2014-06-01

    Recently, statistical and machine learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important to environmental features can potentially be used as biosensors. A major challenge is how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE), and we call it manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has the potential to be used in solving other linear systems. We demonstrate the efficiency and performance of the approach on both simulated and real data.

  18. Processing SPARQL queries with regular expressions in RDF databases.

    Science.gov (United States)

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing many online bioinformatics resources, such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - the W3C-recommended query language for RDF databases - has become an important language for querying bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from RDF data, as well as users' incomplete knowledge of the exact value of each fact in the RDF databases, it is desirable to support SPARQL queries with regular expression patterns over RDF data. To the best of our knowledge, no existing work efficiently supports regular expression processing in SPARQL over RDF databases. Most existing techniques for processing regular expressions are designed for querying a text corpus, or only support matching over paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL queries. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model that allows the proposed framework to be adapted to existing query optimizers. 3) We build a prototype of the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
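
    The kind of query the paper targets can be sketched in plain Python: filter RDF triples whose object matches a regular expression, which in SPARQL would be expressed with a FILTER regex clause. The triples and identifiers below are made up for illustration:

```python
import re

# Toy RDF triples (subject, predicate, object). In SPARQL the same
# selection would be written with FILTER regex(?label, "^Glu.*ase$").
triples = [
    ("uniprot:P12345", "rdfs:label", "Glutaminase"),
    ("uniprot:P67890", "rdfs:label", "Glucokinase"),
    ("uniprot:P11111", "rdfs:label", "Hemoglobin"),
]

pattern = re.compile(r"^Glu.*ase$")
hits = [s for (s, p, o) in triples if p == "rdfs:label" and pattern.search(o)]
print(hits)
```

The paper's contribution is doing this at scale inside the RDF engine rather than by post-filtering, as above.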

  19. Multi-index Stochastic Collocation Convergence Rates for Random PDEs with Parametric Regularity

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-08-26

    We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data, and naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of the quasi-optimal sparse-grids and Multi-index Monte Carlo methods, i.e., we use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite dimensional setting compared with the Multi-index Monte Carlo method and compare the convergence rate against the rates predicted in our theoretical analysis. © 2016 SFoCM

  20. Application of Tikhonov regularization method to wind retrieval from scatterometer data II: cyclone wind retrieval with consideration of rain

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Fei Jian-Fang; Du Hua-Dong; Zhang Liang

    2011-01-01

    Following the conclusions of the simulation experiments in paper I, the Tikhonov regularization method is applied to cyclone wind retrieval with a rain-effect-considering geophysical model function (called GMF+Rain). The GMF+Rain model, which is based on the NASA scatterometer-2 (NSCAT2) GMF, is presented to compensate for the effects of rain on cyclone wind retrieval. With the multiple solution scheme (MSS), the noise of wind retrieval is effectively suppressed, but the influence of the background increases. This causes a large wind direction error in ambiguity removal when the background error is large. However, this can be mitigated by the new Tikhonov-regularization ambiguity removal method, as proved in the simulation experiments. A case study of an extratropical cyclone of hurricane strength observed with SeaWinds at 25-km resolution shows that the retrieved wind speed in rain-affected areas agrees better with that derived from the best track analysis for the GMF+Rain model, but the wind direction obtained with the two-dimensional variational (2DVAR) ambiguity removal is incorrect. The new Tikhonov regularization method effectively improves wind direction ambiguity removal through the choice of appropriate regularization parameters, and the retrieved wind speed is almost the same as that obtained from the 2DVAR.
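
    At its core, Tikhonov regularization replaces an ill-conditioned least-squares solve with a damped one. A generic sketch (not the retrieval cost function, which also carries background and observation terms):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam ||x||^2 via the normal equations
    (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Nearly singular system with slightly noisy data: the regularized
# solution stays bounded while the plain solve blows up along the
# ill-determined direction.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.001])
x_reg = tikhonov_solve(A, b, 1e-2)
x_ls = np.linalg.solve(A, b)
print(np.linalg.norm(x_reg), np.linalg.norm(x_ls))
```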

  1. Stability of negative ionization fronts: Regularization by electric screening?

    International Nuclear Information System (INIS)

    Arrayas, Manuel; Ebert, Ute

    2004-01-01

    We have recently proposed that a reduced interfacial model for streamer propagation is able to explain spontaneous branching. Such models require regularization. In the present paper we investigate how transversal Fourier modes of a planar ionization front are regularized by the electric screening length. For a fixed value of the electric field ahead of the front we calculate the dispersion relation numerically. These results guide the derivation of analytical asymptotes for arbitrary fields: for small wave vector k, the growth rate s(k) grows linearly with k; for large k, it saturates at a positive plateau value. We give a physical interpretation of these results

  2. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed; these new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard) and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
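
    The role of the regularization parameter can be illustrated with an oracle grid search on synthetic data: pick the lambda whose RLS estimate is closest to a known true signal. This is only a baseline for intuition; practical selectors such as COPRA must work without access to the true signal, and the setup below is invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = rng.standard_normal((n, n))
A[:, 0] *= 1e-3                      # make the model matrix ill-conditioned
x_true = rng.standard_normal(n)
b = A @ x_true + 0.1 * rng.standard_normal(n)

def rls(lam):
    """Regularized least-squares estimate for a given lambda."""
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Oracle grid search over lambda: best-possible reference point that
# MSE-oriented selection rules try to approximate blindly.
lams = np.logspace(-6, 2, 50)
best = min(lams, key=lambda lam: np.linalg.norm(rls(lam) - x_true))
```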

  3. A Parametric Abstract Domain for Lattice-Valued Regular Expressions

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    We present a lattice-valued generalization of regular expressions as an abstract domain for static analysis. The parametric abstract domain rests on a generalization of Brzozowski derivatives and works for both finite and infinite lattices. We develop both a co-inductive, simulation algorithm...... for deciding ordering between two domain elements and a widening operator for the domain. Finally we illustrate the domain with a static analysis that analyses a communicating process against a lattice-valued regular expression expressing the environment’s network communication....
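
    The Brzozowski derivatives that the abstract domain generalizes can be sketched for ordinary (Boolean) regular expressions; the lattice-valued domain replaces the two-valued character test with lattice elements. A minimal tuple-AST matcher:

```python
def nullable(r):
    """True iff the language of r contains the empty string."""
    tag = r[0]
    if tag in ("eps", "star"):
        return True
    if tag in ("empty", "chr"):
        return False
    if tag == "alt":
        return nullable(r[1]) or nullable(r[2])
    return nullable(r[1]) and nullable(r[2])      # "cat"

def deriv(r, c):
    """Brzozowski derivative: the words w such that c.w is in L(r)."""
    tag = r[0]
    if tag in ("empty", "eps"):
        return ("empty",)
    if tag == "chr":
        return ("eps",) if r[1] == c else ("empty",)
    if tag == "alt":
        return ("alt", deriv(r[1], c), deriv(r[2], c))
    if tag == "cat":
        head = ("cat", deriv(r[1], c), r[2])
        return ("alt", head, deriv(r[2], c)) if nullable(r[1]) else head
    return ("cat", deriv(r[1], c), r)             # "star"

def matches(r, s):
    """r matches s iff deriving r by each character of s leaves a
    nullable expression."""
    for c in s:
        r = deriv(r, c)
    return nullable(r)

# (ab)* as a nested-tuple AST
ab_star = ("star", ("cat", ("chr", "a"), ("chr", "b")))
```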

  4. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
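
    The 2-4 step refined grid search can be sketched in one dimension: evaluate the objective on a coarse grid, then re-grid around the best point. The objective below is a made-up stand-in for the model SNR:

```python
import numpy as np

def refine_search(f, lo, hi, steps=3, pts=9):
    """Maximize f over [lo, hi] by repeated grid refinement: at each
    step, shrink the interval to one grid spacing around the best point."""
    for _ in range(steps):
        grid = np.linspace(lo, hi, pts)
        best = grid[np.argmax([f(x) for x in grid])]
        width = (hi - lo) / (pts - 1)
        lo, hi = best - width, best + width
    return best

# Toy "model SNR" peaking at x = 1.7 (hypothetical objective).
snr = lambda x: -(x - 1.7) ** 2
x_star = refine_search(snr, 0.0, 10.0)
```

In the paper the search runs over the kernel and regularization parameters jointly; the same refinement idea applies per axis.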

  5. Regularization of the multipolar form of quantum electrodynamics

    International Nuclear Information System (INIS)

    Shirokov, M.I.

    1991-01-01

    The multipolar form of quantum electrodynamics was proposed by Power, Zienau et al. It is widely used in nonrelativistic calculations but has a deficiency: its Hamiltonian contains a divergent operator term. It is shown that the divergence can be removed by a regularization of the unitary transformation which converts the Coulomb gauge into the multipolar form. The regularized multipolar form is proven to have the same ultraviolet radiative divergences as Coulomb-gauge electrodynamics. It is also demonstrated that the interaction with soft photons is represented by the usual electric dipole term eqE, and interatomic Coulomb interactions remain absent. 17 refs.; 2 figs

  6. Persistent low-grade inflammation and regular exercise

    DEFF Research Database (Denmark)

    Åström, Maj-brit; Feigh, Michael; Pedersen, Bente Klarlund

    2010-01-01

    Persistent low-grade systemic inflammation is a feature of chronic diseases such as cardiovascular disease (CVD), type 2 diabetes and dementia and evidence exists that inflammation is a causal factor in the development of insulin resistance and atherosclerosis. Regular exercise offers protection...... against all of these diseases and recent evidence suggests that the protective effect of exercise may to some extent be ascribed to an anti-inflammatory effect of regular exercise. Visceral adiposity contributes to systemic inflammation and is independently associated with the occurrence of CVD, type 2...

  7. Joint optimization of fluence field modulation and regularization in task-driven computed tomography

    Science.gov (United States)

    Gang, G. J.; Siewerdsen, J. H.; Stayman, J. W.

    2017-03-01

    Purpose: This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. Methods: We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce dimensionality of the optimization and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β) - the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. Results: The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. Conclusions: The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
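
    The maxi-min objective can be illustrated with a one-parameter toy: pick the design parameter that maximizes the minimum detectability across sample locations. The response curves below are invented, not the CT system model:

```python
import numpy as np

# Two locations whose detectability responds oppositely to a scalar
# "modulation" parameter t (synthetic response curves).
t_grid = np.linspace(0.0, 1.0, 101)
d_loc1 = 1.0 + 0.8 * t_grid          # location helped by more fluence
d_loc2 = 2.0 - 1.0 * t_grid          # location penalized by it

# Maxi-min design: maximize the worst-case detectability, which here
# lands near the crossing of the two response curves.
d_min = np.minimum(d_loc1, d_loc2)
t_star = t_grid[np.argmax(d_min)]
```

The paper optimizes many Gaussian-basis coefficients with CMA-ES instead of a 1-D grid, but the objective has this same worst-case structure.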

  8. Regularized iterative weighted filtered backprojection for helical cone-beam CT

    International Nuclear Information System (INIS)

    Sunnegaardh, Johan; Danielsson, Per-Erik

    2008-01-01

    Contemporary reconstruction methods employed for clinical helical cone-beam computed tomography (CT) are analytical (noniterative) but mathematically nonexact, i.e., the reconstructed image contains so-called cone-beam artifacts, especially for higher cone angles. Besides cone artifacts, these methods also suffer from windmill artifacts: alternating dark and bright regions creating spiral-like patterns occurring in the vicinity of high z-direction derivatives. In this article, the authors examine the possibility to suppress cone and windmill artifacts by means of iterative application of nonexact three-dimensional filtered backprojection, where the analytical part of the reconstruction brings about accelerated convergence. Specifically, they base their investigations on the weighted filtered backprojection method [Stierstorfer et al., Phys. Med. Biol. 49, 2209-2218 (2004)]. Enhancement of high frequencies and amplification of noise is a common but unwanted side effect in many acceleration attempts. They have employed linear regularization to avoid these effects and to improve the convergence properties of the iterative scheme. Artifacts and noise, as well as spatial resolution in terms of modulation transfer functions and slice sensitivity profiles have been measured. The results show that for cone angles up to ±2.78 deg., cone artifacts are suppressed and windmill artifacts are alleviated within three iterations. Furthermore, regularization parameters controlling spatial resolution can be tuned so that image quality in terms of spatial resolution and noise is preserved. Simulations with higher number of iterations and long objects (exceeding the measured region) verify that the size of the reconstructible region is not reduced, and that the regularization greatly improves the convergence properties of the iterative scheme. Taking these results into account, and the possibilities to extend the proposed method with more accurate modeling of the acquisition process

  9. Non-ergodic delocalized phase in Anderson model on Bethe lattice and regular graph

    Science.gov (United States)

    Kravtsov, V. E.; Altshuler, B. L.; Ioffe, L. B.

    2018-02-01

    We develop a novel analytical approach to the problem of single-particle localization in infinite-dimensional spaces such as the Bethe lattice and random regular graph models. The key ingredient of the approach is the notion of the inverted-order thermodynamic limit (IOTL), in which the coupling to the environment goes to zero before the system size goes to infinity. Using the IOTL and the Replica Symmetry Breaking (RSB) formalism we derive analytical expressions for the fractal dimension D1 that distinguishes between the extended ergodic (D1 = 1) and extended non-ergodic (multifractal, 0 < D1 < 1) phases on Bethe lattices and random regular graphs with branching number K. We also employ the RSB formalism to derive the analytical expression ln S_typ^(-1) = -〈ln S〉 ∼ (Wc - W)^(-1) for the typical imaginary part of the self-energy S_typ in the non-ergodic phase close to the Anderson transition in the conventional thermodynamic limit. We prove the existence of an extended non-ergodic phase in a broad range of disorder strength and energy, and establish the phase diagrams of the models as functions of disorder and energy. The results of the analytical theory are compared with large-scale population dynamics and with exact diagonalization of the Anderson model on random regular graphs. We discuss the consequences of these results for many-body localization.
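
    The fractal dimension D1 discussed above has a simple finite-size estimator: for a normalized state, D1 ≈ S1/ln N, where S1 is the Shannon entropy of the weights |psi_i|^2. A toy sketch (a numerical diagnostic, not the RSB calculation):

```python
import numpy as np

def d1(psi):
    """Finite-size information dimension D1 = S1 / ln N with
    S1 = -sum_i p_i ln p_i and p_i = |psi_i|^2."""
    p = np.abs(psi) ** 2
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log 0 := 0
    return -np.sum(p * np.log(p)) / np.log(psi.size)

# Ergodic (uniform) state gives D1 = 1; a single-site state gives D1 = 0.
uniform = np.ones(1024) / np.sqrt(1024)
localized = np.zeros(1024)
localized[0] = 1.0
print(d1(uniform), d1(localized))
```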

  10. Joint Sparse and Low-Rank Multitask Learning with Laplacian-Like Regularization for Hyperspectral Classification

    Directory of Open Access Journals (Sweden)

    Zhi He

    2018-02-01

    Full Text Available Multitask learning (MTL) has recently provided significant performance improvements in supervised classification of hyperspectral images (HSIs) by incorporating shared information across multiple tasks. However, the original MTL cannot effectively exploit both local and global structures of the HSI, and the class label information is not fully used. Moreover, although mathematical morphology (MM) has attracted considerable interest in feature extraction from HSIs, it remains a challenging issue to sufficiently utilize multiple morphological profiles obtained by various structuring elements (SEs). In this paper, we propose a joint sparse and low-rank MTL method with Laplacian-like regularization (termed sllMTL) for hyperspectral classification that utilizes three-dimensional morphological profile (3D-MP) features. The main steps of the proposed method are twofold. First, the 3D-MPs are extracted by the 3D-opening and 3D-closing operators, with different SEs yielding multiple 3D-MPs. Second, sllMTL is proposed for hyperspectral classification by taking the 3D-MPs as features of different tasks. In the sllMTL, joint sparse and low-rank structures are exploited to capture the task specificity and relatedness, respectively. Laplacian-like regularization is also added to make full use of the label information of training samples. Experiments on three datasets demonstrate that the overall accuracy (OA) of the proposed method is at least about 2% higher than that of other state-of-the-art methods with very limited training samples.

  11. Bilinear Regularized Locality Preserving Learning on Riemannian Graph for Motor Imagery BCI.

    Science.gov (United States)

    Xie, Xiaofeng; Yu, Zhu Liang; Gu, Zhenghui; Zhang, Jun; Cen, Ling; Li, Yuanqing

    2018-03-01

    In off-line training of motor imagery-based brain-computer interfaces (BCIs), the local information contained in test data can be used to enhance the generalization performance of the learned classifier and thereby improve motor imagery performance as well. Further considering that the covariance matrices of electroencephalogram (EEG) signals lie on a Riemannian manifold, in this paper we construct a Riemannian graph to incorporate the information of training and test data into processing. The adjacency and weights in the Riemannian graph are determined by the geodesic distances on the Riemannian manifold. Then, a new graph embedding algorithm, called bilinear regularized locality preserving (BRLP), is derived upon the Riemannian graph to address the problems of high dimensionality frequently arising in BCIs. With a proposed regularization term encoding prior information of EEG channels, the BRLP can obtain more robust performance. Finally, an efficient classification algorithm based on the extreme learning machine is proposed to operate on the tangent space of the learned embedding. Experimental evaluations on the BCI competition and in-house datasets reveal that the proposed algorithms obtain significantly higher performance than many competing algorithms under the same filtering process.

  12. SAR Target Recognition via Local Sparse Representation of Multi-Manifold Regularized Low-Rank Approximation

    Directory of Open Access Journals (Sweden)

    Meiting Yu

    2018-02-01

    Full Text Available The extraction of a valuable set of features and the design of a discriminative classifier are crucial for target recognition in SAR images. Although various features and classifiers have been proposed over the years, target recognition under extended operating conditions (EOCs) is still a challenging problem, e.g., targets with configuration variations, different capture orientations, and articulation. To address these problems, this paper presents a new strategy for target recognition. We first propose a low-dimensional representation model that incorporates a multi-manifold regularization term into the low-rank matrix factorization framework. Two rules, pairwise similarity and local linearity, are employed to construct the multiple manifold regularization. By alternately optimizing the matrix factorization and manifold selection, the feature representation model can not only acquire the optimal low-rank approximation of the original samples, but also capture the intrinsic manifold structure information. Then, to take full advantage of the local structure property of the features and further improve the discriminative ability, local sparse representation is proposed for classification. Finally, extensive experiments on the moving and stationary target acquisition and recognition (MSTAR) database demonstrate the effectiveness of the proposed strategy, including target recognition under EOCs, as well as the capability of handling small training sizes.

  13. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    Science.gov (United States)

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1/2 norm as a regularizer. A very recent study of ℓ1/2-norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, for which closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while capturing the intrinsic properties of the dictionary. Experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary in terms of dictionary recovery and image processing than state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.
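
    The reduction to one-dimensional subproblems can be illustrated on the scalar proximal problem min_x 0.5(x - y)^2 + lam*sqrt(|x|). The paper relies on the known closed-form half-thresholding solution; the brute-force grid version below (grid range chosen arbitrarily) only shows the thresholding behaviour:

```python
import numpy as np

def prox_l_half(y, lam):
    """Numerically minimize 0.5*(x - y)**2 + lam*sqrt(|x|) over a dense
    grid: the one-dimensional subproblem that the l_{1/2} regularizer
    reduces to (the closed-form half-thresholding operator solves it
    exactly; this sketch only illustrates it)."""
    grid = np.linspace(-2 * abs(y) - 1, 2 * abs(y) + 1, 200001)
    obj = 0.5 * (grid - y) ** 2 + lam * np.sqrt(np.abs(grid))
    return grid[np.argmin(obj)]

# Small inputs are driven (numerically) to zero; large ones are shrunk.
print(prox_l_half(0.2, 1.0), prox_l_half(5.0, 1.0))
```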

  14. A path-integral approach for bosonic effective theories for Fermion fields in four and three dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Botelho, Luiz C.L

    1998-02-01

    We study four-dimensional effective bosonic field theories for a massive fermion field in the infrared region, and massive fermions in the ultraviolet region, by using an appropriate fermion path-integral chiral variable change and Polyakov's Fermi-Bose transmutation in the 3D Abelian Thirring model. (author) 14 refs.

  15. Bayesian supervised dimensionality reduction.

    Science.gov (United States)

    Gönen, Mehmet

    2013-12-01

    Dimensionality reduction is commonly used as a preprocessing step before training a supervised learner. However, coupled training of dimensionality reduction and supervised learning steps may improve the prediction performance. In this paper, we introduce a simple and novel Bayesian supervised dimensionality reduction method that combines linear dimensionality reduction and linear supervised learning in a principled way. We present both Gibbs sampling and variational approximation approaches to learn the proposed probabilistic model for multiclass classification. We also extend our formulation toward model selection using automatic relevance determination in order to find the intrinsic dimensionality. Classification experiments on three benchmark data sets show that the new model significantly outperforms seven baseline linear dimensionality reduction algorithms on very low dimensions in terms of generalization performance on test data. The proposed model also obtains the best results on an image recognition task in terms of classification and retrieval performances.

  16. Dimensionality Reduction Ensembles

    OpenAIRE

    Farrelly, Colleen M.

    2017-01-01

    Ensemble learning has had many successes in supervised learning, but it has been rare in unsupervised learning and dimensionality reduction. This study explores dimensionality reduction ensembles, using principal component analysis and manifold learning techniques to capture linear, nonlinear, local, and global features in the original dataset. Dimensionality reduction ensembles are tested first on simulation data and then on two real medical datasets using random forest classifiers; results ...

  17. Dimensionality reduction in Bayesian estimation algorithms

    Directory of Open Access Journals (Sweden)

    G. W. Petty

    2013-09-01

    Full Text Available An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals – whether Bayesian or not – lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.

  18. Dimensionality reduction in Bayesian estimation algorithms

    Science.gov (United States)

    Petty, G. W.

    2013-09-01

    An idealized synthetic database loosely resembling 3-channel passive microwave observations of precipitation against a variable background is employed to examine the performance of a conventional Bayesian retrieval algorithm. For this dataset, algorithm performance is found to be poor owing to an irreconcilable conflict between the need to find matches in the dependent database versus the need to exclude inappropriate matches. It is argued that the likelihood of such conflicts increases sharply with the dimensionality of the observation space of real satellite sensors, which may utilize 9 to 13 channels to retrieve precipitation, for example. An objective method is described for distilling the relevant information content from N real channels into a much smaller number (M) of pseudochannels while also regularizing the background (geophysical plus instrument) noise component. The pseudochannels are linear combinations of the original N channels obtained via a two-stage principal component analysis of the dependent dataset. Bayesian retrievals based on a single pseudochannel applied to the independent dataset yield striking improvements in overall performance. The differences between the conventional Bayesian retrieval and reduced-dimensional Bayesian retrieval suggest that a major potential problem with conventional multichannel retrievals - whether Bayesian or not - lies in the common but often inappropriate assumption of diagonal error covariance. The dimensional reduction technique described herein avoids this problem by, in effect, recasting the retrieval problem in a coordinate system in which the desired covariance is lower-dimensional, diagonal, and unit magnitude.
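
    The two-stage procedure (noise whitening followed by PCA) can be sketched on synthetic data; the channel count, mixing vector, and noise model below are invented stand-ins for the microwave database:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in: 5 correlated "channels" driven by one signal
# plus correlated background noise.
n_samples, n_chan = 2000, 5
signal = rng.standard_normal(n_samples)
mix = rng.standard_normal(n_chan)
noise_root = 0.3 * rng.standard_normal((n_chan, n_chan))
obs = np.outer(signal, mix) + rng.standard_normal((n_samples, n_chan)) @ noise_root

# Stage 1: whiten the background-noise covariance, here estimated from
# signal-free samples (in practice, from the dependent database).
noise = rng.standard_normal((n_samples, n_chan)) @ noise_root
w_vals, w_vecs = np.linalg.eigh(np.cov(noise.T))
whiten = w_vecs @ np.diag(1.0 / np.sqrt(w_vals)) @ w_vecs.T

# Stage 2: PCA of the whitened observations; the leading eigenvector
# defines a single pseudochannel (M = 1).
zw = obs @ whiten
evals, evecs = np.linalg.eigh(np.cov(zw.T))
pseudochannel = zw @ evecs[:, -1]
```

In the whitened coordinates the noise covariance is the identity, which is exactly the "diagonal, unit magnitude" property the abstract exploits.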

  19. Exactly solvable quantum few-body systems associated with the symmetries of the three-dimensional and four-dimensional icosahedra

    Directory of Open Access Journals (Sweden)

    T. Scoquart, J. J. Seaward, S. G. Jackson, M. Olshanii

    2016-10-01

    Full Text Available The purpose of this article is to demonstrate that non-crystallographic reflection groups can be used to build new solvable quantum particle systems. We explicitly construct a one-parametric family of solvable four-body systems on a line, related to the symmetry of a regular icosahedron: in two distinct limiting cases the system is constrained to a half-line. We repeat the program for a 600-cell, a four-dimensional generalization of the regular three-dimensional icosahedron.

  20. Implicit Learning of L2 Word Stress Regularities

    Science.gov (United States)

    Chan, Ricky K. W.; Leung, Janny H. C.

    2014-01-01

    This article reports an experiment on the implicit learning of second language stress regularities, and presents a methodological innovation on awareness measurement. After practising two-syllable Spanish words, native Cantonese speakers with English as a second language (L2) completed a judgement task. Critical items differed only in placement of…