WorldWideScience

Sample records for two-component zeroth-order regular

  1. Zeroth order regular approximation approach to electric dipole moment interactions of the electron

    Science.gov (United States)

    Gaul, Konstantin; Berger, Robert

    2017-07-01

    A quasi-relativistic two-component approach for efficient calculation of P,T-odd interactions caused by a permanent electric dipole moment of the electron (eEDM) is presented. The approach uses a (two-component) complex generalized Hartree-Fock and a complex generalized Kohn-Sham scheme within the zeroth order regular approximation. In applications to selected heavy-element polar diatomic molecular radicals, which are promising candidates for an eEDM experiment, the method is compared to relativistic four-component electron-correlation calculations and confirms values for the effective electric field acting on the unpaired electron for RaF, BaF, YbF, and HgF. The calculations show that purely relativistic effects, involving only the lower component of the Dirac bi-spinor, are well described by treating only the upper component explicitly.

  2. Compact Dual-Band Zeroth-Order Resonance Antenna

    International Nuclear Information System (INIS)

    Xu He-Xiu; Wang Guang-Ming; Gong Jian-Qiang

    2012-01-01

    A novel microstrip zeroth-order resonator (ZOR) antenna and its equivalent circuit model are exploited with two zeroth-order resonances. It is constructed based on a resonant-type composite right/left handed transmission line (CRLH TL) using a Wunderlich-shaped extended complementary single split ring resonator pair (W-ECSSRRP) and a series capacitive gap. The gap can either be utilized for a double-negative (DNG) ZOR antenna or be removed to engineer a simplified epsilon-negative (ENG) ZOR antenna. For verification, a DNG ZOR antenna sample is fabricated and measured. Numerical and experimental results agree well with each other, indicating that omnidirectional radiation occurs in two frequency bands, which are accounted for by the two shunt branches in the circuit model. The antenna is 49% more compact than its previous counterpart. The superiority of the W-ECSSRRP over the CSSRRP lies in lowering the fundamental resonance of the antenna by 38.2% and in introducing a higher zeroth-order resonance.
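
    For background, the zeroth-order (beta = 0) resonances of such a CRLH line occur where the series impedance or the shunt admittance of the unit cell vanishes. The minimal sketch below evaluates these frequencies and the standard lossless CRLH dispersion relation for purely hypothetical element values (L_R, C_L, C_R, L_L and the cell length are assumptions, not taken from the paper).

      import numpy as np

      # Hypothetical CRLH unit-cell element values (illustrative only, not from the paper).
      L_R = 2.5e-9    # series right-handed inductance [H]
      C_L = 0.5e-12   # series left-handed capacitance [F]
      C_R = 1.0e-12   # shunt right-handed capacitance [F]
      L_L = 2.0e-9    # shunt left-handed inductance [H]
      p = 5e-3        # unit-cell length [m]

      # beta = 0 can occur where the series branch or the shunt branch of the cell resonates;
      # for an open-ended ZOR the shunt resonance sets the zeroth-order resonance frequency.
      f_se = 1.0 / (2 * np.pi * np.sqrt(L_R * C_L))
      f_sh = 1.0 / (2 * np.pi * np.sqrt(L_L * C_R))
      print(f"series resonance f_se = {f_se / 1e9:.2f} GHz, shunt resonance f_sh = {f_sh / 1e9:.2f} GHz")

      # Lossless CRLH dispersion relation: cos(beta * p) = 1 + Z(w) * Y(w) / 2, with
      # series impedance Z = j(w L_R - 1/(w C_L)) and shunt admittance Y = j(w C_R - 1/(w L_L)).
      f = np.linspace(0.5e9, 20e9, 4000)
      w = 2 * np.pi * f
      ZY = -(w * L_R - 1.0 / (w * C_L)) * (w * C_R - 1.0 / (w * L_L))
      arg = 1.0 + ZY / 2.0
      with np.errstate(invalid="ignore"):
          beta = np.where(np.abs(arg) <= 1.0, np.arccos(arg) / p, np.nan)  # NaN marks stop bands
      # beta passes through zero exactly at f_se and f_sh, the two candidate ZOR frequencies.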

  3. An infinite-order two-component relativistic Hamiltonian by a simple one-step transformation.

    Science.gov (United States)

    Ilias, Miroslav; Saue, Trond

    2007-02-14

    The authors report the implementation of a simple one-step method for obtaining an infinite-order two-component (IOTC) relativistic Hamiltonian using matrix algebra. They apply the IOTC Hamiltonian to calculations of excitation and ionization energies as well as electric and magnetic properties of the radon atom. The results are compared to corresponding calculations using identical basis sets and based on the four-component Dirac-Coulomb Hamiltonian as well as Douglas-Kroll-Hess and zeroth-order regular approximation Hamiltonians, all implemented in the DIRAC program package, thus allowing a comprehensive comparison of relativistic Hamiltonians within the finite basis approximation.

  4. Relativistic nuclear magnetic resonance J-coupling with ultrasoft pseudopotentials and the zeroth-order regular approximation

    International Nuclear Information System (INIS)

    Green, Timothy F. G.; Yates, Jonathan R.

    2014-01-01

    We present a method for the first-principles calculation of nuclear magnetic resonance (NMR) J-coupling in extended systems using state-of-the-art ultrasoft pseudopotentials and including scalar-relativistic effects. The use of ultrasoft pseudopotentials is enabled by extending the projector augmented wave (PAW) method of Joyce et al. [J. Chem. Phys. 127, 204107 (2007)]. We benchmark the method against existing local-orbital quantum chemical calculations and experiments for small molecules containing light elements, with good agreement. Scalar-relativistic effects are included at the zeroth-order regular approximation level of theory and benchmarked against existing local-orbital quantum chemical calculations and experiments for a number of small molecules containing the heavy row-six elements W, Pt, Hg, Tl, and Pb, with good agreement. Finally, ¹J(P-Ag) and ²J(P-Ag-P) couplings are calculated in some larger molecular crystals and compared against solid-state NMR experiments. Some remarks are also made on improving the numerical stability of dipole perturbations using PAW.

  5. Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry

    Science.gov (United States)

    Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei

    2018-04-01

    In this paper, a calibration method is proposed which eliminates the zeroth-order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and the calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase-shifting error and the zeroth-order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase-shifting error and the zeroth-order effect when the phase-shifting error is less than 2° and the zeroth-order effect is less than 0.2. The experimental result shows that, compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.

  6. Zeroth order resonator (ZOR) based RFID antenna design

    Science.gov (United States)

    Masud, Muhammad Mubeen

    Meander-line and multi-layer antennas have been used extensively to design compact UHF radio frequency identification (RFID) tags; however the overall size reduction of meander-line antennas is limited by the amount of parasitic inductance that can be introduced by each meander-line segment, and multi-layer antennas can be too costly. In this study, a new compact antenna topology for passive UHF RFID tags based on zeroth order resonant (ZOR) design techniques is presented. The antenna consists of lossy coplanar conductors and either inter-connected inter-digital capacitor (IDC) or shunt inductor unit-cells with a ZOR frequency near the operating frequency of the antenna. Setting the ZOR frequency near the operating frequency is a key component in the design process because the unit-cells chosen for the design are inductive at the operating frequency. This makes the unit-cells very useful for antenna miniaturization. These new designs in this work have several benefits: the coplanar layout can be printed on a single layer, matching inductive loops that reduce antenna efficiency are not required and ZOR analysis can be used for the design. Finally, for validation, prototype antennas are designed, fabricated and tested.

  7. Zeroth-order exchange energy as a criterion for optimized atomic basis sets in interatomic force calculations

    International Nuclear Information System (INIS)

    Varandas, A.J.C.

    1980-01-01

    A suggestion is made for using the zeroth-order exchange term, at the one-exchange level, in the perturbation development of the interaction energy as a criterion for optimizing the atomic basis sets in interatomic force calculations. The approach is illustrated for the case of two helium atoms. (orig.)

  8. On the validity of localized approximation for an on-axis zeroth-order Bessel beam

    International Nuclear Information System (INIS)

    Gouesbet, Gérard; Lock, J.A.; Ambrosio, L.A.; Wang, J.J.

    2017-01-01

    Localized approximation procedures are efficient ways to evaluate beam shape coefficients of laser beams, and are particularly useful when other methods are ineffective or inefficient. Several papers in the literature have reported the use of such procedures to evaluate the beam shape coefficients of Bessel beams. Examining the specific case of an on-axis zeroth-order Bessel beam, we demonstrate that localized approximation procedures are valid only for small axicon angles.
    Highlights:
    • The localized approximation has been widely used to evaluate the Beam Shape Coefficients (BSCs) of Bessel beams.
    • The validity of this approximation is examined in the case of an on-axis zeroth-order Bessel beam.
    • It is demonstrated, in this specific example, that the localized approximation is efficient only for small enough axicon angles.
    • It is easily argued that this result must remain true for any kind of Bessel beam.

  9. Noise is the new signal: Moving beyond zeroth-order geomorphology (Invited)

    Science.gov (United States)

    Jerolmack, D. J.

    2010-12-01

    The last several decades have witnessed a rapid growth in our understanding of landscape evolution, led by the development of geomorphic transport laws - time- and space-averaged equations relating mass flux to some physical process(es). In statistical mechanics this approach is called mean field theory (MFT), in which complex many-body interactions are replaced with an external field that represents the average effect of those interactions. Because MFT neglects all fluctuations around the mean, it has been described as a zeroth-order fluctuation model. The mean field approach to geomorphology has enabled the development of landscape evolution models, and led to a fundamental understanding of many landform patterns. Recent research, however, has highlighted two limitations of MFT: (1) The integral (averaging) time and space scales in geomorphic systems are sometimes poorly defined and often quite large, placing the mean field approximation on uncertain footing, and; (2) In systems exhibiting fractal behavior, an integral scale does not exist - e.g., properties like mass flux are scale-dependent. In both cases, fluctuations in sediment transport are non-negligible over the scales of interest. In this talk I will synthesize recent experimental and theoretical work that confronts these limitations. Discrete element models of fluid and grain interactions show promise for elucidating transport mechanics and pattern-forming instabilities, but require detailed knowledge of micro-scale processes and are computationally expensive. An alternative approach is to begin with a reasonable MFT, and then add higher-order terms that capture the statistical dynamics of fluctuations. In either case, moving beyond zeroth-order geomorphology requires a careful examination of the origins and structure of transport “noise”. I will attempt to show how studying the signal in noise can both reveal interesting new physics, and also help to formalize the applicability of geomorphic

  10. Temperature, transitivity, and the zeroth law

    DEFF Research Database (Denmark)

    Bergthorsson, Bjørn

    1977-01-01

    Different statements of the zeroth law are examined. Two types of statements—which characterize two aspects of temperature—are found. A new formulation of the zeroth law is given and a corollary is stated. By means of this corollary it is shown how temperature and transitivity are used to disclose...

  11. Exploration of zeroth-order wavefunctions and energies as a first step toward intramolecular symmetry-adapted perturbation theory

    Science.gov (United States)

    Gonthier, Jérôme F.; Corminboeuf, Clémence

    2014-04-01

    Non-covalent interactions occur between and within all molecules and have a profound impact on structural and electronic phenomena in chemistry, biology, and material science. Understanding the nature of inter- and intramolecular interactions is essential not only for establishing the relation between structure and properties, but also for facilitating the rational design of molecules with targeted properties. These objectives have motivated the development of theoretical schemes decomposing intermolecular interactions into physically meaningful terms. Among the various existing energy decomposition schemes, Symmetry-Adapted Perturbation Theory (SAPT) is one of the most successful as it naturally decomposes the interaction energy into physical and intuitive terms. Unfortunately, analogous approaches for intramolecular energies are theoretically highly challenging and virtually nonexistent. Here, we introduce a zeroth-order wavefunction and energy, which represent the first step toward the development of an intramolecular variant of the SAPT formalism. The proposed energy expression is based on the Chemical Hamiltonian Approach (CHA), which relies upon an asymmetric interpretation of the electronic integrals. The orbitals are optimized with a non-hermitian Fock matrix based on two variants: one using orbitals strictly localized on individual fragments and the other using canonical (delocalized) orbitals. The zeroth-order wavefunction and energy expression are validated on a series of prototypical systems. The computed intramolecular interaction energies demonstrate that our approach combining the CHA with strictly localized orbitals achieves reasonable interaction energies and basis set dependence in addition to producing intuitive energy trends. Our zeroth-order wavefunction is the primary step fundamental to the derivation of any perturbation theory correction, which has the potential to truly transform our understanding and quantification of non

  12. Exploration of zeroth-order wavefunctions and energies as a first step toward intramolecular symmetry-adapted perturbation theory

    International Nuclear Information System (INIS)

    Gonthier, Jérôme F.; Corminboeuf, Clémence

    2014-01-01

    Non-covalent interactions occur between and within all molecules and have a profound impact on structural and electronic phenomena in chemistry, biology, and material science. Understanding the nature of inter- and intramolecular interactions is essential not only for establishing the relation between structure and properties, but also for facilitating the rational design of molecules with targeted properties. These objectives have motivated the development of theoretical schemes decomposing intermolecular interactions into physically meaningful terms. Among the various existing energy decomposition schemes, Symmetry-Adapted Perturbation Theory (SAPT) is one of the most successful as it naturally decomposes the interaction energy into physical and intuitive terms. Unfortunately, analogous approaches for intramolecular energies are theoretically highly challenging and virtually nonexistent. Here, we introduce a zeroth-order wavefunction and energy, which represent the first step toward the development of an intramolecular variant of the SAPT formalism. The proposed energy expression is based on the Chemical Hamiltonian Approach (CHA), which relies upon an asymmetric interpretation of the electronic integrals. The orbitals are optimized with a non-hermitian Fock matrix based on two variants: one using orbitals strictly localized on individual fragments and the other using canonical (delocalized) orbitals. The zeroth-order wavefunction and energy expression are validated on a series of prototypical systems. The computed intramolecular interaction energies demonstrate that our approach combining the CHA with strictly localized orbitals achieves reasonable interaction energies and basis set dependence in addition to producing intuitive energy trends. Our zeroth-order wavefunction is the primary step fundamental to the derivation of any perturbation theory correction, which has the potential to truly transform our understanding and quantification of non

  13. Design of a broadband hexagonal-shaped zeroth-order resonance antenna with metamaterials

    Energy Technology Data Exchange (ETDEWEB)

    Woo, Dong Sik; Kim, Kang Wook; Choi, Hyun Chul [Kyungpook National University, Daegu (Korea, Republic of)]

    2014-11-15

    A broadband hexagonal-shaped metamaterial (MTM)-based zeroth-order resonant (ZOR) antenna was designed and fabricated. The hexagonal shape of the top patch on a mushroom structure creates not only direct current paths between the two ends of the patch but also circulating current paths along the outside of the patch, thereby widening the resonance bandwidth of the mushroom MTM antenna. Owing to the shape of the hexagonal patch, the presented antenna achieved an impedance bandwidth of 58.6%, corresponding to ultra-wideband operation. The proposed ZOR antenna was modeled by utilizing a composite right- and left-handed (CRLH) transmission line and provided an antenna gain of 4 to 9.3 dBi with reduced size compared to conventional microstrip antennas at Ku- to K-band frequencies.

  14. Zeroth Poisson Homology, Foliated Cohomology and Perfect Poisson Manifolds

    Science.gov (United States)

    Martínez-Torres, David; Miranda, Eva

    2018-01-01

    We prove that, for compact regular Poisson manifolds, the zeroth homology group is isomorphic to the top foliated cohomology group, and we give some applications. In particular, we show that, for regular unimodular Poisson manifolds, top Poisson and foliated cohomology groups are isomorphic. Inspired by the symplectic setting, we define what a perfect Poisson manifold is. We use these Poisson homology computations to provide families of perfect Poisson manifolds.

  15. Zeroth-order design report for the next linear collider. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Raubenheimer, T.O. [ed.]

    1996-05-01

    This Zeroth Order Design Report (ZDR) for the Next Linear Collider (NLC) has been completed as a feasibility study for a TeV-scale linear collider that incorporates a room-temperature accelerator powered by rf microwaves at 11.424 GHz--similar to that presently used in the SLC, but at four times the rf frequency. The purpose of this study is to examine the complete systems of such a collider, to understand how the parts fit together, and to make certain that every required piece has been included. The design presented here is not fully engineered in any sense, but to be assured that the NLC can be built, attention has been given to a number of critical components and issues that present special challenges. More engineering and development of a number of mechanical and electrical systems remain to be done, but the conclusion of this study is that indeed the NLC is technically feasible and can be expected to reach the performance levels required to perform research at the TeV energy scale. Volume one covers the following: the introduction; electron source; positron source; NLC damping rings; bunch compressors and prelinac; low-frequency linacs and compressors; main linacs; design and dynamics; and RF systems for main linacs.

  16. Zeroth-order design report for the next linear collider. Volume 1

    International Nuclear Information System (INIS)

    Raubenheimer, T.O.

    1996-05-01

    This Zeroth Order Design Report (ZDR) for the Next Linear Collider (NLC) has been completed as a feasibility study for a TeV-scale linear collider that incorporates a room-temperature accelerator powered by rf microwaves at 11.424 GHz--similar to that presently used in the SLC, but at four times the rf frequency. The purpose of this study is to examine the complete systems of such a collider, to understand how the parts fit together, and to make certain that every required piece has been included. The design presented here is not fully engineered in any sense, but to be assured that the NLC can be built, attention has been given to a number of critical components and issues that present special challenges. More engineering and development of a number of mechanical and electrical systems remain to be done, but the conclusion of this study is that indeed the NLC is technically feasible and can be expected to reach the performance levels required to perform research at the TeV energy scale. Volume one covers the following: the introduction; electron source; positron source; NLC damping rings; bunch compressors and prelinac; low-frequency linacs and compressors; main linacs; design and dynamics; and RF systems for main linacs

  17. Zeroth-order design report for the next linear collider. Volume 2

    International Nuclear Information System (INIS)

    Raubenheimer, T.O.

    1996-05-01

    This Zeroth-Order Design Report (ZDR) for the Next Linear Collider (NLC) has been completed as a feasibility study for a TeV-scale linear collider that incorporates a room-temperature accelerator powered by rf microwaves at 11.424 GHz--similar to that presently used in the SLC, but at four times the rf frequency. The purpose of this study is to examine the complete systems of such a collider, to understand how the parts fit together, and to make certain that every required piece has been included. The "design" presented here is not fully engineered in any sense, but to be assured that the NLC can be built, attention has been given to a number of critical components and issues that present special challenges. More engineering and development of a number of mechanical and electrical systems remain to be done, but the conclusion of this study is that indeed the NLC is technically feasible and can be expected to reach the performance levels required to perform research at the TeV energy scale. Volume II covers the following: collimation systems; IP switch and big bend; final focus; the interaction region; multiple bunch issues; control systems; instrumentation; machine protection systems; NLC reliability considerations; NLC conventional facilities. Also included are four appendices on the following topics: An RF power source upgrade to the NLC; a second interaction region for gamma-gamma, gamma-electron; ground motion: theory and measurement; and beam-based feedback: theory and implementation.

  18. Zeroth-order design report for the next linear collider. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Raubenheimer, T.O. [ed.]

    1996-05-01

    This Zeroth-Order Design Report (ZDR) for the Next Linear Collider (NLC) has been completed as a feasibility study for a TeV-scale linear collider that incorporates a room-temperature accelerator powered by rf microwaves at 11.424 GHz--similar to that presently used in the SLC, but at four times the rf frequency. The purpose of this study is to examine the complete systems of such a collider, to understand how the parts fit together, and to make certain that every required piece has been included. The "design" presented here is not fully engineered in any sense, but to be assured that the NLC can be built, attention has been given to a number of critical components and issues that present special challenges. More engineering and development of a number of mechanical and electrical systems remain to be done, but the conclusion of this study is that indeed the NLC is technically feasible and can be expected to reach the performance levels required to perform research at the TeV energy scale. Volume II covers the following: collimation systems; IP switch and big bend; final focus; the interaction region; multiple bunch issues; control systems; instrumentation; machine protection systems; NLC reliability considerations; NLC conventional facilities. Also included are four appendices on the following topics: An RF power source upgrade to the NLC; a second interaction region for gamma-gamma, gamma-electron; ground motion: theory and measurement; and beam-based feedback: theory and implementation.

  19. Incomplete nonextensive statistics and the zeroth law of thermodynamics

    International Nuclear Information System (INIS)

    Huang Zhi-Fu; Ou Cong-Jie; Chen Jin-Can

    2013-01-01

    On the basis of the entropy of incomplete statistics (IS) and the joint probability factorization condition, two controversial problems existing in IS are investigated: one is what expression of the internal energy is reasonable for a composite system and the other is whether the traditional zeroth law of thermodynamics is suitable for IS. Some new equivalent expressions of the internal energy of a composite system are derived through accurate mathematical calculation. Moreover, a self-consistent calculation is used to expound that the zeroth law of thermodynamics is also suitable for IS, but it cannot be proven theoretically. Finally, it is pointed out that the generalized zeroth law of thermodynamics for incomplete nonextensive statistics is unnecessary and the nonextensive assumptions for the composite internal energy will lead to mathematical contradiction.

  20. Phase-only optical encryption based on the zeroth-order phase-contrast technique

    Science.gov (United States)

    Pizolato, José Carlos; Neto, Luiz Gonçalves

    2009-09-01

    A phase-only encryption/decryption scheme with the readout based on the zeroth-order phase-contrast technique (ZOPCT), without the use of a phase-changing plate on the Fourier plane of an optical system based on the 4f optical correlator, is proposed. The encryption of a gray-level image is achieved by multiplying the phase distribution obtained directly from the gray-level image by a random phase distribution. The robustness of the encoding is assured by the nonlinearity intrinsic to the proposed phase-contrast method and the random phase distribution used in the encryption process. The experimental system has been implemented with liquid-crystal spatial modulators to generate phase-encrypted masks and a decrypting key. The advantage of this method is the easy scheme to recover the gray-level information from the decrypted phase-only mask applying the ZOPCT. An analysis of this decryption method was performed against brute force attacks.
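
    A rough numerical sketch of the phase-only encryption and decryption steps described above; the optical zeroth-order phase-contrast readout is not modeled (the decrypted phase is simply read back numerically), and the image size, phase mapping and random key below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical gray-level image, normalized to [0, 1].
      img = rng.random((64, 64))

      # Map gray levels to a phase distribution (the mapping range is an arbitrary choice).
      phase_img = np.pi * img                          # phase-only representation of the image
      key_phase = 2 * np.pi * rng.random(img.shape)    # random phase distribution (the key)

      # Encryption: multiply the two phase-only masks, i.e. add their phases.
      encrypted = np.exp(1j * phase_img) * np.exp(1j * key_phase)

      # Decryption: multiply by the conjugate key mask, leaving a phase-only mask
      # that carries the original gray levels (converted to intensity optically via
      # the ZOPCT in the paper; read back numerically here).
      decrypted = encrypted * np.exp(-1j * key_phase)
      recovered = np.angle(decrypted) / np.pi

      print("max reconstruction error:", np.max(np.abs(recovered - img)))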

  1. Examples of the Zeroth Theorem of the History of Science

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, J.D.

    2007-08-24

    The zeroth theorem of the history of science, enunciated by E. P. Fischer, states that a discovery (rule, regularity, insight) named after someone (often) did not originate with that person. I present five examples from physics: the Lorentz condition ∂_μ A^μ = 0 defining the Lorentz gauge of the electromagnetic potentials; the Dirac delta function, δ(x); the Schumann resonances of the earth-ionosphere cavity; the Weizsacker-Williams method of virtual quanta; the BMT equation of spin dynamics. I give illustrated thumbnail sketches of both the true and reputed discoverers and quote from their "discovery" publications.

  2. Regularity for 3D Navier-Stokes equations in terms of two components of the vorticity

    Directory of Open Access Journals (Sweden)

    Sadek Gala

    2010-10-01

    We establish regularity conditions for the 3D Navier-Stokes equation via two components of the vorticity vector. It is known that if a Leray-Hopf weak solution $u$ satisfies $\tilde{\omega}\in L^{2/(2-r)}(0,T;L^{3/r}(\mathbb{R}^3))$ with $0<r<2$, where $\tilde{\omega}$ denotes the two components of the vorticity $\omega = \operatorname{curl} u$, then $u$ becomes the classical solution on $(0,T]$ (see [5]). We prove the regularity of the Leray-Hopf weak solution $u$ under each of two (weaker) conditions, one of which is $\tilde{\omega}\in L^{2/(2-r)}(0,T;\dot{\mathcal{M}}_{2,3/r}(\mathbb{R}^3))$ for $0<r<2$, where $\dot{\mathcal{M}}_{2,3/r}(\mathbb{R}^3)$ denotes a Morrey-Campanato space. This regularity criterion improves the results in Chae-Choe [5].

  3. The zeroth law in quasi-homogeneous thermodynamics and black holes

    Directory of Open Access Journals (Sweden)

    Alessandro Bravetti

    2017-11-01

    Motivated by black hole thermodynamics, we consider the zeroth law of thermodynamics for systems whose entropy is a quasi-homogeneous function of the extensive variables. We show that the generalized Gibbs–Duhem identity and the Maxwell construction for phase coexistence based on the standard zeroth law are incompatible in this case. We argue that the generalized Gibbs–Duhem identity suggests a revision of the zeroth law, which in turn permits us to reconsider Maxwell's construction in analogy with the standard case. The physical feasibility of our proposal is considered in the particular case of black holes.

  4. Disorder-Induced Order in Two-Component Bose-Einstein Condensates

    International Nuclear Information System (INIS)

    Niederberger, A.; Schulte, T.; Wehr, J.; Lewenstein, M.; Sanchez-Palencia, L.; Sacha, K.

    2008-01-01

    We propose and analyze a general mechanism of disorder-induced order in two-component Bose-Einstein condensates, analogous to corresponding effects established for XY spin models. We show that a random Raman coupling induces a relative phase of π/2 between the two BECs and that the effect is robust. We demonstrate it in one, two, and three dimensions at T=0 and present evidence that it persists at small T>0. Applications to phase control in ultracold spinor condensates are discussed

  5. A Zeroth Law Compatible Model to Kerr Black Hole Thermodynamics

    Directory of Open Access Journals (Sweden)

    Viktor G. Czinner

    2017-02-01

    We consider the thermodynamic and stability problem of Kerr black holes arising from the nonextensive/nonadditive nature of the Bekenstein–Hawking entropy formula. Nonadditive thermodynamics is often criticized by asserting that the zeroth law cannot be compatible with nonadditive composition rules, so in this work we follow the so-called formal logarithm method to derive an additive entropy function for Kerr black holes also satisfying the zeroth law’s requirement. Starting from the most general, equilibrium compatible, nonadditive entropy composition rule of Abe, we consider the simplest non-parametric approach that is generated by the explicit nonadditive form of the Bekenstein–Hawking formula. This analysis extends our previous results on the Schwarzschild case, and shows that the zeroth law-compatible temperature function in the model is independent of the mass–energy parameter of the black hole. By applying the Poincaré turning point method, we also study the thermodynamic stability problem in the system.

  6. Recovering four-component solutions by the inverse transformation of the infinite-order two-component wave functions

    International Nuclear Information System (INIS)

    Barysz, Maria; Mentel, Lukasz; Leszczynski, Jerzy

    2009-01-01

    The two-component Hamiltonian of the infinite-order two-component (IOTC) theory is obtained by a unitary block-diagonalizing transformation of the Dirac Hamiltonian. Once the IOTC spin orbitals are calculated, they can be back-transformed into four-component solutions. The transformed four-component solutions are then used to evaluate different moments of the electron density distribution. This formally exact method may, however, suffer from certain approximations involved in its numerical implementation. As shown by the present study, with a sufficiently large basis set of Gaussian functions, the Dirac values of these moments are fully recovered in spite of using the approximate resolution of the identity in terms of eigenvectors of the p² operator.

  7. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper, tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  8. SQED two-loop beta function in the context of Implicit regularization

    International Nuclear Information System (INIS)

    Cherchiglia, Adriano Lana; Sampaio, Marcos; Nemes, Maria Carolina

    2013-01-01

    In this work we present the state of the art for Implicit Regularization (IReg) in the context of supersymmetric theories. IReg is a four-dimensional regularization technique in momentum space which disentangles, in a consistent way at arbitrary order, the divergences, regularization-dependent and finite parts of any Feynman amplitude. Since it does not resort to modifications of the physical space-time dimension of the underlying quantum field theoretical model, it can be consistently applied to supersymmetric theories. First we describe the technique and present previous results for supersymmetric models: the two-loop beta function for the Wess-Zumino model (both in the component and superfield formalisms); the two-loop beta function for Super Yang-Mills (in the superfield formalism using the background field technique). Afterwards, we present our calculation of the two-loop beta function for massless and massive SQED using the superfield formalism with and without resorting to the background field technique. We find that only in the latter case does the two-loop divergence cancel out. We argue that this is due to an anomalous Jacobian under the rescaling of the fields in the path integral which is necessary for the application of the supersymmetric background field technique. We find, however, that in both cases the two-loop coefficients of the beta function are non-null. Finally, we briefly discuss the anomaly puzzle in the context of our technique. (author)

  9. Relativistic DFT calculations of hyperfine coupling constants in the 5d hexafluorido complexes

    DEFF Research Database (Denmark)

    Haase, Pi Ariane Bresling; Repisky, Michal; Komorovsky, Stanislav

    2018-01-01

    We have investigated the performance of the most popular relativistic density functional theory methods, the zeroth order regular approximation (ZORA) and 4-component Dirac-Kohn-Sham (DKS), in the calculation of the recently measured hyperfine coupling constants of Re(IV) and Ir(IV) in their hexafluorido complexes.

  10. Regular perturbation theory for two-electron atoms

    International Nuclear Information System (INIS)

    Feranchuk, I.D.; Triguk, V.V.

    2011-01-01

    Regular perturbation theory (RPT) for the ground and excited states of two-electron atoms or ions is developed. It is shown for the first time that the summation of the matrix elements of the electron-electron interaction operator over all intermediate states can be carried out in closed form by means of the two-particle Coulomb Green's function constructed in the Letter. It is shown that the second-order approximation of RPT includes the main part of the correlation energy both for the ground and excited states. This approach can also be useful for the description of two-electron atoms in external fields.
    Highlights:
    → We develop regular perturbation theory for two-electron atoms or ions.
    → We calculate the sum of the matrix elements over all intermediate states.
    → We construct the two-particle Coulomb Green's function.

  11. Cognitive components of regularity processing in the auditory domain.

    Directory of Open Access Journals (Sweden)

    Stefan Koelsch

    BACKGROUND: Music-syntactic irregularities often co-occur with the processing of physical irregularities. In this study we constructed chord sequences such that perceived differences in the cognitive processing between regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition or pitch commonality (the major component of 'sensory dissonance'). METHODOLOGY/PRINCIPAL FINDINGS: Two groups of subjects (musicians and nonmusicians) were investigated with electroencephalography (EEG). Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs). The ERAN had a latency of around 180 ms after the onset of the music-syntactically irregular chords and had maximum amplitude values over right anterior electrode sites. CONCLUSIONS/SIGNIFICANCE: Because irregular chords were hardly detectable based on acoustical factors (such as pitch repetition and sensory dissonance), this ERAN effect reflects for the most part cognitive (not sensory) components of regularity-based, music-syntactic processing. Our study represents a methodological advance compared to previous ERP studies investigating the neural processing of music-syntactically irregular chords.

  12. Unusual interlayer quantum transport behavior caused by the zeroth Landau level in YbMnBi2.

    Science.gov (United States)

    Liu, J Y; Hu, J; Graf, D; Zou, T; Zhu, M; Shi, Y; Che, S; Radmanesh, S M A; Lau, C N; Spinu, L; Cao, H B; Ke, X; Mao, Z Q

    2017-09-21

    Relativistic fermions in topological quantum materials are characterized by linear energy-momentum dispersion near band crossing points. Under magnetic fields, relativistic fermions acquire a Berry phase of π in cyclotron motion, leading to a zeroth Landau level (LL) at the crossing point, a signature unique to relativistic fermions. Here we report the unusual interlayer quantum transport behavior resulting from the zeroth LL mode observed in the time-reversal-symmetry-breaking type II Weyl semimetal YbMnBi2. The interlayer magnetoresistivity and Hall conductivity of this material are found to exhibit surprising angular dependences under high fields, which can be well fitted by a model that considers the interlayer quantum tunneling transport of the zeroth LL's Weyl fermions. Our results shed light on the unusual role of the zeroth LL mode in transport. The transport behavior of the carriers residing in the lowest Landau level is hard to observe in most topological materials. Here, Liu et al. report a surprising angular dependence of the interlayer magnetoresistivity and Hall conductivity arising from the lowest Landau level under high magnetic field in the type II Weyl semimetal YbMnBi2.

  13. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
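
    For illustration, the minimal 1D sketch below contrasts a smoothed TV objective with a TGV2-style objective in which an auxiliary slope field w is introduced, so piecewise-affine structure is not penalized. This is not the authors' FEM-based EIT implementation; the test signal, weights, and smoothing of the absolute value are arbitrary choices.

      import numpy as np
      from scipy.optimize import minimize

      def smooth_abs(x, eps=1e-4):
          # Smoothed |x| so that a generic quasi-Newton solver can be used.
          return np.sqrt(x * x + eps)

      rng = np.random.default_rng(1)
      n = 100
      t = np.linspace(0.0, 1.0, n)
      truth = np.where(t < 0.5, 2.0 * t, 1.5 - t)        # piecewise-affine ground truth
      f = truth + 0.05 * rng.standard_normal(n)          # noisy data

      def tv_objective(u, lam=0.1):
          # First-order TV: penalizes all slopes, which produces staircasing on ramps.
          return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(smooth_abs(np.diff(u)))

      def tgv_objective(z, lam1=0.1, lam0=0.2):
          # TGV2-style objective: an auxiliary slope field w absorbs affine parts, so only
          # deviations of the gradient from w and the variation of w itself are penalized.
          u, w = z[:n], z[n:]
          return (0.5 * np.sum((u - f) ** 2)
                  + lam1 * np.sum(smooth_abs(np.diff(u) - w[:-1]))
                  + lam0 * np.sum(smooth_abs(np.diff(w))))

      opts = {"maxfun": 200000, "maxiter": 20000}
      u_tv = minimize(tv_objective, f.copy(), method="L-BFGS-B", options=opts).x
      z_tgv = minimize(tgv_objective, np.concatenate([f, np.zeros(n)]),
                       method="L-BFGS-B", options=opts).x
      u_tgv = z_tgv[:n]

      print("TV  reconstruction error:", np.linalg.norm(u_tv - truth))
      print("TGV reconstruction error:", np.linalg.norm(u_tgv - truth))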

  14. REGULAR METHOD FOR SYNTHESIS OF BASIC BENT-SQUARES OF RANDOM ORDER

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

    The paper is devoted to the construction of the class of the most non-linear Boolean bent-functions of any length N = 2^k (k = 2, 4, 6, ...), on the basis of their spectral representation, the Agievich bent squares. These perfect algebraic constructions are used as a basis to build many new cryptographic primitives, such as generators of pseudo-random key sequences, cryptographic S-boxes, etc. Bent-functions also find their application in the construction of C-codes in systems with code division multiple access (CDMA) to provide the lowest possible value of the Peak-to-Average Power Ratio (PAPR_k = 1), as well as for the construction of error-correcting codes and systems of orthogonal biphasic signals. All the numerous applications of bent-functions relate to the theory of their synthesis. However, regular methods for the complete class synthesis of bent-functions of any length N = 2^k are currently unknown. The paper proposes a regular synthesis method for the basic Agievich bent squares of any order n, based on a regular operator of dyadic shift. A classification of the complete set of spectral vectors of lengths (l = 8, 16, ...), based on the criterion of the maximum absolute value and the set of absolute values of the spectral components, has been carried out in the paper. It has been shown that any spectral vector can be a basis for building bent squares. The results of the synthesis for the Agievich bent squares of order n = 8 have been generalized, and it has been revealed that there are only 3 basic bent squares for this order, while the other 5 can be obtained with the help of the operation of step-cyclic shift. All the basic bent squares of order n = 16 have been synthesized, which allows the construction of bent-functions of length N = 256. The obtained basic bent squares can be used either for direct synthesis of bent-functions and their practical application or for further research in order to synthesize new structures of bent squares of orders n = 16, 32, 64, ...
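
    As background on the spectral representation used above, the sketch below computes the Walsh-Hadamard spectrum of a classical 4-variable bent function and checks that it is flat (all values of magnitude 2^(k/2)). The example function and the 4x4 arrangement of its spectrum are purely illustrative and do not reproduce the Agievich bent-square synthesis itself.

      import numpy as np
      from itertools import product

      k = 4                          # number of Boolean variables; length N = 2**k = 16

      def f(x):
          # A classical bent function of four variables (Maiorana-McFarland type):
          # f(x1, x2, x3, x4) = x1*x2 XOR x3*x4.
          return (x[0] & x[1]) ^ (x[2] & x[3])

      def walsh_coefficient(a):
          # W_f(a) = sum over x of (-1)**(f(x) XOR a.x)
          total = 0
          for x in product((0, 1), repeat=k):
              dot = sum(ai & xi for ai, xi in zip(a, x)) % 2
              total += (-1) ** (f(x) ^ dot)
          return total

      spectrum = np.array([walsh_coefficient(a) for a in product((0, 1), repeat=k)])
      print("Walsh-Hadamard spectrum:", spectrum)
      print("bent (flat spectrum of magnitude 2**(k/2)):",
            bool(np.all(np.abs(spectrum) == 2 ** (k // 2))))

      # Arranging the 16 spectral values as a 4x4 array gives a square spectral
      # representation of the kind that Agievich bent squares are built from.
      print(spectrum.reshape(4, 4))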

  15. The Hamiltonian formulation of regular rth-order Lagrangian field theories

    International Nuclear Information System (INIS)

    Shadwick, W.F.

    1982-01-01

    A Hamiltonian formulation of regular rth-order Lagrangian field theories over an m-dimensional manifold is presented in terms of the Hamilton-Cartan formalism. It is demonstrated that a uniquely determined Cartan m-form may be associated to an rth-order Lagrangian by imposing conditions of congruence modulo a suitably defined system of contact m-forms. A geometric regularity condition is given and it is shown that, for a regular Lagrangian, the momenta defined by the Hamilton-Cartan formalism, together with the coordinates on the (r-1)st-order jet bundle, are a minimal set of local coordinates needed to express the Euler-Lagrange equations. When r is greater than one, the number of variables required is strictly less than the dimension of the (2r-1)st order jet bundle. It is shown that, in these coordinates, the Euler-Lagrange equations take the first-order Hamiltonian form given by de Donder. It is also shown that the geometrically natural generalization of the Hamilton-Jacobi procedure for finding extremals is equivalent to de Donder's Hamilton-Jacobi equation. (orig.)

  16. Regularities and irregularities in order flow data

    Science.gov (United States)

    Theissen, Martin; Krause, Sebastian M.; Guhr, Thomas

    2017-11-01

    We identify and analyze statistical regularities and irregularities in the recent order flow of different NASDAQ stocks, focusing on the positions where orders are placed in the order book. This includes limit orders being placed outside of the spread, inside the spread and (effective) market orders. Based on the pairwise comparison of the order flow of different stocks, we perform a clustering of stocks into groups with similar behavior. This is useful to assess systemic aspects of stock price dynamics. We find that limit order placement inside the spread is strongly determined by the dynamics of the spread size. Most orders, however, arrive outside of the spread. While for some stocks order placement on or next to the quotes is dominating, deeper price levels are more important for other stocks. As market orders are usually adjusted to the quote volume, the impact of market orders depends on the order book structure, which we find to be quite diverse among the analyzed stocks as a result of the way limit order placement takes place.

  17. The Analysis of Two-Way Functional Data Using Two-Way Regularized Singular Value Decompositions

    KAUST Repository

    Huang, Jianhua Z.

    2009-12-01

    Two-way functional data consist of a data matrix whose row and column domains are both structured, for example, temporally or spatially, as when the data are time series collected at different locations in space. We extend one-way functional principal component analysis (PCA) to two-way functional data by introducing regularization of both left and right singular vectors in the singular value decomposition (SVD) of the data matrix. We focus on a penalization approach and solve the nontrivial problem of constructing proper two-way penalties from one-way regression penalties. We introduce conditional cross-validated smoothing parameter selection whereby left singular vectors are cross-validated conditional on right singular vectors, and vice versa. The concept can be realized as part of an alternating optimization algorithm. In addition to the penalization approach, we briefly consider two-way regularization with basis expansion. The proposed methods are illustrated with one simulated and two real data examples. Supplemental materials available online show that several "natural" approaches to penalized SVDs are flawed and explain why.
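
    A minimal sketch of the underlying idea, assuming simulated data, a simple second-difference roughness penalty, and fixed smoothing parameters in place of the paper's penalties and conditional cross-validation: a rank-one SVD is computed by alternating ridge-type updates that regularize the left and right singular vectors.

      import numpy as np

      rng = np.random.default_rng(2)
      n, m = 80, 60
      s = np.linspace(0, 1, n)
      t = np.linspace(0, 1, m)
      u_true = np.sin(2 * np.pi * s)              # smooth left singular vector
      v_true = np.exp(-8 * (t - 0.5) ** 2)        # smooth right singular vector
      X = np.outer(u_true, v_true) + 0.3 * rng.standard_normal((n, m))

      def roughness_penalty(size):
          # Omega = D'D with D the second-difference operator.
          D = np.diff(np.eye(size), n=2, axis=0)
          return D.T @ D

      Omega_u, Omega_v = roughness_penalty(n), roughness_penalty(m)
      lam_u, lam_v = 10.0, 10.0                   # fixed here; cross-validated in the paper

      v = rng.standard_normal(m)
      for _ in range(50):
          # Penalized least-squares update for u with v fixed:
          # minimize ||X - u v'||_F^2 + lam_u * u' Omega_u u.
          u = np.linalg.solve((v @ v) * np.eye(n) + lam_u * Omega_u, X @ v)
          u = u / np.linalg.norm(u)               # fix the scale indeterminacy
          # Analogous penalized update for v with u fixed.
          v = np.linalg.solve((u @ u) * np.eye(m) + lam_v * Omega_v, X.T @ u)

      v_hat = v / np.linalg.norm(v)
      print("corr(u, u_true):", abs(np.corrcoef(u, u_true)[0, 1]))
      print("corr(v, v_true):", abs(np.corrcoef(v_hat, v_true)[0, 1]))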

  18. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher order loops are also discussed.

  19. Spectra of primordial fluctuations in two-perfect-fluid regular bounces

    International Nuclear Information System (INIS)

    Finelli, Fabio; Peter, Patrick; Pinto-Neto, Nelson

    2008-01-01

    We introduce analytic solutions for a class of two-component bouncing models, where the bounce is triggered by a negative-energy-density perfect fluid. The equations of state of the two components are constant in time, but otherwise unrelated. By numerically integrating regular equations for scalar cosmological perturbations, we find that the (would-be) growing mode of the Newtonian potential before the bounce never matches the growing mode in the expanding stage. For the particular case of a negative-energy-density component with a stiff equation of state we give a detailed analytic study, which is in complete agreement with the numerical results. We also perform analytic and numerical calculations for long-wavelength tensor perturbations, obtaining that, in most cases of interest, the tensor spectral index is independent of the negative-energy fluid and given by the spectral index of the growing mode in the contracting stage. We compare our results with previous investigations in the literature.

  20. Dynamics of a strongly driven two-component Bose-Einstein condensate

    International Nuclear Information System (INIS)

    Salmond, G.L.; Holmes, C.A.; Milburn, G.J.

    2002-01-01

    We consider a two-component Bose-Einstein condensate in two spatially localized modes of a double-well potential, with periodic modulation of the tunnel coupling between the two modes. We treat the driven quantum field using a two-mode expansion and define the quantum dynamics in terms of the Floquet Operator for the time periodic Hamiltonian of the system. It has been shown that the corresponding semiclassical mean-field dynamics can exhibit regions of regular and chaotic motion. We show here that the quantum dynamics can exhibit dynamical tunneling between regions of regular motion, centered on fixed points (resonances) of the semiclassical dynamics

  1. A multiresolution method for solving the Poisson equation using high order regularization

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Walther, Jens Honore

    2016-01-01

    We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained utilizing the linearity of the Poisson equation, enabling superposition of solutions. We show that the multiresolution Poisson solver produces convergence rates...
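
    The single-resolution sketch below illustrates the basic ingredient, an FFT convolution of the source with a Gaussian-regularized free-space Green's function on a zero-padded (domain-doubled) grid, checked against the analytic potential of a Gaussian source. The multiresolution patches and the paper's specific high-order regularization are not reproduced; the grid size and smoothing radius are arbitrary assumptions.

      import numpy as np
      from scipy.special import erf

      # Solve -laplace(phi) = rho in free space on a single uniform grid by
      # convolving rho with a regularized Green's function via zero-padded FFTs.
      N, L = 32, 2.0                    # points per side, domain half-width (arbitrary)
      h = 2 * L / N                     # grid spacing
      x = -L + h * np.arange(N)
      X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

      s = 0.25                          # width of the Gaussian source (illustrative)
      R = np.sqrt(X**2 + Y**2 + Z**2)
      rho = np.exp(-R**2 / (2 * s**2)) / ((2 * np.pi) ** 1.5 * s**3)  # unit total charge

      # Regularized free-space Green's function G_sigma(r) = erf(r / (sqrt(2) sigma)) / (4 pi r),
      # finite at r = 0; sigma acts as the regularization radius.
      sigma = 2 * h
      M = 2 * N                         # domain doubling for an aperiodic (free-space) convolution
      xg = h * np.arange(M)
      xg[xg > (M // 2) * h] -= M * h    # wrap so the array holds signed grid offsets
      GX, GY, GZ = np.meshgrid(xg, xg, xg, indexing="ij")
      r = np.sqrt(GX**2 + GY**2 + GZ**2)
      with np.errstate(invalid="ignore", divide="ignore"):
          G = erf(r / (np.sqrt(2) * sigma)) / (4 * np.pi * r)
      G[0, 0, 0] = 1.0 / ((2 * np.pi) ** 1.5 * sigma)   # r -> 0 limit of G_sigma

      rho_pad = np.zeros((M, M, M))
      rho_pad[:N, :N, :N] = rho
      phi = np.real(np.fft.ifftn(np.fft.fftn(G) * np.fft.fftn(rho_pad))) * h**3
      phi = phi[:N, :N, :N]

      # Analytic potential of the sigma-mollified Gaussian source, for comparison.
      s_eff = np.sqrt(s**2 + sigma**2)
      R_safe = np.where(R > 1e-12, R, 1.0)
      phi_exact = np.where(R > 1e-12,
                           erf(R_safe / (np.sqrt(2) * s_eff)) / (4 * np.pi * R_safe),
                           1.0 / ((2 * np.pi) ** 1.5 * s_eff))
      print("max abs difference from analytic solution:", np.max(np.abs(phi - phi_exact)))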

  2. A regularized relaxed ordered subset list-mode reconstruction algorithm and its preliminary application to undersampling PET imaging

    International Nuclear Information System (INIS)

    Cao, Xiaoqing; Xie, Qingguo; Xiao, Peng

    2015-01-01

    List mode format is commonly used in modern positron emission tomography (PET) for image reconstruction due to certain special advantages. In this work, we proposed a list mode based regularized relaxed ordered subset (LMROS) algorithm for static PET imaging. LMROS is able to work with regularization terms which can be formulated as twice differentiable convex functions. Such a versatility would make LMROS a convenient and general framework for fulfilling different regularized list mode reconstruction methods. LMROS was applied to two simulated undersampling PET imaging scenarios to verify its effectiveness. Convex quadratic function, total variation constraint, non-local means and dictionary learning based regularization methods were successfully realized for different cases. The results showed that the LMROS algorithm was effective and some regularization methods greatly reduced the distortions and artifacts caused by undersampling. (paper)

  3. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
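
    As a toy illustration of the trade-off discussed above, the sketch below compares soft thresholding (the proximal operator of the ℓ1 norm) with firm thresholding, a simple parameterized non-convex alternative often associated with the minimax-concave penalty, which leaves large coefficients essentially unbiased. The thresholds, signal and noise level are arbitrary, and the thesis' actual parameterization and convexity conditions may differ.

      import numpy as np

      rng = np.random.default_rng(3)

      def soft_threshold(y, lam):
          # Proximal operator of the l1 norm: every surviving coefficient is shrunk by lam.
          return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

      def firm_threshold(y, lam, mu):
          # Firm thresholding: zero below lam, identity above mu, linear in between,
          # so large coefficients are left essentially unbiased.
          return np.where(np.abs(y) <= lam, 0.0,
                          np.where(np.abs(y) >= mu, y,
                                   np.sign(y) * mu * (np.abs(y) - lam) / (mu - lam)))

      # Sparse ground truth observed in noise (all values illustrative).
      n, n_nonzero = 200, 10
      x_true = np.zeros(n)
      idx = rng.choice(n, size=n_nonzero, replace=False)
      x_true[idx] = rng.uniform(2.0, 5.0, size=n_nonzero) * rng.choice([-1, 1], size=n_nonzero)
      y = x_true + 0.5 * rng.standard_normal(n)

      lam, mu = 1.0, 3.0
      print("soft-threshold error:", np.linalg.norm(soft_threshold(y, lam) - x_true))
      print("firm-threshold error:", np.linalg.norm(firm_threshold(y, lam, mu) - x_true))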

  4. Two-dimensional behavior of solitons in a low-β plasma with convective motion

    International Nuclear Information System (INIS)

    Makino, Mitsuhiro; Kamimura, Tetsuo; Sato, Tetsuya.

    1981-01-01

    The initial value problem of the Hasegawa-Mima (HM) equation, which describes the propagation of drift waves in a low-beta magnetized plasma, is numerically studied. Solitons are formed from an initial sinusoidal wave. For a wide range of initial conditions, the number of solitons and the recurrence time agree well with those obtained from the KdV equation reduced from the HM equation by Nozaki et al. As a result of nonlinear interactions among different solitons, their peak positions shift in the direction normal to the zeroth-order convective motion in a regular but different fashion. When we start from a sinusoidal wave, the peaks of the generated soliton train line up on a line at an angle with respect to the convective direction. Two-dimensional collisions of different solitons are examined. (author)

  5. Conductivity of two-component systems

    Energy Technology Data Exchange (ETDEWEB)

    Kuijper, A. de; Hofman, J.P.; Waal, J.A. de [Shell Research BV, Rijswijk (Netherlands). Koninklijke/Shell Exploratie en Productie Lab.]; Sandor, R.K.J. [Shell International Petroleum Maatschappij, The Hague (Netherlands)]

    1996-01-01

    The authors present measurements and computer simulation results on the electrical conductivity of nonconducting grains embedded in a conductive brine host. The shapes of the grains ranged from prolate-ellipsoidal (with an axis ratio of 5:1) through spherical to oblate-ellipsoidal (with an axis ratio of 1:5). The conductivity was studied as a function of porosity and packing, and Archie's cementation exponent was found to depend on porosity. They used spatially regular and random configurations with aligned and nonaligned packings. The experimental results agree well with the computer simulation data. This data set will enable extensive tests of models for calculating the anisotropic conductivity of two-component systems.
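
    For reference, the cementation exponent mentioned above enters through Archie's law, F = sigma_w / sigma_eff = phi^(-m). The tiny helpers below evaluate the law in both directions with purely illustrative numbers (not the paper's measurements).

      import numpy as np

      def formation_factor(sigma_w, sigma_eff):
          # Archie's formation factor F = sigma_w / sigma_eff for nonconducting grains in brine.
          return sigma_w / sigma_eff

      def cementation_exponent(sigma_w, sigma_eff, phi):
          # Invert Archie's law F = phi**(-m) for the cementation exponent m.
          return np.log(formation_factor(sigma_w, sigma_eff)) / np.log(1.0 / phi)

      # Illustrative numbers only (not measurements from the paper):
      sigma_w = 5.0        # brine conductivity [S/m]
      phi = 0.25           # porosity
      m = 1.8              # assumed cementation exponent
      sigma_eff = sigma_w * phi**m
      print("effective conductivity [S/m]:", sigma_eff)
      print("recovered cementation exponent m:", cementation_exponent(sigma_w, sigma_eff, phi))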

  6. The Jump Set under Geometric Regularization. Part 1: Basic Technique and First-Order Denoising

    KAUST Repository

    Valkonen, Tuomo

    2015-01-01

    Let u ∈ BV(Ω) solve the total variation (TV) denoising problem with L2-squared fidelity and data f. Caselles, Chambolle, and Novaga [Multiscale Model. Simul., 6 (2008), pp. 879-894] have shown the containment H^{m-1}(J_u \ J_f) = 0 of the jump set J_u of u in that of f. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularizers, such as total generalized variation and Euler's elastica. These have received increased attention in recent times due to their better practical regularization properties compared to conventional TV or wavelets. We prove analogous jump set containment properties for a general class of regularizers. We do this with novel Lipschitz transformation techniques and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularizers, while in Part 2 we will extend it to higher-order regularizers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularized TV. We also demonstrate that the technique would apply to nonconvex TV models as well as the Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.

  7. A second order anti-diffusive Lagrange-remap scheme for two-component flows

    Directory of Open Access Journals (Sweden)

    Lagoutière Frédéric

    2011-11-01

    We build a non-dissipative second order algorithm for the approximate resolution of the one-dimensional Euler system of compressible gas dynamics with two components. The considered model is the five-equation model proposed and analysed in [1]. The algorithm is based on [8], which deals with a non-dissipative first order resolution in Lagrange-remap formalism. In the present paper we describe, in the same framework, an algorithm that is second order accurate in time and space and that preserves sharp interfaces between the components. Numerical results reported at the end of the paper are very encouraging, showing the interest of second order accuracy for genuinely non-linear waves.

  8. Deterministic time-reversible thermostats: chaos, ergodicity, and the zeroth law of thermodynamics

    Science.gov (United States)

    Patra, Puneet Kumar; Sprott, Julien Clinton; Hoover, William Graham; Griswold Hoover, Carol

    2015-09-01

    The relative stability and ergodicity of deterministic time-reversible thermostats, both singly and in coupled pairs, are assessed through their Lyapunov spectra. Five types of thermostat are coupled to one another through a single Hooke's-law harmonic spring. The resulting dynamics shows that three specific thermostat types, Hoover-Holian, Ju-Bulgac, and Martyna-Klein-Tuckerman, have very similar Lyapunov spectra in their equilibrium four-dimensional phase spaces and when coupled in equilibrium or nonequilibrium pairs. All three of these oscillator-based thermostats are shown to be ergodic, with smooth analytic Gaussian distributions in their extended phase spaces (coordinate, momentum, and two control variables). Evidently these three ergodic and time-reversible thermostat types are particularly useful as statistical-mechanical thermometers and thermostats. Each of them generates Gibbs' universal canonical distribution internally as well as for systems to which they are coupled. Thus they obey the zeroth law of thermodynamics, as a good heat bath should. They also provide dissipative heat flow with relatively small nonlinearity when two or more such temperature baths interact and provide useful deterministic replacements for the stochastic Langevin equation.
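
    For concreteness, a minimal sketch of one of the ergodic four-dimensional thermostats named above, the Hoover-Holian oscillator (coordinate q, momentum p, and two control variables), integrated with SciPy; the equations are written in their commonly quoted form with unit target temperature, stated here as an assumption (Python):

      # Hedged sketch: Hoover-Holian thermostatted harmonic oscillator in the
      # four-dimensional (q, p, zeta, xi) phase space; T = 1 is an assumed target.
      import numpy as np
      from scipy.integrate import solve_ivp

      def hoover_holian(t, y, T=1.0):
          q, p, zeta, xi = y
          return [p,
                  -q - zeta * p - xi * p**3,
                  p**2 - T,
                  p**4 - 3.0 * p**2 * T]

      sol = solve_ivp(hoover_holian, (0.0, 500.0), [0.0, 5.0, 0.0, 0.0], max_step=0.01)
      print("long-time <p^2> (should be close to T):", np.mean(sol.y[1] ** 2))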

  9. On the Kählerian symmetries of the two-loop action of the effective string theory

    CERN Document Server

    Ozkurt, S S

    2003-01-01

    Some time ago, N. Kaloper and K. A. Meissner (Phys. Rev. D 56 (1997) 7940) proposed that local redefinitions of the fields do not change the equations of motion (in the redefined fields); this claim has not been generally accepted, since the redefined fields in general satisfy different equations of motion. For this reason, we prove in this paper that the whole two-loop action can be written as a square of the zeroth-order field equations. In this way we show that any solution of the zeroth-order field equations possessing some Kähler symmetry is at the same time also a solution of the two-loop equations.

  10. Recursive regularization step for high-order lattice Boltzmann methods

    Science.gov (United States)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step considerably enhances the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3×10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase of the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.
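
    For orientation, the sketch below implements the standard (non-recursive) regularization step on a D2Q9 lattice, i.e. the projection of the non-equilibrium populations onto second-order Hermite polynomials that the recursive procedure above improves upon; constants follow the usual D2Q9 convention and the input populations are synthetic (Python):

      # Hedged sketch: standard second-order Hermite regularization of f_neq on D2Q9.
      # This is the baseline step, not the paper's recursive variant.
      import numpy as np

      c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                    [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
      w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
      cs2 = 1.0 / 3.0

      def regularize(f_neq):
          """Project non-equilibrium populations onto the second-order Hermite subspace."""
          H2 = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)[None, :, :]
          a2 = np.einsum('i,iab->ab', f_neq, H2)          # second-order moment of f_neq
          return w * np.einsum('iab,ab->i', H2, a2) / (2.0 * cs2 ** 2)

      f_neq = np.random.uniform(-1e-3, 1e-3, 9)
      print("regularized populations:", regularize(f_neq))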

  11. A two-parameter family of double-power-law biorthonormal potential-density expansions

    Science.gov (United States)

    Lilley, Edward J.; Sanders, Jason L.; Evans, N. Wyn

    2018-05-01

    We present a two-parameter family of biorthonormal double-power-law potential-density expansions. Both the potential and density are given in closed analytic form and may be rapidly computed via recurrence relations. We show that this family encompasses all the known analytic biorthonormal expansions: the Zhao expansions (themselves generalizations of ones found earlier by Hernquist & Ostriker and by Clutton-Brock) and the recently discovered Lilley et al. (2017a) expansion. Our new two-parameter family includes expansions based around many familiar spherical density profiles as zeroth-order models, including the γ models and the Jaffe model. It also contains a basis expansion that reproduces the famous Navarro-Frenk-White (NFW) profile at zeroth order. The new basis expansions have been found via a systematic methodology which has wide applications in finding other new expansions. In the process, we also uncovered a novel integral transform solution to Poisson's equation.
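
    The zeroth-order models mentioned above belong to the double-power-law (Zhao) family; a small sketch evaluating that density profile, with NFW and Jaffe as special cases, is given below. Scale parameters are illustrative assumptions (Python):

      # Hedged sketch: Zhao (alpha, beta, gamma) double-power-law density profile.
      # NFW corresponds to (1, 3, 1) and the Jaffe model to (1, 4, 2).
      import numpy as np

      def zhao_density(r, rho0=1.0, rs=1.0, alpha=1.0, beta=3.0, gamma=1.0):
          x = r / rs
          return rho0 / (x ** gamma * (1.0 + x ** alpha) ** ((beta - gamma) / alpha))

      r = np.logspace(-2, 2, 5)
      print("NFW-like:  ", zhao_density(r))                        # (1, 3, 1)
      print("Jaffe-like:", zhao_density(r, beta=4.0, gamma=2.0))   # (1, 4, 2)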

  12. S-matrix regularities of two-dimensional sigma-models of Stiefel manifolds

    International Nuclear Information System (INIS)

    Flume-Gorczyca, B.

    1980-01-01

    The S-matrices of the two-dimensional nonlinear O(n + m)/O(n) and O(n + m)/O(n) x O(m) sigma-models corresponding to Stiefel and Grassmann manifolds, respectively, are compared in leading order in 1/n. It is shown that, after averaging over the O(m) labels of the incoming and outgoing particles, the S-matrices of both models become identical. This result explains why commonly expected regularities of the Grassmann models, in particular the absence of particle production, are found, modulo an O(m) average, also in the Stiefel models. (orig.)

  13. An Iterative Regularization Method for Identifying the Source Term in a Second Order Differential Equation

    Directory of Open Access Journals (Sweden)

    Fairouz Zouyed

    2015-01-01

    This paper discusses the inverse problem of determining an unknown source in a second order differential equation from measured final data. This problem is ill-posed; that is, the solution (if it exists) does not depend continuously on the data. In order to solve the considered problem, an iterative method is proposed. Using this method a regularized solution is constructed and an a priori error estimate between the exact solution and its regularized approximation is obtained. Moreover, numerical results are presented to illustrate the accuracy and efficiency of this method.
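
    A generic illustration of such an iterative regularization is the Landweber iteration with discrepancy-principle stopping, sketched below for an ill-conditioned linear operator; the operator, noise level and stopping constant are illustrative assumptions and this is not the paper's specific scheme (Python):

      # Hedged sketch: Landweber iteration stopped by the discrepancy principle.
      import numpy as np

      def landweber(A, b, noise_level, omega=None, max_iter=10000, tau=1.1):
          if omega is None:
              omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2 / ||A||^2
          x = np.zeros(A.shape[1])
          for k in range(max_iter):
              r = A @ x - b
              if np.linalg.norm(r) <= tau * noise_level:  # stop once residual ~ noise
                  break
              x -= omega * A.T @ r
          return x, k

      rng = np.random.default_rng(0)
      A = np.vander(np.linspace(0.0, 1.0, 40), 12, increasing=True)  # ill-conditioned
      x_true = rng.standard_normal(12)
      b = A @ x_true + 1e-3 * rng.standard_normal(40)
      x_rec, iters = landweber(A, b, noise_level=1e-3 * np.sqrt(40))
      print("iterations:", iters, " reconstruction error:", np.linalg.norm(x_rec - x_true))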

  14. On Regularity Criteria for the Two-Dimensional Generalized Liquid Crystal Model

    Directory of Open Access Journals (Sweden)

    Yanan Wang

    2014-01-01

    We establish the regularity criteria for the two-dimensional generalized liquid crystal model. It turns out that the global existence results satisfy our regularity criteria naturally.

  15. The zeroth law of thermodynamics and volume-preserving conservative system in equilibrium with stochastic damping

    International Nuclear Information System (INIS)

    Qian, Hong

    2014-01-01

    We propose a mathematical formulation of the zeroth law of thermodynamics and develop a stochastic dynamical theory, with a consistent irreversible thermodynamics, for systems possessing a sustained conservative stationary current in phase space while in equilibrium with a heat bath. The theory generalizes underdamped mechanical equilibrium: dx = g dt + {−D∇φ dt + √(2D) dB(t)}, with ∇·g = 0 and the braced terms representing, respectively, phase-volume-preserving dynamics and stochastic damping. The zeroth law implies the stationary distribution u^ss(x) = e^{−φ(x)}. We find the orthogonality ∇φ·g = 0 as a hallmark of the system. Stochastic thermodynamics based on the time reversal (t, φ, g) → (−t, φ, −g) is formulated: the entropy production is e_p^#(t) = −dF(t)/dt; the generalized "heat" is h_d^#(t) = −dU(t)/dt, where U(t) = ∫_{R^n} φ(x) u(x,t) dx is the "internal energy", and the "free energy" F(t) = U(t) + ∫_{R^n} u(x,t) ln u(x,t) dx never increases. Entropy follows dS/dt = e_p^# − h_d^#. Our formulation is shown to be consistent with an earlier theory of P. Ao. Its contradistinctions to other theories (the potential-flux decomposition, stochastic Hamiltonian systems with even and odd variables, the Klein–Kramers equation, Freidlin–Wentzell theory, and GENERIC) are discussed.
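
    A minimal simulation of the class of dynamics described above, assuming the simplest concrete choice φ = |x|^2/2 with a divergence-free circulation g orthogonal to ∇φ; the stationary density should then be the Gaussian e^{−φ}, i.e. unit variance per coordinate (Python):

      # Hedged sketch: Euler-Maruyama integration of
      #   dx = g dt - D*grad(phi) dt + sqrt(2D) dB(t),  div(g) = 0,  grad(phi).g = 0,
      # with the illustrative choices phi = |x|^2/2 and g = omega*(-y, x).
      import numpy as np

      rng = np.random.default_rng(0)
      D, omega, dt, n_steps = 1.0, 2.0, 1e-3, 500_000
      x = np.zeros(2)
      samples = np.empty((n_steps, 2))
      for k in range(n_steps):
          g = omega * np.array([-x[1], x[0]])   # volume-preserving circulation
          drift = g - D * x                     # -D * grad(phi) for phi = |x|^2 / 2
          x = x + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(2)
          samples[k] = x

      print("sample variances (should be near 1):", samples[100_000:].var(axis=0))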

  16. Position difference regularity of corresponding R-wave peaks for maternal ECG components from different abdominal points

    International Nuclear Information System (INIS)

    Zhang Jie-Min; Liu Hong-Xing; Huang Xiao-Lin; Si Jun-Feng; Guan Qun; Tang Li-Ming; Liu Tie-Bing

    2014-01-01

    We collected 343 groups of abdominal electrocardiogram (ECG) data from 78 pregnant women and deleted the channels from which experts could not determine R-wave peaks; then, based on these filtered data, the statistics of the position differences of corresponding R-wave peaks for maternal ECG components from different abdominal points were studied. The resulting statistics show the regularity that the position difference of corresponding maternal R-wave peaks between different abdominal points does not exceed 30 ms. The regularity was also verified using the fECG data from the MIT-BIH PhysioBank. Additionally, the paper applies the obtained regularity, the range of position differences of the corresponding maternal R-wave peaks, to accomplish the automatic detection of maternal R-wave peaks in all 343 initially recorded groups of abdominal signals, including the ones with the largest fetal ECG components, and in all 55 groups of ECG data from the MIT-BIH PhysioBank, achieving successful separation of the maternal ECGs. (interdisciplinary physics and related areas of science and technology)
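
    A minimal sketch of applying the 30 ms rule described above to R-peak positions detected in two abdominal channels; the sampling rate and peak indices are illustrative assumptions, not data from the study (Python):

      # Hedged sketch: check whether corresponding maternal R-peaks from two channels
      # differ by no more than 30 ms. All numbers are illustrative.
      import numpy as np

      fs = 1000  # Hz, assumed sampling rate
      peaks_ch1 = np.array([412, 1210, 2015, 2818])  # sample indices, channel 1
      peaks_ch2 = np.array([420, 1222, 2003, 2830])  # sample indices, channel 2

      diff_ms = np.abs(peaks_ch1 - peaks_ch2) * 1000.0 / fs
      print("pairwise differences (ms):", diff_ms)
      print("all within 30 ms:", bool(np.all(diff_ms <= 30.0)))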

  17. Cognitive Aspects of Regularity Exhibit When Neighborhood Disappears

    Science.gov (United States)

    Chen, Sau-Chin; Hu, Jon-Fan

    2015-01-01

    Although regularity refers to the compatibility between the pronunciation of a character and the sound of its phonetic component, it has been suggested to be part of consistency, which is defined by neighborhood characteristics. Two experiments demonstrate how the regularity effect is amplified or reduced by neighborhood characteristics and reveal the…

  18. Regularization independent analysis of the origin of two loop contributions to N=1 Super Yang-Mills beta function

    Energy Technology Data Exchange (ETDEWEB)

    Fargnoli, H.G.; Sampaio, Marcos; Nemes, M.C. [Federal University of Minas Gerais, ICEx, Physics Department, P.O. Box 702, Belo Horizonte, MG (Brazil); Hiller, B. [Coimbra University, Faculty of Science and Technology, Physics Department, Center of Computational Physics, Coimbra (Portugal); Baeta Scarpelli, A.P. [Setor Tecnico-Cientifico, Departamento de Policia Federal, Lapa, Sao Paulo (Brazil)

    2011-05-15

    We present both an ultraviolet and an infrared regularization independent analysis in a symmetry preserving framework for the N=1 Super Yang-Mills beta function to two loop order. We show explicitly that off-shell infrared divergences as well as the overall two loop ultraviolet divergence cancel out, whilst the beta function receives contributions of infrared modes. (orig.)

  19. Regularization independent analysis of the origin of two loop contributions to N=1 Super Yang-Mills beta function

    International Nuclear Information System (INIS)

    Fargnoli, H.G.; Sampaio, Marcos; Nemes, M.C.; Hiller, B.; Baeta Scarpelli, A.P.

    2011-01-01

    We present both an ultraviolet and an infrared regularization independent analysis in a symmetry preserving framework for the N=1 Super Yang-Mills beta function to two loop order. We show explicitly that off-shell infrared divergences as well as the overall two loop ultraviolet divergence cancel out, whilst the beta function receives contributions of infrared modes. (orig.)

  20. Two component plasma vortex approach to fusion

    International Nuclear Information System (INIS)

    Ikuta, Kazunari.

    1978-09-01

    Two component operation of the field reversed theta pinch plasma by injection of the energetic ion beam with energy of the order of 1 MeV is considered. A possible trapping scheme of the ion beam in the plasma is discussed in detail. (author)

  1. How calibration and reference spectra affect the accuracy of absolute soft X-ray solar irradiance measured by the SDO/EVE/ESP during high solar activity

    Science.gov (United States)

    Didkovsky, Leonid; Wieman, Seth; Woods, Thomas

    2016-10-01

    The Extreme ultraviolet Spectrophotometer (ESP), one of the channels of SDO's Extreme ultraviolet Variability Experiment (EVE), measures solar irradiance in several EUV and soft x-ray (SXR) bands isolated using thin-film filters and a transmission diffraction grating, and includes a quad-diode detector positioned at the grating zeroth order to observe in a wavelength band from about 0.1 to 7.0 nm. The quad-diode signal also includes some contribution from shorter wavelengths in the grating's first order, and the ratio of zeroth-order to first-order signal depends on both source geometry and spectral distribution. For example, radiometric calibration of the ESP zeroth order at the NIST SURF BL-2 with a near-parallel beam provides a different zeroth-to-first-order ratio than modeled for solar observations. The relative influence of "uncalibrated" first-order irradiance during solar observations is a function of the solar spectral irradiance and the locations of large active regions or solar flares. We discuss how the "uncalibrated" first-order "solar" component and the use of variable solar reference spectra affect the determination of absolute SXR irradiance, which currently may be significantly overestimated during high solar activity.

  2. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2012-01-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  3. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva

    2012-09-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
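
    A rough sketch of the kind of spatiotemporal two-way criterion described above: a data-fit term plus an L1 (focality) penalty over space and a roughness (smoothness) penalty over time on the source matrix. The lead-field, data and penalty weights are synthetic assumptions, and only the objective is evaluated, not the authors' two-stage algorithm (Python):

      # Hedged sketch: two-way regularized objective for an MEG-like inverse problem.
      import numpy as np

      def twr_objective(S, L, Y, lam_space=1.0, lam_time=1.0):
          """Data fit + spatial sparsity of sources + temporal roughness of time courses."""
          fit = 0.5 * np.linalg.norm(Y - L @ S, 'fro') ** 2
          sparsity = lam_space * np.sum(np.abs(S))
          roughness = lam_time * np.sum(np.diff(S, axis=1) ** 2)
          return fit + sparsity + roughness

      rng = np.random.default_rng(1)
      L_mat = rng.standard_normal((64, 500))    # sensors x candidate source locations
      S = rng.standard_normal((500, 100))       # sources x time points
      Y = L_mat @ S + 0.1 * rng.standard_normal((64, 100))
      print("objective value:", twr_objective(S, L_mat, Y))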

  4. The structure of a lipid-water lamellar phase containing two types of lipid monolayers

    International Nuclear Information System (INIS)

    Ranck, J.L.; Luzzati, V.; Zaccai, G.

    1980-01-01

    One lamellar phase, observed in the mitochondrial lipids-water system at low temperature (ca 253 K) and at low water content (ca 15%), contains four lipid monolayers in its unit cell, two of type α and two of type β. Previous X-ray scattering studies of this phase led to an ambiguity: the phase could contain either two homogeneous bilayers, one α and one β, or two mixed bilayers, each formed by an α and a β monolayer. A solution to this problem was sought in a neutron scattering study as a function of the D 2 O/H 2 O ratio. Because of limited resolution, straightforward analysis of the neutron scattering data leads also to ambiguous results. Using a more sophisticated analysis based upon the zeroth- and second-order moments of the Patterson peaks relevant to the exchangeable components, it is shown that the weight of the evidence is in favour of a structure containing mixed bilayers. (Auth.)

  5. Ab-initio ZORA calculations

    NARCIS (Netherlands)

    Faas, S.; Snijders, Jaap; van Lenthe, J.H.; HernandezLaguna, A; Maruani, J; McWeeny, R; Wilson, S

    2000-01-01

    In this paper we present the first application of the ZORA (Zeroth Order Regular Approximation of the Dirac Fock equation) formalism in Ab Initio electronic structure calculations. The ZORA method, which has been tested previously in the context of Density Functional Theory, has been implemented in

  6. Lowest-order corrections to the RPA polarizability and GW self-energy of a semiconducting wire

    NARCIS (Netherlands)

    Groot, de H.J.; Ummels, R.T.M.; Bobbert, P.A.; van Haeringen, W.

    1996-01-01

    We present the results of the addition of lowest-order vertex and self-consistency corrections to the RPA polarizability and the GW self-energy for a semiconducting wire. It is found that, when starting from a local density approximation zeroth-order Green function and systematically including these

  7. Graph theoretical ordering of structures as a basis for systematic searches for regularities in molecular data

    International Nuclear Information System (INIS)

    Randic, M.; Wilkins, C.L.

    1979-01-01

    Selected molecular data on alkanes have been reexamined in a search for general regularities in isomeric variations. In contrast to the prevailing approaches concerned with fitting data by searching for optimal parameterization, the present work is primarily aimed at establishing trends, i.e., searching for relative magnitudes and their regularities among the isomers. Such an approach is complementary to curve fitting or correlation seeking procedures. It is particularly useful when there are incomplete data which allow trends to be recognized but no quantitative correlation to be established. One proceeds by first ordering structures. One way is to consider molecular graphs and enumerate paths of different length as the basic graph invariant. It can be shown that, for several thermodynamic molecular properties, the numbers of paths of length two (p2) and length three (p3) are critical. Hence, an ordering based on p2 and p3 indicates possible trends and behavior for many molecular properties, some of which relate to others, some which do not. By considering a grid graph derived by attributing to each isomer coordinates (p2, p3) and connecting points along the coordinate axes, one obtains a simple presentation useful for isomer structural interrelations. This skeletal frame is one upon which possible trends for different molecular properties may be conveniently represented. The significance of the results and their conceptual value is discussed. 16 figures, 3 tables
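
    A small sketch of the path counts used above, computing p2 and p3 for an alkane carbon skeleton with networkx; the example molecule (2-methylbutane) is an illustrative choice (Python):

      # Hedged sketch: count simple paths with 2 and 3 edges (p2, p3) in a carbon skeleton.
      from itertools import combinations
      import networkx as nx

      G = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])  # 2-methylbutane carbon skeleton

      def count_paths(graph, length):
          """Number of simple paths containing `length` edges (unordered endpoints)."""
          total = 0
          for u, v in combinations(graph.nodes, 2):
              total += sum(1 for p in nx.all_simple_paths(graph, u, v, cutoff=length)
                           if len(p) == length + 1)
          return total

      print("p2 =", count_paths(G, 2), " p3 =", count_paths(G, 3))  # expected 4 and 2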

  8. Efficient implementation of one- and two-component analytical energy gradients in exact two-component theory

    Science.gov (United States)

    Franzke, Yannick J.; Middendorf, Nils; Weigend, Florian

    2018-03-01

    We present an efficient algorithm for one- and two-component analytical energy gradients with respect to nuclear displacements in the exact two-component decoupling approach to the one-electron Dirac equation (X2C). Our approach is a generalization of the spin-free ansatz by Cheng and Gauss [J. Chem. Phys. 135, 084114 (2011)], where the perturbed one-electron Hamiltonian is calculated by solving a first-order response equation. Computational costs are drastically reduced by applying the diagonal local approximation to the unitary decoupling transformation (DLU) [D. Peng and M. Reiher, J. Chem. Phys. 136, 244108 (2012)] to the X2C Hamiltonian. The introduced error is found to be almost negligible as the mean absolute error of the optimized structures amounts to only 0.01 pm. Our implementation in TURBOMOLE is also available within the finite nucleus model based on a Gaussian charge distribution. For a X2C/DLU gradient calculation, computational effort scales cubically with the molecular size, while storage increases quadratically. The efficiency is demonstrated in calculations of large silver clusters and organometallic iridium complexes.

  9. Generating Models of Infinite-State Communication Protocols Using Regular Inference with Abstraction

    Science.gov (United States)

    Aarts, Fides; Jonsson, Bengt; Uijen, Johan

    In order to facilitate model-based verification and validation, effort is underway to develop techniques for generating models of communication system components from observations of their external behavior. Most previous such work has employed regular inference techniques which generate modest-size finite-state models. They typically suppress parameters of messages, although these have a significant impact on control flow in many communication protocols. We present a framework, which adapts regular inference to include data parameters in messages and states for generating components with large or infinite message alphabets. A main idea is to adapt the framework of predicate abstraction, successfully used in formal verification. Since we are in a black-box setting, the abstraction must be supplied externally, using information about how the component manages data parameters. We have implemented our techniques by connecting the LearnLib tool for regular inference with the protocol simulator ns-2, and generated a model of the SIP component as implemented in ns-2.

  10. Nuclear structure and order-to-chaos transition

    International Nuclear Information System (INIS)

    Solov'ev, V.G.

    1995-01-01

    A general scheme of the nuclear many-body problem is presented. Different models for the description of low-lying states and giant resonances are discussed. The wave functions of the low-lying states have a single dominating one-quasiparticle, quasiparticle⊗phonon, or one-phonon component. They demonstrate the regularity in nuclei. Giant resonances are determined by strongly fragmented one-phonon components of the wave functions. The wave functions at higher excitation energies have two-, three- and many-phonon components. Based on the statement that there is order in the large and chaos in the small components of the nuclear wave functions, the order-to-chaos transition is treated as a transition from the large to the small components of the wave functions. A quasiparticle-phonon interaction is responsible for the fragmentation of one- and many-quasiparticle and phonon states and for the mixing of closely spaced states. Therefore, experimental investigation of the fragmentation of many-quasiparticle and phonon states plays a decisive role. 30 refs

  11. Replenishment policy for Entropic Order Quantity (EnOQ model with two component demand and partial back-logging under inflation

    Directory of Open Access Journals (Sweden)

    Bhanupriya Dash

    2017-09-01

    Background: The replenishment policy for an entropic order quantity model with two-component demand and partial backlogging under inflation is an important subject in stock management. Methods: In this paper, an inventory model is developed for non-instantaneously deteriorating items with a stock-dependent consumption rate and partial backlogging, taking into account the effects of inflation and the time value of money on the replenishment policy with zero lead time. A profit-maximization model is formulated by considering the effects of partial backlogging under inflation with cash discounts. Numerical examples are presented to compare the relative performance of the entropic order quantity and EOQ models, to demonstrate the developed model, and to illustrate the procedure; Lingo 13.0 software is used to derive the optimal order quantity and the total inventory cost. Finally, a sensitivity analysis of the optimal solution with respect to different parameters of the system is carried out. Results and conclusions: The obtained inventory model is very useful in retail business and can be extended to total backordering.
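
    For scale, the classical EOQ benchmark against which entropic order quantity models are usually compared can be computed in a few lines; the demand rate, ordering cost and holding cost below are illustrative assumptions and this is not the paper's EnOQ formulation (Python):

      # Hedged sketch: classical economic order quantity, Q* = sqrt(2 D K / h).
      import math

      def eoq(demand_rate, order_cost, holding_cost):
          return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

      print("Q* =", eoq(demand_rate=1200.0, order_cost=50.0, holding_cost=2.5))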

  12. Two-way regularization for MEG source reconstruction via multilevel coordinate descent

    KAUST Repository

    Siva Tian, Tian

    2013-12-01

    Magnetoencephalography (MEG) source reconstruction refers to the inverse problem of recovering the neural activity from the MEG time course measurements. A spatiotemporal two-way regularization (TWR) method was recently proposed by Tian et al. to solve this inverse problem and was shown to outperform several one-way regularization methods and spatiotemporal methods. This TWR method is a two-stage procedure that first obtains a raw estimate of the source signals and then refines the raw estimate to ensure spatial focality and temporal smoothness using spatiotemporal regularized matrix decomposition. Although proven to be effective, the performance of two-stage TWR depends on the quality of the raw estimate. In this paper we directly solve the MEG source reconstruction problem using a multivariate penalized regression where the number of variables is much larger than the number of cases. A special feature of this regression is that the regression coefficient matrix has a spatiotemporal two-way structure that naturally invites a two-way penalty. Making use of this structure, we develop a computationally efficient multilevel coordinate descent algorithm to implement the method. This new one-stage TWR method has shown its superiority to the two-stage TWR method in three simulation studies with different levels of complexity and a real-world MEG data analysis. © 2013 Wiley Periodicals, Inc., A Wiley Company.

  13. Brazilian two-component TLD albedo neutron individual monitoring system

    Energy Technology Data Exchange (ETDEWEB)

    Martins, M.M., E-mail: marcelo@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD), Av. Salvador Allende, s/n, CEP: 22780-160, Rio de Janeiro, RJ (Brazil); Mauricio, C.L.P., E-mail: claudia@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD), Av. Salvador Allende, s/n, CEP: 22780-160, Rio de Janeiro, RJ (Brazil); Fonseca, E.S. da, E-mail: evaldo@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD), Av. Salvador Allende, s/n, CEP: 22780-160, Rio de Janeiro, RJ (Brazil); Silva, A.X. da, E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao em Engenharia, COPPE/PEN Caixa Postal 68509, CEP: 21941-972, Rio de Janeiro, RJ (Brazil)

    2010-12-15

    Since 1983, the Instituto de Radioprotecao e Dosimetria, Brazil, has used a one-component TLD albedo neutron monitor, which has a different calibration factor for each installation type. In order to improve its energy response, a two-component albedo monitor was developed, which measures the thermal neutron component in addition to the albedo component. The two-component monitor has been calibrated in reference neutron fields: thermal, five accelerator-produced monoenergetic beams (70, 144, 565, 1200 and 5000 keV) and five radionuclide sources (252Cf, 252Cf(D2O), 241Am-Be, 241Am-B and 238Pu-Be) at several distances. Since January 2008, it has been used routinely, mainly by Brazilian workers who handle neutron sources at different distances and moderations, such as in well logging and calibration facilities.

  14. DEVELOPMENT OF INNOVATION MANAGEMENT THEORY BASED ON SYSTEM-WIDE REGULARITIES

    Directory of Open Access Journals (Sweden)

    Violetta N. Volkova

    2013-01-01

    The problem of understanding innovation management theory and of developing it on the basis of systems theory is posed. The authors consider the features of managing socio-economic systems as open, self-organising systems with active components and give a classification of the system regularities that illustrate these features. The need to take into account the regularities of emergence, hierarchical order, equifinality, Ashby's law of requisite variety, historicity and self-organization is shown.

  15. Spin-orbit ZORA and four-component Dirac-Coulomb estimation of relativistic corrections to isotropic nuclear shieldings and chemical shifts of noble gas dimers.

    Science.gov (United States)

    Jankowska, Marzena; Kupka, Teobald; Stobiński, Leszek; Faber, Rasmus; Lacerda, Evanildo G; Sauer, Stephan P A

    2016-02-05

    Hartree-Fock and density functional theory with the hybrid B3LYP and general gradient KT2 exchange-correlation functionals were used for nonrelativistic and relativistic nuclear magnetic shielding calculations of helium, neon, argon, krypton, and xenon dimers and free atoms. Relativistic corrections were calculated with the scalar and spin-orbit zeroth-order regular approximation Hamiltonian in combination with the large Slater-type basis set QZ4P as well as with the four-component Dirac-Coulomb Hamiltonian using Dyall's acv4z basis sets. The relativistic corrections to the nuclear magnetic shieldings and chemical shifts are combined with nonrelativistic coupled cluster singles and doubles with noniterative triple excitations [CCSD(T)] calculations using the very large polarization-consistent basis sets aug-pcSseg-4 for He, Ne and Ar, aug-pcSseg-3 for Kr, and the AQZP basis set for Xe. For the dimers, zero-point vibrational (ZPV) corrections obtained at the CCSD(T) level with the same basis sets were also added. Best estimates of the dimer chemical shifts are generated from these nuclear magnetic shieldings and the relative importance of electron correlation, ZPV, and relativistic corrections for the shieldings and chemical shifts is analyzed. © 2015 Wiley Periodicals, Inc.

  16. Geometrical bucklings for two-dimensional regular polygonal regions using the finite Fourier transformation

    International Nuclear Information System (INIS)

    Mori, N.; Kobayashi, K.

    1996-01-01

    A two-dimensional neutron diffusion equation is solved for regular polygonal regions by the finite Fourier transformation, and geometrical bucklings are calculated for regular 3-10 polygonal regions. In the case of the regular triangular region, it is found that a simple and rigorous analytic solution is obtained for the geometrical buckling and the distribution of the neutron current along the outer boundary. (author)

  17. Higher-Order Components for Grid Programming

    CERN Document Server

    Dünnweber, Jan

    2009-01-01

    Higher-Order Components were developed within the CoreGRID European Network of Excellence and have become an optional extension of the popular Globus middleware. This book provides the reader with hands-on experience, describing a collection of example applications from various fields of science and engineering, including biology and physics.

  18. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  19. Chemical evolution of two-component galaxies. II

    International Nuclear Information System (INIS)

    Caimmi, R.

    1978-01-01

    In order to confirm and refine the results obtained in a previous paper, the chemical evolution of two-component (spheroid + disk) galaxies is derived by means of numerical computations, rejecting the instantaneous recycling approximation and accounting for (i) the collapse phase of the gas, assumed to be uniform in density and composition, and (ii) a stellar birth-rate function. Computations are performed relative to the solar neighbourhood and to model galaxies which closely resemble the real morphological sequence: in both cases, numerical results are compared with analytical ones. The numerical models of this paper constitute a first-order approximation, while higher order approximations could be made by rejecting the hypothesis of uniform density and composition and making use of detailed dynamical models. (Auth.)

  20. Light-front QCD. II. Two-component theory

    International Nuclear Information System (INIS)

    Zhang, W.; Harindranath, A.

    1993-01-01

    The light-front gauge A a + =0 is known to be a convenient gauge in practical QCD calculations for short-distance behavior, but there are persistent concerns about its use because of its ''singular'' nature. The study of nonperturbative field theory quantizing on a light-front plane for hadronic bound states requires one to gain a priori systematic control of such gauge singularities. In the second paper of this series we study the two-component old-fashioned perturbation theory and various severe infrared divergences occurring in old-fashioned light-front Hamiltonian calculations for QCD. We also analyze the ultraviolet divergences associated with a large transverse momentum and examine three currently used regulators: an explicit transverse cutoff, transverse dimensional regularization, and a global cutoff. We discuss possible difficulties caused by the light-front gauge singularity in the applications of light-front QCD to both old-fashioned perturbative calculations for short-distance physics and upcoming nonperturbative investigations for hadronic bound states

  1. Long range order and giant components of quantum random graphs

    CERN Document Server

    Ioffe, D

    2006-01-01

    Mean field quantum random graphs give a natural generalization of the classical Erdős–Rényi percolation model on the complete graph $G_N$ with $p = \beta/N$. The quantum case incorporates an additional parameter $\lambda \geq 0$, and the short-long range order transition should be studied in the $(\beta, \lambda)$ quarter-plane. In this work we explicitly compute the corresponding critical curve $\gamma_c$, and derive results on two-point functions and sizes of connected components in both the short and long range order regions. In this way the classical case corresponds to the limiting point $(\beta_c, 0) = (1, 0)$ on $\gamma_c$.

  2. Conformal symmetry and non-relativistic second-order fluid dynamics

    International Nuclear Information System (INIS)

    Chao Jingyi; Schäfer, Thomas

    2012-01-01

    We study the constraints imposed by conformal symmetry on the equations of fluid dynamics at second order in the gradients of the hydrodynamic variables. At zeroth order, conformal symmetry implies a constraint on the equation of state, E_0 = 2/3 P, where E_0 is the energy density and P is the pressure. At first order, conformal symmetry implies that the bulk viscosity must vanish. We show that at second order, conformal invariance requires that two-derivative terms in the stress tensor must be traceless, and that it determines the relaxation of dissipative stresses to the Navier–Stokes form. We verify these results by solving the Boltzmann equation at second order in the gradient expansion. We find that only a subset of the terms allowed by conformal symmetry appear. - Highlights: ► We derive conformal constraints for the stress tensor of a scale invariant fluid. ► We determine the relaxation time in kinetic theory. ► We compute the rate of entropy production in second-order fluid dynamics.

  3. Regularity criterion for solutions to the Navier Stokes equations in the whole 3D space based on two vorticity components

    Czech Academy of Sciences Publication Activity Database

    Guo, Z.; Kučera, P.; Skalák, Zdeněk

    2018-01-01

    Vol. 458, No. 1 (2018), pp. 755-766, ISSN 0022-247X. R&D Projects: GA ČR GA13-00522S. Institutional support: RVO:67985874. Keywords: Navier-Stokes equations * conditional regularity * regularity criteria * vorticity * Besov spaces * Bony decomposition. Subject RIV: BA - General Mathematics. OBOR OECD: Fluids and plasma physics (including surface physics). Impact factor: 1.064, year: 2016

  4. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method compared with regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.

  5. Higher order methods for burnup calculations with Bateman solutions

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Aarnio, P.A.

    2011-01-01

    Highlights: → Average microscopic reaction rates need to be estimated at each step. → Traditional predictor-corrector methods use zeroth and first order predictions. → Increasing predictor order greatly improves results. → Increasing corrector order does not improve results. - Abstract: A group of methods for burnup calculations solves the changes in material compositions by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates. This requires predicting representative averages for the one-group cross-sections and flux during each step, which is usually done using zeroth and first order predictions for their time development in a predictor-corrector calculation. In this paper we present the results of using linear, rather than constant, extrapolation on the predictor and quadratic, rather than linear, interpolation on the corrector. Both of these are done by using data from the previous step, and thus do not affect the stepwise running time. The methods were tested by implementing them into the reactor physics code Serpent and comparing the results from four test cases to accurate reference results obtained with very short steps. Linear extrapolation greatly improved results for thermal spectra and should be preferred over the constant one currently used in all Bateman solution based burnup calculations. The effects of using quadratic interpolation on the corrector were, on the other hand, predominantly negative, although not enough so to conclusively decide between the linear and quadratic variants.
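
    The difference between the constant (zeroth-order) predictor and the linear extrapolation discussed above can be illustrated on a single one-group cross-section; the cross-section values and step lengths below are illustrative assumptions (Python):

      # Hedged sketch: constant vs. linearly extrapolated predictor for the
      # representative cross-section used over the next burnup step.
      def constant_predictor(sigma_now):
          return sigma_now

      def linear_predictor(sigma_prev, sigma_now, dt_prev, dt_now):
          """Extrapolate to the midpoint of the coming step using the previous step's slope."""
          slope = (sigma_now - sigma_prev) / dt_prev
          return sigma_now + slope * 0.5 * dt_now

      sigma_prev, sigma_now = 52.0, 50.0   # barns at the two previous step boundaries
      dt_prev, dt_now = 30.0, 30.0         # days
      print("constant predictor:", constant_predictor(sigma_now))
      print("linear predictor:  ", linear_predictor(sigma_prev, sigma_now, dt_prev, dt_now))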

  6. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  7. Two component systems: physiological effect of a third component.

    Directory of Open Access Journals (Sweden)

    Baldiri Salvado

    Signal transduction systems mediate the response and adaptation of organisms to environmental changes. In prokaryotes, this signal transduction is often done through Two Component Systems (TCS). These TCS are phosphotransfer protein cascades, and in their prototypical form they are composed of a kinase that senses the environmental signals (SK) and a response regulator (RR) that regulates the cellular response. This basic motif can be modified by the addition of a third protein that interacts either with the SK or the RR in a way that could change the dynamic response of the TCS module. In this work we aim at understanding the effect of such an additional protein (which we call the "third component") on the functional properties of a prototypical TCS. To do so we build mathematical models of TCS with alternative designs for their interaction with that third component. These mathematical models are analyzed in order to identify the differences in dynamic behavior inherent to each design, with respect to functionally relevant properties such as sensitivity to changes in either the parameter values or the molecular concentrations, temporal responsiveness, possibility of multiple steady states, or stochastic fluctuations in the system. The differences are then correlated to the physiological requirements that impinge on the functioning of the TCS. This analysis sheds light on both the dynamic behavior of synthetically designed TCS and the conditions under which natural selection might favor each of the designs. We find that a third component that modulates SK activity increases the parameter space where a bistable response of the TCS module to signals is possible if SK is monofunctional, but decreases it when the SK is bifunctional. The presence of a third component that modulates RR activity decreases the parameter space where a bistable response of the TCS module to signals is possible.
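
    A minimal mass-action sketch of the prototypical TCS described above (sensor kinase SK plus response regulator RR, without the third component); rate constants, total concentrations and the signal level are illustrative assumptions (Python):

      # Hedged sketch: prototypical two-component system as mass-action ODEs.
      import numpy as np
      from scipy.integrate import solve_ivp

      SK_TOT, RR_TOT = 1.0, 5.0  # assumed total concentrations (arbitrary units)

      def tcs(t, y, signal=1.0, k_auto=1.0, k_transfer=5.0, k_dephos=0.5):
          skp, rrp = y
          sk, rr = SK_TOT - skp, RR_TOT - rrp
          d_skp = k_auto * signal * sk - k_transfer * skp * rr    # autophosphorylation - transfer
          d_rrp = k_transfer * skp * rr - k_dephos * rrp          # transfer - dephosphorylation
          return [d_skp, d_rrp]

      sol = solve_ivp(tcs, (0.0, 50.0), [0.0, 0.0])
      print("steady-state phosphorylated RR fraction:", sol.y[1, -1] / RR_TOT)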

  8. Shape and Symmetry Determine Two-Dimensional Melting Transitions of Hard Regular Polygons

    Directory of Open Access Journals (Sweden)

    Joshua A. Anderson

    2017-04-01

    The melting transition of two-dimensional systems is a fundamental problem in condensed matter and statistical physics that has advanced significantly through the application of computational resources and algorithms. Two-dimensional systems present the opportunity for novel phases and phase transition scenarios not observed in 3D systems, but these phases depend sensitively on the system and, thus, predicting how any given 2D system will behave remains a challenge. Here, we report a comprehensive simulation study of the phase behavior near the melting transition of all hard regular polygons with 3≤n≤14 vertices using massively parallel Monte Carlo simulations of up to 1×10^6 particles. By investigating this family of shapes, we show that the melting transition depends upon both particle shape and symmetry considerations, which together can predict which of three different melting scenarios will occur for a given n. We show that systems of polygons with as few as seven edges behave like hard disks; they melt continuously from a solid to a hexatic fluid and then undergo a first-order transition from the hexatic phase to the isotropic fluid phase. We show that this behavior, which holds for all 7≤n≤14, arises from weak entropic forces among the particles. Strong directional entropic forces align polygons with fewer than seven edges and impose local order in the fluid. These forces can enhance or suppress the discontinuous character of the transition depending on whether the local order in the fluid is compatible with the local order in the solid. As a result, systems of triangles, squares, and hexagons exhibit a Kosterlitz-Thouless-Halperin-Nelson-Young (KTHNY) predicted continuous transition between isotropic fluid and triatic, tetratic, and hexatic phases, respectively, and a continuous transition from the appropriate x-atic to the solid. In particular, we find that systems of hexagons display continuous two-step KTHNY melting. In

  9. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2013-01-01

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously

  10. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  11. Three-particle correlation functions of quasi-two-dimensional one-component and binary colloid suspensions.

    Science.gov (United States)

    Ho, Hau My; Lin, Binhua; Rice, Stuart A

    2006-11-14

    We report the results of experimental determinations of the triplet correlation functions of quasi-two-dimensional one-component and binary colloid suspensions in which the colloid-colloid interaction is short ranged. The suspensions studied range in density from modestly dilute to solid. The triplet correlation function of the one-component colloid system reveals extensive ordering deep in the liquid phase. At the same density the ordering of the larger diameter component in a binary colloid system is greatly diminished by a very small amount of the smaller diameter component. The possible utilization of information contained in the triplet correlation function in the theory of melting of a quasi-two-dimensional system is briefly discussed.

  12. REGULARIZED FUNCTIONAL PRINCIPAL COMPONENT ANALYSIS AND AN APPLICATION ON THE SHARE PRICES OF THE COMPANIES BELONGING TO THE ISE-30 INDEX

    Directory of Open Access Journals (Sweden)

    İSTEM KÖYMEN KESER

    2013-06-01

    The objective of Functional Data Analysis techniques is to study data consisting of observed functions or curves evaluated at a finite subset of some real interval. Techniques in Functional Data Analysis can be used to study the variation in a random sample of real functions, x_i(t), i = 1, 2, …, N, and their derivatives. In practice, these functions are often the result of a preliminary smoothing process applied to discrete data, and in this work spline smoothing methods are used. As the number of functions and the number of observation points increases, it becomes difficult to handle the functions all together. In order to overcome this complexity, we utilize Functional and Regularized Functional Principal Component Analyses, where a high percentage of the total variation can be accounted for with only a few component functions. Finally, an application on the daily closing data for the share prices of the companies belonging to the ISE-30 index is also given.
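
    A simplified stand-in for the procedure described above: spline-smooth a sample of discretely observed curves and apply an ordinary (unregularized) principal component analysis to the smoothed evaluations; all curves here are synthetic assumptions rather than ISE-30 share prices (Python):

      # Hedged sketch: spline smoothing of sampled curves followed by plain PCA.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 1.0, 60)
      raw = np.array([np.sin(2 * np.pi * t) * rng.normal(1.0, 0.3)
                      + 0.1 * rng.standard_normal(t.size) for _ in range(30)])

      smooth = np.array([UnivariateSpline(t, x, s=0.5)(t) for x in raw])  # smoothed x_i(t)
      centered = smooth - smooth.mean(axis=0)
      _, s, _ = np.linalg.svd(centered, full_matrices=False)
      explained = s ** 2 / np.sum(s ** 2)
      print("variance explained by first two components:", explained[:2].sum())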

  13. Kinetic theory of two-temperature polyatomic plasmas

    Science.gov (United States)

    Orlac'h, Jean-Maxime; Giovangigli, Vincent; Novikova, Tatiana; Roca i Cabarrocas, Pere

    2018-03-01

    We investigate the kinetic theory of two-temperature plasmas for reactive polyatomic gas mixtures. The Knudsen number is taken proportional to the square root of the mass ratio between electrons and heavy-species, and thermal non-equilibrium between electrons and heavy species is allowed. The kinetic non-equilibrium framework also requires a weak coupling between electrons and internal energy modes of heavy species. The zeroth-order and first-order fluid equations are derived by using a generalized Chapman-Enskog method. Expressions for transport fluxes are obtained in terms of macroscopic variable gradients and the corresponding transport coefficients are expressed as bracket products of species perturbed distribution functions. The theory derived in this paper provides a consistent fluid model for non-thermal multicomponent plasmas.

  14. Mutual interaction between high and low stereo-regularity components for crystallization and melting behaviors of polypropylene blend fibers

    Science.gov (United States)

    Kawai, Kouya; Kohri, Youhei; Takarada, Wataru; Takebe, Tomoaki; Kanai, Toshitaka; Kikutani, Takeshi

    2016-03-01

    Crystallization and melting behaviors of blend fibers of two types of polypropylene (PP), i.e. high stereo-regularity/high molecular weight PP (HPP) and low stereo-regularity/low molecular weight PP (LPP), were investigated. Blend fibers consisting of various HPP/LPP compositions were prepared through the melt spinning process. Differential scanning calorimetry (DSC), temperature modulated DSC (TMDSC) and wide-angle X-ray diffraction (WAXD) analysis were applied for clarifying the crystallization and melting behaviors of individual components. In the DSC measurement of blend fibers with high LPP composition, continuous endothermic heat was detected between the melting peaks of LPP at around 40 °C and that of HPP at around 160 °C. Such endothermic heat was more distinct for the blend fibers with higher LPP composition, indicating that the melting of LPP in the heating process was hindered because of the presence of HPP crystals. On the other hand, heat of crystallization was detected at around 90 °C in the case of blend fibers with LPP content of 30 to 70 wt%, indicating that the crystallization of the HPP component was taking place during the heating of as-spun blend fibers in the DSC measurement. Through the TMDSC analysis, re-organization of the crystalline structure through simultaneous melting and re-crystallization was detected in the cases of HPP and blend fibers, whereas re-crystallization was not detected during the melting of LPP fibers. In the WAXD analysis during the heating of fibers, the amount of α-form crystal was almost constant up to melting in the case of single-component HPP fibers, whereas there was a distinct increase of the intensity of crystalline reflections from around 100 °C, right after the melting of LPP, in the case of blend fibers. These results suggested that the crystallization of HPP in the spinning process as well as during the conditioning process after spinning was hindered by the presence of LPP.

  15. Mutual interaction between high and low stereo-regularity components for crystallization and melting behaviors of polypropylene blend fibers

    International Nuclear Information System (INIS)

    Kawai, Kouya; Takarada, Wataru; Kikutani, Takeshi; Kohri, Youhei; Takebe, Tomoaki; Kanai, Toshitaka

    2016-01-01

    Crystallization and melting behaviors of blend fibers of two types of polypropylene (PP), i.e. high stereo-regularity/high molecular weight PP (HPP) and low stereo-regularity/low molecular weight PP (LPP), were investigated. Blend fibers consisting of various HPP/LPP compositions were prepared through the melt spinning process. Differential scanning calorimetry (DSC), temperature modulated DSC (TMDSC) and wide-angle X-ray diffraction (WAXD) analysis were applied for clarifying the crystallization and melting behaviors of individual components. In the DSC measurement of blend fibers with high LPP composition, continuous endothermic heat was detected between the melting peaks of LPP at around 40 °C and that of HPP at around 160 °C. Such endothermic heat was more distinct for the blend fibers with higher LPP composition, indicating that the melting of LPP in the heating process was hindered because of the presence of HPP crystals. On the other hand, heat of crystallization was detected at around 90 °C in the case of blend fibers with LPP content of 30 to 70 wt%, indicating that the crystallization of the HPP component was taking place during the heating of as-spun blend fibers in the DSC measurement. Through the TMDSC analysis, re-organization of the crystalline structure through simultaneous melting and re-crystallization was detected in the cases of HPP and blend fibers, whereas re-crystallization was not detected during the melting of LPP fibers. In the WAXD analysis during the heating of fibers, the amount of α-form crystal was almost constant up to melting in the case of single-component HPP fibers, whereas there was a distinct increase of the intensity of crystalline reflections from around 100 °C, right after the melting of LPP, in the case of blend fibers. These results suggested that the crystallization of HPP in the spinning process as well as during the conditioning process after spinning was hindered by the presence of LPP.

  16. Mutual interaction between high and low stereo-regularity components for crystallization and melting behaviors of polypropylene blend fibers

    Energy Technology Data Exchange (ETDEWEB)

    Kawai, Kouya; Takarada, Wataru; Kikutani, Takeshi, E-mail: kikutani.t.aa@m.titech.ac.jp [Department of Organic and Polymeric Materials, Graduate School of Science and Engineering, Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8552 (Japan); Kohri, Youhei; Takebe, Tomoaki [Performance Materials Laboratories, Idemitsu Kosan Co.,Ltd. (Japan); Kanai, Toshitaka [KT Polymer (Japan)

    2016-03-09

    Crystallization and melting behaviors of blend fibers of two types of polypropylene (PP), i.e. high stereo-regularity/high molecular weight PP (HPP) and low stereo-regularity/low molecular weight PP (LPP), were investigated. Blend fibers consisting of various HPP/LPP compositions were prepared through the melt spinning process. Differential scanning calorimetry (DSC), temperature modulated DSC (TMDSC) and wide-angle X-ray diffraction (WAXD) analysis were applied for clarifying the crystallization and melting behaviors of individual components. In the DSC measurement of blend fibers with high LPP composition, continuous endothermic heat was detected between the melting peaks of LPP at around 40 °C and that of HPP at around 160 °C. Such endothermic heat was more distinct for the blend fibers with higher LPP composition, indicating that the melting of LPP in the heating process was hindered because of the presence of HPP crystals. On the other hand, heat of crystallization was detected at around 90 °C in the case of blend fibers with LPP content of 30 to 70 wt%, indicating that the crystallization of the HPP component was taking place during the heating of as-spun blend fibers in the DSC measurement. Through the TMDSC analysis, re-organization of the crystalline structure through simultaneous melting and re-crystallization was detected in the cases of HPP and blend fibers, whereas re-crystallization was not detected during the melting of LPP fibers. In the WAXD analysis during the heating of fibers, the amount of α-form crystal was almost constant up to melting in the case of single-component HPP fibers, whereas there was a distinct increase of the intensity of crystalline reflections from around 100 °C, right after the melting of LPP, in the case of blend fibers. These results suggested that the crystallization of HPP in the spinning process as well as during the conditioning process after spinning was hindered by the presence of LPP.

  17. A Comparison of Flame Spread Characteristics over Solids in Concurrent Flow Using Two Different Pyrolysis Models

    Directory of Open Access Journals (Sweden)

    Ya-Ting Tseng

    2011-01-01

    Full Text Available Two solid pyrolysis models are employed in a concurrent-flow flame spread model to compare the flame structure and spreading characteristics. The first is a zeroth-order surface pyrolysis, and the second is a first-order in-depth pyrolysis. Comparisons are made for samples when the spread rate reaches a steady value and the flame reaches a constant length. The computed results show (1) the mass burning rate distributions at the solid surface are qualitatively different near the flame (pyrolysis) base region, (2) the first-order pyrolysis model shows that the propagating flame leaves unburnt solid fuel, and (3) the flame length and spread rate dependence on sample thickness are different for the two cases.
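
    To make the distinction between the two rate laws concrete, the following minimal sketch (not the paper's coupled flame-spread model, which also resolves the gas-phase flame) compares how solid fuel mass evolves under a zeroth-order surface pyrolysis law and a first-order in-depth pyrolysis law at a fixed surface temperature; all parameter values are illustrative assumptions.

    # Illustrative comparison of zeroth-order vs first-order pyrolysis mass loss.
    # Parameter values are assumptions, not taken from the paper.
    import numpy as np

    R = 8.314           # gas constant, J/(mol K)
    T = 650.0           # assumed constant surface temperature, K
    Ea = 1.3e5          # assumed activation energy, J/mol
    A0 = 2.0e7          # assumed pre-exponential factor, zeroth-order law, kg/(m^2 s)
    A1 = 5.0e9          # assumed pre-exponential factor, first-order law, 1/s
    m0 = 0.05           # assumed initial fuel mass per unit area, kg/m^2

    dt, t_end = 1e-3, 5.0
    t = np.arange(0.0, t_end, dt)

    # Zeroth-order: burning rate independent of the remaining fuel until it runs out.
    rate0 = A0 * np.exp(-Ea / (R * T))
    m_zeroth = np.clip(m0 - rate0 * t, 0.0, None)

    # First-order: burning rate proportional to the remaining fuel, so the mass decays
    # exponentially and some unburnt fuel is always left behind at finite times.
    k1 = A1 * np.exp(-Ea / (R * T))
    m_first = m0 * np.exp(-k1 * t)

    print(f"remaining fuel at t={t_end:.1f} s: zeroth-order {m_zeroth[-1]:.4f}, "
          f"first-order {m_first[-1]:.4f} kg/m^2")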

  18. Electronic and ionic ordering in condensed matter plasmas

    International Nuclear Information System (INIS)

    March, N.H.

    1981-01-01

    Recent progress in treating phase transitions induced by Coulomb interactions is reviewed. This is done by appealing to simple models, in particular to the one-component plasma and its quantum-mechanical counterpart, jellium. The relevance of the phase transition to a body-centred-cubic crystal in the classical one-component plasma for the freezing of the liquid metals Na and K is stressed. By generalizing these arguments to a two-component system, regularities in the freezing of the molten alkali halides become understandable. Sublattice disorder in superionics, driven by Coulomb forces, is then discussed. Finally, the ordering of electrons in jellium, in the limit of complete degeneracy, is considered: evidence is presented for the existence of electron liquids in molten Na and K. (author)

  19. Chaos regularization of quantum tunneling rates

    International Nuclear Information System (INIS)

    Pecora, Louis M.; Wu Dongho; Lee, Hoshik; Antonsen, Thomas; Lee, Ming-Jer; Ott, Edward

    2011-01-01

    Quantum tunneling rates through a barrier separating two-dimensional, symmetric, double-well potentials are shown to depend on the classical dynamics of the billiard trajectories in each well and, hence, on the shape of the wells. For shapes that lead to regular (integrable) classical dynamics the tunneling rates fluctuate greatly with eigenenergies of the states sometimes by over two orders of magnitude. Contrarily, shapes that lead to completely chaotic trajectories lead to tunneling rates whose fluctuations are greatly reduced, a phenomenon we call regularization of tunneling rates. We show that a random-plane-wave theory of tunneling accounts for the mean tunneling rates and the small fluctuation variances for the chaotic systems.

  20. The propagation of charged particles in a focussing magnetic field with random components

    International Nuclear Information System (INIS)

    Pauls, H.L.

    1993-01-01

    Boltzmann's equation, which describes the evolution of the particle distribution function in a focussing magnetic field with finite helicity, is solved by expanding the distribution function in terms of orthogonal focussing eigenfunctions. The present work advances upon previous work by carrying the expansion of the particle distribution function to a higher order (N=7 compared to N=3), by adding magnetic helicity, and by injecting a delta function instead of a Gaussian as the initial distribution. Results from this model compare very well with those from other known numerical models, provided that a time constraint, which is a direct consequence of the truncation of the eigenfunction expansion, is satisfied. This model, which gives the solution of Boltzmann's equation to a very high degree of accuracy, is used to evaluate the densities predicted by three lower-order (and hence easier to implement) models. Two of these models, where the anisotropic component of the distribution function is approximated to first and second order respectively, follow from the Born approximation technique, while the third follows from a truncated eigenfunction expansion of the particle distribution function. It is shown that the latter two models, which include the effect of the dispersion of the so-called coherent pulses, give a better description of the isotropic density than the model which ignores the effect. The main use of this dispersionless model is that it provides a zeroth-order approximation to the speed of the coherent pulses in the presence of helicity and focussing. When the dispersion in the pulse is small, its speed is shown to be predicted quite well by this simple model. (author). 67 refs

  1. Nonlinear mode coupling in rotating stars and the r-mode instability in neutron stars

    International Nuclear Information System (INIS)

    Schenk, A.K.; Arras, P.; Flanagan, E.E.; Teukolsky, S.A.; Wasserman, I.

    2002-01-01

    entirely and the coupling of two r modes to one hybrid, or r-g rotational, mode vanishes to zeroth order in rotation frequency. The coupling of any three rotational modes vanishes to zeroth order in compressibility and in Ω. In nonzero-buoyancy stars, coupling of the r modes to each other vanishes to zeroth order in Ω. Couplings to regular modes (those modes whose frequencies are finite in the limit Ω→0), such as f modes, are not zero, but since the natural frequencies of these modes are relatively large in the slow rotation limit compared to those of the r modes, energy transfer to those modes is not expected to be efficient

  2. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.
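
    A generic illustration of the regularization step that RLDA relies on (a sketch under the assumption of flattened 2-D data; not the authors' BLDA+RLDA pipeline or their dimensionality test): the within-class scatter matrix is shrunk toward the identity before the Fisher directions are computed.

    # Generic regularized LDA sketch; the regularization strength and toy data are assumptions.
    import numpy as np

    def rlda_directions(X, y, reg=1e-2, n_components=1):
        """X: (n_samples, n_features) flattened data; y: class labels."""
        classes = np.unique(y)
        mean_total = X.mean(axis=0)
        d = X.shape[1]
        Sw = np.zeros((d, d))
        Sb = np.zeros((d, d))
        for c in classes:
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)
            diff = (mc - mean_total)[:, None]
            Sb += len(Xc) * (diff @ diff.T)
        # Regularization: shrink the within-class scatter toward the identity.
        Sw_reg = Sw + reg * np.trace(Sw) / d * np.eye(d)
        # Fisher directions: leading eigenvectors of Sw_reg^{-1} Sb.
        evals, evecs = np.linalg.eig(np.linalg.solve(Sw_reg, Sb))
        order = np.argsort(-evals.real)
        return evecs[:, order[:n_components]].real

    # Toy usage with two Gaussian classes in 4 dimensions.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
    y = np.array([0] * 50 + [1] * 50)
    W = rlda_directions(X, y, reg=0.1)
    print("discriminant direction:", W.ravel())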

  3. Regularized lattice Bhatnagar-Gross-Krook model for two- and three-dimensional cavity flow simulations.

    Science.gov (United States)

    Montessori, A; Falcucci, G; Prestininzi, P; La Rocca, M; Succi, S

    2014-05-01

    We investigate the accuracy and performance of the regularized version of the single-relaxation-time lattice Boltzmann equation for the case of two- and three-dimensional lid-driven cavities. The regularized version is shown to provide a significant gain in stability over the standard single-relaxation time, at a moderate computational overhead.
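
    As a concrete illustration of the extra step, the following sketch performs a single regularized BGK collision on a D2Q9 lattice (standard weights, cs^2 = 1/3): the non-equilibrium part of the populations is rebuilt from its second-order moment before relaxation. This shows only the local collision kernel; the cavity-flow solver studied in the paper additionally involves streaming and boundary conditions.

    # Regularized BGK collision at one node on a D2Q9 lattice (illustrative sketch).
    import numpy as np

    # Standard D2Q9 lattice constants; cs^2 = 1/3.
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    cs2 = 1.0 / 3.0

    def equilibrium(rho, u):
        cu = c @ u
        usq = u @ u
        return rho * w * (1 + cu/cs2 + cu**2/(2*cs2**2) - usq/(2*cs2))

    def regularized_collision(f, tau):
        """One regularized BGK collision at a single node (no streaming)."""
        rho = f.sum()
        u = (f[:, None] * c).sum(axis=0) / rho
        feq = equilibrium(rho, u)
        fneq = f - feq
        # Project the non-equilibrium part onto its second-order moment.
        Pi = np.einsum('i,ia,ib->ab', fneq, c, c)            # non-equilibrium momentum flux
        Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)  # c_i c_i - cs^2 I
        fneq_reg = w / (2 * cs2**2) * np.einsum('iab,ab->i', Q, Pi)
        # BGK relaxation applied to the regularized non-equilibrium part.
        return feq + (1.0 - 1.0/tau) * fneq_reg

    # Toy usage: a slightly perturbed population relaxes toward equilibrium.
    f0 = equilibrium(1.0, np.array([0.05, 0.0])) + 1e-3 * np.arange(9)
    print(regularized_collision(f0, tau=0.8))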

  4. Regularization algorithm within two-parameters for identification heat-coefficient in the parabolic equation

    International Nuclear Information System (INIS)

    Hinestroza Gutierrez, D.

    2006-08-01

    In this work a new and promising algorithm based on the minimization of a special functional that depends on two regularization parameters is considered for the identification of the heat conduction coefficient in the parabolic equation. This algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) but the other is associated with the calculated solution. (author)
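
    The paper's algorithm identifies the heat-conduction coefficient through adjoint and sensitivity equations; the toy sketch below only illustrates the generic idea of a functional with two regularization parameters, one acting on the sought coefficient and one on a calculated quantity, on an assumed linear least-squares problem.

    # Toy two-parameter regularization: J(q) = ||A q - d||^2 + alpha1 ||q||^2 + alpha2 ||B q||^2,
    # where the second penalty acts on a calculated quantity B q. A, B and the data are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 20))        # assumed forward operator
    B = rng.normal(size=(40, 20))        # assumed operator producing the "calculated solution"
    q_true = np.ones(20)
    d = A @ q_true + 0.01 * rng.normal(size=40)

    def solve_two_parameter(alpha1, alpha2):
        # Normal equations of the quadratic functional J(q).
        lhs = A.T @ A + alpha1 * np.eye(20) + alpha2 * (B.T @ B)
        return np.linalg.solve(lhs, A.T @ d)

    q_est = solve_two_parameter(alpha1=1e-3, alpha2=1e-3)
    print("relative error:", np.linalg.norm(q_est - q_true) / np.linalg.norm(q_true))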

  5. Regularization algorithm within two-parameters for identification heat-coefficient in the parabolic equation

    International Nuclear Information System (INIS)

    Hinestroza Gutierrez, D.

    2006-12-01

    In this work a new and promising algorithm based on the minimization of a special functional that depends on two regularization parameters is considered for the identification of the heat conduction coefficient in the parabolic equation. This algorithm uses the adjoint and sensitivity equations. One of the regularization parameters is associated with the heat coefficient (as in conventional Tikhonov algorithms) but the other is associated with the calculated solution. (author)

  6. Global existence and blow-up phenomena for two-component Degasperis-Procesi system and two-component b-family system

    OpenAIRE

    Liu, Jingjing; Yin, Zhaoyang

    2014-01-01

    This paper is concerned with global existence and blow-up phenomena for the two-component Degasperis-Procesi system and the two-component b-family system. The strategy relies on our observation of new conservative quantities of these systems. Several new global existence results and a new blow-up result for strong solutions to the two-component Degasperis-Procesi system and the two-component b-family system are presented by using these new conservative quantities.

  7. A Regular k-Shrinkage Thresholding Operator for the Removal of Mixed Gaussian-Impulse Noise

    Directory of Open Access Journals (Sweden)

    Han Pan

    2017-01-01

    Full Text Available The removal of mixed Gaussian-impulse noise plays an important role in many areas, such as remote sensing. However, traditional methods may fail to promote the degree of sparsity adaptively after decomposing the image into a low-rank component and a sparse component. In this paper, a new problem formulation with a regular spectral k-support norm and a regular k-support l1 norm is proposed. A unified framework is developed to capture the intrinsic sparsity structure of both components. To address the resulting problem, an efficient minimization scheme within the framework of accelerated proximal gradient is proposed. This scheme is achieved by alternating a regular k-shrinkage thresholding operator. Experimental comparison with other state-of-the-art methods demonstrates the efficacy of the proposed method.
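
    The paper's exact regular k-shrinkage thresholding operator is not reproduced here; the snippet below is an illustrative stand-in in the spirit of k-support-norm proximal steps (an assumption): the k largest-magnitude entries are kept and the remaining entries are soft-thresholded.

    # Illustrative "k-shrinkage"-style thresholding (assumed form, not the paper's operator).
    import numpy as np

    def k_shrinkage(x, k, lam):
        out = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # soft-threshold every entry
        keep = np.argsort(-np.abs(x))[:k]                     # indices of the k largest magnitudes
        out[keep] = x[keep]                                   # ...but keep the top-k entries intact
        return out

    x = np.array([3.0, -0.2, 0.05, -2.5, 0.4, 0.1])
    print(k_shrinkage(x, k=2, lam=0.3))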

  8. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
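
    A minimal sketch of the discrepancy-principle strategy on synthetic data: sweep the regularization parameter, solve each l1 problem with plain ISTA, and pick the value whose residual variance best matches the assumed measurement-noise variance. The forward matrix, the sparse "damage" vector and the noise level are illustrative assumptions, not the paper's beam or frame models.

    # Discrepancy-principle selection of the l1 regularization parameter (sketch).
    import numpy as np

    def ista(A, b, lam, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - (A.T @ (A @ x - b)) / L      # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-thresholding step
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 120))
    x_true = np.zeros(120); x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]   # assumed sparse "damage"
    sigma = 0.05                                                     # assumed noise level
    b = A @ x_true + sigma * rng.normal(size=60)

    lams = np.logspace(-3, 0, 15)
    errs = []
    for lam in lams:
        r = A @ ista(A, b, lam) - b
        errs.append(abs(np.var(r) - sigma**2))   # match residual variance to noise variance
    print("selected lambda:", lams[int(np.argmin(errs))])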

  9. Neutrino stress tensor regularization in two-dimensional space-time

    International Nuclear Information System (INIS)

    Davies, P.C.W.; Unruh, W.G.

    1977-01-01

    The method of covariant point-splitting is used to regularize the stress tensor for a massless spin 1/2 (neutrino) quantum field in an arbitrary two-dimensional space-time. A thermodynamic argument is used as a consistency check. The result shows that the physical part of the stress tensor is identical with that of the massless scalar field (in the absence of Casimir-type terms) even though the formally divergent expression is equal to the negative of the scalar case. (author)

  10. Improvement of the beam quality of a diode laser with two active broad-area segments

    DEFF Research Database (Denmark)

    Chi, Mingjun; Thestrup, B.; Mortensen, J.L.

    2003-01-01

    The beam quality of a diode laser with two active segments was improved using an external cavity with collimating optics, a grating, and an output coupler. The beam quality of the output beam, which is the first-order diffractive beam from the grating, was improved by a factor of 2, and at least half of the freely running power of the laser was coupled out from the external cavity. The output power can be enhanced further by the feedback from the zeroth-order beam. The possibility of improving the beam quality further is discussed and a new double-external-cavity configuration is suggested.

  11. Optimizing the structure of Tetracyanoplatinate(II)

    DEFF Research Database (Denmark)

    Dohn, Asmus Ougaard; Møller, Klaus Braagaard; Sauer, Stephan P. A.

    2013-01-01

    The geometry of tetracyanoplatinate(II) (TCP) has been optimized with density functional theory (DFT) calculations in order to compare different computational strategies. Two approximate scalar relativistic methods, i.e. the scalar zeroth-order regular approximation (ZORA) and non... is almost quantitatively reproduced in the ZORA and ECP calculations. For the C-N bond these trends are reversed and an order of magnitude smaller. In addition, the effect of the exchange-correlation functional and one-electron basis set was studied by employing the two generalized gradient approximation (GGA) functionals, BLYP and PBE, as well as their hybrid versions B3LYP and PBE0... With respect to the basis set dependence we observed that a triple zeta basis set with polarization functions gives in general sufficiently converged results, but for the Pt-C bond it is advantageous to include extra diffuse...

  12. Higher-order momentum distributions and locally affine LDDMM registration

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Darkner, Sune

    2013-01-01

    To achieve sparse parametrizations that allow intuitive analysis, we aim to represent deformation with a basis containing interpretable elements, and we wish to use elements that have the description capacity to represent the deformation compactly. To accomplish this, we introduce in this paper higher-order momentum distributions in the large deformation diffeomorphic metric mapping (LDDMM) registration framework. While the zeroth-order moments previously used in LDDMM only describe local displacement, the first-order momenta that are proposed here represent a basis that allows local description of affine transformations and subsequent compact description of non-translational movement in a globally nonrigid deformation. The resulting representation contains directly interpretable information from both mathematical and modeling perspectives. We develop the mathematical construction...

  13. Determination of two-dimensional correlation lengths in an anisotropic two-component flow

    International Nuclear Information System (INIS)

    Thomson, O.

    1994-05-01

    Former studies have shown that correlation methods can be used for determination of various two-component flow parameters, among these the correlation length. In cases where the flow can be described as a mixture, in which the minority component forms spatially limited perturbations within the majority component, this parameter gives a good indication of the maximum extension of these perturbations. In the former studies, spherical symmetry of the perturbations was assumed, and the correlation length was measured in the direction of the flow (axially) only. However, if the flow structure is anisotropic, the correlation length will be different in different directions. In the present study, the method has been developed further, allowing also measurements perpendicular to the flow direction (radially). The measurements were carried out using laser beams, and the two-component flows consisted of either glass beads and air or air and water. In order to make local measurements of both the axial and radial correlation lengths simultaneously, it is necessary to use 3 laser beams and to form the triple cross-covariance. This led to some unforeseen complications, due to the character of this function. The experimental results are generally positive, and size determinations with an accuracy of better than 10% have been achieved in most cases. Less accurate results appeared only for difficult conditions (symmetrical signals), when 3 beams were used. 5 refs, 13 figs, 3 tabs

  14. Order in large and chaos in small components of nuclear wave functions

    International Nuclear Information System (INIS)

    Soloviev, V.G.

    1992-06-01

    An investigation of the order and chaos of the nuclear excited states has shown that there is order in the large and chaos in the small quasiparticle or phonon components of the nuclear wave functions. The order-to-chaos transition is treated as a transition from the large to the small components of the nuclear wave function. The analysis has shown that relatively large many-quasiparticle components of the wave function may exist at an excitation energy of (4-8) MeV. The large many-quasiparticle components of the wave functions of the neutron resonances are responsible for enhanced E1-, M1- and E2-transition probabilities from neutron resonances to levels lying (1-2) MeV below them. (author)

  15. Regularity criteria for incompressible magnetohydrodynamics equations in three dimensions

    International Nuclear Information System (INIS)

    Lin, Hongxia; Du, Lili

    2013-01-01

    In this paper, we give some new global regularity criteria for three-dimensional incompressible magnetohydrodynamics (MHD) equations. More precisely, we provide some sufficient conditions in terms of the derivatives of the velocity or pressure, for the global regularity of strong solutions to 3D incompressible MHD equations in the whole space, as well as for periodic boundary conditions. Moreover, the regularity criterion involving three of the nine components of the velocity gradient tensor is also obtained. The main results generalize the recent work by Cao and Wu (2010 Two regularity criteria for the 3D MHD equations J. Diff. Eqns 248 2263–74) and the analysis in part is based on the works by Cao C and Titi E (2008 Regularity criteria for the three-dimensional Navier–Stokes equations Indiana Univ. Math. J. 57 2643–61; 2011 Global regularity criterion for the 3D Navier–Stokes equations involving one entry of the velocity gradient tensor Arch. Rational Mech. Anal. 202 919–32) for 3D incompressible Navier–Stokes equations. (paper)

  16. Optical diffraction by ordered 2D arrays of silica microspheres

    Science.gov (United States)

    Shcherbakov, A. A.; Shavdina, O.; Tishchenko, A. V.; Veillas, C.; Verrier, I.; Dellea, O.; Jourlin, Y.

    2017-03-01

    The article presents experimental and theoretical studies of angular dependent diffraction properties of 2D monolayer arrays of silica microspheres. High-quality large area defect-free monolayers of 1 μm diameter silica microspheres were deposited by the Langmuir-Blodgett technique under an accurate optical control. Measured angular dependencies of zeroth and one of the first order diffraction efficiencies produced by deposited samples were simulated by the rigorous Generalized Source Method taking into account particle size dispersion and lattice nonideality.

  17. Primordial two-component maximally symmetric inflation

    Science.gov (United States)

    Enqvist, K.; Nanopoulos, D. V.; Quirós, M.; Kounnas, C.

    1985-12-01

    We propose a two-component inflation model, based on maximally symmetric supergravity, where the scales of reheating and the inflation potential at the origin are decoupled. This is possible because of the second-order phase transition from SU(5) to SU(3)×SU(2)×U(1) that takes place when φ≅φc... inflation at the global minimum, and leads to a reheating temperature T_R≅10^15-10^16 GeV. This makes it possible to generate baryon asymmetry in the conventional way without any conflict with experimental data on proton lifetime. The mass of the gravitinos is m_3/2≅10^12 GeV, thus avoiding the gravitino problem. Monopoles are diluted by residual inflation in the broken phase below the cosmological bounds if φc...

  18. On the regularity of mild solutions to complete higher order differential equations on Banach spaces

    Directory of Open Access Journals (Sweden)

    Nezam Iraniparast

    2015-09-01

    Full Text Available For the complete higher-order differential equation u^(n)(t) = Σ_{k=0}^{n-1} A_k u^(k)(t) + f(t), t ∈ R (*) on a Banach space E, we give a new definition of mild solutions of (*). We then characterize the regular admissibility of a translation-invariant subspace M of BUC(R, E) with respect to (*) in terms of solvability of the operator equation Σ_{j=0}^{n-1} A_j X D^j - X D^n = C. As an application, almost periodicity of mild solutions of (*) is proved.

  19. Higher-order force moments of active particles

    Science.gov (United States)

    Nasouri, Babak; Elfring, Gwynn J.

    2018-04-01

    Active particles moving through fluids generate disturbance flows due to their activity. For simplicity, the induced flow field is often modeled by the leading terms in a far-field approximation of the Stokes equations, whose coefficients are the force, torque, and stresslet (zeroth- and first-order force moments) of the active particle. This level of approximation is quite useful, but may also fail to predict more complex behaviors that are observed experimentally. In this study, to provide a better approximation, we evaluate the contribution of the second-order force moments to the flow field and, by reciprocal theorem, present explicit formulas for the stresslet dipole, rotlet dipole, and potential dipole for an arbitrarily shaped active particle. As examples of this method, we derive modified Faxén laws for active spherical particles and resolve higher-order moments for active rod-like particles.

  20. A study of water hammer phenomena in a one-component two-phase bubbly flow

    International Nuclear Information System (INIS)

    Fujii, Terushige; Akagawa, Koji

    2000-01-01

    Water hammer phenomena caused by a rapid valve closure, that is, shock phenomena in two-phase flows, are an important problem for the safety assessment of a hypothetical LOCA. This paper presents the results of experimental and analytical studies of the water hammer phenomena in a one-component two-phase bubbly flow. In order to clarify the characteristics of water hammer phenomena, experiments for a one-component two-phase flow of Freon R-113 were conducted and a numerical simulation of pressure transients was developed. An overall picture of the water hammer phenomena in a one-component two-phase flow is presented and discussed. (author)

  1. Seventh regular meeting of the International Working Group on Reliability of Reactor Pressure Components, Vienna, 3-5 September 1985

    International Nuclear Information System (INIS)

    1986-07-01

    The seventh regular meeting of the IAEA International Working Group on Reliability of Reactor Pressure Components was held at the Agency's Headquarters in Vienna from 3 to 5 September 1985. The representatives of Member States and of the Commission of the European Communities reported the status of the research programmes in this field (12 presentations). A separate abstract was prepared for each of the presentations

  2. Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation

    Science.gov (United States)

    Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.

    2018-05-01

    Typical quantifiers of Tsallis' statistical mechanics, such as the partition function Z and the mean energy 〈U〉, exhibit poles. The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a denumerable set of rational numbers on the q-line. These poles are dealt with using dimensional regularization resources. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.

  3. Iterated Process Analysis over Lattice-Valued Regular Expressions

    DEFF Research Database (Denmark)

    Midtgaard, Jan; Nielson, Flemming; Nielson, Hanne Riis

    2016-01-01

    We present an iterated approach to statically analyze programs of two processes communicating by message passing. Our analysis operates over a domain of lattice-valued regular expressions, and computes increasingly better approximations of each process's communication behavior. Overall the work extends traditional semantics-based program analysis techniques to automatically reason about message passing in a manner that can simultaneously analyze both values of variables as well as message order, message content, and their interdependencies.

  4. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    Science.gov (United States)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.
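
    Independently of the color-gradient machinery, the power-law part can be illustrated by how a local relaxation time would follow from the local shear rate, assuming the usual relation nu = cs^2 (tau - 1/2) dt between kinematic viscosity and relaxation time; this is a sketch, not the paper's full model, and the consistency index and power-law index below are assumptions.

    # Local relaxation time for a power-law fluid in an LBM setting (illustrative sketch).
    import numpy as np

    def power_law_tau(shear_rate, K=0.01, n=0.7, cs2=1.0/3.0, dt=1.0, gamma_min=1e-8):
        gamma = max(abs(shear_rate), gamma_min)   # avoid divergence as the shear rate -> 0
        nu = K * gamma ** (n - 1.0)               # power-law (shear-thinning for n < 1) viscosity
        return nu / (cs2 * dt) + 0.5              # invert nu = cs^2 (tau - 1/2) dt

    for g in (1e-3, 1e-2, 1e-1):
        print(f"shear rate {g:.0e} -> tau = {power_law_tau(g):.3f}")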

  5. Supersymmetric Regularization Two-Loop QCD Amplitudes and Coupling Shifts

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Lance

    2002-03-08

    We present a definition of the four-dimensional helicity (FDH) regularization scheme valid for two or more loops. This scheme was previously defined and utilized at one loop. It amounts to a variation on the standard 't Hooft-Veltman scheme and is designed to be compatible with the use of helicity states for "observed" particles. It is similar to dimensional reduction in that it maintains an equal number of bosonic and fermionic states, as required for preserving supersymmetry. Supersymmetry Ward identities relate different helicity amplitudes in supersymmetric theories. As a check that the FDH scheme preserves supersymmetry, at least through two loops, we explicitly verify a number of these identities for gluon-gluon scattering (gg → gg) in supersymmetric QCD. These results also cross-check recent non-trivial two-loop calculations in ordinary QCD. Finally, we compute the two-loop shift between the FDH coupling and the standard MS-bar coupling, α_s. The FDH shift is identical to the one for dimensional reduction. The two-loop coupling shifts are then used to obtain the three-loop QCD β function in the FDH and dimensional reduction schemes.

  6. Supersymmetric Regularization Two-Loop QCD Amplitudes and Coupling Shifts

    International Nuclear Information System (INIS)

    Dixon, Lance

    2002-01-01

    We present a definition of the four-dimensional helicity (FDH) regularization scheme valid for two or more loops. This scheme was previously defined and utilized at one loop. It amounts to a variation on the standard 't Hooft-Veltman scheme and is designed to be compatible with the use of helicity states for ''observed'' particles. It is similar to dimensional reduction in that it maintains an equal number of bosonic and fermionic states, as required for preserving supersymmetry. Supersymmetry Ward identities relate different helicity amplitudes in supersymmetric theories. As a check that the FDH scheme preserves supersymmetry, at least through two loops, we explicitly verify a number of these identities for gluon-gluon scattering (gg → gg) in supersymmetric QCD. These results also cross-check recent non-trivial two-loop calculations in ordinary QCD. Finally, we compute the two-loop shift between the FDH coupling and the standard MS coupling, α s . The FDH shift is identical to the one for dimensional reduction. The two-loop coupling shifts are then used to obtain the three-loop QCD β function in the FDH and dimensional reduction schemes

  7. Two- and four-component relativistic generalized-active-space coupled cluster method: implementation and application to BiH.

    Science.gov (United States)

    Sørensen, Lasse K; Olsen, Jeppe; Fleig, Timo

    2011-06-07

    A string-based coupled-cluster method of general excitation rank and with optimal scaling which accounts for special relativity within the four-component framework is presented. The method opens the way for the treatment of multi-reference problems through an active-space inspired single-reference based state-selective expansion of the model space. The evaluation of the coupled-cluster vector function is implemented by considering contractions of elementary second-quantized operators without setting up the amplitude equations explicitly. The capabilities of the new method are demonstrated in application to the electronic ground state of the bismuth monohydride molecule. In these calculations simulated multi-reference expansions with both doubles and triples excitations into the external space as well as the regular coupled-cluster hierarchy up to full quadruples excitations are compared. The importance of atomic outer core-correlation for obtaining accurate results is shown. Comparison to the non-relativistic framework is performed throughout to illustrate the additional work of the transition to the four-component relativistic framework both in implementation and application. Furthermore, an evaluation of the highest order scaling for general-order expansions is presented. © 2011 American Institute of Physics

  8. Centered Differential Waveform Inversion with Minimum Support Regularization

    KAUST Repository

    Kazei, Vladimir

    2017-05-26

    Time-lapse full-waveform inversion has two major challenges. The first is the reconstruction of a reference model (the baseline model for most approaches). The second is inversion for the time-lapse changes in the parameters. The common-model approach utilizes the information contained in all available data sets to build a better reference model for time-lapse inversion. Differential (double-difference) waveform inversion reduces the artifacts introduced into estimates of time-lapse parameter changes by imperfect inversion for the baseline-reference model. We propose centered differential waveform inversion (CDWI), which combines these two approaches in order to benefit from both of their features. We apply minimum support regularization, commonly used with electromagnetic methods of geophysical exploration. We test the CDWI method on a synthetic dataset with random noise and show that, with minimum support regularization, it provides better resolution of velocity changes than total variation and Tikhonov regularizations in time-lapse full-waveform inversion.
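
    The minimum support stabilizer referred to above penalizes the "volume" of the non-zero part of the model update rather than its magnitude. Below is a small sketch of the standard functional R_MS(dm) = Σ_i dm_i^2 / (dm_i^2 + β^2) with illustrative numbers; it is not the CDWI workflow itself, and β and the test vectors are assumptions.

    # Minimum-support stabilizer: sparse, localized updates are penalized less than
    # spread-out updates of comparable energy.
    import numpy as np

    def minimum_support(dm, beta=1e-2):
        return float(np.sum(dm**2 / (dm**2 + beta**2)))

    dm_sparse = np.zeros(100); dm_sparse[:3] = 0.2   # a few large, localized changes
    dm_spread = np.full(100, 0.035)                  # many small changes, comparable total energy
    print(minimum_support(dm_sparse), minimum_support(dm_spread))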

  9. Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.

    Science.gov (United States)

    Sun, Shiliang; Xie, Xijiong

    2016-09-01

    Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic programming, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programmings. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.

  10. The three-point function in split dimensional regularization in the Coulomb gauge

    International Nuclear Information System (INIS)

    Leibbrandt, G.

    1998-01-01

    We use a gauge-invariant regularization procedure, called split dimensional regularization, to evaluate the quark self-energy Σ(p) and quark-quark-gluon vertex function Λ_μ(p',p) in the Coulomb gauge, ∇·A^a = 0. The technique of split dimensional regularization was designed to regulate Coulomb-gauge Feynman integrals in non-Abelian theories. The technique, which is based on two complex regulating parameters, ω and σ, is shown to generate a well-defined set of Coulomb-gauge integrals. A major component of this project deals with the evaluation of four-propagator and five-propagator Coulomb integrals, some of which are non-local. It is further argued that the standard one-loop BRST identity relating Σ and Λ_μ should by rights be replaced by a more general BRST identity which contains two additional contributions from ghost vertex diagrams. Despite the appearance of non-local Coulomb integrals, both Σ and Λ_μ are local functions which satisfy the appropriate BRST identity. Application of split dimensional regularization to two-loop energy integrals is briefly discussed. (orig.)

  11. Detecting violations of temporal regularities in waking and sleeping two-month-old infants

    NARCIS (Netherlands)

    Otte, R.A.; Winkler, I.; Braeken, M.A.K.A.; Stekelenburg, J.J.; van der Stelt, O.; Van den Bergh, B.R.H.

    2013-01-01

    Correctly processing rapid sequences of sounds is essential for developmental milestones, such as language acquisition. We investigated the sensitivity of two-month-old infants to violations of a temporal regularity, by recording event-related brain potentials (ERPs) in an auditory oddball paradigm

  12. Multi-matrix loop equations: algebraic and differential structures and an approximation based on deformation quantization

    International Nuclear Information System (INIS)

    Krishnaswami, Govind S.

    2006-01-01

    Large-N multi-matrix loop equations are formulated as quadratic difference equations in concatenations of gluon correlations. Though non-linear, they involve the highest-rank correlations linearly. They are underdetermined in many cases. Additional linear equations for gluon correlations, associated to symmetries of the action and measure, are found. Loop equations are not differential equations, as they involve left annihilation, which does not satisfy the Leibniz rule with concatenation. But left annihilation is a derivation of the commutative shuffle product. Moreover, shuffle and concatenation combine to define a bialgebra. Motivated by deformation quantization, we expand concatenation around shuffle in powers of q, whose physical value is 1. At zeroth order the loop equations become quadratic PDEs in the shuffle algebra. If the variation of the action is linear in iterated commutators of left annihilations, these quadratic PDEs linearize by passage to the shuffle reciprocal of correlations. Remarkably, this is true for regularized versions of the Yang-Mills, Chern-Simons and Gaussian actions. But the linear equations are underdetermined just as the loop equations were. For any particular solution, the shuffle reciprocal is explicitly inverted to get the zeroth-order gluon correlations. To go beyond zeroth order, we find a Poisson bracket on the shuffle algebra and associative q-products interpolating between shuffle and concatenation. This method, and a complementary one of deforming annihilation rather than the product, are shown to give over- and underestimates for correlations of a Gaussian matrix model.

  13. Two-way substitution effects on inventory in configure-to-order production systems

    DEFF Research Database (Denmark)

    Myrodia, Anna; Bonev, Martin; Hvam, Lars

    2015-01-01

    In designing configure-to-order production systems for a growing product variety, companies are challenged with an increased complexity for obtaining high productivity levels and cost-effectiveness. In academia several optimization methods and conceptual frameworks for substituting components, or increasing storage capacity, have been proposed. Our study presents a practical framework for quantifying the impact of a two-way substitution at different production stages and its impact on inventory utilization. In a case study we quantify the relation between component substitution and inventory capacity utilization, while maintaining the production capacity as well as the external product variety.

  14. Two hierarchies of multi-component Kaup-Newell equations and theirs integrable couplings

    International Nuclear Information System (INIS)

    Zhu Fubo; Ji Jie; Zhang Jianbin

    2008-01-01

    Two hierarchies of multi-component Kaup-Newell equations are derived from an arbitrary order matrix spectral problem, including positive non-isospectral Kaup-Newell hierarchy and negative non-isospectral Kaup-Newell hierarchy. Moreover, new integrable couplings of the resulting Kaup-Newell soliton hierarchies are constructed by enlarging the associated matrix spectral problem

  15. The derivative assay--an analysis of two fast components of DNA rejoining kinetics

    International Nuclear Information System (INIS)

    Sandstroem, B.E.

    1989-01-01

    The DNA rejoining kinetics of human U-118 MG cells were studied after gamma-irradiation with 4 Gy. The analysis of the sealing rate of the induced DNA strand breaks was made with a modification of the DNA unwinding technique. The modification meant that rather than just monitoring the number of existing breaks at each time of analysis, the velocity at which the rejoining process proceeded was determined. Two apparent first-order components of single-strand break repair could be identified during the 25 min of analysis. The half-times for the two components were 1.9 and 16 min, respectively.
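
    An illustrative two-component first-order fit on synthetic data (not the study's measurements), using the half-times quoted above as the ground truth:

    # Fit a sum of two first-order (exponential) components with scipy's curve_fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def two_component(t, a_fast, a_slow, t_half_fast, t_half_slow):
        k1 = np.log(2) / t_half_fast
        k2 = np.log(2) / t_half_slow
        return a_fast * np.exp(-k1 * t) + a_slow * np.exp(-k2 * t)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 25, 26)                               # minutes
    y = two_component(t, 0.6, 0.4, 1.9, 16.0) + 0.01 * rng.normal(size=t.size)

    popt, _ = curve_fit(two_component, t, y, p0=[0.5, 0.5, 2.0, 10.0])
    print("fitted half-times (min):", popt[2], popt[3])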

  16. Numerical analysis of the harmonic components of the Bragg wavelength content in spectral responses of apodized fiber Bragg gratings written by means of a phase mask with a variable phase step height.

    Science.gov (United States)

    Osuch, Tomasz

    2016-02-01

    The influence of the complex interference patterns created by a phase mask with variable diffraction efficiency in apodized fiber Bragg grating (FBGs) formation on their reflectance spectra is studied. The effect of the significant contributions of the zeroth and higher (m>±1) diffraction orders on the Bragg wavelength peak and its harmonic components is analyzed numerically. The results obtained for Gaussian and tanh apodization profiles are compared with similar data calculated for a uniform grating. It is demonstrated that when an apodized FBG is written using a phase mask with variable diffraction efficiency, significant enhancement of the harmonic components and a reduction of the Bragg wavelength peak in the grating spectral response are observed. This is particularly noticeable for the Gaussian apodization profile due to the substantial contributions of phase mask sections with relatively small phase steps in the FBG formation.

  17. The electronic structure and the state of compositional order in metallic alloys

    International Nuclear Information System (INIS)

    Gyorffy, B.L.; Johnson, D.D.; Pinski, F.J.; Nicholson, D.M.; Stocks, G.M.

    1987-01-01

    Many two-component (A,B) systems crystallize into a random solid solution. In such a state the atoms occupy a more or less regular array of lattice sites but each site can be A or B in a random fashion. Then, on lowering the temperature, the system will either phase separate or order, starting at some transition temperature T/sub c/. The aim of these lectures is to present a microscopic approach to the understanding of these scientifically interesting and technologically important processes. 64 refs., 19 figs

  18. Mass effects in three-point chronological current correlators in n-dimensional multifermion models

    International Nuclear Information System (INIS)

    Kucheryavyj, V.I.

    1991-01-01

    Three types of quantities associated with three-point chronological fermion-current correlators having arbitrary Lorentz and internal structure are calculated in n-dimensional multifermion models with different masses. The analysis of vector and axial-vector Ward identities for regular (finite) and dimensionally regularized values of these quantities is carried out. Quantum corrections to the canonical Ward identities are obtained. These corrections are generally homogeneous functions of zeroth order in the masses and, under some definite conditions, they reduce to the known axial-vector anomalies. The structure and properties of quantum corrections to AVV and AAA correlators in four-dimensional space-time are investigated in detail.

  19. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions

  20. Poisson image reconstruction with Hessian Schatten-norm regularization.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l_p norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  1. Two-Way Regularized Fuzzy Clustering of Multiple Correspondence Analysis.

    Science.gov (United States)

    Kim, Sunmee; Choi, Ji Yeh; Hwang, Heungsun

    2017-01-01

    Multiple correspondence analysis (MCA) is a useful tool for investigating the interrelationships among dummy-coded categorical variables. MCA has been combined with clustering methods to examine whether there exist heterogeneous subclusters of a population, which exhibit cluster-level heterogeneity. These combined approaches aim to classify either observations only (one-way clustering of MCA) or both observations and variable categories (two-way clustering of MCA). The latter approach is favored because its solutions are easier to interpret by providing explicitly which subgroup of observations is associated with which subset of variable categories. Nonetheless, the two-way approach has been built on hard classification that assumes observations and/or variable categories to belong to only one cluster. To relax this assumption, we propose two-way fuzzy clustering of MCA. Specifically, we combine MCA with fuzzy k-means simultaneously to classify a subgroup of observations and a subset of variable categories into a common cluster, while allowing both observations and variable categories to belong partially to multiple clusters. Importantly, we adopt regularized fuzzy k-means, thereby enabling us to decide the degree of fuzziness in cluster memberships automatically. We evaluate the performance of the proposed approach through the analysis of simulated and real data, in comparison with existing two-way clustering approaches.

  2. Singular tachyon kinks from regular profiles

    International Nuclear Information System (INIS)

    Copeland, E.J.; Saffin, P.M.; Steer, D.A.

    2003-01-01

    We demonstrate how Sen's singular kink solution of the Born-Infeld tachyon action can be constructed by taking the appropriate limit of initially regular profiles. It is shown that the order in which different limits are taken plays an important role in determining whether or not such a solution is obtained for a wide class of potentials. Indeed, by introducing a small parameter into the action, we are able to circumvent the results of a recent paper which derived two conditions on the asymptotic tachyon potential such that the singular kink could be recovered in the large amplitude limit of periodic solutions. We show that this is explained by the non-commuting nature of two limits, and that Sen's solution is recovered if the order of the limits is chosen appropriately

  3. Construction of normal-regular decisions of Bessel typed special system

    Science.gov (United States)

    Tasmambetov, Zhaksylyk N.; Talipova, Meiramgul Zh.

    2017-09-01

    A special system of second-order partial differential equations, which is solved by the degenerate hypergeometric function reducing to the Bessel functions of two variables, is studied. To construct a solution of this system near regular and irregular singularities, we use the method of Frobenius-Latysheva, applying the concepts of rank and antirank. The basic theorem establishing the existence of four linearly independent solutions of the studied Bessel-type system is proved. To prove the existence of normal-regular solutions, we establish necessary conditions for the existence of such solutions. The existence and convergence of a normally regular solution are shown using the notions of rank and antirank.

  4. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the no-growth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  5. Onsager Vortex Formation in Two-component Bose-Einstein Condensates

    Science.gov (United States)

    Han, Junsik; Tsubota, Makoto

    2018-06-01

    We numerically study the dynamics of quantized vortices in two-dimensional two-component Bose-Einstein condensates (BECs) trapped by a box potential. For one-component BECs in a box potential, it is known that quantized vortices form Onsager vortices, which are clusters of same-sign vortices. We confirm that the vortices of the two components spatially separate from each other — even for miscible two-component BECs — suppressing the formation of Onsager vortices. This phenomenon is caused by the repulsive interaction between vortices belonging to different components, hence, suggesting a new possibility for vortex phase separation.

  6. Processing SPARQL queries with regular expressions in RDF databases

    Science.gov (United States)

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225

  7. Processing SPARQL queries with regular expressions in RDF databases.

    Science.gov (United States)

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
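
    For illustration only, the kind of query these papers target can be written and evaluated client-side with rdflib (assuming the rdflib package is installed); the papers' contribution is an engine-level framework that makes such REGEX filters efficient at scale, not this naive evaluation. The tiny graph and label strings below are assumptions.

    # A SPARQL SELECT with a REGEX filter, evaluated on a toy in-memory RDF graph.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDFS

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.p1, RDFS.label, Literal("putative kinase inhibitor")))
    g.add((EX.p2, RDFS.label, Literal("membrane transporter")))

    query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?s ?label WHERE {
      ?s rdfs:label ?label .
      FILTER REGEX(?label, "kinase.*inhibitor", "i")
    }
    """
    for row in g.query(query):
        print(row.s, row.label)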

  8. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in a sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
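
    A sketch of the sum-space idea with kernel ridge regression (an illustration, not the paper's exact estimator or its learning-rate analysis): the kernel is the sum of a wide and a narrow Gaussian kernel, so smooth and sharp components of the target can be fitted simultaneously. The bandwidths, regularization strength and test target below are assumptions.

    # Kernel ridge regression with a sum of two Gaussian kernels (illustrative sketch).
    import numpy as np

    def gaussian_kernel(X, Y, sigma):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(0, 1, (80, 1)), axis=0)
    # Target with a smooth part (sine) and a sharp part (step), plus noise.
    y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * (X[:, 0] > 0.5) + 0.05 * rng.normal(size=80)

    sig_wide, sig_narrow, lam = 0.3, 0.02, 1e-3
    K = gaussian_kernel(X, X, sig_wide) + gaussian_kernel(X, X, sig_narrow)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)

    X_test = np.linspace(0, 1, 5)[:, None]
    K_test = gaussian_kernel(X_test, X, sig_wide) + gaussian_kernel(X_test, X, sig_narrow)
    print(K_test @ alpha)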

  9. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real

  10. On the equivalence of different regularization methods

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1985-01-01

    The R-circunflex-operation preceded by the regularization procedure is discussed. Some arguments are given, according to which the results may depend on the method of regularization, introduced in order to avoid divergences in perturbation calculations. 10 refs. (author)

  11. Genus Ranges of 4-Regular Rigid Vertex Graphs.

    Science.gov (United States)

    Buck, Dorothy; Dolzhenko, Egor; Jonoska, Nataša; Saito, Masahico; Valencia, Karin

    2015-01-01

    A rigid vertex of a graph is one that has a prescribed cyclic order of its incident edges. We study orientable genus ranges of 4-regular rigid vertex graphs. The (orientable) genus range is a set of genera values over all orientable surfaces into which a graph is embedded cellularly, and the embeddings of rigid vertex graphs are required to preserve the prescribed cyclic order of incident edges at every vertex. The genus ranges of 4-regular rigid vertex graphs are sets of consecutive integers, and we address two questions: which intervals of integers appear as genus ranges of such graphs, and what types of graphs realize a given genus range. For graphs with 2n vertices (n > 1), we prove that all intervals [a, b] with a < b ≤ n are realized as genus ranges. For graphs with 2n - 1 vertices (n ≥ 1), we prove that all intervals [a, b] with a < b ≤ n are realized as genus ranges. We also provide constructions of graphs that realize these ranges.

  12. Two-component multistep direct reactions: A microscopic approach

    International Nuclear Information System (INIS)

    Koning, A.J.; Chadwick, M.B.

    1998-03-01

    The authors present two principal advances in multistep direct theory: (1) A two-component formulation of multistep direct reactions, where neutron and proton excitations are explicitly accounted for in the evolution of the reaction, for all orders of scattering. While this may at first seem to be a formidable task, especially for multistep processes where the many possible reaction pathways becomes large in a two-component formalism, the authors show that this is not so -- a rather simple generalization of the FKK convolution expression 1 automatically generates these pathways. Such considerations are particularly relevant when simultaneously analyzing both neutron and proton emission spectra, which is always important since these processes represent competing decay channels. (2) A new, and fully microscopic, method for calculating MSD cross sections which does not make use of particle-hole state densities but instead directly calculates cross sections for all possible particle-hole excitations (again including an exact book-keeping of the neutron/proton type of the particle and hole at all stages of the reaction) determined from a simple non-interacting shell model. This is in contrast to all previous numerical approaches which sample only a small number of such states to estimate the DWBA strength, and utilize simple analytical formulae for the partial state density, based on the equidistant spacing model. The new approach has been applied, along with theories for multistep compound, compound, and collective reactions, to analyze experimental emission spectra for a range of targets and energies. The authors show that the theory correctly accounts for double-differential nucleon spectra

  13. Analytic stochastic regularization: gauge and supersymmetry theories

    International Nuclear Information System (INIS)

    Abdalla, M.C.B.

    1988-01-01

    Analytic stochastic regularization for gauge and supersymmetric theories is considered. Gauge invariance in spinor and scalar QCD is verified to break down by an explicit one-loop computation of the two-, three- and four-point vertex functions of the gluon field. As a result, non-gauge-invariant counterterms must be added. However, in the supersymmetric multiplets there is a cancellation rendering the counterterms gauge invariant. The calculation is considered at one-loop order. (author) [pt]

  14. Sparsity-regularized HMAX for visual recognition.

    Directory of Open Access Journals (Sweden)

    Xiaolin Hu

    Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.

  15. The Impact and Evaluation of Two School-Based Interventions on Intention to Register an Organ Donation Preference

    Science.gov (United States)

    Reubsaet, A.; Brug, J.; Kitslaar, J.; Van Hooff, J. P.; van den Borne, H. W.

    2004-01-01

    The present paper describes the impact and evaluation of two intervention components--a video with group discussion and an interactive computer-tailored program--in order to encourage adolescents to register their organ donation preference. Studies were conducted in school during regular school hours. The video with group discussion in class had a…

  16. Base stock policies with degraded service to larger orders

    DEFF Research Database (Denmark)

    Du, Bisheng; Larsen, Christian

    We study an inventory system controlled by a base stock policy assuming a compound renewal demand process. We extend the base stock policy by incorporating rules for degrading the service of larger orders. Two specific rules are considered, denoted Postpone(q,t) and Split(q), respectively. The aim of using these rules is to achieve a given order fill rate of the regular orders (those of size less than or equal to the parameter q) while having less inventory. We develop mathematical expressions for the performance measures order fill rate (of the regular orders) and average on-hand inventory level. Based...

  17. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    Full Text Available This paper presents the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The modeling and unfolding of Platonic and Archimedean polyhedra is done using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.

  18. Study of higher order cumulant expansion of U(1) lattice gauge model at finite temperature

    International Nuclear Information System (INIS)

    Zheng Xite; Lei Chunhong; Li Yuliang; Chen Hong

    1993-01-01

    The order parameter, the Polyakov line, of the U(1) gauge model on an N_σ^3 x N_τ (N_τ = 1) lattice is calculated to the 5th order using the cumulant expansion. The emphasis is put on the behaviour of the cumulant expansion in the intermediate coupling region. The necessity of higher order expansion is clarified from the connection between the cumulant expansion and the correlation length. The variational parameter in the n-th order calculation is determined by the requirement that corrections of the n-th order expansion to the zeroth order expansion vanish. The agreement with the Monte Carlo simulation is obtained not only in the weak and strong coupling regions, but also in the intermediate coupling region except in the very vicinity of the phase transition point.

  19. Phase diagram of two-component bosons on an optical lattice

    International Nuclear Information System (INIS)

    Altman, Ehud; Hofstetter, Walter; Demler, Eugene; Lukin, Mikhail D

    2003-01-01

    We present a theoretical analysis of the phase diagram of two-component bosons on an optical lattice. A new formalism is developed which treats the effective spin interactions in the Mott and superfluid phases on the same footing. Using this new approach we chart the phase boundaries of the broken spin symmetry states up to the Mott to superfluid transition and beyond. Near the transition point, the magnitude of spin exchange can be very large, which facilitates the experimental realization of spin-ordered states. We find that spin and quantum fluctuations have a dramatic effect on the transition, making it first order in extended regions of the phase diagram. When each species is at integer filling, an additional phase transition may occur, from a spin-ordered insulator to a Mott insulator with no broken symmetries. We determine the phase boundaries in this regime and show that this is essentially a Mott transition in the spin sector

  20. The three-point function in split dimensional regularization in the Coulomb gauge

    CERN Document Server

    Leibbrandt, G

    1998-01-01

    We use a gauge-invariant regularization procedure, called ``split dimensional regularization'', to evaluate the quark self-energy $\Sigma(p)$ and quark-quark-gluon vertex function $\Lambda_\mu(p^\prime,p)$ in the Coulomb gauge, $\vec{\bigtriangledown}\cdot\vec{A}^a = 0$. The technique of split dimensional regularization was designed to regulate Coulomb-gauge Feynman integrals in non-Abelian theories. The technique, which is based on two complex regulating parameters, $\omega$ and $\sigma$, is shown to generate a well-defined set of Coulomb-gauge integrals. A major component of this project deals with the evaluation of four-propagator and five-propagator Coulomb integrals, some of which are nonlocal. It is further argued that the standard one-loop BRST identity relating $\Sigma$ and $\Lambda_\mu$ should by rights be replaced by a more general BRST identity which contains two additional contributions from ghost vertex diagrams. Despite the appearance of nonlocal Coulomb integrals, both $\Sigma$ and $\Lambda_\mu$...

  1. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be considered as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect for object image with edges and to construct a fast reconstruction scheme, the total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well posedness of this optimization problem is studied. The minimizer is sought approximately using the iteration process for a series of optimization problems with Bregman distance as a penalty term. This iteration reconstruction scheme is essentially a new regularizing scheme with coupling parameter in the cost functional and the iteration stopping times as two regularizing parameters. We give the choice strategy for the regularizing parameters in terms of the noise level of measurement data, which yields the optimal error estimate on the iterative solution. The series optimization problems are solved by alternative iteration with explicit exact solution and therefore the amount of computation is much weakened. Numerical implementations are given to support our theoretical analysis on the convergence rate and to show the significant reconstruction improvements. (paper)

  2. Regularity criteria for the Navier–Stokes equations based on one component of velocity

    Czech Academy of Sciences Publication Activity Database

    Guo, Z.; Caggio, M.; Skalák, Zdeněk

    2017-01-01

    Roč. 35, June (2017), s. 379-396 ISSN 1468-1218 R&D Projects: GA ČR GA14-02067S Grant - others:Západočeská univerzita(CZ) SGS-2016-003; National Natural Science Foundation of China (CN) 11301394 Institutional support: RVO:67985874 Keywords : Navier–Stokes equations * regularity of solutions * regularity criteria * Anisotropic Lebesgue spaces Subject RIV: BK - Fluid Dynamics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 1.659, year: 2016

  3. Regularity criteria for the Navier–Stokes equations based on one component of velocity

    Czech Academy of Sciences Publication Activity Database

    Guo, Z.; Caggio, M.; Skalák, Zdeněk

    2017-01-01

    Roč. 35, June (2017), s. 379-396 ISSN 1468-1218 R&D Projects: GA ČR GA14-02067S Grant - others:Západočeská univerzita(CZ) SGS-2016-003; National Natural Science Foundation of China(CN) 11301394 Institutional support: RVO:67985874 Keywords : Navier–Stokes equations * regularity of solutions * regularity criteria * Anisotropic Lebesgue spaces Subject RIV: BK - Fluid Dynamics OBOR OECD: Fluids and plasma physics (including surface physics) Impact factor: 1.659, year: 2016

  4. Regular-fat dairy and human health

    DEFF Research Database (Denmark)

    Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas

    2016-01-01

    In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular fat dairy products and human health. In an effort to......, cheese and yogurt, can be important components of an overall healthy dietary pattern. Systematic examination of the effects of dietary patterns that include regular-fat milk, cheese and yogurt on human health is warranted....

  5. Aberration measurement of projection optics in lithographic tools based on two-beam interference theory

    International Nuclear Information System (INIS)

    Ma Mingying; Wang Xiangzhao; Wang Fan

    2006-01-01

    The degradation of image quality caused by aberrations of projection optics in lithographic tools is a serious problem in optical lithography. We propose what we believe to be a novel technique for measuring aberrations of projection optics based on two-beam interference theory. By utilizing the partial coherent imaging theory, a novel model that accurately characterizes the relative image displacement of a fine grating pattern to a large pattern induced by aberrations is derived. Both even and odd aberrations are extracted independently from the relative image displacements of the printed patterns by two-beam interference imaging of the zeroth and positive first orders. The simulation results show that by using this technique we can measure the aberrations present in the lithographic tool with higher accuracy

  6. Aberration measurement of projection optics in lithographic tools based on two-beam interference theory.

    Science.gov (United States)

    Ma, Mingying; Wang, Xiangzhao; Wang, Fan

    2006-11-10

    The degradation of image quality caused by aberrations of projection optics in lithographic tools is a serious problem in optical lithography. We propose what we believe to be a novel technique for measuring aberrations of projection optics based on two-beam interference theory. By utilizing the partial coherent imaging theory, a novel model that accurately characterizes the relative image displacement of a fine grating pattern to a large pattern induced by aberrations is derived. Both even and odd aberrations are extracted independently from the relative image displacements of the printed patterns by two-beam interference imaging of the zeroth and positive first orders. The simulation results show that by using this technique we can measure the aberrations present in the lithographic tool with higher accuracy.

  7. Inversion methods for fast-ion velocity-space tomography in fusion plasmas

    DEFF Research Database (Denmark)

    Jacobsen, Asger Schou; Stagner, L.; Salewski, Mirko

    2016-01-01

    Velocity-space tomography has been used to infer 2D fast-ion velocity distribution functions. Here we compare the performance of five different tomographic inversion methods: truncated singular value decomposition, maximum entropy, minimum Fisher information and zeroth- and first-order Tikhonov regularization. The inversion methods are applied to fast-ion Dα measurements taken just before and just after a sawtooth crash in the ASDEX Upgrade tokamak as well as to synthetic measurements from different test distributions. We find that the methods regularizing by penalizing steep gradients or maximizing entropy perform best. We assess the uncertainty of the calculated inversions taking into account photon noise, uncertainties in the forward model as well as uncertainties introduced by the regularization, which allows us to distinguish regions of high and low confidence in the tomographies. In high...
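
    For readers unfamiliar with the two Tikhonov variants compared above, a generic sketch follows; the matrix W, the test distribution and the noise level are random placeholders, not the fast-ion weight functions used in the paper:

```python
# Sketch: zeroth- and first-order Tikhonov inversions of a generic linear problem s = W f.
import numpy as np

rng = np.random.default_rng(1)
n = 50
W = rng.random((30, n))                                   # stand-in transfer (weight) matrix
f_true = np.exp(-0.5 * ((np.arange(n) - 25) / 5.0) ** 2)  # toy "true" distribution
s = W @ f_true + 0.01 * rng.standard_normal(30)

L0 = np.eye(n)                       # zeroth order: penalize ||f||
L1 = np.diff(np.eye(n), axis=0)      # first order: penalize the discrete gradient of f
lam = 1.0

def tikhonov(W, s, L, lam):
    # Solve min ||W f - s||^2 + lam^2 ||L f||^2 via the stacked least-squares system.
    A = np.vstack([W, lam * L])
    b = np.concatenate([s, np.zeros(L.shape[0])])
    return np.linalg.lstsq(A, b, rcond=None)[0]

f0 = tikhonov(W, s, L0, lam)   # zeroth-order Tikhonov solution
f1 = tikhonov(W, s, L1, lam)   # first-order (gradient-penalizing) solution
```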

  8. Similarity-transformed perturbation theory on top of truncated local coupled cluster solutions: Theory and applications to intermolecular interactions

    Energy Technology Data Exchange (ETDEWEB)

    Azar, Richard Julian, E-mail: julianazar2323@berkeley.edu; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu [Kenneth S. Pitzer Center for Theoretical Chemistry, Department of Chemistry, University of California and Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States)

    2015-05-28

    Your correspondents develop and apply fully nonorthogonal, local-reference perturbation theories describing non-covalent interactions. Our formulations are based on a Löwdin partitioning of the similarity-transformed Hamiltonian into a zeroth-order intramonomer piece (taking local CCSD solutions as its zeroth-order eigenfunction) plus a first-order piece coupling the fragments. If considerations are limited to a single molecule, the proposed intermolecular similarity-transformed perturbation theory represents a frozen-orbital variant of the “(2)”-type theories shown to be competitive with CCSD(T) and of similar cost if all terms are retained. Different restrictions on the zeroth- and first-order amplitudes are explored in the context of large-computation tractability and elucidation of non-local effects in the space of singles and doubles. To accurately approximate CCSD intermolecular interaction energies, a quadratically growing number of variables must be included at zeroth-order.

  9. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  10. Fabrication and characterization of one- and two-dimensional regular patterns produced employing multiple exposure holographic lithography

    DEFF Research Database (Denmark)

    Tamulevičius, S.; Jurkevičiute, A.; Armakavičius, N.

    2017-01-01

    In this paper we describe fabrication and characterization methods of two-dimensional periodic microstructures in photoresist with pitch of 1.2 μm and lattice constant 1.2-4.8 μm, formed using two-beam multiple exposure holographic lithography technique. The regular structures were recorded empl...

  11. Processing SPARQL queries with regular expressions in RDF databases

    Directory of Open Access Journals (Sweden)

    Cho Hune

    2011-03-01

    Full Text Available Abstract Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query language for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.

  12. Effects of regularly consuming dietary fibre rich soluble cocoa products on bowel habits in healthy subjects: a free-living, two-stage, randomized, crossover, single-blind intervention

    Directory of Open Access Journals (Sweden)

    Sarriá Beatriz

    2012-04-01

    Full Text Available Abstract Background Dietary fibre is both preventive and therapeutic for bowel functional diseases. Soluble cocoa products are good sources of dietary fibre that may be supplemented with this dietary component. This study assessed the effects of regularly consuming two soluble cocoa products (A and B) with different non-starch polysaccharide levels (NSP, 15.1 and 22.0% w/w, respectively) on bowel habits using subjective intestinal function and symptom questionnaires, a daily diary and a faecal marker in healthy individuals. Methods A free-living, two-stage, randomized, crossover, single-blind intervention was carried out in 44 healthy men and women, between 18-55 y old, who had not taken dietary supplements, laxatives, or antibiotics six months before the start of the study. In the four-week-long intervention stages, separated by a three-week wash-out stage, two servings of A and B, that provided 2.26 vs. 6.60 g/day of NSP respectively, were taken. In each stage, volunteers' diet was recorded using a 72-h food intake report. Results Regularly consuming cocoa A and B increased fibre intake, although only cocoa B significantly increased fibre intake (p ... Conclusions Regular consumption of the cocoa products increases dietary fibre intake to recommended levels and product B improves bowel habits. The use of both objective and subjective assessments to evaluate the effects of food on bowel habits is recommended.

  13. Elemental compositions of two extrasolar rocky planetesimals

    Energy Technology Data Exchange (ETDEWEB)

    Xu, S.; Jura, M.; Klein, B.; Zuckerman, B. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1562 (United States); Koester, D., E-mail: sxu@astro.ucla.edu, E-mail: jura@astro.ucla.edu, E-mail: kleinb@astro.ucla.edu, E-mail: ben@astro.ucla.edu, E-mail: koester@astrophysik.uni-kiel.de [Institut fur Theoretische Physik und Astrophysik, University of Kiel, D-24098 Kiel (Germany)

    2014-03-10

    We report Keck/HIRES and Hubble Space Telescope/COS spectroscopic studies of extrasolar rocky planetesimals accreted onto two hydrogen atmosphere white dwarfs, G29-38 and GD 133. In G29-38, eight elements are detected, including C, O, Mg, Si, Ca, Ti, Cr, and Fe while in GD 133, O, Si, Ca, and marginally Mg are seen. These two extrasolar planetesimals show a pattern of refractory enhancement and volatile depletion. For G29-38, the observed composition can be best interpreted as a blend of a chondritic object with some refractory-rich material, a result from post-nebular processing. Water is very depleted in the parent body accreted onto G29-38, based on the derived oxygen abundance. The inferred total mass accretion rate in GD 133 is the lowest of all known dusty white dwarfs, possibly due to non-steady state accretion. We continue to find that a variety of extrasolar planetesimals all resemble to zeroth order the elemental composition of bulk Earth.

  14. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  15. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  16. 1ST-ORDER NONADIABATIC COUPLING MATRIX-ELEMENTS FROM MULTICONFIGURATIONAL SELF-CONSISTENT-FIELD RESPONSE THEORY

    DEFF Research Database (Denmark)

    Bak, Keld L.; Jørgensen, Poul; Jensen, H.J.A.

    1992-01-01

    A new scheme for obtaining first-order nonadiabatic coupling matrix elements (FO-NACME) for multiconfigurational self-consistent-field (MCSCF) wave functions is presented. The FO-NACME are evaluated from residues of linear response functions. The residues involve the geometrical response of a ref... to the full configuration interaction limit. Comparisons are made with state-averaged MCSCF results for MgH2 and finite-difference configuration interaction by perturbation with multiconfigurational zeroth-order wave function selected by iterative process (CIPSI) results for BH.

  17. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
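
    A schematic of the L-curve idea on a small dense toy problem (not the actual GRACE normal equations, which need the Lanczos bidiagonalization approximation described above; the matrices and parameter grid below are invented):

```python
# Sketch of the L-curve: sweep the regularization parameter and record residual norm
# versus solution (semi-)norm; the corner of the resulting curve suggests a good parameter.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 40))          # toy design matrix
x_true = rng.standard_normal(40)
b = A @ x_true + 0.05 * rng.standard_normal(100)
R = np.eye(40)                              # stand-in regularization (constraint) matrix

lams = np.logspace(-4, 2, 30)
curve = []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * R.T @ R, A.T @ b)
    curve.append((np.linalg.norm(A @ x - b), np.linalg.norm(R @ x)))
# Plotting log(residual norm) against log(solution norm) traces the L-shaped curve,
# whose corner marks the parameter balancing data misfit against regularization.
```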

  18. Generalized Bregman distances and convergence rates for non-convex regularization methods

    International Nuclear Information System (INIS)

    Grasmair, Markus

    2010-01-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds, if the regularization term has a slightly faster growth at zero than |t|^p
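
    For orientation, the classical convex Bregman distance that the paper generalizes is, for a regularization functional R and a subgradient ξ ∈ ∂R(y):

```latex
% Classical Bregman distance of R between x and y, taken with respect to a subgradient xi of R at y
D_\xi(x, y) = R(x) - R(y) - \langle \xi,\, x - y \rangle , \qquad \xi \in \partial R(y).
```

    The paper replaces the subgradient term using notions from abstract convexity so that non-convex R can be treated; the formula above is only the standard convex starting point.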

  19. Orbital order and effective mass enhancement in t2g two-dimensional electron gases

    Science.gov (United States)

    Tolsma, John; Principi, Alessandro; Polini, Marco; MacDonald, Allan

    2015-03-01

    It is now possible to prepare d-electron two-dimensional electron gas systems that are confined near oxide heterojunctions and contain t2g electrons with a density much smaller than one electron per metal atom. I will discuss a generic model that captures all qualitative features of electron-electron interaction physics in t2g two-dimensional electron gas systems, and the use of a GW approximation to explore t2g quasiparticle properties in this new context. t2g electron gases contain a high density isotropic light mass xy component and low-density xz and yz anisotropic components with light and heavy masses in orthogonal directions. The high density light mass band screens interactions within the heavy bands. As a result the wave vector dependence of the self-energy is reduced and the effective mass is increased. When the density in the heavy bands is low, the difference in anisotropy between the two heavy bands favors orbital order. When orbital order does not occur, interactions still reshape the heavy-band Fermi surfaces. I will discuss these results in the context of recently reported magnetotransport experiments.

  20. Phosphatase activity tunes two-component system sensor detection threshold.

    Science.gov (United States)

    Landry, Brian P; Palanki, Rohan; Dyulgyarov, Nikola; Hartsough, Lucas A; Tabor, Jeffrey J

    2018-04-12

    Two-component systems (TCSs) are the largest family of multi-step signal transduction pathways in biology, and a major source of sensors for biotechnology. However, the input concentrations to which biosensors respond are often mismatched with application requirements. Here, we utilize a mathematical model to show that TCS detection thresholds increase with the phosphatase activity of the sensor histidine kinase. We experimentally validate this result in engineered Bacillus subtilis nitrate and E. coli aspartate TCS sensors by tuning their detection threshold up to two orders of magnitude. We go on to apply our TCS tuning method to recently described tetrathionate and thiosulfate sensors by mutating a widely conserved residue previously shown to impact phosphatase activity. Finally, we apply TCS tuning to engineer B. subtilis to sense and report a wide range of fertilizer concentrations in soil. This work will enable the engineering of tailor-made biosensors for diverse synthetic biology applications.
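
    A toy steady-state calculation, not the authors' model (the rate constants, names and numbers below are invented), showing why the half-maximal input rises with phosphatase activity:

```python
# Toy two-component-system model: the sensor kinase phosphorylates its response regulator
# at a rate proportional to the input signal and dephosphorylates it at rate k_phos.
# At steady state the phosphorylated fraction is k_kin*s / (k_kin*s + k_phos), so the
# half-maximal input (detection threshold) is s_50 = k_phos / k_kin, which grows with phosphatase activity.
import numpy as np

def steady_state_response(signal, k_kin=1.0, k_phos=0.1, rr_total=1.0):
    return rr_total * k_kin * signal / (k_kin * signal + k_phos)

signals = np.logspace(-3, 2, 6)
for k_phos in (0.01, 0.1, 1.0):          # increasing phosphatase activity
    s50 = k_phos / 1.0                   # threshold predicted by the toy model (k_kin = 1)
    resp = steady_state_response(signals, k_phos=k_phos)
    print(f"k_phos={k_phos:5.2f}  threshold~{s50:6.3f}  response={np.round(resp, 3)}")
```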

  1. HDE 245059: A WEAK-LINED T TAURI BINARY REVEALED BY CHANDRA AND KECK

    International Nuclear Information System (INIS)

    Baldovin-Saavedra, C.; Audard, M.; Duchene, G.; Guedel, M.; Skinner, S.L.; Paerels, F. B. S.; Ghez, A.; McCabe, C.

    2009-01-01

    We present the Chandra High Energy Transmission Grating Spectrometer and Keck observations of HDE 245059, a young weak-lined T Tauri star (WTTS), member of the pre-main-sequence group in the λ Orionis Cluster. Our high spatial resolution, near-infrared observations with Keck reveal that HDE 245059 is in fact a binary separated by 0.''87, probably composed of two WTTS based on their color indices. Based on this new information we have obtained an estimate of the masses of the binary components; ∼3 M_sun and ∼2.5 M_sun for the north and south components, respectively. We have also estimated the age of the system to be ∼2-3 Myr. We detect both components of the binary in the zeroth-order Chandra image and in the grating spectra. The light curves show X-ray variability of both sources and in particular a flaring event in the weaker southern component. The spectra of both stars show similar features: a combination of cool and hot plasma as demonstrated by several iron lines from Fe XVII to Fe XXV and a strong bremsstrahlung continuum at short wavelengths. We have fitted the combined grating and zeroth-order spectrum (considering the contribution of both stars) in XSPEC. The coronal abundances and emission measure distribution for the binary have been obtained using different methods, including a continuous emission measure distribution and a multi-temperature approximation. In all cases we have found that the emission is dominated by plasma between ∼8 and ∼15 MK; a soft component at ∼4 MK and a hard component at ∼50 MK are also detected. The value of the hydrogen column density was low, N_H ∼ 8 x 10^19 cm^-2, likely due to the clearing of the inner region of the λ Orionis cloud, where HDE 245059 is located. The abundance pattern shows an inverse first ionization potential effect for all elements from O to Fe, the only exception being Ca. To obtain the properties of the binary components, a 3-T model was fitted to the individual zeroth-order spectra

  2. Regular and chaotic motion of two dimensional electrons in a strong magnetic field

    International Nuclear Information System (INIS)

    Bar-Lev, Oded; Levit, Shimon.

    1992-05-01

    For a two-dimensional system of electrons in a strong magnetic field, a standard approximation is the projection on a single Landau level. The resulting Hamiltonian is commonly treated semiclassically. An important element in applying the semiclassical approximation is the integrability of the corresponding classical system. We discuss the relevant integrability conditions and give a simple example of a non-integrable system, two interacting electrons in the presence of two impurities, which exhibits a coexistence of regular and chaotic classical motions. Since the inverse of the magnetic field plays the role of the Planck constant in these problems, one has the opportunity to control the 'closeness' of chaotic physical systems to the classical limit. (author)

  3. Regularizations of two-fold bifurcations in planar piecewise smooth systems using blowup

    DEFF Research Database (Denmark)

    Kristiansen, Kristian Uldall; Hogan, S. J.

    2015-01-01

    type of limit cycle that does not appear to be present in the original PWS system. For both types of limit cycle, we show that the criticality of the Hopf bifurcation that gives rise to periodic orbits is strongly dependent on the precise form of the regularization. Finally, we analyse the limit cycles as locally unique families of periodic orbits of the regularization and connect them, when possible, to limit cycles of the PWS system. We illustrate our analysis with numerical simulations and show how the regularized system can undergo a canard explosion phenomenon...

  4. Transferring Instantly the State of Higher-Order Linear Descriptor (Regular) Differential Systems Using Impulsive Inputs

    Directory of Open Access Journals (Sweden)

    Athanasios D. Karageorgos

    2009-01-01

    Full Text Available In many applications, and generally speaking in many dynamical differential systems, the problem of transferring the initial state of the system to a desired state in (almost) zero time is desirable but difficult to achieve. Theoretically, this can be achieved by using a linear combination of the Dirac δ-function and its derivatives. Obviously, such an input is physically unrealizable. However, we can think of it approximately as a combination of small pulses of very high magnitude and infinitely small duration. In this paper, the approximation process of the distributional behaviour of higher-order linear descriptor (regular) differential systems is presented. Thus, new analytical formulae based on linear algebra methods and generalized inverses theory are provided. Our approach is quite general and some significant conditions are derived. Finally, a numerical example is presented and discussed.

  5. Remark on the role of the order principle in component-composite duality

    International Nuclear Information System (INIS)

    Zenczykowski, P.

    1981-06-01

    Amplitudes with external currents are considered in the framework of the topological bootstrap theory. In the order-preserving approximation the currents induce representation changing around the 'composite' level loop diagrams. The typical no-representation change prescription is restored at the 'composite' level only after taking into account order violating corrections. The possible connection between 'component' and 'composite' level amplitudes in anomaly matching conditions is considered. (author)

  6. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran; Desmal, Abdulla; Bagci, Hakan

    2016-01-01

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.
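
    The thresholded Landweber update mentioned above has the generic form sketched below on a toy sparse linear system; the operator, sparsity level, step size and threshold are placeholders, not the electromagnetic forward model of the paper:

```python
# Sketch: thresholded (soft-thresholding) Landweber iterations for a sparse linear problem A x = y.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)   # sparse toy unknown
y = A @ x_true + 0.01 * rng.standard_normal(60)

step = 1.0 / np.linalg.norm(A, 2) ** 2      # Landweber step bounded by 1/||A||_2^2
x = np.zeros(200)
for _ in range(500):
    # Landweber (gradient) step on ||A x - y||^2 followed by soft thresholding (sparsity constraint)
    x = soft_threshold(x + step * A.T @ (y - A @ x), step * 0.05)
```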

  7. Regular approach for generating van der Waals Cs coefficients to arbitrary orders

    International Nuclear Information System (INIS)

    Ovsiannikov, Vitali D; Mitroy, J

    2006-01-01

    A completely general formalism is developed to describe the energy E_disp = Σ_s C_s/R^s of dispersion interaction between two atoms in spherically symmetric states. Explicit expressions are given up to the tenth order of perturbation theory for the dispersion energy E_disp and dispersion coefficients C_s. The method could, in principle, be used to derive the expressions for any s while including all contributing orders of perturbation theory for asymptotic interaction between two atoms. The theory is applied to the calculation of the complete series up to s = 30 for two hydrogen atoms in their ground state. A pseudo-state series expansion of the two-atom Green function gives rapid convergence of the series for radial matrix elements. The numerical values of C_s are computed up to C_30 to a relative accuracy of 10^-7 or better. The dispersion coefficients for the hydrogen-antihydrogen interaction are obtained from the H-H coefficients by simply taking the absolute magnitude of C_s
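
    As a quick numerical illustration of such a dispersion series (written here in the common sign convention with positive coefficients; the H-H values below are approximate literature numbers quoted from memory, not taken from this record):

```python
# Toy evaluation of the leading dispersion terms for two ground-state hydrogen atoms.
C = {6: 6.499, 8: 124.4, 10: 3285.8}   # approximate H-H dispersion coefficients, atomic units

def dispersion_energy(R):
    """E_disp(R) = -C6/R^6 - C8/R^8 - C10/R^10 in atomic units (common sign convention)."""
    return -sum(c / R ** n for n, c in C.items())

for R in (10.0, 15.0, 20.0):
    print(f"R = {R:5.1f} a0   E_disp = {dispersion_energy(R):.3e} Eh")
```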

  8. Two-component feedback loops and deformed mechanics

    International Nuclear Information System (INIS)

    Tourigny, David S.

    2015-01-01

    It is shown that a general two-component feedback loop can be viewed as a deformed Hamiltonian system. Some of the implications of using ideas from theoretical physics to study biological processes are discussed. - Highlights: • Two-component molecular feedback loops are viewed as q-deformed Hamiltonian systems. • Deformations are reversed using Jackson derivatives to take advantage of working in the Hamiltonian limit. • New results are derived for the particular examples considered. • General deformations are suggested to be associated with a broader class of biological processes

  9. Ninth regular meeting of the International Working Group on Reliability of Reactor Pressure Components, Vienna, 18-20 October 1988

    International Nuclear Information System (INIS)

    1990-04-01

    The 9th regular meeting of the International Working Group on Reliability of Pressure Components took place from 18-20 October 1988 at the Agency's Headquarters. The meeting was attended by 25 representatives from 19 Member States and International Organizations. The agenda of the meeting included overviews of the national activities in the field of pressure retaining components of PWRs, review of the past IWGRRPC activities and updating of the working plan for years 1989-1992. A great deal of attention was paid to the involvement of the IWGRRPC in the Agency's programme on nuclear power plant ageing and life extension. Members of the IWGRRPC reviewed the long term plan of the activities and proposed a provisional list and scope of the IAEA Specialists' Meetings planned for the period 1989-1992. Seventeen papers were presented at the meeting. A separate abstract was prepared for each of these papers. Refs, figs and tabs

  10. Two-pass greedy regular expression parsing

    DEFF Research Database (Denmark)

    Grathwohl, Niels Bjørn Bugge; Henglein, Fritz; Nielsen, Lasse

    2013-01-01

    We present new algorithms for producing greedy parses for regular expressions (REs) in a semi-streaming fashion. Our lean-log algorithm executes in time O(mn) for REs of size m and input strings of size n and outputs a compact bit-coded parse tree representation. It improves on previous algorithms by: operating in only 2 passes; using only O(m) words of random-access memory (independent of n); requiring only kn bits of sequentially written and read log storage, where k ... and not requiring it to be stored at all. Previous RE parsing algorithms do not scale linearly with input size, or require substantially more log storage and employ 3 passes where the first consists of reversing the input, or do not or are not known to produce a greedy parse. The performance of our unoptimized C...

  11. DFT and experimental studies on structure and spectroscopic parameters of 3,6-diiodo-9-ethyl-9H-carbazole

    DEFF Research Database (Denmark)

    Radula-Janik, Klaudia; Kupka, Teobald; Ejsmont, Krzysztof

    2016-01-01

    The first report on crystal and molecular structure of 3,6-diiodo-9-ethyl-9H-carbazole is presented. Experimental room-temperature X-ray and 13C chemical shift studies were supported by advanced theoretical calculations using density functional theory (DFT). The 13C nuclear magnetic shieldings were predicted at the non-relativistic and relativistic level of theory using the zeroth-order regular approximation (ZORA). Theoretical relativistic calculations of chemical shifts of carbons C3 and C6, directly bonded to iodine atoms, produced a reasonable agreement with experiment (initial deviation from...

  12. Two New Multi-component BKP Hierarchies

    International Nuclear Information System (INIS)

    Wu Hongxia; Liu Xiaojun; Zeng Yunbo

    2009-01-01

    We firstly propose two kinds of new multi-component BKP (mcBKP) hierarchy based on the eigenfunction symmetry reduction and nonstandard reduction, respectively. The first one contains two types of BKP equation with self-consistent sources whose Lax representations are presented. The two mcBKP hierarchies both admit reductions to the k-constrained BKP hierarchy and to integrable (1+1)-dimensional hierarchy with self-consistent sources, which include two types of SK equation with self-consistent sources and of bi-directional SK equations with self-consistent sources.

  13. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  14. Finite difference order doubling in two dimensions

    International Nuclear Information System (INIS)

    Killingbeck, John P; Jolicard, Georges

    2008-01-01

    An order doubling process previously used to obtain eighth-order eigenvalues from the fourth-order Numerov method is applied to the perturbed oscillator in two dimensions. A simple method of obtaining high order finite difference operators is reported and an odd parity boundary condition is found to be effective in facilitating the smooth operation of the order doubling process

  15. Two component micro injection moulding for moulded interconnect devices

    DEFF Research Database (Denmark)

    Islam, Aminul

    2008-01-01

    Moulded interconnect devices (MIDs) contain huge possibilities for many applications in micro electro-mechanical-systems because of their capability of reducing the number of components, process steps and finally in miniaturization of the product. Among the available MID process chains, two component injection moulding is one of the most industrially adaptive processes. However, the use of two component injection moulding for MID fabrication, with circuit patterns in the sub-millimeter range, is still a big challenge at the present state of technology. The scope of the current Ph.D. project...... and a reasonable adhesion between them. • Selective metallization of the two component plastic part (coating one polymer with metal and leaving the other one uncoated) To overcome these two main issues in MID fabrication for micro applications, the current Ph.D. project explores the technical difficulties...

  16. Use of regularized algebraic methods in tomographic reconstruction

    International Nuclear Information System (INIS)

    Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.

    1997-01-01

    The algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. We have tested on a phantom the use of a regularization (a priori introduction of information), as well as the taking into account of spatial resolution variation with the depth (SRVD). Hence, we have compared the performances of the two methods by back-projection filtering (BPF) and of the two algebraic methods (AM) in terms of FWHM (by means of a point source), of the reduction of background noise (σ/m) on the homogeneous part of Jaszczak's phantom and of reconstruction speed (time unit = BPF). The BPF methods make use of a grade filter (maximal resolution, no noise treatment), single or associated with a Hann's low-pass (f_c = 0.4), as well as of an attenuation correction. The AM which embody attenuation and scattering corrections are, on one side, the OS EM (Ordered Subsets, partitioning and rearranging of the projection matrix; Expectation Maximization) without regularization or SRVD correction, and, on the other side, the OS MAP EM (Maximum a posteriori), regularized and embodying the SRVD correction. A table is given containing for each used method (grade, Hann, OS EM and OS MAP EM) the values of FWHM, σ/m and time, respectively. One can observe that the OS MAP EM algebraic method allows improving both the resolution, by taking the SRVD into account in the reconstruction process, and the noise treatment, by regularization. In addition, due to the OS technique the reconstruction times are acceptable

  17. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran

    2016-04-10

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  18. Order in nuclei and transition to chaos

    International Nuclear Information System (INIS)

    Soloviev, V.G.

    1995-01-01

    Based on the statement that there is order in the large and chaos in the small components of nuclear wave functions, the order-to-chaos transition is treated as a transition from the large to small components of wave functions. Therefore, experimental investigation of fragmentation of the many-quasiparticle and quasiparticle-phonon states plays a decisive role. The mixing of closely-spaced states having the same K π in the doubly even well-deformed nuclei is investigated. The quasiparticle-phonon interaction is responsible for fragmentation of the quasiparticle and phonon states and therefore for their mixing. Experimental investigation of the strength distribution of the many-quasiparticle and quasiparticle-phonon states should discover a new region of regularity in nuclei at intermediate excitation energies. A chaotic behaviour of nuclear states can be shifted to higher excitation energies. (author). 21 refs., 1 fig., 1 tab

  19. Order in nuclei and transition to chaos

    International Nuclear Information System (INIS)

    Soloviev, V.G.

    1995-01-01

    Based on the statement that there is order in the large and chaos in the small components of nuclear wave functions, the order-to-chaos transition is treated as a transition from the large to small components of wave functions. Therefore, experimental investigation of fragmentation of the many-quasiparticle and quasiparticle-phonon states plays a decisive role. The mixing of closely-spaced states having the same K π in the doubly even well-deformed nuclei is investigated. The quasiparticle-phonon interaction is responsible for fragmentation of the quasiparticle and phonon states and therefore for their mixing. Experimental investigation of the strength distribution of the many-quasiparticle and quasiparticle-phonon states should discover a new region of regularity in nuclei at intermediate excitation energies. A chaotic behaviour of nuclear states can be shifted to higher excitation energies. (author). 21 refs., 1 fig., 1 tab

  20. Short and medium range order in two-component silica glasses by positron annihilation spectroscopy

    International Nuclear Information System (INIS)

    Inoue, K.; Kataoka, H.; Nagai, Y.; Hasegawa, M.; Kobayashi, Y.

    2014-01-01

    The dependence of chemical composition on the average sizes of subnanometer-scale intrinsic structural open spaces surrounded by glass random networks in two-component silica-based glasses was investigated systematically using positronium (Ps) confined in the open spaces. The average sizes of the open spaces for SiO2-B2O3 and SiO2-GeO2 glasses are only slightly dependent on the chemical compositions because the B2O3 and GeO2 are glass network formers that are incorporated into the glass network of the base SiO2. However, the open space sizes for all SiO2-R2O (R = Li, Na, K) glasses, where R2O is a glass network modifier that occupies the open spaces, decrease rapidly with an increase in the R2O concentration. Despite the large difference in the ionic radii of the alkali metal (R) atoms, the open space sizes decrease similarly for all the alkali metal atoms studied. This dependence of the chemical composition on the open space sizes in SiO2-R2O observed by Ps shows that the alkali metal atoms do not randomly occupy the structural open spaces, but filling of the open spaces by R2O proceeds selectively from the larger to the smaller open spaces as the R2O concentrations are increased.

  1. Spin-excited oscillations in two-component fermion condensates

    International Nuclear Information System (INIS)

    Maruyama, Tomoyuki; Bertsch, George F.

    2006-01-01

    We investigate collective spin excitations in two-component fermion condensates with special consideration of unequal populations of the two components. The frequencies of monopole and dipole modes are calculated using Thomas-Fermi theory and the scaling approximation. As the fermion-fermion coupling is varied, the system shows various phases of the spin configuration. We demonstrate that spin oscillations have more sensitivity to the spin phase structures than the density oscillations

  2. A higher order space-time Galerkin scheme for time domain integral equations

    KAUST Repository

    Pray, Andrew J.

    2014-12-01

    Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.

  3. A higher order space-time Galerkin scheme for time domain integral equations

    KAUST Repository

    Pray, Andrew J.; Beghein, Yves; Nair, Naveen V.; Cools, Kristof; Bagci, Hakan; Shanker, Balasubramaniam

    2014-01-01

    Stability of time domain integral equation (TDIE) solvers has remained an elusive goal for many years. Advancement of this research has largely progressed on four fronts: 1) Exact integration, 2) Lubich quadrature, 3) smooth temporal basis functions, and 4) space-time separation of convolutions with the retarded potential. The latter method's efficacy in stabilizing solutions to the time domain electric field integral equation (TD-EFIE) was previously reported for first-order surface descriptions (flat elements) and zeroth-order functions as the temporal basis. In this work, we develop the methodology necessary to extend the scheme to higher order surface descriptions as well as to enable its use with higher order basis functions in both space and time. These basis functions are then used in a space-time Galerkin framework. A number of results are presented that demonstrate convergence in time. The viability of the space-time separation method in producing stable results is demonstrated experimentally for these examples.

  4. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  5. On the regularity of the covariance matrix of a discretized scalar field on the sphere

    Energy Technology Data Exchange (ETDEWEB)

    Bilbao-Ahedo, J.D. [Departamento de Física Moderna, Universidad de Cantabria, Av. los Castros s/n, 39005 Santander (Spain); Barreiro, R.B.; Herranz, D.; Vielva, P.; Martínez-González, E., E-mail: bilbao@ifca.unican.es, E-mail: barreiro@ifca.unican.es, E-mail: herranz@ifca.unican.es, E-mail: vielva@ifca.unican.es, E-mail: martinez@ifca.unican.es [Instituto de Física de Cantabria (CSIC-UC), Av. los Castros s/n, 39005 Santander (Spain)

    2017-02-01

    We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In a particular situation, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors in the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
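
    A minimal numerical sketch of the rank constraint discussed above, using the standard relation C_ij = Σ_l (2l+1)/(4π) C_l P_l(cos θ_ij) for a band-limited isotropic field; the pixel positions, band limit and power spectrum below are illustrative choices, not those of the paper:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def pixel_covariance(vecs, cl):
    """C_ij = sum_l (2l+1)/(4 pi) * C_l * P_l(cos theta_ij) for unit vectors vecs."""
    cos_t = np.clip(vecs @ vecs.T, -1.0, 1.0)
    ell = np.arange(len(cl))
    return legval(cos_t, (2 * ell + 1) / (4 * np.pi) * cl)

rng = np.random.default_rng(0)
lmax = 5                                        # band limit -> (lmax+1)**2 = 36 modes
cl = 1.0 / (1.0 + np.arange(lmax + 1)) ** 2     # toy angular power spectrum
for npix in (20, 36, 60):                       # below, at and above the mode count
    v = rng.normal(size=(npix, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # random pixel centres
    C = pixel_covariance(v, cl)
    print(npix, np.linalg.matrix_rank(C), min(npix, (lmax + 1) ** 2))
```

    With more pixels than spherical-harmonic modes the matrix becomes rank deficient, which is the kind of constraint the analytical expressions in the paper formalize.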

  6. Regularizations: different recipes for identical situations

    International Nuclear Information System (INIS)

    Gambin, E.; Lobo, C.O.; Battistel, O.A.

    2004-03-01

    We present a discussion where the choice of the regularization procedure and the routing for the internal lines momenta are put at the same level of arbitrariness in the analysis of Ward identities involving simple and well-known problems in QFT. They are the complex self-interacting scalar field and two simple models where the SVV and AVV processes are pertinent. We show that, in all these problems, the conditions for the preservation of symmetry relations are put in terms of the same combination of divergent Feynman integrals, which are evaluated in the context of a very general calculational strategy, concerning the manipulations and calculations involving divergences. Within the adopted strategy, all the arbitrariness intrinsic to the problem is still maintained in the final results and, consequently, a perfect map can be obtained with the corresponding results of the traditional regularization techniques. We show that, when we require a universal interpretation for the arbitrariness involved, in order to get consistency with all stated physical constraints, a strong condition is imposed on regularizations which automatically eliminates the ambiguities associated with the routing of the internal lines momenta of loops. The conclusion is clean and sound: the association between ambiguities and unavoidable symmetry violations in Ward identities cannot be maintained if a unique recipe is required for identical situations in the evaluation of divergent physical amplitudes. (author)

  7. Ordering dynamics with two non-excluding options: bilingualism in language competition

    Science.gov (United States)

    Castelló, Xavier; Eguíluz, Víctor M.; San Miguel, Maxi

    2006-12-01

    We consider an extension of the voter model in which a set of interacting elements (agents) can be in either of two equivalent states (A or B) or in a third additional mixed (AB) state. The model is motivated by studies of language competition dynamics, where the AB state is associated with bilingualism. We study the ordering process and associated interface and coarsening dynamics in regular lattices and small world networks. Agents in the AB state define the interfaces, changing the interfacial noise driven coarsening of the voter model to curvature driven coarsening. This change in the coarsening mechanism is also shown to originate for a class of perturbations of the voter model dynamics. When interaction is through a small world network the AB agents restore coarsening, eliminating the metastable states of the voter model. The characteristic time to reach the absorbing state scales with system size as τ ~ ln N, to be compared with the result τ ~ N for the voter model in a small world network.

  8. Low-order longitudinal modes of single-component plasmas

    International Nuclear Information System (INIS)

    Tinkle, M.D.; Greaves, R.G.; Surko, C.M.

    1995-01-01

    The low-order modes of spheroidal, pure electron plasmas have been studied experimentally, both in a cylindrical electrode structure and in a quadrupole trap. Comparison is made between measurements of mode frequencies, recent analytical theories, and numerical simulations. Effects considered include trap anharmonicity, image charges, and temperature. Quantitative agreement is obtained between the predictions and these measurements for spheroidal plasmas in the quadrupole trap. In many experiments on single-component plasmas, including antimatter plasmas, the standard diagnostic techniques used to measure the density and temperature are not appropriate. A new method is presented for determining the size, shape, average density, and temperature of a plasma confined in a Penning trap from measurements of the mode frequencies. copyright 1995 American Institute of Physics

  9. A combined reconstruction-classification method for diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
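
    A toy sketch of the alternation described above, with a fixed linear operator standing in for the nonlinear DOT forward model and a two-class Gaussian mixture fitted by a few EM updates; every dimension and parameter here is illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m_obs, noise = 64, 40, 0.05
x_true = np.zeros(n); x_true[20:35] = 1.0          # piecewise-constant "image"
A = rng.normal(size=(m_obs, n))                    # stand-in linear forward model
y = A @ x_true + noise * rng.normal(size=m_obs)

mu, var, w = np.array([0.0, 1.0]), np.array([0.2, 0.2]), np.array([0.5, 0.5])
m_prior, v_prior = np.zeros(n), np.ones(n)         # flat prior for the first pass
for _ in range(20):
    # reconstruction step: Tikhonov regularization with variable mean and variance
    W = np.diag(1.0 / v_prior)
    x = np.linalg.solve(A.T @ A / noise**2 + W, A.T @ y / noise**2 + W @ m_prior)
    # E-step: class responsibilities of each pixel under the current mixture
    pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = pdf + 1e-12
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update the mixture parameters from the current reconstruction
    Nk = r.sum(axis=0)
    w, mu = Nk / n, (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk + 1e-4
    # the classification feeds back into the prior used by the next reconstruction
    m_prior, v_prior = r @ mu, r @ var + 1e-4
print(np.max(np.abs(x - x_true)))                  # reconstruction error of the toy example
```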

  10. Hessian regularization based symmetric nonnegative matrix factorization for clustering gene expression and microbiome data.

    Science.gov (United States)

    Ma, Yuanyuan; Hu, Xiaohua; He, Tingting; Jiang, Xingpeng

    2016-12-01

    Nonnegative matrix factorization (NMF) has received considerable attention due to its interpretation of observed samples as combinations of different components, and has been successfully used as a clustering method. As an extension of NMF, Symmetric NMF (SNMF) inherits the advantages of NMF. Unlike NMF, however, SNMF takes a nonnegative similarity matrix as an input, and two lower rank nonnegative matrices (H, H^T) are computed as an output to approximate the original similarity matrix. Laplacian regularization has improved the clustering performance of NMF and SNMF. However, Laplacian regularization (LR), as a classic manifold regularization method, suffers some problems because of its weak extrapolating ability. In this paper, we propose a novel variant of SNMF, called Hessian regularization based symmetric nonnegative matrix factorization (HSNMF), for this purpose. In contrast to Laplacian regularization, Hessian regularization fits the data perfectly and extrapolates nicely to unseen data. We conduct extensive experiments on several datasets including text data, gene expression data and HMP (Human Microbiome Project) data. The results show that the proposed method outperforms other methods, which suggests the potential application of HSNMF in biological data clustering. Copyright © 2016. Published by Elsevier Inc.
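
    As a rough sketch of the factorization being regularized here, the snippet below minimizes ||A - H H^T||_F^2 + λ tr(H^T M H) with H ≥ 0 by projected gradient descent; the penalty matrix M is a placeholder for a graph-Laplacian or Hessian-energy regularizer, and all sizes and parameters are illustrative:

```python
import numpy as np

def regularized_snmf(A, k, M, lam=0.1, eta=1e-3, iters=2000, seed=0):
    """Minimize ||A - H H^T||_F^2 + lam * tr(H^T M H) with H >= 0 by
    projected gradient descent (a generic stand-in for (H)SNMF solvers)."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        grad = 4 * (H @ (H.T @ H) - A @ H) + 2 * lam * (M @ H)
        H = np.maximum(H - eta * grad, 0.0)        # gradient step, then project to >= 0
    return H

rng = np.random.default_rng(1)
A = np.kron(np.eye(2), np.ones((4, 4))) + 0.05 * rng.random((8, 8))
A = (A + A.T) / 2                                  # toy two-block similarity matrix
M = np.eye(8)                                      # placeholder regularizer matrix
H = regularized_snmf(A, k=2, M=M)
print(H.argmax(axis=1))                            # cluster assignments per sample
```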

  11. Regular behaviors in SU(2) Yang-Mills classical mechanics

    International Nuclear Information System (INIS)

    Xu Xiaoming

    1997-01-01

    In order to study regular behaviors in high-energy nucleon-nucleon collisions, a representation of the vector potential A_i^a is defined with respect to the (a,i)-dependence in the SU(2) Yang-Mills classical mechanics. Equations of the classical infrared field as well as effective potentials are derived for the elastic or inelastic collision of two plane waves in a three-mode model and the decay of an excited spherically-symmetric field

  12. Block correlated second order perturbation theory with a generalized valence bond reference function

    International Nuclear Information System (INIS)

    Xu, Enhua; Li, Shuhua

    2013-01-01

    The block correlated second-order perturbation theory with a generalized valence bond (GVB) reference (GVB-BCPT2) is proposed. In this approach, each geminal in the GVB reference is considered as a “multi-orbital” block (a subset of spin orbitals), and each occupied or virtual spin orbital is also taken as a single block. The zeroth-order Hamiltonian is set to be the summation of the individual Hamiltonians of all blocks (with explicit two-electron operators within each geminal) so that the GVB reference function and all excited configuration functions are its eigenfunctions. The GVB-BCPT2 energy can be directly obtained without iteration, just like the second order Møller–Plesset perturbation method (MP2), both of which are size consistent. We have applied this GVB-BCPT2 method to investigate the equilibrium distances and spectroscopic constants of 7 diatomic molecules, conformational energy differences of 8 small molecules, and bond-breaking potential energy profiles in 3 systems. GVB-BCPT2 is demonstrated to have noticeably better performance than MP2 for systems with significant multi-reference character, and provide reasonably accurate results for some systems with large active spaces, which are beyond the capability of all CASSCF-based methods

  13. Block correlated second order perturbation theory with a generalized valence bond reference function.

    Science.gov (United States)

    Xu, Enhua; Li, Shuhua

    2013-11-07

    The block correlated second-order perturbation theory with a generalized valence bond (GVB) reference (GVB-BCPT2) is proposed. In this approach, each geminal in the GVB reference is considered as a "multi-orbital" block (a subset of spin orbitals), and each occupied or virtual spin orbital is also taken as a single block. The zeroth-order Hamiltonian is set to be the summation of the individual Hamiltonians of all blocks (with explicit two-electron operators within each geminal) so that the GVB reference function and all excited configuration functions are its eigenfunctions. The GVB-BCPT2 energy can be directly obtained without iteration, just like the second order Møller-Plesset perturbation method (MP2), both of which are size consistent. We have applied this GVB-BCPT2 method to investigate the equilibrium distances and spectroscopic constants of 7 diatomic molecules, conformational energy differences of 8 small molecules, and bond-breaking potential energy profiles in 3 systems. GVB-BCPT2 is demonstrated to have noticeably better performance than MP2 for systems with significant multi-reference character, and provide reasonably accurate results for some systems with large active spaces, which are beyond the capability of all CASSCF-based methods.

  14. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.

  15. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity in part because it takes a strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc has yet no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
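
    The ridge idea invoked above can be illustrated outside the full PLSc machinery: when the correlations among (latent) predictors are close to singular, adding a small ridge constant before inversion stabilizes the path-coefficient estimates. The snippet below is a generic illustration under that assumption, not the PLSc algorithm itself; the ridge value is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
f = rng.normal(size=n)
# two highly collinear "latent" predictors and an outcome
x1 = f + 0.05 * rng.normal(size=n)
x2 = f + 0.05 * rng.normal(size=n)
y = 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

X = np.column_stack([x1, x2])
Rxx = np.corrcoef(X, rowvar=False)            # correlations among predictors
rxy = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(2)])

beta_plain = np.linalg.solve(Rxx, rxy)        # unstable under multicollinearity
lam = 0.1                                     # ridge constant (illustrative)
beta_ridge = np.linalg.solve(Rxx + lam * np.eye(2), rxy)
print("condition number:", np.linalg.cond(Rxx))
print("plain:", beta_plain, " ridge:", beta_ridge)
```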

  16. Unified parametrization for quark and lepton mixing angles

    International Nuclear Information System (INIS)

    Rodejohann, Werner

    2009-01-01

    We propose a new parametrization for the quark and lepton mixing matrices: the two 12-mixing angles (the Cabibbo angle and the angle responsible for solar neutrino oscillations) are at zeroth order π/12 and π/5, respectively. The resulting 12-elements in the CKM and PMNS matrices, V_us and U_e2, are in this order irrational but simple algebraic numbers. We note that the cosine of π/5 is the golden ratio divided by two. The difference between π/5 and the observed best-fit value of solar neutrino mixing is of the same order as the difference between the observed value and the one for tri-bimaximal mixing. In order to reproduce the central values of current fits, corrections to the zeroth order expressions are necessary. They are small and of the same order and sign for quarks and leptons. We parametrize the perturbations to the CKM and PMNS matrices in a 'triminimal' way, i.e., with three small rotations in an order corresponding to the order of the rotations in the PDG-description of mixing matrices
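
    The zeroth-order numbers quoted above are easy to check numerically: cos(π/5) equals half the golden ratio, and sin(π/12) and sin²(π/5) can be compared with rough experimental values for V_us and the solar mixing angle (the reference numbers 0.225 and 0.31 below are approximate and not taken from this paper):

```python
import math

phi = (1 + math.sqrt(5)) / 2                       # golden ratio
print(math.cos(math.pi / 5), phi / 2)              # both 0.8090..., as claimed

# zeroth-order Cabibbo angle pi/12 versus a rough experimental V_us
print("sin(pi/12) =", math.sin(math.pi / 12), " V_us ~ 0.225")

# zeroth-order solar angle pi/5 versus a rough experimental sin^2(theta_12)
print("sin^2(pi/5) =", math.sin(math.pi / 5) ** 2, " sin^2(theta_12) ~ 0.31")
```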

  17. Zeroth-order flutter prediction for cantilevered plates in supersonic flow

    CSIR Research Space (South Africa)

    Meijer, M-C

    2015-08-01

    Full Text Available An aeroelastic prediction framework in MATLAB with modularity in the quasi-steady aerodynamic methodology is developed. Local piston theory (LPT) is integrated with quasi-steady methods including shock-expansion theory and the Supersonic Hypersonic...

  18. Two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images.

    Science.gov (United States)

    He, Lifeng; Chao, Yuyan; Suzuki, Kenji

    2011-08-01

    Whenever one wants to distinguish, recognize, and/or measure objects (connected components) in binary images, labeling is required. This paper presents two efficient label-equivalence-based connected-component labeling algorithms for 3-D binary images. One is voxel based and the other is run based. For the voxel-based one, we present an efficient method of deciding the order for checking voxels in the mask. For the run-based one, instead of assigning each foreground voxel, we assign each run a provisional label. Moreover, we use run data to label foreground voxels without scanning any background voxel in the second scan. Experimental results have demonstrated that our voxel-based algorithm is efficient for 3-D binary images with complicated connected components, that our run-based one is efficient for those with simple connected components, and that both are much more efficient than conventional 3-D labeling algorithms.
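
    A compact illustration of label-equivalence labeling for a 3-D binary volume is sketched below; it is a plain two-scan union-find version with 6-connectivity, not the optimized voxel-based or run-based schemes of the paper:

```python
import numpy as np

def label_3d(vol):
    """Two-scan connected-component labeling (6-connectivity) with union-find."""
    parent = [0]                                   # index 0 is the background
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]          # path halving
            a = parent[a]
        return a
    labels = np.zeros(vol.shape, dtype=int)
    nz, ny, nx = vol.shape
    for z in range(nz):                            # first scan: provisional labels
        for y in range(ny):
            for x in range(nx):
                if not vol[z, y, x]:
                    continue
                neigh = [labels[z - 1, y, x] if z else 0,
                         labels[z, y - 1, x] if y else 0,
                         labels[z, y, x - 1] if x else 0]
                roots = sorted({find(l) for l in neigh if l})
                if not roots:
                    parent.append(len(parent))     # new provisional label
                    labels[z, y, x] = len(parent) - 1
                else:
                    labels[z, y, x] = roots[0]
                    for l in roots[1:]:            # record label equivalences
                        parent[l] = roots[0]
    lut = np.array([find(l) for l in range(len(parent))])
    return lut[labels]                             # second scan: resolve equivalences

vol = np.zeros((3, 4, 4), dtype=bool)
vol[0, 0, 0] = vol[1, 0, 0] = True                 # one component spanning two slices
vol[2, 3, 3] = True                                # one isolated voxel
print(len(np.unique(label_3d(vol))) - 1, "components")   # expect 2
```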

  19. Two component memory of Rotstein effect in nuclear emulsions

    International Nuclear Information System (INIS)

    Gushchin, E.M.; Lebedev, A.N.; Somov, S.V.; Timofeev, M.K.; Tipografshchik, G.I.

    1991-01-01

    Two sharply differing memory components - fast and slow - are simultaneously detected during investigation into the controlled mode of fast charged particle detection in simple nuclear emulsions, with the emulsion trace sensitivity corresponding to these components being about 5 times different. The value of the memory time is T_m ≅ 40 μs for the fast memory and T_m ≅ 3.5 ms for the slow one. The detection of two Rotstein effect memory components confirms the correctness of the trap model

  20. Magnetic ordering of GdMn2

    International Nuclear Information System (INIS)

    Ouladdiaf, B.; Ritter, C.; Ballou, R.; Deportes, J.

    1999-01-01

    Complete text of publication follows. GdMn2 crystallizes in the C15 cubic Laves phase structure. Within this structure Mn atoms lie at the vertices of regular tetrahedra stacked in the diamond arrangement connected by sharing vertices, leading to a strong geometric frustration. An antiferromagnetic order sets in below T_N ∼ 105 K. It gives rise to a large magnetovolume effect (ΔV/V ∼ 1%). Thermal expansion data show two anomalies at 105 K and 35 K. The second anomaly was often interpreted as the ferromagnetic ordering of the Gd sublattice. Moessbauer data indicate, however, that the Gd sublattice orders at T_N ∼ 105 K as the Mn moments do. Elastic neutron scattering measurements were performed using a short wavelength neutron beam (λ = 0.5 Å) on D9 at ILL. No magnetic contribution to the nuclear peaks was found, excluding thereby any K = [0 0 0] component. However, antiferromagnetic peaks indexed by a propagation vector [2/3 2/3 0] were observed, leading to a non collinear magnetic arrangement of both Mn and Gd sublattices. The results are discussed by invoking the geometric frustration associated with the Mn atomic packing and the singlet state of the Gd ions. (author)

  1. DOE Order 5480.28 natural phenomena hazards mitigation system, structure, component database

    International Nuclear Information System (INIS)

    Conrads, T.J.

    1997-01-01

    This document describes the Prioritization Phase Database that was prepared for the Project Hanford Management Contractors to support the implementation of DOE Order 5480.28. Included within this document are three appendices which contain the prioritized list of applicable Project Hanford Management Contractors Systems, Structures, and Components. These appendices include those assets that comply with the requirements of DOE Order 5480.28, assets for which a waiver will be recommended, and assets requiring additional information before compliance can be ascertained

  2. 1/n Expansion for the Number of Matchings on Regular Graphs and Monomer-Dimer Entropy

    Science.gov (United States)

    Pernici, Mario

    2017-08-01

    Using a 1/n expansion, that is an expansion in descending powers of n, for the number of matchings in regular graphs with 2n vertices, we study the monomer-dimer entropy for two classes of graphs. We study the difference between the extensive monomer-dimer entropy of a random r-regular graph G (bipartite or not) with 2n vertices and the average extensive entropy of r-regular graphs with 2n vertices, in the limit n → ∞. We find a series expansion for it in the numbers of cycles; with probability 1 it converges for dimer density p < 1 and diverges as |ln(1-p)| for p → 1. In the case of regular lattices, we similarly expand the difference between the specific monomer-dimer entropy on a lattice and the one on the Bethe lattice; we write down its Taylor expansion in powers of p through the order 10, expressed in terms of the number of totally reducible walks which are not tree-like. We prove through order 6 that its expansion coefficients in powers of p are non-negative.

  3. A Class of Two-Component Adler—Bobenko—Suris Lattice Equations

    International Nuclear Information System (INIS)

    Fu Wei; Zhang Da-Jun; Zhou Ru-Guang

    2014-01-01

    We study a class of two-component forms of the famous list of the Adler—Bobenko—Suris lattice equations. The obtained two-component lattice equations are still consistent around the cube and they admit solutions with ‘jumping properties’ between two levels. (general)

  4. Regularization of divergent integrals

    OpenAIRE

    Felder, Giovanni; Kazhdan, David

    2016-01-01

    We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.

  5. Cold component flow in a two-component mirror machine

    International Nuclear Information System (INIS)

    Rognlien, T.D.

    1975-12-01

    Steady-state solutions are given for the flow characteristics along the magnetic field of the cold plasma component in a two-component mirror machine. The hot plasma component is represented by a fixed density profile. The fluid equations are used to describe the cold plasma, which is assumed to be generated in a localized region at one end of the machine. The ion flow speed, v_i, is required to satisfy the Bohm sheath condition at the end walls, i.e., v_i ≥ c_s, where c_s is the ion-acoustic speed. For the case when the cold plasma density, n_c, is much less than the hot plasma density, n_h, the cold plasma is stagnant and does not penetrate through the machine in the zero temperature case. The effect of a finite temperature is to allow for the penetration of a small amount of cold plasma through the machine. For the density range n_c ≈ n_h, the flow solutions are asymmetric about the midplane and have v_i = c_s near the midplane. Finally, for n_c much greater than n_h, the solutions become symmetric about the midplane and approach the Lee-McNamara type solutions with v_i = c_s near the mirror throats

  6. Output regularization of SVM seizure predictors: Kalman Filter versus the "Firing Power" method.

    Science.gov (United States)

    Teixeira, Cesar; Direito, Bruno; Bandarabadi, Mojtaba; Dourado, António

    2012-01-01

    Two methods for output regularization of support vector machine (SVM) classifiers were applied for seizure prediction in 10 patients with long-term annotated data. The output of the classifiers was regularized by two methods: one based on the Kalman Filter (KF) and the other based on a measure called the "Firing Power" (FP). The FP is a quantification of the rate of classification in the preictal class over a past time window. In order to enable the application of the KF, the classification problem was subdivided into two two-class problems, and the real-valued output of the SVMs was considered. The results indicate that the FP method raises fewer false alarms than the KF approach. However, the KF approach presents a higher sensitivity, but the high number of false alarms makes its applicability negligible in some situations.
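
    A minimal sketch of the "Firing Power" idea, read as a sliding-window average of binary preictal classifications with an alarm raised when the average crosses a threshold; the window length and threshold below are arbitrary illustrative choices:

```python
import numpy as np

def firing_power(preictal_flags, window):
    """Fraction of preictal classifications over the last `window` samples."""
    kernel = np.ones(window) / window
    return np.convolve(preictal_flags, kernel, mode="full")[:len(preictal_flags)]

rng = np.random.default_rng(3)
raw = (rng.random(200) < 0.2).astype(float)    # noisy classifier output (0/1)
raw[150:] = (rng.random(50) < 0.8)             # simulated pre-seizure period
fp = firing_power(raw, window=20)
alarms = fp >= 0.5                             # regularized alarm signal
print("first alarm at sample", int(np.argmax(alarms)) if alarms.any() else None)
```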

  7. A regularity criterion for the Navier-Stokes equations based on the gradient of one velocity component

    Czech Academy of Sciences Publication Activity Database

    Skalák, Zdeněk

    2016-01-01

    Roč. 437, č. 1 (2016), s. 474-484 ISSN 0022-247X R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985874 Keywords: Navier-Stokes equations * regularity of solutions * regularity criteria Subject RIV: BK - Fluid Dynamics Impact factor: 1.064, year: 2016

  8. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  9. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
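
    A rough sketch of the idea described above: maximize the correntropy between a linear predictor's outputs and the (possibly noisy) labels, with an L2 penalty on the weights, by gradient ascent. The kernel width, penalty and learning rate are illustrative, and the simple linear model is only a stand-in for the machines studied in the paper:

```python
import numpy as np

def train_mcc(X, y, sigma=1.0, lam=0.1, lr=0.1, iters=500):
    """Maximize (1/n) sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2)) - lam * ||w||^2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        e = y - X @ w                                   # residuals
        g = np.exp(-e**2 / (2 * sigma**2))              # per-sample correntropy weight
        grad = (X.T @ (g * e)) / (n * sigma**2) - 2 * lam * w
        w += lr * grad                                  # gradient ascent step
    return w

rng = np.random.default_rng(4)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(200, 3))
y = np.sign(X @ w_true)
y[:20] = -y[:20]                                        # inject label noise / outliers
w = train_mcc(X, y)
print("agreement with clean labels:", np.mean(np.sign(X @ w) == np.sign(X @ w_true)))
```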

  10. Fluctuations of the SNR at the output of the MVDR with Regularized Tyler Estimators

    KAUST Repository

    Elkhalil, Khalil

    2016-12-27

    This paper analyzes the statistical properties of the signal-to-noise ratio (SNR) at the output of Capon's minimum variance distortionless response (MVDR) beamformer when operating over impulsive noises. Particularly, we consider the supervised case in which the receiver employs the regularized Tyler estimator in order to estimate the covariance matrix of the interference-plus-noise process using n observations of size N × 1. The choice of the regularized Tyler estimator (RTE) is motivated by its resilience to the presence of outliers and its regularization parameter that guarantees a good conditioning of the covariance estimate. Of particular interest in this paper is the derivation of the second order statistics of the SINR. To achieve this goal, we consider two different approaches. The first one is based on considering the classical regime, referred to as the n-large regime, in which N is assumed to be fixed while n grows to infinity. The second approach is built upon recent results developed within the framework of random matrix theory and assumes that N and n grow large together. Numerical results are provided in order to compare the accuracies of each regime under different settings.
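
    The RTE mentioned above is typically computed by a fixed-point iteration; one commonly used form, Σ ← (1-ρ)(N/n) Σ_i x_i x_iᴴ / (x_iᴴ Σ⁻¹ x_i) + ρ I followed by a trace normalization, is sketched below together with the MVDR weights built from it. The array size, shrinkage value and steering vector are illustrative assumptions:

```python
import numpy as np

def regularized_tyler(X, rho, iters=50):
    """Fixed-point iteration for a regularized Tyler covariance estimate.
    X is N x n (N-dimensional snapshots in columns), rho in (0, 1]."""
    N, n = X.shape
    S = np.eye(N, dtype=complex)
    for _ in range(iters):
        q = np.einsum('in,ij,jn->n', X.conj(), np.linalg.inv(S), X).real  # x^H S^-1 x
        S_new = (1 - rho) * (N / n) * (X / q) @ X.conj().T + rho * np.eye(N)
        S = N * S_new / np.trace(S_new).real          # trace normalization
    return S

rng = np.random.default_rng(5)
N, n = 8, 30
a = np.exp(1j * np.pi * np.arange(N) * np.sin(0.3))   # steering vector (ULA, toy angle)
X = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
S = regularized_tyler(X, rho=0.3)
w = np.linalg.solve(S, a)
w /= (a.conj() @ w)                                   # MVDR (distortionless) weights
print(abs(w.conj() @ a))                              # equals 1 by construction
```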

  11. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva

    2013-07-11

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously through minimizing a single penalized least squares criterion. The novelty of our methodology is the simultaneous consideration of three desirable properties of the reconstructed source signals, that is, spatial focality, spatial smoothness, and temporal smoothness. The desirable properties are achieved by using three separate penalty functions in the penalized regression framework. Specifically, we impose a roughness penalty in the temporal domain for temporal smoothness, and a sparsity-inducing penalty and a graph Laplacian penalty in the spatial domain for spatial focality and smoothness. We develop a computational efficient multilevel block coordinate descent algorithm to implement the method. Using a simulation study with several settings of different spatial complexity and two real MEG examples, we show that the proposed method outperforms existing methods that use only a subset of the three penalty functions. © 2013 Springer Science+Business Media New York.

  12. Dynamic effects on the transition between two-dimensional regular and Mach reflection of shock waves in an ideal, steady supersonic free stream

    CSIR Research Space (South Africa)

    Naidoo, K

    2011-06-01

    Full Text Available research by Ernst Mach in 1878. The steady, two-dimensional transition criteria between regular and Mach reflection are well established. There has been little done to consider the dynamic effect of a rapidly rotating wedge on the transition between regular...

  13. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂ N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  14. Optical diffraction by ordered 2D arrays of silica microspheres

    International Nuclear Information System (INIS)

    Shcherbakov, A.A.; Shavdina, O.; Tishchenko, A.V.; Veillas, C.; Verrier, I.; Dellea, O.; Jourlin, Y.

    2017-01-01

    The article presents experimental and theoretical studies of angular dependent diffraction properties of 2D monolayer arrays of silica microspheres. High-quality large area defect-free monolayers of 1 μm diameter silica microspheres were deposited by the Langmuir-Blodgett technique under an accurate optical control. Measured angular dependencies of zeroth and one of the first order diffraction efficiencies produced by deposited samples were simulated by the rigorous Generalized Source Method taking into account particle size dispersion and lattice nonideality. - Highlights: • High quality silica microsphere monolayer was fabricated. • Accurate measurements of diffraction efficiency angular dependencies. • Rigorous diffraction simulation of both ideal hexagonal and realistic microsphere arrangements. • Qualitative rationalization of the obtained results and the observed differences between the experiment and the theory.

  15. A TWO-COMPONENT POWER LAW COVERING NEARLY FOUR ORDERS OF MAGNITUDE IN THE POWER SPECTRUM OF SPITZER FAR-INFRARED EMISSION FROM THE LARGE MAGELLANIC CLOUD

    International Nuclear Information System (INIS)

    Block, David L.; Puerari, Ivanio; Elmegreen, Bruce G.; Bournaud, Frederic

    2010-01-01

    Power spectra of Large Magellanic Cloud (LMC) emission at 24, 70, and 160 μm observed with the Spitzer Space Telescope have a two-component power-law structure with a shallow slope of -1.6 at low wavenumber, k, and a steep slope of -2.9 at high k. The break occurs at k^-1 ∼ 100-200 pc, which is interpreted as the line-of-sight thickness of the LMC disk. The slopes are slightly steeper for longer wavelengths, suggesting the cooler dust emission is smoother than the hot emission. The power spectrum (PS) covers ∼3.5 orders of magnitude, and the break in the slope is in the middle of this range on a logarithmic scale. Large-scale driving from galactic and extragalactic processes, including disk self-gravity, spiral waves, and bars, presumably causes the low-k structure in what is effectively a two-dimensional geometry. Small-scale driving from stellar processes and shocks causes the high-k structure in a three-dimensional geometry. This transition in dimensionality corresponds to the observed change in PS slope. A companion paper models the observed power law with a self-gravitating hydrodynamics simulation of a galaxy like the LMC.

  16. Reproduction of nearby sound sources using higher-order ambisonics with practical loudspeaker arrays

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2012-01-01

    In order to reproduce nearby sound sources with distant loudspeakers to a single listener, the near field compensated (NFC) method for higher-order Ambisonics (HOA) has been previously proposed. In practical realization, this method requires the use of regularization functions. This study analyzes the impact of two existing and a new proposed regularization function on the reproduced sound fields and on the main auditory cue for nearby sound sources outside the median plane, i.e., low-frequency interaural level differences (ILDs). The proposed regularization function led to a better reproduction of point source sound fields compared to existing regularization functions for NFC-HOA. Measurements in realistic playback environments showed that, for very close sources, significant ILDs for frequencies above about 250 Hz can be reproduced with NFC-HOA and the proposed regularization function, whereas...

  17. The 27 Possible Intrinsic Symmetry Groups of Two-Component Links

    Directory of Open Access Journals (Sweden)

    Jason Parsley

    2012-02-01

    Full Text Available We consider the “intrinsic” symmetry group of a two-component link L, defined to be the image Σ(L) of the natural homomorphism from the standard symmetry group MCG(S^3, L) to the product MCG(S^3) × MCG(L). This group, first defined by Whitten in 1969, records directly whether L is isotopic to a link L′ obtained from L by permuting components or reversing orientations; it is a subgroup of Γ_2, the group of all such operations. For two-component links, we catalog the 27 possible intrinsic symmetry groups, which represent the subgroups of Γ_2 up to conjugacy. We are able to provide prime, nonsplit examples for 21 of these groups; some are classically known, some are new. We catalog the frequency at which each group appears among all 77,036 of the hyperbolic two-component links of 14 or fewer crossings in Thistlethwaite’s table. We also provide some new information about symmetry groups of the 293 non-hyperbolic two-component links of 14 or fewer crossings in the table.

  18. Two component WIMP-FImP dark matter model with singlet fermion, scalar and pseudo scalar

    Energy Technology Data Exchange (ETDEWEB)

    Dutta Banik, Amit; Pandey, Madhurima; Majumdar, Debasish [Saha Institute of Nuclear Physics, HBNI, Astroparticle Physics and Cosmology Division, Kolkata (India); Biswas, Anirban [Harish Chandra Research Institute, Allahabad (India)

    2017-10-15

    We explore a two component dark matter model with a fermion and a scalar. In this scenario the Standard Model (SM) is extended by a fermion, a scalar and an additional pseudo scalar. The fermionic component is assumed to have a global U(1)_DM and interacts with the pseudo scalar via Yukawa interaction while a Z_2 symmetry is imposed on the other component - the scalar. These ensure the stability of both dark matter components. Although the Lagrangian of the present model is CP conserving, the CP symmetry breaks spontaneously when the pseudo scalar acquires a vacuum expectation value (VEV). The scalar component of the dark matter in the present model also develops a VEV on spontaneous breaking of the Z_2 symmetry. Thus the various interactions of the dark sector and the SM sector occur through the mixing of the SM like Higgs boson, the pseudo scalar Higgs like boson and the singlet scalar boson. We show that the observed gamma ray excess from the Galactic Centre as well as the 3.55 keV X-ray line from Perseus, Andromeda etc. can be simultaneously explained in the present two component dark matter model and the dark matter self interaction is found to be an order of magnitude smaller than the upper limit estimated from the observational results. (orig.)

  19. S2p core level spectroscopy of short chain oligothiophenes

    Science.gov (United States)

    Baseggio, O.; Toffoli, D.; Stener, M.; Fronzoni, G.; de Simone, M.; Grazioli, C.; Coreno, M.; Guarnaccio, A.; Santagata, A.; D'Auria, M.

    2017-12-01

    The Near-Edge X-ray-Absorption Fine-Structure (NEXAFS) and X-ray Photoemission Spectroscopy (XPS) of short-chain oligothiophenes (thiophene, 2,2'-bithiophene, and 2,2':5',2″-terthiophene) in the gas phase have been measured in the sulfur L2,3-edge region. The assignment of the spectral features is based on the relativistic two-component zeroth-order regular approximation time dependent density functional theory approach. The calculations allow us to estimate both the contribution of the spin-orbit splitting and of the molecular-field splitting to the sulfur binding energies and give results in good agreement with the experimental measurements. The deconvolution of the calculated S2p NEXAFS spectra into the two manifolds of excited states converging to the LIII and LII edges facilitates the attribution of the spectral structures. The main S2p NEXAFS features are preserved along the series both as concerns the energy positions and the nature of the transitions. This behaviour suggests that the electronic and geometrical environment of the sulfur atom in the three oligomers is relatively unaffected by the increasing chain length. This trend is also observed in the XPS spectra. The relatively simple structure of S2p NEXAFS spectra along the series reflects the localized nature of the virtual states involved in the core excitation process.

  20. Immediate and heterogeneous response of the LiaFSR two-component system of Bacillus subtilis to the peptide antibiotic bacitracin.

    Science.gov (United States)

    Kesel, Sara; Mader, Andreas; Höfler, Carolin; Mascher, Thorsten; Leisner, Madeleine

    2013-01-01

    Two-component signal transduction systems are one means of bacteria to respond to external stimuli. The LiaFSR two-component system of Bacillus subtilis consists of a regular two-component system LiaRS comprising the core Histidine Kinase (HK) LiaS and the Response Regulator (RR) LiaR and additionally the accessory protein LiaF, which acts as a negative regulator of LiaRS-dependent signal transduction. The complete LiaFSR system was shown to respond to various peptide antibiotics interfering with cell wall biosynthesis, including bacitracin. Here we study the response of the LiaFSR system to various concentrations of the peptide antibiotic bacitracin. Using quantitative fluorescence microscopy, we performed a whole population study analyzed on the single cell level. We investigated switching from the non-induced 'OFF' state into the bacitracin-induced 'ON' state by monitoring gene expression of a fluorescent reporter from the RR-regulated liaI promoter. We found that switching into the 'ON' state occurred within less than 20 min in a well-defined switching window, independent of the bacitracin concentration. The switching rate and the basal expression rate decreased at low bacitracin concentrations, establishing clear heterogeneity 60 min after bacitracin induction. Finally, we performed time-lapse microscopy of single cells confirming the quantitative response as obtained in the whole population analysis for high bacitracin concentrations. The LiaFSR system exhibits an immediate, heterogeneous and graded response to the inducer bacitracin in the exponential growth phase.

  1. Matrix regularization of 4-manifolds

    OpenAIRE

    Trzetrzelewski, M.

    2012-01-01

    We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...

  2. Analysis of water hammer in two-component two-phase flows

    International Nuclear Information System (INIS)

    Warde, H.; Marzouk, E.; Ibrahim, S.

    1989-01-01

    The water hammer phenomena caused by a sudden valve closure in air-water two-phase flows must be clarified for the safety analysis of LOCA in reactors and further for the safety of boilers, chemical plants, and pipe transport of fluids such as petroleum and natural gas. In the present work water hammer phenomena caused by sudden valve closure in two-component two-phase flows are investigated theoretically and experimentally. The phenomena are more complicated than in single-phase flows due to the presence of a compressible component. Basic partial differential equations based on a one-dimensional homogeneous flow model are solved by the method of characteristics. The analysis is extended to include friction in a two-phase mixture depending on the local flow pattern. The profiles of the pressure transients, the propagation velocity of pressure waves and the effect of valve closure on the transient pressure are found. Different two-phase flow pattern and frictional pressure drop correlations were used, including the Baker, Chisholm and Beggs and Brill correlations. The effect of the flow pattern on the characteristics of wave propagation is discussed, primarily to indicate the effect of void fraction on the velocity of wave propagation and on the attenuation of pressure waves. Transient pressures in the mixture were recorded at different air void fractions, rates of uniform valve closure and liquid flow velocities with the aid of pressure transducers and transient wave form recorders interfaced with an on-line PC. The results are compared with computation, and good agreement was obtained within experimental accuracy

  3. Two-component gravitational instability in spiral galaxies

    Science.gov (United States)

    Marchuk, A. A.; Sotnikova, N. Y.

    2018-04-01

    We applied a criterion of gravitational instability, valid for two-component and infinitesimally thin discs, to observational data along the major axis for seven spiral galaxies of early types. Unlike most papers, the dispersion equation corresponding to the criterion was solved directly without using any approximation. The velocity dispersion of stars in the radial direction σ_R was limited by the range of possible values instead of a fixed value. For all galaxies, the outer regions of the disc were analysed up to R ≤ 130 arcsec. The maximal and sub-maximal disc models were used to translate surface brightness into surface density. The largest destabilizing disturbance stars can exert on a gaseous disc was estimated. It was shown that the two-component criterion differs a little from the one-fluid criterion for galaxies with a large surface gas density, but it allows one to explain large-scale star formation in those regions where the gaseous disc is stable. In the galaxy NGC 1167 star formation is entirely driven by the self-gravity of the stars. A comparison is made with the conventional approximations which also include the thickness effect and with models for different sound speed c_g. It is shown that values of the effective Toomre parameter correspond to the instability criterion of a two-component disc Q_eff < 1.5-2.5. This result is consistent with previous theoretical and observational studies.

  4. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.
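
    The two-unit O-U example can be reproduced in a few lines with Euler-Maruyama integration; the coupling strength and noise amplitudes below are arbitrary, chosen only to illustrate that the quieter unit can become less variable when moderately coupled to a noisier partner:

```python
import numpy as np

def var_unit1(c, sigma1=1.0, sigma2=2.0, dt=1e-3, T=20.0, ntraj=500, seed=6):
    """Monte-Carlo stationary variance of x1 for
    dx_k = -x_k dt + c (x_j - x_k) dt + sigma_k dW_k  (Euler-Maruyama)."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    x1 = np.zeros(ntraj)
    x2 = np.zeros(ntraj)
    samples = []
    for i in range(steps):
        dW = rng.normal(scale=np.sqrt(dt), size=(2, ntraj))
        x1, x2 = (x1 + (-x1 + c * (x2 - x1)) * dt + sigma1 * dW[0],
                  x2 + (-x2 + c * (x1 - x2)) * dt + sigma2 * dW[1])
        if i * dt > 5.0:                     # discard the transient
            samples.append(np.mean(x1 ** 2))
    return np.mean(samples)

print("uncoupled:", var_unit1(0.0))          # theory: sigma1^2 / 2 = 0.5
print("coupled  :", var_unit1(1.0))          # smaller, despite the noisier partner
```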

  5. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Kai Johannes

    2010-04-15

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  6. Dimensional regularization in position space and a forest formula for regularized Epstein-Glaser renormalization

    International Nuclear Information System (INIS)

    Keller, Kai Johannes

    2010-04-01

    The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)

  7. Wave dynamics of regular and chaotic rays

    International Nuclear Information System (INIS)

    McDonald, S.W.

    1983-09-01

    In order to investigate general relationships between waves and rays in chaotic systems, I study the eigenfunctions and spectrum of a simple model, the two-dimensional Helmholtz equation in a stadium boundary, for which the rays are ergodic. Statistical measurements are performed so that the apparent randomness of the stadium modes can be quantitatively contrasted with the familiar regularities observed for the modes in a circular boundary (with integrable rays). The local spatial autocorrelation of the eigenfunctions is constructed in order to indirectly test theoretical predictions for the nature of the Wigner distribution corresponding to chaotic waves. A portion of the large-eigenvalue spectrum is computed and reported in an appendix; the probability distribution of successive level spacings is analyzed and compared with theoretical predictions. The two principal conclusions are: 1) waves associated with chaotic rays may exhibit randomly situated localized regions of high intensity; 2) the Wigner function for these waves may depart significantly from being uniformly distributed over the surface of constant frequency in the ray phase space

  8. Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model

    Science.gov (United States)

    Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.

    2018-04-01

    The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions for the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
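
    The survival probability used above has a simple spectral form: for an initial state with components c_k in the Hamiltonian eigenbasis, P(t) = |Σ_k |c_k|² e^{-i E_k t}|². A generic numerical check, with a random Hermitian matrix standing in for the Dicke Hamiltonian, is:

```python
import numpy as np

rng = np.random.default_rng(7)
dim = 200
H = rng.normal(size=(dim, dim))
H = (H + H.T) / 2                              # stand-in Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)

psi0 = rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)                   # initial state
c = V.T @ psi0                                 # components in the eigenbasis

t = np.linspace(0, 5, 400)
P = np.abs(np.sum((np.abs(c) ** 2)[None, :] * np.exp(-1j * E[None, :] * t[:, None]),
                  axis=1)) ** 2                # survival probability P(t)
print(P[0], P.min())                           # starts at 1, then decays
```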

  9. Bond strength of two component injection moulded MID

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2006-01-01

    Most products of the future will require industrially adapted, cost effective production processes, and on this issue two-component (2K) injection moulding is a potential candidate for MID manufacturing. MID based on a 2K injection moulded plastic part with selectively metallised circuit tracks allows the integration of electrical and mechanical functionalities in a real 3D structure. If 2K injection moulding is applied with two polymers, of which one is plateable and the other is not, it will be possible to make 3D electrical structures directly on the component. To be applicable in the real engineering field, the two different plastic materials in the MID structure require good bonding between them. This paper finds suitable combinations of materials for MIDs from both a bond strength and a metallisation view-point. Plastic parts were made by two-shot injection moulding and the effects of some important process...

  10. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)

  11. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  12. Predictability in Pathological Gambling? Applying the Duplication of Purchase Law to the Understanding of Cross-Purchases Between Regular and Pathological Gamblers.

    Science.gov (United States)

    Lam, Desmond; Mizerski, Richard

    2017-06-01

    The objective of this study is to explore the gambling participation and game purchase duplication of light regular, heavy regular and pathological gamblers by applying the Duplication of Purchase Law. The current study uses data collected by the Australian Productivity Commission for eight different types of games. Key behavioral statistics on light regular, heavy regular, and pathological gamblers were computed and compared. The key finding is that pathological gambling, just like regular gambling, follows the Duplication of Purchase Law, which states that the dominant factor of purchase duplication between two brands is their market shares. This means that gambling between any two games at the pathological level, like any regular consumer purchases, exhibits "law-like" regularity based on the pathological gamblers' participation rate of each game. Additionally, pathological gamblers tend to gamble more frequently across all games except lotteries and instant games, and to make greater cross-purchases, compared with heavy regular gamblers. A better understanding of the behavioral traits of regular (particularly heavy regular) and pathological gamblers can be useful to public policy makers and social marketers in order to more accurately identify such gamblers and better manage the negative impacts of gambling.
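    For readers unfamiliar with the Duplication of Purchase Law, the toy Python sketch below illustrates its prediction that the share of people playing both of two games is roughly proportional to the product of the two participation rates; the penetration figures are invented for illustration and are not the Productivity Commission data analysed in the study.

```python
# Predicted duplication between games A and B: b_AB ~= D * b_A * b_B,
# where b_X is the participation rate of game X and D a market-wide constant.
penetration = {"lotteries": 0.60, "scratch_tickets": 0.30, "poker_machines": 0.20}

def predicted_duplication(b_a, b_b, D=1.0):
    return D * b_a * b_b

for a in penetration:
    for b in penetration:
        if a < b:
            print(a, b, round(predicted_duplication(penetration[a], penetration[b]), 3))
```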

  13. Giant regular polyhedra from calixarene carboxylates and uranyl

    Science.gov (United States)

    Pasquale, Sara; Sattin, Sara; Escudero-Adán, Eduardo C.; Martínez-Belmonte, Marta; de Mendoza, Javier

    2012-01-01

    Self-assembly of large multi-component systems is a common strategy for the bottom-up construction of discrete, well-defined, nanoscopic-sized cages. Icosahedral or pseudospherical viral capsids, built up from hundreds of identical proteins, constitute typical examples of the complexity attained by biological self-assembly. Chemical versions of the so-called 5 Platonic regular or 13 Archimedean semi-regular polyhedra are usually assembled combining molecular platforms with metals with commensurate coordination spheres. Here we report novel, self-assembled cages, using the conical-shaped carboxylic acid derivatives of calix[4]arene and calix[5]arene as ligands, and the uranyl cation UO2(2+) as a metallic counterpart, which coordinates with three carboxylates at the equatorial plane, giving rise to hexagonal bipyramidal architectures. As a result, octahedral and icosahedral anionic metallocages of nanoscopic dimensions are formed with an unusually small number of components. PMID:22510690

  14. Form factors and scattering amplitudes in N=4 SYM in dimensional and massive regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Henn, Johannes M. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Moch, Sven [California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Naculich, Stephen G. [California Univ., Santa Barbara, CA (United States). Kavli Inst. for Theoretical Physics; Bowdoin College, Brunswick, ME (United States). Dept. of Physics

    2011-09-15

    The IR-divergent scattering amplitudes of N=4 supersymmetric Yang-Mills theory can be regulated in a variety of ways, including dimensional regularization and massive (or Higgs) regularization. The IR-finite part of an amplitude in different regularizations generally differs by an additive constant at each loop order, due to the ambiguity in separating finite and divergent contributions. We give a prescription for defining an unambiguous, regulator-independent finite part of the amplitude by factoring off a product of IR-divergent ''wedge'' functions. For the cases of dimensional regularization and the common-mass Higgs regulator, we define the wedge function in terms of a form factor, and demonstrate the regularization independence of the n-point amplitude through two loops. We also deduce the form of the wedge function for the more general differential-mass Higgs regulator, although we lack an explicit operator definition in this case. Finally, using extended dual conformal symmetry, we demonstrate the link between the differential-mass wedge function and the anomalous dual conformal Ward identity for the finite part of the scattering amplitude. (orig.)

  15. Form factors and scattering amplitudes in N=4 SYM in dimensional and massive regularizations

    International Nuclear Information System (INIS)

    Henn, Johannes M.; Naculich, Stephen G.; Bowdoin College, Brunswick, ME

    2011-09-01

    The IR-divergent scattering amplitudes of N=4 supersymmetric Yang-Mills theory can be regulated in a variety of ways, including dimensional regularization and massive (or Higgs) regularization. The IR-finite part of an amplitude in different regularizations generally differs by an additive constant at each loop order, due to the ambiguity in separating finite and divergent contributions. We give a prescription for defining an unambiguous, regulator-independent finite part of the amplitude by factoring off a product of IR-divergent ''wedge'' functions. For the cases of dimensional regularization and the common-mass Higgs regulator, we define the wedge function in terms of a form factor, and demonstrate the regularization independence of the n-point amplitude through two loops. We also deduce the form of the wedge function for the more general differential-mass Higgs regulator, although we lack an explicit operator definition in this case. Finally, using extended dual conformal symmetry, we demonstrate the link between the differential-mass wedge function and the anomalous dual conformal Ward identity for the finite part of the scattering amplitude. (orig.)

  16. Z-1 perturbation theory applied to the correlation energy problem of atoms

    International Nuclear Information System (INIS)

    Robinson, B.H.

    1975-01-01

    Rayleigh--Schroedinger perturbation theory is applied to obtain directly exact and explicit analytic formulas for the electron correlation energies of N-electron systems in terms of their pairwise interactions through second order in Z^-1, where Z is the nuclear charge of the atom. It is demonstrated that the second order correlation energy may be expressed as exactly the sum of pairwise correlation energies. In the case of no zeroth order degeneracy, the zeroth and first order terms vanish. The expression for the pairwise energies is an infinite sum, all terms of which are of the same sign. There is no numerical differencing. In the case of zeroth order degeneracy it is shown that the above statement concerning the second order energy still holds, but the expressions are a bit more complicated. It is shown that they ''almost'' reduce to a much simpler form. Also, the computation of the first order correlation energy is considered.

  17. Adaptive Second-Order Total Variation: An Approach Aware of Slope Discontinuities

    KAUST Repository

    Lenzen, Frank; Becker, Florian; Lellmann, Jan

    2013-01-01

    Total variation (TV) regularization, originally introduced by Rudin, Osher and Fatemi in the context of image denoising, has become widely used in the field of inverse problems. Two major directions of modifications of the original approach were proposed later on. The first concerns adaptive variants of TV regularization, the second focuses on higher-order TV models. In the present paper, we combine the ideas of both directions by proposing adaptive second-order TV models, including one anisotropic model. Experiments demonstrate that introducing adaptivity results in an improvement of the reconstruction error. © 2013 Springer-Verlag.

  18. Regularization of plurisubharmonic functions with a net of good points

    OpenAIRE

    Li, Long

    2017-01-01

    The purpose of this article is to present a new regularization technique for quasi-plurisubharmonic functions on a compact Kaehler manifold. The idea is to regularize the function on local coordinate balls first, and then glue each piece together. Therefore, all the higher order terms in the complex Hessian of this regularization vanish at the center of each coordinate ball, and all the centers build a delta-net of the manifold eventually.

  19. Random packing of regular polygons and star polygons on a flat two-dimensional surface.

    Science.gov (United States)

    Cieśla, Michał; Barbasz, Jakub

    2014-08-01

    Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using the random sequential adsorption algorithm. The obtained results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packing reproduce already known results for disks.
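    The random sequential adsorption (RSA) algorithm mentioned above is easy to sketch. The Python code below implements it for equal disks in a periodic square box (the limiting shape that polygons and stars approach for many vertices); stopping after a fixed number of consecutive failed insertions is only a crude stand-in for the saturation criterion used in the paper.

```python
import numpy as np

def rsa_disks(radius, box=1.0, max_failures=10_000, rng=None):
    """Random sequential adsorption of equal disks in a periodic square box."""
    rng = np.random.default_rng() if rng is None else rng
    centers, failures = [], 0
    while failures < max_failures:
        p = rng.random(2) * box
        overlap = False
        for c in centers:
            d = np.abs(p - c)
            d = np.minimum(d, box - d)          # periodic boundary conditions
            if d @ d < (2.0 * radius) ** 2:
                overlap = True
                break
        if overlap:
            failures += 1
        else:
            centers.append(p)
            failures = 0
    packing_fraction = len(centers) * np.pi * radius ** 2 / box ** 2
    return np.array(centers), packing_fraction
```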

  20. Evaporation regularities for the components of alloys during vacuum melting

    International Nuclear Information System (INIS)

    Anoshkin, N.F.

    1977-01-01

    The peculiarities of changes in the content of alloying components in vacuum melting (exemplified by Ti and Mo alloys) and the formation of the ingot composition in the bottom, central, and peripheral portions are considered. For the purposes of the investigation a process model was adopted, which is characterized by negligibly small evaporation of the alloy base, complete smoothing-out of the composition in the liquid bath volume, the constancy of the temperature over the entire evaporation surface, and a number of other assumptions, whose correctness was confirmed by experiment. It is shown that the best possibilities for suppression of evaporation of components with a high vapour pressure are offered by vacuum arc or electroslag melting, because they make it possible to conduct the process at high pressures with minimum overheating. A method of refining by overheating was developed. A method for refining alloys with volatile components was found; it consists of a first remelting to remove volatile impurities and their deposition in the peripheral layers of the ingot, and a second remelting, which ensures the averaging of the ingot composition. Typical versions of the distribution of the volatile components or impurities across the ingot are singled out.

  1. Zero-range approximation for two-component boson systems

    International Nuclear Information System (INIS)

    Sogo, T.; Fedorov, D.V.; Jensen, A.S.

    2005-01-01

    The hyperspherical adiabatic expansion method is combined with the zero-range approximation to derive angular Faddeev-like equations for two-component boson systems. The angular eigenvalues are solutions to a transcendental equation obtained as a vanishing determinant of a 3 x 3 matrix. The eigenfunctions are linear combinations of Jacobi functions of argument proportional to the distance between pairs of particles. We investigate numerically the influence of two-body correlations on the eigenvalue spectrum, the eigenfunctions and the effective hyperradial potential. Correlations decrease or increase the distance between pairs for effectively attractive or repulsive interactions, respectively. New structures appear for non-identical components. Fingerprints can be found in the nodal structure of the density distributions of the condensates. (author)

  2. Regular approach for generating van der Waals C{sub s} coefficients to arbitrary orders

    Energy Technology Data Exchange (ETDEWEB)

    Ovsiannikov, Vitali D [Department of Physics, Voronezh State University, 394006 Voronezh (Russian Federation); Mitroy, J [Faculty of Technology, Charles Darwin University, Darwin, NT 0909 (Australia)

    2006-01-14

    A completely general formalism is developed to describe the energy E^disp = Σ_s C_s/R^s of the dispersion interaction between two atoms in spherically symmetric states. Explicit expressions are given up to the tenth order of perturbation theory for the dispersion energy E^disp and dispersion coefficients C_s. The method could, in principle, be used to derive the expressions for any s while including all contributing orders of perturbation theory for the asymptotic interaction between two atoms. The theory is applied to the calculation of the complete series up to s = 30 for two hydrogen atoms in their ground state. A pseudo-state series expansion of the two-atom Green function gives rapid convergence of the series for radial matrix elements. The numerical values of C_s are computed up to C_30 to a relative accuracy of 10^-7 or better. The dispersion coefficients for the hydrogen-antihydrogen interaction are obtained from the H-H coefficients by simply taking the absolute magnitude of C_s.
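    As a small illustration of the series quoted above, the asymptotic dispersion energy is simply a sum of inverse-power terms once the coefficients are known. The Python sketch below evaluates it with approximate literature values of the leading ground-state H-H coefficients in atomic units; the sign convention for the attractive interaction is left to the coefficients, as in the abstract.

```python
def dispersion_energy(R, C):
    """E_disp(R) = sum_s C_s / R**s for a dict C mapping the power s to C_s."""
    return sum(C_s / R ** s for s, C_s in C.items())

# Approximate leading coefficients for two ground-state hydrogen atoms (a.u.).
C_HH = {6: 6.499, 8: 124.4, 10: 3286.0}
E10 = dispersion_energy(10.0, C_HH)   # value of the truncated series at R = 10 bohr
```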

  3. Itinerant Ferromagnetism in a Polarized Two-Component Fermi Gas

    DEFF Research Database (Denmark)

    Massignan, Pietro; Yu, Zhenhua; Bruun, Georg

    2013-01-01

    We analyze when a repulsively interacting two-component Fermi gas becomes thermodynamically unstable against phase separation. We focus on the strongly polarized limit, where the free energy of the homogeneous mixture can be calculated accurately in terms of well-defined quasiparticles, the repul...

  4. Toward robust high resolution fluorescence tomography: a hybrid row-action edge preserving regularization

    Science.gov (United States)

    Behrooz, Ali; Zhou, Hao-Min; Eftekhar, Ali A.; Adibi, Ali

    2011-02-01

    Depth-resolved localization and quantification of fluorescence distribution in tissue, called Fluorescence Molecular Tomography (FMT), is highly ill-conditioned as depth information must be extracted from a limited number of surface measurements. Inverse solvers resort to regularization algorithms that penalize the Euclidean norm of the solution to overcome ill-posedness. While these regularization algorithms offer good accuracy, their smoothing effects result in continuous distributions which lack the high-frequency edge-type features of the actual fluorescence distribution and hence limit the resolution offered by FMT. We propose an algorithm that penalizes the total variation (TV) norm of the solution to preserve sharp transitions and high-frequency components in the reconstructed fluorescence map while overcoming ill-posedness. The hybrid algorithm is composed of two levels: 1) an Algebraic Reconstruction Technique (ART), performed on FMT data for fast recovery of a smooth solution that serves as an initial guess for the iterative TV regularization, and 2) a time-marching TV regularization algorithm, inspired by the Rudin-Osher-Fatemi TV image restoration, performed on the initial guess to further enhance the resolution and accuracy of the reconstruction. The performance of the proposed method in resolving fluorescent tubes inserted in a liquid tissue phantom imaged by a non-contact CW trans-illumination FMT system is studied and compared to conventional regularization schemes. It is observed that the proposed method performs better in resolving fluorescence inclusions at greater depths.
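    The two-level structure described above can be sketched generically: a few Kaczmarz-type ART sweeps on a linear forward model produce a smooth initial guess, which is then sharpened by explicit time marching on a (smoothed) total-variation flow. The Python outline below shows only that generic structure; the actual FMT forward model, weighting and stopping rules of the paper are not reproduced, and A, b and the image shape are assumed given.

```python
import numpy as np

def art_sweeps(A, b, x0, n_sweeps=5, relax=1.0):
    """Kaczmarz-type ART: cycle through the rows of A x ~= b with relaxation."""
    x = x0.astype(float).copy()
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

def tv_time_marching(u, n_steps=100, dt=0.1, eps=1e-6):
    """Explicit time marching on the smoothed TV flow u_t = div(grad u / |grad u|)."""
    u = u.astype(float).copy()
    for _ in range(n_steps):
        ux, uy = np.gradient(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        u += dt * (np.gradient(ux / mag, axis=0) + np.gradient(uy / mag, axis=1))
    return u

# Schematic use: ART yields a smooth guess, TV marching then restores sharp edges.
# x_smooth = art_sweeps(A, b, np.zeros(A.shape[1]))
# x_sharp = tv_time_marching(x_smooth.reshape(image_shape))
```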

  5. Regularized Regression and Density Estimation based on Optimal Transport

    KAUST Repository

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  6. Structural characterization of the packings of granular regular polygons.

    Science.gov (United States)

    Wang, Chuncheng; Dong, Kejun; Yu, Aibing

    2015-12-01

    By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.

  7. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  8. Investigation of low-latitude hydrogen emission in terms of a two-component interstellar gas model

    International Nuclear Information System (INIS)

    Baker, P.L.; Burton, W.B.

    1975-01-01

    The high-resolution 21-cm hydrogen line observations at low galactic latitude of Burton and Verschuur have been analyzed to determine the large-scale distribution of galactic hydrogen. The distribution parameters are found by model fitting. Optical depth effects have been computed using a two-component gas model. Analysis shows that a multiphase description of the medium is essential to the interpretation of low-latitude emission observations. Where possible, the number of free parameters in the gas model has been reduced. Calculations were performed for a one-component, uniform spin temperature, gas model in order to show the systematic departures between this model and the data caused by the incorrect treatment of the optical depth effect. In the two-component gas, radiative transfer is treated by a Monte Carlo calculation since the opacity of the gas arises in a randomly distributed, cold, optically thick, low velocity-dispersion, cloud medium. The emission arises in both the cloud medium and a smoothly distributed, optically thin, high velocity-dispersion, intercloud medium. The synthetic profiles computed from the two-component model reproduce both the large-scale trends of the observed emission profiles and the magnitude of the small-scale emission irregularities. The analysis permits the determination of values for the thickness of the galactic disk between half-density points, the total observed neutral hydrogen mass of the Galaxy, and the central number density of the intercloud atoms. In addition, the analysis is sensitive to the size of clouds contributing to the observations. Computations also show that synthetic emission profiles based on the two-component model display both the zero-velocity and high-velocity ridges, indicative of optical thinness on a large scale, in spite of the presence of optically thick gas.

  9. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  10. Analytical energy gradient for the two-component normalized elimination of the small component method

    Science.gov (United States)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter

    2015-06-01

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  11. Lavrentiev regularization method for nonlinear ill-posed problems

    International Nuclear Information System (INIS)

    Kinh, Nguyen Van

    2002-10-01

    In this paper we shall be concerned with the Lavrentiev regularization method to reconstruct solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_δ ∈ X with ||y_δ - y_0|| ≤ δ are given and F: X → X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method solutions x_α^δ are obtained by solving the singularly perturbed nonlinear operator equation F(x) + α(x - x*) = y_δ with some initial guess x*. Assuming certain conditions concerning the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter α has been chosen properly. (author)
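    To make the construction above concrete, the sketch below solves the singularly perturbed equation F(x) + α(x - x*) = y_δ for a scalar monotone toy problem with Newton's method. The a priori choice α ∝ δ^(1/2) is shown only as one common order-optimal scaling under suitable source conditions, not as the paper's prescription.

```python
def lavrentiev_solve(F, dF, y_delta, alpha, x_star, x0=None, tol=1e-12, max_iter=100):
    """Newton iteration for F(x) + alpha*(x - x_star) = y_delta (scalar sketch)."""
    x = x_star if x0 is None else x0
    for _ in range(max_iter):
        r = F(x) + alpha * (x - x_star) - y_delta
        if abs(r) < tol:
            break
        x -= r / (dF(x) + alpha)
    return x

# Toy monotone problem: F(x) = x**3, exact solution x_0 = 1, noise level delta.
F, dF = (lambda x: x ** 3), (lambda x: 3 * x ** 2)
delta = 1e-3
y_delta = 1.0 + delta                 # noisy data
alpha = delta ** 0.5                  # illustrative a priori parameter choice
x_reg = lavrentiev_solve(F, dF, y_delta, alpha, x_star=0.8)
```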

  12. A two-component NZRI metamaterial based rectangular cloak

    Directory of Open Access Journals (Sweden)

    Sikder Sunbeam Islam

    2015-10-01

    A new two-component, near zero refractive index (NZRI) metamaterial is presented for electromagnetic rectangular cloaking operation in the microwave range. In the basic design a pi-shaped metamaterial was developed and its characteristics were investigated for wave propagation along the two major axes (x and z) through the material. For z-axis wave propagation it shows more than 2 GHz of bandwidth, and for x-axis wave propagation it exhibits more than 1 GHz of bandwidth of NZRI property. The metamaterial was then utilized in designing a rectangular cloak where a metal cylinder was cloaked perfectly in the C-band region of the microwave regime. Experimental results are provided for the metamaterial and the cloak, and these results are compared with the simulated results. This is a novel and promising design for its two-component NZRI characteristics and rectangular cloaking operation in the electromagnetic paradigm.

  13. Class of regular bouncing cosmologies

    Science.gov (United States)

    Vasilić, Milovan

    2017-06-01

    In this paper, I construct a class of everywhere regular geometric sigma models that possess bouncing solutions. Precisely, I show that every bouncing metric can be made a solution of such a model. My previous attempt to do so by employing one scalar field has failed due to the appearance of harmful singularities near the bounce. In this work, I use four scalar fields to construct a class of geometric sigma models which are free of singularities. The models within the class are parametrized by their background geometries. I prove that, whatever background is chosen, the dynamics of its small perturbations is classically stable on the whole time axis. Contrary to what one expects from the structure of the initial Lagrangian, the physics of background fluctuations is found to carry two tensor, two vector, and two scalar degrees of freedom. The graviton mass, which naturally appears in these models, is shown to be several orders of magnitude smaller than its experimental bound. I provide three simple examples to demonstrate how this is done in practice. In particular, I show that graviton mass can be made arbitrarily small.

  14. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.

  15. Analytic study of nonperturbative solutions in open string field theory

    International Nuclear Information System (INIS)

    Bars, I.; Kishimoto, I.; Matsuo, Y.

    2003-01-01

    We propose an analytic framework to study the nonperturbative solutions of Witten's open string field theory. The method is based on the Moyal star formulation where the kinetic term can be split into two parts. The first one describes the spectrum of two identical half strings which are independent from each other. The second one, which we call midpoint correction, shifts the half string spectrum to that of the standard open string. We show that the nonlinear equation of motion of string field theory is exactly solvable at zeroth order in the midpoint correction. An infinite number of solutions are classified in terms of projection operators. Among them, there exists only one stable solution which is identical to the standard butterfly state. We include the effect of the midpoint correction around each exact zeroth order solution as a perturbation expansion which can be formally summed to the complete exact solution

  16. Spin-orbit coupling calculations with the two-component normalized elimination of the small component method

    Science.gov (United States)

    Filatov, Michael; Zou, Wenli; Cremer, Dieter

    2013-07-01

    A new algorithm for the two-component Normalized Elimination of the Small Component (2cNESC) method is presented and tested in the calculation of spin-orbit (SO) splittings for a series of heavy atoms and their molecules. The 2cNESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac SO splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000), 10.1103/PhysRevB.62.7809]. The use of the screened nucleus potential for the two-electron SO interaction leads to accurate spinor energy splittings, for which the deviations from the accurate Dirac Fock-Coulomb values are on the average far below the deviations observed for other effective one-electron SO operators. For hydrogen halides HX (X = F, Cl, Br, I, At, and Uus) and mercury dihalides HgX2 (X = F, Cl, Br, I) trends in spinor energies and SO splittings as obtained with the 2cNESC method are analyzed and discussed on the basis of coupling schemes and the electronegativity of X.

  17. Analytic regularization of the Yukawa model at finite temperature

    International Nuclear Information System (INIS)

    Malbouisson, A.P.C.; Svaiter, N.F.; Svaiter, B.F.

    1996-07-01

    The one-loop fermionic contribution to the scalar effective potential is analysed in the temperature-dependent Yukawa model. In order to regularize the model, a mix of dimensional and analytic regularization procedures is used. A general expression for the fermionic contribution in arbitrary spacetime dimension is found. It is also found that in D = 3 this contribution is finite. (author). 19 refs

  18. Zeroth order phase transition in a holographic superconductor with single impurity

    NARCIS (Netherlands)

    Zeng, Hua Bi; Zhang, Hai-Qing

    We investigate the single normal impurity effect in a superconductor by the holographic method. When the size of the impurity is much smaller than the host superconductor, we can reproduce the Anderson theorem, which states that a conventional s-wave superconductor is robust to a normal (non-magnetic) impurity.

  19. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, K; Lee, M; Kang, S; Yoon, J; Park, S; Hwang, T; Kim, H; Kim, K; Han, T; Bae, H [Hallym University College of Medicine, Anyang (Korea, Republic of)

    2014-06-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: sample standard deviation of respiration period, sample standard deviation of amplitude, and the results of simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using the proposed respiration regularity index ρ.
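    The proposed index is easy to prototype. The Python sketch below computes the four fluctuation parameters from a breathing trace and combines them into ρ = ln(1 + 1/δ)/2; for brevity the PCA step of the abstract is replaced by a plain Euclidean norm of the parameters, so the resulting numbers are only indicative.

```python
import numpy as np

def respiration_regularity(periods, amplitudes, t, baseline):
    """rho = ln(1 + 1/delta)/2 from four breathing-fluctuation parameters."""
    sd_period = np.std(periods, ddof=1)
    sd_amplitude = np.std(amplitudes, ddof=1)
    # Simple linear regression of the baseline drift: slope and residual spread.
    slope, intercept = np.polyfit(t, baseline, 1)
    sd_resid = np.std(baseline - (slope * t + intercept), ddof=1)
    delta = np.linalg.norm([sd_period, sd_amplitude, slope, sd_resid])
    return np.log(1.0 + 1.0 / delta) / 2.0   # higher rho = more regular breathing
```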

  20. Numerical simulation of stress distribution in Inconel 718 components realized by metal injection molding during supercritical debinding

    Science.gov (United States)

    Agne, Aboubakry; Barrière, Thierry

    2018-05-01

    Metal injection molding (MIM) is a process combining the advantages of thermoplastic injection molding and powder metallurgy in order to manufacture components with complex and near net-shape geometries. The debinding of a green component can be performed in two steps, first by using solvent debinding to extract the organic part of the binder and then by thermal degradation of the rest of the binder. A shorter and innovative method for extracting an organic binder involves the use of a supercritical fluid instead of a regular solvent. Debinding via a supercritical fluid was recently investigated to extract organic binders contained in components obtained by metal injection molding. It consists in placing the component in an enclosure subjected to high pressure and temperature. The supercritical fluid has various properties depending on these two conditions, e.g., density and viscosity. The high pressure combined with the high temperature during the process affects the component structure. Three mechanisms contributing to the deformation of the component can be distinguished: thermal expansion, binder extraction and the supercritical fluid effect on the outer surfaces of the component. If one supposes that the deformation due to binder extraction is negligible, thermal expansion and the fluid effect are probably the main mechanisms that can produce significant stresses. A finite-element model, which couples fluid-structure interaction and structural mechanics, has been developed and run on the Comsol Multiphysics® finite-element software platform to estimate the stress distribution during the supercritical debinding of a MIM component composed of Inconel 718 powders, polypropylene, polyethylene glycol and stearic acid as binder. The proposed numerical simulations allow the estimation of the stress distribution with respect to the processing parameters for MIM components during the supercritical debinding process using a stationary solver.

  1. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Background: Unlike alphabetic languages, Chinese uses a logographic script. However, many characters' phonetic radicals have the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method: Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement in Chinese) were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1) or between-subject...

  2. Competitive Adsorption of a Two-Component Gas on a Deformable Adsorbent

    OpenAIRE

    Usenko, A. S.

    2013-01-01

    We investigate the competitive adsorption of a two-component gas on the surface of an adsorbent whose adsorption properties vary during adsorption due to the adsorbent deformation. An essential difference of the adsorption isotherms for a deformable adsorbent from both the classical Langmuir adsorption isotherms of a two-component gas and the adsorption isotherms of a one-component gas that take into account variations in the adsorption properties of the adsorbent during adsorption is obtained. We establish...

  3. Anomalous NMR Relaxation in Cartilage Matrix Components and Native Cartilage: Fractional-Order Models

    Science.gov (United States)

    Magin, Richard L.; Li, Weiguo; Velasco, M. Pilar; Trujillo, Juan; Reiter, David A.; Morgenstern, Ashley; Spencer, Richard G.

    2011-01-01

    We present a fractional-order extension of the Bloch equations to describe anomalous NMR relaxation phenomena (T1 and T2). The model has solutions in the form of Mittag-Leffler and stretched exponential functions that generalize conventional exponential relaxation. Such functions have been shown by others to be useful for describing dielectric and viscoelastic relaxation in complex, heterogeneous materials. Here, we apply these fractional-order T1 and T2 relaxation models to experiments performed at 9.4 and 11.7 Tesla on type I collagen gels, chondroitin sulfate mixtures, and to bovine nasal cartilage (BNC), a largely isotropic and homogeneous form of cartilage. The results show that the fractional-order analysis captures important features of NMR relaxation that are typically described by multi-exponential decay models. We find that the T2 relaxation of BNC can be described in a unique way by a single fractional-order parameter (α), in contrast to the lack of uniqueness of multi-exponential fits in the realistic setting of a finite signal-to-noise ratio. No anomalous behavior of T1 was observed in BNC. In the single-component gels, for T2 measurements, increasing the concentration of the largest components of cartilage matrix, collagen and chondroitin sulfate, results in a decrease in α, reflecting a more restricted aqueous environment. The quality of the curve fits obtained using Mittag-Leffler and stretched exponential functions are in some cases superior to those obtained using mono- and bi-exponential models. In both gels and BNC, α appears to account for microstructural complexity in the setting of an altered distribution of relaxation times. This work suggests the utility of fractional-order models to describe T2 NMR relaxation processes in biological tissues. PMID:21498095
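    As an illustration of the kind of model the abstract refers to, the sketch below fits a stretched-exponential transverse decay S(t) = S0*exp(-(t/T2)^α) to synthetic data; the time points and noise level are invented, and the full Mittag-Leffler machinery of the fractional Bloch equations is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, s0, t2, alpha):
    """S(t) = S0 * exp(-(t/T2)**alpha); alpha = 1 recovers mono-exponential decay."""
    return s0 * np.exp(-(t / t2) ** alpha)

# Synthetic echo-train data (illustrative values only).
t = np.linspace(0.5, 200.0, 60)                  # ms
rng = np.random.default_rng(0)
data = stretched_exp(t, 1.0, 40.0, 0.8) + 0.01 * rng.standard_normal(t.size)

popt, _ = curve_fit(stretched_exp, t, data, p0=(1.0, 30.0, 1.0))
s0_fit, t2_fit, alpha_fit = popt                 # alpha_fit < 1 signals anomalous decay
```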

  4. Online Manifold Regularization by Dual Ascending Procedure

    OpenAIRE

    Sun, Boliang; Li, Guohui; Jia, Li; Zhang, Hui

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purposes, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches...

  5. Analytical energy gradient for the two-component normalized elimination of the small component method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Wenli; Filatov, Michael; Cremer, Dieter, E-mail: dcremer@smu.edu [Computational and Theoretical Chemistry Group (CATCO), Department of Chemistry, Southern Methodist University, 3215 Daniel Ave, Dallas, Texas 75275-0314 (United States)

    2015-06-07

    The analytical gradient for the two-component Normalized Elimination of the Small Component (2c-NESC) method is presented. The 2c-NESC is a Dirac-exact method that employs the exact two-component one-electron Hamiltonian and thus leads to exact Dirac spin-orbit (SO) splittings for one-electron atoms. For many-electron atoms and molecules, the effect of the two-electron SO interaction is modeled by a screened nucleus potential using effective nuclear charges as proposed by Boettger [Phys. Rev. B 62, 7809 (2000)]. The effect of spin-orbit coupling (SOC) on molecular geometries is analyzed utilizing the properties of the frontier orbitals and calculated SO couplings. It is shown that bond lengths can either be lengthened or shortened under the impact of SOC where in the first case the influence of low lying excited states with occupied antibonding orbitals plays a role and in the second case the jj-coupling between occupied antibonding and unoccupied bonding orbitals dominates. In general, the effect of SOC on bond lengths is relatively small (≤5% of the scalar relativistic changes in the bond length). However, large effects are found for van der Waals complexes Hg2 and Cn2, which are due to the admixture of more bonding character to the highest occupied spinors.

  6. Two component injection moulding: an interface quality and bond strength dilemma

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2008-01-01

    ...on quality parameters of the two component parts. Most engineering applications of two component injection moulding call for high bond strength between the two polymers; on the other hand, a sharp and well-defined interface between the two polymers is required for applications like selective metallization of polymers, parts for micro applications and also for the aesthetic purpose of the final product. The investigation presented in this paper indicates a dilemma between obtaining reasonably good bond strength and at the same time keeping the interface quality suitable for applications. The required process conditions for a sharp and well-defined interface are exactly the opposite of what is congenial for higher bond strength. So in the production of two component injection moulded parts, there is a compromise to make between the interface quality and the bond strength of the two polymers. Also the injection...

  7. (2+1)-dimensional regular black holes with nonlinear electrodynamics sources

    Directory of Open Access Journals (Sweden)

    Yun He

    2017-11-01

    On the basis of two requirements, the avoidance of the curvature singularity and the Maxwell theory as the weak field limit of the nonlinear electrodynamics, we find two restricted conditions on the metric function of a (2+1)-dimensional regular black hole in general relativity coupled with nonlinear electrodynamics sources. By the use of the two conditions, we obtain a general approach to construct (2+1)-dimensional regular black holes. In this manner, we construct four (2+1)-dimensional regular black holes as examples. We also study the thermodynamic properties of the regular black holes and verify the first law of black hole thermodynamics.

  8. Characterization of a two-component thermoluminescent albedo dosemeter according to ISO 21909

    Energy Technology Data Exchange (ETDEWEB)

    Martins, M.M., E-mail: marcelo@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN), Av. Salvador Allende s/n, CEP 22780-160, Rio de Janeiro, RJ (Brazil); Mauricio, C.L.P., E-mail: claudia@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN), Av. Salvador Allende s/n, CEP 22780-160, Rio de Janeiro, RJ (Brazil); Pereira, W.W., E-mail: walsan@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN), Av. Salvador Allende s/n, CEP 22780-160, Rio de Janeiro, RJ (Brazil); Silva, A.X. da, E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao em Engenharia, COPPE/PEN Caixa Postal 68509, CEP 21941-972, Rio de Janeiro, RJ (Brazil)

    2011-05-15

    A two-component thermoluminescent albedo neutron monitoring system was developed at Instituto de Radioprotecao e Dosimetria, Brazil. As there is no Brazilian regulation for neutron individual monitoring service, the system was tested according to the ISO 21909 standard. This standard provides performance and test requirements for determining the acceptability of personal neutron dosemeters to be used for the measurement of personal dose equivalent, H_p(10), in neutron fields with energies ranging from thermal to 20 MeV. Up to 40 dosemeters were used in order to accomplish satisfactorily the requirements of some tests. Despite operational difficulties, this albedo system passed all ISO 21909 performance requirements. The results and problems throughout this characterization are discussed in this paper.

  9. Optical components based on two-photon absorption process in functionalized polymers

    International Nuclear Information System (INIS)

    Klein, S.; Barsella, A.; Taupier, G.; Stortz, V.; Fort, A.; Dorkenoo, K.D.

    2006-01-01

    We report on the fabrication of basic elements needed in optical circuits in a photopolymerizable resin, using a two-photon absorption (TPA) process to perform a selective polymerization. By taking advantage of the high spatial selectivity of the TPA approach, we can control the value of the local index of refraction in the material and realize permanent optical pathways in the bulk of photopolymerizable matrices. The computer-controlled design of such pathways allows creating optical circuits. As an example of application, optical fibers separated by millimetric distances and placed in arbitrary positions have been connected with moderate losses. Moreover, active components, such as electro-optical Mach-Zehnder interferometers, can be fabricated using photopolymers functionalized with non-linear optical chromophores, in order to be integrated in micro-optical circuits

  10. Two component micro injection molding for MID fabrication

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2009-01-01

    Molded Interconnect Devices (MIDs) are plastic substrates with electrical infrastructure. The fabrication of MIDs is usually based on injection molding, and different process chains may be identified from this starting point. The use of MIDs has been driven primarily by the automotive sector, but recently the medical sector seems more and more interested. In particular the possibility of miniaturization of 3D components with electrical infrastructure is attractive. The paper describes possible manufacturing routes and challenges of miniaturized MIDs based on two component micro injection molding...

  11. Duas crianças cegas congênitas no primeiro ciclo da escola regular Two congenitally blind children in the first cycle of regular school

    Directory of Open Access Journals (Sweden)

    Fernando Jorge Costa Figueiredo

    2010-04-01

    Full Text Available O estudo tem como objetivo investigar com maior profundidade a pesquisa sobre representação mental da realidade em crianças com cegueira congênita, comparando-as com crianças normovisuais no ensino básico da escola regular em Portugal. A partir de fundamentos teóricos, pretende-se analisar as diferentes crianças ao longo do tempo, bem como a sociedade atual perante as crianças diferentes. Foram feitos dois estudos de caso, combinando dados de natureza quantitativa e qualitativa. A análise desses casos revela dois caminhos diferentes na integração de crianças com cegueira congênita no primeiro ciclo do ensino básico, sendo que essa diferenciação não resulta dos processos de adaptação ao aluno concreto numa perspectiva humanista e, sim, dos condicionamentos físicos (escolas e organizacionais (Educação Especial.This study aims to look in more detail into the mental representation of reality in congenitally blind children when compared with normal-sighted children in basic education in a regular school in Portugal. Starting with the theoretical fundamentals, its intention is to analyze different children over time as well as the current society, vis-à-vis these children. We undertook two case studies and combined quantitative and qualitative data. The analysis of these cases reveals two different paths in the integration of the congenitally blind children, a differentiation that does not result from processes of adapting to the specific child from a humanistic perspective, but rather from the physical (schools and organizational (Special Education conditions.

  12. The massless two-loop two-point function

    International Nuclear Information System (INIS)

    Bierenbaum, I.; Weinzierl, S.

    2003-01-01

    We consider the massless two-loop two-point function with arbitrary powers of the propagators and derive a representation from which we can obtain the Laurent expansion to any desired order in the dimensional regularization parameter ε. As a side product, we show that in the Laurent expansion of the two-loop integral only rational numbers and multiple zeta values occur. Our method of calculation obtains the two-loop integral as a convolution product of two primitive one-loop integrals. We comment on the generalization of this product structure to higher loop integrals. (orig.)

  13. Fractional Regularization Term for Variational Image Registration

    Directory of Open Access Journals (Sweden)

    Rafael Verdú-Monedero

    2009-01-01

    Image registration is a widely used task of image analysis with applications in many fields. Its classical formulation and current improvements are given in the spatial domain. In this paper a regularization term based on fractional order derivatives is formulated. This term is defined and implemented in the frequency domain by translating the energy functional into the frequency domain and obtaining the Euler-Lagrange equations which minimize it. The new regularization term leads to a simple formulation and design, and is applicable to higher dimensions by using the corresponding multidimensional Fourier transform. The proposed regularization term allows for a real gradual transition from a diffusion registration to a curvature registration, which is best suited to some applications and is not possible in the spatial domain. Results with 3D actual images show the validity of this approach.
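    The frequency-domain implementation alluded to above rests on the fact that a fractional derivative becomes a simple spectral multiplier. The minimal 1-D Python sketch below shows that idea; the branch convention used for (ik)^α and the extension to the multidimensional registration functional are simplifications of what the paper actually implements.

```python
import numpy as np

def fractional_derivative(u, alpha, dx=1.0):
    """Fourier-based fractional derivative of a periodic 1-D signal:
    multiply the spectrum by (i*k)**alpha and transform back."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    multiplier = (1j * k) ** alpha
    multiplier[0] = 0.0          # suppress the zero mode to avoid 0**alpha issues
    return np.real(np.fft.ifft(multiplier * np.fft.fft(u)))
```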

  14. Combining two major ATLAS inner detector components

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    The semiconductor tracker is inserted into the transition radiation tracker for the ATLAS experiment at the LHC. These make up two of the three major components of the inner detector. They will work together to measure the trajectories produced in the proton-proton collisions at the centre of the detector when the LHC is switched on in 2008.

  15. Online Manifold Regularization by Dual Ascending Procedure

    Directory of Open Access Journals (Sweden)

    Boliang Sun

    2013-01-01

    We propose a novel online manifold regularization framework based on the notion of duality in constrained optimization. The Fenchel conjugate of hinge functions is a key to transfer manifold regularization from offline to online in this paper. Our algorithms are derived by gradient ascent in the dual function. For practical purpose, we propose two buffering strategies and two sparse approximations to reduce the computational complexity. Detailed experiments verify the utility of our approaches. An important conclusion is that our online MR algorithms can handle the settings where the target hypothesis is not fixed but drifts with the sequence of examples. We also recap and draw connections to earlier works. This paper paves a way to the design and analysis of online manifold regularization algorithms.

  16. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    International Nuclear Information System (INIS)

    Cheong, K; Lee, M; Kang, S; Yoon, J; Park, S; Hwang, T; Kim, H; Kim, K; Han, T; Bae, H

    2014-01-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: sample standard deviation of respiration period, sample standard deviation of amplitude, and the results of simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of...

  17. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  18. Anisotropic properties of phase separation in two-component dipolar Bose-Einstein condensates

    Science.gov (United States)

    Wang, Wei; Li, Jinbin

    2018-03-01

    Using the Crank-Nicolson method, we calculate ground-state wave functions of two-component dipolar Bose-Einstein condensates (BECs) and show that, due to the dipole-dipole interaction (DDI), the condensate mixture displays anisotropic phase separation. The effects of DDI, inter-component s-wave scattering, strength of the trap potential and particle numbers on the density profiles are investigated. Three types of two-component profiles are found: first, a cigar along the z-axis surrounded by a concentric torus; second, a pancake (or blood-cell) shape in the xy-plane with two non-uniform ellipsoids separated by the pancake; and third, two dumbbell shapes.

  19. Superfluid drag in the two-component Bose-Hubbard model

    Science.gov (United States)

    Sellin, Karl; Babaev, Egor

    2018-03-01

    In multicomponent superfluids and superconductors, co- and counterflows of components have, in general, different properties. A. F. Andreev and E. P. Bashkin [Sov. Phys. JETP 42, 164 (1975)] discussed, in the context of He3/He4 superfluid mixtures, that interparticle interactions produce a dissipationless drag. The drag can be understood as a superflow of one component induced by phase gradients of the other component. Importantly, the drag can be both positive (entrainment) and negative (counterflow). The effect is known to have crucial importance for many properties of diverse physical systems ranging from the dynamics of neutron stars and rotational responses of Bose mixtures of ultracold atoms to magnetic responses of multicomponent superconductors. Although substantial literature exists that includes the drag interaction phenomenologically, only a few regimes are covered by quantitative studies of the microscopic origin of the drag and its dependence on microscopic parameters. Here we study the microscopic origin and strength of the drag interaction in a quantum system of two-component bosons on a lattice with short-range interaction. By performing quantum Monte Carlo simulations of a two-component Bose-Hubbard model we obtain dependencies of the drag strength on the boson-boson interactions and properties of the optical lattice. Of particular interest are the strongly correlated regimes where the ratio of coflow and counterflow superfluid stiffnesses can diverge, corresponding to the case of saturated drag.

  20. Indefinite metric and regularization of electrodynamics

    International Nuclear Information System (INIS)

    Gaudin, M.

    1984-06-01

    The invariant regularization of Pauli and Villars in quantum electrodynamics can be considered as deriving from a local and causal Lagrangian theory for spin-1/2 bosons, by introducing an indefinite metric and a condition on the allowed states similar to the Lorentz condition. One consequence is the asymptotic freedom of the photon propagator. We present a calculation of the effective charge to fourth order in the coupling as a function of the auxiliary masses, the theory avoiding all mass divergences to this order.

  1. Matrix regularization of embedded 4-manifolds

    International Nuclear Information System (INIS)

    Trzetrzelewski, Maciej

    2012-01-01

    We consider products of two 2-manifolds such as S²×S², embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)⊗SU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N²×N² matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S³ also possible).

  2. Anisotropic Third-Order Regularization for Sparse Digital Elevation Models

    KAUST Repository

    Lellmann, Jan

    2013-01-01

    We consider the problem of interpolating a surface based on sparse data such as individual points or level lines. We derive interpolators satisfying a list of desirable properties with an emphasis on preserving the geometry and characteristic features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE and higher-order total variation methods qualitatively and quantitatively on real-world digital elevation data. © 2013 Springer-Verlag.

  3. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    The Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning Multivariate Time Series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down LDSs' spurious and unnecessary dimensions, and consequently, address the problem of choosing the optimal number of hidden states; (2) prevent the overfitting problem given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data we incorporate a second-order cone program and a generalized gradient descent method into the Maximum a Posteriori framework and use Expectation Maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix, which lead to two instances of our rLDS. We show that our rLDS is able to recover well the intrinsic dimensionality of the time-series dynamics, and it improves the predictive performance when compared to baselines on both synthetic and real-world MTS datasets.

  4. Coexistence of Two Singularities in Dewetting Flows: Regularizing the Corner Tip

    NARCIS (Netherlands)

    Peters, I.R.; Snoeijer, Jacobus Hendrikus; Daerr, Adrian; Limat, Laurent

    2009-01-01

    Entrainment in wetting and dewetting flows often occurs through the formation of a corner with a very sharp tip. This corner singularity comes on top of the divergence of viscous stress near the contact line, which is only regularized at molecular scales. We investigate the fine structure of corners

  5. Closedness type regularity conditions in convex optimization and beyond

    Directory of Open Access Journals (Sweden)

    Sorin-Mihai Grad

    2016-09-01

    Full Text Available The closedness type regularity conditions have proven during the last decade to be viable alternatives to their more restrictive interiority type counterparts, in both convex optimization and different areas where it was successfully applied. In this review article we de- and reconstruct some closedness type regularity conditions formulated by means of epigraphs and subdifferentials, respectively, for general optimization problems in order to stress that they arise naturally when dealing with such problems. The results are then specialized for constrained and unconstrained convex optimization problems. We also hint towards other classes of optimization problems where closedness type regularity conditions were successfully employed and discuss other possible applications of them.

  6. Regularized principal covariates regression and its application to finding coupled patterns in climate fields

    Science.gov (United States)

    Fischer, M. J.

    2014-02-01

    There are many different methods for investigating the coupling between two climate fields, which are all based on the multivariate regression model. Each different method of solving the multivariate model has its own attractive characteristics, but often the suitability of a particular method for a particular problem is not clear. Continuum regression methods search the solution space between the conventional methods and thus can find regression model subspaces that mix the attractive characteristics of the end-member subspaces. Principal covariates regression is a continuum regression method that is easily applied to climate fields and makes use of two end-members: principal components regression and redundancy analysis. In this study, principal covariates regression is extended to additionally span a third end-member (partial least squares or maximum covariance analysis). The new method, regularized principal covariates regression, has several attractive features including the following: it easily applies to problems in which the response field has missing values or is temporally sparse, it explores a wide range of model spaces, and it seeks a model subspace that will, for a set number of components, have a predictive skill that is the same or better than conventional regression methods. The new method is illustrated by applying it to the problem of predicting the southern Australian winter rainfall anomaly field using the regional atmospheric pressure anomaly field. Regularized principal covariates regression identifies four major coupled patterns in these two fields. The two leading patterns, which explain over half the variance in the rainfall field, are related to the subtropical ridge and features of the zonally asymmetric circulation.

  7. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(-m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ-1) ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
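
    A small numerical sketch, under assumed inputs, of the geometric quantities that enter this bound for a smooth, 1-periodic gap profile h(x); the profile, its derivatives and the truncation order are illustrative, and the constants multiplying these quantities in the actual bound are not reproduced here.

      import math
      import numpy as np

      def error_bound_quantities(h_vals, dh_vals, k=1):
          """Given samples of h on a uniform grid over one period and a list of
          sampled derivatives d^l h/dx^l (l = 1..2k+2), return the inverse moments
          int_0^1 h^-m dx (m = 1, 3) and the sup norms
          || (1/l!) h^(l-1) d^l h/dx^l ||_inf used by the O(eps^(2k+2)) bound."""
          moments = {m: np.mean(h_vals ** (-m)) for m in (1, 3)}  # uniform grid on [0,1)
          sup_norms = [np.max(np.abs(h_vals ** (ell - 1) * dh_vals[ell - 1]
                                     / math.factorial(ell)))
                       for ell in range(1, 2 * k + 3)]
          return moments, sup_norms

      # illustrative profile h(x) = 1 + 0.3 cos(2*pi*x) and its first four derivatives
      x = np.linspace(0.0, 1.0, 2000, endpoint=False)
      h = 1.0 + 0.3 * np.cos(2 * np.pi * x)
      dh = [0.3 * (2 * np.pi) ** n * np.cos(2 * np.pi * x + n * np.pi / 2)
            for n in range(1, 5)]
      moments, sups = error_bound_quantities(h, dh, k=1)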

  8. Motion of curves and solutions of two multi-component mKdV equations

    International Nuclear Information System (INIS)

    Yao Ruoxia; Qu Changzheng; Li Zhibin

    2005-01-01

    Two classes of multi-component mKdV equations have been shown to be integrable. One class, called the multi-component geometric mKdV equation, is exactly the system for the curvatures of curves when the motion of the curves is governed by the mKdV flow. In this paper, exact solutions, including solitary wave solutions, of the two- and three-component mKdV equations are obtained; the symmetry reductions of the two-component geometric mKdV equation to ODE systems corresponding to its Lie point symmetry groups are also given. Curves and their behavior corresponding to solitary wave solutions of the two-component geometric mKdV equation are presented.

  9. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods exploit the sparse nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performed better than the comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.
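
    As a generic illustration of this kind of joint model (not the authors' gradient-projection or Barzilai-Borwein implementations), the sketch below solves a least-squares problem with an ℓ1 term and a Laplacian manifold term by plain proximal gradient descent; the system matrix A, data b and graph Laplacian L are assumed inputs.

      import numpy as np

      def soft_threshold(v, t):
          # proximal operator of t * ||.||_1
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def joint_l1_laplacian(A, b, L, lam1=1e-3, lam2=1e-2, n_iter=500):
          """Proximal-gradient sketch for
              min_x 0.5*||A x - b||^2 + lam1*||x||_1 + lam2 * x^T L x,
          where L is a graph Laplacian encoding the spatial structure."""
          x = np.zeros(A.shape[1])
          # step size from the Lipschitz constant of the smooth part
          step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 2.0 * lam2 * np.linalg.norm(L, 2))
          for _ in range(n_iter):
              grad = A.T @ (A @ x - b) + 2.0 * lam2 * (L @ x)
              x = soft_threshold(x - step * grad, step * lam1)
          return x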

  10. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  11. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
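
    A toy sketch of the stabilising effect of an L2 weight penalty under the budget constraint; for brevity it uses portfolio variance rather than expected shortfall, so it only illustrates the regularization idea, not the support-vector-regression formulation discussed above. All inputs are synthetic.

      import numpy as np

      def regularized_min_variance(cov, lam=0.1):
          """Minimise w' C w + lam * ||w||^2 subject to sum(w) = 1.
          The L2 term acts as a diversification 'pressure' and stabilises the
          weights when C is estimated from a short sample.
          Closed form: w proportional to (C + lam*I)^{-1} 1, renormalised."""
          n = cov.shape[0]
          w = np.linalg.solve(cov + lam * np.eye(n), np.ones(n))
          return w / w.sum()

      # sample covariance from a short history vs. its regularised weights
      rng = np.random.default_rng(0)
      returns = rng.normal(size=(60, 20))          # 60 observations, 20 assets
      sample_cov = np.cov(returns, rowvar=False)
      w_plain = regularized_min_variance(sample_cov, lam=0.0)
      w_reg = regularized_min_variance(sample_cov, lam=0.5)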

  12. A convergence analysis of the iteratively regularized Gauss–Newton method under the Lipschitz condition

    International Nuclear Information System (INIS)

    Jin Qinian

    2008-01-01

    In this paper we consider the iteratively regularized Gauss–Newton method for solving nonlinear ill-posed inverse problems. Under merely the Lipschitz condition, we prove that this method together with an a posteriori stopping rule defines an order optimal regularization method if the solution is regular in some suitable sense

  13. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.

  14. Chloroplast two-component systems: evolution of the link between photosynthesis and gene expression.

    Science.gov (United States)

    Puthiyaveetil, Sujith; Allen, John F

    2009-06-22

    Two-component signal transduction, consisting of sensor kinases and response regulators, is the predominant signalling mechanism in bacteria. This signalling system originated in prokaryotes and has spread throughout the eukaryotic domain of life through endosymbiotic, lateral gene transfer from the bacterial ancestors and early evolutionary precursors of eukaryotic, cytoplasmic, bioenergetic organelles-chloroplasts and mitochondria. Until recently, it was thought that two-component systems inherited from an ancestral cyanobacterial symbiont are no longer present in chloroplasts. Recent research now shows that two-component systems have survived in chloroplasts as products of both chloroplast and nuclear genes. Comparative genomic analysis of photosynthetic eukaryotes shows a lineage-specific distribution of chloroplast two-component systems. The components and the systems they comprise have homologues in extant cyanobacterial lineages, indicating their ancient cyanobacterial origin. Sequence and functional characteristics of chloroplast two-component systems point to their fundamental role in linking photosynthesis with gene expression. We propose that two-component systems provide a coupling between photosynthesis and gene expression that serves to retain genes in chloroplasts, thus providing the basis of cytoplasmic, non-Mendelian inheritance of plastid-associated characters. We discuss the role of this coupling in the chronobiology of cells and in the dialogue between nuclear and cytoplasmic genetic systems.

  15. Restrictive metric regularity and generalized differential calculus in Banach spaces

    Directory of Open Access Journals (Sweden)

    Bingwu Wang

    2004-10-01

    Full Text Available We consider nonlinear mappings f:X→Y between Banach spaces and study the notion of restrictive metric regularity of f around some point x̄, that is, metric regularity of f from X into the metric space E = f(X). Some sufficient as well as necessary and sufficient conditions for restrictive metric regularity are obtained, which particularly include an extension of the classical Lyusternik-Graves theorem in the case when f is strictly differentiable at x̄ but its strict derivative ∇f(x̄) is not surjective. We develop applications of the results obtained and some other techniques in variational analysis to generalized differential calculus involving normal cones to nonsmooth and nonconvex sets, coderivatives of set-valued mappings, as well as first-order and second-order subdifferentials of extended real-valued functions.

  16. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    2016-01-05

    We propose a two-component dark matter (DM) model, each component of which is a real singlet scalar, to explain results from both direct and indirect detection experiments. We put constraints on the model parameters from theoretical bounds, PLANCK relic density results and direct DM experiments.

  17. Soft-edged magnet models for higher-order beam-optics map codes

    International Nuclear Information System (INIS)

    Walstrom, P.L.

    2004-01-01

    Continuously varying surface and volume source-density distributions are used to model magnetic fields inside of cylindrical volumes. From these distributions, a package of subroutines computes on-axis generalized gradients and their derivatives at arbitrary points on the magnet axis for input to the numerical map-generating subroutines of the Lie-algebraic map code Marylie. In the present version of the package, the magnet menu includes: (1) cylindrical current-sheet or radially thick current distributions with either open boundaries or with a surrounding cylindrical boundary with normal field lines (which models high-permeability iron), (2) Halbach-type permanent multipole magnets, either as sheet magnets or as radially thick magnets, (3) modeling of arbitrary fields inside a cylinder by use of a fictitious current sheet. The subroutines provide on-axis gradients and their z derivatives to essentially arbitrary order, although in the present third- and fifth-order Marylie only the zeroth through sixth derivatives are needed. The formalism is especially useful in beam-optics applications, such as magnetic lenses, where realistic treatment of fringe-field effects is needed

  18. SPATIAL MODELING OF SOLID-STATE REGULAR POLYHEDRA (SOLIDS OF PLATON) IN AUTOCAD SYSTEM

    Directory of Open Access Journals (Sweden)

    P. V. Bezditko

    2009-03-01

    Full Text Available This article describes a technique for modeling regular polyhedra by graphic methods. The authors conclude that the extrusion method is best suited to creating solid models of regular polyhedra.

  19. Dental Services and Attitudes towards its regular Utilization ... - Ibadan

    African Journals Online (AJOL)

    Background: Regular utilization of dental services is key to the attainment of optimal oral health state, an integral component of general health and well being needed for effective productivity by working personnel. Objective: This study assessed the rate and pattern of dental service utilization among civil servants and their ...

  20. Regularized non-stationary morphological reconstruction algorithm for weak signal detection in microseismic monitoring: methodology

    Science.gov (United States)

    Huang, Weilin; Wang, Runqiu; Chen, Yangkang

    2018-05-01

    The microseismic signal is typically weak compared with the strong background noise. In order to effectively detect the weak signal in microseismic data, we propose a mathematical-morphology-based approach. We decompose the initial data into several morphological multiscale components. For detection of the weak signal, a non-stationary weighting operator is proposed and introduced into the reconstruction of the data from the morphological multiscale components. The non-stationary weighting operator can be obtained by solving an inversion problem. The regularized non-stationary method can be understood as a non-stationary matching filtering method, where the matching filter has the same size as the data to be filtered. In this paper, we provide detailed algorithmic descriptions and analysis. The detailed algorithm framework, parameter selection and computational issues for the regularized non-stationary morphological reconstruction (RNMR) method are presented. We validate the presented method through a comprehensive analysis of different data examples. We first test the proposed technique using a synthetic data set. Then the proposed technique is applied to a field project, where the signals induced by hydraulic fracturing are recorded by 12 three-component geophones in a monitoring well. The result demonstrates that RNMR can improve the detectability of weak microseismic signals. Using the processed data, the short-term-average over long-term-average picking algorithm and Geiger's method are applied to obtain new locations of microseismic events. In addition, we show that the proposed RNMR method can be used not only on microseismic data but also on reflection seismic data to detect weak signals. We also discuss the extension of RNMR from 1-D to 2-D or higher-dimensional versions.

  1. Earliest Recollections and Birth Order: Two Adlerian Exercises.

    Science.gov (United States)

    Parrott, Les

    1992-01-01

    Presents two exercises designed to demonstrate the influence of two Adlerian principles on personality. Includes exercises dealing with birth order and earliest recollection. Concludes that the exercises actively demonstrate major concepts for counseling courses in Adlerian psychotherapy. Reports that students rated both exercises highly, with…

  2. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    Science.gov (United States)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
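
    A hedged sketch of Tikhonov inversion with the second-order differential regularization matrix recommended above; the kernel matrix A and the measured data vector b are assumed placeholders, not quantities from the paper.

      import numpy as np

      def second_order_diff_matrix(n):
          # discrete second-derivative operator used as the regularization matrix
          D = np.zeros((n - 2, n))
          for i in range(n - 2):
              D[i, i:i + 3] = [1.0, -2.0, 1.0]
          return D

      def tikhonov_psd(A, b, lam=1e-2):
          """Solve  min_f ||A f - b||^2 + lam * ||D f||^2  for the PSD vector f,
          with D the second-order differential matrix instead of the unit matrix."""
          D = second_order_diff_matrix(A.shape[1])
          return np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ b)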

  3. Bose-Einstein condensation in magnetic traps. Introduction to the theory

    International Nuclear Information System (INIS)

    Pitaevskii, Lev P

    1998-01-01

    The recent realization of Bose-Einstein condensation in atomic gases opens new possibilities for the observation of macroscopic quantum phenomena. There are two important features of these systems - weak interaction and significant spatial inhomogeneity. Because of this a non-trivial 'zeroth-order' theory exists, compared to the 'first-order' Bogolubov theory. The zeroth-order theory is based on the mean-field Gross-Pitaevskii equation for the condensate ψ-function. The equation is classical in its essence but contains the constant ℎ explicitly. Phenomena such as collective modes, interference, tunneling, Josephson-like current and quantized vortex lines can be described using this equation. Elementary excitations define the thermodynamic behavior of the system and result in a Landau-type damping of collective modes. Fluctuations of the phase of the condensate wave function restrict the monochromaticity of the Josephson current. Fluctuations of the numbers of quanta result in quantum collapse-revival of the collective oscillations. (special issue)
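
    For reference, the mean-field Gross-Pitaevskii equation mentioned above, written in its standard textbook form (standard notation, not quoted from the article):

      i\hbar\,\frac{\partial \psi(\mathbf{r},t)}{\partial t}
        = \left[ -\frac{\hbar^{2}}{2m}\nabla^{2} + V_{\mathrm{ext}}(\mathbf{r})
                 + g\,|\psi(\mathbf{r},t)|^{2} \right] \psi(\mathbf{r},t),
      \qquad g = \frac{4\pi\hbar^{2} a}{m},

    where a is the s-wave scattering length and V_ext(r) is the trap potential.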

  4. Weak nonlinear matter waves in a trapped two-component Bose-Einstein condensates

    International Nuclear Information System (INIS)

    Yong Wenmei; Xue Jukui

    2008-01-01

    The dynamics of the weak nonlinear matter solitary waves in two-component Bose-Einstein condensates (BECs) with a cigar-shaped external potential are investigated analytically by a perturbation method. In the small-amplitude limit, the two components can be decoupled and the dynamics of the solitary waves are governed by a variable-coefficient Korteweg-de Vries (KdV) equation. The reduction to the KdV equation may be useful for understanding the dynamics of nonlinear matter waves in two-component BECs. Analytical expressions for the evolution of the soliton, the emitted radiation profiles and the soliton oscillation frequency are also obtained.
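
    A generic variable-coefficient KdV equation of the type such reductions produce, shown only to fix notation; the specific time-dependent coefficients derived in the paper are not reproduced here:

      u_{\tau} + \alpha(\tau)\, u\, u_{\xi} + \beta(\tau)\, u_{\xi\xi\xi} = 0,

    where ξ and τ are the stretched space and slow time variables of the perturbation expansion, and α, β are coefficients determined by the trap and interaction parameters.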

  5. Tunneling into quantum wires: regularization of the tunneling Hamiltonian and consistency between free and bosonized fermions

    OpenAIRE

    Filippone, Michele; Brouwer, Piet

    2016-01-01

    Tunneling between a point contact and a one-dimensional wire is usually described with the help of a tunneling Hamiltonian that contains a delta function in position space. Whereas the leading order contribution to the tunneling current is independent of the way this delta function is regularized, higher-order corrections with respect to the tunneling amplitude are known to depend on the regularization. Instead of regularizing the delta function in the tunneling Hamiltonian, one may also obta...

  6. Density profiles and collective excitations of a trapped two-component Fermi vapour

    International Nuclear Information System (INIS)

    Amoruso, M.; Meccoli, I.; Minguzzi, A.; Tosi, M.P.

    1999-08-01

    We discuss the ground state and the small-amplitude excitations of a degenerate vapour of fermionic atoms placed in two hyperfine states inside a spherical harmonic trap. An equations-of-motion approach is set up to discuss the hydrodynamic dissipation processes from the interactions between the two components of the fluid beyond mean-field theory and to emphasize analogies with spin dynamics and spin diffusion in a homogeneous Fermi liquid. The conditions for the establishment of a collisional regime via scattering against cold-atom impurities are analyzed. The equilibrium density profiles are then calculated for a two-component vapour of 40 K atoms: they are little modified by the interactions for presently relevant values of the system parameters, but spatial separation of the two components will spontaneously arise as the number of atoms in the trap is increased. The eigenmodes of collective oscillation in both the total particle number density and the concentration density are evaluated analytically in the special case of a symmetric two-component vapour in the collisional regime. The dispersion relation of the surface modes for the total particle density reduces in this case to that of a one-component Fermi vapour, whereas the frequencies of all other modes are shifted by the interactions. (author)

  7. Comments on X. Yin, A. Wen, Y. Chen, and T. Wang, `Studies in an optical millimeter-wave generation scheme via two parallel dual-parallel Mach-Zehnder modulators', Journal of Modern Optics, 58(8), 2011, pp. 665-673

    Science.gov (United States)

    Hasan, Mehedi; Maldonado-Basilio, Ramón; Hall, Trevor J.

    2015-04-01

    Yin et al. have described an innovative filter-less optical millimeter-wave generation scheme for octotupling of a 10 GHz RF oscillator, or sedecimtupling of a 5 GHz RF oscillator using two parallel dual-parallel Mach-Zehnder modulators (DP-MZMs). The great merit of their design is the suppression of all harmonics except those of order ? (octotupling) or all harmonics except those of order ? (sedecimtupling), where ? is an integer. A demerit of their scheme is the requirement to set a precise RF signal modulation index in order to suppress the zeroth order optical carrier. The purpose of this comment is to show that, in the case of the octotupling function, all harmonics may be suppressed except those of order ?, where ? is an odd integer, by the simple addition of an optical ? phase shift between the two DP-MZMs and an adjustment of the RF drive phases. Since the carrier is suppressed in the modified architecture, the octotupling circuit is thereby released of the strict requirement to set the drive level to a precise value without any significant increase in circuit complexity.

  8. Numerical analysis of a non equilibrium two-component two-compressible flow in porous media

    KAUST Repository

    Saad, Bilal Mohammed

    2013-09-01

    We propose and analyze a finite volume scheme to simulate a non-equilibrium, two-component (water and hydrogen), two-phase (liquid and gas) flow model. In this model, local mass non-equilibrium is assumed, and thus the rate of mass exchange between dissolved hydrogen and hydrogen in the gas phase is taken to be finite. The proposed finite volume scheme is fully implicit in time, uses a phase-by-phase upwind approach in space, and discretizes the equations in their general form, including gravity and capillary terms. We show that the proposed scheme satisfies the maximum principle for the saturation and the concentration of the dissolved hydrogen. We establish stability results on the velocity of each phase and on the discrete gradient of the concentration. We show the convergence of a subsequence to a weak solution of the continuous equations as the size of the discretization tends to zero. To our knowledge, this is the first convergence result for a finite volume scheme in the case of two-component, two-phase compressible flow in several space dimensions.

  9. A regularization method for solving the Poisson equation for mixed unbounded-periodic domains

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Mølholm Hejlesen, Mads; Walther, Jens Honoré

    2018-01-01

    the regularized unbounded-periodic Green's functions can be implemented in an FFT-based Poisson solver to obtain a convergence rate corresponding to the regularization order of the Green's function. The high order is achieved without any additional computational cost from the conventional FFT-based Poisson solver … and enables the calculation of the derivative of the solution to the same high order by direct spectral differentiation. We illustrate an application of the FFT-based Poisson solver by using it with a vortex particle mesh method for the approximation of incompressible flow for a problem with a single periodic
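
    A minimal sketch of the conventional FFT-based Poisson solver that the method above builds on, here for a fully periodic square box (the mixed unbounded-periodic case needs the regularized Green's functions of the paper); grid size and right-hand side are assumptions, and the x-derivative is obtained by direct spectral differentiation.

      import numpy as np

      def poisson_fft_periodic(rho, L=1.0):
          """Solve  laplacian(phi) = -rho  on a periodic [0, L)^2 square grid with
          FFTs and return phi and its spectral x-derivative."""
          n = rho.shape[0]                      # square grid assumed
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          kx, ky = np.meshgrid(k, k, indexing="ij")
          k2 = kx ** 2 + ky ** 2
          rho_hat = np.fft.fft2(rho)
          phi_hat = np.zeros_like(rho_hat)
          nonzero = k2 > 0
          phi_hat[nonzero] = rho_hat[nonzero] / k2[nonzero]    # zero-mean gauge for k = 0
          phi = np.real(np.fft.ifft2(phi_hat))
          dphi_dx = np.real(np.fft.ifft2(1j * kx * phi_hat))   # spectral differentiation
          return phi, dphi_dx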

  10. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  11. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF … analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.

  12. Direct gauging of the Poincare group V. Group scaling, classical gauge theory, and gravitational corrections

    International Nuclear Information System (INIS)

    Edelen, D.G.B.

    1986-01-01

    Homogeneous scaling of the group space of the Poincare group, P₁₀, is shown to induce scalings of all geometric quantities associated with the local action of P₁₀. The field equations for both the translation and the Lorentz rotation compensating fields reduce to O(1) equations if the scaling parameter is set equal to the general relativistic gravitational coupling constant 8πGc⁻⁴. Standard expansions of all field variables in power series in the scaling parameter give the following results. The zeroth-order field equations are exactly the classical field equations for matter fields on Minkowski space subject to local action of an internal symmetry group (classical gauge theory). The expansion process is shown to break P₁₀-gauge covariance of the theory, and hence solving the zeroth-order field equations imposes an implicit system of P₁₀-gauge conditions. Explicit systems of field equations are obtained for the first- and higher-order approximations. The first-order translation field equations are driven by the momentum-energy tensor of the matter and internal compensating fields in the zeroth order (classical gauge theory), while the first-order Lorentz rotation field equations are driven by the spin currents of the same classical gauge theory. Field equations for the first-order gravitational corrections to the matter fields and the gauge fields for the internal symmetry group are obtained. Direct Poincare gauge theory is thus shown to satisfy the first two of the three-part acid test of any unified field theory. Satisfaction of the third part of the test, at least for finite neighborhoods, seems probable.

  13. Competitive adsorption of a two-component gas on a deformable adsorbent

    International Nuclear Information System (INIS)

    Usenko, A S

    2014-01-01

    We investigate the competitive adsorption of a two-component gas on the surface of an adsorbent whose adsorption properties vary due to the adsorbent deformation. Taking into account the variation of the adsorbent's properties during adsorption, we show that the adsorption isotherms for a deformable adsorbent differ essentially both from the classical Langmuir adsorption isotherms of a two-component gas and from the adsorption isotherms of a one-component gas. We establish bistability and tristability of the system caused by variations in the adsorption properties of the adsorbent during competitive adsorption of gas particles on it. We derive conditions under which the adsorption isotherms of a binary gas mixture have two stable asymptotes. It is shown that the specific features of the behavior of the system under study can be described in terms of a potential of known explicit form. (paper)

  14. Accelerating Large Data Analysis By Exploiting Regularities

    Science.gov (United States)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical to Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.

  15. Infinite-component conformal fields. Spectral representation of the two-point function

    International Nuclear Information System (INIS)

    Zaikov, R.P.; Tcholakov, V.

    1975-01-01

    The infinite-component conformal fields (with respect to the stability subgroup) are considered. The spectral representation of the conformally invariant two-point function is obtained. This function is nonvanishing also for one 'fundamental' and one infinite-component field.

  16. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  17. Methods of producing epoxides from alkenes using a two-component catalyst system

    Science.gov (United States)

    Kung, Mayfair C.; Kung, Harold H.; Jiang, Jian

    2013-07-09

    Methods for the epoxidation of alkenes are provided. The methods include the steps of exposing the alkene to a two-component catalyst system in an aqueous solution in the presence of carbon monoxide and molecular oxygen under conditions in which the alkene is epoxidized. The two-component catalyst system comprises a first catalyst that generates peroxides or peroxy intermediates during oxidation of CO with molecular oxygen and a second catalyst that catalyzes the epoxidation of the alkene using the peroxides or peroxy intermediates. A catalyst system composed of particles of suspended gold and titanium silicalite is one example of a suitable two-component catalyst system.

  18. Anisotropic Third-Order Regularization for Sparse Digital Elevation Models

    KAUST Repository

    Lellmann, Jan; Morel, Jean-Michel; Schö nlieb, Carola-Bibiane

    2013-01-01

    features of the contours while ensuring smoothness across level lines. We propose an anisotropic third-order model and an efficient method to adaptively estimate both the surface and the anisotropy. Our experiments show that the approach outperforms AMLE

  19. Surgeons’ muscle load during robotic-assisted laparoscopy performed with a regular office chair and the preferred of two ergonomic chairs

    DEFF Research Database (Denmark)

    Dalager, T.; Jensen, P. T.; Winther, T. S.

    2018-01-01

    associated with poor ergonomics and musculoskeletal pain. The ergonomic condition in the robotic console is partially dependent upon the chair provided, which often is a regular office chair. Our study quantified and compared the muscular load during robotic-assisted laparoscopy using one of two custom built...

  20. Use of regularized principal component analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment

    International Nuclear Information System (INIS)

    Chetvertkov, Mikhail A.; Siddiqui, Farzan; Chetty, Indrin; Kumarasiri, Akila; Liu, Chang; Gordon, J. James; Kim, Jinkoo

    2016-01-01

    Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms) and between pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more

  1. Use of regularized principal component analysis to model anatomical changes during head and neck radiation therapy for treatment adaptation and response assessment

    Energy Technology Data Exchange (ETDEWEB)

    Chetvertkov, Mikhail A., E-mail: chetvertkov@wayne.edu [Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan 48201 and Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States); Siddiqui, Farzan; Chetty, Indrin; Kumarasiri, Akila; Liu, Chang; Gordon, J. James [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States); Kim, Jinkoo [Department of Radiation Oncology, Stony Brook University Hospital, Stony Brook, New York 11794 (United States)

    2016-10-15

    Purpose: To develop standard (SPCA) and regularized (RPCA) principal component analysis models of anatomical changes from daily cone beam CTs (CBCTs) of head and neck (H&N) patients and assess their potential use in adaptive radiation therapy, and for extracting quantitative information for treatment response assessment. Methods: Planning CT images of ten H&N patients were artificially deformed to create “digital phantom” images, which modeled systematic anatomical changes during radiation therapy. Artificial deformations closely mirrored patients’ actual deformations and were interpolated to generate 35 synthetic CBCTs, representing evolving anatomy over 35 fractions. Deformation vector fields (DVFs) were acquired between pCT and synthetic CBCTs (i.e., digital phantoms) and between pCT and clinical CBCTs. Patient-specific SPCA and RPCA models were built from these synthetic and clinical DVF sets. EigenDVFs (EDVFs) having the largest eigenvalues were hypothesized to capture the major anatomical deformations during treatment. Results: Principal component analysis (PCA) models achieve variable results, depending on the size and location of anatomical change. Random changes prevent or degrade PCA’s ability to detect underlying systematic change. RPCA is able to detect smaller systematic changes against the background of random fraction-to-fraction changes and is therefore more successful than SPCA at capturing systematic changes early in treatment. SPCA models were less successful at modeling systematic changes in clinical patient images, which contain a wider range of random motion than synthetic CBCTs, while the regularized approach was able to extract major modes of motion. Conclusions: Leading EDVFs from both PCA approaches have the potential to capture systematic anatomical change during H&N radiotherapy when systematic changes are large enough with respect to random fraction-to-fraction changes. In all cases the RPCA approach appears to be more

  2. Regularized Laplace-Fourier-Domain Full Waveform Inversion Using a Weighted ℓ2 Objective Function

    Science.gov (United States)

    Jun, Hyunggu; Kwon, Jungmin; Shin, Changsoo; Zhou, Hongbo; Cogan, Mike

    2017-03-01

    Full waveform inversion (FWI) can be applied to obtain an accurate velocity model that contains important geophysical and geological information. FWI suffers from the local minimum problem when the starting model is not sufficiently close to the true model. Therefore, an accurate macroscale velocity model is essential for successful FWI, and Laplace-Fourier-domain FWI is appropriate for obtaining such a velocity model. However, conventional Laplace-Fourier-domain FWI remains an ill-posed and ill-conditioned problem, meaning that small errors in the data can result in large differences in the inverted model. This approach also suffers from certain limitations related to the logarithmic objective function. To overcome the limitations of conventional Laplace-Fourier-domain FWI, we introduce a weighted ℓ2 objective function, instead of the logarithmic objective function, as the data-domain objective function, and we also introduce two different model-domain regularizations: first-order Tikhonov regularization and prior model regularization. The weighting matrix for the data-domain objective function is constructed to suitably enhance the far-offset information. Tikhonov regularization smoothes the gradient, and prior model regularization allows reliable prior information to be taken into account. Two hyperparameters are obtained through trial and error and used to control the trade-off and achieve an appropriate balance between the data-domain and model-domain gradients. The application of the proposed regularizations facilitates finding a unique solution via FWI, and the weighted ℓ2 objective function ensures a more reasonable residual, thereby improving the stability of the gradient calculation. Numerical tests performed using the Marmousi synthetic dataset show that the use of the weighted ℓ2 objective function and the model-domain regularizations significantly improves the Laplace-Fourier-domain FWI. Because the Laplace-Fourier-domain FWI is improved, the
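
    A composite objective of the general form described above (weighted ℓ2 data misfit plus first-order Tikhonov and prior-model terms) can be written as follows; the symbols are illustrative and are not taken verbatim from the paper:

      \Phi(\mathbf{m}) = \tfrac{1}{2}\,\big\| \mathbf{W}\,(\mathbf{d}_{\mathrm{obs}} - \mathbf{d}(\mathbf{m})) \big\|_2^2
        + \tfrac{\lambda_1}{2}\,\big\| \mathbf{L}\,\mathbf{m} \big\|_2^2
        + \tfrac{\lambda_2}{2}\,\big\| \mathbf{m} - \mathbf{m}_{\mathrm{prior}} \big\|_2^2,

    where W is the data-weighting matrix that enhances far-offset information, L is a first-order difference operator, m_prior is the prior model, and λ1, λ2 are the two hyperparameters that balance the data-domain and model-domain gradients.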

  3. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  4. A Jeziorski-Monkhorst fully uncontracted multi-reference perturbative treatment. I. Principles, second-order versions, and tests on ground state potential energy curves

    Science.gov (United States)

    Giner, Emmanuel; Angeli, Celestino; Garniron, Yann; Scemama, Anthony; Malrieu, Jean-Paul

    2017-06-01

    The present paper introduces a new multi-reference perturbation approach developed at second order, based on a Jeziorski-Monkhorst expansion using individual Slater determinants as perturbers. Thanks to this choice of perturbers, an effective Hamiltonian may be built, allowing for the dressing of the Hamiltonian matrix within the reference space, assumed here to be a CAS-CI. Such a formulation then accounts for the coupling between the static and dynamic correlation effects. With our new definition of zeroth-order energies, these two approaches are strictly size-extensive provided that local orbitals are used, as numerically illustrated here and formally demonstrated in the Appendix. Also, the present formalism allows for the factorization of all double excitation operators, just as in internally contracted approaches, strongly reducing the computational cost of these two approaches with respect to other determinant-based perturbation theories. The accuracy of these methods has been investigated on ground-state potential curves up to full dissociation limits for a set of six molecules involving single, double, and triple bond breaking together with an excited state calculation. The spectroscopic constants obtained with the present methods are found to be in very good agreement with the full configuration interaction results. As the present formalism does not use any parameter or numerically unstable operation, the curves obtained with the two methods are smooth all along the dissociation path.

  5. An investigation of the general regularity of size dependence of reaction kinetics of nanoparticles

    International Nuclear Information System (INIS)

    Cui, Zixiang; Duan, Huijuan; Xue, Yongqiang; Li, Ping

    2015-01-01

    In the preparation and application of nanomaterials, chemical reactions of nanoparticles are often involved, and the size of the nanoparticles has a dramatic influence on the reaction kinetics. Nevertheless, there are many conflicts in the reported regularities of the size dependence of reaction kinetic parameters, and these conflicts have not been explained so far. In this paper, taking the reaction of nano-ZnO (average diameter from 20.96 to 53.31 nm) with acrylic acid solution as a model system, the influence of particle size on the kinetic parameters was investigated. The regularities were consistent with those in most of the literature but inconsistent with a few reports, and the reasons for the conflicts are interpreted. The reasons can be attributed to two factors: one is improper data processing with too few data points, and the other is the difference between solid particles and porous particles. A general regularity of the size dependence of reaction kinetics for solid particles was obtained. The regularity shows that as the size of the nanoparticles decreases, the rate constant and the reaction order increase, while the apparent activation energy and the pre-exponential factor decrease; moreover, the logarithm of the rate constant, the logarithm of the pre-exponential factor, and the apparent activation energy each depend linearly on the reciprocal of the particle size.
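
    Written out, the linear relationships described above take the form below, where d is the particle diameter and the coefficients are empirical fitting constants (symbols chosen here for illustration):

      \ln k = a_k + \frac{b_k}{d}, \qquad
      \ln A = a_A + \frac{b_A}{d}, \qquad
      E_a = a_E + \frac{b_E}{d}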

  6. Implementation of fractional order integrator/differentiator on field programmable gate array

    OpenAIRE

    K.P.S. Rana; V. Kumar; N. Mittra; N. Pramanik

    2016-01-01

    The concept of fractional order calculus is as old as regular calculus. With the advent of high-speed and cost-effective computing power, it is now possible to model real-world control and signal processing problems using fractional order calculus. For the past two decades, applications of fractional order calculus in system modeling, control and signal processing have grown rapidly. This paper presents a systematic procedure for hardware implementation of the basic operators of fractio...

  7. Two-component injection moulding simulation of ABS-POM micro structured surfaces

    DEFF Research Database (Denmark)

    Tosello, Guido; Hansen, Hans Nørgaard; Islam, Aminul

    2013-01-01

    Multi-component micro injection moulding (μIM) processes such as two-component (2k) μIM are the key technologies for the mass fabrication of multi-material micro products. 2k-μIM experiments involving a miniaturized test component with micro features in the sub-mm dimensional range and moulding...... a pair of thermoplastic materials (ABS and POM) were conducted. Three dimensional process simulations based on the finite element method have been performed to explore the capability of predicting filling pattern shape at component-level and surface micro feature-level in a polymer/polymer overmoulding...

  8. Development of a fully automated, web-based, tailored intervention promoting regular physical activity among insufficiently active adults with type 2 diabetes: integrating the I-change model, self-determination theory, and motivational interviewing components.

    Science.gov (United States)

    Moreau, Michel; Gagnon, Marie-Pierre; Boudreau, François

    2015-02-17

    Type 2 diabetes is a major challenge for Canadian public health authorities, and regular physical activity is a key factor in the management of this disease. Given that fewer than half of people with type 2 diabetes in Canada are sufficiently active to meet the recommendations, effective programs targeting the adoption of regular physical activity (PA) are in demand for this population. Many researchers argue that Web-based, tailored interventions targeting PA are a promising and effective avenue for sedentary populations like Canadians with type 2 diabetes, but few have described the detailed development of this kind of intervention. This paper aims to describe the systematic development of the Web-based, tailored intervention, Diabète en Forme, promoting regular aerobic PA among adult Canadian francophones with type 2 diabetes. This paper can be used as a reference for health professionals interested in developing similar interventions. We also explored the integration of theoretical components derived from the I-Change Model, Self-Determination Theory, and Motivational Interviewing, which is a potential path for enhancing the effectiveness of tailored interventions on PA adoption and maintenance. The intervention development was based on the program-planning model for tailored interventions of Kreuter et al. An additional step was added to the model to evaluate the intervention's usability prior to the implementation phase. An 8-week intervention was developed. The key components of the intervention include a self-monitoring tool for PA behavior, a weekly action planning tool, and eight tailored motivational sessions based on attitude, self-efficacy, intention, type of motivation, PA behavior, and other constructs and techniques. Usability evaluation, a step added to the program-planning model, helped to make several improvements to the intervention prior to the implementation phase. The intervention development cost was about CDN $59,700 and took approximately

  9. Charge ordering in two-dimensional ionic liquids

    Science.gov (United States)

    Perera, Aurélien; Urbic, Tomaz

    2018-04-01

    The structural properties of model two-dimensional (2D) ionic liquids are examined, with a particular focus on the charge ordering process, with the use of computer simulation and integral equation theories. The influence of the logarithmic form of the Coulomb interaction, versus that of a 3D screened interaction form, is analysed. Charge order is found to hold and to be analogous for both interaction models, despite their very different form. The influence of charge ordering in the low density regime is discussed in relation to well known properties of 2D Coulomb fluids, such as the Kosterlitz-Thouless transition and criticality. The present study suggests the existence of a stable thermodynamic labile cluster phase, implying the existence of a liquid-liquid "transition" above the liquid-gas binodal. The liquid-gas and Kosterlitz-Thouless transitions would then take place inside the predicted cluster phase.

  10. A two-component copula with links to insurance

    Directory of Open Access Journals (Sweden)

    Ismail S.

    2017-12-01

    This paper presents a new copula to model dependencies between insurance entities, by considering how insurance entities are affected by both macro and micro factors. The model used to build the copula assumes that the insurance losses of two companies or lines of business are related through a random common loss factor which is then multiplied by an individual random company factor to get the total loss amounts. The new two-component copula is not Archimedean and it extends the toolkit of copulas for the insurance industry.
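    A minimal simulation sketch of the loss construction described above (a common loss factor multiplied by individual company factors); the gamma and lognormal distributions chosen here are illustrative assumptions, not the ones used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Common (macro) loss factor shared by both companies.
z = rng.gamma(shape=2.0, scale=1.0, size=n)

# Individual (micro) company factors, independent of each other.
y1 = rng.lognormal(mean=0.0, sigma=0.5, size=n)
y2 = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Total losses: common factor times individual factor.
x1, x2 = z * y1, z * y2

# Dependence induced purely by the shared factor.
print("correlation of the two loss amounts:", np.corrcoef(x1, x2)[0, 1])
```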

  11. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is the linear least-squares criterion. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it appears to suffer from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the desired problem. Among these alternative criteria is the regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that go in search of minimizing the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
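    The COPRA selection rules themselves cannot be reconstructed from this abstract; as a baseline illustration of the regularized least-squares estimate that such a parameter feeds into, a minimal sketch (with an arbitrarily fixed regularization parameter standing in for the COPRA choice) is:

```python
import numpy as np

def regularized_ls(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy problem with an i.i.d. standard Gaussian model matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 30))
x_true = rng.standard_normal(30)
y = A @ x_true + 0.1 * rng.standard_normal(50)

x_hat = regularized_ls(A, y, lam=0.5)   # lam would come from a rule such as COPRA
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```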

  12. Two-component microinjection moulding for MID fabrication

    DEFF Research Database (Denmark)

    Islam, Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2010-01-01

    Moulded interconnect devices (MIDs) are plastic substrates with electrical infrastructure. The fabrication of MIDs is usually based on injection moulding, and different process chains may be identified from this starting point. The use of MIDs has been driven primarily by the automotive sector......, but recently, the medical sector seems more and more interested. In particular, the possibility of miniaturisation of three-dimensional components with electrical infrastructure is attractive. The present paper describes possible manufacturing routes and challenges of miniaturised MIDs based on two...

  13. The Thermal-hydraulic Analysis for the Aging Effect of the Component in CANDU-6 Reactor

    International Nuclear Information System (INIS)

    Bae, Jun Ho; Jung, Jong Yeob

    2014-01-01

    A CANDU reactor consists of many components, including the pressure tubes, reactor pumps, steam generators, feeder pipes, and so on. These components develop aging characteristics as the reactor operates over a long time. The aging of these components leads to changes in the operating parameters and ultimately to a decrease in the operating safety margin. Due to component aging, a CANDU power plant holds an operating licence for a duration of 30 years and regularly checks the plant operating state during the overhaul period. As the reactor ages, the operators must reduce the reactor power level in order to keep the minimum safety margin, which results in a loss of economic profit. Therefore, in order to establish the safety margin of an aged reactor, the aging characteristics of the components should be analyzed and the effect of component aging on the operating parameters should be studied. In this study, the aging characteristics of the components are analyzed and the way component aging affects the operating parameters is determined using the NUCIRC code. Finally, by scrutinizing the effect of the operating parameters on the operating safety margin, the effect of component aging on the safety margin is revealed

  14. Regularity for a clamped grid equation $u_{xxxx}+u_{yyyy}=f $ on a domain with a corner

    Directory of Open Access Journals (Sweden)

    Tymofiy Gerasimov

    2009-04-01

    The operator $L=\frac{\partial^{4}}{\partial x^{4}}+\frac{\partial^{4}}{\partial y^{4}}$ appears in a model for the vertical displacement of a two-dimensional grid that consists of two perpendicular sets of elastic fibers or rods. We are interested in the behaviour of such a grid that is clamped at the boundary, and more specifically near a corner of the domain. Kondratiev supplied the appropriate setting in the sense of Sobolev-type spaces tailored to find the optimal regularity. Inspired by the Laplacian and Bilaplacian models, one expects, except maybe for some special angles, that the optimal regularity improves when the angle decreases. For the homogeneous Dirichlet problem with this special non-isotropic fourth-order operator such a result does not hold true. We will show the existence of an interval $(\frac{1}{2}\pi,\omega_{\star})$, $\omega_{\star}/\pi\approx 0.528\dots$ (in degrees $\omega_{\star}\approx 95.1\dots^{\circ}$), in which the optimal regularity improves with increasing opening angle.

  15. Dissipative dynamics with the corrected propagator method. Numerical comparison between fully quantum and mixed quantum/classical simulations

    International Nuclear Information System (INIS)

    Gelman, David; Schwartz, Steven D.

    2010-01-01

    The recently developed quantum-classical method has been applied to the study of dissipative dynamics in multidimensional systems. The method is designed to treat many-body systems consisting of a low dimensional quantum part coupled to a classical bath. Assuming the approximate zeroth order evolution rule, the corrections to the quantum propagator are defined in terms of the total Hamiltonian and the zeroth order propagator. Then the corrections are taken to the classical limit by introducing the frozen Gaussian approximation for the bath degrees of freedom. The evolution of the primary part is governed by the corrected propagator yielding the exact quantum dynamics. The method has been tested on two model systems coupled to a harmonic bath: (i) an anharmonic (Morse) oscillator and (ii) a double-well potential. The simulations have been performed at zero temperature. The results have been compared to the exact quantum simulations using the surrogate Hamiltonian approach.

  16. Exact solutions to two higher order nonlinear Schroedinger equations

    International Nuclear Information System (INIS)

    Xu Liping; Zhang Jinliang

    2007-01-01

    Using the homogeneous balance principle and F-expansion method, the exact solutions to two higher order nonlinear Schroedinger equations which describe the propagation of femtosecond pulses in nonlinear fibres are obtained with the aid of a set of subsidiary higher order ordinary differential equations (sub-equations for short)

  17. On symmetric structures of order two

    Directory of Open Access Journals (Sweden)

    Michel Bousquet

    2008-04-01

    Let $(\omega_{n})_{0<n}$ be the sequence known as Integer Sequence A047749 (http://www.research.att.com/njas/sequences/A047749). In this paper, we show that the integer $\omega_{n}$ enumerates various kinds of symmetric structures of order two. We first consider ternary trees having a reflexive symmetry and we relate all symmetric combinatorial objects by means of bijections. We then generalize the symmetric structures and correspondences to an infinite family of symmetric objects.

  18. Transcriptome analysis of all two-component regulatory system mutants of Escherichia coli K-12.

    Science.gov (United States)

    Oshima, Taku; Aiba, Hirofumi; Masuda, Yasushi; Kanaya, Shigehiko; Sugiura, Masahito; Wanner, Barry L; Mori, Hirotada; Mizuno, Takeshi

    2002-10-01

    We have systematically examined the mRNA profiles of 36 two-component deletion mutants, which include all two-component regulatory systems of Escherichia coli, under a single growth condition. DNA microarray results revealed that the mutants belong to one of three groups based on their gene expression profiles in Luria-Bertani broth under aerobic conditions: (i) those with no or little change; (ii) those with significant changes; and (iii) those with drastic changes. Under these conditions, the anaeroresponsive ArcB/ArcA system, the osmoresponsive EnvZ/OmpR system and the response regulator UvrY showed the most drastic changes. Cellular functions such as flagellar synthesis and expression of the RpoS regulon were affected by multiple two-component systems. A high correlation coefficient of expression profile was found between several two-component mutants. Together, these results support the view that a network of functional interactions, such as cross-regulation, exists between different two-component systems. The compiled data are available at our website (http://ecoli.aist-nara.ac.jp/xp_analysis/2_components).

  19. Regular black holes: electrically charged solutions, Reissner-Nordstroem outside a De Sitter core

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Jose P.S. [Universidade Tecnica de Lisboa (CENTRA/IST/UTL) (Portugal). Instituto Superior Tecnico. Centro Multidisciplinar de Astrofisica; Zanchin, Vilson T. [Universidade Federal do ABC (UFABC), Santo Andre, SP (Brazil). Centro de Ciencias Naturais e Humanas

    2011-07-01

    The understanding of the inside of a black hole is of crucial importance in order to have the correct picture of a black hole as a whole. The singularities that lurk inside of the usual black hole solutions are things to avoid. Their substitution by a regular part is of great interest, the process generating regular black holes. In the present work regular black hole solutions are found within general relativity coupled to Maxwell's electromagnetism and charged matter. We show that there are objects which correspond to regular charged black holes, whose interior region is de Sitter, whose exterior region is Reissner-Nordstroem, and the boundary between both regions is made of an electrically charged spherically symmetric coat. There are several solutions: the regular nonextremal black holes with a null matter boundary, the regular nonextremal black holes with a timelike matter boundary, the regular extremal black holes with a timelike matter boundary, and the regular overcharged stars with a timelike matter boundary. The main physical and geometrical properties of such charged regular solutions are analyzed. (author)
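    As a hedged reminder of the two geometries being matched (the specific matter coat and junction conditions of the paper are not reproduced here), the metric function $f(r)$ in $ds^{2}=-f(r)\,dt^{2}+f(r)^{-1}dr^{2}+r^{2}d\Omega^{2}$ is of de Sitter form $f(r)=1-r^{2}/R^{2}$ in the core and of Reissner-Nordstroem form $f(r)=1-2m/r+q^{2}/r^{2}$ in the exterior (geometric units), with the charged shell sitting at the boundary between the two regions.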

  20. Regular, dependable, mechanical - J.F. Struensee and the mechanical state (1770-1772)

    DEFF Research Database (Denmark)

    Lassen, Frank Beck

    2009-01-01

    orders was to reform the central administration thus creating a truly enlightened regime.        Reforms inspired by a quite unsystematic combination of French materialism and Prussian cameralism were initiated to promote regularity and swiftness within and between the different departments...... of the administration. As part of the attempt to legitimize such an enterprise, the metaphor of the machine was invoked - if not frequently then at least regularly - in order to indicate the general direction of the sometimes impossibly different reforms.        Based on theoretical premises first presented by Hans...

  1. Fluctuations of the SNR at the output of the MVDR with Regularized Tyler Estimators

    KAUST Repository

    Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    case in which the receiver employs the regularized Tyler estimator in order to estimate the covariance matrix of the interference-plus-noise process using n observations of size N × 1. The choice for the regularized Tyler estimator (RTE) is motivated

  2. A minimal model for two-component dark matter

    International Nuclear Information System (INIS)

    Esch, Sonja; Klasen, Michael; Yaguna, Carlos E.

    2014-01-01

    We propose and study a new minimal model for two-component dark matter. The model contains only three additional fields, one fermion and two scalars, all singlets under the Standard Model gauge group. Two of these fields, one fermion and one scalar, are odd under a Z_2 symmetry that renders them simultaneously stable. Thus, both particles contribute to the observed dark matter density. This model resembles the union of the singlet scalar and the singlet fermionic models but it contains some new features of its own. We analyze in some detail its dark matter phenomenology. Regarding the relic density, the main novelty is the possible annihilation of one dark matter particle into the other, which can affect the predicted relic density in a significant way. Regarding dark matter detection, we identify a new contribution that can lead either to an enhancement or to a suppression of the spin-independent cross section for the scalar dark matter particle. Finally, we define a set of five benchmark models compatible with all present bounds and examine their direct detection prospects at planned experiments. A generic feature of this model is that both particles give rise to observable signals in 1-ton direct detection experiments. In fact, such experiments will be able to probe even a subdominant dark matter component at the percent level.

  3. Hamilton-Jacobi theorems for regular reducible Hamiltonian systems on a cotangent bundle

    Science.gov (United States)

    Wang, Hong

    2017-09-01

    In this paper, some formulations of Hamilton-Jacobi equations for Hamiltonian systems and regular reduced Hamiltonian systems are given. At first, an important lemma is proved, which is a modification of the corresponding result of Abraham and Marsden (1978), so that we can prove two types of geometric Hamilton-Jacobi theorems for a Hamiltonian system on the cotangent bundle of a configuration manifold, by using the symplectic form and the dynamical vector field. Then these results are generalized to the regular reducible Hamiltonian system with symmetry and momentum map, by using the reduced symplectic form and the reduced dynamical vector field. The Hamilton-Jacobi theorems are proved and two types of Hamilton-Jacobi equations, for the regular point reduced Hamiltonian system and the regular orbit reduced Hamiltonian system, are obtained. As an application of the theoretical results, the regular point reducible Hamiltonian system on a Lie group is considered, and two types of Lie-Poisson Hamilton-Jacobi equation for the regular point reduced system are given. In particular, the Type I and Type II Lie-Poisson Hamilton-Jacobi equations for the regular point reduced rigid body and heavy top systems are shown, respectively.

  4. Analysis for a two-dissimilar-component cold standby repairable system with repair priority

    International Nuclear Information System (INIS)

    Leung, Kit Nam Francis; Zhang Yuanlin; Lai, Kin Keung

    2011-01-01

    In this paper, a cold standby repairable system consisting of two dissimilar components and one repairman is studied. Assume that working time distributions and repair time distributions of the two components are both exponential, and Component 1 has repair priority when both components are broken down. After repair, Component 1 follows a geometric process repair while Component 2 obeys a perfect repair. Under these assumptions, using the perfect repair model, the geometric process repair model and the supplementary variable technique, we not only study some important reliability indices, but also consider a replacement policy T, under which the system is replaced when the working age of Component 1 reaches T. Our problem is to determine an optimal policy T* such that the long-run average loss per unit time (i.e. average loss rate) of the system is minimized. The explicit expression for the average loss rate of the system is derived, and the corresponding optimal replacement policy T* can be found numerically. Finally, a numerical example for replacement policy T is given to illustrate some theoretical results and the model's applicability. - Highlights: → A two-dissimilar-component cold standby system with repair priority is formulated. → The successive up/repair times of Component 1 form a decreasing/increasing geometric process. → Not only some reliability indices but also a replacement policy are studied.
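    The paper derives an explicit expression for the long-run average loss rate, which is not reproduced in this abstract; as a purely illustrative sketch, once such a function C(T) is available (the placeholder below is invented so the search runs), the optimal policy T* can be located by a simple numerical search:

```python
import numpy as np

def average_loss_rate(T):
    """Placeholder for the paper's explicit long-run average loss rate C(T);
    this particular form is made up purely for demonstration."""
    return 5.0 / T + 0.02 * T**1.5

# Grid search for the replacement age T* minimizing C(T).
grid = np.linspace(1.0, 200.0, 2000)
costs = np.array([average_loss_rate(T) for T in grid])
T_star = grid[np.argmin(costs)]
print(f"approximate optimal policy T* = {T_star:.2f}, C(T*) = {costs.min():.4f}")
```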

  5. Simple waves in a two-component Bose-Einstein condensate

    Science.gov (United States)

    Ivanov, S. K.; Kamchatnov, A. M.

    2018-04-01

    We study the dynamics of so-called simple waves in a two-component Bose-Einstein condensate. The evolution of the condensate is described by Gross-Pitaevskii equations which can be reduced for these simple wave solutions to a system of ordinary differential equations which coincide with those derived by Ovsyannikov for the two-layer fluid dynamics. We solve the Ovsyannikov system for two typical situations of large and small difference between interspecies and intraspecies nonlinear interaction constants. Our analytic results are confirmed by numerical simulations.
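    For orientation, a hedged sketch of the coupled Gross-Pitaevskii equations underlying such a two-component condensate, written here in a generic one-dimensional form without the trap, with intraspecies constants $g_{11},g_{22}$ and interspecies constant $g_{12}$ (the reduction to Ovsyannikov's system is not reproduced): $i\hbar\,\partial_{t}\psi_{1}=\left[-\frac{\hbar^{2}}{2m}\partial_{x}^{2}+g_{11}|\psi_{1}|^{2}+g_{12}|\psi_{2}|^{2}\right]\psi_{1}$ and $i\hbar\,\partial_{t}\psi_{2}=\left[-\frac{\hbar^{2}}{2m}\partial_{x}^{2}+g_{22}|\psi_{2}|^{2}+g_{12}|\psi_{1}|^{2}\right]\psi_{2}$.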

  6. The role of the Kubo number in two-component turbulence

    International Nuclear Information System (INIS)

    Qin, G.; Shalchi, A.

    2013-01-01

    We explore the random walk of magnetic field lines in two-component turbulence by using computer simulations. It is often assumed that the two-component model provides a good approximation for solar wind turbulence. We explore the dependence of the field line diffusion coefficient on the Kubo number which is a fundamental and characteristic quantity in the theory of turbulence. We show that there are two transport regimes. One is the well-known quasilinear regime in which the diffusion coefficient is proportional to the Kubo number squared, and the second one is a nonlinear regime in which the diffusion coefficient is directly proportional to the Kubo number. The so-called percolative transport regime which is often discussed in the literature cannot be found. The numerical results obtained in the present paper confirm analytical theories for random walking field lines developed in the past
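    As a hedged note on notation (the simulation details are in the paper), the Kubo number for magnetostatic turbulence is commonly written as $K=\frac{\delta B}{B_{0}}\frac{\ell_{\parallel}}{\ell_{\perp}}$, where $\delta B$ is the turbulent field strength, $B_{0}$ the mean field, and $\ell_{\parallel},\ell_{\perp}$ the parallel and perpendicular correlation scales; the two regimes described above then correspond to a field-line diffusion coefficient $\kappa_{FL}\propto K^{2}$ (quasilinear) and $\kappa_{FL}\propto K$ (nonlinear).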

  7. Benefits of regular aerobic exercise for executive functioning in healthy populations.

    Science.gov (United States)

    Guiney, Hayley; Machado, Liana

    2013-02-01

    Research suggests that regular aerobic exercise has the potential to improve executive functioning, even in healthy populations. The purpose of this review is to elucidate which components of executive functioning benefit from such exercise in healthy populations. In light of the developmental time course of executive functions, we consider separately children, young adults, and older adults. Data to date from studies of aging provide strong evidence of exercise-linked benefits related to task switching, selective attention, inhibition of prepotent responses, and working memory capacity; furthermore, cross-sectional fitness data suggest that working memory updating could potentially benefit as well. In young adults, working memory updating is the main executive function shown to benefit from regular exercise, but cross-sectional data further suggest that task-switching and post error performance may also benefit. In children, working memory capacity has been shown to benefit, and cross-sectional data suggest potential benefits for selective attention and inhibitory control. Although more research investigating exercise-related benefits for specific components of executive functioning is clearly needed in young adults and children, when considered across the age groups, ample evidence indicates that regular engagement in aerobic exercise can provide a simple means for healthy people to optimize a range of executive functions.

  8. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
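    SpaRSA itself is not reproduced here; as a minimal stand-in for the same l1-regularized formulation (force coefficients assumed sparse in a chosen dictionary), an ISTA-style iteration can be sketched as follows, with the transfer matrix H, dictionary D, and response y treated as given:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_force_identification(H, D, y, lam, n_iter=500):
    """Minimize 0.5*||H D c - y||^2 + lam*||c||_1 over dictionary coefficients c.
    The identified force is then f = D c."""
    A = H @ D
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return D @ c

# Toy usage with random stand-ins for H and a sparse true coefficient vector.
rng = np.random.default_rng(0)
H = rng.standard_normal((120, 200))
D = np.eye(200)                             # Dirac dictionary, as in the paper
c_true = np.zeros(200)
c_true[[20, 75]] = [3.0, -2.0]
y = H @ D @ c_true + 0.01 * rng.standard_normal(120)
f_hat = ista_force_identification(H, D, y, lam=0.1)
print("recovered nonzeros:", np.flatnonzero(np.abs(f_hat) > 0.5))
```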

  9. Two-component thermosensitive hydrogels : Phase separation affecting rheological behavior

    NARCIS (Netherlands)

    Abbadessa, Anna; Landín, Mariana; Oude Blenke, Erik; Hennink, Wim E.; Vermonden, Tina

    2017-01-01

    Extracellular matrices are mainly composed of a mixture of different biopolymers and therefore the use of two or more building blocks for the development of tissue-mimicking hydrogels is nowadays an attractive strategy in tissue-engineering. Multi-component hydrogel systems may undergo phase

  10. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches to solve discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method, based on Golub-Kahan
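    For reference, the general-form Tikhonov problem referred to above (a standard formulation, with $A$ the forward operator, $b$ the noisy data, $L$ the regularization operator, and $\mu>0$ the regularization parameter) reads $\min_{x}\,\|Ax-b\|_{2}^{2}+\mu\|Lx\|_{2}^{2}$, whose minimizer satisfies the normal equations $(A^{T}A+\mu L^{T}L)\,x_{\mu}=A^{T}b$.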

  11. Short-time regularity assessment of fibrillatory waves from the surface ECG in atrial fibrillation

    International Nuclear Information System (INIS)

    Alcaraz, Raúl; Martínez, Arturo; Hornero, Fernando; Rieta, José J

    2012-01-01

    This paper proposes the first non-invasive method for direct and short-time regularity quantification of atrial fibrillatory (f) waves from the surface ECG in atrial fibrillation (AF). Regularity is estimated by computing individual morphological variations among f waves, which are delineated and extracted from the atrial activity (AA) signal, making use of an adaptive signed correlation index. The algorithm was tested on real AF surface recordings in order to discriminate atrial signals with different organization degrees, providing a notably higher global accuracy (90.3%) than the two non-invasive AF organization estimates defined to date: the dominant atrial frequency (70.5%) and sample entropy (76.1%). Furthermore, due to its ability to assess AA regularity wave to wave, the proposed method is also able to pursue AF organization time course more precisely than the aforementioned indices. As a consequence, this work opens a new perspective in the non-invasive analysis of AF, such as the individualized study of each f wave, that could improve the understanding of AF mechanisms and become useful for its clinical treatment. (paper)

  12. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.

  13. Linear deflectometry - Regularization and experimental design [Lineare deflektometrie - Regularisierung und experimentelles design

    KAUST Repository

    Balzer, Jonathan

    2011-01-01

    Specular surfaces can be measured with deflectometric methods. The solutions form a one-parameter family whose properties are discussed in this paper. We show in theory and experiment that the shape sensitivity of solutions decreases with growing distance from the optical center of the imaging component of the sensor system and propose a novel regularization strategy. Recommendations for the construction of a measurement setup aim for benefiting this strategy as well as the contrarian standard approach of regularization by specular stereo. © Oldenbourg Wissenschaftsverlag.

  14. Stability equation and two-component Eigenmode for domain walls in scalar potential model

    International Nuclear Information System (INIS)

    Dias, G.S.; Graca, E.L.; Rodrigues, R. de Lima

    2002-08-01

    Supersymmetric quantum mechanics involving a two-component representation and two-component eigenfunctions is applied to obtain the stability equation associated with a potential model formulated in terms of two coupled real scalar fields. We investigate the question of stability by introducing an operator technique for the Bogomol'nyi-Prasad-Sommerfield (BPS) and non-BPS states on two domain walls in a scalar potential model with minimal N = 1 supersymmetry. (author)

  15. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
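    As a small illustration of the kind of syntax such a reference covers (Python's re module, one of the APIs listed above; the pattern here is just an example):

```python
import re

# Capture a date in ISO format (YYYY-MM-DD) and report its components.
pattern = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")

match = pattern.search("The build finished on 2007-06-12 at 14:03.")
if match:
    print(match.group("year"), match.group("month"), match.group("day"))
```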

  16. Two Component Injection Moulding for Moulded Interconnect Devices

    DEFF Research Database (Denmark)

    Islam, Aminul

    component (2k) injection moulding is one of the most industrially adaptive processes. However, the use of two component injection moulding for MID fabrication, with circuit patterns in sub-millimeter range, is still a big challenge. This book searches for the technical difficulties associated...... with the process and makes attempts to overcome those challenges. In search of suitable polymer materials for MID applications, potential materials are characterized in terms of polymer-polymer bond strength, polymer-polymer interface quality and selective metallization. The experimental results find the factors...... which can effectively control the quality of 2k moulded parts and metallized MIDs. This book presents documented knowledge about MID process chains, 2k moulding and selective metallization which can be valuable source of information for both academic and industrial users....

  17. Regularization of the Boundary-Saddle-Node Bifurcation

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2018-01-01

    In this paper we treat a particular class of planar Filippov systems which consist of two smooth systems that are separated by a discontinuity boundary. In such systems one vector field undergoes a saddle-node bifurcation while the other vector field is transversal to the boundary. The boundary-saddle-node (BSN) bifurcation occurs at a critical value when the saddle-node point is located on the discontinuity boundary. We derive a local topological normal form for the BSN bifurcation and study its local dynamics by applying the classical Filippov's convex method and a novel regularization approach. In fact, by the regularization approach a given Filippov system is approximated by a piecewise-smooth continuous system. Moreover, the regularization process produces a singular perturbation problem where the original discontinuous set becomes a center manifold. Thus, the regularization enables us to make use of the established theories for continuous systems and slow-fast systems to study the local behavior around the BSN bifurcation.
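    As a hedged sketch of what such a regularization typically looks like (a Sotomayor-Teixeira-type construction; the paper's exact choice is not reproduced here): with the two smooth vector fields $X$ and $Y$ separated by the boundary $\Sigma=\{h(x)=0\}$, the discontinuous field is replaced by $Z_{\varepsilon}(x)=\frac{1+\varphi(h(x)/\varepsilon)}{2}\,X(x)+\frac{1-\varphi(h(x)/\varepsilon)}{2}\,Y(x)$, where $\varphi$ is a transition function with $\varphi(s)=1$ for $s\geq 1$ and $\varphi(s)=-1$ for $s\leq -1$; letting $\varepsilon\to 0$ recovers the Filippov dynamics and produces the slow-fast structure mentioned above.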

  18. Linear deflectometry - Regularization and experimental design [Lineare deflektometrie - Regularisierung und experimentelles design

    KAUST Repository

    Balzer, Jonathan; Werling, Stefan; Beyerer, Jü rgen

    2011-01-01

    distance from the optical center of the imaging component of the sensor system and propose a novel regularization strategy. Recommendations for the construction of a measurement setup aim for benefiting this strategy as well as the contrarian standard

  19. Local coordination and medium range order in molten trivalent metal chlorides: The role of screening by the chlorine component

    International Nuclear Information System (INIS)

    Pastore, G.; Tosi, M.P.

    1995-11-01

    Earlier work has identified the metal ion size R_M as a relevant parameter in determining the evolution of the liquid structure of trivalent metal chlorides across the series from LaCl3 (R_M ≈ 1.4 Å) to AlCl3 (R_M ≈ 0.8 Å). Here we highlight the structural role of the chlorines by contrasting the structure of fully equilibrated melts with that of disordered systems obtained by quenching the chlorine component. Main attention is given to how the suppression of screening of the polyvalent ions by the chlorines changes trends in the local liquid structure (first neighbour coordination and partial radial distribution functions) and in the intermediate range order (first sharp diffraction peak in the partial structure factors). The main microscopic consequences of structural quenching of the chlorine component are a reduction in short range order and an enhancement of intermediate range order in the metal ion component, as well as the suppression of a tendency to molecular-type states at the lower end of the range of R_M. (author). 23 refs, 6 figs

  20. Domain Walls and Textured Vortices in a Two-Component Ginzburg-Landau Model

    DEFF Research Database (Denmark)

    Madsen, Søren Peder; Gaididei, Yu. B.; Christiansen, Peter Leth

    2005-01-01

    coupling between the two order parameters a ''textured vortex'' is found by analytical and numerical solution of the Ginzburg-Landau equations. With a Josephson type coupling between the two order parameters we find the system to split up in two domains separated by a domain wall, where the order parameter...... is depressed to zero....

  1. Comparative Analysis of Wolbachia Genomes Reveals Streamlining and Divergence of Minimalist Two-Component Systems

    Science.gov (United States)

    Christensen, Steen; Serbus, Laura Renee

    2015-01-01

    Two-component regulatory systems are commonly used by bacteria to coordinate intracellular responses with environmental cues. These systems are composed of functional protein pairs consisting of a sensor histidine kinase and cognate response regulator. In contrast to the well-studied Caulobacter crescentus system, which carries dozens of these pairs, the streamlined bacterial endosymbiont Wolbachia pipientis encodes only two pairs: CckA/CtrA and PleC/PleD. Here, we used bioinformatic tools to compare characterized two-component system relays from C. crescentus, the related Anaplasmataceae species Anaplasma phagocytophilum and Ehrlichia chaffeensis, and 12 sequenced Wolbachia strains. We found the core protein pairs and a subset of interacting partners to be highly conserved within Wolbachia and these other Anaplasmataceae. Genes involved in two-component signaling were positioned differently within the various Wolbachia genomes, whereas the local context of each gene was conserved. Unlike Anaplasma and Ehrlichia, Wolbachia two-component genes were more consistently found clustered with metabolic genes. The domain architecture and key functional residues standard for two-component system proteins were well-conserved in Wolbachia, although residues that specify cognate pairing diverged substantially from other Anaplasmataceae. These findings indicate that Wolbachia two-component signaling pairs share considerable functional overlap with other α-proteobacterial systems, whereas their divergence suggests the potential for regulatory differences and cross-talk. PMID:25809075

  2. Nonlinear low frequency electrostatic structures in a magnetized two-component auroral plasma

    Energy Technology Data Exchange (ETDEWEB)

    Rufai, O. R., E-mail: rajirufai@gmail.com [University of the Western Cape, Bellville 7535, Cape-Town (South Africa); Scientific Computing, Memorial University of Newfoundland, St John' s, Newfoundland and Labrador A1C 5S7 (Canada); Bharuthram, R., E-mail: rbharuthram@uwc.ac.za [University of the Western Cape, Bellville 7535, Cape-Town (South Africa); Singh, S. V., E-mail: satyavir@iigs.iigm.res.in; Lakhina, G. S., E-mail: lakhina@iigs.iigm.res.in [University of the Western Cape, Bellville 7535, Cape-Town (South Africa); Indian Institute of Geomagnetism, New Panvel (W), Navi Mumbai 410218 (India)

    2016-03-15

    Finite amplitude nonlinear ion-acoustic solitons, double layers, and supersolitons in a magnetized two-component plasma composed of an adiabatic warm ion fluid and energetic nonthermal electrons are studied by employing the Sagdeev pseudopotential technique and assuming the charge neutrality condition at equilibrium. The model generates supersoliton structures in the supersonic Mach number regime in addition to solitons and double layers, whereas in the unmagnetized two-component plasma case only soliton and double layer solutions can be obtained. Further investigation revealed that wave obliqueness plays a critical role in the evolution of supersoliton structures in magnetized two-component plasmas. In addition, the effect of ion temperature and nonthermal energetic electrons tends to decrease the speed of oscillation of the nonlinear electrostatic structures. The present theoretical results are compared with Viking satellite observations.
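    For context, the Sagdeev pseudopotential technique mentioned above reduces the nonlinear wave problem to an energy-like equation of the form (a standard, schematic statement; the specific pseudopotential of this plasma model is not reproduced) $\frac{1}{2}\left(\frac{d\phi}{d\xi}\right)^{2}+\Psi(\phi;M)=0$, where $\phi$ is the electrostatic potential in the wave frame, $\xi$ the co-moving coordinate, $M$ the Mach number, and the shape of $\Psi$ determines whether solitons, double layers, or supersolitons exist.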

  3. Quasi regular polygons and their duals with Coxeter symmetries Dn represented by complex numbers

    International Nuclear Information System (INIS)

    Koca, M; Koca, N O

    2011-01-01

    This paper deals with tiling of the plane by quasi regular polygons and their duals. The problem is motivated by the fact that graphene, an infinite number of carbon molecules forming a honeycomb lattice, may have states with two bond lengths and equal bond angles or one bond length and different bond angles. We prove that the Euclidean plane can be tiled with two tiles consisting of quasi regular hexagons with two different edge lengths (isogonal hexagons) and regular hexagons. The dual lattice is constructed with the isotoxal hexagons (equal edges but two different interior angles) and regular hexagons. We also give similar tilings of the plane with the quasi regular polygons along with the regular polygons possessing the Coxeter symmetries D_n, n = 2, 3, 4, 5. The group elements as well as the vertices of the polygons are represented by complex numbers.

  4. Regularity and irreversibility of weekly travel behavior

    NARCIS (Netherlands)

    Kitamura, R.; van der Hoorn, A.I.J.M.

    1987-01-01

    Dynamic characteristics of travel behavior are analyzed in this paper using weekly travel diaries from two waves of panel surveys conducted six months apart. An analysis of activity engagement indicates the presence of significant regularity in weekly activity participation between the two waves.

  5. Symmetrical components and power analysis for a two-phase microgrid system

    DEFF Research Database (Denmark)

    Alibeik, M.; Santos Jr., E. C. dos; Blaabjerg, Frede

    2014-01-01

    This paper presents a mathematical model for the symmetrical components and power analysis of a new microgrid system consisting of three wires and two voltages in quadrature, which is designated as a two-phase microgrid. The two-phase microgrid presents the following advantages: 1) constant power...

  6. Exploiting Lexical Regularities in Designing Natural Language Systems.

    Science.gov (United States)

    1988-04-01

    This paper presents the lexical component of the START Question Answering system developed at the MIT Artificial Intelligence Laboratory.

  7. A comparison of two-component and quadratic models to assess survival of irradiated stage-7 oocytes of Drosophila melanogaster

    International Nuclear Information System (INIS)

    Peres, C.A.; Koo, J.O.

    1981-01-01

    In this paper, the quadratic model S/S₀ = exp(−αD − βD²), where S and S₀ are defined as before, is proposed to analyse data of this kind. It is shown that the same biological interpretation can be given to the parameters α and A and to the parameters β and B. Furthermore, it is shown that the quadratic model involves one more probabilistic stage than the two-component model, and therefore the quadratic model would perhaps be more appropriate as a dose-response model for the survival of irradiated stage-7 oocytes of Drosophila melanogaster. In order to apply these results, the data presented by Sankaranarayanan and by Sankaranarayanan and Volkers are reanalysed using the quadratic model. It is shown that the quadratic model fits the data better than the two-component model in most situations. (orig./AJ)

  8. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    Science.gov (United States)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
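    Schematically (a hedged paraphrase of the abstract, with $J_{\mu}$ the regularized objective and $\eta$ a damping coefficient; the exact formulation is in the paper), the second-order dissipative flow replaces a steepest-descent system $\dot{u}=-\nabla J_{\mu}(u)$ by $\ddot{u}+\eta\,\dot{u}=-\nabla J_{\mu}(u)$, which is then integrated with a damped symplectic scheme while the regularization parameter $\mu$ is selected dynamically along the trajectory.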

  9. The second-order interference of two independent single-mode He-Ne lasers

    Science.gov (United States)

    Liu, Jianbin; Le, Mingnan; Bai, Bin; Wang, Wentao; Chen, Hui; Zhou, Yu; Li, Fu-li; Xu, Zhuo

    2015-09-01

    The second-order spatial and temporal interference patterns with two independent single-mode continuous-wave He-Ne lasers are observed when these two lasers are incident to two adjacent input ports of a 1:1 non-polarizing beam splitter, respectively. Two-photon interference based on the superposition principle in Feynman's path integral theory is employed to interpret the experimental results. The conditions to observe the second-order interference pattern with two independent single-mode continuous-wave lasers are discussed. It is concluded that frequency stability is important to observe the second-order interference pattern with two independent light beams.

  10. A new approach to nonlinear constrained Tikhonov regularization

    KAUST Repository

    Ito, Kazufumi

    2011-09-16

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., discrepancy principle and balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples including identifying distributed parameters in elliptic differential equations are presented. © 2011 IOP Publishing Ltd.

  11. Adaptive synchronization between two different order and topology dynamical systems

    International Nuclear Information System (INIS)

    Bowong, S.; Moukam Kakmeni, F.M.; Yamapi, R.

    2006-07-01

    This contribution studies adaptive synchronization between two dynamical systems of different order whose topological structure is also different. By order we mean the number of first order differential equations. The problem is closely related to the synchronization of strictly different systems. The master system is given by a sixth order equation with chaotic behavior whereas the slave system is a fourth-order nonautonomous system with rational nonlinear terms. Based on the Lyapunov stability theory, sufficient conditions for the synchronization have been analyzed theoretically and numerically. (author)

  12. Coherent quantum phase slip in two-component bosonic atomtronic circuits

    International Nuclear Information System (INIS)

    Gallemí, A; Mateo, A Muñoz; Mayol, R; Guilleumas, M

    2016-01-01

    Coherent quantum phase slip consists in the coherent transfer of vortices in superfluids. We investigate this phenomenon in two miscible coherently coupled components of a spinor Bose gas confined in a toroidal trap. After imprinting different vortex states, i.e. states with quantized circulation, on each component, we demonstrate that during the whole dynamics the system remains in a linear superposition of two current states in spite of the nonlinearity, and can be mapped onto a linear Josephson problem. We propose this system as a good candidate for the realization of a Mooij–Harmans qubit and remark its feasibility for implementation in current experiments with 87 Rb, since we have used values for the physical parameters currently available in laboratories. (paper)

  13. Critical density for Landau damping in a two-electron-component plasma

    Energy Technology Data Exchange (ETDEWEB)

    Rupp, Constantin F.; López, Rodrigo A.; Araneda, Jaime A. [Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Concepción, Concepción (Chile)

    2015-10-15

    The asymptotic evolution of an initial perturbation in a collisionless two-electron-component plasma with different temperatures is studied numerically. The transition between linear and nonlinear damping regimes is determined by slowly varying the density of the secondary electron-component using high-resolution Vlasov-Poisson simulations. It is shown that, for fixed amplitude perturbations, this transition behaves as a critical phenomenon with time scales and field amplitudes exhibiting power-law dependencies on the threshold density, similar to the critical amplitude behavior in a single-component plasma.

  14. Reduced order modeling of flashing two-phase jets

    Energy Technology Data Exchange (ETDEWEB)

    Gurecky, William, E-mail: william.gurecky@utexas.edu; Schneider, Erich, E-mail: eschneider@mail.utexas.edu; Ballew, Davis, E-mail: davisballew@utexas.edu

    2015-12-01

    Highlights: • Accident simulation requires ability to quickly predict two-phase flashing jet's damage potential. • A reduced order modeling methodology informed by experimental or computational data is described. • Zone of influence volumes are calculated for jets of various upstream thermodynamic conditions. - Abstract: In the event of a Loss of Coolant Accident (LOCA) in a pressurized water reactor, the escaping coolant produces a highly energetic flashing jet with the potential to damage surrounding structures. In LOCA analysis, the goal is often to evaluate many break scenarios in a Monte Carlo style simulation to evaluate the resilience of a reactor design. Therefore, in order to quickly predict the damage potential of flashing jets, it is of interest to develop a reduced order model that relates the damage potential of a jet to the pressure and temperature upstream of the break and the distance from the break to a given object upon which the jet is impinging. This work presents framework for producing a Reduced Order Model (ROM) that may be informed by measured data, Computational Fluid Dynamics (CFD) simulations, or a combination of both. The model is constructed by performing regression analysis on the pressure field data, allowing the impingement pressure to be quickly reconstructed for any given upstream thermodynamic condition within the range of input data. The model is applicable to both free and fully impinging two-phase flashing jets.
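    A purely illustrative sketch of the kind of regression-based reduced order model described above, with impingement pressure fitted as a quadratic function of upstream pressure, upstream temperature, and standoff distance (the variable names, synthetic data, and quadratic form are assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: upstream pressure (MPa), temperature (K),
# standoff distance (m), and measured/CFD peak impingement pressure (kPa).
P_up = rng.uniform(10.0, 16.0, 200)
T_up = rng.uniform(500.0, 600.0, 200)
dist = rng.uniform(0.1, 1.0, 200)
p_imp = 50.0 * P_up / dist**2 + 0.05 * T_up + rng.normal(0.0, 5.0, 200)

# Quadratic-in-features design matrix for ordinary least squares.
X = np.column_stack([np.ones_like(P_up), P_up, T_up, dist,
                     P_up**2, T_up**2, dist**2,
                     P_up * T_up, P_up * dist, T_up * dist])
coef, *_ = np.linalg.lstsq(X, p_imp, rcond=None)

# Fast surrogate evaluation for a new break scenario.
def rom_pressure(p, t, d):
    x = np.array([1.0, p, t, d, p * p, t * t, d * d, p * t, p * d, t * d])
    return x @ coef

print("predicted impingement pressure:", rom_pressure(14.0, 550.0, 0.3))
```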

  15. Chemical association in simple models of molecular and ionic fluids. III. The cavity function

    Science.gov (United States)

    Zhou, Yaoqi; Stell, George

    1992-01-01

    Exact equations which relate the cavity function to excess solvation free energies and equilibrium association constants are rederived by using a thermodynamic cycle. A zeroth-order approximation, derived previously by us as a simple interpolation scheme, is found to be very accurate if the associative bonding occurs on or near the surface of the repulsive core of the interaction potential. If the bonding radius is substantially less than the core radius, the approximation overestimates the association degree and the association constant. For binary association, the zeroth-order approximation is equivalent to the first-order thermodynamic perturbation theory (TPT) of Wertheim. For n-particle association, the combination of the zeroth-order approximation with a ``linear'' approximation (for n-particle distribution functions in terms of the two-particle function) yields the first-order TPT result. Using our exact equations to go beyond TPT, near-exact analytic results for binary hard-sphere association are obtained. Solvent effects on binary hard-sphere association and ionic association are also investigated. A new rule which generalizes Le Chatelier's principle is used to describe the three distinct forms of behaviors involving solvent effects that we find. The replacement of the dielectric-continuum solvent model by a dipolar hard-sphere model leads to improved agreement with an experimental observation. Finally, equation of state for an n-particle flexible linear-chain fluid is derived on the basis of a one-parameter approximation that interpolates between the generalized Kirkwood superposition approximation and the linear approximation. A value of the parameter that appears to be near optimal in the context of this application is obtained from comparison with computer-simulation data.

  16. Empirical laws, regularity and necessity

    NARCIS (Netherlands)

    Koningsveld, H.

    1973-01-01

    In this book I have tried to develop an analysis of the concept of an empirical law, an analysis that differs in many ways from the alternative analyse's found in contemporary literature dealing with the subject.

    I am referring especially to two well-known views, viz. the regularity and

  17. Directional Total Generalized Variation Regularization for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas; Dong, Yiqiu

    2017-01-01

    this regularizer for directional images is highly advantageous. In order to estimate directions in impulse noise corrupted images, which is much more challenging compared to Gaussian noise corrupted images, we introduce a new Fourier transform-based method. Numerical experiments show that this method is more...

  18. An extended L-curve method for choosing a regularization parameter in electrical resistance tomography

    International Nuclear Information System (INIS)

    Xu, Yanbin; Pei, Yang; Dong, Feng

    2016-01-01

    The L-curve method is a popular regularization parameter choice method for the ill-posed inverse problem of electrical resistance tomography (ERT). However, the method cannot always determine a proper parameter for all situations. An investigation into those situations where the L-curve method failed shows that a new corner point appears on the L-curve, and the parameter corresponding to the new corner point can yield a satisfactory reconstructed solution. Thus an extended L-curve method, which determines the regularization parameter associated with either the global corner or the new corner, is proposed. Furthermore, two strategies are provided to determine the new corner: one is based on the second-order differential of the L-curve, and the other is based on the curvature of the L-curve. The proposed method is examined by both numerical simulations and experimental tests, and the results indicate that the extended method can handle the parameter choice problem even in cases where the typical L-curve method fails. Finally, in order to reduce the running time of the method, the extended method is combined with a projection method based on the Krylov subspace, which boosts the extended L-curve method. The results verify that the speed of the extended L-curve method is distinctly improved. The proposed method extends the application of the L-curve in the field of choosing the regularization parameter with an acceptable running time and can also be used in other kinds of tomography. (paper)
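    As a minimal sketch of the curvature-based corner detection mentioned as one of the two strategies (a standard discrete-curvature estimate on the log-log L-curve, not the paper's full extended algorithm), assuming arrays of residual norms and solution norms computed over a set of candidate regularization parameters:

```python
import numpy as np

def lcurve_corner(residual_norms, solution_norms, lambdas):
    """Pick the regularization parameter at the point of maximum curvature
    of the L-curve (log residual norm vs. log solution norm)."""
    x = np.log(residual_norms)
    y = np.log(solution_norms)
    # First and second derivatives with respect to the curve parameter (index).
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return lambdas[np.argmax(np.abs(curvature))]

# Toy usage with synthetic norms over a log-spaced parameter grid.
lams = np.logspace(-6, 1, 50)
res = np.sqrt(1e-4 + lams**2)       # residual norm grows with lambda
sol = np.sqrt(1.0 + 1e-4 / lams)    # solution norm shrinks with lambda
print("chosen lambda:", lcurve_corner(res, sol, lams))
```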

  19. Local unitary transformation method for large-scale two-component relativistic calculations. II. Extension to two-electron Coulomb interaction.

    Science.gov (United States)

    Seino, Junji; Nakai, Hiromi

    2012-10-14

    The local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012)], which is based on the locality of relativistic effects, has been extended to a four-component Dirac-Coulomb Hamiltonian. In the previous study, the LUT scheme was applied only to a one-particle IODKH Hamiltonian with non-relativistic two-electron Coulomb interaction, termed IODKH/C. The current study extends the LUT scheme to a two-particle IODKH Hamiltonian as well as the one-particle one, termed IODKH/IODKH, which has been a real bottleneck in numerical calculation. The LUT scheme with the IODKH/IODKH Hamiltonian was numerically assessed in the diatomic molecules HX and X2 and the hydrogen halide molecules (HX)n (X = F, Cl, Br, and I). The total Hartree-Fock energies calculated by the LUT method agree well with conventional IODKH/IODKH results. The computational cost of the LUT method is reduced drastically compared with that of the conventional method. In addition, the LUT method achieves linear scaling with respect to the system size and a small prefactor.

  20. Quantum-phase dynamics of two-component Bose-Einstein condensates: Collapse-revival of macroscopic superposition states

    International Nuclear Information System (INIS)

    Nakano, Masayoshi; Kishi, Ryohei; Ohta, Suguru; Takahashi, Hideaki; Furukawa, Shin-ichi; Yamaguchi, Kizashi

    2005-01-01

    We investigate the long-time dynamics of two-component dilute gas Bose-Einstein condensates with relatively different two-body interactions and Josephson couplings between the two components. Although in certain parameter regimes the quantum state of the system is known to evolve into macroscopic superposition, i.e., Schroedinger cat state, of two states with relative atom number differences between the two components, the Schroedinger cat state is also found to repeat the collapse and revival behavior in the long-time region. The dynamical behavior of the Pegg-Barnett phase difference between the two components is shown to be closely connected with the dynamics of the relative atom number difference for different parameters. The variation in the relative magnitude between the Josephson coupling and intra- and inter-component two-body interaction difference turns out to significantly change not only the size of the Schroedinger cat state but also its collapse-revival period, i.e., the lifetime of the Schroedinger cat state

  1. Dynamics of a two-dimensional order-disorder transition

    International Nuclear Information System (INIS)

    Sahni, P.S.; Dee, G.; Gunton, J.D.; Phani, M.; Lebowitz, J.L.; Kalos, M.

    1981-01-01

    We present results of a Monte Carlo study of the time development of a two-dimensional order-disorder model binary alloy following a quench to low temperature from a disordered, high-temperature state. The behavior is qualitatively quite similar to that seen in a recent study of a three-dimensional system. The structure function exhibits a scaling of the form K^2(t)S(k,t) = G(k/K(t)), where the moment K(t) decreases with time approximately like t^(-1/2). If one interprets this moment as being inversely proportional to the domain size, the characteristic domain growth rate is proportional to t^(-1/2). Additional insight into this time evolution is obtained from studying the development of the short-range order, as well as from monitoring the growth of a compact ordered domain embedded in a region of opposite order. All these results are consistent with the picture of domain growth as proposed by Lifshitz and by Cahn and Allen

  2. Spectral curves in gauge/string dualities: integrability, singular sectors and regularization

    International Nuclear Information System (INIS)

    Konopelchenko, Boris; Alonso, Luis Martínez; Medina, Elena

    2013-01-01

    We study the moduli space of the spectral curves y² = W′(z)² + f(z) which characterize the vacua of N=1 U(n) supersymmetric gauge theories with an adjoint Higgs field and a polynomial tree level potential W(z). The integrable structure of the Whitham equations is used to determine the spectral curves from their moduli. An alternative characterization of the spectral curves in terms of critical points of a family of polynomial solutions W to Euler–Poisson–Darboux equations is provided. The equations for these critical points are a generalization of the planar limit equations for one-cut random matrix models. Moreover, singular spectral curves with higher order branch points turn out to be described by degenerate critical points of W. As a consequence we propose a multiple scaling limit method of regularization and show that, in the simplest cases, it leads to the Painlevé-I equation and its multi-component generalizations. (paper)

  3. A novel two-component system involved in secretion stress response in Streptomyces lividans.

    Directory of Open Access Journals (Sweden)

    Sonia Gullón

    BACKGROUND: Misfolded proteins accumulating outside the bacterial cytoplasmic membrane can interfere with the secretory machinery, hence the existence of quality factors to eliminate these misfolded proteins is of capital importance in bacteria that are efficient producers of secretory proteins. These bacteria normally use a specific two-component system to respond to the stress produced by the accumulation of the misfolded proteins, by activating the expression of HtrA-like proteases to specifically eliminate the incorrectly folded proteins. METHODOLOGY/PRINCIPAL FINDINGS: Overproduction of alpha-amylase in S. lividans causing secretion stress permitted the identification of a two-component system (SCO4156-SCO4155) that regulates three HtrA-like proteases which appear to be involved in the secretion stress response. Mutants in each of the genes forming part of the two-gene operon that encodes the sensor and regulator protein components accumulated misfolded proteins outside the cell, strongly suggesting the involvement of this two-component system in the S. lividans secretion stress response. CONCLUSIONS/SIGNIFICANCE: To our knowledge this is the first time that a specific secretion stress response two-component system is found to control the expression of three HtrA-like protease genes in S. lividans, a bacterium that has been repeatedly used as a host for the synthesis of homologous and heterologous secretory proteins of industrial application.

  4. Automatic Constraint Detection for 2D Layout Regularization.

    Science.gov (United States)

    Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter

    2016-08-01

    In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
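
As a rough illustration of the idea in the two records above (and not the authors' actual quadratic-programming formulation), the sketch below detects near-coincident left edges of bounding boxes and enforces the detected alignments softly by minimizing a small quadratic objective. The tolerance, weight, and toy coordinates are invented for the example.

```python
# Hedged sketch: detect alignment constraints between layout elements and enforce
# them by solving a small quadratic problem (soft constraints, closed-form solve).
import numpy as np

def regularize_left_edges(x0, tol=4.0, weight=10.0):
    """x0: input left-edge coordinates; returns (regularized coords, detected pairs)."""
    n = len(x0)
    # Constraint detection: pairs whose left edges nearly coincide
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(x0[i] - x0[j]) < tol]
    # Objective: stay close to the input positions + pull detected pairs together,
    # i.e. minimize sum (x_i - x0_i)^2 + weight * sum_pairs (x_i - x_j)^2.
    H = np.eye(n)                      # identity + weighted graph Laplacian of pairs
    g = np.array(x0, dtype=float)
    for i, j in pairs:
        H[i, i] += weight; H[j, j] += weight
        H[i, j] -= weight; H[j, i] -= weight
    return np.linalg.solve(H, g), pairs

# Usage on invented element coordinates
x0 = [10.0, 11.5, 50.0, 49.2, 100.0]
x_reg, detected = regularize_left_edges(x0)
print(detected)                 # detected alignment pairs, e.g. [(0, 1), (2, 3)]
print(np.round(x_reg, 2))       # nearly-aligned edges are snapped close together
```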

  5. Automatic Constraint Detection for 2D Layout Regularization

    KAUST Repository

    Jiang, Haiyong

    2015-09-18

    In this paper, we address the problem of constraint detection for layout regularization. As a layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user-created content, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. In our results, we evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates that our method has superior performance to the state of the art.

  6. The two-component afterglow of Swift GRB 050802

    Science.gov (United States)

    Oates, S. R.; de Pasquale, M.; Page, M. J.; Blustin, A. J.; Zane, S.; McGowan, K.; Mason, K. O.; Poole, T. S.; Schady, P.; Roming, P. W. A.; Page, K. L.; Falcone, A.; Gehrels, N.

    2007-09-01

    This paper investigates GRB 050802, one of the best examples of a Swift gamma-ray burst afterglow that shows a break in the X-ray light curve, while the optical counterpart decays as a single power law. This burst has an optically bright afterglow of 16.5 mag, detected throughout the 170-650 nm spectral range of the Ultraviolet and Optical Telescope (UVOT) onboard Swift. Observations began with the X-ray Telescope and UVOT telescopes 286 s after the initial trigger and continued for 1.2 × 10^6 s. The X-ray light curve consists of three power-law segments: a rise until 420 s, followed by a slow decay with α = 0.63 ± 0.03 until 5000 s, after which the light curve decays faster with a slope of α_3 = 1.59 ± 0.03. The optical light curve decays as a single power law with α_O = 0.82 ± 0.03 throughout the observation. The X-ray data on their own are consistent with the break at 5000 s being due to the end of energy injection. Modelling the optical to X-ray spectral energy distribution, we find that the optical afterglow cannot be produced by the same component as the X-ray emission at late times, ruling out a single-component afterglow. We therefore considered two-component jet models and find that the X-ray and optical emission is best reproduced by a model in which both components are energy injected for the duration of the observed afterglow and the X-ray break at 5000 s is due to a jet break in the narrow component. This bright, well-observed burst is likely a guide for interpreting the surprising finding of Swift that bursts seldom display achromatic jet breaks.

  7. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  8. Radiative generation of quark masses and mixing angles in the two Higgs doublet model

    International Nuclear Information System (INIS)

    Ibarra, Alejandro; Solaguren-Beascoa, Ana

    2014-01-01

    We present a framework to generate the quark mass hierarchies and mixing angles by extending the Standard Model with one extra Higgs doublet. The charm and strange quark masses are generated by small quantum effects, thus explaining the hierarchy between the second and third generation quark masses. All the mixing angles are also generated by small quantum effects: the Cabibbo angle is generated at zeroth order in perturbation theory, while the remaining off-diagonal entries of the Cabibbo–Kobayashi–Maskawa matrix are generated at first order, hence explaining the observed hierarchy |V_ub|, |V_cb| ≪ |V_us|. The values of the radiatively generated parameters depend only logarithmically on the heavy Higgs mass, therefore this framework can be reconciled with the stringent limits on flavor violation by postulating a sufficiently large new physics scale

  9. A two-component generalization of the reduced Ostrovsky equation and its integrable semi-discrete analogue

    International Nuclear Information System (INIS)

    Feng, Bao-Feng; Maruno, Ken-ichi; Ohta, Yasuhiro

    2017-01-01

    In the present paper, we propose a two-component generalization of the reduced Ostrovsky (Vakhnenko) equation, whose differential form can be viewed as the short-wave limit of a two-component Degasperis–Procesi (DP) equation. They are integrable due to the existence of Lax pairs. Moreover, we have shown that the two-component reduced Ostrovsky equation can be reduced from an extended BKP hierarchy with negative flow through a pseudo 3-reduction and a hodograph (reciprocal) transform. As a by-product, its bilinear form and N-soliton solution in terms of pfaffians are presented. One- and two-soliton solutions are provided and analyzed. In the second part of the paper, starting from a modified BKP hierarchy, which is a Bäcklund transformation of the above extended BKP hierarchy, an integrable semi-discrete analogue of the two-component reduced Ostrovsky equation is constructed by defining an appropriate discrete hodograph transform and dependent variable transformations. In particular, the backward difference form of the above semi-discrete two-component reduced Ostrovsky equation gives rise to the integrable semi-discretization of the short-wave limit of a two-component DP equation. Their N-soliton solutions in terms of pfaffians are also provided. (paper)

  10. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.
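
The sketch below illustrates the baseline that such robust methods improve upon: a Capon beamformer regularized by simple diagonal loading of the sample covariance. It is a generic textbook-style example, not the RLS regularization proposed in the record above; the array geometry, loading rule, and toy signals are assumptions.

```python
# Hedged sketch: diagonally loaded (regularized) Capon beamformer for a ULA.
import numpy as np

def steering_vector(theta_deg, n_sensors, d=0.5):
    """Uniform linear array steering vector, element spacing d in wavelengths."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def loaded_capon_weights(R, a, loading=1e-2):
    """w = (R + delta I)^{-1} a / (a^H (R + delta I)^{-1} a), delta scaled by tr(R)/n."""
    n = R.shape[0]
    Rl = R + loading * np.trace(R).real / n * np.eye(n)
    Ria = np.linalg.solve(Rl, a)
    return Ria / (a.conj() @ Ria)

# Toy usage: desired signal at 10 deg, strong interferer at -30 deg, white noise
rng = np.random.default_rng(1)
n, snaps = 8, 200
a_sig, a_int = steering_vector(10, n), steering_vector(-30, n)
X = (np.outer(a_sig, rng.standard_normal(snaps))
     + 3 * np.outer(a_int, rng.standard_normal(snaps))
     + 0.1 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps))))
R = X @ X.conj().T / snaps                  # sample covariance (usually ill-conditioned)
w = loaded_capon_weights(R, a_sig)
print("gain toward signal:", abs(w.conj() @ a_sig))       # ~1 by construction
print("gain toward interferer:", abs(w.conj() @ a_int))   # strongly suppressed
```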

  11. Regularization dependence on phase diagram in Nambu–Jona-Lasinio model

    International Nuclear Information System (INIS)

    Kohyama, H.; Kimura, D.; Inagaki, T.

    2015-01-01

    We study the regularization dependence of meson properties and of the phase diagram of quark matter using the two-flavor Nambu–Jona-Lasinio model. The model also has a parameter dependence within each regularization, so we explicitly give the model parameters for several sets of input observables and then investigate their effect on the phase diagram. We find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. The regularization scheme and the parameters should therefore be chosen carefully when one investigates the QCD critical end point in effective model studies

  12. Compact perturbative expressions for neutrino oscillations in matter

    Energy Technology Data Exchange (ETDEWEB)

    Denton, Peter B. [Theoretical Physics Department, Fermi National Accelerator Laboratory,P.O. Box 500, Batavia, IL 60510 (United States); Physics & Astronomy Department, Vanderbilt University,PMB 401807, 2301 Vanderbilt Place, Nashville, TN 37235 (United States); Minakata, Hisakazu [Instituto de Física, Universidade de São Paulo,C.P. 66.318, 05315-970 São Paulo (Brazil); Department of Physics, Yachay Tech University,San Miguel de Urcuquí 100119 (Ecuador); Parke, Stephen J. [Theoretical Physics Department, Fermi National Accelerator Laboratory,P.O. Box 500, Batavia, IL 60510 (United States)

    2016-06-08

    We further develop and extend a recent perturbative framework for neutrino oscillations in uniform matter density so that the resulting oscillation probabilities are accurate for the complete matter potential versus baseline divided by neutrino energy plane. This extension also gives the exact oscillation probabilities in vacuum for all values of baseline divided by neutrino energy. The expansion parameter used is related to the ratio of the solar to the atmospheric Δm² scales but with a unique choice of the atmospheric Δm² such that certain first-order effects are taken into account in the zeroth-order Hamiltonian. Using a mixing matrix formulation, this framework has the exceptional feature that the neutrino oscillation probability in matter has the same structure as in vacuum, to all orders in the expansion parameter. It also contains all orders in the matter potential and sin θ₁₃. It facilitates immediate physical interpretation of the analytic results, and makes the expressions for the neutrino oscillation probabilities extremely compact and very accurate even at zeroth order in our perturbative expansion. The first and second order results are also given which improve the precision by approximately two or more orders of magnitude per perturbative order.

  13. Measuring two-phase and two-component mixtures by radiometric technique

    International Nuclear Information System (INIS)

    Mackuliak, D.; Rajniak, I.

    1984-01-01

    The applicability of the radiometric method to measuring the water content of steam was tested. The experiments were carried out under model conditions in which steam was replaced by a two-component mixture of water and air. The beta radiation source was the isotope ²⁰⁴Tl (E_max = 0.765 MeV) with an activity of 19.35 MBq. Measurements were carried out within the range of surface density of the mixture from 0.119 kg·m⁻² to 0.130 kg·m⁻². The mixture speed was 5.1 m·s⁻¹ to 7.1 m·s⁻¹. The observed dependence of the relative pulse frequency on the specific water content of the mixture was approximated by a linear regression. (B.S.)

  14. Ordered one-component plasmas: Phase transitions, normal modes, large systems, and experiments in a storage ring

    International Nuclear Information System (INIS)

    Schiffer, J.P.

    1994-01-01

    The property of cold one-component plasmas, confined by external forces, to form an ordered array has been known for some time both from simulations and from experiment. The purpose of this talk is to summarize some recent work on simulations and some new experimental results. The author discusses some experimental work on real storage rings, magnetic storage devices in which particles circulate with large kinetic energies and for which laser cooling is used on partially ionized ions to attain temperatures ten or more orders of magnitude lower than their kinetic energies

  15. Parameter identification for continuous point emission source based on Tikhonov regularization method coupled with particle swarm optimization algorithm.

    Science.gov (United States)

    Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun

    2017-03-05

    In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and a reliable probability estimate, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but this fails when information about both the source strength and the location is absent. Therefore, a hybrid method combining linear Tikhonov regularization with a PSO algorithm was designed. With this method, the nonlinear inverse dispersion model is transformed to a linear form under some assumptions, and the source parameters, including source strength and location, are identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. The estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, but the estimates of the source parameters are close to each other for the different regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed with the primary nonlinear dispersion model to estimate the source parameters. Comparisons on simulated and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and its confidence intervals are more reasonable than those from the nonlinear method. The estimates from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, and a reasonable confidence interval at given probability levels can additionally be obtained with the Tikhonov-PSO method. The presented linear Tikhonov-PSO regularization method is therefore a promising method for hazardous emission source identification.
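
A minimal sketch of the hybrid idea follows, using an invented one-dimensional toy forward model: for a candidate source location the model is linear in the strength, so the strength is obtained in closed form by Tikhonov regularization, while a small particle swarm searches over the location. The kernel, parameters, and PSO settings are illustrative assumptions, not those of the cited paper.

```python
# Hedged sketch: Tikhonov (strength) nested inside a tiny PSO (location) search.
import numpy as np

rng = np.random.default_rng(0)
sensors = np.linspace(0.0, 10.0, 15)            # sensor positions along a line
s_true, q_true, width = 3.7, 5.0, 1.5            # hidden source location/strength

def kernel(s):
    """Toy dispersion kernel: sensor response per unit source strength."""
    return np.exp(-(sensors - s) ** 2 / (2 * width ** 2))

data = q_true * kernel(s_true) + 0.02 * rng.standard_normal(sensors.size)

def tikhonov_strength(s, lam=1e-3):
    """Closed-form Tikhonov estimate of q for a fixed location s."""
    k = kernel(s)
    return (k @ data) / (k @ k + lam)

def misfit(s):
    q = tikhonov_strength(s)
    return np.sum((q * kernel(s) - data) ** 2)

# Minimal PSO over the source location
n_particles, iters = 12, 60
pos = rng.uniform(0, 10, n_particles)
vel = np.zeros(n_particles)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]
for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 10)
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("estimated location:", round(gbest, 2), " strength:", round(tikhonov_strength(gbest), 2))
```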

  16. Anharmonic potential in the oscillator representation

    International Nuclear Information System (INIS)

    Dineykhan, M.; Efimov, G.V.

    1994-01-01

    For the non-relativistic and relativized Schroedinger equations, the Wick-ordering method called the oscillator representation is proposed to calculate the energy spectrum for a wide class of potentials allowing the existence of a bound state. The oscillator representation method gives a unique regular way to describe and calculate the energy levels of the ground state as well as of orbital and radial excitations for a wide class of potentials. The results of the zeroth-approximation oscillator representation are in good agreement with the exact values for the anharmonic potentials. The oscillator representation method was also applied to the relativized Schroedinger equation. The perturbation series converges fairly fast, i.e., the higher perturbation corrections in the interaction Hamiltonian are small enough. 29 refs.; 4 tabs. (author)

  17. Fracton topological order from nearest-neighbor two-spin interactions and dualities

    Science.gov (United States)

    Slagle, Kevin; Kim, Yong Baek

    2017-10-01

    Fracton topological order describes a remarkable phase of matter, which can be characterized by fracton excitations with constrained dynamics and a ground-state degeneracy that increases exponentially with the length of the system on a three-dimensional torus. However, previous models exhibiting this order require many-spin interactions, which may be very difficult to realize in a real material or cold atom system. In this work, we present a more physically realistic model which has the so-called X-cube fracton topological order [Vijay, Haah, and Fu, Phys. Rev. B 94, 235157 (2016), 10.1103/PhysRevB.94.235157] but only requires nearest-neighbor two-spin interactions. The model lives on a three-dimensional honeycomb-based lattice with one to two spin-1/2 degrees of freedom on each site and a unit cell of six sites. The model is constructed from two orthogonal stacks of Z2 topologically ordered Kitaev honeycomb layers [Kitaev, Ann. Phys. 321, 2 (2006), 10.1016/j.aop.2005.10.005], which are coupled together by a two-spin interaction. It is also shown that a four-spin interaction can be included to instead stabilize 3+1D Z2 topological order. We also find dual descriptions of four quantum phase transitions in our model, all of which appear to be discontinuous first-order transitions.

  18. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  19. Assessment of First- and Second-Order Wave-Excitation Load Models for Cylindrical Substructures: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Pereyra, Brandon; Wendt, Fabian; Robertson, Amy; Jonkman, Jason

    2017-03-09

    The hydrodynamic loads on an offshore wind turbine's support structure present unique engineering challenges for offshore wind. Two typical approaches used for modeling these hydrodynamic loads are potential flow (PF) and strip theory (ST), the latter via Morison's equation. This study examines the first- and second-order wave-excitation surge forces on a fixed cylinder in regular waves computed by the PF and ST approaches to (1) verify their numerical implementations in HydroDyn and (2) understand when the ST approach breaks down. The numerical implementation of PF and ST in HydroDyn, a hydrodynamic time-domain solver implemented as a module in the FAST wind turbine engineering tool, was verified by showing the consistency in the first- and second-order force output between the two methods across a range of wave frequencies. ST is known to be invalid at high frequencies, and this study investigates where the ST solution diverges from the PF solution. Regular waves across a range of frequencies were run in HydroDyn for a monopile substructure. As expected, the solutions for the first-order (linear) wave-excitation loads resulting from these regular waves are similar for PF and ST when the diameter of the cylinder is small compared to the length of the waves (generally when the diameter-to-wavelength ratio is less than 0.2). The same finding applies to the solutions for second-order wave-excitation loads, but for much smaller diameter-to-wavelength ratios (based on wavelengths of first-order waves).

  20. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  1. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...

  2. Molecular and electronic structures of oxo-bis(benzene-1,2-dithiolato)chromate(V) monoanions. A combined experimental and density functional study.

    Science.gov (United States)

    Kapre, Ruta; Ray, Kallol; Sylvestre, Isabelle; Weyhermüller, Thomas; DeBeer George, Serena; Neese, Frank; Wieghardt, Karl

    2006-05-01

    Two oxo-bis(benzene-1,2-dithiolato)chromate(V) complexes, namely, [CrO(L(Bu))2]1- and [CrO(L(Me))2]1-, have been synthesized and studied by UV-vis, EPR, magnetic circular dichroism (MCD), and X-ray absorption spectroscopy and by X-ray crystallography; their electro- and magnetochemistries are reported. H2L(Bu) represents the pro-ligand 3,5-di-tert-butylbenzene-1,2-dithiol, and H2L(Me) is the corresponding 4-methyl-benzene-1,2-dithiol. A structural feature of interest for both the complexes is the folding of the dithiolate ligands about the S-S vector providing Cs symmetry to the complexes. Geometry optimizations using all-electron density functional theory with scalar relativistic corrections at the second-order Douglas-Kroll-Hess (DKH2) and zeroth-order regular approximation (ZORA) levels result in excellent agreement with the experimentally determined structures and electronic and S K-edge X-ray absorption spectra. From DFT calculations, the Cs instead of C2v symmetry for the complexes is attributed to the strong S(3p) --> Cr(3d(x2-y2)) pi-donation in Cs geometry providing additional stability to the complexes.

  3. Accelerator system for producing two-component beams for studies of interactive surface effects

    International Nuclear Information System (INIS)

    Kaminsky, M.; Das, S.K.; Ekern, R.; Hess, D.C.

    1977-01-01

    For studies of interactive surface effects caused by the simultaneous bombardment of targets by both chemically active and inactive ion species (e.g., D⁺ and He⁺, respectively) a two-beam-component accelerator facility was placed in operation. One component, consisting of light ions (e.g., H, D, He) is accelerated by a 2-MV Van de Graaff accelerator which provides a mass analyzed and focussed beam for the energy range from approximately 100 keV to 2 MeV (for singly charged ions). The other component is a beam of light ions in the energy range from approximately 10 keV to 100 keV. This is furnished by a 100-kV dc accelerator system which provides a mass analyzed focussed beam. This beam is guided into the beam line of the Van de Graaff accelerator electrostatically, and with the aid of beam steerers it is made to be co-axial with the Van de Graaff generated beam. The angle of incidence hereby becomes a free parameter for the interaction of the mixed beams with a surface. For each beam component, current densities of 650 μA cm⁻² on target can readily be obtained. In order to reduce carbon contamination of the irradiated targets significantly, stainless steel beam lines have been used together with a combination of turbomolecular pumps and ion-sublimation pumps. A total pressure of 2 to 3 × 10⁻⁸ torr in the beam lines and of 2 × 10⁻⁹ torr in the target chamber can be obtained readily. Experimental results on the surface damage of Ni bombarded simultaneously with He⁺ and D⁺ ions are presented. The importance of such studies of interactive surface effects for the controlled thermonuclear fusion program is discussed

  4. Statistical wave function

    International Nuclear Information System (INIS)

    Levine, R.D.

    1988-01-01

    Statistical considerations are applied to quantum mechanical amplitudes. The physical motivation is the progress in the spectroscopy of highly excited states. The corresponding wave functions are strongly mixed. In terms of a basis set of eigenfunctions of a zeroth-order Hamiltonian with good quantum numbers, such wave functions have contributions from many basis states. The vector x is considered whose components are the expansion coefficients in that basis. Any amplitude can be written as a†·x. It is argued that the components of x, and hence other amplitudes, can be regarded as random variables. The maximum entropy formalism is applied to determine the corresponding distribution function. Two amplitudes a†·x and b†·x are independently distributed if b†·a = 0. It is suggested that the theory of quantal measurements implies that, in general, one can only determine the distribution of amplitudes and not the amplitudes themselves

  5. A two-component dark matter model with real singlet scalars ...

    Indian Academy of Sciences (India)

    Theoretical framework. In the present work, the dark matter candidate has two components S and S′ both of ... The scalar sector potential (for Higgs and two real singlet scalars) in this framework can then be written .... In this work we obtain the allowed values of model parameters (δ_2, δ′_2, M_S and M′_S) using three direct ...

  6. Magnetic ordering in arrays of one-dimensional nanoparticle chains

    International Nuclear Information System (INIS)

    Serantes, D; Baldomir, D; Pereiro, M; Hernando, B; Prida, V M; Sanchez Llamazares, J L; Zhukov, A; Ilyn, M; Gonzalez, J

    2009-01-01

    The magnetic order in parallel-aligned one-dimensional (1D) chains of magnetic nanoparticles is studied using a Monte Carlo technique. If the easy anisotropy axes are collinear along the chains a macroscopic mean-field approach indicates antiferromagnetic (AFM) order even when no interparticle interactions are taken into account, which evidences that a mean-field treatment is inadequate for the study of the magnetic order in these highly anisotropic systems. From the direct microscopic analysis of the evolution of the magnetic moments, we observe spontaneous intra-chain ferromagnetic (FM)-type and inter-chain AFM-type ordering at low temperatures (although not completely regular) for the easy-axes collinear case, whereas a random distribution of the anisotropy axes leads to a sort of intra-chain AFM arrangement with no inter-chain regular order. When the magnetic anisotropy is neglected a perfectly regular intra-chain FM-like order is attained. Therefore it is shown that the magnetic anisotropy, and particularly the spatial distribution of the easy axes, is a key parameter governing the magnetic ordering type of 1D-nanoparticle chains.
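
The sketch below gives a rough flavour of the Monte Carlo technique described above, reduced to a single chain of classical unit moments with collinear easy axes along the chain axis and a nearest-neighbour dipolar coupling; the reduced couplings, the single chain, and the neighbour truncation are simplifying assumptions, not the authors' setup.

```python
# Hedged sketch: Metropolis relaxation of a 1-D chain of anisotropic magnetic moments.
import numpy as np

rng = np.random.default_rng(2)
N, K, D = 30, 1.0, 0.5        # chain length, anisotropy and dipolar strengths (reduced units)

def energy(m):
    """m: (N, 3) unit moments; the chain axis is z."""
    e_anis = -K * np.sum(m[:, 2] ** 2)                        # uniaxial easy-axis term
    dot = np.sum(m[:-1] * m[1:], axis=1)                      # m_i . m_{i+1}
    e_dip = D * np.sum(dot - 3.0 * m[:-1, 2] * m[1:, 2])      # nearest-neighbour dipolar term
    return e_anis + e_dip

def random_unit():
    v = rng.standard_normal(3)
    return v / np.linalg.norm(v)

m = np.array([random_unit() for _ in range(N)])
E = energy(m)
for T in (1.0, 0.5, 0.2, 0.1, 0.05):                          # simple annealing schedule
    for _ in range(5000):
        i = rng.integers(N)
        trial = m.copy()
        trial[i] = random_unit()
        dE = energy(trial) - E
        if dE < 0 or rng.random() < np.exp(-dE / T):
            m, E = trial, E + dE

# A magnitude close to 1 signals intra-chain ferromagnetic-type ordering along the
# chain axis; domain walls can survive such a short run and lower the value.
print("mean moment along chain axis:", round(float(np.mean(m[:, 2])), 2))
```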

  7. Strong Bisimilarity and Regularity of Basic Parallel Processes is PSPACE-Hard

    DEFF Research Database (Denmark)

    Srba, Jirí

    2002-01-01

    We show that the problem of checking whether two processes definable in the syntax of Basic Parallel Processes (BPP) are strongly bisimilar is PSPACE-hard. We also demonstrate that there is a polynomial time reduction from the strong bisimilarity checking problem of regular BPP to the strong regularity (finiteness) checking of BPP. This implies that strong regularity of BPP is also PSPACE-hard.

  8. Globals of Completely Regular Monoids

    Institute of Scientific and Technical Information of China (English)

    Wu Qian-qian; Gan Ai-ping; Du Xian-kun

    2015-01-01

    An element of a semigroup S is called irreducible if it cannot be expressed as a product of two elements in S both distinct from itself. In this paper we show that the class C of all completely regular monoids with irreducible identity elements satisfies the strong isomorphism property and so it is globally determined.

  9. Reflections on Teaching and Research: Two Inseparable Components in Higher Education

    Science.gov (United States)

    Chan Fong Yee, Fanny

    2014-01-01

    Teaching and research are two inseparable components in higher education. There are continuous debates about the relationship between the two. Does good teaching always lead to good research, and vice versa? This paper critically examines the impact of current policy on the two academic practices and discusses how it shapes one's professional…

  10. Relation between the pole and the minimally subtracted mass in dimensional regularization and dimensional reduction to three-loop order

    Energy Technology Data Exchange (ETDEWEB)

    Marquard, P.; Mihaila, L.; Steinhauser, M. [Karlsruhe Univ. (T.H.) (Germany). Inst. fuer Theoretische Teilchenphysik; Piclum, J.H. [Karlsruhe Univ. (T.H.) (Germany). Inst. fuer Theoretische Teilchenphysik]|[Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik

    2007-02-15

    We compute the relation between the pole quark mass and the minimally subtracted quark mass in the framework of QCD applying dimensional reduction as a regularization scheme. Special emphasis is put on the evanescent couplings and the renormalization of the ε-scalar mass. As a by-product we obtain the three-loop on-shell renormalization constants Z_m^OS and Z_2^OS in dimensional regularization and thus provide the first independent check of the analytical results computed several years ago. (orig.)

  11. Higher order Stark effect and transition probabilities on hyperfine structure components of hydrogen like atoms

    Energy Technology Data Exchange (ETDEWEB)

    Pal' chikov, V.G. [National Research Institute for Physical-Technical and Radiotechnical Measurements - VNIIFTRI (Russian Federation)], E-mail: vitpal@mail.ru

    2000-08-15

    A quantum-electrodynamical (QED) perturbation theory is developed for hydrogen and hydrogen-like atomic systems with interaction between bound electrons and radiative field being treated as the perturbation. The dependence of the perturbed energy of levels on hyperfine structure (hfs) effects and on the higher-order Stark effect is investigated. Numerical results have been obtained for the transition probability between the hfs components of hydrogen-like bismuth.

  12. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization ...

  13. arXiv Describing dynamical fluctuations and genuine correlations by Weibull regularity

    CERN Document Server

    Nayak, Ranjit K.; Sarkisyan-Grinbaum, Edward K.; Tasevsky, Marek

    The Weibull parametrization of the multiplicity distribution is used to describe the multidimensional local fluctuations and genuine multiparticle correlations measured by OPAL in the large-statistics e⁺e⁻ → Z⁰ → hadrons sample. The data are found to be well reproduced by the Weibull model up to higher orders. The Weibull predictions are compared to the predictions by the two other models, namely by the negative binomial and modified negative binomial distributions which mostly failed to fit the data. The Weibull regularity, which is found to reproduce the multiplicity distributions along with the genuine correlations, looks to be the optimal model to describe the multiparticle production process.

  14. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
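
The core building block mentioned above can be illustrated with a short sketch: an FFT-based Poisson solve for the streamfunction on a fully periodic 2-D grid, with a Gaussian spectral filter standing in for the paper's regularized Green's-function kernels. The actual method uses higher-order kernels and mixed open/periodic domains, so this is only an assumed, simplified stand-in.

```python
# Hedged sketch: spectral Poisson solve lap(psi) = -omega on a periodic square grid.
import numpy as np

def poisson_fft(omega, L=2 * np.pi, sigma=0.0):
    """Return streamfunction psi and velocity (u, v) from vorticity omega."""
    n = omega.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                # avoid division by zero (mean mode)
    w_hat = np.fft.fft2(omega)
    filt = np.exp(-0.5 * sigma**2 * k2)           # Gaussian regularization / filter
    psi_hat = filt * w_hat / k2
    psi_hat[0, 0] = 0.0
    psi = np.real(np.fft.ifft2(psi_hat))
    u = np.real(np.fft.ifft2(1j * ky * psi_hat))      # u =  d(psi)/dy
    v = np.real(np.fft.ifft2(-1j * kx * psi_hat))     # v = -d(psi)/dx
    return psi, (u, v)

# Usage: a Taylor-Green-like vortex; with sigma = 0 the solve is exact on the grid
n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
omega = 2 * np.sin(X) * np.sin(Y)
psi, (u, v) = poisson_fft(omega)
print(np.allclose(psi, np.sin(X) * np.sin(Y), atol=1e-10))   # True: lap(psi) = -omega
```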

  15. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

    Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time-consuming and prone to overfitting. In this paper, we propose a GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters inspired by ensemble manifold regularization. Factorization matrices and linear combination coefficients of graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
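
For context, the sketch below implements the single-graph GrNMF baseline (the multiplicative updates of Cai et al.) that the record above generalizes; the multi-graph combination learned by MultiGrNMF is omitted, and the toy data and k-NN graph are illustrative assumptions.

```python
# Hedged sketch: graph-regularized NMF, objective ||X - U V^T||^2 + lam * tr(V^T L V).
import numpy as np

def grnmf(X, W, k, lam=1.0, iters=200, eps=1e-9):
    """X: (m, n) nonnegative data; W: (n, n) symmetric nonnegative affinity graph."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    D = np.diag(W.sum(axis=1))                    # degree matrix (L = D - W)
    U = rng.random((m, k)) + eps
    V = rng.random((n, k)) + eps
    for _ in range(iters):                        # standard multiplicative updates
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
    return U, V

# Toy usage: two feature-disjoint clusters of 20 samples each
rng = np.random.default_rng(1)
X = np.zeros((5, 40))
X[:3, :20] = rng.normal(3.0, 0.1, (3, 20))        # class A strong in features 0-2
X[3:, 20:] = rng.normal(3.0, 0.1, (2, 20))        # class B strong in features 3-4
X = np.abs(X + 0.1)
d = np.linalg.norm(X.T[:, None, :] - X.T[None, :, :], axis=2)
W = (d < np.sort(d, axis=1)[:, [6]]).astype(float)   # crude k-nearest-neighbour graph
W = np.maximum(W, W.T); np.fill_diagonal(W, 0.0)
U, V = grnmf(X, W, k=2, lam=0.5)
print(np.argmax(V, axis=1))                       # component labels: first 20 vs last 20
```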

  16. Regular and context-free nominal traces

    DEFF Research Database (Denmark)

    Degano, Pierpaolo; Ferrari, Gian-Luigi; Mezzetti, Gianluca

    2017-01-01

    Two kinds of automata are presented, for recognising new classes of regular and context-free nominal languages. We compare their expressive power with analogous proposals in the literature, showing that they express novel classes of languages. Although many properties of classical languages hold ...

  17. Two-component scattering model and the electron density spectrum

    Science.gov (United States)

    Zhou, A. Z.; Tan, J. Y.; Esamdin, A.; Wu, X. J.

    2010-02-01

    In this paper, we discuss a rigorous treatment of the refractive scintillation caused by a two-component interstellar scattering medium and a Kolmogorov form of density spectrum. It is assumed that the interstellar scattering medium is composed of a thin-screen interstellar medium (ISM) and an extended interstellar medium. We consider the case that the scattering of the thin screen concentrates in a thin layer represented by a δ function distribution and that the scattering density of the extended irregular medium satisfies the Gaussian distribution. We investigate and develop equations for the flux density structure function corresponding to this two-component ISM geometry in the scattering density distribution and compare our result with the observations. We conclude that the refractive scintillation caused by this two-component ISM scattering gives a more satisfactory explanation for the observed flux density variation than does the single extended medium model. The level of refractive scintillation is strongly sensitive to the distribution of scattering material along the line of sight (LOS). The theoretical modulation indices are comparatively less sensitive to the scattering strength of the thin-screen medium, but they critically depend on the distance from the observer to the thin screen. The logarithmic slope of the structure function is sensitive to the scattering strength of the thin-screen medium, but is relatively insensitive to the thin-screen location. Therefore, the proposed model can be applied to interpret the structure functions of flux density observed in the pulsars PSR B2111+46 and PSR B0136+57. The result suggests that the medium consists of a discontinuous distribution of plasma turbulence embedded in the interstellar medium. Thus our work provides some insight into the distribution of the scattering along the LOS to the pulsars PSR B2111+46 and PSR B0136+57.

  18. Dynamic simulation of natural convection bypass two-circuit cycle refrigerator-freezer and its application Part I: Component models

    International Nuclear Information System (INIS)

    Ding Guoliang; Zhang Chunlu; Lu Zhili

    2004-01-01

    In order to reduce greenhouse gas emissions, efficient household refrigerator/freezers (RFs) are required. Bypass two-circuit cycle RFs with one compressor have been shown to be more efficient than two-evaporator-in-series cycle RFs. In order to study the characteristics and improve the design of bypass two-circuit cycle RFs, a dynamic model is developed in this paper. In Part I, the mathematical models of all components are presented, considering not only the accuracy of the models but also the stability and speed of their numerical solution. An efficiency model that requires a single calorimeter data point at the standard test condition is employed for the compressor. A multi-zone model is employed for the condenser and the evaporator, with the wall thermal capacity accounted for by an effective metal method. An approximate integral analytic model is employed for the adiabatic capillary tube, and the effective inlet enthalpy method is used to convert the non-adiabatic capillary tube into an equivalent adiabatic one. A z-transfer function model is employed for the cabinet load calculation

  19. Electron acoustic-Langmuir solitons in a two-component electron plasma

    Science.gov (United States)

    McKenzie, J. F.

    2003-04-01

    We investigate the conditions under which ‘high-frequency’ electron acoustic Langmuir solitons can be constructed in a plasma consisting of protons and two electron populations: one ‘cold’ and the other ‘hot’. Conservation of total momentum can be cast as a structure equation either for the ‘cold’ or ‘hot’ electron flow speed in a stationary wave using the Bernoulli energy equations for each species. The linearized version of the governing equations gives the dispersion equation for the stationary waves of the system, from which follows the necessary but not sufficient conditions for the existence of soliton structures; namely that the wave speed must be less than the acoustic speed of the ‘hot’ electron component and greater than the low-frequency compound acoustic speed of the two electron populations. In this wave speed regime linear waves are ‘evanescent’, giving rise to the exponential growth or decay, which readily can give rise to non-linear effects that may balance dispersion and allow soliton formation. In general the ‘hot’ component must be more abundant than the ‘cold’ one and the wave is characterized by a compression of the ‘cold’ component and an expansion in the ‘hot’ component necessitating a potential dip. Both components are driven towards their sonic points; the ‘cold’ from above and the ‘hot’ from below. It is this transonic feature which limits the amplitude of the soliton. If the ‘hot’ component is not sufficiently abundant the window for soliton formation shrinks to a narrow speed regime which is quasi-transonic relative to the ‘hot’ electron acoustic speed, and it is shown that smooth solitons cannot be constructed. In the special case of a very cold electron population (i.e. ‘highly supersonic’) and the other population being very hot (i.e. ‘highly subsonic’) with adiabatic index 2, the structure equation simplifies and can be integrated in terms of elementary

  20. Regularity theory for quasilinear elliptic systems and Monge—Ampère equations in two dimensions

    CERN Document Server

    Schulz, Friedmar

    1990-01-01

    These lecture notes have been written as an introduction to the characteristic theory for two-dimensional Monge-Ampère equations, a theory largely developed by H. Lewy and E. Heinz which has never been presented in book form. An exposition of the Heinz-Lewy theory requires auxiliary material which can be found in various monographs, but which is presented here, in part because the focus is different, and also because these notes have an introductory character. Self-contained introductions to the regularity theory of elliptic systems, the theory of pseudoanalytic functions and the theory of conformal mappings are included. These notes grew out of a seminar given at the University of Kentucky in the fall of 1988 and are intended for graduate students and researchers interested in this area.

  1. Fermion-number violation in regularizations that preserve fermion-number symmetry

    Science.gov (United States)

    Golterman, Maarten; Shamir, Yigal

    2003-01-01

    There exist both continuum and lattice regularizations of gauge theories with fermions which preserve chiral U(1) invariance (“fermion number”). Such regularizations necessarily break gauge invariance but, in a covariant gauge, one recovers gauge invariance to all orders in perturbation theory by including suitable counterterms. At the nonperturbative level, an apparent conflict then arises between the chiral U(1) symmetry of the regularized theory and the existence of ’t Hooft vertices in the renormalized theory. The only possible resolution of the paradox is that the chiral U(1) symmetry is broken spontaneously in the enlarged Hilbert space of the covariantly gauge-fixed theory. The corresponding Goldstone pole is unphysical. The theory must therefore be defined by introducing a small fermion-mass term that breaks explicitly the chiral U(1) invariance and is sent to zero after the infinite-volume limit has been taken. Using this careful definition (and a lattice regularization) for the calculation of correlation functions in the one-instanton sector, we show that the ’t Hooft vertices are recovered as expected.

  2. REGULARIZED D-BAR METHOD FOR THE INVERSE CONDUCTIVITY PROBLEM

    DEFF Research Database (Denmark)

    Knudsen, Kim; Lassas, Matti; Mueller, Jennifer

    2009-01-01

    A strategy for regularizing the inversion procedure for the two-dimensional D-bar reconstruction algorithm based on the global uniqueness proof of Nachman [Ann. Math. 143 (1996)] for the ill-posed inverse conductivity problem is presented. The strategy utilizes truncation of the boundary integral...... the convergence of the reconstructed conductivity to the true conductivity as the noise level tends to zero. The results provide a link between two traditions of inverse problems research: theory of regularization and inversion methods based on complex geometrical optics. Also, the procedure is a novel...

  3. Updating ARI Educational Benefits Usage Data Bases for Army Regular, Reserve, and Guard: 2005 - 2006

    National Research Council Canada - National Science Library

    Young, Winnie

    2007-01-01

    .... For the Regular component, the report includes tabulations of program participation and benefit usage, type of educational program entered, and time between separation and start of education benefits...

  4. RES: Regularized Stochastic BFGS Algorithm

    Science.gov (United States)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
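
The following is a heavily simplified sketch in the spirit of a regularized stochastic BFGS iteration, not the exact RES update: stochastic gradients, curvature pairs evaluated on the same minibatch, a delta-modified secant to keep curvature estimates positive, and an identity-regularized inverse-Hessian approximation in the step. The test problem, step sizes, and constants are assumptions for illustration only.

```python
# Hedged sketch: regularized stochastic BFGS-style steps on a stochastic least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000
A = rng.standard_normal((n, d)); x_true = rng.standard_normal(d)
y = A @ x_true + 0.1 * rng.standard_normal(n)

def stoch_grad(x, idx):
    Ai, yi = A[idx], y[idx]
    return Ai.T @ (Ai @ x - yi) / len(idx)

x = np.zeros(d)
H = np.eye(d)                                         # inverse-Hessian approximation
delta, gamma, lr, batch = 1e-2, 1e-2, 0.05, 32
for t in range(500):
    idx = rng.integers(n, size=batch)
    g = stoch_grad(x, idx)
    step = -lr * (H + gamma * np.eye(d)) @ g          # identity-regularized quasi-Newton step
    x_new = x + step
    r = stoch_grad(x_new, idx) - g                    # curvature pair on the same minibatch
    s = x_new - x
    r_hat = r - delta * s                             # modified secant (biases curvature positive)
    sr = s @ r_hat
    if sr > 1e-12:                                    # standard BFGS inverse update
        rho = 1.0 / sr
        V = np.eye(d) - rho * np.outer(s, r_hat)
        H = V @ H @ V.T + rho * np.outer(s, s)
    x = x_new

print("error vs true solution:", round(np.linalg.norm(x - x_true), 3))
```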

  5. A critical analysis of some popular methods for the discretisation of the gradient operator in finite volume methods

    Science.gov (United States)

    Syrakos, Alexandros; Varchanis, Stylianos; Dimakopoulos, Yannis; Goulas, Apostolos; Tsamopoulos, John

    2017-12-01

    Finite volume methods (FVMs) constitute a popular class of methods for the numerical simulation of fluid flows. Among the various components of these methods, the discretisation of the gradient operator has received less attention despite its fundamental importance with regards to the accuracy of the FVM. The most popular gradient schemes are the divergence theorem (DT) (or Green-Gauss) scheme and the least-squares (LS) scheme. Both are widely believed to be second-order accurate, but the present study shows that in fact the common variant of the DT gradient is second-order accurate only on structured meshes whereas it is zeroth-order accurate on general unstructured meshes, and the LS gradient is second-order and first-order accurate, respectively. This is explained through a theoretical analysis and is confirmed by numerical tests. The schemes are then used within a FVM to solve a simple diffusion equation on unstructured grids generated by several methods; the results reveal that the zeroth-order accuracy of the DT gradient is inherited by the FVM as a whole, and the discretisation error does not decrease with grid refinement. On the other hand, use of the LS gradient leads to second-order accurate results, as does the use of alternative, consistent, DT gradient schemes, including a new iterative scheme that makes the common DT gradient consistent at almost no extra cost. The numerical tests are performed using both an in-house code and the popular public domain partial differential equation solver OpenFOAM.
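
To make the least-squares (LS) gradient discussed above concrete, the sketch below reconstructs a cell-centre gradient from neighbour values by a small least-squares fit; the scattered "neighbour" points stand in for the cell-centre geometry of an unstructured mesh and are invented for the example. Consistent with the record, the LS gradient is exact for a linear field.

```python
# Hedged sketch: least-squares gradient reconstruction from neighbouring values.
import numpy as np

def ls_gradient(xc, phic, xn, phin):
    """LS gradient at centre xc from neighbour centres xn (k, 2) and values phin (k,)."""
    dX = xn - xc                       # displacement vectors to neighbours
    dphi = phin - phic                 # value differences
    g, *_ = np.linalg.lstsq(dX, dphi, rcond=None)
    return g                           # approximate grad(phi) at xc

# Usage: phi(x, y) = 3x + 2y has exact gradient (3, 2); the LS fit recovers it exactly
rng = np.random.default_rng(0)
xc = np.array([0.3, 0.7])
xn = xc + 0.1 * rng.standard_normal((6, 2))          # 6 irregular "neighbour" centres
phi = lambda p: 3.0 * p[..., 0] + 2.0 * p[..., 1]
print(np.round(ls_gradient(xc, phi(xc), xn, phi(xn)), 6))   # -> [3. 2.]
```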

  6. Fast regularizing sequential subspace optimization in Banach spaces

    International Nuclear Information System (INIS)

    Schöpfer, F; Schuster, T

    2009-01-01

    We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in case only noisy data are available. To this end we modify recently developed sequential subspace optimization methods in such a way that the therein employed Bregman projections onto hyperplanes are replaced by Bregman projections onto stripes whose width is in the order of the noise level

  7. A two-dimensional regularization algorithm for density profile evaluation from broadband reflectometry

    International Nuclear Information System (INIS)

    Nunes, F.; Varela, P.; Silva, A.; Manso, M.; Santos, J.; Nunes, I.; Serra, F.; Kurzan, B.; Suttrop, W.

    1997-01-01

    Broadband reflectometry is a current technique that uses the round-trip group delays of reflected frequency-swept waves to measure density profiles of fusion plasmas. The main factor that may limit the accuracy of the reconstructed profiles is the interference of the probing waves with the plasma density fluctuations: plasma turbulence leads to random phase variations and magnetohydrodynamic activity produces mainly strong amplitude and phase modulations. Both effects cause the decrease, and eventually loss, of signal at some frequencies. Several data processing techniques can be applied to filter and/or interpolate noisy group delay data obtained from turbulent plasmas with a single frequency sweep. Here, we propose a more powerful algorithm performing two-dimensional regularization (in space and time) of data provided by multiple consecutive frequency sweeps, which leads to density profiles with improved accuracy. The new method is described and its application to simulated data corrupted by noise and missing data is considered. It is shown that the algorithm improves the identification of slowly varying plasma density perturbations by attenuating the effect of fast fluctuations and noise contained in experimental data. First results obtained with this method in the ASDEX Upgrade tokamak are presented. copyright 1997 American Institute of Physics

  8. A binary-decision-diagram-based two-bit arithmetic logic unit on a GaAs-based regular nanowire network with hexagonal topology

    International Nuclear Information System (INIS)

    Zhao Hongquan; Kasai, Seiya; Shiratori, Yuta; Hashizume, Tamotsu

    2009-01-01

    A two-bit arithmetic logic unit (ALU) was successfully fabricated on a GaAs-based regular nanowire network with hexagonal topology. This fundamental building block of central processing units can be implemented on a regular nanowire network structure with simple circuit architecture based on graphical representation of logic functions using a binary decision diagram and topology control of the graph. The four-instruction ALU was designed by integrating subgraphs representing each instruction, and the circuitry was implemented by transferring the logical graph structure to a GaAs-based nanowire network formed by electron beam lithography and wet chemical etching. A path switching function was implemented in nodes by Schottky wrap gate control of nanowires. The fabricated circuit integrating 32 node devices exhibits the correct output waveforms at room temperature allowing for threshold voltage variation.
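
    As background to how a binary-decision-diagram circuit computes, the toy sketch below (purely illustrative, not the fabricated device's netlist) evaluates a BDD by path switching: at each node the input bit selects one of two outgoing branches, mirroring how the Schottky wrap gates steer the signal along one of two nanowire branches of the hexagonal network.

    # node: ('var', name, low_branch, high_branch) or ('leaf', value)
    def bdd_eval(node, inputs):
        while node[0] != 'leaf':
            _, name, low, high = node
            node = high if inputs[name] else low   # gate-controlled path switching
        return node[1]

    ONE, ZERO = ('leaf', 1), ('leaf', 0)
    AND_AB = ('var', 'a', ZERO, ('var', 'b', ZERO, ONE))   # hypothetical 1-bit AND sub-graph

    assert bdd_eval(AND_AB, {'a': 1, 'b': 1}) == 1
    assert bdd_eval(AND_AB, {'a': 1, 'b': 0}) == 0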

  9. Holographic entanglement entropy in two-order insulator/superconductor transitions

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Yan, E-mail: yanpengphy@163.com; Liu, Guohua

    2017-04-10

    We study a holographic superconductor model with two orders in the five-dimensional AdS soliton background away from the probe limit. We disclose properties of phase transitions mostly from the holographic topological entanglement entropy approach. Our results show that the entanglement entropy is useful in investigating transitions in this general model and, in particular, there is a new type of first-order phase transition in the insulator/superconductor system. We also give some qualitative understanding and obtain the analytical condition for this first-order phase transition to occur. As a summary, we draw the complete phase diagram representing effects of the scalar charge on phase transitions.

  10. Oscillation of two-dimensional linear second-order differential systems

    International Nuclear Information System (INIS)

    Kwong, M.K.; Kaper, H.G.

    1985-01-01

    This article is concerned with the oscillatory behavior at infinity of the solution y: [a, ∞) → R² of a system of two second-order differential equations, y''(t) + Q(t)y(t) = 0, t ∈ [a, ∞); Q is a continuous matrix-valued function on [a, ∞) whose values are real symmetric matrices of order 2. It is shown that the solution is oscillatory at infinity if the largest eigenvalue of the matrix ∫_a^t Q(s) ds tends to infinity as t → ∞. This proves a conjecture of D. Hinton and R.T. Lewis for the two-dimensional case. Furthermore, it is shown that considerably weaker forms of the condition still suffice for oscillatory behavior at infinity. 7 references

  11. New approach based on fuzzy logic and principal component analysis for the classification of two-dimensional maps in health and disease. Application to lymphomas.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Righetti, Pier Giorgio; Antonucci, Francesca

    2003-07-04

    Two-dimensional (2D) electrophoresis is the most widespread technique for the separation of proteins in biological systems. This technique produces 2D maps of high complexity, which creates difficulties in the comparison of different samples. The method proposed in this paper for the comparison of different 2D maps can be summarised in four steps: (a) digitalisation of the image; (b) fuzzyfication of the digitalised map in order to consider the variability of the two-dimensional electrophoretic separation; (c) decoding by principal component analysis of the previously obtained fuzzy maps, in order to reduce the system dimensionality; (d) classification analysis (linear discriminant analysis), in order to separate the samples contained in the dataset according to the classes present in said dataset. This method was applied to a dataset constituted by eight samples: four belonging to healthy human lymph-nodes and four deriving from non-Hodgkin lymphomas. The amount of fuzzyfication of the original map is governed by the sigma parameter. The larger the value, the fuzzier the resulting transformed map. The effect of the fuzzyfication parameter was investigated, the optimal results being obtained for sigma = 1.75 and 2.25. Principal component analysis and linear discriminant analysis allowed the separation of the two classes of samples without any misclassification.
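
    A compact sketch of the four-step pipeline is given below, under the assumption that the digitised maps are equally sized 2-D intensity arrays; fuzzyfication is approximated here by a Gaussian blur with the sigma parameter, and the library calls are standard SciPy/scikit-learn routines rather than the authors' code. With only eight maps, a leave-one-out check of the classifier would be the natural validation.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def classify_maps(maps, labels, sigma=1.75, n_components=3):
        fuzzy = [gaussian_filter(m.astype(float), sigma=sigma) for m in maps]   # step (b)
        X = np.stack([f.ravel() for f in fuzzy])                                # one row per map
        scores = PCA(n_components=n_components).fit_transform(X)                # step (c)
        lda = LinearDiscriminantAnalysis().fit(scores, labels)                  # step (d)
        return lda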

  12. 77 FR 58579 - Certain Two-Way Global Satellite Communication Devices, System and Components Thereof...

    Science.gov (United States)

    2012-09-21

    ... Communication Devices, System and Components Thereof; Institution of Investigation Pursuant to 19 U.S.C. 1337... certain two-way global satellite communication devices, system and components thereof that infringe one or... within the United States after importation of certain two-way global satellite communication devices...

  13. Asymptotic analysis of a pile-up of regular edge dislocation walls

    KAUST Repository

    Hall, Cameron L.

    2011-12-01

    The idealised problem of a pile-up of regular dislocation walls (that is, of planes each containing an infinite number of parallel, identical and equally spaced dislocations) was presented by Roy et al. [A. Roy, R.H.J. Peerlings, M.G.D. Geers, Y. Kasyanyuk, Materials Science and Engineering A 486 (2008) 653-661] as a prototype for understanding the importance of discrete dislocation interactions in dislocation-based plasticity models. They noted that analytic solutions for the dislocation wall density are available for a pile-up of regular screw dislocation walls, but that numerical methods seem to be necessary for investigating regular edge dislocation walls. In this paper, we use the techniques of discrete-to-continuum asymptotic analysis to obtain a detailed description of a pile-up of regular edge dislocation walls. To leading order, we find that the dislocation wall density is governed by a simple differential equation and that boundary layers are present at both ends of the pile-up. © 2011 Elsevier B.V.

  15. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information.

  16. A beam intensity monitor for the evaluation beamline for soft x-ray optical elements

    International Nuclear Information System (INIS)

    Imazono, Takashi; Moriya, Naoji; Harada, Yoshihisa; Sano, Kazuo; Koike, Masato

    2012-01-01

    The Evaluation Beamline for Soft X-Ray Optical Elements (BL-11) at the SR Center of Ritsumeikan University has been operated to measure the wavelength and angular characteristics of soft x-ray optical components in the wavelength range of 0.65-25 nm using a reflecto-diffractometer (RD). The beam intensity monitor previously installed in BL-11 observed only the signal of the zeroth-order light. For a more accurate evaluation of the performance of optical components, a new beam intensity monitor that measures the intensity of the first-order light from the monochromator of BL-11 has been developed and installed just in front of the RD. A strong positive correlation between the signal of the beam monitor and that of a detector mounted in the RD is demonstrated, showing that the intensity of the first-order light can be monitored in real time.

  17. Mechatronic modeling and simulation using bond graphs

    CERN Document Server

    Das, Shuvra

    2009-01-01

    Contents: Introduction to Mechatronics and System Modeling; What Is Mechatronics?; What Is a System and Why Model Systems?; Mathematical Modeling Techniques Used in Practice; Software; Bond Graphs: What Are They?; Engineering Systems; Ports; Generalized Variables; Bond Graphs; Basic Components in Systems; A Brief Note about Bond Graph Power Directions; Summary of Bond Direction Rules; Drawing Bond Graphs for Simple Systems: Electrical and Mechanical; Simplification Rules for Junction Structure; Drawing Bond Graphs for Electrical Systems; Drawing Bond Graphs for Mechanical Systems; Causality; Drawing Bond Graphs for Hydraulic and Electronic Components and Systems; Some Basic Properties and Concepts for Fluids; Bond Graph Model of Hydraulic Systems; Electronic Systems; Deriving System Equations from Bond Graphs; System Variables; Deriving System Equations; Tackling Differential Causality; Algebraic Loops; Solution of Model Equations and Their Interpretation; Zeroth Order Systems; First Order Systems; Second Order System; Transfer Functions and Frequency Responses; Numerical Solution ...

  18. Two-Dimensional One-Component Plasma on Flamm's Paraboloid

    Science.gov (United States)

    Fantoni, Riccardo; Téllez, Gabriel

    2008-11-01

    We study the classical non-relativistic two-dimensional one-component plasma at Coulomb coupling Γ=2 on the Riemannian surface known as Flamm's paraboloid which is obtained from the spatial part of the Schwarzschild metric. At this special value of the coupling constant, the statistical mechanics of the system are exactly solvable analytically. The Helmholtz free energy asymptotic expansion for the large system has been found. The density of the plasma, in the thermodynamic limit, has been carefully studied in various situations.

  19. Viscous Growth in Spinodal Decomposition of the Two-component Lennard-Jones Model in Two Dimensions

    DEFF Research Database (Denmark)

    Laradji, M.; Toxvaerd, S.; Mouritsen, Ole G.

    1997-01-01

    The dynamics of phase separation of a two-component Lennard-Jones model in three dimensions is investigated by means of large scale molecular dynamics simulation. A systematic study over a wide range of quench temperatures within the coexistence region shows that the binary system reaches...

  20. New methods for the characterization of pyrocarbon; The two component model of pyrocarbon

    Energy Technology Data Exchange (ETDEWEB)

    Luhleich, H.; Sutterlin, L.; Hoven, H.; Nickel, H.

    1972-04-19

    In the first part, new experiments to clarify the origin of different pyrocarbon components are described. Three new methods (plasma oxidation, wet oxidation, ultrasonic method) are presented to expose the carbon-black-like component in the pyrocarbon deposited in fluidized beds. In the second part, a two component model of pyrocarbon is proposed and illustrated by examples.

  1. Comparison of Grocery Purchase Patterns of Diet Soda Buyers to Those of Regular Soda Buyers

    OpenAIRE

    James, Binkley; Golub, Alla A.

    2007-01-01

    The ultimate effect of regular and diet carbonated soft drinks on energy intakes depends on possible relations with other dietary components. With this motivation, this study compared grocery purchase patterns of regular and diet soft drink consumers using a large sample of US single person households. We tested for differences in food spending shares allocated to 43 food categories chosen mainly for their desirable/undesirable nutritional properties. We also investigated whether differences ...

  2. Representing Sheared Convective Boundary Layer by Zeroth- and First-Order-Jump Mixed-Layer Models: Large-Eddy Simulation Verification

    NARCIS (Netherlands)

    Pino, D.; Vilà-Guerau de Arellano, J.; Kim, S.W.

    2006-01-01

    Dry convective boundary layers characterized by a significant wind shear on the surface and at the inversion are studied by means of the mixed-layer theory. Two different representations of the entrainment zone, each of which has a different closure of the entrainment heat flux, are considered. The

  3. PID control of second-order systems with hysteresis

    NARCIS (Netherlands)

    Jayawardhana, Bayu; Logemann, Hartmut; Ryan, Eugene P.

    2008-01-01

    The efficacy of proportional, integral and derivative (PID) control for set point regulation and disturbance rejection is investigated in a context of second-order systems with hysteretic components. Two basic structures are studied: in the first, the hysteretic component resides (internally) in the

  4. Flexible Lithium-Ion Fiber Battery by the Regular Stacking of Two-Dimensional Titanium Oxide Nanosheets Hybridized with Reduced Graphene Oxide.

    Science.gov (United States)

    Hoshide, Tatsumasa; Zheng, Yuanchuan; Hou, Junyu; Wang, Zhiqiang; Li, Qingwen; Zhao, Zhigang; Ma, Renzhi; Sasaki, Takayoshi; Geng, Fengxia

    2017-06-14

    Increasing interest has recently been devoted to developing small, rapid, and portable electronic devices; thus, it is becoming critically important to provide matching light and flexible energy-storage systems to power them. To this end, in contrast to traditional planar sandwiched structures, which are inevitably bulky, heavy, and rigid, linear fiber-shaped lithium-ion batteries (LIBs) have become increasingly important owing to their combined advantages of miniaturization, adaptability, and weavability, and their progress is heavily dependent on the development of new fiber-shaped electrodes. Here, we report a novel fiber battery electrode based on the most widely used LIB material, titanium oxide, which is processed into two-dimensional nanosheets and assembled into a macroscopic fiber by a scalable wet-spinning process. The titania sheets are regularly stacked and conformally hybridized in situ with reduced graphene oxide (rGO), thereby serving as efficient current collectors, which endows the novel fiber electrode with excellent integrated mechanical properties combined with superior battery performances in terms of linear densities, rate capabilities, and cyclic behaviors. The present study clearly demonstrates a new material-design paradigm toward novel fiber electrodes by assembling metal oxide nanosheets into an ordered macroscopic structure, which would represent the most promising solution to advanced flexible energy-storage systems.

  5. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  6. On the Alexander polynomials of alternating two-component links

    Directory of Open Access Journals (Sweden)

    Mark E. Kidwell

    1979-01-01

    Let L be an alternating two-component link with Alexander polynomial Δ(x,y). Then the polynomials (1−x)Δ(x,y) and (1−y)Δ(x,y) are alternating. That is, (1−y)Δ(x,y) can be written as ∑_{i,j} c_{ij} x^i y^j in such a way that (−1)^{i+j} c_{ij} ≥ 0.

  7. Regularity and chaos in cavity QED

    International Nuclear Information System (INIS)

    Bastarrachea-Magnani, Miguel Angel; López-del-Carpio, Baldemar; Chávez-Carlos, Jorge; Lerma-Hernández, Sergio; Hirsch, Jorge G

    2017-01-01

    The interaction of a quantized electromagnetic field in a cavity with a set of two-level atoms inside it can be described with algebraic Hamiltonians of increasing complexity, from the Rabi to the Dicke models. Their algebraic character allows, through the use of coherent states, a semiclassical description in phase space, where the non-integrable Dicke model has regions associated with regular and chaotic motion. The appearance of classical chaos can be quantified calculating the largest Lyapunov exponent over the whole available phase space for a given energy. In the quantum regime, employing efficient diagonalization techniques, we are able to perform a detailed quantitative study of the regular and chaotic regions, where the quantum participation ratio (P_R) of coherent states on the eigenenergy basis plays a role equivalent to the Lyapunov exponent. It is noted that, in the thermodynamic limit, dividing the participation ratio by the number of atoms leads to a positive value in chaotic regions, while it tends to zero in the regular ones. (paper)

  8. Quantum chromodynamical calculations of meson wave functions in the light-cone formalism by means of QCD sum rules

    International Nuclear Information System (INIS)

    Guellenstern, S.

    1991-09-01

    Using the technique of Chernyak and Zhitnitsky we have calculated the wavefunctions of ρ(770) and Φ(1020) within the framework of QCD sum rules. Whereas the standard approach assumes light-like distances of the quarks (z² = 0), we have also taken into account higher-order terms in z². Thus, we obtained non-vanishing orbital angular momentum contributions. The first few moments of various invariant functions have been calculated with the help of an especially developed REDUCE program package. In zeroth order (z² = 0) our results for the reconstructed wavefunctions agree with those in the literature. However, we obtained first-order contributions in z² of almost 10% of the corresponding zeroth order. (orig.)

  9. Point-splitting regularization of composite operators and anomalies

    International Nuclear Information System (INIS)

    Novotny, J.; Schnabl, M.

    2000-01-01

    The point-splitting regularization technique for composite operators is discussed in connection with anomaly calculation. We present a pedagogical and self-contained review of the topic with an emphasis on the technical details. We also develop simple algebraic tools to handle the path ordered exponential insertions used within the covariant and non-covariant version of the point-splitting method. The method is then applied to the calculation of the chiral, vector, trace, translation and Lorentz anomalies within diverse versions of the point-splitting regularization and a connection between the results is described. As an alternative to the standard approach we use the idea of deformed point-split transformation and corresponding Ward-Takahashi identities rather than an application of the equation of motion, which seems to reduce the complexity of the calculations. (orig.)

  10. Risk-ranking IST components into two categories

    International Nuclear Information System (INIS)

    Rowley, C.W.

    1996-01-01

    The ASME has utilized several schemes for identifying the appropriate scope of components for inservice testing (IST). The initial scope was ASME Code Class 1/2/3, with all components treated equally. Later the ASME Operations and Maintenance (O&M) Committee decided to use safe shutdown and accident mitigation as the scoping criteria, but continued to treat all components equal inside that scope. Recently the ASME O&M Committee decided to recognize service condition of the component, hence the comprehensive pump test. Although probabilistic risk assessments (PRAs) are incredibly complex plant models and computer hardware and software intensive, they are a tool that can be utilized by many plant engineering organizations to analyze plant system and component applications. In 1992 the ASME O&M Committee got interested in using the PRA as a tool to categorize its pumps and valves. In 1994 the ASME O&M Committee commissioned the ASME Center for Research and Technology Development (CRTD) to develop a process that adapted the PRA technology to IST. In late 1995 that process was presented to the ASME O&M Committee. The process had three distinct portions: (1) risk-rank the IST components; (2) develop a more effective testing strategy for More Safety Significant Components; and (3) develop a more economic testing strategy for Less Safety Significant Components.

  11. Risk-ranking IST components into two categories

    Energy Technology Data Exchange (ETDEWEB)

    Rowley, C.W.

    1996-12-01

    The ASME has utilized several schemes for identifying the appropriate scope of components for inservice testing (IST). The initial scope was ASME Code Class 1/2/3, with all components treated equally. Later the ASME Operations and Maintenance (O&M) Committee decided to use safe shutdown and accident mitigation as the scoping criteria, but continued to treat all components equal inside that scope. Recently the ASME O&M Committee decided to recognize service condition of the component, hence the comprehensive pump test. Although probabilistic risk assessments (PRAs) are incredibly complex plant models and computer hardware and software intensive, they are a tool that can be utilized by many plant engineering organizations to analyze plant system and component applications. In 1992 the ASME O&M Committee got interested in using the PRA as a tool to categorize its pumps and valves. In 1994 the ASME O&M Committee commissioned the ASME Center for Research and Technology Development (CRTD) to develop a process that adapted the PRA technology to IST. In late 1995 that process was presented to the ASME O&M Committee. The process had three distinct portions: (1) risk-rank the IST components; (2) develop a more effective testing strategy for More Safety Significant Components; and (3) develop a more economic testing strategy for Less Safety Significant Components.

  12. Transforming Social Regularities in a Multicomponent Community-Based Intervention: A Case Study of Professionals' Adaptability to Better Support Parents to Meet Their Children's Needs.

    Science.gov (United States)

    Quiroz Saavedra, Rodrigo; Brunson, Liesette; Bigras, Nathalie

    2017-06-01

    This paper presents an in-depth case study of the dynamic processes of mutual adjustment that occurred between two professional teams participating in a multicomponent community-based intervention (CBI). Drawing on the concept of social regularities, we focus on patterns of social interaction within and across the two microsystems involved in delivering the intervention. Two research strategies, narrative analysis and structural network analysis, were used to reveal the social regularities linking the two microsystems. Results document strategies and actions undertaken by the professionals responsible for the intervention to modify intersetting social regularities to deal with a problem situation that arose during the course of one intervention cycle. The results illustrate how key social regularities were modified in order to resolve the problem situation and allow the intervention to continue to function smoothly. We propose that these changes represent a transition to a new state of the ecological intervention system. This transformation appeared to be the result of certain key intervening mechanisms: changing key role relationships, boundary spanning, and synergy. The transformation also appeared to be linked to positive setting-level and individual-level outcomes: confidence of key team members, joint planning, decision-making and intervention activities, and the achievement of desired intervention objectives. © Society for Community Research and Action 2017.

  13. Annotation of regular polysemy and underspecification

    DEFF Research Database (Denmark)

    Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria

    2013-01-01

    We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...

  14. Cognitive Component Analysis

    DEFF Research Database (Denmark)

    Feng, Ling

    2008-01-01

    This dissertation concerns the investigation of the consistency of statistical regularities in a signaling ecology and human cognition, while inferring appropriate actions for a speech-based perceptual task. It is based on unsupervised Independent Component Analysis providing a rich spectrum of audio contexts along with pattern recognition methods to map components to known contexts. It also involves looking for the right representations for auditory inputs, i.e. the data analytic processing pipelines invoked by human brains. The main ideas refer to Cognitive Component Analysis, defined as the process of unsupervised grouping of generic data such that the ensuing group structure is well-aligned with that resulting from human cognitive activity. Its hypothesis runs ecologically: features which are essentially independent in a context defined ensemble, can be efficiently coded as sparse...

  15. Three regularities of recognition memory: the role of bias.

    Science.gov (United States)

    Hilford, Andrew; Maloney, Laurence T; Glanzer, Murray; Kim, Kisok

    2015-12-01

    A basic assumption of Signal Detection Theory is that decisions are made on the basis of likelihood ratios. In a preceding paper, Glanzer, Hilford, and Maloney (Psychonomic Bulletin & Review, 16, 431-455, 2009) showed that the likelihood ratio assumption implies that three regularities will occur in recognition memory: (1) the Mirror Effect, (2) the Variance Effect, (3) the normalized Receiver Operating Characteristic (z-ROC) Length Effect. The paper offered formal proofs and computational demonstrations that decisions based on likelihood ratios produce the three regularities. A survey of data based on group ROCs from 36 studies validated the likelihood ratio assumption by showing that its three implied regularities are ubiquitous. The study noted, however, that bias, another basic factor in Signal Detection Theory, can obscure the Mirror Effect. In this paper we examine how bias affects the regularities at the theoretical level. The theoretical analysis shows: (1) how bias obscures the Mirror Effect, not the other two regularities, and (2) four ways to counter that obscuring. We then report the results of five experiments that support the theoretical analysis. The analyses and the experimental results also demonstrate: (1) that the three regularities govern individual, as well as group, performance, (2) alternative explanations of the regularities are ruled out, and (3) that Signal Detection Theory, correctly applied, gives a simple and unified explanation of recognition memory data.

  16. The neural substrates of impaired finger tapping regularity after stroke.

    Science.gov (United States)

    Calautti, Cinzia; Jones, P Simon; Guincestre, Jean-Yves; Naccarato, Marcello; Sharma, Nikhil; Day, Diana J; Carpenter, T Adrian; Warburton, Elizabeth A; Baron, Jean-Claude

    2010-03-01

    Not only finger tapping speed, but also tapping regularity can be impaired after stroke, contributing to reduced dexterity. The neural substrates of impaired tapping regularity after stroke are unknown. Previous work suggests damage to the dorsal premotor cortex (PMd) and prefrontal cortex (PFCx) affects externally-cued hand movement. We tested the hypothesis that these two areas are involved in impaired post-stroke tapping regularity. In 19 right-handed patients (15 men/4 women; age 45-80 years; purely subcortical in 16) partially to fully recovered from hemiparetic stroke, tri-axial accelerometric quantitative assessment of tapping regularity and BOLD fMRI were obtained during fixed-rate auditory-cued index-thumb tapping, in a single session 10-230 days after stroke. A strong random-effect correlation between tapping regularity index and fMRI signal was found in contralesional PMd such that the worse the regularity the stronger the activation. A significant correlation in the opposite direction was also present within contralesional PFCx. Both correlations were maintained if maximal index tapping speed, degree of paresis and time since stroke were added as potential confounds. Thus, the contralesional PMd and PFCx appear to be involved in the impaired ability of stroke patients to fingertap in pace with external cues. The findings for PMd are consistent with repetitive TMS investigations in stroke suggesting a role for this area in affected-hand movement timing. The inverse relationship with tapping regularity observed for the PFCx and the PMd suggests these two anatomically-connected areas negatively co-operate. These findings have implications for understanding the disruption and reorganization of the motor systems after stroke. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  17. Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.

    Science.gov (United States)

    Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L

    2017-10-01

    Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony is similar to those reported in human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the source of signals. Our data suggest that some basic

  18. Zero-One Law for Regular Languages and Semigroups with Zero

    OpenAIRE

    Sin'ya, Ryoma

    2015-01-01

    A regular language has the zero-one law if its asymptotic density converges to either zero or one. We prove that the class of all zero-one languages is closed under Boolean operations and quotients. Moreover, we prove that a regular language has the zero-one law if and only if its syntactic monoid has a zero element. Our proof gives both algebraic and automata-based characterisations of the zero-one law for regular languages, and it leads to the following two corollaries: (i) There is an O(n log n) alg...
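
    To illustrate what the asymptotic density means operationally, the sketch below (illustrative only; the DFA encoding and the example language are assumptions, not taken from the paper) computes the exact fraction of length-n words accepted by a DFA and shows it converging to one for a language whose syntactic monoid has a zero element.

    from fractions import Fraction

    def density(n, delta, start, accepting, alphabet):
        # exact fraction of length-n words accepted by the DFA (delta[state][symbol] -> state)
        counts = {start: Fraction(1)}
        for _ in range(n):
            nxt = {}
            for q, p in counts.items():
                for a in alphabet:
                    r = delta[q][a]
                    nxt[r] = nxt.get(r, Fraction(0)) + p / len(alphabet)
            counts = nxt
        return sum(p for q, p in counts.items() if q in accepting)

    # Example: words over {a, b} containing at least one 'a'; density tends to 1.
    delta = {0: {'a': 1, 'b': 0}, 1: {'a': 1, 'b': 1}}
    print([float(density(n, delta, 0, {1}, 'ab')) for n in (1, 5, 20)])   # approaches 1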

  19. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-20

    Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMFFS and AGNMFMK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
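
    For reference, the fixed-graph baseline that the proposed methods extend can be written in a few lines; the multiplicative updates below follow the standard GNMF formulation with a given nearest-neighbour affinity matrix. This is an illustrative sketch: the random initialisation and names are assumptions, and the adaptive feature-selection and multiple-kernel parts of the paper are not included.

    import numpy as np

    def gnmf(X, W, rank, lam=1.0, iters=200, eps=1e-9):
        # X: (n_features, n_samples) nonnegative data; W: (n_samples, n_samples) affinity graph
        D = np.diag(W.sum(axis=1))                      # degree matrix of the graph
        U = np.random.rand(X.shape[0], rank)
        V = np.random.rand(X.shape[1], rank)
        for _ in range(iters):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
        return U, V                                     # X is approximated by U @ V.T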

  20. Morphing Continuum Theory: A First Order Approximation to the Balance Laws

    Science.gov (United States)

    Wonnell, Louis; Cheikh, Mohamad Ibrahim; Chen, James

    2017-11-01

    Morphing Continuum Theory is constructed under the framework of Rational Continuum Mechanics (RCM) for fluid flows with inner structure. This multiscale theory has been successfully employed to model turbulent flows. The framework of RCM ensures the mathematical rigor of MCT, but contains new material constants related to the inner structure. The physical meanings of these material constants have yet to be determined. Here, a linear deviation from the zeroth-order Boltzmann-Curtiss distribution function is derived. When applied to the Boltzmann-Curtiss equation, a first-order approximation of the MCT governing equations is obtained. The integral equations are then related to the appropriate material constants found in the heat flux, Cauchy stress, and moment stress terms in the governing equations. These new material properties associated with the inner structure of the fluid are compared with the corresponding integrals, and a clearer physical interpretation of these coefficients emerges. The physical meanings of these material properties are determined by analyzing previous results obtained from numerical simulations of MCT for compressible and incompressible flows. The implications for the physics underlying the MCT governing equations will also be discussed. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-17-1-0154.

  1. Laplace-transformed atomic orbital-based Møller–Plesset perturbation theory for relativistic two-component Hamiltonians

    Energy Technology Data Exchange (ETDEWEB)

    Helmich-Paris, Benjamin, E-mail: b.helmichparis@vu.nl; Visscher, Lucas, E-mail: l.visscher@vu.nl [Section of Theoretical Chemistry, VU University Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Repisky, Michal, E-mail: michal.repisky@uit.no [CTCC, Department of Chemistry, UIT The Arctic University of Norway, N-9037 Tromsø (Norway)

    2016-07-07

    We present a formulation of Laplace-transformed atomic orbital-based second-order Møller–Plesset perturbation theory (MP2) energies for two-component Hamiltonians in the Kramers-restricted formalism. This low-order scaling technique can be used to enable correlated relativistic calculations for large molecular systems. We show that the working equations to compute the relativistic MP2 energy differ by merely a change of algebra (quaternion instead of real) from their non-relativistic counterparts. With a proof-of-principle implementation we study the effect of the nuclear charge on the magnitude of half-transformed integrals and show that for light elements spin-free and spin-orbit MP2 energies are almost identical. Furthermore, we investigate the effect of separation of charge distributions on the Coulomb and exchange energy contributions, which show the same long-range decay with the inter-electronic/atomic distance as for non-relativistic MP2. A linearly scaling implementation is possible if the proper distance behavior is introduced to the quaternion Schwarz-type estimates as for non-relativistic MP2.
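
    The central numerical device, independent of the relativistic Hamiltonian, is the Laplace transform of the orbital-energy denominator, 1/x = ∫_0^∞ exp(−xt) dt, approximated by a short sum of exponentials so that the four orbital indices decouple. A crude illustration is sketched below; the log-spaced exponents and least-squares weights are assumptions for demonstration only, whereas production codes use optimised minimax quadratures with a handful of points.

    import numpy as np

    def laplace_fit(x_min, x_max, n_exp=10):
        # x stands for the MP2 denominator e_a + e_b - e_i - e_j on [x_min, x_max]
        t = np.geomspace(0.1 / x_max, 5.0 / x_min, n_exp)      # assumed exponent grid
        x = np.geomspace(x_min, x_max, 400)
        A = np.exp(-np.outer(x, t))
        w, *_ = np.linalg.lstsq(A, 1.0 / x, rcond=None)        # fitted quadrature weights
        return t, w

    t, w = laplace_fit(0.4, 40.0)
    x_test = np.array([0.5, 2.0, 10.0, 35.0])
    approx = np.exp(-np.outer(x_test, t)) @ w
    print(np.abs(approx - 1.0 / x_test))                       # fit error at test points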

  2. Laplace-transformed atomic orbital-based Møller–Plesset perturbation theory for relativistic two-component Hamiltonians

    International Nuclear Information System (INIS)

    Helmich-Paris, Benjamin; Visscher, Lucas; Repisky, Michal

    2016-01-01

    We present a formulation of Laplace-transformed atomic orbital-based second-order Møller–Plesset perturbation theory (MP2) energies for two-component Hamiltonians in the Kramers-restricted formalism. This low-order scaling technique can be used to enable correlated relativistic calculations for large molecular systems. We show that the working equations to compute the relativistic MP2 energy differ by merely a change of algebra (quaternion instead of real) from their non-relativistic counterparts. With a proof-of-principle implementation we study the effect of the nuclear charge on the magnitude of half-transformed integrals and show that for light elements spin-free and spin-orbit MP2 energies are almost identical. Furthermore, we investigate the effect of separation of charge distributions on the Coulomb and exchange energy contributions, which show the same long-range decay with the inter-electronic/atomic distance as for non-relativistic MP2. A linearly scaling implementation is possible if the proper distance behavior is introduced to the quaternion Schwarz-type estimates as for non-relativistic MP2.

  3. How insects overcome two-component plant chemical defence

    DEFF Research Database (Denmark)

    Pentzold, Stefan; Zagrobelny, Mika; Rook, Frederik

    2014-01-01

    Insect herbivory is often restricted by glucosylated plant chemical defence compounds that are activated by plant β-glucosidases to release toxic aglucones upon plant tissue damage. Such two-component plant defences are widespread in the plant kingdom and examples of these classes of compounds are alkaloid, benzoxazinoid, cyanogenic and iridoid glucosides as well as glucosinolates and salicinoids. Conversely, many insects have evolved a diversity of counteradaptations to overcome this type of constitutive chemical defence. Here we discuss that such counter-adaptations occur at different time points, before and during feeding as well as during digestion, and at several levels such as the insects' feeding behaviour, physiology and metabolism. Insect adaptations frequently circumvent or counteract the activity of the plant β-glucosidases, bioactivating enzymes that are a key element in the plant's two...

  4. A component architecture for the two-phase flows simulation system Neptune

    Energy Technology Data Exchange (ETDEWEB)

    Bechaud, C; Boucker, M; Douce, A [Electricite de France (EDF-RD/MFTT), 78 - Chatou (France); Grandotto, M [CEA Cadarache (DEN/DTP/STH), 13 - Saint-Paul-lez-Durance (France); Tajchman, M [CEA Saclay (DEN/DM2S/SFME), 91 - Gif-sur-Yvette (France)

    2003-07-01

    Electricite de France (EdF) and the French atomic energy commission (Cea) have planned a large project to build a new set of software for nuclear reactor analysis. One of the main ideas is to allow coupled calculations in which several scientific domains are involved. This paper presents the software architecture of the two-phase flows simulation Neptune project. Neptune should allow computations of two-phase flows in 3 dimensions under normal operating conditions as well as safety conditions. Three scales are identified: the local scale where there is only homogenization between the two phases, an intermediate scale where solid internal structures are homogenized with the fluid and the system scale where some parts of the geometry under study are considered point-wise or subject to one dimensional simplifications. The main properties of this architecture are as follows: -) coupling with scientific domains, and between different scales, -) re-using all or parts of existing validated codes, -) components usable by the different scales, -) easy introduction of new physical modeling as well as new numerical methods, -) local, distributed and parallel computing. The Neptune architecture is based on the component concept with stable and well suited interfaces. In the case of a distributed application the components are managed through a Corba bus. The building of the components is organized in shells: a programming shell (Fortran or C++ routines), a managing shell (C++ language), an interpreted shell (Python language), a Corba shell and a global driving shell (C++ or Python). Neptune will use the facilities offered by the Salome project: pre and post processors and controls. A data model has been built to have a common access to the information exchanged between the components (meshes, fields, physical and technical information). This architecture has first been set up and tested on some simple but significant cases and is now currently in use to build the Neptune

  5. A component architecture for the two-phase flows simulation system Neptune

    International Nuclear Information System (INIS)

    Bechaud, C.; Boucker, M.; Douce, A.; Grandotto, M.; Tajchman, M.

    2003-01-01

    Electricite de France (EdF) and the French atomic energy commission (Cea) have planned a large project to build a new set of software for nuclear reactor analysis. One of the main ideas is to allow coupled calculations in which several scientific domains are involved. This paper presents the software architecture of the two-phase flows simulation Neptune project. Neptune should allow computations of two-phase flows in 3 dimensions under normal operating conditions as well as safety conditions. Three scales are identified: the local scale where there is only homogenization between the two phases, an intermediate scale where solid internal structures are homogenized with the fluid and the system scale where some parts of the geometry under study are considered point-wise or subject to one dimensional simplifications. The main properties of this architecture are as follows: -) coupling with scientific domains, and between different scales, -) re-using all or parts of existing validated codes, -) components usable by the different scales, -) easy introduction of new physical modeling as well as new numerical methods, -) local, distributed and parallel computing. The Neptune architecture is based on the component concept with stable and well suited interfaces. In the case of a distributed application the components are managed through a Corba bus. The building of the components is organized in shells: a programming shell (Fortran or C++ routines), a managing shell (C++ language), an interpreted shell (Python language), a Corba shell and a global driving shell (C++ or Python). Neptune will use the facilities offered by the Salome project: pre and post processors and controls. A data model has been built to have a common access to the information exchanged between the components (meshes, fields, physical and technical information). This architecture has first been set up and tested on some simple but significant cases and is now currently in use to build the Neptune

  6. Minor component ordering in wurtzite Ga1-xInxN and Ga1-xAlxN

    International Nuclear Information System (INIS)

    Laaksonen, K.; Ganchenkova, M.G.; Nieminen, R.M.

    2006-01-01

    The electronic and thermodynamic properties of materials are defined largely by their internal structure, in particular composition uniformity. In this work the homogeneity of the minor component distribution in ternary Ga(In/Al)N alloys in the wurtzite polytype is studied. To this end, the stability of different configurations of small clusters of the second component atoms is examined using the DFT-based code VASP. The hydrostatic strain induced in the matrix by the component atomic size mismatch, as well as external hydrostatic strain, are taken into consideration. Clustering of In atoms along the [0001] direction is shown to be energetically favourable, whereas in the (0001) plane they repel each other. In contrast, Al atoms in GaN do not interact, irrespective of strain conditions. According to these results In in GaN should form [0001]-aligned pairs or chains and, at higher In concentrations, zigzag chains in the c-direction, while Al forms a random alloy with the matrix material. The effect of the minor component (In/Al) ordering pattern on the band gap of the ternary Ga(In/Al)N alloy is also discussed.

  7. Heavy quark form factors at two loops in perturbative QCD

    International Nuclear Information System (INIS)

    Ablinger, J.; Schneider, C.; Behring, A.; Falcioni, G.

    2017-11-01

    We present the results for heavy quark form factors at two-loop order in perturbative QCD for different currents, namely vector, axial-vector, scalar and pseudo-scalar currents, up to second order in the dimensional regularization parameter. We outline the necessary computational details, ultraviolet renormalization and corresponding universal infrared structure.

  8. Heavy quark form factors at two loops in perturbative QCD

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC); Behring, A. [RWTH Aachen Univ. (Germany). Inst. fuer Theoretische Teilchenphysik und Kosmologie; Bluemlein, J.; Freitas, A. de; Marquard, P.; Rana, N. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Falcioni, G. [Nikhef, Amsterdam (Netherlands). Theory Group

    2017-11-15

    We present the results for heavy quark form factors at two-loop order in perturbative QCD for different currents, namely vector, axial-vector, scalar and pseudo-scalar currents, up to second order in the dimensional regularization parameter. We outline the necessary computational details, ultraviolet renormalization and corresponding universal infrared structure.

  9. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on -norm and -norm loss functions are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of the classification accuracy and running time.
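
    A stripped-down sketch of the label-relaxation idea is given below; the class-compactness-graph regulariser and the two loss-function variants of the paper are omitted, and the alternating updates and names are assumptions based on the description rather than the authors' code. The binary target matrix is relaxed to Y + B*M with a learned nonnegative "dragging" matrix M, which gives the regression more freedom to fit the labels and enlarges between-class margins.

    import numpy as np

    def label_relax_lr(X, y, n_classes, lam=1e-2, iters=30):
        # X: (n_samples, n_features); y: integer class indices in 0..n_classes-1
        n, d = X.shape
        Y = np.eye(n_classes)[y]                      # strict binary label matrix
        B = np.where(Y > 0, 1.0, -1.0)                # allowed relaxation directions
        M = np.zeros_like(Y)
        for _ in range(iters):
            T = Y + B * M                             # relaxed targets
            W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)   # ridge step
            M = np.maximum(B * (X @ W - Y), 0.0)      # nonnegative closed-form update
        return W

    # prediction: class with the largest regression response, y_hat = argmax(X_test @ W, axis=1)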

  10. Updating ARI Educational Benefits Usage Data Bases for Army Regular, Reserve, and Guard: 2005 - 2006

    National Research Council Canada - National Science Library

    Young, Winnie

    2007-01-01

    This report describes the updating of ARI's educational benefits usage database with Montgomery GI Bill and Army College Fund data for Army Regular, Reserve, and Guard components over the 2005 and 2006 period...

  11. Generation of High-order Group-velocity-locked Vector Solitons

    OpenAIRE

    Jin, X. X.; Wu, Z. C.; Zhang, Q.; Li, L.; Tang, D. Y.; Shen, D. Y.; Fu, S. N.; Liu, D. M.; Zhao, L. M.

    2015-01-01

    We report numerical simulations on the high-order group-velocity-locked vector soliton (GVLVS) generation based on the fundamental GVLVS. The high-order GVLVS generated is characterized with a two-humped pulse along one polarization while a single-humped pulse along the orthogonal polarization. The phase difference between the two humps could be 180 degrees. It is found that by appropriately setting the time separation between the two components of the fundamental GVLVS, the high-order GVLVS wit...

  12. Shape-persistent two-component 2D networks with atomic-size tunability.

    Science.gov (United States)

    Liu, Jia; Zhang, Xu; Wang, Dong; Wang, Jie-Yu; Pei, Jian; Stang, Peter J; Wan, Li-Jun

    2011-09-05

    Over the past few years, two-dimensional (2D) nanoporous networks have attracted great interest as templates for the precise localization and confinement of guest building blocks, such as functional molecules or clusters on the solid surfaces. Herein, a series of two-component molecular networks with a 3-fold symmetry are constructed on graphite using a truxenone derivative and trimesic acid homologues with carboxylic-acid-terminated alkyl chains. The hydrogen-bonding partner-recognition-induced 2D crystallization of alkyl chains makes the flexible alkyl chains act as rigid spacers in the networks to continuously tune the pore size with an accuracy of one carbon atom per step. The two-component networks were found to accommodate and regulate the distribution and aggregation of guest molecules, such as COR and CuPc. This procedure provides a new pathway for the design and fabrication of molecular nanostructures on solid surfaces. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
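
    The reconstruction scheme can be summarised, in a much simplified form, as ADMM applied to an l1-regularised least-squares problem. The sketch below uses a generic (identity by default) orthonormal sparsifying transform in place of the curvelet frame and a small dense system matrix, so it illustrates the splitting only, not the authors' implementation or data sizes.

    import numpy as np

    def soft(v, tau):
        # soft-thresholding: proximal operator of the l1 norm
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def admm_sparse(A, b, lam=0.1, rho=1.0, iters=100, W=None, Wt=None):
        # min_x 0.5*||A x - b||^2 + lam*||W x||_1, with W orthonormal (W^T W = I)
        W = W or (lambda v: v)
        Wt = Wt or (lambda v: v)
        n = A.shape[1]
        x = np.zeros(n)
        z = np.zeros_like(W(x))
        u = np.zeros_like(z)
        lhs = A.T @ A + rho * np.eye(n)          # x-update system matrix
        for _ in range(iters):
            x = np.linalg.solve(lhs, A.T @ b + rho * Wt(z - u))
            z = soft(W(x) + u, lam / rho)        # sparsity-promoting step
            u += W(x) - z                        # dual update
        return x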

  14. Experimental investigation of the factors influencing the polymer-polymer bond strength during two component injection moulding

    DEFF Research Database (Denmark)

    Islam, Mohammad Aminul; Hansen, Hans Nørgaard; Tang, Peter Torben

    2007-01-01

    Two component injection moulding is a commercially important manufacturing process and a key technology for Moulded Interconnect Devices (MIDs). Many fascinating applications of two component or multi component polymer parts are restricted due to the weak interfacial adhesion of the polymers. A thorough understanding of the factors that influence the bond strength of polymers is necessary for multi component polymer processing. This paper investigates the effects of the process and material parameters on the bond strength of two component polymer parts and identifies the factors which can effectively control the adhesion between two polymers. The effects of environmental conditions on the bond strength after moulding are also investigated. The material selections and environmental conditions were chosen based on the suitability of MID production, but the results and discussion presented...

  15. Experimental investigation of the factors influencing the polymer-polymer bond strength during two-component injection moulding

    DEFF Research Database (Denmark)

    Islam, Aminul; Hansen, Hans Nørgaard; Bondo, Martin

    2010-01-01

    Two-component injection moulding is a commercially important manufacturing process and a key technology for combining different material properties in a single plastic product. It is also one of the most industrially adaptive process chains for manufacturing so-called moulded interconnect devices (MIDs). Many fascinating applications of two-component or multi-component polymer parts are restricted due to the weak interfacial adhesion of the polymers. A thorough understanding of the factors that influence the bond strength of polymers is necessary for multi-component polymer processing. This paper investigates the effects of the process conditions and geometrical factors on the bond strength of two-component polymer parts and identifies the factors which can effectively control the adhesion between two polymers. The effects of environmental conditions on the bond strength are also investigated...

  16. Measurements of Two-Phase Suspended Sediment Transport in Breaking Waves Using Volumetric Three-Component Velocimetry

    Science.gov (United States)

    Ting, F. C. K.; LeClaire, P.

    2016-02-01

    Understanding the mechanisms of sediment pickup and distribution in breaking waves is important for modeling sediment transport in the surf zone. Previous studies were mostly concerned with bulk sediment transport under specific wave conditions. The distribution of suspended sediments in breaking waves had not been measured together with coherent flow structures. In this study, two-phase flow measurements were obtained under a train of plunging regular waves on a plane slope using the volumetric three-component velocimetry (V3V) technique. The measurements captured the motions of sediment particles simultaneously with the three-component, three-dimensional (3C3D) velocity fields of turbulent coherent structures (large eddies) induced by breaking waves. Sediment particles (solid glass spheres diameter 0.125 to 0.15 mm, specific gravity 2.5) were separated from fluid tracers (mean diameter 13 µm, specific gravity 1.3) based on a combination of particle spot size and brightness in the two-phase images. The interactions between the large eddies and glass spheres were investigated for plunger vortices generated at incipient breaking and for splash-up vortices generated at the second plunge point. The measured data show that large eddies impinging on the bottom was the primary mechanism which lift sediment particles into suspension and momentarily increased near-bed suspended sediment concentration. Although eddy impingement events were sporadic in space and time, the distributions of suspended sediments in the large eddies were not uniform. High suspended sediment concentration and vertical sediment flux were found in the wall-jet region where the impinging flow was deflected outward and upward. Sediment particles were also trapped and carried around by counter-rotating vortices (Figure 1). Suspended sediment concentration was significantly lower in the impingement region where the fluid velocity was downward, even though turbulent kinetic energy in the down flow was

  17. Analysis of radiation pressure force exerted on a biological cell induced by high-order Bessel beams using Debye series

    International Nuclear Information System (INIS)

    Li, Renxian; Ren, Kuan Fang; Han, Xiang'e; Wu, Zhensen; Guo, Lixin; Gong, Shuxi

    2013-01-01

    The Debye series expansion (DSE) is employed in the analysis of the radiation pressure force (RPF) exerted on biological cells by high-order Bessel beams (BBs). The beam shape coefficients (BSCs) for high-order Bessel beams are calculated using analytical expressions obtained by the integral localized approximation (ILA). Different types of cells are considered, including a Chinese hamster ovary (CHO) cell and a lymphocyte, which are modeled by a coated sphere and a five-layered sphere, respectively. The RPF induced by high-order Bessel beams is compared with that induced by Gaussian beams and zeroth-order Bessel beams, and the effect of different scattering processes on the RPF is studied. Numerical calculations show that high-order Bessel beams with zero central intensity can also transversely trap particles at the beam center, and that some scattering processes can provide a longitudinal pulling force. -- Highlights: ► BSCs for high-order Bessel beams (HOBBs) are derived using the ILA. ► The DSE is employed to study the RPF exerted on multilayered cells by HOBBs. ► The RPF is determined by the cell radius relative to the spacing of the peaks in the beam intensity profile. ► HOBBs can also transversely trap high-index particles in the vicinity of the beam axis. ► For some scattering processes the RPF can longitudinally pull particles back
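
    For orientation, the Debye series referred to here rewrites each Mie scattering coefficient of a sphere as a sum over scattering processes; the form below is the standard result for a homogeneous sphere, which the coated and five-layered cell models of this record generalize:

    \[
      a_n \;=\; \frac{1}{2}\left[\,1 \;-\; R_n^{212} \;-\; \sum_{p=1}^{\infty} T_n^{21}\,\bigl(R_n^{121}\bigr)^{p-1}\,T_n^{12}\right],
    \]

    where R_n^{212} describes direct reflection at the sphere surface, T_n^{21} and T_n^{12} describe transmission into and out of the sphere, and the term of order p corresponds to a partial wave that has undergone p-1 internal reflections (an analogous expression holds for b_n). Studying the RPF for "different scattering processes" amounts to retaining individual terms of this sum.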

  18. Quasiclassical description of multi-band superconductors with two order parameters

    Energy Technology Data Exchange (ETDEWEB)

    Moor, Andreas

    2014-05-19

    This thesis deals with multi-band superconductors with two order parameters, i.e., superconductivity and a spin-density wave, and also touches on one-band superconductors with a charge-density wave as well as on those with only the superconducting order parameter. A quasiclassical description of such structures is developed and applied to the investigation of various effects, inter alia the Josephson and proximity effects, the Knight shift, the Larkin-Ovchinnikov-Fulde-Ferrell-like state, and the interplay of the order parameters in the coexistence regime. The applicability of the developed approach to the pnictides is discussed.
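
    As context for the kind of equations a quasiclassical description involves, the dirty-limit (Usadel) form of the quasiclassical transport equation for a single superconducting order parameter is sketched below; the two-order-parameter, multi-band generalization developed in the thesis goes beyond this minimal form:

    \[
      D\,\nabla\!\bigl(\check g\,\nabla\check g\bigr) \;+\; \bigl[\, i\varepsilon\,\hat\tau_3 + \check\Delta ,\; \check g \,\bigr] \;=\; 0,
      \qquad \check g^{\,2} = \check 1,
    \]

    where D is the diffusion coefficient, \check g(\varepsilon, r) the momentum-averaged quasiclassical Green's function in Nambu space, \hat\tau_3 the third Pauli matrix in that space, and \check\Delta the self-consistently determined order-parameter matrix.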

  19. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.
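
    To recall what dimensional regularization controls in the simplest setting, a textbook one-loop integral evaluates, with d = 4 - 2\epsilon, to

    \[
      \mu^{2\epsilon}\!\int\!\frac{d^{d}k}{(2\pi)^{d}}\,\frac{1}{\bigl(k^{2}-m^{2}+i0\bigr)^{2}}
      \;=\; \frac{i}{16\pi^{2}}\left[\frac{1}{\epsilon}-\gamma_{E}+\ln 4\pi-\ln\frac{m^{2}}{\mu^{2}}\right] \;+\; \mathcal{O}(\epsilon),
    \]

    so the divergence shows up as a single 1/\epsilon pole that a counterterm can absorb, while the scale \mu tracks the renormalization point. This standard example illustrates ordinary dimensional regularization only; the paper's Lorentz-covariant scheme for heavy and light particles is constructed on top of it.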

  20. Effective field theory dimensional regularization

    Science.gov (United States)

    Lehmann, Dirk; Prézeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs, and the generalization to higher loops is discussed.